diff --git a/books/manual/SUMMARY.md b/books/manual/SUMMARY.md
index 9e78964..f0d1a4c 100644
--- a/books/manual/SUMMARY.md
+++ b/books/manual/SUMMARY.md
@@ -50,330 +50,330 @@
       - [TF Token Bridge](dashboard/tfchain/tf_token_bridge.md)
       - [TF Token Transfer](dashboard/tfchain/tf_token_transfer.md)
       - [TF Minting Reports](dashboard/tfchain/tf_minting_reports.md)
-  - [Developers](manual/documentation/developers/developers.md)
-    - [Javascript Client](manual/documentation/developers/javascript/grid3_javascript_readme.md)
-      - [Installation](manual/documentation/developers/javascript/grid3_javascript_installation.md)
-      - [Loading Client](manual/documentation/developers/javascript/grid3_javascript_loadclient.md)
-      - [Deploy a VM](manual/documentation/developers/javascript/grid3_javascript_vm.md)
-      - [Capacity Planning](manual/documentation/developers/javascript/grid3_javascript_capacity_planning.md)
-      - [Deploy Multiple VMs](manual/documentation/developers/javascript/grid3_javascript_vms.md)
-      - [Deploy CapRover](manual/documentation/developers/javascript/grid3_javascript_caprover.md)
-      - [Gateways](manual/documentation/developers/javascript/grid3_javascript_vm_gateways.md)
-      - [Deploy a Kubernetes Cluster](manual/documentation/developers/javascript/grid3_javascript_kubernetes.md)
-      - [Deploy a ZDB](manual/documentation/developers/javascript/grid3_javascript_zdb.md)
-      - [Deploy ZDBs for QSFS](manual/documentation/developers/javascript/grid3_javascript_qsfs_zdbs.md)
-      - [QSFS](manual/documentation/developers/javascript/grid3_javascript_qsfs.md)
-      - [Key Value Store](manual/documentation/developers/javascript/grid3_javascript_kvstore.md)
-      - [VM with Wireguard and Gateway](manual/documentation/developers/javascript/grid3_wireguard_gateway.md)
-      - [GPU Support](manual/documentation/developers/javascript/grid3_javascript_gpu_support.md)
-    - [Go Client](manual/documentation/developers/go/grid3_go_readme.md)
-      - [Installation](manual/documentation/developers/go/grid3_go_installation.md)
-      - [Loading Client](manual/documentation/developers/go/grid3_go_load_client.md)
-      - [Deploy a VM](manual/documentation/developers/go/grid3_go_vm.md)
-      - [Deploy Multiple VMs](manual/documentation/developers/go/grid3_go_vms.md)
-      - [Deploy Gateways](manual/documentation/developers/go/grid3_go_gateways.md)
-      - [Deploy Kubernetes](manual/documentation/developers/go/grid3_go_kubernetes.md)
-      - [Deploy a QSFS](manual/documentation/developers/go/grid3_go_qsfs.md)
-      - [GPU and Go](manual/documentation/developers/go/grid3_go_gpu.md)
-        - [GPU Support](manual/documentation/developers/go/grid3_go_gpu_support.md)
-        - [Deploy a VM with GPU](manual/documentation/developers/go/grid3_go_vm_with_gpu.md)
-    - [TFCMD](manual/documentation/developers/tfcmd/tfcmd.md)
-      - [Getting Started](manual/documentation/developers/tfcmd/tfcmd_basics.md)
-      - [Deploy a VM](manual/documentation/developers/tfcmd/tfcmd_vm.md)
-      - [Deploy Kubernetes](manual/documentation/developers/tfcmd/tfcmd_kubernetes.md)
-      - [Deploy ZDB](manual/documentation/developers/tfcmd/tfcmd_zdbs.md)
-      - [Gateway FQDN](manual/documentation/developers/tfcmd/tfcmd_gateway_fqdn.md)
-      - [Gateway Name](manual/documentation/developers/tfcmd/tfcmd_gateway_name.md)
-      - [Contracts](manual/documentation/developers/tfcmd/tfcmd_contracts.md)
-    - [TFROBOT](manual/documentation/developers/tfrobot/tfrobot.md)
-      - [Installation](manual/documentation/developers/tfrobot/tfrobot_installation.md)
-      - [Configuration File](manual/documentation/developers/tfrobot/tfrobot_config.md)
-      - [Deployment](manual/documentation/developers/tfrobot/tfrobot_deploy.md)
-      - [Commands and Flags](manual/documentation/developers/tfrobot/tfrobot_commands_flags.md)
-      - [Supported Configurations](manual/documentation/developers/tfrobot/tfrobot_configurations.md)
-    - [ThreeFold Chain](manual/documentation/developers/tfchain/dev_tfchain.md)
-      - [Introduction](manual/documentation/developers/tfchain/introduction.md)
-      - [Farming Policies](manual/documentation/developers/tfchain/farming_policies.md)
-      - [External Service Contract](manual/documentation/developers/tfchain/tfchain_external_service_contract.md)
-      - [Solution Provider](manual/documentation/developers/tfchain/tfchain_solution_provider.md)
-    - [Grid Proxy](manual/documentation/developers/proxy/proxy_readme.md)
-      - [Introducing Grid Proxy](manual/documentation/developers/proxy/proxy.md)
-      - [Setup](manual/documentation/developers/proxy/setup.md)
-      - [DB Testing](manual/documentation/developers/proxy/db_testing.md)
-      - [Commands](manual/documentation/developers/proxy/commands.md)
-      - [Contributions](manual/documentation/developers/proxy/contributions.md)
-      - [Explorer](manual/documentation/developers/proxy/explorer.md)
-      - [Database](manual/documentation/developers/proxy/database.md)
-      - [Production](manual/documentation/developers/proxy/production.md)
-      - [Release](manual/documentation/developers/proxy/release.md)
-    - [Flist](manual/documentation/developers/flist/flist.md)
-      - [Zero-OS Hub](manual/documentation/developers/flist/flist_hub/zos_hub.md)
-        - [Generate an API Token](manual/documentation/developers/flist/flist_hub/api_token.md)
-        - [Convert Docker Image Into Flist](manual/documentation/developers/flist/flist_hub/convert_docker_image.md)
-      - [Supported Flists](manual/documentation/developers/flist/grid3_supported_flists.md)
-      - [Flist Case Studies](manual/documentation/developers/flist/flist_case_studies/flist_case_studies.md)
-        - [Case Study: Debian 12](manual/documentation/developers/flist/flist_case_studies/flist_debian_case_study.md)
-        - [Case Study: Nextcloud AIO](manual/documentation/developers/flist/flist_case_studies/flist_nextcloud_case_study.md)
-    - [Internals](manual/documentation/developers/internals/internals.md)
-      - [Reliable Message Bus - RMB](manual/documentation/developers/internals/rmb/rmb_toc.md)
-        - [Introduction to RMB](manual/documentation/developers/internals/rmb/rmb_intro.md)
-        - [RMB Specs](manual/documentation/developers/internals/rmb/rmb_specs.md)
-        - [RMB Peer](manual/documentation/developers/internals/rmb/uml/peer.md)
-        - [RMB Relay](manual/documentation/developers/internals/rmb/uml/relay.md)
-      - [Zero-OS](manual/documentation/developers/internals/zos/readme.md)
-        - [Manual](manual/documentation/developers/internals/zos/manual/manual.md)
-          - [Workload Types](manual/documentation/developers/internals/zos/manual/workload_types.md)
-        - [Internal Modules](manual/documentation/developers/internals/zos/internals/internals.md)
-          - [Identity](manual/documentation/developers/internals/zos/internals/identity/readme.md)
-            - [Node ID Generation](manual/documentation/developers/internals/zos/internals/identity/identity.md)
-            - [Node Upgrade](manual/documentation/developers/internals/zos/internals/identity/upgrade.md)
-          - [Node](manual/documentation/developers/internals/zos/internals/node/readme.md)
-          - [Storage](manual/documentation/developers/internals/zos/internals/storage/readme.md)
-          - [Network](manual/documentation/developers/internals/zos/internals/network/readme.md)
-            - [Introduction](manual/documentation/developers/internals/zos/internals/network/introduction.md)
-            - [Definitions](manual/documentation/developers/internals/zos/internals/network/definitions.md)
-            - [Mesh](manual/documentation/developers/internals/zos/internals/network/mesh.md)
-            - [Setup](manual/documentation/developers/internals/zos/internals/network/setup_farm_network.md)
-          - [Flist](manual/documentation/developers/internals/zos/internals/flist/readme.md)
-          - [Container](manual/documentation/developers/internals/zos/internals/container/readme.md)
-          - [VM](manual/documentation/developers/internals/zos/internals/vmd/readme.md)
-          - [Provision](manual/documentation/developers/internals/zos/internals/provision/readme.md)
-          - [Capacity](manual/documentation/developers/internals/zos/internals/capacity.md)
-        - [Performance Monitor Package](manual/documentation/developers/internals/zos/performance/performance.md)
-          - [Public IPs Validation Task](manual/documentation/developers/internals/zos/performance/publicips.md)
-          - [CPUBenchmark](manual/documentation/developers/internals/zos/performance/cpubench.md)
-          - [IPerf](manual/documentation/developers/internals/zos/performance/iperf.md)
-          - [Health Check](manual/documentation/developers/internals/zos/performance/healthcheck.md)
-        - [API](manual/documentation/developers/internals/zos/manual/api.md)
-    - [Grid Deployment](manual/documentation/developers/grid_deployment/grid_deployment.md)
-      - [TFGrid Stacks](manual/documentation/developers/grid_deployment/tfgrid_stacks.md)
-      - [Full VM Grid Deployment](manual/documentation/developers/grid_deployment/grid_deployment_full_vm.md)
-      - [Grid Snapshots](manual/documentation/developers/grid_deployment/snapshots.md)
-  - [Farmers](manual/documentation/farmers/farmers.md)
-    - [Build a 3Node](manual/documentation/farmers/3node_building/3node_building.md)
-      - [1. Create a Farm](manual/documentation/farmers/3node_building/1_create_farm.md)
-      - [2. Create a Zero-OS Bootstrap Image](manual/documentation/farmers/3node_building/2_bootstrap_image.md)
-      - [3. Set the Hardware](manual/documentation/farmers/3node_building/3_set_hardware.md)
-      - [4. Wipe All the Disks](manual/documentation/farmers/3node_building/4_wipe_all_disks.md)
-      - [5. Set the BIOS/UEFI](manual/documentation/farmers/3node_building/5_set_bios_uefi.md)
-      - [6. Boot the 3Node](manual/documentation/farmers/3node_building/6_boot_3node.md)
-    - [Farming Optimization](manual/documentation/farmers/farming_optimization/farming_optimization.md)
-      - [GPU Farming](manual/documentation/farmers/3node_building/gpu_farming.md)
-      - [Set Additional Fees](manual/documentation/farmers/farming_optimization/set_additional_fees.md)
-      - [Minting Receipts](manual/documentation/farmers/3node_building/minting_receipts.md)
-      - [Minting Periods](manual/documentation/farmers/farming_optimization/minting_periods.md)
-      - [Room Parameters](manual/documentation/farmers/farming_optimization/farm_room_parameters.md)
-      - [Farming Costs](manual/documentation/farmers/farming_optimization/farming_costs.md)
-      - [Calculate Your ROI](manual/documentation/farmers/farming_optimization/calculate_roi.md)
-    - [Advanced Networking](manual/documentation/farmers/advanced_networking/advanced_networking_toc.md)
-      - [Networking Overview](manual/documentation/farmers/advanced_networking/networking_overview.md)
-      - [Network Considerations](manual/documentation/farmers/advanced_networking/network_considerations.md)
-      - [Network Setup](manual/documentation/farmers/advanced_networking/network_setup.md)
-    - [Farmerbot](manual/documentation/farmers/farmerbot/farmerbot_intro.md)
-      - [Quick Guide](manual/documentation/farmers/farmerbot/farmerbot_quick.md)
-      - [Additional Information](manual/documentation/farmers/farmerbot/farmerbot_information.md)
-      - [Minting and the Farmerbot](manual/documentation/farmers/farmerbot/farmerbot_minting.md)
-  - [System Administrators](manual/documentation/system_administrators/system_administrators.md)
-    - [Getting Started](manual/documentation/system_administrators/getstarted/tfgrid3_getstarted.md)
-      - [SSH Remote Connection](manual/documentation/system_administrators/getstarted/ssh_guide/ssh_guide.md)
-        - [SSH with OpenSSH](manual/documentation/system_administrators/getstarted/ssh_guide/ssh_openssh.md)
-        - [SSH with PuTTY](manual/documentation/system_administrators/getstarted/ssh_guide/ssh_putty.md)
-        - [SSH with WSL](manual/documentation/system_administrators/getstarted/ssh_guide/ssh_wsl.md)
-        - [WireGuard Access](manual/documentation/system_administrators/getstarted/ssh_guide/ssh_wireguard.md)
-      - [Remote Desktop and GUI](manual/documentation/system_administrators/getstarted/remote-desktop_gui/remote-desktop_gui.md)
-        - [Cockpit: a Web-based Interface for Servers](manual/documentation/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md)
-        - [XRDP: an Open-Source Remote Desktop Protocol](manual/documentation/system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md)
-        - [Apache Guacamole: a Clientless Remote Desktop Gateway](manual/documentation/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md)
-      - [Planetary Network](manual/documentation/system_administrators/getstarted/planetarynetwork.md)
-      - [TFGrid Services](manual/documentation/system_administrators/getstarted/tfgrid_services/tf_grid_services_readme.md)
-    - [GPU](manual/documentation/system_administrators/gpu/gpu_toc.md)
-      - [GPU Support](manual/documentation/system_administrators/gpu/gpu.md)
-    - [Terraform](manual/documentation/system_administrators/terraform/terraform_toc.md)
-      - [Overview](manual/documentation/system_administrators/terraform/terraform_readme.md)
-      - [Installing Terraform](manual/documentation/system_administrators/terraform/terraform_install.md)
-      - [Terraform Basics](manual/documentation/system_administrators/terraform/terraform_basics.md)
-      - [Full VM Deployment](manual/documentation/system_administrators/terraform/terraform_full_vm.md)
-      - [GPU Support](manual/documentation/system_administrators/terraform/terraform_gpu_support.md)
-      - [Resources](manual/documentation/system_administrators/terraform/resources/terraform_resources_readme.md)
-        - [Using Scheduler](manual/documentation/system_administrators/terraform/resources/terraform_scheduler.md)
-        - [Virtual Machine](manual/documentation/system_administrators/terraform/resources/terraform_vm.md)
-        - [Web Gateway](manual/documentation/system_administrators/terraform/resources/terraform_vm_gateway.md)
-        - [Kubernetes Cluster](manual/documentation/system_administrators/terraform/resources/terraform_k8s.md)
-        - [ZDB](manual/documentation/system_administrators/terraform/resources/terraform_zdb.md)
-        - [Quantum Safe Filesystem](manual/documentation/system_administrators/terraform/resources/terraform_qsfs.md)
-        - [QSFS on Micro VM](manual/documentation/system_administrators/terraform/resources/terraform_qsfs_on_microvm.md)
-        - [QSFS on Full VM](manual/documentation/system_administrators/terraform/resources/terraform_qsfs_on_full_vm.md)
-        - [CapRover](manual/documentation/system_administrators/terraform/resources/terraform_caprover.md)
-      - [Advanced](manual/documentation/system_administrators/terraform/advanced/terraform_advanced_readme.md)
-        - [Terraform Provider](manual/documentation/system_administrators/terraform/advanced/terraform_provider.md)
-        - [Terraform Provisioners](manual/documentation/system_administrators/terraform/advanced/terraform_provisioners.md)
-        - [Mounts](manual/documentation/system_administrators/terraform/advanced/terraform_mounts.md)
-        - [Capacity Planning](manual/documentation/system_administrators/terraform/advanced/terraform_capacity_planning.md)
-        - [Updates](manual/documentation/system_administrators/terraform/advanced/terraform_updates.md)
-        - [SSH Connection with Wireguard](manual/documentation/system_administrators/terraform/advanced/terraform_wireguard_ssh.md)
-        - [Set a Wireguard VPN](manual/documentation/system_administrators/terraform/advanced/terraform_wireguard_vpn.md)
-        - [Synced MariaDB Databases](manual/documentation/system_administrators/terraform/advanced/terraform_mariadb_synced_databases.md)
-        - [Nomad](manual/documentation/system_administrators/terraform/advanced/terraform_nomad.md)
-        - [Nextcloud Deployments](manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_toc.md)
-          - [Nextcloud All-in-One Deployment](manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_aio.md)
-          - [Nextcloud Single Deployment](manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_single.md)
-          - [Nextcloud Redundant Deployment](manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_redundant.md)
-          - [Nextcloud 2-Node VPN Deployment](manual/documentation/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md)
-    - [Pulumi](manual/documentation/system_administrators/pulumi/pulumi_readme.md)
-      - [Introduction to Pulumi](manual/documentation/system_administrators/pulumi/pulumi_intro.md)
-      - [Installing Pulumi](manual/documentation/system_administrators/pulumi/pulumi_install.md)
-      - [Deployment Examples](manual/documentation/system_administrators/pulumi/pulumi_examples.md)
-      - [Deployment Details](manual/documentation/system_administrators/pulumi/pulumi_deployment_details.md)
-    - [Mycelium](manual/documentation/system_administrators/mycelium/mycelium_toc.md)
-      - [Overview](manual/documentation/system_administrators/mycelium/overview.md)
-      - [Installation](manual/documentation/system_administrators/mycelium/installation.md)
-      - [Additional Information](manual/documentation/system_administrators/mycelium/information.md)
-      - [Message](manual/documentation/system_administrators/mycelium/message.md)
-      - [Packet](manual/documentation/system_administrators/mycelium/packet.md)
-      - [Data Packet](manual/documentation/system_administrators/mycelium/data_packet.md)
-      - [API YAML](manual/documentation/system_administrators/mycelium/api_yaml.md)
-    - [Computer and IT Basics](manual/documentation/system_administrators/computer_it_basics/computer_it_basics.md)
-      - [CLI and Scripts Basics](manual/documentation/system_administrators/computer_it_basics/cli_scripts_basics.md)
-      - [Docker Basics](manual/documentation/system_administrators/computer_it_basics/docker_basics.md)
-      - [Git and GitHub Basics](manual/documentation/system_administrators/computer_it_basics/git_github_basics.md)
-      - [Firewall Basics](manual/documentation/system_administrators/computer_it_basics/firewall_basics/firewall_basics.md)
-        - [UFW Basics](manual/documentation/system_administrators/computer_it_basics/firewall_basics/ufw_basics.md)
-        - [Firewalld Basics](manual/documentation/system_administrators/computer_it_basics/firewall_basics/firewalld_basics.md)
-      - [File Transfer](manual/documentation/system_administrators/computer_it_basics/file_transfer.md)
-    - [Advanced](manual/documentation/system_administrators/advanced/advanced.md)
-      - [Token Transfer Keygenerator](manual/documentation/system_administrators/advanced/token_transfer_keygenerator.md)
-      - [Cancel Contracts](manual/documentation/system_administrators/advanced/cancel_contracts.md)
-      - [Contract Bills Reports](manual/documentation/system_administrators/advanced/contract_bill_report.md)
-      - [Listing Free Public IPs](manual/documentation/system_administrators/advanced/list_public_ips.md)
-      - [Redis](manual/documentation/system_administrators/advanced/grid3_redis.md)
-      - [IPFS](manual/documentation/system_administrators/advanced/ipfs/ipfs_toc.md)
-        - [IPFS on a Full VM](manual/documentation/system_administrators/advanced/ipfs/ipfs_fullvm.md)
-        - [IPFS on a Micro VM](manual/documentation/system_administrators/advanced/ipfs/ipfs_microvm.md)
-  - [ThreeFold Token](manual/documentation/threefold_token/threefold_token.md)
-    - [TFT Bridges](manual/documentation/threefold_token/tft_bridges/tft_bridges.md)
-      - [TFChain-Stellar Bridge](manual/documentation/threefold_token/tft_bridges/tfchain_stellar_bridge.md)
-      - [BSC-Stellar Bridge](manual/documentation/threefold_token/tft_bridges/bsc_stellar_bridge.md)
-      - [BSC-Stellar Bridge Verification](manual/documentation/threefold_token/tft_bridges/bsc_stellar_bridge_verification.md)
-      - [Ethereum-Stellar Bridge](manual/documentation/threefold_token/tft_bridges/tft_ethereum/tft_ethereum.md)
-    - [Storing TFT](manual/documentation/threefold_token/storing_tft/storing_tft.md)
-      - [ThreeFold Connect App - Stellar](manual/documentation/threefold_token/storing_tft/tf_connect_app.md)
-      - [Lobstr Wallet - Stellar](manual/documentation/threefold_token/storing_tft/lobstr_wallet.md)
-      - [MetaMask - BSC & ETH](manual/documentation/threefold_token/storing_tft/metamask.md)
-      - [Hardware Wallet](manual/documentation/threefold_token/storing_tft/hardware_wallet.md)
-    - [Buy and Sell TFT](manual/documentation/threefold_token/buy_sell_tft/buy_sell_tft.md)
-      - [Quick Start - Stellar](manual/documentation/threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr_short_guide.md)
-      - [Lobstr Wallet - Stellar](manual/documentation/threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr_complete_guide.md)
-      - [MetaMask - BSC & ETH](manual/documentation/threefold_token/buy_sell_tft/tft_metamask/tft_metamask.md)
-      - [Pancake Swap - BSC](manual/documentation/threefold_token/buy_sell_tft/pancakeswap.md)
-    - [Liquidity Provider - LP](manual/documentation/threefold_token/liquidity/liquidity_readme.md)
-      - [Pancake Swap LP](manual/documentation/threefold_token/liquidity/liquidity_pancake.md)
-      - [1inch.io LP](manual/documentation/threefold_token/liquidity/liquidity_1inch.md)
-      - [Albedo LP](manual/documentation/threefold_token/liquidity/liquidity_albedo.md)
-    - [Transaction Fees](manual/documentation/threefold_token/transaction_fees.md)
+  - [Developers](developers/developers.md)
+    - [Javascript Client](developers/javascript/grid3_javascript_readme.md)
+      - [Installation](developers/javascript/grid3_javascript_installation.md)
+      - [Loading Client](developers/javascript/grid3_javascript_loadclient.md)
+      - [Deploy a VM](developers/javascript/grid3_javascript_vm.md)
+      - [Capacity Planning](developers/javascript/grid3_javascript_capacity_planning.md)
+      - [Deploy Multiple VMs](developers/javascript/grid3_javascript_vms.md)
+      - [Deploy CapRover](developers/javascript/grid3_javascript_caprover.md)
+      - [Gateways](developers/javascript/grid3_javascript_vm_gateways.md)
+      - [Deploy a Kubernetes Cluster](developers/javascript/grid3_javascript_kubernetes.md)
+      - [Deploy a ZDB](developers/javascript/grid3_javascript_zdb.md)
+      - [Deploy ZDBs for QSFS](developers/javascript/grid3_javascript_qsfs_zdbs.md)
+      - [QSFS](developers/javascript/grid3_javascript_qsfs.md)
+      - [Key Value Store](developers/javascript/grid3_javascript_kvstore.md)
+      - [VM with Wireguard and Gateway](developers/javascript/grid3_wireguard_gateway.md)
+      - [GPU Support](developers/javascript/grid3_javascript_gpu_support.md)
+    - [Go Client](developers/go/grid3_go_readme.md)
+      - [Installation](developers/go/grid3_go_installation.md)
+      - [Loading Client](developers/go/grid3_go_load_client.md)
+      - [Deploy a VM](developers/go/grid3_go_vm.md)
+      - [Deploy Multiple VMs](developers/go/grid3_go_vms.md)
+      - [Deploy Gateways](developers/go/grid3_go_gateways.md)
+      - [Deploy Kubernetes](developers/go/grid3_go_kubernetes.md)
+      - [Deploy a QSFS](developers/go/grid3_go_qsfs.md)
+      - [GPU and Go](developers/go/grid3_go_gpu.md)
+        - [GPU Support](developers/go/grid3_go_gpu_support.md)
+        - [Deploy a VM with GPU](developers/go/grid3_go_vm_with_gpu.md)
+    - [TFCMD](developers/tfcmd/tfcmd.md)
+      - [Getting Started](developers/tfcmd/tfcmd_basics.md)
+      - [Deploy a VM](developers/tfcmd/tfcmd_vm.md)
+      - [Deploy Kubernetes](developers/tfcmd/tfcmd_kubernetes.md)
+      - [Deploy ZDB](developers/tfcmd/tfcmd_zdbs.md)
+      - [Gateway FQDN](developers/tfcmd/tfcmd_gateway_fqdn.md)
+      - [Gateway Name](developers/tfcmd/tfcmd_gateway_name.md)
+      - [Contracts](developers/tfcmd/tfcmd_contracts.md)
+    - [TFROBOT](developers/tfrobot/tfrobot.md)
+      - [Installation](developers/tfrobot/tfrobot_installation.md)
+      - [Configuration File](developers/tfrobot/tfrobot_config.md)
+      - [Deployment](developers/tfrobot/tfrobot_deploy.md)
+      - [Commands and Flags](developers/tfrobot/tfrobot_commands_flags.md)
+      - [Supported Configurations](developers/tfrobot/tfrobot_configurations.md)
+    - [ThreeFold Chain](developers/tfchain/dev_tfchain.md)
+      - [Introduction](developers/tfchain/introduction.md)
+      - [Farming Policies](developers/tfchain/farming_policies.md)
+      - [External Service Contract](developers/tfchain/tfchain_external_service_contract.md)
+      - [Solution Provider](developers/tfchain/tfchain_solution_provider.md)
+    - [Grid Proxy](developers/proxy/proxy_readme.md)
+      - [Introducing Grid Proxy](developers/proxy/proxy.md)
+      - [Setup](developers/proxy/setup.md)
+      - [DB Testing](developers/proxy/db_testing.md)
+      - [Commands](developers/proxy/commands.md)
+      - [Contributions](developers/proxy/contributions.md)
+      - [Explorer](developers/proxy/explorer.md)
+      - [Database](developers/proxy/database.md)
+      - [Production](developers/proxy/production.md)
+      - [Release](developers/proxy/release.md)
+    - [Flist](developers/flist/flist.md)
+      - [Zero-OS Hub](developers/flist/flist_hub/zos_hub.md)
+        - [Generate an API Token](developers/flist/flist_hub/api_token.md)
+        - [Convert Docker Image Into Flist](developers/flist/flist_hub/convert_docker_image.md)
+      - [Supported Flists](developers/flist/grid3_supported_flists.md)
+      - [Flist Case Studies](developers/flist/flist_case_studies/flist_case_studies.md)
+        - [Case Study: Debian 12](developers/flist/flist_case_studies/flist_debian_case_study.md)
+        - [Case Study: Nextcloud AIO](developers/flist/flist_case_studies/flist_nextcloud_case_study.md)
+    - [Internals](developers/internals/internals.md)
+      - [Reliable Message Bus - RMB](developers/internals/rmb/rmb_toc.md)
+        - [Introduction to RMB](developers/internals/rmb/rmb_intro.md)
+        - [RMB Specs](developers/internals/rmb/rmb_specs.md)
+        - [RMB Peer](developers/internals/rmb/uml/peer.md)
+        - [RMB Relay](developers/internals/rmb/uml/relay.md)
+      - [Zero-OS](developers/internals/zos/readme.md)
+        - [Manual](developers/internals/zos/manual/manual.md)
+          - [Workload Types](developers/internals/zos/manual/workload_types.md)
+        - [Internal Modules](developers/internals/zos/internals/internals.md)
+          - [Identity](developers/internals/zos/internals/identity/readme.md)
+            - [Node ID Generation](developers/internals/zos/internals/identity/identity.md)
+            - [Node Upgrade](developers/internals/zos/internals/identity/upgrade.md)
+          - [Node](developers/internals/zos/internals/node/readme.md)
+          - [Storage](developers/internals/zos/internals/storage/readme.md)
+          - [Network](developers/internals/zos/internals/network/readme.md)
+            - [Introduction](developers/internals/zos/internals/network/introduction.md)
+            - [Definitions](developers/internals/zos/internals/network/definitions.md)
+            - [Mesh](developers/internals/zos/internals/network/mesh.md)
+            - [Setup](developers/internals/zos/internals/network/setup_farm_network.md)
+          - [Flist](developers/internals/zos/internals/flist/readme.md)
+          - [Container](developers/internals/zos/internals/container/readme.md)
+          - [VM](developers/internals/zos/internals/vmd/readme.md)
+          - [Provision](developers/internals/zos/internals/provision/readme.md)
+          - [Capacity](developers/internals/zos/internals/capacity.md)
+        - [Performance Monitor Package](developers/internals/zos/performance/performance.md)
+          - [Public IPs Validation Task](developers/internals/zos/performance/publicips.md)
+          - [CPUBenchmark](developers/internals/zos/performance/cpubench.md)
+          - [IPerf](developers/internals/zos/performance/iperf.md)
+          - [Health Check](developers/internals/zos/performance/healthcheck.md)
+        - [API](developers/internals/zos/manual/api.md)
+    - [Grid Deployment](developers/grid_deployment/grid_deployment.md)
+      - [TFGrid Stacks](developers/grid_deployment/tfgrid_stacks.md)
+      - [Full VM Grid Deployment](developers/grid_deployment/grid_deployment_full_vm.md)
+      - [Grid Snapshots](developers/grid_deployment/snapshots.md)
+  - [Farmers](farmers/farmers.md)
+    - [Build a 3Node](farmers/3node_building/3node_building.md)
+      - [1. Create a Farm](farmers/3node_building/1_create_farm.md)
+      - [2. Create a Zero-OS Bootstrap Image](farmers/3node_building/2_bootstrap_image.md)
+      - [3. Set the Hardware](farmers/3node_building/3_set_hardware.md)
+      - [4. Wipe All the Disks](farmers/3node_building/4_wipe_all_disks.md)
+      - [5. Set the BIOS/UEFI](farmers/3node_building/5_set_bios_uefi.md)
+      - [6. Boot the 3Node](farmers/3node_building/6_boot_3node.md)
+    - [Farming Optimization](farmers/farming_optimization/farming_optimization.md)
+      - [GPU Farming](farmers/3node_building/gpu_farming.md)
+      - [Set Additional Fees](farmers/farming_optimization/set_additional_fees.md)
+      - [Minting Receipts](farmers/3node_building/minting_receipts.md)
+      - [Minting Periods](farmers/farming_optimization/minting_periods.md)
+      - [Room Parameters](farmers/farming_optimization/farm_room_parameters.md)
+      - [Farming Costs](farmers/farming_optimization/farming_costs.md)
+      - [Calculate Your ROI](farmers/farming_optimization/calculate_roi.md)
+    - [Advanced Networking](farmers/advanced_networking/advanced_networking_toc.md)
+      - [Networking Overview](farmers/advanced_networking/networking_overview.md)
+      - [Network Considerations](farmers/advanced_networking/network_considerations.md)
+      - [Network Setup](farmers/advanced_networking/network_setup.md)
+    - [Farmerbot](farmers/farmerbot/farmerbot_intro.md)
+      - [Quick Guide](farmers/farmerbot/farmerbot_quick.md)
+      - [Additional Information](farmers/farmerbot/farmerbot_information.md)
+      - [Minting and the Farmerbot](farmers/farmerbot/farmerbot_minting.md)
+  - [System Administrators](system_administrators/system_administrators.md)
+    - [Getting Started](system_administrators/getstarted/tfgrid3_getstarted.md)
+      - [SSH Remote Connection](system_administrators/getstarted/ssh_guide/ssh_guide.md)
+        - [SSH with OpenSSH](system_administrators/getstarted/ssh_guide/ssh_openssh.md)
+        - [SSH with PuTTY](system_administrators/getstarted/ssh_guide/ssh_putty.md)
+        - [SSH with WSL](system_administrators/getstarted/ssh_guide/ssh_wsl.md)
+        - [WireGuard Access](system_administrators/getstarted/ssh_guide/ssh_wireguard.md)
+      - [Remote Desktop and GUI](system_administrators/getstarted/remote-desktop_gui/remote-desktop_gui.md)
+        - [Cockpit: a Web-based Interface for Servers](system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md)
+        - [XRDP: an Open-Source Remote Desktop Protocol](system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md)
+        - [Apache Guacamole: a Clientless Remote Desktop Gateway](system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md)
+      - [Planetary Network](system_administrators/getstarted/planetarynetwork.md)
+      - [TFGrid Services](system_administrators/getstarted/tfgrid_services/tf_grid_services_readme.md)
+    - [GPU](system_administrators/gpu/gpu_toc.md)
+      - [GPU Support](system_administrators/gpu/gpu.md)
+    - [Terraform](system_administrators/terraform/terraform_toc.md)
+      - [Overview](system_administrators/terraform/terraform_readme.md)
+      - [Installing Terraform](system_administrators/terraform/terraform_install.md)
+      - [Terraform Basics](system_administrators/terraform/terraform_basics.md)
+      - [Full VM Deployment](system_administrators/terraform/terraform_full_vm.md)
+      - [GPU Support](system_administrators/terraform/terraform_gpu_support.md)
+      - [Resources](system_administrators/terraform/resources/terraform_resources_readme.md)
+        - [Using Scheduler](system_administrators/terraform/resources/terraform_scheduler.md)
+        - [Virtual Machine](system_administrators/terraform/resources/terraform_vm.md)
+        - [Web Gateway](system_administrators/terraform/resources/terraform_vm_gateway.md)
+        - [Kubernetes Cluster](system_administrators/terraform/resources/terraform_k8s.md)
+        - [ZDB](system_administrators/terraform/resources/terraform_zdb.md)
+        - [Quantum Safe Filesystem](system_administrators/terraform/resources/terraform_qsfs.md)
+        - [QSFS on Micro VM](system_administrators/terraform/resources/terraform_qsfs_on_microvm.md)
+        - [QSFS on Full VM](system_administrators/terraform/resources/terraform_qsfs_on_full_vm.md)
+        - [CapRover](system_administrators/terraform/resources/terraform_caprover.md)
+      - [Advanced](system_administrators/terraform/advanced/terraform_advanced_readme.md)
+        - [Terraform Provider](system_administrators/terraform/advanced/terraform_provider.md)
+        - [Terraform Provisioners](system_administrators/terraform/advanced/terraform_provisioners.md)
+        - [Mounts](system_administrators/terraform/advanced/terraform_mounts.md)
+        - [Capacity Planning](system_administrators/terraform/advanced/terraform_capacity_planning.md)
+        - [Updates](system_administrators/terraform/advanced/terraform_updates.md)
+        - [SSH Connection with Wireguard](system_administrators/terraform/advanced/terraform_wireguard_ssh.md)
+        - [Set a Wireguard VPN](system_administrators/terraform/advanced/terraform_wireguard_vpn.md)
+        - [Synced MariaDB Databases](system_administrators/terraform/advanced/terraform_mariadb_synced_databases.md)
+        - [Nomad](system_administrators/terraform/advanced/terraform_nomad.md)
+        - [Nextcloud Deployments](system_administrators/terraform/advanced/terraform_nextcloud_toc.md)
+          - [Nextcloud All-in-One Deployment](system_administrators/terraform/advanced/terraform_nextcloud_aio.md)
+          - [Nextcloud Single Deployment](system_administrators/terraform/advanced/terraform_nextcloud_single.md)
+          - [Nextcloud Redundant Deployment](system_administrators/terraform/advanced/terraform_nextcloud_redundant.md)
+          - [Nextcloud 2-Node VPN Deployment](system_administrators/terraform/advanced/terraform_nextcloud_vpn.md)
+    - [Pulumi](system_administrators/pulumi/pulumi_readme.md)
+      - [Introduction to Pulumi](system_administrators/pulumi/pulumi_intro.md)
+      - [Installing Pulumi](system_administrators/pulumi/pulumi_install.md)
+      - [Deployment Examples](system_administrators/pulumi/pulumi_examples.md)
+      - [Deployment Details](system_administrators/pulumi/pulumi_deployment_details.md)
+    - [Mycelium](system_administrators/mycelium/mycelium_toc.md)
+      - [Overview](system_administrators/mycelium/overview.md)
+      - [Installation](system_administrators/mycelium/installation.md)
+      - [Additional Information](system_administrators/mycelium/information.md)
+      - [Message](system_administrators/mycelium/message.md)
+      - [Packet](system_administrators/mycelium/packet.md)
+      - [Data Packet](system_administrators/mycelium/data_packet.md)
+      - [API YAML](system_administrators/mycelium/api_yaml.md)
+    - [Computer and IT Basics](system_administrators/computer_it_basics/computer_it_basics.md)
+      - [CLI and Scripts Basics](system_administrators/computer_it_basics/cli_scripts_basics.md)
+      - [Docker Basics](system_administrators/computer_it_basics/docker_basics.md)
+      - [Git and GitHub Basics](system_administrators/computer_it_basics/git_github_basics.md)
+      - [Firewall Basics](system_administrators/computer_it_basics/firewall_basics/firewall_basics.md)
+        - [UFW Basics](system_administrators/computer_it_basics/firewall_basics/ufw_basics.md)
+        - [Firewalld Basics](system_administrators/computer_it_basics/firewall_basics/firewalld_basics.md)
+      - [File Transfer](system_administrators/computer_it_basics/file_transfer.md)
+    - [Advanced](system_administrators/advanced/advanced.md)
+      - [Token Transfer Keygenerator](system_administrators/advanced/token_transfer_keygenerator.md)
+      - [Cancel Contracts](system_administrators/advanced/cancel_contracts.md)
+      - [Contract Bills Reports](system_administrators/advanced/contract_bill_report.md)
+      - [Listing Free Public IPs](system_administrators/advanced/list_public_ips.md)
+      - [Redis](system_administrators/advanced/grid3_redis.md)
+      - [IPFS](system_administrators/advanced/ipfs/ipfs_toc.md)
+        - [IPFS on a Full VM](system_administrators/advanced/ipfs/ipfs_fullvm.md)
+        - [IPFS on a Micro VM](system_administrators/advanced/ipfs/ipfs_microvm.md)
+  - [ThreeFold Token](threefold_token/threefold_token.md)
+    - [TFT Bridges](threefold_token/tft_bridges/tft_bridges.md)
+      - [TFChain-Stellar Bridge](threefold_token/tft_bridges/tfchain_stellar_bridge.md)
+      - [BSC-Stellar Bridge](threefold_token/tft_bridges/bsc_stellar_bridge.md)
+      - [BSC-Stellar Bridge Verification](threefold_token/tft_bridges/bsc_stellar_bridge_verification.md)
+      - [Ethereum-Stellar Bridge](threefold_token/tft_bridges/tft_ethereum/tft_ethereum.md)
+    - [Storing TFT](threefold_token/storing_tft/storing_tft.md)
+      - [ThreeFold Connect App - Stellar](threefold_token/storing_tft/tf_connect_app.md)
+      - [Lobstr Wallet - Stellar](threefold_token/storing_tft/lobstr_wallet.md)
+      - [MetaMask - BSC & ETH](threefold_token/storing_tft/metamask.md)
+      - [Hardware Wallet](threefold_token/storing_tft/hardware_wallet.md)
+    - [Buy and Sell TFT](threefold_token/buy_sell_tft/buy_sell_tft.md)
+      - [Quick Start - Stellar](threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr_short_guide.md)
+      - [Lobstr Wallet - Stellar](threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr_complete_guide.md)
+      - [MetaMask - BSC & ETH](threefold_token/buy_sell_tft/tft_metamask/tft_metamask.md)
+      - [Pancake Swap - BSC](threefold_token/buy_sell_tft/pancakeswap.md)
+    - [Liquidity Provider - LP](threefold_token/liquidity/liquidity_readme.md)
+      - [Pancake Swap LP](threefold_token/liquidity/liquidity_pancake.md)
+      - [1inch.io LP](threefold_token/liquidity/liquidity_1inch.md)
+      - [Albedo LP](threefold_token/liquidity/liquidity_albedo.md)
+    - [Transaction Fees](threefold_token/transaction_fees.md)
   - [FAQ](manual/documentation/faq/faq.md)
  - [Knowledge Base](manual/knowledge_base.md)
-  - [About](manual/knowledge_base/about/about.md)
-    - [ThreeFold History](manual/knowledge_base/about/threefold_history.md)
-    - [Token History](manual/knowledge_base/about/token_history.md)
[Genesis Pool](manual/knowledge_base/about/genesis_pool.md) - - [Genesis Pool Dubai](manual/knowledge_base/about/genesis_pool_dubai.md) - - [Genesis Pool Ghent](manual/knowledge_base/about/genesis_pool_ghent.md) - - [Genesis Pool Details](manual/knowledge_base/about/genesis_block_pool_details.md) - - [ThreeFold Tech](manual/knowledge_base/about/threefold_tech.md) - - [Organisation Structure](manual/knowledge_base/about/orgstructure.md) - - [Governance](manual/knowledge_base/about/governance.md) - - [ThreeFold Companies](manual/knowledge_base/about/threefold_companies.md) - - [ThreeFold Dubai](manual/knowledge_base/about/threefold_dubai.md) - - [ThreeFold VZW](manual/knowledge_base/about/threefold_vzw.md) - - [ThreeFold AG](manual/knowledge_base/about/threefold_ag.md) - - [Mazraa](manual/knowledge_base/about/mazraa.md) - - [BetterToken](manual/knowledge_base/about/bettertoken.md) - - [DAO](manual/knowledge_base/about/dao/dao.md) - - [ThreeFold DAO](manual/knowledge_base/about/dao/tfdao.md) - - [TFChain](manual/knowledge_base/about/org_tfchain.md) - - [ThreeFold Roadmap](manual/knowledge_base/about/roadmap/roadmap_readme.md) - - [Release Notes](manual/knowledge_base/about/roadmap/releasenotes/releasenotes_readme.md) - - [TFGrid v3.10.0](manual/knowledge_base/about/roadmap/releasenotes/tfgrid_release_3_10_0.md) - - [TFGrid v3.9.0](manual/knowledge_base/about/roadmap/releasenotes/tfgrid_release_3_9_0.md) - - [TFGrid v3.8.0](manual/knowledge_base/about/roadmap/releasenotes/tfgrid_release_3_8_0.md) - - [TFGrid v3.7.0](manual/knowledge_base/about/roadmap/releasenotes/tfgrid_release_3_7_0.md) - - [TFGrid v3.6.1](manual/knowledge_base/about/roadmap/releasenotes/tfgrid_release_3_6_1.md) - - [TFGrid v3.6.0](manual/knowledge_base/about/roadmap/releasenotes/tfgrid_release_3_6_0.md) - - [TFGrid v3.0.0 Alpha-5](manual/knowledge_base/about/roadmap/releasenotes/tfgrid_release_3_0_a5.md) - - [TFGrid v3.0.0 
Alpha-4](manual/knowledge_base/about/roadmap/releasenotes/tfgrid_release_3_0_a4.md) - - [TFGrid v3.0.0 Alpha-2](manual/knowledge_base/about/roadmap/releasenotes/tfgrid_release_3_0_a2.md) - - [TFGrid v3.0.0](manual/knowledge_base/about/roadmap/releasenotes/tfgrid_release_3_0.md) - - [ThreeFold Token](manual/knowledge_base/about/token_overview/token_overview.md) - - [Special Wallets](manual/knowledge_base/about/token_overview/special_wallets/stats_special_wallets.md) - - [Technology](manual/knowledge_base/technology/technology_toc.md) - - [How It Works](manual/knowledge_base/technology/grid3_howitworks.md) - - [Grid Concepts](manual/knowledge_base/technology/concepts/concepts_readme.md) - - [TFGrid Primitives](manual/knowledge_base/technology/concepts/grid_primitives.md) - - [TFGrid Component List](manual/knowledge_base/technology/concepts/grid3_components.md) - - [Infrastructure as Code](manual/knowledge_base/technology/concepts/grid3_iac.md) - - [Proof of Utilization](manual/knowledge_base/technology/concepts/proof_of_utilization.md) - - [Contract Grace Period](manual/knowledge_base/technology/concepts/contract_grace_period.md) - - [What's New on TFGrid v3.x](manual/knowledge_base/technology/concepts/grid3_whatsnew.md) - - [TFChain](manual/knowledge_base/technology/concepts/concepts_tfchain.md) - - [TFGrid by Design](manual/knowledge_base/technology/concepts/tfgrid_by_design.md) - - [Primitives](manual/knowledge_base/technology/primitives/primitives_toc.md) - - [Compute](manual/knowledge_base/technology/primitives/compute/compute_toc.md) - - [ZKube](manual/knowledge_base/technology/primitives/compute/zkube.md) - - [ZMachine](manual/knowledge_base/technology/primitives/compute/zmachine.md) - - [CoreX](manual/knowledge_base/technology/primitives/compute/corex.md) - - [Storage](manual/knowledge_base/technology/primitives/storage/storage_toc.md) - - [ZOS Filesystem](manual/knowledge_base/technology/primitives/storage/zos_fs.md) - - [ZOS 
Mount](manual/knowledge_base/technology/primitives/storage/zmount.md) - - [Quantum Safe File System](manual/knowledge_base/technology/primitives/storage/qsfs.md) - - [Zero-DB](manual/knowledge_base/technology/primitives/storage/zdb.md) - - [Zero-Disk](manual/knowledge_base/technology/primitives/storage/zdisk.md) - - [Network](manual/knowledge_base/technology/primitives/network/network_toc.md) - - [ZNET](manual/knowledge_base/technology/primitives/network/znet.md) - - [ZNIC](manual/knowledge_base/technology/primitives/network/znic.md) - - [WebGateway](manual/knowledge_base/technology/primitives/network/webgw3.md) - - [Zero-OS Advantages](manual/knowledge_base/technology/zos/benefits/zos_advantages.md) - - [Quantum Safe Storage](manual/knowledge_base/technology/qsss/qsss_home.md) - - [Smart Contract IT](manual/knowledge_base/technology/smartcontract_it/smartcontract_toc.md) - - [Introduction](manual/knowledge_base/technology/smartcontract_it/smartcontract_tfgrid3.md) - - [Infrastructure As Code - IAC](manual/knowledge_base/technology/smartcontract_it/smartcontract_iac.md) - - [3Bot Integration](manual/knowledge_base/technology/smartcontract_it/smartcontract_3bot.md) - - [Farming](manual/knowledge_base/farming/farming_toc.md) - - [Farming Rewards](manual/knowledge_base/farming/farming_reward.md) - - [Proof-of-Capacity](manual/knowledge_base/farming/proof_of_capacity.md) - - [Proof-of-Utilization](manual/knowledge_base/farming/proof_of_utilization.md) - - [PoC DAO Rules](manual/knowledge_base/farming/poc_dao_rules.md) - - [Cloud](manual/knowledge_base/cloud/cloud_toc.md) - - [Cloud Units](manual/knowledge_base/cloud/cloudunits.md) - - [Pricing](manual/knowledge_base/cloud/pricing/pricing_toc.md) - - [Pricing Overview](manual/knowledge_base/cloud/pricing/pricing.md) - - [Staking Discounts](manual/knowledge_base/cloud/pricing/staking_discount_levels.md) - - [Cloud Pricing Compare](manual/knowledge_base/cloud/pricing/cloud_pricing_compare.md) - - [Grid 
Billing](manual/knowledge_base/cloud/grid_billing/grid_billing.md) - - [Resource Units](manual/knowledge_base/cloud/resource_units_calc_cloudunits.md) - - [Resource Units Advanced](manual/knowledge_base/cloud/resourceunits_advanced.md) - - [Collaboration](manual/knowledge_base/collaboration/collaboration_toc.md) - - [How to Contribute](manual/knowledge_base/collaboration/contribute.md) - - [Development Process](manual/knowledge_base/collaboration/development_process.md) - - [Feature Request](manual/knowledge_base/collaboration/feature_request.md) - - [Bug Report](manual/knowledge_base/collaboration/bug_report.md) - - [Issue Labels](manual/knowledge_base/collaboration/issue_labels.md) - - [Development Cycle](manual/knowledge_base/collaboration/development_cycle.md) - - [Release Process](manual/knowledge_base/collaboration/release_process.md) - - [Pull Request Template](manual/knowledge_base/collaboration/PULL_REQUEST_TEMPLATE.md) - - [Collaboration Tools](manual/knowledge_base/collaboration/collaboration_tools/collaboration_tools.md) - - [Circle Tool](manual/knowledge_base/collaboration/collaboration_tools/circle_tool.md) - - [Website Deployer](manual/knowledge_base/collaboration/collaboration_tools/website_tool.md) - - [Website Link Checker](manual/knowledge_base/collaboration/collaboration_tools/website_link_checker.md) - - [How to Test](manual/knowledge_base/collaboration/testing/testing_readme.md) - - [TestLodge](manual/knowledge_base/collaboration/testing/testlodge.md) - - [Code of Conduct](manual/knowledge_base/collaboration/code_conduct.md) - - [Legal](manual/knowledge_base/legal/terms_conditions_all3.md) - - [Disclaimer](manual/knowledge_base/legal/disclaimer.md) - - [Definitions](manual/knowledge_base/legal/definitions_legal.md) - - [Privacy Policy](manual/knowledge_base/legal/privacypolicy.md) - - [Terms & Conditions](manual/knowledge_base/legal/terms_conditions/terms_conditions_toc.md) - - [Terms & Conditions ThreeFold Related 
Websites](manual/knowledge_base/legal/terms_conditions/terms_conditions_websites.md) - - [Terms & Conditions TFGrid Users TFGrid 3](manual/knowledge_base/legal/terms_conditions/terms_conditions_griduser.md) - - [TFTA to TFT](manual/knowledge_base/legal/terms_conditions/tfta_to_tft.md) - - [Terms & Conditions TFGrid Farmers TFGrid 3](manual/knowledge_base/legal/terms_conditions/terms_conditions_farmer3.md) - - [Terms & Conditions Sales](manual/knowledge_base/legal/terms_conditions/terms_conditions_sales.md) \ No newline at end of file + - [About](about/about.md) + - [ThreeFold History](about/threefold_history.md) + - [Token History](about/token_history.md) + - [Genesis Pool](about/genesis_pool.md) + - [Genesis Pool Dubai](about/genesis_pool_dubai.md) + - [Genesis Pool Ghent](about/genesis_pool_ghent.md) + - [Genesis Pool Details](about/genesis_block_pool_details.md) + - [ThreeFold Tech](about/threefold_tech.md) + - [Organisation Structure](about/orgstructure.md) + - [Governance](about/governance.md) + - [ThreeFold Companies](about/threefold_companies.md) + - [ThreeFold Dubai](about/threefold_dubai.md) + - [ThreeFold VZW](about/threefold_vzw.md) + - [ThreeFold AG](about/threefold_ag.md) + - [Mazraa](about/mazraa.md) + - [BetterToken](about/bettertoken.md) + - [DAO](about/dao/dao.md) + - [ThreeFold DAO](about/dao/tfdao.md) + - [TFChain](about/org_tfchain.md) + - [ThreeFold Roadmap](about/roadmap/roadmap_readme.md) + - [Release Notes](about/roadmap/releasenotes/releasenotes_readme.md) + - [TFGrid v3.10.0](about/roadmap/releasenotes/tfgrid_release_3_10_0.md) + - [TFGrid v3.9.0](about/roadmap/releasenotes/tfgrid_release_3_9_0.md) + - [TFGrid v3.8.0](about/roadmap/releasenotes/tfgrid_release_3_8_0.md) + - [TFGrid v3.7.0](about/roadmap/releasenotes/tfgrid_release_3_7_0.md) + - [TFGrid v3.6.1](about/roadmap/releasenotes/tfgrid_release_3_6_1.md) + - [TFGrid v3.6.0](about/roadmap/releasenotes/tfgrid_release_3_6_0.md) + - [TFGrid v3.0.0 
Alpha-5](about/roadmap/releasenotes/tfgrid_release_3_0_a5.md) + - [TFGrid v3.0.0 Alpha-4](about/roadmap/releasenotes/tfgrid_release_3_0_a4.md) + - [TFGrid v3.0.0 Alpha-2](about/roadmap/releasenotes/tfgrid_release_3_0_a2.md) + - [TFGrid v3.0.0](about/roadmap/releasenotes/tfgrid_release_3_0.md) + - [ThreeFold Token](about/token_overview/token_overview.md) + - [Special Wallets](about/token_overview/special_wallets/stats_special_wallets.md) + - [Technology](technology/technology_toc.md) + - [How It Works](technology/grid3_howitworks.md) + - [Grid Concepts](technology/concepts/concepts_readme.md) + - [TFGrid Primitives](technology/concepts/grid_primitives.md) + - [TFGrid Component List](technology/concepts/grid3_components.md) + - [Infrastructure as Code](technology/concepts/grid3_iac.md) + - [Proof of Utilization](technology/concepts/proof_of_utilization.md) + - [Contract Grace Period](technology/concepts/contract_grace_period.md) + - [What's New on TFGrid v3.x](technology/concepts/grid3_whatsnew.md) + - [TFChain](technology/concepts/concepts_tfchain.md) + - [TFGrid by Design](technology/concepts/tfgrid_by_design.md) + - [Primitives](technology/primitives/primitives_toc.md) + - [Compute](technology/primitives/compute/compute_toc.md) + - [ZKube](technology/primitives/compute/zkube.md) + - [ZMachine](technology/primitives/compute/zmachine.md) + - [CoreX](technology/primitives/compute/corex.md) + - [Storage](technology/primitives/storage/storage_toc.md) + - [ZOS Filesystem](technology/primitives/storage/zos_fs.md) + - [ZOS Mount](technology/primitives/storage/zmount.md) + - [Quantum Safe File System](technology/primitives/storage/qsfs.md) + - [Zero-DB](technology/primitives/storage/zdb.md) + - [Zero-Disk](technology/primitives/storage/zdisk.md) + - [Network](technology/primitives/network/network_toc.md) + - [ZNET](technology/primitives/network/znet.md) + - [ZNIC](technology/primitives/network/znic.md) + - [WebGateway](technology/primitives/network/webgw3.md) + - [Zero-OS 
Advantages](technology/zos/benefits/zos_advantages.md) + - [Quantum Safe Storage](technology/qsss/qsss_home.md) + - [Smart Contract IT](technology/smartcontract_it/smartcontract_toc.md) + - [Introduction](technology/smartcontract_it/smartcontract_tfgrid3.md) + - [Infrastructure As Code - IAC](technology/smartcontract_it/smartcontract_iac.md) + - [3Bot Integration](technology/smartcontract_it/smartcontract_3bot.md) + - [Farming](farming/farming_toc.md) + - [Farming Rewards](farming/farming_reward.md) + - [Proof-of-Capacity](farming/proof_of_capacity.md) + - [Proof-of-Utilization](farming/proof_of_utilization.md) + - [PoC DAO Rules](farming/poc_dao_rules.md) + - [Cloud](cloud/cloud_toc.md) + - [Cloud Units](cloud/cloudunits.md) + - [Pricing](cloud/pricing/pricing_toc.md) + - [Pricing Overview](cloud/pricing/pricing.md) + - [Staking Discounts](cloud/pricing/staking_discount_levels.md) + - [Cloud Pricing Compare](cloud/pricing/cloud_pricing_compare.md) + - [Grid Billing](cloud/grid_billing/grid_billing.md) + - [Resource Units](cloud/resource_units_calc_cloudunits.md) + - [Resource Units Advanced](cloud/resourceunits_advanced.md) + - [Collaboration](collaboration/collaboration_toc.md) + - [How to Contribute](collaboration/contribute.md) + - [Development Process](collaboration/development_process.md) + - [Feature Request](collaboration/feature_request.md) + - [Bug Report](collaboration/bug_report.md) + - [Issue Labels](collaboration/issue_labels.md) + - [Development Cycle](collaboration/development_cycle.md) + - [Release Process](collaboration/release_process.md) + - [Pull Request Template](collaboration/PULL_REQUEST_TEMPLATE.md) + - [Collaboration Tools](collaboration/collaboration_tools/collaboration_tools.md) + - [Circle Tool](collaboration/collaboration_tools/circle_tool.md) + - [Website Deployer](collaboration/collaboration_tools/website_tool.md) + - [Website Link Checker](collaboration/collaboration_tools/website_link_checker.md) + - [How to 
Test](collaboration/testing/testing_readme.md) + - [TestLodge](collaboration/testing/testlodge.md) + - [Code of Conduct](collaboration/code_conduct.md) + - [Legal](manual_legal/terms_conditions_all3.md) + - [Disclaimer](manual_legal/disclaimer.md) + - [Definitions](manual_legal/definitions_legal.md) + - [Privacy Policy](manual_legal/privacypolicy.md) + - [Terms & Conditions](manual_legal/terms_conditions/terms_conditions_toc.md) + - [Terms & Conditions ThreeFold Related Websites](manual_legal/terms_conditions/terms_conditions_websites.md) + - [Terms & Conditions TFGrid Users TFGrid 3](manual_legal/terms_conditions/terms_conditions_griduser.md) + - [TFTA to TFT](manual_legal/terms_conditions/tfta_to_tft.md) + - [Terms & Conditions TFGrid Farmers TFGrid 3](manual_legal/terms_conditions/terms_conditions_farmer3.md) + - [Terms & Conditions Sales](manual_legal/terms_conditions/terms_conditions_sales.md) \ No newline at end of file diff --git a/collections/dashboard/.collection b/collections/about/.collection similarity index 100% rename from collections/dashboard/.collection rename to collections/about/.collection diff --git a/collections/about/about.md b/collections/about/about.md new file mode 100644 index 0000000..ba76824 --- /dev/null +++ b/collections/about/about.md @@ -0,0 +1,13 @@ +

# About
+ +This section of the manual covers information about ThreeFold, its history, its roadmap, its vision and more. + +It's a good place to start if you want to have an overview of ThreeFold since its beginning. + +

## Table of Contents
+ +- [ThreeFold History](./threefold_history.md) +- [ThreeFold Tech](./threefold_tech.md) +- [Organisation Structure](./orgstructure.md) +- [ThreeFold Roadmap](./roadmap/roadmap_readme.md) +- [ThreeFold Token](./token_overview/token_overview.md) \ No newline at end of file diff --git a/collections/about/bettertoken.md b/collections/about/bettertoken.md new file mode 100644 index 0000000..81ff291 --- /dev/null +++ b/collections/about/bettertoken.md @@ -0,0 +1,33 @@ +

# BetterToken NV
+ +

## Table of Contents
+ +- [Introduction](#introduction) +- [Income](#income) +- [Structure](#structure) +- [Expected Changes](#expected-changes) + +*** + +## Introduction + +European Farming Cooperative for the foundation: + +- Operates a data center in Lochristi (Belgium) offering hosting and connectivity for TF Farmers +- Currently hosts 100+ nodes – many of them owned by TF farmers +- [ThreeFold Tech](./threefold_tech.md) NV uses some of their equipment today for development +- Sale of small servers to TF Farmers was done mainly via an online webshop + +## Income + +- Hosting fees for the maintenance & running of the farmer pool + +## Structure + +- Started 30 November 2016 +- [Limited liability company in Belgium](http://www.ejustice.just.fgov.be/tsv_pdf/2016/11/30/16324281.pdf) +- Peter Van der Henst is the managing director + +## Expected Changes + +BetterToken continues to be a farming cooperative for Europe. Right now there is not much happening in BetterToken. \ No newline at end of file diff --git a/collections/about/dao/dao.md b/collections/about/dao/dao.md new file mode 100644 index 0000000..61f6568 --- /dev/null +++ b/collections/about/dao/dao.md @@ -0,0 +1,55 @@ +

# DAO
+ +

## Table of Contents
+ +- [Introduction](#introduction) +- [Overview of a DAO](#overview-of-a-dao) +- [Comparisons](#comparisons) +- [DAO Rules](#dao-rules) +- [DAO and Blockchain](#dao-and-blockchain) + +*** + +## Introduction + +We present the main concept of a DAO. + +A DAO is a decentralized autonomous organization. To be more precise, this means the following: + +- Decentralized = Online, global, uncensorable. +- Autonomous = Self-governing. +- Organization = Coordination & collaboration around shared objectives. + +## Overview of a DAO + +DAOs are an effective and safe way to work with like-minded folks around the globe. + +Think of them like an internet-native business that's collectively owned and managed by its members. They have built-in treasuries that no one has the authority to access without the approval of the group. Decisions are governed by proposals and voting to ensure everyone in the organization has a voice. + +This opens up so many new opportunities for global collaboration and coordination. + +DAOs operate using smart contracts, which are essentially chunks of code that automatically execute whenever a set of criteria are met. These smart contracts establish the DAO’s rules. Those with a stake in a DAO then get voting rights and may influence how the organization operates by deciding on or creating new governance proposals. + +## Comparisons + +The following table provides some comparisons between a DAO and a traditional organization. + +| DAO | A traditional organization | +| ----------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------- | +| Usually flat, and fully democratized. | Usually hierarchical. | +| Voting required by members for any changes to be implemented. | Depending on structure, changes can be demanded from a sole party, or voting may be offered. 
| +| Votes tallied, and outcome implemented automatically without trusted intermediary. | If voting is allowed, votes are tallied internally, and the outcome of voting must be handled manually. | +| Services offered are handled automatically in a decentralized manner (for example distribution of philanthropic funds). | Requires human handling, or centrally controlled automation, prone to manipulation. | +| All activity is transparent and fully public. | Activity is typically private, and limited to the public. | + +*Info from https://ethereum.org/en/dao/* + +## DAO Rules + +A decentralized autonomous organization (DAO) is an organization represented by rules encoded as a computer program that is transparent, controlled by the organization members and not influenced by a central government. A DAO's financial and voting transaction record and program rules are maintained on a blockchain. + +## DAO and Blockchain + +Decentralized autonomous organizations are typified by the use of blockchain technology to provide a secure digital ledger to track financial and other community interactions across the internet, hardened against forgery by trusted timestamping and dissemination of a distributed database. This approach eliminates the need to involve a mutually acceptable trusted third party in a transaction, simplifying the transaction. The costs of a blockchain-enabled transaction and of the associated data reporting may be substantially offset by the elimination of both the trusted third party and of the need for repetitive recording of contract exchanges in different records. For example, the blockchain data could, in principle and if regulatory structures permit it, replace public documents such as deeds and titles. In theory, a blockchain approach allows multiple cloud computing users to enter a loosely coupled peer-to-peer smart contract collaboration. 
+ +*Info from [here](https://en.wikipedia.org/wiki/Decentralized_autonomous_organization)* \ No newline at end of file diff --git a/collections/about/dao/dao_info.md b/collections/about/dao/dao_info.md new file mode 100644 index 0000000..19aa1f8 --- /dev/null +++ b/collections/about/dao/dao_info.md @@ -0,0 +1,33 @@ + +![](img/dao_whatis_.jpg) + + +- Decentralized = Online, global, uncensorable. +- Autonomous = Self-governing. +- Organization = Coordination & collaboration around shared objectives. + +DAOs are an effective and safe way to work with like-minded folks around the globe. + +Think of them like an internet-native business that's collectively owned and managed by its members. They have built-in treasuries that no one has the authority to access without the approval of the group. Decisions are governed by proposals and voting to ensure everyone in the organization has a voice. + +This opens up so many new opportunities for global collaboration and coordination. + +DAOs operate using smart contracts, which are essentially chunks of code that automatically execute whenever a set of criteria are met. These smart contracts establish the DAO’s rules. Those with a stake in a DAO then get voting rights and may influence how the organization operates by deciding on or creating new governance proposals. + + +| DAO | A traditional organization | +| ----------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------- | +| Usually flat, and fully democratized. | Usually hierarchical. | +| Voting required by members for any changes to be implemented. | Depending on structure, changes can be demanded from a sole party, or voting may be offered. | +| Votes tallied, and outcome implemented automatically without trusted intermediary. 
| If voting is allowed, votes are tallied internally, and the outcome of voting must be handled manually. | +| Services offered are handled automatically in a decentralized manner (for example distribution of philanthropic funds). | Requires human handling, or centrally controlled automation, prone to manipulation. | +| All activity is transparent and fully public. | Activity is typically private, and limited to the public. | + +*Info from https://ethereum.org/en/dao/, picture from https://cointelegraph.com/ethereum-for-beginners* + +A decentralized autonomous organization (DAO) is an organization represented by rules encoded as a computer program that is transparent, controlled by the organization members and not influenced by a central government. A DAO's financial and voting transaction record and program rules are maintained on a blockchain. + +Decentralized autonomous organizations are typified by the use of blockchain technology to provide a secure digital ledger to track financial and other community interactions across the internet, hardened against forgery by trusted timestamping and dissemination of a distributed database. This approach eliminates the need to involve a mutually acceptable trusted third party in a transaction, simplifying the transaction. The costs of a blockchain-enabled transaction and of the associated data reporting may be substantially offset by the elimination of both the trusted third party and of the need for repetitive recording of contract exchanges in different records. For example, the blockchain data could, in principle and if regulatory structures permit it, replace public documents such as deeds and titles. In theory, a blockchain approach allows multiple cloud computing users to enter a loosely coupled peer-to-peer smart contract collaboration. 
+ + + diff --git a/collections/about/dao/dao_more_info.md b/collections/about/dao/dao_more_info.md new file mode 100644 index 0000000..dc6599e --- /dev/null +++ b/collections/about/dao/dao_more_info.md @@ -0,0 +1,6 @@ +### More Info: + +- [ThreeFold DAO](@tfdao) +- [Why are DAOs important](@dao_why) +- [ThreeFold Validators](@validators_faq) +- [Decentralization Info](@decentralization) diff --git a/collections/about/dao/dao_why.md b/collections/about/dao/dao_why.md new file mode 100644 index 0000000..c5884f1 --- /dev/null +++ b/collections/about/dao/dao_why.md @@ -0,0 +1,97 @@ +# What is a decentralized autonomous organization, and how does a DAO work? + +![](img/dao_whatis_.jpg) + +A decentralized autonomous organization (DAO) is an entity with no central leadership. Decisions get made from the bottom up, governed by a community organized around a specific set of rules enforced on a blockchain. + +DAOs are internet-native organizations collectively owned and managed by their members. They have built-in treasuries that are only accessible with the approval of their members. Decisions are made via proposals the group votes on during a specified period. + +A DAO works without hierarchical management and can have a large number of purposes. Freelancer networks where contractors pool their funds to pay for software subscriptions, charitable organizations where members approve donations and venture capital firms owned by a group are all possible with these organizations. + +Before moving on, it’s important to distinguish a DAO, an internet-native organization, from The DAO, one of the first such organizations ever created. The DAO was a project founded in 2016 that ultimately failed and led to a dramatic split of the Ethereum network. + +## How does a DAO work? + +As mentioned above, a DAO is an organization where decisions get made from the bottom up; a collective of members owns the organization. 
There are various ways to participate in a DAO, usually through the ownership of a token. + +DAOs operate using smart contracts, which are essentially chunks of code that automatically execute whenever a set of criteria are met. Smart contracts are deployed on numerous blockchains nowadays, though Ethereum was the first to use them. + +These smart contracts establish the DAO’s rules. Those with a stake in a DAO then get voting rights and may influence how the organization operates by deciding on or creating new governance proposals. + +This model prevents DAOs from being spammed with proposals: A proposal will only pass once the majority of stakeholders approve it. How that majority is determined varies from DAO to DAO and is specified in the smart contracts. + +DAOs are fully autonomous and transparent. As they are built on open-source blockchains, anyone can view their code. Anyone can also audit their built-in treasuries, as the blockchain records all financial transactions. + +Typically, a DAO launch occurs in three major steps: +1. **Smart contract creation:** First, a developer or group of developers must create the smart contract behind the DAO. After launch, they can only change the rules set by these contracts through the governance system. That means they must extensively test the contracts to ensure they don’t overlook important details. + +2. **Funding:** After the smart contracts have been created, the DAO needs to determine a way to receive funding and how to enact governance. More often than not, tokens are sold to raise funds; these tokens give holders voting rights. + +3. **Deployment:** Once everything is set up, the DAO needs to be deployed on the blockchain. From this point on, stakeholders decide on the future of the organization. The organization’s creators — those who wrote the smart contracts — no longer influence the project any more than other stakeholders. + +## Why do we need DAOs? 
+ +Being internet-native organizations, DAOs have several advantages over traditional organizations. One significant advantage of DAOs is the lack of trust needed between two parties. While a traditional organization requires a lot of trust in the people behind it — especially on behalf of investors — with DAOs, only the code needs to be trusted. + +Trusting that code is easier to do as it’s publicly available and can be extensively tested before launch. Every action a DAO takes after being launched has to be approved by the community and is completely transparent and verifiable. + +Such an organization has no hierarchical structure. Yet, it can still accomplish tasks and grow while being controlled by stakeholders via its native token. The lack of a hierarchy means any stakeholder can put forward an innovative idea that the entire group will consider and improve upon. Internal disputes are often easily solved through the voting system, in line with the pre-written rules in the smart contract. + +By allowing investors to pool funds, DAOs also give them a chance to invest in early-stage startups and decentralized projects while sharing the risk or any profits that may come out of them. + +## The principal-agent dilemma + +The main advantage of DAOs is that they offer a solution to the principal-agent dilemma. This dilemma is a conflict in priorities between a person or group (the principal) and those making decisions and acting on their behalf (the agent). + +Problems can occur in some situations, with a common one being in the relationship between stakeholders and a CEO. The agent (the CEO) may work in a way that’s not in line with the priorities and goals determined by the principal (the stakeholders) and instead act in their own self-interest. + +Another typical example of the principal-agent dilemma occurs when the agent takes excessive risk because the principal bears the burden. 
For example, a trader can use extreme leverage to chase a performance bonus, knowing the organization will cover any downside. + +DAOs solve the principal-agent dilemma through community governance. Stakeholders aren’t forced to join a DAO and only do so after understanding the rules that govern it. They don’t need to trust any agent acting on their behalf and instead work as part of a group whose incentives are aligned. + +Token holders’ interests align as the nature of a DAO incentivizes them not to be malicious. Since they have a stake in the network, they will want to see it succeed. Acting against it would be acting against their self-interests. + +## What was The DAO? + +The DAO was an early iteration of modern decentralized autonomous organizations. It was launched back in 2016 and designed to be an automated organization that acted as a form of venture capital fund. + +Those who owned DAO tokens could profit from the organization’s investments by either reaping dividends or benefitting from price appreciation of the tokens. The DAO was initially seen as a revolutionary project and raised $150 million in Ether (ETH), one of the greatest crowdfunding efforts of the time. + +The DAO launched on April 30, 2016, after Ethereum protocol engineer Christoph Jentzsch released the open-source code for an Ethereum-based investment organization. Investors bought DAO tokens by moving Ether to its smart contracts. + +A few days into the token sale, some developers expressed concerns that a bug in The DAO’s smart contracts could allow malicious actors to drain its funds. While a governance proposal was set forth to fix the bug, an attacker took advantage of it and siphoned over $60 million worth of ETH from The DAO’s wallet. + +At the time, around 14% of all ETH in circulation was invested in The DAO. The hack was a significant blow to DAOs in general and the then one-year-old Ethereum network. 
A debate within the Ethereum community ensued as everyone scrambled to figure out what to do. Initially, Ethereum co-founder Vitalik Buterin proposed a soft fork that would blacklist the attacker’s address and prevent them from moving the funds. + +The attacker or someone posing as them then responded to that proposal, claiming the funds had been obtained in a “legal” way according to the smart contract’s rules. They claimed they were ready to take legal action against anyone who tried to seize the funds. + +The hacker even threatened to bribe ETH miners with some of the stolen funds to thwart a soft fork attempt. In the debate that ensued, a hard fork was determined to be the solution. That hard fork was implemented to roll back the Ethereum network’s history to before The DAO was hacked and reallocate the stolen funds to a smart contract that allowed investors to withdraw them. Those who disagreed with the move rejected the hard fork and supported an earlier version of the network, known as Ethereum Classic (ETC). + +## Disadvantages of DAOs + +Decentralized autonomous organizations aren’t perfect. They are an extremely new technology that has attracted much criticism due to lingering concerns regarding their legality, security and structure. + +MIT Technology Review has, for example, revealed it considers it a bad idea to trust the masses with important financial decisions. While MIT shared its thoughts back in 2016, the organization appears to have never changed its mind on DAOs — at least not publicly. The DAO hack also raised security concerns, as flaws in smart contracts can be hard to fix even after they are spotted. + +DAOs can be distributed across multiple jurisdictions, and there’s no legal framework for them. Any legal issues that may arise will likely require those involved to deal with numerous regional laws in a complicated legal battle.
+ +In July 2017, for example, the United States Securities and Exchange Commission issued a report in which it determined that The DAO sold securities in the form of tokens on the Ethereum blockchain without authorization, violating portions of securities law in the country. + +## Examples of DAOs + +Decentralized autonomous organizations have gained traction over the last few years and are now fully incorporated into many blockchain projects. The decentralized finance (DeFi) space uses DAOs to allow applications to become fully decentralized, for example. + +To some, the Bitcoin (BTC) network is the earliest example of a DAO there is. The network scales via community agreement, even though most network participants have never met each other. It also does not have an organized governance mechanism, and instead, miners and nodes have to signal support. + +However, Bitcoin is not seen as a DAO by today’s standards. By current measures, Dash would be the first true DAO, as the project has a governance mechanism that allows stakeholders to vote on the use of its treasury. + +Other, more advanced DAOs, including decentralized networks built on top of the Ethereum blockchain, are responsible for launching cryptocurrency-backed stablecoins. In some cases, the organizations that initially launched these DAOs slowly give away control of the project to one day become irrelevant. Token holders can actively vote on governance proposals to hire new contributors, add new tokens as collateral for their coins or adjust other parameters. + +In 2020, a DeFi lending protocol launched its own governance token and distributed it through a liquidity mining process. Essentially, anyone who interacted with the protocol would receive tokens as a reward. Other projects have since replicated and adapted the model. + +Now, the list of DAOs is extensive. Over time, it has become a clear concept that has been gaining traction. 
Some projects are still looking to achieve complete decentralization through the DAO model, but it’s worth pointing out they are only a few years old and have yet to achieve their final goals and objectives. + +As internet-native organizations, DAOs have the potential to change the way corporate governance works completely. While the concept matures and the legal gray area they operate in is cleared, more and more organizations may adopt a DAO model to help govern some of their activities. + + +> info from: https://cointelegraph.com/ethereum-for-beginners/what-is-a-decentralized-autonomous-organization-and-how-does-a-dao-work diff --git a/collections/about/dao/gep.md b/collections/about/dao/gep.md new file mode 100644 index 0000000..d63a0a2 --- /dev/null +++ b/collections/about/dao/gep.md @@ -0,0 +1,14 @@ +# Grid Enhancement Proposal + +![](img/gep.png) + +GEP stands for Grid Enhancement Proposal. A GEP is a design document providing information to the ThreeFold community, or describing a new feature for the TFGrid or its processes or environment. The GEP should provide a concise technical specification of the feature and a rationale for the feature. + +- A GEP gets registered in TFChain. TFDAO makes this possible. +- The community has to approve or reject a GEP. +- Once enough consensus is achieved, the GEP will be executed upon; "consensus" is a variable percentage per GEP and will be defined when specs are ready. +- Validator Nodes of our L1 TFChain will make sure the GEP is properly implemented and that consensus is also achieved at that level.
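The variable-consensus rule described above can be sketched as a simple check: each GEP carries its own approval threshold, and the proposal only passes once the approving votes reach that percentage. This is a hypothetical illustration under stated assumptions, not the actual TFChain/TFDAO implementation; the names `Gep` and `is_approved` are invented for the example.

```python
# Hypothetical sketch of the GEP approval rule: each GEP carries its own
# consensus threshold (a variable % fixed per proposal), and the GEP only
# passes once approving votes reach that percentage. Not the real TFChain code.
from dataclasses import dataclass


@dataclass
class Gep:
    title: str
    threshold_pct: float  # the variable consensus % defined for this GEP


def is_approved(gep: Gep, votes_for: int, total_votes: int) -> bool:
    """Return True once enough of the vote approves the GEP."""
    if total_votes == 0:
        return False  # no votes cast yet, so no consensus
    # Integer-friendly form of: votes_for / total_votes * 100 >= threshold_pct
    return votes_for * 100 >= gep.threshold_pct * total_votes


gep = Gep(title="Adjust farming rewards", threshold_pct=60.0)
print(is_approved(gep, votes_for=70, total_votes=100))  # True
print(is_approved(gep, votes_for=50, total_votes=100))  # False
```

In practice the threshold itself would be set when the GEP is registered, which is why it lives on the proposal rather than being a global constant.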
+ +*some inspiration comes from https://www.python.org/dev/peps/pep-0001* + +!!!def alias:gep diff --git a/collections/about/dao/img/dao_whatis_.jpg b/collections/about/dao/img/dao_whatis_.jpg new file mode 100644 index 0000000..0b99259 Binary files /dev/null and b/collections/about/dao/img/dao_whatis_.jpg differ diff --git a/collections/about/dao/img/gep.png b/collections/about/dao/img/gep.png new file mode 100644 index 0000000..dc20ecf Binary files /dev/null and b/collections/about/dao/img/gep.png differ diff --git a/collections/about/dao/tfdao.md b/collections/about/dao/tfdao.md new file mode 100644 index 0000000..79020b5 --- /dev/null +++ b/collections/about/dao/tfdao.md @@ -0,0 +1,138 @@ +

# ThreeFold DAO

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [DAO Tasks](#dao-tasks) +- [What is a DAO?](#what-is-a-dao) +- [How does a DAO work?](#how-does-a-dao-work) +- [Why do we need DAOs?](#why-do-we-need-daos) +- [The principal-agent dilemma](#the-principal-agent-dilemma) +- [What was The DAO?](#what-was-the-dao) +- [Disadvantages of DAOs](#disadvantages-of-daos) +- [Examples of DAOs](#examples-of-daos) + +*** + +## Introduction + +The ThreeFold DAO allows autonomous operation of the TFChain and TFGrid. + +## DAO Tasks + +Among other things, the DAO needs to arrange: + +| Utility Token model | Description | | -------------------------------------------- | ------------------------------------------ | | [Proof Of Capacity](../../farming/proof_of_capacity.md) | Farming (creation) of TFT | | [Proof Of Utilization](../../farming/proof_of_utilization.md) | Utilization (burning, distribution) of TFT | + +As well as: + +- Distribution of TFT grants +- Manage code upgrades of TFChain and ZOS +- Approval of changes to anything in our ecosystem, by means of a GEP, e.g. + - changes to tokenomics, e.g. changes related to + - farming rewards + - cultivation flows + - pricing of grid capacity + - new features in TFChain + - rewards for sales channels, solution providers (v3.2+) + +## What is a DAO? + +A decentralized autonomous organization (DAO) is an entity with no central leadership. Decisions get made from the bottom up, governed by a community organized around a specific set of rules enforced on a blockchain. + +DAOs are internet-native organizations collectively owned and managed by their members. They have built-in treasuries that are only accessible with the approval of their members. Decisions are made via proposals the group votes on during a specified period. + +A DAO works without hierarchical management and can have a large number of purposes.
Freelancer networks where contractors pool their funds to pay for software subscriptions, charitable organizations where members approve donations and venture capital firms owned by a group are all possible with these organizations. + +Before moving on, it’s important to distinguish a DAO, an internet-native organization, from The DAO, one of the first such organizations ever created. The DAO was a project founded in 2016 that ultimately failed and led to a dramatic split of the Ethereum network. + +## How does a DAO work? + +As mentioned above, a DAO is an organization where decisions get made from the bottom up; a collective of members owns the organization. There are various ways to participate in a DAO, usually through the ownership of a token. + +DAOs operate using smart contracts, which are essentially chunks of code that automatically execute whenever a set of criteria are met. Smart contracts are deployed on numerous blockchains nowadays, though Ethereum was the first to use them. + +These smart contracts establish the DAO’s rules. Those with a stake in a DAO then get voting rights and may influence how the organization operates by deciding on or creating new governance proposals. + +This model prevents DAOs from being spammed with proposals: A proposal will only pass once the majority of stakeholders approve it. How that majority is determined varies from DAO to DAO and is specified in the smart contracts. + +DAOs are fully autonomous and transparent. As they are built on open-source blockchains, anyone can view their code. Anyone can also audit their built-in treasuries, as the blockchain records all financial transactions. + +Typically, a DAO launch occurs in three major steps: +Smart contract creation: First, a developer or group of developers must create the smart contract behind the DAO. After launch, they can only change the rules set by these contracts through the governance system.
That means they must extensively test the contracts to ensure they don’t overlook important details. + +Funding: After the smart contracts have been created, the DAO needs to determine a way to receive funding and how to enact governance. More often than not, tokens are sold to raise funds; these tokens give holders voting rights. + +Deployment: Once everything is set up, the DAO needs to be deployed on the blockchain. From this point on, stakeholders decide on the future of the organization. The organization’s creators — those who wrote the smart contracts — no longer influence the project any more than other stakeholders. + +## Why do we need DAOs? + +Being internet-native organizations, DAOs have several advantages over traditional organizations. One significant advantage of DAOs is the lack of trust needed between two parties. While a traditional organization requires a lot of trust in the people behind it — especially on behalf of investors — with DAOs, only the code needs to be trusted. + +Trusting that code is easier to do as it’s publicly available and can be extensively tested before launch. Every action a DAO takes after being launched has to be approved by the community and is completely transparent and verifiable. + +Such an organization has no hierarchical structure. Yet, it can still accomplish tasks and grow while being controlled by stakeholders via its native token. The lack of a hierarchy means any stakeholder can put forward an innovative idea that the entire group will consider and improve upon. Internal disputes are often easily solved through the voting system, in line with the pre-written rules in the smart contract. + +By allowing investors to pool funds, DAOs also give them a chance to invest in early-stage startups and decentralized projects while sharing the risk or any profits that may come out of them. + +## The principal-agent dilemma + +The main advantage of DAOs is that they offer a solution to the principal-agent dilemma. 
This dilemma is a conflict in priorities between a person or group (the principal) and those making decisions and acting on their behalf (the agent). + +Problems can occur in some situations, with a common one being in the relationship between stakeholders and a CEO. The agent (the CEO) may work in a way that’s not in line with the priorities and goals determined by the principal (the stakeholders) and instead act in their own self-interest. + +Another typical example of the principal-agent dilemma occurs when the agent takes excessive risk because the principal bears the burden. For example, a trader can use extreme leverage to chase a performance bonus, knowing the organization will cover any downside. + +DAOs solve the principal-agent dilemma through community governance. Stakeholders aren’t forced to join a DAO and only do so after understanding the rules that govern it. They don’t need to trust any agent acting on their behalf and instead work as part of a group whose incentives are aligned. + +Token holders’ interests align as the nature of a DAO incentivizes them not to be malicious. Since they have a stake in the network, they will want to see it succeed. Acting against it would be acting against their self-interests. + +## What was The DAO? + +The DAO was an early iteration of modern decentralized autonomous organizations. It was launched back in 2016 and designed to be an automated organization that acted as a form of venture capital fund. + +Those who owned DAO tokens could profit from the organization’s investments by either reaping dividends or benefitting from price appreciation of the tokens. The DAO was initially seen as a revolutionary project and raised $150 million in Ether (ETH), one of the greatest crowdfunding efforts of the time. + +The DAO launched on April 30, 2016, after Ethereum protocol engineer Christoph Jentzsch released the open-source code for an Ethereum-based investment organization. 
Investors bought DAO tokens by moving Ether to its smart contracts. + +A few days into the token sale, some developers expressed concerns that a bug in The DAO’s smart contracts could allow malicious actors to drain its funds. While a governance proposal was set forth to fix the bug, an attacker took advantage of it and siphoned over $60 million worth of ETH from The DAO’s wallet. + +At the time, around 14% of all ETH in circulation was invested in The DAO. The hack was a significant blow to DAOs in general and the then one-year-old Ethereum network. A debate within the Ethereum community ensued as everyone scrambled to figure out what to do. Initially, Ethereum co-founder Vitalik Buterin proposed a soft fork that would blacklist the attacker’s address and prevent them from moving the funds. + +The attacker or someone posing as them then responded to that proposal, claiming the funds had been obtained in a “legal” way according to the smart contract’s rules. They claimed they were ready to take legal action against anyone who tried to seize the funds. + +The hacker even threatened to bribe ETH miners with some of the stolen funds to thwart a soft fork attempt. In the debate that ensued, a hard fork was determined to be the solution. That hard fork was implemented to roll back the Ethereum network’s history to before The DAO was hacked and reallocate the stolen funds to a smart contract that allowed investors to withdraw them. Those who disagreed with the move rejected the hard fork and supported an earlier version of the network, known as Ethereum Classic (ETC). + +## Disadvantages of DAOs + +Decentralized autonomous organizations aren’t perfect. They are an extremely new technology that has attracted much criticism due to lingering concerns regarding their legality, security and structure. + +MIT Technology Review has, for example, revealed it considers it a bad idea to trust the masses with important financial decisions. 
While MIT shared its thoughts back in 2016, the organization appears to have never changed its mind on DAOs — at least not publicly. The DAO hack also raised security concerns, as flaws in smart contracts can be hard to fix even after they are spotted. + +DAOs can be distributed across multiple jurisdictions, and there’s no legal framework for them. Any legal issues that may arise will likely require those involved to deal with numerous regional laws in a complicated legal battle. + +In July 2017, for example, the United States Securities and Exchange Commission issued a report in which it determined that The DAO sold securities in the form of tokens on the Ethereum blockchain without authorization, violating portions of securities law in the country. + +## Examples of DAOs + +Decentralized autonomous organizations have gained traction over the last few years and are now fully incorporated into many blockchain projects. The decentralized finance (DeFi) space uses DAOs to allow applications to become fully decentralized, for example. + +To some, the Bitcoin (BTC) network is the earliest example of a DAO there is. The network scales via community agreement, even though most network participants have never met each other. It also does not have an organized governance mechanism, and instead, miners and nodes have to signal support. + +However, Bitcoin is not seen as a DAO by today’s standards. By current measures, Dash would be the first true DAO, as the project has a governance mechanism that allows stakeholders to vote on the use of its treasury. + +Other, more advanced DAOs, including decentralized networks built on top of the Ethereum blockchain, are responsible for launching cryptocurrency-backed stablecoins. In some cases, the organizations that initially launched these DAOs slowly give away control of the project to one day become irrelevant.
Token holders can actively vote on governance proposals to hire new contributors, add new tokens as collateral for their coins or adjust other parameters. + +In 2020, a DeFi lending protocol launched its own governance token and distributed it through a liquidity mining process. Essentially, anyone who interacted with the protocol would receive tokens as a reward. Other projects have since replicated and adapted the model. + +Now, the list of DAOs is extensive. Over time, it has become a clear concept that has been gaining traction. Some projects are still looking to achieve complete decentralization through the DAO model, but it’s worth pointing out they are only a few years old and have yet to achieve their final goals and objectives. + +As internet-native organizations, DAOs have the potential to change the way corporate governance works completely. While the concept matures and the legal gray area they operate in is cleared, more and more organizations may adopt a DAO model to help govern some of their activities. + + +> info from: https://cointelegraph.com/ethereum-for-beginners/what-is-a-decentralized-autonomous-organization-and-how-does-a-dao-work + + diff --git a/collections/about/genesis_block_pool_details.md b/collections/about/genesis_block_pool_details.md new file mode 100644 index 0000000..7c96cbb --- /dev/null +++ b/collections/about/genesis_block_pool_details.md @@ -0,0 +1,69 @@ +

# Genesis Pool Details

+ +

## Table of Contents

+ +- [Genesis Pool](#genesis-pool) +- [Genesis Block](#genesis-block) + - [Genesis Block Value](#genesis-block-value) + - [Calculation](#calculation) +- [Genesis Pool Details](#genesis-pool-details) + +*** + +## Genesis Pool + +The genesis pool is the initial capacity with which the network started; it was available when the project officially launched (blockchain launch March 2018). + +- +-300 computers (all owned by ThreeFold_Dubai) + - Belgium: 117+30 (hosted by BetterToken) + - Dubai: 148 (hosted by TF FZC itself) +- Total estimated resource/compute units + - CRU: 4,800 + - HRU: 8,100,000 + - MRU: 18,600 + - SRU: 106,000 + +## Genesis Block + +The genesis block is the first block registered in the blockchain. It consists of a number of TFT, in our case 695M TFT. + +> Maximum amount of tokens in the ThreeFold Blockchain at launch = 100 Billion (in other words, the genesis pool was < 1% of the max number of TFT at the start) + +### Genesis Block Value + +It's hard to define the value of the genesis block: when it was calculated, there was no established TFT price. + +- If TFT price = USD 0.01: +-7M USD (this token price had not been established but could reflect 2016-17) +- In summer 2023 the price was back at USD 0.01, which we believe is too low for the value created; let's hope for a better future. + +### Calculation + +To come up with a reasonable number and show the community that there was hardware available for the genesis block, we made an Excel calculation.
+ +- Servers as part of genesis pool calculation + - +-300 computers (all owned by ThreeFold_Dubai) + - Belgium: 117+30 (hosted by BetterToken) + - Dubai: 148 (hosted by TF FZC itself) + - Hardware as used in the years before token launch (March 2018) + - At least 100+ servers over quite some years +- Total estimated resource/compute units + - CRU: 4,800 + - HRU: 8,100,000 + - MRU: 18,600 + - SRU: 106,000 +- Cloud Units + - Results in 3,927 CU and 8,225 SU +- The farming rules used were farming/minting rules v1, but with no difficulty level and a TFT price of 0.01 +- Duration + - We took +- 1.5 years in our calculation + - Averaged out; it's certainly not an exact science + - But we can say that the amount of capacity listed has been available long enough for our engineers during the pre-launch period. Probably not with those exact listed servers, but in general. +- Result: **695M TFT** + +_The purpose of this exercise is to demonstrate there is reasoning behind the 695M TFT and the computers which have been available. It's not intended as exact proof nor as a defense. We believe the value given was in line with the situation at that time._ + +## Genesis Pool Details + +- Block 0: [Block 0 on Explorer](https://explorer2.threefoldtoken.com/hash.html?hash=a2ee0aa706c9de46ec57dbba1af8d352fbcc3cc5e9626fc56337fd5e9ca44c8d) +- Genesis Block Code: [Code of Block 0](https://github.com/threefoldfoundation/tfchain/blob/master/pkg/config/config.go#L103) + diff --git a/collections/about/genesis_pool.md b/collections/about/genesis_pool.md new file mode 100644 index 0000000..ad6fdb2 --- /dev/null +++ b/collections/about/genesis_pool.md @@ -0,0 +1,70 @@ +

# Genesis Pool

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Genesis Pool Token Usage](#genesis-pool-token-usage) +- [Remarks](#remarks) + +*** + +## Introduction + +At the end of March 2018, ThreeFold launched the public blockchain. + +ThreeFold developed its own blockchain software called Rivine, which was probably the first proof of blockstake blockchain in the world. We did not like the way other blockchains at that time did proof of work, which basically burns a lot of energy to prove the validity of their blockchains. + +Rivine is a fork of the blockchain work done by the Sia team, and since then a lot of work has been done on it to fulfill our own requirements. The Rivine blockchain will no longer be used after May 2020. + +ThreeFold is the result of more than 20 years of work in the Internet space, over a number of companies. + +The technology used at the start in March 2018 was developed mainly out of three companies: ThreeFold_Dubai, BetterToken and GreenIT Globe. Later in 2018, TF Tech was spun off from our incubator. + +TF Tech is a company born out of our incubator, [Incubaid](http://www.incubaid.com/), in Belgium. + +TF Tech's purpose is to further develop the software and commercialize the capabilities on a global basis, mainly by working together with tech partners. + +The public version of our blockchain was started in March 2018. The servers used for development and token mining were already running years before. + +Many hundreds of servers have been used to develop the technology which now makes up our ThreeFold_Grid. + ++-300 servers are the foundation of our TF Grid. + +Most of the servers are in Dubai and in Ghent (Belgium). + +- [See the genesis pool in Ghent](./genesis_pool_ghent.md) +- [See the genesis pool in Dubai](./genesis_pool_dubai.md) + +All genesis pools were owned by the foundation. Many of those servers are at this point no longer active.
The operations were done by ThreeFold_Dubai and BetterToken as a farming cooperative. + +> For information about the genesis pool/block, see [here](./genesis_block_pool_details.md). + +## Genesis Pool Token Usage + +- A lot of the genesis pool tokens went to the original shareholders of a company who created a lot of the technology which was the basis at that time for ThreeFold. + - Most of these tokens are locked up and are not tradeable. + - This was a deal made in mid-2018 that provided ThreeFold Dubai with technology and a global engineering team. +- The other part went to ThreeFold Dubai, to allow the Foundation to promote and further grow the project. + +> [See Token Overview](./token_overview/token_overview.md) for more details. + +The tokens were used by ThreeFold_Dubai to create value for the ThreeFold Grid: + +- Initial funding: sell TFT as future IT capacity + - IT capacity delivered, amongst others, from the computers deployed by the genesis pools (+-300 servers) +- Grants to the community, bounties for coders, evangelists, ... + - Max bounty given to contributors/founders = 2.5M TFT + - Funding for projects like coding, marketing, ... + - There is a token grant program, but it is not really active yet. +- Fund the day-to-day operation of ThreeFold_Dubai +- Fund some development projects for our open-source technology +- Public exchange fees +- Operational costs of keeping the genesis pool operational (engineers, data center, bandwidth, ...) +- Rewards for the larger ThreeFold community and contributors + +## Remarks + +- ThreeFold_Dubai is run as a [not-for-profit organization](../legal/definitions_legal.md) +- All (future) profits generated (tokens = IT capacity sold) are used to promote and grow the ThreeFold Project. + - None of the potential profits generated go to the shareholders of the company. + - Investments and loans given will of course be paid back to the relevant investors.
\ No newline at end of file diff --git a/collections/about/genesis_pool_dubai.md b/collections/about/genesis_pool_dubai.md new file mode 100644 index 0000000..a25dc91 --- /dev/null +++ b/collections/about/genesis_pool_dubai.md @@ -0,0 +1,7 @@ +## The Genesis Pool Dubai + +![](img/genesispool_1.jpg) +![](img/genesispool_2.jpg) + + +Read more about ThreeFold Dubai [here](./threefold_dubai.md). \ No newline at end of file diff --git a/collections/about/genesis_pool_ghent.md b/collections/about/genesis_pool_ghent.md new file mode 100644 index 0000000..2aa17b0 --- /dev/null +++ b/collections/about/genesis_pool_ghent.md @@ -0,0 +1,11 @@ + +## The Genesis Pool (Ghent) + + +![](img/lochristi_1_.jpg) +![](img/lochristi_2_.jpg) +![](img/lochristi_3.jpg) +![](img/lochristi_4.jpg) +![](img/lochristi_5_.jpg) +![](img/lochristi_6_.jpg) +![](img/lochristi_7.jpg) diff --git a/collections/about/governance.md b/collections/about/governance.md new file mode 100644 index 0000000..2381fa5 --- /dev/null +++ b/collections/about/governance.md @@ -0,0 +1,60 @@ +

# Governance

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Project History](#project-history) +- [Type of Token](#type-of-token) +- [Governance Process](#governance-process) +- [Organic Growth](#organic-growth) +- [Genesis Pool](#genesis-pool) +- [Decentralized and Open-Source](#decentralized-and-open-source) + +*** + +## Introduction + +We introduce ThreeFold governance and provide some context around ThreeFold in general. + +## Project History + +The project is grateful for the support of its community and the commercial entity TF Tech. + +ThreeFold is fundamentally a decentralized initiative. Within this framework, ThreeFold Dubai plays a pivotal role in championing and advancing the ThreeFold Grid and the broader movement. + +For more information, read the [ThreeFold History](./threefold_history.md). + +## Type of Token + +The regulators and legal advisors believe that we are a payment token with some flavor of a utility token (hybrid). + +We conducted research and obtained legal counsel in multiple jurisdictions: + +- Switzerland +- Belgium +- Dubai +- Singapore + +For more information, [read the legal opinions](https://drive.google.com/file/d/1kNu2cFjMkgqdadrOOQTTC5FPAM4OgKEb/view?usp=drive_link). + +## Governance Process + +To make sure that all our funds are used properly and that decisions are taken for the benefit of ThreeFold and its community as a whole, we make use of different tools and features, such as multi-signature wallets, the [ThreeFold DAO](./dao/tfdao.md) and the [ThreeFold Forum](https://forum.threefold.io/). + +## Organic Growth + +We never organized a pump and dump or any other synthetic mechanism to boost the token price and benefit from it. We believe in organic growth, with TFT going up as a result of utilization and grid expansion. + +## Genesis Pool + +The genesis pool was based on real hardware located in Dubai and Ghent. + +The tokens out of this pool are safe and well managed. We are acquiring a lot of them with ThreeFold Cloud (ThreeFold Dubai).
+ +For more information on the Genesis pool, [read this section](./genesis_pool.md). + +## Decentralized and Open-Source + +In essence, ThreeFold is a decentralized and open-source project. We invite everyone to contribute and participate within the ThreeFold ecosystem. + +You can read the code on the [ThreeFold Tech GitHub repository](https://github.com/threefoldtech). \ No newline at end of file diff --git a/collections/about/img/al_jadaf.jpg b/collections/about/img/al_jadaf.jpg new file mode 100644 index 0000000..b43ab97 Binary files /dev/null and b/collections/about/img/al_jadaf.jpg differ diff --git a/collections/about/img/aljadaf2.jpg b/collections/about/img/aljadaf2.jpg new file mode 100644 index 0000000..f90719e Binary files /dev/null and b/collections/about/img/aljadaf2.jpg differ diff --git a/collections/about/img/bettertoken_web.jpg b/collections/about/img/bettertoken_web.jpg new file mode 100644 index 0000000..a8318b4 Binary files /dev/null and b/collections/about/img/bettertoken_web.jpg differ diff --git a/collections/about/img/blockchain.png b/collections/about/img/blockchain.png new file mode 100644 index 0000000..fa6290e Binary files /dev/null and b/collections/about/img/blockchain.png differ diff --git a/collections/about/img/crypto_valley_zug_.jpg b/collections/about/img/crypto_valley_zug_.jpg new file mode 100644 index 0000000..08a3180 Binary files /dev/null and b/collections/about/img/crypto_valley_zug_.jpg differ diff --git a/collections/about/img/dubai_office1.jpg b/collections/about/img/dubai_office1.jpg new file mode 100644 index 0000000..e29f0d3 Binary files /dev/null and b/collections/about/img/dubai_office1.jpg differ diff --git a/collections/about/img/foundation_header_image.jpg b/collections/about/img/foundation_header_image.jpg new file mode 100644 index 0000000..2165278 Binary files /dev/null and b/collections/about/img/foundation_header_image.jpg differ diff --git a/collections/about/img/genesispool_1.jpg 
b/collections/about/img/genesispool_1.jpg new file mode 100644 index 0000000..aa3c163 Binary files /dev/null and b/collections/about/img/genesispool_1.jpg differ diff --git a/collections/about/img/genesispool_2.jpg b/collections/about/img/genesispool_2.jpg new file mode 100644 index 0000000..4f26416 Binary files /dev/null and b/collections/about/img/genesispool_2.jpg differ diff --git a/collections/about/img/korenlei_22.jpg b/collections/about/img/korenlei_22.jpg new file mode 100644 index 0000000..5c757d4 Binary files /dev/null and b/collections/about/img/korenlei_22.jpg differ diff --git a/collections/about/img/korenlei_old.jpg b/collections/about/img/korenlei_old.jpg new file mode 100644 index 0000000..72a2dd9 Binary files /dev/null and b/collections/about/img/korenlei_old.jpg differ diff --git a/collections/about/img/labs_it_license.jpg b/collections/about/img/labs_it_license.jpg new file mode 100644 index 0000000..f4df0aa Binary files /dev/null and b/collections/about/img/labs_it_license.jpg differ diff --git a/collections/about/img/lochristi_1_.jpg b/collections/about/img/lochristi_1_.jpg new file mode 100644 index 0000000..5e72e15 Binary files /dev/null and b/collections/about/img/lochristi_1_.jpg differ diff --git a/collections/about/img/lochristi_2_.jpg b/collections/about/img/lochristi_2_.jpg new file mode 100644 index 0000000..49f3dba Binary files /dev/null and b/collections/about/img/lochristi_2_.jpg differ diff --git a/collections/about/img/lochristi_3.jpg b/collections/about/img/lochristi_3.jpg new file mode 100644 index 0000000..e6e8392 Binary files /dev/null and b/collections/about/img/lochristi_3.jpg differ diff --git a/collections/about/img/lochristi_4.jpg b/collections/about/img/lochristi_4.jpg new file mode 100644 index 0000000..0f16e4c Binary files /dev/null and b/collections/about/img/lochristi_4.jpg differ diff --git a/collections/about/img/lochristi_5_.jpg b/collections/about/img/lochristi_5_.jpg new file mode 100644 index 0000000..08d8cab 
Binary files /dev/null and b/collections/about/img/lochristi_5_.jpg differ diff --git a/collections/about/img/lochristi_6_.jpg b/collections/about/img/lochristi_6_.jpg new file mode 100644 index 0000000..f5b3781 Binary files /dev/null and b/collections/about/img/lochristi_6_.jpg differ diff --git a/collections/about/img/lochristi_7.jpg b/collections/about/img/lochristi_7.jpg new file mode 100644 index 0000000..8b9c713 Binary files /dev/null and b/collections/about/img/lochristi_7.jpg differ diff --git a/collections/about/img/mazraa_web1.jpg b/collections/about/img/mazraa_web1.jpg new file mode 100644 index 0000000..17a2451 Binary files /dev/null and b/collections/about/img/mazraa_web1.jpg differ diff --git a/collections/about/img/tf_companies_.jpg b/collections/about/img/tf_companies_.jpg new file mode 100644 index 0000000..ef7bc62 Binary files /dev/null and b/collections/about/img/tf_companies_.jpg differ diff --git a/collections/about/img/threefold_commodities_1_.jpg b/collections/about/img/threefold_commodities_1_.jpg new file mode 100644 index 0000000..84afbb4 Binary files /dev/null and b/collections/about/img/threefold_commodities_1_.jpg differ diff --git a/collections/about/img/threefold_dmcc_license_certificate.jpg b/collections/about/img/threefold_dmcc_license_certificate.jpg new file mode 100644 index 0000000..6c6d34a Binary files /dev/null and b/collections/about/img/threefold_dmcc_license_certificate.jpg differ diff --git a/collections/about/img/threefold_tech.jpg b/collections/about/img/threefold_tech.jpg new file mode 100644 index 0000000..d4d2f0b Binary files /dev/null and b/collections/about/img/threefold_tech.jpg differ diff --git a/collections/about/img/threefold_tech_location.jpg b/collections/about/img/threefold_tech_location.jpg new file mode 100644 index 0000000..8bd7427 Binary files /dev/null and b/collections/about/img/threefold_tech_location.jpg differ diff --git a/collections/about/img/threefold_vzw_official_doc.jpg 
b/collections/about/img/threefold_vzw_official_doc.jpg new file mode 100644 index 0000000..82c1173 Binary files /dev/null and b/collections/about/img/threefold_vzw_official_doc.jpg differ diff --git a/collections/about/img/view_dubai.jpg b/collections/about/img/view_dubai.jpg new file mode 100644 index 0000000..022fad0 Binary files /dev/null and b/collections/about/img/view_dubai.jpg differ diff --git a/collections/about/mazraa.md b/collections/about/mazraa.md new file mode 100644 index 0000000..c280c8e --- /dev/null +++ b/collections/about/mazraa.md @@ -0,0 +1,22 @@ +

Mazraa

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [History](#history) +- [Mission](#mission) + +*** + +## Introduction + +Mazraa is a brand name of [ThreeFold Dubai](./threefold_dubai.md). See the ThreeFold Dubai page for more details. + +## History + +Mazraa was established in early 2016 in the United Arab Emirates and was one of the first providers of Peer2Peer Internet capacity on the ThreeFold Grid. Currently, Mazraa's pool of storage and compute capacity can be accessed for workload development on the TF Capacity Explorer. + + +## Mission + +Mazraa supports ThreeFold Foundation's mission to create a responsible Internet for all, one that is accessible, affordable and environmentally conscious. Mazraa is a founding capacity farmer on the ThreeFold Network and actively supports the expansion and adoption of ThreeFold's P2P Cloud. Mazraa's focus is to provide P2P Cloud capacity for developers and nodes for new and existing farmers, as well as to provide over-the-counter access to TFTs to enable reservations of Internet capacity. Additionally, Mazraa supports the promotion and growth of the ThreeFold Network through marketing resources and funding contributions. We believe it's time for the internet to have a major upgrade to empower people and protect our planet. \ No newline at end of file diff --git a/collections/about/orgstructure.md b/collections/about/orgstructure.md new file mode 100644 index 0000000..8f1ec9d --- /dev/null +++ b/collections/about/orgstructure.md @@ -0,0 +1,14 @@ +# Organisation Structure + +

Table of Contents

+ +- [Governance](./governance.md) +- [ThreeFold Companies](./threefold_companies.md) +- [ThreeFold Dubai](./threefold_dubai.md) +- [ThreeFold VZW](./threefold_vzw.md) +- [ThreeFold AG](./threefold_ag.md) +- [Mazraa](./mazraa.md) +- [BetterToken](./bettertoken.md) +- [DAO](./dao/dao.md) +- [ThreeFold DAO](./dao/tfdao.md) +- [TFChain](./tfchain.md) \ No newline at end of file diff --git a/collections/about/roadmap/releasenotes/img/releasenotes.png b/collections/about/roadmap/releasenotes/img/releasenotes.png new file mode 100644 index 0000000..bb1005a Binary files /dev/null and b/collections/about/roadmap/releasenotes/img/releasenotes.png differ diff --git a/collections/about/roadmap/releasenotes/releasenotes_readme.md b/collections/about/roadmap/releasenotes/releasenotes_readme.md new file mode 100644 index 0000000..21e8011 --- /dev/null +++ b/collections/about/roadmap/releasenotes/releasenotes_readme.md @@ -0,0 +1,19 @@ +![](img/releasenotes.png) + +# ThreeFold Grid Release Notes + + We're delighted to have you here as we explore the latest updates and enhancements to our decentralized grid ecosystem. In these release notes, you'll discover a wealth of information about the exciting features, bug fixes, performance optimizations, and new functionalities that have been introduced in each release. + + Whether you're a developer, a farmer, a user, or simply curious about the cutting-edge advancements happening in the world of distributed computing, these release notes will provide you with valuable insights and keep you up to date with our progress. So dive in, explore the details, and join us in shaping the future of the ThreeFold Grid! 
+ +## ThreeFold TFGrid v3.x Release Notes +- [TFGrid v3.10.0](./tfgrid_release_3_10_0.md) +- [TFGrid v3.9.0](./tfgrid_release_3_9_0.md) +- [TFGrid v3.8.0](./tfgrid_release_3_8_0.md) +- [TFGrid v3.7.0](./tfgrid_release_3_7_0.md) +- [TFGrid v3.6.1](./tfgrid_release_3_6_1.md) +- [TFGrid v3.6.0](./tfgrid_release_3_6_0.md) +- [TFGrid v3.0.0 Alpha-5](./tfgrid_release_3_0_a5.md) +- [TFGrid v3.0.0 Alpha-4](./tfgrid_release_3_0_a4.md) +- [TFGrid v3.0.0 Alpha-2](./tfgrid_release_3_0_a2.md) +- [TFGrid v3.0.0](./tfgrid_release_3_0.md) diff --git a/collections/about/roadmap/releasenotes/tfgrid_release_3_0.md b/collections/about/roadmap/releasenotes/tfgrid_release_3_0.md new file mode 100644 index 0000000..32ea6d9 --- /dev/null +++ b/collections/about/roadmap/releasenotes/tfgrid_release_3_0.md @@ -0,0 +1,43 @@ +# TFGrid release 3.0 + +TFGrid 3.0 will be released gradually during Q3/Q4 2021. + +## What's new? + +TFGrid 3.0 is a full redesign of the ThreeFold Grid architecture. The main purpose of this redesign is to decentralize all the components that the Grid is built with. + +### TFChain 3.0 + +A decentralised chain holding all information on entities that make up the ThreeFold Grid. It runs on the Parity Substrate blockchain infrastructure. 
+ +Features: +- Your identity and proofs/reputation on our blockchain +- All info about TFGrid (nodes, farmers, …) +- A GraphQL interface to query the blockchain +- Support of side chains (unlimited scalability, allowing others to run their own blockchain) +- TFT now also exists on TFChain (allows us to work around Stellar scalability issues) +- Bridge between TFT on Stellar and TFT on TFChain (one way to start) +- Blockchain-based provisioning process +- TFChain API (javascript, golang, vlang) +- Support for 'Infrastructure as Code' (IaC) frameworks + - Terraform + - Kubernetes, Helm + - Ansible (planned for Q4 2021) +- Use of RMB, a peer-to-peer secure Reliable Message Bus, to communicate with Zero-OS + +### Proof of Utilization + +- Resource utilisation is captured and calculated on an hourly basis +- Resource utilisation is stored in TFChain +- An automated discount system has been put in place, rewarding users who pre-purchased their cloud needs. Price discounts are applied in line with the amount of TFT in your account and how long you have been holding it. +E.g. if you hold 12 months' worth of TFT in your account, relative to the last hour's used capacity, you get a 40% discount; holding 36 months' worth results in a 60% discount. + +### New Explorer UI +- An updated User Interface of the TF Grid Explorer, nicer and easier to use +- It uses the GraphQL layer of TFChain + +## Roadmap + +The feature overview split over the different releases can be found [here](https://circles.threefold.me/project/despiegk-product_tfgrid3_roadmap/wiki/roadmap).
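The tiered discount rule under Proof of Utilization can be sketched in a few lines. This is a minimal illustration encoding only the two tiers stated in this note (12 months of TFT held → 40% off, 36 months → 60% off); the actual TFChain pricing logic may define more tiers and different boundaries.

```python
def discount_pct(months_of_tft_held: float) -> int:
    """Discount applied, based on how many months' worth of TFT you hold.

    Only the two tiers stated in the release note are encoded here;
    this is a sketch, not the real TFChain implementation.
    """
    if months_of_tft_held >= 36:
        return 60   # 36 months' worth of TFT -> 60% discount
    if months_of_tft_held >= 12:
        return 40   # 12 months' worth of TFT -> 40% discount
    return 0        # no stated discount below 12 months


def discounted_hourly_cost(base_cost_tft: float, months_of_tft_held: float) -> float:
    """Hourly capacity cost in TFT after the hold-based discount."""
    return base_cost_tft * (100 - discount_pct(months_of_tft_held)) / 100
```

For example, with a base hourly cost of 10 TFT, holding 12 months' worth of TFT brings the cost down to 6 TFT, and 36 months' worth brings it down to 4 TFT.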
+ +More info and announcements on Grid 3.0 can be found on our [forum](https://forum.threefold.io/t/announcement-of-tfgrid-3-0/1132) diff --git a/collections/about/roadmap/releasenotes/tfgrid_release_3_0_a2.md b/collections/about/roadmap/releasenotes/tfgrid_release_3_0_a2.md new file mode 100644 index 0000000..0f9cc1a --- /dev/null +++ b/collections/about/roadmap/releasenotes/tfgrid_release_3_0_a2.md @@ -0,0 +1,204 @@ +# ThreeFold Release Notes TFGrid 3.0.0 Alpha 2 (Live on testnet) + +## TFChain v1.0.0 + +- DAO Requests + - Becoming a council member + - Becoming a validator node + - Pricing changes + - Upgrading tfchain + - Changing farming rewards + +## Admin Portal v3.0.1-rc1 + +- Terms and conditions support +- Fix re-asking for activation when the balance reaches 0 +- Show amount of bridge deposit/withdraw fee +- Get more TFT button when connected to devnet + +## tfchain explorer v3.0.0-rc19 + +- certification type filter +- adding certification type to nodes +- add zos version to node details +- update map to reflect the selected node +- UX fixes for filters and data sorting +- include versions of tfchain, explorer, grid proxy +- showing available resources +- showing online / offline nodes +- showing number of available IPs in a farm +- adding favicon +- statistics page improvements + +## ZOS v3.0.4 + +- public IPv6 support +- Min rootfs for more than 1 CU = 2GB, and anything less will be 500MB +- Mainnet image +- Fix IPv6 rules that broke SLAAC +- Update SRU calculation +- Bug fix: don't wait for QSFS shutdown +- Update traefik version +- Fixing crashes caused by slow disks +- Avoid lsblk blocking for QSFS +- Decommission on too many QSFS metric fetch failures +https://github.com/threefoldtech/zos/releases + +## Terraform v0.1.20 +- Support for public IPv6 +- Support planetary option for k8s + +https://github.com/threefoldtech/tf-terraform-provider/releases + + +## grid3_client_ts v1.0.3 +- Cert type for nodes +- public IPv6 support +- TwinServer command to be used 
from other languages + + +## Weblets v1.2.0 + +- Support peertube +- Support funkwhale +- Remove rootfs specification from machine +- Support adding/deleting workers in kubernetes +- Add more images: ubuntu, alpine, centos +- Updating the balance periodically +- Adding access for nodes by default to address hidden-node issues +- Resolving issues + +### detailed projects list + +- https://github.com/threefoldtech/grid_weblets/projects/1 +- https://github.com/threefoldtech/grid_weblets/projects/4 +- https://github.com/threefoldtech/grid_weblets/projects/6 +- https://github.com/threefoldtech/grid_weblets/projects/7 +- https://github.com/threefoldtech/grid_weblets/projects/8 + +https://github.com/threefoldtech/grid_weblets/releases + +## QSFS + +TODO + +## gridproxy v1.0.0-rc8 + +- generic performance improvements +- reduce caching time +- enable CORS in version +- include certification types in nodes +- fix regression on nodes query + +## Known Issues 3.0.0 Alpha 2 + +The following list is incomplete but gives some issues to think about. + +- Weblets [limitations](https://library.threefold.me/info/manual/#/manual__weblets_home?id=limitations) +- QSFS integration is a work in progress +- ZOS and SSD performance [issue](https://github.com/threefoldtech/zos/issues/1467) +- Threefold Connect having [issues](https://circles.threefold.me/project/test-tfgrid3/issue/52) +- Docker & ZOS containers [differences](https://github.com/threefoldtech/zos/issues/1483) +- ZOS workloads upgrade [issue](https://github.com/threefoldtech/zos/issues/1425) +- Terraform projects [don't reflect in the weblets](https://github.com/threefoldtech/terraform-provider-grid/issues/146) +- Can't detach a public IP from a VM or remove it from a contract [issue](https://github.com/threefoldtech/tfchain_pallets/issues/73); please note you can still create each in separate contracts. 
+ +# ThreeFold Release Notes TFGrid 3.0.0 Alpha 1 (Live on mainnet) + +- [TFgrid 3.0 announcement](https://forum.threefold.io/t/announcement-of-tfgrid-3-0/1132) +- [What's new in TFGrid 3.0](https://forum.threefold.io/t/what-is-new-in-tfgrid-3-0/1133) +- [Roadmap](https://circles.threefold.me/project/despiegk-product_tfgrid3_roadmap/wiki/home) + +## TFChain + +- Staking support (at the moment of this writing, only on devnet) +- KeyValue store support +- Bridging tokens from stellar to tfchain +- Smart contract for IT +- Billing +- Consumption Reports +- Discounts support + +## Admin Portal + +- Creation of twins +- Bridge from and to Stellar +- Farm Management + + +## TFChain explorer + +- Nodes view +- Gateways listing +- Farms information +- Resources/utilization +- Better filtering + + +## ZOS +- zmachine support +- Integration with latest substrate client event types +- public ipv6 support in VMs +- planetary support in VMs +- upgrade to new file system RFS +- support for QSFS +- support for gateways +- capacity reporting to the blockchain support +- Support of SR25519 +- Improvements in .zosrc creation +- Safer mechanism for environment variables and init arguments +- improvements in cleaning unused mounts + +https://github.com/threefoldtech/zos/releases + +## Terraform +- Support ZMachine +- Support Kubernetes +- Support QSFS +- Support Capacity Planning +- Support Gateways + +https://github.com/threefoldtech/tf-terraform-provider/releases + + +## grid3_client_ts +- Support ZMachine +- Support Kubernetes +- Support QSFS +- Support Capacity Planning +- Support Gateways + + +## Weblets + +- Support Profile manager +- Support Virtual machine +- Support CapRover +- Support Kubernetes +- Capacity planning deployment + +https://github.com/threefoldtech/grid_weblets/releases + +## QSFS + +TODO + + +- [TFgrid 3.0 announcement](https://forum.threefold.io/t/announcement-of-tfgrid-3-0/1132) +- [What's new in TFGrid 
3.0](https://forum.threefold.io/t/what-is-new-in-tfgrid-3-0/1133) +- [Roadmap](https://circles.threefold.me/project/despiegk-product_tfgrid3_roadmap/wiki/home) + +## Known Issues 3.0.0 Alpha 1 + +The following list is incomplete but gives some issues to think about. + +- Weblets [limitations](https://library.threefold.me/info/manual/#/manual__weblets_home?id=limitations) +- Public IPv6 [support](https://github.com/threefoldtech/zos/pull/1488) in ZOS +- QSFS integration is a work in progress +- ZOS and SSD performance [issue](https://github.com/threefoldtech/zos/issues/1467) +- Threefold Connect having [issues](https://circles.threefold.me/project/test-tfgrid3/issue/52) +- Docker & ZOS containers [differences](https://github.com/threefoldtech/zos/issues/1483) +- ZOS workloads upgrade [issue](https://github.com/threefoldtech/zos/issues/1425) +- Terraform projects [don't reflect in the weblets](https://github.com/threefoldtech/terraform-provider-grid/issues/146) +- Can't detach a public IP from a VM or remove it from a contract [issue](https://github.com/threefoldtech/tfchain_pallets/issues/73); please note you can still create each in separate contracts. diff --git a/collections/about/roadmap/releasenotes/tfgrid_release_3_0_a4.md b/collections/about/roadmap/releasenotes/tfgrid_release_3_0_a4.md new file mode 100644 index 0000000..97a9456 --- /dev/null +++ b/collections/about/roadmap/releasenotes/tfgrid_release_3_0_a4.md @@ -0,0 +1,53 @@ +# ThreeFold Grid v3.0.0 Alpha - 4 Release Note + +## TFConnect v3.4.0 + +### TFConnect Backend Services Migration + +We moved ThreeFold Connect’s backend services from a data storage facility in Lochristi to the TFGrid. From TF Wallet to TF account and TF News, all active services were migrated using Helm charts. + +## TFPlay v1.1.4 + +### Listed Mattermost as Deployable Solution + +On this release, we added Mattermost as one of our deployable decentralised solutions. 
+Mattermost is a secure, open source platform for communication, collaboration, and workflow orchestration across tools and teams. + +### Separated TFPlay into 3 different networks + +Node upgrades can happen at any time and can make solution deployments on different networks incompatible. Therefore, we separated TFPlay into 3 different networks: + +- Deployment on TFGrid Mainnet: play.grid.tf +- Deployment on TFGrid Testnet: play.test.grid.tf +- Deployment on TFGrid Devnet: play.dev.grid.tf + +This way, if some nodes on one network are being upgraded, deployments on the other nets should not be affected. + +## Minting v3.0 + +### Minting V3 code + +Repo: https://github.com/threefoldtech/minting_v3 + +There was a change in the way the CU/SU are calculated from the resource units; please see https://library.threefold.me/info/threefold#/resource_units_calc_cloudunits for details. +Therefore, we updated the calculations in the minting code (minting v3) and adjusted the price calculation for workloads on TFChain. + +## GetTFT Shop v1.0.4 + +Story: https://github.com/threefoldtech/home/issues/1171 + +### Minor UX / UI improvements + +On this release we made minor UX improvements to the existing GetTFT Shop website that create a better experience for our customers, such as improved interactivity, fixed embedded media, revised UX content, improved screen responsiveness, and more. + +## TF Capacity Explorer v0.1.0 + +### An all-in-one Unified Capacity Explorer + +Currently we have separate capacity explorers for the TFGrid v2 and TFGrid v3 Explorers. On this release we unified all versions and networks into one explorer, where users can find capacity information on both TFGrid v2 and v3 mainnet, testnet, and devnet. This all-in-one unified Capacity Explorer will be hosted under the domain https://explorer.threefold.io. 
+ +## TF Farm Management v1.1 + +On TFGrid v3, node and farm management have also moved to substrate-based blockchains. A farm can be managed by making calls directly to the blockchain using objects created in TFChain called Twins. The TFWallet app can reuse the wallet keypair to support a twin. A twin is also associated with a Planetary Network address that is supported by the Threefold Connect App. + +Therefore we added a ‘Farm Management’ feature to the TFConnect App that enables farmers to list their farms and create new ones directly on the mobile app. The ‘Farm Management’ feature allows users to create new farms, list farms, and migrate their farms from TFGrid v2 to TFGrid v3. diff --git a/collections/about/roadmap/releasenotes/tfgrid_release_3_0_a5.md b/collections/about/roadmap/releasenotes/tfgrid_release_3_0_a5.md new file mode 100644 index 0000000..94b28b2 --- /dev/null +++ b/collections/about/roadmap/releasenotes/tfgrid_release_3_0_a5.md @@ -0,0 +1,72 @@ +# ThreeFold Grid v3.0.0 Alpha - 5 Release Note + +This is the release note of TFGrid v3.0.0 Alpha 5. It includes updates, improvements and fixes of numerous grid components as described below. + +## TFT Shop v1.1.0 + +On the initial release of TFT Shop v1.0.0, we made it easier for users to buy TFT with BTC on the [TFT Shop website](https://gettft.com/gettft/). + +On this v1.1.0 release, we are giving users another **option to buy TFT using fiat currency**. This is made possible by integrating the [mercuryo.io](http://www.mercuryo.io/) (third-party) widget into the shop. By buying TFT using TFT Shop, you confirm that you have read and agree to [ThreeFold’s terms and conditions](https://library.threefold.me/info/legal/#/legal__terms_conditions_gettft). 
+ + +## ZOS v3.1.0 + +### Performance Improvements + +This new feature release of ZOS v3.1.0 includes many improvements, such as **fixes for performance issues** (disk and IO), grid event handling, and improvements to the current yggdrasil network by starting and maintaining our public peers. + +### ZOS Support for Dedicated Nodes + +To empower community-driven decentralization on the TFGrid, we would like to soon invite anyone to deploy their own solutions on the TFGrid. This is feasible by allowing any external developer to **deploy their own workloads on dedicated nodes** and provide the deployment documentation. By choosing to deploy on dedicated nodes, a user can reserve an entire node, then use it exclusively to deploy solutions for themselves or for other customers. Therefore, on this release we are happy to announce support for dedicated node deployment, with dedicated node contracts applied on TFChain and the supporting mechanism coming in ZOS' next release. + + +## TF Playground v3.0.0 Alpha-5 + +### New community and blockchain solutions +On the last release, we added new deployable community solutions on [TF Playground](https://play.grid.tf/#/), such as Peertube, Funkwhale, Taiga and Mattermost, as well as some developer tools like CapRover, Virtual Machine, Kubernetes and Owncloud. + +On this release, we have added **community and blockchain solutions** such as Discourse (forum), Presearch Node and Casperlabs validator node. + +## Uhuru v1.1 (beta) + +### Uhuru Backend Changes + +[Uhuru](https://www.uhuru.me) is a digital product on top of the TFGrid that enhances collaboration with features such as chat, video calls, office tools, and file storage, all in one platform. + + +## ThreeFold Wallet v3.0.0 + +### Add (substrate-based) TFChain Wallet + +TFGrid v3 is powered by a substrate-based blockchain. 
TFT can be moved from the Stellar blockchain to TFChain through a bridge. ThreeFold Wallet has now successfully **added a (substrate-based) TFChain Wallet in order to support the bridge transaction**. + +TFT is the native currency on TFChain. As such, there is no need for an external service to transfer tokens on TFChain. A transaction fee is charged (currently 0.01 TFT) for every transaction/extrinsic call. + +## TF Farm Management Tool v3.0.2 + +### Adjust farm management for the latest TFChain upgrade + +On the last release (3.0.0 Alpha 4), we released Farm Management Tool v3.0.1, which allows farmers to migrate their farms from v2 to v3 through the TF Connect application. + +Recently a change was made to the TFChain code, which broke the farm management function in the wallet. Therefore, on the v3.0.2 release, code changes were made to the **Farm Management Tool to adapt it to the TFChain changes**. This fix was deployed quickly in production. + +## TFConnect App v3.5.0 + +### Generic Frontend and Backend Improvements +This new feature release of the TFConnect App includes new features such as **enabling users to sign documents in the app** directly, and many other backend improvements. + +### Integrate TFConnect SSO to TFPlay Solutions + +We need to simplify peer-to-peer collaboration and how users interact with their TFPlay solutions. On this release, we have eliminated a complicated way of signing up to solutions (emails, username and password) by replacing it with **TFConnect app SSO login**. Therefore, on this release, we successfully created TFConnect Native SSO backend environments for the following TFPlay solutions: Discourse, Mattermost, and Gitea, allowing users to sign in and start using the solutions with just a few clicks. + +## TFChain v1.2.0 + +### ThreeFold DAO Pt. 
2: Adjoint Validator-Council Member Request + +From version 3.0 on, [ThreeFold Grid operates as a DAO](https://library.threefold.me/info/threefold#/tfgrid/threefold__dao). On the last release of TFChain v1.0.0, we successfully implemented the first TF-DAO, which allows users to request to become DAO council members. + +On this release we also successfully implemented **ThreeFold DAO Request part 2, where any user can request to become an adjoint validator-council member**: by running a validator node, they not only become a validator but also gain a seat as a DAO council member, giving them the right to vote for organizational changes. + +### ThreeFold DAO Pt. 2: Enable Validator Application + +On this release we implemented ThreeFold DAO Request part 2, where any user can **apply to become a validator** and register the validator application on-chain if they meet the validator requirements. diff --git a/collections/about/roadmap/releasenotes/tfgrid_release_3_10_0.md b/collections/about/roadmap/releasenotes/tfgrid_release_3_10_0.md new file mode 100644 index 0000000..8cad3e3 --- /dev/null +++ b/collections/about/roadmap/releasenotes/tfgrid_release_3_10_0.md @@ -0,0 +1,119 @@ +# ThreeFold Grid v3.10.0 Release Note + +Release Note of ThreeFold Grid v3.10.0. + +- Deployed on Mainnet on 3rd July 2023. + +## Components and Services + +The following components and services have been upgraded in this release: + +- TFChain +- ZOS +- Terraform +- TFGrid-SDK-GO +- TF Gridclient +- TF Gridproxy +- RMB +- TF Weblets +- TF Playground +- TF-Grid-CLI +- Gridify +- TFGrid-SDK-TS + +## Upgrades and Improvement Highlights + +Below are some of the key highlights of the TFGrid v3.10.0 component upgrades and improvements. + +### TFChain 2.4.0 + +- Addressed syncing issues. +- Introduced the attachment of solution provider IDs to contracts. +- Enabled the bonding of a stash account to a twin. +- Implemented various bug fixes. 
+ +### ZOS 3.7.1 + +- Restructured the capacity to enhance dynamism. +- Added support for proxying traffic to private networks using WireGuard-based gateways. +- Introduced support for cloud-based consoles. +- Resolved various issues related to error messages, user validations, and error handling. + +### Terraform 1.9 + +- Added support for WireGuard-based gateway options. +- Implemented proper timeout handling for deployments. +- Introduced gateway node validation before submitting deployments. +- Resolved various bugs and issues. + +### TFGrid-SDK-GO 0.8.0 + +- Consolidated multiple Go projects into a single repository for simplified administration and quicker releases. +- Extracted reusable code from the Terraform project and created a standalone library for creating new platforms or plugins. + +#### Grid-Client + +- Enhanced the grid client to serve as the foundation layer for the Terraform plugin, enabling deployment of networks, virtual machines, and Kubernetes. + +#### Grid-Proxy + +- Added support for standby status for nodes powered off by the farmerbot. +- Enabled farm filtering based on requested resources. + +#### RMB + +- Improved the direct client's resilience to recover from close connections. + +#### TF-Grid-CLI + +- Introduced a simple tool for creating virtual machines and Kubernetes clusters. Note that `TF-Grid-CLI` is now `TFCMD`. +- Get started [here](../../../../documentation/developers/tfcmd/tfcmd.md). + +#### Gridify + +- An experimental project that allows developers to deploy their projects on ThreeFold as a platform with a single command, "gridify," using a Procfile in their code repository. +- Currently supported platforms include: + - Go 1.18 + - Python 3.10.10 + - Node 16.17.1 + - NPM 8.10.0 + - Caddy +- Learn more [here](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/gridify). 
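The Procfile-driven gridify workflow mentioned above can be illustrated with a minimal example. The process name and command below are assumptions written in the common Heroku-style `name: command` format, not taken from the gridify documentation; consult the gridify repository for the exact format it expects.

```
# Hypothetical Procfile at the root of a Go project deployed with `gridify`
# (illustrative only; process name and command are assumptions).
web: go run main.go
```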
+ +### TFGrid-SDK-TS 2.0.0 + +- Consolidated all components targeting web/TypeScript developers and frontend efforts into a single repository for easier management and rapid releases. +- Moved gridclient, dashboard, statistics websites, and other TypeScript-based projects to the new repository [here](https://github.com/threefoldtech/tfgrid-sdk-ts). + +#### Grid-Client + +- Gateways now support WireGuard backends. +- Added support for hex secrets. +- Various fixes are detailed [here](https://github.com/orgs/threefoldtech/projects/192/views/12?filterQuery=repo%3A%22threefoldtech%2Ftfgrid-sdk-ts%22+label%3Agrid_client). + +#### TF Dashboard + +- Added support for IPv4 pricing in the resources calculator. +- Included the TFT/USD exchange rate in the dashboard navbar. +- Introduced a new standby status for nodes powered off by the farmerbot. +- In the explorer, a node monitoring page is now available. +- Fixed high CPU usage in the DAO pages. +- Improperly set serial numbers on nodes are now tracked and flagged with a clear message. + +#### TFGrid Weblets + +- We are phasing out the TFGrid Weblets in favor of a newer playground rewritten in Vue 3; however, we introduced some maintenance bug fixes. +- [Support umbrel on the grid](https://github.com/threefoldtech/home/issues/1394). + +### TF Playground v2.0.0 + +This release introduces a new playground with a more consistent user experience. Some components have been reworked for consistency. + +- Simplified the profile manager, requiring only the provision of a mnemonic and a password for encryption on the device. Mnemonics are never shared or sent across the network. +- Real-time calculation of deployment costs. +- Ability to generate WireGuard configurations. +- Direct link to the monitoring page of a deployment’s hosting node. + +## RMB 1.0.5 + +- Deprecated the seed flag. 
\ No newline at end of file diff --git a/collections/about/roadmap/releasenotes/tfgrid_release_3_6_0.md b/collections/about/roadmap/releasenotes/tfgrid_release_3_6_0.md new file mode 100644 index 0000000..533a1a3 --- /dev/null +++ b/collections/about/roadmap/releasenotes/tfgrid_release_3_6_0.md @@ -0,0 +1,67 @@ +# ThreeFold Grid v3.6 Release Note + +Release Note of ThreeFold Grid v3.6. It includes updates, improvements and fixes of numerous grid components as described below. + +## TFPlayground v1.4.3 +- Updated Farming Calculator +- Better node-filtering mechanism by adding 'capacity' filter +- Simplified K8s solution deployment by eliminating the 'add ssh key' step +- Improved UX for manual solution deployment on dedicated nodes +- Fixed solution's post-deployment bad gateway issue +- Validation enhancements + +## Uhuru v1.2 (beta) +- Tackled the UI/UX issues and bugs. +- Added many features; a few items, such as a 'logout' option, are still missing. + +## ThreeFold Wallet v3.1.0 +- Enabled token unlocking feature +- Allow users to unlock their locked tokens via TFwallet +- Included various improvements and fixes, notably improved usability on iOS devices + +## TFConnect App v3.6.0 + +- Better usability and user experience through app workflow improvements +- Improved design and interface, look and feel + +## TF Planetary Network v0.3.0 +TF Planetary Network is an application that allows users to access a [peer-to-peer end-to-end encrypted global network](https://library.threefold.me/info/manual/#/technology/threefold__planetary_network) which lives on top of the existing internet or other peer-to-peer networks. This release's improvements: + +- New P2P functionalities on Desktop Client +- Improved the desktop clients for Planetary Network by adding support for M1 Macs +- Allowed the application to refresh the list of ‘peers’, allowing extra ‘peers’ to be added by TF org +- Debugged multiple account issues on Mac
+ +## TFTShop (GetTFT) v1.1.1 + +- Better usability and user experience through app workflow improvements +- Improved design, interface, look and feel, including improved TFT purchase flows for BTC-TFT and FIAT-TFT transactions + +## TFGrid Proxy v1.5.0 +TFGrid Proxy is a REST API-based server used to interact with TFGridDB (Database) in order to access all available node-related information. This release's improvements: + +- Added support for querying dedicated nodes in the gridproxy API +- Added support for twins and contracts +- Added filter for dedicated nodes +- Added missing queries on farms +- Added country API for node distribution + +## ZOS v3.1.0 +- Support pausing workloads to allow a grace period before canceling a contract +- Enabled log streaming from VMs/Containers to a remote log aggregation server + +## TFNode-Pilot v0.1.0 +Pocket Network is a blockchain data platform built for applications that use cost-efficient economics to coordinate and distribute data at scale, enabling seamless interactions between blockchains and applications. This release's content: + +- Reverse-engineered the Pokt node pilot into Node Pilot Light +- Deployed first version of PoktNetwork with the TF Terraform Grid Provider + +## TFChain v1.12 +- DAO support +- Dedicated nodes support +- General stability improvements +- Reworked farming policies +- Introduction of contract grace periods +- Farm certification through DAO +- New bridge code + diff --git a/collections/about/roadmap/releasenotes/tfgrid_release_3_6_1.md b/collections/about/roadmap/releasenotes/tfgrid_release_3_6_1.md new file mode 100644 index 0000000..cc1bc3d --- /dev/null +++ b/collections/about/roadmap/releasenotes/tfgrid_release_3_6_1.md @@ -0,0 +1,103 @@ +# ThreeFold Grid v3.6.1 Release Note + +Release Note of ThreeFold Grid v3.6.1. It includes updates, improvements and fixes of numerous grid components as described below.
+ +## TFGrid 3.6.1 components + - testnet tfchain 1.12.1 + - substrate client (go) release for type change + - tfchain client (JS) release for type change + - graphql 2.3.3 + - tfchain bridge v2.1.0 + - ZOS 3.1.0-rc1 + - weblets 1.4.3-rc1 + - terraform 1.2.1 + - gridproxy v1.5.1 + - explorer 3.2.2 + - tfgrid_dashboard 1.0.6 + +## Component Upgrades + +### TFPlayground v1.4.3 +- Updated Farming Calculator +- Better node-filtering mechanism by adding 'capacity' filter +- Simplified K8s solution deployment by eliminating the 'add ssh key' step +- Improved UX for manual solution deployment on dedicated nodes +- Fixed solution's post-deployment bad gateway issue +- Validation enhancements + + +### ThreeFold Wallet v3.1.0 +- Enabled token unlocking feature +- Allow users to unlock their locked tokens via TFwallet +- Included various improvements and fixes, notably improved usability on iOS devices + +### TFConnect App v3.6.0 + +- Better usability and user experience through app workflow improvements +- Improved design and interface, look and feel + +### TF Planetary Network v0.3.0 +TF Planetary Network is an application that allows users to access a [peer-to-peer end-to-end encrypted global network](https://library.threefold.me/info/manual/#/technology/threefold__planetary_network) which lives on top of the existing internet or other peer-to-peer networks. This release's improvements: + +- New P2P functionalities on Desktop Client +- Improved the desktop clients for Planetary Network by adding support for M1 Macs +- Allowed the application to refresh the list of ‘peers’, allowing extra ‘peers’ to be added by TF org +- Debugged multiple account issues on Mac
+ +### TFTShop (GetTFT) v1.1.1 + +- Better usability and user experience through app workflow improvements +- Improved design, interface, look and feel, including improved TFT purchase flows for BTC-TFT and FIAT-TFT transactions + +### TFGrid Proxy v1.5.0 +TFGrid Proxy is a REST API-based server used to interact with TFGridDB (Database) in order to access all available node-related information. This release's improvements: + +- Added support for querying dedicated nodes in the gridproxy API +- Added support for twins and contracts +- Added filter for dedicated nodes +- Added missing queries on farms +- Added country API for node distribution + +### ZOS v3.1.0 +- Support pausing workloads to allow a grace period before canceling a contract +- Enabled log streaming from VMs/Containers to a remote log aggregation server + +### TFNode-Pilot v0.1.0 +Pocket Network is a blockchain data platform built for applications that use cost-efficient economics to coordinate and distribute data at scale, enabling seamless interactions between blockchains and applications. This release's content: + +- Optimized node pilot by ThreeFold +- Deployed first version of PoktNetwork with the TF Terraform Grid Provider + +### TFChain v1.12 +- DAO support +- Dedicated nodes support +- General stability improvements +- Reworked farming policies +- Introduction of contract grace periods +- Farm certification through DAO +- New bridge code + +### TFGrid Dashboard +The TFGrid Dashboard is the main highlight of this release. We aim to provide a simpler workflow for our ThreeFold users and a more unified experience.
The supported functionalities for this release are: +- Farm management +- Twin management +- Dedicated nodes +- TFChain DAO +- Transferring money to TFChain accounts +- Swapping tokens on Binance and Stellar +- Exploring farms +- Exploring nodes +- Grid statistics + +The service is deployed at https://dashboard.test.grid.tf + +### Uhuru v1.4.0 (beta) +- Improved mobile view +- Improved multiple screen size views +- Added support and usability for more browsers (Firefox, Safari, etc.) +- Added features on chat group management +- Full backend rewrite for improved performance, stability and security + +### TFConnect App v3.6.0 +- UX rewrite for user flows like the welcome screen, registration screen, Planetary Network and many more +- Added Planetary Network for iOS users \ No newline at end of file diff --git a/collections/about/roadmap/releasenotes/tfgrid_release_3_7_0.md b/collections/about/roadmap/releasenotes/tfgrid_release_3_7_0.md new file mode 100644 index 0000000..5294152 --- /dev/null +++ b/collections/about/roadmap/releasenotes/tfgrid_release_3_7_0.md @@ -0,0 +1,101 @@ +# ThreeFold Grid v3.7.0 Release Note + +Release Note of ThreeFold Grid v3.7.0. +It includes updates, improvements and fixes of numerous grid components as described below. + + +## Component Upgrades + +### ThreeFold Wallet v3.7.0 +- Included the option to show transaction details (Sender, Receiver, Memo, Blockchain hash, Amount, Asset, Date) +- Added Farmer types details +- GraphQL types fix +- Enabled wallet deletion +- Wallet cache fixes +- Bug fixes and generic improvements + +### ThreeFold Connect v3.7.0 +- Improved loading of webpages +- Released the iOS version of Planetary Network +- Improved login flow (backend) +- Removed the concept of .3bot from the app registration and other displays +- Re-enabled TFChain bridge after twin changes +- Backend: Upgraded from Vue v2 to Vue v3 +- Backend: Full Kubernetes test deploy stack +- Frontend: Added TypeScript and Tailwind.
Removed the use of Vuetify +- Generic bug fixes + + +### ThreeFold Grid Proxy Client v1.5.9 +- Initial Grid Proxy Client implementation +- Includes the gridproxy API client along with API-specific information +- Includes classes that represent entities in the context of the API in the sub-module model (for making conversions between JSON objects and V objects) +- Added CI pipeline to run tests + + +### ThreeFold Chain v2.1.0 +- Improved validation for Public Config (Node) by implementing a maximum size on all types that are filled in by the user or ZOS +- Improved validation for Interfaces (Node) by implementing a maximum size on all types that are filled in by the user or ZOS +- Improved validation for Public IPs (Farm) +- Improved validation for Twin IPs +- Reworked public IPs on Contract; they are now shown as a list with the actual public IP object +- Executed billing in a transactional operation +- Added restriction of deployment hash length (32 bytes) + +### Planetary Network v3.7.0 +- Added support for M1 Macs +- Refresh list of peers +- Allow extra peers to be added by TF org +- Fixed UI crashes/lags +- Builds for Ubuntu, Windows and Mac + +### Freeflow Twin Beta 1.5 +- Major rebranding from Uhuru to Freeflow Twin +- Generic bug fixes +- Enabled tagging people in chats +- Improved dev setup +- Improved staging+production link setup +- Self-deploy improvements
+ +- PWA support +- Added info labels +- HTML encoding of messages +- Overflow handling +- Remember login session +- Chat: Added file upload progress view +- Chat: Added link preview +- Chat: Added more search options + +### TFGrid Dashboard v1.1.4 +- UX/UI: Updated color palette +- Updated font styles +- Updated sidebar menu UX to include TF Portal and TF Explorer +- Enabled day/night mode +- TFGrid Explorer: Added Nodes page, Statistics page, and Farms page +- TFGrid Explorer: Added categories for listed nodes (dedicated, rented, and rentable) +- TFGrid Stats: Updated minting details on TF Dashboard +- TFGrid Stats: Show receipts of previous nodes +- TFGrid Stats: Added calendar UI +- Clickable live support chat popup + +### TF Playground v1.4.4 +- UX/UI: Updated color palette +- Updated fonts +- New deployment/solutions icons in the sidebar +- New action icons in the deployment list +- Added solution categories +- Enabled custom ‘Presearch instance’ deployment +- New capacity filter +- Added IPv4/Planetary Network filter for specific instance deployments +- Improved capacity management for solution deployment: enables setting a full VM as the default virtual machine for deployment +- Easily fund a deployment profile/ID by scanning your ID wallet QR code +- Profile Manager: Avoid losing deployments with grace period listing +- Provided a simple list where you can select one of the online Grid gateways +- Profile Management: Added a ‘Confirmation’ popup before deleting a deployment profile +- Added an identifier for the current network in the sidebar (main/test/devnet) +- TF Playground Wallet: Show unlocked/locked tokens in balance +- Profile Management: Allow a user to create a profile with no SSH key +- TFGrid Client TS supports Algorand, Stellar, and TFChain modules.
+ +- NEW Node Pilot instance deployment +- NEW Subsquid solution + + + diff --git a/collections/about/roadmap/releasenotes/tfgrid_release_3_8_0.md b/collections/about/roadmap/releasenotes/tfgrid_release_3_8_0.md new file mode 100644 index 0000000..9142adf --- /dev/null +++ b/collections/about/roadmap/releasenotes/tfgrid_release_3_8_0.md @@ -0,0 +1,74 @@ +# ThreeFold Grid v3.8.0 Release Note + +Release Note of ThreeFold Grid v3.8.0. +Live on Testnet 02/02/2023 + +This release note includes updates, improvements and fixes of numerous grid components as described below: + +## The Components + +- TFChain v2.2.0 +- ZOS v3.4.0 +- TF Weblets v1.6.0 +- TF Dashboard v1.3.0 +- TFGrid Client v1.5.0 +- TFGrid Proxy v1.6.5 +- Terraform v1.6.0 + +## Upgrades and Improvements + +### TFChain v2.2.0 +- Added [Third Party Billing Services](https://github.com/threefoldtech/tfchain/blob/12bc8842c7c321d22e36667a91dfc5d3c7d04ab8/substrate-node/pallets/pallet-smart-contract/service_consumer_contract_flow.md), allowing contracts to be defined between TFChain users for a service and its billing. +- Reworked billing flow, see [details here](https://github.com/threefoldtech/tfchain/issues/269). +- Infrastructure-wise, we have integrated [Firesquid](https://docs.subsquid.io/), which is showing promising improvements with regard to storage and data syncing.
+ +- Added bugfixes around data validations and improved migrations + +### ZOS v3.4.0 +This release was mainly focused on the stabilization of ZOS, monitoring support, upgrading components and fixing bugs as described below: +- Vector and Node-exporter support for [monitoring](https://metrics.grid.tf/) +- Bugfixes/hardening around uptime reports, capacity reports and QSFS workloads cleanup +- Added fixes for grace period regression +- Added fixes for ZOS node recovery after network outages +- Uptime reports rework: reports now happen every 40 minutes instead of every 2 hours +- Added grace period workload regression fixes + +See the [3.4 milestone](https://github.com/threefoldtech/zos/milestone/11) for more details + +### TF Weblets v1.6.0 +- Support [Algorand](https://www.algorand.com/) solution deployment +- Simplified Weblet's Profile Manager +- Support [Mastodon](https://joinmastodon.org/) solution deployment +- Upgraded [Discourse](https://www.discourse.org/) solution deployment support +- Various bugfixes and [UI improvements](https://github.com/orgs/threefoldtech/projects/172/views/6) + +For more detailed information on this component release, please see [TF Weblets v1.6.0 Milestone](https://github.com/threefoldtech/grid_weblets/milestone/10) + +### TF Dashboard v1.3.0 +- Fixed broken 'Filter by Farm ID' +- Added fixes on HRU filter +- Added validation function on recipient's TFT address +- Added updates to sidebar icons +- Improved new farm addition function +- Added node filter validation fixes +- Support filtering nodes by farm name +- Added monitoring dashboard + +For more detailed information on this component release, please see [TF Dashboard v1.3.0 Milestone](https://github.com/threefoldtech/tfgrid_dashboard/milestone/12) + +### TFGrid Client 1.5.0 +- Added ZLogs workload support +- Added documentation updates + +### Terraform 1.6.0 +- Capacity planning upgrade +- Added Kubernetes token validation function + +### TFGrid Proxy v1.6.5 +- Added fixes on
dedicated nodes reservation +- Added fixes on TCP connection leaks +- Added Swagger docs fixes +- Added updates to the stats endpoint +- Added new queries for total resources +- Added more parameters to the /nodes endpoint to filter by twin_id and node_id + +For more detailed information on this component release, please see [TFGrid Proxy v1.6.5 Milestone](https://github.com/threefoldtech/tfgridclient_proxy/milestone/5) diff --git a/collections/about/roadmap/releasenotes/tfgrid_release_3_9_0.md b/collections/about/roadmap/releasenotes/tfgrid_release_3_9_0.md new file mode 100644 index 0000000..972cb78 --- /dev/null +++ b/collections/about/roadmap/releasenotes/tfgrid_release_3_9_0.md @@ -0,0 +1,170 @@ +# ThreeFold Grid v3.9.0 Release Note + +Release Note of ThreeFold Grid v3.9.0. + +- Live on Mainnet 12/04/2023 +- Live on Testnet 23/03/2023 + + +This release is mainly about power management/capacity planning orchestrated by the farmerbot, based on Wake-on-LAN (WOL) and the Reliable Message Bus (RMB), and the tooling updates to utilize both. It also includes several other updates, improvements and fixes of numerous grid components as described below: + +## The Components + +- TFChain v2.3.0 +- ZOS v3.6.0 +- TF Farmerbot v1.0.0 +- TF Weblets v1.7.0 +- TF Dashboard v1.4.0 +- TF Gridclient v2.0.0 +- TF Gridproxy v1.7.0 +- Terraform v1.8.x +- RMB-RS v1.0.2 +- TFChain-GraphQL v2.9.0 + +## Upgrades and Improvement Highlights + +Below are some of the highlights of TFGrid v3.9.0 component upgrades and improvements. +Feel free to check the [TFGrid v3.9.0 Project](https://github.com/orgs/threefoldtech/projects/172) for a more detailed overview of the TFGrid v3.9.0 release. + + +### RMB-RS v1.0.2 + +Reliable Message Bus Relay (RMB-RS) is a secure communication channel that allows bots to communicate in a chat-like way. It makes it very easy to host a service or a set of functions to be used by anyone, even if your service is running behind NAT.
+ +- Guaranteed authenticity of messages: you are always sure that the received message is authentic and comes from the sender. +- End-to-end encryption support. +- Support for third-party hosted Relays. Anyone can host a Relay and people can use it safely, since there is no way messages can be inspected while using e2e encryption. This is similar to Matrix home servers. + +See [Specifications](https://github.com/threefoldtech/rmb-rs/blob/main/docs/readme.md) for more information. + +> Below is the list of the __Public Relay Addresses__ hosted by ThreeFold: + +- Dev: wss://relay.dev.grid.tf +- QA: wss://relay.qa.grid.tf +- Test: wss://relay.test.grid.tf +- Main: wss://relay.grid.tf + +__Impacted Clients:__ + +- [RMB-SDK-TS](https://github.com/threefoldtech/rmb-sdk-ts/releases/tag/v1.1.1) +- [RMB-SDK-GO](https://github.com/threefoldtech/rmb-sdk-go/releases/tag/v1.0.0) + + +### TFChain v2.3.0 + +In this release, we modified the twin objects on TFChain and removed the notion of an `IP`. We added 2 fields (`Relay` and `PK`) to the twins. + +- __Relay__: an RMB Relay address which a client can connect to (see RMB changes) +- __PK__: a public encryption key which can be used to encrypt messages on the public Relay; if not set, traffic will be unencrypted. + +__Impacted Clients:__ + +- [Grid3_Client_RS](https://github.com/threefoldtecharchive/grid3_client_rs/releases/tag/v0.2.0) + +### TFChain-GraphQL v2.9.0 + +An important note for users: multiple steps are required to upgrade your TFChain-GraphQL to the latest v2.9.0 release, as described below: + +1. Restart the ingester from scratch using the new config +2. Restart the processor from scratch using the new code + +Please make sure all data is wiped before restarting both services. + +### TF Farmerbot v1.0.0 + +TF Farmerbot is a new component that acts as a power management solution, allowing farmers to enable the Wake-on-LAN mechanism on their farms.
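The twin change described above (TFChain v2.3.0) can be illustrated schematically; the JSON below is a sketch only, with field names taken from the release note and all values made up:

```json
{
  "id": 42,
  "account": "5Gabc...",
  "relay": "relay.grid.tf",
  "pk": "0x04a1..."
}
```

A client connects to the twin's `relay`, and when `pk` is set it can encrypt messages end to end; otherwise traffic over the public Relay is unencrypted.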
+ +## Other Component Changelogs + +### TFChain v2.3.0 + +- Fixed locked balances +- Added extra field to twin for public key +- Fixed serial number validation that was blocking nodes from registration +- Added fixes on farming policies on Testnet +- Allow farms to add public IP ranges +- Support power management and capacity planning +- Fixed TFT price on Mainnet +- Reworked migrations +- Set node's last uptime when the node sends an uptime event +- Disabled twin deletion +- Bug fixes around data validations, and more + +Please follow [this milestone](https://github.com/threefoldtech/tfchain/milestone/11) for more. + +### ZOS v3.6.0 + +- Support switching to dhcpd from udhcpd +- WOL support +- Power management support +- Fixed gateways backend validation +- Added number of workloads and deployments to ZOS reported statistics +- Support the new RMB and Relay +- Provide clearer messaging during twin registration + +Please follow [this milestone](https://github.com/threefoldtech/zos/milestone/12) for more details + +### TF Farmerbot v1.0.0 + +- Initial release +- Added support for the Power Management feature +- Added support for the Capacity Planning feature + +### TF Weblets v1.7.0 + +- NEW WordPress solution +- NEW Umbrel solution +- Added live button support +- Better error reporting mechanism +- Support Mnemonics field editing +- Removed flash messages after successful deployment + +Please follow [this milestone](https://github.com/threefoldtech/grid_weblets/milestone/9) for more details + +### TF Dashboard v1.4.0 + +- Public IP validation +- Renamed 'Swap' page to 'Bridge' +- Support setting Relay and Public Key +- Added filter by country validation +- Filter farms by pricing policy support +- Resource pricing calculator discount distinction between shared and dedicated nodes + +Please follow [this milestone](https://github.com/threefoldtech/tfgrid_dashboard/milestone/13) for more details + +### TF GridClient v2.0.0 + +- Added support for RMB and Public Key of Twins +- Added
Support for Farmerbot +- Added pricing calculator module +- Support service contracts +- Added size property to QSFS model +- HTTP server mode allows a configuration file for user credentials +- Added fixes on the 'Filter nodes by farmID' feature + +### Terraform v1.8.x + +- Added support for RMB and RMB Relay +- Added support for deployment using the direct client +- Added support for parallel deployment of resources +- Expanded resources and data sources documentation + +Please follow [this milestone](https://github.com/threefoldtech/terraform-provider-grid/milestone/16) for more details + +### RMB v1.0.2 + +The new version of RMB, written in Rust. + +- Added federation support +- Added signing and end-to-end encryption +- RMB-Peer for compatibility +- Added rate-limiting support + +### TFGrid Proxy v1.7.0 + +- Removed the proxying features, made obsolete by the new RMB. + +Please follow [this milestone](https://github.com/threefoldtech/tfgridclient_proxy/milestone/6) for more details + + + diff --git a/collections/about/roadmap/roadmap_readme.md b/collections/about/roadmap/roadmap_readme.md new file mode 100644 index 0000000..acf698c --- /dev/null +++ b/collections/about/roadmap/roadmap_readme.md @@ -0,0 +1,11 @@ +# ThreeFold Grid and Product Roadmap + +Welcome to ThreeFold's product roadmap! We are thrilled to have you on board as we journey towards a decentralized and sustainable future. Our product roadmap outlines the innovative solutions and technologies we are developing to revolutionize the way we compute, store data, and connect. Here, you will find a comprehensive overview of our latest and upcoming releases, enhancements, and advancements across our ecosystem.
+ +> Click [here](../../technology/concepts/grid3_components.md) to see the complete TFGrid Component List + +## Table of Contents + +- [TFGrid v3.x Announcement (Aug 2021 - Forum)](https://forum.threefold.io/t/announcement-of-tfgrid-3-0/1132) +- [What's new on TFGrid v3.x](../../technology/concepts/grid3_whatsnew.md) +- [Release Notes](./releasenotes/releasenotes_readme.md) \ No newline at end of file diff --git a/collections/about/tfchain.md b/collections/about/tfchain.md new file mode 100644 index 0000000..f7d875a --- /dev/null +++ b/collections/about/tfchain.md @@ -0,0 +1,30 @@ +

# TFChain

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [TFChain Uses](#tfchain-uses) + +*** + +## Introduction + +TFChain is a blockchain based on Parity Substrate which manages the TFGrid 3.x. + +TFChain consists of a set of TFChain nodes. + +## TFChain Uses + +This blockchain is used for: + +- storing information as needed on the ThreeFold Grid + - identity information of entities (person and company) + - 3Node phone book: where the 3Nodes are, how much capacity they have, which farmer they belong to + - TF Farmers: where they are based, how long they have been active, their reputation + - DigitalTwin phonebook: registry of all digital_twins, where they are, public key, unique id, ... (\*1) + - Reputation information: how good a farmer is, uptime of a 3Node (\*2) + - Account_Metadata: information about a digital currency wallet/account needed for vesting, locking, ... +- backend for Consensus_Engine +- smartcontract_it layer (how to provision workloads on top of TFGrid) +- the backend for TFChainDB + diff --git a/collections/about/threefold_ag.md b/collections/about/threefold_ag.md new file mode 100644 index 0000000..cd5ba89 --- /dev/null +++ b/collections/about/threefold_ag.md @@ -0,0 +1,30 @@ +

# ThreeFold Switzerland

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Some Plans](#some-plans) + +*** + +## Introduction + +ThreeFold Switzerland is officially called ThreeFold AG. + +While some activities are underway at this time, we are mainly preparing for future activities. + +## Some Plans + +- Promotion of ThreeFold, specifically in the Zug communities (CH). + + + +![](img/crypto_valley_zug_.jpg) \ No newline at end of file diff --git a/collections/about/threefold_companies.md b/collections/about/threefold_companies.md new file mode 100644 index 0000000..280a3f0 --- /dev/null +++ b/collections/about/threefold_companies.md @@ -0,0 +1,40 @@ +

# ThreeFold Related Companies

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Companies Overview](#companies-overview) + +*** + +## Introduction + +The following companies are related parties to ThreeFold. Our terms and conditions apply. + +## Companies Overview + +| THREEFOLD RELATED COMPANIES | Description | +| --- | --- | +| [ThreeFold Dubai or ThreeFold Cloud](./threefold_dubai.md) | Promotion of TFGrid + delivery of ThreeFold Cloud | +| [Threefold_Tech](./threefold_tech.md) | Belgium-based tech company that owns the IP (Intellectual Property) of the tech, which is open source | +| [ThreeFold_VZW](./threefold_vzw.md) | Not-for-profit organization in BE, intended to be used for grants work | +| [ThreeFold_AG](./threefold_ag.md) | ThreeFold in Zug, Switzerland | +| TF Hub Limited | ThreeFold in BVI | +| Codescalers | Egypt-based software development team, creates a lot of code for ThreeFold | + + +| FARMING COOPERATIVES | | +| --- | --- | +| [Mazraa](./mazraa.md) | A farmer in the Middle East who is part of ThreeFold_Dubai | +| [BetterToken](./bettertoken.md) | BetterToken is the very first ThreeFold Farming Cooperative in Europe | + + +| SOME LARGER FARMERS | | +| --- | --- | +| Green Edge | Early ThreeFold Farmer providing decentralized compute & storage | +| Bancadati | Large ThreeFold Farmer in Switzerland | +| Moresi | A neutral, technologically advanced data center in Switzerland | +| there are many more | ... | + +> Please note: ThreeFold Grid 3.x operates as a [DAO](./dao/dao.md); every party who wants to participate in the ThreeFold Grid uses the [TFChain](./tfchain.md) and our forums.
+> [Click here for more info about our DAO](./dao/tfdao.md) \ No newline at end of file diff --git a/collections/about/threefold_dubai.md b/collections/about/threefold_dubai.md new file mode 100644 index 0000000..9c1836b --- /dev/null +++ b/collections/about/threefold_dubai.md @@ -0,0 +1,61 @@ +

# ThreeFold Dubai

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Responsibilities](#responsibilities) +- [NEW 2023](#new-2023) +- [Some memories from 2015+](#some-memories-from-2015) +- [Structure: Oct 2021 - Dec 2022](#structure-oct-2021---dec-2022) +- [Official License](#official-license) + +*** + +## Introduction + +ThreeFold Dubai is the original team of ThreeFold, operated from Dubai and Belgium. We started in 2016. + +## Responsibilities + +- Promote the ThreeFold Grid and the ThreeFold Token +- Work with [ThreeFold Tech](./threefold_tech.md) on the creation and maintenance of the technology +- Legal: signing party with all T&C (terms and conditions) with all future farmers +- Work with many people and companies around the world to grow the ThreeFold ecosystem +- Look for partners who are willing to grow the ThreeFold ecosystem + +## NEW 2023 + +ThreeFold Dubai = ThreeFold DMCC and will launch a commercial business on top of the TFGrid. + +See more info in [this google doc](https://docs.google.com/document/d/10Ieu1D00vZdVNP9nQESk4WMszAM5vqi8XoWzSBy3xPU/edit) + +## Some memories from 2015+ + +At one point in time we had our office on the 74th floor of a building close to the Dubai International Airport. It was a cool spot with a great view, but we also realized the importance of being located closer to the ground. We stayed there for just a little more than a year. + +![](img/view_dubai.jpg) +![](img/dubai_office1.jpg) + +Our main office was and still is in Al Jadaf, which is, interestingly enough, a boat shipyard. This is where a lot of the ideas and work have been done to make ThreeFold possible. + +![](img/al_jadaf.jpg) +![](img/aljadaf2.jpg) + +The tower on the left in the photo above is where our office was. The place behind (to the right) is called Al Jadaf. We decided to do something different compared to most. No office in a fancy office building. Instead we have our office next to the water in a very old shipyard. Very unique, and much more cost-effective as well.
(-: + +Still today there are more than 100 servers located there in our testlab, and ThreeFold Dubai was run from there. + +## Structure: Oct 2021 - Dec 2022 + +- ThreeFold Dubai is our operational HQ from where all Foundation activities are coordinated. +- ThreeFold Dubai was mainly funded by TFTech (during 2019-2022); this will now change in 2023. +- ThreeFold Dubai sometimes uses ThreeFold Labs IT, which is a Dubai onshore company, for when we need onshore activities like visas for our people, work permits, invoicing, ... ThreeFold Labs IT is just a services company to deal with some of these practical elements. + +Adnan Fatayerji is the managing director and shareholder; in the future, the shares of ThreeFold Dubai will be 100% owned by The OurWorld Venture Creator. + +## Official License + +Please see below the ThreeFold DMCC license: + +![](img/threefold_dmcc_license_certificate.jpg) + diff --git a/collections/about/threefold_history.md b/collections/about/threefold_history.md new file mode 100644 index 0000000..df6c74c --- /dev/null +++ b/collections/about/threefold_history.md @@ -0,0 +1,59 @@ +

# ThreeFold History

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [History](#history) +- [ThreeFold Project Funding Total](#threefold-project-funding-total) +- [Status](#status) +- [Genesis Pool](#genesis-pool) +- [History of Tokens](#history-of-tokens) + +*** + +## Introduction + +The project is now in its sixth year and is grateful for the support of its community and its commercial entity [ThreeFold Tech](https://github.com/threefoldtech). + +ThreeFold is fundamentally a decentralized initiative. Within this framework, ThreeFold Dubai plays a pivotal role in championing and advancing the ThreeFold Grid and the broader movement. + +Our founders have largely retained their tokens, with only minimal sales, if any. Their intent is clear: they plan to hold onto their tokens until the grid achieves global recognition and the token value surpasses 0.2 USD. + +## History + +In the early days of ThreeFold, there were multiple teams collaborating, but the two core teams were located in Dubai and Belgium. + +A group of early supporters bought IT capacity (through buying TFT) from our Genesis Pool and our early farmers. These buyers could use their TFT to buy IT capacity from [ThreeFold Dubai](./threefold_dubai.md) or [BetterToken](./bettertoken.md) BV until April 2020, or from the TF Grid directly in a fully decentralized way starting May 2020. + +The ThreeFold Grid is the result of many farmers using the open source technology of ThreeFold Tech. + +Originally, the technology used was created by three companies: GreenIT Globe, ThreeFold Dubai & ThreeFold Tech. The last two still actively participate in the creation of tech components or content as used by all ThreeFold Farmers today. + +## ThreeFold Project Funding Total + +How much funding was used to make the ThreeFold project possible?
+ +> +- 50M USD in total + +- +20M USD for all farming (thank you, farmers) +- 15M USD in ThreeFold Tech as a convertible loan (from 50+ investors) +- 5M USD in early IT capacity purchases (as TFT) +- +10M USD funding from Incubaid/Kristof (estimate) + - ThreeFold Tech was established in Oct 2018, out of Incubaid + - Involving people related to [Incubaid](https://www.incubaid.com) + - Over quite some years, across multiple companies/projects + +## Status + +We have worked with multiple regions over the years to look for appropriate structures. We realize we need more funding; as such, we have launched a venture creator in Mauritius which will hopefully invest 7.5M EUR in TFTech as well as in TF Dubai. + +See our [overview of our companies](./threefold_companies.md) + + +## Genesis Pool + +To kickstart the ThreeFold Grid back in 2017, the foundation committed large amounts of capacity to the grid. This was called the [Genesis Pool](./genesis_pool.md), and the tokens sold, as mentioned, could be used to acquire capacity from this pool and more. + +## History of Tokens + +For more info about the history of tokens, see [token history](./token_history.md). \ No newline at end of file diff --git a/collections/about/threefold_tech.md b/collections/about/threefold_tech.md new file mode 100644 index 0000000..5f90ee4 --- /dev/null +++ b/collections/about/threefold_tech.md @@ -0,0 +1,41 @@ +

ThreeFold Tech

+ +

Table of Contents

+ +- [Overview](#overview) +- [Location](#location) + +*** + +## Overview + +ThreeFold Tech is a company developing & promoting software for self-healing, self-driving cloud & blockchain workloads. It has developed most of the software used in the ThreeFold_Grid. + +- TFTech is working together with industry partners to sell its software + - Major partners: HPE, Solidaridad, Kleos (learn more on [threefold.io/partners](https://threefold.io/partners)) +- Income: license and OEM agreements involving the TFTech technology + - License fees can be in the form of a revenue share on commercial products being developed on top of the TF platform. + - With respect to the TF Grid, a fee of 10% of the revenue generated is charged as a license fee for certified edge Internet capacity registered on the TF Grid network. +- Investors to this point: + - Self-funded by founders & current funding round + +See https://threefold.tech/ + +We believe that doing good for the world and growing a successful software company can go hand in hand. + +ThreeFold Tech is a Belgium-based for-profit software company responsible for the technology behind the ThreeFold_Grid. + +Business-wise, ThreeFold Tech focuses on + +- [X] selling licenses to companies and/or governments to deploy private versions of our cloud technology. +- [X] creating an antidote for the cyberpandemic, helping customers to protect themselves against this huge threat. + +The company is 80% engineering-centric today. + +> TFTech has no links to tokens and at this point (March 2021) does not own any of them either. All token & TFGrid activities are coordinated from ThreeFold Dubai. 
+ +![](img/threefold_tech.jpg) + +## Location + +![](img/threefold_tech_location.jpg) diff --git a/collections/about/threefold_vzw.md b/collections/about/threefold_vzw.md new file mode 100644 index 0000000..584653f --- /dev/null +++ b/collections/about/threefold_vzw.md @@ -0,0 +1,48 @@ +

ThreeFold VZW

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Functions](#functions) +- [Some History](#some-history) +- [Belgium Official Doc](#belgium-official-doc) + +*** + +## Introduction + +ThreeFold VZW is a not-for-profit organization based in Belgium. + +A **VZW** has no shareholders, only members. + + + +## Functions + +- owner of the wisdom_council +- eventually, ThreeFold VZW will own some of the decentralized organizations operating in the ThreeFold world, e.g. [TF Dubai](./threefold_dubai.md) + +## Some History + +We all started in Belgium from Korenlei 22, a very old building in the middle of town. It dates back to 1731. + +![](img/korenlei_22.jpg) + +![](img/korenlei_old.jpg) + +Now the foundation has another address in Lochristi. + + + +## Belgium Official Doc + +![](img/threefold_vzw_official_doc.jpg) + + + +See also https://trendstop.knack.be/en/detail/747872572/threefold-foundation.aspx \ No newline at end of file diff --git a/collections/about/token_history.md b/collections/about/token_history.md new file mode 100644 index 0000000..613d000 --- /dev/null +++ b/collections/about/token_history.md @@ -0,0 +1,96 @@ +

Token History

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Quick History Overview](#quick-history-overview) +- [Organic Growth](#organic-growth) +- [Farming Model Improvements](#farming-model-improvements) + - [TFT Versions](#tft-versions) +- [Migration](#migration) + - [Migration from TFTv1 Rivine to TFTv1 Stellar (2020)](#migration-from-tftv1-rivine-to-tftv1-stellar-2020) + - [Migration from TFTv1 Stellar (Staking Pool=TFTA) to TFTv2 Stellar (Trading or Production Pool=TFT)](#migration-from-tftv1-stellar-staking-pooltfta-to-tftv2-stellar-trading-or-production-pooltft) + - [Technical Information](#technical-information) + +*** + +## Introduction + +We present the ThreeFold token history and the path from TFT v1 towards TFT v2. + +## Quick History Overview + +- More than 10 years ago, this project started from out of our incubator (see [Incubaid](https://www.incubaid.com)) +- More than 6 years ago, the TF Foundation started deploying capacity for development purposes + - This became our [genesis pool](./genesis_pool.md), which was the beginning of farming + - In 2017-18, the value of the genesis pools was about 7M USD in TFT (tokens did not exist yet) + - Genesis pools are owned by ThreeFold Dubai (ThreeFold_Dubai). +- In March 2018, our first-generation blockchain for the ThreeFold_Token saw daylight + - The TFT v1 was launched on a blockchain called Rivine (PTO) + - The genesis pool resulted in the first batch of initial TFT + - The blockchain nodes were hosted by +30 different parties completely unrelated to each other +- In Q2 2019, ThreeFold_Dubai launched generation 1 of our TF Grid +- In April 2020, ThreeFold_Dubai launched TFGrid v2.0, which is now public and usable by the world + - ThreeFold has a new website and a new wiki + - The farmers & TFT holders have of their own will upgraded their wallets, zero-nodes, ... 
+- May 2020: ThreeFold_Dubai launched the 2nd version of our token, still called TFT, but this time on Stellar + - The original TFTv1 kept all the same properties and benefits and is now called TFTA, also on Stellar (a technology choice); anyone can move from TFTv1 to TFTv2 + - TF Foundation Dubai has provisioned the TFTv1 & TFTv2 on the Stellar blockchain, but has no influence on or access to any of the wallets or, for that matter, the 3Nodes (the boxes providing IT capacity) + - See below for more info; this was the result of 12 months of work with our community and, of course, consensus to do this. + +## Organic Growth + +We didn't artificially pump the value of the tokens. + +We did not issue (print) tokens and go out onto an exchange to offer these tokens to the market. This is referred to as a public ICO. Some ICOs were not very clean in how they created hype and convinced people to invest. Because of our decision not to do a public ICO, we have not been able to raise much money, but we feel that this was more aligned with our values. + +We have sold some TFT over the counter, but note that every buyer could at any point in time use these TFTs to buy IT capacity; this made these TFT purposeful, even from the very start. + +## Farming Model Improvements + +In Q2 2020 we launched TF Grid 2.0 with updated minting rules. As part of these farming rules, the max number of tokens became 4 billion, which changes the optics of the original size of the genesis token pool. + +In Q3 2021 we launched TF Grid 3.0, which again brought improvements to the farming model. It's up to the farmers to choose whether they want to change to the new farming model or not. 
+ +### TFT Versions + +| | version 1 Rivine | version 1 Stellar | version 2 Stellar | +| ------------------------------- | --------------------------- | ----------------- | --------------------------- | +| blockchain tech | Rivine, proof of blockstake | Public, Stellar | Public, Stellar | +| on public blockchain | March 2018 | May 2020 | May 2020 | +| farmed since | +-2017 | May 2020 | TBD | +| freely transferable (\*) | YES | YES | YES | +| complete blockchain feature set | YES | YES | YES | +| decentralized exchange | YES (atomic swap) | YES (Stellar) | YES (Stellar) | +| public exchange | BTC Alpha till Dec 2019 | Stellar | Stellar, BTC Alpha & Liquid (until August 2022) | +| freely tradable on exchange | YES | YES | YES | +| Name on Blockchain | TFT | TFTA | TFT | +| Purpose | v1 token | Staking Pool | Trading Pool | + +## Migration + +### Migration from TFTv1 Rivine to TFTv1 Stellar (2020) + +- TF Tech decided to no longer support development of Rivine; better blockchain technologies are now available +- The Foundation investigated many blockchain platforms & recommended the use of Stellar +- Jimber (the company which maintains the wallet, whose code is open source) made the changes in the wallet to be able to support this new blockchain +- The conversion had to be a mandatory one, because otherwise there would be the potential of double-spending problems over both simultaneously-active blockchains +- What happened here can be compared to a website deciding to change its database backend (e.g. from MS SQL to Oracle). The users of the website should not have to be aware of this migration +- Every user had to do the transaction themselves; no developer or anyone else had control over this migration step. It was not an automatic step +- Everyone can use the validation scripts available to check the correct conversion between two blockchain technologies. 
The validation scripts prove that every transaction in the conversion happened correctly + +### Migration from TFTv1 Stellar (Staking Pool=TFTA) to TFTv2 Stellar (Trading or Production Pool=TFT) + +- See [TFTA to TFT](../legal/terms_conditions/tfta_to_tft.md) + +### Technical Information + +[TFTA TrustLine](https://stellar.expert/explorer/public/asset/TFTA-GBUT4GP5GJ6B3XW5PXENHQA7TXJI5GOPW3NF4W3ZIW6OOO4ISY6WNLN2) + +Accounts that got initial balances migrated them from the previous blockchain, [Rivine](https://explorer2.threefoldtoken.com/). + +To validate this, each migration transaction contains the hash of the Rivine lock transaction in its memo, in hex format. + +The [Rivine block explorer](https://explorer2.threefoldtoken.com/) can be used for validation purposes. + +> Important note: The ThreeFold Token (TFT) is not an investment instrument. TFTs represent IT capacity on the ThreeFold Grid; farmers create TFT, developers use TFT. \ No newline at end of file diff --git a/collections/about/token_overview/img/token_distribution.png b/collections/about/token_overview/img/token_distribution.png new file mode 100644 index 0000000..a05ad5a Binary files /dev/null and b/collections/about/token_overview/img/token_distribution.png differ diff --git a/collections/about/token_overview/special_wallets/img/polkadot_wallet_example.png b/collections/about/token_overview/special_wallets/img/polkadot_wallet_example.png new file mode 100644 index 0000000..f8b65cc Binary files /dev/null and b/collections/about/token_overview/special_wallets/img/polkadot_wallet_example.png differ diff --git a/collections/about/token_overview/special_wallets/img/wallet_solution_provider_main.png b/collections/about/token_overview/special_wallets/img/wallet_solution_provider_main.png new file mode 100644 index 0000000..bb301f0 Binary files /dev/null and b/collections/about/token_overview/special_wallets/img/wallet_solution_provider_main.png differ diff --git 
a/collections/about/token_overview/special_wallets/img/wallet_solution_provider_test.png b/collections/about/token_overview/special_wallets/img/wallet_solution_provider_test.png new file mode 100644 index 0000000..5ef4669 Binary files /dev/null and b/collections/about/token_overview/special_wallets/img/wallet_solution_provider_test.png differ diff --git a/collections/about/token_overview/special_wallets/img/wallet_staking_pool.png b/collections/about/token_overview/special_wallets/img/wallet_staking_pool.png new file mode 100644 index 0000000..5d08a46 Binary files /dev/null and b/collections/about/token_overview/special_wallets/img/wallet_staking_pool.png differ diff --git a/collections/about/token_overview/special_wallets/img/wallet_tf_foundation_main.png b/collections/about/token_overview/special_wallets/img/wallet_tf_foundation_main.png new file mode 100644 index 0000000..5893319 Binary files /dev/null and b/collections/about/token_overview/special_wallets/img/wallet_tf_foundation_main.png differ diff --git a/collections/about/token_overview/special_wallets/img/wallet_tf_foundation_test.png b/collections/about/token_overview/special_wallets/img/wallet_tf_foundation_test.png new file mode 100644 index 0000000..83812ee Binary files /dev/null and b/collections/about/token_overview/special_wallets/img/wallet_tf_foundation_test.png differ diff --git a/collections/about/token_overview/special_wallets/stats_special_wallets.md b/collections/about/token_overview/special_wallets/stats_special_wallets.md new file mode 100644 index 0000000..96b882f --- /dev/null +++ b/collections/about/token_overview/special_wallets/stats_special_wallets.md @@ -0,0 +1,105 @@ +

ThreeFold Special Wallets

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Exchange and OTC Wallets](#exchange-and-otc-wallets) +- [ThreeFold Contribution Wallets](#threefold-contribution-wallets) +- [Wisdom Council Wallets](#wisdom-council-wallets) +- [Important Note](#important-note) +- [Remarks](#remarks) +- [Proof-of-Utilization Wallets](#proof-of-utilization-wallets) + +*** + +## Introduction + +We present special wallets that hold a given amount of TFT. + +## Exchange and OTC Wallets + +| **Description** | **TFT Balance** | **Address** | +| ------------------ | ----------- | -------------------------------------------------------------------------------- | +| Liquid Exchange #1 | {{#include ./wallet_data/GA7OPN4A3JNHLPHPEWM4PJDOYYDYNZOM7ES6YL3O7NC3PRY3V3UX6ANM.md}} | [GA7OPN4A3JNHLPHPEWM4PJDOYYDYNZOM7ES6YL3O7NC3PRY3V3UX6ANM](https://stellar.expert/explorer/public/account/GA7OPN4A3JNHLPHPEWM4PJDOYYDYNZOM7ES6YL3O7NC3PRY3V3UX6ANM) | +| Liquid Exchange #2 | {{#include ./wallet_data/GDSKFYNMZWTB3V5AN26CEAQ27643Q3KB4X6MY4UTO2LIIDFND4SPQZYU.md}} | [GDSKFYNMZWTB3V5AN26CEAQ27643Q3KB4X6MY4UTO2LIIDFND4SPQZYU](https://stellar.expert/explorer/public/account/GDSKFYNMZWTB3V5AN26CEAQ27643Q3KB4X6MY4UTO2LIIDFND4SPQZYU) | +| gettft.com | {{#include ./wallet_data/GBQHN7RL4LSRPR2TT74ID2UJPZ2AXCHQY2WKGCTDLJM3NXVJ7GQHUCOD.md}} | [GBQHN7RL4LSRPR2TT74ID2UJPZ2AXCHQY2WKGCTDLJM3NXVJ7GQHUCOD](https://stellar.expert/explorer/public/account/GBQHN7RL4LSRPR2TT74ID2UJPZ2AXCHQY2WKGCTDLJM3NXVJ7GQHUCOD) | +| BTC-Alpha Exchange | {{#include ./wallet_data/GBTPAXXP6534UPC4MLNGFGJWCD6DNSRVIPPOZWXAQAWI4FKTLOJY2A2S.md}} | [GBTPAXXP6534UPC4MLNGFGJWCD6DNSRVIPPOZWXAQAWI4FKTLOJY2A2S](https://stellar.expert/explorer/public/account/GBTPAXXP6534UPC4MLNGFGJWCD6DNSRVIPPOZWXAQAWI4FKTLOJY2A2S) | + +## ThreeFold Contribution Wallets + +| **Description** | **TFT Balance** | **Address** | +| ------------------------------- | ----------- | -------------------------------------------------------------------------------- | +| TF DAY2DAY operations | 
{{#include ./wallet_data/GB2C5HCZYWNGVM6JGXDWQBJTMUY4S2HPPTCAH63HFAQVL2ALXDW7SSJ7.md}} | [GB2C5HCZYWNGVM6JGXDWQBJTMUY4S2HPPTCAH63HFAQVL2ALXDW7SSJ7](https://stellar.expert/explorer/public/account/GB2C5HCZYWNGVM6JGXDWQBJTMUY4S2HPPTCAH63HFAQVL2ALXDW7SSJ7) | +| TF Promotion Wallet | {{#include ./wallet_data/GDLVIB44LVONM5K67LUPSFZMSX7G2RLYVBM5MMHUJ4NAQJU7CH4HBJBO.md}} | [GDLVIB44LVONM5K67LUPSFZMSX7G2RLYVBM5MMHUJ4NAQJU7CH4HBJBO](https://stellar.expert/explorer/public/account/GDLVIB44LVONM5K67LUPSFZMSX7G2RLYVBM5MMHUJ4NAQJU7CH4HBJBO) | +| TF Grants Wallet | {{#include ./wallet_data/GDKXTUYNW4BJKDM2L7B5XUYFUISV52KUU4G7VPNLF4ZSIKBURM622YPZ.md}} | [GDKXTUYNW4BJKDM2L7B5XUYFUISV52KUU4G7VPNLF4ZSIKBURM622YPZ](https://stellar.expert/explorer/public/account/GDKXTUYNW4BJKDM2L7B5XUYFUISV52KUU4G7VPNLF4ZSIKBURM622YPZ) | +| ThreeFold Carbon Credit Funding | {{#include ./wallet_data/GDIJY6K2BBRIRX423ZFUYKKFDN66XP2KMSBZFQSE2PSNDZ6EDVQTRLSU.md}} | [GDIJY6K2BBRIRX423ZFUYKKFDN66XP2KMSBZFQSE2PSNDZ6EDVQTRLSU](https://stellar.expert/explorer/public/account/GDIJY6K2BBRIRX423ZFUYKKFDN66XP2KMSBZFQSE2PSNDZ6EDVQTRLSU) | +| TF Team Wallet | {{#include ./wallet_data/GCWHWDRXYPXQAOYMQKB66SZPLM6UANKGMSL4SP7LSOIA6OTTOYQ6HBIH.md}} | [GCWHWDRXYPXQAOYMQKB66SZPLM6UANKGMSL4SP7LSOIA6OTTOYQ6HBIH](https://stellar.expert/explorer/public/account/GCWHWDRXYPXQAOYMQKB66SZPLM6UANKGMSL4SP7LSOIA6OTTOYQ6HBIH) | + +## Wisdom Council Wallets + +| **Description** | **TFT Balance** | **Address** | +| --------------------------------------- | ----------- | -------------------------------------------------------------------------------- | +| Liquidity/Ecosystem Contribution Wisdom | {{#include ./wallet_data/GBV734I2SV4YDDPVJMYXU3IZ2AIU5GEAJRAD4E4BQG7CA2N63NXSPMD6.md}} | [GBV734I2SV4YDDPVJMYXU3IZ2AIU5GEAJRAD4E4BQG7CA2N63NXSPMD6](https://stellar.expert/explorer/public/account/GBV734I2SV4YDDPVJMYXU3IZ2AIU5GEAJRAD4E4BQG7CA2N63NXSPMD6) | +| TF Promotion Wisdom | {{#include 
./wallet_data/GAI4C2BGOA3YHVQZZW7OW4FHOGGYWTUBEVNHB6MW4ZAFG7ZAA7D5IPC3.md}} | [GAI4C2BGOA3YHVQZZW7OW4FHOGGYWTUBEVNHB6MW4ZAFG7ZAA7D5IPC3](https://stellar.expert/explorer/public/account/GAI4C2BGOA3YHVQZZW7OW4FHOGGYWTUBEVNHB6MW4ZAFG7ZAA7D5IPC3) | +| TF Grants Wisdom | {{#include ./wallet_data/GCEJ7DMULFTT25UH4FAAGOZ6KER4WXAYQGJUSIITQD527DGTKSXKBQGR.md}} | [GCEJ7DMULFTT25UH4FAAGOZ6KER4WXAYQGJUSIITQD527DGTKSXKBQGR](https://stellar.expert/explorer/public/account/GCEJ7DMULFTT25UH4FAAGOZ6KER4WXAYQGJUSIITQD527DGTKSXKBQGR) | +| TF Team Wisdom | {{#include ./wallet_data/GAQXBLFG4BZGIVY6DBJVWE5EAP3UNHMIA2PYCUVLY2JUSPVWPUF36BW4.md}} | [GAQXBLFG4BZGIVY6DBJVWE5EAP3UNHMIA2PYCUVLY2JUSPVWPUF36BW4](https://stellar.expert/explorer/public/account/GAQXBLFG4BZGIVY6DBJVWE5EAP3UNHMIA2PYCUVLY2JUSPVWPUF36BW4) | +| Wisdom Council Locked | {{#include ./wallet_data/GAUGOSYLCX7JZTQYF2K7RIMHFWKSA3WSI2OQ4IRKXMDMVE6ABJIJMFQR.md}} | [GAUGOSYLCX7JZTQYF2K7RIMHFWKSA3WSI2OQ4IRKXMDMVE6ABJIJMFQR](https://stellar.expert/explorer/public/account/GAUGOSYLCX7JZTQYF2K7RIMHFWKSA3WSI2OQ4IRKXMDMVE6ABJIJMFQR) | + +## Important Note + +ThreeFold DMCC (Dubai) is in the process of acquiring a substantial number of tokens. While these tokens possess liquidity from a technical standpoint, they are not currently accessible or traded on the open market. This reserve of tokens has been allocated for our upcoming commercial rollout, and their governance will be managed through a consensus-based system with input from the community. + +## Remarks + +- All wisdom council wallets are protected by multisignature of the members of the wisdom council +- All foundation wallets are protected by members of the foundation (4 of 6 need to sign) +- Signatures can be checked by going to the detail of the account and then to the Stellar link +- The foundation will never spend tokens if the markets cannot support it, and all proceeds are 100% used for the benefit of the ThreeFold project. 
+ +## Proof-of-Utilization Wallets + +There are some wallets associated with [proof-of-utilization](../../../farming/proof_of_utilization.md). These wallets are on TFChain. + +The addresses are the following: + +- Mainnet ThreeFold Foundation: 5DCaGQfz2PH35EMJTHFMjc6Tk5SkqhjekVvrycY5M5xiYzis +- Mainnet Default Solution Provider: 5Dd6adUJH8wvqb9SPC96JdZ85nK1671MeMSxkPZ6Q7rE4byc +- Testnet ThreeFold Foundation: 5H6XYX17yJyjazoLVZqxxEPwMdGn99wginjmFBKtjvk8iJ3e +- Testnet Default Solution Provider: 5Esq6iLLBGGJFsCEXpoFhxHhqcaGqTvDasdwy8jPFDH1jYaM +- Staking Pool: 5CNposRewardAccount11111111111111111111111111FSU + +To check the balance of any of those wallets, follow these steps: + +- Go to the Polkadot API ([Mainnet](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/chainstate), [Testnet](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.test.grid.tf#/chainstate)) +- Under `selected state query`, select `system` +- In the drop-down menu on the right, select `account(AccountId32): FrameSystemAccountInfo` +- Under `Option`, write the wallet address of one of the accounts displayed above +- Click on the `plus` button on the far right of the `selected state query` line. + +As a general example, here's what it looks like: + +![Wallet example](./img/polkadot_wallet_example.png) + +Here are the outputs for the wallets shown above: + +- Mainnet ThreeFold Foundation + +![Mainnet TF Foundation Wallet](./img/wallet_tf_foundation_main.png) + +- Mainnet Default Solution Provider + +![Mainnet Solution Provider Wallet](./img/wallet_solution_provider_main.png) + +- Testnet ThreeFold Foundation + +![Testnet TF Foundation Wallet](./img/wallet_tf_foundation_test.png) + +- Testnet Default Solution Provider + +![Testnet Solution Provider Wallet](./img/wallet_solution_provider_test.png) + +- Staking Pool + +![Staking Pool Wallet](./img/wallet_staking_pool.png) + +> Note: To get the proper TFT amount, you need to account for the fact that TFT uses 7 decimal places. 
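As a minimal sketch, the conversion looks like this in Python (the raw balance used here is a hypothetical value, not one of the wallets above):

```python
def raw_to_tft(raw_balance: int) -> float:
    """Convert a raw on-chain amount to TFT (TFT uses 7 decimal places)."""
    return raw_balance / 10**7

# Hypothetical raw balance as returned by the chain state query:
print(raw_to_tft(1_234_567_890))  # 123.456789 TFT
```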
For this reason, to get the proper quantity in TFT, move the decimal place by dividing by 1e7 (i.e. 1x10⁷). \ No newline at end of file diff --git a/collections/about/token_overview/special_wallets/wallet_data/GA7OPN4A3JNHLPHPEWM4PJDOYYDYNZOM7ES6YL3O7NC3PRY3V3UX6ANM.md b/collections/about/token_overview/special_wallets/wallet_data/GA7OPN4A3JNHLPHPEWM4PJDOYYDYNZOM7ES6YL3O7NC3PRY3V3UX6ANM.md new file mode 100644 index 0000000..a34c7a2 --- /dev/null +++ b/collections/about/token_overview/special_wallets/wallet_data/GA7OPN4A3JNHLPHPEWM4PJDOYYDYNZOM7ES6YL3O7NC3PRY3V3UX6ANM.md @@ -0,0 +1 @@ +3340735.94 \ No newline at end of file diff --git a/collections/about/token_overview/special_wallets/wallet_data/GAI4C2BGOA3YHVQZZW7OW4FHOGGYWTUBEVNHB6MW4ZAFG7ZAA7D5IPC3.md b/collections/about/token_overview/special_wallets/wallet_data/GAI4C2BGOA3YHVQZZW7OW4FHOGGYWTUBEVNHB6MW4ZAFG7ZAA7D5IPC3.md new file mode 100644 index 0000000..cd1f5fd --- /dev/null +++ b/collections/about/token_overview/special_wallets/wallet_data/GAI4C2BGOA3YHVQZZW7OW4FHOGGYWTUBEVNHB6MW4ZAFG7ZAA7D5IPC3.md @@ -0,0 +1 @@ +258.80 \ No newline at end of file diff --git a/collections/about/token_overview/special_wallets/wallet_data/GAQXBLFG4BZGIVY6DBJVWE5EAP3UNHMIA2PYCUVLY2JUSPVWPUF36BW4.md b/collections/about/token_overview/special_wallets/wallet_data/GAQXBLFG4BZGIVY6DBJVWE5EAP3UNHMIA2PYCUVLY2JUSPVWPUF36BW4.md new file mode 100644 index 0000000..bab954e --- /dev/null +++ b/collections/about/token_overview/special_wallets/wallet_data/GAQXBLFG4BZGIVY6DBJVWE5EAP3UNHMIA2PYCUVLY2JUSPVWPUF36BW4.md @@ -0,0 +1 @@ +5000000.00 \ No newline at end of file diff --git a/collections/about/token_overview/special_wallets/wallet_data/GAUGOSYLCX7JZTQYF2K7RIMHFWKSA3WSI2OQ4IRKXMDMVE6ABJIJMFQR.md b/collections/about/token_overview/special_wallets/wallet_data/GAUGOSYLCX7JZTQYF2K7RIMHFWKSA3WSI2OQ4IRKXMDMVE6ABJIJMFQR.md new file mode 100644 index 0000000..5fd95eb --- /dev/null +++ 
b/collections/about/token_overview/special_wallets/wallet_data/GAUGOSYLCX7JZTQYF2K7RIMHFWKSA3WSI2OQ4IRKXMDMVE6ABJIJMFQR.md @@ -0,0 +1 @@ +10468506.00 \ No newline at end of file diff --git a/collections/about/token_overview/special_wallets/wallet_data/GB2C5HCZYWNGVM6JGXDWQBJTMUY4S2HPPTCAH63HFAQVL2ALXDW7SSJ7.md b/collections/about/token_overview/special_wallets/wallet_data/GB2C5HCZYWNGVM6JGXDWQBJTMUY4S2HPPTCAH63HFAQVL2ALXDW7SSJ7.md new file mode 100644 index 0000000..6641a70 --- /dev/null +++ b/collections/about/token_overview/special_wallets/wallet_data/GB2C5HCZYWNGVM6JGXDWQBJTMUY4S2HPPTCAH63HFAQVL2ALXDW7SSJ7.md @@ -0,0 +1 @@ +12447988.00 \ No newline at end of file diff --git a/collections/about/token_overview/special_wallets/wallet_data/GBQHN7RL4LSRPR2TT74ID2UJPZ2AXCHQY2WKGCTDLJM3NXVJ7GQHUCOD.md b/collections/about/token_overview/special_wallets/wallet_data/GBQHN7RL4LSRPR2TT74ID2UJPZ2AXCHQY2WKGCTDLJM3NXVJ7GQHUCOD.md new file mode 100644 index 0000000..8cd46f1 --- /dev/null +++ b/collections/about/token_overview/special_wallets/wallet_data/GBQHN7RL4LSRPR2TT74ID2UJPZ2AXCHQY2WKGCTDLJM3NXVJ7GQHUCOD.md @@ -0,0 +1 @@ +2908686.63 \ No newline at end of file diff --git a/collections/about/token_overview/special_wallets/wallet_data/GBTPAXXP6534UPC4MLNGFGJWCD6DNSRVIPPOZWXAQAWI4FKTLOJY2A2S.md b/collections/about/token_overview/special_wallets/wallet_data/GBTPAXXP6534UPC4MLNGFGJWCD6DNSRVIPPOZWXAQAWI4FKTLOJY2A2S.md new file mode 100644 index 0000000..d49104a --- /dev/null +++ b/collections/about/token_overview/special_wallets/wallet_data/GBTPAXXP6534UPC4MLNGFGJWCD6DNSRVIPPOZWXAQAWI4FKTLOJY2A2S.md @@ -0,0 +1 @@ +1832242.73 \ No newline at end of file diff --git a/collections/about/token_overview/special_wallets/wallet_data/GBV734I2SV4YDDPVJMYXU3IZ2AIU5GEAJRAD4E4BQG7CA2N63NXSPMD6.md b/collections/about/token_overview/special_wallets/wallet_data/GBV734I2SV4YDDPVJMYXU3IZ2AIU5GEAJRAD4E4BQG7CA2N63NXSPMD6.md new file mode 100644 index 0000000..df521e3 --- /dev/null +++ 
b/collections/about/token_overview/special_wallets/wallet_data/GBV734I2SV4YDDPVJMYXU3IZ2AIU5GEAJRAD4E4BQG7CA2N63NXSPMD6.md @@ -0,0 +1 @@ +16999920.00 \ No newline at end of file diff --git a/collections/about/token_overview/special_wallets/wallet_data/GCEJ7DMULFTT25UH4FAAGOZ6KER4WXAYQGJUSIITQD527DGTKSXKBQGR.md b/collections/about/token_overview/special_wallets/wallet_data/GCEJ7DMULFTT25UH4FAAGOZ6KER4WXAYQGJUSIITQD527DGTKSXKBQGR.md new file mode 100644 index 0000000..1b01949 --- /dev/null +++ b/collections/about/token_overview/special_wallets/wallet_data/GCEJ7DMULFTT25UH4FAAGOZ6KER4WXAYQGJUSIITQD527DGTKSXKBQGR.md @@ -0,0 +1 @@ +10000000.00 \ No newline at end of file diff --git a/collections/about/token_overview/special_wallets/wallet_data/GCWHWDRXYPXQAOYMQKB66SZPLM6UANKGMSL4SP7LSOIA6OTTOYQ6HBIH.md b/collections/about/token_overview/special_wallets/wallet_data/GCWHWDRXYPXQAOYMQKB66SZPLM6UANKGMSL4SP7LSOIA6OTTOYQ6HBIH.md new file mode 100644 index 0000000..182e928 --- /dev/null +++ b/collections/about/token_overview/special_wallets/wallet_data/GCWHWDRXYPXQAOYMQKB66SZPLM6UANKGMSL4SP7LSOIA6OTTOYQ6HBIH.md @@ -0,0 +1 @@ +328241.81 \ No newline at end of file diff --git a/collections/about/token_overview/special_wallets/wallet_data/GDIJY6K2BBRIRX423ZFUYKKFDN66XP2KMSBZFQSE2PSNDZ6EDVQTRLSU.md b/collections/about/token_overview/special_wallets/wallet_data/GDIJY6K2BBRIRX423ZFUYKKFDN66XP2KMSBZFQSE2PSNDZ6EDVQTRLSU.md new file mode 100644 index 0000000..76e10f5 --- /dev/null +++ b/collections/about/token_overview/special_wallets/wallet_data/GDIJY6K2BBRIRX423ZFUYKKFDN66XP2KMSBZFQSE2PSNDZ6EDVQTRLSU.md @@ -0,0 +1 @@ +10110962.98 \ No newline at end of file diff --git a/collections/about/token_overview/special_wallets/wallet_data/GDKXTUYNW4BJKDM2L7B5XUYFUISV52KUU4G7VPNLF4ZSIKBURM622YPZ.md b/collections/about/token_overview/special_wallets/wallet_data/GDKXTUYNW4BJKDM2L7B5XUYFUISV52KUU4G7VPNLF4ZSIKBURM622YPZ.md new file mode 100644 index 0000000..c280877 --- /dev/null +++ 
b/collections/about/token_overview/special_wallets/wallet_data/GDKXTUYNW4BJKDM2L7B5XUYFUISV52KUU4G7VPNLF4ZSIKBURM622YPZ.md @@ -0,0 +1 @@ +12996500.00 \ No newline at end of file diff --git a/collections/about/token_overview/special_wallets/wallet_data/GDLVIB44LVONM5K67LUPSFZMSX7G2RLYVBM5MMHUJ4NAQJU7CH4HBJBO.md b/collections/about/token_overview/special_wallets/wallet_data/GDLVIB44LVONM5K67LUPSFZMSX7G2RLYVBM5MMHUJ4NAQJU7CH4HBJBO.md new file mode 100644 index 0000000..7e6fce5 --- /dev/null +++ b/collections/about/token_overview/special_wallets/wallet_data/GDLVIB44LVONM5K67LUPSFZMSX7G2RLYVBM5MMHUJ4NAQJU7CH4HBJBO.md @@ -0,0 +1 @@ +14181682.63 \ No newline at end of file diff --git a/collections/about/token_overview/special_wallets/wallet_data/GDSKFYNMZWTB3V5AN26CEAQ27643Q3KB4X6MY4UTO2LIIDFND4SPQZYU.md b/collections/about/token_overview/special_wallets/wallet_data/GDSKFYNMZWTB3V5AN26CEAQ27643Q3KB4X6MY4UTO2LIIDFND4SPQZYU.md new file mode 100644 index 0000000..1d65b3e --- /dev/null +++ b/collections/about/token_overview/special_wallets/wallet_data/GDSKFYNMZWTB3V5AN26CEAQ27643Q3KB4X6MY4UTO2LIIDFND4SPQZYU.md @@ -0,0 +1 @@ +0.00 \ No newline at end of file diff --git a/collections/about/token_overview/token_overview.md b/collections/about/token_overview/token_overview.md new file mode 100644 index 0000000..4319621 --- /dev/null +++ b/collections/about/token_overview/token_overview.md @@ -0,0 +1,90 @@ +

ThreeFold Token Overview

+ +

Table of Contents

+ +- [Introduction to TFT](#introduction-to-tft) +- [Proof-of-Capacity](#proof-of-capacity) +- [Proof-of-Utilization](#proof-of-utilization) + - [Proof-of-Utility Distribution Flow](#proof-of-utility-distribution-flow) +- [TFT Distribution](#tft-distribution) +- [TFT Marketcap and Market Price](#tft-marketcap-and-market-price) +- [Complementary Information](#complementary-information) +- [Disclaimer](#disclaimer) + +*** + +## Introduction to TFT + +ThreeFold tokens, or TFTs, are exclusively generated when new capacity is added to the TF Grid. There are no centralized issuers. Tokens have not been created out of thin air. + +While the ThreeFold Grid can expand, a maximum of 1 billion TFTs can ever be in circulation. This limit ensures stability of value and incentivization for all stakeholders. + +TFT lives on the Stellar Blockchain. TFT holders benefit from a large ecosystem of proven wallets and mediums of exchange. + +By employing Stellar technology, TFT transactions and smart contracts are powered by one of the most energy-efficient blockchains available. Furthermore, TFT is the medium of exchange on the greenest internet network in the world. The market for farming, cultivating and trading TFT is open to all. + +Anyone with an internet connection, a power supply and the necessary hardware can become a Farmer or trade ThreeFold tokens (TFT). + +By farming, buying, holding, and utilizing ThreeFold Tokens, you are actively supporting the expansion of the ThreeFold Grid and its use cases — creating a more sustainable, fair, and equally accessible Internet. + +## Proof-of-Capacity + +ThreeFold uses proof-of-capacity to mint tokens. Since the genesis pool, all tokens that are being minted are the result of farming. Minting will stop during 2024, to keep the total amount of TFT at 1 billion, instead of the previously planned 4 billion. Read more about this [here](https://forum.threefold.io/t/end-feb-2024-update-from-the-team/4233). 
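The supply figures quoted on this page fit together in a simple way. The sketch below checks them against the 1 billion cap and applies the market-cap formula; the supply numbers come from the TFT Distribution section, while the price is a purely hypothetical placeholder:

```python
# Supply figures from the TFT Distribution section (in tokens):
TOTAL_SUPPLY = 942_000_000
TF_FOUNDATION_SUPPLY = 162_000_000
CIRCULATING_SUPPLY = 780_000_000
MAX_SUPPLY = 1_000_000_000

# Circulating supply is the total supply minus the foundation's holdings,
# and the total supply stays under the cap:
assert CIRCULATING_SUPPLY == TOTAL_SUPPLY - TF_FOUNDATION_SUPPLY
assert TOTAL_SUPPLY <= MAX_SUPPLY

def market_cap(price_usd: float, circulating: int = CIRCULATING_SUPPLY) -> float:
    """Market Cap = (TFT Market Price) x (TFT Circulating Supply)."""
    return price_usd * circulating

# With a hypothetical price of 0.01 USD per TFT:
print(market_cap(0.01))  # 7800000.0 USD
```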
+ +> For more details, see [Proof of Capacity](../../farming/proof_of_capacity.md) + +## Proof-of-Utilization + +TFT is used on the TFGrid to purchase network, compute and storage resources through the proof-of-utilization protocol. + +### Proof-of-Utility Distribution Flow + +![](img/token_distribution.png) + +> For more details, see [Proof-of-Utilization](../../farming/proof_of_utilization.md) + +## TFT Distribution + +The supply distribution of TFT is as follows: + +| Supply Distribution | Qty (Millions) | +| ------------------- | -------------- | +| Total supply | 942 | +| TF Foundation Supply | 162 | +| Circulating supply | 780 | +| Maximum supply | 1000 | + +The total supply of TFT is distributed as follows: + +| Total Supply Distribution | Qty (Millions) | +| ------------------------------------------- | -------------- | +| TF Foundation: Ecosystem Grants | 22 | +| TF Foundation: Promotion & Marketing Effort | 100 | +| TF Foundation: Ecosystem Contribution & Liquidity Exchanges | 40 | +| Genesis Pool & Farming Rewards | 780 | + +## TFT Marketcap and Market Price + +The TFT market price and marketcap are as follows: + +| **Description** | **Value** | +| ------------------------- | ------------- | +| TFT Market Price | {{#include ../../../values/tft_value.md}} USD | +| TFT Market Cap | {{#include ../../../values/tft_marketcap.md}} USD | + +The market cap is equal to the product of the TFT market price and the circulating supply. + +> Market Cap = (TFT Market Price) X (TFT Circulating Supply) + +The values here are subject to change. Check the current market conditions. + +## Complementary Information + +- [ThreeFold History](../../about/threefold_history.md) +- [Token History](../../about/token_history.md) +- [Special Wallets](./special_wallets/stats_special_wallets.md) + +## Disclaimer + +> Important Note: The ThreeFold Token (TFT) is not an investment instrument. 
\ No newline at end of file diff --git a/collections/cloud/.collection b/collections/cloud/.collection new file mode 100644 index 0000000..e69de29 diff --git a/collections/cloud/cloud_toc.md b/collections/cloud/cloud_toc.md new file mode 100644 index 0000000..4a50bd7 --- /dev/null +++ b/collections/cloud/cloud_toc.md @@ -0,0 +1,16 @@ +

# Cloud

+ +This section covers the essential information concerning Cloud utilization. + +To deploy on the ThreeFold Grid, refer to the [System Administrators](../../documentation/system_administrators/system_administrators.md) section. + +

## Table of Contents

+ +- [Cloud Units](./cloudunits.md) +- [Pricing](./pricing/pricing_toc.md) + - [Pricing Overview](./pricing/pricing.md) + - [Staking Discounts](./pricing/staking_discount_levels.md) + - [Cloud Pricing Compare](./pricing/cloud_pricing_compare.md) + - [Grid Billing](./grid_billing/grid_billing.md) +- [Resource Units](./resource_units_calc_cloudunits.md) +- [Resource Units Advanced](./resourceunits_advanced.md) \ No newline at end of file diff --git a/collections/cloud/cloudunits.md b/collections/cloud/cloudunits.md new file mode 100644 index 0000000..e9f34c4 --- /dev/null +++ b/collections/cloud/cloudunits.md @@ -0,0 +1,69 @@ +

# Cloud Units

+ +

## Table of Contents

+ +- [What are Cloud Units?](#what-are-cloud-units) +- [How is the price of Cloud Units (v4) calculated?](#how-is-the-price-of-cloud-units-v4-calculated) + - [Compute Capacity](#compute-capacity) + - [Storage Capacity](#storage-capacity) + - [Network](#network) + +*** + +## What are Cloud Units? + +Cloud units are the basis of price calculation for anyone intending to use or deploy on the ThreeFold Grid. + +Cloud units are a unified way to account for virtual hardware resources on the ThreeFold Grid. They are to compute, storage and network what the kilowatt (kW) is to energy. There are three categories of cloud units: + +- Compute Unit (CU): The amount of data processing power in terms of virtual CPU (vCPU) cores (logical [CPUs](https://en.wikipedia.org/wiki/Central_processing_unit)) and Random Access Memory ([RAM](https://en.wikipedia.org/wiki/Random-access_memory)). +- Storage Unit (SU): The amount of storage capacity in terms of Hard Disk Drives (HDDs) and Solid State Drives (SSDs) in Gigabytes (GB). +- Network Unit (NU): The amount of data that travels in and out of storage units or compute units, expressed in GB. + +> Note: [Resource units](./resource_units_calc_cloudunits.md) are used to calculate SU & CU. Resource Units are used to measure compute and storage capacity produced by hardware. + +When a solution is deployed on the ThreeFold Grid, the system automatically gathers the required amount of CU, SU, or NU. It is important to note that users are not billed upon reservation but only when utilizing the actual CU, SU and NU. TF Certified Farmers can define the price of the CU, SU, and NU they make available on the ThreeFold Grid. + +## How is the price of Cloud Units (v4) calculated? + +The following tables display how cloud units (v4) are calculated on the ThreeFold Grid. The 4th version of cloud units has been in use since Grid 2.2+, released in mid-2020. 
+ +### Compute Capacity + +| CU (Compute Unit) | | | | | +| ------------------------------------- | --- | --- | ---- | --------------- | +| GB Memory | 4 | 8 | 2 | | +| nr vCPU | 2 | 1 | 4 | | +| Passmark Minimum (expected is double) | 500 | 250 | 1000 | CPU performance | + +The passmark (CPU benchmark or alternative) is not measured on the grid yet. It is used in simulators to check the mechanisms and ensure enough performance per CU is delivered. + +Example of Compute unit: +- 4 GB memory & 2 virtual CPU (and 50GB of SSD disk space) +- Recommended price on TF Grid = 10 USD +- Alternative cloud price = between 40 USD and 180 USD + +See how we compare with the market compute prices [here](./pricing/pricing.md). + +### Storage Capacity + +| SU (Storage Unit) | HDD | SSD | +| ------------------- | ---- | --- | +| GB Storage Capacity | 1200 | 200 | + +HDD is only usable for Zero Database driven storage (e.g. ThreeFold Quantum Safe Storage). 1.2 TB of HDD is provided following the advised storage policy of 16+4 with 20% overhead. So the net usable storage would be 1TB. In other words, the SU corresponds in that case to 1TB of net usable storage and an extra 200GB for redundancy. + +Example of Storage unit: + +- 1TB of usable storage as provided by the Zero-DBs (the backend storage systems) +- Recommended price on TF Grid for 1 SU = 10 USD +- Alternative cloud price = between 20 USD and 200 USD + +See how we compare with market storage prices [here](./pricing/pricing.md). + +### Network + +| NU (Network Unit = per GB) = NRU per month | GB (NRU) | +| ------------------------------------------ | -------- | +| GB transferred OUT or IN | 1 | + +> We use SU-month and CU-month to show SU monthly costs. This can be compared to kilowatts (kW) to see electricity usage per month. Learn more about how this is calculated with [Resource units](./resource_units_calc_cloudunits.md), a way to measure the compute and storage capacity produced by hardware. 
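The tables above can be sketched in code. The CU formula below is the one used in the grid billing examples elsewhere in this manual; the function names are illustrative, not part of any ThreeFold API:

```python
# Sketch of cloud unit accounting based on the tables above.
# CU is derived from memory (GB) and vCPU count; SU from HDD/SSD capacity (GB).

def compute_units(cru: float, mru_gb: float) -> float:
    """Compute Units, per the formula used in the grid billing examples."""
    return min(
        max(mru_gb / 4, cru / 2),
        max(mru_gb / 8, cru),
        max(mru_gb / 2, cru / 4),
    )

def storage_units(hru_gb: float, sru_gb: float) -> float:
    """Storage Units: 1200 GB of HDD or 200 GB of SSD each count as one SU."""
    return hru_gb / 1200 + sru_gb / 200

# The three CU columns of the table above all evaluate to exactly 1 CU:
print(compute_units(cru=2, mru_gb=4))   # 1.0
print(compute_units(cru=1, mru_gb=8))   # 1.0
print(compute_units(cru=4, mru_gb=2))   # 1.0
```

This also shows why the table lists three memory/vCPU combinations for one CU: each is a point where the min/max expression evaluates to exactly 1.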
\ No newline at end of file diff --git a/collections/cloud/cloudunits_advanced.md b/collections/cloud/cloudunits_advanced.md new file mode 100644 index 0000000..05bd98c --- /dev/null +++ b/collections/cloud/cloudunits_advanced.md @@ -0,0 +1,52 @@ +# Cloud Units Advanced + +## How is the price of Cloud Units (v4) calculated? + +The following tables display how cloud units (v4) are calculated on the ThreeFold Grid. The 4th version of cloud units has been in use since Grid 2.2+, released in mid-2020. + +> Note: [Resource units](resource_units.md) are used to calculate SU & CU. Resource Units are used to measure compute and storage capacity produced by hardware. + +### Compute Capacity + +| CU (Compute Unit) | | | | | +| ------------------------------------- | --- | --- | ---- | --------------- | +| GB Memory | 4 | 8 | 2 | | +| nr vCPU | 2 | 1 | 4 | | +| Passmark Minimum (expected is double) | 500 | 250 | 1000 | CPU performance | + +The passmark (CPU benchmark or alternative) is not measured on the grid yet. It is used in simulators to check the mechanisms and ensure enough performance per CU is delivered. + +Example of Compute unit: + +- 4 GB memory & 2 virtual CPU (and 50GB of SSD disk space) +- Recommended price on TF Grid = 10 USD +- Alternative cloud price = between 40 USD and 180 USD + +See how we compare with the market compute prices [here](pricing). + +### Storage Capacity + +| SU (Storage Unit) | HDD | SSD | +| ------------------- | ---- | --- | +| GB Storage Capacity | 1200 | 200 | + +HDD is only usable for Zero Database driven storage (e.g. ThreeFold Quantum Safe Storage). 1.2 TB of HDD is provided following the advised storage policy of 16+4 with 20% overhead. So the net usable storage would be 1TB. In other words, the SU corresponds in that case to 1TB of net usable storage and an extra 200GB for redundancy. 
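The 16+4 storage policy mentioned above can be sketched as follows. This is a simplified model: the manual rounds the resulting ~960 GB of net capacity to roughly 1 TB, and the shard counts are the advised policy, not a fixed protocol constant:

```python
# Sketch of net usable capacity under a 16+4 erasure-coding policy:
# for every 20 shards written, 16 carry user data and 4 add redundancy.
DATA_SHARDS = 16
PARITY_SHARDS = 4

def net_usable_gb(gross_gb: float) -> float:
    """Net user-addressable capacity for a given gross disk capacity."""
    return gross_gb * DATA_SHARDS / (DATA_SHARDS + PARITY_SHARDS)

print(net_usable_gb(1200))  # 960.0, i.e. roughly the 1 TB net quoted above
```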
+ +Example of Storage unit: + +- 1TB of usable storage as provided by the Zero-DBs (the backend storage systems) +- Recommended price on TF Grid for 1 SU = 10 USD +- Alternative cloud price = between 20 USD and 200 USD + +See how we compare with market storage prices [here](pricing). + +### Network + +| NU (Network Unit = per GB) = NRU per month | GB (NRU) | +| ------------------------------------------ | -------- | +| GB transferred OUT or IN | 1 | + +> We use SU-month and CU-month to show SU monthly costs. This can be compared to kilowatts (kW) to see electricity usage per month. Learn more about how this is calculated with [Resource units](resource_units), a way to measure the compute and storage capacity produced by hardware. + + + diff --git a/collections/cloud/grid_billing/grid_billing.md b/collections/cloud/grid_billing/grid_billing.md new file mode 100644 index 0000000..bb9c407 --- /dev/null +++ b/collections/cloud/grid_billing/grid_billing.md @@ -0,0 +1,422 @@ +

# Grid Billing

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Resources to Update](#resources-to-update) + - [Current TFT Price](#current-tft-price) + - [Current Cloud Units Values](#current-cloud-units-values) +- [Node Contract](#node-contract) + - [Calculating the CU](#calculating-the-cu) + - [Calculating the SU](#calculating-the-su) + - [Calculating the Billing Rate for the Contract](#calculating-the-billing-rate-for-the-contract) + - [Applying the Discounts](#applying-the-discounts) +- [Rent Contract](#rent-contract) + - [Getting the Resources](#getting-the-resources) + - [Calculating the CU](#calculating-the-cu-1) + - [Calculating the SU](#calculating-the-su-1) + - [Calculating the Billing Rate for the Contract](#calculating-the-billing-rate-for-the-contract-1) + - [Applying the Dedicated Node Discount](#applying-the-dedicated-node-discount) + - [Applying the Staking Discount](#applying-the-staking-discount) +- [Name Contract](#name-contract) + - [Applying the Staking Discount](#applying-the-staking-discount-1) +- [Public IP](#public-ip) + - [Applying the Staking Discount](#applying-the-staking-discount-2) +- [Network Usage](#network-usage) + - [Data Usage](#data-usage) + - [NU Value](#nu-value) + - [Applying the Staking Discount](#applying-the-staking-discount-3) +- [Billing History](#billing-history) + +*** + +## Introduction + +In this section, we explain how billing works on the TFGrid through different examples: node, rent and name contracts, as well as public IP addresses and network usage. + + + +## Resources to Update + +Some of the resources used in these calculations should be updated whenever you redo them, namely the TFT price and the cloud unit (SU and CU) values. 
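As a compact sketch of the node-contract calculation walked through below (the CU/SU prices of 10 and 5 mUSD/h and the TFT price of 0.011 USD are the example values used in this section, not current figures):

```python
# Sketch of the node-contract billing flow described in this section.
# Prices and the TFT rate are example values; fetch the current ones
# from TFChain and Stellar as explained in this section.
CU_PRICE_MUSD_H = 10.0   # mUSD per CU per hour (example)
SU_PRICE_MUSD_H = 5.0    # mUSD per SU per hour (example)
TFT_USD = 0.011          # example TFT price in USD

def contract_cost_tft_per_hour(cru, mru, sru, hru, discount=0.0):
    """Hourly contract cost in TFT, with an optional staking discount."""
    cu = min(max(mru / 4, cru / 2), max(mru / 8, cru), max(mru / 2, cru / 4))
    su = hru / 1200 + sru / 200
    usd_per_hour = (cu * CU_PRICE_MUSD_H + su * SU_PRICE_MUSD_H) / 1000
    return usd_per_hour / TFT_USD * (1 - discount)

# Example deployment from this section (CRU=2, MRU=2, SRU=15, HRU=0)
# with a Gold (60%) staking discount:
print(round(contract_cost_tft_per_hour(2, 2, 15, 0, discount=0.60), 6))  # 0.377273
```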
+ +### Current TFT Price + +The TFT price can be retrieved directly through [Stellar](https://stellar.expert/explorer/public/asset/TFT-GBOVQKJYHXRR3DX6NOX2RRYFRCUMSADGDESTDNBDS6CDVLGVESRTAC47-1?asset[]=TFT-GBOVQKJYHXRR3DX6NOX2RRYFRCUMSADGDESTDNBDS6CDVLGVESRTAC47-1&filter=markets&market=USDC-GA5ZSEJYB37JRC5AVCIA5MOP4RHTM335X2KGX3IHOJAPP5RE34K4KZVN-1) or from the [Dashboard](https://dashboard.grid.tf/), through the price available in the header. + +![image](./img/grid_billing_1.png) + +### Current Cloud Units Values + +The current cloud units values can be retrieved directly from TFChain with the [Polkadot UI](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/chainstate) and the current chain state. + +- On the page `Chain State`, select `tfgridModule` as the `selected state query` +- Select `pricingPolicies(u32): Option` +- Enter the value of the default pricing policy, which is `1`, or enter the value of any other policy if you need to use a custom one +- Press `Enter` + +![image](./img/grid_billing_2.png) + +> Note: Values on chain are expressed in units of USD per hour, where 1 USD == 10,000,000 units (1e7) + +## Node Contract + +For this example, we will assume that the resources for this deployment are the following: + +``` +CRU: 2 +MRU: 2 +SRU: 15 +HRU: 0 +``` + +### Calculating the CU + +Let's calculate the CU of this deployment. + +For our example, the CU value is `10 mUSD/h`. Make sure that this value is updated according to the current values. + +``` +CU = min( max(MRU/4, CRU/2), max(MRU/8, CRU), max(MRU/2, CRU/4) ) + = min( max(2/4, 2/2), max(2/8, 2), max(2/2, 2/4) ) + = min( max(0.5, 1), max(0.25, 2), max(1, 0.5) ) + = min( 1, 2, 1 ) + = 1 +CU cost/hour = CU * CU pricing + = 1 * 10 mUSD/h + = 10 mUSD/h +``` + + +### Calculating the SU + +Let's calculate the SU of this deployment. + +The current SU value is `5 mUSD/h`. Make sure that this value is updated according to the current values. 
+ +``` +SU = HRU/1200 + SRU/200 + = 0/1200 + 15/200 + = 0 + 0.075 + = 0.075 +SU cost/hour = SU * SU pricing + = 0.075 * 5 mUSD/h + = 0.375 mUSD/h +``` + + +### Calculating the Billing Rate for the Contract + +Let's calculate the billing rate by combining the CU and SU from above. + +For this example, the current TFT value is `0.011 USD`. Make sure that this value is updated according to the current TFT value. + +``` +Contract cost/hour = CU cost/hour + SU cost/hour + = 10 mUSD/h + 0.375 mUSD/h + = 10.375 mUSD/h + = 0.010375 USD/h + = 0.010375 * 24 * 30 + = 7.47 USD/month + = 679.090909 TFT/month + = 0.943182 TFT/hour +``` + +### Applying the Discounts + +Before assuming that the price above is the final price, check first if your twin is eligible for any of the available staking discount levels. To understand more about discount levels, please read [this section](../pricing/staking_discount_levels.md). + +For this example, we assume that this twin has 18 months worth of TFTs staked, so the user will be eligible for a Gold discount level (60% discount). + +The 60% discount is thus equivalent to paying only 40% of the total price, as shown below: + +``` +Cost with 60% discount = 0.943182 * 0.4 + = 0.377273 TFT/hour +``` + + +## Rent Contract + + +### Getting the Resources + +You can get the resources of a node using different methods: Grid Proxy, GraphQL or the Polkadot UI. + +- Using Grid Proxy + - Grid Proxy API + - Go to the section [nodes endpoint](https://gridproxy.grid.tf/swagger/index.html#/GridProxy/get_nodes__node_id_) + - Click on `Try it out` + - Write the node ID + - Click on `Execute` + - Grid Proxy URL + - You can append the node ID to the following URL: + ``` + https://gridproxy.grid.tf/nodes/ + ``` + +- Using GraphQL + + Navigate to [ThreeFold's GraphQL](https://graphql.grid.tf/graphql), then use the following query and replace the node id with the desired node id. 
+ + ``` + query MyQuery { + nodes(where: {nodeID_eq: 83}) { + id + farmingPolicyId + resourcesTotal { + cru + mru + sru + hru + } + } + } + + ``` + +- TFChain and Polkadot UI + - On the page Chain State of the [Polkadot UI](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/chainstate), select `tfgridModule` as the `selected state query` + - Select `nodes(u32): Option` + - Enter the node ID + - Press `Enter` + + ![image](./img/grid_billing_3.png) + + +For our example, these are the resources for node `83` that will be used for the calculations. + +``` +CRU = 4 +SRU = 119.24 +HRU = 1863 +MRU = 15.55 +``` + +### Calculating the CU + +Let's calculate the CU of this deployment. + +For our example, the CU value is `10 mUSD/h`. Make sure that this value is updated according to the current values. + +``` +CU = min( max(MRU/4, CRU/2), max(MRU/8, CRU), max(MRU/2, CRU/4) ) + = min( max(15.55/4, 4/2), max(15.55/8, 4), max(15.55/2, 4/4) ) + = min( max(3.8875, 2), max(1.94375, 4), max(7.775, 1) ) + = min( 3.8875, 4, 7.775 ) + = 3.8875 +CU cost/hour = CU * CU pricing + = 3.8875 * 10 mUSD/h + = 38.875 mUSD/h +``` + + +### Calculating the SU + +For our example, the SU value is `5 mUSD/h`. Make sure that this value is updated according to the current values. + +``` +SU = HRU/1200 + SRU/200 + = 1863/1200 + 119.24/200 + = 1.5525 + 0.5962 + = 2.1487 +SU cost/hour = SU * SU pricing + = 2.1487 * 5 mUSD/h + = 10.7435 mUSD/h +``` + +### Calculating the Billing Rate for the Contract + +For our example, the current TFT value is `0.011 USD`. Make sure that this value is updated according to the current values. + +``` +Contract cost/hour = CU cost/hour + SU cost/hour + = 38.875 mUSD/h + 10.7435 mUSD/h + = 49.6185 mUSD/h + = 0.0496185 USD/h + = (0.0496185 * 24 * 30) + = 35.72532 USD/month + = 3247.75636 TFT/month +``` + +### Applying the Dedicated Node Discount + +There is a default `50%` discount for renting a node; this discount is not related to the staking discount. 
For more information on dedicated node discounts, please [read this section](../../../documentation/dashboard/deploy/node_finder.md#dedicated-nodes). + +``` +Cost with 50% discount = 35.72532 * 0.5 + = 17.86266 USD/month + = 1623.878182 TFT/month +``` + +### Applying the Staking Discount + +Before assuming that the price above is the final price, check first if your twin is eligible for any of the available staking discount levels. To understand more about discount levels, please read [this section](../pricing/staking_discount_levels.md). + +For this example, let's assume that this twin has 18 months worth of TFTs staked, so the user will be eligible for a Gold discount level (60% discount). + + + +``` +Cost with 60% discount = 17.86266 * 0.4 + = 7.145064 USD/month + = 649.551273 TFT/month +``` + +## Name Contract + +Let's calculate the cost of a name contract. + +For our example, we use the following value from the Pricing Policy. + +![image](./img/grid_billing_4.png) + +This value can then be converted to USD. + +``` +uniqueName in USD = 2500 / 10000000 + = 0.00025 USD/hour + +``` +Assuming a TFT conversion rate of `1 USD = 100 TFT`, we have the following: + +``` +uniqueName in TFT = 0.00025 * 100 + = 0.025 TFT/hour +``` + +### Applying the Staking Discount + +Before assuming that the price above is the final price, check first if your twin is eligible for any of the available staking discount levels. To understand more about discount levels, please read [this section](../pricing/staking_discount_levels.md). + +For this example, let's assume that this twin has 18 months worth of TFTs staked, so the user will be eligible for a Gold discount level (60% discount). + +``` +Cost with 60% discount = 0.025 * 0.4 + = 0.01 TFT/hour + +``` + +## Public IP + +Let's calculate the cost of public IPs. + +For our example, we use the following value from the Pricing Policy. + +![image](./img/grid_billing_5.png) + +This value can then be converted to USD. 
+ +``` +Public IP in USD = 40000 / 10000000 + = 0.004 USD/hour + +``` + +Assuming a TFT conversion rate of `1 USD = 100 TFT`, we have the following: + +``` +Public IP in TFT = 0.004 * 100 + = 0.4 TFT/hour +``` + +### Applying the Staking Discount + +Before assuming that the price above is the final price, check first if your twin is eligible for any of the available staking discount levels. To understand more about discount levels, please read [this section](../pricing/staking_discount_levels.md). + +For this example, let's assume that this twin has 18 months worth of TFTs staked, so the user will be eligible for a Gold discount level (60% discount). + +``` +Cost with 60% discount = 0.4 * 0.4 + = 0.16 TFT/hour + +``` + +> Note: This value gets added to the billing rate of your deployment. + + +## Network Usage + +Network usage is calculated for deployments with public IPs. It is reported every hour and its cost can be calculated approximately as follows, where the data usage is the amount of data sent and received: + +``` +network usage = data usage * NU value +``` + +### Data Usage + +To start, let's calculate the data usage. This can be tracked with a network tool like [nload](https://github.com/rolandriegel/nload), where the total amount of data sent and received can be displayed. + +![image](./img/grid_billing_6.png) + + +### NU Value + +Let's find the NU value of this deployment. + +For our example, we use the following value from the Pricing Policy. + +![image](./img/grid_billing_7.png) + +This value can then be converted to USD. + +``` +NU price in USD = 15000 / 10000000 + = 0.0015 USD/GB + +``` +Since in our example the TFT conversion rate is `1 USD = 100 TFT`, we have the following: + +``` +NU price in TFT = 0.0015 * 100 + = 0.15 TFT/GB +``` + +### Applying the Staking Discount + +Before assuming that the price above is the final price, check first if your twin is eligible for any of the available staking discount levels. 
To understand more about discount levels, please read [this section](../pricing/staking_discount_levels.md). + +For this example, let's assume that this twin has 18 months worth of TFTs staked, so the user will be eligible for a Gold discount level (60% discount). + +``` +Cost with 60% discount = 0.15 * 0.4 + = 0.06 TFT/GB + +``` + +As an example, let's assume that we used a total of 10GB in the last hour, so for the next hour the billing rate should be increased by: + +``` +Total network usage = 10GB * 0.06 TFT/GB + = 0.6 TFT +``` + +The billing rate in the next hour should be the following: + +``` +hourly billing rate = actual cost of the deployment + total network usage +``` + + +> Note: The calculated value will always be an approximation since it's not possible to manually calculate the exact value of the data used. + + +## Billing History + +Since the billing rate gets updated hourly, you can check the billing history from [GraphQL](https://graphql.grid.tf/graphql) using the following query. Make sure to enter the proper contract ID. 
+ +``` +query MyQuery { + contractBillReports(where: {contractID_eq: ""}) { + contractID + amountBilled + discountReceived + timestamp + } +} + +``` \ No newline at end of file diff --git a/collections/cloud/grid_billing/img/grid_billing_1.png b/collections/cloud/grid_billing/img/grid_billing_1.png new file mode 100644 index 0000000..b6123fa Binary files /dev/null and b/collections/cloud/grid_billing/img/grid_billing_1.png differ diff --git a/collections/cloud/grid_billing/img/grid_billing_2.png b/collections/cloud/grid_billing/img/grid_billing_2.png new file mode 100644 index 0000000..a3d647e Binary files /dev/null and b/collections/cloud/grid_billing/img/grid_billing_2.png differ diff --git a/collections/cloud/grid_billing/img/grid_billing_3.png b/collections/cloud/grid_billing/img/grid_billing_3.png new file mode 100644 index 0000000..216ebb6 Binary files /dev/null and b/collections/cloud/grid_billing/img/grid_billing_3.png differ diff --git a/collections/cloud/grid_billing/img/grid_billing_4.png b/collections/cloud/grid_billing/img/grid_billing_4.png new file mode 100644 index 0000000..9a6f291 Binary files /dev/null and b/collections/cloud/grid_billing/img/grid_billing_4.png differ diff --git a/collections/cloud/grid_billing/img/grid_billing_5.png b/collections/cloud/grid_billing/img/grid_billing_5.png new file mode 100644 index 0000000..8b9a85c Binary files /dev/null and b/collections/cloud/grid_billing/img/grid_billing_5.png differ diff --git a/collections/cloud/grid_billing/img/grid_billing_6.png b/collections/cloud/grid_billing/img/grid_billing_6.png new file mode 100644 index 0000000..51a0e26 Binary files /dev/null and b/collections/cloud/grid_billing/img/grid_billing_6.png differ diff --git a/collections/cloud/grid_billing/img/grid_billing_7.png b/collections/cloud/grid_billing/img/grid_billing_7.png new file mode 100644 index 0000000..6317825 Binary files /dev/null and b/collections/cloud/grid_billing/img/grid_billing_7.png differ diff --git 
a/collections/cloud/img/cloudunits_abstract.jpg b/collections/cloud/img/cloudunits_abstract.jpg new file mode 100644 index 0000000..4e5e089 Binary files /dev/null and b/collections/cloud/img/cloudunits_abstract.jpg differ diff --git a/collections/cloud/pricing/cloud_pricing_compare.md b/collections/cloud/pricing/cloud_pricing_compare.md new file mode 100644 index 0000000..5b95675 --- /dev/null +++ b/collections/cloud/pricing/cloud_pricing_compare.md @@ -0,0 +1,55 @@ +

# Price Comparison with Cloudorado

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Compute](#compute) + - [Compute Conclusion](#compute-conclusion) +- [Storage](#storage) + - [Storage Conclusion](#storage-conclusion) + +*** + +## Introduction + +We compare cloud pricing with Cloudorado. Note that the information here is subject to change and might not reflect current market prices. + +## Compute + +A ThreeFold Compute Unit (CU) is: + +- 4 GB memory +- 2 virtual CPU cores + +A good site to compare is Cloudorado: https://www.cloudorado.com/ + +![](img/cloudorado_compute_choices.jpg ':size=600x240') + +| Compute | Compute | +| --------------------------------- | --------------------------------- | +| ![](img/cloudorado_compute_1.jpg) | ![](img/cloudorado_compute_2.jpg) | + + +### Compute Conclusion + +> Our price can easily be < 10 USD for 1 compute unit (CU)
+> On the market, this is between 36 and 202 USD + +## Storage + +A ThreeFold Storage Unit (SU) is: + +- 1 TB of storage + +Again, a good site to compare is Cloudorado: https://www.cloudorado.com/ + +![](img/cloudorado_storage_choices.jpg ':size=600x270') + +![](img/cloudorado_storage_1.jpg ':size=500x610') + +### Storage Conclusion + +> Our price can easily be < 8 USD for 1 storage unit (SU)
+> On the market, this is between 19 and 154 USD + + \ No newline at end of file diff --git a/collections/cloud/pricing/cloudunits_pricing.md b/collections/cloud/pricing/cloudunits_pricing.md new file mode 100644 index 0000000..f8ab13d --- /dev/null +++ b/collections/cloud/pricing/cloudunits_pricing.md @@ -0,0 +1,69 @@ + +![](img/tfgrid_pricing.jpg) + +## Cloud Unit Pricing + +| Cloud Units | description | mUSD | mTFT | +| ----------------- | ------------------------------------------------ | ------------------ | ------------------ | +| Compute Unit (CU) | typically 2 vcpu, 4 GB mem, 50 GB storage | $CU_MUSD_HOUR/hour | $CU_MTFT_HOUR/hour | +| Storage Unit (SU) | typically 1 TB of net usable storage (*) | $SU_MUSD_HOUR/hour | $SU_MTFT_HOUR/hour | +| Network Unit (NU) | 1 GB transfer, bandwidth as used by TFGrid users | $NU_MUSD_GB/GB | $NU_MTFT_GB/GB | + + +| Network Addressing | description | mUSD | mTFT | +| ------------------ | ------------------------------------------ | --------------------- | --------------------- | +| IPv4 Address | Public IP Address as used by a TFGrid user | $IP_MUSD_HOUR/hour | $IP_MTFT_HOUR/hour | +| Unique Name | Usable as name on webgateways | $NAME_MUSD_HOUR/hour | $NAME_MTFT_HOUR/hour | +| Unique Domain Name | Usable as DNS name on webgateways | $DNAME_MUSD_HOUR/hour | $DNAME_MTFT_HOUR/hour | + +- mUSD = 1/1000 of USD, mTFT = 1/1000 of TFT +- TFT pricing pegged to USD (pricing changes in line with TFT/USD rate) +- **current TFT to USD price is $TFTUSD**, calculated on $NOW +- Pricing is calculated per hour for TFGrid 3.0 + + + +### Pricing Expressed Per Month + +| Cloud Units | description | USD NO DISCOUNT | USD 60% DISCOUNT | +| ----------------- | ------------------------------------------------ | ------------------- | ---------------------------- | +| Compute Unit (CU) | typically 2 vcpu, 4 GB mem, 50 GB storage | $CU_USD_MONTH/month | $CU_USD_MONTH_DISCOUNT/month | +| Storage Unit (SU) | typically 1 TB of net usable storage
(*) | $SU_USD_MONTH/month | $SU_USD_MONTH_DISCOUNT/month | +| Network Unit (NU) | 1 GB transfer, bandwidth as used by TFGrid users | $NU_USD_GB/GB | $NU_USD_MONTH_DISCOUNT/GB | +| IPv4 Address | Public IP Address as used by a TFGrid user | $IP_USD_MONTH/month | $IP_USD_MONTH_DISCOUNT/month | + + +> Please check the pricing calculator on http://pricing.threefold.me + +### Dedicated Servers + +Starting April 2022, TFGrid 3.0/a5 supports dedicated servers. You can reserve a full server that is usable only by you; a minimum discount of 70% is given for this use case. + +- Dedicated Node, 192 GB mem, 24 cores, 1000 GB SSD = 75 USD per month (max discount, 3Y staking) +- Dedicated Node, 32 GB mem, 8 cores, 1000 GB SSD = 31 USD per month (max discount, 3Y staking) + +The above example includes a generous 5TB of bandwidth used per node per month. + +These nodes are ideal for deploying blockchain nodes or other demanding workloads, and dedicated nodes lead to excellent pricing. + +To use a dedicated node, reserve a 3node for yourself in your TFGrid admin portal; only you can then deploy on this node, and there is no additional cost. + +> Please check the pricing calculator on http://pricing.threefold.me + + +### How to buy and use capacity + +- More info about [how to use the grid](grid_use) +- See our manual on how to get started. +- [TFTs can be bought in multiple locations](how_to_buy). 
+ +### More Info + +- See [here for more info about planet positive certification](certified_farming) +- Pricing is done based on cloud units, see [cloudunits](cloudunits) + +!!!include:staking_discount_levels + +!!!def alias:tfpricing,cloudunit_pricing,threefold_pricing + +!!!tfpriceinfo diff --git a/collections/cloud/pricing/img/cloudorado.jpg b/collections/cloud/pricing/img/cloudorado.jpg new file mode 100644 index 0000000..1cf6e01 Binary files /dev/null and b/collections/cloud/pricing/img/cloudorado.jpg differ diff --git a/collections/cloud/pricing/img/cloudorado_compute_1.jpg b/collections/cloud/pricing/img/cloudorado_compute_1.jpg new file mode 100644 index 0000000..0f261ff Binary files /dev/null and b/collections/cloud/pricing/img/cloudorado_compute_1.jpg differ diff --git a/collections/cloud/pricing/img/cloudorado_compute_2.jpg b/collections/cloud/pricing/img/cloudorado_compute_2.jpg new file mode 100644 index 0000000..1284fa9 Binary files /dev/null and b/collections/cloud/pricing/img/cloudorado_compute_2.jpg differ diff --git a/collections/cloud/pricing/img/cloudorado_compute_choices.jpg b/collections/cloud/pricing/img/cloudorado_compute_choices.jpg new file mode 100644 index 0000000..7074de6 Binary files /dev/null and b/collections/cloud/pricing/img/cloudorado_compute_choices.jpg differ diff --git a/collections/cloud/pricing/img/cloudorado_storage_1.jpg b/collections/cloud/pricing/img/cloudorado_storage_1.jpg new file mode 100644 index 0000000..022c737 Binary files /dev/null and b/collections/cloud/pricing/img/cloudorado_storage_1.jpg differ diff --git a/collections/cloud/pricing/img/cloudorado_storage_choices.jpg b/collections/cloud/pricing/img/cloudorado_storage_choices.jpg new file mode 100644 index 0000000..d18a2a3 Binary files /dev/null and b/collections/cloud/pricing/img/cloudorado_storage_choices.jpg differ diff --git a/collections/cloud/pricing/img/tfgrid_pricing.jpg b/collections/cloud/pricing/img/tfgrid_pricing.jpg new file mode 100644 index 
0000000..5775ded Binary files /dev/null and b/collections/cloud/pricing/img/tfgrid_pricing.jpg differ diff --git a/collections/cloud/pricing/pricing.md b/collections/cloud/pricing/pricing.md new file mode 100644 index 0000000..8854e3d --- /dev/null +++ b/collections/cloud/pricing/pricing.md @@ -0,0 +1,98 @@ +

# Cloud Unit Pricing

+ +

## Table of Contents

+ +- [Pricing Policy](#pricing-policy) +- [Pricing Expressed Per Month](#pricing-expressed-per-month) +- [Operation Fees](#operation-fees) +- [Certified Capacity](#certified-capacity) +- [Dedicated Nodes](#dedicated-nodes) +- [Staking Discount](#staking-discount) + - [Example for 40% discount (Silver)](#example-for-40-discount-silver) + +*** + +## Pricing Policy + +- The current prices are for resource usage on mainnet (testnet gets a 50% discount) +- A month is counted as 30 days (720 hours) + +| Cloud Units | Description | mUSD | mTFT | +| ----------------- | ------------------------------------------------ | ------------------ | ------------------ | +| Compute Unit (CU) | typically 2 vcpu, 4 GB mem, 50 GB storage | {{#include ../../../values/CU_MUSD_HOUR.md}}/hour | {{#include ../../../values/CU_MTFT_HOUR.md}}/hour | +| Storage Unit (SU) | typically 1 TB of net usable storage (*) | {{#include ../../../values/SU_MUSD_HOUR.md}}/hour | {{#include ../../../values/SU_MTFT_HOUR.md}}/hour | +| Network Unit (NU) | 1 GB transfer, bandwidth as used by TFGrid users | {{#include ../../../values/NU_MUSD_HOUR.md}}/hour | {{#include ../../../values/NU_MTFT_HOUR.md}}/hour | +
+ +| Network Addressing | Description | mUSD | mTFT | +| ------------------ | ------------------------------------------ | --------------------- | --------------------- | +| IPv4 Address | Public IP Address as used by a TFGrid user | {{#include ../../../values/IP_MUSD_HOUR.md}}/hour | {{#include ../../../values/IP_MTFT_HOUR.md}}/hour | +| Unique Name | Usable as name on webgateways | {{#include ../../../values/NAME_MUSD_HOUR.md}}/hour | {{#include ../../../values/NAME_MTFT_HOUR.md}}/hour | +| Unique Domain Name | Usable as DNS name on webgateways | {{#include ../../../values/DNAME_MUSD_HOUR.md}}/hour | {{#include ../../../values/DNAME_MTFT_HOUR.md}}/hour | + +- mUSD = 1/1000 of USD, mTFT = 1/1000 of TFT +- TFT pricing pegged to USD (pricing changes in line with TFT/USD rate) +- The current TFT to USD price is {{#include ../../../values/tft_value.md}} USD +- Pricing is calculated per hour for TFGrid 3.0 + +> Please check our [Cloud Pricing for utilization sheet](https://docs.google.com/spreadsheets/d/1E6MpGs15h1_flyT5AtyKp1TixH1ILuGo5tzHdmjeYdQ/edit#gid=2014089775) for more details. + +## Pricing Expressed Per Month + +| Cloud Units | Description | USD NO DISCOUNT | USD 60% DISCOUNT | +| ----------------- | ------------------------------------------------ | ------------------- | ---------------------------- | +| Compute Unit (CU) | typically 2 vcpu, 4 GB mem, 50 GB storage | 22.00/month | 8.80/month | +| Storage Unit (SU) | typically 1 TB of net usable storage | 14.00/month | 5.60/month | +| Network Unit (NU) | 1 GB transfer, bandwidth as used by TFGrid users | 0.05/GB | 0.03/GB | +| IPv4 Address | Public IP Address as used by a TFGrid user | 5.00/month | 3.00/month | + +## Operation Fees + +Operations on TFChain have a base fee of 0.001 TFT. Creating and destroying deployments usually involves several operations. + +## Certified Capacity + +Renting capacity on certified nodes is charged 25% extra (x 1.25). 
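The certified-capacity surcharge can be sketched in code together with the staking discount levels listed on this page. Whether the surcharge is applied before or after the discount is an assumption of this sketch, and the names are illustrative:

```python
# Sketch: combining the 25% certified-capacity surcharge with the
# staking discount levels described on this page.
CERTIFIED_MULTIPLIER = 1.25  # certified nodes are charged 25% extra

STAKING_DISCOUNT = {  # pricing level -> discount fraction
    "none": 0.0,
    "default": 0.20,
    "bronze": 0.30,
    "silver": 0.40,
    "gold": 0.60,
}

def effective_usd_per_hour(base: float, certified: bool, level: str) -> float:
    """Effective hourly cost after the surcharge and the staking discount."""
    cost = base * (CERTIFIED_MULTIPLIER if certified else 1.0)
    return cost * (1 - STAKING_DISCOUNT[level])

# A 0.01 USD/h workload on a certified node with a Gold discount:
print(round(effective_usd_per_hour(0.01, certified=True, level="gold"), 6))  # 0.005
```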
+ +## Dedicated Nodes + +Since April 2022, the TFGrid supports dedicated servers. With dedicated servers, you can reserve a full server exclusively for your use. This comes with a 50% discount, making it a cost-effective option. + +Here are two examples of dedicated nodes and their prices, at the maximum staking discount level (Gold => -60%, 18 months of staking): + +- Dedicated Node 1: 192 GB memory, 24 cores, 1000 GB SSD = $75 per month +- Dedicated Node 2: 32 GB memory, 8 cores, 1000 GB SSD = $31 per month + +> Please check our [Cloud Pricing for utilization sheet](https://docs.google.com/spreadsheets/d/1E6MpGs15h1_flyT5AtyKp1TixH1ILuGo5tzHdmjeYdQ/edit#gid=2014089775) for more details. + +These dedicated nodes come with a generous 5 TB of bandwidth usage per node per month. They are well-suited for deploying blockchain nodes or other resource-intensive workloads. Using a dedicated node requires reserving a 3Node in your TFGrid admin portal. Once reserved, you have exclusive deployment rights for that node, and there are no additional costs. + +When renting a dedicated node, you receive a 50% discount on the entire node. Note, however, that you pay for the entire node regardless of how much of it you use: the cost is not prorated based on the resources you utilize. + +## Staking Discount + +| Type | Pricing Level | Months of TFT linked to account | | ---------- | ------------- | ------------------------------- | | No staking | - 0% | 0 | | Default | - 20% | 1.5 months | | Bronze | - 30% | 3 months | | Silver | - 40% | 6 months | | Gold | - 60% | 18 months | + +TFChain charges users for proof of utilization on an hourly basis. The discount applied is determined by the amount of TFT (ThreeFold Token) available in the user's TFChain account.
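The tier table above can be read as a simple rule: the stake required for a tier is the hourly cost x 24 x 30 x the tier's number of months. A hypothetical Python helper (names are illustrative) to pick the applicable tier:

```python
# Highest staking tier whose required TFT stake is covered by the
# TFChain balance; thresholds follow the table above.
TIERS = [  # (name, discount, months of TFT linked to account)
    ("Gold", 0.60, 18),
    ("Silver", 0.40, 6),
    ("Bronze", 0.30, 3),
    ("Default", 0.20, 1.5),
]

def staking_discount(hourly_cost_tft, balance_tft):
    for name, discount, months in TIERS:
        if balance_tft >= hourly_cost_tft * 24 * 30 * months:
            return name, discount
    return "No staking", 0.0

# 10 TFT/hour of usage: Gold would require 10 * 24 * 30 * 18 = 129,600 TFT,
# so a 60,000 TFT balance lands in Silver (40%).
print(staking_discount(10, 60_000))  # ('Silver', 0.4)
```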
It's important to note that the discount is calculated based on the TFT balance in the TFChain account, not on other supported blockchains like Stellar. + +This discount mechanism operates automatically, and users don't need to take any specific actions to avail themselves of this benefit. However, it's worth mentioning that the maximum discount for network-related services is 40%. + +### Example for 40% discount (Silver) + +Let's break down the example for a 40% discount on Internet Capacity consumption: + +- Suppose your consumption on the ThreeFold Grid is worth 10 TFT per hour. +- To be eligible for a 40% discount, you need to have a minimum of 43,200 TFT in your account, calculated as 10 TFT * 24 hours * 30 days * 6 months. +- Similarly, to be eligible for a 60% discount, you would need a minimum of 129,600 TFT in your account, calculated as 10 TFT * 24 hours * 30 days * 18 months. +- If you have 60,000 TFT in your TFChain account, you would receive a 40% discount. +- However, since you don't have enough tokens to qualify for the 60% discount, it won't be applicable. +- With the 40% discount, your effective payment for the consumption would be 6 TFT per hour, as long as the amount of TFT in your account falls within the range of 43,200 to 129,600 (as calculated above). + +Keep in mind that these calculations are based on the example provided and the specific discount levels mentioned. \ No newline at end of file diff --git a/collections/cloud/pricing/pricing_toc.md b/collections/cloud/pricing/pricing_toc.md new file mode 100644 index 0000000..8e7a128 --- /dev/null +++ b/collections/cloud/pricing/pricing_toc.md @@ -0,0 +1,8 @@ +# Pricing + +

## Table of Contents

+ +- [Pricing Overview](./pricing.md) +- [Staking Discounts](./staking_discount_levels.md) +- [Cloud Pricing Compare](./cloud_pricing_compare.md) +- [Grid Billing](../grid_billing/grid_billing.md) \ No newline at end of file diff --git a/collections/cloud/pricing/sidebar.md b/collections/cloud/pricing/sidebar.md new file mode 100644 index 0000000..71e6615 --- /dev/null +++ b/collections/cloud/pricing/sidebar.md @@ -0,0 +1,5 @@ +- [**Home**](@threefold_home) +----- +**Pricing** +- [Cloud Pricing](@pricing) +- [Cloud Pricing Compare](@cloud_pricing_compare) diff --git a/collections/cloud/pricing/staking_discount_levels.md b/collections/cloud/pricing/staking_discount_levels.md new file mode 100644 index 0000000..0c4f62e --- /dev/null +++ b/collections/cloud/pricing/staking_discount_levels.md @@ -0,0 +1,34 @@ +

# Staking Discount Levels

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Table of Discounts](#table-of-discounts) +- [Example for 40% discount](#example-for-40-discount) + +*** + +## Introduction + +By staking TFT in their TFChain wallet, TFGrid users can benefit from discounts of up to 60%. + +## Table of Discounts + +| Type | Pricing Level | Months of TFT linked to account | | ---------- | ------------- | ------------------------------- | | No staking | - 0% | 0 | | Default | - 20% | 3 months | | Bronze | - 30% | 6 months | | Silver | - 40% | 12 months | | Gold | - 60% | 36 months | + +TFChain charges the user for proof_of_utilization every hour.
TFChain will calculate the discount based on the amount of TFT available in the user's account on TFChain (not on Stellar or any of the other blockchains we also support). This is an automatic form of staking; the user does not have to do anything to receive this benefit. For network-related services, the maximum discount is 40%. + +### Example for 40% discount + +- I've used 10 TFT worth of Internet Capacity on the TFGrid for the last hour. +- 10 TFT * 24 * 30 * 12 = 86,400 TFT needed for one year (Silver). +- 10 TFT * 24 * 30 * 36 = 259,200 TFT needed for three years (Gold). +- I have 120,000 TFT in my account on TFChain, which means I get the 40% discount. +- I don't have enough tokens to reach the 60% discount. + diff --git a/collections/cloud/resource_units_calc_cloudunits.md b/collections/cloud/resource_units_calc_cloudunits.md new file mode 100644 index 0000000..647cbf9 --- /dev/null +++ b/collections/cloud/resource_units_calc_cloudunits.md @@ -0,0 +1,89 @@ + +

# Resource Units

+ +

## Table of Contents

+ +- [Resource Units Overview](#resource-units-overview) + - [Compute](#compute) + - [Storage](#storage) + - [Storage cost price verification Dec 2021](#storage-cost-price-verification-dec-2021) +- [Change Log](#change-log) +- [Remarks](#remarks) + +*** + +## Resource Units Overview + +The ThreeFold Zero-OS and TFChain software translates resource units (CRU, MRU, HRU, SRU) into cloud units (CU, SU) for farming reward purposes. + +Resource units are used to measure and convert capacity on the hardware level into cloud units: CU & SU. + +| Unit Type | Description | Code | | ------------ | ------------------------------------ | ---- | | Core Unit | 1 Logical Core (Hyperthreaded Core) | CRU | | Mem Unit | 1 GB mem | MRU | | HD Unit | 1 GB | HRU | | SSD Unit | 1 GB | SRU | | Network Unit | 1 GB of bandwidth transmitted in/out | NRU | + +These are raw capacities as measured by the ThreeFold software running on Zero-OS. + +To learn how they convert into cloud units, see [here](./resourceunits_advanced.md). + +### Compute + +For farming, 1 CU equals: + +- 2 virtual CPUs, with a maximum oversubscription of 4 CPUs and a minimum required memory of 4 GB. +- An oversubscription of 4 CPUs is still conservative; as we understand it, many other providers use more. +- There needs to be at least 50 GB of SSD per CU; if not, the number of CUs is penalized. The reasoning is that people could not deploy their VMs or containers without a minimal amount of SSD. + +```python +cu = min((mru - 1) / 4, cru * 4 / 2, sru / 50) +``` + +- 1 GB of memory is subtracted for the operating system to function. +- Please note that the minimal PassMark per CU (with 4 GB of memory) needs to be 1000 on the farming side. This is not checked today but might be in the future; if your chosen CPU has less than 1000 PassMark per CU (of 4 GB of memory), your final number of CUs could be lower once that feature is introduced.
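As a worked example of the CU formula above (the node specs here are hypothetical):

```python
# CU formula from above: the minimum of the memory-, CPU- and
# SSD-limited counts.
def cu(cru, mru, sru):
    # cru = logical cores, mru = memory in GB (1 GB reserved for the OS),
    # sru = SSD capacity in GB
    return min((mru - 1) / 4, cru * 4 / 2, sru / 50)

# Hypothetical node: 8 logical cores, 32 GB memory, 500 GB SSD.
# (32 - 1) / 4 = 7.75, 8 * 4 / 2 = 16.0, 500 / 50 = 10.0
print(cu(cru=8, mru=32, sru=500))  # 7.75 -> memory is the limiting factor
```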
+ + + + + +### Storage + +For farming, 1 SU equals: +- 1.2 TB of HD capacity (which can deliver 1 TB of net usable storage) +- 200 GB of SSD capacity with a buffer of 20% + +```python +su = hru / 1200 + sru * 0.8 / 200 +``` + +#### Storage cost price verification Dec 2021 + +- price for a 16 TB HDD = 300 USD + - 16000 / 1200 = 13.3 SU + - 1 SU costs 300 / 13.3 = 22.5 USD for HDD +- price for a 2 TB SSD = 200 USD + - 2000 * 0.8 / 200 = 8 SU + - 1 SU costs 200 / 8 = 25 USD for SSD + +## Change Log + +- Original, non-final specs from Summer 2021; it was mentioned these were not final. +- Dec 2021 update for launch v3.x: + - There needs to be at least 50 GB of SSD capacity per CU. + - This was in the farming reward specs, but the formula above did not take it into consideration; in other words, an inconsistency between specs & formula. + - SRU division changed to 200 (was 300), to be more in line with HDD vs SSD pricing; this check needs to be redone every 6 months. This results in slightly more SU, which is good for farmers. +- Jan 2022 update for launch v3.x: + - Reverted the change done in Dec: SRU does not have to be deducted from CPU. This results in an increase of farming rewards (good for farmers), and the formula is simpler. + - Introduced a warning about minimum CPU requirements in relation to PassMark. + +## Remarks + +- There seems to be an issue in the simulator at play.grid.tf; we will check ASAP (17 Jan). +- We are checking all numbers & the DAO is coming to life in Jan/Feb 2022; once we have the DAO, every change to the specs needs to be approved by DAO voting members. For now we use the forum to notify, ask for feedback, and make sure the farming reward goes up for the changes if possible. There is already a minimal DAO in place: the blockchain TFChain validators (L1) need to find consensus to upgrade code.
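The Dec 2021 verification numbers above follow directly from the SU formula; as a quick check in Python:

```python
# SU formula from above: HDD counts at 1.2 TB per SU, SSD at 200 GB
# per SU with a 20% buffer.
def su(hru, sru):
    # hru = HDD capacity in GB, sru = SSD capacity in GB
    return hru / 1200 + sru * 0.8 / 200

print(su(hru=16000, sru=0))  # 13.33... SU for a 16 TB HDD
print(su(hru=0, sru=2000))   # 8.0 SU for a 2 TB SSD
print(300 / su(16000, 0))    # ~22.5 USD per SU for HDD
print(200 / su(0, 2000))     # 25.0 USD per SU for SSD
```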
diff --git a/collections/cloud/resourceunits.md b/collections/cloud/resourceunits.md new file mode 100644 index 0000000..7265f41 --- /dev/null +++ b/collections/cloud/resourceunits.md @@ -0,0 +1,16 @@ +## Resource Units + +Resource units are used to measure and convert capacity on the hardware level into cloud units: CU & SU. + + +| Unit Type | Description | Code | +| ------------ | ------------------------------------ | ---- | +| Core Unit | 1 Logical Core (Hyperthreaded Core) | CRU | +| Mem Unit | 1 GB mem | MRU | +| HD Unit | 1 GB | HRU | +| SSD Unit | 1 GB | SRU | +| Network Unit | 1 GB of bandwidth transmitted in/out | NRU | + +These are raw capacities as measured by the ThreeFold software running on Zero-OS. + +To learn how they convert into cloud units, see [here](./resourceunits_advanced.md) \ No newline at end of file diff --git a/collections/cloud/resourceunits_advanced.md b/collections/cloud/resourceunits_advanced.md new file mode 100644 index 0000000..674afb1 --- /dev/null +++ b/collections/cloud/resourceunits_advanced.md @@ -0,0 +1,62 @@ +

# Resource Units Calculation

+ +

## Table of Contents

+ +- [Calculation from resource units to CU/SU for farming purposes](#calculation-from-resource-units-to-cusu-for-farming-purposes) +- [Compute](#compute) +- [Storage](#storage) + - [Storage cost price verification Dec 2021](#storage-cost-price-verification-dec-2021) + +*** + +## Calculation from resource units to CU/SU for farming purposes + +The ThreeFold Zero-OS and TFChain software translates resource units (CRU, MRU, HRU, SRU) into cloud units (CU, SU) for farming reward purposes. + +## Compute + +For farming, 1 CU equals: + +- 2 virtual CPUs, with a maximum oversubscription of 4 CPUs and a minimum required memory of 4 GB. +- An oversubscription of 4 CPUs is still conservative; as we understand it, many other providers use more. +- There needs to be at least 50 GB of SSD per CU; if not, the number of CUs is penalized. The reasoning is that people could not deploy their VMs or containers without a minimal amount of SSD. + +```python +cu = min((mru - 1) / 4, cru * 4 / 2, sru / 50) +``` + +- 1 GB of memory is subtracted for the operating system to function. +- Please note that the minimal PassMark per CU (with 4 GB of memory) needs to be 1000 on the farming side. This is not checked today but might be in the future; if your chosen CPU has less than 1000 PassMark per CU (of 4 GB of memory), your final number of CUs could be lower once that feature is introduced.
+ + +## Storage + +For farming, 1 SU equals: +- 1.2 TB of HD capacity (which can deliver 1 TB of net usable storage) +- 200 GB of SSD capacity with a buffer of 20% + +```python +su = hru / 1200 + sru * 0.8 / 200 +``` + +### Storage cost price verification Dec 2021 + +- price for a 16 TB HDD = 300 USD + - 16000 / 1200 = 13.3 SU + - 1 SU costs 300 / 13.3 = 22.5 USD for HDD +- price for a 2 TB SSD = 200 USD + - 2000 * 0.8 / 200 = 8 SU + - 1 SU costs 200 / 8 = 25 USD for SSD + + + \ No newline at end of file diff --git a/collections/collaboration/.collection b/collections/collaboration/.collection new file mode 100644 index 0000000..e69de29 diff --git a/collections/collaboration/PULL_REQUEST_TEMPLATE.md b/collections/collaboration/PULL_REQUEST_TEMPLATE.md new file mode 100644 index 0000000..c41fc2e --- /dev/null +++ b/collections/collaboration/PULL_REQUEST_TEMPLATE.md @@ -0,0 +1,18 @@ +### Description + +Describe the changes introduced by this PR and what it affects + +### Changes + +List of changes this PR includes + +### Related Issues + +List of related issues + +### Checklist + +- [ ] Tests included +- [ ] Build passes +- [ ] Documentation +- [ ] Code format and docstrings \ No newline at end of file diff --git a/collections/collaboration/bug_report.md b/collections/collaboration/bug_report.md new file mode 100644 index 0000000..db75a12 --- /dev/null +++ b/collections/collaboration/bug_report.md @@ -0,0 +1,29 @@ +--- +name: Bug report +about: Create a report to help us improve +title: '' +labels: '' +assignees: '' + +--- + +## Describe the bug + +A clear and concise description of what the bug is. + +## To Reproduce + +Steps to reproduce the behavior: + + 1. Go to '...' + 2. Click on '....' + 3. Scroll down to '....' + 4. See error + +## Expected behavior + +A clear and concise description of what you expected to happen. + +## Screenshots + +If applicable, add screenshots to help explain your problem.
\ No newline at end of file diff --git a/collections/collaboration/code_conduct.md b/collections/collaboration/code_conduct.md new file mode 100644 index 0000000..2c8ffd6 --- /dev/null +++ b/collections/collaboration/code_conduct.md @@ -0,0 +1,318 @@ +

# Code of Conduct

+ +

## Table of Contents

+ +- [Introduction: Collaboration Manifest](#introduction-collaboration-manifest) +- [Code of Conduct](#code-of-conduct) + - [Forum \& Chat Rules](#forum--chat-rules) + - [Moderation Rights](#moderation-rights) + - [Contribution](#contribution) + - [Keep It Simple and Relevant](#keep-it-simple-and-relevant) + - [Content Verification Guidelines](#content-verification-guidelines) + - [Freedom of Speech](#freedom-of-speech) +- [Contribution Guidelines](#contribution-guidelines) + - [This is a Civilized Place for Public Discussion](#this-is-a-civilized-place-for-public-discussion) + - [Improve the Discussion](#improve-the-discussion) + - [Be Agreeable, Even When You Disagree](#be-agreeable-even-when-you-disagree) + - [Your Participation Counts](#your-participation-counts) + - [If You See a Problem, Flag It](#if-you-see-a-problem-flag-it) + - [Always Be Civil](#always-be-civil) + - [Keep It Tidy](#keep-it-tidy) + - [Post Only Your Own Stuff](#post-only-your-own-stuff) + - [Powered by You](#powered-by-you) + - [Terms of Service](#terms-of-service) +- [Terms of Service (TOS)](#terms-of-service-tos) + - [Important Terms](#important-terms) + - [Your Permission to Use the Forum](#your-permission-to-use-the-forum) + - [Conditions for Use of the Forum](#conditions-for-use-of-the-forum) + - [Acceptable Use](#acceptable-use) + - [Content Standards](#content-standards) + - [Enforcement](#enforcement) + - [Your Account](#your-account) + - [Your Content](#your-content) + - [Your Responsibility](#your-responsibility) + - [Disclaimers](#disclaimers) + - [Limits on Liability](#limits-on-liability) + - [Feedback](#feedback) + - [Termination](#termination) + - [Disputes](#disputes) + - [General Terms](#general-terms) + - [Contact](#contact) + - [Changes](#changes) + +*** + +# Introduction: Collaboration Manifest + +FreeFlow Nation created this collaboration manifest, which can be freely used by any organization that finds it useful.
If you would like to see any changes to this document, please email **info@freeflownation.org**. +We at ThreeFold want to follow these guidelines. + +This document has been created honoring the [FreeFlow Nation Manifesto](https://www.freeflownation.org/manifesto.html). + +# Code of Conduct + +## Forum & Chat Rules + +We are committed to making participation a harassment-free experience for everyone, regardless of level of experience, gender, gender identity or expression, sexual orientation, disability, personal appearance, race, ethnicity, age, religion, or nationality. + +Examples of unacceptable behavior include, but are not limited to: + +* Use of sexualized language or imagery +* Personal attacks +* Trolling or insulting/derogatory comments +* Public or private harassment +* Publishing private information without explicit permission (e.g. physical or electronic address) + +## Moderation Rights + +Content moderators have the right and responsibility to remove, edit, or reject all posts, comments, and other contributions that are not aligned with this Code of Conduct. They may also suspend or ban any forum member for behaviors that they deem inappropriate, threatening, offensive, or harmful. + +By adopting this Code of Conduct, moderators commit themselves to fairly and consistently applying these principles to every aspect of managing this community. Moderators who do not follow or enforce the Code of Conduct may be permanently removed from the moderation team. + +Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by flagging the post or comment in question. All complaints will be reviewed and investigated and will result in a response deemed both necessary and appropriate to the circumstances. Moderators are obligated to maintain confidentiality regarding the identity of the reporting party. + +The forum is a place where we bring people together who share the same values and want to make the world a better place.
Feedback is appreciated but must always remain respectful and helpful. Moderators thus have the right to remove any posts spreading negativity and/or attacking the project and its founders. + +## Contribution + +Everyone is welcome to contribute, and we thank you for your content contributions. + +Content can be chat, questions, answers, blogs, articles, knowledge base information, tutorials, … + +The more people collaborate, the more relevant the resulting body of information becomes. + +Creating content constitutes your agreement to release submitted content as public domain ([CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/)). + +## Keep It Simple and Relevant + +An overload of information can be even worse than not enough information. + +Please keep your discussions and contributions information-rich. If your signal-to-noise ratio gets too low, you will be muted or banned from the forum or relevant chat. This is up to the discretion of ThreeFold moderators. + +## Content Verification Guidelines + +We believe it’s in the best interest of our community to know the origin and authenticity of any information contributed. We want to do everything possible to verify the authenticity of the content created (e.g. messages, knowledge-base, chats, etc.). As such, it is our right to block a contributor from creating content if their identity cannot be verified. + +## Freedom of Speech + +We believe in freedom of speech as long as it’s not in contradiction with our Content Verification Guidelines. + +# Contribution Guidelines + +## This is a Civilized Place for Public Discussion + +Please treat this discussion forum with the same respect you would a public park. We, too, are a shared community resource — a place to share skills, knowledge and interests through ongoing conversation. + +These are not hard and fast rules, merely guidelines to aid the human judgment of our community and keep this a clean and well-lighted place for civilized public discourse.
+ +## Improve the Discussion + +Help us make this a great place for discussion by always working to improve the discussion in some way, however small. If you are not sure your post adds to the conversation, think over what you want to say and try again later. + +The topics discussed here matter to us, and we want you to act as if they matter to you, too. Be respectful of the topics and the people discussing them, even if you disagree with some of what is being said. + +One way to improve the discussion is by discovering ones that are already happening. Spend time browsing the topics here before replying or starting your own, and you’ll have a better chance of meeting others who share your interests. + +## Be Agreeable, Even When You Disagree + +You may wish to respond to something by disagreeing with it. That’s fine. But remember to criticize ideas, not people. Please avoid any of the following: + +* Name-calling +* Ad hominem attacks +* Responding to a post’s tone instead of its actual content +* Knee-jerk contradiction + +Instead, provide reasoned counter-arguments that improve the conversation. + +## Your Participation Counts + +The conversations we have here set the tone for every new arrival. Help us influence the future of this community by choosing to engage in discussions that make this forum an interesting place to be — and avoiding those that do not. + +The forum we use (Discourse) provides tools that enable the community to collectively identify the best (and worst) contributions: bookmarks, likes, flags, replies, edits, and so forth. Use these tools to improve your own experience, and everyone else’s, too. + +Let’s leave our community better than we found it. + +## If You See a Problem, Flag It + +Moderators have special authority: they are responsible for this forum, but so are you. With your help, moderators can be community facilitators, not just janitors or police. + +When you see bad behavior, don’t reply. 
It encourages bad behavior by acknowledging it, consumes your energy, and wastes everyone’s time. Just flag it. If enough flags accrue, action will be taken, either automatically or by moderator intervention. + +In order to maintain our community, moderators reserve the right to remove any content and any user account for any reason at any time. Moderators do not preview new posts: the moderators and site operators take no responsibility for any content posted by the community. + +## Always Be Civil + +Nothing sabotages a healthy conversation like rudeness: + +* Be civil. Don’t post anything that a reasonable person would consider offensive, abusive, or hate speech. +* Keep it clean. Don’t post anything obscene or sexually explicit. +* Respect each other. Don’t harass or grief anyone, impersonate people, or expose their private information. +* Respect our forum. Don’t post spam or otherwise vandalize the forum. + +These are not concrete terms with precise definitions — avoid even the appearance of any of these things. If you’re unsure, ask yourself how you would feel if your post was featured on the front page of the New York Times. + +This is a public forum, and search engines index these discussions. Keep the language, links, and images safe for family and friends. + +## Keep It Tidy + +Make the effort to put things in the right place, so that we can spend more time discussing and less time cleaning up. + +In brief: + +* Don’t start a topic in the wrong category. +* Don’t cross-post the same thing in multiple topics. +* Don’t post no-content replies. +* Don’t divert a topic by changing it midstream. +* Don’t sign your posts — every post has your profile information attached to it. +* Rather than posting “+1” or “Agreed”, use the Like button. Rather than taking an existing topic in a radically different direction, use Reply as a Linked Topic. + +## Post Only Your Own Stuff + +You may not post anything digital that belongs to someone else without permission. 
You may not post descriptions of, links to, or methods for stealing someone’s intellectual property (e.g. software, video, audio, images, etc.), or for breaking any other law. + +## Powered by You + +This site is operated by your friendly local staff and you, the community. If you have any further questions about how things should work here, open a new topic in the site feedback category and let’s discuss! If there’s a critical or urgent issue that can’t be handled by a meta topic or flag, contact us via the support live chat. + +## Terms of Service + +Yes, legalese is boring, but we must protect ourselves – and by extension, you and your data – against unfriendly folks. We have a Terms of Service describing your (and our) behavior and rights related to content, privacy, and laws. To use this service, you must agree to abide by our TOS. See below for more information on this. + +# Terms of Service (TOS) + +Please read: + +To use this forum, or any other collaboration/communication tool of ThreeFold, you must agree to these terms with ThreeFold, the company that runs the forum or any other tool (FreeFlow Pages and others). + +The company may offer other products and services, under different terms. These terms apply only to use of the forum.
+ +## Important Terms + +These terms include a number of important provisions that affect your rights and responsibilities, such as the disclaimers in [Disclaimers](#disclaimers), limits on the company’s liability to you in [Limits on Liability](#limits-on-liability), your agreement to cover the company for damages caused by your misuse of the forum in [Responsibility for Your Use](#your-responsibility), and an agreement to arbitrate disputes in [Disputes](#disputes). + +## Your Permission to Use the Forum + +Subject to these terms, the company gives you permission to use the forum. Everyone needs to agree to these terms to use the forum. + +## Conditions for Use of the Forum + +Your permission to use the forum is subject to the following conditions: + +1. You must be at least thirteen years old. +2. You may no longer use the forum if the company contacts you directly to say that you may not. +3. You must use the forum in accordance with [Acceptable Use](#acceptable-use) and [Content Standards](#content-standards). + +## Acceptable Use + +1. You may not break the law using the forum. +2. You may not use or try to use another’s account on the forum without their specific permission. +3. You may not buy, sell, or otherwise trade in user names or other unique identifiers on the forum. +4. You may not send advertisements, chain letters, or other solicitations through the forum, or use the forum to gather addresses or other personal data for commercial mailing lists or databases. +5. You may not automate access to the forum, or monitor the forum, such as with a web crawler, browser plug-in or add-on, or other computer program that is not a web browser. You may crawl the forum to index it for a publicly available search engine, if you run one. +6. You may not use the forum to send e-mail to distribution lists, newsgroups, or group mail aliases. +7. You may not falsely imply that you’re affiliated with or endorsed by the company. +8.
You may not hyperlink to images or other non-hypertext content on the forum on other webpages. +9. You may not remove any marks showing proprietary ownership from materials you download from the forum. +10. You may not show any part of the forum on other websites with `<iframe>`. +11. You may not disable, avoid, or circumvent any security or access restrictions of the forum. +12. You may not strain infrastructure of the forum with an unreasonable volume of requests, or requests designed to impose an unreasonable load on information systems underlying the forum. +13. You may not impersonate others through the forum. +14. You may not encourage or help anyone in violation of these terms. + +## Content Standards + +1. You may not submit content to the forum that is illegal, offensive, or otherwise harmful to others. This includes content that is harassing, inappropriate, or abusive. +2. You may not submit content to the forum that violates the law, infringes anyone’s intellectual property rights, violates anyone’s privacy, or breaches agreements you have with others. +3. You may not submit content to the forum containing malicious computer code, such as computer viruses or spyware. +4. You may not submit content to the forum as a mere placeholder, to hold a particular address, user name, or other unique identifier. +5. You may not use the forum to disclose information that you don’t have the right to disclose, like others’ confidential or personal information. + +## Enforcement + +The company may investigate and prosecute violations of these terms to the fullest legal extent. The company may notify and cooperate with law enforcement authorities in prosecuting violations of the law and these terms. + +The company reserves the right to change, redact, and delete content on the forum for any reason. If you believe someone has submitted content to the forum in violation of these terms, [contact us immediately](#contact).
+ +## Your Account + +You must create and log into an account to use some features of the forum. + +To create an account, you must provide some information about yourself. If you create an account, you agree to provide, at a minimum, a valid e-mail address, and to keep that address up-to-date. You may close your account at any time by e-mailing **info@threefold.io**. + +You agree to be responsible for all action taken using your account, whether authorized by you or not, until you either close your account or notify the company that your account has been compromised. You agree to notify the company immediately if you suspect your account has been compromised. You agree to select a secure password for your account, and keep it secret. + +The company may restrict, suspend, or close your account on the forum according to its policy for handling copyright-related takedown requests, or if the company reasonably believes that you’ve broken any rule in these terms. + +## Your Content + +Nothing in these terms gives the company any ownership rights in intellectual property that you share with the forum, such as your account information, posts, or other content you submit to the forum. Nothing in these terms gives you any ownership rights in the company’s intellectual property, either. + +Between you and the company, you remain solely responsible for content you submit to the forum. You agree not to wrongly imply that content you submit to the forum is sponsored or approved by the company. These terms do not obligate the company to store, maintain, or provide copies of content you submit, and to change it, according to these terms. + +Content you submit to the forum belongs to you, and you decide what permission to give others for it. But at a minimum, you license the company to provide content that you submit to the forum to other users of the forum. That special license allows the company to copy, publish, and analyze content you submit to the forum. 
+ +When content you submit is removed from the forum, whether by you or by the company, the company’s special license ends when the last copy disappears from the company’s backups, caches, and other systems. Other licenses you apply to content you submit, such as [Creative Commons](https://creativecommons.org/) licenses, may continue after your content is removed. Those licenses may give others, or the company itself, the right to share your content through the forum again. + +Others who receive content you submit to the forum may violate the terms on which you license your content. You agree that the company will not be liable to you for those violations or their consequences. + +## Your Responsibility + +You agree to indemnify the company from legal claims by others related to your breach of these terms, or breach of these terms by others using your account on the forum. Both you and the company agree to notify the other side of any legal claims for which you might have to indemnify the company as soon as possible. If the company fails to notify you of a legal claim promptly, you won’t have to indemnify the company for damages that you could have defended against or mitigated with prompt notice. You agree to allow the company to control investigation, defense, and settlement of legal claims for which you would have to indemnify the company, and to cooperate with those efforts. The company agrees not to agree to any settlement that admits fault for you or imposes obligations on you without your prior agreement. + +## Disclaimers + +You accept all risks of using the forum and content on the forum. As far as the law allows, the company and its suppliers provide the forum as is, without any warranty whatsoever. + +The forum may hyperlink to and integrate forums and services run by others. The company does not make any warranty about services run by others, or content they may provide. 
Use of services run by others may be governed by other terms between you and the one running the service. + +## Limits on Liability + +Neither the company nor its suppliers will be liable to you for breach-of-contract damages their personnel could not have reasonably foreseen when you agreed to these terms. + +As far as the law allows, the total liability to you for claims of any kind that are related to the forum or content on the forum will be limited to $50. + +## Feedback + +The company welcomes your feedback and suggestions for the forum. See the [Contact](#contact) section below for ways to get in touch with us. + +You agree that the company will be free to act on feedback and suggestions you provide, and that the company won’t have to notify you that your feedback was used, get your permission to use it, or pay you. You agree not to submit feedback or suggestions that you believe might be confidential or proprietary, to you or others. + +## Termination + +Either you or the company may end the agreement written out in these terms at any time. When our agreement ends, your permission to use the forum also ends. + +The following provisions survive the end of our agreement: [Your Content](#your-content), [Feedback](#feedback), [Your Responsibility](#your-responsibility), [Disclaimers](#disclaimers), [Limits on Liability](#limits-on-liability), and [General Terms](#general-terms). + +## Disputes + +The governing law will govern any dispute related to these terms or your use of the forum. + +You and the company agree to seek injunctions related to these terms only in state or federal court in the city for disputes. Neither you nor the company will object to jurisdiction, forum, or venue in those courts. + +Other than to seek an injunction or for claims under the Computer Fraud and Abuse Act, you and the company will resolve any dispute by binding American Arbitration Association arbitration. 
Arbitration will follow the AAA’s Commercial Arbitration Rules and Supplementary Procedures for Consumer Related Disputes. Arbitration will happen in the city for disputes. You will settle any dispute as an individual, and not as part of a class action or other representative proceeding, whether as the plaintiff or a class member. No arbitrator will consolidate any dispute with any other arbitration without the company’s permission. + +Any arbitration award will include costs of the arbitration, reasonable attorneys’ fees, and reasonable costs for witnesses. You and the company may enter arbitration awards in any court with jurisdiction. + +## General Terms + +If a provision of these terms is unenforceable as written, but could be changed to make it enforceable, that provision should be modified to the minimum extent necessary to make it enforceable. Otherwise, that provision should be removed. + +You may not assign your agreement with the company. The company may assign your agreement to any affiliate of the company, any other company that obtains control of the company, or any other company that buys assets of the company related to the forum. Any attempted assignment against these terms has no legal effect. + +Neither the exercise of any right under this Agreement, nor waiver of any breach of this Agreement, waives any other breach of this Agreement. + +These terms embody all the terms of agreement between you and the company about use of the forum. These terms entirely replace any other agreements about your use of the forum, written or not. + +## Contact + +You may notify the company under these terms, and send questions to the company, at **info@threefold.io**. + +The company may notify you under these terms using the e-mail address you provide for your account on the forum, or by posting a message to the homepage of the forum or your account page. + +## Changes + +The company last updated these terms in August 2020, and may update these terms again. 
For updates that contain substantial changes, the company agrees to e-mail you, if you’ve created an account and provided a valid e-mail address. The company may also announce updates with special messages or alerts on the forum. + +Once you get notice of an update to these terms, you must agree to the new terms in order to keep using the forum. \ No newline at end of file diff --git a/collections/collaboration/collaboration_toc.md b/collections/collaboration/collaboration_toc.md new file mode 100644 index 0000000..3a6bc41 --- /dev/null +++ b/collections/collaboration/collaboration_toc.md @@ -0,0 +1,13 @@ +

# Collaboration

+ +ThreeFold strongly believes in the power of open-source projects and community-driven collaboration. The following documentation is ideal for anyone who wants to know how to contribute to the ThreeFold ecosystem. + +To become a farmer, a developer or a sysadmin on the ThreeFold Grid, read the [documentation](../../documentation/documentation.md). + +

## Table of Contents

# ThreeFold Circle Tool + +

## Table of Contents

+ +- [Introduction](#introduction) +- [Overview](#overview) +- [Prerequisites](#prerequisites) +- [How to Use the Circle Tool](#how-to-use-the-circle-tool) +- [Circle Tool Overview](#circle-tool-overview) +- [Dashboard View](#dashboard-view) +- [Profile View](#profile-view) +- [Projects](#projects) +- [Scrum Module on Project](#scrum-module-on-project) + - [Scrum Backlog](#scrum-backlog) + - [Scrum Sprints](#scrum-sprints) +- [More Info](#more-info) + +*** + +## Introduction + +The [__ThreeFold Circle Tool__](https://circles.threefold.me) is our own self-hosted (desktop only) project management tool based on [Taiga](https://www.taiga.io/), an open-source project management tool for cross-functional agile teams. It offers many different project management kits and features, such as the scrum board, kanban board, issues management, and more. + +Our teams at ThreeFold use the Circle Tool to self-manage our tasks, so it is necessary for newly onboarded team members to learn how to use the tool. Unfortunately, we only provide the desktop version of the tool at this moment, since we normally manage our projects on the computer. + +This manual is a beneficial read for anyone: our team members, as well as our community members who are interested in using the Circle Tool for their projects. + +## Overview + +Here is an overview of the tool. + +![ ](./img/taiga.png) + +## Prerequisites + +You need to install the [TF Connect App](../../../documentation/threefold_token/storing_tft/tf_connect_app.md) and create an account on it before you can register and use the Circle Tool. + +## How to Use the Circle Tool + +* Go to [Circle Tool's desktop homepage](https://circles.threefold.me) on your computer as shown below. Click on the '__Login__' button at the top right corner of your screen. + +![ ](./img/circlehome.png) + +* Click on the 'TF Connect' button to log into the Circle Tool using your TF Connect account. 
+ +![ ](./img/tfconnect.png) + +* Fill in your TF Connect username (without the '@' sign) in the provided box, and click on the 'Sign in' button. + +![ ](./img/signin.png) + +* The Circle Tool will ask you to verify your login by clicking the right emoji that is sent to your TF Connect App. + +![ ](./img/emoji.png) + +* Verify your sign-in by logging in to your TF Connect App on your mobile phone. Click on the same emoji that you see on your Circle Tool (desktop). + +![ ](./img/matchemoji.png) + +* Congratulations, you are now officially logged in to the Circle Tool. The Dashboard view is the first thing you will see once you are logged in to the tool. + +![ ](./img/dashboard.png) + +*** + +## Circle Tool Overview + +The Circle Tool tries to make things easy and intuitive for new users, but it’s good to have a quick overview for your first couple of days. + +## Dashboard View + +![ ](./img/dashboard.png) + +Upon login, or whenever you go to your Circle Tool, you’re presented with your dashboard, with quick access to the items you are working on, a list of watched items and shortcuts to your projects. You can always go back to your dashboard by clicking on the Taiga icon on the top bar. + +## Profile View + +![ ](./img/profile.png) + +The Circle Tool has an additional section with a multiproject view where you can find and list everything that is accessible to you. Simply click on your avatar to access your personal profile section, where you can check everything from what your personal bio looks like to people that might have access to it, to all sorts of information on your activities and relevant content. + +## Projects + +![ ](./img/project.png) + +You can access your assigned projects by clicking on the Projects link at the top left of your screen. You can hover to get an interactive shortlist, or click on the link and go to a dedicated page where you can access your projects as well as rearrange them. 
Once you have clicked on a project, you access your default view for that project, which is always the project’s Timeline if you haven’t changed it. + + +## Scrum Module on Project + +![ ](./img/homeproject.png) + +Every Circle project can activate the Scrum module. This also happens automatically if you chose the Scrum template upon project creation. You can find the Scrum module on the sidebar of your project page. + +Scrum is an agile framework for developing, delivering, and sustaining complex products. Although it had an initial emphasis on software development, it has been used in other fields including research, sales, marketing and advanced technologies. + +### Scrum Backlog + +![ ](./img/backlog.png) + +There are various so-called artifacts in Scrum. The top three are the Backlog, the User Stories and the Sprints. They respectively represent what is to be done, ordered by priority and readiness; the pieces of work themselves; and the fixed time periods in which we put selected User Stories to be worked upon and finished. + +### Scrum Sprints + +![ ](./img/sprints.png) + +The Scrum Backlog view will always show a summary view of ongoing or closed Sprints, but teams generally stick to the Sprint Taskboard view when they are focused on getting things done for that Sprint. Click on either the Sprint name or the “Sprint Taskboard” button to access the Sprint Taskboard. Open Sprints appear as shortcuts through the left navigation pane’s Scrum icon. + + +## More Info + +You can read more about the Circle Tool (Taiga), scrum, sprints, and other project management features in Taiga's official documentation [here](https://community.taiga.io/). Happy Project Managing! 
\ No newline at end of file diff --git a/collections/collaboration/collaboration_tools/collaboration_tools.md b/collections/collaboration/collaboration_tools/collaboration_tools.md new file mode 100644 index 0000000..2060d09 --- /dev/null +++ b/collections/collaboration/collaboration_tools/collaboration_tools.md @@ -0,0 +1,18 @@ +

# ThreeFold's Collaboration Tools

+ +In this section, we will introduce powerful collaboration tools utilized by ThreeFold, such as the Circle Tool and the Website Deployer. + +These tools play a crucial role in enhancing as well as simplifying collaboration and communication at ThreeFold. The tools we use at ThreeFold are chosen for their open-source design and their focus on ease of comprehension and use. + +

## Table of Contents

+ +- [Circle Tool](./circle_tool.md) + - ThreeFold's project management tool, built with Taiga, an open-source project management platform designed to facilitate collaboration and streamline workflows for teams. It provides a comprehensive set of features and tools to help teams plan, track, and manage their projects effectively. + +- [Website Deployer](./website_tool.md) + - ThreeFold's website builder tool, built with Zola, a static site generator (SSG) that empowers developers and content creators to build and manage websites efficiently. Zola is an open-source framework written in the Rust programming language, known for its performance, security, and reliability. + +- [Website Link Checker](./website_link_checker.md) + - The ThreeFold website link checker is a Python wrapper around muffet that checks for specific link errors on live websites. + + \ No newline at end of file diff --git a/collections/collaboration/collaboration_tools/img/backlog.png b/collections/collaboration/collaboration_tools/img/backlog.png new file mode 100644 index 0000000..2cc0f41 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/backlog.png differ diff --git a/collections/collaboration/collaboration_tools/img/circlehome.png b/collections/collaboration/collaboration_tools/img/circlehome.png new file mode 100644 index 0000000..a1ca18b Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/circlehome.png differ diff --git a/collections/collaboration/collaboration_tools/img/clone.png b/collections/collaboration/collaboration_tools/img/clone.png new file mode 100644 index 0000000..083e4ad Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/clone.png differ diff --git a/collections/collaboration/collaboration_tools/img/codeexample.png b/collections/collaboration/collaboration_tools/img/codeexample.png new file mode 100644 index 0000000..de6f82d 
Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/codeexample.png differ diff --git a/collections/collaboration/collaboration_tools/img/column.png b/collections/collaboration/collaboration_tools/img/column.png new file mode 100644 index 0000000..6f33e41 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/column.png differ diff --git a/collections/collaboration/collaboration_tools/img/config.png b/collections/collaboration/collaboration_tools/img/config.png new file mode 100644 index 0000000..c615104 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/config.png differ diff --git a/collections/collaboration/collaboration_tools/img/dashboard.png b/collections/collaboration/collaboration_tools/img/dashboard.png new file mode 100644 index 0000000..b0c8ce2 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/dashboard.png differ diff --git a/collections/collaboration/collaboration_tools/img/done.png b/collections/collaboration/collaboration_tools/img/done.png new file mode 100644 index 0000000..43f760c Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/done.png differ diff --git a/collections/collaboration/collaboration_tools/img/emoji.png b/collections/collaboration/collaboration_tools/img/emoji.png new file mode 100644 index 0000000..a710876 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/emoji.png differ diff --git a/collections/collaboration/collaboration_tools/img/folderdetail.png b/collections/collaboration/collaboration_tools/img/folderdetail.png new file mode 100644 index 0000000..019b1ee Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/folderdetail.png differ diff --git a/collections/collaboration/collaboration_tools/img/fork.png b/collections/collaboration/collaboration_tools/img/fork.png new file mode 100644 index 0000000..9fadc1e Binary files /dev/null and 
b/collections/collaboration/collaboration_tools/img/fork.png differ diff --git a/collections/collaboration/collaboration_tools/img/gitpages.png b/collections/collaboration/collaboration_tools/img/gitpages.png new file mode 100644 index 0000000..7b49f85 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/gitpages.png differ diff --git a/collections/collaboration/collaboration_tools/img/homeproject.png b/collections/collaboration/collaboration_tools/img/homeproject.png new file mode 100644 index 0000000..70f0ca0 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/homeproject.png differ diff --git a/collections/collaboration/collaboration_tools/img/indexmd.png b/collections/collaboration/collaboration_tools/img/indexmd.png new file mode 100644 index 0000000..0a746b0 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/indexmd.png differ diff --git a/collections/collaboration/collaboration_tools/img/logo.png b/collections/collaboration/collaboration_tools/img/logo.png new file mode 100644 index 0000000..b945b27 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/logo.png differ diff --git a/collections/collaboration/collaboration_tools/img/mastodon.png b/collections/collaboration/collaboration_tools/img/mastodon.png new file mode 100644 index 0000000..cb24871 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/mastodon.png differ diff --git a/collections/collaboration/collaboration_tools/img/mastodonmd.jpeg b/collections/collaboration/collaboration_tools/img/mastodonmd.jpeg new file mode 100644 index 0000000..e34b84a Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/mastodonmd.jpeg differ diff --git a/collections/collaboration/collaboration_tools/img/matchemoji.png b/collections/collaboration/collaboration_tools/img/matchemoji.png new file mode 100644 index 0000000..24cad6c Binary files /dev/null and 
b/collections/collaboration/collaboration_tools/img/matchemoji.png differ diff --git a/collections/collaboration/collaboration_tools/img/navbar.png b/collections/collaboration/collaboration_tools/img/navbar.png new file mode 100644 index 0000000..d322317 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/navbar.png differ diff --git a/collections/collaboration/collaboration_tools/img/navigate.png b/collections/collaboration/collaboration_tools/img/navigate.png new file mode 100644 index 0000000..5294595 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/navigate.png differ diff --git a/collections/collaboration/collaboration_tools/img/placeholder.png b/collections/collaboration/collaboration_tools/img/placeholder.png new file mode 100644 index 0000000..b39dbfd Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/placeholder.png differ diff --git a/collections/collaboration/collaboration_tools/img/preview.png b/collections/collaboration/collaboration_tools/img/preview.png new file mode 100644 index 0000000..803dddc Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/preview.png differ diff --git a/collections/collaboration/collaboration_tools/img/profile.png b/collections/collaboration/collaboration_tools/img/profile.png new file mode 100644 index 0000000..ee4a3bb Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/profile.png differ diff --git a/collections/collaboration/collaboration_tools/img/project.png b/collections/collaboration/collaboration_tools/img/project.png new file mode 100644 index 0000000..96285b5 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/project.png differ diff --git a/collections/collaboration/collaboration_tools/img/publishtools.jpeg b/collections/collaboration/collaboration_tools/img/publishtools.jpeg new file mode 100644 index 0000000..cffcdfe Binary files /dev/null and 
b/collections/collaboration/collaboration_tools/img/publishtools.jpeg differ diff --git a/collections/collaboration/collaboration_tools/img/scoopsuccess.png b/collections/collaboration/collaboration_tools/img/scoopsuccess.png new file mode 100644 index 0000000..16e7eea Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/scoopsuccess.png differ diff --git a/collections/collaboration/collaboration_tools/img/signin.png b/collections/collaboration/collaboration_tools/img/signin.png new file mode 100644 index 0000000..fedebec Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/signin.png differ diff --git a/collections/collaboration/collaboration_tools/img/sprints.png b/collections/collaboration/collaboration_tools/img/sprints.png new file mode 100644 index 0000000..9fc2747 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/sprints.png differ diff --git a/collections/collaboration/collaboration_tools/img/success.png b/collections/collaboration/collaboration_tools/img/success.png new file mode 100644 index 0000000..9818dd1 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/success.png differ diff --git a/collections/collaboration/collaboration_tools/img/taiga.png b/collections/collaboration/collaboration_tools/img/taiga.png new file mode 100644 index 0000000..834927a Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/taiga.png differ diff --git a/collections/collaboration/collaboration_tools/img/tfconnect.png b/collections/collaboration/collaboration_tools/img/tfconnect.png new file mode 100644 index 0000000..0479e95 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/tfconnect.png differ diff --git a/collections/collaboration/collaboration_tools/img/threecolumns.png b/collections/collaboration/collaboration_tools/img/threecolumns.png new file mode 100644 index 0000000..e6ff5e8 Binary files /dev/null and 
b/collections/collaboration/collaboration_tools/img/threecolumns.png differ diff --git a/collections/collaboration/collaboration_tools/img/threecolumnsdone.png b/collections/collaboration/collaboration_tools/img/threecolumnsdone.png new file mode 100644 index 0000000..72b7a54 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/threecolumnsdone.png differ diff --git a/collections/collaboration/collaboration_tools/img/threefolders.png b/collections/collaboration/collaboration_tools/img/threefolders.png new file mode 100644 index 0000000..ba9aa74 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/threefolders.png differ diff --git a/collections/collaboration/collaboration_tools/img/twocolumns.png b/collections/collaboration/collaboration_tools/img/twocolumns.png new file mode 100644 index 0000000..45ea049 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/twocolumns.png differ diff --git a/collections/collaboration/collaboration_tools/img/twocolumnsdone.png b/collections/collaboration/collaboration_tools/img/twocolumnsdone.png new file mode 100644 index 0000000..eb49df2 Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/twocolumnsdone.png differ diff --git a/collections/collaboration/collaboration_tools/img/vscode.png b/collections/collaboration/collaboration_tools/img/vscode.png new file mode 100644 index 0000000..c9cda3a Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/vscode.png differ diff --git a/collections/collaboration/collaboration_tools/img/websitetool.jpeg b/collections/collaboration/collaboration_tools/img/websitetool.jpeg new file mode 100644 index 0000000..bc4315e Binary files /dev/null and b/collections/collaboration/collaboration_tools/img/websitetool.jpeg differ diff --git a/collections/collaboration/collaboration_tools/website_link_checker.md 
b/collections/collaboration/collaboration_tools/website_link_checker.md new file mode 100644 index 0000000..788c5e0 --- /dev/null +++ b/collections/collaboration/collaboration_tools/website_link_checker.md @@ -0,0 +1,87 @@ +

# Website Link Checker

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [How the Program Exits](#how-the-program-exits) +- [Program Arguments](#program-arguments) +- [How to Use the Program](#how-to-use-the-program) + - [With Python](#with-python) + - [With Docker](#with-docker) + - [With GitHub Action](#with-github-action) + +*** + +## Introduction + +This is a Python program that calls muffet on a whole website and then filters and displays the HTTP errors. + +> Note: It can take a couple of minutes to run if the website has a lot of URLs. + +## How the Program Exits + +The program exits with error code 1 if at least one error specified with the --errors flag is found. Otherwise it exits with code 0. Note that errors set with --warnings will always exit with code 0. + +## Program Arguments + +* url + * The URL to scan. Please include https:// or http://. (e.g. https://google.com) +* -h, --help + * show this help message and exit +* -e ERRORS [ERRORS ...], --errors ERRORS [ERRORS ...] + * Specify one, many or all error codes to be filtered (e.g. -e 404, -e 403 404, -e all). Use -e all to show all errors. +* -w WARNINGS [WARNINGS ...], --warnings WARNINGS [WARNINGS ...] + * Specify one, many or all error codes to be filtered as warnings (e.g. -w 404, -w 403 404, -w all). Use -w all to show all warnings. + +## How to Use the Program + +### With Python + +* Clone the repository + * ``` + git clone https://github.com/threefoldfoundation/website-link-checker + ``` +* Change directory + * ``` + cd website-link-checker + ``` +* Run the program + * ``` + python website-link-checker.py https://example.com -e 404 -w all + ``` + +### With Docker + +You can use the following command to run the website link checker with Docker: + +``` +docker run ghcr.io/threefoldfoundation/website-link-checker https://example.com -e 404 -w all +``` + +### With GitHub Action + +The website link checker can be run as an action (e.g. `action.yml`) set in `.github/workflows` of a GitHub repository. 
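The exit-code rule described in "How the Program Exits" can be sketched in a few lines of Python. This is a hypothetical illustration of the convention, not the program's actual internals; the helper name `exit_code` is invented for the example.

```python
# Hypothetical sketch of the checker's exit-code convention:
# exit 1 only when a code passed via -e/--errors was found;
# codes passed via -w/--warnings are reported but never block.
def exit_code(found, errors, warnings):
    blocking = found & errors  # warnings never influence the result
    return 1 if blocking else 0

print(exit_code({404, 403}, errors={404}, warnings={403}))  # 1: blocking 404 found
print(exit_code({403}, errors={404}, warnings={403}))       # 0: 403 is only a warning
```

This mirrors the rule that a run finding only `--warnings` codes still exits 0, so a CI pipeline fails only on the error codes you mark as blocking.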
+ +The following action example runs every time there is a push on the development branch, and also every Monday at 6:00 AM as set by the cron job. + +``` +name: link-checker-example +on: + push: + branches: [ development ] + schedule: + - cron: '0 6 * * 1' # e.g. 6:00 AM each Monday + +jobs: + job_one: + name: Check for Broken Links + runs-on: ubuntu-latest + steps: + - name: Check for Broken Links + id: link-report + uses: docker://ghcr.io/threefoldfoundation/website-link-checker:latest + with: + args: 'https://example.com -e 404 -w all' +``` + diff --git a/collections/collaboration/collaboration_tools/website_tool.md b/collections/collaboration/collaboration_tools/website_tool.md new file mode 100644 index 0000000..c3b997f --- /dev/null +++ b/collections/collaboration/collaboration_tools/website_tool.md @@ -0,0 +1,338 @@ +

# Zola Website Deployer

+ +

## Table of Contents

+ +- [Overview](#overview) + - [What is Zola Framework?](#what-is-zola-framework) +- [Prerequisites](#prerequisites) + - [Important Links](#important-links) +- [Installing Zola Onto Your Machine](#installing-zola-onto-your-machine) + - [Important Links](#important-links-1) +- [Get Started](#get-started) + - [Fork ThreeFold's Website Template to Your Github Account](#fork-threefolds-website-template-to-your-github-account) + - [Clone the Forked Repository Locally](#clone-the-forked-repository-locally) + - [Open and Edit Your Cloned Zola Template with a Code Editor](#open-and-edit-your-cloned-zola-template-with-a-code-editor) + - [Template Guide](#template-guide) + - [Navigating the Template](#navigating-the-template) + - [Top Navbar Made Easy](#top-navbar-made-easy) + - [Replace Logo with your Own logo](#replace-logo-with-your-own-logo) + - [Important Links](#important-links-2) +- [Customization](#customization) + - [Some Tutorials on Markdown](#some-tutorials-on-markdown) + - [Creating A Single-Column Page Section](#creating-a-single-column-page-section) + - [Adding Image](#adding-image) + - [Creating Page Section with Multiple Columns](#creating-page-section-with-multiple-columns) + - [Important Links](#important-links-3) + - [Build and Preview Your Website Locally](#build-and-preview-your-website-locally) + - [Check the Website Links](#check-the-website-links) + - [Important Links](#important-links-4) +- [Publish Your Website (Via Github Pages)](#publish-your-website-via-github-pages) + - [Publish your Github page](#publish-your-github-page) +- [Important Links](#important-links-5) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Overview + +**ThreeFold Website Tool** is a customized open-source Zola-based web deployment framework and static website template repository that is available for anyone to use. + +At ThreeFold, we use the Website Tool to deploy all of our web presences. For example, [**www.threefold.io**](https://threefold.io). 
+ +### What is Zola Framework? +[**Zola**](https://www.getzola.org/) is a static site generator (SSG), similar to Hugo, Pelican, and Jekyll (for a comprehensive list of SSGs, please see Jamstack). It is written in Rust and uses the Tera template engine, which is similar to Jinja2, Django templates, Liquid, and Twig. Content is written in CommonMark, a strongly defined, highly compatible specification of Markdown. + +While you can also publish a static website using Zola alone, we at ThreeFold have customized the framework and created a static website template that makes it even easier for anyone to build a website by simply cloning the template and filling it with their own website content. + + + +## Prerequisites + +- GitHub Account +- Zola Framework +- VS Code, or any code editor of choice +- Markdown language knowledge +- Basic Command Line (Terminal) Knowledge + +In order to deploy and publish a website using the ThreeFold Website Tool, you would need to have an account on GitHub (to store your website data in a GitHub repository), as well as to have the Zola framework installed on your machine. + +### Important Links + +> - [How to Sign Up for a GitHub Account](https://docs.github.com/en/get-started/signing-up-for-github/signing-up-for-a-new-github-account) +> - [Download VS Code](https://code.visualstudio.com/download) +> - [Learn Markdown Language](https://www.markdownguide.org/) +> - [Command Line Cheat Sheet](https://cs.colby.edu/maxwell/courses/tutorials/terminal/) + + + +## Installing Zola Onto Your Machine + +To install Zola on your machine, simply go to your terminal and run the following command: + +**macOS (brew)**: + +``` +$ brew install zola +``` +Please make sure you have [Brew](https://brew.sh/) installed on your macOS machine before installing Zola. + +**Windows (scoop)**: + +``` +$ scoop install zola +``` +Please make sure you have [Scoop](https://scoop.sh/) installed on your Windows machine before installing Zola. 
You should see a similar screen as below when successful:

![](./img/scoopsuccess.png)

For more details on Zola installation, and installation guidelines for other operating systems, please read the [**Zola Installation Manual**](https://www.getzola.org/documentation/getting-started/installation/).

### Important Links
> - [How to Install Brew (MacOS)](https://brew.sh/)
> - [How to Install Scoop (Windows)](https://github.com/ScoopInstaller/Scoop#readme)
> - [Zola Installation for other OS](https://www.getzola.org/documentation/getting-started/installation/)
> - [Command Line Cheat Sheet](https://cs.colby.edu/maxwell/courses/tutorials/terminal/)

> Next Step: [Template Guide: How to use the TF Web Template](#template-guide)

## Get Started

Now that you have successfully installed Zola on your machine, you are ready to create and build your own website using ThreeFold Website Tool.

In order to do that, you need to fork [**ThreeFold's Website Template**](https://github.com/threefoldfoundation/www_examplezola) to your own Github account, clone it, and open it locally on your computer using VS Code or your code editing program of choice.

### Fork ThreeFold's Website Template to Your Github Account

Our team has created an HTML/CSS/Markdown-based template repository on Github, free for anyone to use. To start working on your project, simply fork [this repository](https://github.com/threefoldfoundation/www_examplezola) to your own Github account by clicking the 'fork' button on the repository, and rename it with your website's name.

![](./img/fork.png)

### Clone the Forked Repository Locally

After you forked the template, you can [clone the repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) to your local computer and start working on it.
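For reference, the clone step from the terminal looks roughly like this (a sketch; the account and repository names are placeholders you should replace with your own):

```shell
# Clone your fork of the website template (placeholder account/repo names):
git clone https://github.com/YOUR_ACCOUNT/yourwebsitename.git

# Move into the cloned folder; this is where all editing happens:
cd yourwebsitename

# Print the full path so you can find the folder again later:
pwd
```

You can also clone via a Git GUI such as GitHub Desktop if you prefer to avoid the command line.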
Please remember the directory / folder where you cloned the repository on your computer, to make it easier to locate and edit it later.

![](./img/clone.png)

### Open and Edit Your Cloned Zola Template with a Code Editor

Once the template is forked and cloned, open your code editor and start working on your website. The content editing process and procedure are explained in more detail on the next pages.

![](./img/vscode.png)

### Template Guide

On this page you will find an introduction to the [TF Web Template](https://github.com/threefoldfoundation/www_examplezola) and how to navigate the different template components that enable you to edit the template with your own content.

### Navigating the Template

All editable content of your website is found under the **‘content’** folder.
Each page of your website is a **markdown (.md) file.**

Each page, and all the images on that page, goes into its own folder under the content/ folder.
![](./img/folderdetail.png)
For example, here, my homepage (index.md) is put into the **content/home** folder.
![](./img/indexmd.png)

If I want to edit the homepage of my website, I would open the following file:

```
content/home/index.md
```

and start editing.

![](./img/vscode.png)

### Top Navbar Made Easy

![](./img/navbar.png)

The template is designed so that every time you make a new page folder, the website automatically generates a new navbar item using the name of each folder you created.

Based on the navbar picture above, this means that I have created 3 separate content subfolders, each with an index.md file in it, called Home, ThreeFoldFeed and GetServer.

### Replace Logo with your Own Logo

![](./img/logo.png)

To replace the logo, **add your own logo image to the ‘home’ folder.**

Then go to the **_index.md** file and replace the **logo_path** value: images/yourlogoimagename.jpg

![](./img/placeholder.png)

### Important Links

> - [TF Web Template](https://github.com/threefoldfoundation/www_examplezola)

## Customization

We have designed the template to accommodate different page layout styles, such as placeholders, footers, headers, left indentation, and right indentation.

All you need to do is replace the texts and images using Markdown, and use the layout style you would like for your page. Don’t know how to write Markdown? Here’s a [**complete markdown syntax guide**](https://www.markdownguide.org/basic-syntax/) for you to begin with.

Happy experimenting!

### Some Tutorials on Markdown

### Creating A Single-Column Page Section

Since there is only one column, every single-column section begins only with the row indentation syntax (style, margin, padding).
```
{% row(style="" margin="" padding="t") %}
```

For example:

```
{% row(style="center" margin="narrow" padding="top") %}
```

and ends with

```
{% end %}
```

### Adding Image

To add an image to your page, please use

```
![alt_text](yourimagename.png)
```

The Result:

![](./img/mastodon.png)

### Creating Page Section with Multiple Columns

For a section with more than one column, we need to configure the row and column syntax.
For example, sometimes you would like a page that places your texts and buttons in the left column and an image in the right column.

What you need to do is add:

```
|||
```

in between your text and images for every column you want to create.

For example, this page consists of two columns (left and right):

![](./img/twocolumns.png)

The Result:

![](./img/twocolumnsdone.png)

You can also add more than two columns, like this page section consisting of 3 columns.

The code:

![](./img/threecolumns.png)

The Result:

![](./img/threecolumnsdone.png)

### Important Links
> - [Learn Markdown Language](https://www.markdownguide.org/)

### Build and Preview Your Website Locally

After customizing your website, you might want to review and build it locally before publishing it online. On this page you will find tutorials on how to preview and deploy your website.

To preview your website locally, simply open the terminal via your code editor and type in:

```
./build.sh
```

so that the framework starts building your website.

Then:

```
./start.sh
```

so that the framework starts serving your website preview locally. Please make sure you are located in the right website folder, for example *user/doc/mywebsitename*, before typing the commands above.

The preview won't be built successfully if you run the commands in the wrong folder.

When successful, it will give you a link to a local preview of your website.
Go ahead and copy and paste the URL into your web browser to preview your website locally.

![](./img/success.png)

And, congratulations! You just built your website locally!

![](./img/preview.png)

### Check the Website Links

When you are in the main directory of your Zola website, you can run the following command to check the links of the complete website locally:

```
zola check
```

Once your website is online, you can also use the [Website Link Checker](./website_link_checker.md).

### Important Links

> - [Command Line Cheat Sheet](https://cs.colby.edu/maxwell/courses/tutorials/terminal/)

## Publish Your Website (Via Github Pages)

Since we're using a Github repository to store our website content, the easiest way to publish our website is through Github Pages, using our own domain.

Once all commits have been pushed back to your Github repository online, you can start publishing your website.

The first thing you need to do is go back to your code editor and find the **config.toml** file in your website repo.
Edit the **base_url** in the **config.toml** file to your own domain.

![](./img/config.png)

Save all your changes and push all your commits to the origin again.

### Publish your Github page

Next, go to your Github repo **Settings** and select **Pages** in the left navigation sidebar. Add your own custom domain to start publishing your website.

![](./img/gitpages.png)

And you are done! Your website will be published; it takes only a minute or so to complete the process. Refresh the page, and you will see a link to your newly published website.
![](./img/done.png)

## Important Links
> - [Pushing Changes to Github](https://docs.github.com/en/desktop/contributing-and-collaborating-using-github-desktop/making-changes-in-a-branch/pushing-changes-to-github)
> - [Github Pages How-to](https://docs.github.com/en/pages)
> - [Adding Custom Domain to my Github Page](https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/about-custom-domains-and-github-pages)

## Questions and Feedback

If you have any questions or feedback, you can write a post on the [ThreeFold Forum](http://forum.threefold.io/).
\ No newline at end of file diff --git a/collections/collaboration/contribute.md b/collections/collaboration/contribute.md new file mode 100644 index 0000000..698a477 --- /dev/null +++ b/collections/collaboration/contribute.md @@ -0,0 +1,119 @@
<h1> How to Contribute to the Threefold Manual </h1>
+ +
<h2>Table of Contents</h2>
+ +- [Quick Method: Create an Issue](#quick-method-create-an-issue) +- [Advanced Method: Create a Pull Request](#advanced-method-create-a-pull-request) + - [Main Steps to Add Content](#main-steps-to-add-content) + - [How to View the mdbook Locally](#how-to-view-the-mdbook-locally) + - [How to Install git and mdbook](#how-to-install-git-and-mdbook) + - [Markdown File Template (Optional)](#markdown-file-template-optional) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Quick Method: Create an Issue + +If you've found some issues or typos in the ThreeFold Manual, feel free to [create an issue on the ThreeFold Manual repository](https://github.com/threefoldtech/info_grid/issues) to let us know. We will then be able to fix it as soon as possible. + +The steps are simple: + +* Go to the [issues section of ThreeFold Manual](https://github.com/threefoldtech/info_grid/issues) repository on GitHub +* Click on `New Issue` +* Choose an appropriate title +* Explain briefly the issue you found +* Click `Submit New Issue` + + + +## Advanced Method: Create a Pull Request + +If you found an issue in the manual and you wish to fix the issue yourself, you can always fork the repository and propose the changes in a pull request. We present the main steps in this section as well as further details on how to proceed efficiently. + + + +### Main Steps to Add Content + + + +We present here the main steps to add content to the Threefold Manual by forking the repository [`threefoldtech/info_grid`](https://github.com/threefoldtech/info_grid) to your own Github account. 
+ +* Go to the Threefold Manual repository: [https://github.com/threefoldtech/info_grid](https://github.com/threefoldtech/info_grid) +* Fork the Development branch + * On the top right corner, click `Fork -> Create a new fork` +* Make changes in the forked repository + * To add a new section + * Add a new Markdown file to the [src](https://github.com/threefoldtech/info_grid/blob/development/src) directory + * Add the path of the Markdown file to [SUMMARY](https://github.com/threefoldtech/info_grid/blob/development/src/SUMMARY.md). + * To modify an existing section: + * Make the changes directly in the Markdown file +* Ask for a pull request + * In the forked repository, click `Contribute -> Open pull request` +* Once the pull request is accepted, the changes of the Development branch will be available here: [https://www2.manual.grid.tf](https://www2.manual.grid.tf) +* The Threefold team will regularly update the [Development branch](https://github.com/threefoldtech/info_grid) to the [Master branch](https://github.com/threefoldtech/info_grid/tree/master) + * The new content will thus be available here: [https://www.manual.grid.tf](https://www.manual.grid.tf) + +Note: You can update your forked repository by clicking `Sync fork -> Update branch`. + + + +### How to View the mdbook Locally + + + +Once you've forked the TF Manual repository to your Github account, you might want to see the changes you've made before asking for a pull request. This will ensure that the final output is exactly what you have in mind. + +To do so, you simply need to clone the forked repository on your local computer and serve the mdbook on a given port. 
+ +The steps are the following: + +* In the terminal, write the following line to clone the forked `info_grid` repository: + * ``` + git clone https://github.com/YOUR_GIT_ACCOUNT/info_grid + ``` + * make sure to write your own Github account in the URL +* To deploy the mdbook locally, first go to the **info_grid** directory: + * ``` + cd info_grid + ``` +* Then write the following line. It will open the manual automatically. + * ``` + mdbook serve -o + ``` + * Note that, by default, the URL is the following, using port `3000`, `http://localhost:3000/` +* You should now be able to see your changes. + + + +### How to Install git and mdbook + + + +To install git, follow the steps provided [here](https://github.com/git-guides/install-git). + +To install mdbook, you can download the executable binaries available on the [GitHub Releases Page](https://github.com/rust-lang/mdBook/releases). Simply download the binary for your platform (Windows, macOS, or Linux) and extract the archive. The archive contains an mdbook executable which you can run to build your books. To make it easier to run, you can put the path to the binary into your PATH. + +For more information, read the [mdbook Documentation](https://rust-lang.github.io/mdBook/guide/installation.html). + + + +### Markdown File Template (Optional) + + + +Here are some suggestions on how to organize a Markdown file (`.md`) when you submit contents to the ThreeFold Manual. This is not necessary, but it will ease the whole process. + +* Title: Heading 1 (`#` in Markdown syntax) +* Main sections: Heading 2 (`##` in Markdown syntax) +* For Markdown files that contain a *Table of Contents*: + * Use `
<h1>
` instead of `#` for the _Title_ , and `
<h2>
` instead of `##` for the _Table of Contents_. + * This quickens editing when creating and updating the ToC ([read this for more details](https://marketplace.visualstudio.com/items?itemName=yzhang.markdown-all-in-one#table-of-contents)). + * Other heading labels should use standard Markdown headings (`##`, etc.). +* If your text reaches heading level 4, you might want to separate your file into two or more files. + * A long article can be spread in many subsections. + + + +## Questions and Feedback + +If you have any questions or if you would like to share some feedback, let us know in this [Threefold forum post](https://forum.threefold.io/t/new-grid-manual/3783). \ No newline at end of file diff --git a/collections/collaboration/development_cycle.md b/collections/collaboration/development_cycle.md new file mode 100644 index 0000000..c321367 --- /dev/null +++ b/collections/collaboration/development_cycle.md @@ -0,0 +1,36 @@ +The development cycle is explained below: + +![Untitled presentation (1)](https://user-images.githubusercontent.com/8425762/170034170-7247a737-9d99-481d-9289-88d361275043.png) + + + +Devnet: + - continuous development for active version + - can be reset + - should be against a branch named with the version being developed (example: 10.2.3) + +Nextnet: + - for parallel version development + - oftentimes, the next major version while development has bugfixes + - should be against a branch named with the version being developed (example: 10.3.1) + +QAnet: + - once development is complete, each component is tagged with an rc (example: 10.2.3-rc1) and the new version to be tested is deployed on QAnet + - this net is for INTERNAL QA + - Here, we expect most bugs to be reported + - Once QA signs off, it moves to testnet + +Testnet: + - tag as beta release (example: 10.2.3-rc3-beta) + - This is for the community and stability testing + - should be almost completely stable + +Mainnet: + - if testnet has no blockers for 2 weeks, community votes to move 
to mainnet
  - everything is merged to main
  - final release is tagged (example: 10.2.3)

## GOAL:
Move away from this model toward ephemeral environments, instead of maintaining long-lived ones; for now, the environments above remain available for simplicity.
\ No newline at end of file diff --git a/collections/collaboration/development_process.md b/collections/collaboration/development_process.md new file mode 100644 index 0000000..ae6d64b --- /dev/null +++ b/collections/collaboration/development_process.md @@ -0,0 +1,422 @@
<h1> Development Process </h1>
Our project development process is characterized by agility, collaboration, and, most importantly, respect. We firmly believe in harnessing the collective ingenuity of our team, recognizing that each individual contributes invaluable insights to our codebase. Our development process is managed entirely on Github, using Github projects.
<h2>Table of Contents</h2>
+ +- [Quality Assurance (QA) Process](#quality-assurance-qa-process) + - [QA Responsibilities](#qa-responsibilities) + - [Daily Standups](#daily-standups) + - [Provide Test Plans](#provide-test-plans) + - [Test Execution](#test-execution) + - [Test Documentation](#test-documentation) + - [Verification and Closure](#verification-and-closure) + - [Cross-Environment Testing](#cross-environment-testing) + - [Bug Assessment Meetings (BAM)](#bug-assessment-meetings-bam) + - [Additional Testing Types](#additional-testing-types) + - [Expectations for QA Leads](#expectations-for-qa-leads) + - [Test Planning](#test-planning) + - [Test Strategy](#test-strategy) + - [Review and Closure](#review-and-closure) + - [Communication](#communication) + - [QA Verification and Testing](#qa-verification-and-testing) + - [Testplan](#testplan) + - [Verification Process](#verification-process) + +*** + +## Product Definition on Home + + +`Home` repo serves a special role in the organization, it's the starting point of all development. + +- It links to all products & components +- Put only stories, identified with tag `type_story` in the home repo + + +To streamline our development workflow, we have adopted the GitHub-style projects framework, with all repositories linked to the ThreeFold Grid (tfgrid) product (e.g., version 3.6.0). + +- Various views, such as StoryCards for a high-level overview, repository-specific views, and prioritized views, enhance project visibility. +- All repositories are managed within a centralized project, ensuring unified control and coordination. +- Milestones, aligned with semantic versioning, serve as a means to categorize and organize issues, providing versioning per component. +- Each product is clearly outlined in a dedicated project section within the "home" repository. +- The home page in the home repository serves as a hub linking to individual product pages. +- Products are associated with relevant components slated for the upcoming release. 
- Product release milestones are clearly marked on the product page.
- Release notes, accessible through each product, offer a historical overview with links to specific components used in each release.
- Interlinked relationships between products and components, as well as links to third-party products with specified version numbers, provide comprehensive tracking.
- Components are meticulously monitored within the same product project.
- A commitment to [semantic versioning](https://semver.org) is mandated for all components.

## Github Project

When creating a new project, please use the grid template project (private repository) available at `https://github.com/orgs/threefoldtech/projects/205`.

### Github project columns

- `No Status`
  - A stakeholder or project owner suggests a feature/story/bug to be resolved in this release
- `Accepted`
  - The project owner accepts the item; the issue will be worked on, and they commit to solving it within the release
  - Once accepted, escalation is needed if it cannot be done in time
- `In progress`
  - The issue is being worked on
- `Blocked`
  - We are using the Kanban way of thinking: something in this swimlane needs to be resolved asap; it can be e.g. a question
  - Means the issue cannot be completed; attention from e.g. stakeholders is needed
- `Verification`: work is being verified
  - The team delivered the feature/bug/story
  - Stakeholders need to agree that the issue has been resolved appropriately
  - The project owner can never go from 'Verification' to 'Done' without approval from stakeholders (often represented by the QA team)
- `Done`
  - Everyone (project owner and stakeholders) agreed that the issue was resolved properly

#### Project Special Columns

Some projects require special columns like the following:

- `Pending Review`: Work is done, waiting for review; no need for daily progress updates.
- `Pending Deployment`: If deployment is needed for QA testing on the staging instance.
### Repository

Creating a repository involves establishing a foundation for collaborative development. Follow these guidelines to ensure consistency and best practices in repository creation.

#### Naming

- Choose a clear and descriptive name for the repository.
- Use lowercase letters and hyphens for improved readability.

#### README

- Include a comprehensive README.md file.
- Provide essential information about the project, including setup instructions, dependencies, and usage guidelines.

#### License

- Include a LICENSE file specifying the project's licensing terms; threefoldtech is using the [Apache2 License](https://github.com/threefoldtech/info_grid/blob/master/LICENSE).

#### Github Templates

- Use Github templates to provide a proper template for issues: [bug_report](./bug_report.md) or [feature request](./feature_request.md)
- Use Github templates to provide a proper template for [pull requests](./PULL_REQUEST_TEMPLATE.md)

#### Expected Workflows

- Set up a continuous integration (CI) pipeline using a tool like GitHub Actions.
- Include linting, tests, and code quality checks in the CI process.
- Set up automated deployment to the staging and production servers
- Building docker images
- Building flists
- Pushing to the hub
- Publishing packages

### Issues

Consider the following for effective issue reporting:

1. **Title:**
   - Provide a clear and concise title that summarizes the issue.

2. **Description:**
   - Offer a detailed description of the issue, including what you expected to happen and what actually occurred.
   - Provide steps to reproduce the issue, if possible.
   - Include any error messages received.

3. **Environment:**
   - Specify the environment in which the issue occurred (e.g., operating system, browser, version).

4. **Attachments:**
   - Attach relevant files or screenshots to help visualize the problem.

5.
**Issue Type:**
   - Label the issue with an appropriate type (e.g., bug, feature request, question).

6. **Priority:**
   - If applicable, assign a priority level to indicate the urgency of the issue.

7. **Version Information:**
   - Include information about the version of the software or application where the issue was encountered.

8. **Labels:**
   - Apply relevant labels to categorize the issue (e.g., priority levels, type of issue).

9. **Reproducibility:**
   - Clearly state whether the issue is reproducible and under what conditions.

10. **Additional Context:**
    - Provide any additional context that might help in understanding and addressing the issue.

11. **Assigned:**
    - If known, assign the issue to the responsible team member or developer.

12. **Discussion:**
    - Engage in discussions with the development team and other stakeholders to gather insights and potential solutions.

By following these guidelines, you contribute to a more efficient issue resolution process, enabling developers and the team to address concerns promptly and effectively.

#### Issue Labels

See [issue labels](issue_labels.md)

#### Branch Names in Issue Titles

Each issue carries the name of a branch in its title, in square brackets, as [development_something]. The 'development' prefix can be skipped since it is the default, so the same branch may also be written as [something], but don't forget that the actual branch name is still development_something.
If no branch is specified, the issue is to be fixed/developed on development.

#### Milestones for Issues

We use milestones for version numbers, e.g. `1.4.2` means this issue is going to be part of the release of `1.4.2` of the component.

> It's very important that nobody works on any issue in milestones not part of the global project plan

- No milestone means the issue still needs to be sorted
- A version number, e.g. `1.4.2`, means the issue is planned for that component release

So issues with no milestone can only be in one condition: new and not yet sorted out as to where (which repo) they belong.

### Branching

We encourage collaborative branching.
Meaning any group of people working within the same scope is highly encouraged to work on the same branch, trusting and communicating with one another.

Our branching strategy is:

- `master` is the last stable release
- `master_$hotfix` is only for solving BLOCKING issues which are in the field on the last release
  - short-lived
- `development` is where all stories branch from, and the one that receives hotfixes if needed
- `development_$storyname`
  - branch for a story
  - always updated from development(_hotfixes)
- `development_$storyname_$reviewname`
  - short-lived branch for when reviews are needed for a story
- `development_hotfixes` holds short-lived hotfix(es) to allow people to review before they are put on development
  - while it exists, everyone should update from either development or development_hotfixes
  - development_hotfixes is always newer than development
- `integration` is a branch used to integrate development branches
  - never develop on it; it's for verifying and running tests

We have branches for new features/disruptive changes. These have a prefix of `development_`.

Each project and story should define which branches to use and the branching strategy.

There should never be any branch on the system that cannot be traced back to a story in the `home` repo.
The story title carries the branch name in between [].

### Pull Requests

When developers or a group initiate work on a separate branch and seek input from their peers, it is recommended to promptly open a `draft pull request` for seamless communication. Upon completion of the work, opening a pull request signals that the work is:

- Complete as Defined in the Project: The work aligns with the predefined goals and specifications outlined in the project.
- Well Tested: Thorough testing has been conducted to ensure the reliability and functionality of the code.
- Well Documented: Comprehensive documentation accompanies the code, aiding in understanding and future maintenance.
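As a sketch of the branching and draft-PR flow described above (the branch name and PR title are hypothetical, and the GitHub CLI `gh` is assumed to be installed and authenticated):

```shell
# Branch off development for a new story (hypothetical story name):
git checkout development
git pull
git checkout -b development_mystory

# ...commit work as it progresses, then push the branch:
git push -u origin development_mystory

# Open a draft pull request early so peers can follow along:
gh pr create --draft --base development \
  --title "[development_mystory] My story" \
  --body "Work in progress, early feedback welcome"
```

When the work is complete, tested, and documented, the draft can be marked ready for review from the PR page or with `gh pr ready`.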
+ +#### Pull Requests Best Practices + +When creating pull requests (PRs), adhere to the following best practices for effective collaboration: + +- Early Draft PRs: Open a draft pull request as soon as work begins on a different branch. This allows for ongoing communication and collaboration with peers throughout the development process. +- Timely Updates: Regularly update the PR as new changes are made to keep reviewers informed of progress. +- Clear and Concise Title: Use a clear and concise title that summarizes the purpose or goal of the pull request. +- Detailed Description: Provide a comprehensive description of the changes made, the problem solved, and any relevant context. This aids reviewers in understanding the purpose and impact of the changes. +- Link to Issues: If the pull request addresses specific issues, link them to provide additional context and traceability. +- Reviewers and Assignees: Assign the appropriate reviewers and, if applicable, assignees to ensure that the right people are involved in the review process. +- Complete Work: Ensure that the work is complete as defined in the project requirements. Address any outstanding issues before marking the PR as ready for review. +- Thorough Testing: Verify that the code has undergone thorough testing. Include details about the testing strategy and results in the PR description. +- Documentation: Confirm that the changes are well-documented. Documentation should not only explain how the code works but also guide future developers on its usage and maintenance. +- Address Feedback: Be responsive to feedback from reviewers. Address comments and concerns promptly to facilitate a smooth review process. +- Code Style and Standards: Ensure that the code follows established style guidelines and coding standards. Consistent formatting contributes to maintainability. +- Status Checks: Ensure that automated status checks, such as continuous integration (CI) tests, pass successfully before merging. 
By adhering to these best practices, you contribute to a collaborative and efficient development process, fostering a culture of high-quality code and effective communication within the team.

### Commits

Clear and informative commit messages are essential for understanding the history of a project. Follow these guidelines to create meaningful commit messages.

#### Message Structure

##### Header

- Start with a concise one-line summary.
- Use present-tense verbs (e.g., "Add," "Fix," "Update") to describe the action.

##### Body

- Optionally, provide a more detailed explanation.
- Break long explanations into bullet points if needed.

#### Be Descriptive

- Clearly describe the purpose of the commit.
- Include information about why the change is necessary.

#### Reference Issues

- If the commit is related to an issue, reference it in the message.
- Use keywords like "Fixes," "Closes," or "Resolves."

#### Consistency

- Be consistent with your writing style and formatting.
- Use the imperative mood consistently throughout.

#### Examples

##### Good Example

Here's a good example of a commit message:

```
Add user authentication feature

- Implement user login functionality
- Enhance user registration form
- Fixes #123
```

##### Bad Example

```
Changes
- Bug fix
- Update
- Important changes
```

### Merge or Rebase

If you're the sole developer on the branch, you can use rebase; if more people are collaborating together, use merge.

### Merge or Squash Merge

Squash only when it makes the history cleaner. A feature branch is a good example, because it is often short-lived, small, and authored by one developer.

On the other hand, use regular merge for a big feature co-authored by multiple developers, e.g.
long-lived branches, please be aware of the [Disadvantages of squash merges](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/configuring-pull-request-merges/about-merge-methods-on-github#squashing-your-merge-commits) + +### Releasing Process + +Before tagging a release, open a branch named with the intended version e.g 10.5.x with the quality level + - alpha: doesn't have all the features, but you can use the features in there + - beta: no major, or blocking bugs. All features working for the customer as promised, no blocking bugs + - production: no major, no blocking, no minor bugs and the documentation is ready + +check the [release process document](release_process.md) for more information + +#### Blocking + +Issues categorized as blockers include: + +- Inability of the customer to access the functionality as described in the manual. +- Stability concerns that impede progress, particularly instances where the system crashes. +- Security issues that act as barriers to further development. +- Stability issues that hinder smooth operation. +- Performance concerns labeled as blockers when they prevent continuation. +- Performance issues classified as major when they allow for continued work. + +#### Progress Reporting + +In teams operating remotely, complete transparency is of utmost importance. + +Visibility into development progress is crucial and is best achieved through the use of storycards and issues. + +To facilitate clear communication, commenting daily is a critical aspect of our process. We advocate for the following format, which aids in asynchronous communication: + +``` + +## Work Completed: +Summarize the tasks successfully finished in relation to the issue. Provide specific details to ensure clarity. + +## Work in Progress (WIP): +Detail ongoing efforts and remaining tasks related to this issue. Clearly outline items currently being worked on and those still needing attention. 
+ +## Investigation and Solution: +If no work has been completed or is in progress, elaborate on the investigative work undertaken to address the issue. Provide insights into the problem, and if a solution was reached, include it. +``` + +For issues or stories labeled with `priority_critical`, provide at least two updates per day to keep stakeholders informed. + +Including an Estimated Time of Arrival (ETA) in the comments is essential. While it serves as an estimation subject to change with new findings, it provides a valuable projection of completion. + + + +# Quality Assurance (QA) Process + +QA plays a crucial role in delivering high-quality software. This document outlines responsibilities, expectations, and best practices. + +## QA Responsibilities + +### Daily Standups + +- Attend daily standups for progress updates, issue discussions, and coordination. + +### Provide Test Plans + +- Collaborate on test plans for each sprint. + +### Test Execution + +- Execute test plans manually and through automated testing. +- Log and prioritize defects. +- Track nightly tests. + +### Test Documentation + +- Maintain updated test documentation. + +### Verification and Closure + +- Verify issues and user stories before closure. + +### Cross-Environment Testing + +- Conduct test runs across different environments. + +### Bug Assessment Meetings (BAM) + +- Conduct BAM sessions twice weekly to address community feedback, covering both the `test_feedback` repository and active projects. + +### Additional Testing Types + +- Expand responsibilities to include various testing types such as: + - Performance testing + - Security testing + - Compatibility testing + - Usability testing + - Regression testing + +## Expectations for QA Leads + +### Test Planning + +- Lead the creation of detailed test plans. + +### Test Strategy + +- Define a testing strategy, emphasizing automation.
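The automation emphasis above can be made concrete with a small nightly regression gate. The sketch below is illustrative only: `fetch_node_statuses`, its return shape, and the failure threshold are hypothetical stand-ins for whatever client and criteria the team actually automates against in its workflows.

```python
# Illustrative nightly regression gate (hypothetical API and threshold).

def fetch_node_statuses():
    # Stand-in stub for a real query against the environment under test;
    # a nightly workflow would replace this with actual client calls.
    return [
        {"id": 1, "status": "up"},
        {"id": 2, "status": "up"},
        {"id": 3, "status": "down"},
    ]

def regression_gate(statuses, max_down_ratio=0.5):
    """Return a pass/fail verdict based on the share of nodes reporting down."""
    down = sum(1 for node in statuses if node["status"] == "down")
    ratio = down / len(statuses)
    return {"down_ratio": ratio, "passed": ratio <= max_down_ratio}

verdict = regression_gate(fetch_node_statuses())
assert verdict["passed"], "nightly regression gate failed"
```

A check like this can run on a schedule and fail the build when the verdict does not pass, which gives QA leads an automated signal to track alongside manual verification.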
+ +### Review and Closure + +- Review and close issues, ensuring alignment with the test plan. + +### Communication + +- Facilitate communication between QA and development teams. + +## QA Verification and Testing + +### Testplan + +- Provide a comprehensive test plan, authored exclusively by the QA lead, that serves as the source of truth for the verification process. + +### Verification Process + +- Verify stories in a two-step process: + - As soon as the story is moved to the In Verification column, the QA team can pick up the issue; they need to log their scenarios and executions and link to the test case in the test plan. + - The QA lead can then verify that it is aligned with the original requirements and was properly verified before closing. +- QA leads need to test the main features by themselves. +- Automate regression testing through GitHub workflows, and any other means needed. \ No newline at end of file diff --git a/collections/collaboration/feature_request.md b/collections/collaboration/feature_request.md new file mode 100644 index 0000000..3390fde --- /dev/null +++ b/collections/collaboration/feature_request.md @@ -0,0 +1,16 @@ +--- +name: Feature request +about: Suggest an idea for this project +title: '' +labels: '' +assignees: '' + +--- + +## Is your feature request related to a problem? Please describe + +A clear and concise description of what the problem is. Ex. I'm always frustrated when \[...] + +## Describe the solution you'd like + +A clear and concise description of what you want to happen. \ No newline at end of file diff --git a/collections/collaboration/issue_labels.md b/collections/collaboration/issue_labels.md new file mode 100644 index 0000000..e771c03 --- /dev/null +++ b/collections/collaboration/issue_labels.md @@ -0,0 +1,24 @@ +# Issue Labels Usage Guidelines + +Kindly refrain from using labels other than the specified ones.
+ +## Priority-based Labels + +- `priority_critical`: This label indicates that the issue requires immediate attention, with a maximum resolution timeframe of the same day. + - If the assigned developer deems this timeline unachievable, they must escalate the issue immediately. + - The term "critical" implies that the resolution is of utmost urgency, and everyone involved should prioritize it until it is resolved. + +- `priority_major`: This label designates issues that are very urgent and should be addressed within a minimal timeframe, typically within 1-2 days but no more than 3 days. + - If the developer anticipates challenges in meeting this timeframe, they are required to escalate the issue promptly. +- `priority_minor`: Issues labeled as such are given a lower priority and are typically positioned towards the end of the sprint cycle. + +## Type Labels + +- `type_bug` +- `type_feature` +- `type_question` +- `type_story`: This label is used to distinguish story cards, providing an overview of a use case the team aims to achieve. + +### For monorepos + +Repository owners are free to create labels per component in the monorepo for easier repo management. \ No newline at end of file diff --git a/collections/collaboration/release_process.md b/collections/collaboration/release_process.md new file mode 100644 index 0000000..a931a1b --- /dev/null +++ b/collections/collaboration/release_process.md @@ -0,0 +1,53 @@ +

# Release Process

## Table of Contents
+ +- [GitHub projects](#github-projects) +- [Releasing the Grid](#releasing-the-grid) + - [Environments](#environments) + - [Versions](#versions) + - [Branching/Tagging](#branchingtagging) + - [Blocking Bugfixes for Mainnet](#blocking-bugfixes-for-mainnet) + +*** + +## GitHub projects + +- We are going to use the new GitHub-style projects to manage the development process; all repos are linked against a TFGrid product release, e.g. 3.6.0 +- You can have different views, e.g. a StoryCards-only view for a high-level overview, or views by repository or priority +- We will drive all repos from that one project +- We should use milestones (semantic versions) to sort the issues + +## Releasing the Grid + + +### Environments + +- Grid releases are no longer linked to an environment in a pipeline; while this makes sense in lots of scenarios, it won't scale +- An environment hosts a specific grid version based on the components it has +- In the future, we should be able to create ephemeral environments, e.g. deploy this grid version on these X number of nodes + +### Versions + +- Releasing should follow [semantic versioning](https://semver.org/) +- Every grid release is linked to a number of components. For example, TFGrid 3.6.1 is linked to terraform 1.0.0, portal 2.0.1, etc. + + +### Branching/Tagging + +- As mentioned above, releases should follow semantic versioning. The tag itself is prefixed with a `v`.
So tags look like `vx.y.z` or `vx.y.z-rc1`. +- Devnet(s) should host development branches, and once a branch reaches a specific quality, it gets verified on that branch + - Note that this is not always true: a devnet can also host production components +- Once verification happens and everything goes OK, we should tag a release of a component +- Once all components are ready, a grid release is complete and we can host that release on whatever environment +- Container image tags must not contain the `v` prefix + +### Blocking Bugfixes for Mainnet + +- In case of a blocking bug happening only on mainnet, we branch out of the tag on the affected component repository +- Do the fix on that branch +- Host a new grid release on a testing environment to verify +- Tag the new component +- Merge to trunk +- Create a new grid release +- Host that grid release (its components) on mainnet diff --git a/collections/collaboration/testing/img/add_test.png b/collections/collaboration/testing/img/add_test.png new file mode 100644 index 0000000..1a8983d Binary files /dev/null and b/collections/collaboration/testing/img/add_test.png differ diff --git a/collections/collaboration/testing/img/deploy_evdc.png b/collections/collaboration/testing/img/deploy_evdc.png new file mode 100644 index 0000000..a662b35 Binary files /dev/null and b/collections/collaboration/testing/img/deploy_evdc.png differ diff --git a/collections/collaboration/testing/img/evdc_home_.jpg b/collections/collaboration/testing/img/evdc_home_.jpg new file mode 100644 index 0000000..b05af42 Binary files /dev/null and b/collections/collaboration/testing/img/evdc_home_.jpg differ diff --git a/collections/collaboration/testing/img/evdc_test.png b/collections/collaboration/testing/img/evdc_test.png new file mode 100644 index 0000000..c8baa69 Binary files /dev/null and b/collections/collaboration/testing/img/evdc_test.png differ diff --git a/collections/collaboration/testing/img/help_us_test_.jpg b/collections/collaboration/testing/img/help_us_test_.jpg new
file mode 100644 index 0000000..a94fa94 Binary files /dev/null and b/collections/collaboration/testing/img/help_us_test_.jpg differ diff --git a/collections/collaboration/testing/img/my_test.png b/collections/collaboration/testing/img/my_test.png new file mode 100644 index 0000000..125fea8 Binary files /dev/null and b/collections/collaboration/testing/img/my_test.png differ diff --git a/collections/collaboration/testing/img/project_overview.png b/collections/collaboration/testing/img/project_overview.png new file mode 100644 index 0000000..5a4263e Binary files /dev/null and b/collections/collaboration/testing/img/project_overview.png differ diff --git a/collections/collaboration/testing/img/report_test.png b/collections/collaboration/testing/img/report_test.png new file mode 100644 index 0000000..998f5e4 Binary files /dev/null and b/collections/collaboration/testing/img/report_test.png differ diff --git a/collections/collaboration/testing/img/run_test.png b/collections/collaboration/testing/img/run_test.png new file mode 100644 index 0000000..00ebda3 Binary files /dev/null and b/collections/collaboration/testing/img/run_test.png differ diff --git a/collections/collaboration/testing/img/test_finish.png b/collections/collaboration/testing/img/test_finish.png new file mode 100644 index 0000000..4f9d5c4 Binary files /dev/null and b/collections/collaboration/testing/img/test_finish.png differ diff --git a/collections/collaboration/testing/img/test_home.png b/collections/collaboration/testing/img/test_home.png new file mode 100644 index 0000000..2fa3f7a Binary files /dev/null and b/collections/collaboration/testing/img/test_home.png differ diff --git a/collections/collaboration/testing/img/test_list.png b/collections/collaboration/testing/img/test_list.png new file mode 100644 index 0000000..8071301 Binary files /dev/null and b/collections/collaboration/testing/img/test_list.png differ diff --git a/collections/collaboration/testing/img/test_run.png 
b/collections/collaboration/testing/img/test_run.png new file mode 100644 index 0000000..e90517e Binary files /dev/null and b/collections/collaboration/testing/img/test_run.png differ diff --git a/collections/collaboration/testing/img/testlodge_invitation.png b/collections/collaboration/testing/img/testlodge_invitation.png new file mode 100644 index 0000000..f6bb9da Binary files /dev/null and b/collections/collaboration/testing/img/testlodge_invitation.png differ diff --git a/collections/collaboration/testing/testing_readme.md b/collections/collaboration/testing/testing_readme.md new file mode 100644 index 0000000..aebf6db --- /dev/null +++ b/collections/collaboration/testing/testing_readme.md @@ -0,0 +1,52 @@ +

# Testing the ThreeFold Grid: Ensuring Reliability and User Feedback

## Table of Contents
+ +- [Introduction](#introduction) +- [Automation Testing](#automation-testing) +- [Manual Testing](#manual-testing) +- [Covered Tests](#covered-tests) + +*** + +## Introduction + +With each release of a newer version of the ThreeFold Grid, the ThreeFold Community plays a vital role in testing the product components and providing constructive feedback to the engineering team. This article explores the testing strategy employed by ThreeFold, which includes both automation and manual testing, and highlights the covered functionality tested by the procedures. + +## Automation Testing +The internal QA team conducts automation testing, where they automate various test scenarios and run them in nightly builds. This approach helps identify the status of the code and allows for the early detection of functionality and regression issues. + +## Manual Testing +The QA team, along with the grid testing community, performs manual testing. [TestLodge](./testlodge.html) is the chosen platform for managing test plans, test cases, and test runs. By joining TestLodge as a user, individuals can actively participate in running test use cases and reporting any issues encountered during product deployment. Issues can be reported by creating an issue on [ThreeFold's Test Feedback repository](https://github.com/threefoldtech/test_feedback/issues) on Github. + +## Covered Tests +The ThreeFold Grid 3 encompasses a wide range of functionalities that are thoroughly tested to ensure their reliability and performance. Some of the covered functionalities include: + +- Compute + - Virtual machine + - Caprover + - Kubernetes + +- Network + - WebGateway + - Planetary Network + +- Storage + - Quantum Safe Storage System (Quantum Safe Filesystem) + - 0-DB + - S3 minio + +- TFChain + - Portal + - IPFS + +- Farming + - Create Farm + - Farm Management + +- TwinServer v2 +- TerraForm Deployments + +Testing is a crucial aspect of the ThreeFold Grid's development process. 
By actively involving the ThreeFold Community in testing the product components and leveraging automation and manual testing approaches, the engineering team ensures the reliability and quality of Grid 3. + +With TestLodge as the testing platform, users can contribute to the testing efforts by running test use cases and reporting any issues encountered. Through collaborative testing, the ThreeFold Grid continues to evolve and deliver a robust and efficient infrastructure for users worldwide. \ No newline at end of file diff --git a/collections/collaboration/testing/testlodge.md b/collections/collaboration/testing/testlodge.md new file mode 100644 index 0000000..fe5abf6 --- /dev/null +++ b/collections/collaboration/testing/testlodge.md @@ -0,0 +1,95 @@ +

# How to Use TestLodge for Testing the ThreeFold Grid

## Table of Contents
+ +- [Introduction](#introduction) +- [Getting Started on TestLodge](#getting-started-on-testlodge) + - [Joining the TF GRID Project on TestLodge by Invitation](#joining-the-tf-grid-project-on-testlodge-by-invitation) +- [Accessing the TF GRID 3.x Projects](#accessing-the-tf-grid-3x-projects) + - [Project Overview](#project-overview) + - [Creating Your Own Personal Test Run](#creating-your-own-personal-test-run) + +*** + +## Introduction + +After each release of a newer version of the ThreeFold Grid, we encourage the ThreeFold Community to participate in testing the grid's product components and provide valuable feedback to our engineering team. To facilitate this process, we have adopted TestLodge as our QA and testing platform. TestLodge allows us to efficiently manage test plans, test cases, and test runs for our products. By joining TestLodge as a user, you can assist us in running test cases and reporting any issues encountered during our product deployment processes. + +## Getting Started on TestLodge + +### Joining the TF GRID Project on TestLodge by Invitation + +To become one of our testers on TestLodge, please request an invitation by joining our official [TF Grid Tester +Telegram Group](https://t.me/joinchat/R75FxI_6J6tgn1jK) and sending a personal message to the group's moderator, providing your email address. + +Once you receive an invitation, check your email for further instructions and create an account on TestLodge. This will grant you access to the TF GRID Project on TestLodge. + +## Accessing the TF GRID 3.x Projects + +After successfully creating your account, you can access the TF GRID 3.x Project from your Testlodge dashboard. Simply click on the project to begin the testing process. + +### Project Overview + +Inside the project, you will find an overview that displays the project's testing environment. 
Here's a brief description of the project's content: + +- Total Test Plans: +Indicates the number of test plans or products being tested in this project. + +- Total Requirement Docs: +Represents the amount of testing documentation provided for each test within the project. + +- Total Test Suites: +Displays the number of individual test use cases for each product being tested. These test suites are the procedures you will follow as a user/tester. + +- Total Test Runs: +Reflects the total number of testing rounds conducted by users within the project. Each tester has their own Test Run, which serves as a testing dashboard for reporting test results. To get started with testing the TF Grid Test Suites, you need to create your own Test Run by using your name as the title. + +### Creating Your Own Personal Test Run +To create your personal Test Run, follow these steps: + +1. Click on the "Test Runs" tab in the top navigation bar and select "New Test Run." + +![](./img/test_run.png) + +2. Provide your name as the test name and select "eVDC Deployer" as your test suite since it is a test run for eVDC Deployer. Click on "Select Test Suites and Cases" to view the details of the use cases you want to test. + +![](./img/evdc_test.png) + + +3. On the "Select Test Suites and Cases" page, choose the "Deploy a new eVDC" test suite as your Test Suite. This suite includes the different use cases required to deploy an eVDC. + +![](./img/deploy_evdc.png) + +4. Click "Add Test Run" to complete the registration of your new test run. + +![](./img/add_test.png) + + +5. You will see a list of all created test runs, including your own. Click on the test run you just created to access your test run profile. + +![](./img/my_test.png) + + +6. In your test run profile, you will find a summary and a list of the test suites you need to run. + +![](./img/test_list.png) + +7. Click on the "Deploy eVDC" test suite from the list and select "Run Test" to begin testing. 
+ +![](./img/run_test.png) + +8. Proceed to the eVDC Deployer and commence your test. + +![](./img/evdc_home_.jpg) + +9. Provide your remarks in the provided comment box and click "Pass," "Fail," or "Skip" based on the result of your test run to provide feedback to the ThreeFold QA Team. + +![](./img/report_test.png) + +10. Repeat the previous step to complete all the test cases. + +11. Go back to the ‘test runs’ page to see the overview of all test runs, and make sure that you completed your own test runs as shown below. + +![](./img/test_finish.png) + +12. Thank you for completing test runs for the ThreeFold Grid Project! You can now create an issue on [ThreeFold's Test Feedback repository](https://github.com/threefoldtech/test_feedback/issues) on GitHub, and report to our development teams about your test findings and feedback. \ No newline at end of file diff --git a/collections/dashboard/dashboard.md b/collections/dashboard/dashboard.md index 991b189..6623f7b 100644 --- a/collections/dashboard/dashboard.md +++ b/collections/dashboard/dashboard.md @@ -41,3 +41,12 @@ You can access the ThreeFold Dashboard on different TF Chain networks. - Regarding browser support, we're only supporting Google Chrome browser (and thus Brave browser) at the moment with more browsers to be supported soon. - Deploys one thing at a time. - Might take some time to deploy a solution like Peertube, so you should wait a little bit until it's fully running. + +## Dashboard Backups + +If the main Dashboard URLs are not working for any reason, the following URLs can be used. These backup URLs are fully independent of the main Dashboard URLs shown above.
+ +- [https://dashboard.02.dev.grid.tf](https://dashboard.02.dev.grid.tf) for Dev net +- [https://dashboard.02.qa.grid.tf](https://dashboard.02.qa.grid.tf) for QA net +- [https://dashboard.02.test.grid.tf](https://dashboard.02.test.grid.tf) for Test net +- [https://dashboard.02.grid.tf](https://dashboard.02.grid.tf) for Main net \ No newline at end of file diff --git a/collections/dashboard/deploy/applications.md b/collections/dashboard/deploy/applications.md index ca16e19..81b63e2 100644 --- a/collections/dashboard/deploy/applications.md +++ b/collections/dashboard/deploy/applications.md @@ -18,6 +18,7 @@ Easily deploy your favourite applications on the ThreeFold grid with a click of - [ownCloud](../solutions/owncloud.md) - [Peertube](../solutions/peertube.md) - [Presearch](../solutions/presearch.md) +- [Static Website](../solutions/static_website.md) - [Subsquid](../solutions/subsquid.md) - [Taiga](../solutions/taiga.md) - [Umbrel](../solutions/umbrel.md) diff --git a/collections/dashboard/deploy/dashboard_node_finder.png b/collections/dashboard/deploy/dashboard_node_finder.png new file mode 100644 index 0000000..bafd53f Binary files /dev/null and b/collections/dashboard/deploy/dashboard_node_finder.png differ diff --git a/collections/dashboard/deploy/deploy.md b/collections/dashboard/deploy/deploy.md index a96decc..c32436a 100644 --- a/collections/dashboard/deploy/deploy.md +++ b/collections/dashboard/deploy/deploy.md @@ -5,12 +5,11 @@ Here you will find everything related to deployments on the ThreeFold grid. 
This - Checking the cost of a deployment using [Pricing Calculator](./pricing_calculator.md) - Finding a node to deploy on using the [Node Finder](./node_finder.md) - Deploying your desired workload from [Virtual Machines](../solutions/vm_intro.md), [Orchestrators](./orchestrators.md), or [Applications](./applications.md) -- Renting your own node on the ThreeFold grid from [Dedicated Machines](./dedicated_machines.md) - Consulting [Your Contracts](./your_contracts.md) on the TFGrid - Finding or publishing Flists from [Images](./images.md) - Updating or generating your SSH key from [SSH Keys](./ssh_keys.md) - ![](../img/sidebar_2.png) +![](../img/dashboard_deploy.png) *** @@ -20,7 +19,6 @@ Here you will find everything related to deployments on the ThreeFold grid. This - [Pricing Calculator](./pricing_calculator.md) - [Node Finder](./node_finder.md) - [Virtual Machines](../solutions/vm_intro.md) - [Orchestrators](./orchestrators.md) -- [Dedicated Machines](./dedicated_machines.md) - [Applications](./applications.md) - [Your Contracts](./your_contracts.md) - [Images](./images.md) diff --git a/collections/dashboard/deploy/node_finder.md b/collections/dashboard/deploy/node_finder.md index dd93347..a1007d7 100644 --- a/collections/dashboard/deploy/node_finder.md +++ b/collections/dashboard/deploy/node_finder.md @@ -2,39 +2,119 @@

## Table of Contents
-- [Nodes](#nodes) -- [GPU Support](#gpu-support) +- [Overview](#overview) +- [Filters](#filters) +- [Node Details](#node-details) +- [Gateway Nodes](#gateway-nodes) +- [Dedicated Nodes](#dedicated-nodes) + - [Reservation](#reservation) + - [Billing \& Pricing](#billing--pricing) + - [Discounts](#discounts) +- [GPU Nodes](#gpu-nodes) + - [GPU Support](#gpu-support) + - [GPU Support Links](#gpu-support-links) *** -## Nodes +## Overview -The Node Finder page provides a more detailed view for the nodes available on the ThreeFold grid With detailed information and statistics about any of the available nodes. +The Node Finder page provides a more detailed view for the nodes available on the ThreeFold grid with detailed information and statistics about nodes. -![](../img/nodes.png) +![](../img/dashboard_node_finder.png) -You can get a node with the desired specifications using the filters available in the nodes page. +## Filters -![](../img/nodes_filters.png) +You can use the filters to narrow your search and find a node with the desired specifications. -You can see all of the node details by clicking on a node record. +![](../img/dashboard_node_finder_filters_1.png) -![](../img/nodes_details.png) +![](../img/dashboard_node_finder_filters_2.png) -## GPU Support +You can use the toggle buttons to filter your search. -![GPU support](../img/gpu_filter.png) +- Dedicated nodes +- Gateways nodes +- GPU nodes +- Rentable nodes -- A new filter for GPU supported node is now available on the Nodes page. -- GPU count -- Filtering capabilities based on the model / device +You can choose a location for your node, with filters such as region and country. This can be highly useful for edge cloud projects. -On the details pages is shown the card information and its status (`reserved` or `available`) also the ID that’s needed to be used during deployments is easily accessible and has a copy to clipboard button. 
+Filtering nodes by their status (up, down, standby) can also improve your search. -![GPU details](../img/gpu_details.png) +If your deployment has some minimum requirements, you can easily filter relevant nodes with the different resource filters. -Here’s an example of how it looks in case of reserved +## Node Details -![GPU details](../img/gpu_details_reserved.png) +You can see all of the node details when you click on its row. -The TF Dashboard is where to reserve the nodes the farmer should be able to set the extra fees on the form and the user also should be able to reserve and get the details of the node (cost including the extrafees, GPU informations). +![](../img/dashboard_node_finder_node_view.png) + +Note that the network speed test displayed in the Node Finder is updated every 6 hours. + +## Gateway Nodes + +To see only gateway nodes, enable **Gateways** in the filters. + +![](../img/dashboard_node_finder_gateways.png) + +## Dedicated Nodes + +Dedicated machines are 3Nodes that can be reserved and rented entirely by one user. The user can thus reserve an entire node and use it exclusively to deploy solutions. This feature is ideal for users who want to host heavy deployments with the benefits of high reliability and cost effectiveness. + +To see only dedicated nodes, enable **Dedicated Nodes** in the filters. + +![](../img/dashboard_node_finder_dedicated.png) + +### Reservation + +When you have decided which node to reserve, you can easily rent it from the Node Finder page. + +To reserve a node, simply click on `Reserve` on the node row. + +![](../img/dashboard_node_finder_dedicated_reserve.png) + +To unreserve a node, simply click on `Unreserve` on the node row. + +![](../img/dashboard_node_finder_dedicated_unreserve.png) + +Note that once you've rented a dedicated node that has a GPU, you can deploy GPU workloads. + +### Billing & Pricing + +- Once a node is rented, there is a fixed charge billed to the tenant regardless of deployed workloads. 
+- Any subsequent NodeContract deployed on a node where a rentContract is active (and the same user is creating the nodeContracts) can be excluded from billing (apart from public ip and network usage). +- Billing rates are calculated hourly on the TFGrid. + - While some of the documentation mentions a monthly price, the chain expresses pricing per hour. The monthly price shown within the manual is offered as a convenience to users, as it provides a simple way to estimate costs. + +### Discounts + +- Received Discounts for renting a node on TFGrid internet capacity + - 50% for dedicated node (TF Pricing policies) + - A second level discount up to 60% for balance level see [Discount Levels](../../../knowledge_base/cloud/pricing/staking_discount_levels.md) +- Discounts are calculated every time the grid bills by checking the available TFT balance on the user wallet and seeing if it is sufficient to receive a discount. As a result, if the user balance drops below the treshold of a given discount, the deployment price increases. + +## GPU Nodes + +To see only nodes with GPU, enable **GPU Node** in the filters. + +![](../img/dashboard_node_finder_gpu.png) + +This will filter nodes and only show nodes with GPU. You can see several information such as the model of the GPU and a GPU score. + +![](../img/dashboard_node_finder_gpu2.png) + +You can click on a given GPU node and see the GPU details. + +![](../img/dashboard_node_finder_gpu3.png) + +The ID that’s needed to be used during deployments is easily accessible and has a button to copy to the clipboard. + +### GPU Support + +To use a GPU on the TFGrid, users need to rent a dedicated node. Once they have rented a dedicated node equipped with a GPU, users can deploy workloads on their dedicated GPU node. + + + +### GPU Support Links + +The ThreeFold Manual covers many ways to use a GPU node on the TFGrid. Read [this section](../../system_administrators/gpu/gpu_toc.md) to learn more. 
\ No newline at end of file diff --git a/collections/dashboard/img/.done b/collections/dashboard/img/.done deleted file mode 100644 index d8eefe3..0000000 --- a/collections/dashboard/img/.done +++ /dev/null @@ -1,3 +0,0 @@ -dashboard_tc.png -dashboard_portal_terms_conditions.png -profile_manager1.png diff --git a/collections/dashboard/img/0_bootstrap.png b/collections/dashboard/img/0_Bootstrap.png similarity index 100% rename from collections/dashboard/img/0_bootstrap.png rename to collections/dashboard/img/0_Bootstrap.png diff --git a/collections/dashboard/img/minting.png b/collections/dashboard/img/Minting.png similarity index 100% rename from collections/dashboard/img/minting.png rename to collections/dashboard/img/Minting.png diff --git a/collections/dashboard/img/monitoring.png b/collections/dashboard/img/Monitoring.png similarity index 100% rename from collections/dashboard/img/monitoring.png rename to collections/dashboard/img/Monitoring.png diff --git a/collections/dashboard/img/ssh_key.png b/collections/dashboard/img/SSH_Key.png similarity index 100% rename from collections/dashboard/img/ssh_key.png rename to collections/dashboard/img/SSH_Key.png diff --git a/collections/dashboard/img/dashboard_T&C.png b/collections/dashboard/img/dashboard_T&C.png new file mode 100644 index 0000000..6173f1a Binary files /dev/null and b/collections/dashboard/img/dashboard_T&C.png differ diff --git a/collections/dashboard/img/dashboard_balances.png b/collections/dashboard/img/dashboard_balances.png new file mode 100644 index 0000000..6251033 Binary files /dev/null and b/collections/dashboard/img/dashboard_balances.png differ diff --git a/collections/dashboard/img/dashboard_deploy.png b/collections/dashboard/img/dashboard_deploy.png new file mode 100644 index 0000000..7aa56ef Binary files /dev/null and b/collections/dashboard/img/dashboard_deploy.png differ diff --git a/collections/dashboard/img/dashboard_node_finder.png b/collections/dashboard/img/dashboard_node_finder.png 
new file mode 100644 index 0000000..bafd53f Binary files /dev/null and b/collections/dashboard/img/dashboard_node_finder.png differ diff --git a/collections/dashboard/img/dashboard_node_finder_dedicated.png b/collections/dashboard/img/dashboard_node_finder_dedicated.png new file mode 100644 index 0000000..d1e817a Binary files /dev/null and b/collections/dashboard/img/dashboard_node_finder_dedicated.png differ diff --git a/collections/dashboard/img/dashboard_node_finder_dedicated_reserve.png b/collections/dashboard/img/dashboard_node_finder_dedicated_reserve.png new file mode 100644 index 0000000..61a7cdc Binary files /dev/null and b/collections/dashboard/img/dashboard_node_finder_dedicated_reserve.png differ diff --git a/collections/dashboard/img/dashboard_node_finder_dedicated_unreserve.png b/collections/dashboard/img/dashboard_node_finder_dedicated_unreserve.png new file mode 100644 index 0000000..4ce3c36 Binary files /dev/null and b/collections/dashboard/img/dashboard_node_finder_dedicated_unreserve.png differ diff --git a/collections/dashboard/img/dashboard_node_finder_filters.png b/collections/dashboard/img/dashboard_node_finder_filters.png new file mode 100644 index 0000000..2c6735b Binary files /dev/null and b/collections/dashboard/img/dashboard_node_finder_filters.png differ diff --git a/collections/dashboard/img/dashboard_node_finder_filters_1.png b/collections/dashboard/img/dashboard_node_finder_filters_1.png new file mode 100644 index 0000000..2f36cb3 Binary files /dev/null and b/collections/dashboard/img/dashboard_node_finder_filters_1.png differ diff --git a/collections/dashboard/img/dashboard_node_finder_filters_2.png b/collections/dashboard/img/dashboard_node_finder_filters_2.png new file mode 100644 index 0000000..6b99372 Binary files /dev/null and b/collections/dashboard/img/dashboard_node_finder_filters_2.png differ diff --git a/collections/dashboard/img/dashboard_node_finder_gateways.png 
b/collections/dashboard/img/dashboard_node_finder_gateways.png new file mode 100644 index 0000000..38a0fdc Binary files /dev/null and b/collections/dashboard/img/dashboard_node_finder_gateways.png differ diff --git a/collections/dashboard/img/dashboard_node_finder_gpu.png b/collections/dashboard/img/dashboard_node_finder_gpu.png new file mode 100644 index 0000000..1a9ff2d Binary files /dev/null and b/collections/dashboard/img/dashboard_node_finder_gpu.png differ diff --git a/collections/dashboard/img/dashboard_node_finder_gpu2.png b/collections/dashboard/img/dashboard_node_finder_gpu2.png new file mode 100644 index 0000000..8158dfc Binary files /dev/null and b/collections/dashboard/img/dashboard_node_finder_gpu2.png differ diff --git a/collections/dashboard/img/dashboard_node_finder_gpu3.png b/collections/dashboard/img/dashboard_node_finder_gpu3.png new file mode 100644 index 0000000..70a5931 Binary files /dev/null and b/collections/dashboard/img/dashboard_node_finder_gpu3.png differ diff --git a/collections/dashboard/img/dashboard_node_finder_node_view.png b/collections/dashboard/img/dashboard_node_finder_node_view.png new file mode 100644 index 0000000..215074d Binary files /dev/null and b/collections/dashboard/img/dashboard_node_finder_node_view.png differ diff --git a/collections/dashboard/img/dashboard_portal_terms_conditions.png b/collections/dashboard/img/dashboard_portal_terms_conditions.png index b784fef..82f6c3e 100644 Binary files a/collections/dashboard/img/dashboard_portal_terms_conditions.png and b/collections/dashboard/img/dashboard_portal_terms_conditions.png differ diff --git a/collections/dashboard/img/dashboard_tc.png b/collections/dashboard/img/dashboard_tc.png deleted file mode 100644 index f58af46..0000000 Binary files a/collections/dashboard/img/dashboard_tc.png and /dev/null differ diff --git a/collections/dashboard/img/dashboard_terms_conditions.png b/collections/dashboard/img/dashboard_terms_conditions.png new file mode 100644 index 
0000000..991a720 Binary files /dev/null and b/collections/dashboard/img/dashboard_terms_conditions.png differ diff --git a/collections/dashboard/img/dashboard_walletconnector_info.png b/collections/dashboard/img/dashboard_walletconnector_info.png new file mode 100644 index 0000000..f74b033 Binary files /dev/null and b/collections/dashboard/img/dashboard_walletconnector_info.png differ diff --git a/collections/dashboard/img/dashboard_walletconnector_window.png b/collections/dashboard/img/dashboard_walletconnector_window.png new file mode 100644 index 0000000..f453aff Binary files /dev/null and b/collections/dashboard/img/dashboard_walletconnector_window.png differ diff --git a/collections/dashboard/img/profile_manager1.png b/collections/dashboard/img/profile_manager1.png index c34e966..73d59d5 100644 Binary files a/collections/dashboard/img/profile_manager1.png and b/collections/dashboard/img/profile_manager1.png differ diff --git a/collections/dashboard/img/sidebar_1_copy_full.png b/collections/dashboard/img/sidebar_1 (copy)_full.png similarity index 100% rename from collections/dashboard/img/sidebar_1_copy_full.png rename to collections/dashboard/img/sidebar_1 (copy)_full.png diff --git a/collections/dashboard/img/sidebar_2_copy_full.png b/collections/dashboard/img/sidebar_2 (copy)_full.png similarity index 100% rename from collections/dashboard/img/sidebar_2_copy_full.png rename to collections/dashboard/img/sidebar_2 (copy)_full.png diff --git a/collections/dashboard/img/sidebar_3_copy_full.png b/collections/dashboard/img/sidebar_3 (copy)_full.png similarity index 100% rename from collections/dashboard/img/sidebar_3_copy_full.png rename to collections/dashboard/img/sidebar_3 (copy)_full.png diff --git a/collections/dashboard/img/sidebar_4_copy_full.png b/collections/dashboard/img/sidebar_4 (copy)_full.png similarity index 100% rename from collections/dashboard/img/sidebar_4_copy_full.png rename to collections/dashboard/img/sidebar_4 (copy)_full.png diff --git 
a/collections/documentation/dashboard/solutions/fullvm.md b/collections/dashboard/solutions/fullVm.md similarity index 98% rename from collections/documentation/dashboard/solutions/fullvm.md rename to collections/dashboard/solutions/fullVm.md index 59b01e6..0babaed 100644 --- a/collections/documentation/dashboard/solutions/fullvm.md +++ b/collections/dashboard/solutions/fullVm.md @@ -43,7 +43,7 @@ Deploy a new full virtual machine on the Threefold Grid - `Myceluim` to enable mycelium on the virtual machine - `Wireguard Access` to add a wireguard access to the Virtual Machine - `GPU` flag to add GPU to the Virtual machine - - To deploy a Full VM with GPU, you first need to [rent a dedicated node](../../dashboard/deploy/dedicated_machines.md) + - To deploy a Full VM with GPU, you first need to [rent a dedicated node](../../dashboard/deploy/node_finder.md#dedicated-nodes) - `Dedicated` flag to retrieve only dedicated nodes - `Certified` flag to retrieve only certified nodes - Choose the location of the node diff --git a/collections/dashboard/solutions/fullvm.md b/collections/dashboard/solutions/fullvm.md deleted file mode 100644 index 59b01e6..0000000 --- a/collections/dashboard/solutions/fullvm.md +++ /dev/null @@ -1,108 +0,0 @@ -

# Full Virtual Machine

- -

## Table of Contents

- -- [Introduction](#introduction) -- [Deployment](#deployment) -- [Difference Between Full VM and Micro VM](#difference-between-full-vm-and-micro-vm) -- [Manually Mounting Additional Disk](#manually-mounting-additional-disk) - - [Check All Disks Attached to the VM](#check-all-disks-attached-to-the-vm) - - [Create a Mount Directory](#create-a-mount-directory) - - [New file system](#new-file-system) - - [Mount drive](#mount-drive) - -*** - -## Introduction - -We present the steps to deploy a full VM on the TFGrid. - -## Deployment - -Deploy a new full virtual machine on the Threefold Grid - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Applications** -- Click on **Full Virtual Machine** - -**Process:** - -![ ](./img/solutions_fullvm.png) - -- Fill in the instance name: it's used to reference the Full VM in the future. -- Choose the image from the drop down (e.g Alpine, Ubuntu) or you can click on `Other` and manually specify the flist URL and the entrypoint. 
-- Select a capacity package: - - **Small**: {cpu: 1, memory: 2, diskSize: 25 } - - **Medium**: {cpu: 2, memory: 4, diskSize: 50 } - - **Large**: {cpu: 4, memory: 16, diskSize: 100} - - Or choose a **Custom** plan -- Choose the network - - `Public IPv4` flag gives the virtual machine a public IPv4 address - - `Public IPv6` flag gives the virtual machine a public IPv6 address - - `Planetary Network` to connect the virtual machine to the Planetary Network - - `Mycelium` to enable Mycelium on the virtual machine - - `Wireguard Access` to add WireGuard access to the virtual machine -- `GPU` flag to add a GPU to the virtual machine - - To deploy a Full VM with GPU, you first need to [rent a dedicated node](../../dashboard/deploy/dedicated_machines.md) -- `Dedicated` flag to retrieve only dedicated nodes -- `Certified` flag to retrieve only certified nodes -- Choose the location of the node - - `Country` - - `Farm Name` -- Choose the node to deploy the Full Virtual Machine on - ![](./img/node_selection.png) - -You can attach one or more disks to the virtual machine by clicking on the Disks tab and the plus `+` sign, then specifying the following parameters -![ ](./img/new_vm3.png) - -- Disk name -- Disk size - -At the bottom of the page, you can see a list of all the virtual machines you deployed. You can click on `Show details` for more details - -![ ](./img/new_vm5.png) -You can also go to the JSON tab for full details -![ ](./img/new_vm6.png) - -## Difference Between Full VM and Micro VM - -- A Full VM contains a default disk attached to it, which is not the case for a Micro VM, where you need to attach a disk yourself or the VM will fail -- The default disk is mounted on `/`, so if you want to attach any additional disks, you have to choose a different mount point -- Only cloud-init flists can be deployed on a Full VM.
You can check official Threefold flists [here](https://hub.grid.tf/tf-official-vms) -- In Full VM, you need to mount the additional disks manually after the VM is deployed - -## Manually Mounting Additional Disk - -- You can follow the following commands to add your disk manually: - -### Check All Disks Attached to the VM - -```bash -fdisk -l -``` - -The additional disk won't be mounted and you won't find it listed - -```bash -df -h -``` - -### Create a Mount Directory - -```bash -sudo mkdir /hdd6T -``` - -### New file system - -```bash -sudo mkfs.ext4 /dev/vdb -``` - -### Mount drive - -```bash -sudo mount /dev/vdb /hdd6T/ -``` - -![mounting additional disk](./img/fullvm6.png) diff --git a/collections/dashboard/solutions/img/captain_loginweblet_caprover_.png b/collections/dashboard/solutions/img/captain_login+weblet_caprover_.png similarity index 100% rename from collections/dashboard/solutions/img/captain_loginweblet_caprover_.png rename to collections/dashboard/solutions/img/captain_login+weblet_caprover_.png diff --git a/collections/dashboard/solutions/img/deleted_contract_info_copy.png b/collections/dashboard/solutions/img/deleted_contract_info copy.png similarity index 100% rename from collections/dashboard/solutions/img/deleted_contract_info_copy.png rename to collections/dashboard/solutions/img/deleted_contract_info copy.png diff --git a/collections/dashboard/solutions/img/nixos_micro1.png b/collections/dashboard/solutions/img/nixos-micro1.png similarity index 100% rename from collections/dashboard/solutions/img/nixos_micro1.png rename to collections/dashboard/solutions/img/nixos-micro1.png diff --git a/collections/dashboard/solutions/img/nixos_micro2.png b/collections/dashboard/solutions/img/nixos-micro2.png similarity index 100% rename from collections/dashboard/solutions/img/nixos_micro2.png rename to collections/dashboard/solutions/img/nixos-micro2.png diff --git a/collections/dashboard/solutions/img/nixos_micro3.png 
b/collections/dashboard/solutions/img/nixos-micro3.png similarity index 100% rename from collections/dashboard/solutions/img/nixos_micro3.png rename to collections/dashboard/solutions/img/nixos-micro3.png diff --git a/collections/dashboard/solutions/img/nodep_2.png b/collections/dashboard/solutions/img/nodeP_2.png similarity index 100% rename from collections/dashboard/solutions/img/nodep_2.png rename to collections/dashboard/solutions/img/nodeP_2.png diff --git a/collections/dashboard/solutions/img/nodepilot_2.png b/collections/dashboard/solutions/img/nodePilot_2.png similarity index 100% rename from collections/dashboard/solutions/img/nodepilot_2.png rename to collections/dashboard/solutions/img/nodePilot_2.png diff --git a/collections/dashboard/solutions/img/nodepilot_3.png b/collections/dashboard/solutions/img/nodePilot_3.png similarity index 100% rename from collections/dashboard/solutions/img/nodepilot_3.png rename to collections/dashboard/solutions/img/nodePilot_3.png diff --git a/collections/dashboard/solutions/img/nxios_micro1.png b/collections/dashboard/solutions/img/nxios-micro1.png similarity index 100% rename from collections/dashboard/solutions/img/nxios_micro1.png rename to collections/dashboard/solutions/img/nxios-micro1.png diff --git a/collections/dashboard/solutions/img/solutions_staticwebsite.png b/collections/dashboard/solutions/img/solutions_staticwebsite.png new file mode 100644 index 0000000..33c51d6 Binary files /dev/null and b/collections/dashboard/solutions/img/solutions_staticwebsite.png differ diff --git a/collections/dashboard/solutions/img/staticwebsite_list.png b/collections/dashboard/solutions/img/staticwebsite_list.png new file mode 100644 index 0000000..94fd8ab Binary files /dev/null and b/collections/dashboard/solutions/img/staticwebsite_list.png differ diff --git a/collections/dashboard/solutions/img/subsquid_list.jpg b/collections/dashboard/solutions/img/subsquid_list.jpeg similarity index 100% rename from 
collections/dashboard/solutions/img/subsquid_list.jpg rename to collections/dashboard/solutions/img/subsquid_list.jpeg diff --git a/collections/dashboard/solutions/nextcloud.md b/collections/dashboard/solutions/nextcloud.md index d25dd79..c340b8f 100644 --- a/collections/dashboard/solutions/nextcloud.md +++ b/collections/dashboard/solutions/nextcloud.md @@ -63,7 +63,6 @@ If you're not sure and just want the easiest, most affordable option, skip the p * **Recommended**: {cpu: 4, memory: 16gb, diskSize: 1000gb } * Or choose a **Custom** plan * If want to reserve a public IPv4 address, click on Network then select **Public IPv4** -* If you want a [dedicated](../deploy/dedicated_machines.md) and/or a certified node, select the corresponding option * Choose the location of the node * `Country` * `Farm Name` diff --git a/collections/dashboard/solutions/static_website.md b/collections/dashboard/solutions/static_website.md new file mode 100644 index 0000000..5584bcf --- /dev/null +++ b/collections/dashboard/solutions/static_website.md @@ -0,0 +1,53 @@ +

# Static Website

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Deployment](#deployment) + +--- + +## Introduction + +Static Website is an application where a user provides a GitHub repository URL for the files to be automatically served online using Caddy. + +## Prerequisites + +- Make sure you have a [wallet](../wallet_connector.md) +- From the sidebar click on **Applications** +- Click on **Static Website** + +## Deployment + +![ ](./img/solutions_staticwebsite.png) + +- Enter an instance name + +- Enter a GitHub repository URL that needs to be cloned + +- Enter the title for the cloned repository + +- Select a capacity package: + + - **Small**: {cpu: 1, memory: 2 , diskSize: 50 } + - **Medium**: {cpu: 2, memory: 4, diskSize: 100 } + - **Large**: {cpu: 4, memory: 16, diskSize: 250 } + - Or choose a **Custom** plan + +- `Dedicated` flag to retrieve only dedicated nodes +- `Certified` flag to retrieve only certified nodes +- Choose the location of the node + - `Region` + - `Country` + - `Farm Name` +- Choose the node to deploy on + - Note: You can select a specific node with manual selection +- `Custom Domain` flag allows the user to use a custom domain +- Choose a gateway node to deploy your static website + +Once this is done, you can see a list of all of your deployed instances: + +![ ](./img/staticwebsite_list.png) + +Click on the button **Visit** under **Actions** to go to your static website! diff --git a/collections/dashboard/tfchain/tf_dao.md b/collections/dashboard/tfchain/tf_dao.md index 648280b..e112e80 100644 --- a/collections/dashboard/tfchain/tf_dao.md +++ b/collections/dashboard/tfchain/tf_dao.md @@ -8,6 +8,7 @@ The TFChain DAO (i.e. 
Decentralized Autonomous Organization) feature integrates - [Prerequisites to Vote](#prerequisites-to-vote) - [How to Vote for a Proposal](#how-to-vote-for-a-proposal) - [The Goal of the Threefold DAO](#the-goal-of-the-threefold-dao) +- [Voting Weight](#voting-weight) *** @@ -39,3 +40,17 @@ To vote, you need to log into your Threefold Dashboard account, go to **TF DAO** The goal of the DAO voting system is to gather the thoughts and will of the Threefold community and build projects that are aligned with the ethos of the project. We encourage anyone to share their ideas. Who knows? Your sudden spark of genius might lead to an accepted proposal on the Threefold DAO! + +## Voting Weight + +The DAO votes are weighted as follows: + +- Get all farms linked to the account +- Get all nodes per farm +- Get the compute and storage units per node (CU and SU) +- Compute the weight of a farm: + ``` + 2 * (sum of CU of all nodes) + (sum of SU of all nodes) + ``` + +Voting weights are tracked per farm to keep things simple and traceable. Thus, if an account has multiple farms, the vote will be registered per farm. \ No newline at end of file diff --git a/collections/dashboard/wallet_connector.md b/collections/dashboard/wallet_connector.md index 842339a..a4ab1bc 100644 --- a/collections/dashboard/wallet_connector.md +++ b/collections/dashboard/wallet_connector.md @@ -4,13 +4,16 @@ - [Introduction](#introduction) - [Supported Networks](#supported-networks) -- [Process](#process) +- [Create a Wallet](#create-a-wallet) +- [Import a Wallet](#import-a-wallet) *** ## Introduction -To interact with TFChain, users need to set a wallet connector. +To interact with TFChain, users can connect their TFChain wallet to the wallet connector available on the ThreeFold Dashboard. + +You can create a new wallet or import an existing wallet.
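The farm weight rule in the Voting Weight section above is simple arithmetic, so it is easy to sanity-check. A minimal shell sketch — the CU and SU totals below are made-up illustrative figures, not real chain data:

```bash
#!/bin/bash
# Hypothetical totals for one farm; real values come from the farm's nodes on TFChain.
cu_sum=10   # sum of compute units (CU) over all of the farm's nodes
su_sum=5    # sum of storage units (SU) over all of the farm's nodes

# weight = 2 * (sum of CU) + (sum of SU)
weight=$((2 * cu_sum + su_sum))
echo "farm voting weight: $weight"
```

With 10 CU and 5 SU, the farm's voting weight is 2 * 10 + 5 = 25. Since weights are tracked per farm, an account with several farms would repeat this computation once per farm.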
## Supported Networks @@ -27,16 +30,36 @@ Currently, we're supporting four different networks: ![ ](./img/profile_manager1.png) -## Process +## Create a Wallet -Start entering the following information required to create your new profile. +To create a new wallet, open the ThreeFold Dashboard on the desired network, click on `Create Account`, enter the following information and click `Connect`. -![ ](./img/profile_manager2.png) +- `Mnemonics`: The secret words of your Polkadot account. Click on the **Create Account** button to generate yours. +- `Email`: Enter a valid email address. +- `Password`: Choose a password and confirm it. This will be used to access your account. -- `Mnemonics` are the secret words of your Polkadot account. Click on the **Create Account** button to generate yours. -- `Password` is used to access your account -- `Confirm Password` +![](./img/dashboard_walletconnector_window.png) -After you finish typing your credentials, click on **Connect**. Once your profile gets activated, you should find your **Twin ID** and **Address** generated under your **_Mnemonics_** for verification. Also, your **Account Balance** will be available at the top right corner under your profile name. +You will be asked to accept ThreeFold's Terms and Conditions: -![ ](./img/profile_manager3.png) +![](./img/dashboard_terms_conditions.png) + +Once you've set your credentials, clicked on **Connect** and accepted the terms and conditions, your profile will be activated. + +Upon activation, you will find your **Twin ID**, **Address** and wallet current **balance** generated under your **Mnemonics**. + +![](./img/dashboard_walletconnector_info.png) + +Your current and locked balances will also be available at the top right corner of the dashboard. Here's an example of the balances you can find for your wallet. Some TFT is locked during utilization as the TFGrid bills you for your workloads and traffic. 
+ +![](./img/dashboard_balances.png) + +## Import a Wallet + +You can import an existing wallet by entering the associated seed phrase or HEX secret of the existing wallet in `Mnemonics`. + +- To import a wallet created with the TF Dashboard, use the seed phrase provided when you created the account. +- To import a wallet or a farm created on the TF Connect app, use the TFChain HEX secret. + - From the menu, open **Wallet** -> **Wallet name** -> **Info symbol (i)**, and then reveal and copy the **TFChain Secret**. + +When you import an existing wallet, you can set a new password and email address, i.e. you only need the mnemonics to import an existing wallet on the dashboard. \ No newline at end of file diff --git a/collections/developers/.collection new file mode 100644 index 0000000..e69de29 diff --git a/collections/developers/developers.md new file mode 100644 index 0000000..82cfb2a --- /dev/null +++ b/collections/developers/developers.md @@ -0,0 +1,91 @@ +# ThreeFold Developers + +This section covers all practical tutorials on how to develop and build on the ThreeFold Grid. + +For complementary information on the technology developed by ThreeFold, refer to the [Technology](../../knowledge_base/technology/technology_toc.md) section. +

## Table of Contents

+ +- [Javascript Client](./javascript/grid3_javascript_readme.md) + - [Installation](./javascript/grid3_javascript_installation.md) + - [Loading Client](./javascript/grid3_javascript_loadclient.md) + - [Deploy a VM](./javascript/grid3_javascript_vm.md) + - [Capacity Planning](./javascript/grid3_javascript_capacity_planning.md) + - [Deploy Multiple VMs](./javascript/grid3_javascript_vms.md) + - [Deploy CapRover](./javascript/grid3_javascript_caprover.md) + - [Gateways](./javascript/grid3_javascript_vm_gateways.md) + - [Deploy a Kubernetes Cluster](./javascript/grid3_javascript_kubernetes.md) + - [Deploy a ZDB](./javascript/grid3_javascript_zdb.md) + - [Deploy ZDBs for QSFS](./javascript/grid3_javascript_qsfs_zdbs.md) + - [QSFS](./javascript/grid3_javascript_qsfs.md) + - [Key Value Store](./javascript/grid3_javascript_kvstore.md) + - [VM with Wireguard and Gateway](./javascript/grid3_wireguard_gateway.md) + - [GPU Support](./javascript/grid3_javascript_gpu_support.md) +- [Go Client](./go/grid3_go_readme.md) + - [Installation](./go/grid3_go_installation.md) + - [Loading Client](./go/grid3_go_load_client.md) + - [Deploy a VM](./go/grid3_go_vm.md) + - [Deploy Multiple VMs](./go/grid3_go_vms.md) + - [Deploy Gateways](./go/grid3_go_gateways.md) + - [Deploy Kubernetes](./go/grid3_go_kubernetes.md) + - [Deploy a QSFS](./go/grid3_go_qsfs.md) + - [GPU Support](./go/grid3_go_gpu.md) +- [TFCMD](./tfcmd/tfcmd.md) + - [Getting Started](./tfcmd/tfcmd_basics.md) + - [Deploy a VM](./tfcmd/tfcmd_vm.md) + - [Deploy Kubernetes](./tfcmd/tfcmd_kubernetes.md) + - [Deploy ZDB](./tfcmd/tfcmd_zdbs.md) + - [Gateway FQDN](./tfcmd/tfcmd_gateway_fqdn.md) + - [Gateway Name](./tfcmd/tfcmd_gateway_name.md) + - [Contracts](./tfcmd/tfcmd_contracts.md) +- [TFROBOT](./tfrobot/tfrobot.md) + - [Installation](./tfrobot/tfrobot_installation.md) + - [Configuration File](./tfrobot/tfrobot_config.md) + - [Deployment](./tfrobot/tfrobot_deploy.md) + - [Commands and Flags](./tfrobot/tfrobot_commands_flags.md) + 
- [Supported Configurations](./tfrobot/tfrobot_configurations.md) +- [ThreeFold Chain](./tfchain/tfchain.md) + - [Introduction](./tfchain/introduction.md) + - [Farming Policies](./tfchain/farming_policies.md) + - [External Service Contract](./tfchain/tfchain_external_service_contract.md) + - [Solution Provider](./tfchain/tfchain_solution_provider.md) +- [Grid Proxy](./proxy/proxy_readme.md) + - [Introducing Grid Proxy](./proxy/proxy.md) + - [Setup](./proxy/setup.md) + - [DB Testing](./proxy/db_testing.md) + - [Commands](./proxy/commands.md) + - [Contributions](./proxy/contributions.md) + - [Explorer](./proxy/explorer.md) + - [Database](./proxy/database.md) + - [Production](./proxy/production.md) + - [Release](./proxy/release.md) +- [Flist](./flist/flist.md) + - [ThreeFold Hub Intro](./flist/flist_hub/zos_hub.md) + - [Generate an API Token](./flist/flist_hub/api_token.md) + - [Convert Docker Image Into Flist](./flist/flist_hub/convert_docker_image.md) + - [Supported Flists](./flist/grid3_supported_flists.md) + - [Flist Case Studies](./flist/flist_case_studies/flist_case_studies.md) + - [Case Study: Debian 12](./flist/flist_case_studies/flist_debian_case_study.md) + - [Case Study: Nextcloud AIO](./flist/flist_case_studies/flist_nextcloud_case_study.md) +- [Internals](./internals/internals.md) + - [Reliable Message Bus (RMB)](./internals/rmb/rmb_toc.md) + - [Introduction to RMB](./internals/rmb/rmb_intro.md) + - [RMB Specs](./internals/rmb/rmb_specs.md) + - [RMB Peer](./internals/rmb/uml/peer.md) + - [RMB Relay](./internals/rmb/uml/relay.md) + - [ZOS](./internals/zos/index.md) + - [Manual](./internals/zos/manual/manual.md) + - [Workload Types](./internals/zos/manual/workload_types.md) + - [Internal Modules](./internals/zos/internals/internals.md) + - [Capacity](./internals/zos/internals/capacity.md) + - [Performance Monitor Package](./internals/zos/performance/performance.md) + - [Public IPs Validation Task](./internals/zos/performance/publicips.md) + - 
[CPUBenchmark](./internals/zos/performance/cpubench.md) + - [IPerf](./internals/zos/performance/iperf.md) + - [Health Check](./internals/zos/performance/healthcheck.md) + - [API](./internals/zos/manual/api.md) +- [Grid Deployment](./grid_deployment/grid_deployment.md) + - [TFGrid Stacks](./grid_deployment/tfgrid_stacks.md) + - [Full VM Grid Deployment](./grid_deployment/grid_deployment_full_vm.md) + - [Grid Snapshots](./grid_deployment/snapshots.md) + - [Deploy the Dashboard](./grid_deployment/deploy_dashboard.md) \ No newline at end of file diff --git a/collections/developers/flist/flist.md b/collections/developers/flist/flist.md new file mode 100644 index 0000000..6c69e05 --- /dev/null +++ b/collections/developers/flist/flist.md @@ -0,0 +1,11 @@ +

# Flist

+ +

## Table of Contents

+ +- [Zero-OS Hub](./flist_hub/zos_hub.md) +- [Generate an API Token](./flist_hub/api_token.md) +- [Convert Docker Image Into Flist](./flist_hub/convert_docker_image.md) +- [Supported Flists](./grid3_supported_flists.md) +- [Flist Case Studies](./flist_case_studies/flist_case_studies.md) + - [Case Study: Debian 12](./flist_case_studies/flist_debian_case_study.md) + - [Case Study: Nextcloud AIO](./flist_case_studies/flist_nextcloud_case_study.md) \ No newline at end of file diff --git a/collections/developers/flist/flist_case_studies/flist_case_studies.md b/collections/developers/flist/flist_case_studies/flist_case_studies.md new file mode 100644 index 0000000..b258836 --- /dev/null +++ b/collections/developers/flist/flist_case_studies/flist_case_studies.md @@ -0,0 +1,6 @@ +

# Flist Case Studies

+ +

## Table of Contents

+ +- [Case Study: Debian 12](./flist_debian_case_study.md) +- [Case Study: Nextcloud AIO](./flist_nextcloud_case_study.md) \ No newline at end of file diff --git a/collections/developers/flist/flist_case_studies/flist_debian_case_study.md b/collections/developers/flist/flist_case_studies/flist_debian_case_study.md new file mode 100644 index 0000000..3777433 --- /dev/null +++ b/collections/developers/flist/flist_case_studies/flist_debian_case_study.md @@ -0,0 +1,300 @@ +

# Flist Case Study: Debian 12

+ +

## Table of Contents

+ +- [Introduction](#introduction) + - [You Said Flist?](#you-said-flist) + - [Case Study Objective](#case-study-objective) + - [The Overall Process](#the-overall-process) +- [Docker Image Creation](#docker-image-creation) + - [Dockerfile](#dockerfile) + - [Docker Image Script](#docker-image-script) + - [zinit Folder](#zinit-folder) + - [README.md File](#readmemd-file) + - [Putting it All Together](#putting-it-all-together) +- [Docker Publishing Steps](#docker-publishing-steps) + - [Create Account and Access Token](#create-account-and-access-token) + - [Build and Push the Docker Image](#build-and-push-the-docker-image) +- [Convert the Docker Image to an Flist](#convert-the-docker-image-to-an-flist) +- [Deploy the Flist on the TF Playground](#deploy-the-flist-on-the-tf-playground) +- [Conclusion](#conclusion) + +*** + +## Introduction + +For this tutorial, we will present a case study demonstrating how easy it is to create a new flist in the ThreeFold ecosystem. We will be creating a Debian flist and we will deploy a micro VM on the ThreeFold Playground and access our Debian deployment. + +To do all this, we will need to create a Docker Hub account, create a Dockerfile, a docker image and a docker container, then convert the docker image to a Zero-OS flist. After all this, we will be deploying our Debian workload on the ThreeFold Playground. You'll see, it's pretty straightforward and fun to do. + + + +### You Said Flist? + +First, let's recall what an flist actually is and does. In short, an flist is a very effective way to deal with software data, and the end result is fast deployment and high reliability. + +In an flist, we separate the metadata from the data. The metadata is a description of what files are in that particular image. It's the data providing information about the app/software. Thanks to flists, the 3Node doesn't need to install a complete software program in order to run properly. Only the necessary files are installed.
Zero-OS can read the metadata of a container and only download and execute the necessary binaries and applications to run the workload when necessary. + +Sounds great? It really is great, and very effective! + +One amazing thing about the flist technology is that it is possible to convert any Docker image into an flist, thanks to the [ThreeFold Docker Hub Converter tool](https://hub.grid.tf/docker-convert). If this sounds complicated, fear not. It is very easy and we will show you how to proceed in this case study. + + + +### Case Study Objective + +The goal of this case study is to give you enough information and tools so that you can build your own flist projects and deploy on the ThreeFold Grid. + +This case study is not meant to show you all the detailed steps of creating an flist from scratch. We will instead start with some file templates available on the ThreeFold repository [tf-images](https://github.com/threefoldtech/tf-images). This is one of the many advantages of working with open-source projects: we can easily get inspiration from the code already available in the many ThreeFold repositories and work our way up from there. + + + +### The Overall Process + +To give you a bird's-eye view of the whole project, here are the main steps: + +* Create the Docker image +* Push the Docker image to the Docker Hub +* Convert the Docker image to a Zero-OS flist +* Deploy a micro VM with the flist on the ThreeFold Playground + + + +## Docker Image Creation + +As we've said previously, we will not explore all the details of creating an flist from scratch. This would be done in a subsequent guide. For now, we want to take existing code and work our way from there. This is not only quicker, but it is a good way to get to know ThreeFold's ecosystem and repositories. + +We will be using the code available on the [ThreeFold Tech's Github page](https://github.com/threefoldtech).
In our case, we want to explore the repository [tf-images](https://github.com/threefoldtech/tf-images). + +If you go to the subsection [tfgrid3](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3), you can see many different flists available. In our case, we want to deploy the Debian Linux distribution. It is thus logical to try and find similar Linux distributions to take inspiration from. + +For this case study, we draw inspiration from the [Ubuntu 22.04](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/ubuntu22.04) directory. + +If we look at the Ubuntu 22.04 directory tree, this is what we get: + +``` +. +├── Dockerfile +├── README.md +├── start.sh +└── zinit + ├── ssh-init.yaml + └── sshd.yaml +``` + +We will now explore each of those files to get a good look at the whole repository and try to understand how it all works together. + +### Dockerfile + +We recall that to make a Docker image, you need to create a Dockerfile. As per [Docker's documentation](https://docs.docker.com/engine/reference/builder/), a Dockerfile is "a text document that contains all the commands a user could call on the command line to assemble an image". + +The Ubuntu 22.04 Dockerfile is as follows: + +File: `Dockerfile` + +```Dockerfile +FROM ubuntu:22.04 + +RUN apt update && \ + apt -y install wget openssh-server + +RUN wget -O /sbin/zinit https://github.com/threefoldtech/zinit/releases/download/v0.2.5/zinit && \ + chmod +x /sbin/zinit + +COPY zinit /etc/zinit +COPY start.sh /start.sh + +RUN chmod +x /sbin/zinit && chmod +x /start.sh +ENTRYPOINT ["zinit", "init"] +``` + +We can see from the first line that the Dockerfile will look for the docker image `ubuntu:22.04`. In our case, we want to get the Debian 12 docker image. This information is available on the Docker Hub (see [Debian Docker Hub](https://hub.docker.com/_/debian)). + +We will thus need to change the line `FROM ubuntu:22.04` to the line `FROM debian:12`.
It isn't more complicated than that! + +We now have the following Dockerfile for the Debian docker image: + +File: `Dockerfile` + +```Dockerfile +FROM debian:12 + +RUN apt update && \ + apt -y install wget openssh-server + +RUN wget -O /sbin/zinit https://github.com/threefoldtech/zinit/releases/download/v0.2.5/zinit && \ + chmod +x /sbin/zinit + +COPY zinit /etc/zinit +COPY start.sh /start.sh + +RUN chmod +x /sbin/zinit && chmod +x /start.sh +ENTRYPOINT ["zinit", "init"] +``` + +There is nothing more needed here. Pretty fun to start from some existing open-source code, right? + +### Docker Image Script + +The other important file we will be looking at is the `start.sh` file. This is the basic script that will be used to properly set up the docker image. Thankfully, there is nothing more to change in this file, we can leave it as is. As we will see later, this file will be executed by zinit when the container starts. + +File: `start.sh` + +```.sh +#!/bin/bash + +mkdir -p /var/run/sshd +mkdir -p /root/.ssh +touch /root/.ssh/authorized_keys + +chmod 700 /root/.ssh +chmod 600 /root/.ssh/authorized_keys + +echo "$SSH_KEY" >> /root/.ssh/authorized_keys +``` + +### zinit Folder + +Next, we want to take a look at the zinit folder. + +But first, what is zinit? In a nutshell, zinit is a process manager (pid 1) that knows how to launch, monitor and sort dependencies. It thus executes targets in the proper order. For more information on zinit, check the [zinit repository](https://github.com/threefoldtech/zinit). + +When we start the Docker container, the files in the zinit folder will be executed. + +If we take a look at the file `ssh-init.yaml`, we find the following: + +```.yaml +exec: bash /start.sh +log: stdout +oneshot: true +``` + +We can see that the first line calls the [bash](https://www.gnu.org/software/bash/) Unix shell and that it will run the file `start.sh` we've seen earlier.
+
+In this zinit service file, we define a service named `ssh-init`, where we tell zinit which command to execute (here `bash /start.sh`), where to log (here in `stdout`) and whether the service is a `oneshot` (here `true`, meaning that it should only be executed once).
+
+If we take a look at the file `sshd.yaml`, we find the following:
+
+```yaml
+exec: bash -c "/usr/sbin/sshd -D"
+after:
+  - ssh-init
+```
+
+Here, another service, `sshd`, runs after the `ssh-init` service.
+
+### README.md File
+
+As every good programmer knows, good code is nothing without good documentation to help others understand what's going on! This is where the `README.md` file comes into play.
+
+In this file, we can explain what our code is doing and offer steps to properly configure the whole deployment. Users who want to deploy the flist on the ThreeFold Playground will need the flist URL and the basic steps to deploy a Micro VM on the TFGrid. We will thus add this information in the README.md file. This information can be seen in the [section below](#deploy-the-flist-on-the-tf-playground). To read the complete README.md file, go to [this link](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/debian).
+
+### Putting it All Together
+
+We've now gone through all the files available in the Ubuntu 22.04 directory on the tf-images repository. To build your own image, you would simply need to put all those files in a local folder on your computer and follow the steps presented in the next section, [Docker Publishing Steps](#docker-publishing-steps).
+
+To have a look at the final result of the changes we brought to the Ubuntu 22.04 version, have a look at the [Debian directory](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/debian) on ThreeFold's tf-images repository. 
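Concretely, "putting all those files in a local folder" amounts to recreating the small directory tree shown at the top of this case study; a quick sketch (the contents of each file would be copied from the Debian directory linked above):

```bash
# Recreate the flist build folder skeleton; the content of each file would
# be copied from the tf-images Debian directory.
mkdir -p debian12/zinit
touch debian12/Dockerfile debian12/README.md debian12/start.sh
touch debian12/zinit/ssh-init.yaml debian12/zinit/sshd.yaml

# Verify the layout matches the directory tree shown earlier.
find debian12 | sort
```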
+
+
+
+## Docker Publishing Steps
+
+### Create Account and Access Token
+
+To be able to push Docker images to the Docker Hub, you obviously need to create a Docker Hub account! This is very easy. Note that there is plenty of great Docker documentation online; if you're lost, make the most of your favorite search engine.
+
+Here are the steps to create an account and an access token.
+
+* Go to the [Docker Hub](https://hub.docker.com/)
+* Click `Register` and follow the steps given by Docker
+* On the top right corner, click on your account name and select `Account Settings`
+* On the left menu, click on `Security`
+* Click on `New Access Token`
+* Choose an Access Token description that you will easily identify, then click `Generate`
+  * Make sure to set the permissions `Read, Write, Delete`
+* Follow the steps given to properly connect your local computer to the Docker Hub
+  * Run `docker login -u <username>`
+  * Set the password
+
+You now have access to the Docker Hub from your local computer. We will then proceed to push the Docker image we've created.
+
+### Build and Push the Docker Image
+
+* Make sure the Docker Daemon is running
+* Build the docker container
+  * Template:
+    * ```
+      docker build -t <username>/<image_name> .
+      ```
+  * Example:
+    * ```
+      docker build -t username/debian12 .
+      ```
+* Push the docker container to the [Docker Hub](https://hub.docker.com/)
+  * Template:
+    * ```
+      docker push <username>/<image_name>
+      ```
+  * Example:
+    * ```
+      docker push username/debian12
+      ```
+* You should now see your docker image on the [Docker Hub](https://hub.docker.com/) when you go into the menu option `My Profile`.
+  * Note that you can access this link quickly with the following template:
+    * ```
+      https://hub.docker.com/u/<username>
+      ```
+
+
+
+## Convert the Docker Image to an Flist
+
+We will now convert the Docker image into a Zero-OS flist. This part is so easy you will almost wonder why you never heard about flists before! 
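The build-and-push steps above can be collected into a small helper script. Here `username` and `debian12` are placeholders for your own Docker Hub user and image names, and the commands are echoed rather than executed so the flow can be reviewed without a running Docker daemon:

```bash
# Sketch of the publish flow; swap the echo wrapper for direct execution
# once the placeholder values are filled in and the Docker daemon is running.
DOCKER_USER="username"   # your Docker Hub username (placeholder)
IMAGE="debian12"         # your image name (placeholder)

run() { echo "+ $*"; }   # dry-run wrapper: prints the command instead of running it

run docker build -t "$DOCKER_USER/$IMAGE" .
run docker push "$DOCKER_USER/$IMAGE"
echo "Profile page: https://hub.docker.com/u/$DOCKER_USER"
```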
+
+* Go to the [ThreeFold Hub](https://hub.grid.tf/).
+* Sign in with the ThreeFold Connect app.
+* Go to the [Docker Hub Converter](https://hub.grid.tf/docker-convert) section.
+* Next to `Docker Image Name`, add the docker image repository and name, see the example below:
+  * Template:
+    * `<username>/docker_image_name:tagname`
+  * Example:
+    * `username/debian12:latest`
+* Click `Convert the docker image`.
+* Once the conversion is done, the flist is available as a public link on the ThreeFold Hub.
+* To get the flist URL, go to the [TF Hub main page](https://hub.grid.tf/), scroll down to your 3Bot ID and click on it.
+* Under `Name`, you will see all your available flists.
+* Right-click on the flist you want and select `Copy Clean Link`. This URL will be used when deploying on the ThreeFold Playground. We show below the template and an example of what the flist URL looks like.
+  * Template:
+    * ```
+      https://hub.grid.tf/<3BOT_name.3bot>/<username>-<image_name>-<tagname>.flist
+      ```
+  * Example:
+    * ```
+      https://hub.grid.tf/idrnd.3bot/username-debian12-latest.flist
+      ```
+
+
+
+## Deploy the Flist on the TF Playground
+
+* Go to the [ThreeFold Playground](https://play.grid.tf).
+* Set your profile manager.
+* Go to the [Micro VM](https://play.grid.tf/#/vm) page.
+* Choose your parameters (name, VM specs, etc.).
+* Under `flist`, paste the Debian flist from the TF Hub you copied previously.
+* Make sure the entrypoint is as follows:
+  * ```
+    /sbin/zinit init
+    ```
+* Choose a 3Node to deploy on.
+* Click `Deploy`.
+
+That's it! You can now SSH into your Debian deployment and change the world one line of code at a time!
+
+## Conclusion
+
+In this case study, we've seen the overall process of creating a new flist to deploy a Debian workload on a Micro VM on the ThreeFold Playground. 
+ +If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel. diff --git a/collections/developers/flist/flist_case_studies/flist_nextcloud_case_study.md b/collections/developers/flist/flist_case_studies/flist_nextcloud_case_study.md new file mode 100644 index 0000000..4193317 --- /dev/null +++ b/collections/developers/flist/flist_case_studies/flist_nextcloud_case_study.md @@ -0,0 +1,858 @@ +

# Flist Case Study: Nextcloud All-in-One

+ +

## Table of Contents

+ +- [Introduction](#introduction) + - [Flist: What is It?](#flist-what-is-it) + - [Case Study Objective](#case-study-objective) + - [The Overall Process](#the-overall-process) +- [Docker Image Creation](#docker-image-creation) + - [Nextcloud Flist Directory Tree](#nextcloud-flist-directory-tree) + - [Caddyfile](#caddyfile) + - [Dockerfile](#dockerfile) + - [README.md File](#readmemd-file) + - [scripts Folder](#scripts-folder) + - [caddy.sh](#caddysh) + - [sshd\_init.sh](#sshd_initsh) + - [ufw\_init.sh](#ufw_initsh) + - [nextcloud.sh](#nextcloudsh) + - [nextcloud\_conf.sh](#nextcloud_confsh) + - [zinit Folder](#zinit-folder) + - [ssh-init.yaml and sshd.yaml](#ssh-inityaml-and-sshdyaml) + - [ufw-init.yaml and ufw.yaml](#ufw-inityaml-and-ufwyaml) + - [caddy.yaml](#caddyyaml) + - [dockerd.yaml](#dockerdyaml) + - [nextcloud.yaml](#nextcloudyaml) + - [nextcloud-conf.yaml](#nextcloud-confyaml) + - [Putting it All Together](#putting-it-all-together) +- [Docker Publishing Steps](#docker-publishing-steps) + - [Create Account and Access Token](#create-account-and-access-token) + - [Build and Push the Docker Image](#build-and-push-the-docker-image) +- [Convert the Docker Image to an Flist](#convert-the-docker-image-to-an-flist) +- [Deploy Nextcloud AIO on the TFGrid with Terraform](#deploy-nextcloud-aio-on-the-tfgrid-with-terraform) + - [Create the Terraform Files](#create-the-terraform-files) + - [Deploy Nextcloud with Terraform](#deploy-nextcloud-with-terraform) + - [Nextcloud Setup](#nextcloud-setup) +- [Conclusion](#conclusion) + +*** + +# Introduction + +In this case study, we explain how to create a new flist on the ThreeFold ecosystem. We will show the process of creating a Nextcloud All-in-One flist and we will deploy a micro VM on the ThreeFold Playground to access our Nextcloud instance. As a reference, the official Nextcloud flist is available [here](https://hub.grid.tf/tf-official-apps/threefoldtech-nextcloudaio-latest.flist.md). 
+
+To achieve all this, we will need to create a Docker Hub account, write a Dockerfile and its associated files, build a docker image and a docker container, then convert the docker image to a Zero-OS flist. After all this, we will be deploying our Nextcloud instance on the ThreeFold Playground.
+
+As general advice, before creating an flist for a ThreeFold deployment, you should make sure that you are able to deploy your workload properly by using a micro VM or a full VM on the TFGrid. Once you know all the steps to deploy your workload, and after some thorough tests, you can take what you've learned and incorporate all this into an flist.
+
+## Flist: What is It?
+
+Before we go any further, let us recall what an flist is. In short, an flist is a technology for storing and efficiently sharing sets of files. While it has many great features, its purpose in this case is simply to deliver the image contents to Zero-OS for execution as a micro VM. It thus acts as a bundle of files like a normal archive.
+
+One convenient thing about the flist technology is that it is possible to convert any Docker image into an flist, thanks to the [ThreeFold Docker Hub Converter tool](https://hub.grid.tf/docker-convert). It is very easy to do and we will show you how to proceed in this case study. For a quick guide on converting Docker images into flists, read [this section](../flist_hub/convert_docker_image.md) of the ThreeFold Manual.
+
+## Case Study Objective
+
+The goal of this case study is to give you enough information and tools so that you can build your own flist projects and deploy on the ThreeFold Grid.
+
+We will explore the different files needed to create the flist and explain the overall process. Instead of starting from scratch, we will analyze the Nextcloud flist directory in the [tf-images](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/nextcloud) ThreeFold Tech repository. 
As the project is already done, it will be easier to get an overview of the process and the different components so you can learn to create your own.
+
+## The Overall Process
+
+To give you a bird's-eye view of the whole project, here are the main steps:
+
+* Create the Docker image
+* Push the Docker image to the Docker Hub
+* Convert the Docker image to a Zero-OS flist
+* Deploy a micro VM with the flist on the ThreeFold Playground with Terraform
+
+One important thing to keep in mind is that, when we create an flist, what we are doing is basically automating the required steps to deploy a given workload on the TFGrid. Usually, these steps would be performed manually, step by step, by an individual deploying on a micro or a full VM.
+
+Once we've successfully created an flist, we thus have a very quick way to deploy a specific workload while always obtaining the same result. This is why it is highly recommended to test a given deployment on a full or micro VM before building an flist.
+
+For example, in the case of building a Nextcloud All-in-One flist, the prerequisites would be to successfully deploy a Nextcloud AIO instance on a full VM by executing each step sequentially. This specific example is documented in the Terraform section [Nextcloud All-in-One Guide](../../../system_administrators/terraform/advanced/terraform_nextcloud_aio.md) of the System Administrators book.
+
+In our case, the flist we will be using has some specific configurations depending on the way we deploy Nextcloud (e.g. whether or not we use a gateway and a custom domain). The Terraform **main.tf** we will be sharing later on will thus take all this into account for a smooth deployment.
+
+# Docker Image Creation
+
+As we've said previously, we will explore the different components of the existing Nextcloud flist directory. We thus want to check the existing files and try to understand as much as possible how the different components work together. 
This is also a very good introduction to the ThreeFold ecosystem.
+
+We will be using the files available on the [ThreeFold Tech Github page](https://github.com/threefoldtech). In our case, we want to explore the repository [tf-images](https://github.com/threefoldtech/tf-images).
+
+If you go to the subsection [tfgrid3](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3), you can see many different flists available. In our case, we want to deploy the [Nextcloud All-in-One Flist](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/nextcloud).
+
+## Nextcloud Flist Directory Tree
+
+The Nextcloud flist directory tree is the following:
+
+```
+tree tf-images/tfgrid3/nextcloud
+.
+├── Caddyfile
+├── Dockerfile
+├── README.md
+├── scripts
+│   ├── caddy.sh
+│   ├── nextcloud_conf.sh
+│   ├── nextcloud.sh
+│   ├── sshd_init.sh
+│   └── ufw_init.sh
+└── zinit
+    ├── caddy.yaml
+    ├── dockerd.yaml
+    ├── nextcloud-conf.yaml
+    ├── nextcloud.yaml
+    ├── sshd.yaml
+    ├── ssh-init.yaml
+    ├── ufw-init.yaml
+    └── ufw.yaml
+```
+
+We can see that the directory is composed of a Caddyfile, a Dockerfile, a README.md and two directories, **scripts** and **zinit**. We will now explore each of those components to have a good grasp of the whole repository and to understand how it all works together.
+
+To get a big picture of this directory: the **README.md** file provides the necessary documentation for users to understand the Nextcloud flist, how it is built and how it works; the **Caddyfile** provides the configuration needed to run the reverse proxy; the **Dockerfile** specifies how the Docker image is built, installing tools such as [openssh](https://www.openssh.com/) and the [ufw firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) for secure remote connection; and the two folders, **scripts** and **zinit**, work hand-in-hand.
+
+Each `.yaml` file is a *unit file* for zinit. 
That means it specifies a single service for zinit to start. We'll learn more about these files later, but for now we can just note that each script file (ending with `.sh`) has an associated zinit file to make sure that the script is run. There are also some other files for running programs aside from our scripts.
+
+## Caddyfile
+
+For our Nextcloud deployment, we are using Caddy as a reverse proxy. A reverse proxy is an application that sits in front of back-end applications and forwards client requests to those applications.
+
+Since Nextcloud AIO actually includes two web applications, both Nextcloud itself and the AIO management interface, we use the reverse proxy to serve them both on a single domain. It also allows us to make some changes on the fly to the content of the AIO site to considerably enhance the user experience. Finally, we also use Caddy to provide SSL termination if the user reserves a public IP and no gateway, since otherwise SSL termination is provided by the gateway.
+
+File: `Caddyfile`
+```
+{
+    order replace after encode
+    servers {
+        trusted_proxies static 100.64.0.0/10 10.0.0.0/8
+    }
+}
+
+
+{$DOMAIN}:{$PORT} {
+    handle_path /aio* {
+        replace {
+            href="/ href="/aio/
+            src="/ src="/aio/
+            action=" action="/aio
+            url(' url('/aio
+            `value="" placeholder="nextcloud.yourdomain.com"` `value="{$DOMAIN}"`
+            `"Submit domain"` `"Submit domain" id="domain-submit"`
+            {$REPLACEMENTS}
+            {$BODY}
+        }
+
+        reverse_proxy localhost:8000 {
+            header_down Location "^/(.*)$" "/aio/$1"
+            header_down Refresh "^/(.*)$" "/aio/$1"
+        }
+
+    }
+
+    redir /api/auth/getlogin /aio{uri}
+
+    reverse_proxy localhost:11000
+
+    handle_errors {
+        @502-aio expression {err.status_code} == 502 && path('/aio*')
+        handle @502-aio {
+            header Content-Type text/html
+            respond <<HTML
+                <html><head><title>Nextcloud</title></head>
+                <body>Your Nextcloud management interface isn't ready. If you just deployed this instance, please wait a minute and refresh the page.</body></html>
+ + HTML 200 + } + + @502 expression {err.status_code} == 502 + handle @502 { + redir /* /aio + } + } +} +``` + +We can see in the first section (`trusted_proxies static`) that we set a range of IP addresses as trusted proxy addresses. These include the possible source addresses for gateway traffic, which we mark as trusted for compatibility with some Nextcloud features. + +After the global config at the top, the line `{$DOMAIN}:{$PORT}` defines the port that Caddy will listen to and the domain that we are using for our site. This is important, because in the case that port `443` is specified, Caddy will handle SSL certificates automatically. + +The following blocks define behavior for different URL paths that users might try to access. + +To begin, we have `/aio*`. This is how we place the AIO management app in a "subfolder" of our main domain. To accomplish that we need a few rules that rewrite the contents of the returned pages to correct the links. We also add some text replacements here to accomplish the enhancements mentioned earlier, like automatically filling the domain entry field. + +With the `reverse_proxy` line, we specify that requests to all URLs starting with `/aio` should be sent to the web server running on port `8000` of `localhost`. That's the port where the AIO server is listening, as we'll see below. There's also a couple of header rewrite rules here that correct the links for any redirects the AIO site makes. + +The `redir` line is needed to support a feature where users open the AIO interface from within Nextcloud. This redirects the original request to the correct equivalent within the `/aio` "subfolder". + +Then there's a second `reverse_proxy` line, which is the catch-all for any traffic that didn't get intercepted earlier. This handles the actual Nextcloud app and sends the traffic to its separate server running on port `11000`. 
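If we strip away the rewrites and the error handling, the routing logic described above reduces to a minimal sketch (not the full configuration; `example.com` stands in for the `{$DOMAIN}:{$PORT}` pair):

```
example.com:443 {
    handle_path /aio* {
        reverse_proxy localhost:8000    # AIO management interface
    }
    reverse_proxy localhost:11000       # Nextcloud itself (catch-all)
}
```

Everything else in the real Caddyfile exists to correct links inside the `/aio` "subfolder", to improve the AIO pages and to fail gracefully while the containers start.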
+ +The section starting with `handle_errors` ensures that the user will receive an understandable error message when trying to access the Nextcloud deployment before it has fully started up. + +## Dockerfile + +We recall that to make a Docker image, you need to create a Dockerfile. As per the [Docker documentation](https://docs.docker.com/engine/reference/builder/), a Dockerfile is "a text document that contains all the commands a user could call on the command line to assemble an image". + +File: `Dockerfile` + +```Dockerfile +FROM ubuntu:22.04 + +RUN apt update && \ + apt -y install wget openssh-server curl sudo ufw inotify-tools iproute2 + +RUN wget -O /sbin/zinit https://github.com/threefoldtech/zinit/releases/download/v0.2.5/zinit && \ + chmod +x /sbin/zinit + +RUN wget -O /sbin/caddy 'https://caddyserver.com/api/download?os=linux&arch=amd64&p=github.com%2Fcaddyserver%2Freplace-response&idempotency=43631173212363' && \ + chmod +x /sbin/caddy + +RUN curl -fsSL https://get.docker.com -o /usr/local/bin/install-docker.sh && \ + chmod +x /usr/local/bin/install-docker.sh + +RUN sh /usr/local/bin/install-docker.sh + +COPY ./Caddyfile /etc/caddy/ +COPY ./scripts/ /scripts/ +COPY ./zinit/ /etc/zinit/ +RUN chmod +x /scripts/*.sh + +ENTRYPOINT ["/sbin/zinit", "init"] +``` + +We can see from the first line that this Dockerfile uses a base image of Ubuntu Linux version 22.04. + +With the first **RUN** command, we refresh the package lists, and then install **openssh**, **ufw** and other dependencies for our Nextcloud uses. Note that we also install **curl** so that we can quickly install **Docker**. + +With the second **RUN** command, we install **zinit** and we give it execution permission with the command `chmod +x`. More will be said about zinit in a section below. + +With the third **RUN** command, we install **caddy** and we give it execution permission with the command `chmod +x`. Caddy is an extensible, cross-platform, open-source web server written in Go. 
For more information on Caddy, check the [Caddy website](https://caddyserver.com/).
+
+With the fourth **RUN** command, we download and give proper permissions to the script `install-docker.sh`. On a terminal, the common line to install Docker would be `curl -fsSL https://get.docker.com | sudo sh`. To really understand what's going on here, we can simply follow the link [https://get.docker.com](https://get.docker.com) for more information.
+
+The fifth **RUN** command runs the `install-docker.sh` script to properly install Docker within the image.
+
+Once those commands are run, we proceed to copy into our Docker image the necessary folders `scripts` and `zinit` as well as the Caddyfile. Once this is done, we give execution permissions to all scripts in the scripts folder using `chmod +x`.
+
+Finally, we set an entrypoint in our Dockerfile. As per the [Docker documentation](https://docs.docker.com/engine/reference/builder/), an entrypoint "allows you to configure a container that will run as an executable". Since we are using zinit, we set the entrypoint `/sbin/zinit`.
+
+## README.md File
+
+The **README.md** file has the main goal of explaining clearly to the user the functioning of the Nextcloud directory and its associated flist. In this file, we can explain what our code is doing and offer steps to properly configure the whole deployment.
+
+We also give the necessary steps to create the Docker image and convert it into an flist starting directly with the Nextcloud directory. This can be useful for users who want to create their own flist, instead of using the [official ThreeFold Nextcloud flist](https://hub.grid.tf/tf-official-apps/threefoldtech-nextcloudaio-latest.flist.md).
+
+To read the complete README.md file, go to [this link](https://github.com/threefoldtech/tf-images/blob/development/tfgrid3/nextcloud/README.md).
+
+## scripts Folder
+
+The **scripts** folder contains, unsurprisingly, the scripts necessary to run the Nextcloud instance. 
+ +In the Nextcloud Flist case, there are five scripts: + +* **caddy.sh** +* **nextcloud.sh** +* **nextcloud_conf.sh** +* **sshd_init.sh** +* **ufw_init.sh** + +Let's take a look at each of them. + +### caddy.sh + +File: `caddy.sh` + +```bash +#!/bin/bash +export DOMAIN=$NEXTCLOUD_DOMAIN + +if $IPV4 && ! $GATEWAY; then + export PORT=443 +else + export PORT=80 +fi + +if $IPV4; then + export BODY="\`\`" + +else + export BODY="\`\`" + + export REPLACEMENTS=' `name="talk"` `name="talk" disabled` + `needs ports 3478/TCP and 3478/UDP open/forwarded in your firewall/router` `running the Talk container requires a public IP and this VM does not have one. It is still possible to use Talk in a limited capacity. Please consult the documentation for details`' +fi + +caddy run --config /etc/caddy/Caddyfile +``` + +The script **caddy.sh** sets the proper port depending on the network configuration (e.g. IPv4 or Gateway) in the first if/else section. In the second if/else section, the script also makes sure that the proper domain is given to Nextcloud All-in-One. This quickens the installation process as the user doesn't have to set the domain in Nextcloud AIO after deployment. We also disable a feature that's not relevant if the user didn't reserve an IPv4 address and we insert a note about that. + +### sshd_init.sh + +File: `sshd_init.sh` + +```bash +#!/bin/bash + +mkdir -p ~/.ssh +mkdir -p /var/run/sshd +chmod 600 ~/.ssh +chmod 600 /etc/ssh/* +echo $SSH_KEY >> ~/.ssh/authorized_keys +``` + +This file starts with a shebang (`#!`) that instructs the operating system to execute the following lines using the [Bash shell](https://www.gnu.org/software/bash/). In essence, it lets us write `./sshd_init.sh` with the same outcome as `bash ./sshd_init.sh`, assuming the file is executable. + +The goal of this script is to add the public key within the VM in order for the user to get a secure and remote connection to the VM. The two lines starting with `mkdir` create the necessary folders. 
The lines starting with `chmod` give the owner the permission to write and read the content within the folders. Finally, the line `echo` will write the public SSH key in a file within the VM. In the case that the flist is used as a weblet, the SSH key is set in the Playground profile manager and passed as an environment variable when we deploy the solution. + +### ufw_init.sh + +File: `ufw_init.sh` + +```bash +#!/bin/bash + +ufw default deny incoming +ufw default allow outgoing +ufw allow ssh +ufw allow http +ufw allow https +ufw allow 8443 +ufw allow 3478 +ufw limit ssh +``` + +The goal of the `ufw_init.sh` script is to set the correct firewall parameters to make sure that our deployment is secure while also providing the necessary access for the Nextcloud users. + +The first two lines starting with `ufw default` are self-explanatory. We want to restrain incoming traffic while making sure that outgoing traffic has no restraints. + +The lines starting with `ufw allow` open the ports necessary for our Nextcloud instance. We note that **ssh** is port 22, **http** is port 80 and **https** is port 443. This means, for example, that the line `ufw allow 22` is equivalent to the line `ufw allow ssh`. + +Port 8443 can be used to access the AIO interface, as an alternative to using the `/aio` "subfolder" on deployments with a public IPv4 address. Finally, the port 3478 is used for Nextcloud Talk. + +The line `ufw limit ssh` will provide additional security by denying connection from IP addresses that attempt to initiate 6 or more connections within a 30-second period. + +### nextcloud.sh + +File: `nextcloud.sh` + +```bash +#!/bin/bash + +export COMPOSE_HTTP_TIMEOUT=800 +while ! 
docker info > /dev/null 2>&1; do
+  echo docker not ready
+  sleep 2
+done
+
+docker run \
+--init \
+--sig-proxy=false \
+--name nextcloud-aio-mastercontainer \
+--restart always \
+--publish 8000:8000 \
+--publish 8080:8080 \
+--env APACHE_PORT=11000 \
+--env APACHE_IP_BINDING=0.0.0.0 \
+--env SKIP_DOMAIN_VALIDATION=true \
+--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
+--volume /var/run/docker.sock:/var/run/docker.sock:ro \
+nextcloud/all-in-one:latest
+```
+
+The **nextcloud.sh** script is where the real action starts. This is where we run the Nextcloud All-in-One docker image.
+
+Before discussing the main part of this script, we note that the `while` loop is used to ensure that the `docker run` command starts only after the Docker daemon has properly started.
+
+The code section starting with `docker run` is taken from the [Nextcloud All-in-One repository on Github](https://github.com/nextcloud/all-in-one) with some slight modifications. The last line indicates that the Docker image being pulled will always be the latest version of Nextcloud All-in-One.
+
+We note here that Nextcloud AIO is published on ports 8000 and 8080. We also note that we set restart to **always**. This is very important as it will make sure that the Nextcloud instance is restarted if the Docker daemon reboots. We take the opportunity to note that, given the way zinit configures micro VMs, the Docker daemon restarts automatically after a reboot. This fact, combined with the line `--restart always`, ensures that the Nextcloud instance will restart after a VM reboot.
+
+We also set **11000** as the Apache port with an IP binding of **0.0.0.0**. For our deployment, we want to skip the domain validation, thus it is set to **true**.
+
+The line `--sig-proxy=false` means that, when this command is run interactively, it prevents the user from accidentally killing the spawned AIO container. 
While it is not of great importance in our case, it means that zinit will not kill the container if the service is stopped. + +For more information on this, we invite the readers to consult the [Nextcloud documentation](https://github.com/nextcloud/all-in-one#how-to-use-this). + +### nextcloud_conf.sh + +File: `nextcloud_conf.sh` + +```bash +#!/bin/bash + +# Wait for the nextcloud container to become healthy. Note that we can set the +# richtext config parameters even before the app is installed + +nc_ready () { + until [[ "`docker inspect -f {{.State.Health.Status}} nextcloud-aio-nextcloud 2> /dev/null`" == "healthy" ]]; do + sleep 1; + done; +} + +# When a gateway is used, AIO sets the WOPI allow list to only include the +# gateway IP. Since requests don't originate from the gateway IP, they are +# blocked by default. Here we add the public IP of the VM, or of the router +# upstream of the node +# See: github.com/nextcloud/security-advisories/security/advisories/GHSA-24x8-h6m2-9jf2 + +if $IPV4; then + interface=$(ip route show default | cut -d " " -f 5) + ipv4_address=$(ip a show $interface | grep -Po 'inet \K[\d.]+') +fi + +if $GATEWAY; then + nc_ready + wopi_list=$(docker exec --user www-data nextcloud-aio-nextcloud php occ config:app:get richdocuments wopi_allowlist) + + if $IPV4; then + ip=$ipv4_address + else + ip=$(curl -fs https://ipinfo.io/ip) + fi + + if [[ $ip ]] && ! echo $wopi_list | grep -q $ip; then + docker exec --user www-data nextcloud-aio-nextcloud php occ config:app:set richdocuments wopi_allowlist --value=$ip + fi +fi + + +# If the VM has a gateway and a public IPv4, then AIO will set the STUN/TURN +# servers to the gateway domain which does not point to the public IP, so we +# use the IP instead. In this case, we must wait for the Talk app to be +# installed before changing the settings. 
With inotifywait, we don't need
+# a busy loop that could run indefinitely
+
+apps_dir=/mnt/data/docker/volumes/nextcloud_aio_nextcloud/_data/custom_apps/
+
+if $GATEWAY && $IPV4; then
+  if [[ ! -d ${apps_dir}spreed ]]; then
+    inotifywait -qq -e create --include spreed $apps_dir
+  fi
+  nc_ready
+
+  turn_list=$(docker exec --user www-data nextcloud-aio-nextcloud php occ talk:turn:list)
+  turn_secret=$(echo "$turn_list" | grep secret | cut -d " " -f 4)
+  turn_server=$(echo "$turn_list" | grep server | cut -d " " -f 4)
+
+  if ! echo $turn_server | grep -q $ipv4_address; then
+    docker exec --user www-data nextcloud-aio-nextcloud php occ talk:turn:delete turn $turn_server udp,tcp
+    docker exec --user www-data nextcloud-aio-nextcloud php occ talk:turn:add turn $ipv4_address:3478 udp,tcp --secret=$turn_secret
+  fi
+
+  stun_list=$(docker exec --user www-data nextcloud-aio-nextcloud php occ talk:stun:list)
+  stun_server=$(echo $stun_list | cut -d " " -f 2)
+
+  if ! echo $stun_server | grep -q $ipv4_address; then
+    docker exec --user www-data nextcloud-aio-nextcloud php occ talk:stun:add $ipv4_address:3478
+    docker exec --user www-data nextcloud-aio-nextcloud php occ talk:stun:delete $stun_server
+  fi
+fi
+```
+
+The script **nextcloud_conf.sh** ensures that the network settings are properly configured. In the first section, we use a function called **nc_ready ()**. This function makes sure that the rest of the script only starts when the Nextcloud container is healthy.
+
+We note that the comments present in this script explain very well what is happening. In short, we want to configure the Nextcloud instance according to the user's choice of network. For example, the user can decide to deploy using a ThreeFold gateway or a standard IPv4 connection. If the VM has a gateway and a public IPv4, then Nextcloud All-in-One will set the STUN/TURN servers to the gateway domain which does not point to the public IP, so we use the IP instead. 
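One pattern worth retaining from this script is its idempotent guard: a value is only added to a list if it is not already present, so the script can safely run again after every reboot. A standalone sketch, with plain strings standing in for the `occ config:app:get`/`config:app:set` round trip and dummy addresses in place of the real ones:

```bash
# Idempotent-add sketch mirroring the wopi_allowlist update above.
wopi_list="192.0.2.10 192.0.2.20"   # pretend current allow list
ip="203.0.113.5"                    # pretend public IP of the VM

add_if_missing() {
    # Only append when the IP is non-empty and not already in the list;
    # the real script calls 'occ config:app:set' here instead.
    if [ -n "$ip" ] && ! echo "$wopi_list" | grep -q "$ip"; then
        wopi_list="$wopi_list $ip"
    fi
}

add_if_missing
add_if_missing   # second run is a no-op, as after a VM reboot
echo "$wopi_list"
```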
+
+## zinit Folder
+
+Next, we want to take a look at the zinit folder.
+
+But first, what is zinit? In a nutshell, zinit is a process manager (pid 1) that knows how to launch, monitor and sort dependencies. It thus executes targets in the proper order. For more information on zinit, check the [zinit repository](https://github.com/threefoldtech/zinit).
+
+When we start the Docker container, zinit will parse each unit file in the `/etc/zinit` folder and execute the contained command according to the specified parameters.
+
+In the Nextcloud Flist case, there are eight **.yaml** files:
+
+* **caddy.yaml**
+* **dockerd.yaml**
+* **nextcloud-conf.yaml**
+* **nextcloud.yaml**
+* **ssh-init.yaml**
+* **sshd.yaml**
+* **ufw-init.yaml**
+* **ufw.yaml**
+
+
+### ssh-init.yaml and sshd.yaml
+
+We start by taking a look at the **ssh-init.yaml** and **sshd.yaml** files.
+
+File: `ssh-init.yaml`
+
+```yaml
+exec: /scripts/sshd_init.sh
+oneshot: true
+```
+
+In this zinit service file, we define a service named `ssh-init`, where we tell zinit to execute the following command: `exec: /scripts/sshd_init.sh`. This unit file thus runs the script `sshd_init.sh` we covered in a previous section.
+
+We also note that `oneshot` is set to `true`, meaning that the service should only be executed once. This directive is often used for setup scripts that only need to run once. When it is not specified, the default value of `false` means that zinit will continue to start up a service if it ever dies.
+
+Now, we take a look at the file `sshd.yaml`:
+
+File: `sshd.yaml`
+
+```yaml
+exec: bash -c "/usr/sbin/sshd -D"
+after:
+  - ssh-init
+```
+
+We can see that this file executes a line from the Bash shell. It is important to note that, with zinit and .yaml files, you can easily order the execution of services with the `after` directive. In this case, it means that the service `sshd` will only run after `ssh-init`. 
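The `after` directive just described generalizes to the whole folder: the unit files' dependencies form a small graph. Purely as an illustration (zinit resolves this ordering itself), coreutils' `tsort` can print one valid startup order from the "dependency service" pairs declared in the four units that carry an `after` key:

```bash
# Each line is "dependency service", taken from the 'after' keys of the
# unit files; tsort prints one valid topological (startup) order.
tsort <<'EOF'
ssh-init sshd
ufw-init ufw
dockerd nextcloud
nextcloud nextcloud-conf
EOF
```

In any order it prints, `dockerd` comes before `nextcloud`, which comes before `nextcloud-conf`, exactly the ordering the deployment relies on.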
+ +### ufw-init.yaml and ufw.yaml + +Let's take a look at the files **ufw-init.yaml** and **ufw.yaml**. + +File: `ufw-init.yaml` + +```yaml +exec: /scripts/ufw_init.sh +oneshot: true +``` + +The file `ufw-init.yaml` is very similar to the previous file `ssh-init.yaml`. + +File: `ufw.yaml` + +```yaml +exec: ufw --force enable +oneshot: true +after: + - ufw-init +``` + +We can see that the file `ufw.yaml` will only run once and only after the file `ufw-init.yaml` has been run. This is important since the file `ufw-init.yaml` executes the script `ufw_init.sh`. We recall this script allows different ports in the firewall. Once those ports are defined, we can then run the command `ufw --force enable`. This will start the ufw firewall. + +### caddy.yaml + +```yaml +exec: /scripts/caddy.sh +oneshot: true +``` + +This is also very similar to previous files and just runs the Caddy script as a oneshot. + +### dockerd.yaml + +We now take a look at the file **dockerd.yaml**. + +File: `dockerd.yaml` + +```yaml +exec: /usr/bin/dockerd --data-root /mnt/data/docker +``` + +This file will run the [dockerd daemon](https://docs.docker.com/engine/reference/commandline/dockerd/) which is the persistent process that manages containers. We also note that it sets the data to be stored in the directory **/mnt/data/docker**, which is important because we will mount a virtual disk there that will provide better performance, especially for Docker's storage driver. + +### nextcloud.yaml + +File: `nextcloud.yaml` + +```yaml +exec: /scripts/nextcloud.sh +after: + - dockerd +``` + +The file `nextcloud.yaml` runs after dockerd. + +This file will execute the `nextcloud.sh` script we saw earlier. We recall that this script starts the Nextcloud All-in-One image. + +### nextcloud-conf.yaml + +File: `nextcloud-conf.yaml` + +```yaml +exec: /scripts/nextcloud_conf.sh +oneshot: true +after: + - nextcloud +``` + +Finally, the file `nextcloud-conf.yaml` runs after `nextcloud.yaml`. 
This file will execute the `nextcloud-conf.sh` script we saw earlier. We recall that this script configures the Talk (STUN/TURN) network settings of the Nextcloud instance. At this point, the deployment is complete.

## Putting it All Together

We've now gone through all the files in the Nextcloud flist directory. You should now have a proper understanding of the interplay between the zinit unit files (.yaml) and the scripts (.sh), as well as the basic steps to build a Dockerfile and to write clear documentation.

To build your own Nextcloud docker image, you simply need to clone this directory to your local computer and follow the steps presented in the next section [Docker Publishing Steps](#docker-publishing-steps).

To have a look at the complete directory, you can always refer to the [Nextcloud flist directory](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/nextcloud) on the ThreeFold tf-images repository.

# Docker Publishing Steps

In this section, we show the necessary steps to publish the Docker image to the Docker Hub.

To do so, we need to create an account and an access token. Then we will build the Docker image and push it to the Docker Hub.

## Create Account and Access Token

To be able to push Docker images to the Docker Hub, you first need to create a Docker Hub account. This is very easy, and there are many great tutorials online about Docker.
Here are the steps to create an account and an access token:

* Go to the [Docker Hub](https://hub.docker.com/)
* Click `Register` and follow the steps given by Docker
* On the top right corner, click on your account name and select `Account Settings`
* On the left menu, click on `Security`
* Click on `New Access Token`
* Choose an Access Token description that you will easily identify, then click `Generate`
  * Make sure to set the permissions `Read, Write, Delete`
* On your local computer, make sure that the Docker daemon is running
* Write the following in the command line to connect to the Docker Hub:
  * Run `docker login -u <docker_hub_username>`
  * Set the password

You now have access to the Docker Hub from your local computer. We will then proceed to push the Docker image to the Docker Hub.

## Build and Push the Docker Image

* Make sure the Docker daemon is running
* Build the docker container (note that, while the tag is optional, it can help to track different versions)
  * Template:
    * ```
      docker build -t <docker_hub_username>/<docker_image_name>:<tag_name> .
      ```
  * Example:
    * ```
      docker build -t dockerhubuser/nextcloudaio .
      ```
* Push the docker container to the [Docker Hub](https://hub.docker.com/)
  * Template:
    * ```
      docker push <docker_hub_username>/<docker_image_name>
      ```
  * Example:
    * ```
      docker push dockerhubuser/nextcloudaio
      ```
* You should now see your docker image on the [Docker Hub](https://hub.docker.com/) when you go into the menu option `My Profile`.
  * Note that you can access this link quickly with the following template:
    * ```
      https://hub.docker.com/u/<docker_hub_username>
      ```

# Convert the Docker Image to an Flist

We will now convert the Docker image into a Zero-OS flist.

* Go to the [ThreeFold Hub](https://hub.grid.tf/).
* Sign in with the ThreeFold Connect app.
* Go to the [Docker Hub Converter](https://hub.grid.tf/docker-convert) section.
* Next to `Docker Image Name`, add the docker image repository and name, see the example below:
  * Template:
    * `<docker_hub_username>/<docker_image_name>:<tag_name>`
  * Example:
    * `dockerhubuser/nextcloudaio:latest`
* Click `Convert the docker image`.
* Once the conversion is done, the flist is available as a public link on the ThreeFold Hub.
* To get the flist URL, go to the [TF Hub main page](https://hub.grid.tf/), scroll down to your 3Bot ID and click on it.
* Under `Name`, you will see all your available flists.
* Right-click on the flist you want and select `Copy Clean Link`. This URL will be used when deploying on the ThreeFold Playground. We show below the template and an example of what the flist URL looks like.
  * Template:
    * ```
      https://hub.grid.tf/<3BOT_name.3bot>/<docker_hub_username>-<docker_image_name>-<tag_name>.flist
      ```
  * Example:
    * ```
      https://hub.grid.tf/tf-official-apps/threefoldtech-nextcloudaio-latest.flist
      ```

# Deploy Nextcloud AIO on the TFGrid with Terraform

We now proceed to deploy a Nextcloud All-in-One instance by using the Nextcloud flist we've just created.

To do so, we will deploy a micro VM with the Nextcloud flist on the TFGrid using Terraform.

## Create the Terraform Files

For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads.

To facilitate the deployment, only the environment variables file needs to be adjusted. The **main.tf** file reads those environment variables (e.g. **var.size** for the disk size) and thus you do not need to change it. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy with the main.tf file as is.

For this example, we will be deploying with a ThreeFold gateway as well as a gateway domain.

* Copy the following content and save the file under the name `credentials.auto.tfvars`:

```
mnemonics = "..."
+network = "main" +SSH_KEY = "..." + +size = "50" +cpu = "2" +memory = "4096" + +gateway_id = "50" +vm1_id = "5453" + +deployment_name = "nextcloudgateway" +nextcloud_flist = "https://hub.grid.tf/tf-official-apps/threefoldtech-nextcloudaio-latest.flist" +``` + +Make sure to add your own seed phrase and SSH public key. Simply replace the three dots by the content. Note that you can deploy on a different node than node 5453 for the **vm1** node. If you want to deploy on another node than node 5453 for the **gateway** node, make sure that you choose a gateway node. To find a gateway node, go on the [ThreeFold Dashboard](https://dashboard.grid.tf/) Nodes section of the Explorer and select **Gateways (Only)**. + +Obviously, you can decide to increase or modify the quantity in the variables `size`, `cpu` and `memory`. + +Note that in our case, we set the flist to be the official Nextcloud flist. Simply replace the URL with your newly created Nextcloud flist to test it! + +* Copy the following content and save the file under the name `main.tf`: + +``` +variable "mnemonics" { + type = string + default = "your mnemonics" +} + +variable "network" { + type = string + default = "main" +} + +variable "SSH_KEY" { + type = string + default = "your SSH pub key" +} + +variable "deployment_name" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +variable "nextcloud_flist" { + type = string +} + +variable "gateway_id" { + type = string +} + +variable "vm1_id" { + type = string +} + + +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { + mnemonics = var.mnemonics + network = var.network +} + +data "grid_gateway_domain" "domain" { + node = var.gateway_id + name = var.deployment_name +} + +resource "grid_network" "net" { + nodes = [var.gateway_id, var.vm1_id] + ip_range = "10.1.0.0/16" + name = "network" + description = "My network" + 
add_wg_access = true +} + +resource "grid_deployment" "d1" { + node = var.vm1_id + network_name = grid_network.net.name + + disks { + name = "data" + size = var.size + } + + vms { + name = "vm1" + flist = var.nextcloud_flist + cpu = var.cpu + memory = var.memory + rootfs_size = 15000 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + GATEWAY = "true" + IPV4 = "false" + NEXTCLOUD_DOMAIN = data.grid_gateway_domain.domain.fqdn + } + mounts { + disk_name = "data" + mount_point = "/mnt/data" + } + } +} + +resource "grid_name_proxy" "p1" { + node = var.gateway_id + name = data.grid_gateway_domain.domain.name + backends = [format("http://%s:80", grid_deployment.d1.vms[0].ip)] + network = grid_network.net.name + tls_passthrough = false +} + +output "wg_config" { + value = grid_network.net.access_wg_config +} + +output "vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "vm1_ygg_ip" { + value = grid_deployment.d1.vms[0].ygg_ip +} + +output "fqdn" { + value = data.grid_gateway_domain.domain.fqdn +} +``` + +## Deploy Nextcloud with Terraform + +We now deploy Nextcloud with Terraform. Make sure that you are in the correct folder containing the main and variables files. + +* Initialize Terraform: + * ``` + terraform init + ``` + +* Apply Terraform to deploy Nextcloud: + * ``` + terraform apply + ``` + +Note that, at any moment, if you want to see the information on your Terraform deployment, write the following: + * ``` + terraform show + ``` + +## Nextcloud Setup + +Once you've deployed Nextcloud, you can access the Nextcloud setup page by pasting the URL displayed on the line `fqdn = "..."` of the Terraform output. + +# Conclusion + +In this case study, we've seen the overall process of creating a new flist to deploy a Nextcloud instance on a Micro VM on the TFGrid with Terraform. 
+ +If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel. \ No newline at end of file diff --git a/collections/developers/flist/flist_case_studies/img/nextcloud_logo.jpeg b/collections/developers/flist/flist_case_studies/img/nextcloud_logo.jpeg new file mode 100644 index 0000000..2d85227 Binary files /dev/null and b/collections/developers/flist/flist_case_studies/img/nextcloud_logo.jpeg differ diff --git a/collections/developers/flist/flist_hub/api_token.md b/collections/developers/flist/flist_hub/api_token.md new file mode 100644 index 0000000..5ca968f --- /dev/null +++ b/collections/developers/flist/flist_hub/api_token.md @@ -0,0 +1,33 @@ +

<h1> TF Hub API Token </h1>

+ +

<h2>Table of Contents</h2>

- [Generate an API Token](#generate-an-api-token)
- [Verify the Token Validity](#verify-the-token-validity)

***

## Generate an API Token

To generate an API Token on the TF Hub, follow these steps:

* Go to the [ThreeFold Hub](https://hub.grid.tf/)
* Open the top right drop-down menu
* Click on `Generate API Token`
* Take note of the token and keep it somewhere safe

## Verify the Token Validity

To make sure the generated token is valid, write the following in the terminal with your own API Token:

```bash
curl -H "Authorization: bearer <API_TOKEN>" https://hub.grid.tf/api/flist/me
```

You should see the following line with your own 3Bot ID:

```bash
{"status": "success", "payload": {"username": "<3BotID>.3bot"}}
```

You can then use this API Token in the terminal to [get and update information through the API](./zos_hub.md#get-and-update-information-through-the-api).

\ No newline at end of file diff --git a/collections/developers/flist/flist_hub/convert_docker_image.md new file mode 100644 index 0000000..e6fba85 --- /dev/null +++ b/collections/developers/flist/flist_hub/convert_docker_image.md @@ -0,0 +1,45 @@ +

<h1> Convert Docker Image to Flist </h1>

+ +

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Upload the Image](#upload-the-image)
- [Flist on the Hub](#flist-on-the-hub)

***

## Introduction

We show the steps to convert a docker image to an flist.

## Upload the Image

1. Upload the Docker image to Docker Hub with the following command:

```bash
docker push <docker_hub_username>/<docker_image_name>
```

2. Navigate to the docker converter link: https://hub.grid.tf/docker-convert
   ![ ](./img/docker_convert.png)

3. Copy the name of the uploaded Docker image to the Docker Image Name field.

4. Then press the convert button.

When the image is ready, some information will be displayed.

![ ](./img/flist_ready.png)

## Flist on the Hub

To navigate to the created flist, you can search for the newly created file name in the search tab.

![ ](./img/search.png)

You can also navigate to your repository in the contributors section of the Zero-OS Hub and navigate to the newly created flist.

Then press the preview button to display the flist's URL and some other data.
+ +![ ](./img/preview.png) + diff --git a/collections/developers/flist/flist_hub/img/docker_convert.png b/collections/developers/flist/flist_hub/img/docker_convert.png new file mode 100644 index 0000000..1be37a0 Binary files /dev/null and b/collections/developers/flist/flist_hub/img/docker_convert.png differ diff --git a/collections/developers/flist/flist_hub/img/flist_ready.png b/collections/developers/flist/flist_hub/img/flist_ready.png new file mode 100644 index 0000000..c8913d5 Binary files /dev/null and b/collections/developers/flist/flist_hub/img/flist_ready.png differ diff --git a/collections/developers/flist/flist_hub/img/hub_flist.png b/collections/developers/flist/flist_hub/img/hub_flist.png new file mode 100644 index 0000000..3e5331a Binary files /dev/null and b/collections/developers/flist/flist_hub/img/hub_flist.png differ diff --git a/collections/developers/flist/flist_hub/img/preview.png b/collections/developers/flist/flist_hub/img/preview.png new file mode 100644 index 0000000..bd555ca Binary files /dev/null and b/collections/developers/flist/flist_hub/img/preview.png differ diff --git a/collections/developers/flist/flist_hub/img/search.png b/collections/developers/flist/flist_hub/img/search.png new file mode 100644 index 0000000..a5128cd Binary files /dev/null and b/collections/developers/flist/flist_hub/img/search.png differ diff --git a/collections/developers/flist/flist_hub/zos_hub.md b/collections/developers/flist/flist_hub/zos_hub.md new file mode 100644 index 0000000..95bdbc8 --- /dev/null +++ b/collections/developers/flist/flist_hub/zos_hub.md @@ -0,0 +1,142 @@ +

<h1> Zero-OS Hub </h1>

+ +

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Upload Your Files](#upload-your-files)
- [Merge Multiple Flists](#merge-multiple-flists)
- [Convert Docker Images and Tar Files](#convert-docker-images-and-tar-files)
- [Upload Customize Flists](#upload-customize-flists)
- [Upload Homemade Flists](#upload-homemade-flists)
- [Upload your Existing Flist to Reduce Bandwidth](#upload-your-existing-flist-to-reduce-bandwidth)
- [Authenticate via 3Bot](#authenticate-via-3bot)
- [Get and Update Information Through the API](#get-and-update-information-through-the-api)
  - [Public API Endpoints (No Authentication Required)](#public-api-endpoints-no-authentication-required)
  - [Restricted API Endpoints (Authentication Required)](#restricted-api-endpoints-authentication-required)
  - [API Request Templates and Examples](#api-request-templates-and-examples)

***

## Introduction

The [ThreeFold Zero-OS Hub](https://hub.grid.tf/) allows you to do multiple things and acts as a public, centralized repository of flists.

The ZOS Hub is mainly there to give an easy way to distribute flist files, which are databases of metadata that you can use in any Zero-OS container or virtual machine.

## Upload Your Files

To easily publish your files, you can upload a `.tar.gz` archive and the hub will automatically convert it to an flist
and store the contents in the hub backend. After that, you can use your flist directly in a container.

## Merge Multiple Flists

To reduce the maintenance of your images, products, etc., flists allow you to keep your
different products and files separate and then merge them with another flist to make a usable image, without
having to keep the whole system up-to-date.

Example: there is an official `ubuntu 16.04` flist image; you can make an flist which contains your application files
and then merge it with the ubuntu flist, so the resulting flist is your product on the latest version of ubuntu.
You don't need to take care of the base system yourself; just merge your flist with the one provided.

## Convert Docker Images and Tar Files

The ZOS Hub allows you to convert Docker Hub images and Tar files into flists thanks to the Docker Hub Converter.

You can convert a docker image (e.g. `busybox`, `ubuntu`, `fedora`, `couchdb`, ...) to an flist directly from the backend. This allows you to use your existing docker image in our infrastructure out-of-the-box. Go to the [Docker Hub Converter](https://hub.grid.tf/docker-convert) to use this feature. For more information on the process, read the section [Convert Docker Image to flist](./convert_docker_image.md) of the TF Manual.

You can also easily convert a Tar file into an flist via the [Upload section](https://hub.grid.tf/upload) of the ZOS Hub.

## Upload Customize Flists

The ZOS Hub also allows you to customize an flist via the [Customization section](https://hub.grid.tf/merge) of the ZOS Hub. Note that this is currently in beta.

## Upload Homemade Flists

The ZOS Hub allows you to upload flists that you've made yourself via the section [Upload a homemade flist](https://hub.grid.tf/upload-flist).

## Upload your Existing Flist to Reduce Bandwidth

In addition, with the hub-client (a side product) you can efficiently upload file contents
to bring the backend up-to-date and upload a self-made flist. This allows you to do all the work yourself
and gives you full control of the chain. The only restriction is that the contents of the files you host
on the flist need to exist on the backend, otherwise your flist will be rejected.

## Authenticate via 3Bot

All operations on the ZOS Hub need to be done via `3Bot` (default) authentication. Only downloading an flist can be done anonymously. To authenticate requests via the API, you need to generate an API Token as shown in the section [ZOS Hub API Token](./api_token.md).
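As a quick sanity check before scripting against the API described in the next section, you can wrap the authentication header in a small helper. This is a dry-run sketch: it only prints the `curl` command (with the illustrative token value `abc12`) so you can inspect it before running; pipe the output to `sh` to actually execute it.

```bash
hub="https://hub.grid.tf"
api_token="abc12"   # illustrative; generate a real one via the hub drop-down menu

# Print the curl command for a given endpoint instead of running it (dry run)
hub_get() {
    printf 'curl -H "Authorization: bearer %s" %s%s\n' "$api_token" "$hub" "$1"
}

hub_get /api/flist/me
# prints: curl -H "Authorization: bearer abc12" https://hub.grid.tf/api/flist/me
```

Keeping the token in a variable this way avoids retyping it for every endpoint you try.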
## Get and Update Information Through the API

The hub hosts a basic REST API which gives you information about flists and lets you rename them, remove them, etc.

To use authenticated endpoints, you need to provide a valid itsyou.online `jwt` via the `Authorization: bearer <jwt>` header.
This `jwt` can contain a special `memberof` claim to allow you cross-repository actions.

If your `jwt` contains `memberof`, you can choose which user you want to use by specifying the cookie `active-user`.
See the examples below.

### Public API Endpoints (No Authentication Required)

- `/api/flist` (**GET**)
  - Returns a json array with all repository/flists found
- `/api/repositories` (**GET**)
  - Returns a json array with all repositories found
- `/api/fileslist` (**GET**)
  - Returns a json array with all repositories and files found
- `/api/flist/<repository>` (**GET**)
  - Returns a json array of each flist found inside the specified repository.
  - Each entry contains `filename`, `size`, `updated` date and `type` (regular or symlink), optionally `target` if it's a symbolic link.
- `/api/flist/<repository>/<flist>` (**GET**)
  - Returns a json object with the flist dump (full file list)

### Restricted API Endpoints (Authentication Required)

- `/api/flist/me` (**GET**)
  - Returns a json object with some basic information about yourself (the authenticated user)
- `/api/flist/me/<flist>` (**GET**, **DELETE**)
  - **GET**: same as `/api/flist/<repository>/<flist>`
  - **DELETE**: remove that specific flist
- `/api/flist/me/<source>/link/<linkname>` (**GET**)
  - Create a symbolic link `linkname` pointing to `source`
- `/api/flist/me/<linkname>/crosslink/<repository>/<sourcename>` (**GET**)
  - Create a cross-repository symbolic link `linkname` pointing to `repository/sourcename`
- `/api/flist/me/<source>/rename/<destination>` (**GET**)
  - Rename `source` to `destination`
- `/api/flist/me/promote/<sourcerepo>/<sourcefile>/<localname>` (**GET**)
  - Copy the cross-repository `sourcerepo/sourcefile` to your `[local-repository]/localname` file
  - This is useful when you want to copy an flist from one repository to another one, if your jwt allows it
- `/api/flist/me/upload` (**POST**)
  - **POST**: uploads a `.tar.gz` archive and converts it to an flist
  - Your file needs to be passed via the `file` form attribute
- `/api/flist/me/upload-flist` (**POST**)
  - **POST**: uploads a `.flist` file and stores it
  - Note: the flist is checked and its full contents are verified to exist on the backend; if some chunks are missing, the file will be discarded.
  - Your file needs to be passed via the `file` form attribute
- `/api/flist/me/merge/<target>` (**POST**)
  - **POST**: merge multiple flists together
  - You need to pass a json array of flists (in the form `repository/file`) as the POST body
- `/api/flist/me/docker` (**POST**)
  - **POST**: converts a docker image to an flist
  - You need to pass the `image` form argument with the docker image name
  - The resulting conversion will stay on your repository

### API Request Templates and Examples

The main template to request information from the API is the following:

```bash
curl -H "Authorization: bearer <API_TOKEN>" https://hub.grid.tf/api/flist/me/<FLIST_NAME> -X <COMMAND>
```

For example, if we take the command `DELETE` of the previous section and we want to delete the flist `example-latest.flist` with the API Token `abc12`, we would write the following line:

```bash
curl -H "Authorization: bearer abc12" https://hub.grid.tf/api/flist/me/example-latest.flist -X DELETE
```

As another template example, if we wanted to rename the flist `current-name-latest.flist` to `new-name-latest.flist`, we would use the following template:

```bash
curl -H "Authorization: bearer <API_TOKEN>" https://hub.grid.tf/api/flist/me/<CURRENT_FLIST_NAME>/rename/<NEW_FLIST_NAME> -X GET
```

To upload an flist to the ZOS Hub, you would use the following template:

```bash
curl -H "Authorization: bearer <API_TOKEN>" -X POST -F file=@my-local-archive.tar.gz \
    https://hub.grid.tf/api/flist/me/upload
```

\ No newline at end of file diff --git a/collections/developers/flist/grid3_supported_flists.md new file mode 100644 index 0000000..537d0b3 --- /dev/null +++ b/collections/developers/flist/grid3_supported_flists.md @@ -0,0 +1,26 @@ +

<h1> Supported Flists </h1>

+ +

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Flists and Parameters](#flists-and-parameters)
- [More Flists](#more-flists)

***

## Introduction

We provide basic information on the currently supported Flists.

## Flists and Parameters

|flist|entrypoint|env vars|
|:--:|:--:|--|
|[Alpine](https://hub.grid.tf/tf-official-apps/threefoldtech-alpine-3.flist.md)|`/entrypoint.sh`|`SSH_KEY`|
|[Ubuntu](https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist.md)|`/init.sh`|`SSH_KEY`|
|[CentOS](https://hub.grid.tf/tf-official-apps/threefoldtech-centos-8.flist.md)|`/entrypoint.sh`|`SSH_KEY`|
|[K3s](https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist.md)|`/sbin/zinit init`|- `SSH_KEY`<br>- `K3S_TOKEN`<br>- `K3S_DATA_DIR`<br>- `K3S_FLANNEL_IFACE`<br>- `K3S_NODE_NAME`<br>- `K3S_URL` `https://${masterIp}:6443`|

## More Flists

You can convert any docker image to an flist. Feel free to explore the different possibilities on the [ThreeFold Hub](https://hub.grid.tf/).

\ No newline at end of file diff --git a/collections/developers/go/grid3_go_gateways.md new file mode 100644 index 0000000..66f5f68 --- /dev/null +++ b/collections/developers/go/grid3_go_gateways.md @@ -0,0 +1,104 @@ +

<h1> Deploying Gateways </h1>

+ +

<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Gateway Name](#gateway-name) +- [Example](#example) +- [Gateway FQDN](#gateway-fqdn) +- [Example](#example-1) + +*** + +## Introduction + +After [deploying a VM](./grid3_go_vm.md) you can deploy Gateways to further expose your VM. + +## Gateway Name + +This generates a FQDN for your VM. + +## Example + +```go +import ( + "fmt" + + "github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer" + "github.com/threefoldtech/tfgrid-sdk-go/grid-client/workloads" + "github.com/threefoldtech/tfgrid-sdk-go/grid-proxy/pkg/types" + "github.com/threefoldtech/zos/pkg/gridtypes/zos" +) + +func main() { + + // Create Threefold plugin client + tfPluginClient, err := deployer.NewTFPluginClient(mnemonics, "sr25519", network, "", "", true, false) + + // Get a free node to deploy + domain := true + status := "up" + filter := types.NodeFilter{ + Domain: &domain, + Status: &status, + } + nodeIDs, err := deployer.FilterNodes(tfPluginClient.GridProxyClient, filter) + nodeID := uint32(nodeIDs[0].NodeID) + + // Create gateway to deploy + gateway := workloads.GatewayNameProxy{ + NodeID: nodeID, + Name: "mydomain", + Backends: []zos.Backend{"http://[300:e9c4:9048:57cf:6d98:42c6:a7bf:2e3f]:8080"}, + TLSPassthrough: true, + } + err = tfPluginClient.GatewayNameDeployer.Deploy(ctx, &gateway) + + gatewayObj, err := tfPluginClient.State.LoadGatewayNameFromGrid(nodeID, gateway.Name, gateway.Name) + fmt.Println(gatewayObj.FQDN) +} + +``` + +This deploys a Gateway Name Proxy that forwards requests to your VM. You should see an output like this: + +```bash +mydomain.gent01.dev.grid.tf +``` + +## Gateway FQDN + +In case you have a FQDN already pointing to the node, you can expose your VM using Gateway FQDN. 
## Example

```go
import (
	"github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer"
	"github.com/threefoldtech/tfgrid-sdk-go/grid-client/workloads"
	"github.com/threefoldtech/zos/pkg/gridtypes/zos"
)

func main() {

	// Create Threefold plugin client
	tfPluginClient, err := deployer.NewTFPluginClient(mnemonics, "sr25519", network, "", "", "", 0, true)

	// Create gateway to deploy
	gateway := workloads.GatewayFQDNProxy{
		NodeID:         14,
		Name:           "mydomain",
		Backends:       []zos.Backend{"http://[300:e9c4:9048:57cf:6d98:42c6:a7bf:2e3f]:8080"},
		FQDN:           "my.domain.com",
		TLSPassthrough: true,
	}
	err = tfPluginClient.GatewayFQDNDeployer.Deploy(ctx, &gateway)

	gatewayObj, err := tfPluginClient.State.LoadGatewayFQDNFromGrid(nodeID, gateway.Name, gateway.Name)
}
```

This deploys a Gateway FQDN Proxy that forwards requests from node 14's public IP to your VM.

diff --git a/collections/developers/go/grid3_go_gpu.md new file mode 100644 index 0000000..3ff0c94 --- /dev/null +++ b/collections/developers/go/grid3_go_gpu.md @@ -0,0 +1,6 @@ +

<h1> GPU and Go </h1>

+ +

<h2>Table of Contents</h2>

+ +- [GPU and Go Introduction](grid3_go_gpu_support.md) +- [Deploy a VM with GPU](grid3_go_vm_with_gpu.md) \ No newline at end of file diff --git a/collections/developers/go/grid3_go_gpu_support.md b/collections/developers/go/grid3_go_gpu_support.md new file mode 100644 index 0000000..9aadceb --- /dev/null +++ b/collections/developers/go/grid3_go_gpu_support.md @@ -0,0 +1,116 @@ +

<h1> GPU Support </h1>

+ +

<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Example](#example) +- [More Information](#more-information) + +*** + +## Introduction + +We present here an example on how to deploy using the Go client. This is part of our integration tests. + + + +## Example + +```go +func TestVMWithGPUDeployment(t *testing.T) { + tfPluginClient, err := setup() + assert.NoError(t, err) + + ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute) + defer cancel() + + publicKey, privateKey, err := GenerateSSHKeyPair() + assert.NoError(t, err) + + twinID := uint64(tfPluginClient.TwinID) + nodeFilter := types.NodeFilter{ + Status: &statusUp, + FreeSRU: convertGBToBytes(20), + FreeMRU: convertGBToBytes(8), + RentedBy: &twinID, + HasGPU: &trueVal, + } + + nodes, err := deployer.FilterNodes(ctx, tfPluginClient, nodeFilter) + if err != nil { + t.Skip("no available nodes found") + } + nodeID := uint32(nodes[0].NodeID) + + nodeClient, err := tfPluginClient.NcPool.GetNodeClient(tfPluginClient.SubstrateConn, nodeID) + assert.NoError(t, err) + + gpus, err := nodeClient.GPUs(ctx) + assert.NoError(t, err) + + network := workloads.ZNet{ + Name: "gpuNetwork", + Description: "network for testing gpu", + Nodes: []uint32{nodeID}, + IPRange: gridtypes.NewIPNet(net.IPNet{ + IP: net.IPv4(10, 20, 0, 0), + Mask: net.CIDRMask(16, 32), + }), + AddWGAccess: false, + } + + disk := workloads.Disk{ + Name: "gpuDisk", + SizeGB: 20, + } + + vm := workloads.VM{ + Name: "gpu", + Flist: "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist", + CPU: 4, + Planetary: true, + Memory: 1024 * 8, + GPUs: ConvertGPUsToStr(gpus), + Entrypoint: "/init.sh", + EnvVars: map[string]string{ + "SSH_KEY": publicKey, + }, + Mounts: []workloads.Mount{ + {DiskName: disk.Name, MountPoint: "/data"}, + }, + NetworkName: network.Name, + } + + err = tfPluginClient.NetworkDeployer.Deploy(ctx, &network) + assert.NoError(t, err) + + defer func() { + err = tfPluginClient.NetworkDeployer.Cancel(ctx, &network) + assert.NoError(t, err) + 
}() + + dl := workloads.NewDeployment("gpu", nodeID, "", nil, network.Name, []workloads.Disk{disk}, nil, []workloads.VM{vm}, nil) + err = tfPluginClient.DeploymentDeployer.Deploy(ctx, &dl) + assert.NoError(t, err) + + defer func() { + err = tfPluginClient.DeploymentDeployer.Cancel(ctx, &dl) + assert.NoError(t, err) + }() + + vm, err = tfPluginClient.State.LoadVMFromGrid(nodeID, vm.Name, dl.Name) + assert.NoError(t, err) + assert.Equal(t, vm.GPUs, ConvertGPUsToStr(gpus)) + + time.Sleep(30 * time.Second) + output, err := RemoteRun("root", vm.YggIP, "lspci -v", privateKey) + assert.NoError(t, err) + assert.Contains(t, string(output), gpus[0].Vendor) +} +``` + + + +## More Information + +For more information on this, you can check this [Client Pull Request](https://github.com/threefoldtech/tfgrid-sdk-go/pull/207/) on how to support the new calls to list GPUs and to deploy a machine with GPU. \ No newline at end of file diff --git a/collections/developers/go/grid3_go_installation.md b/collections/developers/go/grid3_go_installation.md new file mode 100644 index 0000000..c07008c --- /dev/null +++ b/collections/developers/go/grid3_go_installation.md @@ -0,0 +1,45 @@ +

<h1> Go Client Installation </h1>

+ +

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Requirements](#requirements)
- [Steps](#steps)
- [References](#references)

***

## Introduction

We present the general steps to install the ThreeFold Grid3 Go Client.

## Requirements

Make sure that you have at least Go 1.19 installed on your machine.

- [Go](https://golang.org/doc/install) >= 1.19

## Steps

* Create a new directory
  * ```bash
    mkdir tf_go_client
    ```
* Change directory
  * ```bash
    cd tf_go_client
    ```
* Create a **go.mod** file to track the code's dependencies
  * ```bash
    go mod init main
    ```
* Install the Grid3 Go Client
  * ```bash
    go get github.com/threefoldtech/tfgrid-sdk-go/grid-client
    ```

This will make the Grid3 Go Client packages available to you.

## References

For more information, you can read the official [Go documentation](https://go.dev/doc/).

\ No newline at end of file diff --git a/collections/developers/go/grid3_go_kubernetes.md new file mode 100644 index 0000000..ba1bab8 --- /dev/null +++ b/collections/developers/go/grid3_go_kubernetes.md @@ -0,0 +1,120 @@ +

+# Deploying Kubernetes Clusters

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) + +*** + +## Introduction + +We show how to deploy a Kubernetes cluster with the Go client. + +## Example + +```go +import ( + "fmt" + "net" + + "github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer" + "github.com/threefoldtech/tfgrid-sdk-go/grid-client/workloads" + "github.com/threefoldtech/tfgrid-sdk-go/grid-proxy/pkg/types" + "github.com/threefoldtech/zos/pkg/gridtypes" +) + +func main() { + + // Create Threefold plugin client + tfPluginClient, err := deployer.NewTFPluginClient(mnemonics, "sr25519", network, "", "", "", 0, true) + + // Get a free node to deploy + freeMRU := uint64(1) + freeSRU := uint64(1) + status := "up" + filter := types.NodeFilter{ + FreeMRU: &freeMRU, + FreeSRU: &freeSRU, + Status: &status, + } + nodeIDs, err := deployer.FilterNodes(tfPluginClient.GridProxyClient, filter) + masterNodeID := uint32(nodeIDs[0].NodeID) + workerNodeID1 := uint32(nodeIDs[1].NodeID) + workerNodeID2 := uint32(nodeIDs[2].NodeID) + + // Create a new network to deploy + network := workloads.ZNet{ + Name: "newNetwork", + Description: "A network to deploy", + Nodes: []uint32{masterNodeID, workerNodeID1, workerNodeID2}, + IPRange: gridtypes.NewIPNet(net.IPNet{ + IP: net.IPv4(10, 1, 0, 0), + Mask: net.CIDRMask(16, 32), + }), + AddWGAccess: true, + } + + // Create master and worker nodes to deploy + master := workloads.K8sNode{ + Name: "master", + Node: masterNodeID, + DiskSize: 1, + CPU: 2, + Memory: 1024, + Planetary: true, + Flist: "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist", + } + + worker1 := workloads.K8sNode{ + Name: "worker1", + Node: workerNodeID1, + DiskSize: 1, + CPU: 2, + Memory: 1024, + Flist: "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist", + } + + worker2 := workloads.K8sNode{ + Name: "worker2", + Node: workerNodeID2, + DiskSize: 1, + Flist: "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist", + CPU: 2, + Memory: 1024, + } + + 
k8sCluster := workloads.K8sCluster{ + Master: &master, + Workers: []workloads.K8sNode{worker1, worker2}, + Token: "tokens", + SSHKey: publicKey, + NetworkName: network.Name, + } + + // Deploy the network first + err = tfPluginClient.NetworkDeployer.Deploy(ctx, &network) + + // Deploy the k8s cluster + err = tfPluginClient.K8sDeployer.Deploy(ctx, &k8sCluster) + + // Load the k8s cluster + k8sClusterObj, err := tfPluginClient.State.LoadK8sFromGrid([]uint32{masterNodeID, workerNodeID1, workerNodeID2}, master.Name) + + // Print master node Yggdrasil IP + fmt.Println(k8sClusterObj.Master.YggIP) + + // Cancel the VM deployment + err = tfPluginClient.K8sDeployer.Cancel(ctx, &k8sCluster) + + // Cancel the network deployment + err = tfPluginClient.NetworkDeployer.Cancel(ctx, &network) +} + +``` + +You should see an output like this: + +```bash +300:e9c4:9048:57cf:6d98:42c6:a7bf:2e3f +``` diff --git a/collections/developers/go/grid3_go_load_client.md b/collections/developers/go/grid3_go_load_client.md new file mode 100644 index 0000000..fe61a6d --- /dev/null +++ b/collections/developers/go/grid3_go_load_client.md @@ -0,0 +1,35 @@ +

+# Load Client

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [TFPluginClient Configuration](#tfpluginclient-configuration) +- [Creating Client](#creating-client) + +*** + +## Introduction + +We cover how to load client using the Go client. + +## TFPluginClient Configuration + +- mnemonics +- keyType: can be `ed25519` or `sr25519` +- network: can be `dev`, `qa`, `test` or `main` + +## Creating Client + +Import `deployer` package to your project: + +```go +import "github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer" +``` + +Create new Client: + +```go +func main() { + client, err := deployer.NewTFPluginClient(mnemonics, keyType, network, "", "", "", 0, true) +} +``` diff --git a/collections/developers/go/grid3_go_qsfs.md b/collections/developers/go/grid3_go_qsfs.md new file mode 100644 index 0000000..92a352a --- /dev/null +++ b/collections/developers/go/grid3_go_qsfs.md @@ -0,0 +1,186 @@ +

+# Deploying QSFS

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) + +*** + +## Introduction + +We show how to deploy QSFS workloads with the Go client. + +## Example + +```go +import ( + "context" + "fmt" + "net" + + "github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer" + "github.com/threefoldtech/tfgrid-sdk-go/grid-client/workloads" + "github.com/threefoldtech/tfgrid-sdk-go/grid-proxy/pkg/types" + "github.com/threefoldtech/zos/pkg/gridtypes" +) + +func main() { + + // Create Threefold plugin client + tfPluginClient, err := deployer.NewTFPluginClient(mnemonics, "sr25519", network, "", "", "", 0, true) + + // Get a free node to deploy + freeMRU := uint64(2) + freeSRU := uint64(20) + status := "up" + filter := types.NodeFilter{ + FreeMRU: &freeMRU, + FreeSRU: &freeSRU, + Status: &status, + } + nodeIDs, err := deployer.FilterNodes(tfPluginClient.GridProxyClient, filter) + nodeID := uint32(nodeIDs[0].NodeID) + + // Create data and meta ZDBs + dataZDBs := []workloads.ZDB{} + metaZDBs := []workloads.ZDB{} + for i := 1; i <= DataZDBNum; i++ { + zdb := workloads.ZDB{ + Name: "qsfsDataZdb" + strconv.Itoa(i), + Password: "password", + Public: true, + Size: 1, + Description: "zdb for testing", + Mode: zos.ZDBModeSeq, + } + dataZDBs = append(dataZDBs, zdb) + } + + for i := 1; i <= MetaZDBNum; i++ { + zdb := workloads.ZDB{ + Name: "qsfsMetaZdb" + strconv.Itoa(i), + Password: "password", + Public: true, + Size: 1, + Description: "zdb for testing", + Mode: zos.ZDBModeUser, + } + metaZDBs = append(metaZDBs, zdb) + } + + // Deploy ZDBs + dl1 := workloads.NewDeployment("qsfs", nodeID, "", nil, "", nil, append(dataZDBs, metaZDBs...), nil, nil) + err = tfPluginClient.DeploymentDeployer.Deploy(ctx, &dl1) + + // result ZDBs + resDataZDBs := []workloads.ZDB{} + resMetaZDBs := []workloads.ZDB{} + for i := 1; i <= DataZDBNum; i++ { + res, err := tfPluginClient.State.LoadZdbFromGrid(nodeID, "qsfsDataZdb"+strconv.Itoa(i), dl1.Name) + resDataZDBs = append(resDataZDBs, res) + } + for i := 1; i 
<= MetaZDBNum; i++ { + res, err := tfPluginClient.State.LoadZdbFromGrid(nodeID, "qsfsMetaZdb"+strconv.Itoa(i), dl1.Name) + resMetaZDBs = append(resMetaZDBs, res) + } + + // backends + dataBackends := []workloads.Backend{} + metaBackends := []workloads.Backend{} + for i := 0; i < DataZDBNum; i++ { + dataBackends = append(dataBackends, workloads.Backend{ + Address: "[" + resDataZDBs[i].IPs[1] + "]" + ":" + fmt.Sprint(resDataZDBs[i].Port), + Namespace: resDataZDBs[i].Namespace, + Password: resDataZDBs[i].Password, + }) + } + for i := 0; i < MetaZDBNum; i++ { + metaBackends = append(metaBackends, workloads.Backend{ + Address: "[" + resMetaZDBs[i].IPs[1] + "]" + ":" + fmt.Sprint(resMetaZDBs[i].Port), + Namespace: resMetaZDBs[i].Namespace, + Password: resMetaZDBs[i].Password, + }) + } + + // Create a new qsfs to deploy + qsfs := workloads.QSFS{ + Name: "qsfs", + Description: "qsfs for testing", + Cache: 1024, + MinimalShards: 2, + ExpectedShards: 4, + RedundantGroups: 0, + RedundantNodes: 0, + MaxZDBDataDirSize: 512, + EncryptionAlgorithm: "AES", + EncryptionKey: "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af", + CompressionAlgorithm: "snappy", + Groups: workloads.Groups{{Backends: dataBackends}}, + Metadata: workloads.Metadata{ + Type: "zdb", + Prefix: "test", + EncryptionAlgorithm: "AES", + EncryptionKey: "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af", + Backends: metaBackends, + }, + } + + // Create a new network to deploy + network := workloads.ZNet{ + Name: "newNetwork", + Description: "A network to deploy", + Nodes: []uint32{nodeID}, + IPRange: gridtypes.NewIPNet(net.IPNet{ + IP: net.IPv4(10, 1, 0, 0), + Mask: net.CIDRMask(16, 32), + }), + AddWGAccess: true, + } + + vm := workloads.VM{ + Name: "vm", + Flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist", + CPU: 2, + Planetary: true, + Memory: 1024, + Entrypoint: "/sbin/zinit init", + EnvVars: map[string]string{ + "SSH_KEY": publicKey, + }, + Mounts: 
[]workloads.Mount{ + {DiskName: qsfs.Name, MountPoint: "/qsfs"}, + }, + NetworkName: network.Name, + } + + // Deploy the network first + err = tfPluginClient.NetworkDeployer.Deploy(ctx, &network) + + // Deploy the VM/QSFS deployment + dl2 := workloads.NewDeployment("qsfs", nodeID, "", nil, network.Name, nil, append(dataZDBs, metaZDBs...), []workloads.VM{vm}, []workloads.QSFS{qsfs}) + err = tfPluginClient.DeploymentDeployer.Deploy(ctx, &dl2) + + // Load the QSFS using the state loader + qsfsObj, err := tfPluginClient.State.LoadQSFSFromGrid(nodeID, qsfs.Name, dl2.Name) + + // Load the VM using the state loader + vmObj, err := tfPluginClient.State.LoadVMFromGrid(nodeID, vm.Name, dl2.Name) + + // Print the VM Yggdrasil IP + fmt.Println(vmObj.YggIP) + + // Cancel the VM,QSFS deployment + err = tfPluginClient.DeploymentDeployer.Cancel(ctx, &dl1) + err = tfPluginClient.DeploymentDeployer.Cancel(ctx, &dl2) + + // Cancel the network deployment + err = tfPluginClient.NetworkDeployer.Cancel(ctx, &network) +} +``` + +Running this code should result in a VM with QSFS deployed on an available node and get an output like this: + +```bash +Yggdrasil IP: 300:e9c4:9048:57cf:6d98:42c6:a7bf:2e3f +``` diff --git a/collections/developers/go/grid3_go_readme.md b/collections/developers/go/grid3_go_readme.md new file mode 100644 index 0000000..cf24ada --- /dev/null +++ b/collections/developers/go/grid3_go_readme.md @@ -0,0 +1,17 @@ +# Grid Go Client + +Grid Go Client is a Go client created to interact and develop on Threefold Grid using Go language. + +Please make sure to check the [basics](../../system_administrators/getstarted/tfgrid3_getstarted.md) before continuing. + +

+## Table of Contents

+ +- [Installation](../go/grid3_go_installation.md) +- [Loading Client](../go/grid3_go_load_client.md) +- [Deploy a VM](../go/grid3_go_vm.md) +- [Deploy a VM with GPU](../go/grid3_go_vm_with_gpu.md) +- [Deploy Multiple VMs](../go/grid3_go_vms.md) +- [Deploy Gateways](../go/grid3_go_gateways.md) +- [Deploy Kubernetes](../go/grid3_go_kubernetes.md) +- [Deploy a QSFS](../go/grid3_go_qsfs.md) +- [GPU Support](../go/grid3_go_gpu_support.md) diff --git a/collections/developers/go/grid3_go_vm.md b/collections/developers/go/grid3_go_vm.md new file mode 100644 index 0000000..c164114 --- /dev/null +++ b/collections/developers/go/grid3_go_vm.md @@ -0,0 +1,99 @@ +

+# Deploying a VM

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) + +*** + +## Introduction + +We show how to deploy a VM with the Go client. + +## Example + +```go +import ( + "context" + "fmt" + "net" + + "github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer" + "github.com/threefoldtech/tfgrid-sdk-go/grid-client/workloads" + "github.com/threefoldtech/tfgrid-sdk-go/grid-proxy/pkg/types" + "github.com/threefoldtech/zos/pkg/gridtypes" +) + +func main() { + + // Create Threefold plugin client + tfPluginClient, err := deployer.NewTFPluginClient(mnemonics, keyType, network, "", "", "", 0, true) + + // Get a free node to deploy + freeMRU := uint64(2) + freeSRU := uint64(20) + status := "up" + filter := types.NodeFilter{ + FreeMRU: &freeMRU, + FreeSRU: &freeSRU, + Status: &status, + } + nodeIDs, err := deployer.FilterNodes(tfPluginClient.GridProxyClient, filter) + nodeID := uint32(nodeIDs[0].NodeID) + + // Create a new network to deploy + network := workloads.ZNet{ + Name: "newNetwork", + Description: "A network to deploy", + Nodes: []uint32{nodeID}, + IPRange: gridtypes.NewIPNet(net.IPNet{ + IP: net.IPv4(10, 1, 0, 0), + Mask: net.CIDRMask(16, 32), + }), + AddWGAccess: true, + } + + // Create a new VM to deploy + vm := workloads.VM{ + Name: "vm", + Flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist", + CPU: 2, + PublicIP: true, + Planetary: true, + Memory: 1024, + RootfsSize: 20 * 1024, + Entrypoint: "/sbin/zinit init", + EnvVars: map[string]string{ + "SSH_KEY": publicKey, + }, + IP: "10.20.2.5", + NetworkName: network.Name, + } + + // Deploy the network first + err = tfPluginClient.NetworkDeployer.Deploy(ctx, &network) + + // Deploy the VM deployment + dl := workloads.NewDeployment("vm", nodeID, "", nil, network.Name, nil, nil, []workloads.VM{vm}, nil) + err = tfPluginClient.DeploymentDeployer.Deploy(ctx, &dl) + + // Load the VM using the state loader + vmObj, err := tfPluginClient.State.LoadVMFromGrid(nodeID, vm.Name, dl.Name) + + // Print the VM Yggdrasil 
IP + fmt.Println(vmObj.YggIP) + + // Cancel the VM deployment + err = tfPluginClient.DeploymentDeployer.Cancel(ctx, &dl) + + // Cancel the network deployment + err = tfPluginClient.NetworkDeployer.Cancel(ctx, &network) +} +``` + +Running this code should result in a VM deployed on an available node and get an output like this: + +```bash +300:e9c4:9048:57cf:6d98:42c6:a7bf:2e3f +``` diff --git a/collections/developers/go/grid3_go_vm_with_gpu.md b/collections/developers/go/grid3_go_vm_with_gpu.md new file mode 100644 index 0000000..d635800 --- /dev/null +++ b/collections/developers/go/grid3_go_vm_with_gpu.md @@ -0,0 +1,121 @@ +

+# Deploy a VM with GPU

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) + +*** + +## Introduction + +In this section, we explore how to deploy a virtual machine equipped with GPU. We deploy the VM using Go. The VM will be deployed on a 3Node with an available GPU. + + + +## Example + +```go +import ( + "context" + "fmt" + "net" + + "github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer" + "github.com/threefoldtech/tfgrid-sdk-go/grid-client/workloads" + "github.com/threefoldtech/tfgrid-sdk-go/grid-proxy/pkg/types" + "github.com/threefoldtech/zos/pkg/gridtypes" +) + +func main() { + + // Create Threefold plugin client + tfPluginClient, err := deployer.NewTFPluginClient(mnemonics, "sr25519", network, "", "", "", 0, true) + + // Get a free node to deploy + freeMRU := uint64(2) + freeSRU := uint64(20) + status := "up" + trueVal := true + + twinID := uint64(tfPluginClient.TwinID) + filter := types.NodeFilter{ + FreeMRU: &freeMRU, + FreeSRU: &freeSRU, + Status: &status, + RentedBy: &twinID, + HasGPU: &trueVal, + } + nodeIDs, err := deployer.FilterNodes(tfPluginClient.GridProxyClient, filter) + nodeID := uint32(nodeIDs[0].NodeID) + + // Get the available gpus on the node + nodeClient, err := tfPluginClient.NcPool.GetNodeClient(tfPluginClient.SubstrateConn, nodeID) + gpus, err := nodeClient.GPUs(ctx) + + // Create a new network to deploy + network := workloads.ZNet{ + Name: "newNetwork", + Description: "A network to deploy", + Nodes: []uint32{nodeID}, + IPRange: gridtypes.NewIPNet(net.IPNet{ + IP: net.IPv4(10, 1, 0, 0), + Mask: net.CIDRMask(16, 32), + }), + AddWGAccess: true, + } + + // Create a new disk to deploy + disk := workloads.Disk{ + Name: "gpuDisk", + SizeGB: 20, + } + + // Create a new VM to deploy + vm := workloads.VM{ + Name: "vm", + Flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist", + CPU: 2, + PublicIP: true, + Planetary: true, + // Insert your GPUs' IDs here + GPUs: []zos.GPU{zos.GPU(gpus[0].ID)}, + Memory: 1024, + RootfsSize: 20 * 1024, + Entrypoint: 
"/sbin/zinit init", + EnvVars: map[string]string{ + "SSH_KEY": publicKey, + }, + Mounts: []workloads.Mount{ + {DiskName: disk.Name, MountPoint: "/data"}, + }, + IP: "10.20.2.5", + NetworkName: network.Name, + } + + // Deploy the network first + err = tfPluginClient.NetworkDeployer.Deploy(ctx, &network) + + // Deploy the VM deployment + dl := workloads.NewDeployment("gpu", nodeID, "", nil, network.Name, []workloads.Disk{disk}, nil, []workloads.VM{vm}, nil) + err = tfPluginClient.DeploymentDeployer.Deploy(ctx, &dl) + + // Load the VM using the state loader + vmObj, err := tfPluginClient.State.LoadVMFromGrid(nodeID, vm.Name, dl.Name) + + // Print the VM Yggdrasil IP + fmt.Println(vmObj.YggIP) + + // Cancel the VM deployment + err = tfPluginClient.DeploymentDeployer.Cancel(ctx, &dl) + + // Cancel the network deployment + err = tfPluginClient.NetworkDeployer.Cancel(ctx, &network) +} +``` + +Running this code should result in a VM with a GPU deployed on an available node. The output should look like this: + +```bash +Yggdrasil IP: 300:e9c4:9048:57cf:6d98:42c6:a7bf:2e3f +``` \ No newline at end of file diff --git a/collections/developers/go/grid3_go_vms.md b/collections/developers/go/grid3_go_vms.md new file mode 100644 index 0000000..f78aaae --- /dev/null +++ b/collections/developers/go/grid3_go_vms.md @@ -0,0 +1,125 @@ +

+# Deploying Multiple VMs

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) + +*** + +## Introduction + +We show how to deploy multiple VMs with the Go client. + +## Example + +```go +import ( + "context" + "fmt" + "net" + + "github.com/threefoldtech/tfgrid-sdk-go/grid-client/deployer" + "github.com/threefoldtech/tfgrid-sdk-go/grid-client/workloads" + "github.com/threefoldtech/tfgrid-sdk-go/grid-proxy/pkg/types" + "github.com/threefoldtech/zos/pkg/gridtypes" +) + +func main() { + + // Create Threefold plugin client + tfPluginClient, err := deployer.NewTFPluginClient(mnemonics, "sr25519", network, "", "", "", 0, true) + + // Get a free node to deploy + freeMRU := uint64(2) + freeSRU := uint64(2) + status := "up" + filter := types.NodeFilter { + FreeMRU: &freeMRU, + FreeSRU: &freeSRU, + Status: &status, + } + nodeIDs, err := deployer.FilterNodes(tfPluginClient.GridProxyClient, filter) + nodeID1 := uint32(nodeIDs[0].NodeID) + nodeID2 := uint32(nodeIDs[1].NodeID) + + // Create a new network to deploy + network := workloads.ZNet{ + Name: "newNetwork", + Description: "A network to deploy", + Nodes: []uint32{nodeID1, nodeID2}, + IPRange: gridtypes.NewIPNet(net.IPNet{ + IP: net.IPv4(10, 1, 0, 0), + Mask: net.CIDRMask(16, 32), + }), + AddWGAccess: true, + } + + // Create new VMs to deploy + vm1 := workloads.VM{ + Name: "vm1", + Flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist", + CPU: 2, + PublicIP: true, + Planetary: true, + Memory: 1024, + RootfsSize: 20 * 1024, + Entrypoint: "/sbin/zinit init", + EnvVars: map[string]string{ + "SSH_KEY": publicKey, + }, + IP: "10.20.2.5", + NetworkName: network.Name, + } + vm2 := workloads.VM{ + Name: "vm2", + Flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist", + CPU: 2, + PublicIP: true, + Planetary: true, + Memory: 1024, + RootfsSize: 20 * 1024, + Entrypoint: "/sbin/zinit init", + EnvVars: map[string]string{ + "SSH_KEY": publicKey, + }, + IP: "10.20.2.6", + NetworkName: network.Name, + } + + // Deploy the network first + err = 
tfPluginClient.NetworkDeployer.Deploy(ctx, &network) + + // Load the network using the state loader + // this loader should load the deployment as json then convert it to a deployment go object with workloads inside it + networkObj, err := tfPluginClient.State.LoadNetworkFromGrid(network.Name) + + // Deploy the VM deployments + dl1 := workloads.NewDeployment("vm1", nodeID1, "", nil, network.Name, nil, nil, []workloads.VM{vm1}, nil) + dl2 := workloads.NewDeployment("vm2", nodeID2, "", nil, network.Name, nil, nil, []workloads.VM{vm2}, nil) + err = tfPluginClient.DeploymentDeployer.BatchDeploy(ctx, []*workloads.Deployment{&dl1, &dl2}) + + // Load the VMs using the state loader + vmObj1, err := tfPluginClient.State.LoadVMFromGrid(nodeID1, vm1.Name, dl1.Name) + vmObj2, err := tfPluginClient.State.LoadVMFromGrid(nodeID2, vm2.Name, dl2.Name) + + // Print the VMs Yggdrasil IP + fmt.Println(vmObj1.YggIP) + fmt.Println(vmObj2.YggIP) + + // Cancel the VM deployments + err = tfPluginClient.DeploymentDeployer.Cancel(ctx, &dl1) + err = tfPluginClient.DeploymentDeployer.Cancel(ctx, &dl2) + + // Cancel the network + err = tfPluginClient.NetworkDeployer.Cancel(ctx, &network) +} + +``` + +Running this code should result in two VMs deployed on two separate nodes while being on the same network and you should see an output like this: + +```bash +300:e9c4:9048:57cf:f4e0:2343:f891:6037 +300:e9c4:9048:57cf:6d98:42c6:a7bf:2e3f +``` diff --git a/collections/developers/grid_deployment/deploy_dashboard.md b/collections/developers/grid_deployment/deploy_dashboard.md new file mode 100644 index 0000000..50505ff --- /dev/null +++ b/collections/developers/grid_deployment/deploy_dashboard.md @@ -0,0 +1,127 @@ +

+# Deploy the Dashboard

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Create an SSH Tunnel](#create-an-ssh-tunnel) +- [Editor SSH Remote Connection](#editor-ssh-remote-connection) +- [Set the VM](#set-the-vm) +- [Build the Dashboard](#build-the-dashboard) +- [Dashboard Public Access](#dashboard-public-access) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +We show how to deploy the Dashboard (devnet) on a full VM. To do so, we set an SSH tunnel and use the VSCodium Remote Explorer function. We will then be able to use a source-code editor to explore the code and see changes on a local browser. + +We also show how to provide a public access to the Dashboard by setting a gateway domain to your full VM deployment. Note that this method is not production-ready and should only be used to test the Dashboard. + +## Prerequisites + +- TFChain account with TFT +- [Deploy full VM with WireGuard connection](../../system_administrators/getstarted/ssh_guide/ssh_wireguard.md) +- [Make sure you can connect via SSH on the terminal](../../system_administrators/getstarted/ssh_guide/ssh_openssh.md) + +In this guide, we use WireGuard, but you can use other connection methods, such as [Mycelium](../../system_administrators/mycelium/mycelium_toc.md). + +## Create an SSH Tunnel + +- Open a terminal and create an SSH tunnel + ``` + ssh -4 -L 5173:127.0.0.1:5173 root@10.20.4.2 + ``` + +Simply leave this window open and follow the next steps. + +If you use an IPv6 address, e.g. with Mycelium, set `-6` in the line above instead of `-4`. + +## Editor SSH Remote Connection + +You can connect via SSH through the source-code editor to a VM on the grid. In this example, WireGuard is set. 
+ +- Add the SSH Remote extension to [VSCodium](https://vscodium.com/) +- Add a new SSH remote connection +- Set the following (adjust with your own username and host) + ``` + Host 10.20.4.2 + HostName 10.20.4.2 + User root + ``` +- Click on `Connect to host` + +## Set the VM + +We set the VM to be able to build the Dashboard. + +``` + +apt update && apt install build-essential python3 -y + +wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash + +export NVM_DIR="$HOME/.nvm" + +[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm + +[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion + +nvm install 18 + +npm install -g yarn + +``` + +## Build the Dashboard + +We now build the Dashboard. + +Clone the repository, then install, build and run the Dashboard. Note that here it is called `playground`: + +``` + +git clone https://github.com/threefoldtech/tfgrid-sdk-ts + +cd tfgrid-sdk-ts/ + +yarn install + +make build + +make run project=playground + +``` + +You can then access the dev net Dashboard on your local browser. + +To stop running the Dashboard, simply enter ̀`Ctrl-C` on the terminal window. + + +## Dashboard Public Access + +> Note: This method is not production-ready. Use only for testing purposes. + +Once you've tested the Dashboard with the SSH tunnel, you can explore how to access it from the public Internet. For this, we will create a gateway domain and bind the host to `0.0.0.0`. + +On the Full VM page, [add a domain](../../dashboard/solutions/add_domain.md) to access your deployment from the public Internet. 
+ +- Under `Actions`, click on `Manage Domains` +- Go to `Add New Domain` +- Choose a gateway domain under `Select domain` +- Set the port 5173 +- Click on `Add` + +To run the Dashboard from the added domain, use this instead of the previous `make run` line: + +``` +cd packages/playground +yarn dev --host 0.0.0.0 +``` + +You can then access the Dashboard from the domain you just created. + +## Questions and Feedback + +If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel. \ No newline at end of file diff --git a/collections/developers/grid_deployment/grid_deployment.md b/collections/developers/grid_deployment/grid_deployment.md new file mode 100644 index 0000000..da00c23 --- /dev/null +++ b/collections/developers/grid_deployment/grid_deployment.md @@ -0,0 +1,12 @@ +# Grid Deployment + +The TFGrid whole source code is open-source and instances of the grid can be deployed by anyone thanks to the distribution of daily grid snapshots of the complete ThreeFold Grid stacks. + +This section also covers the steps to deploy the Dashboard locally. This can be useful when testing the grid or contributing to the open-source project. + +## Table of Contents + +- [TFGrid Stacks](./tfgrid_stacks.md) +- [Full VM Grid Deployment](./grid_deployment_full_vm.md) +- [Grid Snapshots](./snapshots.md) +- [Deploy the Dashboard](./deploy_dashboard.md) \ No newline at end of file diff --git a/collections/developers/grid_deployment/grid_deployment_full_vm.md b/collections/developers/grid_deployment/grid_deployment_full_vm.md new file mode 100644 index 0000000..2c739a4 --- /dev/null +++ b/collections/developers/grid_deployment/grid_deployment_full_vm.md @@ -0,0 +1,175 @@ +

+# Grid Deployment on a Full VM

+

+## Table of Contents

+ + +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Deploy All 3 Network Instances](#deploy-all-3-network-instances) +- [DNS Settings](#dns-settings) + - [DNS Verification](#dns-verification) +- [Prepare the VM](#prepare-the-vm) +- [Set the Firewall](#set-the-firewall) +- [Launch the Script](#launch-the-script) +- [Access the Grid Services](#access-the-grid-services) +- [Manual Commands](#manual-commands) +- [Update the Deployment](#update-the-deployment) + +*** + +## Introduction + +We present the steps to deploy an instance of the TFGrid on a full VM. + +For this guide, we will be deploying a mainnet instance. While the steps are similar for testnet and devnet, you will have to adjust your deployment depending on which network you use. Details are provided when needed. + +We also provide information to deploy the 3 different network instances. + +## Prerequisites + +For this guide, you will need to deploy a full VM on the ThreeFold Grid with at least the following minimum specs: + +- IPv4 +- IPv6 +- 32GB of RAM +- 1000 GB of SSD +- 8 vcores + +After deploying the full VM, take note of the IPv4 and IPv6 addresses to properly set the DNS records and then SSH into the VM. + +It is recommended to deploy on a machine with modern hardware and NVME storage disk. + +## Deploy All 3 Network Instances + +To deploy the 3 network instances, mainnet, testnet and mainnet, you need to follow the same process for each network on a separate machine or at least on a different VM. + +This means that you can either deploy each network instance on 3 different machines, or you can also deploy 3 different VMs on the same machine, e.g. a dedicated node. Then, each VM will run a different network instance. In this case, you will certainly need a machine with NVME storage disk and modern hardware. + +## DNS Settings + +You need to set an A record for the IPv4 address and an AAAA record for the IPv6 address with a wildcard subdomain. 
+ +The following table explicitly shows how to set the A and AAAA records for your domain for all 3 networks. Note that both `testnet` and `devnet` have a subdomain. The last two lines are for mainnet since no subdomain is needed in this case. + +| Type | Host | Value | +| ---- | ---- | -------------- | +| A | \*.dev | | +| AAAA | \*.dev | | +| A | \*.test | | +| AAAA | \*.test | | +| A | \* | | +| AAAA | \* | | + +As stated above, each network instance must be on its own VM or machine to work properly. Make sure to adjust the DNS records accordingly. + +### DNS Verification + +You can use tools such as [DNSChecker](https://dnschecker.org/) or [dig](https://linux.die.net/man/1/dig) on a terminal to check if the DNS propagadation is complete. + +## Prepare the VM + +We show the steps to prepare the VM to run the network instance. + +If you are deploying on testnet or devnet, simply replace `mainnet` by the proper network in the following lines. + +- Download the ThreeFold Tech `grid_deployment` repository + ``` + git clone https://github.com/threefoldtech/grid_deployment + cd grid_deployment/docker-compose/mainnet + ``` +- Generate a TFChain node key with `subkey` + - Note: If you deploy the 3 network instances, you can use the same node key for all 3 networks. But it is recommended to use 3 different keys to facilitate management. + ``` + echo .subkey_mainnet >> .gitignore + ../subkey generate-node-key > .nodekey_mainnet + cat .nodekey_mainnet + ``` +- Create and the set environment variables file + ``` + cp .secrets.env-example .secrets.env + ``` +- Adjust the environment file + ``` + nano .secrets.env + ``` +- To adjust the `.secrets.env` file, take into account the following: + - **DOMAIN**="example.com" + - Write your own domain + - **TFCHAIN_NODE_KEY**="abc123" + - Write the output of the command `cat .nodekey_mainnet` + - **ACTIVATION_SERVICE_MNEMONIC**="word1 word2 ... 
word24" + - Write the seed phrase of an account on mainnet with at least 10 TFT in the wallet + - **GRID_PROXY_MNEMONIC**="word1 word2 ... word24" + - Write the seed phrase of an account on mainnet with at least 10 TFT in the wallet and a registered twin ID\* + +> \*Note: If you've created an account using the ThreeFold Dashboard on a given network, the twin ID is automatically registered for this network. + +## Set the Firewall + +You can use UFW to set the firewall: + +``` +ufw allow 80/tcp +ufw allow 443/tcp +ufw allow 30333/tcp +ufw allow 22/tcp +ufw enable +ufw status +``` + +## Launch the Script + +Once you've prepared the VM, you can simply run the script to install the grid stack and deploy it online. + +``` +sh install_grid_bknd.sh +``` + +This will take some time since you are downloading the whole mainnet grid snapshots. + +## Access the Grid Services + +Once you've deployed the grid stack online, you can access the different grid services by usual the usual subdomains: + +``` +dashboard.example.com +metrics.example.com +tfchain.example.com +graphql.example.com +relay.example.com +gridproxy.example.com +activation.example.com +stats.example.com +``` + +In the case of testnet and devnet, links will also have the given subdomain, such as `dashboard.test.example.com` for a `testnet` instance. + +## Manual Commands + +Once you've run the install script, you can deploy manually the grid stack with the following command: + +``` +docker compose --env-file .secrets.env --env-file .env up -d +``` + +You can also check if the environment variables are properly set: + +``` +docker compose --env-file .secrets.env --env-file .env config +``` + +If you want to see the output during deployment, remove `-d` in the command above as follows: + +``` +docker compose --env-file .secrets.env --env-file .env up +``` + +This can be helpful to troubleshoot errors. + +## Update the Deployment + +Go into the folder of the proper network, e.g. 
mainnet, and run the following commands: + +``` +git pull -r +docker compose --env-file .secrets.env --env-file .env up -d +``` \ No newline at end of file diff --git a/collections/developers/grid_deployment/snapshots.md b/collections/developers/grid_deployment/snapshots.md new file mode 100644 index 0000000..dcccd1e --- /dev/null +++ b/collections/developers/grid_deployment/snapshots.md @@ -0,0 +1,259 @@ +

+# Snapshots for Grid Backend Services

+

+## Table of Contents

+ +- [Introduction](#introduction) +- [Services](#services) +- [ThreeFold Public Snapshots](#threefold-public-snapshots) +- [Requirements](#requirements) + - [Files for Each Net](#files-for-each-net) + - [Deploy All 3 Network Instances](#deploy-all-3-network-instances) +- [Deploy a Snapshot Backend](#deploy-a-snapshot-backend) +- [Deploy the Services with Scripts](#deploy-the-services-with-scripts) + - [Create the Snapshots](#create-the-snapshots) + - [Start All the Services](#start-all-the-services) + - [Stop All the Services](#stop-all-the-services) +- [Expose the Snapshots with Rsync](#expose-the-snapshots-with-rsync) + - [Create the Service Configuration File](#create-the-service-configuration-file) + - [Start the Service](#start-the-service) + +*** + +## Introduction + +To facilitate deploying grid backend services, we provide snapshots to significantly reduce sync time. This can be setup anywhere from scratch. Once all services are synced, one can use the scripts to create snapshots automatically. + +To learn how to deploy your own grid stack, read [this section](./grid_deployment_full_vm.md). + +## Services + +There are 3 grid backend services that collect enough data to justify creating snapshots: + +- ThreeFold blockchain - TFChain +- Graphql - Indexer +- Graphql - Processor + +## ThreeFold Public Snapshots + +ThreeFold hosts all available snapshots at: [https://bknd.snapshot.grid.tf/](https://bknd.snapshot.grid.tf/). Those snapshots can be downloaded with rsync: + +- Mainnet: + ``` + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshots/tfchain-mainnet-latest.tar.gz . + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshots/indexer-mainnet-latest.tar.gz . + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshots/processor-mainnet-latest.tar.gz . 
+ ``` +- Testnet: + ``` + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotstest/tfchain-testnet-latest.tar.gz . + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotstest/indexer-testnet-latest.tar.gz . + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotstest/processor-testnet-latest.tar.gz . + ``` +- Devnet: + ``` + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotsdev/tfchain-devnet-latest.tar.gz . + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotsdev/indexer-devnet-latest.tar.gz . + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotsdev/processor-devnet-latest.tar.gz . + ``` + +## Requirements + +To run your own snapshot backend, you need the following: + +- Configuration + - A working docker environment + - A 'node key' for the TFchain public RPC node, generated with `subkey generate-node-key` + +- Hardware + - min of 8 modern CPU cores + - min of 32GB RAM + - min of 1TB SSD storage (high preference for NVMe-based storage), preferably more (as the chain keeps growing in size) + - min of 2TB HDD storage (to store and share the snapshots) + +Dev, QA and Testnet can do with a SATA SSD setup. Mainnet requires NVMe-based SSDs due to the data size. + +**Note**: If a deployment does not have enough disk input/output operations per second (iops) available, you might see the processor container restarting regularly and grid_proxy errors regarding processor database timeouts. + +### Files for Each Net + +Each folder contains the required deployment files for its net. Make sure to work in the folder that has the name of the network you want to create snapshots for. + +What each file does: +- `.env` - contains environment variables maintained by ThreeFold Tech +- `.gitignore` - has a list of files to ignore once the repo has been cloned. 
This ensures you don't end up with uncommitted changes to files when working in this repo +- `.secrets.env-example` - is where you have to add all your unique environment variables +- `create_snapshot.sh` - script to create a snapshot (used by cron) +- `docker-compose.yml` - has all the required docker-compose configuration to deploy a working Grid stack +- `open_logs_tmux.sh` - opens all the docker logs in tmux sessions +- `typesBundle.json` - contains data for the Graphql indexer and is not to be touched +- `startall.sh` - starts all the (already deployed) containers +- `stopall.sh` - stops all the (already deployed) containers + +### Deploy All 3 Network Instances + +To deploy the 3 network instances, mainnet, testnet and devnet, you need to follow the same process for each network on a separate machine, or at least on a different VM. + +This means that you can either deploy each network instance on 3 different machines, or deploy 3 different VMs on the same machine, e.g. a dedicated node. Then, each VM will run a different network instance. In this case, you will certainly need a machine with NVMe storage and modern hardware. + +## Deploy a Snapshot Backend + +Here's how to deploy a snapshot backend of a given network. + +- Go to the corresponding network folder (e.g. `mainnet`). + ```sh + cd mainnet + cp .secrets.env-example .secrets.env + ``` +- Open `.secrets.env` and add your generated subkey node-key. +- Check that all environment variables are correct. + ``` + docker compose --env-file .secrets.env --env-file .env config + ``` +- Deploy the snapshot backend. Depending on the disk iops available, it can take up to a week to sync from block 0. + + ```sh + docker compose --env-file .secrets.env --env-file .env up -d + ``` + +## Deploy the Services with Scripts + +You can deploy the 3 individual services using known methods such as [Docker](../../system_administrators/computer_it_basics/docker_basics.md). 
To facilitate the process, scripts are provided that run the necessary docker commands. + +The first script creates the snapshots, while the second and third scripts serve to start and stop all services. + +You can use the start script to start all services and then set a cron job to periodically execute the snapshot creation script. This will ensure that you always have the latest version available on your server. + +### Create the Snapshots + +You can set a cron job to execute a script running rsync to create the snapshots and generate logs at a given interval. + +- First download the script. + - Main net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/create_snapshot.sh + ``` + - Test net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/create_snapshot.sh + ``` + - Dev net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/create_snapshot.sh + ``` +- Set the permissions of the script + ``` + chmod +x create_snapshot.sh + ``` +- Make sure to adjust the snapshot creation script for your specific deployment +- Set a cron job + ``` + crontab -e + ``` + - Here is an example of a cron job where we execute the script every day at 1 AM and send the logs to `/var/log/snapshots/snapshots-cron.log`. + ```sh + 0 1 * * * sh /root/code/grid_deployment/grid-snapshots/mainnet/create_snapshot.sh > /var/log/snapshots/snapshots-cron.log 2>&1 + ``` + +### Start All the Services + +You can start all services by running the provided scripts. + +- Download the script. 
+ - Main net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/startall.sh + ``` + - Test net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/startall.sh + ``` + - Dev net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/startall.sh + ``` +- Set the permissions of the script + ``` + chmod +x startall.sh + ``` +- Run the script to start all services via docker engine. + ``` + ./startall.sh + ``` + +### Stop All the Services + +You can stop all services by running the provided scripts. + +- Download the script. + - Main net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/stopall.sh + ``` + - Test net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/stopall.sh + ``` + - Dev net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/stopall.sh + ``` +- Set the permissions of the script + ``` + chmod +x stopall.sh + ``` +- Run the script to stop all services via docker engine. + ``` + ./stopall.sh + ``` + +## Expose the Snapshots with Rsync + +We use rsync with a systemd service to expose the snapshots to the community. 
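The archives served here are the ones produced by the snapshot creation step into the rsync module paths. As a rough, hypothetical sketch (the function, paths and names below are ours for illustration, not part of the repository — always adjust the real `create_snapshot.sh` to your deployment), the core of such a step boils down to archiving the synced service data and refreshing a `-latest` symlink:

```shell
# Hypothetical sketch of a snapshot step; all paths are illustrative.
# Usage: create_snapshot DATA_DIR PUBLIC_DIR PREFIX
create_snapshot() {
    src="$1"      # synced service data, e.g. a tfchain volume
    dst="$2"      # rsync module path, e.g. /storage/rsync-public/mainnet
    prefix="$3"   # e.g. tfchain-mainnet

    name="${prefix}-$(date +%Y-%m-%d).tar.gz"
    # archive the data directory into the public rsync path
    tar -czf "${dst}/${name}" -C "${src}" .
    # repoint the "latest" name at the newest archive
    ln -sf "${name}" "${dst}/${prefix}-latest.tar.gz"
}
```

With this shape, the `*-latest.tar.gz` names used in the rsync download commands always resolve to the newest archive (the `-L` rsync flag follows the symlink).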
+ +### Create the Service Configuration File + +To set up a public rsync server, create and edit the following file: + +`/etc/rsyncd.conf` + +```sh +pid file = /var/run/rsyncd.pid +lock file = /var/run/rsync.lock +log file = /var/log/rsync.log +port = 34873 +max connections = 20 +exclude = lost+found/ +transfer logging = yes +use chroot = yes +reverse lookup = no + +[gridsnapshots] +path = /storage/rsync-public/mainnet +comment = THREEFOLD GRID MAINNET SNAPSHOTS +read only = true +timeout = 300 +list = false + +[gridsnapshotstest] +path = /storage/rsync-public/testnet +comment = THREEFOLD GRID TESTNET SNAPSHOTS +read only = true +timeout = 300 +list = false + +[gridsnapshotsdev] +path = /storage/rsync-public/devnet +comment = THREEFOLD GRID DEVNET SNAPSHOTS +read only = true +timeout = 300 +list = false +``` + +### Start the Service + +Start and enable via systemd: + +```sh +systemctl start rsync +systemctl enable rsync +systemctl status rsync +``` diff --git a/collections/developers/grid_deployment/snapshots_archive.md b/collections/developers/grid_deployment/snapshots_archive.md new file mode 100644 index 0000000..eac3ff7 --- /dev/null +++ b/collections/developers/grid_deployment/snapshots_archive.md @@ -0,0 +1,206 @@ +

# Snapshots for Grid Backend Services

+

## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Services](#services) +- [ThreeFold Public Snapshots](#threefold-public-snapshots) +- [Deploy the Services with Scripts](#deploy-the-services-with-scripts) + - [Start All the Services](#start-all-the-services) + - [Stop All the Services](#stop-all-the-services) + - [Create the Snapshots](#create-the-snapshots) +- [Expose the Snapshots with Rsync](#expose-the-snapshots-with-rsync) + - [Create the Service Configuration File](#create-the-service-configuration-file) + - [Start the Service](#start-the-service) + +*** + +## Introduction + +To facilitate deploying grid backend services, we provide snapshots to significantly reduce sync time. This can be set up anywhere from scratch. Once all services are synced, one can use the scripts to create snapshots automatically. + +## Prerequisites + +There are a few prerequisites to properly run the ThreeFold services. + +- [Docker engine](../computer_it_basics/docker_basics.md#install-docker-desktop-and-docker-engine) +- [Rsync](../computer_it_basics/file_transfer.md#rsync) + +## Services + +There are 3 grid backend services that collect enough data to justify creating snapshots: + +- ThreeFold blockchain - TFChain +- Graphql - Indexer +- Graphql - Processor + +## ThreeFold Public Snapshots + +ThreeFold hosts all available snapshots at: [https://bknd.snapshot.grid.tf/](https://bknd.snapshot.grid.tf/). Those snapshots can be downloaded with rsync: + +- Mainnet: + ``` + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshots/tfchain-mainnet-latest.tar.gz . + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshots/indexer-mainnet-latest.tar.gz . + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshots/processor-mainnet-latest.tar.gz . + ``` +- Testnet: + ``` + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotstest/tfchain-testnet-latest.tar.gz . 
+ rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotstest/indexer-testnet-latest.tar.gz . + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotstest/processor-testnet-latest.tar.gz . + ``` +- Devnet: + ``` + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotsdev/tfchain-devnet-latest.tar.gz . + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotsdev/indexer-devnet-latest.tar.gz . + rsync -Lv --progress --partial rsync://bknd.snapshot.grid.tf:34873/gridsnapshotsdev/processor-devnet-latest.tar.gz . + ``` + +Let's now see how to use those snapshots to run the services via scripts. + +## Deploy the Services with Scripts + +You can deploy the 3 individual services using known methods such as [Docker](https://manual.grid.tf/computer_it_basics/docker_basics.html). To facilitate the process, scripts are provided that run the necessary docker commands. + +The first script creates the snapshots, while the second and third scripts serve to start and stop all services. + +You can use the start script to start all services and then set a cron job to periodically execute the snapshot creation script. This will ensure that you always have the latest version available on your server. + +### Start All the Services + +You can start all services by running the provided scripts. + +- Download the script. + - Main net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/startall.sh + ``` + - Test net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/startall.sh + ``` + - Dev net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/startall.sh + ``` +- Set the permissions of the script + ``` + chmod +x startall.sh + ``` +- Run the script to start all services via docker engine. 
+ ``` + ./startall.sh + ``` + +### Stop All the Services + +You can stop all services by running the provided scripts. + +- Download the script. + - Main net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/stopall.sh + ``` + - Test net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/stopall.sh + ``` + - Dev net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/stopall.sh + ``` +- Set the permissions of the script + ``` + chmod +x stopall.sh + ``` +- Run the script to stop all services via docker engine. + ``` + ./stopall.sh + ``` + +### Create the Snapshots + +You can set a cron job to execute a script running rsync to create the snapshots and generate logs at a given interval. + +- First download the script. + - Main net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/mainnet/create_snapshot.sh + ``` + - Test net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/testnet/create_snapshot.sh + ``` + - Dev net + ``` + wget https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/create_snapshot.sh + ``` +- Set the permissions of the script + ``` + chmod +x create_snapshot.sh + ``` +- Make sure to adjust the snapshot creation script for your specific deployment +- Set a cron job + ``` + crontab -e + ``` + - Here is an example of a cron job where we execute the script every day at 1 AM and send the logs to `/var/log/snapshots/snapshots-cron.log`. + ```sh + 0 1 * * * sh /opt/snapshots/create-snapshot.sh > /var/log/snapshots/snapshots-cron.log 2>&1 + ``` + +## Expose the Snapshots with Rsync + +We use rsync with a systemd service to expose the snapshots to the community. 
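Before exposing the archives, it can be worth sanity-checking that the snapshots referenced by the module paths actually exist, are non-empty, and are readable gzip tarballs. A small helper along these lines (our own illustration, not part of the provided scripts) could run as an extra step in the cron job:

```shell
# check_snapshots DIR FILE... : fail if any expected archive is missing,
# empty, or not a listable gzip tarball. Illustrative helper only.
check_snapshots() {
    dir="$1"; shift
    for f in "$@"; do
        if [ ! -s "${dir}/${f}" ]; then
            echo "missing or empty: ${dir}/${f}" >&2
            return 1
        fi
        # verify the archive can actually be listed
        tar -tzf "${dir}/${f}" > /dev/null || return 1
    done
}
```

For example, `check_snapshots /storage/rsync-public/mainnet tfchain-mainnet-latest.tar.gz` (the path follows the rsyncd module configuration shown in this section).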
+ +### Create the Service Configuration File + +To set up a public rsync server, create and edit the following file: + +`/etc/rsyncd.conf` + +```sh +pid file = /var/run/rsyncd.pid +lock file = /var/run/rsync.lock +log file = /var/log/rsync.log +port = 34873 +max connections = 20 +exclude = lost+found/ +transfer logging = yes +use chroot = yes +reverse lookup = no + +[gridsnapshots] +path = /storage/rsync-public/mainnet +comment = THREEFOLD GRID MAINNET SNAPSHOTS +read only = true +timeout = 300 +list = false + +[gridsnapshotstest] +path = /storage/rsync-public/testnet +comment = THREEFOLD GRID TESTNET SNAPSHOTS +read only = true +timeout = 300 +list = false + +[gridsnapshotsdev] +path = /storage/rsync-public/devnet +comment = THREEFOLD GRID DEVNET SNAPSHOTS +read only = true +timeout = 300 +list = false +``` + +### Start the Service + +Start and enable via systemd: + +```sh +systemctl start rsync +systemctl enable rsync +systemctl status rsync +``` + +If you're interested in hosting your own instance of the grid to strengthen the ThreeFold ecosystem, make sure to read the next section, [Guardians of the Grid](./tfgrid_guardians.md). \ No newline at end of file diff --git a/collections/developers/grid_deployment/tfgrid_stacks.md b/collections/developers/grid_deployment/tfgrid_stacks.md new file mode 100644 index 0000000..7845722 --- /dev/null +++ b/collections/developers/grid_deployment/tfgrid_stacks.md @@ -0,0 +1,32 @@ +

# TFGrid Stacks

+

## Table of Contents

+ + +- [Introduction](#introduction) +- [Advantages](#advantages) +- [Run Your Own Stack](#run-your-own-stack) + + +*** + +## Introduction + +ThreeFold is an open-source project and anyone can run the full stack of the TFGrid in a totally decentralized manner. In practice, this means that anyone can grab a docker compose file of the TFGrid stacks shared by ThreeFold and run an instance of the grid services on their own domain. + +This means that you could host your own instance of the ThreeFold Dashboard at `dashboard.yourdomain.com` that would serve your own instance of the complete TFGrid stack! Users could then access the ThreeFold Dashboard via your own domain. + +The process is actually very straightforward and we even provide a script to streamline it. + +## Advantages + +Setting up such instances of the TFGrid ensures resiliency and decentralization of the ThreeFold ecosystem. + +As a very concrete example, imagine that one instance of the Dashboard, `dashboard.grid.tf`, goes offline; users could still access the Dashboard from another instance. The more users of the TFGrid deploy their own instance, the more resilient the grid becomes. + +The overall ThreeFold ecosystem becomes more resilient to failures of individual nodes. + +## Run Your Own Stack + +To set up your own instance of the TFGrid, you can download a snapshot of the grid and deploy the TFGrid services with Docker. We even provide scripts to quicken the whole process! + +Read more about snapshots in the [next section](./grid_deployment_full_vm.md). \ No newline at end of file diff --git a/collections/developers/internals/internals.md b/collections/developers/internals/internals.md new file mode 100644 index 0000000..af1c1f8 --- /dev/null +++ b/collections/developers/internals/internals.md @@ -0,0 +1,19 @@ +

# Internals

+ +In this section of the developers book, we present a partial list of system components. Content will be added progressively. + +

## Table of Contents

+ +- [Reliable Message Bus (RMB)](rmb/rmb_toc.md) + - [Introduction to RMB](rmb/rmb_intro.md) + - [RMB Specs](rmb/rmb_specs.md) + - [RMB Peer](rmb/uml/peer.md) + - [RMB Relay](rmb/uml/relay.md) + +- [ZOS](zos/index.md) + - [Manual](./zos/manual/manual.md) + - [Workload Types](./zos/manual/workload_types.md) + - [Internal Modules](./zos/internals/internals.md) + - [Capacity](./zos/internals/capacity.md) + - [Performance Monitor Package](./zos/performance/performance.md) + - [API](./zos/manual/api.md) \ No newline at end of file diff --git a/collections/developers/internals/rmb/img/layout.png b/collections/developers/internals/rmb/img/layout.png new file mode 100644 index 0000000..a6fc981 Binary files /dev/null and b/collections/developers/internals/rmb/img/layout.png differ diff --git a/collections/developers/internals/rmb/img/peer.png b/collections/developers/internals/rmb/img/peer.png new file mode 100644 index 0000000..6d6a7d8 Binary files /dev/null and b/collections/developers/internals/rmb/img/peer.png differ diff --git a/collections/developers/internals/rmb/img/relay.png b/collections/developers/internals/rmb/img/relay.png new file mode 100644 index 0000000..6b8bf01 Binary files /dev/null and b/collections/developers/internals/rmb/img/relay.png differ diff --git a/collections/developers/internals/rmb/rmb_intro.md b/collections/developers/internals/rmb/rmb_intro.md new file mode 100644 index 0000000..bb08f99 --- /dev/null +++ b/collections/developers/internals/rmb/rmb_intro.md @@ -0,0 +1,107 @@ +

# Introduction to Reliable Message Bus (RMB)

+ +

## Table of Contents

+ +- [What is RMB](#what-is-rmb) +- [Why](#why) +- [Specifications](#specifications) +- [How to Use RMB](#how-to-use-rmb) +- [Libraries](#libraries) + - [Known Libraries](#known-libraries) + - [No Known Libraries](#no-known-libraries) +- [What is rmb-peer](#what-is-rmb-peer) +- [Download](#download) +- [Building](#building) +- [Running tests](#running-tests) + +*** + +## What is RMB + +Reliable Message Bus is a secure communication channel that allows `bots` to communicate together in a `chat`-like way. It makes it very easy to host a service or a set of functions to be used by anyone, even if your service is running behind NAT. + +Out of the box RMB provides the following: + +- Guaranteed authenticity of the messages. You are always sure that a received message comes from the twin it claims to come from +- End-to-end encryption +- Support for 3rd party hosted relays. Anyone can host a relay and people can use it safely, since there is no way messages can be inspected while e2e encryption is used. That's similar to `home` servers by `matrix` + +![layout](img/layout.png) +*** +## Why + +RMB is developed by ThreefoldTech to create a global network of nodes that are available to host capacity. Each node acts like a single bot that you can ask to host your capacity. This enforces a unique set of requirements: + +- Communication needs to be reliable + - Minimize and completely eliminate message loss + - Reduce downtime +- Nodes need to authenticate and authorize calls + - Guarantee the identity of the other peer so only owners of data can see it +- Fast request-response time + +Starting from this, we came up with a more detailed set of requirements: + +- Users (or rather bots) need their identity maintained by `tfchain` (a blockchain), hence each bot needs an account on tfchain to be able to use `rmb` +- Each message can then be signed with the `bot` keys, making it easy to verify the identity of the sender of a message. This is done both ways. 
+- To support federation (using 3rd party relays) we needed to add e2e encryption to make sure messages that are surfing the public internet can't be sniffed +- e2e encryption is done by deriving an encryption key from the same identity seed, and sharing the public key on `tfchain`, hence it's available for everyone to use +*** +## Specifications + +For details about the protocol itself, please check the [specs](./rmb_specs.md). +*** +## How to Use RMB + +There are many ways to use `rmb` because it was built for `bots` and software to communicate. Hence, there is no mobile app for it, for example, but instead a set of libraries you can use to connect to the network, make chitchat with other bots, then exit. + +Or you can keep the connection open forever to answer other bots' requests if you are providing a service. +*** +## Libraries + +If there is a library in your preferred language, then you are in luck! Simply follow the library documentation to implement a service bot, or to make requests to other bots. + +### Known Libraries + +- Golang [rmb-sdk-go](https://github.com/threefoldtech/rmb-sdk-go) +- Typescript [rmb-sdk-ts](https://github.com/threefoldtech/rmb-sdk-ts) +*** +### No Known Libraries + +If there is no library in your preferred language, here's what you can do: + +- Implement a library in your preferred language +- If it's too much to do all the signing, verification and e2e in your language, then use `rmb-peer` +*** +## What is rmb-peer + +Think of `rmb-peer` as a gateway that stands between you and the `relay`. `rmb-peer` uses your mnemonics (your identity secret key) to assume your identity and connects to the relay on your behalf; it maintains the connection forever and takes care of + +- reconnecting if the connection was lost +- verifying received messages +- decrypting received messages +- sending requests on your behalf, taking care of all the crypto heavy lifting. + +Then it provides a simple (plain-text) API over `redis`. 
This means that to send messages (or handle requests) you just need to be able to push and pop messages from some redis queues. Messages are simple plain-text JSON. + +> More details can be found [here](./rmb_specs.md) + +*** +## Download + +Please check the latest [releases](https://github.com/threefoldtech/rmb-rs/releases). Normally you only need the `rmb-peer` binary, unless you want to host your own relay. +*** +## Building + +```bash +git clone git@github.com:threefoldtech/rmb-rs.git +cd rmb-rs +cargo build --release --target=x86_64-unknown-linux-musl +``` +*** +## Running tests + +While inside the repository + +```bash +cargo test +``` diff --git a/collections/developers/internals/rmb/rmb_specs.md b/collections/developers/internals/rmb/rmb_specs.md new file mode 100644 index 0000000..2d28cbe --- /dev/null +++ b/collections/developers/internals/rmb/rmb_specs.md @@ -0,0 +1,258 @@ +

# RMB Specs

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Overview of the Operation of RMB Relay](#overview-of-the-operation-of-rmb-relay) + - [Connections](#connections) + - [Peer](#peer) + - [Peer implementation](#peer-implementation) + - [Message Types](#message-types) + - [Output Requests](#output-requests) + - [Incoming Response](#incoming-response) + - [Incoming Request](#incoming-request) + - [Outgoing Response](#outgoing-response) +- [End2End Encryption](#end2end-encryption) +- [Rate Limiting](#rate-limiting) + +*** + +# Introduction + +RMB (Reliable Message Bus) is a set of protocols and tools (client and daemon) that aims to abstract inter-process communication between multiple processes running over multiple nodes. + +The point behind using RMB is to allow clients to not know much about the other process, or where it lives (the client doesn't know network addresses, or identity). Unlike HTTP(S) or gRPC, where the caller must know the exact address (or DNS name) and endpoints of the service it's trying to call, RMB requires you to only know about: + +- The twin ID (numeric ID) of the endpoint as defined by `tfchain` +- The command (string), which is simply the function to call +- The request "body", which is a binary blob that is passed to the command as is + - The implementation of the command then needs to interpret this data as intended (out of scope of rmb) + +Twins are stored on tfchain, hence the identity of twins is guaranteed not to be spoofed or phished. When a twin is created, it needs to define 2 things: + +- Its RMB relay +- Its elliptic curve public key (we use the secp256k1 (K-256) elliptic curve) + +> This data is stored on tfchain forever, and only the twin can change it using its secure key. Hence phishing is impossible. A twin can decide later to change this encryption key or relay. + +Once all twins have their data set correctly on the chain. 
Any 2 twins can communicate with full end-to-end encryption as follows: + +- A twin establishes a WS connection to its relay of choice +- A twin creates an `envelope` as defined by the protobuf [schema](https://github.com/threefoldtech/rmb-rs/blob/main/proto/types.proto) +- The twin fills in all envelope information (more about this later) +- The twin pushes the envelope to the relay + - If the destination twin is also using the same relay, the message is directly forwarded to this twin + - If federation is needed (the twin is using a different relay), the message is forwarded to the proper twin. + +> NOTE: since a sender twin also needs to encrypt the message for the receiver twin, a twin queries the `tf-chain` for the twin information. Usually it caches this data locally for reuse, hence clients need to make sure this data is always up-to-date. + +On the relay, the relay checks the federation information set on the envelope and then decides either to forward it internally to one of its connected clients, or to forward it to the destination relay. Hence relays need to be publicly available. + +When the relay receives a message that is destined to a `local` connected client, it queues it for delivery. The relay can maintain a queue of messages per twin up to a limit. If the twin does not come back online to consume queued messages, the relay will start to drop messages for that specific twin client. + +Once a twin comes online and connects to its relay, the peer will receive all queued messages. The messages are pushed over the web-socket as they are received. The client can then decide how to handle them (a message can be a request or a response). A message type can be inspected as defined by the schema. +*** +# Overview of the Operation of RMB Relay + +![relay](img/relay.png) + +## Connections + +By design, there can be only `ONE TWIN` with that specific ID. Hence there is only `ONE RELAY` set on tfchain per twin.
This forces a twin to always use this defined relay if it wishes to open multiple connections to its relay. In other words, once a twin sets a relay in its public information, it can only use that relay for all of its connections. If it decides to change the relay address, all connections must use the new relay, otherwise messages will get lost as they will be delivered to the wrong relay. + +In an RPC system, the response of a request must be delivered to the requester. Hence if a twin is maintaining multiple connections to its relay, it needs to `uniquely` identify each connection to allow the relay to route back the responses to the right requester. We call this `id` a `session-id`. The `session-id` must be unique per twin. + +The relay can maintain **MULTIPLE** connections per peer given that each connection has a unique **SID** (session id). But for each (twin-id, session-id) combo there can be only one connection. If a new connection with the same (twin-id, session-id) is created, the older connection is dropped. + +The received message always has the session-id as part of the source address. A reply message then must have its destination set back to the source as is; this allows the relay to route the message back correctly without the need to maintain an internal state. + +The `rmb-peer` process reserves the `None` sid. It connects with no session id, hence you can only run one `rmb-peer` per `twin` (identity). But the same twin (identity) can make other connections with other rmb clients (for example the rmb-sdk-go direct client) to establish more connections with unique session ids. + +## Peer + +Any language or code that can open a `WebSocket` connection to the relay can work as a peer. A peer needs to do the following: + +- Authenticate with the relay. This is done by providing a `JWT` that is signed by the twin key (more on that later) +- Handle received binary messages +- Send binary messages + +Each message is an object of type `Envelope` serialized with protobuf. 
The type definition can be found under `proto/types.proto` + +## Peer implementation + +This project already has a peer implementation that works as a local peer gateway. Running this peer instance allows you to run multiple services (and clients) behind that gateway, and they appear to the world as a single twin. + +- The peer gateway (rmb-peer) starts and connects to the relay +- If requests are received, they are verified, decrypted and pushed to a redis queue that is command specific (from the envelope) +- A service can then be waiting on this redis queue for new messages + - The service can process the command, and push a response back to a specific redis queue for responses. +- The gateway can then pull ready responses from the responses queue, create a valid envelope, encrypt, sign and send it to the destination + +![peer](img/peer.png) + +### Message Types + +Concerning `rmb-peer` message types: to make it easy for apps to work behind an `rmb-peer`, we use JSON messages for communication between the local process and the rmb-peer. The rmb-peer still maintains fully binary communication with the relay. 
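For a feel of what this local plain-text protocol looks like, here is a hedged sketch of a request document matching the `JsonOutgoingRequest` fields described in this section. The values, the destination twin ID and the reply queue name are made up for illustration; with a running rmb-peer and redis you would push this string onto the `msgbus.system.local` queue:

```shell
# Illustrative only: an outgoing request as rmb-peer expects it on redis.
# Field names follow the serde renames of JsonOutgoingRequest; the twin ID,
# command payload and reply queue below are invented for this example.
request='{
  "ver": 1,
  "ref": "my-tracking-id",
  "cmd": "zos.system.version",
  "exp": 300,
  "dat": "",
  "dst": [1234],
  "ret": "msgbus.reply.example",
  "shm": "application/json",
  "now": 1700000000
}'

# With redis available you would push it with:
#   redis-cli LPUSH msgbus.system.local "$request"
# and then block on the reply queue declared in "ret":
#   redis-cli BRPOP msgbus.reply.example 30
printf '%s\n' "$request"
```

The response popped from the reply queue would follow the `JsonIncomingResponse` shape, with `src` identifying the answering twin and session.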
+ +A request message is defined as follows. + +#### Output Requests + +This is created by a client who wants to make a request to a remote service. + +> this message is pushed to `msgbus.system.local` to be picked up by the peer + +```rust +#[derive(Serialize, Deserialize, Clone, Debug)] +pub struct JsonOutgoingRequest { + #[serde(rename = "ver")] + pub version: usize, + #[serde(rename = "ref")] + pub reference: Option<String>, + #[serde(rename = "cmd")] + pub command: String, + #[serde(rename = "exp")] + pub expiration: u64, + #[serde(rename = "dat")] + pub data: String, + #[serde(rename = "tag")] + pub tags: Option<String>, + #[serde(rename = "dst")] + pub destinations: Vec<u32>, + #[serde(rename = "ret")] + pub reply_to: String, + #[serde(rename = "shm")] + pub schema: String, + #[serde(rename = "now")] + pub timestamp: u64, +} +``` + +#### Incoming Response + +A response message is defined as follows. This is what is received as a response by a client in response to its outgoing request. + +> This response is what is pushed to the `$ret` queue defined by the outgoing request, hence the client needs to wait on this queue until the response is received or it times out + +```rust +#[derive(Serialize, Deserialize, Clone, Debug)] +pub struct JsonError { + pub code: u32, + pub message: String, +} + +#[derive(Serialize, Deserialize, Clone, Debug)] +pub struct JsonIncomingResponse { + #[serde(rename = "ver")] + pub version: usize, + #[serde(rename = "ref")] + pub reference: Option<String>, + #[serde(rename = "dat")] + pub data: String, + #[serde(rename = "src")] + pub source: String, + #[serde(rename = "shm")] + pub schema: Option<String>, + #[serde(rename = "now")] + pub timestamp: u64, + #[serde(rename = "err")] + pub error: Option<JsonError>, +} +``` + +#### Incoming Request + +An incoming request is a modified version of the request that is received by a service running behind the RMB peer. +> this request is received on `msgbus.${request.cmd}` (always prefixed with `msgbus`) + +```rust +#[derive(Serialize, 
Deserialize, Clone, Debug)]
+pub struct JsonIncomingRequest {
+    #[serde(rename = "ver")]
+    pub version: usize,
+    #[serde(rename = "ref")]
+    pub reference: Option<String>,
+    #[serde(rename = "src")]
+    pub source: String,
+    #[serde(rename = "cmd")]
+    pub command: String,
+    #[serde(rename = "exp")]
+    pub expiration: u64,
+    #[serde(rename = "dat")]
+    pub data: String,
+    #[serde(rename = "tag")]
+    pub tags: Option<String>,
+    #[serde(rename = "ret")]
+    pub reply_to: String,
+    #[serde(rename = "shm")]
+    pub schema: String,
+    #[serde(rename = "now")]
+    pub timestamp: u64,
+}
+```
+
+Services that receive this need to make sure their response's `destination` has the same value as the incoming request's `source`
+
+#### Outgoing Response
+
+A response message is defined as follows. This is what a service sends as a response to an incoming request.
+
+Your bot (server) needs to make sure to set `destination` to the same value as the incoming request's `source`
+
+> this response is what is pushed to `msgbus.system.reply`
+
+```rust
+#[derive(Serialize, Deserialize, Clone, Debug)]
+pub struct JsonOutgoingResponse {
+    #[serde(rename = "ver")]
+    pub version: usize,
+    #[serde(rename = "ref")]
+    pub reference: Option<String>,
+    #[serde(rename = "dat")]
+    pub data: String,
+    #[serde(rename = "dst")]
+    pub destination: String,
+    #[serde(rename = "shm")]
+    pub schema: Option<String>,
+    #[serde(rename = "now")]
+    pub timestamp: u64,
+    #[serde(rename = "err")]
+    pub error: Option<JsonError>,
+}
+```
+***
+# End2End Encryption
+
+The relay is totally opaque to the messages. Our implementation of the relay does not inspect messages except for the routing attributes (source and destination addresses, and federation information). But since the relay is designed to be hosted by other 3rd parties (hence federation) you should
+not fully trust the relay or whoever is hosting it.
Hence e2e encryption was needed.
+
+As you already understand, e2e encryption is completely up to the peers to implement, and other implementations of the peers can even agree on a completely different encryption and key-sharing algorithm (again, the relay does not care). But in our implementation of the e2e (rmb-peer) things go like this:
+
+- Each twin has a `pk` field on tfchain. When rmb-peer starts, it generates a `secp256k1` key from the same seed as the user's tfchain mnemonics. Note that this does not make the encryption key and the signing key related in any way; they are simply derived from the same seed.
+- On start, if the key is not already set on the twin object, the key is updated.
+- If a peer A is trying to send a message to peer B, but peer B does not have his `pk` set, peer A will send the message in plain-text format (please check the protobuf envelope type for details)
+- If peer B has a public key set, peer A will prefer e2e encryption and will do the following:
+- Derive a shared secret point with the `ecdh` algorithm; the key is the `sha256` of that point
+- `shared = ecdh(A.sk, B.pk)`
+- create a 12 bytes random nonce
+- encrypt data as `encrypted = aes-gcm.encrypt(shared-key, nonce, plain-data)`
+- create cipher as `cipher = nonce + encrypted`
+- fill `envelope.cipher = cipher`
+- on receiving a message peer B does the same in the opposite direction
+- split data and nonce (nonce is always the first 12 bytes)
+- derive the same shared key
+- `shared = ecdh(B.sk, A.pk)`
+- `plain-data = aes-gcm.decrypt(shared-key, nonce, encrypted)`
+***
+# Rate Limiting
+
+To avoid abuse of the server, and to prevent DoS attacks on the relay, a rate limiter is used to limit the number of clients' requests.\
+It was decided that the rate limiter should only watch websocket connections of users, since all other requests/connections with users consume little resources, and since the relay handles the max number of users inherently.\
+The limiter's configurations are passed as a command line argument
`--limit <count>, <size>`. `<count>` represents the number of messages a twin is allowed to send in each time window, `<size>` represents the total size of messages in bytes a twin is allowed to send in each time window.\
+Currently there are two implementations of the rate limiter:
+
+- `NoLimit` which imposes no limits on users.
+- `FixedWindowLimiter` which breaks the timeline into fixed time windows, and allows a twin to send a fixed number of messages, with a fixed total size, in each time window. If a twin exceeds their limits in some time window, their message is dropped, an error message is sent back to the user, the relay dumps a log about this twin, and the user gets to keep their connection with the relay.
diff --git a/collections/developers/internals/rmb/rmb_toc.md b/collections/developers/internals/rmb/rmb_toc.md
new file mode 100644
index 0000000..0801571
--- /dev/null
+++ b/collections/developers/internals/rmb/rmb_toc.md
@@ -0,0 +1,18 @@
+

+# Reliable Message Bus (RMB)

+
+Reliable Message Bus is a secure communication channel that allows bots to communicate together in a chat-like way. It makes it very easy to host a service or a set of functions to be used by anyone, even if your service is running behind NAT.
+
+Out of the box RMB provides the following:
+
+- Guaranteed authenticity of the messages.
+  - You are always sure that the received message is from whoever claims to be the sender.
+- End-to-end encryption.
+- Support for 3rd party hosted relays.
+  - Anyone can host a relay and people can use it safely since there is no way messages can be inspected while using e2e. This is similar to Matrix home servers.
+

+## Table of Contents

+ +- [Introduction to RMB](rmb_intro.md) +- [RMB Specs](rmb_specs.md) +- [RMB Peer](uml/peer.md) +- [RMB Relay](uml/relay.md) \ No newline at end of file diff --git a/collections/developers/internals/rmb/uml/peer.md b/collections/developers/internals/rmb/uml/peer.md new file mode 100644 index 0000000..80c195a --- /dev/null +++ b/collections/developers/internals/rmb/uml/peer.md @@ -0,0 +1,44 @@ +

+# RMB Peer

+ +

+## Table of Contents

+
+- [Introduction](#introduction)
+- [Example](#example)
+
+***
+
+## Introduction
+
+We present an example of an RMB peer sequence diagram. Note that the extension for this kind of file is `.wsd`.
+
+## Example
+
+```
+@startuml RMB
+
+participant "Local Process" as ps
+database "Local Redis" as redis
+participant "Rmb Peer" as peer
+
+participant "Rmb Relay" as relay
+note across: Handling Out Request
+peer --> relay: Establish connection
+
+ps -> redis: PUSH message on \n(msgbus.system.local)
+redis -> peer : POP message from \n(msgbus.system.local)
+
+peer -> relay: message pushed over the websocket to the relay
+...
+relay -> peer: received response
+peer -> redis: PUSH over $msg.reply_to queue
+...
+note across: Handling In Request
+relay --> peer: Received a request
+peer -> redis: PUSH request to `msgbus.$cmd`
+redis -> ps: POP new request msg
+ps -> ps: Process message
+ps -> redis: PUSH to (msgbus.system.reply)
+redis -> peer: POP from (msgbus.system.reply)
+peer -> relay: send response message
+@enduml
+```
\ No newline at end of file
diff --git a/collections/developers/internals/rmb/uml/relay.md b/collections/developers/internals/rmb/uml/relay.md
new file mode 100644
index 0000000..0618d7b
--- /dev/null
+++ b/collections/developers/internals/rmb/uml/relay.md
@@ -0,0 +1,40 @@
+

+# RMB Relay

+ +

+## Table of Contents

+
+- [Introduction](#introduction)
+- [Example](#example)
+
+***
+
+## Introduction
+
+We present an example of an RMB relay sequence diagram. Note that the extension for this kind of file is `.wsd`.
+
+## Example
+
+
+```
+@startuml RMB
+actor "Peer 1" as peer1
+participant "Relay 1" as relay1
+participant "Relay 2" as relay2
+actor "Peer 2" as peer2
+actor "Peer 3" as peer3
+
+peer1 --> relay1: Establish WS connection
+peer2 --> relay1: Establish WS connection
+peer3 --> relay2: Establish WS connection
+
+peer1 -> relay1: Send message (Envelope)\n(destination "Peer 2")
+relay1 -> peer2: Forward message directly
+
+peer1 -> relay1: Send message (Envelope)\n(destination "Peer 3")
+note right
+"Peer 3" does not live on "Relay 1" hence federation is
+needed
+end note
+relay1 -> relay2: Federation of message for\n Peer 3
+relay2 -> peer3: Forward message directly
+@enduml
+```
\ No newline at end of file
diff --git a/collections/developers/internals/zos/assets/.keep b/collections/developers/internals/zos/assets/.keep
new file mode 100644
index 0000000..e69de29
diff --git a/collections/developers/internals/zos/assets/0-OS v2 architecture.xml b/collections/developers/internals/zos/assets/0-OS v2 architecture.xml
new file mode 100644
index 0000000..263e7e3
--- /dev/null
+++ b/collections/developers/internals/zos/assets/0-OS v2 architecture.xml
@@ -0,0 +1 @@
+3Vlbc6JIFP41Pkp10/fHxNzciUkqJpPKvkyhNMoEaQfxll+/BwFFAWN2NTs1WhXgnL75fd85feg0SGu0uI6c8bBjXB00bOQuGuSiYduYKAyXxLJMLULR1DCIfDdrtDF0/XedGVFmnfqunmw1jI0JYn+8beybMNT9eMvmRJGZbzfzTLA969gZ6JKh23eCsvXFd+NhapUMbew32h8M85kxyjwjJ2+cGSZDxzXzgolcNkgrMiZO70aLlg4S8HJc0n5XNd71wiIdxod06LwFL4/3TaGmC7KInu7i29llU6SjzJxgmv3ghs0DGO/cMzAsrDpeZlDwX1OTO5qTFVFn0MCm48XGCXeD5Iqa911w3gAycAn0bCULJ+oP/RhomkY6nwcWnE6VdsywWs9qx3qR2IfxKAADhttJHJk33TKBicASmlAnq/KDYMfkBP4ghMc+AKTBfj7TUewDtWeZY+S7bjLN+TxZVXfs9JM55yBksEVmGro6wQ6tl1UEO8M/GVMvCqYM/GttRjqOltAk8xLC0i7LbYXPN7LCMrMNC5LKpeZkSh6sR96QDTcZ39Xcv0+dv9vu/Mf74/J1aI+uxY1eNnEF9zvgDwCFcc0PXcec08uboxqYajHBSFrbqIgyKpxVoGKjI8ByNls89cT928/OhedO23ejWfuiApZbP5yCxtE3HYWJjndA2igF12ipIN6iThs28VjyLYkaPHz1yeKtYE8/lZLcy/LHOs11ScsUMI4tySlFXAjFOcZlQvjn+Xie6Oi+9zNJ25AjnJ4O0q7d2ERJWoYkatwpwLwaOPDDt9TvOrEDyWeVuu0rINNudcZPwUXr2v9Fo3v75ZWddxhPu+3lmhaQyLRxJGJdR0uvX0lsX+qedwoCcy+2LSxsJgiWikHi+TDClLQwg50LM4oksRE/Cr3wWGB4L+N3Op6b6O0zjI9b8/nz2fMDX949dB9+hJft8eM245UbHj4d40xLl1YxLu0eWYXyqRiHALUI5VyKpADJK6w9fDMLU06EQpRySTH9ar5bkNMcP4Rd+ROMD0Of3l7Nb5+80ffX6K9Ji8jFAYzbJ2NcY+BcVDGuuCDOKRmHLcBimCAlkLApzqvsesoxwhDjjNsUEa6EUuJUnB9WeJLSLvu9syOGY22xnmf3KzOxy3ucnZIlypDFFSGC0qS4kx+RJCEPE8YUgfKGEKnUUTgqEXInNZFPj+8t8vCOWhPTPEPdJq2oBtMK3fVneXXegan8/iSZaVO+F/xVXUzow37uh4Pc2YtyX872rr1u8D9DFTXF1FfLoi43XwX+JD5u7VUpOPZH1l7WpoqiigDFv03tdVgaKB8IPLThL8LHjb7f6LVnHZXCkoJwhoXAinG1RVzVS5GEMtvmhEq4RZQcqYoq0eQJHPxiF8IffxuM3Nsb3X5rVmyfJX7Sd/cS9D2kieZV0CMtEexRxaMP/N8BrzsmqCfCVpYshkS5lrGRhSWFRCgpU5jLLwS+vE0+jweR4yZvq3Mn7g+TmvaIkXIAXWWC9krm44jYA7U4UUlSuWL1r0WOkHQQqkQNMbHy7OQXb/X538Vft38Uz2KwsqB+5wpLQphCxzmLOYyR/MyyuD1EZuZPfBOeKAA8xxVO5Wbeo5whengAqBqS9gbAadCGx83B/8pX+PcJufwHzZbbbqMwEIafBmlvUhmcQ/dyE9JGq22jNj1IexO54IBbYyPbOdCn3zGYAEn2KHWVG7A/hpnh98wID0+y3bUieXojY8q9AMU7D4deEPj9oA83S4qKjC5HFUgUi51RAxbsnTqIHF2zmOqOoZGSG5Z3YSSFoJHpMKKU3HbNVpJ3o+YkoUdgERF+TJ9ZbNKKXg5Qw2eUJWkd2UfuSUZqYwd0SmK5bSE89fBESWmqVbabUG7Fq3V5jHbJZpbN9dOX3djoaPn9ftmrnF39zSv7T1BUmH92fXuHk2/oDqvgHQ309eh29PWxF1SuN4SvnV5eMOQQZJxC1GFiVwsjlRU5QFAda9D
VWbyo2qImELx5rYZ5DR5Spq2uzgkqd4rqXArNXkpkpK00SmzeWwb6Qw2UH2HS8g0RyYyJxBoRQ+AmV3BBvfmilUJ+mMGaHxLOauL8rKSyV8602e+gIg1hgto1ETFcn25aYRofJ7wy/VaWkADhMntqAfoE52eYYVJ4wcQGYZzqQhuaVfuLi4s/9I564djmR3ISMVPY0uBEiFKYXxzNsUuAbW3KIjdF3TmG7spCMBkH4MNSGyXf6ERy0AeHQgqwHMN38AOkbWKQDQ4Hze5B5gB60Dp4vE2ZoQvgNtQWBg8wuaFqxcsGS1kcUwFMybWIqS1gtM8QzCCzg+b+TWf4+3aFOUdlRo2ysjkvfdfgRXe7bcYF/uxY2h4V2EHiRlSy99y0ISxcJ57uytclyeZqOnib3r8OJrOiGIXENXK7K1HvCkr8yhXowUE1Kvk/0bZ1hh8loj/sqoiDYxn94ISMw49S8Xi2hcdteY5S7qU7GynxiYIsZ1Ap5fkq2b/8f0rCtvkXKJ+1/qjw9Ac=nZRLc9owEIB/DTPtgRmDSUqP5dGmzUAOZkJuHWGtLSWy1pEFBn59V7aMMcykSS629Gm1u9pXL5xm+1+G5WKBHFRvGPB9L5z1hsPBaDiinyOHmoyDcQ1SI7kXakEkj+Bh4OlWcig6ghZRWZl3YYxaQ2w7jBmDZVcsQdW1mrMUrkAUM3VN15Jb4V9xE7T8DmQqGsuDwJ9krBH2oBCMY3mGwnkvnBpEW6+y/RSUC14Tl9fs7/z+9zJePa/5w7d+eHxdPPVrZT8/cuX0BAPaflp1Bs/s+EP8ye94+phE6/K+NP5KsGNq6+PVG94qMjIRZPU2dasl2BLNC4lRdWwprl5iYxqJhpDx9loD8washCxcXL2SoNoZKHLUhdxUyKKrNGDO71JS/KkGqkdY4Y517YnUKW0woU/Qf4je9Ce/ZFt1SZRsyIYVMnZ2mK1tJEg6g8eF80PzukotkxrMm0Zbjdc2jmDQSqegeU4VFKapEDOX388qxhz0joIWi3demC2jd0rO9Q4P5OWXKj8bqPMGzMQC+NcPekzwPAdVZ9lD064W9lX12UwRGNCysAZfYIqKchHONGqSnCRSqQtU5Cx2WQtnN+1uhTmBPvVrOCmFtBARd6ZKmnYuaDswiaq6WkjOQRMzuNUcXNcEJw9JjDy7mCj/acfBaUbQcAXMwBoXRK9l5Fvv0N2W7YwKv3smzufT2EPm52J60tz2Pi18+zfbdkpVZ2ezPpz/Aw==nZTbjtowEIafJlJ7sVIOsKW3pLRbCbSVoN3V3qxMMsQWju06DoF9+o4Tm5CAtmqvYn/5Z8aHfxwkaXn8pomiK5kDD+IwPwbJlyCOo0k8wY8lp47MwlkHCs1yJ+rBmr2Bg6GjNcuhGgiNlNwwNYSZFAIyM2BEa9kMZTvJh1UVKeAKrDPCr+kTyw11u5iGPX8AVlBfOQrdn5J4sQMVJblsLlCyCJJUS2m6UXlMgdvD8+eyme5f1a4U+2hJl2/Ln9ly9XzXJfv6LyHnLWgQ5r9T/1Y/tpvFcfUkXn89fyoeX2bw4ELCA+G1O68gvudYZE6x6n1hR6kUhjABGoXojxpP1mm22ms8wfJ9oIfKgw1llT1ZlyRsZxoqJUXFti0y0noNiF15w/AGWmd09at366oxy9lhjG4G3tDdQDUfE8480bXIWscIdFNpLykOPzym3z++W7ePv4DXZcYrbK1oTt7fBo7tdZmSI4hwWBkt95BKLjUSIQUq5zvG+QhVimRMFAim/WwjFYI7NHgybygzsEZuSzX4PCCTB9A73rYBZXkOwm5f1iIHa7PwvEKU4cpGLfgX/0bnpsLXCGQJRp8wzmWZOK+ehtOmb+rks2P0sqETB4l7SIpz5r5ZcOD6xU/7tm7/XTyOyeIPrZTBTuMwEIafJkekNCksXBtYWLEcVkUsN2TiaWxhe4LjNC1Pv+PEbppUAq3EKfbn8fzj0T9J8kLvbi2rxQNyUEmW8l2
SXydZtlhmS/p4sh/IZXo5gMpKHoJGsJYfEGAaaCs5NJNAh6icrKewRGOgdBPGrMVuGrZBNVWtWQUnYF0ydUr/Su5EeMV5OvI7kJWIyos0nGgWgwNoBOPYHaH8JskLi+iGld4VoHzzYl9+81/ZD/P+/PwOrHiBpsCr+7Mh2c//uXJ4ggXjvjd1NqTeMtWGfiXZhSKRlSDVi8qvnh4ogozRUkvD4auNh5GQ7ngjwjqCRyEb39KQJO13FpoaTSNfe+TQmwyYL7mT1HqqSlrX9kCzUkhDNvpMv54zLrdz1Ko5UTKSP6BbSnPfv1czQ87QvuGfaY63j+CpCMFJMb2d3D561MGub7nTisCClo2z+AYFKrREDBqKXG2kUjPU1KyUpiJwPu4esSZwRibNV52QDtbEvVRHI04Mt2A3qreykJyDIWaxNRy8VdJDhRRGlc3G6AsPLg6DQX8UQA3O7uleyLIMo7SfbrtxMPOrwMTxUOYBsvAzqA6ZR8PTIng+bsfR7M+OfnD5zT8= \ No newline at end of file diff --git a/collections/developers/internals/zos/assets/0-OS-upgrade.png b/collections/developers/internals/zos/assets/0-OS-upgrade.png new file mode 100644 index 0000000..f82c801 Binary files /dev/null and b/collections/developers/internals/zos/assets/0-OS-upgrade.png differ diff --git a/collections/developers/internals/zos/assets/0-OS-upgrade.wsd b/collections/developers/internals/zos/assets/0-OS-upgrade.wsd new file mode 100644 index 0000000..26ef653 --- /dev/null +++ b/collections/developers/internals/zos/assets/0-OS-upgrade.wsd @@ -0,0 +1,12 @@ +@startuml +start +:power on node; +repeat +:mount boot flist; +:copy files to node root; +:reconfigure services; +:restart services; +repeat while (new flist version?) 
is (yes) + -> power off; +stop +@enduml \ No newline at end of file diff --git a/collections/developers/internals/zos/assets/0-OS_v2_architecture.png b/collections/developers/internals/zos/assets/0-OS_v2_architecture.png new file mode 100644 index 0000000..d4fdd98 Binary files /dev/null and b/collections/developers/internals/zos/assets/0-OS_v2_architecture.png differ diff --git a/collections/developers/internals/zos/assets/Container_module_flow.png b/collections/developers/internals/zos/assets/Container_module_flow.png new file mode 100644 index 0000000..95175ae Binary files /dev/null and b/collections/developers/internals/zos/assets/Container_module_flow.png differ diff --git a/collections/developers/internals/zos/assets/boot_sequence.plantuml b/collections/developers/internals/zos/assets/boot_sequence.plantuml new file mode 100644 index 0000000..ac5a663 --- /dev/null +++ b/collections/developers/internals/zos/assets/boot_sequence.plantuml @@ -0,0 +1,50 @@ +@startuml + +package "node-ready"{ + [local-modprobe] + [udev-trigger] + [redis] + [haveged] + [cgroup] + [redis] +} + +package "boot" { + [storaged] + [internet] + [networkd] + [identityd] +} + +package "internal modules"{ + [flistd] + [containerd] + [contd] + [upgraded] + [provisiond] +} + +[local-modprobe]<-- [udev-trigger] +[udev-trigger] <-- [storaged] +[udev-trigger] <-- [internet] +[storaged] <-- [identityd] + +[identityd] <- [networkd] + +[internet] <-- [networkd] +[networkd] <-- [containerd] +[storaged] <-- [containerd] + +[containerd] <-- [contd] + +[storaged] <-- [flistd] +[networkd] <-- [flistd] + +[flistd] <-- [upgraded] +[networkd] <-- [upgraded] + +[networkd] <-- [provisiond] +[flistd] <-- [provisiond] +[contd] <-- [provisiond] + +@enduml diff --git a/collections/developers/internals/zos/assets/boot_sequence.png b/collections/developers/internals/zos/assets/boot_sequence.png new file mode 100644 index 0000000..9a1fc5e Binary files /dev/null and 
b/collections/developers/internals/zos/assets/boot_sequence.png differ diff --git a/collections/developers/internals/zos/assets/grid_provisioning.png b/collections/developers/internals/zos/assets/grid_provisioning.png new file mode 100644 index 0000000..3cbb45e Binary files /dev/null and b/collections/developers/internals/zos/assets/grid_provisioning.png differ diff --git a/collections/developers/internals/zos/assets/grid_provisioning.wsd b/collections/developers/internals/zos/assets/grid_provisioning.wsd new file mode 100644 index 0000000..48931fc --- /dev/null +++ b/collections/developers/internals/zos/assets/grid_provisioning.wsd @@ -0,0 +1,37 @@ +@startuml +title Provisioning of a resource space + +autonumber +actor User as user +' entity Farmer as farmer +entity Network as network +database Blockchain as bc +boundary Node as node +collections "Resource space" as rs + +== Resource research == +user -> network: Send resource request +activate network +network -> node: broadcast resource request +activate node +deactivate network +...broadcast to all nodes... +node -> user: Send offer +user -> user: inspect offer + +== Resource space negotiation == +user -> node: accept offer +user <-> node: key exchange +user -> bc: money is locked on blockchain +... 
+node -> rs: create resource space
+activate rs
+node -> user: notify space is created
+node -> bc: notify he created the space
+user -> rs: make sure it can access the space
+user -> bc: validate can access the space
+bc -> node: money is released to the node
+deactivate node
+== Usage of the space ==
+user -> rs: deploy workload
+@enduml
\ No newline at end of file
diff --git a/collections/developers/internals/zos/assets/grid_provisioning2.png b/collections/developers/internals/zos/assets/grid_provisioning2.png
new file mode 100644
index 0000000..2150407
Binary files /dev/null and b/collections/developers/internals/zos/assets/grid_provisioning2.png differ
diff --git a/collections/developers/internals/zos/assets/grid_provisioning2.wsd b/collections/developers/internals/zos/assets/grid_provisioning2.wsd
new file mode 100644
index 0000000..76370a3
--- /dev/null
+++ b/collections/developers/internals/zos/assets/grid_provisioning2.wsd
@@ -0,0 +1,42 @@
+@startuml
+title Provisioning a workload on the TFGrid
+
+autonumber
+actor "User" as user
+actor "Farmer" as farmer
+database "TF Explorer" as explorer
+database Blockchain as blockchain
+boundary Node as node
+
+== Price definition ==
+farmer -> explorer: Farmer sets the price of its Resource units
+== Resource research ==
+activate explorer
+user -> explorer: User looks where to deploy the workload
+user <- explorer: Gives details about the farmer owning the selected node
+== Resource reservation ==
+user -> explorer: write description of the workload
+explorer -> user: return a list of transactions to execute on the blockchain
+== Reservation processing ==
+user -> blockchain: execute transactions
+explorer <-> blockchain: verify transactions are done
+explorer -> explorer: reservation status changed to `deploy`
+== Resource provisioning ==
+node <-> explorer: read description of the workloads
+node -> node: provision workload
+alt provision successful
+    node -> explorer: write result of the provisioning
+    explorer ->
blockchain: forward token to the farmer
+    blockchain -> farmer: tokens are available to the farmer
+    user <- explorer: read the connection information to his workload
+else provision error
+    node -> explorer: write result of the provisioning
+    explorer -> explorer: cancel reservation
+    node -> node: free up capacity
+    explorer -> blockchain: token refunded to user
+    blockchain <-> user: tokens are available to the user again
+end
+deactivate explorer
+== Resource monitoring ==
+user <-> node: use / monitor workload
+@enduml
\ No newline at end of file
diff --git a/collections/developers/internals/zos/assets/ipc.plantuml b/collections/developers/internals/zos/assets/ipc.plantuml
new file mode 100644
index 0000000..20bb31d
--- /dev/null
+++ b/collections/developers/internals/zos/assets/ipc.plantuml
@@ -0,0 +1,20 @@
+@startuml
+
+== Initialization ==
+Module -> MsgBroker: Announce Module
+MsgBroker -> Module: create bi-directional channel
+
+== Utilisation ==
+loop
+    DSL -> MsgBroker: put RPC message
+    activate MsgBroker
+    Module <- MsgBroker: pull RPC message
+    activate Module
+    Module -> Module: execute method
+    Module -> MsgBroker: put response
+    deactivate Module
+    MsgBroker -> DSL : read response
+    deactivate MsgBroker
+end
+
+@enduml
\ No newline at end of file
diff --git a/collections/developers/internals/zos/assets/ipc.png b/collections/developers/internals/zos/assets/ipc.png
new file mode 100644
index 0000000..9cc940b
Binary files /dev/null and b/collections/developers/internals/zos/assets/ipc.png differ
diff --git a/collections/developers/internals/zos/assets/market.png b/collections/developers/internals/zos/assets/market.png
new file mode 100644
index 0000000..801bdf6
Binary files /dev/null and b/collections/developers/internals/zos/assets/market.png differ
diff --git a/collections/developers/internals/zos/assets/market.wsd b/collections/developers/internals/zos/assets/market.wsd
new file mode 100644
index 0000000..cd49bd7
--- /dev/null
+++
b/collections/developers/internals/zos/assets/market.wsd @@ -0,0 +1,22 @@ +@startuml +actor User as user +box "To Be Defined" #LightBlue + participant Market +end box +entity Farmer as farmer +boundary Node as node + +user -> farmer: Request space +activate farmer +farmer -> node: reserve space +activate node +farmer -> user: confirmation +deactivate farmer +... +note over user, node: communication allows only owner of space +user -> node: deploy services +... +user -> farmer: destroy space +farmer -> node: delete space +deactivate node +@enduml \ No newline at end of file diff --git a/collections/developers/internals/zos/development/README.md b/collections/developers/internals/zos/development/README.md new file mode 100644 index 0000000..63b7034 --- /dev/null +++ b/collections/developers/internals/zos/development/README.md @@ -0,0 +1,6 @@ +Development +=========== + +* [Quick start](./quickstart.md) +* [Testing](./testing.md) +* [Binary packages](./packages.md) \ No newline at end of file diff --git a/collections/developers/internals/zos/development/net.sh b/collections/developers/internals/zos/development/net.sh new file mode 100755 index 0000000..ffca7f7 --- /dev/null +++ b/collections/developers/internals/zos/development/net.sh @@ -0,0 +1,30 @@ +#!/bin/bash + +# This is the same as the first case at qemu/README.md in a single script + +sudo ip link add zos0 type bridge +sudo ip link set zos0 up + +sudo ip addr add 192.168.123.1/24 dev zos0 +md5=$(echo $USER| md5sum ) +ULA=${md5:0:2}:${md5:2:4}:${md5:6:4} +sudo ip addr add fd${ULA}::1/64 dev zos0 +# you might want to add fe80::1/64 +sudo ip addr add fe80::1/64 dev zos0 + +sudo iptables -t nat -I POSTROUTING -s 192.168.123.0/24 -j MASQUERADE +sudo ip6tables -t nat -I POSTROUTING -s fd${ULA}::/64 -j MASQUERADE +sudo iptables -t filter -I FORWARD --source 192.168.123.0/24 -j ACCEPT +sudo iptables -t filter -I FORWARD --destination 192.168.123.0/24 -j ACCEPT +sudo sysctl -w net.ipv4.ip_forward=1 + +sudo dnsmasq 
--strict-order \
+    --except-interface=lo \
+    --interface=zos0 \
+    --bind-interfaces \
+    --dhcp-range=192.168.123.20,192.168.123.50 \
+    --dhcp-range=::1000,::1fff,constructor:zos0,ra-stateless,12h \
+    --conf-file="" \
+    --pid-file=/var/run/qemu-dnsmasq-zos0.pid \
+    --dhcp-leasefile=/var/run/qemu-dnsmasq-zos0.leases \
+    --dhcp-no-override
diff --git a/collections/developers/internals/zos/development/packages.md b/collections/developers/internals/zos/development/packages.md
new file mode 100644
index 0000000..f7391f6
--- /dev/null
+++ b/collections/developers/internals/zos/development/packages.md
@@ -0,0 +1,61 @@
+# Adding a new package
+
+Binary packages are added by providing [a build script](../../bins/); an automated workflow will then build and publish an flist with this binary.
+
+For example, to add the `rmb` binary, we need to provide a bash script with a `build_rmb` function:
+
+
+```bash
+RMB_VERSION="0.1.2"
+RMB_CHECKSUM="4fefd664f261523b348fc48e9f1c980b"
+RMB_LINK="https://github.com/threefoldtech/rmb-rs/releases/download/v${RMB_VERSION}/rmb"
+
+download_rmb() {
+    echo "download rmb"
+    download_file ${RMB_LINK} ${RMB_CHECKSUM} rmb
+}
+
+prepare_rmb() {
+    echo "[+] prepare rmb"
+    github_name "rmb-${RMB_VERSION}"
+}
+
+install_rmb() {
+    echo "[+] install rmb"
+
+    mkdir -p "${ROOTDIR}/bin"
+
+    cp ${DISTDIR}/rmb ${ROOTDIR}/bin/
+    chmod +x ${ROOTDIR}/bin/*
+}
+
+build_rmb() {
+    pushd "${DISTDIR}"
+
+    download_rmb
+    popd
+
+    prepare_rmb
+    install_rmb
+}
+```
+
+Note that you can just download a statically built binary instead of building it.
+
+
+The other step is to add it to the workflow so it is built automatically; in the [bins workflow](../../.github/workflows/bins.yaml), add your binary's job:
+
+```yaml
+jobs:
+  containerd:
+    ...
+    ...
+  rmb:
+    uses: ./.github/workflows/bin-package.yaml
+    with:
+      package: rmb
+    secrets:
+      token: ${{ secrets.HUB_JWT }}
+```
+
+Once e.g. a `devnet` release is published, your package will be built then pushed to an flist repository.
After that, you can start your local zos node, wait for it to finish downloading, then you should find your binary available.
\ No newline at end of file
diff --git a/collections/developers/internals/zos/development/quickstart.md b/collections/developers/internals/zos/development/quickstart.md
new file mode 100644
index 0000000..0563b61
--- /dev/null
+++ b/collections/developers/internals/zos/development/quickstart.md
@@ -0,0 +1,70 @@
+# Quick start
+
+- [Quick start](#quick-start)
+  - [Starting a local zos node](#starting-a-local-zos-node)
+  - [Accessing node](#accessing-node)
+  - [Development](#development)
+
+## Starting a local zos node
+
+* Make sure `qemu` and `dnsmasq` are installed
+* [Create a farm](../manual/manual.md#creating-a-farm)
+* [Download a zos image](https://bootstrap.grid.tf/kernel/zero-os-development-zos-v3-generic-7e587e499a.efi)
+* Make sure the `zos0` bridge is allowed by qemu: you can add `allow zos0` in `/etc/qemu/bridge.conf` (create the file if it's not there)
+* Set up the network using [this script](./net.sh)
+
+Then, inside the zos repository:
+
+```
+make -C cmds
+cd qemu
+mv <path-to-downloaded-image> ./zos.efi
+sudo ./vm.sh -n myzos-01 -c "farmer_id=<your-farm-id> printk.devmsg=on runmode=dev"
+```
+
+You should see the qemu console and boot logs. Wait for a while, then you can [browse farms](https://dashboard.dev.grid.tf/explorer/farms) to see that your node is added/detected automatically.
+
+To stop the machine you can do `Control + a` then `x`.
+
+You can read more about setting up a qemu development environment and more network options [here](../../qemu/README.md).
+
+## Accessing node
+
+After booting up, the node should start downloading external packages; this will take some time depending on your internet connection.
+
+See [how to ssh into it.](../../qemu/README.md#to-ssh-into-the-machine)
+
+How to get the node IP?
+Given the network script's `dhcp-range`, it usually would be one of `192.168.123.43`, `192.168.123.44` or `192.168.123.45`.
+
+Or you can simply install `arp-scan` and then do something like:
+
+```
+✗ sudo arp-scan --interface=zos0 --localnet
+Interface: zos0, type: EN10MB, MAC: de:26:45:e6:87:95, IPv4: 192.168.123.1
+Starting arp-scan 1.9.7 with 256 hosts (https://github.com/royhills/arp-scan)
+192.168.123.44	54:43:83:1f:eb:81	(Unknown)
+```
+
+Now we know for sure it's `192.168.123.44`.
+
+To check logs and see if the downloading of packages is still in progress, you can simply do:
+
+```
+zinit log
+```
+
+## Development
+
+While the overlay enables you to boot with binaries that have been built locally, sometimes you'll need to test changes to certain modules without restarting the node (or when intending to do so, e.g. for testing a migration).
+
+For example, if we changed anything related to `noded`, we can do the following:
+
+Inside the zos repository:
+
+* Build binaries locally
+  * `make -C cmds`
+* Copy the binary inside the machine
+  * `scp bin/zos root@192.168.123.44:/bin/noded`
+* SSH into the machine then use `zinit` to restart it:
+  * `zinit stop noded && zinit start noded`
diff --git a/collections/developers/internals/zos/development/testing.md b/collections/developers/internals/zos/development/testing.md
new file mode 100644
index 0000000..8f023ec
--- /dev/null
+++ b/collections/developers/internals/zos/development/testing.md
@@ -0,0 +1,157 @@
+# Testing
+
+Besides unit testing, you might want to test your changes in an integrated environment; the following are two options for doing so.
+
+- [Testing](#testing)
+  - [Using grid/node client](#using-gridnode-client)
+  - [Using a test app](#using-a-test-app)
+    - [An example to talk to container and qsfs modules](#an-example-to-talk-to-container-and-qsfs-modules)
+    - [An example of directly using zinit package](#an-example-of-directly-using-zinit-package)
+
+
+## Using grid/node client
+
+You can simply use any grid client to deploy a workload of any type. You should specify your node's twin ID (and make sure you are on the correct network).
+
+Inside the node, you can run `noded -id` and `noded -net` to get your current node ID and network. Also, [you can check your farm](https://dashboard.dev.grid.tf/explorer/farms) and get node information from there.
+
+Another option is the golang [node client](../manual/manual.md#interaction).
+
+While deploying on your local node, logs from `zinit log` are helpful for spotting any possible errors and debugging your code.
+
+## Using a test app
+
+If you need to test a specific module or functionality, you can create a simple test app inside e.g. the [tools directory](../../tools/).
+
+Inside this simple test app, you can import any module or talk to another one using [zbus](../internals/internals.md#ipc).
+ +### An example to talk to container and qsfs modules + + +```go +// tools/del/main.go + +package main + +import ( + "context" + "flag" + "strings" + "time" + + "github.com/rs/zerolog" + "github.com/rs/zerolog/log" + + "github.com/threefoldtech/zbus" + "github.com/threefoldtech/zos/pkg" + "github.com/threefoldtech/zos/pkg/stubs" +) + +func main() { + zerolog.SetGlobalLevel(zerolog.DebugLevel) + + zbus, err := zbus.NewRedisClient("unix:///var/run/redis.sock") + if err != nil { + log.Err(err).Msg("cannot init zbus client") + return + } + + var workloadType, workloadID string + + flag.StringVar(&workloadType, "type", "", "workload type (qsfs or container)") + flag.StringVar(&workloadID, "id", "", "workload ID") + + flag.Parse() + + if workloadType == "" || workloadID == "" { + log.Error().Msg("you need to provide both type and id") + return + } + + ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) + defer cancel() + + if workloadType == "qsfs" { + qsfsd := stubs.NewQSFSDStub(zbus) + err := qsfsd.SignalDelete(ctx, workloadID) + if err != nil { + log.Err(err).Msg("cannot delete qsfs workload") + } + } else if workloadType == "container" { + args := strings.Split(workloadID, ":") + if len(args) != 2 { + log.Error().Msg("container id must contain namespace, e.g. 
qsfs:wl129")
+			return
+		}
+
+		containerd := stubs.NewContainerModuleStub(zbus)
+		err := containerd.SignalDelete(ctx, args[0], pkg.ContainerID(args[1]))
+		if err != nil {
+			log.Err(err).Msg("cannot delete container workload")
+		}
+	}
+
+}
+```
+
+Then we can simply build, upload, and execute this on our node:
+
+```
+cd tools/del
+go build
+scp del root@192.168.123.44:/root/del
+```
+
+Then ssh into `192.168.123.44` and simply execute your test app:
+
+```
+./del
+```
+
+### An example of directly using zinit package
+
+```go
+// tools/zinit_test
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+	"regexp"
+
+	"github.com/rs/zerolog"
+	"github.com/rs/zerolog/log"
+
+	"github.com/threefoldtech/zos/pkg/zinit"
+)
+
+func main() {
+	zerolog.SetGlobalLevel(zerolog.DebugLevel)
+	z := zinit.New("/var/run/zinit.sock")
+
+	regex := fmt.Sprintf(`^ip netns exec %s %s`, "ndmz", "/sbin/udhcpc")
+	_, err := regexp.Compile(regex)
+	if err != nil {
+		log.Err(err).Msgf("cannot compile %s", regex)
+		return
+	}
+
+	// try match
+	matched, err := z.Matches(zinit.WithExecRegex(regex))
+	if err != nil {
+		log.Err(err).Msg("cannot filter services")
+	}
+
+	matchedStr, err := json.Marshal(matched)
+	if err != nil {
+		log.Err(err).Msg("cannot convert matched map to json")
+	}
+
+	log.Debug().Str("matched", string(matchedStr)).Msg("matched services")
+
+	// // try destroy
+	// err = z.Destroy(10*time.Second, matched...)
+	// if err != nil {
+	//	log.Err(err).Msg("cannot destroy matched services")
+	// }
+}
+```
\ No newline at end of file
diff --git a/collections/developers/internals/zos/faq/readme.md b/collections/developers/internals/zos/faq/readme.md
new file mode 100644
index 0000000..f241686
--- /dev/null
+++ b/collections/developers/internals/zos/faq/readme.md
@@ -0,0 +1,6 @@
+# FAQ
+
+This section consolidates all the common questions we get about how 0-OS works and how to operate it.
+
+- **Q**: What is the preferred configuration for my raid controller when running 0-OS?
  **A**: The goal of 0-OS is to expose raw capacity, so it is best to give it the most direct access to the disks possible. With RAID controllers, the best option is to configure [JBOD](https://en.wikipedia.org/wiki/Non-RAID_drive_architectures#JBOD) mode if available.
diff --git a/collections/developers/internals/zos/internals/boot.md b/collections/developers/internals/zos/internals/boot.md
new file mode 100644
index 0000000..bb98c8a
--- /dev/null
+++ b/collections/developers/internals/zos/internals/boot.md
@@ -0,0 +1,11 @@
# Services Boot Sequence

Here is the dependency graph of all the services started by 0-OS:

![boot sequence](../assets/boot_sequence.png)

## Pseudo boot steps

Both `node-ready` and `boot` are not actual services; instead, they define a `boot stage`. For example, once the `node-ready` service is (ready), it means all crucial system services defined by 0-initramfs are running.

The `boot` service is similar, but guarantees that certain 0-OS services (for example `storaged`) are running before starting other services like `flistd`, which requires `storaged`.
diff --git a/collections/developers/internals/zos/internals/capacity.md b/collections/developers/internals/zos/internals/capacity.md
new file mode 100644
index 0000000..04d1137
--- /dev/null
+++ b/collections/developers/internals/zos/internals/capacity.md
@@ -0,0 +1,89 @@
# Capacity

## Table of Contents
- [Introduction](#introduction)
- [System reserved capacity](#system-reserved-capacity)
  - [Reserved Memory](#reserved-memory)
  - [Reserved Storage](#reserved-storage)
- [User Capacity](#user-capacity)
  - [Memory](#memory)
  - [Storage](#storage)

***

## Introduction

This document describes how ZOS performs the following tasks:

- Reserving system resources
  - Memory
  - Storage
- Calculating the free usable capacity for user workloads

## System reserved capacity

ZOS always reserves some amount of the available physical resources for its own operation. The system tries to be as protective as possible of its critical services, to make sure that the node stays reachable and usable even under heavy load.

ZOS reserves Memory and Storage (but not CPU) as follows:

### Reserved Memory

ZOS reserves 10% of the available system memory for basic services and operation overhead. The operation overhead can happen as a side effect of running user workloads. For example, a user network, while in theory it consumes no memory, does in fact consume some (kernel buffers, etc.). The same holds for a VM: a user VM can be assigned, say, 5G, but the process running the VM will take a few extra megabytes to operate.

This is why we decided to play it safe and reserve 10% of total system memory for system overhead, with a **minimum** reserved memory of 2GB (note the `max`, which enforces the minimum):

```python
reserved = max(total_in_gb * 0.1, 2)  # in GB
```

### Reserved Storage

While ZOS does not require installation, it needs to download and store several things to operate correctly. These include the following:

- Node identity: information about the node ID and keys.
- The system binaries: everything zos needs to join the grid and operate as expected.
- Workload flists: the flists of the user workloads. These are downloaded on demand, so they don't always exist.
- State information.
  Tracking information maintained by ZOS about the state of workloads, ownership, and more.

This is why, on first start, the system allocates and reserves part of the available SSD storage, called `zos-cache`. It is initially `5G` (it was 100G in older versions), but because of the `dynamic` nature of the cache we can't fix it at `5G`.

The space the system needs to reserve can change dramatically based on the workloads running on it. For example, if many users are running many different VMs, the system will need to download (and cache) different VM images, hence requiring more cache.

This is why the system periodically checks the reserved storage and dynamically expands or shrinks it to a more suitable value, in increments of 5G. Expansion happens around 20% of the current cache size, and shrinking if it went below 20%.

## User Capacity

All workloads require some sort of resource(s) to run, and that is what the user actually pays for. A workload can consume resources of the following kinds:

- CU (compute unit in vCPU)
- MU (memory unit in bytes)
- NU (network unit in bytes)
- SU (ssd storage in bytes)
- HU (hdd storage in bytes)

A workload, based on its type, can consume one or more of those resource types. Some workloads have a well-known "size" on creation; others are dynamic and won't be known until later.

For example, the SU consumption of a disk workload is known ahead of time, unlike the NU used by a network, which is only known after usage over a certain period of time.

A single deployment can have multiple workloads, each requiring a certain amount of one or more of the capacity types listed above. For each workload, ZOS computes the amount of resources needed and then checks whether it can provide that amount of capacity.
> This means that a deployment that defines 2 VMs can partially succeed: one of the VMs deploys but not the other, if the resources it requested exceed what the node can provide.

### Memory

The system decides whether there is enough memory to run a workload that demands MU resources as follows:

- Compute the "theoretically used" memory by all user workloads, excluding `self`. This is the sum of the MU units consumed by all active workloads (as defined by their corresponding deployments, not as actually used on the system).
- The theoretically used memory is topped up with the system reserved memory.
- The system then checks the actually used memory on the system; this is done simply with `actual_used = memory.total - memory.available`.
- The system can now simply `assume` an accurate used memory with `used = max(actual_used, theoretically_used)`.
- Then `available = total - used`.
- Finally, it checks that the `available` memory is enough to hold the requested workload memory.

### Storage

Storage is much simpler to allocate than memory. It is left entirely to the storage subsystem to find out whether it can fit the requested storage on the available physical disks; if not, the workload is marked as an error.

Storage tries to find the requested space based on type (SU or HU), then finds the optimal way to fit it on the available disks, or spins up a new one if needed.
diff --git a/collections/developers/internals/zos/internals/compatibility/readme.md b/collections/developers/internals/zos/internals/compatibility/readme.md
new file mode 100644
index 0000000..1eb4394
--- /dev/null
+++ b/collections/developers/internals/zos/internals/compatibility/readme.md
@@ -0,0 +1,14 @@
# Compatibility list

This document tracks all the hardware that has been tested, the issues encountered, and possible workarounds.
**Legend**

✅ : fully supported
⚠️ : supported with some tweaking
🛑 : not supported

| Vendor | Hardware | Support | Issues | Workaround |
| --- | --- | --- | --- | --- |
| Supermicro | SYS-5038ML-H8TRF | ✅ | | |
| Gigabyte Technology Co | AB350N-Gaming WIFI | ✅ | | |
diff --git a/collections/developers/internals/zos/internals/container/readme.md b/collections/developers/internals/zos/internals/container/readme.md
new file mode 100644
index 0000000..6d5d2e2
--- /dev/null
+++ b/collections/developers/internals/zos/internals/container/readme.md
@@ -0,0 +1,106 @@
# Container Module

## Table of Contents
- [ZBus](#zbus)
- [Home Directory](#home-directory)
- [Introduction](#introduction)
  - [zinit unit](#zinit-unit)
- [Interface](#interface)

***

## ZBus

The container module is available on zbus over the following channel:

| module | object | version |
|--------|--------|---------|
| container | [container](#interface) | 0.0.1 |

## Home Directory

contd keeps some data in the following locations:

| directory | path |
|----|---|
| root | `/var/cache/modules/containerd` |

## Introduction

The container module is a proxy to [containerd](https://github.com/containerd/containerd). The proxy provides integration with zbus.

The implementation is at the moment straightforward: it includes preparing the OCI spec for the container and the tenant containerd namespace, setting up the proper capabilities, and finally creating the container instance on `containerd`.

The module is fully stateless; all container information is queried at runtime from `containerd`.

### zinit unit

`contd` must run after containerd is running, and the node boot process is complete.
Since it doesn't keep state, no dependency on `storaged` is needed.

```yaml
exec: contd -broker unix:///var/run/redis.sock -root /var/cache/modules/containerd
after:
  - containerd
  - boot
```

## Interface

```go
package pkg

// ContainerID type
type ContainerID string

// NetworkInfo defines a network configuration for a container
type NetworkInfo struct {
	// Currently a container can only join one (and only one)
	// network namespace, which has to be pre-defined on the node
	// for the container tenant.

	// Containers don't need to know anything about bridges,
	// IPs, or wireguard, since all of that is only known by the network
	// resource, which is out of the scope of this module.
	Namespace string
}

// MountInfo defines a mount point
type MountInfo struct {
	Source  string   // source of the mount point on the host
	Target  string   // target of mount inside the container
	Type    string   // mount type
	Options []string // mount options
}

// Container creation info
type Container struct {
	// Name of container
	Name string
	// RootFS is the path to the rootfs of the container
	RootFS string
	// Env variables for the container, in the format {'KEY=VALUE', 'KEY2=VALUE2'}
	Env []string
	// Network info for the container
	Network NetworkInfo
	// Mounts are extra mounts for the container
	Mounts []MountInfo
	// Entrypoint is the process to start inside the container
	Entrypoint string
	// Interactive enables Core X as PID 1 in the container
	Interactive bool
}

// ContainerModule defines the rpc interface to containerd
type ContainerModule interface {
	// Run creates and starts a container on the node. It also auto-starts
	// the command defined by `entrypoint` inside the container.
	// ns: tenant namespace
	// data: Container info
	Run(ns string, data Container) (ContainerID, error)

	// Inspect returns information about the container, given its container id
	Inspect(ns string, id ContainerID) (Container, error)
	Delete(ns string, id ContainerID) error
}
```
diff --git a/collections/developers/internals/zos/internals/flist/readme.md b/collections/developers/internals/zos/internals/flist/readme.md
new file mode 100644
index 0000000..46f1076
--- /dev/null
+++ b/collections/developers/internals/zos/internals/flist/readme.md
@@ -0,0 +1,74 @@
# Flist Module

## Table of Contents
- [Zbus](#zbus)
- [Home Directory](#home-directory)
- [Introduction](#introduction)
- [Public interface](#public-interface)
- [zinit unit](#zinit-unit)

***

## Zbus

The flist module is available on zbus over the following channel:

| module | object | version |
|--------|--------|---------|
| flist | [flist](#public-interface) | 0.0.1 |

## Home Directory

flist keeps some data in the following locations:

| directory | path |
|----|---|
| root | `/var/cache/modules/containerd` |

## Introduction

This module is responsible for "mounting an flist" in the filesystem of the node. The mounted directory contains all the files required by containers or (in the future) VMs.

The flist module interface is very simple. It does not expose any way to choose where to mount the flist, nor does it hold any reference to containers or VMs. Its only functionality is to mount a given flist and return the location where it is mounted. It is up to the layer above to do something useful with this information.

The flist module itself doesn't contain the logic to understand the flist format or to run the fuse filesystem. It is just a wrapper that manages [0-fs](https://github.com/threefoldtech/0-fs) processes.

Its only job is to download the flist, prepare the isolation of all the data, and then start 0-fs with the proper arguments.

## Public interface [![GoDoc](https://godoc.org/github.com/threefoldtech/zos/pkg/flist?status.svg)](https://godoc.org/github.com/threefoldtech/zos/pkg/flist)

```go
// Flister is the interface for the flist module
type Flister interface {
	// Mount mounts an flist located at url using the 0-db located at storage
	// in RO mode. Note that there is no way you can unmount a RO flist, because
	// it can be shared by many users; it's then up to the system to decide if the
	// mount is no longer needed and clean it up.
	Mount(name, url string, opt MountOptions) (path string, err error)

	// UpdateMountSize changes the mount size
	UpdateMountSize(name string, limit gridtypes.Unit) (path string, err error)

	// Unmount a RW mount. This only unmounts the RW layer and removes the assigned
	// volume.
	Unmount(name string) error

	// HashFromRootPath returns the flist hash of a running g8ufs mounted with NamedMount
	HashFromRootPath(name string) (string, error)

	// FlistHash returns the md5 of the flist if available (requesting the hub)
	FlistHash(url string) (string, error)

	Exists(name string) (bool, error)
}
```

## zinit unit

The zinit unit file of the module specifies the command line, the test command, and the order in which the services need to be booted.

The flist module depends on the storage and network pkgs: it needs connectivity to download flists and data, and it needs storage to be able to cache the data once downloaded.

Flist doesn't do anything special on the system, except creating a bunch of directories it will use during its lifetime.
diff --git a/collections/developers/internals/zos/internals/gateway/readme.md b/collections/developers/internals/zos/internals/gateway/readme.md
new file mode 100644
index 0000000..45f5035
--- /dev/null
+++ b/collections/developers/internals/zos/internals/gateway/readme.md
@@ -0,0 +1,121 @@
# Gateway Module

## ZBus

The gateway module is available on zbus over the following channel:

| module | object | version |
| ------- | --------------------- | ------- |
| gateway | [gateway](#interface) | 0.0.1 |

## Home Directory

gateway keeps some data in the following locations:

| directory | path |
| --------- | ---------------------------- |
| root | `/var/cache/modules/gateway` |

The directory `/var/cache/modules/gateway/proxy` contains the route information used by traefik to forward traffic.

## Introduction

The gateway module is used to register traefik routes and services to act as a reverse proxy. It's the backend supporting two kinds of workloads: `gateway-fqdn-proxy` and `gateway-name-proxy`.

For the FQDN type, it receives the domain and a list of backends in the form `http://ip:port` or `https://ip:port` and registers a route for this domain, forwarding traffic to these backends. It's a requirement that the domain resolves to the gateway public IP. The `tls_passthrough` parameter determines whether TLS termination happens on the gateway or in the backends. When it's true, the backends must be in the form `https://ip:port` and must be https-enabled servers.

The name type is the same as the FQDN type, except that the `name` parameter is added as a prefix to the gateway domain to determine the FQDN. It's forbidden to use an FQDN type workload to reserve a domain managed by the gateway.

The FQDN type is enabled only if there's a public config on the node. The name type works only if a domain exists in the public config.
To make a full-fledged gateway node, these DNS records are required:

```
gatewaydomain.com                  A      ip.of.the.gateway
*.gatewaydomain.com                CNAME  gatewaydomain.com
_acme-challenge.gatewaydomain.com  NS     gatewaydomain.com
```

### zinit unit

```yaml
exec: gateway --broker unix:///var/run/redis.sock --root /var/cache/modules/gateway
after:
  - boot
```

## Implementation details

Traefik is used as the reverse proxy forwarding traffic to upstream servers. All workloads deployed on the node are associated with a domain that resolves to the node IP. In the name workload case, it's a subdomain of the gateway main domain. In the FQDN case, the user must create a DNS A record pointing it to the node IP. By default, the node redirects all http traffic to https.

When an https request reaches the node, it looks at the domain and determines the correct service that should handle the request. The service definitions are in `/var/cache/modules/gateway/proxy/` and are hot-reloaded by traefik every time a service is added to or removed from it. Zos currently supports enabling `tls_passthrough`, in which case the https request is passed as-is to the backend (at the TCP level). The default is `tls_passthrough: false`, which means the node terminates the TLS traffic and then forwards the request as http to the backend.
Example of an FQDN service definition with tls_passthrough enabled:

```yaml
tcp:
  routers:
    37-2039-testname-route:
      rule: HostSNI(`remote.omar.grid.tf`)
      service: 37-2039-testname
      tls:
        passthrough: "true"
  services:
    37-2039-testname:
      loadbalancer:
        servers:
          - address: 137.184.106.152:443
```

Example of a "name" service definition with tls_passthrough disabled:

```yaml
http:
  routers:
    37-1976-workloadname-route:
      rule: Host(`workloadname.gent01.dev.grid.tf`)
      service: 40-1976-workloadname
      tls:
        certResolver: dnsresolver
        domains:
          - sans:
              - '*.gent01.dev.grid.tf'
  services:
    40-1976-workloadname:
      loadbalancer:
        servers:
          - url: http://[backendip]:9000
```

The `certResolver` option has two valid values, `resolver` and `dnsresolver`. The `resolver` is an http resolver, used in FQDN services with `tls_passthrough` disabled; it uses the http challenge to generate a single-domain certificate. The `dnsresolver` is used for name services with `tls_passthrough` disabled, and is responsible for generating a wildcard certificate to be used for all subdomains of the gateway domain. Its flow is described below.

The CNAME record makes all subdomains (reserved or not) resolve to the IP of the gateway. Generating a wildcard certificate requires adding a TXT record at `_acme-challenge.gatewaydomain.com`. The NS record delegates this specific subdomain to the node, so if someone does `dig TXT _acme-challenge.gatewaydomain.com`, the query is served by the node, not by the DNS provider used for the gateway domain.

Traefik has, as a config parameter, multiple DNS [providers](https://doc.traefik.io/traefik/https/acme/#providers) to communicate with when it wants to add the required TXT record. For non-supported providers, a bash script can be provided to do the record generation and cleanup (i.e. the external program provider). The bash [script](https://github.com/threefoldtech/zos/blob/main/pkg/gateway/static/cert.sh) starts dnsmasq managing a DNS zone for the `_acme-challenge` subdomain with the given TXT record. It then kills the dnsmasq process and removes the config file during cleanup.

## Interface

```go
type Backend string

// GatewayFQDNProxy definition. This proxies fqdn to backends.
type GatewayFQDNProxy struct {
	// FQDN is the fully qualified domain name to use (cannot be present with Name)
	FQDN string `json:"fqdn"`

	// TLSPassthrough is whether to pass TLS traffic through or not
	TLSPassthrough bool `json:"tls_passthrough"`

	// Backends is a list of backend ips
	Backends []Backend `json:"backends"`
}


// GatewayNameProxy definition. This proxies name.<gateway-domain> to backends.
type GatewayNameProxy struct {
	// Name is the prefix added to the gateway domain to build the FQDN
	Name string `json:"name"`

	// TLSPassthrough is whether to pass TLS traffic through or not
	TLSPassthrough bool `json:"tls_passthrough"`

	// Backends is a list of backend ips
	Backends []Backend `json:"backends"`
}

type Gateway interface {
	SetNamedProxy(wlID string, prefix string, backends []string, TLSPassthrough bool) (string, error)
	SetFQDNProxy(wlID string, fqdn string, backends []string, TLSPassthrough bool) error
	DeleteNamedProxy(wlID string) error
	Metrics() (GatewayMetrics, error)
}
```
diff --git a/collections/developers/internals/zos/internals/history/readme.md b/collections/developers/internals/zos/internals/history/readme.md
new file mode 100644
index 0000000..e4c023a
--- /dev/null
+++ b/collections/developers/internals/zos/internals/history/readme.md
@@ -0,0 +1,99 @@
# 0-OS, a bit of history and introduction to Version 2

## Once upon a time
----
A few years ago, we were trying to come up with solutions to the problem of self-healing IT.
We boldly stated that the current model of cloud computing, in huge data centers, is not going to be able to scale to fit the demand in IT capacity.

The approach we took to solve this problem was to enable localized compute and storage units at the edge of the network, close to where they are needed.
That basically meant that if we were to deploy physical hardware to the edges, nearby the users, we would have to allow information providers to deploy their solutions on that edge network and hardware. That also means sharing hardware resources between users, where we would have to make damn sure no one can peek at things that are not theirs.

When we talk about sharing capacity in a secure environment, virtualization comes to mind. It's not a new technology and it has been around for quite some time. This solution comes with a cost, though. Virtual machines, emulating a full hardware platform on real hardware, are costly in terms of used resources, and eat away at the already scarce resources we want to provide for our users.

Containerization technologies were starting to get some hype at the time. Containers provide basically the same level of isolation as full virtualization, but are a lot less expensive in terms of resource utilization.

With that in mind, we started designing the first version of 0-OS. The required features were:

- be fully in control of the hardware
- give different users the possibility to share the same hardware
- deploy this capacity at the edge, close to where it is needed
- self-heal: because of their location and sheer scale, manual maintenance was not an option. Self-healing is a broad topic, and will require a lot of experience and fine-tuning, but it was meant to culminate at some point so that most of the actions that sysadmins execute would be automated.
- have as small an attack surface as possible, both against remote types of attack and for protecting users from each other

That thought process resulted in 0-OS v1: a Linux kernel with the minimal components on top that allow to provide these features.

In the first incarnation of 0-OS, the core framework was a single big binary that got started as the first process of the system (PID 1). All the management features were exposed through an API that was only accessible locally.

The idea was to have an orchestration system running on top that was going to be responsible for deploying Virtual Machines and Containers on the system using that API.

This API exposes 3 main primitives:

- networking: zerotier, vlan, macvlan, bridge, openvswitch...
- storage: plain disk, 0-db, ...
- compute: VM, containers

That was all great, and it allowed us to learn a lot. But some limitations started to appear. Here is a non-exhaustive list of the limitations we had to face after a couple of years of utilization:

- Difficulty to push new versions and fixes to the nodes. The fact that 0-OS was a single process running as PID 1 forced us to completely reboot the node every time we wanted to push an update.
- The API, while powerful, still required some logic on top to actually deploy usable solutions.
- We noticed that some features we implemented were never or extremely rarely used. This was just increasing the possible attack surface for no real benefit.
- The main networking solution we chose at the time, zerotier, was not scaling as well as we hoped.
- We wrote a lot of code ourselves instead of relying on existing open source libraries that would have made that task a lot easier. These libraries were also a lot more mature, and have had a lot more exposure for ironing out possible bugs and vulnerabilities than we could have created and tested ourselves with the little resources we have at hand.

## Now what?
With the knowledge and lessons gathered during these first years of usage, we concluded that trying to fix the already existing codebase would be cumbersome, and we also wanted to avoid any technical debt that could haunt us for years after. So we decided on a complete rewrite of that stack, taking a new and fully modular approach, where every component could be easily replaced and upgraded without the need for a reboot.

Hence Version 2 saw the light of day.

Instead of trial and error, and muddling along trying to fit new features into that big monolithic codebase, we wanted to be sure that the components were reduced to a more manageable size, having a clearly cut domain separation.

Instead of creating solutions waiting for a problem, we started looking at things the other way around. Which is logical, as by now we had learned what the real puzzles to solve were, albeit sometimes by painful experience.

## Tadaa!
----
The [first commit](https://github.com/threefoldtech/zosv2/commit/7b783c888673d1e9bc400e4abbb17272e995f5a4) of the v2 repository took place on the 11th of February 2019.
We are now 6 months in, and about to bake the first release of 0-OS v2.
Clocking in at almost 27KLoc, it was a very busy half-year. (Admitted, the specs and docs are in that count too ;-) )

Let's go over the main design decisions that were made and briefly explain each component.

While this is just an introduction, we'll add more articles digging deeper into the technicalities and approaches of each component.

## Solutions to puzzles (there are no problems)
----
**UPDATES**

One of the first puzzles we wanted to solve was the difficulty of pushing upgrades.
To solve that, we designed the 0-OS components as completely stand-alone modules. Each subsystem, be it storage, networking, or containers/VMs, is managed by its own component (mostly a daemon), and they communicate with each other through a local bus.
As we said, each component can then be upgraded separately, together with any necessary data migrations that could be required.

**WHAT API?**

The second big change is our approach to the API, or better, the lack thereof.
In V2 we dropped the idea of exposing the primitives of the node over an API.
Instead, all the required knowledge to deploy workloads is directly embedded in 0-OS.
So in order to have the node deploy a workload, we have created a blueprint-like system where the user describes what their requirements are in terms of compute power, storage, and networking, and the node applies that blueprint to make it reality.
That approach has a few advantages:

- It greatly reduces the attack surface of the node, because there is no more direct interaction between a user and a node.
- It also allows us to have greater control over how things are organized in the node itself. The node, being its own boss, can decide to reorganize itself whenever needed to optimize the capacity it can provide.
- Having a blueprint with requirements gives the grid the possibility to verify that blueprint on multiple levels before applying it. That is: on the top level as well as on the node level, a blueprint can be verified for validity and signatures before any other action is executed.

**PING**

The last major change is how we want to handle networking.
The solution used during the lifetime of V1 exposed its limitations when we started scaling our networks to hundreds of nodes.
So here again we started from scratch and created our own overlay network solution.
That solution is based on the 'new kid on the block' in terms of VPN, [WireGuard](https://wireguard.io), and its approach and usage will be fully explained in the next 0-OS article.
For the eager ones among you, there are some specifications and also some documentation [here](https://github.com/threefoldtech/zosv2/tree/master/docs/network) and [there](https://github.com/threefoldtech/zosv2/tree/master/specs/network).

## That's All, Folks (for now)
So much for this little article as an intro to the brave new world of 0-OS.
The Zero-OS team commits itself to regularly keep you updated on its progress, the new features that will surely be added, and, for the so inclined, to add a lot more content for techies on how to actually use that novel beast.

[Till next time](https://youtu.be/b9434BoGkNQ)
diff --git a/collections/developers/internals/zos/internals/identity/identity.md b/collections/developers/internals/zos/internals/identity/identity.md
new file mode 100644
index 0000000..9ed7400
--- /dev/null
+++ b/collections/developers/internals/zos/internals/identity/identity.md
@@ -0,0 +1,143 @@
# Node ID Generation

## Table of Contents
- [Introduction](#introduction)
- [ZBus](#zbus)
- [Home Directory](#home-directory)
- [Introduction](#introduction-1)
- [On Node Booting](#on-node-booting)
- [ID generation](#id-generation)
- [Cryptography](#cryptography)
  - [zinit unit](#zinit-unit)
- [Interface](#interface)

***

## Introduction

We explain the node ID generation process.

## ZBus

The identity module is available on zbus over the following channel:

| module | object | version |
|--------|--------|---------|
| identity | [manager](#interface) | 0.0.1 |

## Home Directory

identity keeps some data in the following locations:

| directory | path |
|----|---|
| root | `/var/cache/modules/identity` |

## Introduction

The identity manager is responsible for maintaining the node identity (public key). The manager makes sure the node has one valid ID during the entire lifetime of the node. It also provides services to sign, encrypt, and decrypt data using the node identity.

On first boot, the identity manager generates an ID and then persists this ID for life.

Since the identity daemon is the only one that can access the node private key, it provides an interface to sign, verify, and encrypt data. These methods are available for other modules on the local node to use.

## On Node Booting

- Check if the node already has a seed generated
- If yes, load the node identity
- If not, generate a new ID
- Start the zbus daemon

## ID generation

At this stage of development, the ID generated by identityd is the base58-encoded public key of an ed25519 key pair.

The key pair itself is generated from a random seed of 32 bytes. It is this seed that is actually saved on the node, and during boot the key pair is re-generated from this seed if it exists.

## Cryptography

The signing and encryption capabilities of the identity module rely on this ed25519 key pair.

For signing, the key pair is used directly.
+For public key encryption, the ed25519 key pair is converted to its cure25519 equivalent and then use use to encrypt the data. + +### zinit unit + +The zinit unit file of the module specify the command line, test command, and the order where the services need to be booted. + +`identityd` require `storaged` to make sure the seed is persisted over reboots, to make sure node has the same ID during the full life time of the node. +The identityd daemon is only considered running if the seed file exists. + +```yaml +exec: /bin/identityd +test: test -e /var/cache/modules/identity/seed.txt +after: + - storaged +``` + +## Interface + +For an up to date interface please check code [here](https://github.com/threefoldtech/zos/blob/main/pkg/identity.go) +```go +package pkg + +// Identifier is the interface that defines +// how an object can be used as an identity +type Identifier interface { + Identity() string +} + +// StrIdentifier is a helper type that implement the Identifier interface +// on top of simple string +type StrIdentifier string + +// Identity implements the Identifier interface +func (s StrIdentifier) Identity() string { + return string(s) +} + +// IdentityManager interface. +type IdentityManager interface { + // NodeID returns the node id (public key) + NodeID() StrIdentifier + + // NodeIDNumeric returns the node registered ID. + NodeIDNumeric() (uint32, error) + + // FarmID return the farm id this node is part of. this is usually a configuration + // that the node is booted with. An error is returned if the farmer id is not configured + FarmID() (FarmID, error) + + // Farm returns name of the farm. Or error + Farm() (string, error) + + //FarmSecret get the farm secret as defined in the boot params + FarmSecret() (string, error) + + // Sign signs the message with privateKey and returns a signature. + Sign(message []byte) ([]byte, error) + + // Verify reports whether sig is a valid signature of message by publicKey. 
+ Verify(message, sig []byte) error + + // Encrypt encrypts message with the public key of the node + Encrypt(message []byte) ([]byte, error) + + // Decrypt decrypts message with the private key of the node + Decrypt(message []byte) ([]byte, error) + + // EncryptECDH aes encrypts msg using a shared key derived from the private key of the node and the public key of the other party using the Elliptic curve Diffie-Hellman algorithm + // the nonce is prepended to the encrypted message + EncryptECDH(msg []byte, publicKey []byte) ([]byte, error) + + // DecryptECDH decrypts an aes encrypted msg using a shared key derived from the private key of the node and the public key of the other party using the Elliptic curve Diffie-Hellman algorithm + DecryptECDH(msg []byte, publicKey []byte) ([]byte, error) + + // PrivateKey returns the private key of the node + PrivateKey() []byte +} + +// FarmID is the identification of a farm +type FarmID uint32 +``` diff --git a/collections/developers/internals/zos/internals/identity/readme.md b/collections/developers/internals/zos/internals/identity/readme.md new file mode 100644 index 0000000..1bff097 --- /dev/null +++ b/collections/developers/internals/zos/internals/identity/readme.md @@ -0,0 +1,8 @@ +

# Identity Module

+ +The identity daemon is responsible for two major operations that are crucial for node operation. + +

## Table of Contents

+ +- [Node ID Generation](identity.md) +- [Node Live Software Update](upgrade.md) diff --git a/collections/developers/internals/zos/internals/identity/upgrade.md b/collections/developers/internals/zos/internals/identity/upgrade.md new file mode 100644 index 0000000..e486a53 --- /dev/null +++ b/collections/developers/internals/zos/internals/identity/upgrade.md @@ -0,0 +1,98 @@ +

# Node Upgrade

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Philosophy](#philosophy) +- [Booting a new node](#booting-a-new-node) +- [Runtime upgrade of a node](#runtime-upgrade-of-a-node) +- [Technical](#technical) + - [Flist layout](#flist-layout) + +*** + +## Introduction + +We provide information concerning node upgrades in ZOS. We also explain the philosophy behind ZOS. + +## Philosophy + +0-OS is meant to be a black box no one can access. While this provides some nice security features, it also makes it harder to manage, especially when it comes to updates/upgrades. + +Hence, zos only trusts a few sources for upgrade packages. When the node boots up it checks the sources for the latest release and makes sure all the local binaries are up-to-date before continuing the booting. The flist source must be rock-solid secured; that's another topic for different documentation. + +The run mode defines which flist the node is going to use to boot. The run mode can be specified by passing `runmode=` to the kernel boot params. Currently we have these run modes: + +- dev: ephemeral network only set up to develop and test new features. Can be created and reset at any time. +- test: Mostly stable features that need to be tested at scale; allows preview and testing of new features. Always the latest and greatest. This network can be reset sometimes, but should be relatively stable. +- prod: Releases of the stable version. Used to run the real grid with real money. Cannot be reset ever. Only stable and battle-tested features reach this level. + +## Booting a new node + +The base image for zos contains a very small subset of tools, plus the boot program. Standing alone, the image is not really useful. On boot and +after initial start of the system, the boot program kicks in and does the following: + +- Detect the boot flist that the node must use to fully start. The default is hard-coded into zos, but this can be overridden by the `flist=` kernel param.
The `flist=` kernel param may be deprecated without warning, since it's a development flag. +- The bootstrap will then mount this flist using 0-fs; this of course requires a working connection to the internet, hence bootstrap is configured to wait for the `internet` service. +- The flist information (name, and version) is saved under `/tmp/flist.name` and `/tmp/flist.info`. +- The bootstrap makes sure to copy all files in the flist to the proper locations under the system rootfs; this includes `zinit` config files. +- Then zinit is asked to monitor the newly installed services; zinit takes care of those services and makes sure they are properly working at all times. +- Bootstrap unmounts the flist and cleans up before it exits. +- The boot process continues. + +## Runtime upgrade of a node + +Once the node is up and running, identityd takes over and does the following: + +- It loads the boot info files `/tmp/flist.name` and `/tmp/flist.info` +- If the `flist.name` file does **not** exist, `identityd` will assume the node was booted by means other than an flist (for example an overlay). In that case, identityd will log this, and disable live upgrades of the node. +- If the `flist.name` file exists, the flist will be monitored on `https://hub.grid.tf` for changes. Any change in the version will initiate a live upgrade routine. +- Once an flist change is detected, identityd will mount the flist and make sure identityd is running the latest version. If not, identityd will update itself first before continuing. +- Services that need updating will be gracefully stopped. +- `identityd` will then make sure to update all services and config files from the flist, and restart the services properly. +- Services are started again after all binaries have been copied. + +## Technical + +0-OS is designed to provide maximum uptime for its workloads; rebooting a node should never be required to upgrade any of its components (except when we push a kernel upgrade).
+ +![flow](../../assets/0-OS-upgrade.png) + +### Flist layout + +The files in the upgrade flist need to be located in the filesystem tree at the same destination they would need to be in 0-OS. This allows the upgrade code to stay simple: it only copies files from the flist to the root filesystem of the node. + +Booting a new node, or updating a node, uses the same flist. Hence, a boot flist must contain all required services for node operation. + +Example: + +0-OS filesystem: + +``` +/etc/zinit/identityd.yaml +/etc/zinit/networkd.yaml +/etc/zinit/contd.yaml +/etc/zinit/init/node-ready.sh +/etc/zinit/init +/etc/zinit/redis.yaml +/etc/zinit/storaged.yaml +/etc/zinit/flistd.yaml +/etc/zinit/readme.md +/etc/zinit/internet.yaml +/etc/zinit/containerd.yaml +/etc/zinit/boot.yaml +/etc/zinit/provisiond.yaml +/etc/zinit/node-ready.yaml +/etc/zinit +/etc +/bin/zlf +/bin/provisiond +/bin/flistd +/bin/identityd +/bin/contd +/bin/capacityd +/bin/storaged +/bin/networkd +/bin/internet +/bin +``` diff --git a/collections/developers/internals/zos/internals/internals.md b/collections/developers/internals/zos/internals/internals.md new file mode 100644 index 0000000..ac7dfe8 --- /dev/null +++ b/collections/developers/internals/zos/internals/internals.md @@ -0,0 +1,88 @@ +

# Internal Modules

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Booting](#booting) +- [Bootstrap](#bootstrap) +- [Zinit](#zinit) +- [Architecture](#architecture) + - [IPC](#ipc) +- [ZOS Processes (modules)](#zos-processes-modules) +- [Capacity](#capacity) + +*** + +## Introduction + +This document explains in a nutshell the internals of ZOS. This includes the boot process, the architecture, the internal modules (and their responsibilities), and the inter-process communication. + +## Booting + +ZOS is a linux based operating system in the sense that we use the mainstream linux kernel with no code modifications (but a heavily customized configuration). The base image of ZOS includes linux, busybox, [zinit](https://github.com/threefoldtech/zinit) and other required tools that are needed during the boot process. The base image also ships with a bootstrap utility, self-updating on boot, which kick-starts everything. + +For more details about the ZOS base image please check [0-initramfs](https://github.com/threefoldtech/0-initramfs). + +`ZOS` uses zinit as its `init` or `PID 1` process. `zinit` acts as a process manager and takes care of starting all required services in the right order, using simple configuration files available under `/etc/zinit`. + +The base `ZOS` image has a zinit config to start the basic services that are required for booting. These include (mainly) but are not limited to: + +- internet: A very basic service that tries to connect zos to the internet as fast (and as simply) as possible (over ethernet) using dhcp. This is needed so the system can continue the boot process. Once this one succeeds, it exits and leaves node network management to the more sophisticated ZOS module `networkd`, which is yet to be downloaded and started by bootstrap. +- redis: This is required by all zos modules for IPC (inter-process communication). +- bootstrap: The bootstrap process which takes care of downloading all required zos binaries and modules.
This one requires the `internet` service to actually succeed. + +## Bootstrap + +`bootstrap` is a utility that resides on the base image. It takes care of downloading and configuring all zos main services by doing the following: + +- It checks if there is a more recent version of itself available. If it exists, the process first updates itself before proceeding. +- It checks zos boot parameters (for example, which network you are booting into) as set by . +- Once the network is known, let's call it `${network}`. This can either be `production`, `testing`, or `development`. The proper release is downloaded as follows: + - All flists are downloaded from one of the [hub](https://hub.grid.tf/) `tf-zos-v3-bins.dev`, `tf-zos-v3-bins.test`, or `tf-zos-v3-bins` repos. Based on the network, only one of those repos is used to download all the support tools and binaries. Those are not included in the base image because they can be updated, added, or removed. + - The flist `https://hub.grid.tf/tf-zos/zos:${network}-3:latest.flist.md` is downloaded (note that ${network} is replaced with the actual value). This flist includes all zos services from this repository. More information about the zos modules is given later. + - Once all binaries are downloaded, `bootstrap` finishes by asking zinit to start monitoring the newly installed services. The bootstrap exits and will never be started again as long as zos is running. + - If zos is restarted, the entire bootstrap process happens again, including downloading the binaries, because ZOS is completely stateless (except for some cached runtime data that is preserved across reboots on a cache disk). + +## Zinit + +As mentioned earlier, `zinit` is the process manager of zos. Bootstrap makes sure it registers all zos services for zinit to monitor. This means that zinit will take care that those services are always running, and restart them if they have crashed for any reason.
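As an illustration of such monitoring, a zinit unit for a hypothetical service could look like this (the fields mirror the identityd unit shown in the identity docs; the service name and paths here are made up):

```yaml
# /etc/zinit/exampled.yaml (hypothetical service)
exec: /bin/exampled                   # command line zinit runs and keeps alive
test: test -e /var/run/exampled.sock  # service counts as running once this succeeds
after:
  - redis                             # start only after its dependencies are up
```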
+ +## Architecture + +For `ZOS` to be able to run workloads of different types, it has split its functionality into smaller modules, where each module is responsible for providing a single piece of functionality. For example, `storaged` manages the machine's storage and can hence provide low-level storage capacity to other services that need it. + +As an example, imagine that you want to start a `virtual machine`. For a `virtual machine` to be able to run, it requires a `rootfs` image or the image of the VM itself; this is normally provided via an `flist` (managed by `flistd`). Then you would need actual persistent storage (managed by `storaged`), a virtual nic (managed by `networkd`), and another service that can put everything together in the form of a VM (`vmd`). Then finally a service that orchestrates all of this and translates the user request into an actual workload: `provisiond`. You get the picture. + +### IPC + +All modules running in zos need to be able to interact with each other, as the previous example shows. For example, the `provision` daemon needs to be able to ask the `storage` daemon to prepare a virtual disk. A new `inter-process communication` protocol and library was developed to enable this, with these extra features: + +- Modules do not need to know where other modules live; there are no ports and/or urls that have to be known by all services. +- A single module can run multiple versions of an API. +- Ease of development. +- Auto-generated clients. + +For more details about the message bus please check [zbus](https://github.com/threefoldtech/zbus). + +`zbus` uses redis as a message bus, hence redis is started in the early stages of zos booting. + +`zbus` allows auto-generation of `stubs`, which are generated clients against a certain module interface. Hence a module X can interact with a module Y by importing the generated clients and then making function calls. + +## ZOS Processes (modules) + +Modules of zos are completely internal.
There is no way for an external user to talk to them directly. The idea is that the node exposes a public API over rmb, while internally this API talks to the internal modules over `zbus`. + +Here is a list of the major ZOS modules. + +- [Identity](identity/index.md) +- [Node](node/index.md) +- [Storage](storage/index.md) +- [Network](network/index.md) +- [Flist](flist/index.md) +- [Container](container/index.md) +- [VM](vmd/index.md) +- [Provision](provision/index.md) + +## Capacity + +In [this document](./capacity.md), you can find a detailed description of how ZOS does capacity planning. diff --git a/collections/developers/internals/zos/internals/macdev/readme.md b/collections/developers/internals/zos/internals/macdev/readme.md new file mode 100644 index 0000000..1a9d4b6 --- /dev/null +++ b/collections/developers/internals/zos/internals/macdev/readme.md @@ -0,0 +1,57 @@ +> Note: This is unmaintained; use it at your own risk + +# MacOS Developer + +0-OS (v2) uses a Linux kernel and is really built with a linux environment in mind. +As a developer working from a MacOS environment you will have trouble running the 0-OS code. + +Using [Docker][docker] you can work from a Linux development environment, hosted from your MacOS Host machine. +In this README we'll do exactly that using the standard Ubuntu [Docker][docker] container as our base. + +## Setup + +0. Make sure to have Docker installed and configured (also make sure you have your code folder path shared in your Docker preferences). +1. Start an _Ubuntu_ Docker container with your shared code directory mounted as a volume: +```bash +docker run -ti -v "$HOME/oss":/oss ubuntu /bin/bash +``` +2. Make sure your environment is updated and upgraded using `apt-get`. +3. 
Install Go (`1.13`) from src using the following link or the one you find on [the downloads page](https://golang.org/dl/): +```bash +wget https://dl.google.com/go/go1.13.3.linux-amd64.tar.gz +sudo tar -xvf go1.13.3.linux-amd64.tar.gz +sudo mv go /usr/local +``` +4. Add the following to your `$HOME/.bashrc` and `source` it: +```bash +export GOROOT=/usr/local/go +export GOPATH=$HOME/go +export PATH=$GOPATH/bin:$GOROOT/bin:$PATH +``` +5. Confirm you have Go installed correctly: +``` +go version && go env +``` +6. Go to your `zos` code `pkg` directory hosted from your MacOS development machine within your docker `/bin/bash`: +```bash +cd /oss/github.com/threefoldtech/zos/pkg +``` +7. Install the dependencies for testing: +```bash +make getdeps +``` +8. Run tests and verify all works as expected: +```bash +make test +``` +9. Build `zos`: +```bash +make build +``` + +If you can successfully do step (8) and step (9) you +can now contribute to `zos` as a MacOS developer. +Testing and compiling you'll do from within your container's shell; +coding you can do from your beloved IDE on your MacOS development environment. + +[docker]: https://www.docker.com diff --git a/collections/developers/internals/zos/internals/network/Deploy_Network-V2.md b/collections/developers/internals/zos/internals/network/Deploy_Network-V2.md new file mode 100644 index 0000000..59a04d9 --- /dev/null +++ b/collections/developers/internals/zos/internals/network/Deploy_Network-V2.md @@ -0,0 +1,74 @@ +# 0-OS v2 and its network setup + +## Introduction + +0-OS nodes participating in the Threefold grid need connectivity, of course.
They need to be able to communicate over +the Internet with each other in order to do various things: + +- download its OS modules +- perform OS module upgrades +- register itself to the grid, and send regular updates about its status +- query the grid for tasks to execute +- build and run the Overlay Network +- download flists and the effective files to cache + +The nodes themselves can have connectivity in a few different ways: + +- Only have RFC1918 private addresses, connected to the Internet through NAT, NO IPv6. + Mostly, these are single-NIC (Network card) machines that can host some workloads through the Overlay Network, but + can't expose services directly. These are HIDDEN nodes, and are mostly booted with a USB stick from + bootstrap.grid.tf. +- Dual-stacked: having RFC1918 private IPv4 and public IPv6, where the IPv6 addresses are received from a home router, +but firewalled for outgoing traffic only. These nodes are effectively also HIDDEN. +- Nodes with 2 NICs, one effectively connected to a segment that has real public +addresses (IPv4 and/or IPv6) and one NIC that is used for booting and local +management (OOB), like in the drawing for the farmer setup. + +For Farmers, we need Nodes to be reachable over IPv6, so that the nodes can: + +- expose services to be proxied into containers/vms +- act as aggregating nodes for Overlay Networks for HIDDEN Nodes + +Some Nodes in Farms should also have a publicly reachable IPv4, to make sure that clients that only have IPv4 can +effectively reach exposed services. + +But we need to stress the importance of IPv6 availability when you're running a multi-node farm in a datacentre: as the +grid is boldly claiming to be a new Internet, we should make sure we adhere to the new protocols that are future-proof. +Hence: IPv6 is the base, and IPv4 is just there to accommodate the transition.
+ +Nowadays, RIPE can't even hand out consecutive /22 IPv4 blocks any more for new LIRs, so you'll be bound to the market to +get IPv4, mostly at rates of 10-15 Euro per IP. Things tend to get costly that way. + +So anyway, IPv6 is not an afterthought in 0-OS; we're starting with it. + +## Network setup for farmers + +This is a quick manual on what is needed for connecting a node with zero-OS V2.0. + +### Step 1. Testing for IPv6 availability in your location +As described above, the network in which the node is installed has to be IPv6 enabled. This is not an afterthought: as we are building a new internet, it has to be based on the new and forward-looking IP addressing scheme. This is something you have to investigate and negotiate with your connectivity provider. Many (but not all) home connectivity products and certainly most datacenters can provide you with IPv6. There are many sources of information on how to test and check whether your connection is IPv6 enabled; [here is a starting point](http://www.ipv6enabled.org/ipv6_enabled/ipv6_enable.php) + +### Step 2. Choosing your setup for connecting your nodes + +Once you have established that you have IPv6 enabled on the network you are about to deploy, you have to make sure that there is an IPv6 DHCP facility available. Zero-OS does not work with static IPv6 addresses (at this point in time). So you have to choose and create one of the following setups: + +#### 2.1 Home setup + +Use your (home) ISP router IPv6 DHCP capabilities to provide (private) IPv6 addresses. The principle will work the same as for IPv4 home connections; everything is enabled by Network Address Translation (just like anything else that uses internet connectivity). This should be relatively straightforward if you have established that your connection has IPv6 enabled. + +#### 2.2 Datacenter / Expert setup + +In this situation there are many options on how to set up your node.
This requires you, as the expert, to make a few decisions on how to connect your nodes and what the best setup is that you can support for the operational lifetime of your farm. The same basic principles apply: + - You have to have a block of (public) IPv6 routed to your router, or you have to have your router set up to provide Network Address Translation (NAT) + - You have to have a DHCP server in your network that manages and controls IPv6 address leases. Depending on your specific setup, you have this DHCP server manage a public IPv6 range, which makes all nodes directly connected to the public internet, or you have this DHCP server manage a private block of IPv6 addresses, which makes all your nodes connect to the internet through NAT. + +As a farmer you are in charge of selecting and creating the appropriate network setup for your farm. + +## General notes + +The above setup will allow your node(s) to appear in the explorer on the TF Grid and will allow you to earn farming tokens. As stated in the introduction, ThreeFold is creating next generation internet capacity and therefore has IPv6 as its base building block. Connecting to the current (dominant) IPv4 network happens for IT workloads through so-called web gateways. As the word says, these are gateways that provide connectivity between the currently leading IPv4 addressing scheme and IPv6. + +We have started a forum where people share their experiences and configurations. This will be work in progress and forever growing. + +**IMPORTANT**: You as a farmer do not need access to IPv4 to be able to rent capacity for IT workloads that need to be visible on IPv4; this is something that can happen elsewhere on the TF Grid.
+ diff --git a/collections/developers/internals/zos/internals/network/HIDDEN-PUBLIC.dia b/collections/developers/internals/zos/internals/network/HIDDEN-PUBLIC.dia new file mode 100644 index 0000000..139cffa Binary files /dev/null and b/collections/developers/internals/zos/internals/network/HIDDEN-PUBLIC.dia differ diff --git a/collections/developers/internals/zos/internals/network/HIDDEN-PUBLIC.png b/collections/developers/internals/zos/internals/network/HIDDEN-PUBLIC.png new file mode 100644 index 0000000..72fbe35 Binary files /dev/null and b/collections/developers/internals/zos/internals/network/HIDDEN-PUBLIC.png differ diff --git a/collections/developers/internals/zos/internals/network/NR_layout.dia b/collections/developers/internals/zos/internals/network/NR_layout.dia new file mode 100644 index 0000000..a9f59e2 Binary files /dev/null and b/collections/developers/internals/zos/internals/network/NR_layout.dia differ diff --git a/collections/developers/internals/zos/internals/network/NR_layout.png b/collections/developers/internals/zos/internals/network/NR_layout.png new file mode 100644 index 0000000..2336642 Binary files /dev/null and b/collections/developers/internals/zos/internals/network/NR_layout.png differ diff --git a/collections/developers/internals/zos/internals/network/Network-V2.md b/collections/developers/internals/zos/internals/network/Network-V2.md new file mode 100644 index 0000000..59e16ae --- /dev/null +++ b/collections/developers/internals/zos/internals/network/Network-V2.md @@ -0,0 +1,315 @@ +# 0-OS v2 and its network + +## Introduction + +0-OS nodes participating in the Threefold grid need connectivity, of course.
They need to be able to communicate over +the Internet with each other in order to do various things: + +- download its OS modules +- perform OS module upgrades +- register itself to the grid, and send regular updates about its status +- query the grid for tasks to execute +- build and run the Overlay Network +- download flists and the effective files to cache + +The nodes themselves can have connectivity in a few different ways: + +- Only have RFC1918 private addresses, connected to the Internet through NAT, NO IPv6. + Mostly, these are single-NIC (Network card) machines that can host some workloads through the Overlay Network, but + can't expose services directly. These are HIDDEN nodes, and are mostly booted with a USB stick from + bootstrap.grid.tf. +- Dual-stacked: having RFC1918 private IPv4 and public IPv6, where the IPv6 addresses are received from a home router, +but firewalled for outgoing traffic only. These nodes are effectively also HIDDEN. +- Nodes with 2 NICs, one effectively connected to a segment that has real public +addresses (IPv4 and/or IPv6) and one NIC that is used for booting and local +management (OOB), like in the drawing for the farmer setup. + +For Farmers, we need Nodes to be reachable over IPv6, so that the nodes can: + +- expose services to be proxied into containers/vms +- act as aggregating nodes for Overlay Networks for HIDDEN Nodes + +Some Nodes in Farms should also have a publicly reachable IPv4, to make sure that clients that only have IPv4 can +effectively reach exposed services. + +But we need to stress the importance of IPv6 availability when you're running a multi-node farm in a datacentre: as the +grid is boldly claiming to be a new Internet, we should make sure we adhere to the new protocols that are future-proof. +Hence: IPv6 is the base, and IPv4 is just there to accommodate the transition.
+ +Nowadays, RIPE can't even hand out consecutive /22 IPv4 blocks any more for new LIRs, so you'll be bound to the market to +get IPv4, mostly at rates of 10-15 Euro per IP. Things tend to get costly that way. + +So anyway, IPv6 is not an afterthought in 0-OS; we're starting with it. + +## Physical setup for farmers + +```text + XXXXX XXX + XX XXX XXXXX XXX + X X XXX + X X + X INTERNET X + XXX X X + XXXXX XX XX XXXX + +X XXXX XX XXXXX + | + | + | + | + | + +------+--------+ + | FIREWALL/ | + | ROUTER | + +--+----------+-+ + | | + +-----------+----+ +-+--------------+ + | switch/ | | switch/ | + | vlan segment | | vlan segment | + +-+---------+----+ +---+------------+ + | | | ++-------+-------+ |OOB | PUBLIC +| PXE / dhcp | | | +| Server | | | ++---------------+ | | + | | + +-----+------------+----------+ + | | + | +--+ + | | | + | NODES | +--+ + +--+--------------------------+ | | + | | | + +--+--------------------------+ | + | | + +-----------------------------+ +``` + +The PXE/dhcp can also be done by the firewall; your mileage may vary. + +## Switch and firewall configs + +Single switch, multiple switches, it all boils down to the same: + +- one port is an access port on an OOB vlan/segment +- one port is connected to a public vlan/segment + +The farmer makes sure that every node properly receives an IPv4 address in the OOB segment through means of dhcp, so +that with a PXE config or USB, a node can effectively start its boot process: + +- Download kernel and initrd +- Download and mount the system flists so that the 0-OS daemons can start +- Register itself on the grid +- Query the grid for tasks to execute + +For the PUBLIC side of the Nodes, there are a few things to consider: + +- It's the farmer's job to inform the grid what node gets an IP address, be it IPv4 or IPv6.
+- Nodes that don't receive an IPv4 address will connect to the IPv4 net through the NATed OOB network +- A farmer is responsible for providing an IPv6 prefix on at least one segment, and for having a Router Advertisement daemon +running to provide SLAAC addressing on that segment. +- That IPv6 Prefix on the public segment should not be firewalled, as it's impossible to know in your firewall what +ports will get exposed for the proxies. + +The Nodes themselves have nothing listening that points into the host OS itself, and are by themselves also firewalled. +In dev mode, there is an ssh server with a key-only login, accessible by a select few ;-) + +## DHCP/Radvd/RA/DHCP6 + +For home networks, there is not much to do: a Node will get a private (RFC1918) IPv4 address, and most probably an +IPv6 address in a /64 prefix, but is not reachable over IPv6, unless the firewall is disabled for IPv6. As we can't +rely on the fact that that is possible, we assume these nodes to be HIDDEN. + +A normal self-respecting firewall or IP-capable switch can hand out IP[46] addresses; some can +even bootp/tftp to get nodes booted over the network. +We are (full of hope) assuming that you would have such a beast to configure and split your network +into multiple segments. +A segment is a physical network separation. That can be port-based vlans, or even separate switches, whatever rocks your +boat; the keyword here is **separate**. + +On both segments you will need a way to hand out IPv4 addresses based on MAC addresses of the nodes. Yes, there is some +administration to do, but it's a one-off, and really necessary, because you really need to know which physical machine +has which IP. For lights-out management and location of machines that is a must. + +So you'll need a list of mac addresses to add to your dhcp server for IPv4, to make sure you know which machine has +received what IPv4 address.
+That is necessary for 2 things: + +- locate the node if something is amiss, like being able to pinpoint a node's disk in case it broke (which it will) +- have the node be reachable all the time, without the need to update the grid and network configs every time the node +boots. + +## What happens under the hood (farmer) + +While we did our utmost best to keep IPv4 address needs to a strict minimum, at least one Node will need an IPv4 address for handling everything that is Overlay Networks. +For Containers to reach the Internet, any type of connectivity will do, be it NAT or through an internal DMZ that has a +routable IPv4 address. + +Internally, a lot of things are being set up to have a node properly participate in the grid, as well as to be prepared to partake in the User's Overlay Networks. + +A node connects itself to 'the Internet' depending on a few states. + +1. It lives in a fully private network (like it would be connected directly to a port on a home router) + +``` + XX XXX + XXX XXXXXX + X Internet X + XXXXXXX XXXXX + XX XXX + XX X + X+X + | + | + +--------+-----------+ + | HOME / | + | SOHO router | + | | + +--------+-----------+ + | + | Private space IPv4 + | (192.168.1.0/24) + | ++---------+------------+ +| | +| NODE | +| | +| | +| | +| | +| | ++----------------------+ +``` + +1. It lives in a fully public network (like it is connected directly to an uplink and has a public ipv4 address) + +``` + XX XXX + XXX XXXXXX + X Internet X + XXXXXXX XXXXX + XX XXX + XX X + X+X + | + | fully public space ipv4/6 + | 185.69.166.0/24 + | 2a02:1802:5e:0:1000::abcd/64 + | ++---------+------------+ +| | +| NODE | +| | ++----------------------+ + +``` +The node is fully reachable + +1. It lives in a datacentre, where a farmer manages the Network.
+ +A little drawing: + +```text ++----------------------------------------------------+ +| switch | +| | +| | ++----------+-------------------------------------+---+ + | | + access | | + mgmt | +---------------+ + vlan | | access + | | public + | | vlan + | | + +-------+---------------------+------+ + | | + | nic1 nic2 | + | | + | | + | | + | NODE | + | | + | | + | | + +------------------------------------+ + +``` + +Or the more elaborate drawing on top, which should be sufficient for a sysadmin to comprehend. + +Although: + +- we don't (yet) support nic bonding (next release) +- we don't (yet) support vlans, so your ports on switch/router need to be access ports to vlans to your router/firewall + + +## yeayea, but really ... what now ? + +Ok, what are the constraints? + +A little foreword: +ZosV2 uses IPv6 as its base for networking, where the oldie IPv4 is merely an afterthought. So for it to work properly in its actual incarnation (we are working to get it to do IPv4-only too), for now, we need the node to live in a space that provides IPv6 __too__. +IPv4 and IPv6 are very different beasts, so any machine connected to the Internet will do both on the same network. So basically your computer talks 2 different languages when it comes to communicating. That is the same for ZOS, where right now, its mother tongue is IPv6. + +So your zos for V2 can start in different settings: +1) you are a farmer, your ISP can provide you with IPv6 +Ok, you're all set; aside from a public IPv4 DHCP, you need to run a Stateless-Only SLAAC Router Advertiser (ZOS does NOT do DHCP6). + +1) you are a farmer, your ISP asks you what the hell IPv6 is +That is problematic right now; wait for the next release of ZosV2. + +1) you are a farmer, with only one node, at home, and on your PC https://ipv6.net tells you you have IPv6 on your PC. +That means your home router received an IPv6 allocation from the ISP. +You're all set; your node will boot, and register to the grid.
If you know what you're doing, you can configure your router to allow all IPv6 traffic in forwarding mode to the specific MAC address of your node (we'll explain later). +4) you are a farmer, with a few nodes somewhere that are registered on the grid in V1, but you have no clue if IPv6 is supported where these nodes live +5) you have a ThreefoldToken node at home, and still do not have a clue + +Basically it also boils down to a few other cases: + +1) the physical network where a node lives has: IPv6 and Private space IPv4 +2) the physical network where a node lives has: IPv6 and Public IPv4 +3) the physical network where a node lives has: only IPv4 + +But it all boils down to: call your ISP, ask for IPv6. It's the future; for your ISP, it's time. There is no way to circumvent it. No way. + + +OK, then, now what? + +1) you're a farmer with a bunch of nodes somewhere in a DC + + - your nodes are connected once (with one NIC) to a switch/router + Then your router will have: + - a segment that carries IPv4 __and__ IPv6: + + - for IPv4, there are 2 possibilities: + - it's RFC1918 (Private space) -> you NAT that subnet (e.g. 192.168.1.0/24) towards the Public Internet + + - you __will__ have difficulty designating an IPv4 public entrypoint into your farm + - your workloads will be reachable only through the overlay + - your storage will not be reachable + + - you received a small IPv4 range you can utilise (small because of the scarcity of IPv4 addresses; your ISP will give you only limited and pricey IPv4 addresses) + + - things are better, the nodes can live in public ipv4 space, where they can be used as entrypoints + - standard configuration that works + + - for IPv6, your router is a Router Advertiser that provides SLAAC (Stateless, unmanaged) for that segment, working with a /64 prefix + + - the nodes will be reachable over IPv6 + - the storage backend will be available for the full grid + - everything will just work + + Best solution for a single NIC: + - an IPv6 prefix + - an IPv4 subnet (however small) + + - your nodes have 2 connections, and you want to separate management from user traffic + + - the same applies as above, where the best outcome will be obtained with a real IPv6 prefix allocation and a small public subnet that is routable. + - the second NIC (typically 10GBit) will then carry everything public, and the first NIC will just be there for management, living in Private space for IPv4, mostly without IPv6 + - your switch needs to be configured to provide port-based vlans, so the segments are properly separated, and your router needs to reflect that vlan config so that separation is handled by the firewall in the router (iptables, pf, acl, ...)
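The SLAAC-only announcement mentioned above (ZOS does NOT do DHCPv6) is typically provided by a router advertisement daemon on the router. As a purely illustrative sketch, assuming a Linux router running radvd, with a made-up interface name `eth1` and prefix `2a02:1802:5e::/64`:

```
interface eth1
{
    AdvSendAdvert on;
    # stateless only: hosts autoconfigure via SLAAC, no DHCPv6
    AdvManagedFlag off;
    AdvOtherConfigFlag off;
    prefix 2a02:1802:5e::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```

Any equivalent RA implementation (e.g. in your router firmware) works, as long as it advertises the prefix with the autonomous flag set and leaves the managed/other flags off.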
+ + + + + diff --git a/collections/developers/internals/zos/internals/network/attic/exitpoints.md b/collections/developers/internals/zos/internals/network/attic/exitpoints.md new file mode 100644 index 0000000..efecf3c --- /dev/null +++ b/collections/developers/internals/zos/internals/network/attic/exitpoints.md @@ -0,0 +1,66 @@ +## Farmers providing transit for Tenant Networks (TN or Network) + +For networks of a user to be reachable, these networks need penultimate Network Resources that act as exit nodes for the WireGuard mesh. + +For that, Users need to solicit a routable network with farmers that provide such a service. + +### Global registry for network resources. (`GRNR`?) + +Threefold, through BCDB, should keep a store where Farmers can also register a network service for Tenant Network (TN) reachability. + +In a network transaction, the first thing asked should be where a user wants to purchase its transit. That can be with a nearby (latency or geolocation) Exit Provider (can e.g. be a Farmer), or with an Exit Provider outside of the geolocation for easier routing towards the primary entrypoint. (VPN-like services coming to mind) + +With this, we could envision in a later stage having the Network Resources be IPv6 multihomed with policy-based routing. That adds the possibility to have multiple exit nodes for the same Network, with different IPv6 routes to them. + +### Datastructure + +A registered Farmer can also register his (dc-located?) network to be sold as transit space. For that he registers: + - the IPv4 addresses that can be allocated to exit nodes. + - the IPv6 prefix he obtained to be used in the Grid + - the nodes that will serve as exit nodes.
+ These nodes need to have IPv[46] access to routable address space through: + - Physical access in an interface of the node + - Access on a public `vlan` or via `vxlan / mpls / gre` + +Together with the registered nodes that will be part of that Public segment, the TNoDB (BCDB) can verify a Network Object containing an ExitPoint for a Network and add it to the queue for ExitNodes to fetch and apply. + +Physically, Nodes can be connected in several ways: + - living directly on the Internet (with a routable IPv4 and/or IPv6 Address) without Provider-enforced firewalling (outgoing traffic only) + - having an IPv4 allocation --and-- an IPv6 allocation + - having a single IPv4 address --and-- a single IPv6 allocation (/64) or even (Oh God Why) a single IPv6 addr. + - living in a Farm that has Nodes only reachable through NAT for IPv4 and no IPv6 + - living in a Farm that has NAT IPv4 and routable IPv6 with an allocation + - living in a single segment having IPv4 RFC1918 and only one IPv6 /64 prefix (home Nodes mostly) + +#### A Network Resource allocation. +We define a Network Resource (NR) as a routable IPv6 `/64` Prefix, so every time a new TNo is generated and validated, containing a new serial number and an added/removed NR, there has been a request to obtain a valid IPv6 Prefix (/64) to be added to the TNo. + +Basically it's just a list of allocations in that prefix that are in use. Any free Prefix will do, as we do routing in the exit nodes with a `/64` granularity. + +The TNoDB (BCDB) then validates/updates the Tenant Network object with that new Network Resource and places it on a queue to be fetched by the interested Nodes. + +#### The Nodes responsible for ExitPoints + +A Node responsible for ExitPoints as well as a Public endpoint will know so because of how it is registered in the TNoDB (BCDB). That is: + - it is defined as an exit node + - the TNoDB hands out an Object that describes its public connectivity, i.e.: + - the public IPv4 address(es) it can use + - the IPv6 Prefix in the network segment that contains the penultimate default route + - an optional private BGP AS number for announcing the `/64` Prefixes of a Tenant Network, and the BGP peer(s). + +With that information, a Node can then build the Network Namespace from which it builds the Wireguard Interfaces prior to sending them into the ExitPoint Namespace. + +So the TNoDB (BCDB) hands out: + - Tenant Network Objects + - Public Interface Objects + +They are related: + - A Node can have Network Resources + - A Network Resource can have (1) Public Interface + - Both are part of a Tenant Network + +A TNo defines a Network where ONLY the ExitPoint is flagged as being one. No more. +When the Node (networkd) needs to set up a Public node, it will need to act differently: + - Verify if the Node is **really** public; if so, use the standard WG interface setup + - If not, verify if there is already a Public Exit Namespace defined, and create the WG interface there. + - If there is no Public Exit Namespace, request one, and set it up first. diff --git a/collections/developers/internals/zos/internals/network/attic/tools.md b/collections/developers/internals/zos/internals/network/attic/tools.md new file mode 100644 index 0000000..8897bca --- /dev/null +++ b/collections/developers/internals/zos/internals/network/attic/tools.md @@ -0,0 +1,264 @@ +# Network + +- [How does a farmer configure a node as exit node](#How-does-a-farmer-configure-a-node-as-exit-node) +- [How to create a user private network](#How-to-create-a-user-private-network) + +## How does a farmer configure a node as exit node + +For the network of the grid to work properly, some of the nodes in the grid need to be configured as "exit nodes". An "exit node" is a node that has a publicly accessible IP address and that is responsible for routing IPv6 traffic, or proxying IPv4 traffic. + +A farmer that wants to configure one of his nodes as "exit node" needs to register it in the TNODB.
The node will then automatically detect it has been configured to be an exit node and do the necessary network configuration to start acting as one. + +At the current state of the development, we have a [TNODB mock](../../tools/tnodb_mock) server and a [tffarmer CLI](../../tools/tffarm) tool that can be used to do this configuration. + +Here is an example of how a farmer could register one of his nodes as "exit node": + +1. Farmer needs to create his farm identity + +```bash +tffarmer register --seed myfarm.seed "mytestfarm" +Farm registered successfully +Name: mytestfarm +Identity: ZF6jtCblLhTgAqp2jvxKkOxBgSSIlrRh1mRGiZaRr7E= +``` + +2. Boot your nodes with your farm identity specified in the kernel parameters. + +Take that farm identity created at step 1 and boot your node with the kernel parameters `farmer_id=` + +for your test farm that would be `farmer_id=ZF6jtCblLhTgAqp2jvxKkOxBgSSIlrRh1mRGiZaRr7E=` + +Once the node is booted, it will automatically register itself as being part of your farm into the [TNODB](../../tools/tnodb_mock) server. + +You can verify that your node registered itself properly by listing all the nodes from the TNODB by doing a GET request on the `/nodes` endpoint: + +```bash +curl http://tnodb_addr/nodes +[{"node_id":"kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA=","farm_id":"ZF6jtCblLhTgAqp2jvxKkOxBgSSIlrRh1mRGiZaRr7E=","Ifaces":[]}] +``` + +3. Farmer needs to specify his public allocation range to the TNODB + +```bash +tffarmer give-alloc 2a02:2788:0000::/32 --seed myfarm.seed +prefix registered successfully +``` + +4. Configure the public interface of the exit node if needed + +In this step the farmer will tell his node how it needs to connect to the public internet. This configuration depends on the farm network setup, which is why it is up to the farmer to provide the details on how the node needs to configure itself.
+ +In a first phase, we create the internet access in 2 ways: + +- the node is fully public: you don't need to configure a public interface, you can skip this step +- the node has a management interface and a nic for public + then `configure-public` is required, and the farmer has the public interface connected to a specific public segment with a router to the internet in front. + +```bash +tffarmer configure-public --ip 172.20.0.2/24 --gw 172.20.0.1 --iface eth1 kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA= +#public interface configured on node kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA= +``` + +We still need to figure out a way to get the routes properly installed; we'll use static routes on the top-level router for now for the demo. + +The node is now configured to be used as an exit node. + +5. Mark a node as being an exit node + +The farmer then needs to select which node he agrees to use as an exit node for the grid: + +```bash +tffarmer select-exit kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA= +#Node kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA= marked as exit node +``` + +## How to create a user private network + +1. Choose an exit node +2. Request a new allocation from the farm of the exit node + - a GET request on the tnodb_mock at `/allocations/{farm_id}` will give you a new allocation +3. Create the network schema + +Steps 1 and 2 are easy enough to be done even manually, but step 3 requires deep knowledge of how networking works +as well as the specific requirements of the 0-OS network system. +This is why we provide a tool that simplifies this process for you, [tfuser](../../tools/tfuser).
+ +Using tfuser, creating a network becomes trivial: + +```bash +# creates a new network with node DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk as exit node +# and outputs the result into network.json +tfuser generate --schema network.json network create --node DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk +``` + +network.json will now contain something like: + +```json +{ + "id": "", + "tenant": "", + "reply-to": "", + "type": "network", + "data": { + "network_id": "J1UHHAizuCU6s9jPax1i1TUhUEQzWkKiPhBA452RagEp", + "resources": [ + { + "node_id": { + "id": "DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk", + "farmer_id": "7koUE4nRbdsqEbtUVBhx3qvRqF58gfeHGMRGJxjqwfZi", + "reachability_v4": "public", + "reachability_v6": "public" + }, + "prefix": "2001:b:a:8ac6::/64", + "link_local": "fe80::8ac6/64", + "peers": [ + { + "type": "wireguard", + "prefix": "2001:b:a:8ac6::/64", + "Connection": { + "ip": "2a02:1802:5e::223", + "port": 1600, + "key": "PK1L7n+5Fo1znwD/Dt9lAupL19i7a6zzDopaEY7uOUE=", + "private_key": "9220e4e29f0acbf3bd7ef500645b78ae64b688399eb0e9e4e7e803afc4dd72418a1c5196208cb147308d7faf1212758042f19f06f64bad6ffe1f5ed707142dc8cc0a67130b9124db521e3a65e4aee18a0abf00b6f57dd59829f59662" + } + } + ], + "exit_point": true + } + ], + "prefix_zero": "2001:b:a::/64", + "exit_point": { + "ipv4_conf": null, + "ipv4_dnat": null, + "ipv6_conf": { + "addr": "fe80::8ac6/64", + "gateway": "fe80::1", + "metric": 0, + "iface": "public" + }, + "ipv6_allow": [] + }, + "allocation_nr": 0, + "version": 0 + } +} +``` + +Which is a valid network schema. This network only contains a single exit node though, so it's not really useful.
+Let's add another node to the network: + +```bash +tfuser generate --schema network.json network add-node --node 4hpUjrbYS4YeFbvLoeSR8LGJKVkB97JyS83UEhFUU3S4 +``` + +result looks like: + +```json +{ + "id": "", + "tenant": "", + "reply-to": "", + "type": "network", + "data": { + "network_id": "J1UHHAizuCU6s9jPax1i1TUhUEQzWkKiPhBA452RagEp", + "resources": [ + { + "node_id": { + "id": "DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk", + "farmer_id": "7koUE4nRbdsqEbtUVBhx3qvRqF58gfeHGMRGJxjqwfZi", + "reachability_v4": "public", + "reachability_v6": "public" + }, + "prefix": "2001:b:a:8ac6::/64", + "link_local": "fe80::8ac6/64", + "peers": [ + { + "type": "wireguard", + "prefix": "2001:b:a:8ac6::/64", + "Connection": { + "ip": "2a02:1802:5e::223", + "port": 1600, + "key": "PK1L7n+5Fo1znwD/Dt9lAupL19i7a6zzDopaEY7uOUE=", + "private_key": "9220e4e29f0acbf3bd7ef500645b78ae64b688399eb0e9e4e7e803afc4dd72418a1c5196208cb147308d7faf1212758042f19f06f64bad6ffe1f5ed707142dc8cc0a67130b9124db521e3a65e4aee18a0abf00b6f57dd59829f59662" + } + }, + { + "type": "wireguard", + "prefix": "2001:b:a:b744::/64", + "Connection": { + "ip": "", + "port": 0, + "key": "3auHJw3XHFBiaI34C9pB/rmbomW3yQlItLD4YSzRvwc=", + "private_key": "96dc64ff11d05e8860272b91bf09d52d306b8ad71e5c010c0ccbcc8d8d8f602c57a30e786d0299731b86908382e4ea5a82f15b41ebe6ce09a61cfb8373d2024c55786be3ecad21fe0ee100339b5fa904961fbbbd25699198c1da86c5" + } + } + ], + "exit_point": true + }, + { + "node_id": { + "id": "4hpUjrbYS4YeFbvLoeSR8LGJKVkB97JyS83UEhFUU3S4", + "farmer_id": "7koUE4nRbdsqEbtUVBhx3qvRqF58gfeHGMRGJxjqwfZi", + "reachability_v4": "hidden", + "reachability_v6": "hidden" + }, + "prefix": "2001:b:a:b744::/64", + "link_local": "fe80::b744/64", + "peers": [ + { + "type": "wireguard", + "prefix": "2001:b:a:8ac6::/64", + "Connection": { + "ip": "2a02:1802:5e::223", + "port": 1600, + "key": "PK1L7n+5Fo1znwD/Dt9lAupL19i7a6zzDopaEY7uOUE=", + "private_key": 
"9220e4e29f0acbf3bd7ef500645b78ae64b688399eb0e9e4e7e803afc4dd72418a1c5196208cb147308d7faf1212758042f19f06f64bad6ffe1f5ed707142dc8cc0a67130b9124db521e3a65e4aee18a0abf00b6f57dd59829f59662" + } + }, + { + "type": "wireguard", + "prefix": "2001:b:a:b744::/64", + "Connection": { + "ip": "", + "port": 0, + "key": "3auHJw3XHFBiaI34C9pB/rmbomW3yQlItLD4YSzRvwc=", + "private_key": "96dc64ff11d05e8860272b91bf09d52d306b8ad71e5c010c0ccbcc8d8d8f602c57a30e786d0299731b86908382e4ea5a82f15b41ebe6ce09a61cfb8373d2024c55786be3ecad21fe0ee100339b5fa904961fbbbd25699198c1da86c5" + } + } + ], + "exit_point": true + }, + { + "node_id": { + "id": "4hpUjrbYS4YeFbvLoeSR8LGJKVkB97JyS83UEhFUU3S4", + "farmer_id": "7koUE4nRbdsqEbtUVBhx3qvRqF58gfeHGMRGJxjqwfZi", + "reachability_v4": "hidden", + "reachability_v6": "hidden" + }, + "prefix": "2001:b:a:b744::/64", + "link_local": "fe80::b744/64", + "peers": [ + { + "type": "wireguard", + "prefix": "2001:b:a:8ac6::/64", + "Connection": { + "ip": "2a02:1802:5e::223", + "port": 1600, + "key": "PK1L7n+5Fo1znwD/Dt9lAupL19i7a6zzDopaEY7uOUE=", + "private_key": "9220e4e29f0acbf3bd7ef500645b78ae64b688399eb0e9e4e7e803afc4dd72418a1c5196208cb147308d7faf1212758042f19f06f64bad6ffe1f5ed707142dc8cc0a67130b9124db521e3a65e4aee18a0abf00b6f57dd59829f59662" + } + }, + { + "type": "wireguard", + "prefix": "2001:b:a:b744::/64", + "Connection": { + "ip": "", + "port": 0, + "key": "3auHJw3XHFBiaI34C9pB/rmbomW3yQlItLD4YSzRvwc=", + "private_key": "96dc64ff11d05e8860272b91bf09d52d306b8ad71e5c010c0ccbcc8d8d8f602c57a30e786d0299731b86908382e4ea5a82f15b41ebe6ce09a61cfb8373d2024c55786be3ecad21fe0ee100339b5fa904961fbbbd25699198c1da86c5" + } + } + ], + "exit_point": false + } + ], + "prefix_zero": "2001:b:a::/64", + "exit_point": { + "ipv4_conf": null, + "ipv4_dnat": null, + "ipv6_conf": { + "addr": "fe80::8ac6/64", + "gateway": "fe80::1", + "metric": 0, + "iface": "public" + }, + "ipv6_allow": [] + }, + "allocation_nr": 0, + "version": 1 + } +} +``` + +Our network schema is now ready, but before we can provision it onto a node, we need to sign it and send it to the bcdb. +To be able to sign it we need to have a pair of keys.
You can use `tfuser id` command to create an identity: + +```bash +tfuser id --output user.seed +``` + +We can now provision the network on both nodes: + +```bash +tfuser provision --schema network.json \ +--node DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk \ +--node 4hpUjrbYS4YeFbvLoeSR8LGJKVkB97JyS83UEhFUU3S4 \ +--seed user.seed +``` diff --git a/collections/developers/internals/zos/internals/network/attic/zostst.dhcp b/collections/developers/internals/zos/internals/network/attic/zostst.dhcp new file mode 100644 index 0000000..0ac53be --- /dev/null +++ b/collections/developers/internals/zos/internals/network/attic/zostst.dhcp @@ -0,0 +1,54 @@ +#!/usr/bin/bash + +mgmtnic=( +0c:c4:7a:51:e3:6a +0c:c4:7a:51:e9:e6 +0c:c4:7a:51:ea:18 +0c:c4:7a:51:e3:78 +0c:c4:7a:51:e7:f8 +0c:c4:7a:51:e8:ba +0c:c4:7a:51:e8:0c +0c:c4:7a:51:e7:fa +) + +ipminic=( +0c:c4:7a:4c:f3:b6 +0c:c4:7a:4d:02:8c +0c:c4:7a:4d:02:91 +0c:c4:7a:4d:02:62 +0c:c4:7a:4c:f3:7e +0c:c4:7a:4d:02:98 +0c:c4:7a:4d:02:19 +0c:c4:7a:4c:f2:e0 +) +cnt=1 +for i in ${mgmtnic[*]} ; do +cat << EOF +config host + option name 'zosv2tst-${cnt}' + option dns '1' + option mac '${i}' + option ip '10.5.0.$((${cnt} + 10))' + +EOF +let cnt++ +done + + + +cnt=1 +for i in ${ipminic[*]} ; do +cat << EOF +config host + option name 'ipmiv2tst-${cnt}' + option dns '1' + option mac '${i}' + option ip '10.5.0.$((${cnt} + 100))' + +EOF +let cnt++ +done + +for i in ${mgmtnic[*]} ; do + echo ln -s zoststconf 01-$(echo $i | sed s/:/-/g) +done diff --git a/collections/developers/internals/zos/internals/network/definitions.md b/collections/developers/internals/zos/internals/network/definitions.md new file mode 100644 index 0000000..137fe0a --- /dev/null +++ b/collections/developers/internals/zos/internals/network/definitions.md @@ -0,0 +1,35 @@ +

+# Definitions

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [Node](#node) +- [TNo : Tenant Network Object](#tno--tenant-network-object) +- [NR: Network Resource](#nr-network-resource) + +*** + +## Introduction + +We present definitions of words used throughout the documentation. + +## Node + + TL;DR: Computer. + A Node is a computer with CPU, Memory, Disks (or SSDs, NVMe) connected to _A_ network that has Internet access. (i.e. it can reach www.google.com, just like you on your phone, at home) + That Node will, once it has received an IP address (IPv4 or IPv6), register itself when it's new, or confirm its identity and its online status. + +## TNo : Tenant Network Object + + TL;DR: The Network Description. + We named it so, because it is a data structure that describes the __whole__ network a user can request (or set up). + That network is a virtualized overlay network. + Basically that means that transfer of data in that network is *always* encrypted, protected from prying eyes, and __resources in that network can only communicate with each other__ **unless** there is a special rule that allows access. Be it by allowing access through firewall rules, *and/or* through a proxy (a service that forwards requests on behalf of, and ships replies back to, the client). + +## NR: Network Resource + + TL;DR: the Node-local part of a TNo. + The main building block of a TNo; i.e. each service of a user in a Node lives in an NR. + Each Node hosts User services, whatever type of service that is. Every service in that specific node will always be solely part of the Tenant's Network. (read that twice). + So: A Network Resource is the thing that interconnects all other network resources of the TN (Tenant Network), and provides routing/firewalling for these interconnects, including the default route to the BBI (Big Bad Internet), aka ExitPoint.
+ All User services that run in a Node are in some way or another connected to the Network Resource (NR), which will provide ip packet forwarding and firewalling to all other network resources (including the Exitpoint) of the TN (Tenant Network) of the user. (read that three times, and the last time, read it slowly and out loud) \ No newline at end of file diff --git a/collections/developers/internals/zos/internals/network/introduction.md b/collections/developers/internals/zos/internals/network/introduction.md new file mode 100644 index 0000000..76660fa --- /dev/null +++ b/collections/developers/internals/zos/internals/network/introduction.md @@ -0,0 +1,87 @@ +

+# Introduction to Networkd

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [Boot and initial setup](#boot-and-initial-setup) +- [Networkd functionality](#networkd-functionality) +- [Techie talk](#techie-talk) +- [Wireguard explanations](#wireguard-explanations) +- [Caveats](#caveats) + +*** + +## Introduction + +We provide an introduction to Networkd, the network manager of 0-OS. + +## Boot and initial setup + +At boot, be it from a USB stick or PXE, ZOS starts up the kernel, with a few necessary parameters like farm ID and/or possible network parameters, but basically once the kernel has started, [zinit](https://github.com/threefoldtech/zinit), among other things, starts the network initializer. + +In short, that process loops over the available network interfaces and tries to obtain an IP address that also provides for a default gateway. That means: it tries to get Internet connectivity. Without it, ZOS stops there, since, unable to register itself or start other processes, there would be no use in starting it anyway. + +Once it has obtained Internet connectivity, ZOS can then proceed to make itself known to the Grid, and announce its existence. It will then regularly poll the Grid for tasks. + +Once initialized, with the network daemon running (a process that will handle all things related to networking), ZOS will set up some basic services so that workloads can themselves use that network. + +## Networkd functionality + +The network daemon is in itself responsible for a few tasks, and working together with the [provision daemon](../provision) it mainly sets up the local infrastructure to get the user network resources, together with the wireguard configurations for the user's mesh network. + +The Wireguard mesh is an overlay network. That means that traffic of that network is encrypted and encapsulated in a new traffic frame that then gets transferred over the underlay network, here in essence the network that has been set up during boot of the node.
+ +For users or workloads that run on top of the mesh, the mesh network looks and behaves like any other directly connected network, and as such a workload can reach other workloads or services in that mesh, with the added advantage that the traffic is encrypted, protecting services and communications over that mesh from too curious eyes. + +That also means that traffic between workloads on nodes in a farmer's local network is even protected from the farmer himself, in essence protecting the user from the farmer should that farmer become too curious. + +As the nodes do not have any way to be accessed, be it over the underlying network or even the local console of the node, a user can be sure that his workload cannot be snooped upon. + +## Techie talk + +- **boot and initial setup** +For ZOS to work at all (the network is the computer), it needs an internet connection. That is: it needs to be able to communicate with the BCDB over the internet. +So ZOS starts with that: with the `internet` process, which tries to get the node to receive an IP address. That process will have set up a bridge (`zos`), connected to an interface that is on an Internet-capable network. That bridge will have an IP address that has Internet access. +Also, that bridge is there for future public interfaces into workloads. +Once ZOS can reach the Internet, the rest of the system can be started, where ultimately, the `networkd` daemon is started. + +- **networkd initial setup** +`networkd` starts by enumerating the available Network interfaces, and registers them to the BCDB (grid database), so that farmers can specify non-standard configs like for multi-nic machines. Once that is done, `networkd` registers itself to the zbus, so it can receive tasks to execute from the provisioning daemon (`provisiond`). +These tasks are mostly setting up network resources for users, where a network resource is a subnet in the user's wireguard mesh.
+ +- **multi-nic setups** + +When a farmer exploits nodes somewhere in a datacentre, where the nodes have multiple NICs, it is advisable (though not necessary) to separate OOB traffic (like initial boot setup) from user traffic (the overlay network as well as the outgoing IPv4 NAT for nodes) onto a different NIC. In such a setup, the farmer will have to make sure the switches are properly configured; more in docs later. + +- **registering and configurations** + +Once a node has booted and properly initialized, registering and configuring the node to be able to accept workloads and their associated network configs is a two-step process. +First, the node registers its live network setup to the BCDB. That is: all NICs with their associated IP addresses and routes are registered so a farm admin can in a second phase configure separate NICs, if any, to handle different kinds of workloads. +In that secondary phase, a farm admin can then set up the NICs and their associated IPs manually, so that workloads can start using them. + +## Wireguard explanations + +- **wireguard as point-to-point links and what that means** +Wireguard is a special type of VPN, where every instance is both a server for multiple peers and a client towards multiple peers. That way you can create fan-out connections as well as receive connections from multiple peers, effectively creating a mesh of connections like this: ![like so](HIDDEN-PUBLIC.png) + +- **wireguard port management** +Every wireguard point (a network resource point) needs a destination/port combo when it's publicly reachable. The destination is a public ip, but the port is the differentiator. So we need to make sure every network wireguard listening port is unique in the node where it runs, and can be reapplied in case of a node's reboot. +ZOS registers the ports **already in use** to the BCDB, so a user can then pick a port that is not yet used.
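The port bookkeeping described above can be pictured with a small sketch. This is not the actual ZOS code, just an illustration of picking a free WireGuard listen port given the set of ports a node already registered (the function name and port range are made up):

```python
def pick_wireguard_port(used_ports, low=1600, high=65536):
    """Return the first listen port in [low, high) not yet registered."""
    used = set(used_ports)
    for port in range(low, high):
        if port not in used:
            return port
    raise RuntimeError("no free wireguard listen port left on this node")

# ports 1600 and 1601 already registered in the BCDB for this node
print(pick_wireguard_port([1600, 1601]))  # -> 1602
```

Because the chosen port is registered back to the BCDB, the same port can be re-applied for that network resource after a node reboot.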
+ +- **wireguard and hidden nodes** +Hidden nodes are nodes that are in essence hidden behind a firewall on an internal network, and unreachable from the Internet, be it as an IPv4 NATed host or an IPv6 host that is firewalled in some way, so that it is impossible to initiate connections from the Internet to the node. +As such, these nodes can only partake in a network as clients towards publicly reachable peers, and can only initiate the connections themselves (ref previous drawing). +To make sure connectivity stays up, all clients keep a keepalive towards all their peers so that communications towards network resources in hidden nodes can be established. + +## Caveats + +- **hidden nodes** +Hidden nodes live (mostly) behind firewalls that keep state about connections, and these states have a lifetime. We try our best to keep these communications going, but depending on the firewall your mileage may vary (YMMV ;-)) + +- **local underlay network reachability** +When multiple nodes live in the same hidden network, at the moment we don't try to have the nodes establish connectivity between themselves, so all nodes in that hidden network can only reach each other through the intermediary of a node that is publicly reachable. So to get some performance, a farmer will have to have really routable nodes available in the vicinity. +So for now, a farmer is better off having his nodes really reachable over a public network. + +- **IPv6 and IPv4 considerations** +While the mesh can work over IPv4 __and__ IPv6 at the same time, a peer can only be reached through one protocol at a time. That is, a peer is IPv4 __or__ IPv6, not both. Hence if a peer is reachable over IPv4, the client towards that peer needs to reach it over IPv4 too and thus needs an IPv4 address. +We strongly advise to have all nodes properly set up on a routable, unfirewalled IPv6 network, so that these problems have no reason to exist.
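For reference, the keepalive behaviour for hidden nodes maps onto WireGuard's persistent-keepalive option. A hypothetical peer stanza on a hidden node (the key, endpoint and prefix reuse example values from these docs, not a real deployment) could look like:

```
[Peer]
PublicKey = PK1L7n+5Fo1znwD/Dt9lAupL19i7a6zzDopaEY7uOUE=
Endpoint = [2a02:1802:5e::223]:1600
AllowedIPs = 2001:b:a:8ac6::/64
# hidden node: send periodic keepalives so the firewall's connection
# state towards the public peer does not expire
PersistentKeepalive = 25
```

The 25-second interval is the commonly suggested value for NATed peers; the right value ultimately depends on the state timeout of the firewall in front of the hidden node.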
diff --git a/collections/developers/internals/zos/internals/network/mesh.md b/collections/developers/internals/zos/internals/network/mesh.md new file mode 100644 index 0000000..fd9bb85 --- /dev/null +++ b/collections/developers/internals/zos/internals/network/mesh.md @@ -0,0 +1,134 @@ +

+# Zero-Mesh

+ +

+## Table of Contents

+ +- [What It Is](#what-it-is) +- [Overlay Network](#overlay-network) +- [ZOS networkd](#zos-networkd) +- [Internet reachability per Network Resource](#internet-reachability-per-network-resource) +- [Interworkings](#interworkings) +- [Network Resource Internals](#network-resource-internals) + +*** + +## What It Is + +When a user wants to deploy a workload, whatever that may be, that workload needs connectivity. +If there is just one service to be run, things can be simple, but in general more than one service needs to interact to provide a full stack. Sometimes these services can live on one node, but mostly these services will be deployed over multiple nodes, in different containers. +The Mesh is created for that: containers can communicate over an encrypted path, and that network can be specified in terms of IP addresses by the user. + +## Overlay Network + +Zero-Mesh is an overlay network. That requires nodes to have a properly working network with existing access to the Internet in the first place, be it full-blown public access, or behind a firewall/home router that provides Private IP NAT to the internet. + +Right now Zero-Mesh has support for both, where nodes behind a firewall are HIDDEN nodes, and nodes that are directly connected, be it over IPv6 or IPv4, are 'normal' nodes. +Hidden nodes can thus only participate as client nodes for a specific user Mesh, and all publicly reachable nodes can act as aggregators for hidden clients in that user Mesh. + +Also, a Mesh is static: once it is configured, and thus during the lifetime of the network, there is one node containing the aggregator for Mesh clients that live on hidden nodes. So if an aggregator node dies or is not reachable any more, the mesh needs to be reapplied, with __some other__ publicly reachable node as aggregator node.
+ +So it goes a bit like ![this](HIDDEN-PUBLIC.png) +The NR labeled Exit in that graph is the point where Network Resources in Hidden Nodes connect. These Exit NRs are then the transfer nodes between Hidden NRs. + +## ZOS networkd + +The networkd daemon receives tasks from the provisioning daemon, so that it can create the necessary resources for a Mesh participant in the User Network (a Network Resource - NR). + +A network is defined as a whole by the User, using the tools in the 3bot to generate a proper configuration that can be used by the network daemon. + +What networkd takes care of is the establishment of the mesh itself, in accordance with the configuration a farmer has given to his nodes. What is configured on top of the Mesh is user defined, and applied as such by networkd. + +## Internet reachability per Network Resource + +Every node that participates in a User mesh will also provide Internet access for every network resource. +That means that every NR has the same Internet access as the node itself. Which also means, in terms of security, that a firewall in the node takes care of blocking all types of entry to the NR, effectively being an Internet access diode, for outgoing and related traffic only. +In a later phase a user will be able to define some network resource as the __sole__ outgoing Internet Access point, but for now that is not yet defined. + +## Interworkings + +So how is that set up? + +Every node participating in a User Network sets up a Network Resource. +Basically, it's a Linux Network Namespace (sort of a network virtual machine) that contains a wireguard interface with a list of other Network Resources it needs to route encrypted packets toward. + +A User Network has a user-defined range, typically a `/16` (like `10.1.0.0/16`). The User then picks a subnet from that range (e.g. `10.1.1.0/24`) to assign to every new NR he wants to participate in that Network.
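The subnet-picking step above can be illustrated with Python's `ipaddress` module (an illustration only, not the actual user tooling):

```python
import ipaddress

# the user-defined range of the User Network
network = ipaddress.ip_network("10.1.0.0/16")

# carve the range into /24 subnets, one to hand out per Network Resource
subnets = list(network.subnets(new_prefix=24))
print(subnets[0])  # -> 10.1.0.0/24
print(subnets[1])  # -> 10.1.1.0/24
```

A `/16` yields 256 such `/24` subnets, so a single User Network can span up to 256 Network Resources with this layout.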
+ +Workloads that are then provisioned are started in a newly created Container, and that container gets a User assigned IP __in__ that subnet of the Network Resource. + +The Network resource itself then handles the routing and firewalling for the containers that are connected to it. Also, the Network Resource takes care of internet connectivity, so that the container can reach out to other services on the Internet. + +![like this](NR_layout.png) + +Also in a later phase, a User will be able to add IPv6 prefixes to his Network Resources, so that containers are reachable over IPv6. + +Fully-routed IPv6 will then be available, where an Exit NR will be the entrypoint towards that network. + +## Network Resource Internals + +Each NR is basically a router for the User Network, but to allow NRs to access the Internet through the Node's local connection, there are some other internal routers to be added. + +Internally it looks like this : + +```text ++------------------------------------------------------------------------------+ +| |wg mesh | +| +-------------+ +-----+-------+ | +| | | | NR cust1 | 100.64.0.123/16 | +| | container +----------+ 10.3.1.0/24 +----------------------+ | +| | cust1 | veth| | public | | +| +-------------+ +-------------+ | | +| | | +| +-------------+ +-------------+ | | +| | | | NR cust200 | 100.64.4.200/16 | | +| | container +----------+ 10.3.1.0/24 +----------------------+ | +| | cust200 | veth| | public | | +| +-------------+ +------+------+ | | +| |wg mesh | | +| 10.101.123.34/16 | | +| +------------+ |tonrs | +| | | +------------------+ | +| | zos +------+ | 100.64.0.1/16 | | +| | | | 10.101.12.231/16| ndmz | | +| +---+--------+ NIC +-----------------------------+ | | +| | | public +------------------+ | +| +--------+------+ | +| | | +| | | ++------------------------------------------------------------------------------+ + | + | + | + | 10.101.0.0/16 10.101.0.1 + 
+------------------+------------------------------------------------------------ + + NAT + -------- + rules NR custA + nft add rule inet nat postrouting oifname public masquerade + nft add rule inet filter input iifname public ct state { established, related } accept + nft add rule inet filter input iifname public drop + + rules NR custB + nft add rule inet nat postrouting oifname public masquerade + nft add rule inet filter input iifname public ct state { established, related } accept + nft add rule inet filter input iifname public drop + + rules ndmz + nft add rule inet nat postrouting oifname public masquerade + nft add rule inet filter input iifname public ct state { established, related } accept + nft add rule inet filter input iifname public drop + + + Routing + + if NR only needs to get out: + ip route add default via 100.64.0.1 dev public + + if an NR wants to use another NR as exitpoint + ip route add default via destnr + with for AllowedIPs 0.0.0.0/0 on that wg peer + +``` + +During startup of the Node, the ndmz is put in place, following the configuration: either the node has a single internet connection, or, in a dual-nic setup, a separate nic is used for internet access. + +The ndmz network has the carrier-grade NAT allocation assigned, so we don't interfere with RFC1918 private IPv4 address space; users can use any of those ranges (but not `100.64.0.0/10`, of course) diff --git a/collections/developers/internals/zos/internals/network/readme.md b/collections/developers/internals/zos/internals/network/readme.md new file mode 100644 index 0000000..3958cf6 --- /dev/null +++ b/collections/developers/internals/zos/internals/network/readme.md @@ -0,0 +1,8 @@ +

# Zero-OS Networking

+ +

## Table of Contents

+ +- [Introduction to networkd](./introduction.md) +- [Vocabulary Definitions](./definitions.md) +- [Wireguard Mesh Details](./mesh.md) +- [Farm Network Setup](./setup_farm_network.md) \ No newline at end of file diff --git a/collections/developers/internals/zos/internals/network/setup_farm_network.md b/collections/developers/internals/zos/internals/network/setup_farm_network.md new file mode 100644 index 0000000..23ecc84 --- /dev/null +++ b/collections/developers/internals/zos/internals/network/setup_farm_network.md @@ -0,0 +1,123 @@ +

# Setup

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Running ZOS (v2) at home](#running-zos-v2-at-home) +- [Running ZOS (v2) in a multi-node farm in a DC](#running-zos-v2-in-a-multi-node-farm-in-a-dc) + - [Necessities](#necessities) + - [IPv6](#ipv6) + - [Routing/firewalling](#routingfirewalling) + - [Multi-NIC Nodes](#multi-nic-nodes) + - [Farmers and the grid](#farmers-and-the-grid) + +*** + +## Introduction + +We present ZOSv2 network considerations. + +Running ZOS on a node is just a matter of booting it with a USB stick, or with a dhcp/bootp/tftp server with the right configuration, so that the node can start the OS. +Once it starts booting, the OS detects the NICs and starts the network configuration. A Node can only carry its boot process through to the end once it has effectively received an IP address and a route to the Internet. Without that, the Node will retry indefinitely to obtain Internet access and will not finish its startup. + +So a Node needs to be connected to a __wired__ network that provides a dhcp server and a default gateway to the Internet, be it NATed or plainly on the public network; any route to the Internet, be it IPv4, IPv6 or both, is sufficient. + +For a node to be able to host user networks, we **strongly** advise a working IPv6 setup, as that is the primary IP stack the User Network's Mesh relies on to function. + +## Running ZOS (v2) at home + +Running a ZOS Node at home is plain and simple. Connect it to your router, plug it in the network, insert the preconfigured USB stick containing the bootloader and the `farmer_id`, and power it on. +You will then see it appear in the Cockpit (`https://cockpit.testnet.grid.tf/capacity`), under your farm.
+ +## Running ZOS (v2) in a multi-node farm in a DC + +Multi-Node Farms, where a farmer wants to host the nodes in a data centre, have basically the same simplicity, but the nodes can boot from a boot server that provides DHCP and also delivers the iPXE image to load, without the need for a USB stick in every Node. + +A boot server is not really necessary, but it helps ;-). That server has a list of the MAC addresses of the nodes, and delivers the bootloader over PXE. The farmer is responsible for setting up the network and configuring the boot server. + +### Necessities + +The Farmer needs to: + +- Obtain an IPv6 prefix allocation from the provider. A publicly routable `/64` will do, but a `/48` is advisable if the farmer wants to provide IPv6 transit for User Networks +- If IPv6 is not an option, obtain an IPv4 subnet from the provider. At least one IPv4 address per node is needed, where all IP addresses are publicly reachable. +- Have the Nodes connected on that public network with a switch so that all Nodes are publicly reachable. +- In case of multiple NICs, also make sure his farm is properly registered in BCDB, so that the Node's public IP addresses are registered. +- Properly list the MAC addresses of the Nodes, and configure the DHCP server to provide an IP address, and in case of multiple NICs also provide private IP addresses over DHCP per Node. +- Make sure that after first boot, the Nodes are reachable. + +### IPv6 + +IPv6, although a standard since 1998, has seen slow adoption over its lifetime. That is mostly because ISPs and carriers were reluctant to deploy it, not seeing the need once NAT and private IP space arrived, which gave a false impression of security. +But this month (10/2019), RIPE sent a mail to all its LIRs that the last consecutive /22 in IPv4 has been allocated. Needless to say, that makes the transition to IPv6 in 2019 of utmost importance and necessity.
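The `/64`-versus-`/48` advice in the necessities above comes down to plain prefix arithmetic: a `/48` contains 2^(64-48) = 65536 distinct `/64`s to delegate (one per User Network), while a single `/64` allocation leaves nothing to hand out. A quick check:

```go
package main

import "fmt"

// subnetsOf returns how many prefixes of length sub fit inside a
// prefix of length prefix (IPv6 prefix lengths, up to 128).
func subnetsOf(prefix, sub int) uint64 {
	if sub < prefix {
		// a shorter (larger) prefix cannot fit in a longer one
		return 0
	}
	return 1 << uint(sub-prefix)
}

func main() {
	fmt.Println(subnetsOf(48, 64)) // 65536: /64s available in a /48
	fmt.Println(subnetsOf(64, 64)) // 1: a /64 cannot be subdivided into more /64s
}
```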
+Hence, ZOS starts with IPv6, and IPv4 is merely an afterthought ;-) +So in a nutshell: we greatly encourage Farmers to have IPv6 on the Node's network. + +### Routing/firewalling + +Basically, the Nodes are self-protecting, in the sense that they expose no listening processes at all. No service is active on the node itself, and User Networks function solely on an overlay. +That also means there is no need for a Farm admin to protect the Nodes from exterior access, albeit some DDoS protection might be a good idea. +In the first phase we will still allow the Host OS (ZOS) to reply to ICMP ping requests, but that 'feature' might as well be blocked in the future, as once a Node is able to register itself, there is no real need to ever reach it. + +### Multi-NIC Nodes + +Nodes that Farmers deploy are typically multi-NIC Nodes, where one NIC (typically 1GBit) is used for getting a proper DHCP server running from which the Nodes can boot, and another NIC (1GBit or even 10GBit) is used for transfers of User Data, so that there is a clean separation and injection of bogus data is not possible. + +That means that there would be two networks, either on different physical switches, or on port-based VLANs in the switch (if there is only one). + +- Management NICs + The Management NIC will be used by ZOS to boot and register itself to the GRID. Also, all communications from the Node to the Grid happen from there. +- Public NICs + +### Farmers and the grid + +A Node, being part of the Grid, has no concept of 'Farmer'. The only relationship between a Node and a Farmer is the fact that it is registered 'somewhere (TM)', and that as such, workloads on a Node will be remunerated with Tokens. For the rest, a Node is a wholly stand-alone thing that participates in the Grid.
+ +```text + 172.16.1.0/24 + 2a02:1807:1100:10::/64 ++--------------------------------------+ +| +--------------+ | +-----------------------+ +| |Node ZOS | +-------+ | | +| | +-------------+1GBit +--------------------+ 1GBit switch | +| | | br-zos +-------+ | | +| | | | | | +| | | | | | +| | | | +------------------+----+ +| +--------------+ | | +-----------+ +| | OOB Network | | | +| | +----------+ ROUTER | +| | | | +| | | | +| | | | +| +------------+ | +----------+ | +| | Public | | | | | +| | container | | | +-----+-----+ +| | | | | | +| | | | | | +| +---+--------+ | +-------------------+--------+ | +| | | | 10GBit Switch | | +| br-pub| +-------+ | | | +| +-----+10GBit +-------------------+ | +----------> +| +-------+ | | Internet +| | | | +| | +----------------------------+ ++--------------------------------------+ + 185.69.167.128/26 Public network + 2a02:1807:1100:0::/64 + +``` + +The underlay part of the wireguard interfaces is instantiated in the Public container (namespace), and once created these wireguard interfaces are sent into the User Network (Network Resource), where a user can then configure the interface as he sees fit. + +The router of the farmer fulfills 2 roles: + +- NAT everything in the OOB network to the outside, so that nodes can start and register themselves, as well as get tasks to execute from the BCDB. +- Route the assigned IPv4 subnet and IPv6 public prefix on the public segment, to which the public container is connected. + +As such, if the farmer wants to provide IPv4 public access for grid proxies, the node will need at least one (1) IPv4 address. The farmer is free to assign IPv4 addresses to only a part of the Nodes. +On the other hand, it is quite important to have a proper IPv6 setup, as IPv6 is the primary stack the grid relies on. + +It's the Farmer's task to set up the Router and the switches.
+ +In a simpler setup (a small number of nodes, for instance), the farmer could set up a single switch and create 2 port-based VLANs to separate OOB and Public, or even, with single-NIC nodes, just put them directly on the public segment, but then he will have to provide a DHCP server on the Public network. diff --git a/collections/developers/internals/zos/internals/network/topology/png/ndmz-dualstack.png b/collections/developers/internals/zos/internals/network/topology/png/ndmz-dualstack.png new file mode 100644 index 0000000..dc9d508 Binary files /dev/null and b/collections/developers/internals/zos/internals/network/topology/png/ndmz-dualstack.png differ diff --git a/collections/developers/internals/zos/internals/network/topology/png/ndmz-hidden.png b/collections/developers/internals/zos/internals/network/topology/png/ndmz-hidden.png new file mode 100644 index 0000000..d6a462c Binary files /dev/null and b/collections/developers/internals/zos/internals/network/topology/png/ndmz-hidden.png differ diff --git a/collections/developers/internals/zos/internals/network/topology/png/nr-join.png b/collections/developers/internals/zos/internals/network/topology/png/nr-join.png new file mode 100644 index 0000000..a8509b4 Binary files /dev/null and b/collections/developers/internals/zos/internals/network/topology/png/nr-join.png differ diff --git a/collections/developers/internals/zos/internals/network/topology/png/nr-step-1.png b/collections/developers/internals/zos/internals/network/topology/png/nr-step-1.png new file mode 100644 index 0000000..fb077d8 Binary files /dev/null and b/collections/developers/internals/zos/internals/network/topology/png/nr-step-1.png differ diff --git a/collections/developers/internals/zos/internals/network/topology/png/nr-step-2.png b/collections/developers/internals/zos/internals/network/topology/png/nr-step-2.png new file mode 100644 index 0000000..1d70324 Binary files /dev/null and
b/collections/developers/internals/zos/internals/network/topology/png/nr-step-2.png differ diff --git a/collections/developers/internals/zos/internals/network/topology/png/public-namespace.png b/collections/developers/internals/zos/internals/network/topology/png/public-namespace.png new file mode 100644 index 0000000..9276012 Binary files /dev/null and b/collections/developers/internals/zos/internals/network/topology/png/public-namespace.png differ diff --git a/collections/developers/internals/zos/internals/network/topology/png/zos-bridge.png b/collections/developers/internals/zos/internals/network/topology/png/zos-bridge.png new file mode 100644 index 0000000..908ea0d Binary files /dev/null and b/collections/developers/internals/zos/internals/network/topology/png/zos-bridge.png differ diff --git a/collections/developers/internals/zos/internals/network/topology/readme.md b/collections/developers/internals/zos/internals/network/topology/readme.md new file mode 100644 index 0000000..2ad5937 --- /dev/null +++ b/collections/developers/internals/zos/internals/network/topology/readme.md @@ -0,0 +1,68 @@ +# On boot +> this is set up by the `internet` daemon, which is part of the bootstrap process. + +The first basic network setup is done here; its point is to connect the node to the internet, so that the rest of the boot process can continue. + +- Go over all **PLUGGED and PHYSICAL** interfaces +- Each matching interface is tested for whether it can get both IPv4 and IPv6 +- If multiple interfaces receive an IPv4 address over DHCP, we pick the one with the `smallest` IP among those with a private gateway IP; if none has a private gateway IP, we simply pick the one with the smallest IP.
+- Once the interface is found, we do the following (we will call this interface **eth**): + - Create a bridge named `zos` + - Disable IPv6 on this bridge, as well as IPv6 forwarding +- Run `udhcpc` on the zos bridge +![zos-bridge](png/zos-bridge.png) + +Once this setup is complete, the node has access to the internet, which allows it to download and run `networkd`; that daemon takes over the network stack and continues the process as follows. + +# Network Daemon +- Validate the zos setup created by the `internet` on-boot daemon +- Send information about all local NICs to the explorer (?) + +## Setting up `ndmz` +First we need to find the master interface for ndmz; we have the following cases: +- the master of `public_config`, if set. Public Config is an external configuration that the farmer sets on the node object; that information is retrieved by the node from the public explorer. +- otherwise (if public_config is not set) check if the public namespace is set (most likely a dead branch: if it exists, the master is always set, and therefore always used) +- otherwise find the first interface with IPv6 +- otherwise check if zos has a global unicast IPv6 +- otherwise it is a hidden node (still uses zos, but in the hidden node setup) + +### Hidden node ndmz +![ndmz-hidden](png/ndmz-hidden.png) + +### Dualstack ndmz +![ndmz-dualstack](png/ndmz-dualstack.png) + +## Setting up Public Config +This is an external configuration step, configured by the farmer on the node object; the node then picks this setup up from the explorer.
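The ndmz master-selection fallback described above can be sketched as a decision chain. This is a simplification: the dead `public namespace` branch is skipped, and the `nodeInfo` fields are hypothetical, not the actual networkd types:

```go
package main

import "fmt"

// nodeInfo is a hypothetical view of what networkd knows at this point.
type nodeInfo struct {
	publicConfigMaster string // master from public_config, if set
	firstIfaceWithIPv6 string // first interface that received an IPv6
	zosHasGlobalIPv6   bool   // zos bridge has a global unicast IPv6
}

// ndmzMaster mirrors the fallback order: the public_config master wins,
// then the first interface with IPv6, then the zos bridge itself;
// otherwise the node is hidden (which also uses zos, in the hidden setup).
func ndmzMaster(n nodeInfo) (master string, hidden bool) {
	switch {
	case n.publicConfigMaster != "":
		return n.publicConfigMaster, false
	case n.firstIfaceWithIPv6 != "":
		return n.firstIfaceWithIPv6, false
	case n.zosHasGlobalIPv6:
		return "zos", false
	default:
		return "zos", true
	}
}

func main() {
	m, hidden := ndmzMaster(nodeInfo{})
	fmt.Println(m, hidden) // zos true: no IPv6 anywhere, so a hidden node
}
```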
+ +![public-namespace](png/public-namespace.png) + +## Setting up Yggdrasil +- Get a list of all public peers with status `up` +- If hidden node: + - Find peers with IPv4 addresses +- If dual stack node: + - Filter out all peers with the same prefix as the node, to avoid connecting only locally +- Write down the yggdrasil config, and start the yggdrasil daemon via zinit +- yggdrasil runs inside the ndmz namespace +- Add an IPv6 address to npub in the same prefix as yggdrasil; this way, when npub6 is used as a gateway for this prefix, traffic +will be routed through yggdrasil. + +# Creating a network resource +A network resource (`NR` for short) is a user private network that lives on the node and can span multiple nodes over wireguard. When a network is deployed, the node builds a user namespace as follows: +- A unique network id is generated as md5sum(user_id + network_name), taking only the first 13 bytes. We will call this `net-id`. + +![nr-1](png/nr-step-1.png) + +## Create the wireguard interface +If the node has `public_config` (so the `public` namespace exists), the wireguard device is first created inside the `public` namespace and then moved +to the network-resource namespace. + +Otherwise, the device is created in the host namespace and then moved to the network-resource namespace.
The final result is + +![nr-2](png/nr-step-2.png) + +Finally, the wireguard peer list is applied and configured; routing rules are also configured to route traffic to the wireguard interface. + +# Member joining a user network (network resource) +![nr-join](png/nr-join.png) diff --git a/collections/developers/internals/zos/internals/network/topology/uml/ndmz-dualstack.wsd b/collections/developers/internals/zos/internals/network/topology/uml/ndmz-dualstack.wsd new file mode 100644 index 0000000..8b3d6eb --- /dev/null +++ b/collections/developers/internals/zos/internals/network/topology/uml/ndmz-dualstack.wsd @@ -0,0 +1,57 @@ +@startuml +[zos\nbridge] as zos +[br-pub\nbridge] as brpub +[br-ndmz\nbridge] as brndmz +note top of brndmz +disable ipv6 +- net.ipv6.conf.br-ndmz.disable_ipv6 = 1 +end note +' brpub -left- zos : veth pair\n(tozos) +brpub -down- master +note right of master +master is found as described +in the readme (this can be zos bridge) +in case of a single node machine +end note + +package "ndmz namespace" { + [tonrs\nmacvlan] as tonrs + note bottom of tonrs + - net.ipv4.conf.tonrs.proxy_arp = 0 + - net.ipv6.conf.tonrs.disable_ipv6 = 0 + + Addresses: + 100.127.0.1/16 + fe80::1/64 + fd00::1 + end note + tonrs - brndmz: macvlan + + [npub6\nmacvlan] as npub6 + npub6 -down- brpub: macvlan + + [npub4\nmacvlan] as npub4 + npub4 -down- zos: macvlan + + note as MAC + gets static mac address generated + from node id. to make sure it receives + same ip address. + end note + + MAC .. npub4 + MAC .. 
npub6 + + note as setup + - net.ipv6.conf.all.forwarding = 1 + end note + + [ygg0] + note bottom of ygg0 + this will be added by yggdrasil setup + in the next step + end note +} + +footer (hidden node) no master with global unicast ipv6 found +@enduml diff --git a/collections/developers/internals/zos/internals/network/topology/uml/ndmz-hidden.wsd b/collections/developers/internals/zos/internals/network/topology/uml/ndmz-hidden.wsd new file mode 100644 index 0000000..05304dc --- /dev/null +++ b/collections/developers/internals/zos/internals/network/topology/uml/ndmz-hidden.wsd @@ -0,0 +1,55 @@ +@startuml +[zos\nbridge] as zos +note left of zos +currently selected master +for the hidden ndmz setup +end note +[br-pub\nbridge] as brpub +[br-ndmz\nbridge] as brndmz +note top of brndmz +disable ipv6 +- net.ipv6.conf.br-ndmz.disable_ipv6 = 1 +end note +brpub -left- zos : veth pair\n(tozos) + +package "ndmz namespace" { + [tonrs\nmacvlan] as tonrs + note bottom of tonrs + - net.ipv4.conf.tonrs.proxy_arp = 0 + - net.ipv6.conf.tonrs.disable_ipv6 = 0 + + Addresses: + 100.127.0.1/16 + fe80::1/64 + fd00::1 + end note + tonrs - brndmz: macvlan + + [npub6\nmacvlan] as npub6 + npub6 -right- brpub: macvlan + + [npub4\nmacvlan] as npub4 + npub4 -down- zos: macvlan + + note as MAC + gets static mac address generated + from node id. to make sure it receives + same ip address. + end note + + MAC .. npub4 + MAC .. 
npub6 + + note as setup + - net.ipv6.conf.all.forwarding = 1 + end note + + [ygg0] + note bottom of ygg0 + this will be added by yggdrasil setup + in the next step + end note +} + +footer (hidden node) no master with global unicast ipv6 found +@enduml diff --git a/collections/developers/internals/zos/internals/network/topology/uml/nr-join.wsd b/collections/developers/internals/zos/internals/network/topology/uml/nr-join.wsd new file mode 100644 index 0000000..0c54b18 --- /dev/null +++ b/collections/developers/internals/zos/internals/network/topology/uml/nr-join.wsd @@ -0,0 +1,23 @@ +@startuml + +component "br-pub" as public +component "b-\nbridge" as bridge +package " namespace" { + component eth0 as eth + note right of eth + set ip as configured in the reservation + it must be in the subnet assigned to n- + in the user resource above. + - set default route through n- + end note + eth .. bridge: veth + + component [pub\nmacvlan] as pub + pub .. public + + note right of pub + only if public ipv6 is requested + also gets a consistent MAC address + end note +} +@enduml diff --git a/collections/developers/internals/zos/internals/network/topology/uml/nr-step-1.wsd b/collections/developers/internals/zos/internals/network/topology/uml/nr-step-1.wsd new file mode 100644 index 0000000..a739c60 --- /dev/null +++ b/collections/developers/internals/zos/internals/network/topology/uml/nr-step-1.wsd @@ -0,0 +1,31 @@ +@startuml +component [b-] as bridge +note left of bridge +- net.ipv6.conf.b-.disable_ipv6 = 1 +end note + +package "n- namespace" { + component [n-\nmacvlan] as nic + bridge .. nic: macvlan + + note bottom of nic + - nic gets the first ip ".1" in the assigned + user subnet.
+ - an ipv6 derived from the assigned ipv4 + - fe80::1/64 + end note + component [public\nmacvlan] as public + note bottom of public + - gets an ipv4 in 100.127.0.9/16 range + - get an ipv6 in the fd00::/64 prefix + - route over 100.127.0.1 + - route over fe80::1/64 + end note + note as G + - net.ipv6.conf.all.forwarding = 1 + end note +} + +component [br-ndmz] as brndmz +brndmz .. public: macvlan +@enduml diff --git a/collections/developers/internals/zos/internals/network/topology/uml/nr-step-2.wsd b/collections/developers/internals/zos/internals/network/topology/uml/nr-step-2.wsd new file mode 100644 index 0000000..6cfcb68 --- /dev/null +++ b/collections/developers/internals/zos/internals/network/topology/uml/nr-step-2.wsd @@ -0,0 +1,33 @@ +@startuml +component [b-] as bridge +note left of bridge +- net.ipv6.conf.b-.disable_ipv6 = 1 +end note + +package "n- namespace" { + component [n-\nmacvlan] as nic + bridge .. nic: macvlan + + note bottom of nic + - nic gets the first ip ".1" in the assigned + user subnet. + - an ipv6 derived from the assigned ipv4 + - fe80::1/64 + end note + component [public\nmacvlan] as public + note bottom of public + - gets an ipv4 in 100.127.0.9/16 range + - get an ipv6 in the fd00::/64 prefix + - route over 100.127.0.1 + - route over fe80::1/64 + end note + note as G + - net.ipv6.conf.all.forwarding = 1 + end note + component [w-\nwireguard] +} + + +component [br-ndmz] as brndmz +brndmz .. 
public: macvlan +@enduml diff --git a/collections/developers/internals/zos/internals/network/topology/uml/public-namespace.wsd b/collections/developers/internals/zos/internals/network/topology/uml/public-namespace.wsd new file mode 100644 index 0000000..215152c --- /dev/null +++ b/collections/developers/internals/zos/internals/network/topology/uml/public-namespace.wsd @@ -0,0 +1,29 @@ +@startuml + +() "br-pub (Public Bridge)" as brpub + +note bottom of brpub +This bridge is always created on boot, and is either +connected to the zos bridge (in a single nic setup), +or to the second nic with public IPv6 (in a dual nic setup) +end note + + +package "public namespace" { + + [public\nmacvlan] as public + public -down- brpub: macvlan + note right of public + - have a static mac generated from node id + - set the ips as configured + - set the default gateways as configured + end note + + note as global + inside namespace + - net.ipv6.conf.all.accept_ra = 2 + - net.ipv6.conf.all.accept_ra_defrtr = 1 + end note +} + +@enduml diff --git a/collections/developers/internals/zos/internals/network/topology/uml/zos-bridge.wsd b/collections/developers/internals/zos/internals/network/topology/uml/zos-bridge.wsd new file mode 100644 index 0000000..2328f00 --- /dev/null +++ b/collections/developers/internals/zos/internals/network/topology/uml/zos-bridge.wsd @@ -0,0 +1,16 @@ +@startuml +() eth +[zos] +eth -up- zos +note left of zos +bridge takes same mac address as eth +(ipv6 is enabled on the bridge) +- net.ipv6.conf.zos.disable_ipv6 = 0 +end note +note left of eth +disable ipv6 on interface: +(ipv6 is disabled on the nic) +- net.ipv6.conf..disable_ipv6 = 1 +- net.ipv6.conf.all.forwarding = 0 +end note +@enduml diff --git a/collections/developers/internals/zos/internals/network/yggdrasil.md b/collections/developers/internals/zos/internals/network/yggdrasil.md new file mode 100644 index 0000000..7b3c189 --- /dev/null +++ b/collections/developers/internals/zos/internals/network/yggdrasil.md
@@ -0,0 +1,25 @@ +# Yggdrasil integration in 0-OS + +Since day one, 0-OS v2 networking has been designed around IPv6. The goal was to avoid having to deal with exhausted IPv4 address space and to be ready for the future. + +While this decision makes sense in the long term, it poses problems in the short term for farmers that only have access to IPv4 and are unable to ask their ISP for an upgrade. + +In order to allow these IPv4-only nodes to join the grid, another overlay network has to be created between all the nodes. To achieve this, Yggdrasil has been selected. + +## Yggdrasil + +The [Yggdrasil network project](https://yggdrasil-network.github.io/) has been selected for integration into 0-OS. All 0-OS nodes will run a yggdrasil daemon, which means all 0-OS nodes can now communicate over the yggdrasil network. The yggdrasil integration is an experiment planned in multiple phases: + +Phase 1: Allow 0-DB containers to be exposed over the yggdrasil network. Implemented in v0.3.5 +Phase 2: Allow containers to request an interface with a yggdrasil IP address. + +## networkd bootstrap + +When booting, networkd will wait 2 minutes to receive an IPv6 address through router advertisement on its `npub6` interface in the ndmz network namespace. +If after 2 minutes no IPv6 is received, networkd will consider the node to be an IPv4-only node, switch to this mode and continue booting. + +### 0-DB containers + +For IPv4-only nodes, the 0-DB container will be exposed on top of a yggdrasil IPv6 address. Since all 0-OS nodes also run yggdrasil, these 0-DB containers will always be reachable from any container in the grid. + +For dual stack nodes, the 0-DB container will also get a yggdrasil IP in addition to the already present public IPv6.
\ No newline at end of file diff --git a/collections/developers/internals/zos/internals/network/zbus.md b/collections/developers/internals/zos/internals/network/zbus.md new file mode 100644 index 0000000..c2b7a2a --- /dev/null +++ b/collections/developers/internals/zos/internals/network/zbus.md @@ -0,0 +1,46 @@ +# Network module + +## ZBus + +Network module is available on zbus over the following channel + +| module | object | version | +|--------|--------|---------| +| network|[network](#interface)| 0.0.1| + +## Home Directory + +network keeps some data in the following locations +| directory | path| +|----|---| +| root| `/var/cache/modules/network`| + + +## Interface + +```go +//Networker is the interface for the network module +type Networker interface { + // Create a new network resource + CreateNR(Network) (string, error) + // Delete a network resource + DeleteNR(Network) error + + // Join a network (with network id) will create a new isolated namespace + // that is hooked to the network bridge with a veth pair, and assign it a + // new IP from the network resource range. The method return the new namespace + // name. 
+ // The member name specifies the name of the member, and must be unique + // The NetID is the network id to join + Join(networkdID NetID, containerID string, addrs []string) (join Member, err error) + + // ZDBPrepare creates a network namespace with a macvlan interface into it + // to allow the 0-db container to be publicly accessible + // it returns the name of the network namespace created + ZDBPrepare() (string, error) + + // Addrs returns the IP addresses of an interface; + // if the interface is in a network namespace, netns needs to be non-empty + Addrs(iface string, netns string) ([]net.IP, error) +} +``` \ No newline at end of file diff --git a/collections/developers/internals/zos/internals/node/readme.md b/collections/developers/internals/zos/internals/node/readme.md new file mode 100644 index 0000000..0679bd7 --- /dev/null +++ b/collections/developers/internals/zos/internals/node/readme.md @@ -0,0 +1,50 @@ +

# Node Module

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Zbus](#zbus) +- [Example](#example) + +*** + +## Introduction + +This module is responsible for registering the node on the grid and for handling grid events. The node daemon broadcasts the relevant events on zbus for other modules that are interested in them. + +The node also provides zbus interfaces to query some of the node information. + +## Zbus + +Node module is available on [zbus](https://github.com/threefoldtech/zbus) over the following channel + +| module | object | version | +|--------|--------|---------| +|host |host| 0.0.1 +|system |system| 0.0.1 +|events |events| 0.0.1 + +## Example + +```go + +//SystemMonitor interface (provided by noded) +type SystemMonitor interface { + NodeID() uint32 + Memory(ctx context.Context) <-chan VirtualMemoryStat + CPU(ctx context.Context) <-chan TimesStat + Disks(ctx context.Context) <-chan DisksIOCountersStat + Nics(ctx context.Context) <-chan NicsIOCounterStat +} + +// HostMonitor interface (provided by noded) +type HostMonitor interface { + Uptime(ctx context.Context) <-chan time.Duration +} + +// Events interface +type Events interface { + PublicConfigEvent(ctx context.Context) <-chan PublicConfigEvent + ContractCancelledEvent(ctx context.Context) <-chan ContractCancelledEvent +} +``` diff --git a/collections/developers/internals/zos/internals/provision/readme.md b/collections/developers/internals/zos/internals/provision/readme.md new file mode 100644 index 0000000..c82be5c --- /dev/null +++ b/collections/developers/internals/zos/internals/provision/readme.md @@ -0,0 +1,35 @@ +

# Provision Module

+ +

## Table of Contents

+ +- [ZBus](#zbus) +- [Introduction](#introduction) +- [Supported workload](#supported-workload) + + +*** + +## ZBus + +This module is an autonomous module and is not reachable over `zbus`. + +## Introduction + +This module is responsible for provisioning and decommissioning workloads on the node. + +It accepts new deployments over `rmb` and tries to bring them to reality by running a series of provisioning workflows based on the workload `type`. + +`provisiond` knows about all available daemons and contacts them over `zbus` to ask for the needed services. It then pulls everything together and updates the deployment with the workload state. + +If the node was restarted, `provisiond` tries to bring all active workloads back to their original state. + +## Supported workload + +0-OS currently supports 8 types of workloads: +- network +- `zmachine` (virtual machine) +- `zmount` (disk): usable only by a `zmachine` +- `public-ip` (v4 and/or v6): usable only by a `zmachine` +- [`zdb`](https://github.com/threefoldtech/0-DB) `namespace` +- [`qsfs`](https://github.com/threefoldtech/quantum-storage) +- `zlogs` +- `gateway` diff --git a/collections/developers/internals/zos/internals/storage/readme.md b/collections/developers/internals/zos/internals/storage/readme.md new file mode 100644 index 0000000..8114ec1 --- /dev/null +++ b/collections/developers/internals/zos/internals/storage/readme.md @@ -0,0 +1,153 @@ +

# Storage Module

+ +

## Table of Contents

+
+- [Introduction](#introduction)
+- [ZBus](#zbus)
+- [Overview](#overview)
+- [List of sub-modules](#list-of-sub-modules)
+- [On Node Booting](#on-node-booting)
+  - [zinit unit](#zinit-unit)
+  - [Interface](#interface)
+
+***
+
+## Introduction
+
+This module is responsible for managing everything related to storage.
+
+## ZBus
+
+The storage module is available on zbus over the following channel:
+
+| module | object | version |
+|--------|--------|---------|
+| storage|[storage](#interface)| 0.0.1|
+
+
+## Overview
+
+On start, storaged takes ownership of all node disks and separates them into 2 different sets:
+
+- SSD Storage: for each SSD disk available, a storage pool of type SSD is created
+- HDD Storage: for each HDD disk available, a storage pool of type HDD is created
+
+
+Then `storaged` can provide the following storage primitives:
+- `subvolume`: (with quota). The btrfs subvolume can be used by `flistd` to support read-write operations on flists, hence it can be used as the rootfs for containers and VMs. This storage primitive is only supported on `ssd` pools.
+  - On boot, storaged will always create a permanent subvolume with id `zos-cache` (of 100G), which is used by the system to persist state and to hold a cache of downloaded files.
+- `vdisk`: a virtual disk that can be attached to virtual machines. This is only possible on `ssd` pools.
+- `device`: a full disk that gets allocated and used by a single `0-db` service. Note that a single 0-db instance can serve multiple zdb namespaces for multiple users. This is only possible on `hdd` pools.
+
+You can already tell that ZOS can work fine with no HDDs (it will not be able to serve zdb workloads though), but not without SSDs. Hence a node with no SSDs will never register on the grid.
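The pool-type rules above (subvolumes and vdisks live on SSD pools, full-device allocations on HDD pools) can be sketched as a small Go helper. This is purely illustrative and not part of the zos codebase; the names `PoolType` and `allowedPool` are invented for the sketch:

```go
package main

import "fmt"

// PoolType mirrors the two pool sets storaged maintains.
type PoolType string

const (
	SSD PoolType = "ssd"
	HDD PoolType = "hdd"
)

// allowedPool returns the pool type a storage primitive must be
// allocated on, following the rules described above.
func allowedPool(kind string) (PoolType, error) {
	switch kind {
	case "subvolume", "vdisk":
		// read-write flist layers and VM disks need SSD pools
		return SSD, nil
	case "device":
		// full-disk 0-db allocations come from HDD pools
		return HDD, nil
	default:
		return "", fmt.Errorf("unknown primitive %q", kind)
	}
}

func main() {
	for _, k := range []string{"subvolume", "vdisk", "device"} {
		p, _ := allowedPool(k)
		fmt.Printf("%s -> %s\n", k, p) // subvolume -> ssd, vdisk -> ssd, device -> hdd
	}
}
```

This also makes the last paragraph concrete: with no HDD pools only `device` allocations fail, but with no SSD pools nothing (not even the `zos-cache` subvolume) can be created.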
+
+## List of sub-modules
+
+- disks
+- 0-db
+- booting
+
+## On Node Booting
+
+When the module boots:
+
+- Make sure to mount all available pools
+- Scan available disks that are not used by any pool and create new pools on those disks (all pools are currently created with the `RaidSingle` policy)
+- Try to find and mount a cache sub-volume under /var/cache.
+- If no cache sub-volume is available a new one is created and then mounted.
+
+### zinit unit
+
+The zinit unit file of the module specifies the command line, the test command, and the order in which the services need to be booted.
+
+The storage module is a dependency for almost all other system modules, hence it has a high boot precedence (calculated on boot) by zinit based on the configuration.
+
+The storage module is only considered running if (and only if) /var/cache is ready:
+
+```yaml
+exec: storaged
+test: mountpoint /var/cache
+```
+
+### Interface
+
+```go
+
+// StorageModule is the storage subsystem interface
+// this should allow you to work with the following types of storage medium
+// - full disks (device) (these are used by zdb)
+// - subvolumes these are used as a read-write layers for 0-fs mounts
+// - vdisks are used by zmachines
+// this works as following:
+// a storage module maintains a list of ALL disks on the system
+// separated in 2 sets of pools (SSDs, and HDDs)
+// ssd pools can only be used for
+// - subvolumes
+// - vdisks
+// hdd pools are only used by zdb as one disk
+type StorageModule interface {
+    // Cache method returns information about the zos cache volume
+    Cache() (Volume, error)
+
+    // Total gives the total amount of storage available for a device type
+    Total(kind DeviceType) (uint64, error)
+    // BrokenPools lists the broken storage pools that have been detected
+    BrokenPools() []BrokenPool
+    // BrokenDevices lists the broken devices that have been detected
+    BrokenDevices() []BrokenDevice
+    // Monitor returns a stats stream about pools
+    Monitor(ctx context.Context) <-chan PoolsStats
+
+    // Volume management
+
+    // VolumeCreate creates a new volume
+    VolumeCreate(name string, size gridtypes.Unit) (Volume, error)
+
+    // VolumeUpdate updates the size of an existing volume
+    VolumeUpdate(name string, size gridtypes.Unit) error
+
+    // VolumeLookup returns volume information for the given name
+    VolumeLookup(name string) (Volume, error)
+
+    // VolumeDelete deletes a volume by name
+    VolumeDelete(name string) error
+
+    // VolumeList lists all volumes
+    VolumeList() ([]Volume, error)
+
+    // Virtual disk management
+
+    // DiskCreate creates a virtual disk given name and size
+    DiskCreate(name string, size gridtypes.Unit) (VDisk, error)
+
+    // DiskResize resizes the disk to the given size
+    DiskResize(name string, size gridtypes.Unit) (VDisk, error)
+
+    // DiskWrite writes the given raw image to the disk
+    DiskWrite(name string, image string) error
+
+    // DiskFormat makes sure the disk has a filesystem; if it is already formatted nothing happens
+    DiskFormat(name string) error
+
+    // DiskLookup looks up a vdisk by name
+    DiskLookup(name string) (VDisk, error)
+
+    // DiskExists checks if a disk exists
+    DiskExists(name string) bool
+
+    // DiskDelete deletes a disk
+    DiskDelete(name string) error
+
+    DiskList() ([]VDisk, error)
+
+    // Device management
+
+    // Devices lists all "allocated" devices
+    Devices() ([]Device, error)
+
+    // DeviceAllocate allocates a new device (formats it and gives it a new ID)
+    DeviceAllocate(min gridtypes.Unit) (Device, error)
+
+    // DeviceLookup inspects a previously allocated device
+    DeviceLookup(name string) (Device, error)
+}
+```
diff --git a/collections/developers/internals/zos/internals/vmd/readme.md b/collections/developers/internals/zos/internals/vmd/readme.md
new file mode 100644
index 0000000..d30fa3e
--- /dev/null
+++ b/collections/developers/internals/zos/internals/vmd/readme.md
@@ -0,0 +1,66 @@
+

# VMD Module

+ +

## Table of Contents

+
+- [ZBus](#zbus)
+- [Home Directory](#home-directory)
+- [Introduction](#introduction)
+  - [zinit unit](#zinit-unit)
+- [Interface](#interface)
+
+***
+
+## ZBus
+
+The vmd module is available on zbus over the following channel:
+
+| module | object | version |
+|--------|--------|---------|
+| vmd|[vmd](#interface)| 0.0.1|
+
+## Home Directory
+
+vmd keeps some data in the following locations:
+
+| directory | path|
+|----|---|
+| root| `/var/cache/modules/containerd`|
+
+## Introduction
+
+The vmd module manages all virtual machine processes. It provides the interface to create, inspect, and delete virtual machines, and it also monitors the VMs to make sure they are re-spawned if they crash. Internally it uses `cloud-hypervisor` to start the VM processes.
+
+It also provides the interface to configure VM log streamers.
+
+### zinit unit
+
+`vmd` must run after the node boot process is complete. Since it doesn't keep state, no dependency on `storaged` is needed:
+
+```yaml
+exec: vmd --broker unix:///var/run/redis.sock
+after:
+  - boot
+  - networkd
+```
+
+## Interface
+
+```go
+
+// VMModule defines the virtual machine module interface
+type VMModule interface {
+    Run(vm VM) error
+    Inspect(name string) (VMInfo, error)
+    Delete(name string) error
+    Exists(name string) bool
+    Logs(name string) (string, error)
+    List() ([]string, error)
+    Metrics() (MachineMetrics, error)
+
+    // VM Log streams
+
+    // StreamCreate creates a stream for vm `name`
+    StreamCreate(name string, stream Stream) error
+    // StreamDelete deletes a stream by its id.
+    StreamDelete(id string) error
+}
+```
diff --git a/collections/developers/internals/zos/manual/api.md b/collections/developers/internals/zos/manual/api.md
new file mode 100644
index 0000000..1ffdad4
--- /dev/null
+++ b/collections/developers/internals/zos/manual/api.md
@@ -0,0 +1,273 @@
+

# API

+ +

## Table of Contents

+
+- [Introduction](#introduction)
+- [Deployments](#deployments)
+  - [Deploy](#deploy)
+  - [Update](#update)
+  - [Get](#get)
+  - [Changes](#changes)
+  - [Delete](#delete)
+- [Statistics](#statistics)
+- [Storage](#storage)
+  - [List separate pools with capacity](#list-separate-pools-with-capacity)
+- [Network](#network)
+  - [List Wireguard Ports](#list-wireguard-ports)
+  - [Supports IPV6](#supports-ipv6)
+  - [List Public Interfaces](#list-public-interfaces)
+  - [List Public IPs](#list-public-ips)
+  - [Get Public Config](#get-public-config)
+- [Admin](#admin)
+  - [List Physical Interfaces](#list-physical-interfaces)
+  - [Get Public Exit NIC](#get-public-exit-nic)
+  - [Set Public Exit NIC](#set-public-exit-nic)
+- [System](#system)
+  - [Version](#version)
+  - [DMI](#dmi)
+  - [Hypervisor](#hypervisor)
+- [GPUs](#gpus)
+  - [List Gpus](#list-gpus)
+
+
+***
+
+## Introduction
+
+This document lists all the actions available on the node public API, which is served over [RMB](https://github.com/threefoldtech/rmb-rs).
+
+The node is always reachable over the node twin id as per the node object on tfchain. Once the node twin is known, a [client](https://github.com/threefoldtech/zos/blob/main/client/node.go) can be initiated and used to talk to the node.
+
+## Deployments
+
+### Deploy
+
+| command |body| return|
+|---|---|---|
+| `zos.deployment.deploy` | [Deployment](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/deployment.go)|-|
+
+The deployment needs to have a valid signature, and the contract must exist on chain with a contract hash that matches the deployment.
+
+### Update
+
+| command |body| return|
+|---|---|---|
+| `zos.deployment.update` | [Deployment](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/deployment.go)|-|
+
+The update call modifies an already existing deployment with a new definition. The deployment must already exist on the node, the contract must carry the hash of the new deployment, and the versions must be valid.
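To make the hash requirement concrete, here is a minimal, self-contained Go sketch of the check the node performs: recompute a challenge hash over the deployment and compare it with the hash registered on the contract. The real `ChallengeHash` encoding in `gridtypes` differs; the `deployment` struct and string encoding below are simplified stand-ins for illustration only:

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// deployment is a toy stand-in for gridtypes.Deployment.
type deployment struct {
	twinID    uint32
	workloads []string
}

// challenge derives a deterministic hash over the deployment fields.
// The real encoding is defined by gridtypes, not this sketch.
func challenge(d deployment) [16]byte {
	data := fmt.Sprintf("%d", d.twinID)
	for _, w := range d.workloads {
		data += "|" + w
	}
	return md5.Sum([]byte(data))
}

// hashesMatch is the node-side check: the recomputed deployment hash
// must equal the hash stored on the (chain) contract.
func hashesMatch(d deployment, contractHash [16]byte) bool {
	return challenge(d) == contractHash
}

func main() {
	d := deployment{twinID: 42, workloads: []string{"net", "vm"}}
	onChain := challenge(d) // what the user registered with the contract
	fmt.Println("accepted:", hashesMatch(d, onChain)) // prints "accepted: true"
}
```

Any change to the submitted deployment (for an update, a new hash must be set on the contract first) makes the comparison fail and the call is rejected.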
+
+> TODO: more details are needed on how the deployment update calls handle the version
+
+### Get
+
+| command |body| return|
+|---|---|---|
+| `zos.deployment.get` | `{contract_id: <id>}`|[Deployment](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/deployment.go)|
+
+### Changes
+
+| command |body| return|
+|---|---|---|
+| `zos.deployment.changes` | `{contract_id: <id>}`| `[]Workloads` |
+
+Where:
+
+- [workload](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/workload.go)
+
+The list contains all deployment workloads (changes), which means a workload can (and will) appear
+multiple times in this list, once for each time the workload state changed.
+
+This means a workload will first appear in the `init` state; the next time it will show the state change (with time) to the next state, which can be success or failure, and so on.
+This happens for each workload in the deployment.
+
+### Delete
+>
+> You probably never need to call this command yourself; the node will delete the deployment once the contract is cancelled on the chain.
+
+| command |body| return|
+|---|---|---|
+| `zos.deployment.delete` | `{contract_id: <id>}`|-|
+
+## Statistics
+
+| command |body| return|
+|---|---|---|
+| `zos.statistics.get` | - |`{total: Capacity, used: Capacity, system: Capacity}`|
+
+Where:
+
+```json
+Capacity {
+    "cru": "uint64",
+    "sru": "bytes",
+    "hru": "bytes",
+    "mru": "bytes",
+    "ipv4u": "uint64",
+}
+```
+
+> Note that the `used` capacity equals the full workload reserved capacity PLUS the system reserved capacity,
+so `used = user_used + system`, while `system` is only the amount of resources reserved by `zos` itself
+
+## Storage
+
+### List separate pools with capacity
+
+| command |body| return|
+|---|---|---|
+| `zos.storage.pools` | - |`[]Pool`|
+
+Lists all node pools with their type, size, and used space,
+where
+
+```json
+Pool {
+    "name": "pool-id",
+    "type": "(ssd|hdd)",
+    "size": <size in bytes>,
+    "used": <used in bytes>
+}
+```
+
+## Network
+
+### List Wireguard Ports
+
+| command |body| return|
+|---|---|---|
+| `zos.network.list_wg_ports` | - |`[]uint16`|
+
+Lists all `reserved` ports on the node that can't be used for network wireguard. A user then needs to find a free port that is not in this list to use for their network.
+
+### Supports IPV6
+
+| command |body| return|
+|---|---|---|
+| `zos.network.has_ipv6` | - |`bool`|
+
+### List Public Interfaces
+
+| command |body| return|
+|---|---|---|
+| `zos.network.interfaces` | - |`map[string][]IP` |
+
+Lists node IPs; this is public information, mainly to show the node yggdrasil IP and the `zos` interface.
+
+### List Public IPs
+
+| command |body| return|
+|---|---|---|
+| `zos.network.list_public_ips` | - |`[]IP` |
+
+Lists all user deployed public IPs that are served by this node.
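As noted under *List Wireguard Ports*, the caller's job is to pick a port outside the node's reserved list. A hedged sketch of that client-side selection (the candidate port range here is an arbitrary choice for the example, not something zos mandates):

```go
package main

import (
	"fmt"
	"math/rand"
)

// freePort picks a wireguard port that is not in the node's
// reserved list (as returned by zos.network.list_wg_ports).
func freePort(reserved []uint16) uint16 {
	used := make(map[uint16]bool, len(reserved))
	for _, p := range reserved {
		used[p] = true
	}
	for {
		// try random candidates from an arbitrary high range
		p := uint16(2000 + rand.Intn(8000))
		if !used[p] {
			return p
		}
	}
}

func main() {
	reserved := []uint16{4000, 5500} // e.g. the zos.network.list_wg_ports result
	fmt.Println("picked a free wireguard port:", freePort(reserved))
}
```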
+
+### Get Public Config
+
+| command |body| return|
+|---|---|---|
+| `zos.network.public_config_get` | - |`PublicConfig` |
+
+Where
+
+```json
+PublicConfig {
+    "type": "string", // always vlan
+    "ipv4": "CIDR",
+    "ipv6": "CIDR",
+    "gw4": "IP",
+    "gw6": "IP",
+    "domain": "string",
+}
+```
+
+Returns the node public config, or an error if it is not set. If a node has a public config
+it means it can act as an access node to user private networks.
+
+## Admin
+
+The next set of commands can ONLY be called by the `farmer`.
+
+### List Physical Interfaces
+
+| command |body| return|
+|---|---|---|
+| `zos.network.admin.interfaces` | - |`map[string]Interface` |
+
+Where
+
+```json
+Interface {
+    "ips": ["ip"],
+    "mac": "mac-address",
+}
+```
+
+Lists ALL node physical interfaces.
+Those interfaces can then be used as an input to `set_public_nic`.
+
+### Get Public Exit NIC
+
+| command |body| return|
+|---|---|---|
+| `zos.network.admin.get_public_nic` | - |`ExitDevice` |
+
+Where
+
+```json
+ExitDevice {
+    "is_single": "bool",
+    "is_dual": "bool",
+    "dual_interface": "name",
+}
+```
+
+Returns the interface used by public traffic (for user workloads).
+
+### Set Public Exit NIC
+
+| command |body| return|
+|---|---|---|
+| `zos.network.admin.set_public_nic` | `name` |- |
+
+`name` must be one of the (free) names returned by `zos.network.admin.interfaces`.
+
+## System
+
+### Version
+
+| command |body| return|
+|---|---|---|
+| `zos.system.version` | - | `{zos: string, zinit: string}` |
+
+### DMI
+
+| command |body| return|
+|---|---|---|
+| `zos.system.dmi` | - | [DMI](https://github.com/threefoldtech/zos/blob/main/pkg/capacity/dmi/dmi.go) |
+
+### Hypervisor
+
+| command |body| return|
+|---|---|---|
+| `zos.system.hypervisor` | - | `string` |
+
+## GPUs
+
+### List Gpus
+
+| command |body| return|
+|---|---|---|
+| `zos.gpu.list` | - | `[]GPU` |
+
+Where
+
+```json
+GPU {
+    "id": "string",
+    "vendor": "string",
+    "device": "string",
+    "contract": "uint64",
+}
+```
+
+Lists all available node GPUs, if any exist.
diff --git a/collections/developers/internals/zos/manual/gateway/fqdn-proxy.md b/collections/developers/internals/zos/manual/gateway/fqdn-proxy.md
new file mode 100644
index 0000000..07a2f8b
--- /dev/null
+++ b/collections/developers/internals/zos/manual/gateway/fqdn-proxy.md
@@ -0,0 +1,5 @@
+# `gateway-fqdn-proxy` type
+
+This creates a proxy with the given FQDN to the given backends. In this case the user must configure their DNS server (e.g. name.com) to point to the correct node public IP.
+
+The full fqdn-proxy workload data is defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/gw_fqdn.go)
diff --git a/collections/developers/internals/zos/manual/gateway/name-proxy.md b/collections/developers/internals/zos/manual/gateway/name-proxy.md
new file mode 100644
index 0000000..2ce40f3
--- /dev/null
+++ b/collections/developers/internals/zos/manual/gateway/name-proxy.md
@@ -0,0 +1,5 @@
+# `gateway-name-proxy` type
+
+This creates a proxy with the given name to the given backends. The `name` of the proxy must be owned by a name contract on the grid. The idea is that a user can reserve a name (e.g. `example`). Later they can deploy a gateway workload with name `example` on any gateway node that points to the specified backends. The name is then prefixed to the gateway domain. For example, if the gateway domain is `gent0.freefarm.com` then your full FQDN is going to be `example.gent0.freefarm.com`
+
+The full name-proxy workload data is defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/gw_name.go)
diff --git a/collections/developers/internals/zos/manual/ip/readme.md b/collections/developers/internals/zos/manual/ip/readme.md
new file mode 100644
index 0000000..0e6068f
--- /dev/null
+++ b/collections/developers/internals/zos/manual/ip/readme.md
@@ -0,0 +1,11 @@
+# `ip` type
+The IP workload type reserves an IP from the available contract IPs list.
This means that on contract creation the user must specify the number of public IPs they need to use. The contract will then allocate this number of IPs from the farm and keep them on the contract.
+
+When the user then adds an IP workload to the deployment associated with this contract, each IP workload will pick and link to one IP from the contract.
+
+In its minimal form, the `IP` workload does not require any data. But in reality it has 2 flags to pick which kind of public IP you want:
+
+- `ipv4` (`bool`): pick one from the contract public IPv4s
+- `ipv6` (`bool`): pick an IPv6 over SLAAC. IPv6 addresses are not reserved with a contract; they are basically free if the farm infrastructure allows IPv6 over SLAAC.
+
+The full `IP` workload definition can be found [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/ipv4.go)
diff --git a/collections/developers/internals/zos/manual/manual.md b/collections/developers/internals/zos/manual/manual.md
new file mode 100644
index 0000000..fb926c9
--- /dev/null
+++ b/collections/developers/internals/zos/manual/manual.md
@@ -0,0 +1,187 @@
+

# ZOS Manual

+ +

## Table of Contents

+
+- [Introduction](#introduction)
+- [Farm? Network? What are these?](#farm-network-what-are-these)
+- [Creating a farm](#creating-a-farm)
+- [Interaction](#interaction)
+- [Deployment](#deployment)
+  - [Workload](#workload)
+  - [Types](#types)
+  - [API](#api)
+- [Raid Controller Configuration](#raid-controller-configuration)
+
+***
+
+## Introduction
+
+This document explains the usage of `ZOS`. `ZOS`, usually pronounced (zero OS), got its name from the idea of zero configuration: after the initial `minimal` configuration, which only includes which `farm` to join and what `network` (`development`, `testing`, or `production`) to use, the owner of the node does not have to do anything more, and the node works fully autonomously.
+
+The farmer cannot control the node, or access it by any means. The only way you can interact with a node is via its public API.
+
+## Farm? Network? What are these?
+
+`zos` is built to allow people to run `workloads` around the world. This is enabled by allowing 3rd party data-centers to run `ZOS` on their hardware. A user can then find any nearby `farm` (which is what we call a cluster of nodes that belong to the same `farmer`) and choose to deploy capacity on that node/farm. A `farm` can consist of one or more nodes.
+
+So what is a `network`? To allow developers to build and test `zos` itself, and to make it available during the early stages of development for testers and other enthusiasts to try out, we created 3 `networks`:
+- `development`: This is used mainly by developers to test their work. It is still available for users to deploy their capacity on (for really, really cheap prices), but at the same time there is no guarantee that it's stable or that data loss or corruption won't happen. Also, the entire network can be reset with no heads-up.
+- `testing`: Once new features are developed and well tested on the `development` network they are released to the `testing` environment.
This is also available for users at a slightly higher price than the `development` network, but it's much more stable. In theory there should be no resets of this network; issues on this network are usually not fatal, but partial data loss can still occur.
+- `production`: As the name indicates, this is the most stable network (also full price). Once new features are fully tested on the `testing` network they are released on `production`.
+
+## Creating a farm
+
+While this is outside the scope of this document, here is a [link](https://library.threefold.me/info/manual/#/manual__create_farm).
+
+## Interaction
+
+`ZOS` provides a simple `API` that can be used to:
+- Query node runtime information
+  - Network information
+  - Free `wireguard` ports
+  - Get public configuration
+  - System version
+  - Other (check client for details)
+- Deployment management (more on that later)
+  - Create
+  - Update
+  - Delete
+
+Note that the `zos` API is available over the `rmb` protocol. `rmb`, which stands for `reliable message bus`, is a simple messaging protocol that enables peer-to-peer communication over the `yggdrasil` network. Please check [`rmb`](https://github.com/threefoldtech/rmb) for more information.
+
+Simply put, `RMB` allows 2 entities to communicate securely knowing only their `id`s; an id is linked to a public key on the blockchain, hence messages are verifiable via a signature.
+
+To be able to contact the node directly you need to run:
+- `yggdrasil`
+- `rmb` (correctly configured)
+
+Once you have those running you can contact the node over `rmb`. For a reference implementation (function names and parameters) please refer to the [RMB documentation](../../rmb/rmb_toc.md).
+
+Here is a rough example of how the low-level creation of a deployment is done.
+
+```go
+cl, err := rmb.Default()
+if err != nil {
+    panic(err)
+}
+```
+
+Then create an instance of the node client:
+
+```go
+node := client.NewNodeClient(NodeTwinID, cl)
+```
+
+Define your deployment object:
+
+```go
+dl := gridtypes.Deployment{
+    Version: Version,
+    TwinID:  Twin, //LocalTwin,
+    // this contract id must match the one on substrate
+    Workloads: []gridtypes.Workload{
+        network(),  // network workload definition
+        zmount(),   // zmount workload definition
+        publicip(), // public ip definition
+        zmachine(), // zmachine definition
+    },
+    SignatureRequirement: gridtypes.SignatureRequirement{
+        WeightRequired: 1,
+        Requests: []gridtypes.SignatureRequest{
+            {
+                TwinID: Twin,
+                Weight: 1,
+            },
+        },
+    },
+}
+```
+
+Compute the hash:
+
+```go
+hash, err := dl.ChallengeHash()
+if err != nil {
+    panic("failed to create hash")
+}
+fmt.Printf("Hash: %x\n", hash)
+```
+
+Create the contract on `substrate` and get the `contract id`, then you can link the deployment to the contract and send it to the node:
+
+```go
+dl.ContractID = 11 // from substrate
+ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+defer cancel()
+err = node.DeploymentDeploy(ctx, dl)
+if err != nil {
+    panic(err)
+}
+```
+
+Once the node receives the deployment, it will fetch the contract (using the contract id) from the chain, recompute the deployment hash, and compare it with the one set on the contract. If they match, the node proceeds to process the deployment.
+
+## Deployment
+
+A deployment is a set of workloads that are contextually related. Workloads in the same deployment can reference other workloads in the same deployment, but can't be referenced from another deployment. The exception is the network workload, which can be referenced from a different deployment as long as it belongs to the same user.
+
+Workloads have unique IDs (per deployment) that are set by the user, so the user can create multiple workloads and then reference them by their given IDs (`names`).
+
+For example, a deployment can define:
+- A private network with id `net`
+- A disk with id `data`
+- A public IP with id `ip`
+- A container that uses them:
+  - The container can mount the disk like `mount: {data: /mount/path}`.
+  - The container can get the public IP assigned to itself by referencing the IP with id `ip`.
+  - etc.
+
+### Workload
+Each workload has a type which is associated with some data. So the minimal definition of a workload contains:
+- `name`: unique per deployment (id)
+- `type`: workload type
+- `data`: workload data that is proper for the selected type
+
+```go
+
+// Workload struct
+type Workload struct {
+    // Version is the version of the reservation object. On deployment creation, version must be 0;
+    // then only workloads that need to be updated must match the version of the deployment object.
+    // if a deployment update message is sent to a node it does the following:
+    // - validate deployment version
+    // - check workloads list, if a version is not matching the new deployment version, the workload is untouched
+    // - if a workload version is same as deployment, the workload is "updated"
+    // - if a workload is removed, the workload is deleted.
+    Version uint32 `json:"version"`
+    // Name is the unique workload name per deployment (required)
+    Name Name `json:"name"`
+    // Type of the reservation (container, zdb, vm, etc...)
+    Type WorkloadType `json:"type"`
+    // Data is the reservation type arguments.
+    Data json.RawMessage `json:"data"`
+    // Metadata is user specific meta attached to deployment, can be used to link this
+    // deployment to other external systems for automation
+    Metadata string `json:"metadata"`
+    // Description is a human readable description of the workload
+    Description string `json:"description"`
+    // Result of the reservation, set by the node
+    Result Result `json:"result"`
+}
+```
+
+### Types
+- Virtual machine related
+  - [`network`](./workload_types.md#network-type)
+  - [`ip`](./workload_types.md#ip-type)
+  - [`zmount`](./workload_types.md#zmount-type)
+  - [`zmachine`](./workload_types.md#zmachine-type)
+  - [`zlogs`](./workload_types.md#zlogs-type)
+- Storage related
+  - [`zdb`](./workload_types.md#zdb-type)
+  - [`qsfs`](./workload_types.md#qsfs-type)
+- Gateway related
+  - [`gateway-name-proxy`](./workload_types.md#gateway-name-proxy-type)
+  - [`gateway-fqdn-proxy`](./workload_types.md#gateway-fqdn-proxy-type)
+
+### API
+The node is always connected to the RMB network with the node `twin`, which means the node is always reachable over RMB with the node `twin-id` as an address.
+
+The [node client](https://github.com/threefoldtech/zos/blob/main/client/node.go) should have a complete list of all available functions. Documentation of the API can be found [here](./api.md).
+
+## Raid Controller Configuration
+
+0-OS's goal is to expose raw capacity, so it is best to give it the most direct access to the disks possible. In the case of RAID controllers, it is best to set them up in [JBOD](https://en.wikipedia.org/wiki/Non-RAID_drive_architectures#JBOD) mode if available.
\ No newline at end of file
diff --git a/collections/developers/internals/zos/manual/network/readme.md b/collections/developers/internals/zos/manual/network/readme.md
new file mode 100644
index 0000000..3cbc23b
--- /dev/null
+++ b/collections/developers/internals/zos/manual/network/readme.md
@@ -0,0 +1,14 @@
+# `network` type
+A private network can span multiple nodes at the same time, which means workloads (`VMs`) that live on different nodes but are part of the same virtual network can still reach each other over this `private` network.
+
+If one (or more) nodes are `public access nodes`, you can also add your personal laptop to the network and reach your `VMs` over the `wireguard` network.
+
+In its simplest form a network workload consists of:
+- a network range
+- the sub-range available on this node
+- a private key
+- a list of peers
+  - each peer has a public key
+  - and a sub-range
+
+The full network definition can be found [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/network.go)
diff --git a/collections/developers/internals/zos/manual/qsfs/readme.md b/collections/developers/internals/zos/manual/qsfs/readme.md
new file mode 100644
index 0000000..1049cdf
--- /dev/null
+++ b/collections/developers/internals/zos/manual/qsfs/readme.md
@@ -0,0 +1,5 @@
+# `qsfs` type
+
+`qsfs`, short for `quantum safe file system`, is a FUSE filesystem which aims to support unlimited local storage, with remote backends for offload and backup that cannot be broken even by a quantum computer.
Please read about it [here](https://github.com/threefoldtech/quantum-storage).
+
+To create a `qsfs` workload you need to provide the workload data as defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/qsfsd/qsfs.go)
diff --git a/collections/developers/internals/zos/manual/workload_types.md b/collections/developers/internals/zos/manual/workload_types.md
new file mode 100644
index 0000000..a9e2d85
--- /dev/null
+++ b/collections/developers/internals/zos/manual/workload_types.md
@@ -0,0 +1,108 @@
+

# Workload Types

+ +

## Table of Contents

+
+- [Introduction](#introduction)
+- [Virtual Machine](#virtual-machine)
+  - [`network` type](#network-type)
+  - [`ip` type](#ip-type)
+  - [`zmount` type](#zmount-type)
+  - [`zmachine` type](#zmachine-type)
+    - [Building your `flist`](#building-your-flist)
+  - [`zlogs` type](#zlogs-type)
+- [Storage](#storage)
+  - [`zdb` type](#zdb-type)
+  - [`qsfs` type](#qsfs-type)
+- [Gateway](#gateway)
+  - [`gateway-name-proxy` type](#gateway-name-proxy-type)
+  - [`gateway-fqdn-proxy` type](#gateway-fqdn-proxy-type)
+
+## Introduction
+
+Each workload has a type which is associated with some data. We present here the different types of workloads associated with Zero-OS.
+
+## Virtual Machine
+
+### `network` type
+A private network can span multiple nodes at the same time, which means workloads (`VMs`) that live on different nodes but are part of the same virtual network can still reach each other over this `private` network.
+
+If one (or more) nodes are `public access nodes`, you can also add your personal laptop to the network and reach your `VMs` over the `wireguard` network.
+
+In its simplest form a network workload consists of:
+- a network range
+- the sub-range available on this node
+- a private key
+- a list of peers
+  - each peer has a public key
+  - and a sub-range
+
+The full network definition can be found [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/network.go)
+
+### `ip` type
+The IP workload type reserves an IP from the available contract IPs list. This means that on contract creation the user must specify the number of public IPs they need to use. The contract will then allocate this number of IPs from the farm and keep them on the contract.
+
+When the user then adds an IP workload to the deployment associated with this contract, each IP workload will pick and link to one IP from the contract.
+
+In its minimal form, the `IP` workload does not require any data.
But in reality it has 2 flags to pick which kind of public IP you want:
+
+- `ipv4` (`bool`): pick one from the contract public IPv4s
+- `ipv6` (`bool`): pick an IPv6 over SLAAC. IPv6 addresses are not reserved with a contract; they are basically free if the farm infrastructure allows IPv6 over SLAAC.
+
+The full `IP` workload definition can be found [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/ipv4.go)
+
+### `zmount` type
+A `zmount` is a local disk that can be attached directly to a container or a virtual machine. `zmount` only requires `size` as input, as defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/zmount.go). This workload type is only utilized via the `zmachine` workload.
+
+### `zmachine` type
+
+`zmachine` is a unified container/virtual machine type. It can be used to start a virtual machine on a `zos` node given the following:
+- `flist`: this is what provides the base `vm` image or container image.
+  - the `flist` content is what determines the `zmachine` mode. An `flist` built from a docker image, or one that has files and executable binaries, will run in container mode. `ZOS` will inject its own `kernel+initramfs` to run the workload and kick-start the defined `flist` `entrypoint`
+- a private network to join (with an assigned IP)
+- an optional public `ipv4` or `ipv6`
+- optional disks. At least one disk is required when running a `zmachine` in `vm` mode, which is used to hold the `vm` root image.
+
+For more details on all parameters needed to run a `zmachine` please refer to [`zmachine` data](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/zmachine.go)
+
+#### Building your `flist`
+
+Please refer to [this document](./manual.md) on how to build a compatible `zmachine` `flist`.
+
+### `zlogs` type
+
+Zlogs is a utility workload that allows you to stream `zmachine` logs to a remote location.
+ +The `zlogs` workload needs to know which `zmachine` to stream logs from and the `target` location to stream them to. Internally, `zlogs` uses [`tailstream`](https://github.com/threefoldtech/tailstream), so it supports any streaming URL supported by that utility. + +The `zlogs` workload runs inside the same private network as the `zmachine` instance, which means zlogs can stream logs to other `zmachines` running inside the same private network (possibly on different nodes). + +For example, you can run [`logagg`](https://github.com/threefoldtech/logagg), a web-socket server that speaks the `tailstream` web-socket protocol. + +Check the `zlogs` configuration [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/zlogs.go) + +## Storage + +### `zdb` type +`zdb` is a storage primitive that gives you a persisted key-value store over the RESP protocol. Please check the [`zdb` docs](https://github.com/threefoldtech/0-db) + +Please check [here](https://github.com/threefoldtech/zos/blob/main/pkg/zdb/zdb.go) for the workload data. + +### `qsfs` type + +`qsfs`, short for `quantum safe file system`, is a FUSE filesystem which aims to support unlimited local storage with remote backends for offload and backup, and which cannot be broken even by a quantum computer. Please read about it [here](https://github.com/threefoldtech/quantum-storage) + +To create a `qsfs` workload you need to provide the workload data as defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/qsfsd/qsfs.go) + +## Gateway + +### `gateway-name-proxy` type + +This creates a proxy with the given name to the given backends. The `name` of the proxy must be owned by a name contract on the grid. The idea is that a user can reserve a name (e.g. `example`) and later deploy a gateway workload with the name `example` on any gateway node, pointing to the specified backends. The name is then used as a prefix of the gateway's domain.
For example, if the gateway domain is `gent0.freefarm.com` then your full FQDN is going to be `example.gent0.freefarm.com` + +The full name-proxy workload data is defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/gw_name.go) + +### `gateway-fqdn-proxy` type + +This creates a proxy with the given FQDN to the given backends. In this case the user must configure their DNS server (e.g. name.com) to point the FQDN to the correct node public IP. + +The full fqdn-proxy workload data is defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/gw_fqdn.go) diff --git a/collections/developers/internals/zos/manual/zdb/readme.md b/collections/developers/internals/zos/manual/zdb/readme.md new file mode 100644 index 0000000..45f88f4 --- /dev/null +++ b/collections/developers/internals/zos/manual/zdb/readme.md @@ -0,0 +1,4 @@ +# `zdb` type +`zdb` is a storage primitive that gives you a persisted key-value store over the RESP protocol. Please check the [`zdb` docs](https://github.com/threefoldtech/0-db) + +Please check [here](https://github.com/threefoldtech/zos/blob/main/pkg/zdb/zdb.go) for the workload data. diff --git a/collections/developers/internals/zos/manual/zlogs/readme.md b/collections/developers/internals/zos/manual/zlogs/readme.md new file mode 100644 index 0000000..b77ade4 --- /dev/null +++ b/collections/developers/internals/zos/manual/zlogs/readme.md @@ -0,0 +1,11 @@ +# `zlogs` type + +Zlogs is a utility workload that allows you to stream `zmachine` logs to a remote location. + +The `zlogs` workload needs to know which `zmachine` to stream logs from and the `target` location to stream them to. Internally, `zlogs` uses [`tailstream`](https://github.com/threefoldtech/tailstream), so it supports any streaming URL supported by that utility. + +The `zlogs` workload runs inside the same private network as the `zmachine` instance.
This means zlogs can stream logs to other `zmachines` running inside the same private network (possibly on different nodes). + +For example, you can run [`logagg`](https://github.com/threefoldtech/logagg), a web-socket server that speaks the `tailstream` web-socket protocol. + +Check the `zlogs` configuration [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/zlogs.go) diff --git a/collections/developers/internals/zos/manual/zmachine/cloud-console.md b/collections/developers/internals/zos/manual/zmachine/cloud-console.md new file mode 100644 index 0000000..f1e0324 --- /dev/null +++ b/collections/developers/internals/zos/manual/zmachine/cloud-console.md @@ -0,0 +1,14 @@ +# Cloud console + +- `cloud-console` is a tool to view machine logs and interact with the machine you have deployed +- It always runs on the gateway IP of the machine's private network, with a port number equal to `20000 + last octet` of the machine's private IP +- For example, if the machine IP is `10.20.2.2/24`, this means + - `cloud-console` is running on `10.20.2.1:20002` +- For the cloud-console to run, we need to start the cloud-hypervisor with the option `--serial pty` instead of tty; this allows us to interact with the vm from another process, `cloud-console` in our case +- To be able to connect to the web console, you should first start wireguard to connect to the private network + +``` +wg-quick up wireguard.conf +``` + +- Then point your browser at the network router IP and port, e.g. `10.20.2.1:20002` diff --git a/collections/developers/internals/zos/manual/zmachine/readme.md b/collections/developers/internals/zos/manual/zmachine/readme.md new file mode 100644 index 0000000..e94a0e7 --- /dev/null +++ b/collections/developers/internals/zos/manual/zmachine/readme.md @@ -0,0 +1,13 @@ +# `zmachine` type + +`zmachine` is a unified container/virtual machine type.
It can be used to start a virtual machine on a `zos` node given the following: +- an `flist`, which provides the base `vm` image or container image. + - the `flist` content is what determines the `zmachine` mode. An `flist` built from a docker image, or one that holds plain files and executable binaries, runs in container mode: `ZOS` injects its own `kernel+initramfs` to run the workload and kick-start the defined `flist` `entrypoint` +- a private network to join (with an assigned IP) +- an optional public `ipv4` or `ipv6` +- optional disks. At least one disk is required when running the `zmachine` in `vm` mode; it is used to hold the `vm` root image. + +For more details on all parameters needed to run a `zmachine`, please refer to the [`zmachine` data](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/zmachine.go) + +# Building your `flist` +Please refer to [this document](../manual.md) on how to build a compatible `zmachine` `flist` diff --git a/collections/developers/internals/zos/manual/zmachine/zmachine.md b/collections/developers/internals/zos/manual/zmachine/zmachine.md new file mode 100644 index 0000000..ddbd102 --- /dev/null +++ b/collections/developers/internals/zos/manual/zmachine/zmachine.md @@ -0,0 +1,410 @@ +# Zmachine + +A `Zmachine` is an instance of virtual compute capacity. There are 2 kinds of Zmachines. +One is a `VM`, standard in cloud environments. Next to this it can also be a `container`. +On the Zos level, both of these are implemented as virtual machines. Depending on +the context, it will be considered to be either a VM or a container. In either +scenario, the `Zmachine` is started from an `Flist`. + +> Note: both VM and Container on ZOS are actually served as Virtual Machines. The +only difference is that if you are running in VM mode, you only need to provide a **raw** +disk image (image.raw) in your flist. + +## Container + +A container is meant to host a `microservice`.
The `microservice` architecture generally +dictates that each service should be run in its own container (therefore providing +a level of isolation), and communicate with other containers it depends on over the +network. + +As with docker, in Zos a container is actually also run in a virtualized environment. +Some setup is done on behalf of the user; after this setup is done, +the user's `entrypoint` is started. + +It should be noted that a container has no control over the kernel +used to run it; if this is required, a `VM` should be used instead. Furthermore, +a container should ideally only have 1 process running. A container can be a single +binary, or a complete filesystem. In general, the first should be preferred, and +if you need the latter, it might be an indication that you actually want a `VM`. + +For containers, the network setup will be created for you. Your init process can +assume that it will be fully set up (according to the config you provided) by the +time it is started. Mountpoints will also be set up for you. The environment variables +passed will be available inside the container. + +## VM + +In container mode, zos provides a minimal kernel that is used to run a lightweight VM +and then run your app from your flist. If you need control over the kernel you can +still provide it inside the flist as follows: + +- /boot/vmlinuz +- /boot/initrd.img [optional] + +**NOTE**: if building your own kernel, the vmlinuz MUST be an EFI kernel (not compressed); you can use the [extract-vmlinux](https://github.com/torvalds/linux/blob/master/scripts/extract-vmlinux) script to extract the EFI kernel. To test whether your kernel is a valid ELF kernel, run the command +`readelf -n ` + +Any of those files can be a symlink to another file in the flist. + +If ZOS finds the `/boot/vmlinuz` file, it will use it together with the initrd.img, if that also exists; otherwise zos will use the built-in minimal kernel and run in `container` mode.
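The mode-selection rules above can be summarized in a small sketch. This is my own illustration, not the actual ZOS source; `boot_mode` is a hypothetical helper:

```shell
#!/bin/sh
# Hypothetical helper: given an extracted flist rootfs, guess which mode
# ZOS would boot it in, following the rules described above.
boot_mode() {
    root="$1"
    if [ -f "$root/image.raw" ]; then
        echo "vm"             # raw disk image: full VM mode
    elif [ -f "$root/boot/vmlinuz" ]; then
        echo "custom-kernel"  # user-supplied kernel (plus optional initrd.img)
    else
        echo "container"      # built-in kernel+initramfs, flist entrypoint
    fi
}
```

For instance, a rootfs that holds only an executable under `/bin` would report `container`.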
+ +### Building an ubuntu VM flist + +This is a guide to help you build a working VM flist. + +This guide is for ubuntu `jammy`. + +Prepare the rootfs + +```bash +mkdir ubuntu:jammy +``` + +Bootstrap ubuntu + +```bash +sudo debootstrap jammy ubuntu:jammy http://archive.ubuntu.com/ubuntu +``` + +This will create and download the basic rootfs for ubuntu jammy in the directory `ubuntu:jammy`. +After it's done, we can chroot into this directory to continue installing the necessary packages and configure +a few things. + +> I am using a script called `arch-chroot`, which is available by default on arch, but you can also install it on ubuntu to continue +the following steps + +```bash +sudo arch-chroot ubuntu:jammy +``` + +> This script (similar to the `chroot` command) switches root to the given directory, but also takes care of mounting /dev, /sys, etc. for you, +and cleans them up on exit. + +Next, remove the `resolv.conf` link and re-create it as a regular file with a valid nameserver to be able to continue + +```bash +# make sure to set the path correctly +export PATH=/usr/local/sbin/:/usr/local/bin/:/usr/sbin/:/usr/bin/:/sbin:/bin + +rm /etc/resolv.conf +echo 'nameserver 1.1.1.1' > /etc/resolv.conf +``` + +Install cloud-init + +```bash +apt-get update +apt-get install cloud-init openssh-server curl +# to make sure we have a clean setup +cloud-init clean +``` + +It is also really important that we install a kernel + +```bash +apt-get install linux-modules-extra-5.15.0-25-generic +``` + +> I chose this package because it also installs extra modules for us, along with a generic kernel + +Next, make sure that virtiofs is part of the initramfs image + +```bash +echo 'fs-virtiofs' >> /etc/initramfs-tools/modules +update-initramfs -c -k all +``` + +Clean up the package cache + +```bash +apt-get clean +``` + +The last thing we do inside the chroot, before we actually upload the flist, +is to make sure the kernel is in the correct format. + +This step does not require that we stay in the chroot, so hit `ctrl+d` or type `exit`. + +You should
be out of the arch-chroot now + +```bash +curl -O https://raw.githubusercontent.com/torvalds/linux/master/scripts/extract-vmlinux +chmod +x extract-vmlinux + +sudo ./extract-vmlinux ubuntu:jammy/boot/vmlinuz | sudo tee ubuntu:jammy/boot/vmlinuz-5.15.0-25-generic.elf > /dev/null +# then replace the original kernel +sudo mv ubuntu:jammy/boot/vmlinuz-5.15.0-25-generic.elf ubuntu:jammy/boot/vmlinuz-5.15.0-25-generic +``` + +To verify, you can do this: + +```bash +ls -l ubuntu:jammy/boot +``` + +and it should show something like + +```bash +total 101476 +-rw-r--r-- 1 root root 260489 Mar 30 2022 config-5.15.0-25-generic +drwxr-xr-x 1 root root 54 Jun 28 15:35 grub +lrwxrwxrwx 1 root root 28 Jun 28 15:35 initrd.img -> initrd.img-5.15.0-25-generic +-rw-r--r-- 1 root root 41392462 Jun 28 15:39 initrd.img-5.15.0-25-generic +lrwxrwxrwx 1 root root 28 Jun 28 15:35 initrd.img.old -> initrd.img-5.15.0-25-generic +-rw------- 1 root root 6246119 Mar 30 2022 System.map-5.15.0-25-generic +lrwxrwxrwx 1 root root 25 Jun 28 15:35 vmlinuz -> vmlinuz-5.15.0-25-generic +-rw-r--r-- 1 root root 55988436 Jun 28 15:50 vmlinuz-5.15.0-25-generic +lrwxrwxrwx 1 root root 25 Jun 28 15:35 vmlinuz.old -> vmlinuz-5.15.0-25-generic +``` + +Now package the tar for upload + +```bash +sudo rm -rf ubuntu:jammy/dev/* +sudo tar -czf ubuntu-jammy.tar.gz -C ubuntu:jammy . +``` + +Upload it to the hub, and use it to create a Zmachine. + +## VM Image [deprecated] + +In VM image mode, you run your own operating system (for now, only linux is supported). +The image provided must be + +- EFI bootable +- cloud-init enabled + +You can find later in this document how to create your own bootable image. + +A VM reservation must also have at least 1 volume, as the boot image +will be copied to this volume. The size of the root disk will be the size of this +volume. + +The image used to boot the VM must have cloud-init enabled on boot. Cloud-init +receives its config over the NoCloud source.
This takes care of setting up networking, hostname, +and root authorized_keys. + +> This method of building a full VM from a raw image is not recommended and will get phased out in +the future. It's better to use the container method to run containerized apps. Another option +is to run your own kernel from an flist (explained below) + +### Expected Flist structure + +A `Zmachine` will be considered a `VM` if it contains an `/image.raw` file. + +`/image.raw` is used as the "boot disk". This `/image.raw` is copied to the first attached +volume of the `VM`. Cloud-init will take care of resizing the filesystem on the image +to take the full disk size allocated in the deployment. + +Note that if the `image.raw` size is larger than the allocated disk, the workload for the VM +will fail. + +### Expected Flist structure (container) + +Any Flist will boot as a container, **UNLESS** it has a `/image.raw` file. There is +no need to specify a kernel yourself (it will be provided). + +### Known issues + +- We need to do proper performance testing for `virtio-fs`. There seems to be some + suboptimal performance right now. +- It's not currently possible to get container logs. +- TODO: more testing + +## Creating a VM image + +This is a simple tutorial on how to create your own VM image. +> Note: please consider checking the official vm images repo on the hub before building your own +image. This can save you a lot of time (and network traffic) + +### Use one of the ubuntu cloud-images + +If the ubuntu images in the official repo are not enough, you can simply upload one of the official images as follows + +- Visit +- Select the version you want (let's assume bionic) +- Go to bionic, then click on current +- Download the amd64.img file like this one +- This is a `Qcow2` image, which is not supported by zos.
So we need to convert it to a raw disk image using the following command + +```bash +qemu-img convert -p -f qcow2 -O raw bionic-server-cloudimg-amd64.img image.raw +``` + +- Now we have the raw image (image.raw); time to compress it and upload it to the hub + +```bash +tar -czf ubuntu-18.04-lts.tar.gz image.raw +``` + +- Now visit the hub, log in or create your own account, then click on the upload my file button +- Select the newly created tar.gz file +- Now you should be able to use this flist to create Zmachine workloads + +### Create an image from scratch + +This is an advanced scenario: you will require some prior knowledge of how to create local VMs, how to prepare the installation medium, +and how to install your OS of choice. + +Before we continue, you need to have some hypervisor that you can use locally; Libvirt/Qemu are good choices. We therefore skip over what you need to do to install and configure your hypervisor correctly, and over how to create the VM itself. + +#### VM Requirements + +Create a VM with enough CPU and memory to handle the installation process; note that this is unrelated to what your choices for CPU and memory are going to be for the actual VM running on the grid. + +We are going to install an arch linux image, so we will have to create a VM with: + +- A disk of about 2GB (note that this also is not related to the final VM running on the grid; on installation the OS image will eventually expand to use the entire allocated disk attached to the VM). The smaller the disk, the better; this can differ per OS. +- The arch installation iso, or any other installation medium, attached + +#### Boot the VM (locally) + +Boot the VM to start the installation. The boot must support EFI booting, because ZOS only supports images with an esp partition. So make sure that both your hypervisor and your boot/installation medium support this.
+ +For example, in Libvirt Manager make sure you are using the right firmware (UEFI) + +#### Installation + +We are going to follow the installation manual for Arch linux, but with slight tweaks: + +- Make sure the VM is booted with UEFI: run the `efivar -l` command and see if you get any output. Otherwise the machine is probably booted in BIOS mode. +- With `parted`, create 2 partitions + - an esp (boot) partition of 100M + - a root partition that spans the remainder of the disk + +```bash +DISK=/dev/vda +# First, create a gpt partition table +parted $DISK mklabel gpt +# Secondly, create the esp partition of 100M +parted $DISK mkpart primary 1 100M +# Mark the first partition as esp +parted $DISK set 1 esp on +# Use the remaining part as root that takes the remaining +# space on disk +parted $DISK mkpart primary 100M 100% + +# To verify everything is correct do +parted $DISK print + +# this should show 2 partitions: the first one slightly less than 100M with flags (boot, esp), the second one taking the remaining space +``` + +We need to format the partitions as follows: + +```bash +# this one has to be vfat with FAT size 32, as follows +mkfs.vfat -F 32 /dev/vda1 +# This one can be anything based on your preference, as long as it's supported by your OS kernel. We are going with ext4 in this tutorial +mkfs.ext4 -L cloud-root /dev/vda2 +``` + +Note the label assigned to the /dev/vda2 (root) partition; it can be anything, but it's needed to configure the boot later when installing the boot loader. Alternatively, you can use the partition UUID. + +Next, we need to mount the disks + +```bash +mount /dev/vda2 /mnt +mkdir /mnt/boot +mount /dev/vda1 /mnt/boot +``` + +After the disks are mounted as above, we need to start the installation + +```bash +pacstrap /mnt base linux linux-firmware vim openssh cloud-init cloud-guest-utils +``` + +This will install basic arch linux, but will also include cloud-init, cloud-guest-utils, openssh, and vim for convenience.
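Before continuing, it can help to double-check that the root and esp partitions are still mounted where the next steps expect them. A small sketch (a hypothetical helper of my own, not part of the Arch guide):

```shell
#!/bin/sh
# Hypothetical helper: verify that the rootfs and its esp are mounted at
# the paths the following steps (genfstab, arch-chroot) assume.
check_mounts() {
    base="${1:-/mnt}"
    for m in "$base" "$base/boot"; do
        # /proc/mounts lists every active mount point, space-delimited
        grep -qs " $m " /proc/mounts || { echo "not mounted: $m" >&2; return 1; }
    done
    echo "mounts ok"
}
```

Running `check_mounts /mnt` should print `mounts ok` before you move on.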
+ +Following the installation guide, generate the fstab file + +``` +genfstab -U /mnt >> /mnt/etc/fstab +``` + +Then arch-chroot into /mnt (`arch-chroot /mnt`) to continue the setup. Please follow all steps in the installation guide to set the timezone and locales as needed. + +- You don't have to set the hostname; this will be set up later on zos, via cloud-init, when the VM is deployed +- Let's drop the root password altogether, since login to the VM over ssh will require key authentication only. You can do this by running + +```bash +passwd -d root +``` + +We make sure the required services are enabled + +```bash +systemctl enable sshd +systemctl enable systemd-networkd +systemctl enable systemd-resolved +systemctl enable cloud-init +systemctl enable cloud-final + +# make sure we are using resolved +rm /etc/resolv.conf +ln -s /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf +``` + +Finally, install the boot loader as follows +> Only grub2 has been tested and is known to work. + +```bash +pacman -S grub +``` + +Then we need to install grub + +``` +grub-install --target=x86_64-efi --efi-directory=/boot --removable +``` + +Change the default values as follows + +``` +vim /etc/default/grub +``` + +And make sure to change `GRUB_CMDLINE_LINUX_DEFAULT` as follows + +``` +GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 console=tty console=ttyS0" +``` + +> Note: we removed `quiet` and added the console flags. + +Also set `GRUB_TIMEOUT` to 0 for a faster boot + +``` +GRUB_TIMEOUT=0 +``` + +Then finally generate the config + +``` +grub-mkconfig -o /boot/grub/grub.cfg +``` + +The last thing we need to do is clean up + +- the pacman cache, by running `rm -rf /var/cache/pacman/pkg` +- the cloud-init state, by running `cloud-init clean` + +Press `Ctrl+D` to exit the chroot, then power off by running the `poweroff` command.
+ +> NOTE: if you boot the machine again, you always need to run `cloud-init clean` again, as long as it's not yet deployed on ZOS; this makes sure the image has a clean state +> +#### Converting the disk + +Based on your hypervisor of choice, you might need to convert the disk to a `raw` image, the same way we did with the ubuntu image. + +```bash +# this is an optional step in case you used a qcow2 disk for the installation. If the disk is already `raw` you can skip this +qemu-img convert -p -f qcow2 -O raw /path/to/vm/disk.img image.raw +``` + +Compress and tar the image.raw as before, and upload it to the hub. + +``` +tar -czf arch-linux.tar.gz image.raw +``` diff --git a/collections/developers/internals/zos/manual/zmount/readme.md b/collections/developers/internals/zos/manual/zmount/readme.md new file mode 100644 index 0000000..e7de260 --- /dev/null +++ b/collections/developers/internals/zos/manual/zmount/readme.md @@ -0,0 +1,2 @@ +# `zmount` type +A `zmount` is a local disk that can be attached directly to a container or a virtual machine. `zmount` only requires a `size` as input, as defined [here](https://github.com/threefoldtech/zos/blob/main/pkg/gridtypes/zos/zmount.go). This workload type is only used via the `zmachine` workload. diff --git a/collections/developers/internals/zos/performance/cpubench.md b/collections/developers/internals/zos/performance/cpubench.md new file mode 100644 index 0000000..2d3f8a7 --- /dev/null +++ b/collections/developers/internals/zos/performance/cpubench.md @@ -0,0 +1,85 @@ +

# CPUBenchmark

## Table of Contents
+ +- [Overview](#overview) +- [Configuration](#configuration) +- [Details](#details) +- [Result Sample](#result-sample) +- [Result Explanation](#result-explanation) + +*** + +## Overview + +The `CPUBenchmark` task is designed to measure the performance of the CPU. It utilizes the [cpu-benchmark-simple](https://github.com/threefoldtech/cpu-benchmark-simple) tool and includes a zos stub to gather the number of workloads running on the node. + +## Configuration + +- Name: `cpu-benchmark` +- Schedule: 4 times a day +- Jitter: 0 + +## Details + +- The benchmark simply runs a `CRC64` computation task, calculates the time spent in the computation and reports it in `seconds`. +- The computation is performed in both single-threaded and multi-threaded scenarios. +- Lower time = better performance: a lower execution time indicates better performance. + +## Result Sample + +```json +{ + "description": "Measures the performance of the node CPU by reporting the time spent of computing a task in seconds.", + "name": "cpu-benchmark", + "result": { + "multi": 1.105, + "single": 1.135, + "threads": 1, + "workloads": 0 + }, + "timestamp": 1700504403 +} +``` + +## Result Explanation + +The best way to know what's a good or bad value is by testing and comparing different hardware.
+Here are some examples: + +**1x Intel(R) Xeon(R) W-2145 CPU @ 3.70GHz** (Q3'2017) + +``` +Single thread score: 0.777 +Multi threads score: 13.345 [16 threads] +``` + +**1x Intel(R) Pentium(R) CPU G4400 @ 3.30GHz** (Q3'2015) + +``` +Single thread score: 1.028 +Multi threads score: 2.089 [2 threads] +``` + +**1x Intel(R) Core(TM) i5-3570 CPU @ 3.40GHz** (Q2'2012) + +``` +Single thread score: 2.943 +Multi threads score: 12.956 [4 threads] +``` + +**2x Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz** (Q1'2012) + +``` +Single thread score: 1.298 +Multi threads score: 44.090 [32 threads] +``` + +**2x Intel(R) Xeon(R) CPU L5640 @ 2.27GHz** (Q1'2010) + +``` +Single thread score: 2.504 +Multi threads score: 72.452 [24 threads] +``` + +As you can see, the more recent the CPU is, the faster it is; and for the same launch period, Xeon CPUs do much better than regular desktop CPUs. You have to take into account both the number of threads and the time per thread. \ No newline at end of file diff --git a/collections/developers/internals/zos/performance/healthcheck.md b/collections/developers/internals/zos/performance/healthcheck.md new file mode 100644 index 0000000..a41c059 --- /dev/null +++ b/collections/developers/internals/zos/performance/healthcheck.md @@ -0,0 +1,38 @@ +

# Health Check

## Table of Contents
+ +- [Overview](#overview) +- [Configuration](#configuration) +- [Details](#details) +- [Result Sample](#result-sample) + +*** + +## Overview + +The health check task executes some checks over ZOS components to determine whether the node is in a usable state, and sets flags for the Power Daemon to stop uptime reports if the node is unusable. + +## Configuration + +- Name: `healthcheck` +- Schedule: every 20 mins + +## Details + +- Checks whether the node cache disk is usable by trying to write some data to it. If the write fails, the Readonly flag is set. + +## Result Sample + +```json +{ + "description": "health check task runs multiple checks to ensure the node is in a usable state and set flags for the power daemon to stop reporting uptime if it is not usable", + "name": "healthcheck", + "result": { + "cache": [ + "failed to write to cache: open /var/cache/healthcheck: operation not permitted" + ] + }, + "timestamp": 1701599580 +} +``` \ No newline at end of file diff --git a/collections/developers/internals/zos/performance/iperf.md b/collections/developers/internals/zos/performance/iperf.md new file mode 100644 index 0000000..d7a36dc --- /dev/null +++ b/collections/developers/internals/zos/performance/iperf.md @@ -0,0 +1,80 @@ +

# IPerf

## Table of Contents
+ +- [Overview](#overview) +- [Configuration](#configuration) +- [Details](#details) +- [Result Sample](#result-sample) + +*** + +## Overview + +The `iperf` package is designed to facilitate network performance testing using the `iperf3` tool, with both UDP and TCP over IPv4 and IPv6. + +## Configuration + +- Name: `iperf` +- Schedule: 4 times a day +- Jitter: 20 min + +## Details + +- The package uses the iperf binary to examine network performance under different conditions. +- It fetches PublicConfig data for randomly selected public nodes on the chain, plus all public nodes from the free farm. These nodes serve as the targets for the iperf tests. +- For each node, it runs the test 4 times: over UDP and TCP, using both node IPs (v4/v6). +- The result will be a slice of reports for all public nodes (4 per node); each report will include: + ``` + UploadSpeed: Upload speed (in bits per second). + DownloadSpeed: Download speed (in bits per second). + NodeID: ID of the node where the test was conducted. + NodeIpv4: IPv4 address of the node. + TestType: Type of the test (TCP or UDP). + Error: Any error encountered during the test. + CpuReport: CPU utilization report (in percentage).
+ ``` + +## Result Sample + +```json +{ + "description": "Test public nodes network performance with both UDP and TCP over IPv4 and IPv6", + "name": "iperf", + "result": [ + { + "cpu_report": { + "host_system": 2.4433388913571044, + "host_total": 3.542919199613454, + "host_user": 1.0996094859359695, + "remote_system": 0.24430594945859846, + "remote_total": 0.3854457128784448, + "remote_user": 0.14115962407747246 + }, + "download_speed": 1041274.4792242317, + "error": "", + "node_id": 124, + "node_ip": "88.99.30.200", + "test_type": "tcp", + "upload_speed": 1048549.3668460822 + }, + { + "cpu_report": { + "host_system": 0, + "host_total": 0, + "host_user": 0, + "remote_system": 0, + "remote_total": 0, + "remote_user": 0 + }, + "download_speed": 0, + "error": "unable to connect to server - server may have stopped running or use a different port, firewall issue, etc.: Network unreachable", + "node_id": 124, + "node_ip": "2a01:4f8:10a:710::2", + "test_type": "tcp", + "upload_speed": 0 + } + ], + "timestamp": 1700507035 +} +``` \ No newline at end of file diff --git a/collections/developers/internals/zos/performance/performance.md b/collections/developers/internals/zos/performance/performance.md new file mode 100644 index 0000000..7f3ea76 --- /dev/null +++ b/collections/developers/internals/zos/performance/performance.md @@ -0,0 +1,90 @@ +

# Performance Monitor Package

## Table of Contents
+ +- [Overview](#overview) +- [Flow](#flow) +- [Node Initialization Check](#node-initialization-check) +- [Scheduling](#scheduling) +- [RMB Commands](#rmb-commands) +- [Caching](#caching) +- [Registered Tests](#registered-tests) +- [Test Suite](#test-suite) + +*** + +## Overview + +The `perf` package is a performance monitor for `zos` nodes. It schedules tasks, caches their results and allows retrieval of these results through `RMB` calls. + +## Flow + +1. The `perf` monitor is started by the `noded` service in zos. +2. Tasks are registered with a schedule in the new monitor. +3. A bus handler is opened to allow result retrieval. + +## Node Initialization Check + +To ensure that the node always has a test result available, a check is performed on node startup for all the registered tasks; if a task doesn't have any stored result, it will run immediately without waiting for the next scheduled time. + +## Scheduling + +- Tasks are scheduled using a 6-field cron format. This format provides flexibility in defining times, allowing tasks to run periodically or at a specific time. + +- Each task has a jitter, which is the maximum number of seconds the task could sleep before it runs; this is done to prevent all tests from ending up running at exactly the same time. So, for example, if a task is scheduled to run at `06:00` and its jitter is `10`, it is expected to run anywhere between `06:00` and `06:10`. + +## RMB Commands + +- `zos.perf.get`: + + - Payload: a payload type that contains the name of the test + + ```go + type Payload struct { + Name string + } + ``` + + Possible values: + + - `"public-ip-validation"` + - `"cpu-benchmark"` + - `"iperf"` + + - Return: a single task result. + + - Possible Error: `ErrResultNotFound` if no result is stored for the given task. + +- `zos.perf.get_all`: + + - Return: all stored results + +The rmb direct client can be used to call these commands.
Check the [example](https://github.com/threefoldtech/tfgrid-sdk-go/blob/development/rmb-sdk-go/examples/rpc_client/main.go) + +## Caching + +Results are stored in a Redis server running on the node. + +The key in redis is the name of the task prefixed with the word `perf`. +The value is an instance of the `TaskResult` struct, which contains: + +- The name of the task +- The timestamp when the task was run +- A brief description of what the task does +- The actual result returned by the task + +Notes: + +- Storing results by key ensures each new result overrides the old one, so there is always a single result for each task. +- Storing results prefixed with `perf` eases retrieving all the results stored by this module. + +## Registered Tests + +- [Public IP Validation](./publicips.md) +- [CPUBenchmark](./cpubench.md) +- [IPerf](./iperf.md) +- [Health Check](./healthcheck.md) + +## Test Suite + +Go to [this link](https://app.testlodge.com/a/26076/projects/40893/suites/234919) for a test suite covering the test cases for performance testing. \ No newline at end of file diff --git a/collections/developers/internals/zos/performance/publicips.md b/collections/developers/internals/zos/performance/publicips.md new file mode 100644 index 0000000..0549512 --- /dev/null +++ b/collections/developers/internals/zos/performance/publicips.md @@ -0,0 +1,55 @@ +

# Public IPs Validation Task

## Table of Contents

- [Introduction](#introduction)
- [Configuration](#configuration)
- [Task Details](#task-details)
- [Result](#result)
  - [Result Sample](#result-sample)

***

## Introduction

The goal of the task is to make sure that the public IPs assigned to a farm are valid and can be assigned to deployments.

## Configuration

- Name: `public-ip-validation`
- Schedule: 4 times a day
- Jitter: 10 min

## Task Details

- The task depends on `Networkd` ensuring the proper test network setup, and it fails if the network wasn't set up properly. The network setup consists of a test namespace and a MacVLAN inside it. All steps are done inside the test namespace.
- Based on the node ID, decide whether this node or another one in the farm should run the task. The node with the lowest ID whose power target is up should run it. The other nodes log why they shouldn't run the task and return with no errors. This ensures that only one node runs the task, to avoid problems like assigning the same IP twice.
- Get the public IPs set on the farm.
- Remove all IPs and routes added to the test MacVLAN, to ensure any leftovers from a previous task run are removed.
- Skip IPs that are assigned to a contract.
- Set the MacVLAN link up.
- Iterate over all public IPs and add each of them, with the provided gateway, to the MacVLAN.
- Validate the IP by querying an external source that returns the public IP of the node.
- If the returned public IP matches the IP added to the link, the IP is valid. Otherwise, it is invalid.
- Remove all IPs and routes between iterations, to keep them available for other deployments.
- After iterating over all public IPs, set the link down.

## Result

The task returns a single map of string (IP) to `IPReport`. The report consists of the IP state (valid, invalid or skipped) and the reason for that state.
+ +### Result Sample + +```json +{ + "description": "Runs on the least NodeID node in a farm to validate all its IPs.", + "name": "public-ip-validation", + "result": { + "185.206.122.29/24": { + "reason": "public ip or gateway data are not valid", + "state": "invalid" + } + }, + "timestamp": 1700504421 +} +``` \ No newline at end of file diff --git a/collections/developers/internals/zos/readme.md b/collections/developers/internals/zos/readme.md new file mode 100644 index 0000000..33d16df --- /dev/null +++ b/collections/developers/internals/zos/readme.md @@ -0,0 +1,28 @@ +

# Zero-OS

## Table of Contents

- [Manual](./manual/manual.md)
- [Workload Types](./manual/workload_types.md)
- [Internal Modules](./internals/internals.md)
  - [Identity](./internals/identity/index.md)
    - [Node ID Generation](./internals/identity/identity.md)
    - [Node Upgrade](./internals/identity/upgrade.md)
  - [Node](./internals/node/index.md)
  - [Storage](./internals/storage/index.md)
  - [Network](./internals/network/index.md)
    - [Introduction](./internals/network/introduction.md)
    - [Definitions](./internals/network/definitions.md)
    - [Mesh](./internals/network/mesh.md)
    - [Setup](./internals/network/setup_farm_network.md)
  - [Flist](./internals/flist/index.md)
  - [Container](./internals/container/index.md)
  - [VM](./internals/vmd/index.md)
  - [Provision](./internals/provision/index.md)
- [Capacity](./internals/capacity.md)
- [Performance Monitor Package](./performance/performance.md)
  - [Public IPs Validation Task](./performance/publicips.md)
  - [CPUBenchmark](./performance/cpubench.md)
  - [IPerf](./performance/iperf.md)
  - [Health Check](./performance/healthcheck.md)
- [API](./manual/api.md)
diff --git a/collections/developers/internals/zos/release/readme.md b/collections/developers/internals/zos/release/readme.md
new file mode 100644
index 0000000..6af1a51
--- /dev/null
+++ b/collections/developers/internals/zos/release/readme.md
@@ -0,0 +1,31 @@

# Releases of Zero-OS

We use a simple pipeline release workflow. Building and file distribution are handled by GitHub Actions.
Usable files are available on the [Zero-OS Hub](https://hub.grid.tf/tf-zos).

This pipeline is made to match the three different running modes of 0-OS. For more information, head to the [upgrade documentation](../identity/upgrade.md).

## Development build

On a push to the main branch of the zos repository, a new development build is triggered.
If the build succeeds,
binaries are packed into an flist and uploaded to the [tf-autobuilder](https://hub.grid.tf/tf-autobuilder) repository of the hub.

This flist is then promoted into the [tf-zos](https://hub.grid.tf/tf-zos) repository of the hub, and a symlink to this latest build is created (`tf-autobuilder/zos:development-3:latest.flist`).

## Releases

We create 3 types of releases:

- QA release, where the version is suffixed with `qa`, for example `v3.5.0-qa1`.
- RC release, where the version is suffixed with `rc`, for example `v3.5.0-rc2`.
- Main release, where the version has no suffix, for example `v3.5.0`.

The release cycle goes like this:

- As mentioned before, devnet is updated the moment new code is available on the `main` branch. Since the `dev` release is auto-linked to the latest `flist` on the hub, nodes on devnet will auto-update to the latest available build.
- Creating a `qa` release will not trigger the same behavior on the `qa` net; the same goes for both testnet and mainnet. Instead, a workflow must be triggered; this is only to make 100% sure that an update is needed.
- Once the build of the release is available, a [deploy](../../.github/workflows/grid-deploy.yaml) workflow needs to be triggered with the right version to deploy on the proper network.
  - All the workflow does is link the right version under the hub [tf-zos](https://hub.grid.tf/tf-zos) repo.

> The `deploy` flow is rarely used; the on-chain update is also available. By setting the right version on tfchain, the link on the hub is auto-updated, hence the deploy workflow doesn't need to be triggered. We keep it as a safety net in case something goes wrong (the chain is broken) and we need to force a specific version on ZOS.

The latest flists for the different networks are the following:
+ +- Development: https://playground.hub.grid.tf/tf-autobuilder/zos:development-3:latest.flist +- Testing: https://playground.hub.grid.tf/tf-zos/zos:testing-3:latest.flist +- Production: https://playground.hub.grid.tf/tf-zos/zos:production-3:latest.flist diff --git a/collections/developers/javascript/grid3_javascript_capacity_planning.md b/collections/developers/javascript/grid3_javascript_capacity_planning.md new file mode 100644 index 0000000..f1db3cc --- /dev/null +++ b/collections/developers/javascript/grid3_javascript_capacity_planning.md @@ -0,0 +1,110 @@ +

# Capacity Planning

## Table of Contents

- [Introduction](#introduction)
- [Example](#example)

***

## Introduction

Capacity planning is almost the same as [deploying a single VM](../javascript/grid3_javascript_vm.md); the only difference is that you can automate the choice of the node to deploy on using code. We now support `FilterOptions` to filter nodes based on specific criteria, e.g. the node resources (CRU, SRU, HRU, MRU), being part of a specific farm, being located in some country, or being a gateway or not.

## Example

```ts
FilterOptions: {
  accessNodeV4?: boolean;
  accessNodeV6?: boolean;
  city?: string;
  country?: string;
  cru?: number;
  hru?: number;
  mru?: number;
  sru?: number;
  farmId?: number;
  farmName?: string;
  gateway?: boolean;
  publicIPs?: boolean;
  certified?: boolean;
  dedicated?: boolean;
  availableFor?: number;
  page?: number;
}
```

```ts
import { DiskModel, FilterOptions, MachineModel, MachinesModel, NetworkModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";

async function main() {
  const grid3 = await getClient();

  // create network Object
  const n = new NetworkModel();
  n.name = "dynamictest";
  n.ip_range = "10.249.0.0/16";

  // create disk Object
  const disk = new DiskModel();
  disk.name = "dynamicDisk";
  disk.size = 8;
  disk.mountpoint = "/testdisk";

  const vmQueryOptions: FilterOptions = {
    cru: 1,
    mru: 2, // GB
    sru: 9,
    country: "Belgium",
    availableFor: grid3.twinId,
  };

  // create vm node Object
  const vm = new MachineModel();
  vm.name = "testvm";
  vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; // TODO: allow random choice
  vm.disks = [disk];
  vm.public_ip = false;
  vm.planetary = true;
  vm.cpu = 1;
  vm.memory = 1024 * 2;
  vm.rootfs_size = 0;
  vm.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist";
  vm.entrypoint = "/sbin/zinit init";
  vm.env = {
    SSH_KEY: config.ssh_key,
  };

  // create VMs Object
  const vms = new MachinesModel();
  vms.name = "dynamicVMS";
  vms.network = n;
  vms.machines = [vm];
  vms.metadata = "{'testVMs': true}";
  vms.description = "test deploying VMs via ts grid3 client";

  // deploy vms
  const res = await grid3.machines.deploy(vms);
  log(res);

  // get the deployment
  const l = await grid3.machines.getObj(vms.name);
  log(l);

  // // delete
  // const d = await grid3.machines.delete({ name: vms.name });
  // log(d);

  await grid3.disconnect();
}

main();
```

In this example, notice the filter criteria in `vmQueryOptions`:

```typescript
const vmQueryOptions: FilterOptions = {
  cru: 1,
  mru: 2, // GB
  sru: 9,
  country: "Belgium",
  availableFor: grid3.twinId,
};
```

Here we want all the nodes with `CRU: 1`, `MRU: 2`, `SRU: 9`, located in `Belgium` and available for us (not rented by someone else).

> Note: some libraries allow reverse lookup of country codes by name, e.g. [i18n-iso-countries](https://www.npmjs.com/package/i18n-iso-countries).

Then, in the `MachineModel`, we set the `node_id` to the first value returned by our filtering:

```typescript
vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId;
```
diff --git a/collections/developers/javascript/grid3_javascript_caprover.md b/collections/developers/javascript/grid3_javascript_caprover.md
new file mode 100644
index 0000000..1b1e1e3
--- /dev/null
+++ b/collections/developers/javascript/grid3_javascript_caprover.md
@@ -0,0 +1,232 @@

# Deploy CapRover

## Table of Contents

+ +- [Introduction](#introduction) +- [Leader Node](#leader-node) + - [Code Example](#code-example) + - [Environment Variables](#environment-variables) +- [Worker Node](#worker-node) + - [Code Example](#code-example-1) + - [Environment Variables](#environment-variables-1) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this section, we show how to deploy CapRover with the Javascript client. + +This deployment is very similar to what we have in the section [Deploy a VM](./grid3_javascript_vm.md), but the environment variables are different. + +## Leader Node + +We present here a code example and the environment variables to deploy a CapRover Leader node. + +For further details about the Leader node deployment, [read this documentation](https://github.com/freeflowuniverse/freeflow_caprover#a-leader-node-deploymentsetup). + +### Code Example + +```ts +import { + DiskModel, + FilterOptions, + MachineModel, + MachinesModel, + NetworkModel, +} from "../src"; +import { config, getClient } from "./client_loader"; +import { log } from "./utils"; + +async function main() { + const grid3 = await getClient(); + + const vmQueryOptions: FilterOptions = { + cru: 4, + mru: 4, // GB + sru: 10, + farmId: 1, + }; + + const CAPROVER_FLIST = + "https://hub.grid.tf/tf-official-apps/tf-caprover-latest.flist"; + // create network Object + const n = new NetworkModel(); + n.name = "wedtest"; + n.ip_range = "10.249.0.0/16"; + + // create disk Object + const disk = new DiskModel(); + disk.name = "wedDisk"; + disk.size = 10; + disk.mountpoint = "/var/lib/docker"; + + // create vm node Object + const vm = new MachineModel(); + vm.name = "testvm"; + vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; + vm.disks = [disk]; + vm.public_ip = true; + vm.planetary = false; + vm.cpu = 4; + vm.memory = 1024 * 4; + vm.rootfs_size = 0; + vm.flist = CAPROVER_FLIST; + vm.entrypoint = "/sbin/zinit init"; + vm.env = { + PUBLIC_KEY: config.ssh_key, + 
    SWM_NODE_MODE: "leader",
    CAPROVER_ROOT_DOMAIN: "rafy.grid.tf", // update me
    DEFAULT_PASSWORD: "captain42",
    CAPTAIN_IMAGE_VERSION: "latest",
  };

  // create VMs Object
  const vms = new MachinesModel();
  vms.name = "newVMS5";
  vms.network = n;
  vms.machines = [vm];
  vms.metadata = "{'testVMs': true}";
  vms.description = "caprover leader machine/node";

  // deploy vms
  const res = await grid3.machines.deploy(vms);
  log(res);

  // get the deployment
  const l = await grid3.machines.getObj(vms.name);
  log(l);

  log(
    `You can access Caprover via the browser using: https://captain.${vm.env.CAPROVER_ROOT_DOMAIN}`
  );

  // // delete
  // const d = await grid3.machines.delete({ name: vms.name });
  // log(d);

  await grid3.disconnect();
}

main();
```

### Environment Variables

- PUBLIC_KEY: Your public SSH key, used to access the VM.
- SWM_NODE_MODE: The CapRover node type, which must be `leader` as we are deploying a leader node.
- CAPROVER_ROOT_DOMAIN: The domain that will be used to bind the deployed VM.
- DEFAULT_PASSWORD: The CapRover default password you want to deploy with.

## Worker Node

We present here a code example and the environment variables to deploy a CapRover Worker node.

Note that before deploying the Worker node, you should check the following:

- Get the Leader node public IP address.
- The Worker node should join the cluster from the UI by adding its public IP address and the private SSH key.

For further information, [read this documentation](https://github.com/freeflowuniverse/freeflow_caprover#step-4-access-the-captain-dashboard).
+ +### Code Example + +```ts +import { + DiskModel, + FilterOptions, + MachineModel, + MachinesModel, + NetworkModel, +} from "../src"; +import { config, getClient } from "./client_loader"; +import { log } from "./utils"; + +async function main() { + const grid3 = await getClient(); + + const vmQueryOptions: FilterOptions = { + cru: 4, + mru: 4, // GB + sru: 10, + farmId: 1, + }; + + const CAPROVER_FLIST = + "https://hub.grid.tf/tf-official-apps/tf-caprover-latest.flist"; + // create network Object + const n = new NetworkModel(); + n.name = "wedtest"; + n.ip_range = "10.249.0.0/16"; + + // create disk Object + const disk = new DiskModel(); + disk.name = "wedDisk"; + disk.size = 10; + disk.mountpoint = "/var/lib/docker"; + + // create vm node Object + const vm = new MachineModel(); + vm.name = "capworker1"; + vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; + vm.disks = [disk]; + vm.public_ip = true; + vm.planetary = false; + vm.cpu = 4; + vm.memory = 1024 * 4; + vm.rootfs_size = 0; + vm.flist = CAPROVER_FLIST; + vm.entrypoint = "/sbin/zinit init"; + vm.env = { + // These env. vars needed to be changed based on the leader node. 
    PUBLIC_KEY: config.ssh_key,
    SWM_NODE_MODE: "worker",
    LEADER_PUBLIC_IP: "185.206.122.157",
    CAPTAIN_IMAGE_VERSION: "latest",
  };

  // create VMs Object
  const vms = new MachinesModel();
  vms.name = "newVMS6";
  vms.network = n;
  vms.machines = [vm];
  vms.metadata = "{'testVMs': true}";
  vms.description = "caprover worker machine/node";

  // deploy vms
  const res = await grid3.machines.deploy(vms);
  log(res);

  // get the deployment
  const l = await grid3.machines.getObj(vms.name);
  log(l);

  // // delete
  // const d = await grid3.machines.delete({ name: vms.name });
  // log(d);

  await grid3.disconnect();
}

main();
```

### Environment Variables

The deployment of the Worker node is similar to the deployment of the Leader node, with the exception of the environment variables, which differ slightly.

- PUBLIC_KEY: Your public SSH key, used to access the VM.
- SWM_NODE_MODE: The CapRover node type, which must be `worker` as we are deploying a worker node.
- LEADER_PUBLIC_IP: The Leader node public IP.

## Questions and Feedback

If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
\ No newline at end of file
diff --git a/collections/developers/javascript/grid3_javascript_gpu_support.md b/collections/developers/javascript/grid3_javascript_gpu_support.md
new file mode 100644
index 0000000..4c56ca6
--- /dev/null
+++ b/collections/developers/javascript/grid3_javascript_gpu_support.md
@@ -0,0 +1,91 @@

# GPU Support and JavaScript

## Table of Contents

- [Introduction](#introduction)
- [Example](#example)

***

## Introduction

We present here a quick introduction to GPU support with JavaScript.

There are a couple of updates regarding finding nodes with GPUs, querying a node for GPU information, and deploying with GPU support.

This is an ongoing development and this section will be updated as new information comes in.

## Example

Here is an example script to deploy with GPU support:

```ts
import { DiskModel, FilterOptions, MachineModel, MachinesModel, NetworkModel } from "../src";
import { config, getClient } from "./client_loader";
import { log } from "./utils";

async function main() {
  const grid3 = await getClient();

  // create network Object
  const n = new NetworkModel();
  n.name = "vmgpuNetwork";
  n.ip_range = "10.249.0.0/16";

  // create disk Object
  const disk = new DiskModel();
  disk.name = "vmgpuDisk";
  disk.size = 100;
  disk.mountpoint = "/testdisk";

  const vmQueryOptions: FilterOptions = {
    cru: 8,
    mru: 16, // GB
    sru: 100,
    availableFor: grid3.twinId,
    hasGPU: true,
    rentedBy: grid3.twinId,
  };

  // create vm node Object
  const vm = new MachineModel();
  vm.name = "vmgpu";
  vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; // TODO: allow random choice
  vm.disks = [disk];
  vm.public_ip = false;
  vm.planetary = true;
  vm.cpu = 8;
  vm.memory = 1024 * 16;
  vm.rootfs_size = 0;
  vm.flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist";
  vm.entrypoint = "/";
  vm.env = {
    SSH_KEY: config.ssh_key,
  };
  vm.gpu = ["0000:0e:00.0/1002/744c"]; // the GPU card's ID; you can check the available GPUs from the dashboard

  // create VMs Object
  const vms = new MachinesModel();
  vms.name = "vmgpu";
  vms.network = n;
  vms.machines = [vm];
  vms.metadata = "";
  vms.description = "test deploying VM with GPU via ts grid3 client";

  // deploy vms
  const res = await grid3.machines.deploy(vms);
  log(res);

  // get the deployment
  const l = await grid3.machines.getObj(vms.name);
  log(l);

  // delete
  const d = await grid3.machines.delete({ name: vms.name });
  log(d);

  await grid3.disconnect();
}

main();
```
\ No newline at end of file
diff --git a/collections/developers/javascript/grid3_javascript_installation.md b/collections/developers/javascript/grid3_javascript_installation.md
new file mode 100644
index 0000000..3040880
--- /dev/null
+++ b/collections/developers/javascript/grid3_javascript_installation.md
@@ -0,0 +1,124 @@

# Installation

## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Installation](#installation) + - [External Package](#external-package) + - [Local Usage](#local-usage) +- [Getting Started](#getting-started) + - [Client Configuration](#client-configuration) +- [Generate the Documentation](#generate-the-documentation) +- [How to Run the Scripts](#how-to-run-the-scripts) +- [Reference API](#reference-api) + +*** + +## Introduction + +We present here the general steps required to install and use the ThreeFold Grid Client. + +The [Grid Client](https://github.com/threefoldtech/tfgrid-sdk-ts/tree/development/packages/grid_client) is written using [TypeScript](https://www.typescriptlang.org/) to provide more convenience and type-checked code. It is used to deploy workloads like virtual machines, kubernetes clusters, quantum storage, and more. + +## Prerequisites + +To install the Grid Client, you will need the following on your machine: + +- [Node.js](https://nodejs.org/en) ^18 +- npm 8.2.0 or higher +- may need to install libtool (**apt-get install libtool**) + +> Note: [nvm](https://nvm.sh/) is the recommended way for installing node. + +To use the Grid Client, you will need the following on the TFGrid: + +- A TFChain account +- TFT in your wallet + +If it is not the case, please visit the [Get started section](../../system_administrators/getstarted/tfgrid3_getstarted.md). + +## Installation + +### External Package + +To install the external package, simply run the following command: + +```bash +yarn add @threefold/grid_client +``` + +> Note: For the **qa**, **test** and **main** networks, please use @2.1.1 version. 
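For example, pinning that version explicitly in your project's package.json would look like this (a sketch; your other dependencies will differ):

```json
{
  "dependencies": {
    "@threefold/grid_client": "2.1.1"
  }
}
```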
### Local Usage

To use the Grid Client locally, clone the repository and then install the Grid Client:

- Clone the repository
  - ```bash
    git clone https://github.com/threefoldtech/tfgrid-sdk-ts
    ```
- Install the Grid Client
  - With yarn
    - ```bash
      yarn install
      ```
  - With npm
    - ```bash
      npm install
      ```

> Note: In the directory **grid_client/scripts**, we provide a set of scripts to test the Grid Client.

## Getting Started

You will need to set the client configuration, either by setting the JSON file manually (**scripts/config.json**) or by using the provided script (**scripts/client_loader.ts**).

### Client Configuration

Make sure to set the client configuration properly before using the Grid Client.

- **network**: The network environment (**dev**, **qa**, **test** or **main**).

- **mnemonic**: The 12-word mnemonic for your account.
  - Learn how to create one [here](../../dashboard/wallet_connector.md).

- **storeSecret**: Any word that will be used for encrypting/decrypting the keys on the ThreeFold key-value store.

- **ssh_key**: The public SSH key set on your machine.

> Note: Networks are the only resource that can't be isolated; all projects can see the same network.

## Generate the Documentation

The easiest way to test the installation is to run the following command with either yarn or npm to generate the Grid Client documentation:

* With yarn
  * ```
    yarn run serve-docs
    ```
* With npm
  * ```
    npm run serve-docs
    ```

> Note: You can also use the command **yarn run** to see all available options.

## How to Run the Scripts

You can explore the Grid Client by testing the different scripts proposed in **grid_client/scripts**.
+ +- Update your customized deployments specs if needed +- Run using [ts-node](https://www.npmjs.com/ts-node) + - With yarn + - ```bash + yarn run ts-node --project tsconfig-node.json scripts/zdb.ts + ``` + - With npx + - ```bash + npx ts-node --project tsconfig-node.json scripts/zdb.ts + ``` + +## Reference API + +While this is still a work in progress, you can have a look [here](https://threefoldtech.github.io/tfgrid-sdk-ts/packages/grid_client/docs/api/index.html). diff --git a/collections/developers/javascript/grid3_javascript_kubernetes.md b/collections/developers/javascript/grid3_javascript_kubernetes.md new file mode 100644 index 0000000..645f3e2 --- /dev/null +++ b/collections/developers/javascript/grid3_javascript_kubernetes.md @@ -0,0 +1,186 @@ +

# Deploying a Kubernetes Cluster

## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Example code](#example-code) +- [Detailed explanation](#detailed-explanation) + - [Building network](#building-network) + - [Building nodes](#building-nodes) + - [Building cluster](#building-cluster) + - [Deploying](#deploying) + - [Getting deployment information](#getting-deployment-information) + - [Deleting deployment](#deleting-deployment) + +*** + +## Introduction + +We show how to deploy a Kubernetes cluster on the TFGrid with the Javascript client. + +## Prerequisites + +- Make sure you have your [client](./grid3_javascript_loadclient.md) prepared + +## Example code + +```ts +import { FilterOptions, K8SModel, KubernetesNodeModel, NetworkModel } from "../src"; +import { config, getClient } from "./client_loader"; +import { log } from "./utils"; + +async function main() { + const grid3 = await getClient(); + + // create network Object + const n = new NetworkModel(); + n.name = "monNetwork"; + n.ip_range = "10.238.0.0/16"; + n.addAccess = true; + + const masterQueryOptions: FilterOptions = { + cru: 2, + mru: 2, // GB + sru: 2, + availableFor: grid3.twinId, + farmId: 1, + }; + + const workerQueryOptions: FilterOptions = { + cru: 1, + mru: 1, // GB + sru: 1, + availableFor: grid3.twinId, + farmId: 1, + }; + + // create k8s node Object + const master = new KubernetesNodeModel(); + master.name = "master"; + master.node_id = +(await grid3.capacity.filterNodes(masterQueryOptions))[0].nodeId; + master.cpu = 1; + master.memory = 1024; + master.rootfs_size = 0; + master.disk_size = 1; + master.public_ip = false; + master.planetary = true; + + // create k8s node Object + const worker = new KubernetesNodeModel(); + worker.name = "worker"; + worker.node_id = +(await grid3.capacity.filterNodes(workerQueryOptions))[0].nodeId; + worker.cpu = 1; + worker.memory = 1024; + worker.rootfs_size = 0; + worker.disk_size = 1; + worker.public_ip = false; + worker.planetary = true; + + // create k8s Object + const 
k = new K8SModel(); + k.name = "testk8s"; + k.secret = "secret"; + k.network = n; + k.masters = [master]; + k.workers = [worker]; + k.metadata = "{'testk8s': true}"; + k.description = "test deploying k8s via ts grid3 client"; + k.ssh_key = config.ssh_key; + + // deploy + const res = await grid3.k8s.deploy(k); + log(res); + + // get the deployment + const l = await grid3.k8s.getObj(k.name); + log(l); + + // // delete + // const d = await grid3.k8s.delete({ name: k.name }); + // log(d); + + await grid3.disconnect(); +} + +main(); +``` + +## Detailed explanation + +### Building network + +```typescript +// create network Object +const n = new NetworkModel(); +n.name = "monNetwork"; +n.ip_range = "10.238.0.0/16"; + +``` + +### Building nodes + +```typescript +// create k8s node Object +const master = new KubernetesNodeModel(); +master.name = "master"; +master.node_id = +(await grid3.capacity.filterNodes(masterQueryOptions))[0].nodeId; +master.cpu = 1; +master.memory = 1024; +master.rootfs_size = 0; +master.disk_size = 1; +master.public_ip = false; +master.planetary = true; + + // create k8s node Object +const worker = new KubernetesNodeModel(); +worker.name = "worker"; +worker.node_id = +(await grid3.capacity.filterNodes(workerQueryOptions))[0].nodeId; +worker.cpu = 1; +worker.memory = 1024; +worker.rootfs_size = 0; +worker.disk_size = 1; +worker.public_ip = false; +worker.planetary = true; + +``` + +### Building cluster + +Here we specify the cluster project name, cluster secret, network model to be used, master and workers nodes and sshkey to access them + +```ts +// create k8s Object +const k = new K8SModel(); +k.name = "testk8s"; +k.secret = "secret"; +k.network = n; +k.masters = [master]; +k.workers = [worker]; +k.metadata = "{'testk8s': true}"; +k.description = "test deploying k8s via ts grid3 client"; +k.ssh_key = config.ssh_key; +``` + +### Deploying + +use `deploy` function to deploy the kubernetes project + +```ts +const res = await grid3.k8s.deploy(k); 
+log(res); +``` + +### Getting deployment information + +```ts +const l = await grid3.k8s.getObj(k.name); +log(l); +``` + +### Deleting deployment + +```ts +const d = await grid3.k8s.delete({ name: k.name }); +log(d); +``` diff --git a/collections/developers/javascript/grid3_javascript_kvstore.md b/collections/developers/javascript/grid3_javascript_kvstore.md new file mode 100644 index 0000000..5075086 --- /dev/null +++ b/collections/developers/javascript/grid3_javascript_kvstore.md @@ -0,0 +1,101 @@ +

# Using TFChain KVStore

## Table of Contents

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Example code](#example-code)
  - [setting values](#setting-values)
  - [getting key](#getting-key)
  - [listing keys](#listing-keys)
  - [deleting key](#deleting-key)

***

## Introduction

As part of tfchain, we support a key-value store module that can be used for any value within the `2KB` range. Practically, it's used to save the user's configuration state, so it can be built up again on any machine, given the same mnemonics and the same secret.

## Prerequisites

- Make sure you have your [client](./grid3_javascript_loadclient.md) prepared

## Example code

```ts
import { getClient } from "./client_loader";
import { log } from "./utils";

/*
KVStore example usage:
*/
async function main() {
  // For creating a grid3 client with KVStore, you need to specify the KVStore storage type in the param:

  const gridClient = await getClient();

  // then every module will use the KVStore to save its configuration and restore it.

  // also you can use it like this:
  const db = gridClient.kvstore;

  // set key
  const key = "hamada";
  const exampleObj = {
    key1: "value1",
    key2: 2,
  };
  // set key
  await db.set({ key, value: JSON.stringify(exampleObj) });

  // list all the keys
  const keys = await db.list();
  log(keys);

  // get the key
  const data = await db.get({ key });
  log(JSON.parse(data));

  // remove the key
  await db.remove({ key });

  await gridClient.disconnect();
}

main();
```

### setting values

`db.set` is used to set a key to any value `serialized as string`.

```ts
await db.set({ key, value: JSON.stringify(exampleObj) });
```

### getting key

`db.get` is used to get a specific key.

```ts
const data = await db.get({ key });
log(JSON.parse(data));
```

### listing keys

`db.list` is used to list all the keys.
+ +```ts +const keys = await db.list(); +log(keys); +``` + +### deleting key + +`db.remove` is used to delete a specific key. + +```ts +await db.remove({ key }); +``` diff --git a/collections/developers/javascript/grid3_javascript_loadclient.md b/collections/developers/javascript/grid3_javascript_loadclient.md new file mode 100644 index 0000000..fc7c025 --- /dev/null +++ b/collections/developers/javascript/grid3_javascript_loadclient.md @@ -0,0 +1,68 @@ +

# Grid3 Client

## Table of Contents

- [Introduction](#introduction)
- [Client Configurations](#client-configurations)
- [Creating/Initializing The Grid3 Client](#creatinginitializing-the-grid3-client)
- [What is `rmb-rs` | Reliable Message Bus --rust](#what-is-rmb-rs--reliable-message-bus---rust)
- [Grid3 Client Options](#grid3-client-options)

## Introduction

Grid3 Client is a client used for deploying workloads (VMs, ZDBs, k8s, etc.) on the TFGrid.

## Client Configurations

You have to set up your configuration file like this:

```json
{
  "network": "dev",
  "mnemonic": "",
  "storeSecret": "secret",
  "ssh_key": ""
}
```

## Creating/Initializing The Grid3 Client

```ts
async function getClient(): Promise<GridClient> {
  const gridClient = new GridClient({
    network: "dev", // can be dev, qa, test, main, or custom
    mnemonic: "",
  });
  await gridClient.connect();

  return gridClient;
}
```

The Grid Client uses the `rmb-rs` tool to send requests to/from nodes.

## What is `rmb-rs` | Reliable Message Bus --rust

Reliable Message Bus is a secure communication channel that allows bots to communicate together in a chat-like way. It makes it very easy to host a service or a set of functions to be used by anyone, even if your service is running behind NAT.

Out of the box, RMB provides the following:

- Guaranteed authenticity of the messages: you are always sure that the received message is from whoever it claims to be from.
- End-to-end encryption.
- Support for third-party hosted relays: anyone can host a relay and people can use it safely, since there is no way messages can be inspected while using e2e encryption. That's similar to Matrix home servers.

## Grid3 Client Options

- network: `dev` for devnet, `test` for testnet
- mnemonics: used for signing the requests.
- storeSecret: used to encrypt data while storing it in the backend. It's any word that will be used for encrypting/decrypting the keys on the ThreeFold key-value store.
If left empty, the Grid client will use the mnemonics as the storeSecret. +- BackendStorage: can be `auto`, which will automatically use the `filesystem backend` when running in a Node environment or the `localstorage backend` when running in a browser environment. You can also set it to `kvstore` to use the tfchain key-value store module. +- keypairType: defaults to `sr25519`; most likely you will never need to change it. `ed25519` is supported too. + +For more details, check [client options](https://github.com/threefoldtech/tfgrid-sdk-ts/blob/development/packages/grid_client/docs/client_configuration.md) + +> Note: The choice of the node is completely up to the user at this point. They need to do the capacity planning. Check [Node Finder](../../dashboard/deploy/node_finder.md) to know which nodes fit your deployment criteria. + +Check the document on [capacity planning using code](../javascript/grid3_javascript_capacity_planning.md) if you want to automate it. +> Note: this feature is still experimental diff --git a/collections/developers/javascript/grid3_javascript_qsfs.md b/collections/developers/javascript/grid3_javascript_qsfs.md new file mode 100644 index 0000000..df6b02a --- /dev/null +++ b/collections/developers/javascript/grid3_javascript_qsfs.md @@ -0,0 +1,297 @@ +

Deploying a VM with QSFS

+ +

Table of Contents

+ +- [Prerequisites](#prerequisites) +- [Code Example](#code-example) +- [Detailed Explanation](#detailed-explanation) + - [Getting the Client](#getting-the-client) + - [Preparing QSFS](#preparing-qsfs) + - [Deploying a VM with QSFS](#deploying-a-vm-with-qsfs) + - [Getting the Deployment Information](#getting-the-deployment-information) + - [Deleting a Deployment](#deleting-a-deployment) + +*** + +## Prerequisites + +First, make sure that you have your [client](./grid3_javascript_loadclient.md) prepared. + +## Code Example + +```ts +import { FilterOptions, MachinesModel, QSFSZDBSModel } from "../src"; +import { config, getClient } from "./client_loader"; +import { log } from "./utils"; + +async function main() { + const grid3 = await getClient(); + + const qsfs_name = "wed2710q1"; + const machines_name = "wed2710t1"; + + const vmQueryOptions: FilterOptions = { + cru: 1, + mru: 1, // GB + sru: 1, + availableFor: grid3.twinId, + farmId: 1, + }; + + const qsfsQueryOptions: FilterOptions = { + hru: 6, + availableFor: grid3.twinId, + farmId: 1, + }; + + const qsfsNodes = []; + + const allNodes = await grid3.capacity.filterNodes(qsfsQueryOptions); + if (allNodes.length >= 2) { + qsfsNodes.push(+allNodes[0].nodeId, +allNodes[1].nodeId); + } else { + throw Error("Couldn't find nodes for qsfs"); + } + + const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; + + const qsfs: QSFSZDBSModel = { + name: qsfs_name, + count: 8, + node_ids: qsfsNodes, + password: "mypassword", + disk_size: 1, + description: "my qsfs test", + metadata: "", + }; + + const vms: MachinesModel = { + name: machines_name, + network: { + name: "wed2710n1", + ip_range: "10.201.0.0/16", + }, + machines: [ + { + name: "wed2710v1", + node_id: vmNode, + disks: [ + { + name: "wed2710d1", + size: 1, + mountpoint: "/mydisk", + }, + ], + qsfs_disks: [ + { + qsfs_zdbs_name: qsfs_name, + name: "wed2710d2", + minimal_shards: 2, + expected_shards: 4, + encryption_key: "hamada", + prefix: 
"hamada", + cache: 1, + mountpoint: "/myqsfsdisk", + }, + ], + public_ip: false, + public_ip6: false, + planetary: true, + cpu: 1, + memory: 1024, + rootfs_size: 0, + flist: "https://hub.grid.tf/tf-official-apps/base:latest.flist", + entrypoint: "/sbin/zinit init", + env: { + SSH_KEY: config.ssh_key, + }, + }, + ], + metadata: "{'testVMs': true}", + description: "test deploying VMs via ts grid3 client", + }; + + async function cancel(grid3) { + // delete + const d = await grid3.machines.delete({ name: machines_name }); + log(d); + const r = await grid3.qsfs_zdbs.delete({ name: qsfs_name }); + log(r); + } + //deploy qsfs + const res = await grid3.qsfs_zdbs.deploy(qsfs); + log(">>>>>>>>>>>>>>>QSFS backend has been created<<<<<<<<<<<<<<<"); + log(res); + + const vm_res = await grid3.machines.deploy(vms); + log(">>>>>>>>>>>>>>>vm has been created<<<<<<<<<<<<<<<"); + log(vm_res); + + // get the deployment + const l = await grid3.machines.getObj(vms.name); + log(">>>>>>>>>>>>>>>Deployment result<<<<<<<<<<<<<<<"); + log(l); + + // await cancel(grid3); + + await grid3.disconnect(); +} + +main(); +``` + +## Detailed Explanation + +We present a detailed explanation of the example shown above. 
+ +### Getting the Client + +```ts +const grid3 = await getClient(); +``` + +### Preparing QSFS + +```ts +const qsfs_name = "wed2710q1"; +const machines_name = "wed2710t1"; +``` + +Here we prepare some names to use across the client for the QSFS and the machines project. + +```ts + const qsfsQueryOptions: FilterOptions = { + hru: 6, + availableFor: grid3.twinId, + farmId: 1, + }; + const qsfsNodes = []; + + const allNodes = await grid3.capacity.filterNodes(qsfsQueryOptions); + if (allNodes.length >= 2) { + qsfsNodes.push(+allNodes[0].nodeId, +allNodes[1].nodeId); + } else { + throw Error("Couldn't find nodes for qsfs"); + } + + const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; + + const qsfs: QSFSZDBSModel = { + name: qsfs_name, + count: 8, + node_ids: qsfsNodes, + password: "mypassword", + disk_size: 1, + description: "my qsfs test", + metadata: "", + }; + +const res = await grid3.qsfs_zdbs.deploy(qsfs); +log(">>>>>>>>>>>>>>>QSFS backend has been created<<<<<<<<<<<<<<<"); +log(res); +``` + +Here we deploy `8` ZDBs on the two selected nodes with password `mypassword`, each of them having a disk size of `1GB`. + +### Deploying a VM with QSFS + +```ts +const vmQueryOptions: FilterOptions = { + cru: 1, + mru: 1, // GB + sru: 1, + availableFor: grid3.twinId, + farmId: 1, +}; + +const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; + + // deploy vms +const vms: MachinesModel = { + name: machines_name, + network: { + name: "wed2710n1", + ip_range: "10.201.0.0/16", + }, + machines: [ + { + name: "wed2710v1", + node_id: vmNode, + disks: [ + { + name: "wed2710d1", + size: 1, + mountpoint: "/mydisk", + }, + ], + qsfs_disks: [ + { + qsfs_zdbs_name: qsfs_name, + name: "wed2710d2", + minimal_shards: 2, + expected_shards: 4, + encryption_key: "hamada", + prefix: "hamada", + cache: 1, + mountpoint: "/myqsfsdisk", + }, + ], + public_ip: false, + public_ip6: false, + planetary: true, + cpu: 1, + memory: 1024, + rootfs_size: 0, + flist: 
"https://hub.grid.tf/tf-official-apps/base:latest.flist", + entrypoint: "/sbin/zinit init", + env: { + SSH_KEY: config.ssh_key, + }, + }, + ], + metadata: "{'testVMs': true}", + description: "test deploying VMs via ts grid3 client", +}; +const vm_res = await grid3.machines.deploy(vms); +log(">>>>>>>>>>>>>>>vm has been created<<<<<<<<<<<<<<<"); +log(vm_res); +``` + +So this deployment is almost similiar to what we have in the [vm deployment section](./grid3_javascript_vm.md). We only have a new section `qsfs_disks` + +```ts + qsfs_disks: [{ + qsfs_zdbs_name: qsfs_name, + name: "wed2710d2", + minimal_shards: 2, + expected_shards: 4, + encryption_key: "hamada", + prefix: "hamada", + cache: 1, + mountpoint: "/myqsfsdisk" + }], +``` + +`qsfs_disks` is a list, representing all of the QSFS disks used within that VM. + +- `qsfs_zdbs_name`: that's the backend ZDBs we defined in the beginning +- `expected_shards`: how many ZDBs that QSFS should be working with +- `minimal_shards`: the minimal possible amount of ZDBs to recover the data with when losing disks e.g due to failure +- `mountpoint`: where it will be mounted on the VM `/myqsfsdisk` + +### Getting the Deployment Information + +```ts +const l = await grid3.machines.getObj(vms.name); +log(l); +``` + +### Deleting a Deployment + +```ts +// delete +const d = await grid3.machines.delete({ name: machines_name }); +log(d); +const r = await grid3.qsfs_zdbs.delete({ name: qsfs_name }); +log(r); +``` diff --git a/collections/developers/javascript/grid3_javascript_qsfs_zdbs.md b/collections/developers/javascript/grid3_javascript_qsfs_zdbs.md new file mode 100644 index 0000000..9c3a3f2 --- /dev/null +++ b/collections/developers/javascript/grid3_javascript_qsfs_zdbs.md @@ -0,0 +1,142 @@ +

Deploying ZDBs for QSFS

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Example code](#example-code) +- [Detailed explanation](#detailed-explanation) + - [Getting the client](#getting-the-client) + - [Preparing the nodes](#preparing-the-nodes) + - [Preparing ZDBs](#preparing-zdbs) + - [Deploying the ZDBs](#deploying-the-zdbs) + - [Getting deployment information](#getting-deployment-information) + - [Deleting a deployment](#deleting-a-deployment) + +*** + +## Introduction + +We show how to deploy ZDBs for QSFS on the TFGrid with the Javascript client. + +## Prerequisites + +- Make sure you have your [client](./grid3_javascript_loadclient.md) prepared + +## Example code + +````typescript +import { FilterOptions, QSFSZDBSModel } from "../src"; +import { getClient } from "./client_loader"; +import { log } from "./utils"; + +async function main() { + const grid3 = await getClient(); + const qsfs_name = "zdbsQsfsDemo"; + const qsfsQueryOptions: FilterOptions = { + hru: 8, + availableFor: grid3.twinId, + farmId: 1, + }; + const qsfsNodes = []; + + const allNodes = await grid3.capacity.filterNodes(qsfsQueryOptions); + + if (allNodes.length >= 2) { + qsfsNodes.push(+allNodes[0].nodeId, +allNodes[1].nodeId); + } else { + throw Error("Couldn't find nodes for qsfs"); + } + + const qsfs: QSFSZDBSModel = { + name: qsfs_name, + count: 12, + node_ids: qsfsNodes, + password: "mypassword", + disk_size: 1, + description: "my zdbs test", + metadata: "", + }; + const deploy_res = await grid3.qsfs_zdbs.deploy(qsfs); + log(deploy_res); + + const zdbs_data = await grid3.qsfs_zdbs.get({ name: qsfs_name }); + log(zdbs_data); + + + await grid3.disconnect(); +} +main(); + +```` + +## Detailed explanation + +### Getting the client + +```typescript +const grid3 = getClient(); +``` + +### Preparing the nodes + +we need to deploy the zdbs on two different nodes so, we setup the filters here to retrieve the available nodes. 
+ +````typescript +const qsfsQueryOptions: FilterOptions = { + hru: 8, + availableFor: grid3.twinId, + farmId: 1, +}; +const qsfsNodes = []; + +const allNodes = await grid3.capacity.filterNodes(qsfsQueryOptions); + +if (allNodes.length >= 2) { + qsfsNodes.push(+allNodes[0].nodeId, +allNodes[1].nodeId); +} else { + throw Error("Couldn't find nodes for qsfs"); +} +```` + +Now we have two nodes in `qsfsNodes`. + +### Preparing ZDBs + +````typescript +const qsfs_name = "zdbsQsfsDemo"; +```` + +We prepare here a name to use across the client for the QSFS ZDBs. + +### Deploying the ZDBs + +````typescript +const qsfs: QSFSZDBSModel = { + name: qsfs_name, + count: 12, + node_ids: qsfsNodes, + password: "mypassword", + disk_size: 1, + description: "my zdbs test", + metadata: "", + }; +const deploy_res = await grid3.qsfs_zdbs.deploy(qsfs); +log(deploy_res); +```` + +Here we deploy `12` ZDBs on the nodes in `qsfsNodes` with password `mypassword`, all of them having a disk size of `1GB`; the client already adds 4 ZDBs for metadata. + +### Getting deployment information + +````typescript +const zdbs_data = await grid3.qsfs_zdbs.get({ name: qsfs_name }); +log(zdbs_data); +```` + +### Deleting a deployment + +````typescript +const delete_response = await grid3.qsfs_zdbs.delete({ name: qsfs_name }); +log(delete_response); +```` diff --git a/collections/developers/javascript/grid3_javascript_readme.md b/collections/developers/javascript/grid3_javascript_readme.md new file mode 100644 index 0000000..7072d19 --- /dev/null +++ b/collections/developers/javascript/grid3_javascript_readme.md @@ -0,0 +1,24 @@ +

Javascript Client

+ +This section covers developing projects on top of the ThreeFold Grid using the Javascript language. + +Javascript has a huge ecosystem and is a first-class citizen when it comes to blockchain technologies like Substrate; that is one of the reasons it became one of the very first supported languages on the grid. + +Please make sure to check the [basics](../../system_administrators/getstarted/tfgrid3_getstarted.md) before continuing. + +

Table of Contents

+ +- [Installation](./grid3_javascript_installation.md) +- [Loading Client](./grid3_javascript_loadclient.md) +- [Deploy a VM](./grid3_javascript_vm.md) +- [Capacity Planning](./grid3_javascript_capacity_planning.md) +- [Deploy Multiple VMs](./grid3_javascript_vms.md) +- [Deploy CapRover](./grid3_javascript_caprover.md) +- [Gateways](./grid3_javascript_vm_gateways.md) +- [Deploy a Kubernetes Cluster](./grid3_javascript_kubernetes.md) +- [Deploy a ZDB](./grid3_javascript_zdb.md) +- [Deploy ZDBs for QSFS](./grid3_javascript_qsfs_zdbs.md) +- [QSFS](./grid3_javascript_qsfs.md) +- [Key Value Store](./grid3_javascript_kvstore.md) +- [VM with Wireguard and Gateway](./grid3_wireguard_gateway.md) +- [GPU Support](./grid3_javascript_gpu_support.md) \ No newline at end of file diff --git a/collections/developers/javascript/grid3_javascript_run_scripts.md b/collections/developers/javascript/grid3_javascript_run_scripts.md new file mode 100644 index 0000000..7c3a2d5 --- /dev/null +++ b/collections/developers/javascript/grid3_javascript_run_scripts.md @@ -0,0 +1,15 @@ +## How to run the scripts + +- Set your grid3 client configuration in `scripts/client_loader.ts`, or simply use `config.json` +- Update your customized deployment specs +- Run using [ts-node](https://www.npmjs.com/ts-node) + +```bash +npx ts-node --project tsconfig-node.json scripts/zdb.ts +``` + +or + +```bash +yarn run ts-node --project tsconfig-node.json scripts/zdb.ts +``` diff --git a/collections/developers/javascript/grid3_javascript_vm.md b/collections/developers/javascript/grid3_javascript_vm.md new file mode 100644 index 0000000..5c9069e --- /dev/null +++ b/collections/developers/javascript/grid3_javascript_vm.md @@ -0,0 +1,194 @@ + +

Deploying a VM

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) +- [Detailed Explanation](#detailed-explanation) + - [Building Network](#building-network) +- [Building the Disk Model](#building-the-disk-model) +- [Building the VM](#building-the-vm) +- [Building VMs Collection](#building-vms-collection) +- [deployment](#deployment) +- [Getting Deployment Information](#getting-deployment-information) +- [Deleting a Deployment](#deleting-a-deployment) + +*** + +## Introduction + +We present information on how to deploy a VM with the Javascript client with concrete examples. + +## Example + +```ts +import { DiskModel, FilterOptions, MachineModel, MachinesModel, NetworkModel } from "../src"; +import { config, getClient } from "./client_loader"; +import { log } from "./utils"; + +async function main() { + const grid3 = await getClient(); + + // create network Object + const n = new NetworkModel(); + n.name = "dynamictest"; + n.ip_range = "10.249.0.0/16"; + + // create disk Object + const disk = new DiskModel(); + disk.name = "dynamicDisk"; + disk.size = 8; + disk.mountpoint = "/testdisk"; + + const vmQueryOptions: FilterOptions = { + cru: 1, + mru: 1, // GB + sru: 1, + availableFor: grid3.twinId, + country: "Belgium", + }; + + // create vm node Object + const vm = new MachineModel(); + vm.name = "testvm"; + vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; // TODO: allow random choice + vm.disks = [disk]; + vm.public_ip = false; + vm.planetary = true; + vm.cpu = 1; + vm.memory = 1024; + vm.rootfs_size = 0; + vm.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"; + vm.entrypoint = "/sbin/zinit init"; + vm.env = { + SSH_KEY: config.ssh_key, + }; + + // create VMs Object + const vms = new MachinesModel(); + vms.name = "dynamicVMS"; + vms.network = n; + vms.machines = [vm]; + vms.metadata = "{'testVMs': true}"; + vms.description = "test deploying VMs via ts grid3 client"; + + // deploy vms + const res = await grid3.machines.deploy(vms); + 
log(res); + + // get the deployment + const l = await grid3.machines.getObj(vms.name); + log(l); + + // // delete + // const d = await grid3.machines.delete({ name: vms.name }); + // log(d); + + await grid3.disconnect(); +} + +main(); +``` + +## Detailed Explanation + +### Building Network + +```ts +// create network Object +const n = new NetworkModel(); +n.name = "dynamictest"; +n.ip_range = "10.249.0.0/16"; +``` + +Here we prepare the network model that is going to be used by specifying a name to our network and the range it will be spanning over + +## Building the Disk Model + +```ts +// create disk Object +const disk = new DiskModel(); +disk.name = "dynamicDisk"; +disk.size = 8; +disk.mountpoint = "/testdisk"; +``` + +here we create the disk model specifying its name, size in GB and where it will be mounted eventually + +## Building the VM + +```ts +// create vm node Object +const vm = new MachineModel(); +vm.name = "testvm"; +vm.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; // TODO: allow random choice +vm.disks = [disk]; +vm.public_ip = false; +vm.planetary = true; +vm.cpu = 1; +vm.memory = 1024; +vm.rootfs_size = 0; +vm.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"; +vm.entrypoint = "/sbin/zinit init"; +vm.env = { + SSH_KEY: config.ssh_key, +}; +``` + +Now we go to the VM model, that will be used to build our `zmachine` object + +We need to specify its + +- name +- node_id: where it will get deployed +- disks: disks model collection +- memory +- root filesystem size +- flist: the image it is going to start from. 
Check the [supported flists](../flist/grid3_supported_flists.md) +- entry point: entrypoint command / script to execute +- env: has the environment variables needed, e.g. the SSH keys used +- public ip: if we want to have a public ip attached to the VM +- planetary: to enable the planetary network on the VM + +## Building VMs Collection + +```ts +// create VMs Object +const vms = new MachinesModel(); +vms.name = "dynamicVMS"; +vms.network = n; +vms.machines = [vm]; +vms.metadata = "{'testVMs': true}"; +vms.description = "test deploying VMs via ts grid3 client"; +``` + +Here it's quite simple: we can add one or more VMs to the `machines` property to have them deployed as part of our project + +## Deployment + +```ts +// deploy vms +const res = await grid3.machines.deploy(vms); +log(res); +``` + +## Getting Deployment Information + +You can do so based on the name you gave to the `vms` collection + +```ts +// get the deployment +const l = await grid3.machines.getObj(vms.name); +log(l); +``` + +## Deleting a Deployment + +```ts +// delete +const d = await grid3.machines.delete({ name: vms.name }); +log(d); +``` + +In the underlying layer we cancel the contracts that were created on the chain, and as a result all of the workloads tied to this project will get deleted. diff --git a/collections/developers/javascript/grid3_javascript_vm_gateways.md b/collections/developers/javascript/grid3_javascript_vm_gateways.md new file mode 100644 index 0000000..052d9f3 --- /dev/null +++ b/collections/developers/javascript/grid3_javascript_vm_gateways.md @@ -0,0 +1,189 @@ +

Deploying a VM and exposing it over a Gateway Prefix

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Example code](#example-code) +- [Detailed explanation](#detailed-explanation) + - [deploying](#deploying) + - [getting deployment object](#getting-deployment-object) + - [deletion](#deletion) +- [Deploying a VM and exposing it over a Gateway using a Full domain](#deploying-a-vm-and-exposing-it-over-a-gateway-using-a-full-domain) +- [Example code](#example-code-1) +- [Detailed explanation](#detailed-explanation-1) + - [deploying](#deploying-1) + - [get deployment object](#get-deployment-object) + - [deletion](#deletion-1) + +*** + +## Introduction + +After the [deployment of a VM](./grid3_javascript_vm.md), now it's time to expose it to the world + +## Example code + +```ts +import { FilterOptions, GatewayNameModel } from "../src"; +import { getClient } from "./client_loader"; +import { log } from "./utils"; + +// read more about the gateway types in this doc: https://github.com/threefoldtech/zos/tree/main/docs/gateway +async function main() { + const grid3 = await getClient(); + + const gatewayQueryOptions: FilterOptions = { + gateway: true, + farmId: 1, + }; + + const gw = new GatewayNameModel(); + gw.name = "test"; + gw.node_id = +(await grid3.capacity.filterNodes(gatewayQueryOptions))[0].nodeId; + gw.tls_passthrough = false; + // the backends have to be in this format `http://ip:port` or `https://ip:port`, and the `ip` pingable from the node so using the ygg ip or public ip if available. 
+ gw.backends = ["http://185.206.122.35:8000"]; + + // deploy + const res = await grid3.gateway.deploy_name(gw); + log(res); + + // get the deployment + const l = await grid3.gateway.getObj(gw.name); + log(l); + + // // delete + // const d = await grid3.gateway.delete_name({ name: gw.name }); + // log(d); + + grid3.disconnect(); +} + +main(); + +``` + +## Detailed explanation + +```ts +const gw = new GatewayNameModel(); +gw.name = "test"; +gw.node_id = +(await grid3.capacity.filterNodes(gatewayQueryOptions))[0].nodeId; +gw.tls_passthrough = false; +gw.backends = ["http://185.206.122.35:8000"]; +``` + +- we created a gateway name model and gave it the `name` `test` (that's why it's called GatewayName), to be deployed on a gateway node, ending up with the domain `test.gent01.devnet.grid.tf` +- we create a proxy for the gateway to send the traffic coming to `test.gent01.devnet.grid.tf` to the backend `http://185.206.122.35`. We set `tls_passthrough` to `false` to let the gateway terminate the traffic; if you replace it with `true`, your backend service needs to be able to do the TLS termination itself + +### deploying + +```ts +// deploy +const res = await grid3.gateway.deploy_name(gw); +log(res); +``` + +This deploys the `GatewayName` on the grid. + +### getting deployment object + +```ts +const l = await grid3.gateway.getObj(gw.name); +log(l); +``` + +Getting the deployment information can be done using `getObj`. + +### deletion + +```ts +const d = await grid3.gateway.delete_name({ name: gw.name }); +log(d); +``` + +## Deploying a VM and exposing it over a Gateway using a Full domain + +After the [deployment of a VM](./grid3_javascript_vm.md), now it's time to expose it to the world. + +## Example code + +```ts +import { FilterOptions, GatewayFQDNModel } from "../src"; +import { getClient } from "./client_loader"; +import { log } from "./utils"; + +// read more about the gateway types in this doc: https://github.com/threefoldtech/zos/tree/main/docs/gateway +async function main() { + const 
grid3 = await getClient(); + + const gatewayQueryOptions: FilterOptions = { + gateway: true, + farmId: 1, + }; + const gw = new GatewayFQDNModel(); + gw.name = "applyFQDN"; + gw.node_id = +(await grid3.capacity.filterNodes(gatewayQueryOptions))[0].nodeId; + gw.fqdn = "test.hamada.grid.tf"; + gw.tls_passthrough = false; + // the backends have to be in this format `http://ip:port` or `https://ip:port`, and the `ip` pingable from the node so using the ygg ip or public ip if available. + gw.backends = ["http://185.206.122.35:8000"]; + + // deploy + const res = await grid3.gateway.deploy_fqdn(gw); + log(res); + + // get the deployment + const l = await grid3.gateway.getObj(gw.name); + log(l); + + // // delete + // const d = await grid3.gateway.delete_fqdn({ name: gw.name }); + // log(d); + + grid3.disconnect(); +} + +main(); +``` + +## Detailed explanation + +```ts +const gw = new GatewayFQDNModel(); +gw.name = "applyFQDN"; +gw.node_id = 1; +gw.fqdn = "test.hamada.grid.tf"; +gw.tls_passthrough = false; +gw.backends = ["my yggdrasil IP"]; +``` + +- we created a `GatewayFQDNModel` and gave it the name `applyFQDN`, to be deployed on gateway node `1`, and specified the fully qualified domain `fqdn` to a domain we own, `test.hamada.grid.tf` +- we created a record on our name provider for `test.hamada.grid.tf` to point to the IP of gateway node `1` +- we specified that the backend would be a Yggdrasil IP, so once this is deployed, when we go to `test.hamada.grid.tf` we reach the gateway server and from there our traffic goes to the backend. 
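The backend format constraint repeated in the code comments (`http://ip:port` or `https://ip:port`) can be sanity-checked locally before deploying. A hypothetical pre-flight helper (not part of the grid client) might look like:

```typescript
// Check that a gateway backend string is `http://host:port` or `https://host:port`.
// This is a local format check only -- reachability of the IP from the gateway
// node still has to be verified separately.
function isValidBackend(backend: string): boolean {
  try {
    const url = new URL(backend);
    const schemeOk = url.protocol === "http:" || url.protocol === "https:";
    // note: URL normalizes default ports (80/443) away, so those would need
    // special-casing; the examples in this manual always use explicit ports
    const portOk = url.port !== "";
    return schemeOk && portOk && url.pathname === "/";
  } catch {
    return false; // not a parseable URL at all
  }
}

console.log(isValidBackend("http://185.206.122.35:8000")); // true
console.log(isValidBackend("ftp://185.206.122.35:8000"));  // false (wrong scheme)
console.log(isValidBackend("http://185.206.122.35"));      // false (no port)
```

Running such a check before calling `deploy_name`/`deploy_fqdn` catches malformed backends early instead of waiting for the gateway workload to fail.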
+ +### deploying + +```ts +// deploy +const res = await grid3.gateway.deploy_fqdn(gw); +log(res); +``` + +This deploys the `GatewayFQDN` on the grid. + +### get deployment object + +```ts +const l = await grid3.gateway.getObj(gw.name); +log(l); +``` + +Getting the deployment information can be done using `getObj`. + +### deletion + +```ts +const d = await grid3.gateway.delete_fqdn({ name: gw.name }); +log(d); +``` diff --git a/collections/developers/javascript/grid3_javascript_vms.md b/collections/developers/javascript/grid3_javascript_vms.md new file mode 100644 index 0000000..b928007 --- /dev/null +++ b/collections/developers/javascript/grid3_javascript_vms.md @@ -0,0 +1,108 @@ + +

Deploying multiple VMs

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Example code](#example-code) + +*** + +## Introduction + +It is possible to deploy multiple VMs with the Javascript client. + +## Example code + +```ts +import { DiskModel, FilterOptions, MachineModel, MachinesModel, NetworkModel } from "../src"; +import { config, getClient } from "./client_loader"; +import { log } from "./utils"; + +async function main() { + const grid3 = await getClient(); + + // create network Object + const n = new NetworkModel(); + n.name = "monNetwork"; + n.ip_range = "10.238.0.0/16"; + + // create disk Object + const disk1 = new DiskModel(); + disk1.name = "newDisk1"; + disk1.size = 1; + disk1.mountpoint = "/newDisk1"; + + const vmQueryOptions: FilterOptions = { + cru: 1, + mru: 1, // GB + sru: 1, + availableFor: grid3.twinId, + farmId: 1, + }; + + // create vm node Object + const vm1 = new MachineModel(); + vm1.name = "testvm1"; + vm1.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; + vm1.disks = [disk1]; + vm1.public_ip = false; + vm1.planetary = true; + vm1.cpu = 1; + vm1.memory = 1024; + vm1.rootfs_size = 0; + vm1.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"; + vm1.entrypoint = "/sbin/zinit init"; + vm1.env = { + SSH_KEY: config.ssh_key, + }; + + // create disk Object + const disk2 = new DiskModel(); + disk2.name = "newDisk2"; + disk2.size = 1; + disk2.mountpoint = "/newDisk2"; + + // create another vm node Object + const vm2 = new MachineModel(); + vm2.name = "testvm2"; + vm2.node_id = +(await grid3.capacity.filterNodes(vmQueryOptions))[1].nodeId; + vm2.disks = [disk2]; + vm2.public_ip = false; + vm2.planetary = true; + vm2.cpu = 1; + vm2.memory = 1024; + vm2.rootfs_size = 0; + vm2.flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist"; + vm2.entrypoint = "/sbin/zinit init"; + vm2.env = { + SSH_KEY: config.ssh_key, + }; + + // create VMs Object + const vms = new MachinesModel(); + vms.name = "monVMS"; + vms.network = n; + vms.machines = [vm1, 
vm2]; + vms.metadata = "{'testVMs': true}"; + vms.description = "test deploying VMs via ts grid3 client"; + + // deploy vms + const res = await grid3.machines.deploy(vms); + log(res); + + // get the deployment + const l = await grid3.machines.getObj(vms.name); + log(l); + + // // delete + // const d = await grid3.machines.delete({ name: vms.name }); + // log(d); + + await grid3.disconnect(); +} + +main(); +``` + +It's similar to the previous section on [deploying a single VM](../javascript/grid3_javascript_vm.md); it just adds more VM objects to the `machines` collection. diff --git a/collections/developers/javascript/grid3_javascript_zdb.md b/collections/developers/javascript/grid3_javascript_zdb.md new file mode 100644 index 0000000..d773269 --- /dev/null +++ b/collections/developers/javascript/grid3_javascript_zdb.md @@ -0,0 +1,143 @@ +

Deploying ZDB

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Example code](#example-code) +- [Detailed explanation](#detailed-explanation) + - [Getting the client](#getting-the-client) + - [Building the model](#building-the-model) + - [preparing ZDBs collection](#preparing-zdbs-collection) + - [Deployment](#deployment) + - [Getting Deployment information](#getting-deployment-information) + - [Deleting a deployment](#deleting-a-deployment) + +*** + +## Introduction + +We show how to deploy ZDB on the TFGrid with the Javascript client. + +## Prerequisites + +- Make sure you have your [client](./grid3_javascript_loadclient.md) prepared + +## Example code + +```ts +import { FilterOptions, ZDBModel, ZdbModes, ZDBSModel } from "../src"; +import { getClient } from "./client_loader"; +import { log } from "./utils"; + +async function main() { + const grid3 = await getClient(); + + const zdbQueryOptions: FilterOptions = { + sru: 1, + hru: 1, + availableFor: grid3.twinId, + farmId: 1, + }; + + // create zdb object + const zdb = new ZDBModel(); + zdb.name = "hamada"; + zdb.node_id = +(await grid3.capacity.filterNodes(zdbQueryOptions))[0].nodeId; + zdb.mode = ZdbModes.user; + zdb.disk_size = 1; + zdb.publicNamespace = false; + zdb.password = "testzdb"; + + // create zdbs object + const zdbs = new ZDBSModel(); + zdbs.name = "tttzdbs"; + zdbs.zdbs = [zdb]; + zdbs.metadata = '{"test": "test"}'; + + // deploy zdb + const res = await grid3.zdbs.deploy(zdbs); + log(res); + + // get the deployment + const l = await grid3.zdbs.getObj(zdbs.name); + log(l); + + // // delete + // const d = await grid3.zdbs.delete({ name: zdbs.name }); + // log(d); + + await grid3.disconnect(); +} + +main(); +``` + +## Detailed explanation + +### Getting the client + +```ts +const grid3 = getClient(); +``` + +### Building the model + +```ts +// create zdb object +const zdb = new ZDBModel(); +zdb.name = "hamada"; +zdb.node_id = +(await grid3.capacity.filterNodes(zdbQueryOptions))[0].nodeId; 
+zdb.mode = ZdbModes.user; +zdb.disk_size = 1; +zdb.publicNamespace = false; +zdb.password = "testzdb"; +``` + +Here we define a `ZDB` model and set the relevant properties, e.g. + +- name +- node_id: the node to deploy on +- mode: `user` or `seq` +- disk_size: disk size in GB +- publicNamespace: a public namespace can be read-only if a password is set +- password: namespace password + +### preparing ZDBs collection + +```ts +// create zdbs object +const zdbs = new ZDBSModel(); +zdbs.name = "tttzdbs"; +zdbs.zdbs = [zdb]; +zdbs.metadata = '{"test": "test"}'; +``` + +You can attach multiple ZDBs to the collection and send it for deployment. + +### Deployment + +```ts +const res = await grid3.zdbs.deploy(zdbs); +log(res); +``` + +### Getting Deployment information + +`getObj` gives detailed information about the workload. + +```ts +// get the deployment +const l = await grid3.zdbs.getObj(zdbs.name); +log(l); +``` + +### Deleting a deployment + +The `.delete` method cancels the contracts related to that ZDBs deployment. + +```ts +// delete +const d = await grid3.zdbs.delete({ name: zdbs.name }); +log(d); +``` diff --git a/collections/developers/javascript/grid3_wireguard_gateway.md b/collections/developers/javascript/grid3_wireguard_gateway.md new file mode 100644 index 0000000..2cb1b89 --- /dev/null +++ b/collections/developers/javascript/grid3_wireguard_gateway.md @@ -0,0 +1,302 @@ +

Deploying a VM with Wireguard and Gateway

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Client Configurations](#client-configurations) +- [Code Example](#code-example) +- [Detailed Explanation](#detailed-explanation) + - [Get the Client](#get-the-client) + - [Get the Nodes](#get-the-nodes) + - [Deploy the VM](#deploy-the-vm) + - [Deploy the Gateway](#deploy-the-gateway) + - [Get the Deployments Information](#get-the-deployments-information) + - [Disconnect the Client](#disconnect-the-client) + - [Delete the Deployments](#delete-the-deployments) +- [Conclusion](#conclusion) + +*** + +## Introduction + +We present here the relevant information when it comes to deploying a virtual machine with Wireguard and a gateway. + + + + +## Client Configurations + +To configure the client, have a look at [this section](./grid3_javascript_loadclient.md). + + + +## Code Example + +```ts +import { FilterOptions, GatewayNameModel, GridClient, MachineModel, MachinesModel, NetworkModel } from "../src"; +import { config, getClient } from "./client_loader"; +import { log } from "./utils"; + +function createNetworkModel(gwNode: number, name: string): NetworkModel { + return { + name, + addAccess: true, + accessNodeId: gwNode, + ip_range: "10.238.0.0/16", + } as NetworkModel; +} +function createMachineModel(node: number) { + return { + name: "testvm1", + node_id: node, + public_ip: false, + planetary: true, + cpu: 1, + memory: 1024 * 2, + rootfs_size: 0, + disks: [], + flist: "https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist", + entrypoint: "/usr/bin/python3 -m http.server --bind ::", + env: { + SSH_KEY: config.ssh_key, + }, + } as MachineModel; +} +function createMachinesModel(vm: MachineModel, network: NetworkModel): MachinesModel { + return { + name: "newVMs", + network, + machines: [vm], + metadata: "", + description: "test deploying VMs with wireguard via ts grid3 client", + } as MachinesModel; +} +function createGwModel(node_id: number, ip: string, networkName: string, name: string, port: number) { + return { 
+ name, + node_id, + tls_passthrough: false, + backends: [`http://${ip}:${port}`], + network: networkName, + } as GatewayNameModel; +} + +async function main() { + const grid3 = await getClient(); + + const gwNode = +(await grid3.capacity.filterNodes({ gateway: true }))[0].nodeId; + + const vmQueryOptions: FilterOptions = { + cru: 1, + mru: 2, // GB + availableFor: grid3.twinId, + farmId: 1, + }; + const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; + + const network = createNetworkModel(gwNode, "monNetwork"); + const vm = createMachineModel(vmNode); + const machines = createMachinesModel(vm, network); + log(`Deploying vm on node: ${vmNode}, with network node: ${gwNode}`); + + // deploy the vm + const vmResult = await grid3.machines.deploy(machines); + log(vmResult); + + const deployedVm = await grid3.machines.getObj(machines.name); + log("+++ deployed vm +++"); + log(deployedVm); + + // deploy the gateway + const vmPrivateIP = (deployedVm as { interfaces: { ip: string }[] }[])[0].interfaces[0].ip; + const gateway = createGwModel(gwNode, vmPrivateIP, network.name, "pyserver", 8000); + log(`deploying gateway ${network.name} on node ${gwNode}`); + + const gatewayResult = await grid3.gateway.deploy_name(gateway); + log(gatewayResult); + + log("+++ Deployed gateway +++"); + + const deployedGw = await grid3.gateway.getObj(gateway.name); + log(deployedGw); + + await grid3.disconnect(); +} + +main(); + +``` + + +## Detailed Explanation + +This code deploys a name gateway with the VM's Wireguard IP as its backend, which allows accessing a server inside the VM through the gateway over the private (Wireguard) network. + +This will be done through the following steps: + +### Get the Client + +```ts +const grid3 = await getClient(); +``` + +### Get the Nodes + +Determine the nodes to deploy the VM, network and gateway on. 
+ +- Gateway and network access node + + ```ts + const gwNode = +(await grid3.capacity.filterNodes({ gateway: true }))[0].nodeId; + ``` + + Using the `filterNodes` method, we get the first gateway node ID. We will deploy the gateway on this node and use it as our network access node. + + > The gateway node must be the same as the network access node. +- VM node + + We need to set the filter options first. For this example we will deploy the VM with 1 CPU and 2 GB of memory. + We now create a `FilterOptions` object with those specs and get the first node ID of the result. + + ```ts + const vmQueryOptions: FilterOptions = { + cru: 1, + mru: 2, // GB + availableFor: grid3.twinId, + farmId: 1, + }; + const vmNode = +(await grid3.capacity.filterNodes(vmQueryOptions))[0].nodeId; + ``` + +### Deploy the VM + +We need to create the network and machine models, then deploy the VM: + +```ts +const network = createNetworkModel(gwNode, "monNetwork"); +const vm = createMachineModel(vmNode); +const machines = createMachinesModel(vm, network); +log(`Deploying vm on node: ${vmNode}, with network node: ${gwNode}`); + +// deploy the vm +const vmResult = await grid3.machines.deploy(machines); +log(vmResult); +``` + +- `createNetworkModel` : + we create a network, set the node ID to `gwNode` and the name to `monNetwork`, and inside the function we set `addAccess: true` to add __wireguard__ access. + +- `createMachineModel` and `createMachinesModel` are similar to the previous section on [deploying a single VM](../javascript/grid3_javascript_vm.md), but we pass the created `NetworkModel` to the machines model, and the entrypoint here runs a simple Python server. + +### Deploy the Gateway + +Now that our VM is deployed with its network, we need to create the gateway on the same node and network, pointing to the VM's private IP address. 
+ +- Get the VM's private IP address: + + ```ts + const vmPrivateIP = (deployedVm as { interfaces: { ip: string }[] }[])[0].interfaces[0].ip; + ``` + +- Create the gateway name model: + + ```ts + const gateway = createGwModel(gwNode, vmPrivateIP, network.name, "pyserver", 8000); + ``` + + This will create a `GatewayNameModel` with the following properties: + + - `name` : the subdomain name + - `node_id` : the gateway node ID + - `tls_passthrough: false` + - `backends: [`http://${ip}:${port}`]` : the private IP address and port number of our machine + - `network: networkName` : the name of the network we created earlier + +### Get the Deployments Information + + ```ts + const deployedVm = await grid3.machines.getObj(machines.name); + log("+++ deployed vm +++"); + log(deployedVm); + + log("+++ Deployed gateway +++"); + const deployedGw = await grid3.gateway.getObj(gateway.name); + log(deployedGw); + ``` + +- `deployedVm` : an array with one object containing the details of the VM deployment. + + ```ts + [ + { + version: 0, + contractId: 30658, + nodeId: 11, + name: 'testvm1', + created: 1686225126, + status: 'ok', + message: '', + flist: 'https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist', + publicIP: null, + planetary: '302:9e63:7d43:b742:3582:a831:cd41:3f19', + interfaces: [ { network: 'monNetwork', ip: '10.238.2.2' } ], + capacity: { cpu: 1, memory: 2048 }, + mounts: [], + env: { + SSH_KEY: 'ssh' + }, + entrypoint: '/usr/bin/python3 -m http.server --bind ::', + metadata: '{"type":"vm","name":"newVMs","projectName":""}', + description: 'test deploying VMs with wireguard via ts grid3 client', + rootfs_size: 0, + corex: false + } + ] + ``` + +- `deployedGw` : an array with one object containing the details of the name gateway. 
+ + ```ts + [ + { + version: 0, + contractId: 30659, + name: 'pyserver1', + created: 1686225139, + status: 'ok', + message: '', + type: 'gateway-name-proxy', + domain: 'pyserver1.gent02.dev.grid.tf', + tls_passthrough: false, + backends: [ 'http://10.238.2.2:8000' ], + metadata: '{"type":"gateway","name":"pyserver1","projectName":""}', + description: '' + } + ] + ``` + + Now we can access the VM using the `domain` returned in the object. + +### Disconnect the Client + +Finally, we disconnect the client using `await grid3.disconnect();`. + +### Delete the Deployments + +If we want to delete the deployments, we can simply run: + +```ts + const deletedMachines = await grid3.machines.delete({ name: machines.name}); + log(deletedMachines); + + const deletedGW = await grid3.gateway.delete_name({ name: gateway.name}); + log(deletedGW); +``` + + + +## Conclusion + +This section presented a detailed description of how to create a virtual machine with a private Wireguard IP and use it as a backend for a name gateway. + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. 
\ No newline at end of file diff --git a/collections/developers/javascript/sidebar.md b/collections/developers/javascript/sidebar.md new file mode 100644 index 0000000..421992b --- /dev/null +++ b/collections/developers/javascript/sidebar.md @@ -0,0 +1,11 @@ +- [Installation](@grid3_javascript_installation) +- [Loading client](@grid3_javascript_loadclient) +- [Deploy a VM](@grid3_javascript_vm) +- [Capacity planning](@grid3_javascript_capacity_planning) +- [Deploy multiple VMs](@grid3_javascript_vms) +- [Deploy CapRover](@grid3_javascript_caprover) +- [Gateways](@grid3_javascript_vm_gateways) +- [Deploy a Kubernetes cluster](@grid3_javascript_kubernetes) +- [Deploy a ZDB](@grid3_javascript_zdb) +- [QSFS](@grid3_javascript_qsfs) +- [Key Value Store](@grid3_javascript_kvstore) diff --git a/collections/developers/proxy/commands.md b/collections/developers/proxy/commands.md new file mode 100644 index 0000000..51837b0 --- /dev/null +++ b/collections/developers/proxy/commands.md @@ -0,0 +1,127 @@ +

Commands

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Work on Docs](#work-on-docs) +- [To start the GridProxy server](#to-start-the-gridproxy-server) +- [Run tests](#run-tests) + +*** + +## Introduction + +The Makefile covers most of the frequent commands needed to work on the project. + +## Work on Docs + +We are using [swaggo/swag](https://github.com/swaggo/swag) to generate Swagger docs based on the annotations inside the code. + +- Install the swag executable binary + + ```bash + go install github.com/swaggo/swag/cmd/swag@latest + ``` + +- The executable file is now in the binary directory inside the Go directory: + + ```bash + ls $(go env GOPATH)/bin + ``` + +- To run swag you can either use the full path `$(go env GOPATH)/bin/swag` or add the Go binary directory to `$PATH` + + ```bash + export PATH=$PATH:$(go env GOPATH)/bin + ``` + +- Use swag to format code comments. + + ```bash + swag fmt + ``` + +- Update the docs + + ```bash + swag init + ``` + +- To parse external types from vendor + + ```bash + swag init --parseVendor + ``` + +- For the full docs generation command + + ```bash + make docs + ``` + +## To start the GridProxy server + +After preparing the postgres database, you can `go run` the main file in `cmds/proxy_server/main.go`, which is responsible for starting all the needed servers/clients. 
+ +The server options: + +| Option | Description | +| ------------------ | ----------------------------------------------------------------------------------------------------------------------- | +| -address | Server ip address (default `":443"`) | +| -ca | certificate authority used to generate certificate (default `"https://acme-staging-v02.api.letsencrypt.org/directory"`) | +| -cert-cache-dir | path to store generated certs in (default `"/tmp/certs"`) | +| -domain | domain on which the server will be served | +| -email | email address to generate certificate with | +| -log-level | log level | +| -no-cert | start the server without certificate | +| -postgres-db | postgres database | +| -postgres-host | postgres host | +| -postgres-password | postgres password | +| -postgres-port | postgres port (default 5432) | +| -postgres-user | postgres username | +| -tfchain-url | TFChain URL (default `"wss://tfchain.dev.grid.tf/ws"`) | +| -relay-url | RMB relay URL (default `"wss://relay.dev.grid.tf"`) | +| -mnemonics | Dummy user mnemonics for relay calls | +| -v | shows the package version | + +For a full server setup: + +```bash +make restart +``` + +## Run tests + +There are two types of tests in the project: + +- Unit Tests + - Found in `pkg/client/*_test.go` + - Run with `go test -v ./pkg/client` +- Integration Tests + - Found in `tests/queries/` + - Run with: + + ```bash + go test -v \ + --seed 13 \ + --postgres-host \ + --postgres-db tfgrid-graphql \ + --postgres-password postgres \ + --postgres-user postgres \ + --endpoint \ + --mnemonics + ``` + + - Or, to run a specific test, you can append the previous command with + + ```bash + -run + ``` + + You can find the test names in the `tests/queries/*_test.go` files. 
+ +To run all the tests, use: + +```bash +make test-all +``` diff --git a/collections/developers/proxy/contributions.md new file mode 100644 index 0000000..3960676 --- /dev/null +++ b/collections/developers/proxy/contributions.md @@ -0,0 +1,55 @@ +

Contributions Guide

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Project structure](#project-structure) + - [Internal](#internal) + - [Pkg](#pkg) +- [Writing tests](#writing-tests) + +*** + +## Introduction + +We propose a quick guide to learn how to contribute. + +## Project structure + +The main structure of the code base is as follows: + +- `charts`: helm chart +- `cmds`: includes the project Golang entrypoints +- `docs`: project documentation +- `internal`: contains the explorer API logic and the cert manager implementation; this is where most of the feature work will be done +- `pkg`: contains the client implementation and shared libs +- `tests`: integration tests +- `tools`: DB tools to prepare the Postgres DB for testing and development +- `rootfs`: ZOS root endpoint that will be mounted in the docker image + +### Internal + +- `explorer`: contains the explorer server logic: + - `db`: the db connection and operations + - `mw`: defines the generic action mount that will be used as an HTTP handler +- `certmanager`: logic to ensure certificates are available and up to date + +`server.go` includes the logic for all the API operations. + +### Pkg + +- `client`: client implementation +- `types`: defines all the API objects + +## Writing tests + +Adding a new endpoint should be accompanied by a corresponding test. Ideally, every change or bug fix should include a test to ensure the new behavior/fix is working as intended. + +Since these are integration tests, you first need to make sure that your local db is seeded with the necessary data. See the tools [doc](./db_testing.md) for more information about how to prepare your db. + +The testing tools offer two clients that are the basis of most tests: + +- `local`: this client connects to the local db +- `proxy client`: this client connects to the running local instance + +You need to start an instance of the server before running the tests. Check [here](./commands.md) for how to start. 
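To make the two-client pattern concrete, here is a minimal, hypothetical sketch of the comparison idea the integration tests build on: run the same query against the local db client and against the running proxy instance, then require identical results. The `NodeLister` interface and the stub clients below are illustrative stand-ins only, not the real `pkg/client` API.

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
)

// NodeLister is a hypothetical common interface for both clients.
type NodeLister interface {
	NodeIDs() []uint64
}

// localClient would be backed by direct db queries in the real tests.
type localClient struct{ ids []uint64 }

// proxyClient would call the running local proxy instance.
type proxyClient struct{ ids []uint64 }

func (c localClient) NodeIDs() []uint64 { return c.ids }
func (c proxyClient) NodeIDs() []uint64 { return c.ids }

// sameResults is the core assertion: both sources must agree,
// independent of result ordering.
func sameResults(a, b NodeLister) bool {
	x := append([]uint64(nil), a.NodeIDs()...)
	y := append([]uint64(nil), b.NodeIDs()...)
	sort.Slice(x, func(i, j int) bool { return x[i] < x[j] })
	sort.Slice(y, func(i, j int) bool { return y[i] < y[j] })
	return reflect.DeepEqual(x, y)
}

func main() {
	local := localClient{ids: []uint64{3, 1, 2}}
	proxy := proxyClient{ids: []uint64{1, 2, 3}}
	fmt.Println(sameResults(local, proxy)) // prints "true"
}
```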
diff --git a/collections/developers/proxy/database.md b/collections/developers/proxy/database.md new file mode 100644 index 0000000..58c327a --- /dev/null +++ b/collections/developers/proxy/database.md @@ -0,0 +1,21 @@ +

Database

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Max Open Connections](#max-open-connections) + +*** + +## Introduction + +The grid proxy has access to a postgres database containing information about the tfgrid, specifically information about grid nodes, farms, twins, and contracts.\ +The database is filled/updated by this [indexer](https://github.com/threefoldtech/tfchain_graphql). +The grid proxy mainly retrieves information from the db, with a few modifications for efficient retrieval (e.g. adding indices, caching node GPUs, etc.). + +## Max Open Connections + +The postgres database can handle 100 open connections concurrently (the default value set by postgres). This number can be increased, depending on the infrastructure, by modifying it in the postgres.conf file where the db is deployed, or by executing the query `ALTER system SET max_connections=size-of-connection`, but this requires a db restart to take effect.\ +The explorer creates a connection pool to the postgres db, with the max open pool connections set to a specific number (currently 80).\ +It's important to distinguish between the database max connections and the max pool open connections, because if the pool did not have any constraints, it would try to open as many connections as it wanted, without any notion of the maximum number of connections the database accepts. It would then be the database's responsibility to accept or deny the connection.\ +This is why the max number of open pool connections is set to 80: it's below the max connections the database can handle (100), and it gives room for other actors outside of the explorer to open connections with the database.\ diff --git a/collections/developers/proxy/db_testing.md new file mode 100644 index 0000000..60bffed --- /dev/null +++ b/collections/developers/proxy/db_testing.md @@ -0,0 +1,45 @@ +

DB for testing

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Run postgresql container](#run-postgresql-container) +- [Create the DB](#create-the-db) + - [Method 1: Generate a db with relevant schema using the db helper tool:](#method-1-generate-a-db-with-relevant-schema-using-the-db-helper-tool) + - [Method 2: Fill the DB from a Production db dump file, for example if you have `dump.sql` file, you can run:](#method-2-fill-the-db-from-a-production-db-dump-file-for-example-if-you-have-dumpsql-file-you-can-run) + +*** + +## Introduction + +We show how to use a database for testing. + +## Run postgresql container + + ```bash + docker run --rm --name postgres \ + -e POSTGRES_USER=postgres \ + -e POSTGRES_PASSWORD=postgres \ + -e POSTGRES_DB=tfgrid-graphql \ + -p 5432:5432 -d postgres + ``` + +## Create the DB +You can either generate a db with the relevant schema to quickly test things locally, or load a previously taken DB dump file: + +### Method 1: Generate a db with relevant schema using the db helper tool: + + ```bash + cd tools/db/ && go run . \ + --postgres-host 127.0.0.1 \ + --postgres-db tfgrid-graphql \ + --postgres-password postgres \ + --postgres-user postgres \ + --reset + ``` + +### Method 2: Fill the DB from a Production db dump file, for example if you have `dump.sql` file, you can run: + + ```bash + psql -h 127.0.0.1 -U postgres -d tfgrid-graphql < dump.sql + ``` diff --git a/collections/developers/proxy/explorer.md new file mode 100644 index 0000000..2651cae --- /dev/null +++ b/collections/developers/proxy/explorer.md @@ -0,0 +1,38 @@ +

The Grid Explorer

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Explorer Overview](#explorer-overview) +- [Explorer Endpoints](#explorer-endpoints) + +*** + +## Introduction + +The Grid Explorer is a REST API used to index various information from the TFChain. + +## Explorer Overview + +- Due to limitations on indexing information from the blockchain, complex inter-table queries and filters can't be applied directly on the chain. +- This is where the TFGridDB comes in: a shadow database containing all the data on the chain, updated every 2 hours. +- The explorer can then apply raw SQL queries on the database with all the limits and filters needed. +- The technology used to extract the info from the blockchain is Subsquid; check the [repo](https://github.com/threefoldtech/tfchain_graphql). + +## Explorer Endpoints + +| HTTP Verb | Endpoint | Description | +| --------- | --------------------------- | ---------------------------------- | +| GET | `/contracts` | Show all contracts on the chain | +| GET | `/farms` | Show all farms on the chain | +| GET | `/gateways` | Show all gateway nodes on the grid | +| GET | `/gateways/:node_id` | Get a single gateway node details | +| GET | `/gateways/:node_id/status` | Get a single node status | +| GET | `/nodes` | Show all nodes on the grid | +| GET | `/nodes/:node_id` | Get a single node details | +| GET | `/nodes/:node_id/status` | Get a single node status | +| GET | `/stats` | Show the grid statistics | +| GET | `/twins` | Show all the twins on the chain | +| GET | `/nodes/:node_id/statistics`| Get a single node ZOS statistics | + +For the available filters on each endpoint, check `/swagger/index.html` on the running instance. diff --git a/collections/developers/proxy/production.md new file mode 100644 index 0000000..fe4e108 --- /dev/null +++ b/collections/developers/proxy/production.md @@ -0,0 +1,117 @@ +

Running Proxy in Production

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Production Run](#production-run) +- [To upgrade the machine](#to-upgrade-the-machine) +- [Dockerfile](#dockerfile) +- [Update helm package](#update-helm-package) +- [Install the chart using helm package](#install-the-chart-using-helm-package) + +*** + +## Introduction + +We show how to run the grid proxy in production. + +## Production Run + +- Download the latest binary [here](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/grid-client) +- Add execute permission to the binary and move it to the bin directory + + ```bash + chmod +x ./gridproxy-server + mv ./gridproxy-server /usr/local/bin/gridproxy-server + ``` + +- Add a new systemd service + +```bash +cat << EOF > /etc/systemd/system/gridproxy-server.service +[Unit] +Description=grid proxy server +After=network.target + +[Service] +ExecStart=gridproxy-server --domain gridproxy.dev.grid.tf --email omar.elawady.alternative@gmail.com -ca https://acme-v02.api.letsencrypt.org/directory --postgres-host 127.0.0.1 --postgres-db db --postgres-password password --postgres-user postgres --mnemonics +Type=simple +Restart=always +User=root +Group=root + +[Install] +WantedBy=multi-user.target +Alias=gridproxy.service +EOF +``` + +- Enable the service + + ```bash + systemctl enable gridproxy.service + ``` + +- Start the service + + ```bash + systemctl start gridproxy.service + ``` + +- Check the status + + ```bash + systemctl status gridproxy.service + ``` + +- The command options: + - domain: the host domain for which the SSL certificate will be generated. + - email: the email address used to generate the SSL certificate. + - ca: certificate authority server url, e.g. + - let's encrypt staging: `https://acme-staging-v02.api.letsencrypt.org/directory` + - let's encrypt production: `https://acme-v02.api.letsencrypt.org/directory` + - postgres-\*: postgres connection info. 
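Once the service is running, a quick programmatic smoke test can confirm the proxy responds. This is a minimal sketch: it only assumes the instance serves the `/stats` endpoint documented in the explorer section over HTTP(S), and the domain in `main` is an example to replace with your own.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// healthCheck performs a simple smoke test against a running gridproxy
// instance by requesting the /stats endpoint and checking for HTTP 200.
func healthCheck(baseURL string) error {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(baseURL + "/stats")
	if err != nil {
		return fmt.Errorf("proxy unreachable: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}

func main() {
	// Example: check a deployed instance (replace with your own domain).
	if err := healthCheck("https://gridproxy.dev.grid.tf"); err != nil {
		fmt.Println("health check failed:", err)
	} else {
		fmt.Println("proxy is up")
	}
}
```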
+ +## To upgrade the machine + +- Just replace the binary with the new one and run + +```bash +systemctl restart gridproxy-server.service +``` + +- If you have changed `/etc/systemd/system/gridproxy-server.service`, you have to run this command first + +```bash +systemctl daemon-reload +``` + +## Dockerfile + +To build and run the Dockerfile: + +```bash +docker build -t threefoldtech/gridproxy . +docker run --name gridproxy -e POSTGRES_HOST="127.0.0.1" -e POSTGRES_PORT="5432" -e POSTGRES_DB="db" -e POSTGRES_USER="postgres" -e POSTGRES_PASSWORD="password" -e MNEMONICS="" threefoldtech/gridproxy +``` + +## Update helm package + +- Do `helm lint charts/gridproxy` +- Regenerate the packages: `helm package -u charts/gridproxy` +- Regenerate index.yaml: `helm repo index --url https://threefoldtech.github.io/tfgridclient_proxy/ .` +- Push your changes + +## Install the chart using helm package + +- Add the repo to your helm + + ```bash + helm repo add gridproxy https://threefoldtech.github.io/tfgridclient_proxy/ + ``` + +- Install the chart + + ```bash + helm install gridproxy/gridproxy + ``` diff --git a/collections/developers/proxy/proxy.md new file mode 100644 index 0000000..7dc936c --- /dev/null +++ b/collections/developers/proxy/proxy.md @@ -0,0 +1,149 @@ +

Introducing Grid Proxy

+ +

Table of Content

+ +- [About](#about) +- [How to Use the Project](#how-to-use-the-project) +- [Used Technologies \& Prerequisites](#used-technologies--prerequisites) +- [Start for Development](#start-for-development) +- [Setup for Production](#setup-for-production) +- [Get and Install the Binary](#get-and-install-the-binary) +- [Add as a Systemd Service](#add-as-a-systemd-service) + +*** + + + +## About + +The TFGrid client Proxy acts as an interface to access information about the grid. It supports features such as filtering, limits, and pagination to query the various entities on the grid, like nodes, contracts and farms. Additionally, the proxy can contact the required twin ID to retrieve stats about the relevant objects and perform ZOS calls. + +The proxy is used as the backend of several ThreeFold projects, like: + +- [Dashboard](../../dashboard/dashboard.md) + + + +## How to Use the Project + +If you don't want to set up your own instance, you can use one of the live instances. Each works against a different TFChain network. + +- Dev network: + - Swagger: +- Qa network: + - Swagger: +- Test network: + - Swagger: +- Main network: + - Swagger: + +Or follow the [development guide](#start-for-development) to run yours. +By default, the instance runs against devnet. To change that, you will need to configure it when running the server. + +> Note: You may face some differences between instances. That is normal, because each network is at a different stage of development and works correctly with the other parts of the Grid on the same network. + + +## Used Technologies & Prerequisites + +1. **GoLang**: The two main parts of the project are written in `Go 1.17`; alternatively, you can just download the compiled binaries from the GitHub [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) +2. **Postgresql**: Used to load the TFGrid DB +3. **Docker**: Used to containerize the running services such as Postgres and Redis. +4. 
**Mnemonics**: Secret seeds for a dummy identity to use for the relay client. + +For more about the prerequisites and how to set up and configure them, follow the [Setup guide](./setup.md). + + + +## Start for Development + +To start the services for development or testing, first make sure you have all the [Prerequisites](#used-technologies--prerequisites). + +- Clone this repo + + ```bash + git clone https://github.com/threefoldtech/tfgrid-sdk-go.git + cd tfgrid-sdk-go/grid-proxy + ``` + +- The `Makefile` has all that you need to deal with the Db, Explorer, Tests, and Docs. + + ```bash + make help # list all the available subcommands. + ``` + +- For a quick test explorer server: + + ```bash + make all-start e= + ``` + + Now you can access the server at `http://localhost:8080` +- Run the tests + + ```bash + make test-all + ``` + +- Generate the docs: + + ```bash + make docs + ``` + +To run in a development environment, see [here](./db_testing.md) how to generate a test db or load a db dump, then use: + +```sh +go run cmds/proxy_server/main.go --address :8080 --log-level debug -no-cert --postgres-host 127.0.0.1 --postgres-db tfgrid-graphql --postgres-password postgres --postgres-user postgres --mnemonics +``` + +Then visit `http://localhost:8080/` + +For more illustrations about the commands needed to work on the project, see the section [Commands](./commands.md). For more info about the project structure and contribution guidelines, check the section [Contributions](./contributions.md). + + + +## Setup for Production + +## Get and Install the Binary + +- You can either build the project: + + ```bash + make build + chmod +x cmd/proxy_server/server \ + && mv cmd/proxy_server/server /usr/local/bin/gridproxy-server + ``` + +- Or download a release: + Check the [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) page and edit the next command with the chosen version. 
+ + ```bash + wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download/v1.6.7-rc2/tfgridclient_proxy_1.6.7-rc2_linux_amd64.tar.gz \ + && tar -xzf tfgridclient_proxy_1.6.7-rc2_linux_amd64.tar.gz \ + && chmod +x server \ + && mv server /usr/local/bin/gridproxy-server + ``` + +## Add as a Systemd Service + +- Create the service file + + ```bash + cat << EOF > /etc/systemd/system/gridproxy-server.service + [Unit] + Description=grid proxy server + After=network.target + + [Service] + ExecStart=gridproxy-server --domain gridproxy.dev.grid.tf --email omar.elawady.alternative@gmail.com -ca https://acme-v02.api.letsencrypt.org/directory --substrate wss://tfchain.dev.grid.tf/ws --postgres-host 127.0.0.1 --postgres-db db --postgres-password password --postgres-user postgres --mnemonics + Type=simple + Restart=always + User=root + Group=root + + [Install] + WantedBy=multi-user.target + Alias=gridproxy.service + EOF + ``` + diff --git a/collections/developers/proxy/proxy_readme.md b/collections/developers/proxy/proxy_readme.md new file mode 100644 index 0000000..aaf4266 --- /dev/null +++ b/collections/developers/proxy/proxy_readme.md @@ -0,0 +1,25 @@ +

Grid Proxy

+ +Welcome to the *Grid Proxy* section of the TFGrid Manual! + +In this comprehensive guide, we delve into the intricacies of the ThreeFold Grid Proxy, a fundamental component that empowers the ThreeFold Grid ecosystem. + +This section is designed to provide users, administrators, and developers with a detailed understanding of the TFGrid Proxy, offering step-by-step instructions for its setup, essential commands, and insights into its various functionalities. + +The Grid Proxy plays a pivotal role in facilitating secure and efficient communication between nodes within the ThreeFold Grid, contributing to the decentralized and autonomous nature of the network. + +Whether you are a seasoned ThreeFold enthusiast or a newcomer exploring the decentralized web, this manual aims to be your go-to resource for navigating the ThreeFold Grid Proxy landscape. + +To assist you on your journey, we have organized the content into distinct chapters below, covering everything from initial setup procedures and database testing to practical commands, contributions, and insights into the ThreeFold Explorer and the Grid Proxy Database functionalities. + +

Table of Contents

+ +- [Introducing Grid Proxy](./proxy.md) +- [Setup](./setup.md) +- [DB Testing](./db_testing.md) +- [Commands](./commands.md) +- [Contributions](./contributions.md) +- [Explorer](./explorer.md) +- [Database](./database.md) +- [Production](./production.md) +- [Release](./release.md) \ No newline at end of file diff --git a/collections/developers/proxy/release.md b/collections/developers/proxy/release.md new file mode 100644 index 0000000..5f5fe84 --- /dev/null +++ b/collections/developers/proxy/release.md @@ -0,0 +1,32 @@ +

Release Grid-Proxy

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Steps](#steps) +- [Debugging](#debugging) + +*** + +## Introduction + +We show the steps to release a new version of the Grid Proxy. + +## Steps + +To release a new version of the Grid-Proxy component, follow these steps: + +Update the `appVersion` field in the `charts/Chart.yaml` file. This field should reflect the new version number of the release. + +The release process includes generating and pushing a Docker image with the latest GitHub tag. This step is automated through the `gridproxy-release.yml` workflow. + +Trigger the `gridproxy-release.yml` workflow by pushing the desired tag to the repository. This will initiate the workflow, which will generate the Docker image based on the tag and push it to the appropriate registry. + +## Debugging +In the event that the workflow does not run automatically after pushing the tag and making the release, you can manually execute it using the GitHub Actions interface. Follow these steps: + +Go to the [GitHub Actions page](https://github.com/threefoldtech/tfgrid-sdk-go/actions/workflows/gridproxy-release.yml) for the Grid-Proxy repository. + +Locate the workflow named gridproxy-release.yml. + +Trigger the workflow manually by selecting the "Run workflow" option. \ No newline at end of file diff --git a/collections/developers/proxy/setup.md b/collections/developers/proxy/setup.md new file mode 100644 index 0000000..fa8d07f --- /dev/null +++ b/collections/developers/proxy/setup.md @@ -0,0 +1,50 @@ +

Setup

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Install Golang](#install-golang) +- [Docker](#docker) +- [Postgres](#postgres) +- [Get Mnemonics](#get-mnemonics) + +*** + +## Introduction + +We show how to set up the grid proxy. + +## Install Golang + +To install Golang, you can follow the official [guide](https://go.dev/doc/install). + +## Docker + +Docker is useful for running the TFGridDB in a container environment. Read this to [install Docker engine](../../system_administrators/computer_it_basics/docker_basics.md#install-docker-desktop-and-docker-engine). + +Note: it is necessary to follow step #2 in the previous article to run docker without sudo. If you want to avoid that, edit the docker commands in the `Makefile` and add sudo. + +## Postgres + +If you have docker installed, you can run postgres in a container with: + +```bash +make db-start +``` + +Then you can either load a dump of the database if you have one: + +```bash +make db-dump p=~/dump.sql +``` + +Or, more easily, you can fill the database tables with randomly generated data using the script `tools/db/generate.go`. To do that, run: + +```bash +make db-fill +``` + +## Get Mnemonics + +1. Install the [polkadot extension](https://github.com/polkadot-js/extension) on your browser. +2. Create a new account from the extension. It is important to save the seeds. diff --git a/collections/developers/tfchain/farming_policies.md new file mode 100644 index 0000000..e997b8f --- /dev/null +++ b/collections/developers/tfchain/farming_policies.md @@ -0,0 +1,94 @@ +

Farming Policies

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Farming Policy Fields](#farming-policy-fields) +- [Limits on linked policy](#limits-on-linked-policy) +- [Creating a Policy](#creating-a-policy) +- [Linking a policy to a Farm](#linking-a-policy-to-a-farm) + +*** + +## Introduction + +A farming policy defines how farming rewards are handed out for nodes. Every node has a farming policy attached. A farming policy is either linked to a farm, in which case new nodes are given the farming policy of the farm they are in once they register themselves. Alternatively, a farming policy can be a "default". These are not attached to a farm, but instead they are used for nodes registered in farms which don't have a farming policy. Multiple defaults can exist at the same time, and the most fitting should be chosen. + +## Farming Policy Fields + +A farming policy has the following fields: + +- id (used to link policies) +- name +- Default. This indicates if the policy can be used by any new node (if the parent farm does not have a dedicated attached policy). Essentially, a `Default` policy serves as a base which can be overridden per farm by linking a non-default policy to said farm. +- Reward in TFT per CU, SU, NU and IPv4 +- Minimal uptime needed in integer format (example 995) +- Policy end (after this block number the policy cannot be linked to new farms any more) +- If this policy is immutable or not. Immutable policies can never be changed again + +Additionally, we also use the following fields, though those are only useful for `Default` farming policies: + +- Node needs to be certified +- Farm needs to be certified (with certification level, which will be changed to an enum). + +In case a farming policy is not attached to a farm, new nodes will pick the most appropriate farming policy from the default ones. To decide which one to pick, they should be considered in order with the most restrictive first until one matches.
That means: + +- First check for the policy with the highest farming certification (in the current case gold) and certified nodes +- Then check for a policy with the highest farming certification (in the current case gold) and non-certified nodes +- Check for a policy without farming certification but certified nodes +- Last, check for a policy without any kind of certification + +Important here is that certification of a node only happens after it comes online for the first time. As such, when a node gets certified, farming certification needs to be re-evaluated, but only if the currently attached farming policy on the node is a `Default` policy (as specifically linked policies have priority over default ones). When evaluating again, we first consider if we are eligible for the farming policy linked to the farm, if any. + +## Limits on linked policy + +When a council member attaches a policy to a farm, limits can be set. These limits define how much a policy can be used for nodes before it becomes unusable and gets removed. The limits currently are: + +- Farming Policy ID: the ID of the farming policy which we want to limit to a farm. +- CU. Every time a node is added to the farm, its CU is calculated and deducted from this amount. If the amount drops below 0, the maximum amount of CU that can be attached to this policy is reached. +- SU. Every time a node is added to the farm, its SU is calculated and deducted from this amount. If the amount drops below 0, the maximum amount of SU that can be attached to this policy is reached. +- End date. After this date the policy is not effective anymore and can't be used. It is removed from the farm and a default policy is used. +- Certification. If set, only certified nodes can get this policy. Non-certified nodes get a default policy. + +Once a limit is reached, the farming policy is removed from the farm, so new nodes will get one of the default policies until a new policy is attached to the farm.
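The CU/SU bookkeeping described above can be sketched in a few lines of Go. This is an illustrative model only; the `Limits` struct and `attachNode` function are our own names, not the actual TFChain types:

```go
package main

import "fmt"

// Limits mirrors the CU/SU limit fields described above
// (illustrative struct, not the actual TFChain type).
type Limits struct {
	CU, SU int64 // remaining CU/SU that may still attach under this policy
}

// attachNode deducts a newly added node's CU and SU from the limits and
// reports whether the policy is still usable afterwards.
func attachNode(l *Limits, nodeCU, nodeSU int64) bool {
	l.CU -= nodeCU
	l.SU -= nodeSU
	// once either amount drops below 0, the limit is reached and the
	// policy is removed from the farm
	return l.CU >= 0 && l.SU >= 0
}

func main() {
	l := &Limits{CU: 100, SU: 60}
	fmt.Println(attachNode(l, 40, 20)) // still within limits
	fmt.Println(attachNode(l, 70, 20)) // CU drops below 0: limit reached
}
```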
+ +## Creating a Policy + +A council member can create a Farming Policy (DAO) in the following way: + +1: Open [PolkadotJS](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/extrinsics) apps on the corresponding network and go to `Extrinsics` +2: Now select the account to propose from (should be an account that's a council member). +3: Select as action `dao` -> `propose` +4: Set a `threshold` (amount of farmers to vote) +5: Select the action `tfgridModule` -> `createFarmingPolicy` and fill in all the fields. +6: Create a forum post with the details of the farming policy and fill in the link of that post in the `link` field +7: Give it a good `description`. +8: Duration is optional (by default it's 7 days). A proposal cannot be closed before the duration is "expired". If you wish to set a duration, it should be expressed in number of blocks from `now`. For example, 2 hours is equal to 1200 blocks (blocktime is 6 seconds); in this case, the duration should be filled in as `1200`. +9: If all the fields are filled in, click `Propose`; now Farmers can vote. A proposal can be closed manually once there are enough votes AND the proposal is expired. To close, go to extrinsics -> `dao` -> `close` -> fill in the proposal hash and index (both can be found in chainstate). + +All (su, cu, nu, ipv4) values should be expressed in USD units. Minimal uptime should be expressed as an integer that represents a percentage (example: `95`). + +Policy end is optional (0 or some block number in the future). This is used for expiration. + +For reference: + +![image](./img/create_policy.png) + +## Linking a policy to a Farm + +First identify the policy ID to link to a farm. You can check for farming policies in [chainstate](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/chainstate) -> `tfgridModule` -> `farmingPoliciesMap`: start with ID 1 and increment by 1 until you find the farming policy which was created when the proposal was expired and closed.
+ +1: Open [PolkadotJS](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/extrinsics) apps on the corresponding network and go to `Extrinsics` +2: Now select the account to propose from (should be an account that's a council member). +3: Select as action `dao` -> `propose` +4: Set a `threshold` (amount of farmers to vote) +5: Select the action `tfgridModule` -> `attachPolicyToFarm` and fill in all the fields (FarmID and Limits). +6: Limits contains a `farming_policy_id` (required) and cu, su, end, node count (which are all optional). It also contains `node_certification`; if this is set to true, only certified nodes can have this policy. +7: Create a forum post with the details of why we want to link that farm to that policy and fill in the link of that post in the `link` field +8: Give it a good `description`. +9: Duration is optional (by default it's 7 days). A proposal cannot be closed before the duration is "expired". If you wish to set a duration, it should be expressed in number of blocks from `now`. For example, 2 hours is equal to 1200 blocks (blocktime is 6 seconds); in this case, the duration should be filled in as `1200`. +10: If all the fields are filled in, click `Propose`; now Farmers can vote. A proposal can be closed manually once there are enough votes AND the proposal is expired. To close, go to extrinsics -> `dao` -> `close` -> fill in the proposal hash and index (both can be found in chainstate).
+ +For reference: + +![image](./img/attach.png) diff --git a/collections/developers/tfchain/img/TF.png b/collections/developers/tfchain/img/TF.png new file mode 100644 index 0000000..528b5d9 Binary files /dev/null and b/collections/developers/tfchain/img/TF.png differ diff --git a/collections/developers/tfchain/img/attach.png b/collections/developers/tfchain/img/attach.png new file mode 100644 index 0000000..96e3c5f Binary files /dev/null and b/collections/developers/tfchain/img/attach.png differ diff --git a/collections/developers/tfchain/img/close_proposal.png b/collections/developers/tfchain/img/close_proposal.png new file mode 100644 index 0000000..07e66a2 Binary files /dev/null and b/collections/developers/tfchain/img/close_proposal.png differ diff --git a/collections/developers/tfchain/img/create_contract.png b/collections/developers/tfchain/img/create_contract.png new file mode 100644 index 0000000..f082e80 Binary files /dev/null and b/collections/developers/tfchain/img/create_contract.png differ diff --git a/collections/developers/tfchain/img/create_policy.png b/collections/developers/tfchain/img/create_policy.png new file mode 100644 index 0000000..fa344e7 Binary files /dev/null and b/collections/developers/tfchain/img/create_policy.png differ diff --git a/collections/developers/tfchain/img/create_provider.png b/collections/developers/tfchain/img/create_provider.png new file mode 100644 index 0000000..e8668a2 Binary files /dev/null and b/collections/developers/tfchain/img/create_provider.png differ diff --git a/collections/developers/tfchain/img/propose_approve.png b/collections/developers/tfchain/img/propose_approve.png new file mode 100644 index 0000000..667f66f Binary files /dev/null and b/collections/developers/tfchain/img/propose_approve.png differ diff --git a/collections/developers/tfchain/img/proposed_approve.png b/collections/developers/tfchain/img/proposed_approve.png new file mode 100644 index 0000000..5202c4c Binary files /dev/null and 
b/collections/developers/tfchain/img/proposed_approve.png differ diff --git a/collections/developers/tfchain/img/query_provider.png b/collections/developers/tfchain/img/query_provider.png new file mode 100644 index 0000000..de66d4c Binary files /dev/null and b/collections/developers/tfchain/img/query_provider.png differ diff --git a/collections/developers/tfchain/img/service_contract_approve.png b/collections/developers/tfchain/img/service_contract_approve.png new file mode 100644 index 0000000..1e0d034 Binary files /dev/null and b/collections/developers/tfchain/img/service_contract_approve.png differ diff --git a/collections/developers/tfchain/img/service_contract_bill.png b/collections/developers/tfchain/img/service_contract_bill.png new file mode 100644 index 0000000..55e84fe Binary files /dev/null and b/collections/developers/tfchain/img/service_contract_bill.png differ diff --git a/collections/developers/tfchain/img/service_contract_cancel.png b/collections/developers/tfchain/img/service_contract_cancel.png new file mode 100644 index 0000000..7669510 Binary files /dev/null and b/collections/developers/tfchain/img/service_contract_cancel.png differ diff --git a/collections/developers/tfchain/img/service_contract_create.png b/collections/developers/tfchain/img/service_contract_create.png new file mode 100644 index 0000000..69ec62a Binary files /dev/null and b/collections/developers/tfchain/img/service_contract_create.png differ diff --git a/collections/developers/tfchain/img/service_contract_id.png b/collections/developers/tfchain/img/service_contract_id.png new file mode 100644 index 0000000..e49c396 Binary files /dev/null and b/collections/developers/tfchain/img/service_contract_id.png differ diff --git a/collections/developers/tfchain/img/service_contract_reject.png b/collections/developers/tfchain/img/service_contract_reject.png new file mode 100644 index 0000000..6235530 Binary files /dev/null and 
b/collections/developers/tfchain/img/service_contract_reject.png differ diff --git a/collections/developers/tfchain/img/service_contract_set_fees.png b/collections/developers/tfchain/img/service_contract_set_fees.png new file mode 100644 index 0000000..6cfa91a Binary files /dev/null and b/collections/developers/tfchain/img/service_contract_set_fees.png differ diff --git a/collections/developers/tfchain/img/service_contract_set_metadata.png b/collections/developers/tfchain/img/service_contract_set_metadata.png new file mode 100644 index 0000000..e472145 Binary files /dev/null and b/collections/developers/tfchain/img/service_contract_set_metadata.png differ diff --git a/collections/developers/tfchain/img/service_contract_state.png b/collections/developers/tfchain/img/service_contract_state.png new file mode 100644 index 0000000..e824552 Binary files /dev/null and b/collections/developers/tfchain/img/service_contract_state.png differ diff --git a/collections/developers/tfchain/img/service_contract_twin_from_account.png b/collections/developers/tfchain/img/service_contract_twin_from_account.png new file mode 100644 index 0000000..293bad2 Binary files /dev/null and b/collections/developers/tfchain/img/service_contract_twin_from_account.png differ diff --git a/collections/developers/tfchain/img/vote_proposal.png b/collections/developers/tfchain/img/vote_proposal.png new file mode 100644 index 0000000..16111a0 Binary files /dev/null and b/collections/developers/tfchain/img/vote_proposal.png differ diff --git a/collections/developers/tfchain/introduction.md b/collections/developers/tfchain/introduction.md new file mode 100644 index 0000000..a983b68 --- /dev/null +++ b/collections/developers/tfchain/introduction.md @@ -0,0 +1,57 @@ +

ThreeFold Chain

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Deployed instances](#deployed-instances) +- [Create a TFChain twin](#create-a-tfchain-twin) +- [Get your twin ID](#get-your-twin-id) + +*** + +## Introduction + +ThreeFold blockchain (aka TFChain) serves as a registry for Nodes, Farms, Digital Twins and Smart Contracts. +It is the backbone of [ZOS](https://github.com/threefoldtech/zos) and other components. + +## Deployed instances + +- Development network (Devnet): + + - Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.dev.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.dev.grid.tf#/explorer) + - Websocket url: `wss://tfchain.dev.grid.tf` + - GraphQL UI: [https://graphql.dev.grid.tf/graphql](https://graphql.dev.grid.tf/graphql) + +- QA testing network (QAnet): + + - Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.qa.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.qa.grid.tf#/explorer) + - Websocket url: `wss://tfchain.qa.grid.tf` + - GraphQL UI: [https://graphql.qa.grid.tf/graphql](https://graphql.qa.grid.tf/graphql) + +- Test network (Testnet): + + - Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.test.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.test.grid.tf#/explorer) + - Websocket url: `wss://tfchain.test.grid.tf` + - GraphQL UI: [https://graphql.test.grid.tf/graphql](https://graphql.test.grid.tf/graphql) + +- Production network (Mainnet): + + - Polkadot UI: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/explorer](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/explorer) + - Websocket url: `wss://tfchain.grid.tf` + - GraphQL UI: [https://graphql.grid.tf/graphql](https://graphql.grid.tf/graphql) + +## Create a TFChain twin + +A twin is a unique identifier linked to a specific account on a given TFChain network.
+There are two ways to create a twin: + +- With the [Dashboard](../../dashboard/wallet_connector.md) + - a twin is automatically generated while creating a TFChain account +- With the TFConnect app + - a twin is automatically generated while creating a farm (in this case the twin will be created on mainnet) + +## Get your twin ID + +You can retrieve the twin ID associated with your account by going to `Developer` -> `Chain state` -> `tfgridModule` -> `twinIdByAccountID()`. + +![service_contract_twin_from_account](img/service_contract_twin_from_account.png) diff --git a/collections/developers/tfchain/tfchain.md b/collections/developers/tfchain/tfchain.md new file mode 100644 index 0000000..a575535 --- /dev/null +++ b/collections/developers/tfchain/tfchain.md @@ -0,0 +1,95 @@ +

ThreeFold Chain

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Twins](#twins) +- [Farms](#farms) +- [Nodes](#nodes) +- [Node Contract](#node-contract) +- [Rent Contract](#rent-contract) +- [Name Contract](#name-contract) +- [Contract billing](#contract-billing) +- [Contract locking](#contract-locking) +- [Contract grace period](#contract-grace-period) +- [DAO](#dao) +- [Farming Policies](#farming-policies) +- [Node Connection price](#node-connection-price) +- [Node Certifiers](#node-certifiers) + +*** + +## Introduction + +ThreeFold Chain (TFChain) is the base layer for everything that interacts with the grid. Nodes, farms and users are registered on the chain. It plays the central role in achieving decentralised consensus between a user and a Node to deploy a certain workload. A contract can be created on the chain that is essentially an agreement between a node and a user. + +## Twins + +A twin is the central Identity object that is used for every entity that lives on the grid. A twin optionally has an IPV6 planetary network address which can be used for communication between twins regardless of their location. A twin is coupled to a private/public keypair on chain. This keypair can hold TFT on TF Chain. + +## Farms + +A farm must be created before a Node can be booted. Every farm needs to have a unique name and is linked to the Twin that creates the farm. Once a farm is created, a unique ID is generated. This ID can be provided to the boot image of a Node. + +## Nodes + +When a node is booted for the first time, it registers itself on the chain and a unique identity is generated for this Node. + +## Node Contract + +A node contract is a contract between a user and a Node to deploy a certain workload.
The contract is specified as follows: + +``` +{ + "contract_id": auto generated, + "node_id": unique id of the node, + "deployment_data": some additional deployment data, + "deployment_hash": hash of the deployment definition signed by the user, + "public_ips": number of public ips to attach to the deployment contract +} +``` + +We don't save the raw workload definition on the chain but only a hash of the definition. After the contract is created, the user must send the raw deployment to the node specified in the contract. They can find where to send this data by looking up the Node's twin and contacting that twin over the planetary network. + +## Rent Contract + +A rent contract is also a contract between a user and a Node, but instead of being able to reserve a part of the node's capacity, the full capacity is rented. Once a rent contract is created on a Node by a user, only this user can deploy node contracts on this specific node. A discount of 50% is given if the user wishes to rent the full capacity of a node by creating a rent contract. All node contracts deployed on a node where a user has a rent contract are free of charge except for the public IPs, which can be added on a node contract. + +## Name Contract + +A name contract is a contract that specifies a unique name to be used on the grid's webgateways. Once a name contract is created, this name can be used as an entrypoint for an application on the grid. + +## Contract billing + +Every contract is billed every hour on the chain; the amount due is deducted from the user's wallet every 24 hours or when the user cancels the contract.
The total amount accrued in those 24 hours is sent to the following destinations: + +- 10% goes to the ThreeFold foundation +- 5% goes to the staking pool wallet (to be implemented in a later phase) +- 50% goes to a certified sales channel +- 35% of the TFT gets burned + +See [pricing](../../../knowledge_base/cloud/pricing/pricing.md) for more information on how the cost for a contract is calculated. + +## Contract locking + +To avoid overloading the chain with transfer events, we choose to lock the amount due for a contract every hour, and after 24 hours unlock the amount and deduct it in one go. This lock is saved on the user's account; if the user has multiple contracts, the locked amounts are stacked. + +## Contract grace period + +When the owner of a contract runs out of funds in his wallet to pay for his deployment, the contract goes into a Grace Period state. The deployment, whatever that might be, will be inaccessible to the user during this period. When the wallet is funded with TFT again, the contract goes back to a normal operating state. If the grace period runs out (by default 2 weeks), the user's deployment and data will be deleted from the node. + +## DAO + +See [DAO](../../dashboard/tfchain/tf_dao.md) for more information on the DAO on TF Chain. + +## Farming Policies + +See [farming_policies](farming_policies.md) for more information on the farming policies on TF Chain. + +## Node Connection price + +A connection price is set for every new Node that boots on the Grid. This connection price influences the amount of TFT farmed in a period. The connection price set on a node is permanent. The DAO can propose the increase / decrease of the connection price. At the time of writing, the connection price is set to $0.08. When the DAO proposes a connection price and the vote is passed, new nodes will attach to the new connection price. + +## Node Certifiers + +Node certifiers are entities who are allowed to set a node's certification level to `Certified`.
The DAO can propose to add / remove entities that can certify nodes. This is useful for allowing approved resellers of ThreeFold nodes to mark nodes as Certified. A certified node farms 25% more tokens than a `Diy` node. \ No newline at end of file diff --git a/collections/developers/tfchain/tfchain_external_service_contract.md b/collections/developers/tfchain/tfchain_external_service_contract.md new file mode 100644 index 0000000..992186a --- /dev/null +++ b/collections/developers/tfchain/tfchain_external_service_contract.md @@ -0,0 +1,142 @@ +

External Service Contract: How to set and execute

+

Table of Contents

+ +- [Introduction](#introduction) +- [Step 1: Create the contract and get its unique ID](#step-1-create-contract--get-unique-id) +- [Step 2: Fill contract](#step-2-fill-contract) +- [Step 3: Both parties approve contract](#step-3-both-parties-approve-contract) +- [Step 4: Bill for the service](#step-4-bill-for-the-service) +- [Step 5: Cancel the contract](#step-5-cancel-the-contract) + +*** + + +# Introduction + +It is now possible to create a generic contract between two TFChain users (without restriction of account type) for some external service and bill for it. + +The initial scenario is when two parties, a service provider and a consumer of the service, want to use TFChain to automatically handle the billing/payment process for an agreement (in TFT) they want to make for a service which is external to the grid. +This is a more direct and generic feature compared to the initial rewarding model, where a service provider (or solution provider) receives TFT from a rewards distribution process, linked to a node contract and based on cloud capacity consumption, which follows specific billing rules. + +The initial requirements are: +- Both the service provider and the consumer need to have their respective twins created on TFChain (if not, see [here](tfchain.md#create-a-tfchain-twin) how to do it) +- The consumer account needs to be funded (lack of funds will simply result in the cancellation of the contract when billed) + +In the following steps, we detail the sequence of extrinsics that need to be called in the TFChain Polkadot portal to set up and execute such a contract. + +Make sure to use the right [links](tfchain.md#deployed-instances) depending on the targeted network. + + +# Step 1: Create contract / Get unique ID + +## Create service contract + +The contract creation can be initiated by either the service provider or the consumer.
+In the TFChain Polkadot portal, the party who initiates the contract should go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractCreate()`, using the account they intend to use in the contract, and select the corresponding service and consumer accounts before submitting the transaction. + +![service_contract_create](img/service_contract_create.png) + +Once executed, the service contract is `Created` between the two parties and a unique ID is generated. + +## Last service contract ID + +To get the last generated service contract ID, go to `Developer` -> `Chain state` -> `smartContractModule` -> `serviceContractID()`. + +![service_contract_id](img/service_contract_id.png) + +## Parse service contract + +To get the corresponding contract details, go to `Developer` -> `Chain state` -> `smartContractModule` -> `serviceContracts()` and provide the contract ID. +You should see the following details: + +![service_contract_state](img/service_contract_state.png) + +Check if the contract fields are correct, especially the twin IDs of both the service provider and the consumer, to be sure you have the right contract ID, referenced as `serviceContractId`. + +## Wrong contract ID? + +If the twin IDs on the service contract fields are wrong ([how to get my twin ID?](tfchain.md#get-your-twin-id)), it means the contract does not correspond to the last created contract. +In this case, parse the latest contracts by decreasing `serviceContractId` and try to identify the right one; otherwise, the contract was simply not created, so you should repeat the creation process and evaluate the error log. + + +# Step 2: Fill contract + +Once created, the service contract must be filled with its "per hour" fees: +- `baseFee` is the constant "per hour" price (in TFT) for the service. +- `variableFee` is the maximum "per hour" amount (in TFT) that can be billed extra.
+ +To provide these values (only the service provider can set fees), go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractSetFees()` specifying `serviceContractId`. + +![service_contract_set_fees](img/service_contract_set_fees.png) + +Some metadata (the description of the service, for example) must be filled in a similar way (`Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractSetMetadata()`). +In this case, either the service provider or the consumer can set the metadata. + +![service_contract_set_metadata](img/service_contract_set_metadata.png) + +The agreement will be automatically considered `Ready` when both metadata and fees are set (`metadata` not empty and `baseFee` greater than zero). +Note that as long as this condition is not reached, both extrinsics can still be called to modify the agreement. +You can check the contract status at each step of the flow by parsing it as shown [here](#parse-service-contract). + + +# Step 3: Both parties approve contract + +Now that the agreement is ready, the contract can be submitted for approval. +To approve the agreement, go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractApprove()` specifying `serviceContractId`. + +![service_contract_approve](img/service_contract_approve.png) + +To reject the agreement, go to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractReject()` specifying `serviceContractId`. + +![service_contract_reject](img/service_contract_reject.png) + +The contract needs to be explicitly `Approved` by both the service provider and the consumer to be ready for billing. +Before reaching this state, if one of the parties decides to call the rejection extrinsic, it will instantly lead to the cancellation of the contract (and its permanent removal). + + +# Step 4: Bill for the service + +Once the contract is approved by both parties, it can be billed.
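The calculation the chain performs on each bill, using the fees set in step 2, can be modeled in a few lines of Go. This is an illustrative sketch under the fee semantics described in this document, not the actual chain implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// billAmount models the amount-due calculation: baseFee and variableFee
// are "per hour" values in TFT, elapsedSeconds is the time since the
// last effective billing, bounded by one hour (illustrative model only).
func billAmount(baseFee, variableFee, variableAmount float64, elapsedSeconds int64) (float64, error) {
	if elapsedSeconds > 3600 {
		elapsedSeconds = 3600 // extra time is not billed
	}
	t := float64(elapsedSeconds)
	// drain protection: the variable part may not exceed the prorated variableFee
	if variableAmount > variableFee*t/3600 {
		return 0, errors.New("variable amount exceeds prorated variable fee")
	}
	return baseFee*t/3600 + variableAmount, nil
}

func main() {
	// baseFee 10 TFT/h, variableFee 5 TFT/h, billed after 30 minutes
	amount, err := billAmount(10, 5, 2, 1800)
	fmt.Println(amount, err) // 10*0.5 + 2 = 7 TFT

	_, err = billAmount(10, 5, 4, 1800) // 4 > 5*0.5: rejected
	fmt.Println(err != nil)
}
```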
+ +## Send bill to consumer + +Only the service provider can bill the consumer, by going to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractBill()` and specifying `serviceContractId` and billing information such as `variableAmount` and some `metadata`. + +![service_contract_bill](img/service_contract_bill.png) + +## Billing frequency + +⚠️ Important: because a service should not charge the user if it doesn't work, it is required that bills be sent at intervals of less than 1 hour. +Any bigger interval will result in a bounded 1 hour bill (in other words, extra time will not be billed). +It is the service provider's responsibility to bill at the right frequency! + +## Amount due calculation + +When the bill is received, the chain calculates the bill amount based on the agreement values as follows: + +~~~ +amount = baseFee * T / 3600 + variableAmount +~~~ + +where `T` is the elapsed time, in seconds and bounded by 3600 (see [above](#billing-frequency)), since the last effective billing operation occurred. + +## Protection against draining + +Note that if `variableAmount` is too high (i.e. `variableAmount > variableFee * T / 3600`), the billing extrinsic will fail. +The `variableFee` value in the contract is interpreted as being "per hour" and acts as a protection mechanism to avoid draining the consumer's funds. +Indeed, while it is technically possible for the service provider to send a bill every second, there would be no gain in doing so (besides uselessly overloading the chain). +So it is also the service provider's responsibility to set a suitable `variableAmount` according to the billing frequency! + +## Billing considerations + +Then, if all goes well and no error is dispatched after submitting the transaction, the consumer pays the due amount calculated from the bill (see calculation detail [above](#amount-due-calculation)). +In practice, the amount is transferred from the consumer twin account to the service twin account.
+Be aware that if the consumer is out of funds, the billing will fail AND the contract will automatically be canceled. + + +# Step 5: Cancel the contract + +At any moment after the contract is created, it can be canceled (and definitively removed). +Only the service provider or the consumer can do this, by going to `Developer` -> `Extrinsics` -> `smartContractModule` -> `serviceContractCancel()` specifying `serviceContractId`. + +![service_contract_cancel](img/service_contract_cancel.png) diff --git a/collections/developers/tfchain/tfchain_solution_provider.md b/collections/developers/tfchain/tfchain_solution_provider.md new file mode 100644 index 0000000..c027d16 --- /dev/null +++ b/collections/developers/tfchain/tfchain_solution_provider.md @@ -0,0 +1,81 @@ +

Solution Provider

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Changes to Contract Creation](#changes-to-contract-creation) +- [Creating a Provider](#creating-a-provider) +- [Council needs to approve a provider before it can be used](#council-needs-to-approve-a-provider-before-it-can-be-used) + +*** + +## Introduction + +> Note: While the solution provider program is still active, the plan is to discontinue the program in the near future. We will update the manual as we get more information. We currently do not accept new solution providers. + +A "solution" is something running on the grid, created by a community member. This can be brought forward to the council, who can vote on it to recognize it as a solution. On contract creation, a recognized solution can be referenced, in which case part of the payment goes toward the address coupled to the solution. On chain, a solution looks as follows: + +- Description (should be some text, limited in length. The limit should be rather low; if a longer one is desired, a link can be inserted. 160 characters should be enough). +- Up to 5 payout addresses, each with a payout percentage. This is the percentage of the payout received by the associated address. The amount is deducted from the payout to the treasury and specified as a percentage of the total contract cost. As such, the sum of these percentages can never exceed 50%. If this value is not 50%, the remainder is paid to the treasury. Example: 10% payout percentage to addr 1, 5% payout to addr 2. This means 15% goes to the 2 listed addresses combined and 35% goes to the treasury (instead of the usual 50%). The rest remains as is. If the cost were 10 TFT, 1 TFT would go to address 1, 0.5 TFT to address 2 and 3.5 TFT to the treasury, instead of the default 5 TFT to the treasury +- A unique code. This code is used to link a solution to the contract (numeric ID). + +This means contracts need to carry an optional solution code.
If the code is not specified (default), the 50% goes entirely to the treasury (as is always the case today). + +A solution can be created by calling the extrinsic `smartContractModule` -> `createSolutionProvider` with parameters: + +- description +- link (to website) +- list of providers + +Provider: + +- who (account id) +- take (the take this account should get), specified as an integer of max 50, e.g. 25 + +A forum post should be created with the details of the created solution provider, and the DAO can vote to approve it or not. If the solution provider gets approved, it can be referenced on contract creation. + +Note that a solution can be deleted. In this case, existing contracts fall back to the default behavior (i.e. if the code is not found -> default). + +## Changes to Contract Creation + +When creating a contract, a `solution_provider_id` can be passed. An error will be returned if an invalid or non-approved solution provider id is passed. + +## Creating a Provider + +Creating a provider is as easy as going to the [polkadotJS UI](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.dev.grid.tf#/extrinsics) (currently only on devnet). + +Select module `SmartContractModule` -> `createSolutionProvider(..)`. + +Fill in all the details. You can specify up to 5 target accounts, each with a take of the TFT generated from being a provider, up to a combined maximum of 50%. `Take` should be specified as an integer, e.g. `25`. 
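The take split described above can be sketched with a small helper. This is an illustration only, not part of TFChain or any SDK; the function and variable names are made up for this example:

```python
def split_payout(cost_tft, takes):
    """Split a contract payout between solution provider addresses and the treasury.

    takes: list of integer take percentages, one per payout address. The provider
    shares are deducted from the treasury's default 50% cut, so their sum may
    not exceed 50.
    """
    if sum(takes) > 50:
        raise ValueError("combined take may not exceed 50%")
    provider_payouts = [cost_tft * t / 100 for t in takes]
    treasury = cost_tft * (50 - sum(takes)) / 100
    return provider_payouts, treasury

# The example from the text: a 10 TFT contract with takes of 10% and 5%.
payouts, treasury = split_payout(10, [10, 5])
print(payouts, treasury)  # [1.0, 0.5] 3.5
```

With no solution provider referenced, the full default 50% goes to the treasury, matching the fallback behavior described above.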
+ +Once this object is created, a forum post should be created here: + +![create](./img/create_provider.png) + +## Council needs to approve a provider before it can be used + +First propose the solution to be approved: + +![propose_approve](./img/propose_approve.png) + +After submission it should look like this: + +![proposed_approved](./img/proposed_approve.png) + +Now another member of the council needs to vote: + +![vote](./img/vote_proposal.png) + +After enough votes are reached, it can be closed: + +![close](./img/close_proposal.png) + +If the close was executed without error, the solution is approved and ready to be used. + +Query the solution: `chainstate` -> `SmartContractModule` -> `solutionProviders` + +![query](./img/query_provider.png) + +Now the solution provider can be referenced on contract creation: + +![create](./img/create_contract.png) diff --git a/collections/developers/tfcmd/tfcmd.md b/collections/developers/tfcmd/tfcmd.md new file mode 100644 index 0000000..daa502a --- /dev/null +++ b/collections/developers/tfcmd/tfcmd.md @@ -0,0 +1,15 @@ +

# TFCMD

+ +TFCMD (`tfcmd`) is a command line interface for interacting with and developing on the ThreeFold Grid. + +Consult the [ThreeFoldTech TFCMD repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/grid-cli) for the latest updates. Make sure to read the [basics](../../system_administrators/getstarted/tfgrid3_getstarted.md). + +

## Table of Contents

+ +- [Getting Started](./tfcmd_basics.md) +- [Deploy a VM](./tfcmd_vm.md) +- [Deploy Kubernetes](./tfcmd_kubernetes.md) +- [Deploy ZDB](./tfcmd_zdbs.md) +- [Gateway FQDN](./tfcmd_gateway_fqdn.md) +- [Gateway Name](./tfcmd_gateway_name.md) +- [Contracts](./tfcmd_contracts.md) \ No newline at end of file diff --git a/collections/developers/tfcmd/tfcmd_basics.md b/collections/developers/tfcmd/tfcmd_basics.md new file mode 100644 index 0000000..8816eea --- /dev/null +++ b/collections/developers/tfcmd/tfcmd_basics.md @@ -0,0 +1,67 @@ +

# TFCMD Getting Started

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Installation](#installation) +- [Login](#login) +- [Commands](#commands) +- [Using TFCMD](#using-tfcmd) + +*** + +## Introduction + +This section covers the basics of how to set up and use TFCMD (`tfcmd`). + +TFCMD is available as binaries. Make sure to download the latest release and to stay up to date with new releases. + +## Installation + +An easy way to use TFCMD is to download and extract the TFCMD binaries to your path. + +- Download the latest release from [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) + - ``` + wget <binaries-url> + ``` +- Extract the binaries + - ``` + tar -xvf <binaries-file> + ``` +- Move `tfcmd` to any `$PATH` directory: + ```bash + mv tfcmd /usr/local/bin + ``` + +## Login + +Before interacting with the ThreeFold Grid with `tfcmd`, you should log in with your mnemonics and specify the grid network: + +```console +$ tfcmd login +Please enter your mnemonics: +Please enter grid network (main,test): +``` + +This validates your mnemonics and stores your mnemonics and network in your default configuration directory. +Check [UserConfigDir()](https://pkg.go.dev/os#UserConfigDir) for your default configuration directory. + +## Commands + +You can run the command `tfcmd help` at any time to access the help section. This will also display the available commands. + +| Command | Description | | ---------- | ---------------------------------------------------------- | | cancel | Cancel resources on Threefold grid | | completion | Generate the autocompletion script for the specified shell | | deploy | Deploy resources to Threefold grid | | get | Get a deployed resource from Threefold grid | | help | Help about any command | | login | Login with mnemonics to a grid network | | version | Get latest build tag | + +## Using TFCMD + +Once you've logged in, you can use commands to deploy workloads on the TFGrid. Read the next sections for more information on the different types of workloads available with TFCMD. 
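The login step above stores your mnemonics and network under the default configuration directory. Its per-OS resolution, mirroring the rules of Go's `os.UserConfigDir` that the manual links to, can be sketched as follows; this helper is illustrative only and not part of tfcmd:

```python
import posixpath

def user_config_dir(platform, env):
    """Approximate Go's os.UserConfigDir() resolution per platform.

    platform: "linux", "darwin" or "windows"; env: a dict standing in for the
    process environment, passed explicitly so the helper stays testable.
    """
    if platform == "windows":
        # Windows: the %AppData% directory.
        return env["AppData"]
    if platform == "darwin":
        # macOS: $HOME/Library/Application Support.
        return posixpath.join(env["HOME"], "Library", "Application Support")
    # Linux and other Unix systems: $XDG_CONFIG_HOME, falling back to ~/.config.
    return env.get("XDG_CONFIG_HOME") or posixpath.join(env["HOME"], ".config")

print(user_config_dir("linux", {"HOME": "/home/alice"}))  # /home/alice/.config
```

So on a typical Linux machine, the stored credentials land under `~/.config`.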
+ + diff --git a/collections/developers/tfcmd/tfcmd_contracts.md b/collections/developers/tfcmd/tfcmd_contracts.md new file mode 100644 index 0000000..bb14c5d --- /dev/null +++ b/collections/developers/tfcmd/tfcmd_contracts.md @@ -0,0 +1,99 @@ +

# Contracts

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Get](#get) + - [Get Contracts](#get-contracts) + - [Get Contract](#get-contract) +- [Cancel](#cancel) + - [Optional Flags](#optional-flags) + +*** + +## Introduction + +We explain how to handle contracts on the TFGrid with `tfcmd`. + +## Get + +### Get Contracts + +Get all contracts: + +```bash +tfcmd get contracts +``` + +Example: + +```console +$ tfcmd get contracts +5:13PM INF starting peer session=tf-1184566 twin=81 +Node contracts: +ID Node ID Type Name Project Name +50977 21 network vm1network vm1 +50978 21 vm vm1 vm1 +50980 14 Gateway Name gatewaytest gatewaytest + +Name contracts: +ID Name +50979 gatewaytest +``` + +### Get Contract + +Get a specific contract: + +```bash +tfcmd get contract <contract-id> +``` + +Example: + +```console +$ tfcmd get contract 50977 +5:14PM INF starting peer session=tf-1185180 twin=81 +5:14PM INF contract: +{ + "contract_id": 50977, + "twin_id": 81, + "state": "Created", + "created_at": 1702480020, + "type": "node", + "details": { + "nodeId": 21, + "deployment_data": "{\"type\":\"network\",\"name\":\"vm1network\",\"projectName\":\"vm1\"}", + "deployment_hash": "21adc91ef6cdc915d5580b3f12732ac9", + "number_of_public_ips": 0 + } +} +``` + +## Cancel + +Cancel specified contracts or all contracts. + +```bash +tfcmd cancel contracts <contract-id>... [Flags] +``` + +Example: + +```console +$ tfcmd cancel contracts 50856 50857 +5:17PM INF starting peer session=tf-1185964 twin=81 +5:17PM INF contracts canceled successfully +``` + +### Optional Flags + +- all: cancel all of the twin's contracts. + +Example: + +```console +$ tfcmd cancel contracts --all +5:17PM INF starting peer session=tf-1185964 twin=81 +5:17PM INF contracts canceled successfully +``` \ No newline at end of file diff --git a/collections/developers/tfcmd/tfcmd_gateway_fqdn.md b/collections/developers/tfcmd/tfcmd_gateway_fqdn.md new file mode 100644 index 0000000..538438f --- /dev/null +++ b/collections/developers/tfcmd/tfcmd_gateway_fqdn.md @@ -0,0 +1,87 @@ +

# Gateway FQDN

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Deploy](#deploy) + - [Required Flags](#required-flags) + - [Optional Flags](#optional-flags) +- [Get](#get) +- [Cancel](#cancel) + +*** + +## Introduction + +We explain how to use gateway fully qualified domain names on the TFGrid using `tfcmd`. + +## Deploy + +```bash +tfcmd deploy gateway fqdn [flags] +``` + +### Required Flags + +- name: name for the gateway deployment, also used for canceling the deployment. must be unique. +- node: node id to deploy gateway on. +- backends: list of backends the gateway will forward requests to. +- fqdn: FQDN pointing to the specified node. + +### Optional Flags + +- tls: add TLS passthrough option (default false). + +Example: + +```console +$ tfcmd deploy gateway fqdn -n gatewaytest --node 14 --backends http://93.184.216.34:80 --fqdn example.com +3:34PM INF deploying gateway fqdn +3:34PM INF gateway fqdn deployed +``` + +## Get + +```bash +tfcmd get gateway fqdn <gateway> +``` + +`<gateway>` is the name used when deploying the gateway-fqdn using tfcmd. + +Example: + +```console +$ tfcmd get gateway fqdn gatewaytest +2:05PM INF gateway fqdn: +{ + "NodeID": 14, + "Backends": [ + "http://93.184.216.34:80" + ], + "FQDN": "awady.gridtesting.xyz", + "Name": "gatewaytest", + "TLSPassthrough": false, + "Description": "", + "NodeDeploymentID": { + "14": 19653 + }, + "SolutionType": "gatewaytest", + "ContractID": 19653 +} +``` + +## Cancel + +```bash +tfcmd cancel <deployment-name> +``` + +`<deployment-name>` is the name of the deployment specified while deploying using tfcmd. + +Example: + +```console +$ tfcmd cancel gatewaytest +3:37PM INF canceling contracts for project gatewaytest +3:37PM INF gatewaytest canceled +``` \ No newline at end of file diff --git a/collections/developers/tfcmd/tfcmd_gateway_name.md b/collections/developers/tfcmd/tfcmd_gateway_name.md new file mode 100644 index 0000000..a4c8191 --- /dev/null +++ b/collections/developers/tfcmd/tfcmd_gateway_name.md @@ -0,0 +1,88 @@ +

# Gateway Name

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Deploy](#deploy) + - [Required Flags](#required-flags) + - [Optional Flags](#optional-flags) +- [Get](#get) +- [Cancel](#cancel) + +*** + +## Introduction + +We explain how to use gateway names on the TFGrid using `tfcmd`. + +## Deploy + +```bash +tfcmd deploy gateway name [flags] +``` + +### Required Flags + +- name: name for the gateway deployment, also used for canceling the deployment. must be unique. +- backends: list of backends the gateway will forward requests to. + +### Optional Flags + +- node: node id gateway should be deployed on. +- farm: farm id gateway should be deployed on, if set choose available node from farm that fits vm specs (default 1). note: node and farm flags cannot both be set. +- tls: add TLS passthrough option (default false). + +Example: + +```console +$ tfcmd deploy gateway name -n gatewaytest --node 14 --backends http://93.184.216.34:80 +3:34PM INF deploying gateway name +3:34PM INF fqdn: gatewaytest.gent01.dev.grid.tf +``` + +## Get + +```bash +tfcmd get gateway name <gateway> +``` + +`<gateway>` is the name used when deploying the gateway-name using tfcmd. + +Example: + +```console +$ tfcmd get gateway name gatewaytest +1:56PM INF gateway name: +{ + "NodeID": 14, + "Name": "gatewaytest", + "Backends": [ + "http://93.184.216.34:80" + ], + "TLSPassthrough": false, + "Description": "", + "SolutionType": "gatewaytest", + "NodeDeploymentID": { + "14": 19644 + }, + "FQDN": "gatewaytest.gent01.dev.grid.tf", + "NameContractID": 19643, + "ContractID": 19644 +} +``` + +## Cancel + +```bash +tfcmd cancel <deployment-name> +``` + +`<deployment-name>` is the name of the deployment specified while deploying using tfcmd. 
+ +Example: + +```console +$ tfcmd cancel gatewaytest +3:37PM INF canceling contracts for project gatewaytest +3:37PM INF gatewaytest canceled +``` \ No newline at end of file diff --git a/collections/developers/tfcmd/tfcmd_kubernetes.md b/collections/developers/tfcmd/tfcmd_kubernetes.md new file mode 100644 index 0000000..9a7c2b1 --- /dev/null +++ b/collections/developers/tfcmd/tfcmd_kubernetes.md @@ -0,0 +1,147 @@ +

# Kubernetes

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Deploy](#deploy) + - [Required Flags](#required-flags) + - [Optional Flags](#optional-flags) +- [Get](#get) +- [Cancel](#cancel) + +*** + +## Introduction + +In this section, we explain how to deploy Kubernetes workloads on the TFGrid using `tfcmd`. + +## Deploy + +```bash +tfcmd deploy kubernetes [flags] +``` + +### Required Flags + +- name: name for the master node deployment, also used for canceling the cluster deployment. must be unique. +- ssh: path to public ssh key to set in the master node. + +### Optional Flags + +- master-node: node id master should be deployed on. +- master-farm: farm id master should be deployed on, if set choose available node from farm that fits master specs (default 1). note: master-node and master-farm flags cannot both be set. +- workers-node: node id workers should be deployed on. +- workers-farm: farm id workers should be deployed on, if set choose available node from farm that fits worker specs (default 1). note: workers-node and workers-farm flags cannot both be set. +- ipv4: assign public ipv4 for master node (default false). +- ipv6: assign public ipv6 for master node (default false). +- ygg: assign yggdrasil ip for master node (default true). +- master-cpu: number of cpu units for master node (default 1). +- master-memory: master node memory size in GB (default 1). +- master-disk: master node disk size in GB (default 2). +- workers-number: number of worker nodes (default 0). +- workers-ipv4: assign public ipv4 for each worker node (default false). +- workers-ipv6: assign public ipv6 for each worker node (default false). +- workers-ygg: assign yggdrasil ip for each worker node (default true). +- workers-cpu: number of cpu units for each worker node (default 1). +- workers-memory: memory size for each worker node in GB (default 1). +- workers-disk: disk size in GB for each worker node (default 2). 
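As a rough sanity check when sizing a deployment, the flag defaults above imply the following total capacity reserved by a cluster. This is only an illustrative calculation, not part of tfcmd:

```python
def cluster_totals(workers_number, master_cpu=1, master_memory=1, master_disk=2,
                   workers_cpu=1, workers_memory=1, workers_disk=2):
    """Sum the resources a cluster reserves; defaults match the flags documented above."""
    return {
        "cpu": master_cpu + workers_number * workers_cpu,
        "memory_gb": master_memory + workers_number * workers_memory,
        "disk_gb": master_disk + workers_number * workers_disk,
    }

# One master and two workers, all defaults (as in the deploy example).
print(cluster_totals(2))  # {'cpu': 3, 'memory_gb': 3, 'disk_gb': 6}
```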
+ +Example: + +```console +$ tfcmd deploy kubernetes -n kube --ssh ~/.ssh/id_rsa.pub --master-node 14 --workers-number 2 --workers-node 14 +4:21PM INF deploying network +4:22PM INF deploying cluster +4:22PM INF master yggdrasil ip: 300:e9c4:9048:57cf:504f:c86c:9014:d02d +``` + +## Get + +```bash +tfcmd get kubernetes <kubernetes-name> +``` + +`<kubernetes-name>` is the name used when deploying the kubernetes cluster using tfcmd. + +Example: + +```console +$ tfcmd get kubernetes kube +3:14PM INF k8s cluster: +{ + "Master": { + "Name": "kube", + "Node": 14, + "DiskSize": 2, + "PublicIP": false, + "PublicIP6": false, + "Planetary": true, + "Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist", + "FlistChecksum": "c87cf57e1067d21a3e74332a64ef9723", + "ComputedIP": "", + "ComputedIP6": "", + "YggIP": "300:e9c4:9048:57cf:e8a0:662b:4e66:8faa", + "IP": "10.20.2.2", + "CPU": 1, + "Memory": 1024 + }, + "Workers": [ + { + "Name": "worker1", + "Node": 14, + "DiskSize": 2, + "PublicIP": false, + "PublicIP6": false, + "Planetary": true, + "Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist", + "FlistChecksum": "c87cf57e1067d21a3e74332a64ef9723", + "ComputedIP": "", + "ComputedIP6": "", + "YggIP": "300:e9c4:9048:57cf:66d0:3ee4:294e:d134", + "IP": "10.20.2.2", + "CPU": 1, + "Memory": 1024 + }, + { + "Name": "worker0", + "Node": 14, + "DiskSize": 2, + "PublicIP": false, + "PublicIP6": false, + "Planetary": true, + "Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-k3s-latest.flist", + "FlistChecksum": "c87cf57e1067d21a3e74332a64ef9723", + "ComputedIP": "", + "ComputedIP6": "", + "YggIP": "300:e9c4:9048:57cf:1ae5:cc51:3ffc:81e", + "IP": "10.20.2.2", + "CPU": 1, + "Memory": 1024 + } + ], + "Token": "", + "NetworkName": "", + "SolutionType": "kube", + "SSHKey": "", + "NodesIPRange": null, + "NodeDeploymentID": { + "14": 22743 + } +} +``` + +## Cancel + +```bash +tfcmd cancel <deployment-name> +``` + +`<deployment-name>` is the name of the deployment specified while
deploying using tfcmd. + +Example: + +```console +$ tfcmd cancel kube +3:37PM INF canceling contracts for project kube +3:37PM INF kube canceled +``` \ No newline at end of file diff --git a/collections/developers/tfcmd/tfcmd_vm.md b/collections/developers/tfcmd/tfcmd_vm.md new file mode 100644 index 0000000..21e1471 --- /dev/null +++ b/collections/developers/tfcmd/tfcmd_vm.md @@ -0,0 +1,171 @@ + +

# Deploy a VM

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Deploy](#deploy) + - [Flags](#flags) + - [Required Flags](#required-flags) + - [Optional Flags](#optional-flags) + - [Examples](#examples) + - [Deploy a VM without GPU](#deploy-a-vm-without-gpu) + - [Deploy a VM with GPU](#deploy-a-vm-with-gpu) +- [Get](#get) + - [Get Example](#get-example) +- [Cancel](#cancel) + - [Cancel Example](#cancel-example) +- [Questions and Feedback](#questions-and-feedback) + +*** + +# Introduction + +In this section, we explore how to deploy a virtual machine (VM) on the ThreeFold Grid using `tfcmd`. + +# Deploy + +You can deploy a VM with `tfcmd` using the following template accompanied by required and optional flags: + +```bash +tfcmd deploy vm [flags] +``` + +## Flags + +When you use `tfcmd`, there are two required flags (`name` and `ssh`), while the remaining flags are optional. Such optional flags can be used, for example, to deploy a VM with a GPU or to set an IPv6 address, and much more. + +### Required Flags + +- **name**: name for the VM deployment, also used for canceling the deployment. The name must be unique. +- **ssh**: path to public ssh key to set in the VM. + +### Optional Flags + +- **node**: node ID the VM should be deployed on. +- **farm**: farm ID the VM should be deployed on, if set choose available node from farm that fits vm specs (default `1`). Note: node and farm flags cannot both be set. +- **cpu**: number of cpu units (default `1`). +- **disk**: size of disk in GB mounted on `/data`. If not set, no disk workload is made. +- **entrypoint**: entrypoint for the VM FList (default `/sbin/zinit init`). Note: setting this without the flist option will fail. +- **flist**: FList used in the VM (default `https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist`). Note: setting this without the entrypoint option will fail. +- **ipv4**: assign public ipv4 for the VM (default `false`). +- **ipv6**: assign public ipv6 for the VM (default `false`). 
+- **memory**: memory size in GB (default `1`). +- **rootfs**: root filesystem size in GB (default `2`). +- **ygg**: assign yggdrasil ip for the VM (default `true`). +- **gpus**: assign a list of GPU IDs to the VM. Note: setting this without the node option will fail. + +## Examples + +We present simple examples on how to deploy a virtual machine with or without a GPU using `tfcmd`. + +### Deploy a VM without GPU + +```console +$ tfcmd deploy vm --name examplevm --ssh ~/.ssh/id_rsa.pub --cpu 2 --memory 4 --disk 10 +12:06PM INF deploying network +12:06PM INF deploying vm +12:07PM INF vm yggdrasil ip: 300:e9c4:9048:57cf:7da2:ac99:99db:8821 +``` + +### Deploy a VM with GPU + +```console +$ tfcmd deploy vm --name examplevm --ssh ~/.ssh/id_rsa.pub --cpu 2 --memory 4 --disk 10 --gpus '0000:0e:00.0/1882/543f' --gpus '0000:0e:00.0/1887/593f' --node 12 +12:06PM INF deploying network +12:06PM INF deploying vm +12:07PM INF vm yggdrasil ip: 300:e9c4:9048:57cf:7da2:ac99:99db:8821 +``` + +# Get + +To get the VM, use the following template: + +```bash +tfcmd get vm <vm-name> +``` + +Make sure to replace `<vm-name>` with the name of the VM specified using `tfcmd`. + +## Get Example + +In the following example, the name of the deployment to get is `examplevm`. 
+ +```console +$ tfcmd get vm examplevm +3:20PM INF vm: +{ + "Name": "examplevm", + "NodeID": 15, + "SolutionType": "examplevm", + "SolutionProvider": null, + "NetworkName": "examplevmnetwork", + "Disks": [ + { + "Name": "examplevmdisk", + "SizeGB": 10, + "Description": "" + } + ], + "Zdbs": [], + "Vms": [ + { + "Name": "examplevm", + "Flist": "https://hub.grid.tf/tf-official-apps/threefoldtech-ubuntu-22.04.flist", + "FlistChecksum": "", + "PublicIP": false, + "PublicIP6": false, + "Planetary": true, + "Corex": false, + "ComputedIP": "", + "ComputedIP6": "", + "YggIP": "301:ad3a:9c52:98d1:cd05:1595:9abb:e2f1", + "IP": "10.20.2.2", + "Description": "", + "CPU": 2, + "Memory": 4096, + "RootfsSize": 2048, + "Entrypoint": "/sbin/zinit init", + "Mounts": [ + { + "DiskName": "examplevmdisk", + "MountPoint": "/data" + } + ], + "Zlogs": null, + "EnvVars": { + "SSH_KEY": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcGrS1RT36rHAGLK3/4FMazGXjIYgWVnZ4bCvxxg8KosEEbs/DeUKT2T2LYV91jUq3yibTWwK0nc6O+K5kdShV4qsQlPmIbdur6x2zWHPeaGXqejbbACEJcQMCj8szSbG8aKwH8Nbi8BNytgzJ20Ysaaj2QpjObCZ4Ncp+89pFahzDEIJx2HjXe6njbp6eCduoA+IE2H9vgwbIDVMQz6y/TzjdQjgbMOJRTlP+CzfbDBb6Ux+ed8F184bMPwkFrpHs9MSfQVbqfIz8wuq/wjewcnb3wK9dmIot6CxV2f2xuOZHgNQmVGratK8TyBnOd5x4oZKLIh3qM9Bi7r81xCkXyxAZbWYu3gGdvo3h85zeCPGK8OEPdYWMmIAIiANE42xPmY9HslPz8PAYq6v0WwdkBlDWrG3DD3GX6qTt9lbSHEgpUP2UOnqGL4O1+g5Rm9x16HWefZWMjJsP6OV70PnMjo9MPnH+yrBkXISw4CGEEXryTvupfaO5sL01mn+UOyE= abdulrahman@AElawady-PC\n" + }, + "NetworkName": "examplevmnetwork" + } + ], + "QSFS": [], + "NodeDeploymentID": { + "15": 22748 + }, + "ContractID": 22748 +} +``` + +# Cancel + +To cancel your VM deployment, use the following template: + +```bash +tfcmd cancel <deployment-name> +``` + +Make sure to replace `<deployment-name>` with the name of the deployment specified using `tfcmd`. + +## Cancel Example + +In the following example, the name of the deployment to cancel is `examplevm`. 
+ +```console +$ tfcmd cancel examplevm +3:37PM INF canceling contracts for project examplevm +3:37PM INF examplevm canceled +``` + +# Questions and Feedback + +If you have any questions or feedback, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/developers/tfcmd/tfcmd_zdbs.md b/collections/developers/tfcmd/tfcmd_zdbs.md new file mode 100644 index 0000000..b9c01d7 --- /dev/null +++ b/collections/developers/tfcmd/tfcmd_zdbs.md @@ -0,0 +1,125 @@ +

# ZDBs

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Deploy](#deploy) + - [Required Flags](#required-flags) + - [Optional Flags](#optional-flags) +- [Get](#get) +- [Cancel](#cancel) + +*** + +## Introduction + +In this section, we explore how to use ZDB-related commands using `tfcmd` to interact with the TFGrid. + +## Deploy + +```bash +tfcmd deploy zdb [flags] +``` + +### Required Flags + +- project_name: project name for the ZDBs deployment, also used for canceling the deployment. must be unique. +- size: HDD size of the zdb in GB. + +### Optional Flags + +- node: node id zdbs should be deployed on. +- farm: farm id zdbs should be deployed on, if set choose available node from farm that fits zdbs deployment specs (default 1). note: node and farm flags cannot both be set. +- count: count of zdbs to be deployed (default 1). +- names: a slice of names for the number of ZDBs. +- password: password for the deployed ZDBs. +- description: description for your ZDBs, it's optional. +- mode: the enumeration of the modes 0-db can operate in (default user). +- public: if zdb gets a public ip6 (default false). + +Example: + +- Deploying ZDBs + +```console +$ tfcmd deploy zdb --project_name examplezdb --size=10 --count=2 --password=password +12:06PM INF deploying zdbs +12:06PM INF zdb 'examplezdb0' is deployed +12:06PM INF zdb 'examplezdb1' is deployed +``` + +## Get + +```bash +tfcmd get zdb <zdb-project-name> +``` + +`<zdb-project-name>` is the name of the deployment specified while deploying using tfcmd. 
+ +Example: + +```console +$ tfcmd get zdb examplezdb +3:20PM INF zdb: +{ + "Name": "examplezdb", + "NodeID": 11, + "SolutionType": "examplezdb", + "SolutionProvider": null, + "NetworkName": "", + "Disks": [], + "Zdbs": [ + { + "name": "examplezdb1", + "password": "password", + "public": false, + "size": 10, + "description": "", + "mode": "user", + "ips": [ + "2a10:b600:1:0:c4be:94ff:feb1:8b3f", + "302:9e63:7d43:b742:469d:3ec2:ab15:f75e" + ], + "port": 9900, + "namespace": "81-36155-examplezdb1" + }, + { + "name": "examplezdb0", + "password": "password", + "public": false, + "size": 10, + "description": "", + "mode": "user", + "ips": [ + "2a10:b600:1:0:c4be:94ff:feb1:8b3f", + "302:9e63:7d43:b742:469d:3ec2:ab15:f75e" + ], + "port": 9900, + "namespace": "81-36155-examplezdb0" + } + ], + "Vms": [], + "QSFS": [], + "NodeDeploymentID": { + "11": 36155 + }, + "ContractID": 36155, + "IPrange": "" +} +``` + +## Cancel + +```bash +tfcmd cancel <zdb-project-name> +``` + +`<zdb-project-name>` is the name of the deployment specified while deploying using tfcmd. + +Example: + +```console +$ tfcmd cancel examplezdb +3:37PM INF canceling contracts for project examplezdb +3:37PM INF examplezdb canceled +``` \ No newline at end of file diff --git a/collections/developers/tfrobot/tfrobot.md b/collections/developers/tfrobot/tfrobot.md new file mode 100644 index 0000000..c8b2d5f --- /dev/null +++ b/collections/developers/tfrobot/tfrobot.md @@ -0,0 +1,13 @@ +

# TFROBOT

+ +TFROBOT (`tfrobot`) is a command line interface tool that offers simultaneous mass deployment of groups of VMs on the ThreeFold Grid, with support for multiple retries on failed deployments and customizable configurations: you can define node groups, VM groups and other settings through a YAML or a JSON file. + +Consult the [ThreeFoldTech TFROBOT repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/tfrobot) for the latest updates and read the [basics](../../system_administrators/getstarted/tfgrid3_getstarted.md) to get up to speed if needed. + +

## Table of Contents

+ +- [Installation](./tfrobot_installation.md) +- [Configuration File](./tfrobot_config.md) +- [Deployment](./tfrobot_deploy.md) +- [Commands and Flags](./tfrobot_commands_flags.md) +- [Supported Configurations](./tfrobot_configurations.md) \ No newline at end of file diff --git a/collections/developers/tfrobot/tfrobot_commands_flags.md b/collections/developers/tfrobot/tfrobot_commands_flags.md new file mode 100644 index 0000000..f33c59d --- /dev/null +++ b/collections/developers/tfrobot/tfrobot_commands_flags.md @@ -0,0 +1,57 @@ +

# Commands and Flags

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Commands](#commands) +- [Subcommands](#subcommands) +- [Flags](#flags) + +*** + +## Introduction + +We present the various commands, subcommands and flags available with TFROBOT. + + +## Commands + +You can run the command `tfrobot help` at any time to access the help section. This will also display the available commands. + +| Command | Description | +| ---------- | ---------------------------------------------------------- | +| completion | Generate the autocompletion script for the specified shell | +| help | Help about any command | +| version | Get latest build tag | + +Use `tfrobot [command] --help` for more information about a command. + +## Subcommands + +You can use subcommands to deploy and cancel workloads on the TFGrid. + +- **deploy:** used to mass deploy groups of VMs with specific configurations + ```bash + tfrobot deploy -c path/to/your/config.yaml + ``` +- **cancel:** used to cancel all VMs deployed using specific configurations + ```bash + tfrobot cancel -c path/to/your/config.yaml + ``` +- **load:** used to load all VMs deployed using specific configurations + ```bash + tfrobot load -c path/to/your/config.yaml + ``` + +## Flags + +You can use different flags to configure your deployment. + +| Flag | Usage | +| :---: | :---: | +| -c | used to specify path to configuration file | +| -o | used to specify path to output file to store the output info in | +| -d | allow debug logs to appear in the output logs | +| -h | help | + +> **Note:** Make sure to use each flag only once. If a flag is repeated, all of its values except the last are ignored. \ No newline at end of file diff --git a/collections/developers/tfrobot/tfrobot_config.md b/collections/developers/tfrobot/tfrobot_config.md new file mode 100644 index 0000000..55c2850 --- /dev/null +++ b/collections/developers/tfrobot/tfrobot_config.md @@ -0,0 +1,131 @@ +

# Configuration File

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Examples](#examples) + - [YAML Example](#yaml-example) + - [JSON Example](#json-example) +- [Create a Configuration File](#create-a-configuration-file) + +*** + +## Introduction + +To use TFROBOT, the user needs to create a YAML or a JSON configuration file that will contain the mass deployment information, such as the groups information, the number of VMs to deploy, the compute, storage and network resources needed, as well as the user's credentials, such as the SSH public key, the network (main, test, dev, qa) and the TFChain mnemonics. + +## Examples + +We present here a configuration file example that deploys 3 nodes with 2 vcores, 16GB of RAM, 100GB of SSD, 50GB of HDD and an IPv4 address. The same deployment is shown with a YAML file and with a JSON file. Parsing is based on the file extension: TFROBOT will use the JSON format if the file has a JSON extension and the YAML format otherwise. + +You can use this example for guidance, and make sure to replace placeholders and adapt the groups based on your actual project details. At a minimum, `ssh_key1` should be replaced by the user SSH public key and `example-mnemonic` should be replaced by the user mnemonics. + +Note that if no IPs are specified as true (IPv4 or IPv6), a Yggdrasil IP address will automatically be assigned to the VM, as at least one IP should be set to allow an SSH connection to the VM. 
+ +### YAML Example + +```yaml +node_groups: + - name: group_a + nodes_count: 3 + free_cpu: 2 + free_mru: 16 + free_ssd: 100 + free_hdd: 50 + dedicated: false + public_ip4: true + public_ip6: false + certified: false + region: europe +vms: + - name: examplevm123 + vms_count: 5 + node_group: group_a + cpu: 1 + mem: 0.25 + public_ip4: true + public_ip6: false + ssd: + - size: 15 + mount_point: /mnt/ssd + flist: https://hub.grid.tf/tf-official-apps/base:latest.flist + entry_point: /sbin/zinit init + root_size: 0 + ssh_key: example1 + env_vars: + user: user1 + pwd: 1234 +ssh_keys: + example1: ssh_key1 +mnemonic: example-mnemonic +network: dev +max_retries: 5 +``` + +### JSON Example + +```json +{ + "node_groups": [ + { + "name": "group_a", + "nodes_count": 3, + "free_cpu": 2, + "free_mru": 16, + "free_ssd": 100, + "free_hdd": 50, + "dedicated": false, + "public_ip4": true, + "public_ip6": false, + "certified": false, + "region": "europe" + } + ], + "vms": [ + { + "name": "examplevm123", + "vms_count": 5, + "node_group": "group_a", + "cpu": 1, + "mem": 0.25, + "public_ip4": true, + "public_ip6": false, + "ssd": [ + { + "size": 15, + "mount_point": "/mnt/ssd" + } + ], + "flist": "https://hub.grid.tf/tf-official-apps/base:latest.flist", + "entry_point": "/sbin/zinit init", + "root_size": 0, + "ssh_key": "example1", + "env_vars": { + "user": "user1", + "pwd": "1234" + } + } + ], + "ssh_keys": { + "example1": "ssh_key1" + }, + "mnemonic": "example-mnemonic", + "network": "dev", + "max_retries": 5 +} +``` + +## Create a Configuration File + +You can start with the example above and adjust for your specific deployment needs. + +- Create directory + ```bash + mkdir tfrobot_deployments && cd $_ + ``` +- Create configuration file and adjust with the provided example above + ```bash + nano config.yaml + ``` + +Once you've set your configuration file, all that's left is to deploy on the TFGrid. Read the next section for more information on how to deploy with TFROBOT. 
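Before deploying, a quick pre-flight check can catch common configuration mistakes. The sketch below is standalone and not part of TFROBOT; field names follow the examples above, and the checks are only the ones stated in this manual (valid network name, VM groups referencing a defined node group, and SSH keys defined in the `ssh_keys` map):

```python
def validate_config(cfg):
    """Return a list of problems found in a TFROBOT config dict (parsed from YAML/JSON)."""
    problems = []
    if cfg.get("network") not in ("main", "test", "qa", "dev"):
        problems.append("network must be one of: main, test, qa, dev")
    group_names = {g["name"] for g in cfg.get("node_groups", [])}
    for vm in cfg.get("vms", []):
        # Each VM group must point at a node group defined in node_groups.
        if vm.get("node_group") not in group_names:
            problems.append(f"vm {vm.get('name')}: unknown node_group")
        # Each VM group's ssh_key must exist in the ssh_keys map.
        if vm.get("ssh_key") not in cfg.get("ssh_keys", {}):
            problems.append(f"vm {vm.get('name')}: ssh_key not defined in ssh_keys map")
    return problems

cfg = {
    "network": "dev",
    "node_groups": [{"name": "group_a"}],
    "vms": [{"name": "examplevm123", "node_group": "group_a", "ssh_key": "example1"}],
    "ssh_keys": {"example1": "ssh_key1"},
}
print(validate_config(cfg))  # []
```

An empty list means the checks passed; anything else names the offending field before TFROBOT ever contacts the grid.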
\ No newline at end of file diff --git a/collections/developers/tfrobot/tfrobot_configurations.md b/collections/developers/tfrobot/tfrobot_configurations.md new file mode 100644 index 0000000..7ceb867 --- /dev/null +++ b/collections/developers/tfrobot/tfrobot_configurations.md @@ -0,0 +1,68 @@ +

# Supported Configurations

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Config File](#config-file) +- [Node Group](#node-group) +- [Vms Groups](#vms-groups) +- [Disk](#disk) + +*** + +## Introduction + +When deploying with TFROBOT, you can set different configurations allowing for personalized deployments. + +## Config File + +| Field | Description| Supported Values| +| :---: | :---: | :---: | +| [node_group](#node-group) | description of all resources needed for each node_group | list of structs of type node_group | +| [vms](#vms-groups) | description of resources needed for deploying groups of vms belonging to a node_group | list of structs of type vms | +| ssh_keys | map of ssh keys with key=name and value=the actual ssh key | map of string to string | +| mnemonic | mnemonic of the user | should be a valid mnemonic | +| network | valid network of ThreeFold Grid networks | main, test, qa, dev | +| max_retries | number of retries for failed node groups | positive integer | + +## Node Group + +| Field | Description| Supported Values| +| :---: | :---: | :---: | +| name | name of node_group | node group name should be unique | +| nodes_count | number of nodes in node group | nonzero positive integer | +| free_cpu | number of cpus in the node | nonzero positive integer, max = 32 | +| free_mru | free memory in the node in GB | min = 0.25, max = 256 | +| free_ssd | free ssd storage in the node in GB | positive integer value | +| free_hdd | free hdd storage in the node in GB | positive integer value | +| dedicated | are nodes dedicated | `true` or `false` | +| public_ip4 | should the nodes have free ip v4 | `true` or `false` | +| public_ip6 | should the nodes have free ip v6 | `true` or `false` | +| certified | should the nodes be certified (if false, the nodes could be certified or DIY) | `true` or `false` | +| region | region could be the name of the continents the nodes are located in | africa, americas, antarctic, antarctic ocean, asia, europe, oceania, polar | + +## Vms Groups + +| Field | Description| Supported 
Values| +| :---: | :---: | :---: | +| name | name of vm group | string value with no special characters | +| vms_count | number of vms in vm group| nonzero positive integer | +| node_group | name of node_group the vm belongs to | should be defined in node_groups | +| cpu | number of cpu for vm | nonzero positive integer max = 32 | +| mem | free memory in the vm in GB | min = 0.25, max 256 | +| planetary | should the vm have yggdrasil ip | `true` or `false` | +| public_ip4 | should the vm have free ip v4 | `true` or `false` | +| public_ip6 | should the vm have free ip v6 | `true` or `false` | +| flist | should be a link to valid flist | valid flist url with `.flist` or `.fl` extension | +| entry_point | entry point of the flist | path to the entry point in the flist | +| ssh_key | key of ssh key defined in the ssh_keys map | should be valid ssh_key defined in the ssh_keys map | +| env_vars | map of env vars | map of type string to string | +| ssd | list of disks | should be of type disk| +| root_size | root size in GB | 0 for default root size, max 10TB | + +## Disk + +| Field | Description| Supported Values| +| :---: | :---: | :---: | +| size | disk size in GB| positive integer min = 15 | +| mount_point | disk mount point | path to mountpoint | diff --git a/collections/developers/tfrobot/tfrobot_deploy.md b/collections/developers/tfrobot/tfrobot_deploy.md new file mode 100644 index 0000000..7e16d12 --- /dev/null +++ b/collections/developers/tfrobot/tfrobot_deploy.md @@ -0,0 +1,59 @@ + + +
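Putting the fields in the tables above together, a sample configuration file could look like the sketch below. The concrete values, the `ssh_keys` entry name, the mnemonic, and the flist URL are placeholders for illustration only; check the supported values tables and the schema of your TFROBOT version (the top-level keys are written here as `node_groups` and `vms`, following upstream examples):

```yaml
node_groups:
  - name: group_a            # unique node group name
    nodes_count: 2           # nonzero positive integer
    free_cpu: 2              # CPUs per node, max 32
    free_mru: 4              # free memory in GB (min 0.25, max 256)
    free_ssd: 50             # free SSD storage in GB
    dedicated: false
    public_ip4: false
vms:
  - name: examplevm          # no special characters
    vms_count: 3
    node_group: group_a      # must match a node group defined above
    cpu: 1
    mem: 0.5                 # GB, min 0.25
    planetary: true
    flist: https://hub.grid.tf/tf-official-apps/base.flist
    entry_point: /sbin/zinit init
    ssh_key: my_key          # must exist in ssh_keys below
    root_size: 0             # 0 = default root size
    ssd:
      - size: 15             # GB, min 15
        mount_point: /data
ssh_keys:
  my_key: ssh-rsa AAAA... user@machine
mnemonic: "<your mnemonic words>"
network: dev                 # main, test, qa, or dev
max_retries: 5
```

TFROBOT is then pointed at this file when deploying, e.g. `tfrobot deploy -c ./config.yaml`.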

Deployment

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Deploy Workloads](#deploy-workloads) +- [Delete Workloads](#delete-workloads) +- [Logs](#logs) +- [Using TFCMD with TFROBOT](#using-tfcmd-with-tfrobot) + - [Get Contracts](#get-contracts) + +*** + +## Introduction + +We present how to deploy workloads on the ThreeFold Grid using TFROBOT. + +## Prerequisites + +To deploy workloads on the TFGrid with TFROBOT, you first need to [install TFROBOT](./tfrobot_installation.md) on your machine and create a [configuration file](./tfrobot_config.md). + +## Deploy Workloads + +Once you've installed TFROBOT and created a configuration file, you can deploy on the TFGrid with the following command. Make sure to indicate the path to your configuration file. + +```bash +tfrobot deploy -c ./config.yaml +``` + +## Delete Workloads + +To delete the contracts, use the following command. Make sure to indicate the path to your configuration file. + +```bash +tfrobot cancel -c ./config.yaml +``` + +## Logs + +To ensure a complete log history, append `2>&1 | tee path/to/log/file` to the command being executed. + +```bash +tfrobot deploy -c ./config.yaml 2>&1 | tee path/to/log/file +``` + +## Using TFCMD with TFROBOT + +### Get Contracts + +The TFCMD tool works well with TFROBOT, as it can be used to query the TFGrid. For example, you can see the contracts created by TFROBOT by running the following TFCMD command, provided that you use the same mnemonic and are on the same network: + +```bash +tfcmd get contracts +``` + +For more information on TFCMD, [read the documentation](../tfcmd/tfcmd.md). \ No newline at end of file diff --git a/collections/developers/tfrobot/tfrobot_installation.md b/collections/developers/tfrobot/tfrobot_installation.md new file mode 100644 index 0000000..deec2b8 --- /dev/null +++ b/collections/developers/tfrobot/tfrobot_installation.md @@ -0,0 +1,36 @@ +
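The `2>&1 | tee` pattern used for logging merges stderr into stdout and duplicates the combined stream into the log file while still printing it to the terminal. A small demonstration with a stand-in command in place of `tfrobot` (the path and messages are illustrative):

```shell
# Stand-in command that writes to both stdout and stderr,
# piped the same way as the tfrobot command above
{ echo "deploying vms"; echo "warning: example" >&2; } 2>&1 | tee /tmp/tfrobot.log

# Both streams ended up in the log file as well as on the terminal
cat /tmp/tfrobot.log
```

Without the `2>&1`, only stdout would reach `tee`, and error messages would be missing from the log file.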

Installation

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Installation](#installation) + +*** + +## Introduction + +This section covers the basics of how to install TFROBOT (`tfrobot`). + +TFROBOT is available as binaries. Make sure to download the latest release and to stay up to date with new releases. + +## Installation + +To install TFROBOT, simply download and extract the TFROBOT binary to a directory in your PATH. + +- Create a new directory for `tfgrid-sdk-go` + ``` + mkdir tfgrid-sdk-go + cd tfgrid-sdk-go + ``` +- Download the latest release from [releases](https://github.com/threefoldtech/tfgrid-sdk-go/releases) + - ``` + wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download/v0.14.4/tfgrid-sdk-go_Linux_x86_64.tar.gz + ``` +- Extract the binaries + - ``` + tar -xvf tfgrid-sdk-go_Linux_x86_64.tar.gz + ``` +- Move `tfrobot` to any `$PATH` directory: + ```bash + mv tfrobot /usr/local/bin + ``` \ No newline at end of file diff --git a/collections/documentation/.collection copy b/collections/documentation/.collection copy new file mode 100644 index 0000000..e69de29 diff --git a/collections/documentation/dashboard/dashboard.md b/collections/documentation/dashboard/dashboard.md deleted file mode 100644 index 991b189..0000000 --- a/collections/documentation/dashboard/dashboard.md +++ /dev/null @@ -1,43 +0,0 @@ -
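After the final step, you can sanity-check that your shell resolves the binary from `$PATH` (the echo messages here are illustrative, not TFROBOT output):

```shell
# Print where the shell finds tfrobot, or a hint if it is not on the PATH yet
if command -v tfrobot >/dev/null 2>&1; then
    echo "tfrobot found at: $(command -v tfrobot)"
else
    echo "tfrobot not found in PATH - check the directory you moved it to"
fi
```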

ThreeFold Dashboard

- -Explore, control, and manage your ThreeFold Grid resources effortlessly through our integrated Dashboard. Deploy solutions seamlessly while gaining full control, all within a unified interface. - -The ThreeFold Dashboard is a revolutionary platform that simplifies the deployment process, allowing users to effortlessly interact with the TFGrid using intuitive web components known as weblets. - -## What is the ThreeFold Dashboard? - -The ThreeFold Dashboard is a dynamic environment designed for both seasoned developers and newcomers alike. It offers a seamless and accessible browser experience, making it easy to deploy solutions on the TFGrid through the use of weblets. - -In the context of the Dashboard, a weblet is a compiled JavaScript web component that can be effortlessly embedded within the HTML page of a web application. This modular approach allows for flexible and intuitive interactions, facilitating a user-friendly deployment process. - -The backend for the weblets is introduced with the [Javascript Client](../developers/javascript/grid3_javascript_readme.md) which communicates to TFChain over RMB. - -

Table of Contents

- -- [Wallet Connector](./wallet_connector.md) -- [TFGrid](./tfgrid/tfgrid.md) -- [Deploy](./deploy/deploy.md) -- [Farms](./farms/farms.md) -- [TFChain](./tfchain/tfchain.md) - -## Advantages - -- It is an easy, no-code way to deploy a whole solution on the TFGrid. -- It is 100% decentralized; there is no server involved. -- It is a powerful tool designed to empower individuals and organizations with seamless control and management over their ThreeFold Grid resources. -- It provides an intuitive web-based interface that allows users to effortlessly deploy, monitor, and scale their workloads on the decentralized and sustainable ThreeFold Grid infrastructure. - -## Dashboard - -You can access the ThreeFold Dashboard on different TF Chain networks. - -- [https://dashboard.dev.grid.tf](https://dashboard.dev.grid.tf) for Dev net -- [https://dashboard.qa.grid.tf](https://dashboard.qa.grid.tf) for QA net -- [https://dashboard.test.grid.tf](https://dashboard.test.grid.tf) for Test net -- [https://dashboard.grid.tf](https://dashboard.grid.tf) for Main net - -## Limitations - -- Regarding browser support, we currently support only the Google Chrome browser (and thus the Brave browser), with more browsers to be supported soon. -- Deploys one thing at a time. -- It might take some time to deploy a solution like Peertube, so you should wait a little until it's fully running. diff --git a/collections/documentation/dashboard/deploy/applications.md b/collections/documentation/dashboard/deploy/applications.md deleted file mode 100644 index ca16e19..0000000 --- a/collections/documentation/dashboard/deploy/applications.md +++ /dev/null @@ -1,24 +0,0 @@ -

Ready Community Applications

- -Easily deploy your favourite applications on the ThreeFold grid with a click of a button. - -![](../img/applications_landing.png) - -*** - -

Table of Contents

- -- [Algorand](../solutions/algorand.md) -- [CasperLabs](../solutions/casper.md) -- [Discourse](../solutions/discourse.md) -- [Funkwhale](../solutions/funkwhale.md) -- [Mattermost](../solutions/mattermost.md) -- [Nextcloud](../solutions/nextcloud.md) -- [Node Pilot](../solutions/nodepilot.md) -- [ownCloud](../solutions/owncloud.md) -- [Peertube](../solutions/peertube.md) -- [Presearch](../solutions/presearch.md) -- [Subsquid](../solutions/subsquid.md) -- [Taiga](../solutions/taiga.md) -- [Umbrel](../solutions/umbrel.md) -- [WordPress](../solutions/wordpress.md) \ No newline at end of file diff --git a/collections/documentation/dashboard/deploy/dedicated_machines.md b/collections/documentation/dashboard/deploy/dedicated_machines.md deleted file mode 100644 index 24f4dbb..0000000 --- a/collections/documentation/dashboard/deploy/dedicated_machines.md +++ /dev/null @@ -1,93 +0,0 @@ -

Dedicated Machines

- -

Table of Contents

- -- [What is a Dedicated Machine?](#what-is-a-dedicated-machine) -- [Description](#description) -- [Billing \& Pricing](#billing--pricing) -- [Discounts](#discounts) -- [Usage](#usage) -- [GPU Support](#gpu-support) -- [Filter and Reserve a GPU Node](#filter-and-reserve-a-gpu-node) - - [Filter Nodes](#filter-nodes) - - [Reserve a Node](#reserve-a-node) -- [GPU Support Links](#gpu-support-links) - -*** - -## What is a Dedicated Machine? - -Dedicated machines are 3Nodes that can be reserved and rented entirely by one user. The user can thus reserve an entire node and use it exclusively to deploy solutions. This feature is ideal for users who want to host heavy deployments with the benefits of high reliability and cost effectiveness. - -## Description - -- A node is reserved by deploying a `RentContract` on it. A node can have only one RentContract. -- When a user creates a RentContract against a node, the grid validates that there are no other active contracts on that node at creation time. -- Once a RentContract is created, the grid can only accept contracts on this node from the tenant. -- Only workloads from the tenant are accepted. - -## Billing & Pricing - -- Once a node is rented, there is a fixed charge billed to the tenant regardless of deployed workloads. -- Any subsequent NodeContract deployed on a node where a RentContract is active (and the same user is creating the NodeContracts) can be excluded from billing (apart from public ip and network usage). -- Billing rates are calculated hourly on the TFGrid. - - While some of the documentation mentions a monthly price, the chain expresses pricing per hour. The monthly price shown within the manual is offered as a convenience to users, as it provides a simple way to estimate costs. 
- -## Discounts - -- Discounts are received for renting a dedicated node's capacity on the TFGrid: - - 50% for a dedicated node (TF Pricing policies) - - A second-level discount of up to 60% depending on balance level; see [Discount Levels](../../../knowledge_base/cloud/pricing/staking_discount_levels.md) -- Discounts are calculated every time the grid bills by checking the available TFT balance on the user wallet and seeing if it is sufficient to receive a discount. As a result, if the user balance drops below the threshold of a given discount, the deployment price increases. - -## Usage - -- See the list of all dedicated nodes in the `Dedicated Machines` tab on the portal. - - ![ ](../img/dedicated_machines.png) - - - Hover over the price to see the applied discounts - - ![](../img/dashboard_dedicated_nodes_discounts.png) - - - Expand a row to see more info on the node: - - ![ ](../img/dashboard_dedicated_nodes_details.png) - - Resources - - Location - - Possible Public IPs *this depends on the farm it belongs to* - - - You can see the nodes in 2 states: - - Free - - Reserved *Owned by current twin* -- Reserve a node: - - If the node is not rented by another twin, you can simply click reserve. - - -- Unreserve a node: - - Same as reserving, but an extra check is done to make sure you don't have any active workloads on the node before unreserving. - -## GPU Support - -To use a GPU on the TFGrid, users need to rent a dedicated node. Once they have rented a dedicated node equipped with a GPU, users can deploy workloads on their dedicated GPU node. - -## Filter and Reserve a GPU Node - -You can filter and reserve a GPU node using the [Dedicated Machines section](https://dashboard.grid.tf/#/deploy/dedicated-nodes/) of the **Dashboard**. - -### Filter Nodes - -- Filter nodes using the vendor name - - In **Filters**, select **GPU's vendor name** - - Write the name of the vendor desired (e.g. 
**nvidia**, **amd**) -- Filter nodes using the device name - - In **Filters**, select **GPU's device name** - - Write the name of the device desired (e.g. **GT218**) - -### Reserve a Node - -When you have decided which node to reserve, click on **Reserve** under the column named **Actions**. Once you've rented a dedicated node that has a GPU, you can deploy GPU workloads. - -## GPU Support Links - -The ThreeFold Manual covers many ways to use a GPU node on the TFGrid. Read [this section](../../system_administrators/gpu/gpu_toc.md) to learn more. \ No newline at end of file diff --git a/collections/documentation/dashboard/deploy/deploy.md b/collections/documentation/dashboard/deploy/deploy.md deleted file mode 100644 index a96decc..0000000 --- a/collections/documentation/dashboard/deploy/deploy.md +++ /dev/null @@ -1,27 +0,0 @@ -# Deploy - -Here you will find everything related to deployments on the ThreeFold grid. This includes: - -- Checking the cost of a deployment using [Pricing Calculator](./pricing_calculator.md) -- Finding a node to deploy on using the [Node Finder](./node_finder.md) -- Deploying your desired workload from [Virtual Machines](../solutions/vm_intro.md), [Orchestrators](./orchestrators.md), or [Applications](./applications.md) -- Renting your own node on the ThreeFold grid from [Dedicated Machines](./dedicated_machines.md) -- Consulting [Your Contracts](./your_contracts.md) on the TFGrid -- Finding or publishing Flists from [Images](./images.md) -- Updating or generating your SSH key from [SSH Keys](./ssh_keys.md) - - ![](../img/sidebar_2.png) - -*** - -## Table of Contents - -- [Pricing Calculator](./pricing_calculator.md) -- [Node Finder](./node_finder.md) -- [Virtual Machines](../solutions/vm_intro.md) -- [Orchestrators](./orchestrators.md) -- [Dedicated Machines](./dedicated_machines.md) -- [Applications](./applications.md) -- [Your Contracts](./your_contracts.md) -- [Images](./images.md) -- [SSH Keys](./ssh_keys.md) \ No newline at end of 
file diff --git a/collections/documentation/dashboard/deploy/images.md b/collections/documentation/dashboard/deploy/images.md deleted file mode 100644 index 2c27966..0000000 --- a/collections/documentation/dashboard/deploy/images.md +++ /dev/null @@ -1,5 +0,0 @@ -# Images - -Find or Publish your Flist from [Zero-OS Hub](https://hub.grid.tf/) - -![](../img/0_hub.png) \ No newline at end of file diff --git a/collections/documentation/dashboard/deploy/node_finder.md b/collections/documentation/dashboard/deploy/node_finder.md deleted file mode 100644 index dd93347..0000000 --- a/collections/documentation/dashboard/deploy/node_finder.md +++ /dev/null @@ -1,40 +0,0 @@ -

Node Finder

- -

Table of Contents

- -- [Nodes](#nodes) -- [GPU Support](#gpu-support) - -*** - -## Nodes - -The Node Finder page provides a more detailed view of the nodes available on the ThreeFold grid, with detailed information and statistics about any of the available nodes. - -![](../img/nodes.png) - -You can get a node with the desired specifications using the filters available on the Nodes page. - -![](../img/nodes_filters.png) - -You can see all of the node details by clicking on a node record. - -![](../img/nodes_details.png) - -## GPU Support - -![GPU support](../img/gpu_filter.png) - -- A new filter for GPU-supported nodes is now available on the Nodes page. -- GPU count -- Filtering capabilities based on the model / device - -The details page shows the card information and its status (`reserved` or `available`). The ID needed during deployments is also easily accessible, with a copy-to-clipboard button. - -![GPU details](../img/gpu_details.png) - -Here’s an example of how it looks when reserved: - -![GPU details](../img/gpu_details_reserved.png) - -The TF Dashboard is where nodes are reserved: the farmer can set the extra fees on the form, and the user can reserve the node and get its details (cost including the extra fees, GPU information). diff --git a/collections/documentation/dashboard/deploy/node_finder_gpu_support.md b/collections/documentation/dashboard/deploy/node_finder_gpu_support.md deleted file mode 100644 index 7002630..0000000 --- a/collections/documentation/dashboard/deploy/node_finder_gpu_support.md +++ /dev/null @@ -1,19 +0,0 @@ -

GPU Support

- -*** - -![GPU support](../img/gpu_filter.png) - -- A new filter for GPU-supported nodes is now available on the Nodes page. -- GPU count -- Filtering capabilities based on the model / device - -The details page shows the card information and its status (`reserved` or `available`). The ID needed during deployments is also easily accessible, with a copy-to-clipboard button. - -![GPU details](../img/gpu_details.png) - -Here’s an example of how it looks when reserved: - -![GPU details](../img/gpu_details_reserved.png) - -The TF Dashboard is where nodes are reserved: the farmer can set the extra fees on the form, and the user can reserve the node and get its details (cost including the extra fees, GPU information). diff --git a/collections/documentation/dashboard/deploy/orchestrators.md b/collections/documentation/dashboard/deploy/orchestrators.md deleted file mode 100644 index 47282a6..0000000 --- a/collections/documentation/dashboard/deploy/orchestrators.md +++ /dev/null @@ -1,14 +0,0 @@ -# Orchestrators - -Deploy your favorite orchestrating services and enjoy the seamless coordination and automation of various software applications and services. - -![](../img/orchestrator_landing.png) - -*** - -## Table of Contents - -- [Kubernetes](../solutions/k8s.md) -- [Caprover](../solutions/caprover.md) - - [Caprover Admin](../solutions/caprover_admin.md) - - [Caprover Worker](../solutions/caprover_worker.md) \ No newline at end of file diff --git a/collections/documentation/dashboard/deploy/pricing_calculator.md b/collections/documentation/dashboard/deploy/pricing_calculator.md deleted file mode 100644 index a050215..0000000 --- a/collections/documentation/dashboard/deploy/pricing_calculator.md +++ /dev/null @@ -1,6 +0,0 @@ -# TF Resource Calculator - -A tool provided by ThreeFold that allows users to estimate and calculate the potential cost of a deployment on the ThreeFold grid. 
The resource calculator takes into account various factors, including deployment resources, node certification, and current balance, and displays an accurate estimate for the deployment in terms of ThreeFold Tokens (TFT) and in USD per month. - - -![](../img/pricing_calculator.png) \ No newline at end of file diff --git a/collections/documentation/dashboard/deploy/ssh_keys.md b/collections/documentation/dashboard/deploy/ssh_keys.md deleted file mode 100644 index 4e2cecb..0000000 --- a/collections/documentation/dashboard/deploy/ssh_keys.md +++ /dev/null @@ -1,5 +0,0 @@ -# SSH Keys - -Add, update or generate your SSH key with a click of a button. - -![](../img/SSH_Key.png) \ No newline at end of file diff --git a/collections/documentation/dashboard/deploy/vm.md b/collections/documentation/dashboard/deploy/vm.md deleted file mode 100644 index 5a22576..0000000 --- a/collections/documentation/dashboard/deploy/vm.md +++ /dev/null @@ -1,13 +0,0 @@ -

Virtual Machines

- -On the TFGrid, you can deploy both micro and full virtual machines. - -![](../img/vm_landing.png) - -*** - -

Table of Contents

- -- [Micro and Full VM Differences ](../solutions/vm_differences.md) -- [Full Virtual Machine](../solutions/fullVm.md) -- [Micro Virtual Machine](../solutions/vm.md) diff --git a/collections/documentation/dashboard/deploy/your_contracts.md b/collections/documentation/dashboard/deploy/your_contracts.md deleted file mode 100644 index c7cb4a6..0000000 --- a/collections/documentation/dashboard/deploy/your_contracts.md +++ /dev/null @@ -1,49 +0,0 @@ -# Contracts - -From the Contracts section you can check your contracts by navigating to `Deploy` and then the `Your Contracts` tab from the sidebar. - -From there you will see the `Contracts List`. The list is split into three different sections. These sections are: - -### Node contracts - -![image](../img/node_contracts.png) - -### Name contracts - -![image](../img/name_contracts.png) - -### Rent contracts - -![image](../img/rent_contracts.png) - - - -This list includes the following information about each contract. - -- Contract ID. -- Contract Type. -- Contract State (Created, Deleted, GracePeriod). -- Solution Type. -- Billing Rate (in TFT/Hour). -- Solution Name. -- Created At. -- Expiration (Only appears if the contract is in GracePeriod). -- Node ID. -- Node Status (Up, Down, Standby). -- Show Details (This button will display the detailed information of the desired contract). - - ![image](../img/contract_details.png) - - -## Cancel Contract - -You can also cancel the target contract(s) by selecting the contract(s) you want to cancel: - -- Click on the checkbox on the left side of the contract row -- Click on the delete button in the bottom right of the table -- Review the contract ID(s), then click on the *Delete* button - -Note: - ->- You can cancel all your contracts by clicking on the checkbox on the left side of the table header and then clicking on the *Delete* button. ->- It is advisable to remove the contract from its solution page, especially when multiple contracts may be linked to the same instance. 
diff --git a/collections/documentation/dashboard/farms/farms.md b/collections/documentation/dashboard/farms/farms.md deleted file mode 100644 index 7faedd3..0000000 --- a/collections/documentation/dashboard/farms/farms.md +++ /dev/null @@ -1,19 +0,0 @@ -# Farms - -Here you will find everything farming related. This includes: - -- Monitoring, creating, and updating your farms from the [Your Farms](./your_farms.md) section, where you can also check your nodes and update multiple things like the public configuration and extra fees of the node. -- Exploring and finding farms that are available on the ThreeFold grid from the [Farm Finder](./farms_finder.md) section. -- Generating your own boot device for your system from the [Node Installer](./node_installer.md) section. -- Estimating and calculating potential earnings from farming on the ThreeFold Grid from the [Simulator](./simulator.md) section. - - ![](../img/sidebar_3.png) - -*** - -## Table of Contents - -- [Your Farms](./your_farms.md) -- [Farm Finder](./farms_finder.md) -- [Node Installer](./node_installer.md) -- [Simulator](./simulator.md) \ No newline at end of file diff --git a/collections/documentation/dashboard/farms/farms_finder.md b/collections/documentation/dashboard/farms/farms_finder.md deleted file mode 100644 index 252aa73..0000000 --- a/collections/documentation/dashboard/farms/farms_finder.md +++ /dev/null @@ -1,13 +0,0 @@ -# Farms - -The farms page provides a more detailed view of the farms available on the ThreeFold grid, with detailed information about any of the available farms. - -![](../img/farms.png) - -You can search for a specific farm using the farms filters. - -![](../img/farms_filters.png) - -You can see all of the farm details by clicking on a farm record. 
- -![](../img/farms_details.png) diff --git a/collections/documentation/dashboard/farms/node_installer.md b/collections/documentation/dashboard/farms/node_installer.md deleted file mode 100644 index 903e10e..0000000 --- a/collections/documentation/dashboard/farms/node_installer.md +++ /dev/null @@ -1,5 +0,0 @@ -# Node Installer - -Generate your own boot device for your system and download Zero-OS Images from [Zero-OS Bootstrap](https://bootstrap.grid.tf/) - -![](../img/0_Bootstrap.png) \ No newline at end of file diff --git a/collections/documentation/dashboard/farms/simulator.md b/collections/documentation/dashboard/farms/simulator.md deleted file mode 100644 index 3d5b594..0000000 --- a/collections/documentation/dashboard/farms/simulator.md +++ /dev/null @@ -1,5 +0,0 @@ -# Simulator - -A tool provided by ThreeFold that allows users to estimate and calculate potential earnings from farming on the ThreeFold Grid. Farming refers to the process of providing computing resources, such as storage and processing power, to the ThreeFold Grid and earning tokens in return. The tf-farming-calculator takes into account various factors, including the amount of resources contributed, the duration of farming, and the current market conditions, to provide users with an estimate of their potential earnings in terms of ThreeFold Tokens (TFT). - -![](../img/simulator.png) \ No newline at end of file diff --git a/collections/documentation/dashboard/farms/your_farms.md b/collections/documentation/dashboard/farms/your_farms.md deleted file mode 100644 index 8afc756..0000000 --- a/collections/documentation/dashboard/farms/your_farms.md +++ /dev/null @@ -1,134 +0,0 @@ -# Farms - -This comprehensive guide aims to provide users with detailed instructions and insights into efficiently managing their _Farms_. 
Farms encompass servers and storage devices contributing computational and storage capabilities to the grid, empowering users to oversee, maintain, and optimize their resources effectively. - -- [Getting started](#getting-started) -- [Create a new Farm](#create-a-new-farm) -- [Manage Your Farms](#manage-your-farms) - - [Add a public IP to your Farm](#add-a-public-ip-to-your-farm) - - [Add a Stellar address for payout](#add-a-stellar-address-for-payout) - - [Generate your node bootstrap image](#generate-your-node-bootstrap-image) - - [Additional information](#additional-information) -- [Manage Your Nodes](#manage-your-nodes) - - [Node information](#node-information) - - [Extra Fees](#extra-fees) - - [Public Configuration](#public-configuration) - - [The Difference Between IPs Assigned to Nodes Versus a Farm](#the-difference-between-ips-assigned-to-nodes-versus-a-farm) - -## Getting started - -After logging in to the TF Dashboard, on the sidebar click on **Dashboard**, then _Your Farms_. - -## Create a new Farm - -If you want to start farming, you need a farmID, the ID of the farm that owns the hardware node(s) you connect to the TFGrid. - -**Currently on**: - -- [Devnet](https://dashboard.dev.grid.tf/) -- [Qanet](https://dashboard.qa.grid.tf/) -- [Testnet](https://dashboard.test.grid.tf/) -- [Mainnet](https://dashboard.grid.tf/) - -Click `Create Farm` and choose a name. - -![ ](../img/dashboard_farms.png) - -![ ](../img/dashboard_farms_create.png) - -Click on `Create`. - -The farm is by default set up as 'DIY'. A farm can become certified through the certification program. -A pricing policy is also defined. The pricing policy is currently the same for all farms; the field is created for future use. - -## Manage Your Farms - -You can browse your farms in the _Farms_ table; it contains all your own farms and is your entry point to manage them, as in the following sections. 
- -![](../img/dashboard_farms_farms_table.png) - -### Add a public IP to your Farm - -If you have public IPv4 addresses available for use on the TFGrid, you can add them to your farm. -Click `ADD IP`, specify the addresses and the gateway, and click `CREATE`. -You can add them one by one or as a range of IPs. - -**Some notes about adding new IPs**: - -- Be careful not to create a new IP range that contains an IP address that already exists; doing so will result in an error. -- Verify that both the gateway address and the IP address are correct. -- Be careful not to include the same gateway address in a new IP range. - -![ ](../img/dashboard_farms_farm_details.png) - -![ ](../img/dashboard_farms_add_ip_single.png) - -![ ](../img/dashboard_farms_add_ip_range.png) - -Deleting IPv4 addresses is also possible here. The `Deployed Contract ID` gives an indication of whether an IP is currently used. If it is 0, it is safe to remove it. - -![ ](../img/dashboard_farms_ip_details.png) - -### Add a Stellar address for payout - -In a first phase, farming of tokens still results in payout on the Stellar network. So to get the farming reward, a Stellar address needs to be provided. - -![ ](../img/dashboard_farms_farm_details.png) - -![ ](../img/dashboard_farms_stellar_address.png) - -You can read about different ways to store TFT [here](../../threefold_token/storing_tft/storing_tft.md). Make sure to use a Stellar wallet for your farming rewards. - -### Generate your node bootstrap image - -Once you know your farmID, you can set up your node on TFGrid3. Click on `Bootstrap Node Image`. - -![dashboard_bootstrap_farm](../img/dashboard_bootstrap_farm.png) - -Read more about the Zero-OS bootstrap image [here](../../farmers/3node_building/2_bootstrap_image.md). - -### Additional information - -After booting a node, the info will become available in the `Your Nodes` table, including the status info along with the minting and fixup receipts. 
- -![ ](../img/dashboard_farms_node_details.png) - -Clicking on the node statistics will open up a calendar where you can view the periods the node was minting or undergoing a fixup. Clicking on the periods will show a popup with the start and end datetimes, receipt hash and the amount of TFTs minted (if it is a minting receipt). - -![ ](../img/dashboard_portal_ui_nodes_minting.png) - -You can also download a single node's receipts using the `Download Receipts` button within the node statistics. Moreover, you can download all of the nodes' receipts using the `Download Receipts` button on the top left corner of the farm nodes table. - -## Manage Your Nodes - -As with the Farms table, the _Nodes_ table contains all your own nodes and is your entry point to manage them, as in the following sections. - -### Node information - -Expand your node information by clicking on the expand button in the target node row. - -### Extra Fees - -You can set a price for the special hardware you’re providing, e.g. GPUs, while renting. - -![](../img/dashboard_farms_extra_fee.png) - -- Under the **Your Nodes** table, locate the target node and click **Set Additional Fees** under **Actions** -- Set a monthly fee (in USD) and click **Save** - -### Public Configuration - -To configure public IP addresses for a specific node: - -![](../img/dashboard_farms_public_config.png) - -- Under the **Your Nodes** table, locate the target node and click **Add a public config** under **Actions** -- Fill in the necessary information and click **Save**. Only the IPv4 address and gateway are necessary. - -> The IPv6 address and the domain are optional, but if you provide the IPv6 address you must also provide its domain. - -#### The Difference Between IPs Assigned to Nodes Versus a Farm - ---- - -IPs assigned to a farm are available to be rented by workloads. They can be assigned to virtual machines, for example. IPs assigned to nodes enable each node to become a gateway. 
diff --git a/collections/documentation/dashboard/home.md b/collections/documentation/dashboard/home.md deleted file mode 100644 index d14fd9a..0000000 --- a/collections/documentation/dashboard/home.md +++ /dev/null @@ -1,43 +0,0 @@ -

ThreeFold Dashboard

- -Explore, control, and manage your ThreeFold Grid resources effortlessly through our integrated Dashboard. Deploy solutions seamlessly while gaining full control, all within a unified interface. - -The ThreeFold Dashboard is a revolutionary platform that simplifies the deployment process, allowing users to effortlessly interact with the TFGrid using intuitive web components known as weblets. - -## What is the ThreeFold Dashboard? - -The ThreeFold Dashboard is a dynamic environment designed for both seasoned developers and newcomers alike. It offers a seamless and accessible browser experience, making it easy to deploy solutions on the TFGrid through the use of weblets. - -In the context of the Dashboard, a weblet is a compiled JavaScript web component that can be effortlessly embedded within the HTML page of a web application. This modular approach allows for flexible and intuitive interactions, facilitating a user-friendly deployment process. - -The backend for the weblets is introduced with the [Javascript Client](../javascript/grid3_javascript_readme.md) which communicates to TFChain over RMB. - -

## Table of Contents

- [Wallet Connector](./wallet_connector.md)
- [TFGrid](./tfgrid/tfgrid.md)
- [Deploy](./deploy/deploy.md)
- [Farms](./farms/farms.md)
- [TFChain](./tfchain/tfchain.md)

## Advantages

- It is an easy, no-code way to deploy a whole solution on the TFGrid.
- It is 100% decentralized; there is no server involved.
- It is a powerful tool designed to empower individuals and organizations with seamless control and management over their ThreeFold Grid resources.
- It provides an intuitive web-based interface that allows users to effortlessly deploy, monitor, and scale their workloads on the decentralized and sustainable ThreeFold Grid infrastructure.

## Dashboard

You can access the ThreeFold Dashboard on the different TFChain networks:

- [https://dashboard.dev.grid.tf](https://dashboard.dev.grid.tf) for Devnet
- [https://dashboard.qa.grid.tf](https://dashboard.qa.grid.tf) for QAnet
- [https://dashboard.test.grid.tf](https://dashboard.test.grid.tf) for Testnet
- [https://dashboard.grid.tf](https://dashboard.grid.tf) for Mainnet

## Limitations

- Regarding browser support, only the Google Chrome browser (and thus the Brave browser) is supported at the moment, with more browsers to be supported soon.
- The Dashboard deploys one thing at a time.
- Deploying a solution like Peertube might take some time, so wait a little until it is fully running.
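The network-to-URL mapping listed under the Dashboard section above can be captured in a small lookup helper. This is a hypothetical convenience for scripts that need to pick the right Dashboard per network, not part of any official ThreeFold tooling.

```python
# Dashboard URLs per TFChain network, as listed above.
DASHBOARD_URLS = {
    "dev": "https://dashboard.dev.grid.tf",
    "qa": "https://dashboard.qa.grid.tf",
    "test": "https://dashboard.test.grid.tf",
    "main": "https://dashboard.grid.tf",
}

def dashboard_url(network: str) -> str:
    """Return the Dashboard URL for a TFChain network name (case-insensitive)."""
    try:
        return DASHBOARD_URLS[network.lower()]
    except KeyError:
        raise ValueError(
            f"unknown network {network!r}; expected one of {sorted(DASHBOARD_URLS)}"
        )
```

For example, `dashboard_url("main")` returns the Mainnet Dashboard URL.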
diff --git a/collections/documentation/dashboard/img/.done b/collections/documentation/dashboard/img/.done deleted file mode 100644 index d8eefe3..0000000 --- a/collections/documentation/dashboard/img/.done +++ /dev/null @@ -1,3 +0,0 @@ -dashboard_tc.png -dashboard_portal_terms_conditions.png -profile_manager1.png diff --git a/collections/documentation/dashboard/img/0_bootstrap.png b/collections/documentation/dashboard/img/0_bootstrap.png deleted file mode 100644 index dedec1b..0000000 Binary files a/collections/documentation/dashboard/img/0_bootstrap.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/0_hub.png b/collections/documentation/dashboard/img/0_hub.png deleted file mode 100644 index da7faf5..0000000 Binary files a/collections/documentation/dashboard/img/0_hub.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/add_domain_10.png b/collections/documentation/dashboard/img/add_domain_10.png deleted file mode 100644 index aabe5de..0000000 Binary files a/collections/documentation/dashboard/img/add_domain_10.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/add_domain_11.png b/collections/documentation/dashboard/img/add_domain_11.png deleted file mode 100644 index 8540ce8..0000000 Binary files a/collections/documentation/dashboard/img/add_domain_11.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/add_domain_6.png b/collections/documentation/dashboard/img/add_domain_6.png deleted file mode 100644 index 05611ef..0000000 Binary files a/collections/documentation/dashboard/img/add_domain_6.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/add_domain_8.png b/collections/documentation/dashboard/img/add_domain_8.png deleted file mode 100644 index 3a35803..0000000 Binary files a/collections/documentation/dashboard/img/add_domain_8.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/add_domain_9.png 
b/collections/documentation/dashboard/img/add_domain_9.png deleted file mode 100644 index 346009b..0000000 Binary files a/collections/documentation/dashboard/img/add_domain_9.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/add_new_domain_success.png b/collections/documentation/dashboard/img/add_new_domain_success.png deleted file mode 100644 index 3b23776..0000000 Binary files a/collections/documentation/dashboard/img/add_new_domain_success.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/applications_landing.png b/collections/documentation/dashboard/img/applications_landing.png deleted file mode 100644 index e4117a5..0000000 Binary files a/collections/documentation/dashboard/img/applications_landing.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/contract_details.png b/collections/documentation/dashboard/img/contract_details.png deleted file mode 100644 index 286c87a..0000000 Binary files a/collections/documentation/dashboard/img/contract_details.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_bridge.png b/collections/documentation/dashboard/img/dashboard_bridge.png deleted file mode 100644 index 111ef91..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_bridge.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_connect.png b/collections/documentation/dashboard/img/dashboard_connect.png deleted file mode 100644 index 289b2e8..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_connect.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_dao.png b/collections/documentation/dashboard/img/dashboard_dao.png deleted file mode 100644 index 37fb232..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_dao.png and /dev/null differ diff --git 
a/collections/documentation/dashboard/img/dashboard_dedicated_nodes_details.png b/collections/documentation/dashboard/img/dashboard_dedicated_nodes_details.png deleted file mode 100644 index 649f7c9..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_dedicated_nodes_details.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_dedicated_nodes_discounts.png b/collections/documentation/dashboard/img/dashboard_dedicated_nodes_discounts.png deleted file mode 100644 index fe66434..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_dedicated_nodes_discounts.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_explorer_farms.png b/collections/documentation/dashboard/img/dashboard_explorer_farms.png deleted file mode 100644 index 58d3305..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_explorer_farms.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_explorer_nodes.png b/collections/documentation/dashboard/img/dashboard_explorer_nodes.png deleted file mode 100644 index 963730e..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_explorer_nodes.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_explorer_statistics.png b/collections/documentation/dashboard/img/dashboard_explorer_statistics.png deleted file mode 100644 index 0d81f19..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_explorer_statistics.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_farms.png b/collections/documentation/dashboard/img/dashboard_farms.png deleted file mode 100644 index bebffe2..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_farms.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_farms_add_ip_range.png 
b/collections/documentation/dashboard/img/dashboard_farms_add_ip_range.png deleted file mode 100644 index 30d6dcc..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_farms_add_ip_range.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_farms_add_ip_single.png b/collections/documentation/dashboard/img/dashboard_farms_add_ip_single.png deleted file mode 100644 index 26f80e4..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_farms_add_ip_single.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_farms_create.png b/collections/documentation/dashboard/img/dashboard_farms_create.png deleted file mode 100644 index 684dfbb..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_farms_create.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_farms_extra_fee.png b/collections/documentation/dashboard/img/dashboard_farms_extra_fee.png deleted file mode 100644 index 2800f03..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_farms_extra_fee.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_farms_farm_details.png b/collections/documentation/dashboard/img/dashboard_farms_farm_details.png deleted file mode 100644 index 5de107e..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_farms_farm_details.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_farms_farms_table.png b/collections/documentation/dashboard/img/dashboard_farms_farms_table.png deleted file mode 100644 index dd92e0b..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_farms_farms_table.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_farms_ip_details.png b/collections/documentation/dashboard/img/dashboard_farms_ip_details.png deleted file mode 100644 index 025d9b5..0000000 
Binary files a/collections/documentation/dashboard/img/dashboard_farms_ip_details.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_farms_node_details.png b/collections/documentation/dashboard/img/dashboard_farms_node_details.png deleted file mode 100644 index c95e3fc..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_farms_node_details.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_farms_public_config.png b/collections/documentation/dashboard/img/dashboard_farms_public_config.png deleted file mode 100644 index cf1e8f5..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_farms_public_config.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_farms_stellar_address.png b/collections/documentation/dashboard/img/dashboard_farms_stellar_address.png deleted file mode 100644 index 0ba6efd..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_farms_stellar_address.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_generate_account.png b/collections/documentation/dashboard/img/dashboard_generate_account.png deleted file mode 100644 index c0433dc..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_generate_account.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_login.png b/collections/documentation/dashboard/img/dashboard_login.png deleted file mode 100644 index c119ac3..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_login.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_portal_account.png b/collections/documentation/dashboard/img/dashboard_portal_account.png deleted file mode 100644 index be66b4e..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_portal_account.png and /dev/null differ diff --git 
a/collections/documentation/dashboard/img/dashboard_portal_create_account_1.png b/collections/documentation/dashboard/img/dashboard_portal_create_account_1.png deleted file mode 100644 index 4024e7d..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_portal_create_account_1.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_portal_create_account_2.png b/collections/documentation/dashboard/img/dashboard_portal_create_account_2.png deleted file mode 100644 index 5f1b4e5..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_portal_create_account_2.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_portal_create_twin.png b/collections/documentation/dashboard/img/dashboard_portal_create_twin.png deleted file mode 100644 index 776a27e..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_portal_create_twin.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_portal_deposit_tft.png b/collections/documentation/dashboard/img/dashboard_portal_deposit_tft.png deleted file mode 100644 index 97782e3..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_portal_deposit_tft.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_portal_farms.png b/collections/documentation/dashboard/img/dashboard_portal_farms.png deleted file mode 100644 index 3272e64..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_portal_farms.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_portal_fill_ipv6.png b/collections/documentation/dashboard/img/dashboard_portal_fill_ipv6.png deleted file mode 100644 index c2ddd0c..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_portal_fill_ipv6.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_portal_terms_conditions.png 
b/collections/documentation/dashboard/img/dashboard_portal_terms_conditions.png deleted file mode 100644 index b784fef..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_portal_terms_conditions.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_portal_transaction_sign.png b/collections/documentation/dashboard/img/dashboard_portal_transaction_sign.png deleted file mode 100644 index 62b4219..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_portal_transaction_sign.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_portal_transfer_detail.png b/collections/documentation/dashboard/img/dashboard_portal_transfer_detail.png deleted file mode 100644 index 919303c..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_portal_transfer_detail.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_portal_twin_created.png b/collections/documentation/dashboard/img/dashboard_portal_twin_created.png deleted file mode 100644 index 0d9b470..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_portal_twin_created.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_portal_ui_nodes_minting.png b/collections/documentation/dashboard/img/dashboard_portal_ui_nodes_minting.png deleted file mode 100644 index fc925e2..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_portal_ui_nodes_minting.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_portal_withdraw_tft.png b/collections/documentation/dashboard/img/dashboard_portal_withdraw_tft.png deleted file mode 100644 index 5493694..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_portal_withdraw_tft.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_sidebar_portal.png 
b/collections/documentation/dashboard/img/dashboard_sidebar_portal.png deleted file mode 100644 index 9639032..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_sidebar_portal.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_statistics.png b/collections/documentation/dashboard/img/dashboard_statistics.png deleted file mode 100644 index 8badd4c..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_statistics.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_swap.png b/collections/documentation/dashboard/img/dashboard_swap.png deleted file mode 100644 index dbda2ba..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_swap.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_tc.png b/collections/documentation/dashboard/img/dashboard_tc.png deleted file mode 100644 index f58af46..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_tc.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_transfer_address.png b/collections/documentation/dashboard/img/dashboard_transfer_address.png deleted file mode 100644 index 2a389e9..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_transfer_address.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_transfer_twin.png b/collections/documentation/dashboard/img/dashboard_transfer_twin.png deleted file mode 100644 index 7bff8ca..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_transfer_twin.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/dashboard_uptime.png b/collections/documentation/dashboard/img/dashboard_uptime.png deleted file mode 100644 index 21920e7..0000000 Binary files a/collections/documentation/dashboard/img/dashboard_uptime.png and /dev/null differ diff --git 
a/collections/documentation/dashboard/img/dedicated_machines.png b/collections/documentation/dashboard/img/dedicated_machines.png deleted file mode 100644 index e832be5..0000000 Binary files a/collections/documentation/dashboard/img/dedicated_machines.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_basics_.png b/collections/documentation/dashboard/img/explorer_basics_.png deleted file mode 100644 index d8635ea..0000000 Binary files a/collections/documentation/dashboard/img/explorer_basics_.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_basics_2.png b/collections/documentation/dashboard/img/explorer_basics_2.png deleted file mode 100644 index 8af783c..0000000 Binary files a/collections/documentation/dashboard/img/explorer_basics_2.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_darkmode.png b/collections/documentation/dashboard/img/explorer_darkmode.png deleted file mode 100644 index 54cce34..0000000 Binary files a/collections/documentation/dashboard/img/explorer_darkmode.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_farm_details.png b/collections/documentation/dashboard/img/explorer_farm_details.png deleted file mode 100644 index 17f5f88..0000000 Binary files a/collections/documentation/dashboard/img/explorer_farm_details.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_farms.png b/collections/documentation/dashboard/img/explorer_farms.png deleted file mode 100644 index 73ddb75..0000000 Binary files a/collections/documentation/dashboard/img/explorer_farms.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_find_country_pubip.png b/collections/documentation/dashboard/img/explorer_find_country_pubip.png deleted file mode 100644 index 54b7ab6..0000000 Binary files a/collections/documentation/dashboard/img/explorer_find_country_pubip.png and 
/dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_gpu.png b/collections/documentation/dashboard/img/explorer_gpu.png deleted file mode 100644 index 6df7394..0000000 Binary files a/collections/documentation/dashboard/img/explorer_gpu.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_grafana.png b/collections/documentation/dashboard/img/explorer_grafana.png deleted file mode 100644 index 9a702fb..0000000 Binary files a/collections/documentation/dashboard/img/explorer_grafana.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_node_details.png b/collections/documentation/dashboard/img/explorer_node_details.png deleted file mode 100644 index e6fd8c1..0000000 Binary files a/collections/documentation/dashboard/img/explorer_node_details.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_node_example_1.png b/collections/documentation/dashboard/img/explorer_node_example_1.png deleted file mode 100644 index 712a8da..0000000 Binary files a/collections/documentation/dashboard/img/explorer_node_example_1.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_node_example_2.png b/collections/documentation/dashboard/img/explorer_node_example_2.png deleted file mode 100644 index 7a0e46b..0000000 Binary files a/collections/documentation/dashboard/img/explorer_node_example_2.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_node_resources_1.png b/collections/documentation/dashboard/img/explorer_node_resources_1.png deleted file mode 100644 index 51e44a0..0000000 Binary files a/collections/documentation/dashboard/img/explorer_node_resources_1.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_node_resources_2.png b/collections/documentation/dashboard/img/explorer_node_resources_2.png deleted file mode 100644 index 0c406f3..0000000 Binary files 
a/collections/documentation/dashboard/img/explorer_node_resources_2.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_node_resources_3.png b/collections/documentation/dashboard/img/explorer_node_resources_3.png deleted file mode 100644 index 47af607..0000000 Binary files a/collections/documentation/dashboard/img/explorer_node_resources_3.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_nodes.png b/collections/documentation/dashboard/img/explorer_nodes.png deleted file mode 100644 index d99a011..0000000 Binary files a/collections/documentation/dashboard/img/explorer_nodes.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_nodes_distribution.png b/collections/documentation/dashboard/img/explorer_nodes_distribution.png deleted file mode 100644 index 90cd664..0000000 Binary files a/collections/documentation/dashboard/img/explorer_nodes_distribution.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_statistics.png b/collections/documentation/dashboard/img/explorer_statistics.png deleted file mode 100644 index 9821a1d..0000000 Binary files a/collections/documentation/dashboard/img/explorer_statistics.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_status.png b/collections/documentation/dashboard/img/explorer_status.png deleted file mode 100644 index 821d4ca..0000000 Binary files a/collections/documentation/dashboard/img/explorer_status.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_toggle_gateways.png b/collections/documentation/dashboard/img/explorer_toggle_gateways.png deleted file mode 100644 index 6ce6c9f..0000000 Binary files a/collections/documentation/dashboard/img/explorer_toggle_gateways.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/explorer_toggle_gpu.png 
b/collections/documentation/dashboard/img/explorer_toggle_gpu.png deleted file mode 100644 index 7ee4d6f..0000000 Binary files a/collections/documentation/dashboard/img/explorer_toggle_gpu.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/farms.png b/collections/documentation/dashboard/img/farms.png deleted file mode 100644 index 5280633..0000000 Binary files a/collections/documentation/dashboard/img/farms.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/farms_details.png b/collections/documentation/dashboard/img/farms_details.png deleted file mode 100644 index 744fde4..0000000 Binary files a/collections/documentation/dashboard/img/farms_details.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/farms_filters.png b/collections/documentation/dashboard/img/farms_filters.png deleted file mode 100644 index 8eca69f..0000000 Binary files a/collections/documentation/dashboard/img/farms_filters.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/gpu_details.png b/collections/documentation/dashboard/img/gpu_details.png deleted file mode 100644 index 83314cb..0000000 Binary files a/collections/documentation/dashboard/img/gpu_details.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/gpu_details_reserved.png b/collections/documentation/dashboard/img/gpu_details_reserved.png deleted file mode 100644 index 84ab221..0000000 Binary files a/collections/documentation/dashboard/img/gpu_details_reserved.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/gpu_filter.png b/collections/documentation/dashboard/img/gpu_filter.png deleted file mode 100644 index 3a3657d..0000000 Binary files a/collections/documentation/dashboard/img/gpu_filter.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/grid_health.png b/collections/documentation/dashboard/img/grid_health.png deleted file mode 100644 index 
2c5d9ee..0000000 Binary files a/collections/documentation/dashboard/img/grid_health.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/list_domains.png b/collections/documentation/dashboard/img/list_domains.png deleted file mode 100644 index 9144a76..0000000 Binary files a/collections/documentation/dashboard/img/list_domains.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/manage_domains_button.png b/collections/documentation/dashboard/img/manage_domains_button.png deleted file mode 100644 index 891110b..0000000 Binary files a/collections/documentation/dashboard/img/manage_domains_button.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/minting.png b/collections/documentation/dashboard/img/minting.png deleted file mode 100644 index b7de134..0000000 Binary files a/collections/documentation/dashboard/img/minting.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/monitoring.png b/collections/documentation/dashboard/img/monitoring.png deleted file mode 100644 index 1cda446..0000000 Binary files a/collections/documentation/dashboard/img/monitoring.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/name_contracts.png b/collections/documentation/dashboard/img/name_contracts.png deleted file mode 100644 index 35f1217..0000000 Binary files a/collections/documentation/dashboard/img/name_contracts.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/node_contracts.png b/collections/documentation/dashboard/img/node_contracts.png deleted file mode 100644 index b785b21..0000000 Binary files a/collections/documentation/dashboard/img/node_contracts.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/node_detail_.png b/collections/documentation/dashboard/img/node_detail_.png deleted file mode 100644 index 1e06f9a..0000000 Binary files a/collections/documentation/dashboard/img/node_detail_.png and 
/dev/null differ diff --git a/collections/documentation/dashboard/img/node_detail_1.png b/collections/documentation/dashboard/img/node_detail_1.png deleted file mode 100644 index 6747192..0000000 Binary files a/collections/documentation/dashboard/img/node_detail_1.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/nodes.png b/collections/documentation/dashboard/img/nodes.png deleted file mode 100644 index be29183..0000000 Binary files a/collections/documentation/dashboard/img/nodes.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/nodes_details.png b/collections/documentation/dashboard/img/nodes_details.png deleted file mode 100644 index f5c4e8b..0000000 Binary files a/collections/documentation/dashboard/img/nodes_details.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/nodes_filters.png b/collections/documentation/dashboard/img/nodes_filters.png deleted file mode 100644 index 68257c2..0000000 Binary files a/collections/documentation/dashboard/img/nodes_filters.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/orchestrator_landing.png b/collections/documentation/dashboard/img/orchestrator_landing.png deleted file mode 100644 index 4770f40..0000000 Binary files a/collections/documentation/dashboard/img/orchestrator_landing.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/owncloud_create_user.png b/collections/documentation/dashboard/img/owncloud_create_user.png deleted file mode 100644 index ba0badc..0000000 Binary files a/collections/documentation/dashboard/img/owncloud_create_user.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/owncloud_credentials.png b/collections/documentation/dashboard/img/owncloud_credentials.png deleted file mode 100644 index 8f1ad29..0000000 Binary files a/collections/documentation/dashboard/img/owncloud_credentials.png and /dev/null differ diff --git 
a/collections/documentation/dashboard/img/owncloud_details.png b/collections/documentation/dashboard/img/owncloud_details.png deleted file mode 100644 index efe6136..0000000 Binary files a/collections/documentation/dashboard/img/owncloud_details.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/owncloud_enabled.png b/collections/documentation/dashboard/img/owncloud_enabled.png deleted file mode 100644 index 8877f6b..0000000 Binary files a/collections/documentation/dashboard/img/owncloud_enabled.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/owncloud_logo.svg b/collections/documentation/dashboard/img/owncloud_logo.svg deleted file mode 100644 index 4959a9f..0000000 --- a/collections/documentation/dashboard/img/owncloud_logo.svg +++ /dev/null @@ -1,159 +0,0 @@ - - - - - - image/svg+xml - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/collections/documentation/dashboard/img/owncloud_logout.png b/collections/documentation/dashboard/img/owncloud_logout.png deleted file mode 100644 index ba21a9f..0000000 Binary files a/collections/documentation/dashboard/img/owncloud_logout.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/owncloud_show.png b/collections/documentation/dashboard/img/owncloud_show.png deleted file mode 100644 index 6b16ee3..0000000 Binary files a/collections/documentation/dashboard/img/owncloud_show.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/owncloud_tfconnect.png b/collections/documentation/dashboard/img/owncloud_tfconnect.png deleted file mode 100644 index d386991..0000000 Binary files a/collections/documentation/dashboard/img/owncloud_tfconnect.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/owncloud_users.png b/collections/documentation/dashboard/img/owncloud_users.png deleted file mode 100644 index aec7c03..0000000 Binary files 
a/collections/documentation/dashboard/img/owncloud_users.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/owncloud_visit.png b/collections/documentation/dashboard/img/owncloud_visit.png deleted file mode 100644 index 5624e89..0000000 Binary files a/collections/documentation/dashboard/img/owncloud_visit.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/polkadot_extension_.png b/collections/documentation/dashboard/img/polkadot_extension_.png deleted file mode 100644 index bf3f6fd..0000000 Binary files a/collections/documentation/dashboard/img/polkadot_extension_.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/pricing_calculator.png b/collections/documentation/dashboard/img/pricing_calculator.png deleted file mode 100644 index 31f21eb..0000000 Binary files a/collections/documentation/dashboard/img/pricing_calculator.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/pro_manager5.png b/collections/documentation/dashboard/img/pro_manager5.png deleted file mode 100644 index 791dae0..0000000 Binary files a/collections/documentation/dashboard/img/pro_manager5.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/pro_manager6.png b/collections/documentation/dashboard/img/pro_manager6.png deleted file mode 100644 index 2b77950..0000000 Binary files a/collections/documentation/dashboard/img/pro_manager6.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/profile_manager1.png b/collections/documentation/dashboard/img/profile_manager1.png deleted file mode 100644 index c34e966..0000000 Binary files a/collections/documentation/dashboard/img/profile_manager1.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/profile_manager2.png b/collections/documentation/dashboard/img/profile_manager2.png deleted file mode 100644 index 98f2154..0000000 Binary files 
a/collections/documentation/dashboard/img/profile_manager2.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/profile_manager3.png b/collections/documentation/dashboard/img/profile_manager3.png deleted file mode 100644 index 407780c..0000000 Binary files a/collections/documentation/dashboard/img/profile_manager3.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/rent_contracts.png b/collections/documentation/dashboard/img/rent_contracts.png deleted file mode 100644 index 170ca57..0000000 Binary files a/collections/documentation/dashboard/img/rent_contracts.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/select_to_delete_domain.png b/collections/documentation/dashboard/img/select_to_delete_domain.png deleted file mode 100644 index 1ae8ce8..0000000 Binary files a/collections/documentation/dashboard/img/select_to_delete_domain.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/sidebar_1.png b/collections/documentation/dashboard/img/sidebar_1.png deleted file mode 100644 index 4779b4c..0000000 Binary files a/collections/documentation/dashboard/img/sidebar_1.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/sidebar_1_copy_full.png b/collections/documentation/dashboard/img/sidebar_1_copy_full.png deleted file mode 100644 index 9e01af9..0000000 Binary files a/collections/documentation/dashboard/img/sidebar_1_copy_full.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/sidebar_2.png b/collections/documentation/dashboard/img/sidebar_2.png deleted file mode 100644 index 58e913b..0000000 Binary files a/collections/documentation/dashboard/img/sidebar_2.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/sidebar_2_copy_full.png b/collections/documentation/dashboard/img/sidebar_2_copy_full.png deleted file mode 100644 index f7d8600..0000000 Binary files 
a/collections/documentation/dashboard/img/sidebar_2_copy_full.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/sidebar_3.png b/collections/documentation/dashboard/img/sidebar_3.png deleted file mode 100644 index 565715d..0000000 Binary files a/collections/documentation/dashboard/img/sidebar_3.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/sidebar_3_copy_full.png b/collections/documentation/dashboard/img/sidebar_3_copy_full.png deleted file mode 100644 index 8924f42..0000000 Binary files a/collections/documentation/dashboard/img/sidebar_3_copy_full.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/sidebar_4.png b/collections/documentation/dashboard/img/sidebar_4.png deleted file mode 100644 index 8f94e9e..0000000 Binary files a/collections/documentation/dashboard/img/sidebar_4.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/sidebar_4_copy_full.png b/collections/documentation/dashboard/img/sidebar_4_copy_full.png deleted file mode 100644 index 7a7e165..0000000 Binary files a/collections/documentation/dashboard/img/sidebar_4_copy_full.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/simulator.png b/collections/documentation/dashboard/img/simulator.png deleted file mode 100644 index b85a528..0000000 Binary files a/collections/documentation/dashboard/img/simulator.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/ssh_key.png b/collections/documentation/dashboard/img/ssh_key.png deleted file mode 100644 index 7d22832..0000000 Binary files a/collections/documentation/dashboard/img/ssh_key.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/statistics.png b/collections/documentation/dashboard/img/statistics.png deleted file mode 100644 index a216231..0000000 Binary files a/collections/documentation/dashboard/img/statistics.png and /dev/null differ diff --git 
a/collections/documentation/dashboard/img/twin.png b/collections/documentation/dashboard/img/twin.png deleted file mode 100644 index 1e62185..0000000 Binary files a/collections/documentation/dashboard/img/twin.png and /dev/null differ diff --git a/collections/documentation/dashboard/img/vm_landing.png b/collections/documentation/dashboard/img/vm_landing.png deleted file mode 100644 index 21211a3..0000000 Binary files a/collections/documentation/dashboard/img/vm_landing.png and /dev/null differ diff --git a/collections/documentation/dashboard/play_go.md b/collections/documentation/dashboard/play_go.md deleted file mode 100644 index 5bb8332..0000000 --- a/collections/documentation/dashboard/play_go.md +++ /dev/null @@ -1,5 +0,0 @@ -- Choose one of the networks: - - https://dashboard.dev.grid.tf for Devnet. - - https://dashboard.qa.grid.tf for QAnet. - - https://dashboard.test.grid.tf for Testnet. - - https://dashboard.grid.tf for Mainnet. \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/add_domain.md b/collections/documentation/dashboard/solutions/add_domain.md deleted file mode 100644 index 6883c97..0000000 --- a/collections/documentation/dashboard/solutions/add_domain.md +++ /dev/null @@ -1,107 +0,0 @@ -

Add a Domain to a VM

- -

Table of Contents

- -- [Introduction](#introduction) -- [Preparation](#preparation) -- [Add New Domain](#add-new-domain) -- [Domains List](#domains-list) -- [Delete a Domain](#delete-a-domain) -- [Questions and Feedback](#questions-and-feedback) - -*** - -## Introduction - -We cover the overall process to add a domain to a virtual machine running on the ThreeFold Grid. - -## Preparation - -- Deploy a virtual machine -- Click on the button **Manage Domains** under **Actions** - -![](../img/add_domain_6.png) - -- Open the **Add New Domain** tab - -![](../img/add_domain_10.png) - -## Add New Domain - -We cover the different domain parameters presented in the **Add New Domain** tab. - -- **Subdomain** - - The subdomain is used to form the complete domain name. It is randomly generated, but the user can write a specific subdomain name. - - The subdomain prefix (e.g. **fvm3748domainguide**) is decided as follows: - - Solution name (e.g. **fvm**) - - Twin ID (e.g. **3748**) - - Deployment name (e.g. **domainguide**) - - The complete subdomain is thus composed of the subdomain prefix mentioned above and the subdomain entered in the **Subdomain** field. -- **Custom domain name** - - You can also use a custom domain. - - In this case, instead of having a gateway subdomain and a gateway name as your domain, the domain will be the custom domain entered in this field. - - If you select **Custom domain**, make sure to set a DNS A record pointing to the gateway IP address on your domain name registrar. - -![Custom Domain Name](../img/add_domain_8.png) - -- **Select domain** - - Choose a gateway for your domain. - -- **Port** - - Choose the port that exposes your application instance on the virtual machine; the domain will point to this port. - - By default, it is set to **80**. - -- **TLS Passthrough** - - Disabling TLS passthrough will let the gateway terminate the traffic. - - Enabling TLS passthrough will let the backend service terminate the traffic. 
- -- **Network Name** - - This is the name of the WireGuard interface network (read-only field). - -- **IP Address** - - This is the WireGuard IP address (read-only field). - -Once you've filled in the domain parameters, click on the **Add** button. The message **Successfully deployed gateway** will be presented once the domain is properly added. - -![Success Domain](../img/add_new_domain_success.png) - -## Domains List - -Once your domain is set, you can access the **Domains List** tab to consult its parameters. To visit the domain, simply click on the **Visit** button under **Actions**. - -![List Domain For VM](../img/add_domain_9.png) - -* **Name** - * The name is the subdomain (without the prefix) -* **Contract ID** - * Contract ID of the domain -* **Domain** - * Without a custom domain (default) - * The complete domain name (e.g. `fvm3748domainguidextebgpt.gent01.dev.grid.tf`) is composed of the subdomain prefix, the subdomain and the gateway domain. - - The subdomain prefix (e.g. `fvm3748domainguide`), as mentioned above. - - The subdomain (e.g. `xtebgpt`), chosen during the domain creation. - - The gateway domain (e.g. `gent01.dev.grid.tf`), based on the chosen gateway. - - With a custom domain - - The domain will be your custom domain (e.g. `threefold.pro`). -* **TLS Passthrough** - * The TLS passthrough status can be either **Yes** or **No**. -* **Backend** - * The WireGuard IP and the chosen port of the domain (e.g. `http://10.20.4.2:80`). -* **Status** - * **OK** is displayed when the domain is properly set. -* **Actions** - * Use the **Visit** button to open the domain URL. - -At any time, you can click on **Reload** to reload the Domains List parameters. - -## Delete a Domain - -To delete a domain, open the **Manage Domains** window, and in the **Domains List** tab select the domain you wish to delete and click **Delete**. 
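As a quick illustration, the domain name composition described in the Domains List above can be sketched in a short shell snippet. The values below are the hypothetical example values used in this guide:

```shell
# Hypothetical example values from this guide
SOLUTION="fvm"                 # solution name
TWIN_ID="3748"                 # your twin ID
DEPLOYMENT="domainguide"       # deployment name
SUBDOMAIN="xtebgpt"            # value entered in the Subdomain field
GATEWAY="gent01.dev.grid.tf"   # domain of the chosen gateway

# prefix = solution name + twin ID + deployment name
PREFIX="${SOLUTION}${TWIN_ID}${DEPLOYMENT}"

# full domain = prefix + subdomain + "." + gateway domain
echo "${PREFIX}${SUBDOMAIN}.${GATEWAY}"
# prints fvm3748domainguidextebgpt.gent01.dev.grid.tf
```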
- -![Select To Delete Domain](../img/add_domain_11.png) - -Once you click the **Delete** button, the deletion will start and the domain will be deleted from this virtual machine. - -## Questions and Feedback - -If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/algorand.md b/collections/documentation/dashboard/solutions/algorand.md deleted file mode 100644 index 21c25c7..0000000 --- a/collections/documentation/dashboard/solutions/algorand.md +++ /dev/null @@ -1,97 +0,0 @@ -

Algorand

- -

Table of Contents

- -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) - - [Algorand Structure](#algorand-structure) -- [Run Default Node](#run-default-node) -- [Run Relay Node](#run-relay-node) -- [Run Participant Node](#run-participant-node) -- [Run Indexer Node](#run-indexer-node) -- [Select Capacity](#select-capacity) - -*** - -## Introduction - -[Algorand](https://www.algorand.com/) builds technology that accelerates the convergence between decentralized and traditional finance by enabling the simple creation of next-generation financial products, protocols, and exchange of value. - -## Prerequisites - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Applications** -- Click on **Algorand** - -### Algorand Structure - -- Algorand has two main [types](https://developer.algorand.org/docs/run-a-node/setup/types/#:~:text=The%20Algorand%20network%20is%20comprised,%2C%20and%20non%2Drelay%20nodes.) of nodes (Relay or Participant), and there are also four networks you can run your node against. Combining the types you can get: - - Default: - A non-relay, non-participant node. - It can run on any network (Devnet, Testnet, Betanet, Mainnet). - - Relay: - A relay node can't be a participant. - It can run only on Testnet and Mainnet. - - Participant: - Can run on any of the four networks. - - Indexer: - A default node with Archival Mode enabled, which allows you to query the data of the blockchain. - -## Run Default Node - -The basic type. Select any network you want, and for the node type select **Default**. -![defaultdep](./img/solutions_algorand.png) - -After the deployment is done, `ssh` into the node and run `goal node status`. -![defaulttest](./img/algorand_defaulttest.png) -Here you can see your node running against mainnet. - -## Run Relay Node - -Relay nodes are where other nodes connect. 
Therefore, a relay node must be able to support a large number of connections and handle the processing load associated with all the data flowing to and from these connections. Thus, relay nodes require significantly more power than non-relay nodes. Relay nodes are always configured in archival mode. - -The relay node must be publicly accessible, so it must have a public IP. -![relaydep](./img/algorand_relaydep.png) - -After the deployment is done, `ssh` into the node and run `goal node status` to see the status of the node. You can also check that the right port is listening (:4161 for testnet, :4160 for mainnet). -![relaytest](./img/algorand_relaytest.png) - -The next step, according to the [docs](https://developer.algorand.org/docs/run-a-node/setup/types/#relay-node), is to register your `ip:port` on Algorand Public SRV. - -## Run Participant Node - -Participation means participation in the Algorand consensus protocol. An account that participates in the Algorand consensus protocol is eligible and available to be selected to propose and vote on new blocks in the Algorand blockchain. -A participation node is responsible for hosting participation keys for one or more online accounts. - -What do you need? -- Account mnemonics for the network you deploy on (offline). You can check the status of your account on AlgoExplorer by searching for your account ID. - - The account needs to have some microAlgo to sign the participation transaction. - - [Main net explorer](https://algoexplorer.io/) - - [Test net explorer](https://testnet.algoexplorer.io/) - -- First Round: the first block you need your participation node to validate from. You can choose the latest block from the explorer. - ![partexp](./img/algorand_partexp.png) -- Last Round: the final block your node can validate. Let's make it 30M. - -![partdep](./img/algorand_partdep.png) - -After the deployment is done, `ssh` into the node and run `goal node status` to see the status of the node. You will see it doing catchup. 
Fast catchup makes the node sync with the latest block faster by only fetching the last 1k blocks. After it's done, it will start creating the participation keys. -![partstatus](./img/algorand_partstatus.png) - -Now if you check the explorer, you can see the status of the account has turned to Online. -![partonl](./img/algorand_partonl.png) - -## Run Indexer Node - -The primary purpose of this Indexer is to provide a REST API interface of API calls to support searching the Algorand Blockchain. The Indexer REST APIs retrieve the blockchain data from a PostgreSQL compatible database that must be populated. This database is populated using the same indexer instance or a separate instance of the indexer which must connect to the algod process of a running Algorand node to read block data. This node must also be an Archival node to make searching the entire blockchain possible. - -![indexernode](./img/algorand_indexernode.png) - -After it finishes, you can access the indexer API at port `8980`; here are the [endpoints](https://developer.algorand.org/docs/rest-apis/indexer/) you can access. - -## Select Capacity - -By default, the capacity is computed based on the node (network/type) according to this [reference](https://howbigisalgorand.com/). -You can still change this, but only to higher values, by selecting the option `Set Custom Capacity` - \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/basic_environments_readme.md b/collections/documentation/dashboard/solutions/basic_environments_readme.md deleted file mode 100644 index d1bd532..0000000 --- a/collections/documentation/dashboard/solutions/basic_environments_readme.md +++ /dev/null @@ -1,11 +0,0 @@ -

Basic Environments

- -

Table of Contents

- -- [Virtual Machines](./vm_intro.md) - - [Micro and Full VM Differences ](./vm_differences.md) - - [Full Virtual Machine](./fullVm.md) - - [Micro Virtual Machine](./vm.md) -- [Kubernetes](./k8s.md) -- [NixOS MicroVM](./nixos_micro.md) -- [Add a Domain](./add_domain.md) \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/caprover.md b/collections/documentation/dashboard/solutions/caprover.md deleted file mode 100644 index 32034d7..0000000 --- a/collections/documentation/dashboard/solutions/caprover.md +++ /dev/null @@ -1,165 +0,0 @@ -

CapRover

- -

Table of Contents

- -- [Introduction](#introduction) -- [Requirements](#requirements) -- [Configs Tab](#configs-tab) -- [Admin and Workers Tabs](#admin-and-workers-tabs) -- [The Domain Name](#the-domain-name) - - [Domain Name Example](#domain-name-example) -- [How to Know the IP Address](#how-to-know-the-ip-address) -- [How to Access the Admin Interface](#how-to-access-the-admin-interface) -- [How to Work with CapRover](#how-to-work-with-caprover) - -*** - -## Introduction - -CapRover is an extremely easy to use app/database deployment & web server manager for your NodeJS, Python, PHP, ASP.NET, Ruby, MySQL, MongoDB, Postgres, WordPress (etc.) applications! - -It's blazingly fast and very robust as it uses Docker, nginx, LetsEncrypt and NetData under the hood behind its simple-to-use interface. - -- CLI for automation and scripting -- Web GUI for ease of access and convenience -- No lock-in! Remove CapRover and your apps keep working! -- Docker Swarm under the hood for containerization and clustering -- Nginx (fully customizable template) under the hood for load-balancing -- Let's Encrypt under the hood for free SSL (HTTPS) - -CapRover is a very cool management app for containers based on Docker Swarm. - -It has the following benefits: - -- easy to deploy apps (in seconds) -- easy to create new apps -- super good monitoring -- can be extended over the TFGrid - -## Requirements - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Orchestrators** -- Click on **CapRover** - -## Configs Tab - -![ ](./img/solutions_caprover.png) - -- Enter a domain for your CapRover instance. Be very careful about the domain name: it needs to be a wildcard domain name you can configure in your chosen domain name system. -- Enter a password for your CapRover instance. - -## Admin and Workers Tabs - -![ ](./img/solutions_caprover_leader.png) - -![ ](./img/solutions_caprover_workers.png) -Note: Worker nodes only accept SSH keys of RSA format. 
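Since worker nodes only accept RSA keys, you may need to generate a dedicated RSA key pair before deploying workers. A minimal sketch, assuming `~/.ssh/caprover_rsa` as an arbitrary example path:

```shell
# Make sure the .ssh directory exists
mkdir -p "$HOME/.ssh"

# Generate a 4096-bit RSA key pair with no passphrase
# (~/.ssh/caprover_rsa is an example path; adjust as needed)
ssh-keygen -t rsa -b 4096 -f "$HOME/.ssh/caprover_rsa" -N "" -q

# The public key (.pub) goes into the worker deployment form;
# the private key is used later when joining the node to the cluster.
cat "$HOME/.ssh/caprover_rsa.pub"
```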
- -Deployment will take a couple of minutes. - -## The Domain Name - -As per the [CapRover documentation](https://caprover.com/docs/get-started.html), you need to point a wildcard DNS entry to the VM IP address of your CapRover instance. You have to do this after having deployed the CapRover instance, otherwise you won't have access to the VM IP address. - -Let’s say your domain is **example.com** and your subdomain is **subdomain**. You can set **\*.subdomain.example.com** as an A record in your DNS settings to point to the VM IP address of the server hosting the CapRover instance, where **\*** acts as the wildcard. To do this, go to the DNS settings of your domain name registrar, and set a wildcard A record entry. - -On your domain name registrar, you can manage your DNS settings as such, with **subdomain** as an example: - -| Record | Host | Value | TTL | - | ------ | ------------- | ------------- | --------- | - | A | @ | VM IP address | Automatic | - | A | subdomain | VM IP address | Automatic | - | A | \*.subdomain | VM IP address | Automatic | - -We note here that **@** is the root domain (@ takes the value of your domain name, e.g. **example** in **example.com**), **subdomain** is the name of your subdomain (it can be anything you want), and **\*.subdomain** is the wildcard for **subdomain**. If you don't want to use a subdomain, but only the domain, you could use a wildcard linked to the domain instead of the subdomain (e.g. put **\*** instead of **\*.subdomain** in the column **Host**). - -Once you've pointed a wildcard DNS entry to your CapRover IP address and the DNS has properly propagated, you can click the **Admin Panel** button to access CapRover. This will lead you to the following URL (with **subdomain.example.com** as an example): - -> captain.subdomain.example.com - -Note that, to confirm the DNS propagation, you can use a [DNS lookup tool](https://mxtoolbox.com/DNSLookup.aspx). 
As an example, you can use the URL **captain.subdomain.example.com** to check if the IP address resolves to the VM IP address. - -### Domain Name Example - -In the following example, we pick ```apps.openly.life``` which is a domain name that will point to the IP address of the CapRover instance (which we only know after deployment). - -![ ](./img/domain_name_caprover_config.png) - -> Note how the *.apps.openly.life points to the public IPv4 address that has been returned from the deployment. - -## How to Know the IP Address - -Go back to your CapRover weblet and go to the deployment list. Click on `Show Details`. - -![ ](./img/solution_caprover_list.png) - -- The public IPv4 address is visible in here -- Now you can configure the domain name (see above, don't forget to point the wildcard domain to the public IP address) - -Click on details if you want to see more details - -```json - -{ - "version": 0, - "name": "caprover_leader_cr_156e44f0", - "created": 1637843368, - "status": "ok", - "message": "", - "flist": "https://hub.grid.tf/samehabouelsaad.3bot/tf-caprover-main-a4f186da8d.flist", - "publicIP": { - "ip": "185.206.122.136/24", - "gateway": "185.206.122.1" - }, - "planetary": false, - "yggIP": "", - "interfaces": [ - { - "network": "caprover_network_cr_156e44f0", - "ip": "10.200.4.2" - } - ], - "capacity": { - "cpu": 4, - "memory": 8192 - }, - "mounts": [ - { - "name": "data0", - "mountPoint": "/var/lib/docker", - "size": 107374182400, - "state": "ok", - "message": "" - } - ], - "env": { - "SWM_NODE_MODE": "leader", - "CAPROVER_ROOT_DOMAIN": "apps.openly.life", - "PUBLIC_KEY": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/9RNGKRjHvViunSOXhBF7EumrWvmqAAVJSrfGdLaVasgaYK6tkTRDzpZNplh3Tk1aowneXnZffygzIIZ82FWQYBo04IBWwFDOsCawjVbuAfcd9ZslYEYB3QnxV6ogQ4rvXnJ7IHgm3E3SZvt2l45WIyFn6ZKuFifK1aXhZkxHIPf31q68R2idJ764EsfqXfaf3q8H3u4G0NjfWmdPm9nwf/RJDZO+KYFLQ9wXeqRn6u/mRx+u7UD+Uo0xgjRQk1m8V+KuLAmqAosFdlAq0pBO8lEBpSebYdvRWxpM0QSdNrYQcMLVRX7IehizyTt+5sYYbp6f11WWcxLx0QDsUZ/J" - }, - 
"entrypoint": "/sbin/zinit init", - "metadata": "", - "description": "caprover leader machine/node" -} -``` - -## How to Access the Admin Interface - -Make sure that you've pointed a wildcard DNS entry to your CapRover IP address (e.g. **185.206.122.136** in our example), as explained [here](#the-domain-name). - -* To access the CapRover admin interface, you can click the **Admin Panel** button or you can use the following admin URL template: **https://captain.subdomain.example.com**. - * Note the prefix **captain** and the usage of our wildcard domain. - -* The admin password is generated and visible behind the `Show Details` button of your CapRover deployment. - -![ ](./img/caprover_login.png) - -* You should now see the following screen: - -![ ](./img/captain_login+weblet_caprover_.png) - -## How to Work with CapRover - -* [CapRover Admin Tutorial](./caprover_admin.md) -* [CapRover Worker Tutorial](./caprover_worker.md) diff --git a/collections/documentation/dashboard/solutions/caprover_admin.md b/collections/documentation/dashboard/solutions/caprover_admin.md deleted file mode 100644 index 47dd8fd..0000000 --- a/collections/documentation/dashboard/solutions/caprover_admin.md +++ /dev/null @@ -1,59 +0,0 @@ -

CapRover Admin

- -

Table of Contents

- -- [Introduction](#introduction) -- [Step 1: Enable HTTPS](#step-1-enable-https) -- [Step 2: Add a Default Docker Registry](#step-2-add-a-default-docker-registry) -- [Step 3: Deploy an App](#step-3-deploy-an-app) -- [Step 4: Enable Monitoring](#step-4-enable-monitoring) -- [Step 5: Change Your Password](#step-5-change-your-password) - -*** - -## Introduction - -We present the steps to manage a CapRover Admin node. - -## Step 1: Enable HTTPS - -![ ](./img/enable_https_caprover.png) - -You need to specify your email address. - -You will have to log in again. - -![ ](./img/caprover_https_activated.png) - -> Now force https. - -You will have to log in again, and you should notice https is now used. - -## Step 2: Add a Default Docker Registry - -You'll have to add a default Docker registry that other CapRover nodes in the cluster can download images from. It can be self-hosted (managed by CapRover itself). To add it, go to `Cluster` -> `Docker Registry Configuration`. - -![ ](./img/caprover_docker_registry.png) - -You can check the [official documentation](https://caprover.com/docs/app-scaling-and-cluster.html#setup-docker-registry) to learn more about Docker registry options. - -## Step 3: Deploy an App - -![ ](./img/deploy_app_caprover1.png) - -Just go to **Apps** and follow the instructions; there is much more info on the CapRover website. - -## Step 4: Enable Monitoring - -![ ](./img/caprover_monitoring_start_.png) - -You should now see - -![ ](./img/caprover_monitoring_2_.png) - -## Step 5: Change Your Password - -- Go to `Settings` and change your password. This is important for your own security. 
- - -> Further information regarding the process of attaching a new node to the cluster can be found through the following documentation link: [Attach a New Node to the Cluster](./caprover_worker.md/#step-2-attach-a-new-node-to-the-cluster) \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/caprover_worker.md b/collections/documentation/dashboard/solutions/caprover_worker.md deleted file mode 100644 index 5269e23..0000000 --- a/collections/documentation/dashboard/solutions/caprover_worker.md +++ /dev/null @@ -1,42 +0,0 @@ -

CapRover Worker

- -

Table of Contents

- -- [Introduction](#introduction) -- [Step 1: Add a Default Docker Registry](#step-1-add-a-default-docker-registry) -- [Step 2: Attach a New Node to the Cluster](#step-2-attach-a-new-node-to-the-cluster) - -*** - -## Introduction - -We present the steps to manage a CapRover Worker node. - -## Step 1: Add a Default Docker Registry - -You'll have to add a default Docker registry that other CapRover nodes in the cluster can download images from. It can be self-hosted (managed by CapRover itself). To add it, go to `Cluster` -> `Docker Registry Configuration`. - -![ ](./img/caprover_docker_registry.png) - -- Click the `Add Self-Hosted Registry` button, then click `Enable Self-Hosted Registry` - -![ ](./img/caprover_docker_default_registry.png) - -You can check the [official documentation](https://caprover.com/docs/app-scaling-and-cluster.html#setup-docker-registry) to learn more about Docker registry options. - - - -## Step 2: Attach a New Node to the Cluster - -![ ](./img/caprover_add_worker.png) - -- Add the public IPv4 address that has been returned from the worker deployment in the `New node IP Address` field. - -- Add your `SSH private key` (you can use this command `cat ~/.ssh/id_rsa` to get your private key). -- Click the `Join cluster` button. - -You should see the newly added node under **Current Cluster Nodes** -![ ](./img/caprover_node_added.png) - -If you face any problems, you can use the `Alternative method`. - -You can also check the troubleshooting instructions on [CapRover Troubleshooting](https://caprover.com/docs/troubleshooting.html#second) \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/casper.md b/collections/documentation/dashboard/solutions/casper.md deleted file mode 100644 index 5071c09..0000000 --- a/collections/documentation/dashboard/solutions/casper.md +++ /dev/null @@ -1,51 +0,0 @@ -

CasperLabs

- -

Table of Contents

- -- [Introduction](#introduction) -- [Deployment](#deployment) - -*** - -## Introduction - -[Casper Network](https://casperlabs.io/) is a blockchain protocol built from the ground up to remain true to core Web3 principles and adapt to the needs of our evolving world. - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Applications** -- Click on **Casperlabs** - -## Deployment - -__Process__: - -![ ](./img/solutions_casperlabs.png) - -- Enter an Application Name. It's used in generating a unique subdomain on one of the gateways on the network alongside your twin ID. Ex. ***cl98casp*.gent02.dev.grid.tf** - -- Select a capacity package: - - **Small**: {cpu: 2, memory: 4, diskSize: 100 } - - **Medium**: {cpu: 4, memory: 16, diskSize: 500 } - - **Large**: {cpu: 8, memory: 32, diskSize: 100 } - - Or choose a **Custom** plan -- Choose the network - - `Public IPv4` flag gives the virtual machine a Public IPv4 - -- `Dedicated` flag to retrieve only dedicated nodes -- `Certified` flag to retrieve only certified nodes -- Choose the location of the node - - `Region` - - `Country` - - `Farm Name` -- Choose the node to deploy on -> Or you can select a specific node with manual selection. -- `Custom Domain` flag lets the user use a custom domain -- Choose a gateway node to deploy your CasperLabs instance on. - -Once that is done, you can see a list of all your deployed instances. - -![ ](./img/casper4.png) - -Click on ***Visit*** to go to the homepage of your CasperLabs instance! The node takes a long time for the RPC service to become ready, so be patient! - -![ ](./img/casper5.png) \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/discourse.md b/collections/documentation/dashboard/solutions/discourse.md deleted file mode 100644 index 9bdff3c..0000000 --- a/collections/documentation/dashboard/solutions/discourse.md +++ /dev/null @@ -1,53 +0,0 @@ -

Discourse

- -

Table of Contents

- -- [Introduction](#introduction) -- [Deployment](#deployment) - -*** - -## Introduction - -[Discourse](https://www.discourse.org/) is the 100% open source discussion platform built for the next decade of the Internet. Use it as a mailing list, discussion forum, long-form chat room, and more! - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Applications** -- Click on **Discourse** - -## Deployment - -![ ](./img/solutions_discourse.png) - -- Enter an Application Name. It's used in generating a unique subdomain on one of the gateways on the network alongside your twin ID. Ex. ***dc98newdisc*.gent02.dev.grid.tf** - -- Enter administrator information including **Email**. This admin will have full permission on the deployed instance. -- Select a capacity package: - - **Small**: {cpu: 1, memory: 2, diskSize: 15 } - - **Medium**: {cpu: 2, memory: 4, diskSize: 50 } - - **Large**: {cpu: 4, memory: 16, diskSize: 100 } - - Or choose a **Custom** plan - -- `Dedicated` flag to retrieve only dedicated nodes -- `Certified` flag to retrieve only certified nodes -- Choose the location of the node - - `Region` - - `Country` - - `Farm Name` - -- Choose the node to deploy on -> Or you can select a specific node with manual selection. -- `Custom Domain` flag lets the user use a custom domain -- Choose a gateway node to deploy your Discourse instance on. - -Unlike other solutions, Discourse requires that you have an SMTP server, so make sure you fill in the fields in the **Mail Server** tab in order to deploy your instance successfully. - -![ ](./img/discourse4.png) - -Once that is done, you can see a list of all your deployed instances. - -![ ](./img/discourse5.png) - -Click on ***Visit*** to go to the homepage of your Discourse instance! 
- -![ ](./img/discourse6.png) diff --git a/collections/documentation/dashboard/solutions/funkwhale.md b/collections/documentation/dashboard/solutions/funkwhale.md deleted file mode 100644 index 8940728..0000000 --- a/collections/documentation/dashboard/solutions/funkwhale.md +++ /dev/null @@ -1,59 +0,0 @@ -

Funkwhale

- -

Table of Contents

- -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Deployment](#deployment) - -*** - -## Introduction - -[Funkwhale](https://funkwhale.audio/) is a social platform to enjoy and share music. -Funkwhale is a community-driven project that lets you listen to and share music and audio within a decentralized, open network. - -## Prerequisites - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Applications** -- Click on **Funkwhale** - -## Deployment - -__Process__: - -![ ](./img/solutions_funkwhale.png) - -- Enter an Application Name. It's used in generating a unique subdomain on one of the gateways on the network alongside your twin ID. Ex. ***fw100myfunk*.gent02.dev.grid.tf** - -- Enter administrator information including **Username**, **Email** and **Password**. This admin user will have full permission on the deployed instance. - -- Select a capacity package: - - **Small**: {cpu: 1, memory: 2, diskSize: 50 } - - **Medium**: {cpu: 2, memory: 4, diskSize: 100 } - - **Large**: {cpu: 4, memory: 16, diskSize: 250 } - - Or choose a **Custom** plan -- Choose the network - - `Public IPv4` flag gives the virtual machine a Public IPv4 - -- `Dedicated` flag to retrieve only dedicated nodes -- `Certified` flag to retrieve only certified nodes -- Choose the location of the node - - `Region` - - `Country` - - `Farm Name` - -- Choose the node to deploy on -> Or you can select a specific node with manual selection. -- `Custom Domain` flag lets the user use a custom domain -- Choose a gateway node to deploy your Funkwhale instance on. - - -Once that is done, you can see a list of all your deployed instances. - -![ ](./img/funkwhale2.png) - -Click on ***Visit*** to go to the homepage of your Funkwhale instance! 
-
-![ ](./img/funkwhale3.png)
\ No newline at end of file
diff --git a/collections/documentation/dashboard/solutions/img/algorand_defaulttest.png b/collections/documentation/dashboard/solutions/img/algorand_defaulttest.png deleted file mode 100644 index 8fbad1c..0000000 Binary files a/collections/documentation/dashboard/solutions/img/algorand_defaulttest.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/algorand_indexernode.png b/collections/documentation/dashboard/solutions/img/algorand_indexernode.png deleted file mode 100644 index 54540ab..0000000 Binary files a/collections/documentation/dashboard/solutions/img/algorand_indexernode.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/algorand_partdep.png b/collections/documentation/dashboard/solutions/img/algorand_partdep.png deleted file mode 100644 index bde2f8c..0000000 Binary files a/collections/documentation/dashboard/solutions/img/algorand_partdep.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/algorand_partexp.png b/collections/documentation/dashboard/solutions/img/algorand_partexp.png deleted file mode 100644 index 1c452fb..0000000 Binary files a/collections/documentation/dashboard/solutions/img/algorand_partexp.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/algorand_partonl.png b/collections/documentation/dashboard/solutions/img/algorand_partonl.png deleted file mode 100644 index 984ebd5..0000000 Binary files a/collections/documentation/dashboard/solutions/img/algorand_partonl.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/algorand_partstatus.png b/collections/documentation/dashboard/solutions/img/algorand_partstatus.png deleted file mode 100644 index f8a9635..0000000 Binary files a/collections/documentation/dashboard/solutions/img/algorand_partstatus.png and /dev/null differ diff --git 
a/collections/documentation/dashboard/solutions/img/algorand_relaydep.png b/collections/documentation/dashboard/solutions/img/algorand_relaydep.png deleted file mode 100644 index eb04aca..0000000 Binary files a/collections/documentation/dashboard/solutions/img/algorand_relaydep.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/algorand_relaytest.png b/collections/documentation/dashboard/solutions/img/algorand_relaytest.png deleted file mode 100644 index 870b06e..0000000 Binary files a/collections/documentation/dashboard/solutions/img/algorand_relaytest.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_1.png b/collections/documentation/dashboard/solutions/img/caprover_1.png deleted file mode 100644 index b6b3f39..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_1.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_add_node2.png b/collections/documentation/dashboard/solutions/img/caprover_add_node2.png deleted file mode 100644 index e3ae2a0..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_add_node2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_add_worker.png b/collections/documentation/dashboard/solutions/img/caprover_add_worker.png deleted file mode 100644 index ee0e83a..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_add_worker.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_cluster.png b/collections/documentation/dashboard/solutions/img/caprover_cluster.png deleted file mode 100644 index 0ede9c8..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_cluster.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_deploy_leader.png 
b/collections/documentation/dashboard/solutions/img/caprover_deploy_leader.png deleted file mode 100644 index 0d757e4..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_deploy_leader.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_deploy_worker.png b/collections/documentation/dashboard/solutions/img/caprover_deploy_worker.png deleted file mode 100644 index e4f80ce..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_deploy_worker.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_deploying.png b/collections/documentation/dashboard/solutions/img/caprover_deploying.png deleted file mode 100644 index 307569c..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_deploying.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_detail_weblet.png b/collections/documentation/dashboard/solutions/img/caprover_detail_weblet.png deleted file mode 100644 index 4f619ff..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_detail_weblet.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_docker_default_registry.png b/collections/documentation/dashboard/solutions/img/caprover_docker_default_registry.png deleted file mode 100644 index 0998b1c..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_docker_default_registry.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_docker_registry.png b/collections/documentation/dashboard/solutions/img/caprover_docker_registry.png deleted file mode 100644 index 06abbeb..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_docker_registry.png and /dev/null differ diff --git 
a/collections/documentation/dashboard/solutions/img/caprover_https_activated.png b/collections/documentation/dashboard/solutions/img/caprover_https_activated.png deleted file mode 100644 index 2cbadd6..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_https_activated.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_login.png b/collections/documentation/dashboard/solutions/img/caprover_login.png deleted file mode 100644 index 3d27d7e..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_login.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_monitoring_2_.png b/collections/documentation/dashboard/solutions/img/caprover_monitoring_2_.png deleted file mode 100644 index c92bf01..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_monitoring_2_.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_monitoring_start_.png b/collections/documentation/dashboard/solutions/img/caprover_monitoring_start_.png deleted file mode 100644 index 2a5fa78..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_monitoring_start_.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_node_added.png b/collections/documentation/dashboard/solutions/img/caprover_node_added.png deleted file mode 100644 index 93e9a35..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_node_added.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/caprover_select_node.png b/collections/documentation/dashboard/solutions/img/caprover_select_node.png deleted file mode 100644 index ce155b1..0000000 Binary files a/collections/documentation/dashboard/solutions/img/caprover_select_node.png and /dev/null differ diff --git 
a/collections/documentation/dashboard/solutions/img/captain_loginweblet_caprover_.png b/collections/documentation/dashboard/solutions/img/captain_loginweblet_caprover_.png deleted file mode 100644 index c4f14d9..0000000 Binary files a/collections/documentation/dashboard/solutions/img/captain_loginweblet_caprover_.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/casper2.png b/collections/documentation/dashboard/solutions/img/casper2.png deleted file mode 100644 index 4948b79..0000000 Binary files a/collections/documentation/dashboard/solutions/img/casper2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/casper3.png b/collections/documentation/dashboard/solutions/img/casper3.png deleted file mode 100644 index a9123f0..0000000 Binary files a/collections/documentation/dashboard/solutions/img/casper3.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/casper4.png b/collections/documentation/dashboard/solutions/img/casper4.png deleted file mode 100644 index 17bd144..0000000 Binary files a/collections/documentation/dashboard/solutions/img/casper4.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/casper5.png b/collections/documentation/dashboard/solutions/img/casper5.png deleted file mode 100644 index 5d3f1e7..0000000 Binary files a/collections/documentation/dashboard/solutions/img/casper5.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/cluster_add_nodes.png b/collections/documentation/dashboard/solutions/img/cluster_add_nodes.png deleted file mode 100644 index c338f54..0000000 Binary files a/collections/documentation/dashboard/solutions/img/cluster_add_nodes.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/contracts_list.png b/collections/documentation/dashboard/solutions/img/contracts_list.png deleted file mode 100644 index 2990bde..0000000 
Binary files a/collections/documentation/dashboard/solutions/img/contracts_list.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/deleted_contract_info.png b/collections/documentation/dashboard/solutions/img/deleted_contract_info.png deleted file mode 100644 index a3c0357..0000000 Binary files a/collections/documentation/dashboard/solutions/img/deleted_contract_info.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/deleted_contract_info_copy.png b/collections/documentation/dashboard/solutions/img/deleted_contract_info_copy.png deleted file mode 100644 index a3c0357..0000000 Binary files a/collections/documentation/dashboard/solutions/img/deleted_contract_info_copy.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/deplist1.png b/collections/documentation/dashboard/solutions/img/deplist1.png deleted file mode 100644 index e73d1fd..0000000 Binary files a/collections/documentation/dashboard/solutions/img/deplist1.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/deploy_app_caprover1.png b/collections/documentation/dashboard/solutions/img/deploy_app_caprover1.png deleted file mode 100644 index 7a8fe1e..0000000 Binary files a/collections/documentation/dashboard/solutions/img/deploy_app_caprover1.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/discourse4.png b/collections/documentation/dashboard/solutions/img/discourse4.png deleted file mode 100644 index eaeb779..0000000 Binary files a/collections/documentation/dashboard/solutions/img/discourse4.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/discourse5.png b/collections/documentation/dashboard/solutions/img/discourse5.png deleted file mode 100644 index 01dca67..0000000 Binary files a/collections/documentation/dashboard/solutions/img/discourse5.png and /dev/null differ diff --git 
a/collections/documentation/dashboard/solutions/img/discourse6.png b/collections/documentation/dashboard/solutions/img/discourse6.png deleted file mode 100644 index f1566c8..0000000 Binary files a/collections/documentation/dashboard/solutions/img/discourse6.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/domain_name_caprover_config.png b/collections/documentation/dashboard/solutions/img/domain_name_caprover_config.png deleted file mode 100644 index dca23c3..0000000 Binary files a/collections/documentation/dashboard/solutions/img/domain_name_caprover_config.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/enable_https_caprover.png b/collections/documentation/dashboard/solutions/img/enable_https_caprover.png deleted file mode 100644 index 4149288..0000000 Binary files a/collections/documentation/dashboard/solutions/img/enable_https_caprover.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/fullvm1.png b/collections/documentation/dashboard/solutions/img/fullvm1.png deleted file mode 100644 index ce820d2..0000000 Binary files a/collections/documentation/dashboard/solutions/img/fullvm1.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/fullvm2.png b/collections/documentation/dashboard/solutions/img/fullvm2.png deleted file mode 100644 index 811bdfc..0000000 Binary files a/collections/documentation/dashboard/solutions/img/fullvm2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/fullvm4.png b/collections/documentation/dashboard/solutions/img/fullvm4.png deleted file mode 100644 index 5186f74..0000000 Binary files a/collections/documentation/dashboard/solutions/img/fullvm4.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/fullvm5.png b/collections/documentation/dashboard/solutions/img/fullvm5.png deleted file mode 100644 index 376d3d9..0000000 
Binary files a/collections/documentation/dashboard/solutions/img/fullvm5.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/fullvm6.png b/collections/documentation/dashboard/solutions/img/fullvm6.png deleted file mode 100644 index 30348fc..0000000 Binary files a/collections/documentation/dashboard/solutions/img/fullvm6.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/fullvm7.jpg b/collections/documentation/dashboard/solutions/img/fullvm7.jpg deleted file mode 100644 index f5fd320..0000000 Binary files a/collections/documentation/dashboard/solutions/img/fullvm7.jpg and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/funkwhale2.png b/collections/documentation/dashboard/solutions/img/funkwhale2.png deleted file mode 100644 index de4fab4..0000000 Binary files a/collections/documentation/dashboard/solutions/img/funkwhale2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/funkwhale3.png b/collections/documentation/dashboard/solutions/img/funkwhale3.png deleted file mode 100644 index 7a5f00c..0000000 Binary files a/collections/documentation/dashboard/solutions/img/funkwhale3.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/k8s_dl_1.png b/collections/documentation/dashboard/solutions/img/k8s_dl_1.png deleted file mode 100644 index c8b5670..0000000 Binary files a/collections/documentation/dashboard/solutions/img/k8s_dl_1.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/k8s_dl_2.png b/collections/documentation/dashboard/solutions/img/k8s_dl_2.png deleted file mode 100644 index a5874f7..0000000 Binary files a/collections/documentation/dashboard/solutions/img/k8s_dl_2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/k8s_dl_4.png b/collections/documentation/dashboard/solutions/img/k8s_dl_4.png deleted file mode 100644 
index be4361c..0000000 Binary files a/collections/documentation/dashboard/solutions/img/k8s_dl_4.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/mastodon1.jpg b/collections/documentation/dashboard/solutions/img/mastodon1.jpg deleted file mode 100644 index e4bc3d6..0000000 Binary files a/collections/documentation/dashboard/solutions/img/mastodon1.jpg and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/mastodon2.jpg b/collections/documentation/dashboard/solutions/img/mastodon2.jpg deleted file mode 100644 index ae0748b..0000000 Binary files a/collections/documentation/dashboard/solutions/img/mastodon2.jpg and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/mastodon3.jpg b/collections/documentation/dashboard/solutions/img/mastodon3.jpg deleted file mode 100644 index 591570b..0000000 Binary files a/collections/documentation/dashboard/solutions/img/mastodon3.jpg and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/mastodon4.jpg b/collections/documentation/dashboard/solutions/img/mastodon4.jpg deleted file mode 100644 index f9afe85..0000000 Binary files a/collections/documentation/dashboard/solutions/img/mastodon4.jpg and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/mastodon5.jpg b/collections/documentation/dashboard/solutions/img/mastodon5.jpg deleted file mode 100644 index eb01556..0000000 Binary files a/collections/documentation/dashboard/solutions/img/mastodon5.jpg and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/mattermost3.png b/collections/documentation/dashboard/solutions/img/mattermost3.png deleted file mode 100644 index 2ada6dc..0000000 Binary files a/collections/documentation/dashboard/solutions/img/mattermost3.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/mattermost4.png 
b/collections/documentation/dashboard/solutions/img/mattermost4.png deleted file mode 100644 index fd6ee19..0000000 Binary files a/collections/documentation/dashboard/solutions/img/mattermost4.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/mattermost5.png b/collections/documentation/dashboard/solutions/img/mattermost5.png deleted file mode 100644 index a45f632..0000000 Binary files a/collections/documentation/dashboard/solutions/img/mattermost5.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/new_cap2.png b/collections/documentation/dashboard/solutions/img/new_cap2.png deleted file mode 100644 index 689da23..0000000 Binary files a/collections/documentation/dashboard/solutions/img/new_cap2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/new_cap3.png b/collections/documentation/dashboard/solutions/img/new_cap3.png deleted file mode 100644 index 31c5110..0000000 Binary files a/collections/documentation/dashboard/solutions/img/new_cap3.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/new_funk2.png b/collections/documentation/dashboard/solutions/img/new_funk2.png deleted file mode 100644 index eb87d9f..0000000 Binary files a/collections/documentation/dashboard/solutions/img/new_funk2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/new_funk3.png b/collections/documentation/dashboard/solutions/img/new_funk3.png deleted file mode 100644 index 4e97d6d..0000000 Binary files a/collections/documentation/dashboard/solutions/img/new_funk3.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/new_k8s4.png b/collections/documentation/dashboard/solutions/img/new_k8s4.png deleted file mode 100644 index 9dded9c..0000000 Binary files a/collections/documentation/dashboard/solutions/img/new_k8s4.png and /dev/null differ diff --git 
a/collections/documentation/dashboard/solutions/img/new_k8s5.png b/collections/documentation/dashboard/solutions/img/new_k8s5.png deleted file mode 100644 index e6aa91a..0000000 Binary files a/collections/documentation/dashboard/solutions/img/new_k8s5.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/new_peer2.png b/collections/documentation/dashboard/solutions/img/new_peer2.png deleted file mode 100644 index 604fe10..0000000 Binary files a/collections/documentation/dashboard/solutions/img/new_peer2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/new_peer3.png b/collections/documentation/dashboard/solutions/img/new_peer3.png deleted file mode 100644 index 43987a4..0000000 Binary files a/collections/documentation/dashboard/solutions/img/new_peer3.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/new_vm2.png b/collections/documentation/dashboard/solutions/img/new_vm2.png deleted file mode 100644 index c5ebf1d..0000000 Binary files a/collections/documentation/dashboard/solutions/img/new_vm2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/new_vm3.png b/collections/documentation/dashboard/solutions/img/new_vm3.png deleted file mode 100644 index 1981cb6..0000000 Binary files a/collections/documentation/dashboard/solutions/img/new_vm3.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/new_vm5.png b/collections/documentation/dashboard/solutions/img/new_vm5.png deleted file mode 100644 index 723a44e..0000000 Binary files a/collections/documentation/dashboard/solutions/img/new_vm5.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/new_vm6.png b/collections/documentation/dashboard/solutions/img/new_vm6.png deleted file mode 100644 index d2af47d..0000000 Binary files a/collections/documentation/dashboard/solutions/img/new_vm6.png and /dev/null differ 
diff --git a/collections/documentation/dashboard/solutions/img/new_vm7.png b/collections/documentation/dashboard/solutions/img/new_vm7.png deleted file mode 100644 index da8c92b..0000000 Binary files a/collections/documentation/dashboard/solutions/img/new_vm7.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/new_vm8.png b/collections/documentation/dashboard/solutions/img/new_vm8.png deleted file mode 100644 index ee291c8..0000000 Binary files a/collections/documentation/dashboard/solutions/img/new_vm8.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/nixos_micro1.png b/collections/documentation/dashboard/solutions/img/nixos_micro1.png deleted file mode 100644 index 0114907..0000000 Binary files a/collections/documentation/dashboard/solutions/img/nixos_micro1.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/nixos_micro2.png b/collections/documentation/dashboard/solutions/img/nixos_micro2.png deleted file mode 100644 index d094ef3..0000000 Binary files a/collections/documentation/dashboard/solutions/img/nixos_micro2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/nixos_micro3.png b/collections/documentation/dashboard/solutions/img/nixos_micro3.png deleted file mode 100644 index 85247c3..0000000 Binary files a/collections/documentation/dashboard/solutions/img/nixos_micro3.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/node_selection.png b/collections/documentation/dashboard/solutions/img/node_selection.png deleted file mode 100644 index 50a30dc..0000000 Binary files a/collections/documentation/dashboard/solutions/img/node_selection.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/nodep_2.png b/collections/documentation/dashboard/solutions/img/nodep_2.png deleted file mode 100644 index f1f7ac2..0000000 Binary files 
a/collections/documentation/dashboard/solutions/img/nodep_2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/nodepilot_2.png b/collections/documentation/dashboard/solutions/img/nodepilot_2.png deleted file mode 100644 index 083551b..0000000 Binary files a/collections/documentation/dashboard/solutions/img/nodepilot_2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/nodepilot_3.png b/collections/documentation/dashboard/solutions/img/nodepilot_3.png deleted file mode 100644 index 8b4549b..0000000 Binary files a/collections/documentation/dashboard/solutions/img/nodepilot_3.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/nxios_micro1.png b/collections/documentation/dashboard/solutions/img/nxios_micro1.png deleted file mode 100644 index c2fc9d7..0000000 Binary files a/collections/documentation/dashboard/solutions/img/nxios_micro1.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/owncloud1.png b/collections/documentation/dashboard/solutions/img/owncloud1.png deleted file mode 100644 index 2a5567e..0000000 Binary files a/collections/documentation/dashboard/solutions/img/owncloud1.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/owncloud2.png b/collections/documentation/dashboard/solutions/img/owncloud2.png deleted file mode 100644 index c34fab7..0000000 Binary files a/collections/documentation/dashboard/solutions/img/owncloud2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/owncloud3.png b/collections/documentation/dashboard/solutions/img/owncloud3.png deleted file mode 100644 index 62995d3..0000000 Binary files a/collections/documentation/dashboard/solutions/img/owncloud3.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/owncloud4.png b/collections/documentation/dashboard/solutions/img/owncloud4.png 
deleted file mode 100644 index 98d16ae..0000000 Binary files a/collections/documentation/dashboard/solutions/img/owncloud4.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/owncloud5.png b/collections/documentation/dashboard/solutions/img/owncloud5.png deleted file mode 100644 index d441b86..0000000 Binary files a/collections/documentation/dashboard/solutions/img/owncloud5.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/owncloud6.png b/collections/documentation/dashboard/solutions/img/owncloud6.png deleted file mode 100644 index a994782..0000000 Binary files a/collections/documentation/dashboard/solutions/img/owncloud6.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/presearch0.png b/collections/documentation/dashboard/solutions/img/presearch0.png deleted file mode 100644 index 29dedbe..0000000 Binary files a/collections/documentation/dashboard/solutions/img/presearch0.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/presearch4.png b/collections/documentation/dashboard/solutions/img/presearch4.png deleted file mode 100644 index 2c3c8e9..0000000 Binary files a/collections/documentation/dashboard/solutions/img/presearch4.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/presearch5.png b/collections/documentation/dashboard/solutions/img/presearch5.png deleted file mode 100644 index 9c7ec67..0000000 Binary files a/collections/documentation/dashboard/solutions/img/presearch5.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/presearch6.png b/collections/documentation/dashboard/solutions/img/presearch6.png deleted file mode 100644 index 9279f30..0000000 Binary files a/collections/documentation/dashboard/solutions/img/presearch6.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/qvm1.png 
b/collections/documentation/dashboard/solutions/img/qvm1.png deleted file mode 100644 index 93277ab..0000000 Binary files a/collections/documentation/dashboard/solutions/img/qvm1.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/qvm2.png b/collections/documentation/dashboard/solutions/img/qvm2.png deleted file mode 100644 index 0777158..0000000 Binary files a/collections/documentation/dashboard/solutions/img/qvm2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/qvm_nodes.png b/collections/documentation/dashboard/solutions/img/qvm_nodes.png deleted file mode 100644 index 3670785..0000000 Binary files a/collections/documentation/dashboard/solutions/img/qvm_nodes.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/qvm_qsfs_config.png b/collections/documentation/dashboard/solutions/img/qvm_qsfs_config.png deleted file mode 100644 index 6e337e4..0000000 Binary files a/collections/documentation/dashboard/solutions/img/qvm_qsfs_config.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solution_caprover_list.png b/collections/documentation/dashboard/solutions/img/solution_caprover_list.png deleted file mode 100644 index 3f672e2..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solution_caprover_list.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_algorand.png b/collections/documentation/dashboard/solutions/img/solutions_algorand.png deleted file mode 100644 index ca7e860..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_algorand.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_caprover.png b/collections/documentation/dashboard/solutions/img/solutions_caprover.png deleted file mode 100644 index bf2f072..0000000 Binary files 
a/collections/documentation/dashboard/solutions/img/solutions_caprover.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_caprover_leader.png b/collections/documentation/dashboard/solutions/img/solutions_caprover_leader.png deleted file mode 100644 index 218931d..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_caprover_leader.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_caprover_workers.png b/collections/documentation/dashboard/solutions/img/solutions_caprover_workers.png deleted file mode 100644 index 0b12a69..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_caprover_workers.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_casperlabs.png b/collections/documentation/dashboard/solutions/img/solutions_casperlabs.png deleted file mode 100644 index f9f3271..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_casperlabs.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_discourse.png b/collections/documentation/dashboard/solutions/img/solutions_discourse.png deleted file mode 100644 index d7e0c66..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_discourse.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_fullvm.png b/collections/documentation/dashboard/solutions/img/solutions_fullvm.png deleted file mode 100644 index 32d786d..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_fullvm.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_funkwhale.png b/collections/documentation/dashboard/solutions/img/solutions_funkwhale.png deleted file mode 100644 index 6802db6..0000000 Binary files 
a/collections/documentation/dashboard/solutions/img/solutions_funkwhale.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_k8s.png b/collections/documentation/dashboard/solutions/img/solutions_k8s.png deleted file mode 100644 index 935de58..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_k8s.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_k8s_master.png b/collections/documentation/dashboard/solutions/img/solutions_k8s_master.png deleted file mode 100644 index 948a43b..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_k8s_master.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_k8s_workers.png b/collections/documentation/dashboard/solutions/img/solutions_k8s_workers.png deleted file mode 100644 index 5e6df13..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_k8s_workers.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_mattermost.png b/collections/documentation/dashboard/solutions/img/solutions_mattermost.png deleted file mode 100644 index e47bb61..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_mattermost.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_microvm.png b/collections/documentation/dashboard/solutions/img/solutions_microvm.png deleted file mode 100644 index abad7ff..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_microvm.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_nextcloud.png b/collections/documentation/dashboard/solutions/img/solutions_nextcloud.png deleted file mode 100644 index a588f97..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_nextcloud.png and 
/dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_nodepilot.png b/collections/documentation/dashboard/solutions/img/solutions_nodepilot.png deleted file mode 100644 index 65de04c..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_nodepilot.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_owncloud.png b/collections/documentation/dashboard/solutions/img/solutions_owncloud.png deleted file mode 100644 index 98f7302..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_owncloud.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_owncloud_visit.png b/collections/documentation/dashboard/solutions/img/solutions_owncloud_visit.png deleted file mode 100644 index 45c8857..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_owncloud_visit.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_peertube.png b/collections/documentation/dashboard/solutions/img/solutions_peertube.png deleted file mode 100644 index 341653d..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_peertube.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_presearch.png b/collections/documentation/dashboard/solutions/img/solutions_presearch.png deleted file mode 100644 index 258fca0..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_presearch.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_subsquid.png b/collections/documentation/dashboard/solutions/img/solutions_subsquid.png deleted file mode 100644 index 2b2f132..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_subsquid.png and /dev/null differ diff --git 
a/collections/documentation/dashboard/solutions/img/solutions_taiga.png b/collections/documentation/dashboard/solutions/img/solutions_taiga.png deleted file mode 100644 index b4f89e1..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_taiga.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_umbrel.png b/collections/documentation/dashboard/solutions/img/solutions_umbrel.png deleted file mode 100644 index 311d83c..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_umbrel.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/solutions_wordpress.png b/collections/documentation/dashboard/solutions/img/solutions_wordpress.png deleted file mode 100644 index 58e66b2..0000000 Binary files a/collections/documentation/dashboard/solutions/img/solutions_wordpress.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/subsquid_graphql.png b/collections/documentation/dashboard/solutions/img/subsquid_graphql.png deleted file mode 100644 index 82cf6d0..0000000 Binary files a/collections/documentation/dashboard/solutions/img/subsquid_graphql.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/subsquid_list.jpg b/collections/documentation/dashboard/solutions/img/subsquid_list.jpg deleted file mode 100644 index 596497f..0000000 Binary files a/collections/documentation/dashboard/solutions/img/subsquid_list.jpg and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/subsquid_list.png b/collections/documentation/dashboard/solutions/img/subsquid_list.png deleted file mode 100644 index c0a78aa..0000000 Binary files a/collections/documentation/dashboard/solutions/img/subsquid_list.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/taiga2.png b/collections/documentation/dashboard/solutions/img/taiga2.png deleted file 
mode 100644 index cdcfcd4..0000000 Binary files a/collections/documentation/dashboard/solutions/img/taiga2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/taiga3.png b/collections/documentation/dashboard/solutions/img/taiga3.png deleted file mode 100644 index 35ddd01..0000000 Binary files a/collections/documentation/dashboard/solutions/img/taiga3.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/taiga4.png b/collections/documentation/dashboard/solutions/img/taiga4.png deleted file mode 100644 index 8b9c844..0000000 Binary files a/collections/documentation/dashboard/solutions/img/taiga4.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/taiga5.png b/collections/documentation/dashboard/solutions/img/taiga5.png deleted file mode 100644 index f0bcc2b..0000000 Binary files a/collections/documentation/dashboard/solutions/img/taiga5.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/taiga6.png b/collections/documentation/dashboard/solutions/img/taiga6.png deleted file mode 100644 index 606939b..0000000 Binary files a/collections/documentation/dashboard/solutions/img/taiga6.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/umbrel2.png b/collections/documentation/dashboard/solutions/img/umbrel2.png deleted file mode 100644 index 859e141..0000000 Binary files a/collections/documentation/dashboard/solutions/img/umbrel2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/umbrel3.png b/collections/documentation/dashboard/solutions/img/umbrel3.png deleted file mode 100644 index c8dacd5..0000000 Binary files a/collections/documentation/dashboard/solutions/img/umbrel3.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/umbrel4.png b/collections/documentation/dashboard/solutions/img/umbrel4.png deleted file mode 100644 index 
1d9bdf1..0000000 Binary files a/collections/documentation/dashboard/solutions/img/umbrel4.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/umbrel5.png b/collections/documentation/dashboard/solutions/img/umbrel5.png deleted file mode 100644 index 4d7379d..0000000 Binary files a/collections/documentation/dashboard/solutions/img/umbrel5.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/vm_json.png b/collections/documentation/dashboard/solutions/img/vm_json.png deleted file mode 100644 index 1ab3737..0000000 Binary files a/collections/documentation/dashboard/solutions/img/vm_json.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/vm_list.png b/collections/documentation/dashboard/solutions/img/vm_list.png deleted file mode 100644 index 2a504b1..0000000 Binary files a/collections/documentation/dashboard/solutions/img/vm_list.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/weblet_peertube_instance.png b/collections/documentation/dashboard/solutions/img/weblet_peertube_instance.png deleted file mode 100644 index 4474a75..0000000 Binary files a/collections/documentation/dashboard/solutions/img/weblet_peertube_instance.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/weblet_peertube_listing.png b/collections/documentation/dashboard/solutions/img/weblet_peertube_listing.png deleted file mode 100644 index 93de06f..0000000 Binary files a/collections/documentation/dashboard/solutions/img/weblet_peertube_listing.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/weblet_vm4.png b/collections/documentation/dashboard/solutions/img/weblet_vm4.png deleted file mode 100644 index f088303..0000000 Binary files a/collections/documentation/dashboard/solutions/img/weblet_vm4.png and /dev/null differ diff --git 
a/collections/documentation/dashboard/solutions/img/weblet_vm5.png b/collections/documentation/dashboard/solutions/img/weblet_vm5.png deleted file mode 100644 index 0f778f5..0000000 Binary files a/collections/documentation/dashboard/solutions/img/weblet_vm5.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/weblet_vm_overview.png b/collections/documentation/dashboard/solutions/img/weblet_vm_overview.png deleted file mode 100644 index 3262552..0000000 Binary files a/collections/documentation/dashboard/solutions/img/weblet_vm_overview.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/weblet_vm_presearch_result.jpg b/collections/documentation/dashboard/solutions/img/weblet_vm_presearch_result.jpg deleted file mode 100644 index ae77142..0000000 Binary files a/collections/documentation/dashboard/solutions/img/weblet_vm_presearch_result.jpg and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/wp10.png b/collections/documentation/dashboard/solutions/img/wp10.png deleted file mode 100644 index c4f9998..0000000 Binary files a/collections/documentation/dashboard/solutions/img/wp10.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/wp11.png b/collections/documentation/dashboard/solutions/img/wp11.png deleted file mode 100644 index 35e9f58..0000000 Binary files a/collections/documentation/dashboard/solutions/img/wp11.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/wp2.png b/collections/documentation/dashboard/solutions/img/wp2.png deleted file mode 100644 index 8e3aed3..0000000 Binary files a/collections/documentation/dashboard/solutions/img/wp2.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/wp3.png b/collections/documentation/dashboard/solutions/img/wp3.png deleted file mode 100644 index e485277..0000000 Binary files 
a/collections/documentation/dashboard/solutions/img/wp3.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/wp4.png b/collections/documentation/dashboard/solutions/img/wp4.png deleted file mode 100644 index 8d534f1..0000000 Binary files a/collections/documentation/dashboard/solutions/img/wp4.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/wp5.png b/collections/documentation/dashboard/solutions/img/wp5.png deleted file mode 100644 index 141497d..0000000 Binary files a/collections/documentation/dashboard/solutions/img/wp5.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/wp6.png b/collections/documentation/dashboard/solutions/img/wp6.png deleted file mode 100644 index 0f72d48..0000000 Binary files a/collections/documentation/dashboard/solutions/img/wp6.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/wp7.png b/collections/documentation/dashboard/solutions/img/wp7.png deleted file mode 100644 index 01af952..0000000 Binary files a/collections/documentation/dashboard/solutions/img/wp7.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/wp8.png b/collections/documentation/dashboard/solutions/img/wp8.png deleted file mode 100644 index f243cd2..0000000 Binary files a/collections/documentation/dashboard/solutions/img/wp8.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/img/wp9.png b/collections/documentation/dashboard/solutions/img/wp9.png deleted file mode 100644 index 34173dd..0000000 Binary files a/collections/documentation/dashboard/solutions/img/wp9.png and /dev/null differ diff --git a/collections/documentation/dashboard/solutions/k8s.md b/collections/documentation/dashboard/solutions/k8s.md deleted file mode 100644 index 11197ec..0000000 --- a/collections/documentation/dashboard/solutions/k8s.md +++ /dev/null @@ -1,98 +0,0 @@ -

Kubernetes

- -

Table of Contents

- -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Configs tab](#configs-tab) -- [Master and Workers tabs](#master-and-workers-tabs) -- [Kubeconfig](#kubeconfig) -- [Manage Workers](#manage-workers) - -*** - -## Introduction - -Kubernetes is the standard container orchestration tool. - -On the TF grid, Kubernetes clusters can be deployed out of the box. We have implemented [K3S](https://k3s.io/), a full-blown Kubernetes offering that uses only half of the memory footprint. It is packaged as a single binary and made more lightweight to run workloads in resource-constrained locations (fits e.g. IoT, edge, ARM workloads). - -## Prerequisites - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Applications** -- Click on **Kubernetes** - -## Configs tab - -![ ](./img/solutions_k8s.png) - -- `Name`: Your Kubernetes Cluster name. -- `Cluster Token`: It's used for authentication between your worker nodes and master node. You could use the auto-generated one or type your own. - - -## Master and Workers tabs - -![ ](./img/solutions_k8s_master.png) -![ ](./img/solutions_k8s_workers.png) - -> Currently, we only support "single-master-multi-worker" k8s clusters. So you could always add more than one worker node by clicking on the **+** in the ***Worker*** tab. 
- - -## Kubeconfig -Once the cluster is ready, you can SSH into the cluster using `ssh root@IP` -> IP can be the public IP or the planetary network IP - -Once connected via SSH, you can execute commands on the cluster like `kubectl get nodes`, and to get the kubeconfig, you can find it in `/root/.kube/config` - -> If it doesn't exist in `/root/.kube/config`, it can be in `/etc/rancher/k3s/k3s.yaml` - -Example: - -``` -root@WR768dbf76:~# cat /root/.kube/config -apiVersion: v1 -clusters: -- cluster: - certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkakNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTkRBeU5qWTBNVE13SGhjTk1qRXhNakl6TVRNek16TXpXaGNOTXpFeE1qSXhNVE16TXpNegpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTkRBeU5qWTBNVE13V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFUcGNtZE1KaWg1eGFTa1JlelNKVU5mUkQ5NWV6cE12amhVeUc2bWU4bTkKY0lQWENoNUZ2ZU81Znk1d1VTSTlYOFlGV2JkOGtRcG9vaVdVbStwYjFvU3hvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVUtkL3VUU3FtWk12bHhtcWNYU3lxCmVhWERIbXd3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnSUQ0cGNQWDl2R0F6SC9lTkhCNndVdmNZRi9HbXFuQVIKR2dqT1RSdWVia1lDSUdRUmUwTGJzQXdwMWNicHlYRWljV3V0aG1RQ1dwRXY1NThWZ3BoMFpETFAKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= - server: https://127.0.0.1:6443 - name: default -contexts: -- context: - cluster: default - user: default - name: default -current-context: default -kind: Config -preferences: {} -users: -- name: default - user: - client-certificate-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJWnptV1A4ellKaGd3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOalF3TWpZMk5ERXpNQjRYRFRJeE1USXlNekV6TXpNek0xb1hEVEl5TVRJeQpNekV6TXpNek0xb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJINTZaZGM5aTJ0azAyNGQKcXBDQ2NRMndMMjc1QWtPZUFxalIzQjFTTGFQeG1oOG9IcXd4SzY2RTc1ZWQya2VySFIySnBZbWwwNE5sa0grLwpSd2kvMDNDalNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCVGhPakJSaExjeE53UDkzd0xtUzBYRUFUNjlSekFLQmdncWhrak9QUVFEQWdOSUFEQkYKQWlBcjdDcDR2dks4Y2s0Q0lROEM5em5zVkFUZVhDaHZsUmdvanZuVXU4REZld0loQUlwRVYyMWJZVXBpUEkzVQowa3QvQmJqRUtjV1poVXNHQ0g0YzVNWTFFS0JhCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdFkyeHAKWlc1MExXTmhRREUyTkRBeU5qWTBNVE13SGhjTk1qRXhNakl6TVRNek16TXpXaGNOTXpFeE1qSXhNVE16TXpNegpXakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwWlc1MExXTmhRREUyTkRBeU5qWTBNVE13V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFUY3NlakN3TDQ5VkZvQnJhWVRyR3ByR2lMajNKeEw4ZVcwYnpTVDBWRGUKeFlrb3hDbDlnR0N6R2p1Q2Q0ZmZmRXV0QWdFMjU5MDFBWGJCU2VnOHdlSkJvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTRUb3dVWVMzTVRjRC9kOEM1a3RGCnhBRSt2VWN3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUloQU5CYWRhcFFZbnlYOEJDUllNODZtYWtMNkFDM0hSenMKL2l3Ukp6TnV6YytaQWlCZm14YytDTVZHQnBrblAzR2dWSWlFMFVQWkUrOFRnRUdkTTgrdCt4V2Ywdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K - client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSURXQURoZUl0RVdHWlFCc0tCSUpZTTZPeDB5TmRHQ1JjTDBTMUtvYjRTZ25vQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFZm5wbDF6MkxhMlRUYmgycWtJSnhEYkF2YnZrQ1E1NENxTkhjSFZJdG8vR2FIeWdlckRFcgpyb1R2bDUzYVI2c2RIWW1saWFYVGcyV1FmNzlIQ0wvVGNBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo= -root@WR768dbf76:~# - -``` - -If you want to use kubectl through another machine, you need to change the line 
`server: https://127.0.0.1:6443` to be `server: https://PLANETARYIP_OR_PUBLICIP:6443` -Replace PLANETARYIP_OR_PUBLICIP with the IP you want to reach the cluster through. - - -## Manage Workers -Add or remove workers in any **Kubernetes cluster**. - - -- Kubernetes DeployedList Weblet -![ ](./img/k8s_dl_1.png) - -- Manage Kubernetes workers -![ ](./img/k8s_dl_2.png) - -- Add a new worker -![ ](./img/new_k8s4.png) - -- Successfully added new worker -![ ](./img/k8s_dl_4.png) - -- Delete a worker -![ ](./img/new_k8s5.png) \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/mattermost.md b/collections/documentation/dashboard/solutions/mattermost.md deleted file mode 100644 index 0e9528c..0000000 --- a/collections/documentation/dashboard/solutions/mattermost.md +++ /dev/null @@ -1,55 +0,0 @@ -
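The kubeconfig edit described in the Kubeconfig section can be sketched in shell. This sketch uses a local stand-in file rather than the real kubeconfig, and `203.0.113.10` is a placeholder for your planetary or public IP:

```
# Local stand-in for the server line found in the cluster's kubeconfig
printf 'server: https://127.0.0.1:6443\n' > /tmp/tfgrid-kubeconfig

# Rewrite the API server entry so kubectl targets the cluster's reachable IP
sed -i 's|server: https://127.0.0.1:6443|server: https://203.0.113.10:6443|' /tmp/tfgrid-kubeconfig

cat /tmp/tfgrid-kubeconfig
# → server: https://203.0.113.10:6443
```

With the real file (copied off the cluster, e.g. via `scp root@IP:/root/.kube/config .`), you could then run `kubectl --kubeconfig ./config get nodes` from your own machine.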

Mattermost

- -

Table of Contents

- -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Deployment](#deployment) - -*** - -## Introduction - -[Mattermost](https://mattermost.com/) is a single point of collaboration, designed specifically for digital operations. - -## Prerequisites - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Applications** -- Click on **Mattermost** - -## Deployment - -![ ](./img/solutions_mattermost.png) - -- Enter an Application Name. It's used in generating a unique subdomain on one of the gateways on the network alongside your twin ID. Ex. ***matter*.gent02.dev.grid.tf** - -- Select a capacity package: - - **Small**: {cpu: 1, memory: 2, diskSize: 15 } - - **Medium**: {cpu: 2, memory: 4, diskSize: 50 } - - **Large**: {cpu: 4, memory: 16, diskSize: 100 } - - Or choose a **Custom** plan -- `Dedicated` flag to retrieve only dedicated nodes -- `Certified` flag to retrieve only certified nodes -- Choose the location of the node - - `Region` - - `Country` - - `Farm Name` - -- Choose the node to deploy on -> Or you can select a specific node with manual selection. -- `Custom Domain` flag lets the user use a custom domain -- Choose a gateway node to deploy your Mattermost instance on. - - -- There's also an optional **SMTP Server** tab if you'd like to have your Mattermost instance configured with an SMTP server. - - ![ ](./img/mattermost3.png) - -After that is done, you can see a list of all of your deployed instances. - -![ ](./img/mattermost4.png) - -Click on ***Visit*** to go to the homepage of your Mattermost instance! You need to log in using TFConnect, so make sure you download the *TFConnect* app from your App Store. 
- -![ ](./img/mattermost5.png) diff --git a/collections/documentation/dashboard/solutions/nextcloud.md b/collections/documentation/dashboard/solutions/nextcloud.md deleted file mode 100644 index d25dd79..0000000 --- a/collections/documentation/dashboard/solutions/nextcloud.md +++ /dev/null @@ -1,207 +0,0 @@ -

Nextcloud

- -

Table of Contents

- -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Domain Names and Public IPs](#domain-names-and-public-ips) -- [Deploy Nextcloud](#deploy-nextcloud) -- [Nextcloud Setup](#nextcloud-setup) -- [DNS Details](#dns-details) - - [DNS Record with Public IPv4](#dns-record-with-public-ipv4) - - [DNS Record with Gateway](#dns-record-with-gateway) - - [DNS Propagation](#dns-propagation) -- [Talk](#talk) - - [Install Talk](#install-talk) - - [TURN](#turn) - - [Use Talk](#use-talk) -- [Backups and Updates](#backups-and-updates) - - [Create a Backup](#create-a-backup) - - [Automatic Backups and Updates](#automatic-backups-and-updates) -- [Troubleshooting](#troubleshooting) - - [Retrieve the Nextcloud AIO Password](#retrieve-the-nextcloud-aio-password) - - [Access the Nextcloud Interface Page](#access-the-nextcloud-interface-page) - - [Check the DNS Propagation](#check-the-dns-propagation) -- [Questions and Feedback](#questions-and-feedback) - -*** - -# Introduction - -[Nextcloud](https://nextcloud.com/) is a suite of client-server software for creating and using file hosting services. - -Nextcloud provides functionality similar to Dropbox, Office 365 or Google Drive when used with integrated office suites like Collabora Online or OnlyOffice. - - - -# Prerequisites - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Applications** -- Click on **Nextcloud** - - - -# Domain Names and Public IPs - -A domain name is required to use Nextcloud. You can either use your own, which we'll call a *custom domain*, or you can get a free subdomain from a gateway node. This won't impact the function of your deployment, it's just a matter of preference. If you want to use your own domain, follow the steps for custom domain wherever you see them below. - -Another choice to make before launching your Nextcloud instance is whether you want to reserve a public IPv4 for the deployment. 
Renting a public IP is an extra cost and is only required for the dedicated Nextcloud Talk video conferencing backend, recommended for calls with more than four participants. If you don't reserve a public IP, you can still use Talk in a more limited fashion (see the [Talk](#talk) section below for details). - -If you're not sure and just want the easiest, most affordable option, skip the public IP and use a gateway domain. - - - -# Deploy Nextcloud - -* On the [ThreeFold Dashboard](https://dashboard.grid.tf/), click on **Solutions** from the sidebar, then click on **Nextcloud** -* Choose a name for your deployment - * Note: You can use the auto-generated name if you want -* Select a capacity package: - * **Minimum**: {cpu: 2, memory: 4gb, diskSize: 50gb } - * **Standard**: {cpu: 2, memory: 8gb, diskSize: 500gb } - * **Recommended**: {cpu: 4, memory: 16gb, diskSize: 1000gb } - * Or choose a **Custom** plan -* If you want to reserve a public IPv4 address, click on Network then select **Public IPv4** -* If you want a [dedicated](../deploy/dedicated_machines.md) and/or a certified node, select the corresponding option -* Choose the location of the node - * `Country` - * `Farm Name` -* Select a node -* If you want to use a custom domain, click on **Custom domain** under **Domain Name** and write your domain name - * Example: `nextcloudwebsite.com` -* The **Select gateway** box will be visible whenever a gateway is required. If so, click it and choose a gateway - * If you are also using a custom domain, you must set your DNS record now before proceeding. The IP of the gateway will appear on screen. Check [below](#dns-details) for more information -* Click **Deploy** - - - -# Nextcloud Setup - -Once the weblet is deployed, the details page will appear. If you are using a custom domain with a public IPv4, you'll need to set your DNS record now using the IP address shown under **Public IPv4**. Again, see [below](#dns-details) for details. 
- -Before you can access Nextcloud itself, you'll need to decide which addons you want to install and complete a setup step. This is done through the AIO interface that's included with your deployment. To access it, you can visit the **Nextcloud Setup** link shown in the details page, or click on the **Nextcloud Setup** button under **Actions** in the deployments list to set up Nextcloud. - -* Once you have access to the **Nextcloud AIO setup page**, you will be given a password composed of 8 words. - * Use this password to access the **Nextcloud AIO interface page**. - * Store this password somewhere safe. It's only possible to recover it by using SSH. -* On the next page, you can add **Optional addons** if you want. -* Click on **Download and start containers** to start the Nextcloud instance. -* Once the containers are properly started, you can access the Nextcloud admin login page by clicking **Open your Nextcloud**. - * You will be given an **Initial Nextcloud user name** and an **Initial Nextcloud password**. Use these credentials to log into the admin page. - * Store these credentials somewhere safe. -* Later, if you want to access the Nextcloud admin login page, you can simply click on the button **Open Nextcloud** under **Actions** in the deployment list. - -The installation is now complete and you have access to your Nextcloud instance. - - - -# DNS Details - -## DNS Record with Public IPv4 - -After deployment, you will have access to the IPv4 address of the VM you deployed on. You will need to add a **DNS A record** (Host: "@", Value: ) to your domain to access Nextcloud. This record type indicates the IP address of a given domain. - -## DNS Record with Gateway - -Before starting the deployment, you will need to add a **DNS A record** (Host: "@", Value: ) to your domain. The gateway IP will be shown to you when you select this option. - -## DNS Propagation - -When setting your own custom domain, it might take time for DNS to propagate. 
It is possible that you see the following message when opening the Nextcloud page: - ->"This site can't be reached. DNS address could not be found. Diagnosing the problem." - -This is normal. You might simply need to wait for the DNS to propagate completely. - - - -# Talk - -If you don't rent a public IP with your deployment, it's still possible to use Nextcloud Talk in a more limited fashion. It's generally understood that this method can work well for up to four participants in a call, and text chat also works without restriction. For larger calls, the dedicated backend, which requires a public IP, is recommended. - -While some calls can go entirely peer-to-peer and don't require any setup beyond installing the Talk app, a TURN server can be helpful to relay data when a peer-to-peer connection can't be established. There's more information on TURN servers after the install instructions. - -## Install Talk - -To install Talk, do the following: - -* Open the dropdown menu at the top right of the Nextcloud page -* Click on **Apps** -* In the left-side menu, select **Social & communication** -* Scroll down and locate the Talk app -* Click on **Download and enable** - -Once the Talk app is downloaded and enabled, you can find its icon in the top bar menu. - -## TURN - -As mentioned before, TURN servers relay data to help call participants connect to each other. All data sent to the TURN server is encrypted in this case, so it's perfectly safe to use a free public server. - -That said, such free servers are not common, because relaying video chat uses a lot of bandwidth. As of the time of writing, Open Relay Project is one example that includes [instructions for use with Nextcloud Talk](https://www.metered.ca/tools/openrelay/#turn-server-for-nextcloud-talk). 
- -TURN server configuration can be found by opening the Talk settings, like this: - -* Open the dropdown menu at the top right of the Nextcloud page -* Click on **Personal settings** -* In the left-side menu, select **Talk** - -## Use Talk - -Once you've installed Talk and optionally added a TURN server, you can use Talk to create video conferences. - -Note that the host of the video meeting might need to turn the VPN off before creating a new conversation. - - - -# Backups and Updates - -## Create a Backup - -In the section **Backup and restore**, you can set up a [BorgBackup](https://www.borgbackup.org/) of your Nextcloud instance. - -* Add a mount point and a directory name for your backup (e.g. **/mnt/backup**) and click **Submit backup location**. -* After the creation of the backup location, write down the **encryption password for backups** somewhere safe and offline. -* Click **Create backup** to create a BorgBackup of your Nextcloud instance. - * This will stop all containers, run the backup container and create the backup. -* Once the backup is complete, you can click on **Start containers** to restart the Nextcloud instance. - -## Automatic Backups and Updates - -After the first manual backup of your Nextcloud instance is complete, you can set up automatic backups and updates. - -* In the section **Backup and restore**, open the dropdown menu **Click here to reveal all backup options**. -* In the section **Daily backup and automatic updates**, choose a time for your daily backup and click **Submit backup time**. - * To set automatic updates, make sure that the option **Automatically update all containers, the mastercontainer and on** is selected. 
- - - -# Troubleshooting - -## Retrieve the Nextcloud AIO Password - -You can retrieve the Nextcloud AIO password (8 words) by running the following command on the VM hosting your Nextcloud instance: - -``` -cat /mnt/data/docker/volumes/nextcloud_aio_mastercontainer/_data/data/configuration.json | grep password -``` - -## Access the Nextcloud Interface Page - -To access the Nextcloud interface page, follow these steps: - -* Open your Nextcloud instance -* In the top right Profile menu, select **Administration Settings** -* Under **Nextcloud All-in-One**, click **Open Nextcloud AIO Interface** - - - -## Check the DNS Propagation - -You can check if the DNS records are propagated globally with DNS propagation check services such as [DNS Checker](https://dnschecker.org/). You can use this tool to verify that your domain is properly pointing to the IPv4 address of the VM you deployed on. - - - -# Questions and Feedback - -If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/nixos_micro.md b/collections/documentation/dashboard/solutions/nixos_micro.md deleted file mode 100644 index 41f8932..0000000 --- a/collections/documentation/dashboard/solutions/nixos_micro.md +++ /dev/null @@ -1,66 +0,0 @@ -
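The DNS A record described in the Nextcloud guide above can be illustrated as a BIND-style zone-file fragment; `203.0.113.10` is a placeholder for the public or gateway IP shown during deployment, and the `@` stands for the apex of your own domain:

```
; Apex A record: points the domain itself at the deployment's IPv4
@   IN  A   203.0.113.10
```

Most registrar dashboards expose the same three pieces of information as Host (`@`), Type (`A`), and Value (the IP), so this fragment maps directly onto their web forms.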

NixOS MicroVM

- -

Table of Contents

- -- [Introduction](#introduction) -- [Access the ThreeFold Dashboard](#access-the-threefold-dashboard) -- [Deploy a NixOS MicroVM](#deploy-a-nixos-microvm) -- [Questions and Feedback](#questions-and-feedback) - -*** - -## Introduction - -__NixOS MicroVM__ refers to a minimalistic virtual machine environment based on the NixOS Linux distribution. -The NixOS MicroVM leverages Nix's declarative principles to create a highly customizable and reproducible virtual machine environment. It allows users to define the entire system configuration, including packages, services, and dependencies, in a declarative manner using the Nix language. This ensures that the MicroVM is consistent, easily reproducible, and can be version-controlled. - -In this guide, you will learn how to make reproducible, declarative and reliable systems by deploying a NixOS MicroVM weblet in the ThreeFold Dashboard. - -For more information on Nix, you can read the [Nix Reference Manual](https://nixos.org/manual/nix/stable/). - -## Access the ThreeFold Dashboard - -* Go to the ThreeFold Dashboard website, based on the deployment network you prefer: - * [Mainnet](https://dashboard.grid.tf) - * [Testnet](https://dashboard.test.grid.tf) - * [Devnet](https://dashboard.dev.grid.tf) - * [QAnet](https://dashboard.qa.grid.tf) - -* Make sure you have a [wallet](../wallet_connector.md) -* From the sidebar click on **Solutions** -* Click on **Micro Virtual Machine** to start your NixOS MicroVM Deployment - - - -## Deploy a NixOS MicroVM - -We now present the main steps to properly configure your NixOS MicroVM running on the TFGrid. - -* In the section `Config`, make sure to select `Nixos` as the `VM Image`. You can choose different parameters (CPU, Memory, etc.) for your deployment depending on your workload needs. - -![](./img/nxios-micro1.png) - -* In the section `Environment Variables`, you can add the default configurations for Nix. Here's an example: - * ``` - { pkgs ? 
import <nixpkgs> { } }: - let pythonEnv = pkgs.python3.withPackages(ps: [ ]); in pkgs.mkShell { packages = [ pythonEnv ]; } - ``` - * This will be written to `/root/default.nix`. You can change the Nix shell configuration there. - -![](./img/nixos-micro2.png) - -* In the section `Disks`, you should mount a disk large enough for Nix to store its files used for `nix-store`. - -![](./img/nixos-micro3.png) - -* Once you've configured the parameters, you can deploy the MicroVM. - -If you need more information on how to SSH into your deployment, read [this section](../../system_administrators/getstarted/tfgrid3_getstarted.md) of the TF Manual. - - - -## Questions and Feedback - -You should now be able to easily deploy a NixOS MicroVM on the ThreeFold Grid. - -If you have any questions or feedback, you can write a post on the [ThreeFold Forum](http://forum.threefold.io/). \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/nodepilot.md b/collections/documentation/dashboard/solutions/nodepilot.md deleted file mode 100644 index 1fc212a..0000000 --- a/collections/documentation/dashboard/solutions/nodepilot.md +++ /dev/null @@ -1,52 +0,0 @@ -
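The default shell above declares an empty Python environment. As a sketch of how you might extend `/root/default.nix`, the hypothetical expression below adds one Python package and one extra tool to the shell (the package names `ps.requests` and `pkgs.git` are assumptions for illustration; check nixpkgs for what you actually need):

```nix
# Hypothetical extension of /root/default.nix: add the requests library
# to the Python environment and git to the shell packages.
{ pkgs ? import <nixpkgs> { } }:
let
  pythonEnv = pkgs.python3.withPackages (ps: [ ps.requests ]);
in
pkgs.mkShell {
  packages = [ pythonEnv pkgs.git ];
}
```

After editing the file, re-enter the environment with `nix-shell /root/default.nix`.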

NodePilot

- -

Table of Contents

- -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Deployment](#deployment) - -*** - -## Introduction - -This is a simple instance of upstream [Node Pilot](https://nodepilot.tech). - -## Prerequisites - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Applications** -- Click on **Node Pilot** - -## Deployment - -![ ](./img/solutions_nodepilot.png) - -- Fill in the instance name: it's used to reference the node-pilot in the future. - -- Minimum CPU allowed is 8 cores and minimum memory allowed is 8192 MB. - -- `Dedicated` flag to retrieve only dedicated nodes -- `Certified` flag to retrieve only certified nodes - -- Choose the location of the node - - `Region` - - `Country` - - `Farm Name` -- Select a node to deploy your node-pilot instance on. - -> Or you can select a specific node with manual selection. - -- When using the [flist](https://hub.grid.tf/tf-official-vms/node-pilot-zdbfs.flist) you get a Node Pilot instance ready out of the box. You need a public IPv4 to get it to work. - -After that is done you can see a list of all of your deployed instances - -![ ](./img/nodeP_2.png) - -Click on ***Visit*** to go to the registration page of your Node Pilot instance! - -![ ](./img/nodePilot_3.png) - -You can go to `https://publicip` and configure your node-pilot. You can upload a backup to the VM via SSH as well if you have a backup of a previous instance. - -Compared to the upstream Node Pilot, this instance ships out of the box with a transparent, pre-filled blockchain database for some blockchains (currently Fuse and Pokt as a proof of concept). You can start one of these blockchains in no time: it will already be nearly synced, without requiring the full space locally or downloading everything and saturating your bandwidth. 
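Once the instance is up, you can verify from a shell that the Node Pilot web UI answers on its public IP before opening the browser. This is a generic sketch (it assumes `curl` is installed; replace the address with your instance's public IPv4, and note that `-k` skips certificate verification because the instance may serve a self-signed certificate at first):

```bash
# Report the HTTP status code a URL answers with, without downloading the body.
http_status() {
  curl -k -s -o /dev/null -w '%{http_code}' "$1"
}

# e.g. http_status "https://<your public IPv4>"
# curl prints 000 when the host is unreachable.
http_status "http://127.0.0.1:9" || true
```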
diff --git a/collections/documentation/dashboard/solutions/owncloud.md b/collections/documentation/dashboard/solutions/owncloud.md deleted file mode 100644 index 16726a0..0000000 --- a/collections/documentation/dashboard/solutions/owncloud.md +++ /dev/null @@ -1,86 +0,0 @@ -

ownCloud

- -

Table of Contents

- -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Deploy ownCloud](#deploy-owncloud) - - [Base](#base) - - [SMTP](#smtp) - - [List of Instances](#list-of-instances) -- [Admin Connection](#admin-connection) -- [TFConnect App Connection](#tfconnect-app-connection) - -*** - -## Introduction - -[ownCloud](https://owncloud.com/) develops and provides open-source software for content collaboration, allowing teams to easily share and work on files seamlessly regardless of device or location. - -## Prerequisites - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Solutions** -- Click on **ownCloud** - -## Deploy ownCloud - -![ ](./img/owncloud1.png) - -### Base - -- Enter an ownCloud deployment name. - - The name is used in generating a unique subdomain on one of the gateways on the network alongside your twin ID. - - Ex. ***oc98newcloud*.gent02.dev.grid.tf** - -- Enter administrator information including **Username** and **Password**. - - This admin user will have full permission on the deployed instance. -- Select a capacity package: - - **Small**: {cpu: 2, memory: 8, diskSize: 250 } - - **Medium**: {cpu: 2, memory: 16, diskSize: 500 } - - **Large**: {cpu: 4, memory: 32, diskSize: 1000 } - - Or choose a **Custom** plan -- Choose the network - - `Public IPv4` flag gives the virtual machine a Public IPv4 -- Enable the `Dedicated` flag to retrieve only dedicated nodes -- Enable the `Certified` flag to retrieve only certified nodes -- Choose the location of the node - - `Region` - - `Country` - - `Farm Name` -- Choose the node to deploy on -> Or you can select a specific node with manual selection. -- Enable the `Custom Domain` flag to use a custom domain -- Choose a gateway node to deploy your ownCloud instance on. - -Once you've set the deployment parameters, you can click on **Deploy**. 
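The generated subdomain described above can be sketched as a simple string composition. The values and the exact prefix scheme below are assumptions inferred from the `oc98newcloud.gent02.dev.grid.tf` example (a solution prefix, your twin ID, then the name you entered); the Dashboard generates the real subdomain for you:

```bash
# Example values only; the Dashboard composes the real subdomain.
prefix="oc"                  # solution prefix (assumption from the example)
twin_id="98"                 # your twin ID
name="newcloud"              # the deployment name you entered
gateway="gent02.dev.grid.tf" # the selected gateway's domain
echo "${prefix}${twin_id}${name}.${gateway}"
# prints: oc98newcloud.gent02.dev.grid.tf
```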
- -### SMTP - -On the SMTP window, you can enable the optional `SMTP Server` flag if you want to have your ownCloud instance configured with an SMTP server. - -![ ](./img/owncloud4.png) - -### List of Instances - -When the deployment is ready, you will see a list of all of your deployed instances. - -![ ](./img/owncloud5.png) - -## Admin Connection - -Click on the button **Visit** under **Actions** to open the ownCloud login window. If you see **bad gateway**, you might simply need to wait a couple of minutes until the deployment completes. - -![ ](./img/solutions_owncloud_visit.png) - -To consult the deployment details, click on the button **Details** under **Actions**. On this page, you can access the **ownCloud Admin Username** and the **ownCloud Admin Password**. Use those credentials to log in as an administrator on your ownCloud deployment. - -![ ](./img/owncloud6.png) - -## TFConnect App Connection - -To connect to your ownCloud instance with the ThreeFold Connect app, you need to add permissions to your ThreeFold 3Bot ID by first [connecting as an administrator](#admin-connection). - -- Once you're connected as an admin, open the top-right menu and click on **Users**. -- To create a new user, set your 3Bot ID as the username with its corresponding email address, and set **Groups** as **admin**. Then click **Create**. -- You can now log out and connect to your ownCloud instance with the TF Connect app. \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/peertube.md b/collections/documentation/dashboard/solutions/peertube.md deleted file mode 100644 index 79a74ee..0000000 --- a/collections/documentation/dashboard/solutions/peertube.md +++ /dev/null @@ -1,59 +0,0 @@ -

Peertube

- -

Table of Contents

- -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Deployment](#deployment) - -*** - -## Introduction - -[Peertube](https://joinpeertube.org/) aspires to be a decentralized and free/libre alternative to video broadcasting services. - -## Prerequisites - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Applications** -- Click on **Peertube** - -## Deployment - -![ ](./img/solutions_peertube.png) - -- Enter an Application Name. It's used in generating a unique subdomain on one of the gateways on the network alongside your twin ID. Ex. ***pt100peerprod*.gent02.dev.grid.tf** -- Enter an email and password which will be used for the admin login. -- Select a capacity package: - - **Small**: { cpu: 1, memory: 2, diskSize: 15 } - - **Medium**: { cpu: 2, memory: 4, diskSize: 100 } - - **Large**: { cpu: 4, memory: 16, diskSize: 250 } - - Or choose a **Custom** plan - - - `Public IPv4` flag gives the virtual machine a Public IPv4 - - `Public IPv6` flag gives the virtual machine a Public IPv6 - - `Planetary Network` to connect the Virtual Machine to Planetary network - - `Wireguard Access` to add Wireguard access to the Virtual Machine -- `Dedicated` flag to retrieve only dedicated nodes -- `Certified` flag to retrieve only certified nodes -- Choose the location of the node - - `Region` - - `Country` - - `Farm Name` - -- Choose the node to deploy on -> Or you can select a specific node with manual selection. -- `Custom Domain` flag lets the user use a custom domain -- Choose a gateway node to deploy your Peertube instance on. - -After that is done you can see a list of all of your deployed instances - - -![ ](./img/weblet_peertube_listing.png) - -Click on ***Visit*** to go to the homepage of your Peertube instance! 
- -![ ](./img/weblet_peertube_instance.png) - -> Please note it may take some time to be ready \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/presearch.md b/collections/documentation/dashboard/solutions/presearch.md deleted file mode 100644 index c0c3f55..0000000 --- a/collections/documentation/dashboard/solutions/presearch.md +++ /dev/null @@ -1,98 +0,0 @@ -
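Since the instance can take some time to become ready, you can poll its URL from a shell instead of refreshing the browser. This is a generic sketch, assuming `curl` is installed; the function name is hypothetical, and you would pass your instance's subdomain:

```bash
# Retry a URL until it answers HTTP 200, with a fixed delay between attempts.
wait_for_url() {
  url=$1 tries=$2 delay=$3
  i=0
  while [ "$i" -lt "$tries" ]; do
    code=$(curl -k -s -o /dev/null -w '%{http_code}' "$url")
    [ "$code" = "200" ] && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# e.g. wait_for_url "https://pt100peerprod.gent02.dev.grid.tf" 30 10
```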

Presearch

- -

Table of Contents

- -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Deploy a Presearch Node](#deploy-a-presearch-node) -- [Migrate an Existing Presearch Node to the TFGrid](#migrate-an-existing-presearch-node-to-the-tfgrid) -- [Verify if a 3Node Already Runs a Presearch Workload](#verify-if-a-3node-already-runs-a-presearch-workload) -- [Learn More About Presearch](#learn-more-about-presearch) -- [Questions and Feedback](#questions-and-feedback) - -*** - -## Introduction - -[Presearch](https://www.presearch.io/) is a community-powered, decentralized search engine that provides better results while protecting your privacy and rewarding you when you search. This weblet deploys a Presearch node. Presearch Nodes are used to process user search requests, and node operators earn Presearch PRE tokens for joining and supporting the network. - -## Prerequisites - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Applications** -- Click on **Presearch** - -## Deploy a Presearch Node - -![ ](./img/solutions_presearch.png) - -- Enter an instance name. - -- You need to sign up on Presearch in order to get your *Presearch Registration Code*. To sign up, go to the [Presearch](https://presearch.com) website, create your account and then head to your [dashboard](https://nodes.presearch.com/dashboard) to find your registration code. - -- Choose the network - - `Public IPv4` flag gives the virtual machine a Public IPv4 - - `Planetary Network` to connect the Virtual Machine to Planetary network - -- `Dedicated` flag to retrieve only dedicated nodes -- `Certified` flag to retrieve only certified nodes -- Choose the location of the node - - `Region` - - `Country` - - `Farm Name` - -- Choose the node to deploy the Virtual Machine on -> Or you can select a specific node with manual selection. 
- -## Migrate an Existing Presearch Node to the TFGrid - -Now what if you already have a Presearch node deployed somewhere and would like to migrate to Threefold? - -We got you! All you need to do is: - -1. Log into your old server that has your node via SSH. -2. Run `docker cp presearch-node:/app/node/.keys presearch-node-keys` in order to copy your node's key-pair out of the container. -3. Head to the *Restore* tab in the Presearch weblet and paste your key-pair in the fields below and you'll be good to deploy! - -![ ](./img/presearch6.png) - -After that is done you can see a list of all of your deployed instances - -![ ](./img/presearch4.png ) - -Now head to your [dashboard](https://nodes.presearch.com/dashboard) again and scroll down to **Current Nodes**, you'll see your newly created node up and connected! - -![ ](./img/presearch5.png) - - -## Verify if a 3Node Already Runs a Presearch Workload - -You can do the following to verify whether a 3Node without a public IP address is already running a Presearch workload. Note that you will first need to deploy a Presearch workload on the 3Node. After deployment, you can SSH into the VM and do the verification. - -* SSH into the VM running the Presearch workload - * ``` - ssh root@<VM IP> - ``` -* List running containers and identify the Presearch container - * ``` - docker ps - ``` -* Print the logs of the Presearch container - * ``` - docker logs <container ID> - ``` -* If there is no other Presearch workload running on the 3Node, you will see a similar output: - * > 2023-10-16T12:18:33.780Z info: Node is listening for searches... -* If there is another Presearch workload running on the 3Node, you will see a similar output: - * > 2023-10-16T12:24:00.346Z error: Duplicate IP: This IP Address is already running another Node. Only one Node is permitted per IP Address. 
- * If there is another Presearch workload running, you will need to either deploy on another 3Node with a public IP or deploy on another node without a public IP that isn't running a Presearch deployment. - - - -## Learn More About Presearch - -To learn more about Presearch, you can read the [Presearch documentation](https://docs.presearch.io/). - -## Questions and Feedback - -If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/solutions.md b/collections/documentation/dashboard/solutions/solutions.md deleted file mode 100644 index 60732e3..0000000 --- a/collections/documentation/dashboard/solutions/solutions.md +++ /dev/null @@ -1,29 +0,0 @@ -
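The two log lines in the verification steps above can be told apart mechanically. A small sketch (the function name is hypothetical, and the sample log line is taken from the output shown above; on the VM you would pipe `docker logs` into it):

```bash
# Classify a Presearch log stream: duplicate-IP error vs. healthy node.
classify_presearch_log() {
  if grep -q 'Duplicate IP'; then
    echo "another Presearch node already uses this IP"
  else
    echo "node looks healthy"
  fi
}

printf '%s\n' '2023-10-16T12:24:00.346Z error: Duplicate IP: This IP Address is already running another Node.' \
  | classify_presearch_log
# prints: another Presearch node already uses this IP
```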

Solutions

- -This section provides an easy, no-code way to deploy a whole solution on the TFGrid. - -

Table of Contents

- -- [Basic Environments](./basic_environments_readme.md) - - [Virtual Machines](./vm_intro.md) - - [Micro and Full VM Differences](./vm_differences.md) - - [Full Virtual Machine](./fullVm.md) - - [Micro Virtual Machine](./vm.md) - - [Kubernetes](./k8s.md) - - [NixOS MicroVM](./nixos_micro.md) -- [Ready Community Solutions](./ready_community_readme.md) - - [Caprover](./caprover.md) - - [Funkwhale](./funkwhale.md) - - [Peertube](./peertube.md) - - [Taiga](./taiga.md) - - [Owncloud](./owncloud.md) - - [Nextcloud](./nextcloud.md) - - [Discourse](./discourse.md) - - [Mattermost](./mattermost.md) - - [Presearch](./presearch.md) - - [CasperLabs](./casper.md) - - [Node Pilot](./nodepilot.md) - - [Subsquid](./subsquid.md) - - [Algorand](./algorand.md) - - [Wordpress](./wordpress.md) - - [Umbrel](./umbrel.md) diff --git a/collections/documentation/dashboard/solutions/subsquid.md b/collections/documentation/dashboard/solutions/subsquid.md deleted file mode 100644 index 063ff35..0000000 --- a/collections/documentation/dashboard/solutions/subsquid.md +++ /dev/null @@ -1,54 +0,0 @@ -

Subsquid

- -

Table of Contents

- -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Deployment](#deployment) - -*** - -## Introduction - -The [Subsquid](https://www.subsquid.io/) indexer is a piece of software that reads all the blocks from a Substrate-based blockchain, then decodes and stores them for processing at a later stage. - -## Prerequisites - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Applications** -- Click on **Subsquid** - -## Deployment - -![ ](./img/solutions_subsquid.png) - -- Enter an instance name. - -- Enter an endpoint for a supported Substrate chain. You can find the list of endpoints of supported chains [here](https://github.com/polkadot-js/apps/blob/master/packages/apps-config/src/endpoints/production.ts). - - -- Select a capacity package: - - **Small**: {cpu: 1, memory: 2 , diskSize: 50 } - - **Medium**: {cpu: 2, memory: 4, diskSize: 100 } - - **Large**: {cpu: 4, memory: 16, diskSize: 250 } - - Or choose a **Custom** plan - -- `Dedicated` flag to retrieve only dedicated nodes -- `Certified` flag to retrieve only certified nodes -- Choose the location of the node - - `Region` - - `Country` - - `Farm Name` -- Choose the node to deploy on -> Or you can select a specific node with manual selection. -- `Custom Domain` flag lets the user use a custom domain -- Choose a gateway node to deploy your Subsquid instance on. - - -After that is done you can see a list of all of your deployed instances - -![ ](./img/subsquid_list.png) - -Click on ***Visit*** to go to the homepage of your Subsquid indexer instance! - -![ ](./img/subsquid_graphql.png) diff --git a/collections/documentation/dashboard/solutions/taiga.md b/collections/documentation/dashboard/solutions/taiga.md deleted file mode 100644 index 4291e12..0000000 --- a/collections/documentation/dashboard/solutions/taiga.md +++ /dev/null @@ -1,57 +0,0 @@ -

Taiga

- -

Table of Contents

- -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Deployment](#deployment) - -*** - -## Introduction - -[Taiga](https://www.taiga.io/) is the project management tool for multi-functional agile teams. It has a rich feature set and at the same time it is very simple to start with through its intuitive user interface. - -## Prerequisites - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Applications** -- Click on **Taiga** - -## Deployment - -![ ](./img/solutions_taiga.png) - -- Enter an Application Name. It's used in generating a unique subdomain on one of the gateways on the network alongside your twin ID. Ex. ***tg98taigar*.gent02.dev.grid.tf** - -- Enter administrator information including **Username**, **Email** and **Password**. This admin user will have full permission on the deployed instance. -- Select a capacity package: - - **Small**: {cpu: 2, memory: 4, diskSize: 100 } - - **Medium**: {cpu: 4, memory: 8, diskSize: 150 } - - **Large**: {cpu: 4, memory: 16, diskSize: 250 } - - Or choose a **Custom** plan - -- `Dedicated` flag to retrieve only dedicated nodes -- `Certified` flag to retrieve only certified nodes -- Choose the location of the node - - `Region` - - `Country` - - `Farm Name` -- Choose the node to deploy the Taiga instance on -> Or you can select a specific node with manual selection. -- `Custom Domain` flag lets the user use a custom domain -- Choose a gateway node to deploy your Taiga instance on. - - - -There's also an optional **Mail Server** tab if you'd like to have your Taiga instance configured with an SMTP server. - -![ ](./img/taiga4.png) - -After that is done you can see a list of all of your deployed instances - -![ ](./img/taiga5.png) - -Click on ***Visit*** to go to the homepage of your Taiga instance! 
- -![ ](./img/taiga6.png) diff --git a/collections/documentation/dashboard/solutions/umbrel.md b/collections/documentation/dashboard/solutions/umbrel.md deleted file mode 100644 index 21075ca..0000000 --- a/collections/documentation/dashboard/solutions/umbrel.md +++ /dev/null @@ -1,60 +0,0 @@ -

Umbrel

- -

Table of Contents

- -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) - -*** - -## Introduction - -[Umbrel](https://umbrel.com/) is an OS for running a personal server in your home. Self-host open-source apps like Nextcloud, a Bitcoin node, and more. - -## Prerequisites - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Applications** -- Click on **Umbrel** - -**Process**: -![Config](./img/solutions_umbrel.png) - -- Enter an instance name. -- Enter a Username - - It will be used to create the Umbrel dashboard account. -- Enter a Password - - It will be used to log into the Umbrel dashboard. - - It must be 12 to 30 characters. -- Select a capacity package: - - **Small**: { cpu: 1, memory: 2, diskSize: 10 } - - **Medium**: { cpu: 2, memory: 4 , diskSize: 50 } - - **Large**: { cpu: 4, memory: 16 , diskSize: 100 } - - Or choose a **Custom** plan - -- `Dedicated` flag to retrieve only dedicated nodes -- `Certified` flag to retrieve only certified nodes -- Choose the location of the node - - `Region` - - `Country` - - `Farm Name` -- Choose the node to deploy the Umbrel instance on -> Or you can select a specific node with manual selection. - -**After Deploying**: - -You can see a list of all of your deployed instances - -![ ](./img/umbrel2.png) - -- You can click on `Show details` for more details about the Umbrel deployment. - ![ ](./img/umbrel3.png) - For more detailed information, switch to the `JSON` tab. - ![ ](./img/umbrel4.png) -- Click on ***Admin Panel*** to go to the dashboard of your Umbrel instance! - - Enter the ***Password*** that you provided in the `config` section to log into the Umbrel dashboard. - > Forgot the credentials? You can find them with the `Show details` button. - -![ ](./img/umbrel5.png) - -> **Warning**: Due to the nature of the grid, shutting down or restarting your Umbrel from the dashboard **MAY** cause unwanted behavior. 
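The 12 to 30 character password rule above is easy to pre-check locally before filling in the form; a minimal sketch:

```bash
# Check a candidate Umbrel password against the 12-30 character rule.
pw='correct horse battery staple'
len=${#pw}
if [ "$len" -ge 12 ] && [ "$len" -le 30 ]; then
  echo "length $len: ok"
else
  echo "length $len: must be 12 to 30 characters"
fi
# prints: length 28: ok
```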
diff --git a/collections/documentation/dashboard/solutions/vm.md b/collections/documentation/dashboard/solutions/vm.md deleted file mode 100644 index 5e8db2f..0000000 --- a/collections/documentation/dashboard/solutions/vm.md +++ /dev/null @@ -1,62 +0,0 @@ -

Micro Virtual Machine

- -

Table of Contents

- -- [Introduction](#introduction) -- [Deployment](#deployment) - -*** - -## Introduction - -We present the steps to deploy a micro VM on the TFGrid. - - -## Deployment - -Deploy a new virtual machine on the Threefold Grid - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Solutions** -- Click on **Micro Virtual Machine** - -__Process__: - -![ ](./img/solutions_microvm.png) - -- Fill in the instance name: it's used to reference the VM in the future. -- Choose the image from the drop-down (e.g. Alpine, Ubuntu) or you can click on `Other` and manually specify the flist URL and the entrypoint. -- Select a capacity package: - - **Small**: {cpu: 1, memory: 2, diskSize: 25 } - - **Medium**: {cpu: 2, memory: 4, diskSize: 50 } - - **Large**: {cpu: 4, memory: 16, diskSize: 100} - - Or choose a **Custom** plan -- Choose the network - - `Public IPv4` flag gives the virtual machine a Public IPv4 - - `Public IPv6` flag gives the virtual machine a Public IPv6 - - `Planetary Network` to connect the Virtual Machine to Planetary network - - `Wireguard Access` to add Wireguard access to the Virtual Machine -- `GPU` flag to add a GPU to the Virtual Machine -- `Dedicated` flag to retrieve only dedicated nodes -- `Certified` flag to retrieve only certified nodes -- Choose the location of the node - - `Region` - - `Country` - - `Farm Name` -- Choose the node to deploy the Virtual Machine on -> Or you can select a specific node with manual selection. - -![](./img/nixos-micro2.png) -* In the section `Environment Variables`, you can add any environment variables that the machine might need - -![](./img/nixos-micro3.png) - -* In the section `Disks`, you can attach one or more disks to the Virtual Machine by clicking on the Disks tab and the plus `+` sign, and specify the following parameters - - Disk name - - Disk size - -At the bottom of the page, you can see a list of all of the virtual machines you deployed. 
You can click on `Show details` for more details. - -![](./img/vm_list.png) -You can also go to the JSON tab for full details. -![ ](./img/vm_json.png) \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/vm_differences.md b/collections/documentation/dashboard/solutions/vm_differences.md deleted file mode 100644 index a6dff11..0000000 --- a/collections/documentation/dashboard/solutions/vm_differences.md +++ /dev/null @@ -1,29 +0,0 @@ -

Micro and Full VM Differences

- -

Table of Contents

- -- [Introduction](#introduction) -- [Micro Virtual Machine](#micro-virtual-machine) -- [Full Virtual Machine](#full-virtual-machine) - -*** - -## Introduction - -We present the main differences between a micro VM and a full VM. This is useful information when it comes to choosing the proper deployment on the TFGrid. - -## Micro Virtual Machine - -- It's meant to host microservices, and the user should enter the entrypoint. -- The user has no control over the kernel used to run the machine. -- The network setup will be created for the user, and the VM's init process can assume that it will be fully set up (according to the config the user provided) by the time it is started. -- Mountpoints will also be set up for the user. The environment variables passed will be available inside the VM. - -## Full Virtual Machine - -- The users run their own operating system, but the image must be - - EFI bootable - - Cloud-init enabled -- It contains a default disk attached, as the boot image will be copied to this disk. -- The default disk is mounted on /, so if you want to attach any additional disks, you have to choose a different mount point. -- A /image.raw file is used as "boot disk". This /image.raw is copied to the first attached volume of the VM. Cloud-init will take care of resizing the filesystem on the image to take the full disk size allocated in the deployment. \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/vm_intro.md b/collections/documentation/dashboard/solutions/vm_intro.md deleted file mode 100644 index d9cc0a7..0000000 --- a/collections/documentation/dashboard/solutions/vm_intro.md +++ /dev/null @@ -1,11 +0,0 @@ -

Virtual Machines

- -On the TFGrid, you can deploy both micro and full virtual machines. - -

Table of Contents

- -- [Micro and Full VM Differences ](./vm_differences.md) -- [Full Virtual Machine](./fullVm.md) -- [Micro Virtual Machine](./vm.md) -- [Nixos MicroVM](./nixos_micro.md) -- [Add a Domain](./add_domain.md) \ No newline at end of file diff --git a/collections/documentation/dashboard/solutions/wordpress.md b/collections/documentation/dashboard/solutions/wordpress.md deleted file mode 100644 index 57f2938..0000000 --- a/collections/documentation/dashboard/solutions/wordpress.md +++ /dev/null @@ -1,147 +0,0 @@ -

WordPress

- -

Table of Contents

- -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Domain Name and IP Address](#domain-name-and-ip-address) -- [DNS Details with Custom Domain](#dns-details-with-custom-domain) - - [DNS Record with Public IPv4](#dns-record-with-public-ipv4) - - [DNS Record with Gateway](#dns-record-with-gateway) - - [DNS Propagation](#dns-propagation) -- [Deployment Process](#deployment-process) -- [Access WordPress](#access-wordpress) - - [WordPress Instance Website](#wordpress-instance-website) - - [WordPress Instance Admin Page](#wordpress-instance-admin-page) -- [WordPress Instance Credentials](#wordpress-instance-credentials) -- [Questions and Feedback](#questions-and-feedback) - -*** - -# Introduction - -[WordPress](https://wordpress.org/) is the most popular CMS on the market, powering 65.2% of websites whose CMS we know. That translates to 42.4% of all websites – nearly half of the internet. It is a popular option for those who want to build a website or a blog. - -# Prerequisites - -- Make sure you have a [wallet](../wallet_connector.md) -- From the sidebar click on **Solutions** -- Click on **Wordpress** - -# Domain Name and IP Address - -A domain name is required to use WordPress. You can either use your own, which we'll call a **custom domain**, or you can get a free subdomain from a gateway node. Note that this won't impact the function of your deployment, it's just a matter of preference. If you want to use your own domain, follow the steps for custom domain wherever you see them below. - -Another choice to make before launching your WordPress instance is whether you want to reserve a public IPv4 for the deployment. Note that renting a public IPv4 address is an extra cost. If you do not enable IPv4, the deployment will be provided a gateway IPv4 address. - -If you're not sure and just want the easiest, most affordable option, enable neither public IPv4 nor a custom domain. 
- -# DNS Details with Custom Domain - -In this section, we cover the essential DNS information when deploying a WordPress instance with a custom domain. - -You can skip this section if you did not enable **Custom Domain** in **Domain Name**. - -As a general reference, here is what setting a DNS A record can look like: - -![ ](img/wp11.png) - -This record type indicates the IP address of a given domain. - -## DNS Record with Public IPv4 - -Consider the following if you've enabled **Custom Domain** in **Domain Name** and **Public IPv4** in **Network**. - -After deployment, you will have access to the IPv4 address of the VM you deployed on. You will need to add a **DNS A record** (Host: "@", Value: <VM IPv4 address>) to your domain to access WordPress. - -## DNS Record with Gateway - -Consider the following if you've enabled **Custom domain** in **Domain Name** but did not enable **Public IPv4** in **Network**. - -Before deploying the WordPress instance, you will have access to the gateway IPv4 address. You will need to add a **DNS A record** (Host: "@", Value: <gateway IPv4 address>) to your domain to access WordPress. - -## DNS Propagation - -When setting a DNS A record, it might take time for the DNS to propagate. It is possible that you see the following message when opening the WordPress page: - ->"This site can't be reached. DNS address could not be found. Diagnosing the problem." - -This is normal. You might simply need to wait for the DNS to propagate completely. - -You can check if the DNS records are propagated globally with DNS propagation check services such as [DNS Checker](https://dnschecker.org/). You can use this tool to verify that your domain is properly pointing to either the VM or the gateway IPv4 address. - -# Deployment Process - -In this section, we cover the steps to deploy a WordPress instance on the Playground. 
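Besides the online checkers mentioned above, you can check resolution from your own machine. A minimal sketch, assuming a glibc system with `getent` available (the function name is hypothetical; pass your domain and the VM or gateway IPv4 you expect):

```bash
# Compare the first IPv4 address a name resolves to against an expected one.
check_a_record() {
  domain=$1 expected=$2
  resolved=$(getent ahostsv4 "$domain" | awk '{print $1; exit}')
  [ "$resolved" = "$expected" ]
}

# Sanity check against a name every machine can resolve;
# replace with your domain and IPv4 once the A record is set.
check_a_record localhost 127.0.0.1 && echo "record matches"
```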
- -![Config](./img/solutions_wordpress.png) - -- Enter an instance name or leave the auto-generated instance name -- Enter the admin information or leave the auto-generated information - - **Username**: This will be used as the MySQL DB username and for Wp-admin - - **Password**: This will be used as the MySQL DB password and for Wp-admin - - **Email**: This will be used for Wp-admin -- Select a capacity package: - - **Small**: { cpu: 1, memory: 2 , diskSize: 15 } - - **Medium**: { cpu: 2, memory: 4 , diskSize: 50 } - - **Large**: { cpu: 4, memory: 16 , diskSize: 100 } - - Or choose a **Custom** plan - -- Choose the network - - **Public IPv4** flag gives the virtual machine a Public IPv4 - -- **Dedicated** flag to retrieve only dedicated nodes -- **Certified** flag to retrieve only certified nodes -- Choose the location of the node - - **Country** - - **Farm Name** -- Choose the node to deploy the WordPress instance on -- **Custom Domain** flag lets the user use a custom domain -- Choose a gateway node to deploy your WordPress instance on - - If you've enabled IPv4, you do not need to choose a gateway node - -# Access WordPress - -In the section **WordPress Instances**, you can see a list of all of your deployed instances: - -![ ](./img/wp2.png) - -You can click on **Show details** under **Actions** for more details about the WordPress deployment. - -![ ](img/wp8.png) - -![ ](img/wp3.png) - -For more detailed information, you can switch to the **Json** tab. - -![ ](img/wp4.png) - -## WordPress Instance Website - -Click on **Visit** under **Actions** to go to the homepage of your WordPress instance. - -![ ](img/wp10.png) - -![ ](img/wp5.png) - -## WordPress Instance Admin Page - -Click on **Admin Panel** to go to the WordPress admin page (**wp-admin**) of your WordPress instance. - -![ ](img/wp9.png) - -Enter the **Username** and the **Password** that you provided in the **config** section to log into the admin panel. 
- -![ ](img/wp6.png) - -![ ](img/wp7.png) - -# WordPress Instance Credentials - -At any time, you can find the credentials of your WordPress instance by clicking on the **Show details** button under **Actions**. - -![ ](img/wp8.png) - -# Questions and Feedback - -If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. diff --git a/collections/documentation/dashboard/tfchain/tf_dao.md b/collections/documentation/dashboard/tfchain/tf_dao.md deleted file mode 100644 index 648280b..0000000 --- a/collections/documentation/dashboard/tfchain/tf_dao.md +++ /dev/null @@ -1,41 +0,0 @@ -

# DAO Voting

- -The TFChain DAO (i.e. Decentralized Autonomous Organization) feature integrates decentralized governance capabilities into the ThreeFold Dashboard. It enables community members to participate in decision-making processes and to contribute to the evolution of the ThreeFold ecosystem. Through the TFChain DAO, users can propose, vote on, and implement changes to the network protocols, policies, and operations, fostering a collaborative and inclusive environment. - -

## Table of Contents

- -- [An Introduction to the DAO concept](#an-introduction-to-the-dao-concept) -- [Prerequisites to Vote](#prerequisites-to-vote) -- [How to Vote for a Proposal](#how-to-vote-for-a-proposal) -- [The Goal of the Threefold DAO](#the-goal-of-the-threefold-dao) - -*** - -## An Introduction to the DAO concept - -[A decentralized autonomous organization (DAO)](../../../knowledge_base/about/dao/dao.md) is an entity with no central leadership. Decisions get made from the bottom-up, governed by a community organized around a specific set of rules enforced on a blockchain. - -DAOs are internet-native organizations collectively owned and managed by their members. They have built-in treasuries that are only accessible with the approval of their members. Decisions are made via proposals the group votes on during a specified period. - - - -## Prerequisites to Vote - -Voting for a DAO proposal is very simple. You first need to meet certain requirements to be able to vote. - -- Have a [Threefold farm](../farms/farms.md) -- Have at least one active [3node server](../../farmers/3node_building/3node_building.md) on the farm -- If you created your farm with the Threefold Connect app - - [Import your farm on the Threefold Dashboard](../../threefold_token/storing_tft/tf_connect_app.md#move-farm-from-the-tf-connect-app-to-the-tf-portal-polkadotjs) - - - -## How to Vote for a Proposal - -To vote, you need to log into your Threefold Dashboard account, go to the **TF DAO** section of **TFChain** and vote for an active proposal. Make sure to read the proposal and ask questions on the Threefold Forum proposal post if you have any. - -## The Goal of the Threefold DAO - -The goal of the DAO voting system is to gather the thoughts and will of the Threefold community and build projects that are aligned with the ethos of the project. - -We encourage anyone to share their ideas. Who knows? Your sudden spark of genius might lead to an accepted proposal on the Threefold DAO! 
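The voting prerequisites listed above (a ThreeFold farm with at least one active 3Node on it) can be summarized in a tiny sketch. This is a hypothetical helper for illustration only, not part of any ThreeFold tooling:

```python
# Hypothetical eligibility check mirroring the DAO voting prerequisites:
# the voter needs a ThreeFold farm with at least one active 3Node.
def can_vote(has_farm: bool, active_nodes: int) -> bool:
    return has_farm and active_nodes >= 1
```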
diff --git a/collections/documentation/dashboard/tfchain/tf_minting_reports.md b/collections/documentation/dashboard/tfchain/tf_minting_reports.md deleted file mode 100644 index 158ba04..0000000 --- a/collections/documentation/dashboard/tfchain/tf_minting_reports.md +++ /dev/null @@ -1,5 +0,0 @@ -# TF Minting Reports - -The TFGrid Minting Explorer works by simply entering a receipt hash: you will get a full minting report in return. - -![](../img/Minting.png) \ No newline at end of file diff --git a/collections/documentation/dashboard/tfchain/tf_token_bridge.md b/collections/documentation/dashboard/tfchain/tf_token_bridge.md deleted file mode 100644 index f7391c2..0000000 --- a/collections/documentation/dashboard/tfchain/tf_token_bridge.md +++ /dev/null @@ -1,44 +0,0 @@ -# TF Token Bridge - -Transferring TFT between Stellar and TFChain - -## Usage - -This document explains how you can transfer TFT from TFChain to Stellar and back. - -![bridge page](../img/bridge.png) - -## Prerequisites - -- Stellar wallet - -- Account on TFChain (use the TF Dashboard to create one). - -## Stellar to TFChain - -You can deposit to TFChain using the bridge page on the TF Dashboard. - -A deposit of tokens from the Stellar network onto TFChain needs to happen from a Stellar wallet, like in the ThreeFold Connect App. - -You have 2 options: - -- TFT needs to be sent to the bridge account, specifying in the memo field the twin ID that was generated with the twin creation, e.g. twin_110 (don't forget the twin_ prefix) - Note there is a transaction cost of 1 TFT. - -Or - -- You can scan the QR code - -![bridge](../img/bridge_deposit.png) - -## TFChain to Stellar - -You can bridge back to Stellar using the bridge page on the dashboard; click *Withdraw*: - -After indicating the destination address and the amount to be transferred, click *Send*. - -![withdraw](../img/bridge_withdraw.png) - -A withdrawal fee of 1 TFT will be taken, so make sure you send an amount larger than 1 TFT. 
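The deposit rules above (the twin_ memo format and the 1 TFT fee) can be sketched as follows. These are hypothetical helpers for illustration only, not part of the bridge tooling:

```python
# Hypothetical helpers illustrating the bridge rules described above.

def bridge_memo(twin_id: int) -> str:
    """Memo expected by the bridge: the twin ID with the twin_ prefix, e.g. twin_110."""
    return f"twin_{twin_id}"

def net_amount(amount_tft: float, fee_tft: float = 1.0) -> float:
    """Amount credited after the 1 TFT bridge fee; the transfer must exceed the fee."""
    if amount_tft <= fee_tft:
        raise ValueError("send an amount larger than the 1 TFT fee")
    return amount_tft - fee_tft
```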
-The amount withdrawn from TFChain will be sent to your Stellar wallet. diff --git a/collections/documentation/dashboard/tfchain/tf_token_transfer.md b/collections/documentation/dashboard/tfchain/tf_token_transfer.md deleted file mode 100644 index 733eff6..0000000 --- a/collections/documentation/dashboard/tfchain/tf_token_transfer.md +++ /dev/null @@ -1,26 +0,0 @@ -# TF Token Transfer - -Manage your TFT on TFChain. - -## Transfer TFT between TFChain accounts - -You can transfer TFTs between two accounts that exist on the same chain. - -> Remark: testnet and mainnet both have the same TFTs, but as the 2 chains are different, there is no way to do a direct transfer between accounts on testnet and on mainnet. - -You can transfer TFTs by recipient address or by recipient twin ID, by selecting the corresponding tab. - - -### Transfer by twin ID - -Fill in the recipient twin ID, the amount of tokens to transfer, and click on `Send`. - -![](../img/dashboard_transfer_twin.png) - -### Transfer by address - -Fill in the recipient address, the amount of tokens to transfer, and click on `Send`. - -![](../img/dashboard_transfer_address.png) - -There is no transfer fee, just a signing fee of `0.001` TFT. diff --git a/collections/documentation/dashboard/tfchain/tfchain.md b/collections/documentation/dashboard/tfchain/tfchain.md deleted file mode 100644 index bda6640..0000000 --- a/collections/documentation/dashboard/tfchain/tfchain.md +++ /dev/null @@ -1,20 +0,0 @@ -# TFChain - -Here you will find everything related to the ThreeFold chain. This includes: - -- Detailed account information from the [Your Profile](./your_profile.md) section. - -Information about what a DAO is and how to vote on DAO proposals from the [TF DAO](./tf_dao.md) section. -- Transferring TFTs on different chains from the [TF Token Bridge](./tf_token_bridge.md) section. -- Transferring TFTs on the TFChain from the [TF Token Transfer](./tf_token_transfer.md) section. 
-- Getting minting reports from the [TF Minting Reports](./tf_minting_reports.md) section. - - ![](../img/sidebar_4.png) - -*** -## Table of Contents - -- [Your Profile](./your_profile.md) -- [TF DAO](./tf_dao.md) -- [TF Token Bridge](./tf_token_bridge.md) -- [TF Token Transfer](./tf_token_transfer.md) -- [TF Minting Reports](./tf_minting_reports.md) \ No newline at end of file diff --git a/collections/documentation/dashboard/tfchain/your_profile.md b/collections/documentation/dashboard/tfchain/your_profile.md deleted file mode 100644 index 091cffe..0000000 --- a/collections/documentation/dashboard/tfchain/your_profile.md +++ /dev/null @@ -1,13 +0,0 @@ -

# Twin Management

- -The TF Twin management feature of the ThreeFold Dashboard enables users to create, manage, and monitor their individual digital entities known as **Twins**. A Twin can represent a virtual machine (VM) or a container running on the ThreeFold Grid. With Twin management, users can easily deploy and scale their workloads, allocate resources, and configure networking and storage settings for their Twins. - -![](../img/twin.png) - -The twin details consist of three main items: - -- `ID`: Your unique identifier for your twin on the ThreeFold chain. -- `Address`: Your public address on the ThreeFold chain. -- `Relay`: A relay is a component that facilitates the reliable and secure transfer of messages between different entities within the ThreeFold ecosystem. - -To create a twin, check the [Wallet Connector](../wallet_connector.md) section. \ No newline at end of file diff --git a/collections/documentation/dashboard/tfgrid/grid_status.md b/collections/documentation/dashboard/tfgrid/grid_status.md deleted file mode 100644 index 5969a99..0000000 --- a/collections/documentation/dashboard/tfgrid/grid_status.md +++ /dev/null @@ -1,5 +0,0 @@ -# Grid Status - -Check the status and health of ThreeFold services from [Grid Status](https://status.grid.tf/status/threefold). - -![](../img/grid_health.png) \ No newline at end of file diff --git a/collections/documentation/dashboard/tfgrid/node_monitoring.md b/collections/documentation/dashboard/tfgrid/node_monitoring.md deleted file mode 100644 index a7c8d51..0000000 --- a/collections/documentation/dashboard/tfgrid/node_monitoring.md +++ /dev/null @@ -1,5 +0,0 @@ -# Node Monitoring - -Monitor and check the metrics and status of Zero-OS nodes from [Node Monitoring](https://metrics.grid.tf/d/rYdddlPWkfqwf/zos-host-metrics?orgId=2&refresh=30s). - -![](../img/Monitoring.png) \ No newline at end of file diff --git a/collections/documentation/dashboard/tfgrid/node_statistics.md b/collections/documentation/dashboard/tfgrid/node_statistics.md 
deleted file mode 100644 index 0980105..0000000 --- a/collections/documentation/dashboard/tfgrid/node_statistics.md +++ /dev/null @@ -1,18 +0,0 @@ -# Statistics - -Statistics allows you to see the distribution of 3Nodes all over the world, with information on how many nodes are available and in which country. - -![](../img/statistics.png) - -Here you can see a generic overview of: - -- Number of Nodes -- Number of Dedicated Nodes -- Number of Farms -- Number of Countries -- The capacity (CRU, SRU, HRU, MRU) -- Number of GPUs -- Number of Gateways -- Number of Twins -- The number of public IPs available -- Number of Contracts \ No newline at end of file diff --git a/collections/documentation/dashboard/tfgrid/tfgrid.md b/collections/documentation/dashboard/tfgrid/tfgrid.md deleted file mode 100644 index 2b06d9b..0000000 --- a/collections/documentation/dashboard/tfgrid/tfgrid.md +++ /dev/null @@ -1,17 +0,0 @@ -# TFGrid - -Check and use all things related to the ThreeFold Grid, including: - -- The status of ThreeFold services from the [Grid Status](./grid_status.md) website. - -The statistics of all nodes that are available on the ThreeFold Grid from [Node Statistics](./node_statistics.md). -- The health and status of Zero-OS nodes that are available on the ThreeFold Grid from [Node Monitoring](./node_monitoring.md). 
- - ![](../img/sidebar_1.png) - -*** - -## Table of Contents - -- [Grid Status](./grid_status.md) -- [Node Statistics](./node_statistics.md) -- [Node Monitoring](./node_monitoring.md) \ No newline at end of file diff --git a/collections/documentation/dashboard/toc.md b/collections/documentation/dashboard/toc.md deleted file mode 100644 index 5507842..0000000 --- a/collections/documentation/dashboard/toc.md +++ /dev/null @@ -1,8 +0,0 @@ -## Dashboard TOC - -- [Home](./home.md) -- [Wallet Connector](./wallet_connector.md) -- [CapRover](./caprover.md) -- [Virtual Machine](./vm.md) -- [Funkwhale](./funkwhale.md) -- [Peertube](./peertube.md) diff --git a/collections/documentation/dashboard/vm_presearch.md b/collections/documentation/dashboard/vm_presearch.md deleted file mode 100644 index f438507..0000000 --- a/collections/documentation/dashboard/vm_presearch.md +++ /dev/null @@ -1,51 +0,0 @@ -# Mount a Presearch node on TFGrid3 using a VM - -The fastest way to mount a Presearch node on TFGrid3 is inside a VM. - -Steps: -- Set up a VM, see [here](./vm.md). It is recommended to reserve a fixed IP. You can also try out the planetary network (so reserve a VM without a public IP); as long as the node you select is connected to the internet through an IPv4 address that isn't yet used for a Presearch node, you don't explicitly need to reserve a public IPv4 address. However, the planetary network is still in beta phase and might generate performance issues. -- 1 CPU is enough for a PRE node. As we still need to install Docker on the VM before deploying a PRE node, please choose a memory size of 8192 MB. -- Once your VM is set up, SSH into your machine. - -![](./img/vm_list.png) - -If your VM has a public IP address, you can enter the terminal command -``` -ssh root@185.206.122.162 -``` - -If you didn't reserve a public IPv4 address, you can SSH into your machine using the IPv6 address. People doing this should, however, first set up their identity in the Planetary Network with Yggdrasil. 
See [here](yggdrasil_client) to learn how to do this. - -``` -ssh root@300:aa1b:e91b:720f:ae5f:8991:6df8:1ec9 -``` - -Now you have an empty VM; Docker still needs to be installed. - -Execute the following commands inside your VM: - -``` -apt update ; -apt install sudo ; -sudo apt install apt-transport-https ca-certificates curl software-properties-common ; -curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - ; -sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" ; -``` -Finally, install Docker. The Ubuntu machine does not come with `systemd`. The following does the trick: - -``` -apt-get install docker-ce docker-ce-cli containerd.io -apt-cache madison docker-ce - -dockerd & -``` - -Once Docker is set up, you can run the PRE node commands on your VM: - -``` -docker stop presearch-node ; docker rm presearch-node ; docker stop presearch-auto-updater ; docker rm presearch-auto-updater ; docker run -d --name presearch-auto-updater --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock presearch/auto-updater --cleanup --interval 900 presearch-auto-updater presearch-node ; docker pull presearch/node ; docker run -dt --name presearch-node --restart=unless-stopped -v presearch-node-storage:/app/node -e REGISTRATION_CODE=$YOUR_REGISTRATION_CODE_HERE presearch/node ; docker logs -f presearch-node -``` - -And you're done! - -![ ](./img/weblet_vm_presearch_result.jpg) \ No newline at end of file diff --git a/collections/documentation/dashboard/wallet_connector.md b/collections/documentation/dashboard/wallet_connector.md deleted file mode 100644 index 842339a..0000000 --- a/collections/documentation/dashboard/wallet_connector.md +++ /dev/null @@ -1,42 +0,0 @@ -

# Wallet Connector

- -

## Table of Contents

- -- [Introduction](#introduction) -- [Supported Networks](#supported-networks) -- [Process](#process) - -*** - -## Introduction - -To interact with TFChain, users need to set up a wallet connector. - -## Supported Networks - -Currently, we support four different networks: - -- Dev net, for development purposes - - [https://dashboard.dev.grid.tf](https://dashboard.dev.grid.tf) -- QA net, for internal testing and verifications - - [https://dashboard.qa.grid.tf](https://dashboard.qa.grid.tf) -- Test net, for testing purposes - - [https://dashboard.test.grid.tf](https://dashboard.test.grid.tf) -- Main net, for production purposes - - [https://dashboard.grid.tf](https://dashboard.grid.tf) - -![ ](./img/profile_manager1.png) - -## Process - -Enter the following information to create your new profile. - -![ ](./img/profile_manager2.png) - -- `Mnemonics` are the secret words of your Polkadot account. Click on the **Create Account** button to generate yours. -- `Password` is used to access your account. -- `Confirm Password` - -After you finish typing your credentials, click on **Connect**. Once your profile gets activated, you should find your **Twin ID** and **Address** generated under your **_Mnemonics_** for verification. Also, your **Account Balance** will be available at the top right corner under your profile name. - -![ ](./img/profile_manager3.png) diff --git a/collections/documentation/developers/.collection b/collections/documentation/developers/.collection new file mode 100644 index 0000000..e69de29 diff --git a/collections/faq/.collection b/collections/faq/.collection new file mode 100644 index 0000000..e69de29 diff --git a/collections/faq/faq.md b/collections/faq/faq.md new file mode 100644 index 0000000..7edf150 --- /dev/null +++ b/collections/faq/faq.md @@ -0,0 +1,2490 @@ +

# ThreeFold FAQ

+ +

## Table of Contents

+ +- [GENERAL FAQ](#general-faq) + - [Basic Facts](#basic-facts) + - [What is the the ThreeFold blockchain?](#what-is-the-the-threefold-blockchain) + - [What is the architecture of the ThreeFold Grid in simple terms?](#what-is-the-architecture-of-the-threefold-grid-in-simple-terms) + - [What is the difference between Internet capacity and connectivity? Does ThreeFold replace my Internet service provider (ISP)?](#what-is-the-difference-between-internet-capacity-and-connectivity-does-threefold-replace-my-internet-service-provider-isp) + - [What are the priorities of ThreeFold (the Three P of ThreeFold)? ThreeFold is a Planet first project, what does it mean?](#what-are-the-priorities-of-threefold-the-three-p-of-threefold-threefold-is-a-planet-first-project-what-does-it-mean) + - [I want to help build the new Internet. How can I become a ThreeFold certified 3node partner?](#i-want-to-help-build-the-new-internet-how-can-i-become-a-threefold-certified-3node-partner) + - [How can I create a twin on the TF Grid?](#how-can-i-create-a-twin-on-the-tf-grid) + - [ThreeFold Communication](#threefold-communication) + - [Is there a ThreeFold app for mobile?](#is-there-a-threefold-app-for-mobile) + - [I want to reach the ThreeFold community. What are ThreeFold social links?](#i-want-to-reach-the-threefold-community-what-are-threefold-social-links) + - [Could we reach out someone for publishing research work on ThreeFold?](#could-we-reach-out-someone-for-publishing-research-work-on-threefold) + - [Who can I write to for a proposal? 
Where can I send a proposal email for a new partnership opportunity with ThreeFold?](#who-can-i-write-to-for-a-proposal-where-can-i-send-a-proposal-email-for-a-new-partnership-opportunity-with-threefold) + - [How can I track and follow the progress and development of ThreeFold?](#how-can-i-track-and-follow-the-progress-and-development-of-threefold) + - [Why do some forum posts need to be approved?](#why-do-some-forum-posts-need-to-be-approved) + - [The Technology of ThreeFold](#the-technology-of-threefold) + - [What is a 3Node?](#what-is-a-3node) + - [What is the difference between a 3node and a ThreeFold farm?](#what-is-the-difference-between-a-3node-and-a-threefold-farm) + - [What is Zero-OS from ThreeFold?](#what-is-zero-os-from-threefold) + - [ThreeFold uses Quantum Safe Storage technology, what does it mean?](#threefold-uses-quantum-safe-storage-technology-what-does-it-mean) + - [Quantum Safe File System (QSFS) allows for part of the storage to go down and it can self repair, however it’s still attached to a single VM and a single point of failure. 
Can a QSFS instance be reattached to another VM to recover it?](#quantum-safe-file-system-qsfs-allows-for-part-of-the-storage-to-go-down-and-it-can-self-repair-however-its-still-attached-to-a-single-vm-and-a-single-point-of-failure-can-a-qsfs-instance-be-reattached-to-another-vm-to-recover-it) + - [Where does the ThreeFold Explorer take its data from?](#where-does-the-threefold-explorer-take-its-data-from) + - [How can I use the Gridproxy public API to query information on the TFGrid?](#how-can-i-use-the-gridproxy-public-api-to-query-information-on-the-tfgrid) + - [How can I see the stats of the ThreeFold Grid?](#how-can-i-see-the-stats-of-the-threefold-grid) + - [What is the difference between a seed phrase (mnemonics) and an HEX secret?](#what-is-the-difference-between-a-seed-phrase-mnemonics-and-an-hex-secret) + - [Buying and Transacting TFT](#buying-and-transacting-tft) + - [How long does it take when you use the BSC-Stellar Bridge?](#how-long-does-it-take-when-you-use-the-bsc-stellar-bridge) + - [On my website, users can donate TFT on the Stellar Chain. Is there a way for users on my website to easily track the total sum of TFT donated?](#on-my-website-users-can-donate-tft-on-the-stellar-chain-is-there-a-way-for-users-on-my-website-to-easily-track-the-total-sum-of-tft-donated) + - [TF Connect App, TF Dashboard, GraphQL, Grix Proxy and Polkadot Substrate](#tf-connect-app-tf-dashboard-graphql-grix-proxy-and-polkadot-substrate) + - [Is there a way to create or import another wallet in TF Connect App?](#is-there-a-way-to-create-or-import-another-wallet-in-tf-connect-app) + - [I created a farm on the TF Chain. On the TF Connect App Farmer Migration section, my farm is under Other v3 farms, is this normal?](#i-created-a-farm-on-the-tf-chain-on-the-tf-connect-app-farmer-migration-section-my-farm-is-under-other-v3-farms-is-this-normal) + - [I am trying to access my wallet in the ThreeFold Connect App. It worked fine before, but now I just get a white screen. 
What does it mean and what can I do?](#i-am-trying-to-access-my-wallet-in-the-threefold-connect-app-it-worked-fine-before-but-now-i-just-get-a-white-screen-what-does-it-mean-and-what-can-i-do) + - [When I open the ThreeFold Connect App, I get the error: Error in initialization in Flagsmith. How can I fix this issue?](#when-i-open-the-threefold-connect-app-i-get-the-error-error-in-initialization-in-flagsmith-how-can-i-fix-this-issue) + - [Apart form the ThreeFold Connect App Wallet, how can I check my TFT balance?](#apart-form-the-threefold-connect-app-wallet-how-can-i-check-my-tft-balance) + - [Is it possible to export the transaction history of a wallet to a CSV file?](#is-it-possible-to-export-the-transaction-history-of-a-wallet-to-a-csv-file) + - [How can I use GraphQl to find information on the ThreeFold Grid?](#how-can-i-use-graphql-to-find-information-on-the-threefold-grid) + - [What are the different links to ThreeFold's Graph QL depending on the network?](#what-are-the-different-links-to-threefolds-graph-ql-depending-on-the-network) + - [How can I find 3Nodes with IPv6 addresses?](#how-can-i-find-3nodes-with-ipv6-addresses) + - [How can I use GraphQL to see contracts on my 3Nodes?](#how-can-i-use-graphql-to-see-contracts-on-my-3nodes) + - [How can I use Grid Proxy to find information on the ThreeFold Grid and 3Nodes?](#how-can-i-use-grid-proxy-to-find-information-on-the-threefold-grid-and-3nodes) + - [Who is hosting GraphQL and Grid Proxy on the ThreeFold Grid?](#who-is-hosting-graphql-and-grid-proxy-on-the-threefold-grid) + - [What is the difference between uptime, status and power state?](#what-is-the-difference-between-uptime-status-and-power-state) + - [I do not remember the name (ThreeFold 3bot ID) associated with my seed phrase on the ThreeFold Connect app. 
Can I recover my TF Connect app account with only the seed phrase and not the name (3bot ID) associated with it?](#i-do-not-remember-the-name-threefold-3bot-id-associated-with-my-seed-phrase-on-the-threefold-connect-app-can-i-recover-my-tf-connect-app-account-with-only-the-seed-phrase-and-not-the-name-3bot-id-associated-with-it) +- [USERS FAQ](#users-faq) + - [TF Grid Functionalities](#tf-grid-functionalities) + - [What are the type of storage available on TF Grid?](#what-are-the-type-of-storage-available-on-tf-grid) + - [Deployments on the ThreeFold Grid](#deployments-on-the-threefold-grid) + - [Does the ThreeFold Grid charge the total resources rented or it only charges the resources used during deployment?](#does-the-threefold-grid-charge-the-total-resources-rented-or-it-only-charges-the-resources-used-during-deployment) + - [Do I pay for Internet traffic while deploying workloads on IPv4, IPv6 or Planetary Network?](#do-i-pay-for-internet-traffic-while-deploying-workloads-on-ipv4-ipv6-or-planetary-network) + - [What is the monthly cost for an IPv4 or an IPv6 public address on the ThreeFold Grid?](#what-is-the-monthly-cost-for-an-ipv4-or-an-ipv6-public-address-on-the-threefold-grid) + - [What are the differences between a container, a micro virtual machine and a full virtual machine (VM)?](#what-are-the-differences-between-a-container-a-micro-virtual-machine-and-a-full-virtual-machine-vm) + - [What is a 3Node gateway? How can I configure a 3Node as a gateway node?](#what-is-a-3node-gateway-how-can-i-configure-a-3node-as-a-gateway-node) + - [When connecting remotely with SSH, I get the following error: "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!...". 
What can I do to fix this?](#when-connecting-remotely-with-ssh-i-get-the-following-error-warning-remote-host-identification-has-changed-what-can-i-do-to-fix-this) + - [How can I remove one host from known\_hosts?](#how-can-i-remove-one-host-from-known_hosts) + - [How can I add ThreeFold peers in the Yggdrasil configuration file?](#how-can-i-add-threefold-peers-in-the-yggdrasil-configuration-file) + - [How can I see Yggdrasil/Planetary Network's peers?](#how-can-i-see-yggdrasilplanetary-networks-peers) + - [How can I ping an Yggdrasil IP or IPv6 address?](#how-can-i-ping-an-yggdrasil-ip-or-ipv6-address) + - [Is there a way to test if I am properly connected to the Yggdrasil network (Planetary Network)?](#is-there-a-way-to-test-if-i-am-properly-connected-to-the-yggdrasil-network-planetary-network) + - [How can I change the username of my SSH key?](#how-can-i-change-the-username-of-my-ssh-key) + - [What is ThreeFold's stance on sharded workload? Will ThreeFold embrace and move towards distributed data chunks or stay with the one deployment, one node model?](#what-is-threefolds-stance-on-sharded-workload-will-threefold-embrace-and-move-towards-distributed-data-chunks-or-stay-with-the-one-deployment-one-node-model) + - [Tutorials and Guides](#tutorials-and-guides) + - [What is the minimum amount of TFT to deploy a Presearch node? How can I get a TFT discount when I deploy a Presearch node?](#what-is-the-minimum-amount-of-tft-to-deploy-a-presearch-node-how-can-i-get-a-tft-discount-when-i-deploy-a-presearch-node) + - [Can I use the same seed phrase for my mainnet and testnest accounts? 
How can I transfer my TFT from mainnet to testnet or vice versa?](#can-i-use-the-same-seed-phrase-for-my-mainnet-and-testnest-accounts-how-can-i-transfer-my-tft-from-mainnet-to-testnet-or-vice-versa) + - [Do I need a full or micro virtual machine (VM) when I run QSFS, quantum safe file system, on the ThreeFold Grid?](#do-i-need-a-full-or-micro-virtual-machine-vm-when-i-run-qsfs-quantum-safe-file-system-on-the-threefold-grid) + - [Linux, Github, Containers and More](#linux-github-containers-and-more) + - [Where should I start to learn more about Linux?](#where-should-i-start-to-learn-more-about-linux) + - [How can I clone a single branch of a repository on Github?](#how-can-i-clone-a-single-branch-of-a-repository-on-github) + - [Grace Period (Status Paused)](#grace-period-status-paused) + - [The status of my deployment is paused, in grace period, how can I resume the deployment?](#the-status-of-my-deployment-is-paused-in-grace-period-how-can-i-resume-the-deployment) + - [Once I refund my TF wallet, how long does it take for the deployment to resume from grace period?](#once-i-refund-my-tf-wallet-how-long-does-it-take-for-the-deployment-to-resume-from-grace-period) + - [Can I SSH into my deployments when they are in grace period (i.e. when their status is paused)?](#can-i-ssh-into-my-deployments-when-they-are-in-grace-period-ie-when-their-status-is-paused) + - [How long is the grace period (i.e. when the deployment status is paused)?](#how-long-is-the-grace-period-ie-when-the-deployment-status-is-paused) + - [Terraform](#terraform) + - [Working with Terraform, I get the following error: failed to create contract: ContractIsNotUnique. Is there a fix to this issue?](#working-with-terraform-i-get-the-following-error-failed-to-create-contract-contractisnotunique-is-there-a-fix-to-this-issue) + - [I am working with Terraform. 
What do I have to write in the file env.tfvars?](#i-am-working-with-terraform-what-do-i-have-to-write-in-the-file-envtfvars) + - [I am working with Terraform and I am using the example in Terraform Provider Grid. How can I use the example main.tf file with environment variables? Why am I getting the message Error: account not found, when deploying with Terraform?](#i-am-working-with-terraform-and-i-am-using-the-example-in-terraform-provider-grid-how-can-i-use-the-example-maintf-file-with-environment-variables-why-am-i-getting-the-message-error-account-not-found-when-deploying-with-terraform) + - [Users Troubleshooting and Error Messages](#users-troubleshooting-and-error-messages) + - [When deploying a virtual machine (VM) on the ThreeFold Grid, I get the following message after trying a full system update and upgrade: GRUB failed to install to the following devices... Is there a fix to this issue?](#when-deploying-a-virtual-machine-vm-on-the-threefold-grid-i-get-the-following-message-after-trying-a-full-system-update-and-upgrade-grub-failed-to-install-to-the-following-devices-is-there-a-fix-to-this-issue) + - [While deploying on the TF Dashboard, I get the following error :"global workload with the same name exists: conflict". What can I do to fix this issue?](#while-deploying-on-the-tf-dashboard-i-get-the-following-error-global-workload-with-the-same-name-exists-conflict-what-can-i-do-to-fix-this-issue) + - [ThreeFold Connect App](#threefold-connect-app) + - [TF Connect App is now asking for a 4-digit password (PIN). I don't remember it as I usually use touch or face ID to unlock the app. 
What can I do?](#tf-connect-app-is-now-asking-for-a-4-digit-password-pin-i-dont-remember-it-as-i-usually-use-touch-or-face-id-to-unlock-the-app-what-can-i-do) + - [Is there a way to have more than one wallet in TF Connect App?](#is-there-a-way-to-have-more-than-one-wallet-in-tf-connect-app) + - [What is the difference between 10.x.y.z and 192.168.x.y addresses?](#what-is-the-difference-between-10xyz-and-192168xy-addresses) +- [DEVELOPERS FAQ](#developers-faq) + - [General Information for Developer](#general-information-for-developer) + - [Does Zero-OS assign private IPv4 addresses to workloads?](#does-zero-os-assign-private-ipv4-addresses-to-workloads) + - [Why does each 3Node server have two IP addresses associated with it?](#why-does-each-3node-server-have-two-ip-addresses-associated-with-it) + - [Can Zero-OS assign public IPv4 or IPv6 addresses to workloads?](#can-zero-os-assign-public-ipv4-or-ipv6-addresses-to-workloads) + - [What does MAC mean when it comes to networking?](#what-does-mac-mean-when-it-comes-to-networking) + - [I am a developer looking for a way to automatically convert BSC tokens into TFT. Could you please share tips on how to swap regular tokens into TFT, on backend, without and browser extensions, via any platform API?](#i-am-a-developer-looking-for-a-way-to-automatically-convert-bsc-tokens-into-tft-could-you-please-share-tips-on-how-to-swap-regular-tokens-into-tft-on-backend-without-and-browser-extensions-via-any-platform-api) + - [Test Net](#test-net) + - [Can I get some free TFT to test on Test Net](#can-i-get-some-free-tft-to-test-on-test-net) +- [FARMERS FAQ](#farmers-faq) + - [TFT Farming Basics](#tft-farming-basics) + - [My Titan is v2.1 and the ThreeFold Grid is v3., what is the distinction?](#my-titan-is-v21-and-the-threefold-grid-is-v3-what-is-the-distinction) + - [When will I receive the farming rewards for my 3Nodes?](#when-will-i-receive-the-farming-rewards-for-my-3nodes) + - [What is the TFT minting process? 
Is it fully automated?](#what-is-the-tft-minting-process-is-it-fully-automated) + - [What should I do if I did not receive my farming rewards this month?](#what-should-i-do-if-i-did-not-receive-my-farming-rewards-this-month) + - [What is the TFT entry price of my 3Node farming rewards?](#what-is-the-tft-entry-price-of-my-3node-farming-rewards) + - [What is the necessary uptime for a 3Node per period of one month?](#what-is-the-necessary-uptime-for-a-3node-per-period-of-one-month) + - [How can I check the uptime of my 3Nodes? Is there a tool to check the uptime of 3Node servers on the ThreeFold Grid?](#how-can-i-check-the-uptime-of-my-3nodes-is-there-a-tool-to-check-the-uptime-of-3node-servers-on-the-threefold-grid) + - [I set up a 3Node in the middle of the month, does it affect uptime requirements and rewards?](#i-set-up-a-3node-in-the-middle-of-the-month-does-it-affect-uptime-requirements-and-rewards) + - [What is the difference between a certified and a non-certified 3Node?](#what-is-the-difference-between-a-certified-and-a-non-certified-3node) + - [What are the different certifications available for 3Node servers and farms? 
What are the Gold and Silver certifications?](#what-are-the-different-certifications-available-for-3node-servers-and-farms-what-are-the-gold-and-silver-certifications) + - [What is the difference between V2 and V3 minting?](#what-is-the-difference-between-v2-and-v3-minting) + - [What is the TFT minting address on Stellar Chain?](#what-is-the-tft-minting-address-on-stellar-chain) + - [Can Titans and DIY 3Nodes share the same farm?](#can-titans-and-diy-3nodes-share-the-same-farm) + - [Do I need one farm for each 3Node?](#do-i-need-one-farm-for-each-3node) + - [Can a single farm be composed of many 3Nodes?](#can-a-single-farm-be-composed-of-many-3nodes) + - [Can a single 3Node be on more than one farm?](#can-a-single-3node-be-on-more-than-one-farm) + - [Do I need one reward address per 3Node?](#do-i-need-one-reward-address-per-3node) + - [How can I access the expert bootstrap mode for Zero-OS?](#how-can-i-access-the-expert-bootstrap-mode-for-zero-os) + - [When it comes to the Zero-OS bootstrap image, can I simply duplicate the first image I burnt when I build another 3Node?](#when-it-comes-to-the-zero-os-bootstrap-image-can-i-simply-duplicate-the-first-image-i-burnt-when-i-build-another-3node) + - [If a node is unused for certain time (e.g. many months offline), will it be erased by the Grid?](#if-a-node-is-unused-for-certain-time-eg-many-months-offline-will-it-be-erased-by-the-grid) + - [Can a farm be erased from TF Grid?](#can-a-farm-be-erased-from-tf-grid) + - [On the ThreeFold Connect App, it says I need to migrate my Titan farm from V2 to V3. What do I have to do? 
How long does this take?](#on-the-threefold-connect-app-it-says-i-need-to-migrate-my-titan-farm-from-v2-to-v3-what-do-i-have-to-do-how-long-does-this-take) + - [How can I migrate my DIY farm from V2 to V3?](#how-can-i-migrate-my-diy-farm-from-v2-to-v3) + - [What does the pricing policy ID of a farm represent?](#what-does-the-pricing-policy-id-of-a-farm-represent) + - [What is the difference between TiB and TB? Why doesn't the TF Explorer shows the same storage space as my disk?](#what-is-the-difference-between-tib-and-tb-why-doesnt-the-tf-explorer-shows-the-same-storage-space-as-my-disk) + - [Farming Rewards and Related Notions](#farming-rewards-and-related-notions) + - [What are the rewards of farming? Can I get more rewards when my 3Node is being utilized?](#what-are-the-rewards-of-farming-can-i-get-more-rewards-when-my-3node-is-being-utilized) + - [How can I know the potential farming rewards for Grid Utilization?](#how-can-i-know-the-potential-farming-rewards-for-grid-utilization) + - [What is the easiest way to farm ThreeFold tokens (TFT)?](#what-is-the-easiest-way-to-farm-threefold-tokens-tft) + - [When do I receive my rewards?](#when-do-i-receive-my-rewards) + - [Do farming rewards take into account the type of RAM, SSD, HDD and CPU of the 3Node server?](#do-farming-rewards-take-into-account-the-type-of-ram-ssd-hdd-and-cpu-of-the-3node-server) + - [Can I send my farming rewards directly to a crypto exchange?](#can-i-send-my-farming-rewards-directly-to-a-crypto-exchange) + - [Do I need collateral to farm ThreeFold tokens?](#do-i-need-collateral-to-farm-threefold-tokens) + - [Can I add external drives to the 3Nodes to increase rewards and resources available to the ThreeFold Grid?](#can-i-add-external-drives-to-the-3nodes-to-increase-rewards-and-resources-available-to-the-threefold-grid) + - [Do I have access to the TFT rewards I receive each month when farming?](#do-i-have-access-to-the-tft-rewards-i-receive-each-month-when-farming) + - [What is TFTA? 
Is it still used?](#what-is-tfta-is-it-still-used) + - [Is there a way to certify a DIY 3Node? How can I become a 3Node certified vendor and builder?](#is-there-a-way-to-certify-a-diy-3node-how-can-i-become-a-3node-certified-vendor-and-builder) + - [Does it make sense to make my farm a company?](#does-it-make-sense-to-make-my-farm-a-company) + - [What is the difference between uptime and downtime, and between online and offline, when it comes to 3Nodes?](#what-is-the-difference-between-uptime-and-downtime-and-between-online-and-offline-when-it-comes-to-3nodes) + - [My 3Node server grid utilization is low, is it normal?](#my-3node-server-grid-utilization-is-low-is-it-normal) + - [3Node Farming Requirements](#3node-farming-requirements) + - [Can I host more than one 3Node server at my house?](#can-i-host-more-than-one-3node-server-at-my-house) + - [Is Wifi supported? Can I farm via Wifi instead of an Ethernet cable?](#is-wifi-supported-can-i-farm-via-wifi-instead-of-an-ethernet-cable) + - [I have 2 routers with each a different Internet service provider. I disconnected the ethernet cable from one router and connected it to the other router. Do I need to reboot the 3Node?](#i-have-2-routers-with-each-a-different-internet-service-provider-i-disconnected-the-ethernet-cable-from-one-router-and-connected-it-to-the-other-router-do-i-need-to-reboot-the-3node) + - [Do I need any specific port configuration when booting a 3Node?](#do-i-need-any-specific-port-configuration-when-booting-a-3node) + - [How much electricity does a 3Node use?](#how-much-electricity-does-a-3node-use) + - [Has anyone run stress tests to know the power consumption at heavy load of certain 3Nodes?](#has-anyone-run-stress-tests-to-know-the-power-consumption-at-heavy-load-of-certain-3nodes) + - [Can the Titan 3Node be run on PoE? 
(Power Over Ethernet)](#can-the-titan-3node-be-run-on-poe-power-over-ethernet) + - [What is the relationship between the 3Node's resources and bandwidth?](#what-is-the-relationship-between-the-3nodes-resources-and-bandwidth) + - [What is the bandwidth needed when it comes to running 3Nodes on the Grid?](#what-is-the-bandwidth-needed-when-it-comes-to-running-3nodes-on-the-grid) + - [Can I run Zero-OS on a virtual machine?](#can-i-run-zero-os-on-a-virtual-machine) + - [Is it possible to build a DIY 3Node with VMWare VM ?](#is-it-possible-to-build-a-diy-3node-with-vmware-vm-) + - [Can I run a 3Node on another operating system, like Windows, MAC or Linux?](#can-i-run-a-3node-on-another-operating-system-like-windows-mac-or-linux) + - [What is the minimum SSD requirement for a 3Node server to farm ThreeFold tokens (TFT)?](#what-is-the-minimum-ssd-requirement-for-a-3node-server-to-farm-threefold-tokens-tft) + - [Is it possible to have a 3Node server running on only HDD disks?](#is-it-possible-to-have-a-3node-server-running-on-only-hdd-disks) + - [Building a 3Node - Steps and Details](#building-a-3node---steps-and-details) + - [How can I be sure that I properly wiped my disks?](#how-can-i-be-sure-that-i-properly-wiped-my-disks) + - [If I wipe my disk to create a new node ID, will I lose my farming rewards during the month?](#if-i-wipe-my-disk-to-create-a-new-node-id-will-i-lose-my-farming-rewards-during-the-month) + - [My disks have issues with Zero-OS and my 3Nodes. How can I do a factory reset of the disks?](#my-disks-have-issues-with-zero-os-and-my-3nodes-how-can-i-do-a-factory-reset-of-the-disks) + - [Before doing a bootstrap image, I need to format my USB key. 
How can I format my USB key?](#before-doing-a-bootstrap-image-i-need-to-format-my-usb-key-how-can-i-format-my-usb-key) + - [What do you use to burn (or to load) the Zero-OS bootstrap image onto a USB stick?](#what-do-you-use-to-burn-or-to-load-the-zero-os-bootstrap-image-onto-a-usb-stick) + - [Should I do a UEFI image or a BIOS image to bootstrap Zero-OS?](#should-i-do-a-uefi-image-or-a-bios-image-to-bootstrap-zero-os) + - [How do I set the BIOS or UEFI of my 3Node?](#how-do-i-set-the-bios-or-uefi-of-my-3node) + - [For my 3Node server, do I need to enable virtualization in BIOS or UEFI?](#for-my-3node-server-do-i-need-to-enable-virtualization-in-bios-or-uefi) + - [How can I boot a 3Node server with a Zero-OS bootstrap image?](#how-can-i-boot-a-3node-server-with-a-zero-os-bootstrap-image) + - [The first time I booted my 3Node server, it says that the node is not registered yet. What can I do?](#the-first-time-i-booted-my-3node-server-it-says-that-the-node-is-not-registered-yet-what-can-i-do) + - [The first time I boot my 3Node, the node gets registered but it says cache disk : no ssd. What can I do?](#the-first-time-i-boot-my-3node-the-node-gets-registered-but-it-says-cache-disk--no-ssd-what-can-i-do) + - [The first time I boot my 3 node, the node gets registered and it says cache disk : OK, but the table System Used Capacity is empty. What can I do?](#the-first-time-i-boot-my-3-node-the-node-gets-registered-and-it-says-cache-disk--ok-but-the-table-system-used-capacity-is-empty-what-can-i-do) + - [I have a relatively old server (e.g. Dell R710 or R620, Z840). I have trouble booting Zero-OS. What could I do?](#i-have-a-relatively-old-server-eg-dell-r710-or-r620-z840-i-have-trouble-booting-zero-os-what-could-i-do) + - [I connected a SATA SSD to a CD-DVD optical drive adaptor. My system does not recognize the disk. 
What can I do?](#i-connected-a-sata-ssd-to-a-cd-dvd-optical-drive-adaptor-my-system-does-not-recognize-the-disk-what-can-i-do) + - [Can someone explain what should I put in the Public IP part of my farm? Should I just insert my Public IP and Gateway (given by my ISP)?](#can-someone-explain-what-should-i-put-in-the-public-ip-part-of-my-farm-should-i-just-insert-my-public-ip-and-gateway-given-by-my-isp) + - [Farming Optimization](#farming-optimization) + - [What is the difference between a ThreeFold 3Node and a ThreeFold farm? What is the difference between the farm ID and the node ID?](#what-is-the-difference-between-a-threefold-3node-and-a-threefold-farm-what-is-the-difference-between-the-farm-id-and-the-node-id) + - [How can I know how many GB of SSD and RAM do I need?](#how-can-i-know-how-many-gb-of-ssd-and-ram-do-i-need) + - [What is the optimal ratio of virtual cores (vcores or threads), SSD storage and RAM memory? What is the best optimization scenario for a 3Node, in terms of ThreeFold tokens (TFT) farming rewards?](#what-is-the-optimal-ratio-of-virtual-cores-vcores-or-threads-ssd-storage-and-ram-memory-what-is-the-best-optimization-scenario-for-a-3node-in-terms-of-threefold-tokens-tft-farming-rewards) + - [What does TBW mean? What is a good TBW level for a SSD disk?](#what-does-tbw-mean-what-is-a-good-tbw-level-for-a-ssd-disk) + - [Are SATA and SAS drives interchangeable?](#are-sata-and-sas-drives-interchangeable) + - [What is the speed difference between SAS and SATA disks?](#what-is-the-speed-difference-between-sas-and-sata-disks) + - [Is it possible to do a graceful shutdown to a 3Node server? 
How can you shutdown or power off a 3Node server?](#is-it-possible-to-do-a-graceful-shutdown-to-a-3node-server-how-can-you-shutdown-or-power-off-a-3node-server) + - [Is it possible to have direct access to Zero-OS's core to force a reboot?](#is-it-possible-to-have-direct-access-to-zero-oss-core-to-force-a-reboot) + - [Do I need some port forwarding in my router for each 3Node server?](#do-i-need-some-port-forwarding-in-my-router-for-each-3node-server) + - [Are there ways to reduce 3Node servers' noises?](#are-there-ways-to-reduce-3node-servers-noises) + - [I built a 3Node out of old hardware. Is it possible that my BIOS or UEFI has improper time and date set as factory default?](#i-built-a-3node-out-of-old-hardware-is-it-possible-that-my-bios-or-uefi-has-improper-time-and-date-set-as-factory-default) + - [I have rack servers in my ThreeFold farm. Can I set rack servers vertically?](#i-have-rack-servers-in-my-threefold-farm-can-i-set-rack-servers-vertically) + - [Farming and Maintenance](#farming-and-maintenance) + - [How can I check if there is utilization on my 3Nodes?](#how-can-i-check-if-there-is-utilization-on-my-3nodes) + - [Do I need the Zero-OS bootstrap image drive (USB or CD-DVD) when I reboot, or can I boot Zero-OS from the 3Node main hard drive?](#do-i-need-the-zero-os-bootstrap-image-drive-usb-or-cd-dvd-when-i-reboot-or-can-i-boot-zero-os-from-the-3node-main-hard-drive) + - [It's written that my node is using 100% of HRU. 
What does it mean?](#its-written-that-my-node-is-using-100-of-hru-what-does-it-mean) + - [On the ThreeFold Node Finder, I only see half of the virtual cores or threads my 3Node has, what can I do?](#on-the-threefold-node-finder-i-only-see-half-of-the-virtual-cores-or-threads-my-3node-has-what-can-i-do) + - [Why are the 3Nodes' resources different on the ThreeFold Node Finder and the ThreeFold Dashboard?](#why-are-the-3nodes-resources-different-on-the-threefold-node-finder-and-the-threefold-dashboard) + - [How can I test the health of my disks?](#how-can-i-test-the-health-of-my-disks) + - [How can I transfer my 3Node from one farm to another?](#how-can-i-transfer-my-3node-from-one-farm-to-another) + - [What do CRU, MRU, HRU and SRU mean on the ThreeFold Node Finder?](#what-do-cru-mru-hru-and-sru-mean-on-the-threefold-node-finder) + - [I have more than one ThreeFold 3Node farm, but I want all my 3Nodes on only one farm. How can I put all my 3Nodes on one farm? How can I change the farm ID of my 3Node?](#i-have-more-than-one-threefold-3node-farm-but-i-want-all-my-3nodes-on-only-one-farm-how-can-i-put-all-my-3nodes-on-one-farm-how-can-i-change-the-farm-id-of-my-3node) + - [How can I know if my 3Node is online on the Grid?](#how-can-i-know-if-my-3node-is-online-on-the-grid) + - [I booted my 3Node and the monitor says it's online and connected to the Grid. But the ThreeFold Node Finder says it is offline? 
What can I do?](#i-booted-my-3node-and-the-monitor-says-its-online-and-connected-to-the-grid-but-the-threefold-node-finder-says-it-is-offline-what-can-i-do) + - [My 3Node does show on the ThreeFold Node Finder, but not on the ThreeFold Dashboard, what can I do?](#my-3node-does-show-on-the-threefold-node-finder-but-not-on-the-threefold-dashboard-what-can-i-do) + - [If I upgrade my 3Node, will it increase my rewards?](#if-i-upgrade-my-3node-will-it-increase-my-rewards) + - [I booted my 3Node for the first time at the beginning of the month, then I did some upgrade or downgrade, will the ThreeFold Grid recognize the new hardware? Will it still be the same 3Node ID?](#i-booted-my-3node-for-the-first-time-at-the-beginning-of-the-month-then-i-did-some-upgrade-or-downgrade-will-the-threefold-grid-recognize-the-new-hardware-will-it-still-be-the-same-3node-id) + - [Is it possible to ask the 3Node to refetch the node information on the monitor?](#is-it-possible-to-ask-the-3node-to-refetch-the-node-information-on-the-monitor) + - [When does Zero-OS detect the capacity of a 3Node?](#when-does-zero-os-detect-the-capacity-of-a-3node) + - [Where is the 3Node ID stored?](#where-is-the-3node-id-stored) + - [Is there a way to backup my node ID in order to restore a 3Node if the disk with the node ID gets corrupted or breaks down?](#is-there-a-way-to-backup-my-node-id-in-order-to-restore-a-3node-if-the-disk-with-the-node-id-gets-corrupted-or-breaks-down) + - [If I upgrade my 3Node, does it change the node ID?](#if-i-upgrade-my-3node-does-it-change-the-node-id) + - [Does it make sense to recreate my node when the price drops?](#does-it-make-sense-to-recreate-my-node-when-the-price-drops) + - [My 3Node lost power momentarily and I had to power it back on manually. 
Is there a better way to proceed?](#my-3node-lost-power-momentarily-and-i-had-to-power-it-back-on-manually-is-there-a-better-way-to-proceed) + - [Do I need to change the battery BIOS?](#do-i-need-to-change-the-battery-bios) + - [Do I need to enable UEFI Network Stack?](#do-i-need-to-enable-uefi-network-stack) + - [I want redundancy of power for my 3 nodes. I have two PSU on my Dell server. What can I do?](#i-want-redundancy-of-power-for-my-3-nodes-i-have-two-psu-on-my-dell-server-what-can-i-do) + - [Why isn't there support for RAID? Does Zero-OS work with RAID?](#why-isnt-there-support-for-raid-does-zero-os-work-with-raid) + - [Is there a way to bypass RAID in order for Zero-OS to have bare metals on the system? (No RAID controller in between storage and the Grid.)](#is-there-a-way-to-bypass-raid-in-order-for-zero-os-to-have-bare-metals-on-the-system-no-raid-controller-in-between-storage-and-the-grid) + - [I have a 3Node rack server. Is it possible to use a M.2 to SATA adapter in order to put the M.2 SATA disk in the HDD bay (onboard storage)?](#i-have-a-3node-rack-server-is-it-possible-to-use-a-m2-to-sata-adapter-in-order-to-put-the-m2-sata-disk-in-the-hdd-bay-onboard-storage) + - [My 3Node uses only PCIe adapters and SSD NVME disks. Do I need the RAID controller on?](#my-3node-uses-only-pcie-adapters-and-ssd-nvme-disks-do-i-need-the-raid-controller-on) + - [Can I change the name of my farm on polkadot.js?](#can-i-change-the-name-of-my-farm-on-polkadotjs) + - [How can I delete a farm on polkadot.js?](#how-can-i-delete-a-farm-on-polkadotjs) + - [I try to delete a node on the TF Dashboard, but it doesn’t work. Is there any other way to proceed that could work?](#i-try-to-delete-a-node-on-the-tf-dashboard-but-it-doesnt-work-is-there-any-other-way-to-proceed-that-could-work) + - [My 3Node has 2 ethernet ports in the back, with one written AMT above, what does it mean? 
Can I use this port to connect my 3Node to the ThreeFold Grid?](#my-3node-has-2-ethernet-ports-in-the-back-with-one-written-amt-above-what-does-it-mean-can-i-use-this-port-to-connect-my-3node-to-the-threefold-grid) + - [My 3Node is based on a the hardware Z600, Z620 or Z820, can I run it headless or without a GPU?](#my-3node-is-based-on-a-the-hardware-z600-z620-or-z820-can-i-run-it-headless-or-without-a-gpu) + - [Is it possible to add high-level GPU on rack servers to farm more TFT?](#is-it-possible-to-add-high-level-gpu-on-rack-servers-to-farm-more-tft) + - [If I change farm, will my node IDs change on my 3Node servers?](#if-i-change-farm-will-my-node-ids-change-on-my-3node-servers) + - [Troubleshooting and Error Messages](#troubleshooting-and-error-messages) + - [Is it possible to access the Error Screen or Log Screen?](#is-it-possible-to-access-the-error-screen-or-log-screen) + - [What does it mean when I see, during the 3Node boot, the message: error = context deadline exceeded?](#what-does-it-mean-when-i-see-during-the-3node-boot-the-message-error--context-deadline-exceeded) + - [How can I fix the error messages: "context deadline exceeded" accompanied with "node is behind acceptable delay with timestamp"?](#how-can-i-fix-the-error-messages-context-deadline-exceeded-accompanied-with-node-is-behind-acceptable-delay-with-timestamp) + - [I try to boot a 3Node, but I get the error: "No Route to Host on Linux". What does it mean?](#i-try-to-boot-a-3node-but-i-get-the-error-no-route-to-host-on-linux-what-does-it-mean) + - [How can I fix the error: "Network configuration succeed but Zero-OS kernel could not be downloaded" when booting a 3Node?](#how-can-i-fix-the-error-network-configuration-succeed-but-zero-os-kernel-could-not-be-downloaded-when-booting-a-3node) + - [Using SAS disks, I get the error; "No ssd found, failed to register". 
What can I do to fix this?](#using-sas-disks-i-get-the-error-no-ssd-found-failed-to-register-what-can-i-do-to-fix-this) + - [When booting a 3Node, how to fix the error: "no disks: registration failed"?](#when-booting-a-3node-how-to-fix-the-error-no-disks-registration-failed) + - [My SSD is sometimes detected as HDD by Zero-OS when there is a reboot. Is there a fix or a way to test the SSD disk?](#my-ssd-is-sometimes-detected-as-hdd-by-zero-os-when-there-is-a-reboot-is-there-a-fix-or-a-way-to-test-the-ssd-disk) + - [When booting a 3Node, I get the message: failed to register node: failed to create node: failed to submit extrinsic: Invalid Transaction: registration failed. What could fix this?](#when-booting-a-3node-i-get-the-message-failed-to-register-node-failed-to-create-node-failed-to-submit-extrinsic-invalid-transaction-registration-failed-what-could-fix-this) + - [I try to boot a 3Node, but I get the message no route with default gateway found. What does it mean?](#i-try-to-boot-a-3node-but-i-get-the-message-no-route-with-default-gateway-found-what-does-it-mean) + - [I have trouble connecting the 3Node to the Grid with a 10GB NIC card. What can I do?](#i-have-trouble-connecting-the-3node-to-the-grid-with-a-10gb-nic-card-what-can-i-do) + - [I switch the ethernet cable to a different port when my 3Node was running. Internet connection is lost. What can I do?](#i-switch-the-ethernet-cable-to-a-different-port-when-my-3node-was-running-internet-connection-is-lost-what-can-i-do) + - [I get the error Certificate is not yet valid when booting my 3Node server, what can I do?](#i--get-the-error-certificate-is-not-yet-valid-when-booting-my-3node-server-what-can-i-do) + - [When running wipefs to wipe my disks on Linux, I get either of the following errors: "syntax error near unexpected token" or "Probing Initialized Failed". 
Is there a fix?](#when-running-wipefs-to-wipe-my-disks-on-linux-i-get-either-of-the-following-errors-syntax-error-near-unexpected-token-or-probing-initialized-failed-is-there-a-fix) + - [I did a format on my SSD disk, but Zero-OS still does not recognize them. What's wrong?](#i-did-a-format-on-my-ssd-disk-but-zero-os-still-does-not-recognize-them-whats-wrong) + - [I have a Dell Rx10 server (R610, 710, 910). When I boot Zero-OS I get the message Probing EDD and the 3Node doesn't boot from there. What can I do?](#i-have-a-dell-rx10-server-r610-710-910-when-i-boot-zero-os-i-get-the-message-probing-edd-and-the-3node-doesnt-boot-from-there-what-can-i-do) + - [My 3Node doesn't boot properly without a monitor plugged in. What can I do?](#my-3node-doesnt-boot-properly-without-a-monitor-plugged-in-what-can-i-do) + - [My 3Node is running on the Grid, but when I plugged in the monitor, it states: Disabling IR #16. Is there a problem?](#my-3node-is-running-on-the-grid-but-when-i-plugged-in-the-monitor-it-states-disabling-ir-16-is-there-a-problem) + - [My 3Node won't boot without disabling the Secure Boot option, is it safe?](#my-3node-wont-boot-without-disabling-the-secure-boot-option-is-it-safe) + - [When I tried to boot my 3Node, at some point the screen went black, with or without a blinking hyphen or dash. What could cause this and what could I do to resolve the issue?](#when-i-tried-to-boot-my-3node-at-some-point-the-screen-went-black-with-or-without-a-blinking-hyphen-or-dash-what-could-cause-this-and-what-could-i-do-to-resolve-the-issue) + - [My 3Nodes go offline after a modem reboot. 
Is there a way to prevent this?](#my-3nodes-go-offline-after-a-modem-reboot-is-there-a-way-to-prevent-this) + - [When I boot my 3Node, it reaches the Welcome to Zero-OS window, but it doesn't boot properly and there's an error message: failed to load object : type substrate..., what can I do?](#when-i-boot-my-3node-it-reaches-the-welcome-to-zero-os-window-but-it-doesnt-boot-properly-and-theres-an-error-message-failed-to-load-object--type-substrate-what-can-i-do) + - [When I try to access iDRAC on a web browswer, even with protected mode off, I get the error The webpage cannot be found, what can I do?](#when-i-try-to-access-idrac-on-a-web-browswer-even-with-protected-mode-off-i-get-the-error-the-webpage-cannot-be-found-what-can-i-do) + - [When booting the 3Node, I get the error Network interface detected but autoconfiguration failed. What can I do?](#when-booting-the-3node-i-get-the-error-network-interface-detected-but-autoconfiguration-failed-what-can-i-do) + - [When I boot my Dell server, I get the message: All of the disks from your previous configuration are gone... Press any key to continue or 'C' to load the configuration utility. What can I do?](#when-i-boot-my-dell-server-i-get-the-message-all-of-the-disks-from-your-previous-configuration-are-gone-press-any-key-to-continue-or-c-to-load-the-configuration-utility-what-can-i-do) + - [I have a Dell R620. In Zero-OS, I get the failure message No network card found and then the 3Node reebots after few seconds. The same happens for every LAN input. What can I do?](#i-have-a-dell-r620-in-zero-os-i-get-the-failure-message-no-network-card-found-and-then-the-3node-reebots-after-few-seconds-the-same-happens-for-every-lan-input-what-can-i-do) + - [I am using freeDos to crossflash my raid controller on a Dell server, but I can't see the RAID controller with the Command Info. 
What can I do?](#i-am-using-freedos-to-crossflash-my-raid-controller-on-a-dell-server-but-i-cant-see-the-raid-controller-with-the-command-info-what-can-i-do) + - [Can I use a VGA to HDMI adaptor to connect a TV screen or monitor to the 3Node? I tried to boot a 3Node with a VGA to HDMI adaptor but the boot fails, what can I do?](#can-i-use-a-vga-to-hdmi-adaptor-to-connect-a-tv-screen-or-monitor-to-the-3node-i-tried-to-boot-a-3node-with-a-vga-to-hdmi-adaptor-but-the-boot-fails-what-can-i-do) + - [When I try to boot my 3Node, the fans start spinning fast with a loud noise and the screen is black. What can I do to resolve this?](#when-i-try-to-boot-my-3node-the-fans-start-spinning-fast-with-a-loud-noise-and-the-screen-is-black-what-can-i-do-to-resolve-this) + - [When booting Zero-OS with IPV6 configurations, I get the errors (1) dial tcp: address IPV6-address too many columns in address and (2) no pools matches key: not routable. What can I do to fix this issue?](#when-booting-zero-os-with-ipv6-configurations-i-get-the-errors-1-dial-tcp-address-ipv6-address-too-many-columns-in-address-and-2-no-pools-matches-key-not-routable-what-can-i-do-to-fix-this-issue) + - [When booting a 3Node, Zero-OS downloads fine, but then I get the message: error no route with default gateway found, and the message: info check if interface has a cable plugged in. 
What could fix this?](#when-booting-a-3node-zero-os-downloads-fine-but-then-i-get-the-message-error-no-route-with-default-gateway-found-and-the-message-info-check-if-interface-has-a-cable-plugged-in-what-could-fix-this) + - [How can I update Dell and HP servers to Intel E5-2600v2, E5-2400v2 and E5-4600v2, when applicable?](#how-can-i-update-dell-and-hp-servers-to-intel-e5-2600v2-e5-2400v2-and-e5-4600v2-when-applicable) + - [How can I update the firmware and driver of a Dell PowerEdge server?](#how-can-i-update-the-firmware-and-driver-of-a-dell-poweredge-server) + - [When I boot a 3Node in UEFI mode, it gets stuck at: Initializing Network Device, is there a way to fix this?](#when-i-boot-a-3node-in-uefi-mode-it-gets-stuck-at-initializing-network-device-is-there-a-way-to-fix-this) + - [When I boot my 3Node, it gets stuck during the Zero-OS download. It never reaches 100%. What can I do to fix this issue?](#when-i-boot-my-3node-it-gets-stuck-during-the-zero-os-download-it-never-reaches-100-what-can-i-do-to-fix-this-issue) + - [When booting a 3Node, I get the error=“context deadline exceeded” module=network error=failed to initialize rmb api failed to initialized admin mw: failed to get farm: farm not found: object not found. What can I do to fix this issue?](#when-booting-a-3node-i-get-the-errorcontext-deadline-exceeded-modulenetwork-errorfailed-to-initialize-rmb-api-failed-to-initialized-admin-mw-failed-to-get-farm-farm-not-found-object-not-found-what-can-i-do-to-fix-this-issue) + - [ThreeFold Grid and Data](#threefold-grid-and-data) + - [How is the farming minting reward calculated? 
Is the Grid always monitoring my 3Node?](#how-is-the-farming-minting-reward-calculated-is-the-grid-always-monitoring-my-3node) + - [How does communication happen on the ThreeFold Grid at the 3Node's level?](#how-does-communication-happen-on-the-threefold-grid-at-the-3nodes-level) + - [What is the ThreeFold Node Status bot Telegram link?](#what-is-the-threefold-node-status-bot-telegram-link) + - [How does the ThreeFold Node Status bot work? How can I use the ThreeFold Node Status bot to verify if my 3Node is online?](#how-does-the-threefold-node-status-bot-work-how-can-i-use-the-threefold-node-status-bot-to-verify-if-my-3node-is-online) + - [How does the Telegram Status Bot get information from my 3Node? My 3Node is online on the ThreeFold Node Finder, but offline on the Telegram Status Bot, is this normal?](#how-does-the-telegram-status-bot-get-information-from-my-3node-my-3node-is-online-on-the-threefold-node-finder-but-offline-on-the-telegram-status-bot-is-this-normal) + - [I noticed that when I reboot my 3Node, the uptime counter on the ThreeFold Node Finder goes back to zero. Does it mean I lose uptime and the uptime start over again when I reboot the 3Node?](#i-noticed-that-when-i-reboot-my-3node-the-uptime-counter-on-the-threefold-node-finder-goes-back-to-zero-does-it-mean-i-lose-uptime-and-the-uptime-start-over-again-when-i-reboot-the-3node) + - [One of my nodes is showing the wrong location. 
Any problem with that?](#one-of-my-nodes-is-showing-the-wrong-location-any-problem-with-that) + - [Memory](#memory) + - [Can I use different type of RAM for the same 3Node?](#can-i-use-different-type-of-ram-for-the-same-3node) + - [How can I know if the memory I am buying is correct for my specific hardware?](#how-can-i-know-if-the-memory-i-am-buying-is-correct-for-my-specific-hardware) + - [What do the terms RDIMM, LDIMM, UDIMM, LRDIMM, FBDIMM mean when it comes to RAM memory sticks?](#what-do-the-terms-rdimm-ldimm-udimm-lrdimm-fbdimm-mean-when-it-comes-to-ram-memory-sticks) + - [What is the difference between ECC and non-ECC memory?](#what-is-the-difference-between-ecc-and-non-ecc-memory) + - [How can I change the RAM memory sticks on my 3Nodes? How can I achieve dual channel configuration with sticks of RAM?](#how-can-i-change-the-ram-memory-sticks-on-my-3nodes-how-can-i-achieve-dual-channel-configuration-with-sticks-of-ram) + - [What does RAM mean?](#what-does-ram-mean) + - [What does DIMM mean when it comes to RAM sticks?](#what-does-dimm-mean-when-it-comes-to-ram-sticks) + - [I have 24 DIMMS ram slots on my server. Can I use them all?](#i-have-24-dimms-ram-slots-on-my-server-can-i-use-them-all) +- [Ask a Question to the ThreeFold Community](#ask-a-question-to-the-threefold-community)

***

# GENERAL FAQ

## Basic Facts

### What is the ThreeFold blockchain?

The ThreeFold blockchain is the layer 0 infrastructure for an open source peer-to-peer (P2P) Internet owned by humanity.

### What is the architecture of the ThreeFold Grid in simple terms?

Essentially, the ThreeFold Grid is composed of the people using it, the 3Node servers offering compute, storage and network resources, and TF Chain, the blockchain of ThreeFold.

Middlewares such as GraphQL and GridProxy are also used to get and organize data from the ThreeFold Chain. They help to make data available and to manage load.
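As a rough sketch of how a deployer or tool might consume this middleware layer, the snippet below builds a GridProxy query URL for listing online nodes. Note that the base URL (`https://gridproxy.grid.tf`) and the filter names (`status`, `free_sru`, `free_mru`) are assumptions used for illustration; check the GridProxy API reference for the authoritative endpoints and parameters.

```python
from urllib.parse import urlencode

# Assumed public GridProxy endpoint -- verify against the GridProxy docs.
GRIDPROXY_BASE = "https://gridproxy.grid.tf"

def nodes_query_url(status="up", free_sru=None, free_mru=None):
    """Build a GridProxy /nodes query URL.

    The filter names (status, free_sru, free_mru) are assumed here;
    free capacity values are expressed in bytes.
    """
    params = {"status": status}
    if free_sru is not None:
        params["free_sru"] = free_sru
    if free_mru is not None:
        params["free_mru"] = free_mru
    return f"{GRIDPROXY_BASE}/nodes?{urlencode(params)}"

# Example: URL for online nodes with at least 100 GiB of free SSD.
url = nodes_query_url(free_sru=100 * 1024**3)
```

Fetching that URL with any HTTP client would then return the node list as JSON, the same data the Dashboard and Node Finder present.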

3Nodes store workload data and can report on their state to the TF Grid and to middlewares.

### What is the difference between Internet capacity and connectivity? Does ThreeFold replace my Internet service provider (ISP)?

In simple terms, the Internet is composed of both capacity and connectivity. Capacity is where data and resources are handled, for example in servers. Connectivity is the infrastructure that transfers data and resources between servers. The latter is what the typical Internet service provider (ISP) offers.

ThreeFold's technology enables distributed capacity generation, but ThreeFold doesn't deal in connectivity.
3Nodes offer Internet capacity, but farmers still rely on connectivity providers, such as the usual Internet service providers (ISPs).

### What are the priorities of ThreeFold (the Three P of ThreeFold)? ThreeFold is a Planet first project, what does it mean?

ThreeFold is working for the Planet, the People and Profit, in that order of importance. Planet comes first as it is home to us all. A humane enterprise always puts people before profit, and serious entrepreneurs know profit cannot be left out of the equation of a thriving project.

### I want to help build the new Internet. How can I become a ThreeFold certified 3Node partner?

Apply [here](https://marketplace.3node.global/index.php?dispatch=companies.apply_for_vendor) to become a ThreeFold certified 3Node partner.

### How can I create a twin on the TF Grid?

There are two ways to create a twin:

You can create a twin via the [ThreeFold Dashboard](../dashboard/wallet_connector.md).

You can also create a twin via the ThreeFold Connect app: a twin is automatically generated while creating a farm. Note that, in this case, the twin will be created on mainnet.

## ThreeFold Communication

### Is there a ThreeFold app for mobile?

Yes!
ThreeFold Connect App (TF Connect App) is available for [Android](https://play.google.com/store/apps/details?id=org.jimber.threebotlogin) and [iOS](https://apps.apple.com/us/app/threefold-connect/id1459845885). + +You can use this app to create a ThreeFold ID, a ThreeFold Wallet and also a ThreeFold Farm to link all your 3Nodes. +The ThreeFold Connect Wallet, with its Stellar payout address, can be used for transactions as well as to receive farming rewards. +The News section gives you the latest information on ThreeFold's fast development and growth. + + +### I want to reach the ThreeFold community. What are ThreeFold social links? + +You can find links to the ThreeFold community at the bottom of the main page of the [ThreeFold website](https://www.threefold.io/). + + +### Could we reach out to someone about publishing research work on ThreeFold? + +You can send an email to info@threefold.io about publishing research on ThreeFold. + + +### Who can I write to for a proposal? Where can I send a proposal email for a new partnership opportunity with ThreeFold? + +You can mail your proposal to info@threefold.io or write about your proposal on the [ThreeFold Forum](http://forum.threefold.io/). + + + +### How can I track and follow the progress and development of ThreeFold? + +There are two main places where you can track the progress of ThreeFold. ThreeFold is open source and its developments can be easily tracked on GitHub. + +* You can read about the ongoing ThreeFold Tech projects [here](https://github.com/orgs/threefoldtech/projects). +* You can read about the ongoing ThreeFold Foundation projects [here](https://github.com/orgs/threefoldfoundation/projects?query=is%3Aopen). + + + +### Why do some forum posts need to be approved? + +The ThreeFold forum is configured so that some posts need approval depending on where they are posted, such as posts in meta-sections and certain other specific sections.
+ +Also, note that certain posts can automatically get flagged for moderation based on their content. + + + +## The Technology of ThreeFold + +### What is a 3Node? + +It is essentially a single server within a larger network of servers that together form the ThreeFold Grid. Essentially any modern computer can be turned into a 3Node (DIY farming), and you can also [buy plug and play 3Nodes](https://marketplace.3node.global/index.php) as state-of-the-art, ready-made machines. + + + +### What is the difference between a 3Node and a ThreeFold farm? + +A 3Node is a single server connected to the Grid. Each 3Node is linked to a farm. A farm can be composed of multiple 3Nodes. + + +### What is Zero-OS from ThreeFold? + +Zero-OS is a stateless and lightweight operating system designed to host anything that runs on Linux, in a decentralized way. Once installed, Zero-OS locks the hardware and dedicates its capacity to the People’s Internet via the ThreeFold Blockchain. + + + +### ThreeFold uses Quantum Safe Storage technology, what does it mean? + +Quantum computers are theoretically capable of doing huge calculations in a short period of time. By this fact alone, they pose a great potential threat to future online safety. ThreeFold solves this future problem before it even becomes a reality. Indeed, Zero-OS can compress, encrypt, and disperse data across the Grid. + + + +### Quantum Safe File System (QSFS) allows for part of the storage to go down and it can self repair, however it’s still attached to a single VM and a single point of failure. Can a QSFS instance be reattached to another VM to recover it? + +QSFS is built from storage devices which are distributed and decentralized. + +The storage engine is software running on a VM and it can run anywhere. +If the storage engine needs to run on a different VM, the config needs to be pushed to the new VM. +In short, yes, a Quantum Safe File System (QSFS) can be recovered on a different VM. It is not automated yet on Zero-OS.
A video tutorial will be shared soon. + + + + +### Where does the ThreeFold Explorer take its data from? + +The ThreeFold Explorer takes its data from this website: [https://gridproxy.grid.tf/](https://gridproxy.grid.tf/). + +To explore Grid Proxy, you can use Swagger: [https://gridproxy.grid.tf/swagger/index.html](https://gridproxy.grid.tf/swagger/index.html). You will then be able to query the TF Grid and extract data. + +See the next Q&A for more information on Swagger. + + + +### How can I use the Gridproxy public API to query information on the TFGrid? + +You can go to the Gridproxy public API Swagger index: [https://gridproxy.grid.tf/swagger/index.html](https://gridproxy.grid.tf/swagger/index.html). + +There you can query information such as details on a given 3Node. + +For example, asking the Gridproxy for the nodeID 466, you get the following URL: `https://gridproxy.grid.tf/nodes/466`. + +To get specific information, you can add parameters, for example: `https://gridproxy.grid.tf/nodes/466/status`. + +When you know the URL representation of the query, you can simply use the URL directly in a web browser. + + + +### How can I see the stats of the ThreeFold Grid? + +You can go to this link: [https://stats.grid.tf/](https://stats.grid.tf/) to see the stats of the ThreeFold Grid. + +You can also check this [thread](https://forum.threefold.io/t/grid-stats-new-nodes-utilization-overview/3291/) on the ThreeFold forum. + + +### What is the difference between a seed phrase (mnemonics) and a HEX secret? + +A seed phrase (also called mnemonics) is a set of words from a carefully selected pool that can be used to derive cryptographic secrets. A HEX secret is a more direct, machine-usable representation of such a secret. In the case of a HEX secret, there is no extra information present to form the complete words of a seed phrase. + +In typical usage, multiple secrets can be derived from a seed phrase via a one-way operation.
The secret and its derived public key are sufficient to do any cryptographic operation like signing transactions or encrypting data, but they can't be used to get back the words of the seed phrase. + + + +## Buying and Transacting TFT + + +### How long does it take when you use the BSC-Stellar Bridge? + +The bridge will process deposits/withdrawals within 48 hours. + + + +### On my website, users can donate TFT on the Stellar Chain. Is there a way for users on my website to easily track the total sum of TFT donated? + +There is a simple way to do this. The [Stellar Explorer](https://stellar.expert/explorer/public) has an embeddable widget that you can insert on any website, including WordPress. + +Simply go to the account you’re interested in showing the balance of, look for “Balance History”, select TFT, and finally click the small icon next to the heading to reveal the embed code. In your WordPress page editor, in HTML mode, paste the embed code. + + +## TF Connect App, TF Dashboard, GraphQL, Grid Proxy and Polkadot Substrate + +### Is there a way to create or import another wallet in TF Connect App? + +The TF Connect App supports Stellar and TF Chain wallets. The app by default can create one wallet. To add any number of additional wallets, you must create a wallet on Stellar or TF Chain and then import it with the import function. + + + +### I created a farm on the TF Chain. On the TF Connect App Farmer Migration section, my farm is under Other v3 farms, is this normal? + +Yes, this is normal. Farms created on TF Chain instead of the TF Connect App will appear in *Other v3 farms*. + + + +### I am trying to access my wallet in the ThreeFold Connect App. It worked fine before, but now I just get a white screen. What does it mean and what can I do? + +On the TF Connect App, when you get a white screen, it means that there is a connection issue. It can help to try other networks; for example, try switching between an ethernet cable and wifi.
Or you can also try again later when the connection might be more stable. + + + +### When I open the ThreeFold Connect App, I get the error: Error in initialization in Flagsmith. How can I fix this issue? + +To fix this Flagsmith error message on the ThreeFold Connect app, you can try the following methods: + +* Check your internet connection +* Update your phone's current operating system (OS) version +* Update the date and time on your phone + + +### Apart from the ThreeFold Connect App Wallet, how can I check my TFT balance? + +You can go to [Stellar.Expert](https://stellar.expert). With your wallet address, you will be able to see your transactions and wallet details. + + + +### Is it possible to export the transaction history of a wallet to a CSV file? + +Yes, every blockchain has an explorer function and these explorer functions allow you to see transactions and export them. TFT is on 2 chains at the moment: Stellar and Polkadot. + +For Stellar-based TFT there is an explorer here: https://stellar.expert/explorer/public. Enter your wallet address in the top left search box, and after pressing enter you should see every transaction related to your account. + +If you are not deploying/doing things on the TF Grid (dev, test or mainnet) you will not have transferred any tokens to the TF Chain, therefore all your tokens/wallets will be on the Stellar Chain. + + +### How can I use GraphQL to find information on the ThreeFold Grid? + +To find information on the ThreeFold Grid with GraphQL, go to this [link](https://graphql.grid.tf/graphql). On the left menu, choose the parameters you want to search and write the necessary information, if needed, then click on the Play button in the middle section, at the top. + +Here's an example of a query, where we want to find all the farms containing "duck" in their name.
+ +``` +query MyQuery { + farms(where: {name_contains: "duck"}) { + name + farmID + } +} +``` + +This code can be written automatically if you simply select the proper parameters in the left menu. +For the previous example, we had to click on "farms", then "where", and then "name_contains". After clicking on "name_contains", you need to add the words you are looking for, in this example we had "duck". Further down the menu, we simply had to click on "farmID" and "name", and then click the Play button. The results of the query appear on the right screen. + + + +### What are the different links to ThreeFold's GraphQL depending on the network? + +The links for the Development, Test and Main Networks are the following: + +* Mainnet GraphQL + * [https://graphql.grid.tf/graphql](https://graphql.grid.tf/graphql) +* Testnet GraphQL + * [https://graphql.test.grid.tf/graphql](https://graphql.test.grid.tf/graphql) +* Devnet GraphQL + * [https://graphql.dev.grid.tf/graphql](https://graphql.dev.grid.tf/graphql) + + + +### How can I find 3Nodes with IPv6 addresses? + +You can use [GraphQL](https://graphql.grid.tf/graphql) for such queries. + +Use the following code to search for 3Nodes with IPv6 addresses. +Enter the following code in the middle window and click on the "Play" button. + +``` +query MyQuery { + publicConfigs { + ipv6 + node { + nodeID + } + } +} +``` + +The 3Nodes with IPv6 addresses will appear in the right window. + +For more information on how to use GraphQL, read [this Q&A](#how-can-i-use-graphql-to-find-information-on-the-threefold-grid). + + + +### How can I use GraphQL to see contracts on my 3Nodes? + +Go to [ThreeFold's GraphQL](https://graphql.grid.tf/graphql) and write the following query: + +``` +query MyQuery { + nodeContracts(where: {state_eq: Created, nodeID_eq: 42}) { + resourcesUsed { + cru + hru + mru + sru + } + contractID + twinID + nodeID + } +} +``` + +This will show you contracts on the 3Node as well as the resources used.
You can play with the different parameters. + +To see the farm associated with a node, you can use the following query: + +``` +query MyQuery { + nodes(where: {nodeID_eq: 57}) { + farmID + nodeID + } +} +``` + + + +### How can I use Grid Proxy to find information on the ThreeFold Grid and 3Nodes? + +To find information on the ThreeFold Grid with the Grid Proxy, you need to write this URL: https://gridproxy.grid.tf/, followed by your specific query. Here's an example if we wanted to see all the available farms on the TF Grid that have "duck" in their name: + +https://gridproxy.grid.tf/farms?name_contains=duck + +The Grid Proxy is appropriate for high-volume applications. +You can find the parameters to be written in the URL when visiting the [GraphQL explorer](https://graphql.grid.tf/graphql). + + + +### Who is hosting GraphQL and Grid Proxy on the ThreeFold Grid? + +GraphQL and Grid Proxy are hosted by ThreeFold for everyone to use. + +Note that it is also possible to run your own instance of those tools. + + + +### What is the difference between uptime, status and power state? + +There are three distinctly named endpoints or fields that exist in the back end systems: + +* Uptime + * the number of seconds the node was up, as of its last uptime report. This is the same on GraphQL and Grid Proxy. +* Status + * a field that only exists on the Grid Proxy, which corresponds to whether the node sent an uptime report within the last 40 minutes. +* Power state + * a field that only exists on GraphQL; it's the self-reported power state of the node. This only goes to "down" if the node shut itself down at the request of the Farmerbot. + + + +### I do not remember the name (ThreeFold 3bot ID) associated with my seed phrase on the ThreeFold Connect app. Can I recover my TF Connect app account with only the seed phrase and not the name (3bot ID) associated with it?
+ +If you forgot the name associated with your seed phrase on the TF Connect app, you can always create a new identity (ThreeFold 3bot ID) and import your wallet using the old seed phrase. + +Since the Connect App is also used for identity and authentication, you need both the name (3bot ID) and the seed phrase to fully recover your account. The wallet is only linked to the seed phrase and not to the name (3bot ID). + + +# USERS FAQ + +## TF Grid Functionalities + + +### What are the types of storage available on the TF Grid? + +There are two types of storage that can be used on the TF Grid. + +1. A VM with a virtual disk. The virtual disk is a straightforward volume on a local hard disk. Everything stored on this virtual disk is stored only on this virtual (and thus physical) disk. Delete the VM and the virtual disk and the content is gone. +2. Quantum safe storage. Quantum safe storage uses a “Storage Engine” that splits, compresses, encrypts and then mathematically describes the data. + + +## Deployments on the ThreeFold Grid + + +### Does the ThreeFold Grid charge the total resources rented or does it only charge the resources used during deployment? + +Billing is based on how many resources you reserve, not how much you use them. For this reason, it can be a good idea to deploy the minimum resources needed per project. + + + +### Do I pay for Internet traffic while deploying workloads on IPv4, IPv6 or Planetary Network? + +You do pay for Internet traffic while deploying on the ThreeFold Grid. It is calculated during deployment and paid with ThreeFold tokens (TFT). + +Note that the private overlay network traffic is not billed. + + + +### What is the monthly cost for an IPv4 or an IPv6 public address on the ThreeFold Grid? + +The cost for an IPv4 public address is around $3 USD per month. + +For an IPv6 address, there is no cost. + + + +### What are the differences between a container, a micro virtual machine and a full virtual machine (VM)?
+ +The following is a list of certain features related to containers as well as full and micro virtual machines. + +* Container + * generally designed to run a single application + * doesn't need to include a full operating system + * relies on the kernel of the host system, no hypervisor needed + * isolated from the rest of the host system for security and can be limited in resources used + * examples: on the [Playground](https://playground.grid.tf/), we have [Kubernetes](https://library.threefold.me/info/manual/#/manual__weblets_k8s?id=kubernetes) and [Caprover](https://library.threefold.me/info/manual/#/manual__weblets_caprover?id=caprover), which are both environments that host containers +* Micro VM + * a container image promoted to run as a VM by pairing with a generic kernel + * more isolated than a container, thus more secure + * generally lighter than a full VM + * can be created from any Docker container image by uploading it to the [TF Hub](https://hub.grid.tf/) + * examples: on the [Playground](https://playground.grid.tf/), we have Ubuntu 20.04, Alpine-3, CentOS-8 and more. +* Full VM + * contains a complete operating system including kernel + * capable of anything that can be done with a Linux server + * compatible with any guides and tutorials written for the same version of the distribution they are running + * normally contains systemd, unlike containers which normally do not + * can load kernel modules or replace the kernel entirely, so has best compatibility + * generally heavier than micro VM + * examples: on the [Playground](https://playground.grid.tf/), we have Ubuntu 18.04, 20.04, 22.04 and more + +Note that you can run Kubernetes on a micro VM and you can run a very minimal operating system in a full VM. There are many possibilities when using those technologies. + + + +### What is a 3Node gateway? How can I configure a 3Node as a gateway node? 
+ +A 3Node becomes a gateway when a ThreeFold farmer adds a public IP address to the node itself on the [ThreeFold Dashboard](https://dashboard.grid.tf/). In doing so, the IP address is then handed over to the base operating system of the node itself. The IP address can then be used in the overall functions of the TF Grid. + +Note that this process differs from when an IP address that has been added to a farm is deployed with a workload in order for that workload to be accessible on the Internet. + +To configure a 3Node as a gateway node, you need a public IP block from your internet service provider (ISP). + +You can configure a 3Node as a gateway node on the [TF mainnet](https://dashboard.grid.tf/), [TF testnet](https://dashboard.test.grid.tf/) and [TF devnet](https://dashboard.dev.grid.tf/). You thus need to choose the correct TF Dashboard link (main, test, dev). + +To configure a 3Node as a gateway node, follow these steps: + +* Configure your DNS records + * Type: A + * Name: + * Value: + * Type: NS + * Name: _acme-challenge. + * Value: . + * Type: CNAME + * Name: *.. + * Value: + * Type: AAAA + * Name: + * Value: +* Configure your 3Node parameters on the TF Dashboard + * Go to the [ThreeFold Dashboard](https://dashboard.grid.tf/) + * Go to the section **Portal** + * Go to the subsection **Farms** + * Choose the 3Node you want to turn into a gateway node and click on **Actions** (Add a public config) on the right + * Enter the necessary information and click **Save** + * IPV4: Enter the IPv4 address of your public IP block + * Gateway: Enter the gateway of your public IP block + * IPV6: Enter the IPv6 address of your public IP block + * Gateway IPV6: Enter the gateway of your public IP block + * Domain: . + +Once this is done, you should see the IPv4 and IPv6 addresses in the section **PUB** of your 3Node screen. + +To learn more about this process, [watch this great video](https://youtu.be/axvKipK7MQM). 
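As a quick sanity check after saving the public config, you can also query the Grid Proxy (covered earlier in this FAQ) for your node and inspect the returned public config. Below is a minimal Python sketch; the node ID 42 is a placeholder, and the `publicConfig` field name is an assumption based on current Grid Proxy responses, so double-check it against the Swagger index.

```python
import json
import urllib.request

GRIDPROXY = "https://gridproxy.grid.tf"

def node_url(node_id: int) -> str:
    # Build the Grid Proxy URL for a single node, e.g. /nodes/466
    return f"{GRIDPROXY}/nodes/{node_id}"

def fetch_public_config(node_id: int) -> dict:
    # Fetch the node record and return its public config section
    # (IPv4/IPv6 addresses, gateways, domain), or an empty dict
    # if no public config is set. Requires internet access.
    with urllib.request.urlopen(node_url(node_id)) as resp:
        node = json.load(resp)
    return node.get("publicConfig") or {}

# Example (network required):
#   fetch_public_config(42)  # 42 is a placeholder node ID
```

If the returned dictionary contains the IPv4 and IPv6 addresses you entered on the Dashboard, the public config was saved correctly.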
+ + + +### When connecting remotely with SSH, I get the following error: "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!...". What can I do to fix this? + +If you've already made an SSH connection on your computer, the issue is most probably that the host key has changed. To fix this, try one of these two solutions: + +* Linux and Mac: + * ``` + sudo rm ~/.ssh/known_hosts + ``` +* Windows: + * ``` + rm ~/.ssh/known_hosts + ``` + +To be more precise, you can remove only the problematic host: + +* Windows, Linux and Mac: + * ``` + ssh-keygen -R + ``` + +Once this is done, you should be able to SSH into your 3Node deployment. + + + +### How can I remove one host from known_hosts? + +You can write the following command: +``` +ssh-keygen -R hostname +``` + +where hostname would be, for example, the IPv4 or IPv6 address. +In the case of the ThreeFold Grid, it can also be the Planetary Network address. + + + +### How can I add ThreeFold peers in the Yggdrasil configuration file? + +In the file /etc/yggdrasil.conf, write the following in the "Peers" section: + +``` + Peers: [ + # Threefold Lochrist + tcp://gent01.grid.tf:9943 + tcp://gent02.grid.tf:9943 + tcp://gent03.grid.tf:9943 + tcp://gent04.grid.tf:9943 + tcp://gent01.test.grid.tf:9943 + tcp://gent02.test.grid.tf:9943 + tcp://gent01.dev.grid.tf:9943 + tcp://gent02.dev.grid.tf:9943 + # GreenEdge + tcp://gw291.vienna1.greenedgecloud.com:9943 + tcp://gw293.vienna1.greenedgecloud.com:9943 + tcp://gw294.vienna1.greenedgecloud.com:9943 + tcp://gw297.vienna1.greenedgecloud.com:9943 + tcp://gw298.vienna1.greenedgecloud.com:9943 + tcp://gw299.vienna2.greenedgecloud.com:9943 + tcp://gw300.vienna2.greenedgecloud.com:9943 + tcp://gw304.vienna2.greenedgecloud.com:9943 + tcp://gw306.vienna2.greenedgecloud.com:9943 + tcp://gw307.vienna2.greenedgecloud.com:9943 + tcp://gw309.vienna2.greenedgecloud.com:9943 + tcp://gw313.vienna2.greenedgecloud.com:9943 + tcp://gw324.salzburg1.greenedgecloud.com:9943 + 
tcp://gw326.salzburg1.greenedgecloud.com:9943 + tcp://gw327.salzburg1.greenedgecloud.com:9943 + tcp://gw328.salzburg1.greenedgecloud.com:9943 + tcp://gw330.salzburg1.greenedgecloud.com:9943 + tcp://gw331.salzburg1.greenedgecloud.com:9943 + tcp://gw333.salzburg1.greenedgecloud.com:9943 + tcp://gw422.vienna2.greenedgecloud.com:9943 + tcp://gw423.vienna2.greenedgecloud.com:9943 + tcp://gw424.vienna2.greenedgecloud.com:9943 + tcp://gw425.vienna2.greenedgecloud.com:9943 + ] +``` + + + +### How can I see Yggdrasil/Planetary Network's peers? + +On Mac and Linux, the Yggdrasil configuration file is usually found at: + +``` +/etc/yggdrasil.conf +``` + +On Windows, one of the two following locations should work: + +``` +%ALLUSERSPROFILE%\Yggdrasil\yggdrasil.conf +``` + +``` +C:\ProgramData\Yggdrasil\ +``` + +These are the typical locations of the Yggdrasil configuration file. The exact path can change depending on how you installed Yggdrasil. + + + +### How can I ping a Yggdrasil IP or IPv6 address? + +Usually, using the command `ping` works. + +If the typical `ping` doesn't work, try this instead: `ping6` + +For example, the following will send 2 pings: + +``` +ping6 -c 2 yggdrasil_address +``` + + + + +### Is there a way to test if I am properly connected to the Yggdrasil network (Planetary Network)? + +To check if you are properly connected to the Yggdrasil network, try reaching this website: + +``` +http://[319:3cf0:dd1d:47b9:20c:29ff:fe2c:39be]/ +``` + +If you can reach this website, it means that you are properly connected. + +For more information on how to connect to Yggdrasil (and the Planetary Network), read [this guide](../system_administrators/getstarted/ssh_guide/ssh_guide.md). + + + + +### How can I change the username of my SSH key? + +To change the comment (username) of an existing SSH key, write the following line in the terminal: + +``` +ssh-keygen -c -C newname -f ~/.ssh/id_rsa +``` + +Make sure to replace "newname" with the name you want, and adjust the key file path if yours differs. + + +### What is ThreeFold's stance on sharded workload? 
Will ThreeFold embrace and move towards distributed data chunks or stay with the one deployment, one node model? + +The ThreeFold Grid is basically agnostic when it comes to how you structure your deployment. + +If you want to put all your storage and compute on one node and lose everything if it goes down, you can do so. + +If you want a highly distributed and fault-tolerant system with high availability where data is never lost, you can also do so. You could build an architecture with single nodes running single workloads as the building blocks. + +Containerized microservice architectures running on e.g. Kubernetes are basically the way that compute is already being "sharded" in the mainstream of IT, especially at large scales. Such applications fit well on the ThreeFold Grid today, since they account for the possibility that individual nodes may fail. However, many applications aren't built this way and it takes work to adapt them. + +Self-driving and self-healing IT is one of the core concepts of ThreeFold. What we've built so far is an excellent foundation for making this a reality. Some additional features might come into TF Chain and the network itself to enable such features. That being said, a lot is already possible using the existing system. It's just a matter of gluing things together in the right way. + + + +## Tutorials and Guides + + +### What is the minimum amount of TFT to deploy a Presearch node? How can I get a TFT discount when I deploy a Presearch node? + +The minimum amount of TFT that needs to be in your ThreeFold profile before you can deploy a Presearch node is 2 TFT, but this would not last very long. + +To benefit from the biggest reduction in price (-60%), you need to have a sufficient amount of TFT in your wallet. The TFT is not locked and simply needs to be present in your wallet. + + +For the capacity of a Presearch node, the amount is around 5000 TFT, covering the 3 years that the workload could run.
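As a purely illustrative back-of-the-envelope check of those figures (real billing is done continuously on chain; the function names below are made up for this sketch), 5000 TFT lasting roughly 3 years implies a monthly cost of about 5000 / 36 ≈ 139 TFT:

```python
def discounted_cost(base_cost_tft: float, discount: float = 0.60) -> float:
    # Apply a discount (e.g. the -60% price reduction) to a base cost.
    return base_cost_tft * (1 - discount)

def months_covered(balance_tft: float, monthly_cost_tft: float) -> float:
    # How many months a wallet balance sustains a fixed monthly cost.
    return balance_tft / monthly_cost_tft

# Hypothetical figures from the answer above:
monthly = 5000 / 36
print(round(monthly))                         # 139
print(round(months_covered(5000, monthly)))   # 36
```

Treat this only as rough arithmetic to size your wallet balance, not as the actual pricing formula.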
+ +Note that a Presearch node requires about 3 days to stabilize. + + + + +### Can I use the same seed phrase for my mainnet and testnet accounts? How can I transfer my TFT from mainnet to testnet or vice versa? + +Yes, you can use the same seed phrase for your mainnet and testnet accounts. They will have the same address on each chain, but they are really separate accounts. It is much like using the same wallet on Ethereum and BSC, for example. + +To transfer your TFT from mainnet to testnet or vice versa, you need to send your TFT to the Stellar chain first. Let's say you want to transfer TFT from the mainnet to the testnet. Here are the steps: + +* Open your mainnet profile on the [mainnet Dashboard](https://dashboard.grid.tf/), on the left menu, choose Portal and then Swap. Click the button Withdraw. Send your TFTs from the mainnet address to your Stellar address. + +* Open your testnet profile on the [testnet Dashboard](https://dashboard.test.grid.tf/), on the left menu, choose Portal and then Swap. Click the button Deposit. Send your TFTs from your Stellar address to the testnet address. You can use the QR code option to make the transfer. + +To go from testnet to mainnet, simply use the URLs in the opposite order. + +> Note that the fee is 1 TFT per transfer. + + + +### Do I need a full or micro virtual machine (VM) when I run QSFS, quantum safe file system, on the ThreeFold Grid? + +QSFS can be run on either a full or a micro virtual machine (VM). The QSFS is a "mountable" object, like a disk. It's defined in its own block, then specified as a mount within a VM. + + +## Linux, Github, Containers and More + +### Where should I start to learn more about Linux? + +While the [ThreeFold Manual](https://www.manual.grid.tf/) would be a good place to learn about ThreeFold and how to deploy on the TF Grid, to learn specifically about Linux, a good place to start is the [Linux website](https://www.linux.org/).
There you will find [many tutorials](https://www.linux.org/forums/#linux-tutorials.122). + +Good general advice for learning Linux, and computers in general, is to develop the skill of finding answers by following your natural curiosity: most questions have been asked before, and answers can be found through search engines. + +Before doing any web search, you can use the resources already at hand in the terminal. For any command, you can try adding `-h` or `--help` for a brief description of what it does and to see some commonly used arguments. Typing `man` and then the command name will bring up a more detailed manual, assuming it exists and is installed (e.g. `man sudo`). + +Some ThreeFold users also point out that using different AI and LLM tools can be very helpful in the process of learning Linux and computers in general. + + +### How can I clone a single branch of a repository on GitHub? + +You can clone a single branch of a repository with the following line: + +``` +git clone --single-branch --branch branch_name https://github.com/GITHUB_ACCOUNT/REPOSITORY_NAME +``` + + + +## Grace Period (Status Paused) + +### The status of my deployment is paused, in grace period. How can I resume the deployment? + +When your wallet is running out of TFT to pay for deployments, your deployments will be paused. +To resume your deployments, simply fill up your wallet with more TFT. + + + +### Once I refund my TF wallet, how long does it take for the deployment to resume from grace period? + +It can take around one hour to change the status from "paused" to "ok" and thus for the deployment to resume. + + + +### Can I SSH into my deployments when they are in grace period (i.e. when their status is paused)? + +While in grace period, you might not be able to SSH into your deployment. Refund your wallet to resume deployments. + + + +### How long is the grace period (i.e. when the deployment status is paused)? + +The grace period is 2 weeks.
During this period, you can refill your wallet to resume your deployment. + + + +## Terraform + +### Working with Terraform, I get the following error: failed to create contract: ContractIsNotUnique. Is there a fix to this issue? + +This error happens when a contract with the same data is already active. Note that the two conflicting contracts do not have to be in the same deployment. You can try to change the data in the main.tf file to make sure the deployment does not conflict with itself or with an existing one. + + + +### I am working with Terraform. What do I have to write in the file env.tfvars? + +The env.tfvars file should look like the following, with the proper content within the quotes: + +> MNEMONICS = "write your seed phrase" +> +> NETWORK = "write the main network" +> +> SSH_KEY = "write your ssh-key" + +Note that this could change based on your specific Terraform deployment. + + + +### I am working with Terraform and I am using the example in Terraform Provider Grid. How can I use the example main.tf file with environment variables? Why am I getting the message Error: account not found, when deploying with Terraform? + +This Q&A is linked with the main.tf file in the [QSFS section of the ThreeFold Tech repository](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/qsfs/main.tf). + +This error happens when you did not properly set your environment variables. +In the main.tf file, add those lines at the top: + +> variable "MNEMONICS" { +> +> type = string +> +> description = "The mnemonic phrase used to generate the seed for the node." +> +> } + +> variable "NETWORK" { +> +> type = string +> +> default = "main" +> +> description = "The network to connect the node to." 
+> +> } + +> variable "SSH_KEY" { +> +> type = string +> +> } + +Within the file main.tf, set those lines: + +> provider "grid" { +> +> mnemonics = "${var.MNEMONICS}" +> +> network = "${var.NETWORK}" +> +> } + +> env_vars = { +> +> SSH_KEY = "${var.SSH_KEY}" +> +> } + +Note: Make sure that you properly set your variables in the file env.tfvars. + + +## Users Troubleshooting and Error Messages + +### When deploying a virtual machine (VM) on the ThreeFold Grid, I get the following message after trying a full system update and upgrade: GRUB failed to install to the following devices... Is there a fix to this issue? + +When deploying a virtual machine and doing a full system update and upgrade (apt update, apt upgrade), if you get the error: GRUB failed to install to the following devices /dev/vda15, try this to fix it: + +> apt-mark hold grub-efi-amd64-signed + +This should fix the issue. + + +### While deploying on the TF Dashboard, I get the following error: "global workload with the same name exists: conflict". What can I do to fix this issue? + +This error happens if you deployed a workload on the TF Dashboard with the same deployment name as a previous deployment. In general, you can simply refresh the TF Dashboard browser page and/or change the deployment name to a new one. This should fix the issue. + + + +## ThreeFold Connect App + +### TF Connect App is now asking for a 4-digit password (PIN). I don't remember it as I usually use touch or face ID to unlock the app. What can I do? + +When you set up the app, you are asked for a 4-digit password (PIN). After some time, the app may ask for this PIN even if you have been exclusively using touch/face ID. You can reset it by recovering the account with your seed phrase. + + +### Is there a way to have more than one wallet in TF Connect App? + +Yes, this is perfectly possible. You can have multiple wallets in the TF Connect app.
You can have multiple wallets for the Stellar network and multiple wallets for Polkadot Substrate. + +For example, you can create a wallet on the Stellar Blockchain and import it on TF Connect App with the function *Import Wallet*. Just copy the seedphrase and it will be imported in TF Connect App. + +Note: There is no automatic function in the app to create a new wallet. You must do it manually. + + + +### What is the difference between 10.x.y.z and 192.168.x.y addresses? + +The addresses 10.x.y.z and 192.168.x.y are on the private network. + +A 10.x.y.z address is just as valid as a 192.168.x.y address for a private network. If all the devices get a 10.x.y.z network, your private network should work properly. But if some of your devices are getting 10.x.y.z addresses while others are getting 192.168.x.y addresses, then you have two conflicting DHCP servers in your network. + + + +# DEVELOPERS FAQ + +## General Information for Developers + +### Does Zero-OS assign private IPv4 addresses to workloads? + +No. Zero-OS will request two IP addresses from the DHCP server. If you only have one physical NIC connected, Zero-OS will assign the second IP address as a virtual device. + + + +### Why does each 3Node server have two IP addresses associated with it? + +Each node has two IP addresses. + +One is for Zero-OS and one is for the DMZ (demilitarized zone, sometimes referred to as a perimeter network or screened subnet). This separates public and private traffic from each other. + +Note: Zero-OS will request two IP addresses from the DHCP server. If you only have one physical NIC connected, Zero-OS will assign the second IP address as a virtual device. + +### Can Zero-OS assign public IPv4 or IPv6 addresses to workloads? +Yes, it can provide both standard and Yggdrasil connections. + + +### What does MAC mean when it comes to networking? +MAC means *media access control*. It is a unique hardware ID. It helps the network to recognize your machine.
It is then, for example, possible to assign a specific and fixed IP address to your hardware. + + + + +### I am a developer looking for a way to automatically convert BSC tokens into TFT. Could you please share tips on how to swap regular tokens into TFT, on backend, without any browser extension, via any platform API? + +TFT is implemented as a cross-chain asset (BToken) on BSC. + +Swapping via Pancakeswap directly, without a browser or extension, can be done by calling Pancakeswap's BSC contract directly. Note that this requires some EVM knowledge on calling contracts. + +TFT is a standard ERC-20 contract (called BEP-20 on BSC). You will thus need to call the approve method in the TFT contract to allow Pancakeswap to transfer TFTs from your account. + +The TFT contract allows you to approve a large amount; when Pancakeswap makes a transfer, the TFT is deducted from that allowance. The approval will not reset, so you will not have to call approve before every swap. This will thus save some gas. + + + +## Test Net + +### Can I get some free TFT to test on Test Net? + +The TFT on Test Net is real TFT. There are ways to get free TFT to explore Test Net, such as joining the Beta Tester Group. More information [here](https://forum.threefold.io/t/join-the-grid-3-0-beta-testers-group/1194/21). + + + +# FARMERS FAQ + +## TFT Farming Basics + +### My Titan is v2.1 and the ThreeFold Grid is v3. What is the distinction? + +Titan v2.1 is the hardware. Before v2.1, there was v2.0. The hardware currently being shipped is the Titan v2.1. + +When you read v3, it refers to the ThreeFold Grid. Titans are now being sent ready for TF Grid v3, so they are being referred to as Titan v3. In short, the current Titans are v2.1 hardware ready for ThreeFold Grid v3. + + + +### When will I receive the farming rewards for my 3Nodes? + +Farming rewards are usually sent around the 8th of each month. This can vary slightly because the verification process is not yet fully automated.
+ +For more information on the minting process, read the next [QnA](#what-is-the-tft-minting-process-is-it-fully-automated). + + + +### What is the TFT minting process? Is it fully automated? + +Minting is based on blockchain data according to strict rules that are carried out by computers, with humans involved only to check for errors and to sign the resulting transactions. + +There is a human verification mechanism through multisignatures for calculations done on the data as stored in the blockchain. This explains the timing differences when it comes to the monthly farming rewards distribution, since enough people need to sign off. + +The detailed minting process for V3 is as follows: + +- TFChain, ThreeFold's blockchain, has all the details about capacity provided by the nodes. +- TFChain is used to track uptime. +- Zero-OS reports to TFChain. +- The code in [this repo](https://github.com/threefoldtech/minting_v3) uses the information from the blockchain to calculate the TFT to be minted. +- A proof of what needs to be minted and why is created. This proof is then sent to our guardians. +- The guardians need to double-check the execution and the minting report. This is like a human check on the automated process. +- The guardians need to sign. Only when consensus is achieved will the suggested minting happen. This allows humans to check the code. + +It is important to understand that TFChain tracks the capacity and uptime and is the source for the minting. + +Note: Additional auditing code will be added in V4 (i.e. special code generated at runtime for verification) using security primitives on motherboards. + +For more information on the minting periods, read this [QnA](#what-is-the-start-and-end-of-the-current-minting-period-what-are-the-minting-periods-for-threefold-farming-in-2023). + + + +### What should I do if I did not receive my farming rewards this month? + +If you did not receive your farming rewards, please contact us via our live chats.
We will then investigate the situation. + +You can find our live chat option on the TF Connect App ([Android](https://play.google.com/store/apps/details?id=org.jimber.threebotlogin&hl=en_CA&gl=US), [iOS](https://apps.apple.com/us/app/threefold-connect/id1459845885)), the [TF Forum](http://forum.threefold.io/), [TF Dashboard](https://dashboard.grid.tf/) and the [TF website](https://threefold.io/). +For the TF Connect app, select the Support option. +For the other websites, it is a blue icon on the bottom right part of the page. + +The easiest way to contact the ThreeFold support is to use [this link](https://threefoldfaq.crisp.help/en/) and click on the chat icon. + +### What is the TFT entry price of my 3Node farming rewards? + +Currently, the TFT entry price of the 3Nodes' farming rewards is 8 cents (0.08 USD). + + + +### What is the necessary uptime for a 3Node per period of one month? + +Note that as of now, rewards are proportional to uptime: for example, 40% uptime earns 40% of the rewards for the period. + +When implemented: for certified Titan 3Nodes, it will be 97% uptime per month (maximum downtime of 21.6h). For DIY 3Nodes, it will be 95% uptime per month (36h). For professionally certified 3Nodes, it will be 99.5% uptime per month (3.6h). + + +### How can I check the uptime of my 3Nodes? Is there a tool to check the uptime of 3Node servers on the ThreeFold Grid? + +You can go on the [ThreeFold Dashboard](https://dashboard.grid.tf/), and select "Farms" in the Portal menu. When you select a specific 3Node in your farm, you can see a visual graph of the 3Node's uptime over the last month. By clicking on "Node Statistics", you can see past and present uptime periods. + + + +### I set up a 3Node in the middle of the month, does it affect uptime requirements and rewards? + +You will still need to meet the required uptime (95% for DIY, etc.), but the period starts when you connected the 3Node for the first time, instead of the usual start of the month. This only applies the first month.
Afterwards, the uptime is calculated over the whole month (see the previous Q&A). + + +### What is the difference between a certified and a non-certified 3Node? + +A certified 3Node will receive 25% more rewards compared to a non-certified 3Node. +You can visit the [ThreeFold Marketplace](https://marketplace.3node.global/) for more information on certified 3Nodes. +DIY certified 3Nodes are a future possibility. + + + +### What are the different certifications available for 3Node servers and farms? What are the Gold and Silver certifications? + +3Nodes can be certified. You can buy certified 3Nodes [here](https://marketplace.3node.global/). + +Farms can also be certified. The certifications are: [gold certified farming](https://forum.threefold.io/t/gep-gold-certified-farming-specs-closed/2925) and [silver certified farming](https://forum.threefold.io/t/silver-booster-request-for-comments/3416). + +Note that gold and silver certifications are still being discussed. Join the discussion on the [ThreeFold Forum](http://forum.threefold.io/). + + +### What is the difference between V2 and V3 minting? + +V2 is being sunset. New farmers should directly onboard to V3. +On the tokenomics side, V2 rewards decrease as the difficulty level increases. V3 rewards are constant for 5 years. In short, V3 is more profitable in the long run. For more information, read [this post](https://forum.threefold.io/t/comparison-v2-vs-v3-minting/2122). + + + +### What is the TFT minting address on Stellar Chain? + +The TFT minting address on Stellar Chain is the following: GBOVQKJYHXRR3DX6NOX2RRYFRCUMSADGDESTDNBDS6CDVLGVESRTAC47 + +You can see it on the Stellar Explorer [here](https://stellar.expert/explorer/public/account/GBOVQKJYHXRR3DX6NOX2RRYFRCUMSADGDESTDNBDS6CDVLGVESRTAC47). + + + +### Can Titans and DIY 3Nodes share the same farm? + +Yes. It's one big ThreeFold family! A farm can have several 3Nodes (Titans or DIY) and each 3Node can be linked to only one farm.
+ + + +### Do I need one farm for each 3Node? + +No. You only need one farm. One farm can have multiple 3Nodes. When setting up your farm, you will add an address for the farming rewards. All farming rewards from each 3Node of your farm will be sent to this address. Note that you can choose to have more than one farm. It is up to you. + + + +### Can a single farm be composed of many 3Nodes? + +Yes. You can have many 3Nodes on the same farm. + + + +### Can a single 3Node be on more than one farm? + +No, this is not possible. + + + +### Do I need one reward address per 3Node? + +You do not need more than one address linked to one farm. All of your 3Nodes can be connected to the same farm. Rewards for each 3Node will all be sent to the address linked to your farm. + + + +### How can I access the expert bootstrap mode for Zero-OS? + +You can access the expert bootstrap mode for Zero-OS at this link: [https://bootstrap.grid.tf/expert](https://bootstrap.grid.tf/expert). + + + +### When it comes to the Zero-OS bootstrap image, can I simply duplicate the first image I burnt when I build another 3Node? + +Yes. What is needed on this bootstrap image is to have the proper farm ID. The bootstrap image will be the same for all your different 3Nodes. It's a good TF farming practice to leave the bootstrap image plugged in the 3Node at all times. + + + +### If a node is unused for a certain time (e.g. many months offline), will it be erased by the Grid? + +No, nodes only get deleted if the farm owner chooses to do so. Old "nodes" are really just entries in TF Chain and TF Chain does not modify or delete this data without external input. + + + +### Can a farm be erased from TF Grid? + +No, this is not possible. In the future, we will implement some features in order to allow the cleaning of unused farms. As of now, this is not possible. Also, an old farm takes no resources on the TF Grid, or very few.
+ + + +### On the ThreeFold Connect App, it says I need to migrate my Titan farm from V2 to V3. What do I have to do? How long does this take? + +To migrate, read [this documentation](https://forum.threefold.io/t/what-to-do-if-your-farm-is-still-on-grid-v2-closed/3761). + + + + +### How can I migrate my DIY farm from V2 to V3? + +Create a new [bootstrap image](https://bootstrap.grid.tf/) using your new V3 Farm ID. To create a new V3 Farm ID, you can use the ThreeFold Connect App or the ThreeFold Dashboard. + + + +### What does the pricing policy ID of a farm represent? + +The pricing policy is the definition of how the network bills for workloads (pricing for each resource type, discounts, etc.). + + + +### What is the difference between TiB and TB? Why doesn't the TF Explorer show the same storage space as my disk? + +Terabyte (TB) and tebibyte (TiB) are units of digital information used to measure storage capacity and data transfer rate. While the terabyte is a decimal unit, the tebibyte is binary. + +* 1 TB = 1000^4 bytes +* 1 TiB = 1024^4 bytes + +There are thus more bytes in 1 TiB than in 1 TB. +1 TiB is equal to 1.099511627776 TB. + +You can play around with these two units by exploring this [TiB-TB converter](https://www.dataunitconverter.com/tebibyte-to-terabyte/). + +You can also check [this table](https://www.seagate.com/ca/en/support/kb/why-does-my-hard-drive-report-less-capacity-than-indicated-on-the-drives-label-172191en/) to compare how different operating systems represent storage. + + + +## Farming Rewards and Related Notions + +### What are the rewards of farming? Can I get more rewards when my 3Node is being utilized? + +By connecting a 3Node to the Grid, you get Farming Rewards. If you set a public IP address for the Grid to use, you will receive additional rewards when your 3Node is being utilized by users on the Grid. All rewards are paid in TFT. To know the potential rewards, use the [simulator](https://simulator.grid.tf/).
More information on sales channels will be communicated in the future. + + + +### How can I know the potential farming rewards for Grid Utilization? + +Go on the [ThreeFold simulator](https://simulator.grid.tf/), enter your 3Node resources, and check the Public IP address option. This will enable farming rewards from the parameter NU Required Per CU. Check the difference in the farming rewards per month. Note that you will need a Public IP address. + + + +### What is the easiest way to farm ThreeFold tokens (TFT)? + +Buy a [certified 3Node](https://marketplace.3node.global/index.php). This is more or less *plug n play*! You can also build a [DIY 3Node](#what-are-the-general-requirements-for-a-diy-3node-server). It's fun and there are many resources to help you along the way. + + + +### When do I receive my rewards? + +They are distributed once a month, around the 8th*. Distributions are not daily, or after a certain threshold. Note that upcoming minting rules may include a 24-month lockup, or a lockup until your 3Node reaches 30% utilization for 3 months. + +*This can change slightly depending on the current situation. + + + +### Do farming rewards take into account the type of RAM, SSD, HDD and CPU of the 3Node server? + +No. The farming rewards do not take into account the specific type of RAM, SSD, HDD and CPU. The farming rewards take into account the quantity of storage and compute units (e.g. TB of SSD/HDD, GB of RAM, # of virtual cores). + + +### Can I send my farming rewards directly to a crypto exchange? + +This is not possible. When you send tokens to a crypto exchange, you need to include a memo with your wallet address, and the current ThreeFold farming rewards system is already using that memo space to send the correct farming information. + + +### Do I need collateral to farm ThreeFold tokens? + +Many decentralized data projects require collateral, but not ThreeFold. There is an ongoing discussion on collateral.
Join the discussion [here](https://forum.threefold.io/t/should-tft-collateral-be-required-for-3nodes/3724). + + +### Can I add external drives to the 3Nodes to increase rewards and resources available to the ThreeFold Grid? + +As of now, you cannot add external drives to a 3Node. It is not yet supported. It might be in the future and we will update the FAQ if/when this happens. + + + +### Do I have access to the TFT rewards I receive each month when farming? + +For now, V3 farming rewards are distributed as TFT on Stellar and they are immediately available. The lock system will be implemented on chain. Tokens will be staked to your address until unlock conditions are met. Conditions are: 2 years of farming, or 30% proof-of-utilization for 3 months per 3Node. + + + +### What is TFTA? Is it still used? + +Note that on V3, TFTA will not be issued anymore. + + + +### Is there a way to certify a DIY 3Node? How can I become a 3Node certified vendor and builder? + +As of now, only certified ThreeFold partners can certify a 3Node. You could become a ThreeFold vendor and offer certified 3Nodes. Read more [here](https://marketplace.3node.global/index.php?dispatch=companies.apply_for_vendor). + + + +### Does it make sense to make my farm a company? + +There is no general answer to this. Here's what a ThreeFold member thinks about this if you are living in the USA. Check whether this makes sense for your current location. DYOR. + +> "Definitely do this project as a business entity. You can write off equipment, utilities and a portion of your home's square footage if your are hosting the equipment at home." TFarmer + + + +### What is the difference between uptime and downtime, and between online and offline, when it comes to 3Nodes? + +Uptime and status are two different things. As long as the 3Node is powered on, its uptime does not reset. Its status changes to offline if it hasn't made an uptime report in the last two hours.
Even if the 3Node stays offline for more than two hours, its uptime does not reset. Uptime is a function of the system being powered on. + + +### My 3Node server grid utilization is low, is it normal? + +This is normal. Currently, not all 3Nodes are being utilized by users on the ThreeFold Grid. Each month, more and more utilization happens and the ThreeFold Grid continually expands. To read about the ThreeFold Grid utilization and expansion, check this [TF Forum post](https://forum.threefold.io/t/grid-stats-new-nodes-utilization-overview/3291). + + + + +## 3Node Farming Requirements + + +### Can I host more than one 3Node server at my house? + +Yes, but do not host more than your bandwidth can support. + + + +### Is Wifi supported? Can I farm via Wifi instead of an Ethernet cable? + +No. Wifi is not supported by Zero-OS due to a number of issues, like reliability, performance, configuration requirements and driver support. It's all about Ethernet cables here. + + + +### I have 2 routers, each with a different Internet service provider. I disconnected the ethernet cable from one router and connected it to the other router. Do I need to reboot the 3Node? + +You do not need to reboot. The 3Node will be able to reconnect to the ThreeFold Grid. + + + + +### Do I need any specific port configuration when booting a 3Node? + +No, as long as the 3Node is connected to the Internet via an ethernet cable (wifi is not supported), Zero-OS will be able to boot. Usually, DHCP automatically assigns an IP address. + + + +### How much electricity does a 3Node use? + +A small DIY 3Node based on a compact office computer will draw under 20W. A full size server will draw around 100W idling. Note that a 3Node actively used on the Grid (proof-of-utilization) will draw more power, but also generate passive income on top of farming if you have a public IP address.
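As a rough sketch of the arithmetic, you can translate a node's average power draw into a monthly energy figure. This is a hypothetical helper, not an official ThreeFold tool, and the $0.10/kWh price is an assumed example: substitute your local electricity rate.

```python
# Back-of-the-envelope estimate of a 3Node's monthly electricity use and cost.
# The default price per kWh is an assumed example; use your local rate instead.
def monthly_cost_usd(avg_watts, price_per_kwh=0.10, hours=730):
    kwh = avg_watts * hours / 1000  # watt-hours -> kilowatt-hours
    return kwh * price_per_kwh

print(round(monthly_cost_usd(20), 2))   # compact office PC (~20W)   -> 1.46
print(round(monthly_cost_usd(100), 2))  # full size server, idling   -> 7.3
```

A node under active load will draw more than these idle figures, so treat them as a lower bound.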
+ +For more information, read the section [Calculate the Total Electricity Cost of Your Farm](../farmers/farming_optimization/farming_costs.md#calculate-the-total-electricity-cost-of-your-farm) of the Farming Guide. + + + +### Has anyone run stress tests to know the power consumption at heavy load of certain 3Nodes? + +The community is starting to gather some data on this. As of now, we know that an R720 with 2x 2690v2 CPUs, a 4TB NVMe SSD (P4510) and 320GB RAM will draw 390W at 100% load. With 2x 2650L v2, it's around 300W with fans at full speed. More info will be added as we gather more data. + +### Can the Titan 3Node be run on PoE? (Power Over Ethernet) + +Titans don't come equipped for Power Over Ethernet (PoE). If you have a NUC based Titan there are some PoE lids that might be compatible. + + + +### What is the relationship between the 3Node's resources and bandwidth? + +A 3Node connects to the ThreeFold Grid and transfers information, whether it is in the form of compute, storage or network units (CU, SU, NU respectively). The more resources your 3Nodes offer to the Grid, the more bandwidth will be needed to transfer the additional information. + + +### What is the bandwidth needed when it comes to running 3Nodes on the Grid? + +The strict minimum for one Titan is 1 mbps of bandwidth. + +If you want to expand your ThreeFold farm, you should check the following to make sure your bandwidth will be sufficient when there is Grid utilization. + +> min Bandwidth per 3Node (mbps) = 10 * max((Total SSD TB / 1 TB),(Total Threads / 8 Threads),(Total GB / 64 GB)) + 10 * (Total HDD TB / 2) + +This equation means that for each TB of HDD you need 5 mbps of bandwidth, and for each TB of SSD, 8 threads and 64GB of RAM (whichever is higher), you need 10 mbps of bandwidth. + +This means a proper bandwidth for a Titan would be 10 mbps. As stated, 1 mbps is the strict minimum for one Titan.
+ +The bandwidth needed for a given 3Node is not yet set in stone and you are welcome to participate in the ongoing [discussion on this subject](https://forum.threefold.io/t/storage-bandwidth-ratio/1389) on the Forum. + + + +### Can I run Zero-OS on a virtual machine? + +You can. But you won't farm TFT. To farm TFT, Zero-OS needs to be on bare metal. + + +### Is it possible to build a DIY 3Node with a VMware VM? + +It wouldn't be possible to get farming rewards from such a configuration. You need to run Zero-OS on bare metal; no virtual machine is permitted. Indeed, to farm TFT you need bare metal. A virtual machine will not work. Furthermore, all disks of a 3Node need to be wiped completely, thus no other OS can be on the 3Node. + +It would be possible to simply run Zero-OS as a VM on VMware, but as stated, it wouldn't farm. + + +### Can I run a 3Node on another operating system, like Windows, macOS or Linux? + +No. ThreeFold runs its own operating system (OS), which is Zero-OS. You thus need to start with completely wiped disks. + +### What is the minimum SSD requirement for a 3Node server to farm ThreeFold tokens (TFT)? + +You need a theoretical minimum of 500 GB SSD on a desktop or server. Less could work. + + + +### Is it possible to have a 3Node server running on only HDD disks? + +This is not possible. A 3Node needs at least one SSD disk of 500 GB. + + + +## Building a 3Node - Steps and Details + + +### How can I be sure that I properly wiped my disks?
+ +A wiped disk has: +- no label +- no partition +- no filesystem +- only zeroes + +On Linux, to see if the disk is composed only of zeroes, use this command (example with disk sda): + +> cmp /dev/sda /dev/zero + +If there are only zeroes, you should get the output: + +> cmp: EOF on /dev/sda + +You can also use this command: + +> sudo pv /dev/sda | od | head + +Here are some useful Linux commands to make sure there are no partitions, no labels, no filesystems and that the disks are filled with zeroes only: + +> sudo fdisk -l +> +> sudo parted -l +> +> sudo parted /dev/sda 'print' +> +> lsblk -f +> +> lsblk --raw + + + +### If I wipe my disk to create a new node ID, will I lose my farming rewards during the month? + +No, you wouldn't lose any farming rewards. You will get both the rewards for your previous node ID's uptime and also the new node ID's uptime. + +Note that this is the case with the current farming rewards based on total uptime, without any minimum threshold. + + + +### My disks have issues with Zero-OS and my 3Nodes. How can I do a factory reset of the disks? + +> Warning: this is destructive. It erases the disk sda in this example. + +Boot a Linux live session in Try mode and run the following command: + +> sudo badblocks -svw -b 512 -t 0x00 /dev/sda + +*In this example, the disk selected is sda. Choose the proper disk name for your situation (e.g. sdb, sdc, etc.). + +This line will read and (over)write zeroes (0x00) everywhere on the disk. + +To understand the line of code, note that 512 is the block size and that without -b BLOCKSIZE, the process would simply go slower. +0x00 represents the zero byte. + + + +This will take some time, but it should reset the disk and hopefully fix any issues. + +Note: the badblocks operation progresses at around 1 percent per minute on a 2TB SSD disk. + + + +### Before doing a bootstrap image, I need to format my USB key. How can I format my USB key?
+ +*Note that BalenaEtcher will format and burn your bootstrap image in the same process. See the next Q&A for more details. + +Windows: This is done easily with diskpart. Here are the steps needed (with disk X as an example, make sure you choose the correct disk): run Command Prompt as an administrator (right-click option), write *diskpart*, then in diskpart write *list disk*, choose your disk and write *select disk X*, write *clean*, write *create partition primary*, write *format fs=fat32*, then write *assign*. If for any reason the process doesn't work, quit diskpart and redo the whole thing. This should fix any bug. Be cautious with diskpart: it's destructive. + +macOS: This is done easily with Disk Utility. Go to Disk Utility. Select your USB key, click on *erase* on the top, write a name for your USB key, then choose a format (MS-DOS (FAT) if USB key < 32GB, exFAT if USB key > 32GB), then click *erase*, then click *done*. + +Linux: In the Terminal, write *df*, find your disk (here we use sdX), write *sudo umount /dev/sdX*, write this line (with the proper FORMAT) *sudo mkfs.FORMAT /dev/sdX* [FORMAT= vfat for FAT32, ntfs for NTFS, exfat for exFAT], then finally verify the formatting by writing *sudo fsck /dev/sdX*. + +### What do you use to burn (or to load) the Zero-OS bootstrap image onto a USB stick? + +For macOS, Linux and Windows, you can use [BalenaEtcher](https://www.balena.io/etcher/) to load/flash the image on a USB stick. This program also formats the USB in the process. Rufus can also be used for Windows. + +Also, for Linux systems, you can transfer the downloaded image with the dd command: *dd if=created_boot_loader_file.img of=/dev/sd?* where the input file is the file downloaded from https://bootstrap.grid.tf and the output file (device) is the USB stick device. + + +### Should I do a UEFI image or a BIOS image to bootstrap Zero-OS? + +It depends on your 3Node's system. Newer computers and servers will accept UEFI.
If it does not work with UEFI, please try with the options ISO (BIOS CD/DVD) or USB (BIOS image) on the [ThreeFold bootstrap website](https://bootstrap.grid.tf). Read the next Q+A for more information on BIOS/UEFI. + + + +### How do I set the BIOS or UEFI of my 3Node? + +You can read this [documentation](../farmers/3node_building/5_set_bios_uefi.md) to learn more about BIOS and UEFI settings for a DIY 3Node. + + + +### For my 3Node server, do I need to enable virtualization in BIOS or UEFI? + +Yes, you should enable virtualization. On Intel, it is denoted as *CPU virtualization* and on ASUS, it is denoted as *SVM*. Make sure virtualization is enabled and look for the precise terms in your specific BIOS/UEFI. + + + +### How can I boot a 3Node server with a Zero-OS bootstrap image? + +Plug in the USB key containing the Zero-OS bootstrap image with your farm ID, then power on your 3Node. If the BIOS/UEFI is set correctly and the disks are all wiped, it should boot correctly the first time. If you have any problem booting your 3Node, read the section [Troubleshooting and Error Messages](#troubleshooting-and-error-messages) of the FAQ. + + + +### The first time I booted my 3Node server, it says that the node is not registered yet. What can I do? + +The first time you boot a 3Node, you will see the message: “This node is not registered (farmer: NameOfFarm)”. This is normal. The Grid will create a node ID and you will be able to see it on screen. This can take a couple of minutes. + +If after some time (a couple of hours), the 3Node doesn't get registered, there might be something off with the Grid connection. You can then try to reboot the 3Node, or wait and boot it later. If it persists, you can check the rest of the Troubleshooting section of the Farmer FAQ, or ask for help in the ThreeFold Telegram farmer chat or the ThreeFold chat. + + + +### The first time I boot my 3Node, the node gets registered but it says cache disk : no ssd. What can I do?
+ +This probably means that you either haven't connected an SSD or that you need to wipe your SSD disk(s). Zero-OS runs on bare metal and needs a minimum of one SSD disk (min 500GB & 50 GB per CU). You will see "cache disk : OK" when it works. + + + +### The first time I boot my 3Node, the node gets registered and it says cache disk : OK, but the table System Used Capacity is empty. What can I do? + +Most of the time, just wait and data will appear. If you want to be sure your 3Node is online on the Grid, you can check the [Node Finder](https://dashboard.grid.tf/), which fetches information every 2 hours. If the issue persists, first try simply rebooting your 3Node. + + + +### I have a relatively old server (e.g. Dell R710 or R620, Z840). I have trouble booting Zero-OS. What could I do? + +Sometimes, Zero-OS will not boot in UEFI mode on older servers. In that case, try to boot in BIOS mode. Use either a USB key or the CD/DVD optical drive (the 4th and 5th options on https://bootstrap.grid.tf/) and make sure to select BIOS and not UEFI mode in your server settings. + + +### I connected a SATA SSD to a CD-DVD optical drive adaptor. My system does not recognize the disk. What can I do? + +Try to set AHCI mode instead of Legacy mode in SATA settings in the BIOS. + + +### Can someone explain what should I put in the Public IP part of my farm? Should I just insert my Public IP and Gateway (given by my ISP)? + +Assuming you are a DIY farmer and operate from your home, this field can be left blank. You do not have to fill in any details. + +The add IP option is for farmers who have a block of IP addresses routed to their router (in data centers mostly) and want to present “dedicated IP” addresses for deployments. For more information on how to set the public configuration, go to [this link](https://library.threefold.me/info/manual/#/manual__public_config). + + + + + +## Farming Optimization + +### What is the difference between a ThreeFold 3Node and a ThreeFold farm?
What is the difference between the farm ID and the node ID? + +A farm is a composition of one or many 3Nodes. A 3Node is a computer connected to the ThreeFold Grid. Each farm has its farm ID and each 3Node has its node ID. + + + +### How can I know how many GB of SSD and RAM do I need? + +You need 50 GB of SSD per compute unit (CU) and a minimum of 500 GB SSD and 2 GB of RAM per 3Node. + +A 3Node has, in general, 2 compute units (CU) per thread. Thus, for peak optimization, you need 100 GB SSD and 8GB RAM per thread. + +### What is the optimal ratio of virtual cores (vcores or threads), SSD storage and RAM memory? What is the best optimization scenario for a 3Node, in terms of ThreeFold tokens (TFT) farming rewards? + +In short, for peak optimization, aim for 100 GB SSD of storage and 8GB RAM of memory per virtual core (vcore or thread). + +For example, a 32-thread (32 vcores) 3Node would need 3.2 TB SSD and 256GB RAM to be optimal, reward-wise. +That is: 32 * 100 = 3200 GB SSD = 3.2TB SSD, and 32 * 8 = 256 GB RAM total. + +Adding more GB of RAM would not increase your TFT rewards. You would need more vcores if you want to expand. + +NB: This is purely based on reward considerations. Some users might need different ratios for different specific uses of the Grid. + + + +### What does TBW mean? What is a good TBW level for a SSD disk? + +TBW means Terabytes Written. TBW directly measures how much you can write cumulatively into the drive over its lifetime. For your 3Node, it can be a good idea to prioritize a minimum ratio of 500 TBW per 1TB for SSD. + +*Note that TBW is not a technical specification, but a general claim from the manufacturer. For this reason, it can also be good to check the warranty of the disk. For example, if a manufacturer offers a 5-year warranty on its product, it indicates that the company thinks its product will last a long time. + + +### Are SATA and SAS drives interchangeable? + +This goes only one way.
You can put a SATA drive in a SAS slot, but you can’t put a SAS drive in a SATA slot. See the [next question](#what-is-the-speed-difference-between-sas-and-sata-disks) for more information. + + +### What is the speed difference between SAS and SATA disks? + +One of the big differences between SATA and SAS is the transfer speed. Using SATA disks with SAS cables, you will be limited by the SATA transfer speed. + +* Sata I : 150 MB/s +* Sata II : 300 MB/s +* Sata III : 600 MB/s +* SAS : 600-1500 MB/s + +Note: You will most probably need to re-flash the raid card if you use the front panel disks (onboard storage) of your server. + + + + +### Is it possible to do a graceful shutdown to a 3Node server? How can you shutdown or power off a 3Node server? + +There are no "graceful" shutdowns of 3Nodes. You can shutdown a 3Node from the software side. You need to shut it down manually directly on the hardware. 3Nodes are self-healing and if they suddenly power down, no data or information will be lost. + + + + +### Is it possible to have direct access to Zero-OS's core to force a reboot? + +No, this is not possible. The general philosophy with Zero-OS is: no shell, no GUI, and no remote control. In other words, anything that could potentially provide attack surface is off the table. This ensures a high security level to Zero-OS and the ThreeFold Grid in general. To reboot a 3Node, you have to do it manually. + + + +### Do I need some port forwarding in my router for each 3Node server? + +No, this is not needed. + + +### Are there ways to reduce 3Node servers' noises? + +To reduce the noise, you can remove all the unnecessary cards in the servers as well as the HDD disks if you don't use them. Unplugging the SAS cables can also help. You can also set the fans to adjust their speed instead of being constant. + +At the end of the day, servers were manufactured for durability and efficiency, and not for being quiet. 
Most servers are placed in server rooms where noise doesn't matter much. + + + +### I built a 3Node out of old hardware. Is it possible that my BIOS or UEFI has improper time and date set as factory default? + +Yes. Make sure you have the correct time and date in BIOS to avoid errors when trying to boot Zero-OS. It might not cause any problems, but sometimes it does. + + + +### I have rack servers in my ThreeFold farm. Can I set rack servers vertically? + +In general, it is not recommended to set rack servers vertically as they were designed to be laid flat in racks. That being said, if you want to set your rack vertically, here are some general rules to follow. Do so at your own risk. + +First, make sure the parts in the servers are well installed and that they will not fall if laid vertically. Second, and foremost, you want to make sure that there will not be any overheating. This means to make sure you don't block the front and rear of the unit, so heat can dissipate thought the vents. + +If you want to put the rack vertically with the longest side of the rack laying upward, having the power supply units (PSUs) on the very top will ensure that heat dissipate well. + + + + +## Farming and Maintenance + +### How can I check if there is utilization on my 3Nodes? + +To see if there is utilization on your 3Node, you can consult the [TF Dashboard](https://dashboard.grid.tf/), go to the Farm section and consult the information under your 3Nodes. Note that the quickest way is to check if there are CPUs reserved. + + + +### Do I need the Zero-OS bootstrap image drive (USB or CD-DVD) when I reboot, or can I boot Zero-OS from the 3Node main hard drive? + +It is advised to keep the bootstrap image plugged in your 3Node. Once your node has been booted with Zero-OS via the USB key, you can remove the USB key, but if something happens and the node needs to reconnect with the network, it won’t be able to do so. 
We advise people to let the USB key always in so the node can reconnect with the network if needed. + +You need the bootstrap image device plugged in every time you reboot a 3Node and it's a good practice to keep it plugged in all the time. The technical explanation is: (1) at first boot, it creates a minimum requirement on SSD which is used as cache (2) each time the system restarts it reuses this SSD piece but Zero-OS bootstrap is also needed to download the last image. Indeed, image is not stored on the machine and also no boot loader is installed. + + +### It's written that my node is using 100% of HRU. What does it mean? + +HRU stands for your HDD space available. It means that you are using 100% of the HDD space available, or equivalently that you have no HDD on your system. + + +### On the ThreeFold Node Finder, I only see half of the virtual cores or threads my 3Node has, what can I do? + +Check in the BIOS settings and make sure you have enabled Virtual Cores (or Hyper Threading/Logical Cores). + + + +### Why are the 3Nodes' resources different on the ThreeFold Node Finder and the ThreeFold Dashboard? + +There is a difference because one shows the resources in GiB and the other in GB. It's just a way to display information, the resources are ultimately the same, but shown differently. 1 GiB = 1.073741824 GB. + + + +### How can I test the health of my disks? + +There are many ways to test the health of your disks. For some information on this, you can have a look at this [TF forum post](https://forum.threefold.io/t/testing-ssd-health/3436). + + + +### How can I transfer my 3Node from one farm to another? + +To transfer your 3Node from one farm to another, simply make a new bootstrap image of the new farm, connect the new bootstrap image to the 3Node and restart the node. + +Note that you can use [balenaEtcher](https://etcher.balena.io/) to format and burn the bootstrap image at the same time. 
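If you prefer the command line over balenaEtcher, a bootstrap USB key can also be written with `dd`. This is a minimal sketch, not the official procedure: it assumes you already downloaded the image for your farm from https://v3.bootstrap.grid.tf/ and saved it as `zos-bootstrap.img` (a placeholder name), and `/dev/sdX` is a placeholder you must replace with your actual USB device, since `dd` overwrites the target entirely.

```shell
# Identify the USB key first (e.g. /dev/sdb) -- double-check, dd is destructive!
lsblk

# Write the downloaded bootstrap image to the USB key.
# /dev/sdX and zos-bootstrap.img are placeholders; the guard skips
# the write when the placeholder device does not exist.
[ -b /dev/sdX ] && sudo dd if=zos-bootstrap.img of=/dev/sdX bs=4M status=progress conv=fsync \
  || echo "replace /dev/sdX with your USB device before running dd"
```

The `conv=fsync` flag makes `dd` flush writes to the device before exiting, so the key is safe to remove once the command returns.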
+
+
+### What do CRU, MRU, HRU and SRU mean on the ThreeFold Node Finder?
+
+CRU is the number of virtual cores. MRU is the amount of RAM (memory) in GB. HRU is the HDD storage capacity and SRU is the SSD storage capacity.
+
+
+### I have more than one ThreeFold 3Node farm, but I want all my 3Nodes on only one farm. How can I put all my 3Nodes on one farm? How can I change the farm ID of my 3Node?
+
+If you have more than one 3Node associated with more than one farm, it is possible to put all the 3Nodes under only one farm.
+The following shows how to change the farm ID of your 3Node.
+
+For each 3Node, you need to create a new USB key Zero-OS bootstrap image associated with the chosen farm ID. This will ensure your 3Node links to the chosen farm.
+To create a new USB key Zero-OS bootstrap image, you can download the bootstrap image [here](https://v3.bootstrap.grid.tf/) or you can clone an existing USB key bootstrap image associated with the chosen farm ID. In both cases, you can use software like [BalenaEtcher](https://www.balena.io/etcher/).
+Once you have the new USB key bootstrap image, plug it into the 3Node and reboot the server.
+After the reboot, the 3Node will have the same node ID as before, but it will now be associated with the chosen farm ID.
+
+Note: If you want a new node ID for your 3Node, you will need to wipe the main SSD of your 3Node.
+
+
+### How can I know if my 3Node is online on the Grid?
+
+There are multiple answers to this.
+
+(1) You can plug a monitor into your 3Node and check its status directly.
+
+(2) You can check your 3Node's status by using the Node Finder of the [ThreeFold Dashboard](https://dashboard.grid.tf/).
+
+(3) You can also use the unofficial, but highly useful, [ThreeFold Node Status Bot](https://t.me/tfnodestatusbot) on Telegram.
+
+
+### I booted my 3Node and the monitor says it's online and connected to the Grid, but the ThreeFold Node Finder says it is offline. What can I do?
+
+The Node Finder refetches information every 2 hours. You can wait 2 hours and see if the problem persists. You can also reboot the 3Node and see whether the Node Finder then shows it as online.
+
+
+### My 3Node does show on the ThreeFold Node Finder, but not on the ThreeFold Dashboard, what can I do?
+
+If your 3Node is correctly registered on the ThreeFold Grid but you cannot see it on the ThreeFold Dashboard, there are several ways to solve this issue.
+
+One way is to go to the TF Dashboard, select *change the address* and then simply re-paste the same address; the extension will then ask you to re-sign. Usually this fixes the issue.
+
+If the first method did not work, you can try to remove the account and add it back in the Polkadot.js extension. Before doing so, make sure you have a backup of your seed phrase, as you will need it to re-enter the account. Your 3Node should then appear.
+
+
+### If I upgrade my 3Node, will it increase my rewards?
+
+Yes. Use the simulator to verify the additional rewards. Note that, currently, upgrades are not recognized until the next minting cycle.
+
+
+### I booted my 3Node for the first time at the beginning of the month, then I did some upgrade or downgrade, will the ThreeFold Grid recognize the new hardware? Will it still be the same 3Node ID?
+
+Downgrades are counted in the current minting cycle, so you mint for the entire cycle at the downgraded specs.
+
+Minting only considers a single configuration per node per cycle: the minimum configuration seen at any point during the cycle. With this logic in mind, upgrades will only be recognized at the next minting cycle. Note that your 3Node will have a new ID and a new price entry if you changed the SSD containing the 3Node ID.
+
+
+### Is it possible to ask the 3Node to refetch the node information on the monitor?
+
+To refetch the information, press "q" on the keyboard.
+
+
+### When does Zero-OS detect the capacity of a 3Node?
+
+Zero-OS only detects capacity right after it boots.
+
+
+### Where is the 3Node ID stored?
+
+For the current Zero-OS version, the node ID is stored on the first SSD you install in your 3Node*. If you change or erase this disk, it will lose its current 3Node ID.
+
+*Note that if, at the first boot, you install a SATA SSD and an NVMe SSD at the same time, the node ID will be registered on the SATA SSD.
+
+
+### Is there a way to back up my node ID in order to restore a 3Node if the disk with the node ID gets corrupted or breaks down?
+
+Yes, you can make a backup, but as of now this process must be done manually.
+
+One way is to boot a Linux USB image in *Try* mode and open the *Files* folder of the disk that contains the node ID. Click on *+ Other Locations*. Then, open the folder that contains the folder *zos-cache* and open the folder *identityd*. In this folder, select the file *seed.txt* and make a copy of it in a safe place (USB key, notebook, e-mail, etc.). If the disk which contains your node ID is damaged, simply reboot the 3Node with a new disk and the Zero-OS bootstrap image. Your 3Node will connect to the Grid and be assigned a new node ID. Once this is done, reboot the 3Node, but this time with the Linux USB image, go to the same folder as before and replace the new *seed.txt* file with the old one. Reboot your 3Node with the Zero-OS bootstrap and you're done.
+
+
+### If I upgrade my 3Node, does it change the node ID?
+
+Upgrades won't change the node ID, unless you replace the SSD where the node ID is stored (see above for more info on this).
+
+
+### Does it make sense to recreate my node when the price drops?
+
+Short answer: no. Long answer: [click here](https://forum.threefold.io/t/does-it-make-sense-to-recreate-my-node-when-the-price-drops/).
+
+
+### My 3Node lost power momentarily and I had to power it back on manually. Is there a better way to proceed?
+
+In your BIOS, go to *Security Settings* and choose *Last* for *AC Power Recovery*. If you want, set a delay between 60 and 240 seconds. This will ensure your 3Node does not power on and off frantically if your power flickers on and off, which could damage the unit. On other BIOSes, the setting is called *After Power Loss*; choose *Previous State*.
+
+*Depending on your 3Node, the parameter might have a different name.
+
+
+### Do I need to change the BIOS battery?
+
+It can be a good idea to change it when you buy an old desktop or server, to make sure it lasts long. When the battery runs out of power, the 3Node will lose access to its BIOS settings if it loses power momentarily.
+
+
+### Do I need to enable UEFI Network Stack?
+
+You don't need to if you use removable media (e.g. a USB key) as the boot image. It is only needed if you boot from a PXE server on your network. You should keep this feature disabled; enable it only if you know 100% what you are doing, as it might otherwise introduce network security vulnerabilities.
+
+
+### I want power redundancy for my 3Nodes. I have two PSUs on my Dell server. What can I do?
+
+Make sure you enable the Hot Spare feature. This feature is accessible under iDRAC Settings - Power Configuration. Other servers might have this function under a different name and configuration. Check the server's manual for more details.
+
+
+### Why isn't there support for RAID? Does Zero-OS work with RAID?
+
+RAID is a technology that has brought resilience and security to the IT industry, but it has some limitations that we at ThreeFold did not want to get stuck with. We developed a different (and more efficient) way to store data reliably. Please have a look [here](https://library.threefold.me/info/threefold#/cloud/threefold__cloud_products?id=storage-quantum-safe-filesystem).
+
+This Quantum Safe Storage overcomes some of the shortfalls of RAID and is able to work over multiple nodes geographically spread across the TF Grid.
+
+
+### Is there a way to bypass RAID so that Zero-OS has bare metal access to the disks? (No RAID controller between the storage and the Grid.)
+
+Yes, it is possible. "You can use the onboard storage on a server without RAID. You can [re-flash](https://fohdeesha.com/docs/perc.html) the RAID card, turn on HBA/non-RAID mode, or install a different card. No need for RAID." @FLnelson It's usually easy to set servers such as an HP ProLiant to HBA mode. For Dell servers, you can either cross-flash the RAID controller with an "IT mode" firmware (see this [video](https://www.youtube.com/watch?v=h5nb09VksYw)) or get a Dell H310 controller (which has a non-RAID option). Otherwise, you can install an NVMe SSD with a PCIe adaptor and turn off the RAID controller.
+
+
+### I have a 3Node rack server. Is it possible to use an M.2 to SATA adapter in order to put the M.2 SATA disk in the HDD bay (onboard storage)?
+
+Yes, it is possible. You will most probably need to bypass the RAID controller for Zero-OS to properly access the onboard storage. See the previous question.
+
+
+### My 3Node uses only PCIe adapters and NVMe SSD disks. Do I need the RAID controller on?
+
+The onboard RAID controller is not linked to your PCIe SSDs. In this case, you can switch the RAID controller off.
+
+
+### Can I change the name of my farm on polkadot.js?
+
+It's possible to rename farms through the Polkadot UI. For mainnet, use [this link](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/extrinsics).
+
+1. Under *using the selected account*, select the account that owns the farm
+2. Choose *tfgridModule* from the dropdown menu *submit the following extrinsic*
+3. Select *updateFarm(id, name, pricingPolicyId)*
+4. Under *name: Bytes*, write the new farm name
+5. Finally, click on the *Submit Transaction* button at the bottom right of the screen
+
+
+### How can I delete a farm on polkadot.js?
+
+The ability to delete a farm was removed from TF Chain due to concerns that nodes could be left without a farm, which would cause problems with billing.
+
+
+### I try to delete a node on the TF Dashboard, but it doesn't work. Is there any other way to proceed that could work?
+
+It's possible to delete nodes through the Polkadot UI. For mainnet, use [this link](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/extrinsics).
+
+1. Under *using the selected account*, select the account that owns the farm
+2. Choose *tfgridModule* from the dropdown menu *submit the following extrinsic*
+3. Select *deleteNodeFarm(nodeId)*
+4. Under *id: u32*, write the ID of the 3Node you want to delete
+5. Finally, click on the *Submit Transaction* button at the bottom right of the screen
+
+
+### My 3Node has 2 ethernet ports in the back, with AMT written above one of them. What does it mean? Can I use this port to connect my 3Node to the ThreeFold Grid?
+
+First, let's define the term AMT: it means Active Management Technology. Without going into too much detail, it is used to remotely access servers over the network at the BIOS level. Thus, you should plug the ethernet cable into the port next to the AMT port, not into the AMT port itself. You can explore AMT properties if you want remote access to your server.
+
+
+### My 3Node is based on the Z600, Z620 or Z820 hardware. Can I run it headless, i.e. without a GPU?
+
+For the Z600, there is a great [video on YouTube](https://www.youtube.com/watch?app=desktop&v=JgBYbaT-N-w).
+
+For the Z620 and the Z820, you need a variation on the video above. In the BIOS, go to File, then Replicated Setup, and select Save to Removable Storage Device. This will save a text file on your USB key. Then, in the text file, go to Headless Mode, remove the * in front of Disable and put it in front of Enable. Save the file and then go back into the BIOS. Now go to File, then Replicated Setup, and select Restore from Removable Storage Device.
+
+Running your 3Node without a GPU can save some power consumption as well as give you one extra slot for other hardware.
+
+
+### Is it possible to add a high-level GPU to rack servers to farm more TFT?
+
+Some farmers had success installing GPUs such as the RTX 3080 in servers as small as 2U (such as the R730). Connections such as a 250W 8-pin plug are needed on each riser. Generally, tower servers have more space to add a high-level GPU.
+
+Note: GPU farming will be implemented in the future.
+
+
+### If I change farm, will my node IDs change on my 3Node servers?
+
+You can move 3Nodes between farms without losing the node IDs. You simply need to change the boot media to one with the new farm ID and then reboot the 3Node. The 3Node will keep the same node ID, but it will now be associated with the new farm ID.
+
+
+## Troubleshooting and Error Messages
+
+### Is it possible to access the Error Screen or Log Screen?
+
+Yes! On the Zero-OS console, hit Alt-F2 to open the Error/Log Screen, and hit Alt-F3 to go back to the main screen.
+
+
+### What does it mean when I see, during the 3Node boot, the message: error = context deadline exceeded?
+
+In general, this message means that the ThreeFold Grid asked something of your 3Node, and your 3Node could not respond fast enough. It is usually necessary to read the following error message to understand the specific situation.
+
+### How can I fix the error messages "context deadline exceeded" accompanied by "node is behind acceptable delay with timestamp"?
+
+This often indicates that the real-time clock of the system is not synced with the current time. Different fixes have been reported for this issue.
+
+You can boot the node using an Ubuntu live image to sync the hardware clock. After that, you can reboot the node and it should boot normally.
+
+You can also fix this manually in the BIOS. Go to the BIOS settings and adjust the **Time** and **Date** settings.
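From the Ubuntu live session mentioned above, syncing the system and hardware clocks can be sketched as follows. This is a hedged example, assuming a systemd-based live image with network access; the `|| true` guards simply skip the steps on systems without systemd or RTC access.

```shell
# Show the current UTC time; if it is far off, node registration can fail
date -u

# Enable NTP synchronization so the system clock is corrected over the network
# (|| true: skip on systems without systemd)
sudo timedatectl set-ntp true || true

# Copy the corrected system time to the hardware clock (RTC),
# so the BIOS clock is right on the next boot (|| true: skip without RTC access)
sudo hwclock --systohc || true
```

After rebooting into Zero-OS, the "node is behind acceptable delay" message should no longer appear if the clock was the cause.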
+
+You can also try to adjust the clock via NTP over the network, if that applies to your case.
+
+### I try to boot a 3Node, but I get the error: "No Route to Host on Linux". What does it mean?
+
+There are many potential causes. Perhaps the host is offline or the service isn't running. This is usually the reason with the TF Grid: it means the Grid is not responsive. In this case, try to boot the 3Node later. If the problem persists, ask TF support.
+
+There can also be other reasons. You might have connected to the wrong port. Perhaps you have configured iptables to block connections on that port. Your DNS might be improperly configured. You might have an incorrect network or host configuration. Many troubleshooting paths are possible. Here's a [good place to start](https://www.maketecheasier.com/fix-no-route-to-host-error-linux/).
+
+
+### How can I fix the error: "Network configuration succeed but Zero-OS kernel could not be downloaded" when booting a 3Node?
+
+To fix the error "Network configuration succeed but Zero-OS kernel could not be downloaded", you can try to restart the router and reboot the 3Node. This usually fixes the issue. If this doesn't work, check whether the router is still functional; the cause of this issue might be a broken router.
+
+
+### Using SAS disks, I get the error: "No ssd found, failed to register". What can I do to fix this?
+
+First make sure to wipe the disks and then boot your 3Node. If you've wiped the disks and it doesn't work, it's been reported that running the clean command in diskpart on Windows can fix this issue.
+
+
+### When booting a 3Node, how to fix the error: "no disks: registration failed"?
+
+There can be many different fixes for this error. Here are some troubleshooting tips to test separately:
+
+* In the BIOS, enable AHCI
+* Make sure to [wipe the disks](../farmers/3node_building/4_wipe_all_disks.md) of the 3Node
+* If the 3Node has a RAID controller:
+  * Disable the RAID controller, OR;
+  * [Flash the RAID controller](https://fohdeesha.com/docs/perc.html) (i.e. cross-flashing), OR;
+  * Change the controller to a Dell H310 controller (for Dell servers)
+* Try the command **badblocks** (replace **sda** with your specific disk). Note that this command will delete all the data on the disk
+  * ```
+    sudo badblocks -svw -b 512 -t 0x00 /dev/sda
+    ```
+
+
+### My SSD is sometimes detected as HDD by Zero-OS when there is a reboot. Is there a fix or a way to test the SSD disk?
+
+If your SSD shows up as HDD, you can usually reboot the 3Node and Z-OS adjusts correctly.
+
+Anyone frequently experiencing this issue, where Z-OS sometimes detects an SSD as HDD, can try the following:
+
+* Boot up a live Ubuntu Desktop image
+* Run the benchmark utility within the Disks app
+* Check if the seek time of the disk is sufficient for Z-OS
+  * If the seek time is above 0.5 ms, Z-OS will consider your SSD as HDD
+
+**Detailed Steps:**
+
+* Boot an Ubuntu Linux live USB
+* Install the **Disks** utility (package **gnome-disk-utility**) if it isn't already installed:
+  * ```
+    sudo apt install gnome-disk-utility
+    ```
+* Open the application launcher and search for **Disks**
+* Select your disk
+* Click on the three-dot menu
+* Select **Benchmark Disk...**
+  * Use the default parameters
+  * **Transfer rate**: this is not relevant for our current test
+    * You can set it to the minimum (e.g. 2)
+  * **Sample size**: 10 MB is sufficient
+* Check the average access time against the [ThreeFold repository](https://www.github.com/threefoldtech/seektime)
+  * Check the seek time for HDD and SSD
+  * An SSD needs to be <= 0.5 ms
+* If the result is above 0.5 ms, this is why Z-OS doesn't recognize the disk properly
+  * You can then run diagnostics (e.g. smartmontools)
+  * If this is not fixable, you should change the disk (e.g. take a better-performing disk)
+
+Note: The green dots on the output represent seek time and that's what Z-OS is looking at. Specifically, it checks that the average seek time is below 0.5 ms. If the seek time is above this, Z-OS will consider your SSD as HDD.
+
+
+### When booting a 3Node, I get the message: failed to register node: failed to create node: failed to submit extrinsic: Invalid Transaction: registration failed. What could fix this?
+
+The most probable fix for this error is simply to properly wipe your disk(s):
+
+* [Wipe your disks on Linux](#what-can-you-do-to-zero-out-your-disks-how-can-i-wipe-the-disks-of-my-3node-server-with-linux)
+
+* [Wipe your disks on Windows](#how-can-i-wipe-a-disk-with-windows)
+
+
+### I try to boot a 3Node, but I get the message no route with default gateway found. What does it mean?
+
+First, let's see the main terms. A default gateway acts as an access point to other networks, in this case the TF Grid, when there is a back-and-forth exchange of data packets.
+
+While the last question implied a communication problem on the Grid's side, this error message usually means that the 3Node itself has a communication problem; in short, it has difficulty reaching the TF Grid. There are many ways to troubleshoot this error. The most direct solution: make sure you have a direct connection with your Internet Service Provider (ISP), i.e. your 3Node should be connected to a router or a switch via an ethernet cable. Wifi doesn't work. Make sure your DHCP is set correctly.
+
+If the problem persists, check the default gateway of your 3Node and then make sure your router can reach it.
+
+*See the next Q&A for a possible solution.
+
+
+### I have trouble connecting the 3Node to the Grid with a 10 Gb NIC card. What can I do?
+
+As of now, Zero-OS sometimes has trouble with 10 Gb NIC cards. The easiest solution is to connect your 3Node with the 1 Gb NIC card. This should solve the issue. More fine-tuning might be needed to get your 3Node to work with a 10 Gb NIC card. A future Zero-OS version might solve this issue.
+
+
+### I switched the ethernet cable to a different port while my 3Node was running. The internet connection is lost. What can I do?
+
+When your 3Node boots, Zero-OS marks the NIC port. This means you cannot change NIC ports while your 3Node is running. You can either put the ethernet cable back in the initial NIC port, or reboot the 3Node. At boot, Zero-OS will mark the new NIC port as the main entry.
+
+
+### I get the error Certificate is not yet valid when booting my 3Node server. What can I do?
+
+Make sure your firmware is up to date. If necessary, reinstall it. You might have to install and then re-install the firmware if your system is very old.
+
+
+### When running wipefs to wipe my disks on Linux, I get either of the following errors: "syntax error near unexpected token" or "Probing Initialized Failed". Is there a fix?
+
+Many different reasons can cause this issue. Sometimes you get this error because you are accidentally trying to wipe your boot USB. If this is not the case, and you really are trying to wipe the correct disk, here are some fixes to try out, with the disk `sda` as an example:
+
+* Fix 1:
+  * Force the wiping of the disk:
+  * ```
+    sudo wipefs -af /dev/sda
+    ```
+* Fix 2:
+  * Unmount the disk, then wipe it:
+  * ```
+    sudo umount /dev/sda
+    ```
+  * ```
+    sudo wipefs -a /dev/sda
+    ```
+
+
+### I did a format on my SSD disks, but Zero-OS still does not recognize them. What's wrong?
+
+Formatting is one thing, but to boot properly, Zero-OS needs to work on a completely wiped disk. Thus, make sure you [wipe your disks](#what-can-you-do-to-zero-out-your-disks-how-can-i-wipe-the-disks-with-linux); formatting is not enough.
+
+
+### I have a Dell Rx10 server (R610, R710, R910). When I boot Zero-OS I get the message Probing EDD and the 3Node doesn't boot from there. What can I do?
+ +For the R610 and 710, you can simply re-flash the card. See [this link](https://fohdeesha.com/docs/perc.html) for more information. For the 910, you can’t re-flash the card. In this case, get a LSI Dell card and it should work. (They are cheap when you buy them used online.) + + +### My 3Node doesn't boot properly without a monitor plugged in. What can I do? + +First, try to disable the "Halt On" mode in BIOS. If you do not have this option, try simply enabling the Legacy Support (Dell BIOS for example). If this doesn't work, try to plug in a Dummy Plug/Headless Ghost/Display Emulator in your 3Node. This will simulate a plugged monitor. This should fix the problem. + + +### My 3Node is running on the Grid, but when I plugged in the monitor, it states: Disabling IR #16. Is there a problem? + +In general, you can simply ignore this error statement. This error is linked to the Nvidia binary driver. It simply means that your 3Node lost connection with the graphic card (by unplugging and replugging the monitor for example). + + +### My 3Node won't boot without disabling the Secure Boot option, is it safe? + +In the case where you want to boot Zero-OS, disabling Secure Boot option is safe. With Secure Boot disabled, it can be easier or even necessary when it comes to booting Zero-OS. Secure Boot is used when you want to lock the BIOS/UEFI settings. + + + +### When I tried to boot my 3Node, at some point the screen went black, with or without a blinking hyphen or dash. What could cause this and what could I do to resolve the issue? + +There is a possibility that this happens because you are booting your 3Node on a HDD. A 3Node needs a minimum of 500GB of SSD to work properly. + +Also, make sure that you are using the correct boot option (Legacy BIOS or UEFI) in the Settings and that it corresponds to the correct booting image on the ThreeFold Bootstrap page. + +This problem often arises when you plugged your disks in the wrong controller. 
For example, try unpluging the disks from the SAS controller, and plug them in the SATA controller. Also, disable the SAS controller if needed. + +In a Legacy BIOS boot, make sure Legacy is enabled and disable *Data Execution Prevention* if possible. + +Also, it might have to do with your RAID controller configuration. Make sure this is properly set. For example, configuring all the HDD disks into one logical disk can fix this problem, or re-flashing the RAID card can also help. + + + +### My 3Nodes go offline after a modem reboot. Is there a way to prevent this? + +Yes, there are many ways to prevent this. An easy solution is to set the DHCP server to reserve local IPs for the 3Nodes MAC addresses. + +This problem is also preventable if your router stays online during the modem reboot. + +Indeed, rebooting the 3Nodes is necessary when there are local IP changes, as 3Nodes are addressed a local IP addresses when they are booted. + +The DHCP will addresses any local IP address that is available when you are booting a 3Node. Reserving local IP addresses is a good TF farming practice. + + + +### When I boot my 3Node, it reaches the Welcome to Zero-OS window, but it doesn't boot properly and there's an error message: failed to load object : type substrate..., what can I do? + +Usually simply rebooting the 3Node fixes this problem. + + + +### When I try to access iDRAC on a web browswer, even with protected mode off, I get the error The webpage cannot be found, what can I do? + +Open iDRAC in the Internet Explorer emulator extension (IE Tab) in Chrome, then update iDRAC. It should work elsewhere then. Sometimes, it will be needed to add "ST1=code" at the end of the IE Tab url. + + + +### When booting the 3Node, I get the error Network interface detected but autoconfiguration failed. What can I do? + +First make sure your network cable is plugged in and that your DHCP is working and responding. 
If you change the NIC port of the ethernet cable, make sure to reboot the 3Node so Zero-OS can change the NIC port attribution. + +Some farmers reported that this got fixed by simply powering off the 3Node(s), the router and modem for 2 minutes then powering it all back on. Resetting the modem and router (switch on the hardware) in the process can also help. + +If this doesn't work, try to upgrade the firmware of the NIC and the motherboard. If this still doesn't work, the NIC card might be broken. Try with another NIC card. + + + +### When I boot my Dell server, I get the message: All of the disks from your previous configuration are gone... Press any key to continue or 'C' to load the configuration utility. What can I do? + +Many changes to your server can lead to this message. + +Usually, the easiest solution is to reset the disk configuration in iDRAC's configuration utility. + +What can causes this message: + +1. During a new installation, the cables connecting to your external storage are not wired to the correct ports. +2. Your RAID adapter has failed. +3. Your SAS cables are not plugged properly or are malfunctioning. + +Note: Resetting the configuration will destroy all data on all virtual disks. Make sure you know what you are doing! In doubt, ask the TF community. + + + +### I have a Dell R620. In Zero-OS, I get the failure message No network card found and then the 3Node reebots after few seconds. The same happens for every LAN input. What can I do? + +The first thing to try here is to boot the server in BIOS mode instead of UEFI mode. If this doesn't fix the problem, try the following. + +Sometimes, this happens when the firmwares of BIOS, iDRAC, Lifecycle Controller and NIC are incompatible to each other. The solution is then to update them all correctly. Some problems can arise in the process. + +First, you should try to do the updates using iDRAC as you can update both iDRAC and BIOS there. 
If this does not work, try to update the BIOS, iDRAC and Lifecycle Controller separately, with a live Linux distro. Once this is done, the server should be able to do a live update, over HTTPS to the Dell support website, via the Lifecycle Controller. This would update the other components. For more details on this method, watch [this video](https://www.youtube.com/watch?v=ISA7j2BKgjI). + +Note: Some farmers have reported that the Broadcom NIC card does not work well with Zero-OS and that a standard Intel PCI NIC card replacement resolved the issue. This could be a more straightforward method if updating the firmware doesn't resolve the issue. + + + +### I am using FreeDOS to crossflash my RAID controller on a Dell server, but I can't see the RAID controller with the command Info. What can I do? + +Turn on the RAID controller in the BIOS, otherwise FreeDOS will not show you the RAID controller with the command Info. + + + +### Can I use a VGA to HDMI adaptor to connect a TV screen or monitor to the 3Node? I tried to boot a 3Node with a VGA to HDMI adaptor but the boot fails, what can I do? + +This might work, but farmers have reported that Zero-OS might have difficulties booting when this is done with a VGA/HDMI adaptor on a TV screen. This is most likely due to the TV screen not supporting the output once the system has loaded into Zero-OS. The easy fix to this issue is to use a standard computer monitor with a VGA plug. + + + +### When I try to boot my 3Node, the fans start spinning fast with a loud noise and the screen is black. What can I do to resolve this? + +There may be several causes to this issue. You can try to remove all the RAM sticks, clean the dust and then reseat the RAM sticks. If this doesn't resolve the issue, you can check the RAM sticks one by one to see if one is malfunctioning. This often resolves the issue. Also, some cables might not be properly connected.
+ + + +### When booting Zero-OS with IPV6 configurations, I get the errors (1) dial tcp: address IPV6-address too many columns in address and (2) no pools matches key: not routable. What can I do to fix this issue? + +This usually means that the attributed IPV6 address is not valid. It is also often caused when the DNS configuration does not resolve IPV6 correctly. + +To fix this issue, it is often necessary to adjust the IPV6 settings of the router and the modem. Confirming with your Internet service provider (ISP) that the IPV6 settings are properly configured could also be necessary to fix the issue. + + + + +### When booting a 3Node, Zero-OS downloads fine, but then I get the message: error no route with default gateway found, and the message: info check if interface has a cable plugged in. What could fix this? + +Make sure you have the network stack enabled in BIOS. If so, check your ethernet port and make sure that it's clean. Also make sure the ethernet RJ45 connectors are clean on both ends. If that does not work, verify the state of your SATA cables. If all this doesn't work, download and reinstall Zero-OS. + + + +### How can I update Dell and HP servers to Intel E5-2600v2, E5-2400v2 and E5-4600v2, when applicable? + +There are many resources online with steps on how to do this. You can check this [YouTube video](https://www.youtube.com/watch?v=duzrULLtonM) on Dell and HP servers, as well as this [documentation](https://ixnfo.com/en/hp-proliant-gen8-update-to-support-cpu-e5-2600v2-e5-2400v2-e5-4600v2.html) for HP ProLiant Gen8. + + + +### How can I update the firmware and driver of a Dell PowerEdge server? + +Dell has excellent documentation for this. Read [this](https://www.dell.com/support/kbdoc/en-us/000128194/updating-firmware-and-drivers-on-dell-emc-poweredge-servers) for the detailed steps. + + + +### When I boot a 3Node in UEFI mode, it gets stuck at: Initializing Network Device, is there a way to fix this?
+ +In short, booting the 3Node in BIOS mode instead of UEFI mode usually fixes this issue. + +You can make a bootable USB key with the USB option of the [Zero-OS bootstrap image page](https://bootstrap.grid.tf/). Make sure to boot your server using BIOS and not UEFI. In the boot sequence, set the USB key as your first boot choice. + + + +### When I boot my 3Node, it gets stuck during the Zero-OS download. It never reaches 100%. What can I do to fix this issue? + +Here are some ways to troubleshoot your 3Node when it cannot download Zero-OS completely (to 100%): + +* Sometimes, just rebooting the 3Node and/or trying a little bit later can work. +* It can help to reboot the modem and the router. +* Make sure your BIOS/UEFI is up to date. Updating the BIOS/UEFI can help. +* It can also help to set the correct date and time. + + + +### When booting a 3Node, I get the error=“context deadline exceeded” module=network error=failed to initialize rmb api failed to initialized admin mw: failed to get farm: farm not found: object not found. What can I do to fix this issue? + +Usually, the simple fix to this issue is to make sure that your bootstrap image is on the same network as your farm. For example, if you created your farm on the Main net, you should use a Main net Zero-OS bootstrap image. + + + +## ThreeFold Grid and Data + + +### How is the farming minting reward calculated? Is the Grid always monitoring my 3Node? + +The Grid uses an algorithm that does not continually monitor the 3Node. It does its best to determine uptime through occasional check-ins, which we call pings. The 3Node sends a ping to the Grid and the Grid sends a ping to the 3Node to confirm the reception. (Ping-Pong!) + +It’s helpful to understand that the Grid is really just 3Nodes and TF Chain (which itself is a collection of nodes). Nodes report their uptime by writing an entry on TF Chain, about once every two hours. These reports are used for minting.
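As a rough illustration of how such periodic uptime reports translate into a credited uptime figure (a simplified sketch only — the report format and the function are hypothetical, not the actual TF Chain minting code):

```python
# Simplified sketch: estimate credited uptime from periodic reports.
# Each report is (timestamp, uptime counter in seconds since boot).
# This is NOT the actual TF Chain minting algorithm, just an illustration.

def uptime_percentage(reports, period_seconds):
    """Estimate uptime % over a period from (timestamp, uptime) reports."""
    if not reports:
        return 0.0
    credited = 0
    prev_ts, prev_up = reports[0]
    for ts, up in reports[1:]:
        elapsed = ts - prev_ts
        # If the node's uptime counter grew by the elapsed wall time, it was
        # online for the whole interval; otherwise it rebooted, and we only
        # credit the uptime accumulated since the boot.
        credited += elapsed if up >= prev_up + elapsed else min(up, elapsed)
        prev_ts, prev_up = ts, up
    return 100 * credited / period_seconds

# A report every 2 hours (7200 s) for one day, with no reboot:
reports = [(i * 7200, i * 7200) for i in range(13)]
print(uptime_percentage(reports, 24 * 3600))  # 100.0
```

This also shows why a reboot only costs the time the node was actually offline: the uptime counter restarts, but the time accumulated since the boot is still credited.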
+ + + +### How does communication happen on the ThreeFold Grid at the 3Node's level? + +There are two ways to get information about nodes. One is to query TF Chain, and the other is to communicate with nodes directly. + + + +### What is the ThreeFold Node Status bot Telegram link? + +The link is the following: https://t.me/tfnodestatusbot. + + + +### How does the ThreeFold Node Status bot work? How can I use the ThreeFold Node Status bot to verify if my 3Node is online? + +1. Click on this link: https://t.me/tfnodestatusbot + +2. To subscribe a node, write the command */subscribe nodeID*. With the ID 100 as example, write: */subscribe 100* + +3. To verify the status of all your 3Nodes, write: */status* + +4. To verify the status of all your 3Nodes through Yggdrasil, write: */ping* + +Note: The bot should send you alerts when it considers any registered node to be offline. + + + +### How does the Telegram Status Bot get information from my 3Node? My 3Node is online on the ThreeFold Node Finder, but offline on the Telegram Status Bot, is this normal? + +The status bot communicates directly by sending pings to the nodes over Yggdrasil every five minutes. Therefore, it will report on temporary network interruptions that might not affect your total uptime calculation as used for minting. + + + +### I noticed that when I reboot my 3Node, the uptime counter on the ThreeFold Node Finder goes back to zero. Does it mean I lose uptime and the uptime starts over again when I reboot the 3Node? + +No. The only uptime you lose is the time your 3Node was offline from the ThreeFold Grid. The ThreeFold Grid still has the data of your total uptime for the month. The Node Finder only shows this statistic as: "*This node has been up non-stop without being rebooted for now* [insert time]". If you maintain a total uptime above the minimum uptime, you're fine. + +*For now, the farming rewards are proportional to the total uptime. + + +### One of my nodes is showing the wrong location.
Any problem with that? +The ThreeFold Node Finder is showing your ISP location. This is perfectly normal. + + +## Memory + +### Can I use different types of RAM for the same 3Node? + +No. Always use the same type of RAM per 3Node. If you use RDIMM, go all RDIMM, etc. Check your hardware specifications to make sure you have the right type of memory. + + + +### How can I know if the memory I am buying is correct for my specific hardware? + +To be sure, look into the owner's manual of your specific computer. + +In general, you can go to [Memory.net](https://memory.net/) and look for your specific computer model. As general steps, select your computer's system in *By system*, then select the series and then select the specific model of the series. You will then see memory modules available to buy from memory.net. You can also simply read the documentation at the bottom. The memory type supported by your computer will be explained. Then you can buy the memory needed from any other computer store. + +For servers, you can check Cloud Ninjas' documentation [here](https://cloudninjas.com/pages/server-memory). Search for your specific hardware and look for the compatible memory. This reference is good for rack and tower servers. + + + +### What do the terms RDIMM, LDIMM, UDIMM, LRDIMM, FBDIMM mean when it comes to RAM memory sticks? + +First, DIMM means dual inline memory module. + +* U stands for unregistered (or unbuffered) memory. + +* R stands for registered memory. + +* LR stands for load-reduced memory. + +* FB stands for fully-buffered memory. + + + +### What is the difference between ECC and non-ECC memory? + +ECC means error correction code memory. This type of memory can detect and correct data corruption. Non-ECC memory mostly cannot detect or correct data corruption; some non-ECC memory can detect, but never correct, it. Check your hardware specifications to make sure you have the right type of memory (ECC or non-ECC). + + +### How can I change the RAM memory sticks on my 3Nodes?
How can I achieve dual channel configuration with sticks of RAM? + +First, always use RAM sticks of the same size and type. It should be noted on your motherboard which slots to populate first. As a general guide, there are usually 2 banks, A and B, each with 2 memory stick slots. You must then install the RAM sticks in A1 and B1 in order to achieve dual channel, then A2 and B2 if you have more (visual order: A1 A2 B1 B2). + +> Example: You want to start with your largest sticks, evenly distributed between both processors, and work your way down to your smallest. Let's take an example with 2 processors as well as 4x 16GB sticks and 4x 8GB sticks. The arrangement would be A1-16GB, B1-16GB, A2-16GB, B2-16GB, A3-8GB, B3-8GB, A4-8GB, B4-8GB. Avoid odd numbers as well. You optimally want pairs. So if you only have 5x 8GB sticks, only install 4 until you have an even 6. + + + +### What does RAM mean? + +RAM means random access memory. This type of memory can be read and changed in any order. + + + + +### What does DIMM mean when it comes to RAM sticks? + +It means *dual in-line memory module*. This type of computer memory is natively 64 bits, enabling fast data transfer. + + + +### I have 24 DIMM RAM slots on my server. Can I use them all? + +Be careful when installing memory on a server. Always check your server's documentation to make sure your RAM stick combination is correct. + +For example, on the Dell R720, you can have 24x 16GB ECC RAM sticks, but it can only handle 16 quad-ranked DIMMs. In this case, you can fill up all slots with registered DIMMs if you have a maximum of 4 quad-ranked DIMMs on each CPU.
+ +# Ask a Question to the ThreeFold Community + +If you have any questions, you can ask the ThreeFold community via the ThreeFold forum or the ThreeFold Telegram channels: + +* [ThreeFold Forum](https://forum.threefold.io/) +* [ThreeFold General TG Channel](https://t.me/threefold) +* [ThreeFold Farmer TG Channel](https://t.me/threefoldfarmers) +* [TF Grid Tester TG Channel](https://t.me/threefoldtesting) + +> Note 1: If we wrote something wrong, tell us! + +> Note 2: This is a collective effort. A big *Thank You* to the great ThreeFold Community. Many Q+A are contributions from the enthusiastic farmers, users and developers of the ever-growing ThreeFold community. \ No newline at end of file diff --git a/collections/faq/img/3nodes.png b/collections/faq/img/3nodes.png new file mode 100644 index 0000000..c7fe437 Binary files /dev/null and b/collections/faq/img/3nodes.png differ diff --git a/collections/faq/img/faq_img_readme.md b/collections/faq/img/faq_img_readme.md new file mode 100644 index 0000000..5c71297 --- /dev/null +++ b/collections/faq/img/faq_img_readme.md @@ -0,0 +1 @@ +# Image folder for the FAQ of the Threefold Manual 3.0 diff --git a/collections/faq/img/minting2022.png b/collections/faq/img/minting2022.png new file mode 100644 index 0000000..3335f69 Binary files /dev/null and b/collections/faq/img/minting2022.png differ diff --git a/collections/faq/img/tf_general.jpg b/collections/faq/img/tf_general.jpg new file mode 100644 index 0000000..9925a1d Binary files /dev/null and b/collections/faq/img/tf_general.jpg differ diff --git a/collections/faq/img/tf_grid.png b/collections/faq/img/tf_grid.png new file mode 100644 index 0000000..3996369 Binary files /dev/null and b/collections/faq/img/tf_grid.png differ diff --git a/collections/faq/img/tf_grid_3nodes.png b/collections/faq/img/tf_grid_3nodes.png new file mode 100644 index 0000000..49f5a98 Binary files /dev/null and b/collections/faq/img/tf_grid_3nodes.png differ diff --git
a/collections/faq/img/wethreepedia_developer.png b/collections/faq/img/wethreepedia_developer.png new file mode 100644 index 0000000..58a1967 Binary files /dev/null and b/collections/faq/img/wethreepedia_developer.png differ diff --git a/collections/faq/img/wethreepedia_faq_poster.jpg b/collections/faq/img/wethreepedia_faq_poster.jpg new file mode 100644 index 0000000..bf43918 Binary files /dev/null and b/collections/faq/img/wethreepedia_faq_poster.jpg differ diff --git a/collections/faq/img/wethreepedia_validator.jpg b/collections/faq/img/wethreepedia_validator.jpg new file mode 100644 index 0000000..f23a7cf Binary files /dev/null and b/collections/faq/img/wethreepedia_validator.jpg differ diff --git a/collections/farmers/.collection b/collections/farmers/.collection new file mode 100644 index 0000000..e69de29 diff --git a/collections/farmers/3node_building/1_create_farm.md b/collections/farmers/3node_building/1_create_farm.md new file mode 100644 index 0000000..460fc32 --- /dev/null +++ b/collections/farmers/3node_building/1_create_farm.md @@ -0,0 +1,88 @@ +

1. Create a Farm

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Create a TFChain Account](#create-a-tfchain-account) +- [Create a Farm](#create-a-farm) +- [Create a ThreeFold Connect Wallet](#create-a-threefold-connect-wallet) +- [Add a Stellar Address for Payout](#add-a-stellar-address-for-payout) + - [Farming Rewards Distribution](#farming-rewards-distribution) +- [More Information](#more-information) + +*** + +## Introduction + +We cover the basic steps to create a farm with the ThreeFold Dashboard. We also create a TFConnect app wallet to receive the farming rewards. + +## Create a TFChain Account + +We create a TFChain account using the ThreeFold Dashboard. + +Go to the [ThreeFold Dashboard](https://dashboard.grid.tf/), click on **Create Account**, choose a password and click **Connect**. + +![tfchain_create_account](./img/dashboard_tfchain_create_account.png) + +Once your profile gets activated, you should find your Twin ID and Address generated under your Mnemonics for verification. Also, your Account Balance will be available at the top right corner under your profile name. + +![tf_mnemonics](./img/dashboard_tf_mnemonics.png) + +## Create a Farm + +We create a farm using the dashboard. + +In the left-side menu, select **Farms** -> **Your Farms**. + +![your_farms](./img/dashboard_your_farms.png) + +Click on **Create Farm**, choose a farm name and then click **Create**. + +![create_farm](./img/dashboard_create_farm.png) + +![farm_name](./img/dashboard_farm_name.png) + +## Create a ThreeFold Connect Wallet + +Your farming rewards should be sent to a Stellar wallet with a TFT trustline enabled. The simplest way to proceed is to create a TF Connect app wallet as the TFT trustline is enabled by default on this wallet. For more information on TF Connect, read [this section](../../threefold_token/storing_tft/tf_connect_app.md). + +Let's create a TFConnect Wallet and take note of the wallet address. First, download the app. 
+ +This app is available for [Android](https://play.google.com/store/apps/details?id=org.jimber.threebotlogin&hl=en&gl=US) and [iOS](https://apps.apple.com/us/app/threefold-connect/id1459845885). + +- Note that for Android phones, you need at minimum the Android 8.0 software version. +- Note that for iOS phones, you need at minimum iOS 14.5. It will soon be available for iOS 13. + +Open the app, click **SIGN UP**, choose a ThreeFold Connect Id, write your email address, take note of the seed phrase and choose a pin. Once this is done, you will have to verify your email address. Check your email inbox. + +In the app menu, click on **Wallet** and then click on **Create Initial Wallet**. + +To find your wallet address, click on the **circled i** icon at the bottom of the screen. + +![dashboard_tfconnect_wallet_1](./img/dashboard_tfconnect_wallet_1.png) + +Click on the button next to your Stellar address to copy the address. + +![dashboard_tfconnect_wallet_2](./img/dashboard_tfconnect_wallet_2.png) + +You will need the TF Connect wallet address for the next section. + +> Note: Make sure to keep your TF Connect Id and seed phrase in a secure place offline. You will need these two components to recover your account if you lose access. + +## Add a Stellar Address for Payout + +In the **Your Farms** section of the dashboard, click on **Add/Edit Stellar Payout Address**. + +![dashboard_walletaddress_1](./img/dashboard_walletaddress_1.png) + +Paste your Stellar wallet address and click **Submit**. + +![dashboard_walletaddress_2](./img/dashboard_walletaddress_2.png) + +### Farming Rewards Distribution + +Farming rewards will be sent to your farming wallet around the 8th of each month. This can vary depending on the situation. The minting is done automatically by code and verified by humans as a double check. + +## More Information + +For more information, such as setting IP addresses, you can consult the [Dashboard Farms section](../../dashboard/farms/farms.md).
\ No newline at end of file diff --git a/collections/farmers/3node_building/2_bootstrap_image.md b/collections/farmers/3node_building/2_bootstrap_image.md new file mode 100644 index 0000000..9234242 --- /dev/null +++ b/collections/farmers/3node_building/2_bootstrap_image.md @@ -0,0 +1,177 @@ +

2. Create a Zero-OS Bootstrap Image

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Download the Zero-OS Bootstrap Image](#download-the-zero-os-bootstrap-image) +- [Burn the Zero-OS Bootstrap Image](#burn-the-zero-os-bootstrap-image) + - [CD/DVD BIOS](#cddvd-bios) + - [USB Key BIOS+UEFI](#usb-key-biosuefi) + - [BalenaEtcher (MAC, Linux, Windows)](#balenaetcher-mac-linux-windows) + - [CLI (Linux)](#cli-linux) + - [Rufus (Windows)](#rufus-windows) +- [Additional Information (Optional)](#additional-information-optional) + - [Expert Mode](#expert-mode) + - [Use a Specific Kernel](#use-a-specific-kernel) + - [Disable GPU](#disable-gpu) + - [Bootstrap Image URL](#bootstrap-image-url) + - [Zeros-OS Bootstrapping](#zeros-os-bootstrapping) + - [Zeros-OS Expert Bootstrap](#zeros-os-expert-bootstrap) + +*** + +## Introduction + +We will now learn how to create a Zero-OS bootstrap image in order to boot a DIY 3Node. + +## Download the Zero-OS Bootstrap Image + +Let's download the Zero-OS bootstrap image. + +In the Farms section of the Dashboard, click on **Bootstrap Node Image** + +![dashboard_bootstrap_farm](./img/dashboard_bootstrap_farm.png) + +or use the direct link [https://v3.bootstrap.grid.tf](https://v3.bootstrap.grid.tf): + +``` +https://v3.bootstrap.grid.tf +``` + +![Farming_Create_Farm_21](./img/farming_createfarm_21.png) + +This is the Zero-OS v3 Bootstrapping page. + +![Farming_Create_Farm_22](./img/farming_createfarm_22.png) + +Write your farm ID and choose production mode. + +![Farming_Create_Farm_23](./img/farming_createfarm_23.png) + +If your system is new, you might be able to run the bootstrap in UEFI mode. + +![Farming_Create_Farm_24](./img/farming_createfarm_24.png) + +For older systems, run the bootstrap in BIOS mode. For BIOS CD/DVD, choose **ISO**. For BIOS USB, choose **USB** + +Download the bootstrap image. Next, we will burn the bootstrap image. + + + +## Burn the Zero-OS Bootstrap Image + +We show how to burn the Zero-OS bootstrap image. 
A quick and modern way is to burn the bootstrap image on a USB key. + +### CD/DVD BIOS + +For the BIOS **ISO** image, download the file and burn it on a DVD. + +### USB Key BIOS+UEFI + +There are many ways to burn the bootstrap image on a USB key. The easiest way that works for all operating systems is to use BalenaEtcher. We also provide other methods. + +#### BalenaEtcher (MAC, Linux, Windows) + +For **MAC**, **Linux** and **Windows**, you can use [BalenaEtcher](https://www.balena.io/etcher/) to load/flash the image on a USB stick. This program also formats the USB in the process. This will work for the option **EFI IMG** for UEFI boot, and with the option **USB** for BIOS boot. Simply follow the steps presented to you and make sure you select the bootstrap image file you downloaded previously. + +> Note: There are alternatives to BalenaEtcher (e.g. [usbimager](https://gitlab.com/bztsrc/usbimager/)). + +**General Steps with BalenaEtcher:** + +1. Download BalenaEtcher +2. Open BalenaEtcher +3. Select **Flash from file** +4. Find and select the bootstrap image (with your correct farm ID) +5. Select **Target** (your USB key) +6. Select **Flash** + +That's it. You now have the Zero-OS bootstrap image on a bootable removable media device. + + +#### CLI (Linux) + +For the BIOS **USB** and the UEFI **EFI IMG** images, you can do the following on Linux: + +    sudo dd status=progress if=FILELOCATION.ISO(or .IMG) of=/dev/sdX + +Here, replace FILELOCATION with the path of the downloaded .ISO (or .IMG) file, and replace the X in /dev/sdX with the letter corresponding to your USB disk. To see your disks, write lsblk in the command window. Make sure you select the proper disk! + +*If your USB key is not new, make sure that you format it before burning the Zero-OS image. + +#### Rufus (Windows) + +For Windows, if you are using the "dd" able image, instead of using the command line, you can use the free USB flashing program called [Rufus](https://sourceforge.net/projects/rufus.mirror/), which will do this automatically.
Rufus also formats the boot media in the process. + +## Additional Information (Optional) + +We cover some additional information. Note that the following information is not needed for a basic farm setup. + +### Expert Mode + +You can use the [expert mode](https://v3.bootstrap.grid.tf/expert) to generate specific Zero-OS bootstrap images. + +Alongside the basic options of the normal bootstrap mode, the expert mode allows farmers to add extra kernel arguments and decide which kernel to use from a vast list of Zero-OS kernels. + +#### Use a Specific Kernel + +You can use the expert mode to choose a specific kernel. Simply set the information you normally use and then select the proper kernel you need in the **Kernel** drop-down list. + +![](./img/bootstrap_kernel_list.png) + +#### Disable GPU + +You can use the expert mode to disable GPU on your 3Node. + +![](./img/bootstrap_disable-gpu.png) + +In the expert mode of the Zero-OS Bootstrap generator, fill in the following information: + +- Farmer ID + - Your current farm ID +- Network + - The network of your farm +- Extra kernel arguments + - ``` + disable-gpu + ``` +- Kernel + - Leave the default kernel +- Format + - Choose a bootstrap image format +- Click on **Generate** +- Click on **Download** + +### Bootstrap Image URL + +In both normal and expert mode, you can use the generated URL to quickly download a Zero-OS bootstrap image based on your farm's specific setup. + +Using URLs can be a very quick and efficient way to create new bootstrap images once you're familiar with the Zero-OS bootstrap URL template and some potential variations. + +``` +https://<version>.bootstrap.grid.tf/<format>/<network>/<farm ID>/<argument 1>/<argument 2>/.../<kernel> +``` + +Note that the arguments and the kernel are optional. + +The following content will provide some examples. + +#### Zeros-OS Bootstrapping + +On the [main page](https://v3.bootstrap.grid.tf/), once you've written your farm ID and selected a network, you can copy the generated URL of any given image format.
+ +For example, the following URL is a download link to an **EFI IMG** of the Zero-OS bootstrap image of farm 1 on the main TFGrid v3 network: + +``` +https://v3.bootstrap.grid.tf/uefimg/prod/1 +``` + +#### Zeros-OS Expert Bootstrap + +You can use the generated sublink at the **Generate step** of the expert mode to get a quick URL to download your bootstrap image. + +- After setting the parameters and arguments, click on **Generate** +- Add the **Target** content to the following URL `https://v3.bootstrap.grid.tf` + - For example, the following URL sets an **ipxe** script of the Zero-OS bootstrap of farm 1 on the main TFGrid v3 network, with the **disable-gpu** function enabled as an extra kernel argument and a specific kernel: + - ``` + https://v3.bootstrap.grid.tf/ipxe/test/1/disable-gpu/zero-os-development-zos-v3-generic-b8706d390d.efi + ``` \ No newline at end of file diff --git a/collections/farmers/3node_building/3_set_hardware.md b/collections/farmers/3node_building/3_set_hardware.md new file mode 100644 index 0000000..6f053dd --- /dev/null +++ b/collections/farmers/3node_building/3_set_hardware.md @@ -0,0 +1,188 @@ +

3. Set the Hardware

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Hardware Requirements](#hardware-requirements) + - [3Node Requirements Summary](#3node-requirements-summary) +- [Bandwidth Requirements](#bandwidth-requirements) +- [Link to Share Farming Setup](#link-to-share-farming-setup) +- [Powering the 3Node](#powering-the-3node) + - [Surge Protector](#surge-protector) + - [Power Distribution Unit (PDU)](#power-distribution-unit-pdu) + - [Uninterrupted Power Supply (UPS)](#uninterrupted-power-supply-ups) + - [Generator](#generator) +- [Connecting the 3Node to the Internet](#connecting-the-3node-to-the-internet) + - [Z-OS and Switches](#z-os-and-switches) +- [Using Onboard Storage (3Node Servers)](#using-onboard-storage-3node-servers) +- [Upgrading a DIY 3Node](#upgrading-a-diy-3node) + +*** + + +## Introduction + +In this section of the ThreeFold Farmers book, we cover the essential farming requirements when it comes to ThreeFold 3Node hardware. + +The essential information is available in the section [3Node Requirements Summary](#3node-requirements-summary). + +## Hardware Requirements + + +You need a theoretical minimum of 500 GB of SSD and 2 GB of RAM on a mini PC, desktop or server. In short, for peak optimization, aim for 100 GB of SSD and 8 GB of RAM per thread. (A thread is equivalent to a virtual core or logical core.) + +Also, the TFDAO might implement a farming parameter based on [passmark](https://www.cpubenchmark.net/cpu_list.php). From the ongoing discussion on the Forum, you should aim at a CPU mark of 1000 and above per core. + +> 3Node optimal farming hardware ratio -> 100 GB of SSD + 8 GB of RAM per Virtual Core + +Note that you can run Zero-OS on a Virtual Machine (VM), but you won't farm any TFT from this process. To farm TFT, Zero-OS needs to be on bare metal. + +Also, note that ThreeFold runs its own OS, which is Zero-OS. You thus need to start with completely wiped disks. You cannot farm TFT with Windows, Linux or macOS installed on your disks.
If you need to use such an OS temporarily, boot it in Try mode with removable media (USB key). + +Note: Once you have the necessary hardware, you need to [create a farm](./1_create_farm.md), [create a Zero-OS bootstrap image](./2_bootstrap_image.md), [wipe your disks](./4_wipe_all_disks.md) and [set the BIOS/UEFI](./5_set_bios_uefi.md). Then you can [boot your 3Node](./6_boot_3node.md). If you are planning on building a farm in a data center, [read this section](../advanced_networking/advanced_networking_toc.md). + + + +### 3Node Requirements Summary + + + +Any computer with the following specifications can be used as a DIY 3Node. + +- Any 64-bit hardware with an Intel or AMD processor chip. +- Server, desktop and mini computer type hardware is compatible. +- A minimum of 500 GB of SSD and a bare minimum of 2 GB of RAM is required. +- A ratio of 100 GB of SSD and 8 GB of RAM per thread is recommended. +- A wired ethernet connection is highly recommended to maximize reliability and the ability to farm TFT. +- A [passmark](https://www.passmark.com/) of 1000 per core is recommended and will probably be a minimum requirement in the future. + +*A passmark of 1000 per core is recommended and will be a minimum requirement in the future. This is not yet an official requirement. A 3Node with less than 1000 passmark per core of CPU would not be penalized if it is registered before the DAO settles the [Passmark Question](https://forum.threefold.io/t/cpu-benchmarking-for-reward-calculations/2479). + + + +## Bandwidth Requirements + + + +A 3Node connects to the ThreeFold Grid and transfers information, whether it is in the form of compute, storage or network units (CU, SU and NU respectively). The more resources your 3Nodes offer to the Grid, the more bandwidth will be needed to transfer the additional information. In this section, we cover general guidelines to make sure you have enough bandwidth on the ThreeFold Grid when utilization happens.
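The bandwidth guideline used in this section can also be written as a small helper (an illustrative sketch only — the function name is ours, and the exact parameters may evolve as the TFDAO settles clearer guidelines):

```python
# Illustrative sketch of the minimum-bandwidth guideline:
# min mbps = 10 * max(SSD TB / 1, threads / 8, RAM GB / 64) + 10 * (HDD TB / 2)
# The helper name and signature are hypothetical, not an official tool.

def min_bandwidth_mbps(ssd_tb, threads, ram_gb, hdd_tb=0):
    """Recommended minimum bandwidth (mbps) for a single 3Node."""
    return 10 * max(ssd_tb / 1, threads / 8, ram_gb / 64) + 10 * (hdd_tb / 2)

# A Titan-like node (1 TB SSD, 8 threads, 64 GB RAM, no HDD):
print(min_bandwidth_mbps(1, 8, 64))  # 10.0
```

For example, a larger 3Node with 2 TB of SSD, 32 threads, 256 GB of RAM and 4 TB of HDD would call for 10 · max(2, 4, 4) + 10 · 2 = 60 mbps.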
+ +Note that the TFDAO will need to discuss and settle on clearer guidelines in the near future. For now, we propose these general guidelines. Being aware of these numbers as you build and scale your ThreeFold farm will set you in the proper direction. + +> **The strict minimum for one Titan is 1 mbps of bandwidth**. + +If you want to expand your ThreeFold farm, you should check the following to make sure your bandwidth will be sufficient when Grid utilization happens. + +**Bandwidth per 3Node Equation** + +> min Bandwidth per 3Node (mbps) = 10 * max((Total SSD TB / 1 TB),(Total Threads / 8 Threads),(Total GB / 64 GB)) + 10 * (Total HDD TB / 2) + +This equation means that for each TB of HDD you need 5 mbps of bandwidth, and for each TB of SSD, 8 threads or 64 GB of RAM (whichever ratio is higher), you need 10 mbps of bandwidth. + +This means a proper bandwidth for a Titan would be 10 mbps. As stated, 1 mbps is the strict minimum for one Titan. + + + +## Link to Share Farming Setup + + +If you want ideas and suggestions when it comes to building DIY 3Nodes, a good place to start is by checking what other farmers have built. [This post on the Forum](https://forum.threefold.io/t/lets-share-our-farming-setup/286) is a great start. The following section also contains great DIY 3Node ideas. + +## Powering the 3Node + +### Surge Protector + +A surge protector is highly recommended for your farm and your 3Nodes. It ensures your 3Nodes are protected if a power surge happens. Whole-house surge protectors are also an option. + +### Power Distribution Unit (PDU) + +A PDU (power distribution unit) is useful in big server settings in order to manage your wattage and keep track of your power consumption. + + +### Uninterrupted Power Supply (UPS) + + +A UPS (uninterrupted power supply) is great for a 3Node if your power goes on and off frequently for short periods of time. This ensures your 3Node does not need to constantly reboot.
If your electricity provider is very reliable, a UPS might not be needed, as the small downtime resulting from rare power outages will not exceed the DIY downtime limit*. (95% uptime, 5% downtime = 36 hours per month.) Of course, for a better Grid utilization experience, adding a UPS to your ThreeFold farm can be highly beneficial. + +Note: Make sure to have AC Power Recovery set properly so your 3Node goes back online if power shuts down momentarily. UPSes are generally used in data centers to make sure people have enough time to do a "graceful" shutdown of the units when power goes off. In the case of 3Nodes, they do not need graceful shutdowns as Zero-OS cannot lose data while functioning. The only way to power down a 3Node is simply to turn it off directly on the machine. + + +### Generator + + +A generator will be needed for very large installations, or where the main power supply is unsteady. + + + +## Connecting the 3Node to the Internet + +As a general consideration, to connect a 3Node to the Internet, you must use an Ethernet cable and set DHCP as the network management protocol. Note that WiFi is not supported for ThreeFold farming. + +The general route from the 3Node to the Internet is the following: + +> 3Node -> Switch (optional) -> Router -> Modem + +Note that most home routers come with a built-in switch to provide multiple Ethernet ports. Using a stand-alone switch is optional, but it can come in quite handy when farmers have many 3Nodes. + + + +### Z-OS and Switches + +Switches can be managed or unmanaged. Managed switches come with managed features made available to the user (typically more of such features on premium models). + +Z-OS can work with both types of switches. As long as there's a router reachable on the other end offering DHCP and a route to the public internet, it's not important what's in between. Generally speaking, switches are more like cables, just part of the pipes that connect devices in a network.
+ +We present a general overview of the two types of switches. + +**Unmanaged Switches** + +Unmanaged switches are the most common type, and if someone just says "switch", this is probably what they mean. These switches just forward traffic along to its destination in a plug-and-play manner with no configuration. When a switch is connected to a router, you can think of the additional free ports on the switch as essentially just extra ports on the router. It's a way to expand the available ports and sometimes also avoid running multiple long cables. For example, if your nodes are far from your router, you can run a single long Ethernet cable to a switch next to the nodes and then use multiple shorter cables to connect from the switch to the nodes. + +**Managed Switches** + +Managed switches have more capabilities than unmanaged switches and they are not very common in home settings (at least not as standalone units). Some of our farmers do use managed switches. These switches offer much more control and also require configuration. They can enable advanced features like virtual LANs to segment the network. + + + +## Using Onboard Storage (3Node Servers) + +If your 3Node is based on a server, you can either use PCIe slots and a PCIe-to-NVMe adapter to install an NVMe SSD, or you can use the onboard storage. + +Usually, servers use RAID technology for onboard storage. RAID is a technology that has brought resilience and security to the IT industry. But it has some limitations that ThreeFold did not want to get stuck with. ThreeFold developed a different and more efficient way to [store data reliably](https://library.threefold.me/info/threefold#/cloud/threefold__cloud_products?id=storage-quantum-safe-filesystem). This Quantum Safe Storage overcomes some of the shortfalls of RAID and is able to work over multiple nodes geographically spread on the TF Grid. This means that there is no RAID controller in between data storage and the TF Grid. 
+ +For your 3Nodes, you want to bypass RAID so that Zero-OS has bare-metal access to the disks. + +To use onboard storage on a server without RAID, you can: + +1. [Re-flash](https://fohdeesha.com/docs/perc.html) the RAID card +2. Turn on HBA/non-RAID mode +3. Install a different card + +For HP servers, you simply turn on the HBA mode (Host Bus Adapter). + +For Dell servers, you can either cross-flash or [re-flash](https://fohdeesha.com/docs/perc.html) the RAID controller with an “IT-mode” firmware (see this [video](https://www.youtube.com/watch?v=h5nb09VksYw)), or get a DELL H310 controller (which has the non-RAID option). Otherwise, you can install an NVMe SSD with a PCIe adapter and turn off the RAID controller. + + + +Once the disks are wiped, you can shut down your 3Node and remove the Linux Bootstrap Image (USB key). Usually, there will be a message telling you when to do so. + + + +## Upgrading a DIY 3Node + + + +As we've seen in the [List of Common DIY 3Nodes](#list-of-common-diy-3nodes), it is sometimes necessary, and often useful, to upgrade your hardware. + +**Type of upgrades possible** + +- Add TBs of SSD/HDD +- Add RAM +- Change CPU +- Change BIOS battery +- Change fans + +For some DIY 3Nodes, no upgrades are required; this constitutes a good start if you want to explore DIY building without too many additional steps. + +For in-depth videos on how to upgrade mini PCs and rack servers, watch these great [DIY videos](https://www.youtube.com/user/floridanelson). \ No newline at end of file diff --git a/collections/farmers/3node_building/3node_building.md b/collections/farmers/3node_building/3node_building.md new file mode 100644 index 0000000..8f26b69 --- /dev/null +++ b/collections/farmers/3node_building/3node_building.md @@ -0,0 +1,14 @@ +

# Building a DIY 3Node

+ +This section of the ThreeFold Farmers book presents the necessary and basic steps to build a DIY 3Node. + +For advanced farming information, such as GPU farming and room parameters, refer to the section [Farming Optimization](../farming_optimization/farming_optimization.md). + +

## Table of Contents

+ +- [1. Create a Farm](./1_create_farm.md) +- [2. Create a Zero-OS Bootstrap Image](./2_bootstrap_image.md) +- [3. Set the Hardware](./3_set_hardware.md) +- [4. Wipe All the Disks](./4_wipe_all_disks.md) +- [5. Set the BIOS/UEFI](./5_set_bios_uefi.md) +- [6. Boot the 3Node](./6_boot_3node.md) \ No newline at end of file diff --git a/collections/farmers/3node_building/4_wipe_all_disks.md b/collections/farmers/3node_building/4_wipe_all_disks.md new file mode 100644 index 0000000..4e252a5 --- /dev/null +++ b/collections/farmers/3node_building/4_wipe_all_disks.md @@ -0,0 +1,106 @@ +

# 4. Wipe All the Disks

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Main Steps](#main-steps) +- [1. Create a Linux Bootstrap Image](#1-create-a-linux-bootstrap-image) +- [2. Boot Linux in *Try Mode*](#2-boot-linux-in-try-mode) +- [3. Use wipefs to Wipe All the Disks](#3-use-wipefs-to-wipe-all-the-disks) +- [Troubleshooting](#troubleshooting) + +*** + +## Introduction + +In this section of the ThreeFold Farmers book, we explain how to wipe all the disks of your 3Node. + + + +## Main Steps + +It only takes a few steps to wipe all the disks of a 3Node. + +1. Create a Linux Bootstrap Image +2. Boot Linux in *Try Mode* +3. Wipe All the Disks + +ThreeFold runs its own OS, which is Zero-OS. You thus need to start with completely wiped disks. Note that ALL disks must be wiped. Otherwise, Zero-OS won't boot. + +An easy method is to simply download a Linux distribution and wipe the disks with the proper commands in the terminal. + +We will show how to do this with Ubuntu 20.04 LTS. This distribution is easy to use and is thus a good introduction to Linux, in case you haven't yet explored this great operating system. + + + +## 1. Create a Linux Bootstrap Image + +Download the Ubuntu 20.04 ISO file [here](https://releases.ubuntu.com/20.04/) and burn the ISO image onto a USB key. Make sure you have enough space on your USB key. You can also use another Linux distro, such as [GRML](https://grml.org/download/), if you want a lighter ISO image. + +The process here is the same as in section [Burning the Bootstrap Image](./2_bootstrap_image.md#burn-the-zero-os-bootstrap-image), but with the Linux ISO instead of the Zero-OS ISO. [BalenaEtcher](https://www.balena.io/etcher/) is recommended as it formats your USB key in the process, and it is available for macOS, Windows and Linux. + + + +## 2. Boot Linux in *Try Mode* + +When you boot the Linux ISO image, make sure to choose *Try Mode*. Otherwise, it will install Linux on your computer. You do not want this. + + + +## 3. 
Use wipefs to Wipe All the Disks + +When you use wipefs, you are removing all the data on your disk. Make sure you have no important data on your disks, or make sure you have copies of your disks before doing this operation, if needed. + +Once Linux is booted, go into the terminal and run the following commands. + +First, you can check the available disks by writing in a terminal or in a shell: + +``` +lsblk +``` + +To see which disks are connected, run this command: + +``` +sudo fdisk -l +``` + +If you want to wipe one specific disk, here we use *sda* as an example, run this command: + +``` +sudo wipefs -a /dev/sda +``` + +And replace the "a" in sda with the letter of your disk, as shown when you did *lsblk*. The term *sudo* gives you the correct permission to do this. + +To wipe all the disks in your 3Node, run this command: + +``` +for i in /dev/sd*; do sudo wipefs -a $i; done +``` + +Note that *sudo* must apply to *wipefs* inside the loop; prefixing the whole *for* loop with *sudo* does not work. + +If you have any `fdisk` entries that look like `/dev/nvme`, you'll need to adjust the command line. + +For an NVMe disk, here we use *nvme0n1* as an example, run: + +``` +sudo wipefs -a /dev/nvme0n1 +``` + +And replace the "0" in nvme0n1 with the number corresponding to your disk, as shown when you did *lsblk*. + +To wipe all the NVMe disks, run this command line: + +``` +for i in /dev/nvme*n*; do sudo wipefs -a $i; done +``` + +## Troubleshooting + +If you're having issues wiping the disks, you might need to use **--force** or **-f** with wipefs (e.g. **sudo wipefs -af /dev/sda**). + +If you're having trouble getting your disks recognized by Zero-OS, some farmers have had success enabling AHCI mode for SATA in their BIOS. + +If you are using a server with onboard storage, you might need to [re-flash the RAID card](../../faq/faq.md#is-there-a-way-to-bypass-raid-in-order-for-zero-os-to-have-bare-metals-on-the-system-no-raid-controller-in-between-storage-and-the-grid). 
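If you want to rehearse the wipefs workflow without touching real disks, you can practice on a throwaway image file; wipefs operates on regular files as well as block devices, so no root access or real disk is involved. The file name and size below are arbitrary, and `mkfs.ext4` (from e2fsprogs) is assumed to be installed:

```
# Create a 50 MB image file and give it an ext4 filesystem signature
truncate -s 50M /tmp/practice-disk.img
mkfs.ext4 -q -F /tmp/practice-disk.img

# With no options, wipefs only lists the signatures it finds
wipefs /tmp/practice-disk.img

# Erase all signatures, then list again: no output means "clean"
wipefs -a /tmp/practice-disk.img
wipefs /tmp/practice-disk.img

# Clean up
rm /tmp/practice-disk.img
```

The same list-wipe-verify cycle applies to real disks, with `sudo` and `/dev/...` paths in place of the image file.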
+ + diff --git a/collections/farmers/3node_building/5_set_bios_uefi.md b/collections/farmers/3node_building/5_set_bios_uefi.md new file mode 100644 index 0000000..4c6c3e9 --- /dev/null +++ b/collections/farmers/3node_building/5_set_bios_uefi.md @@ -0,0 +1,172 @@ +

# 5. Set the BIOS/UEFI

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Z-OS and DHCP](#z-os-and-dhcp) + - [Regular Computer and 3Node Network Differences](#regular-computer-and-3node-network-differences) + - [Static IP Addresses](#static-ip-addresses) +- [The Essential Features of BIOS/UEFI for a 3Node](#the-essential-features-of-biosuefi-for-a-3node) +- [Setting the Remote Management of a Server with a Static IP Address (Optional)](#setting-the-remote-management-of-a-server-with-a-static-ip-address-optional) +- [Update the BIOS/UEFI firmware (Optional)](#update-the-biosuefi-firmware-optional) + - [Check the BIOS/UEFI version on Windows](#check-the-biosuefi-version-on-windows) + - [Check the BIOS/UEFI version on Linux](#check-the-biosuefi-version-on-linux) + - [Update the BIOS firmware](#update-the-bios-firmware) +- [Additional Information](#additional-information) + - [BIOS/UEFI and Zero-OS Bootstrap Image Combinations](#biosuefi-and-zero-os-bootstrap-image-combinations) + - [Troubleshoot](#troubleshoot) + + +*** + +## Introduction + +In this section of the ThreeFold Farmers book, we explain how to properly set the BIOS/UEFI of your 3Node. + +Note that when it comes to properly booting Zero-OS on your DIY 3Node, the BIOS mode is usually needed for older hardware, while the UEFI mode is usually needed for newer hardware. + +If in doubt, start with UEFI, and if it doesn't work as expected, try BIOS. + +Before diving into the BIOS/UEFI settings, we will present some general considerations on Z-OS and DHCP. + +## Z-OS and DHCP + +The operating system running on the 3Nodes is called Zero-OS (Z-OS). When it comes to setting the proper network for your 3Node farm, you must use DHCP, since Z-OS is going to request an IP from the DHCP server if there's one present, and it won't get network connectivity if there's no DHCP. + +The Z-OS philosophy is to minimize configuration wherever possible, so there's nowhere to supply a static config when setting your 3Node network. 
Instead, the farmer is expected to provide DHCP. + +While it is possible to set fixed IP addresses with the DHCP for the 3Nodes, it is recommended to avoid this and just set the DHCP normally, without fixed IP addresses. + +By setting DHCP in BIOS/UEFI, an IP address is automatically assigned by your router to your 3Node every time you boot it. + +### Regular Computer and 3Node Network Differences + +For a regular computer (not a 3Node), if you want to use a static IP in a network with DHCP, you'd first turn off DHCP and then set the static IP to an IP address outside the DHCP range. That being said, with Z-OS, there's no option to turn off DHCP and there's nowhere to set a static IP, besides public config and remote management. In brief, the farmer must provide DHCP, either on a private or a public range, for the 3Node to boot. + +### Static IP Addresses + +In the ThreeFold ecosystem, there are only two situations where you would work with static IP addresses: to set a public config to a 3Node or a farm, and to remotely manage your 3Nodes. + +**Static IP and Public Config** + +You can [set a static IP for the public config of a 3Node or a farm](./1_create_farm.md#optional-add-public-ip-addresses). In this case, the 3Node takes information from TF Chain and uses it to set a static configuration on a NIC (or on a virtual NIC in the case of single NIC systems). + +**Static IP and Remote Management** + +You can [set a static IP address to remotely manage a 3Node](#setting-the-remote-management-of-a-server-with-a-static-ip-address-optional). + + + +## The Essential Features of BIOS/UEFI for a 3Node + +There are certain things that you should make sure are set properly on your 3Node. + +As general advice, you can Load Defaults (Settings) on your BIOS, then make sure the options below are set properly. 
+ +* Choose the correct combination of BIOS/UEFI and bootstrap image on [https://bootstrap.grid.tf/](https://bootstrap.grid.tf/) + * Newer systems will use UEFI + * Older systems will use BIOS + * Hint: If your 3Node boot stops at *Initializing Network Devices*, try the other method (BIOS or UEFI) +* Set Multi-Processor and Hyperthreading to Enabled + * Sometimes, these are labeled Virtual Cores or Logical Cores. +* Set Virtualization to Enabled + * On Intel, it is denoted as CPU Virtualization; on ASUS (AMD) boards, it is denoted as SVM. + * Make sure virtualization is enabled and look for the precise terms in your specific BIOS/UEFI. +* Set AC Recovery to Last Power State + * This will make sure your 3Node restarts after losing power momentarily. +* Select the proper Boot Sequence for the 3Node to boot Zero-OS from your bootstrap image + * e.g., if you have a USB key as a bootstrap image, select it in Boot Sequence +* Set Server Lookup Method (or the equivalent) to DNS. Only use Static IP if you know what you are doing. + * Your router will assign a dynamic IP address to your 3Node when it connects to the Internet. +* Set Client Address Method (or the equivalent) to DHCP. Only use Static IP if you know what you are doing. + * Your router will assign a dynamic IP address to your 3Node when it connects to the Internet. +* Leave Secure Boot disabled + * Enable it only if you know what you are doing. + + + + +## Setting the Remote Management of a Server with a Static IP Address (Optional) + + +Note from the list above that by enabling DHCP and DNS in BIOS, dynamic IP addresses will be assigned to 3Nodes. This way, you do not need any specific port configuration when booting a 3Node. + +As long as the 3Node is connected to the Internet via an Ethernet cable (WiFi is not supported), Zero-OS will be able to boot. By setting DHCP in BIOS, an IP address is automatically assigned to your 3Node every time you boot it. 
This section concerns 3Node servers with remote management functions and interfaces. + +You can set up a node through static routing at the router without DHCP by assigning the MAC address of the NIC to an IP address within your private subnet. This will give a static IP address to your 3Node. + +With a static IP address, you can then configure remote management on servers. For Dell, [iDRAC](https://www.dell.com/support/kbdoc/en-us/000134243/how-to-setup-and-manage-your-idrac-or-cmc-for-dell-poweredge-servers-and-blades) is used, and for HP, [ILO](https://support.hpe.com/hpesc/public/docDisplay?docId=a00045463en_us&docLocale=en_US) is used. + + + +## Update the BIOS/UEFI firmware (Optional) + + +Updating the BIOS firmware is not always necessary, but doing so can help prevent future errors and troubleshooting. Making sure the Date and Time are set correctly can also help the booting process. + +Note: updating the BIOS/UEFI firmware is optional, but recommended. + + +### Check the BIOS/UEFI version on Windows + +Hit *Start*, type *cmd* in the search box and click on *Command Prompt*. Run the line + +> wmic bios get smbiosbiosversion + +This will give you the BIOS or UEFI firmware of your PC. + +### Check the BIOS/UEFI version on Linux + +Simply type the following command + +> sudo dmidecode | less + +or this line: + +> sudo dmidecode -s bios-version + +### Update the BIOS firmware + +1. On the manufacturer's website, download the latest BIOS/UEFI firmware +2. Put the file on a USB flash drive (unzip it if necessary) +3. Restart your hardware and enter the BIOS/UEFI settings +4. Navigate the menus to update the BIOS/UEFI + +## Additional Information + +### BIOS/UEFI and Zero-OS Bootstrap Image Combinations + +To properly boot the Zero-OS image, you can use either an image made for a BIOS system or one made for a UEFI system, depending on your system. + +BIOS is older technology. It means *Basic Input/Output System*. + +UEFI is newer technology. 
It means *Unified Extensible Firmware Interface*. BIOS/UEFI is, in a way, the link between the hardware and the software of your computer. + +In general, setting up a 3Node is similar whether it is with a BIOS or a UEFI system. What's important is to choose the correct combination of boot media and boot mode (BIOS/UEFI). + +The bootstrap images are available [here](https://bootstrap.grid.tf/). + +The choices are: + +1. EFI IMG - UEFI +2. EFI FILE - UEFI +3. iPXE - Boot from network +4. ISO - BIOS +5. USB - BIOS +6. LKRN - Boot from network + +Choices 1 and 2 are for UEFI (newer models). +Choices 4 and 5 are for BIOS (older models). +Choices 3 and 6 are mainly for network boot. + +Refer to [this previous section](./2_bootstrap_image.md) for more information on creating a Zero-OS bootstrap image. + +For information on how to boot Zero-OS with iPXE, read [this section](./6_boot_3node.md#advanced-booting-methods-optional). + +### Troubleshoot + +You might have to try UEFI first and, if it doesn't work, try BIOS. Usually when this is the case (UEFI doesn't work with your current computer), the following message will be shown: + +> Initializing Network Devices... + +And then... nothing. This means that you are still in the BIOS of the hardware and the boot has not even started yet. When this happens, try the BIOS mode of your computer. \ No newline at end of file diff --git a/collections/farmers/3node_building/6_boot_3node.md b/collections/farmers/3node_building/6_boot_3node.md new file mode 100644 index 0000000..404e2e9 --- /dev/null +++ b/collections/farmers/3node_building/6_boot_3node.md @@ -0,0 +1,169 @@ +

# 6. Boot the 3Node

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [1. Booting the 3Node with Zero-OS](#1-booting-the-3node-with-zero-os) +- [2. Check the 3Node Status Online](#2-check-the-3node-status-online) +- [3. Receive the Farming Rewards](#3-receive-the-farming-rewards) +- [Advanced Booting Methods (Optional)](#advanced-booting-methods-optional) + - [PXE Booting with OPNsense](#pxe-booting-with-opnsense) + - [PXE Booting with pfSense](#pxe-booting-with-pfsense) +- [Booting Issues](#booting-issues) + - [Multiple nodes can run with the same node ID](#multiple-nodes-can-run-with-the-same-node-id) + +*** + + +## Introduction + +We explain how to boot the 3Node with the Zero-OS bootstrap image using a USB key. We also include optional advanced booting methods using OPNsense and pfSense. + +One of the great features of Zero-OS is that it can be run completely within the cache of your 3Node. Indeed, the booting device that contains your farm ID will connect to the ThreeFold Grid and download everything needed to run smoothly. This comes with many benefits in terms of security and data protection. + +## 1. Booting the 3Node with Zero-OS + +To boot Zero-OS, insert your Zero-OS bootstrap image USB key, power on your computer and choose the right booting sequence and parameters ([BIOS or UEFI](./5_set_bios_uefi.md)) in your BIOS/UEFI settings. Then, restart the 3Node. Zero-OS should boot automatically. + +Note that you need an Ethernet cable connected to your router or switch. You cannot farm on the ThreeFold Grid with WiFi. + +The first time you boot a 3Node, the following message will be displayed: “This node is not registered (farmer: NameOfFarm)”. This is normal. The Grid will create a node ID and you will be able to see it on screen. This can take a couple of minutes. + +If an hour or more passes and the node does not get registered, [wiping the disks](./4_wipe_all_disks.md) all over again and trying another reboot usually resolves this issue. 
+ +Once you have your node ID, you can also go on the ThreeFold Dashboard to see your 3Node and verify that your 3Node is online. + +## 2. Check the 3Node Status Online + +You can use the ThreeFold [Node Finder](../../dashboard/deploy/node_finder.md) to verify that your 3Node is online. + +* [ThreeFold Main Net Dashboard](https://dashboard.grid.tf/) +* [ThreeFold Test Net Dashboard](https://dashboard.test.grid.tf/) +* [ThreeFold Dev Net Dashboard](https://dashboard.dev.grid.tf/) +* [ThreeFold QA Net Dashboard](https://dashboard.qa.grid.tf/) + + +## 3. Receive the Farming Rewards + +The farming reward will be sent once per month to the address you gave when you set up your farm. You can review this process [here](./1_create_farm.md#add-a-stellar-address-for-payout). + +That's it. You've now completed the necessary steps to build a DIY 3Node and to connect it to the Grid. + +## Advanced Booting Methods (Optional) + +### PXE Booting with OPNsense + +> This documentation comes from the [amazing Network Booting Guide](https://forum.ThreeFold.io/t/network-booting-tutorial/2688) by @Fnelson on the ThreeFold Forum. + +Network booting replaces your standard boot USB with a local server. This TFTP server delivers your boot files to your 3Nodes. This can be useful in bigger home farms, but is all but mandatory in a data center setup. + +Network boot setup is quite easy and centers on configuring a TFTP server. There are essentially two options for this: a small dedicated server, such as a Raspberry Pi, or piggybacking on your pfSense or OPNsense router. I would recommend the latter as it eliminates another piece of equipment and is probably more reliable. + +**Setting Up Your Router to Allow Network Booting** + +These steps are for OPNsense; pfSense may differ. These steps are required regardless of where you have your TFTP server. 
> Services>DHCPv4>LAN>Network Booting + +Check “Enable Network Booting” + +Enter the IP address of your TFTP server under “Set next-server IP”. This may be the router’s IP or that of whatever device you are booting from. + +Enter “pxelinux.0” under Set default bios filename. + +Ignore the TFTP Server settings. + + +**TFTP server setup on a Debian machine such as Ubuntu or Raspberry Pi** + +Note: `<your-farm-id>` below is a placeholder; replace it with your actual farm ID. + +> apt-get update +> +> apt-get install tftpd-hpa +> +> cd /srv/tftp/ +> +> wget http://ftp.nl.debian.org/debian/dists/buster/main/installer-amd64/current/images/netboot/netboot.tar.gz +> +> wget http://ftp.nl.debian.org/debian/dists/buster/main/installer-amd64/current/images/netboot/pxelinux.0 +> +> wget https://bootstrap.grid.tf/krn/prod/<your-farm-id> --no-check-certificate +> +> mv <your-farm-id> ipxe-prod.lkrn +> +> tar -xvzf netboot.tar.gz +> +> rm version.info netboot.tar.gz +> +> rm pxelinux.cfg/default +> +> chmod 777 /srv/tftp/pxelinux.cfg (optional if next step fails) +> +> echo 'default ipxe-prod.lkrn' >> pxelinux.cfg/default + + +**TFTP Server on an OPNsense router** + +> Note: When using pfSense instead of OPNsense, steps are probably similar, but the directory or other small things may differ. + +The first step is to download the TFTP server plugin. Go to System>Firmware>Status and check for updates, following the prompts to install. Then click the Plugins tab, search for tftp, and install os-tftp. Once that is installed, go to Services>TFTP (you may need to refresh the page). Check the Enable box and input your router IP (normally 192.168.1.1). Click save. + +Turn on ssh for your router. In OPNsense it is System>Settings>Administration. Then check Enable, root login, and password login. Hop over to PuTTY and connect to your router, normally 192.168.1.1. Log in as root and input your password. Hit 8 to enter the shell. 
In OPNsense the tftp directory is /usr/local/tftp. As above, `<your-farm-id>` is a placeholder for your actual farm ID. + +> cd /usr/local +> +> mkdir tftp +> +> cd ./tftp +> +> fetch http://ftp.nl.debian.org/debian/dists/buster/main/installer-amd64/current/images/netboot/netboot.tar.gz +> +> fetch http://ftp.nl.debian.org/debian/dists/buster/main/installer-amd64/current/images/netboot/pxelinux.0 +> +> fetch https://bootstrap.grid.tf/krn/prod/<your-farm-id> +> +> mv <your-farm-id> ipxe-prod.lkrn +> +> tar -xvzf netboot.tar.gz +> +> rm version.info netboot.tar.gz +> +> rm pxelinux.cfg/default +> +> echo 'default ipxe-prod.lkrn' >> pxelinux.cfg/default + +You can get out of the shell by entering exit or just closing the window. + +**3Node Setup** + +Set the server to BIOS boot and put PXE or network boot as the first choice. At least on Dell machines, make sure you have the network cable in plug 1 or it won’t boot. + + + +### PXE Booting with pfSense + +> This documentation comes from the [amazing Network Booting Guide](https://forum.threefold.io/t/network-booting-tutorial/2688/7) by @TheCaptain on the ThreeFold Forum. + +These are the steps required to enable PXE booting on pfSense. This guide assumes you’ll be using the router as your PXE server; pfSense allows boot file uploads directly from its web GUI. + +* Log into your pfSense instance + * Go to System>Package Manager + * Search and add the ‘tftpd’ package under the ‘Available Packages’ tab +* Go to Services>TFTP Server + * Under the ‘Settings’ tab, check enable and enter the router IP in the TFTP Server Bind IP field +* Switch to the ‘Files’ tab under Services>TFTP Server and upload your ‘ipxe-prod.efi’ file acquired from https://v3.bootstrap.grid.tf/ (second option labeled ‘EFI Kernel’) +* Go to Services>DHCP Server + * Under the ‘Other Options’ section, click Display Advance next to ‘TFTP’ and enter the router IP + * Click Display Advance next to ‘Network Booting’ + * Check enable, enter the router IP in the Next Server field + * Enter ipxe-prod.efi in the Default BIOS file name field + +That's it! 
You’ll want to ensure your clients are configured with boot priority set to IPv4 in the first spot. You might need to disable secure boot and enable legacy boot within the BIOS. + +## Booting Issues + +### Multiple nodes can run with the same node ID + +This is a [known issue](https://github.com/threefoldtech/info_grid/issues/122) and will be resolved once the TPM effort gets finalized. + diff --git a/collections/farmers/3node_building/gpu_farming.md b/collections/farmers/3node_building/gpu_farming.md new file mode 100644 index 0000000..1f9ef8f --- /dev/null +++ b/collections/farmers/3node_building/gpu_farming.md @@ -0,0 +1,72 @@ +

# GPU Farming

+ +Welcome to the *GPU Farming* section of the ThreeFold Manual! + +In this guide, we delve into the realm of GPU farming, shedding light on the significance of Graphics Processing Units (GPUs) and how they can be seamlessly integrated into the ThreeFold ecosystem. + +

## Table of Contents

+ +- [Understanding GPUs](#understanding-gpus) +- [Get Started](#get-started) +- [Install the GPU](#install-the-gpu) +- [GPU Node and the Farmerbot](#gpu-node-and-the-farmerbot) +- [Set a Price for the GPU Node](#set-a-price-for-the-gpu-node) +- [Check the GPU Node on the Node Finder](#check-the-gpu-node-on-the-node-finder) +- [Reserving the GPU Node](#reserving-the-gpu-node) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Understanding GPUs + +A Graphics Processing Unit, or GPU, is a specialized electronic circuit designed to accelerate the rendering of images and videos. Originally developed for graphics-intensive tasks in gaming and multimedia applications, GPUs have evolved into powerful parallel processors with the ability to handle complex computations, such as 3D rendering, AI and machine learning. + +In the context of ThreeFold, GPU farming involves harnessing the computational power of Graphics Processing Units to contribute to the decentralized grid. This empowers users to participate in the network's mission of creating a more equitable and efficient internet infrastructure. + +## Get Started + +In this guide, we focus on the integration of GPUs with a 3Node, the fundamental building block of the ThreeFold Grid. The process involves adding a GPU to enhance the capabilities of your node, providing increased processing power and versatility for a wide range of tasks. Note that any Nvidia or AMD graphics card should work as long as it's supported by the system. + +## Install the GPU + +We cover the basic steps to install the GPU on your 3Node. + +* Find a proper GPU model for your specific 3Node hardware +* Install the GPU on the server + * Note: You might need to move or remove some pieces of your server to make room for the GPU +* (Optional) Boot the 3Node with a Linux distro (e.g. 
Ubuntu) and use the terminal to check if the GPU is recognized by the system + * ``` + sudo lshw -C Display + ``` + * Output example with an AMD Radeon (on the line `product: ...`) +![gpu_farming](./img/cli_display_gpu.png) +* Boot the 3Node with the Zero-OS bootstrap image + +## GPU Node and the Farmerbot + +If you are using the Farmerbot, it might be a good idea to first boot the GPU node without the Farmerbot (i.e. remove the node from the config file and restart the Farmerbot). Once you've confirmed that the GPU is properly detected by TFChain, you can then add the GPU node back to the config file and restart the Farmerbot. While this is not necessary, it can be an effective way to test the GPU node separately. + +## Set a Price for the GPU Node + +You can [set additional fees](../farming_optimization/set_additional_fees.md) for your GPU dedicated node on the [TF Dashboard](https://dashboard.grid.tf/). + +When a user reserves your 3Node as a dedicated node, you will receive TFT payments once every 24 hours. These TFT payments will be sent to the TFChain account of your farm's twin. + +## Check the GPU Node on the Node Finder + +You can use the [Node Finder](../../dashboard/deploy/node_finder.md) on the [TF Dashboard](https://dashboard.grid.tf/) to verify that the node is displayed as having a GPU. + +* On the Dashboard, go to the Node Finder +* Under **Node ID**, write the node ID of the GPU node +* Once the results are displayed, you should see **1** under **GPU** + * If you are using the Status bot, you might need to change the node status under **Select Nodes Status** (e.g. **Down**, **Standby**) to see the node's information + +> Note: It can take some time for the GPU parameter to be displayed. + +## Reserving the GPU Node + +Now, users can reserve the node in the **Dedicated Nodes** section of the Dashboard and then deploy workloads using the GPU. For more information, read [this documentation](../../dashboard/deploy/node_finder.md#dedicated-nodes). 
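As a final hardware check at any point in this process, if `lshw` is not available in your live session, `lspci` (from the common `pciutils` package) offers a similar way to confirm the card is visible on the PCI bus; this is a generic Linux sketch, not a ThreeFold-specific tool:

```
# List PCI display adapters; a discrete GPU shows up as a
# "VGA compatible controller" or "3D controller" entry
lspci | grep -Ei 'vga|3d|display' || echo "no display adapter found"
```

If the card does not appear here, it will not be detected by Zero-OS either, so reseat the card and check power connectors first.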
+ +## Questions and Feedback + +If you have any questions or feedback, we invite you to discuss with the ThreeFold community on the [ThreeFold Forum](https://forum.threefold.io/) or on the [ThreeFold Farmer chat](https://t.me/threefoldfarmers) on Telegram. \ No newline at end of file diff --git a/collections/farmers/3node_building/img/bootstrap_disable-gpu.png b/collections/farmers/3node_building/img/bootstrap_disable-gpu.png new file mode 100644 index 0000000..f72f450 Binary files /dev/null and b/collections/farmers/3node_building/img/bootstrap_disable-gpu.png differ diff --git a/collections/farmers/3node_building/img/bootstrap_kernel_list.png b/collections/farmers/3node_building/img/bootstrap_kernel_list.png new file mode 100644 index 0000000..3f3f559 Binary files /dev/null and b/collections/farmers/3node_building/img/bootstrap_kernel_list.png differ diff --git a/collections/farmers/3node_building/img/cli_display_gpu.png b/collections/farmers/3node_building/img/cli_display_gpu.png new file mode 100644 index 0000000..777b68b Binary files /dev/null and b/collections/farmers/3node_building/img/cli_display_gpu.png differ diff --git a/collections/farmers/3node_building/img/dashboard_1.png b/collections/farmers/3node_building/img/dashboard_1.png new file mode 100644 index 0000000..bcfed02 Binary files /dev/null and b/collections/farmers/3node_building/img/dashboard_1.png differ diff --git a/collections/farmers/3node_building/img/dashboard_2.png b/collections/farmers/3node_building/img/dashboard_2.png new file mode 100644 index 0000000..1f6538a Binary files /dev/null and b/collections/farmers/3node_building/img/dashboard_2.png differ diff --git a/collections/farmers/3node_building/img/dashboard_4.png b/collections/farmers/3node_building/img/dashboard_4.png new file mode 100644 index 0000000..54c413a Binary files /dev/null and b/collections/farmers/3node_building/img/dashboard_4.png differ diff --git a/collections/farmers/3node_building/img/dashboard_5.png 
b/collections/farmers/3node_building/img/dashboard_5.png new file mode 100644 index 0000000..8bfe4e4 Binary files /dev/null and b/collections/farmers/3node_building/img/dashboard_5.png differ diff --git a/collections/farmers/3node_building/img/dashboard_6.png b/collections/farmers/3node_building/img/dashboard_6.png new file mode 100644 index 0000000..980d6d3 Binary files /dev/null and b/collections/farmers/3node_building/img/dashboard_6.png differ diff --git a/collections/documentation/dashboard/img/dashboard_bootstrap_farm.png b/collections/farmers/3node_building/img/dashboard_bootstrap_farm.png similarity index 100% rename from collections/documentation/dashboard/img/dashboard_bootstrap_farm.png rename to collections/farmers/3node_building/img/dashboard_bootstrap_farm.png diff --git a/collections/farmers/3node_building/img/dashboard_create_farm.png b/collections/farmers/3node_building/img/dashboard_create_farm.png new file mode 100644 index 0000000..858da74 Binary files /dev/null and b/collections/farmers/3node_building/img/dashboard_create_farm.png differ diff --git a/collections/farmers/3node_building/img/dashboard_farm_name.png b/collections/farmers/3node_building/img/dashboard_farm_name.png new file mode 100644 index 0000000..c250693 Binary files /dev/null and b/collections/farmers/3node_building/img/dashboard_farm_name.png differ diff --git a/collections/farmers/3node_building/img/dashboard_tf_mnemonics.png b/collections/farmers/3node_building/img/dashboard_tf_mnemonics.png new file mode 100644 index 0000000..ec92cde Binary files /dev/null and b/collections/farmers/3node_building/img/dashboard_tf_mnemonics.png differ diff --git a/collections/farmers/3node_building/img/dashboard_tfchain_create_account.png b/collections/farmers/3node_building/img/dashboard_tfchain_create_account.png new file mode 100644 index 0000000..1bdcd95 Binary files /dev/null and b/collections/farmers/3node_building/img/dashboard_tfchain_create_account.png differ diff --git 
a/collections/farmers/3node_building/img/dashboard_tfconnect_wallet_1.png b/collections/farmers/3node_building/img/dashboard_tfconnect_wallet_1.png new file mode 100644 index 0000000..52a9679 Binary files /dev/null and b/collections/farmers/3node_building/img/dashboard_tfconnect_wallet_1.png differ diff --git a/collections/farmers/3node_building/img/dashboard_tfconnect_wallet_2.png b/collections/farmers/3node_building/img/dashboard_tfconnect_wallet_2.png new file mode 100644 index 0000000..1ef9dc9 Binary files /dev/null and b/collections/farmers/3node_building/img/dashboard_tfconnect_wallet_2.png differ diff --git a/collections/farmers/3node_building/img/dashboard_walletaddress_1.png b/collections/farmers/3node_building/img/dashboard_walletaddress_1.png new file mode 100644 index 0000000..da65f76 Binary files /dev/null and b/collections/farmers/3node_building/img/dashboard_walletaddress_1.png differ diff --git a/collections/farmers/3node_building/img/dashboard_walletaddress_2.png b/collections/farmers/3node_building/img/dashboard_walletaddress_2.png new file mode 100644 index 0000000..a36e095 Binary files /dev/null and b/collections/farmers/3node_building/img/dashboard_walletaddress_2.png differ diff --git a/collections/farmers/3node_building/img/dashboard_your_farms.png b/collections/farmers/3node_building/img/dashboard_your_farms.png new file mode 100644 index 0000000..58de47a Binary files /dev/null and b/collections/farmers/3node_building/img/dashboard_your_farms.png differ diff --git a/collections/farmers/3node_building/img/farming_001.png b/collections/farmers/3node_building/img/farming_001.png new file mode 100644 index 0000000..b666fce Binary files /dev/null and b/collections/farmers/3node_building/img/farming_001.png differ diff --git a/collections/farmers/3node_building/img/farming_30.png b/collections/farmers/3node_building/img/farming_30.png new file mode 100644 index 0000000..d810d13 Binary files /dev/null and 
b/collections/farmers/3node_building/img/farming_30.png differ diff --git a/collections/farmers/3node_building/img/farming_createfarm_1.png b/collections/farmers/3node_building/img/farming_createfarm_1.png new file mode 100644 index 0000000..95545d8 Binary files /dev/null and b/collections/farmers/3node_building/img/farming_createfarm_1.png differ diff --git a/collections/farmers/3node_building/img/farming_createfarm_2.png b/collections/farmers/3node_building/img/farming_createfarm_2.png new file mode 100644 index 0000000..488f970 Binary files /dev/null and b/collections/farmers/3node_building/img/farming_createfarm_2.png differ diff --git a/collections/farmers/3node_building/img/farming_createfarm_21.png b/collections/farmers/3node_building/img/farming_createfarm_21.png new file mode 100644 index 0000000..ab36d45 Binary files /dev/null and b/collections/farmers/3node_building/img/farming_createfarm_21.png differ diff --git a/collections/farmers/3node_building/img/farming_createfarm_22.png b/collections/farmers/3node_building/img/farming_createfarm_22.png new file mode 100644 index 0000000..7cfc6ba Binary files /dev/null and b/collections/farmers/3node_building/img/farming_createfarm_22.png differ diff --git a/collections/farmers/3node_building/img/farming_createfarm_23.png b/collections/farmers/3node_building/img/farming_createfarm_23.png new file mode 100644 index 0000000..3537771 Binary files /dev/null and b/collections/farmers/3node_building/img/farming_createfarm_23.png differ diff --git a/collections/farmers/3node_building/img/farming_createfarm_24.png b/collections/farmers/3node_building/img/farming_createfarm_24.png new file mode 100644 index 0000000..4b2da96 Binary files /dev/null and b/collections/farmers/3node_building/img/farming_createfarm_24.png differ diff --git a/collections/farmers/3node_building/img/farming_createfarm_3.png b/collections/farmers/3node_building/img/farming_createfarm_3.png new file mode 100644 index 0000000..349ec17 Binary files 
/dev/null and b/collections/farmers/3node_building/img/farming_createfarm_3.png differ diff --git a/collections/farmers/3node_building/img/farming_createfarm_4.png b/collections/farmers/3node_building/img/farming_createfarm_4.png new file mode 100644 index 0000000..b717bfe Binary files /dev/null and b/collections/farmers/3node_building/img/farming_createfarm_4.png differ diff --git a/collections/farmers/3node_building/img/farming_createfarm_5.png b/collections/farmers/3node_building/img/farming_createfarm_5.png new file mode 100644 index 0000000..a022a1f Binary files /dev/null and b/collections/farmers/3node_building/img/farming_createfarm_5.png differ diff --git a/collections/farmers/3node_building/img/farming_createfarm_6.png b/collections/farmers/3node_building/img/farming_createfarm_6.png new file mode 100644 index 0000000..922b77b Binary files /dev/null and b/collections/farmers/3node_building/img/farming_createfarm_6.png differ diff --git a/collections/farmers/3node_building/img/farming_createfarm_7.png b/collections/farmers/3node_building/img/farming_createfarm_7.png new file mode 100644 index 0000000..e4bdb6e Binary files /dev/null and b/collections/farmers/3node_building/img/farming_createfarm_7.png differ diff --git a/collections/farmers/3node_building/img/farming_createfarm_8.png b/collections/farmers/3node_building/img/farming_createfarm_8.png new file mode 100644 index 0000000..e8d023c Binary files /dev/null and b/collections/farmers/3node_building/img/farming_createfarm_8.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_1.png b/collections/farmers/3node_building/img/tf_dashboard_2023_1.png new file mode 100644 index 0000000..c93f67c Binary files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_1.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_10.png b/collections/farmers/3node_building/img/tf_dashboard_2023_10.png new file mode 100644 index 0000000..880b4fe Binary 
files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_10.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_11.png b/collections/farmers/3node_building/img/tf_dashboard_2023_11.png new file mode 100644 index 0000000..f5aea83 Binary files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_11.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_12.png b/collections/farmers/3node_building/img/tf_dashboard_2023_12.png new file mode 100644 index 0000000..afb06e0 Binary files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_12.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_13.png b/collections/farmers/3node_building/img/tf_dashboard_2023_13.png new file mode 100644 index 0000000..110a97a Binary files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_13.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_14.png b/collections/farmers/3node_building/img/tf_dashboard_2023_14.png new file mode 100644 index 0000000..8c2d2c6 Binary files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_14.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_15.png b/collections/farmers/3node_building/img/tf_dashboard_2023_15.png new file mode 100644 index 0000000..30d41f2 Binary files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_15.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_16.png b/collections/farmers/3node_building/img/tf_dashboard_2023_16.png new file mode 100644 index 0000000..320536f Binary files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_16.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_2.png b/collections/farmers/3node_building/img/tf_dashboard_2023_2.png new file mode 100644 index 0000000..b1276d9 
Binary files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_2.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_4.png b/collections/farmers/3node_building/img/tf_dashboard_2023_4.png new file mode 100644 index 0000000..9ce84ae Binary files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_4.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_5.png b/collections/farmers/3node_building/img/tf_dashboard_2023_5.png new file mode 100644 index 0000000..488bad0 Binary files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_5.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_6.png b/collections/farmers/3node_building/img/tf_dashboard_2023_6.png new file mode 100644 index 0000000..77b3f03 Binary files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_6.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_7.png b/collections/farmers/3node_building/img/tf_dashboard_2023_7.png new file mode 100644 index 0000000..21e55fc Binary files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_7.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_8.png b/collections/farmers/3node_building/img/tf_dashboard_2023_8.png new file mode 100644 index 0000000..a00dfa1 Binary files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_8.png differ diff --git a/collections/farmers/3node_building/img/tf_dashboard_2023_9.png b/collections/farmers/3node_building/img/tf_dashboard_2023_9.png new file mode 100644 index 0000000..64b9685 Binary files /dev/null and b/collections/farmers/3node_building/img/tf_dashboard_2023_9.png differ diff --git a/collections/farmers/3node_building/minting_receipts.md b/collections/farmers/3node_building/minting_receipts.md new file mode 100644 index 0000000..ded9cac --- /dev/null +++ 
b/collections/farmers/3node_building/minting_receipts.md @@ -0,0 +1,105 @@ +

# Minting Receipts

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Access the Reports](#access-the-reports) +- [Available Information](#available-information) +- [TFT Farming Registration Price](#tft-farming-registration-price) + +*** + +## Introduction + +Once you have the receipt hash of your node minting, you can get the [minting report](../../dashboard/tfchain/tf_minting_reports.md) of your node. + +## Access the Reports + +- On the Dashboard, go to **TFChain** -> **TF Minting Reports** +- Enter your receipt hash +- Consult your minting report + +## Available Information + +The ThreeFold Alpha minting tool will present the following information for each minting receipt hash: + +- Node Info: This contains the basic information in relation to your node. + - Node ID + - Farm Name and ID + - Measured Uptime +- Node Resources: These resources are related to the [cloud units](../../../knowledge_base/cloud/cloudunits.md) and the [resource units](../../../knowledge_base/cloud/resource_units_calc_cloudunits.md). + - CU + - SU + - NU + - CRU + - MRU + - SRU + - HRU +- TFT Farmed: This is the quantity of TFT farmed during the minting period. +- Payout Address: The payout address is the Stellar address you set to receive your farming rewards. + +## TFT Farming Registration Price + +Currently, minting is set at a TFT value of 0.08 USD. This TFT farming registration price (i.e. the TFT minting value) can be seen as a farming difficulty level. The higher this number is, the less TFT is minted for the same given node. This number is not related to the TFT market price and is currently fixed. + +The ThreeFold DAO can vote to change this number. For example, if the ThreeFold DAO decides to increase the TFT minting value to 0.10 USD, the farming difficulty would be increased by 25% (0.08 * 1.25 = 0.10). This updated TFT farming registration price would then affect all new nodes that are registered after the DAO vote is passed. 
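The difficulty arithmetic from the example above can be checked with a one-liner (the 0.08 and 0.10 USD values are the ones from the example):

```shell
# Relative increase in farming difficulty if the DAO raises
# the TFT farming registration price from 0.08 USD to 0.10 USD
awk 'BEGIN { printf "%.0f%%\n", (0.10 - 0.08) / 0.08 * 100 }'
# prints: 25%
```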
+ + \ No newline at end of file diff --git a/collections/farmers/advanced_networking/advanced_networking_toc.md b/collections/farmers/advanced_networking/advanced_networking_toc.md new file mode 100644 index 0000000..f30a86f --- /dev/null +++ b/collections/farmers/advanced_networking/advanced_networking_toc.md @@ -0,0 +1,13 @@ +

# Advanced Networking

+ +Welcome to the *Advanced Networking* section of the ThreeFold Manual. + +In this section, we provide advanced networking tips for farms with public IPs and in data centers (DC). We also cover the differences between IPv4 and IPv6 networking. + +

## Table of Contents

+ +- [Networking Overview](./networking_overview.md) +- [Network Considerations](./network_considerations.md) +- [Network Setup](./network_setup.md) + +> Note: This documentation does not constitute a complete set of knowledge on setting up farms with public IP addresses in a data center. Please make sure to do your own research and communicate with your data center and your Internet service provider for any additional information. \ No newline at end of file diff --git a/collections/farmers/advanced_networking/network_considerations.md b/collections/farmers/advanced_networking/network_considerations.md new file mode 100644 index 0000000..0576fd9 --- /dev/null +++ b/collections/farmers/advanced_networking/network_considerations.md @@ -0,0 +1,120 @@ +

# Network Considerations

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Running ZOS (v2) at Home](#running-zos-v2-at-home) +- [Running ZOS (v2) in a Multi-Node Farm in a DC](#running-zos-v2-in-a-multi-node-farm-in-a-dc) + - [Necessities](#necessities) + - [IPv6](#ipv6) + - [Routing/Firewalling](#routingfirewalling) + - [Multi-NIC Nodes](#multi-nic-nodes) + - [Farmers and the TFGrid](#farmers-and-the-tfgrid) + +*** + +## Introduction + +Running ZOS on a node is just a matter of booting it with a USB stick, or with a DHCP/BOOTP/TFTP server with the right configuration so that the node can start the OS. +Once it starts booting, the OS detects the NICs and starts the network configuration. A Node can only complete its boot process once it has effectively received an IP address and a route to the Internet. Without that, the Node will retry indefinitely to obtain Internet access and never finish its startup. + +So a Node needs to be connected to a __wired__ network providing a DHCP server and a default gateway to the Internet, be it NATed or plainly on the public network; any route to the Internet, be it IPv4, IPv6 or both, is sufficient. + +For a node to have the ability to host user networks, we **strongly** advise having a working IPv6 setup, as that is the primary IP stack we're using for the User Network's Mesh to function. + +## Running ZOS (v2) at Home + +Running a ZOS Node at home is simple. Connect it to your router, plug it into the network, insert the preconfigured USB stick containing the bootloader and the `farmer_id`, and power it on. + +## Running ZOS (v2) in a Multi-Node Farm in a DC + +Multi-Node Farms, where a farmer wants to host the nodes in a data center, are basically just as simple, but the nodes can boot from a boot server that provides DHCP and also delivers the iPXE image to load, without the need for a USB stick in every Node. + +A boot server is not strictly necessary, but it helps! 
That server has a list of the MAC addresses of the nodes, and delivers the bootloader over PXE. The farmer is responsible for setting up the network and configuring the boot server. + +### Necessities + +The Farmer needs to: + +- Obtain a publicly reachable IPv6 prefix allocation from the provider. A `/64` will do, but a `/48` is advisable if the farmer wants to provide IPv6 transit for User Networks. +- If IPv6 is not an option, obtain an IPv4 subnet from the provider. At least one IPv4 address per node is needed, where all IP addresses are publicly reachable. +- Have the Nodes connected on that public network with a switch so that all Nodes are publicly reachable. +- In case of multiple NICs, also make sure the farm is properly registered in BCDB, so that the Nodes' public IP addresses are registered. +- Properly list the MAC addresses of the Nodes, and configure the DHCP server to provide an IP address per Node, and in case of multiple NICs also provide private IP addresses over DHCP per Node. +- Make sure that after the first boot, the Nodes are reachable. + +### IPv6 + +IPv6, although a real protocol since '98, has seen reluctant adoption over its lifetime. That is mostly because ISPs and carriers were reluctant to deploy it, not seeing the need after the advent of NAT and private IP space, which give a false impression of security. +But in October 2019, RIPE sent a mail to all its LIRs announcing that the last consecutive /22 of IPv4 had been allocated. Needless to say, that makes the transition to IPv6 of utmost importance and necessity. +Hence, ZOS starts with IPv6, and IPv4 is merely an afterthought ;-) +So in a nutshell: we greatly encourage Farmers to have IPv6 on the Node's network. + +### Routing/Firewalling + +Basically, the Nodes are self-protecting, in the sense that they expose no listening processes to be accessed at all. 
No service is active on the node itself, and User Networks function solely on an overlay. +That also means that there is no need for a Farm admin to protect the Nodes from exterior access, although some DDoS protection might be a good idea. +In the first phase we will still allow the Host OS (ZOS) to reply to ICMP ping requests, but that 'feature' might well be blocked in the future, as once a Node is able to register itself, there is no real need to ever try to reach it. + +### Multi-NIC Nodes + +Nodes that Farmers deploy are typically multi-NIC Nodes, where one NIC (typically 1GBit) is used for running a proper DHCP server from which the Nodes can boot, and another NIC (1GBit or even 10GBit) is used for transfers of User Data, so that there is a clean separation and injection of bogus data is not possible. + +That means that there would be two networks, either on different physical switches, or on port-based VLANs in the switch (if there is only one). + +- Management NICs + The Management NIC will be used by ZOS to boot and register itself to the Grid. Also, all communications from the Node to the Grid happen from there. +- Public NICs + +### Farmers and the TFGrid + +A Node, being part of the Grid, has no concept of 'Farmer'. The only relationship a Node has with a Farmer is the fact that it is registered somewhere, and that workloads on the Node will be remunerated with Tokens. For the rest, a Node is a wholly stand-alone thing that participates in the Grid. 
+ +```text + 172.16.1.0/24 + 2a02:1807:1100:10::/64 ++--------------------------------------+ +| +--------------+ | +-----------------------+ +| |Node ZOS | +-------+ | | +| | +-------------+1GBit +--------------------+ 1GBit switch | +| | | br-zos +-------+ | | +| | | | | | +| | | | | | +| | | | +------------------+----+ +| +--------------+ | | +-----------+ +| | OOB Network | | | +| | +----------+ ROUTER | +| | | | +| | | | +| | | | +| +------------+ | +----------+ | +| | Public | | | | | +| | container | | | +-----+-----+ +| | | | | | +| | | | | | +| +---+--------+ | +-------------------+--------+ | +| | | | 10GBit Switch | | +| br-pub| +-------+ | | | +| +-----+10GBit +-------------------+ | +----------> +| +-------+ | | Internet +| | | | +| | +----------------------------+ ++--------------------------------------+ + 185.69.167.128/26 Public network + 2a02:1807:1100:0::/64 + +``` + +The underlay part of the WireGuard interfaces gets instantiated in the Public container (namespace); once created, these WireGuard interfaces get sent into the User Network (Network Resource), where a user can then configure the interface as they see fit. + +The router of the farmer fulfills two roles: + +- NAT everything in the OOB network to the outside, so that nodes can start and register themselves, as well as get tasks to execute from the BCDB. +- Route the assigned IPv4 subnet and IPv6 public prefix on the public segment, to which the public container is connected. + +As such, in case the farmer wants to provide IPv4 public access for grid proxies, the node will need at least one (1) IPv4 address. The farmer is free to assign IPv4 addresses to only a part of the Nodes. +On the other hand, it is quite important to have a proper IPv6 setup, as IPv6 is the primary stack. + +It's the Farmer's task to set up the Router and the switches. 
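For the boot-server side of this setup, a minimal DHCP/PXE configuration can be sketched with dnsmasq on the OOB network shown in the diagram above. This is only an illustration: the interface name, MAC address, and bootloader filename are placeholders you must adapt to your own farm and to the images served by bootstrap.grid.tf.

```conf
# /etc/dnsmasq.conf -- sketch of an OOB boot server (all values are examples)
interface=eth0
# Hand out leases on the OOB network (172.16.1.0/24 in the diagram above)
dhcp-range=172.16.1.100,172.16.1.200,12h
# Pin each known node to a fixed address by MAC
dhcp-host=aa:bb:cc:dd:ee:01,172.16.1.11
# Serve the iPXE bootloader over TFTP
enable-tftp
tftp-root=/srv/tftp
dhcp-boot=zos-bootloader.efi
```

With a configuration along these lines, nodes plugged into the OOB switch obtain an address and the bootloader automatically, with no USB stick per node.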
In a simpler setup (with a small number of nodes, for instance), the farmer could set up a single switch and create two port-based VLANs to separate OOB and Public, or, with single-NIC nodes, just put them directly on the public segment; but then the farmer will have to provide a DHCP server on the Public network. \ No newline at end of file diff --git a/collections/farmers/advanced_networking/network_setup.md b/collections/farmers/advanced_networking/network_setup.md new file mode 100644 index 0000000..d05d204 --- /dev/null +++ b/collections/farmers/advanced_networking/network_setup.md @@ -0,0 +1,86 @@ +

# Network Setup

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Network Setup for Farmers](#network-setup-for-farmers) + - [Step 1. Testing for IPv6 Availability in Your Location](#step-1-testing-for-ipv6-availability-in-your-location) + - [Step 2. Choosing the Setup to Connect Your Nodes](#step-2-choosing-the-setup-to-connect-your-nodes) + - [2.1 Home Setup](#21-home-setup) + - [2.2 Data Center/Advanced Setup](#22-data-centeradvanced-setup) +- [General Notes](#general-notes) + +*** + +# Introduction + +0-OS nodes participating in the ThreeFold Grid need connectivity, of course. They need to be able to communicate over the Internet with each other in order to do various things: + +- download their OS modules +- perform OS module upgrades +- register themselves to the grid, and send regular updates about their status +- query the grid for tasks to execute +- build and run the Overlay Network +- download flists and the effective files to cache + +The nodes themselves can have connectivity in a few different ways: + +- Only RFC1918 private addresses, connected to the Internet through NAT, with no IPv6. Mostly, these are single-NIC (network card) machines that can host some workloads through the Overlay Network, but can't expose services directly. These are HIDDEN nodes, and are mostly booted with a USB stick from bootstrap.grid.tf. +- Dual-stacked: having RFC1918 private IPv4 and public IPv6, where the IPv6 addresses are received from a home router but firewalled for outgoing traffic only. These nodes are effectively also HIDDEN. +- Nodes with 2 NICs: one connected to a segment that has real public addresses (IPv4 and/or IPv6), and one used for booting and local management (OOB), as in the drawing for the farmer setup. + +For Farmers, we need Nodes to be reachable over IPv6, so that the nodes can: + +- expose services to be proxied into containers/VMs +- act as aggregating nodes for Overlay Networks for HIDDEN Nodes + +Some Nodes in Farms should also have a publicly reachable IPv4 address, to make sure that clients that only have IPv4 can effectively reach exposed services. + +But we need to stress the importance of IPv6 availability when you're running a multi-node farm in a data center: as the grid is boldly claiming to be a new Internet, we should make sure we adhere to the new protocols that are future-proof. Hence: IPv6 is the base, and IPv4 is just there to accommodate the transition. + +Nowadays, RIPE can't even hand out consecutive /22 IPv4 blocks any more to new LIRs, so you'll be bound to the market to get IPv4, mostly at rates of 10-15 Euro per IP. Things tend to get costly that way. + +So, IPv6 is not an afterthought in 0-OS; we're starting with it. + +# Network Setup for Farmers + +This is a quick manual to what is needed for connecting a node with Zero-OS v2.0. + +## Step 1. Testing for IPv6 Availability in Your Location + +As described above, the network in which the node is installed has to be IPv6-enabled. This is not an afterthought: as we are building a new Internet, it has to be based on the new and forward-looking IP addressing scheme. This is something you have to investigate and negotiate with your connectivity provider. Many (but not all) home connectivity products and certainly most data centers can provide you with IPv6. There are many sources of information on how to test and check whether your connection is IPv6-enabled; [here is a starting point](http://www.ipv6enabled.org/ipv6_enabled/ipv6_enable.php). + +## Step 2. Choosing the Setup to Connect Your Nodes + +Once you have established that you have IPv6 enabled on the network you are about to deploy on, you have to make sure that there is an IPv6 DHCP facility available. Zero-OS does not work with static IPv6 addresses (at this point in time). So you have to choose and create one of the following setups: + +### 2.1 Home Setup + +Use your (home) ISP router's IPv6 DHCP capabilities to provide (private) IPv6 addresses. The principle works the same as for IPv4 home connections: everything is enabled by Network Address Translation (just like anything else that uses internet connectivity). This should be relatively straightforward if you have established that your connection has IPv6 enabled. + +### 2.2 Data Center/Advanced Setup + +In this situation there are many options for how to set up your nodes. This requires you, as the expert, to make a few decisions on how to connect them and on what the best setup is that you can support for the operational lifetime of your farm. The same basic principles apply: + +- You have to have a block of (public) IPv6 routed to your router, or you have to have your router set up to provide Network Address Translation (NAT). +- You have to have a DHCP server in your network that manages and controls IPv6 address leases. Depending on your specific setup, you either have this DHCP server manage a public IPv6 range, which makes all nodes directly connected to the public internet, or you have it manage a private block of IPv6 addresses, which makes all your nodes connect to the internet through NAT. + +As a farmer, you are in charge of selecting and creating the appropriate network setup for your farm. + +# General Notes + +The above setup allows your node(s) to appear in the explorer on the TFGrid and will allow you to earn farming tokens. As stated in the introduction, ThreeFold is creating next-generation internet capacity and therefore has IPv6 as its base building block. Connecting to the current (dominant) IPv4 network happens for IT workloads through so-called web gateways. As the word says, these are gateways that provide connectivity between the currently leading IPv4 addressing scheme and IPv6. + +We have started a forum where people share their experiences and configurations. This is a work in progress and forever growing. + +**IMPORTANT**: You as a farmer do not need access to IPv4 to be able to rent out capacity for IT workloads that need to be visible on IPv4; this can happen elsewhere on the TFGrid. + diff --git a/collections/farmers/advanced_networking/networking_overview.md b/collections/farmers/advanced_networking/networking_overview.md new file mode 100644 index 0000000..c4bc322 --- /dev/null +++ b/collections/farmers/advanced_networking/networking_overview.md @@ -0,0 +1,94 @@ +

# Networking Overview

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Possible Configurations](#possible-configurations) +- [Overall Requirements](#overall-requirements) +- [Notes and Warnings](#notes-and-warnings) + - [Management Interfaces](#management-interfaces) + - [Data Center Cable Management](#data-center-cable-management) + - [Static IP Uplink](#static-ip-uplink) +- [Testing the Setup](#testing-the-setup) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this section, we provide advanced networking tips for farms with public IPs and in data centers (DC). The information available in this section is a combination of documentation from ThreeFold and tips and advice from community members who experienced first-hand the creation of ThreeFold farms that make use of public IP blocks in data centers, personal data centers and home farms. A special thank you to those who contributed to improving the TFGrid and its knowledge base documentation. + +## Possible Configurations + +For farmers who have public IPs, extra considerations are needed in setting up the network of the farm. We will go through the main considerations in this section. + +First, we must acknowledge that by the open-source design of ThreeFold farming, a farm can range from a simple [single 3Node](../3node_building/3node_building.md) setup, to a multi-rack farm hosted in a typical data center, and everything in-between, from the farmer experimenting with public IP blocks, to the entrepreneur who builds their own data center at home. + +There are thus many types of farms and each will have varying configurations. The simplest way to set up a farm has been extensively discussed in the first steps of creating a farm. But what are the other more complex configurations possible? 
Let's go through some of those: + +- Network link + - DC provides a network link into the farmer's rack +- Router and switch + - The farmer provides their own router and switch + - DC provides a router and/or switch in the rack +- Gateway IP and public IP + - Gateway IP provided is in the same range as the public IPs + - Gateway IP is in a different range than the public IPs +- Segmenting + - Farmer segments the OOB ("Zos"/private) interfaces and the public interfaces into + - separate VLANs, OR; + - uses separate switches altogether + - No segmenting is actually necessary, farmer connects all interfaces to one switch + +## Overall Requirements + +There are overall requirements for any 3Node farm using IP address blocks in a data center or at home: + +- There must be at least one interface that provides DHCP to each node +- Public IPs must be routable from at least one interface + +Note that redundancy can help in avoiding a single point of failure [(SPOF)](https://en.wikipedia.org/wiki/Single_point_of_failure). + +## Notes and Warnings + +### Management Interfaces + +You should make sure to never expose management interfaces to the public internet. + + +### Data Center Cable Management + +It's important to have good cable management, especially if you are in a data center. Proper cable management will improve the cooling streams of your farm, and there shouldn't be any cables in front of the fans. This way, your servers will last longer. If you want to patch a rack, you should have patch cables of all lengths from 30 cm to 3 m. Also, try to keep the cables as short as possible. Arrange the cables in bundles of eight and lead them to the sides of the rack as much as possible for optimal airflow. + + + + + +### Static IP Uplink + +If your DC uplink is established by simple static IP (which is the case in most DCs), there is a simple setup possible. Note that if you have to use PPPoE or pptp/L2TP (like a consumer internet connection at most homes), this would not work. 
+ +If your WAN is established by static IP, you can simply attach the WAN uplink provided by the DC to one of the switches (and not to the WAN side of your own router). Then, the WAN side of the router needs to be attached to the switch too. By doing so, your nodes will be able to connect directly to the DC gateway, in the same way that the router connects its WAN side to the gateway, without the public IP traffic being routed/bridged through the router (the router is bypassed). + +With a network configured like this, it does not matter which ports you use for which NIC of your nodes; you can plug them in anywhere. But there is one restriction: the DC uplink must use a static IP. A dynamic IP would not work because you would then have two DHCP servers in the same physical network (the one from the DC and your own router). + +## Testing the Setup + +Manual and automatic validation of the network of a farm are possible. More information on automatic validation will be added in the future. + +You can test the network of your farm manually by deploying a workload on your 3Nodes with either a gateway or a public IP reserved. + +## Questions and Feedback + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Farmer Chat](https://t.me/threefoldfarmers) on Telegram. \ No newline at end of file diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/3node_diy_desktop.md b/collections/farmers/complete_diy_guides/3node_diy_desktop/3node_diy_desktop.md new file mode 100644 index 0000000..de74eb0 --- /dev/null +++ b/collections/farmers/complete_diy_guides/3node_diy_desktop/3node_diy_desktop.md @@ -0,0 +1,406 @@ +

Building a DIY 3Node: Desktop Computer

+ +In the following 3Node DIY guide, you will learn how to turn a Dell Optiplex 7020 into a 3Node farming on the ThreeFold Grid. + +Note that the process is similar for other desktop computers. + +
+ + + +

Table of Contents

+ + + +- [Prerequisite](#prerequisite) + - [DIY 3Node Computer Requirements](#diy-3node-computer-requirements) + - [DIY 3Node Material List](#diy-3node-material-list) +- [1. Create a Farm](#1-create-a-farm) + - [Using Dashboard](#using-dashboard) + - [Using TF Connect App](#using-tf-connect-app) +- [2. Create a Zero-OS Bootstrap Image](#2-create-a-zero-os-bootstrap-image) + - [Download the Zero-OS Bootstrap Image](#download-the-zero-os-bootstrap-image) + - [Burn the Zero-OS Bootstrap Image](#burn-the-zero-os-bootstrap-image) +- [3. Set the Hardware](#3-set-the-hardware) +- [4. Wipe All the Disks](#4-wipe-all-the-disks) + - [1. Create a Linux Bootstrap Image](#1-create-a-linux-bootstrap-image) + - [2. Boot Linux in Try Mode](#2-boot-linux-in-try-mode) + - [3. Use wipefs to Wipe All Disks](#3-use-wipefs-to-wipe-all-disks) +- [5. Set the BIOS/UEFI](#5-set-the-biosuefi) + - [The Essential Features of BIOS/UEFI for a 3Node](#the-essential-features-of-biosuefi-for-a-3node) + - [Set the BIOS/UEFI on a Dell Optiplex 7020](#set-the-biosuefi-on-a-dell-optiplex-7020) +- [6. Boot the 3Node](#6-boot-the-3node) + - [Check the Node Status](#check-the-node-status) + - [Farming Rewards Distribution](#farming-rewards-distribution) +- [Additional Information](#additional-information) + +*** + +
+ + + +# Prerequisite + +## DIY 3Node Computer Requirements + + + +Any computer with the following specifications can be used as a DIY 3Node. + +- Any 64-bit hardware with an Intel or AMD processor chip. +- Server, desktop and mini computer hardware are all compatible. +- A minimum of 500 GB of SSD and a bare minimum of 2 GB of RAM are required. +- A ratio of 100 GB of SSD and 8 GB of RAM per thread is recommended. +- A wired ethernet connection is highly recommended to maximize reliability and the ability to farm TFT. +- A [passmark](https://www.passmark.com/) of 1000 per core is recommended and will be a minimum requirement in the future. + +In this guide, we are using a Dell Optiplex 7020. It makes a perfect, affordable entry-level DIY 3Node, as it can be bought refurbished with the proper ratio of 100 GB of SSD and 8 GB of RAM per thread, without any need for upgrades or updates. + + + +## DIY 3Node Material List + + + +* Any computer meeting the DIY 3Node Computer Requirements stated above +* Ethernet cable +* Router + modem +* Surge protector +* 2x 4 GB USB keys +* Android/iOS phone +* Computer monitor and cable, keyboard and mouse +* Mac/Linux/Windows computer + + + +
+ +# 1. Create a Farm + +You can create a farm with either the ThreeFold Dashboard or the ThreeFold Connect app. + +## Using Dashboard + +The Dashboard section contains all the information required to [create a farm](../../../dashboard/farms/your_farms.md). + +## Using TF Connect App + +You can [create a ThreeFold farm](../../../threefold_token/storing_tft/tf_connect_app.md) with the ThreeFold Connect App. + + +# 2. Create a Zero-OS Bootstrap Image + +## Download the Zero-OS Bootstrap Image + +We will now learn how to create a Zero-OS bootstrap image in order to boot a DIY 3Node. + +Go to the [ThreeFold Zero-OS Bootstrap page](https://v3.bootstrap.grid.tf) as shown below. + +![Farming_Create_Farm_21](./img/farming_createfarm_21.png) + +This is the Zero-OS v3 Bootstrapping page. + +![Farming_Create_Farm_22](./img/farming_createfarm_22.png) + +Enter your farm ID and choose production mode. + +![Farming_Create_Farm_23](./img/farming_createfarm_23.png) + +Download the bootstrap image. Next, we will burn the bootstrap image. + + + +
+ +## Burn the Zero-OS Bootstrap Image + + + +For **Mac**, **Linux** and **Windows**, you can use [BalenaEtcher](https://www.balena.io/etcher/) to load/flash the image on a USB stick. This program also formats the USB key in the process. This will work with the option **EFI IMG** for UEFI boot, and with the option **USB** for BIOS boot. Simply follow the steps presented to you and make sure you select the correct bootstrap image file you downloaded previously. + +General Steps: + +1. Download BalenaEtcher at [https://balena.io/etcher](https://balena.io/etcher) + +![3node_diy_desktop_42.png](img/3node_diy_desktop_42.png) + +![3node_diy_desktop_43.png](img/3node_diy_desktop_43.png) + +![3node_diy_desktop_44.png](img/3node_diy_desktop_44.png) + +![3node_diy_desktop_45.png](img/3node_diy_desktop_45.png) + +![3node_diy_desktop_48.png](img/3node_diy_desktop_48.png) + +![3node_diy_desktop_49.png](img/3node_diy_desktop_49.png) + +2. Open BalenaEtcher + +![3node_diy_desktop_50.png](img/3node_diy_desktop_50.png) + +3. Select **Flash from file** + +![3node_diy_desktop_52.png](img/3node_diy_desktop_52.png) + +4. Find and select the bootstrap image on your computer + +5. Select **Target** (your USB key) + +![3node_diy_desktop_53.png](img/3node_diy_desktop_53.png) + +![3node_diy_desktop_54.png](img/3node_diy_desktop_54.png) + +6. Select **Flash** + +![3node_diy_desktop_55.png](img/3node_diy_desktop_55.png) + +![3node_diy_desktop_56.png](img/3node_diy_desktop_56.png) + + +That's it. You now have a Zero-OS bootstrap image on a bootable removable media device. + + + +
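On Linux, the bootstrap image can also be written from the command line with `dd` as an alternative to BalenaEtcher. The sketch below is a safe demonstration that writes one scratch file onto another; on real hardware the `of=` target would be your actual USB key device (for example `/dev/sdX`, confirmed with `lsblk` first), and `dd` overwrites that target completely, so double-check the device name.

```shell
# Safe demo of flashing with dd: scratch files stand in for the image and the USB key.
truncate -s 4M zos-bootstrap.img   # stand-in for the downloaded bootstrap image
truncate -s 8M usb-key.img         # stand-in for the USB key (on real hardware: /dev/sdX)

# Write the image to the target; conv=fsync flushes the data before dd exits.
dd if=zos-bootstrap.img of=usb-key.img bs=4M conv=fsync status=none

# Verify the first 4 MiB of the target match the image byte for byte.
cmp -n $((4 * 1024 * 1024)) zos-bootstrap.img usb-key.img && echo "image written correctly"

rm zos-bootstrap.img usb-key.img
```

On a real USB key you would run the `dd` and `cmp` steps with `sudo` and the device path, then safely eject the key before booting the 3Node from it.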
+ +# 3. Set the Hardware + +Setting the hardware of this DIY 3Node is very easy as there are no updates or upgrades needed. Simply unbox the computer and plug everything in. + +![3node_diy_desktop_40.png](img/3node_diy_desktop_40.jpeg) + +![3node_diy_desktop_38.png](img/3node_diy_desktop_38.jpeg) + +![3node_diy_desktop_30.png](img/3node_diy_desktop_30.jpeg) + +![3node_diy_desktop_29.png](img/3node_diy_desktop_29.jpeg) + +Plug the computer cable into the surge protector. + +![3node_diy_desktop_6.png](img/3node_diy_desktop_6.png) + +Connect the computer cable, the ethernet cable, the mouse and keyboard cables and the monitor cable. + +![3node_diy_desktop_13.png](img/3node_diy_desktop_13.jpeg) + +Plug the ethernet cable into the router (or the switch). + +![3node_diy_desktop_6.png](img/3node_diy_desktop_3.png) + + + +
+ +# 4. Wipe All the Disks + +In this section, we will learn how to create a Linux bootstrap image, boot it in Try mode and then wipe all the disks in your 3Node. To create a Linux bootstrap image, follow the same process as when we burned the Zero-OS bootstrap image. + + + +## 1. Create a Linux Bootstrap Image + + + +1. Download the Ubuntu 20.04 ISO file [here](https://releases.ubuntu.com/20.04/) +2. Burn the ISO image on a USB key with BalenaEtcher + + + +## 2. Boot Linux in Try Mode + + + +1. Insert your Linux bootstrap image USB key in your computer and boot it +2. During boot, press F12 to enter the boot menu +3. Select your booting device, here: *UEFI: USB DISK 2.0* + +![3node_diy_desktop_107.png](img/3node_diy_desktop_107.jpeg) + +4. Select Try or install Ubuntu + +![3node_diy_desktop_106.png](img/3node_diy_desktop_106.jpeg) + +5. Select Try Ubuntu + +![3node_diy_desktop_105.png](img/3node_diy_desktop_105.jpeg) + + + +## 3. Use wipefs to Wipe All Disks + + + +Once Ubuntu is booted, you will land on the main page. + +![3node_diy_desktop_67.png](img/3node_diy_desktop_67.png) + +At the bottom left of the screen, click on Applications. + +![3node_diy_desktop_68.png](img/3node_diy_desktop_68.png) + +In Applications, select Terminal. + +![3node_diy_desktop_69.png](img/3node_diy_desktop_69.png) + +If you don't see it, type terminal in the search box. + +![3node_diy_desktop_70.png](img/3node_diy_desktop_70.png) + +You will land in the Ubuntu Terminal. + +![3node_diy_desktop_71.png](img/3node_diy_desktop_71.png) + +Run the command *lsblk* as shown below. You will then see the disks in your computer. You want to wipe the main disk, but not the USB key we are using, named *sdb* here. We can see that the SSD disk, *sda*, has 3 partitions: *sda1*, *sda2*, *sda3*. Note that after wiping, the disk should have no partitions. + +In this case, the disk we want to wipe is *sda*.
+ +![3node_diy_desktop_72.png](img/3node_diy_desktop_72.png) + +Run the command *sudo wipefs -a /dev/sda*. This will wipe the disk *sda*. + +![3node_diy_desktop_73.png](img/3node_diy_desktop_73.png) + +Run *lsblk* once more: you should see that your SSD disk has no more partitions. The disk has been properly wiped. + +![3node_diy_desktop_74.png](img/3node_diy_desktop_74.png) + +Power off the computer by clicking on the button at the top right of the screen and selecting *Power Off*, as shown below. + +![3node_diy_desktop_75.png](img/3node_diy_desktop_75.png) + +That's it! The disks are all wiped. All that is left now is to set the BIOS/UEFI settings and then boot the 3Node! + + + +
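The wipe procedure above can be rehearsed safely before touching real hardware. The sketch below runs `wipefs` against a scratch image file instead of a physical disk (the filename `scratch.img` is just an example); on the 3Node itself the target would be the real device, e.g. `/dev/sda`, identified with `lsblk` first.

```shell
# Rehearse the wipe on a scratch file so no real disk is at risk.
truncate -s 64M scratch.img        # fake "disk" (on the 3Node: /dev/sda)
mkfs.ext4 -q -F scratch.img        # give it a filesystem signature, like a used SSD

wipefs scratch.img                 # lists the ext4 signature it finds
wipefs -a scratch.img              # removes all signatures, as done on the 3Node
wipefs scratch.img                 # prints nothing: the "disk" is clean

rm scratch.img
```

The same two commands, `wipefs <device>` to inspect and `wipefs -a <device>` to erase, are what the guide runs (with `sudo`) on the real SSD.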
+ +# 5. Set the BIOS/UEFI + +Before booting the main operating system, in our case Zero-OS, a computer boots in either BIOS or UEFI mode. Older systems use BIOS and newer systems use UEFI. Both BIOS and UEFI are low-level software that mediate between the hardware and the main OS of the computer. Note that BIOS is also called Legacy BIOS. + +## The Essential Features of BIOS/UEFI for a 3Node + + + +There are certain things that you should make sure are set properly on your 3Node. + +As general advice, you can Load Defaults (Settings) on your BIOS, then make sure the options below are set properly. + +* Choose the correct combination of BIOS/UEFI and bootstrap image on [https://bootstrap.grid.tf/](https://bootstrap.grid.tf/) + * Newer systems use UEFI --> the Dell Optiplex 7020 uses UEFI + * Bootstrap image: *EFI IMG* and *EFI FILE* + * Older systems use Legacy BIOS + * Bootstrap image: *ISO* and *USB* +* Set *Multi-Processor* and *Hyperthreading* to Enabled + * Sometimes, these are labeled *Virtual Cores* or *Logical Cores*. +* Set *Virtualization* to Enabled + * On Intel, it is denoted as *CPU virtualization* and on ASUS, it is denoted as *SVM*. + * Make sure virtualization is enabled and look for the precise terms in your specific BIOS/UEFI. +* Enable *Network Stack* (sometimes called *Network Boot*) +* Set *AC Recovery* to *Last Power State* + * This will make sure your 3Node restarts after losing power momentarily. +* Select the proper *Boot Sequence* for the 3Node to boot Zero-OS from your bootstrap image + * e.g., if you have a USB key as a bootstrap image, select it in *Boot Sequence* +* Set *Server Lookup Method* (or the equivalent) to *DNS*. + * Only use a static IP if you know what you are doing. +* Set *Client Address Method* (or the equivalent) to *DHCP*. Only use a static IP if you know what you are doing. + * Your router will automatically assign a dynamic IP address to your 3Node when it connects to the internet. +* Leave *Secure Boot* disabled + * Enable it only if you know what you are doing. + + + +
+ +## Set the BIOS/UEFI on a Dell Optiplex 7020 + + + +1. Insert your Zero-OS bootstrap image USB key in your computer and boot it. +2. During boot, press F12 to enter the boot menu, then choose *BIOS Setup*. + +![3node_diy_desktop_104.jpeg](img/3node_diy_desktop_109.png) + +3. In BIOS Setup, click on Load Default and confirm by clicking on *OK*. + +![3node_diy_desktop_115.png](img/3node_diy_desktop_115.png) + +4. Leave the BIOS Setup (Exit) and re-enter. This applies the default settings. + +5. Go through each page and make sure you follow the guidelines from the Essential Features section above, as shown in the following pictures. + +![3node_diy_desktop_.png](img/3node_diy_desktop_116.png) + +![3node_diy_desktop_.png](img/3node_diy_desktop_117.png) + +![3node_diy_desktop_.png](img/3node_diy_desktop_118.png) + +![3node_diy_desktop_.png](img/3node_diy_desktop_114.png) + +![3node_diy_desktop_.png](img/3node_diy_desktop_127.png) + +![3node_diy_desktop_.png](img/3node_diy_desktop_120.png) + +![3node_diy_desktop_.png](img/3node_diy_desktop_128.png) + +![3node_diy_desktop_.png](img/3node_diy_desktop_122.png) + +![3node_diy_desktop_.png](img/3node_diy_desktop_123.png) + +![3node_diy_desktop_.png](img/3node_diy_desktop_129.png) + +![3node_diy_desktop_.png](img/3node_diy_desktop_125.png) + + +6. Once you are done, click on *Exit* and then click *Yes* to save your changes. The 3Node will now boot Zero-OS. + +![3node_diy_desktop_126.png](img/3node_diy_desktop_126.png) + + + +
+ +# 6. Boot the 3Node + +If your BIOS/UEFI settings are set properly and you have the Zero-OS bootstrap image USB key plugged in, your 3Node should automatically boot Zero-OS every time it is turned on. + +1. Power on the 3Node with the Zero-OS bootstrap image USB key +2. Let the 3Node load Zero-OS + * The first time it boots, the 3Node will register itself on the TF Grid +3. Verify the 3Node's status on the ThreeFold Explorer + +The first time you boot a 3Node, the screen will display: “This node is not registered (farmer: NameOfFarm)”. This is normal. The Grid will create a node ID and you will be able to see it on screen. This can take a couple of minutes. + +This is the final screen you should see when your 3Node is connected to the ThreeFold Grid. Note that it is normal to see *no public config* next to *PUB*, as we did not set any public IP address. + +Naturally, your node ID as well as your farm ID and name will be shown. + +![3node_diy_desktop_76.png](img/3node_diy_desktop_130.png) + +Once you have your node ID, you can also go on the ThreeFold Dashboard to see your 3Node and verify that it is online. + + + +
+ +## Check the Node Status + +You can use the [Node Finder](../../../dashboard/deploy/node_finder.md) on the [TF Dashboard](https://dashboard.grid.tf/) to verify that the node is online. + +Enter your node ID and click **Apply**. + +## Farming Rewards Distribution + + + +The farming reward will be sent once per month directly to your ThreeFold Connect app wallet. Farming rewards are usually distributed around the 5th of each month. + + + +# Additional Information + +Congratulations, you have now built your first ThreeFold 3Node server! + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](https://forum.threefold.io/) or on the [ThreeFold Telegram Farmer Group](https://t.me/threefoldfarmers). \ No newline at end of file diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_104.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_104.jpeg new file mode 100644 index 0000000..80e850c Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_104.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_105.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_105.jpeg new file mode 100644 index 0000000..088454a Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_105.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_106.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_106.jpeg new file mode 100644 index 0000000..a24ff11 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_106.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_107.jpeg 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_107.jpeg new file mode 100644 index 0000000..8ab6fdb Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_107.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_108.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_108.png new file mode 100644 index 0000000..b426984 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_108.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_109.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_109.png new file mode 100644 index 0000000..45841c5 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_109.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_110.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_110.png new file mode 100644 index 0000000..71377f1 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_110.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_111.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_111.png new file mode 100644 index 0000000..7b5eeb4 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_111.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_112.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_112.png new file mode 100644 index 0000000..ef5fcb3 Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_112.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_114.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_114.png new file mode 100644 index 0000000..8ce20bd Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_114.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_115.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_115.png new file mode 100644 index 0000000..ca2260e Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_115.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_116.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_116.png new file mode 100644 index 0000000..65cf981 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_116.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_117.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_117.png new file mode 100644 index 0000000..8c67944 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_117.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_118.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_118.png new file mode 100644 index 0000000..726b43c Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_118.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_119.png 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_119.png new file mode 100644 index 0000000..2dff9b7 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_119.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_120.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_120.png new file mode 100644 index 0000000..507141d Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_120.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_121.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_121.png new file mode 100644 index 0000000..53775ac Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_121.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_122.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_122.png new file mode 100644 index 0000000..8a2fbab Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_122.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_123.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_123.png new file mode 100644 index 0000000..77c28e2 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_123.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_124.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_124.png new file mode 100644 index 0000000..a4b7660 Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_124.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_125.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_125.png new file mode 100644 index 0000000..d23c25a Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_125.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_126.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_126.png new file mode 100644 index 0000000..c35857e Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_126.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_127.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_127.png new file mode 100644 index 0000000..1a0249c Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_127.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_128.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_128.png new file mode 100644 index 0000000..2e0c8c7 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_128.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_129.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_129.png new file mode 100644 index 0000000..734aa84 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_129.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_13.jpeg 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_13.jpeg new file mode 100644 index 0000000..94ffab0 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_13.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_130.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_130.png new file mode 100644 index 0000000..0630b55 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_130.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_23.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_23.jpeg new file mode 100644 index 0000000..b4a953a Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_23.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_25.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_25.jpeg new file mode 100644 index 0000000..66fa069 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_25.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_26.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_26.jpeg new file mode 100644 index 0000000..f773e8f Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_26.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_27.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_27.jpeg new file mode 100644 index 0000000..e8eaf53 Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_27.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_28.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_28.jpeg new file mode 100644 index 0000000..8333b2b Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_28.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_29.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_29.jpeg new file mode 100644 index 0000000..b86b9af Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_29.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_3.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_3.jpeg new file mode 100644 index 0000000..30456be Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_3.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_3.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_3.png new file mode 100644 index 0000000..7442f5d Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_3.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_30.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_30.jpeg new file mode 100644 index 0000000..622d1f9 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_30.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_31.jpeg 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_31.jpeg new file mode 100644 index 0000000..2ffe2de Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_31.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_38.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_38.jpeg new file mode 100644 index 0000000..a818dd6 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_38.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_40.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_40.jpeg new file mode 100644 index 0000000..3093e7e Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_40.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_42.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_42.png new file mode 100644 index 0000000..e7336f6 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_42.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_43.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_43.png new file mode 100644 index 0000000..1c139d8 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_43.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_44.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_44.png new file mode 100644 index 0000000..dae09b3 Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_44.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_45.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_45.png new file mode 100644 index 0000000..668e6eb Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_45.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_48.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_48.png new file mode 100644 index 0000000..b789d93 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_48.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_49.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_49.png new file mode 100644 index 0000000..057fbc9 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_49.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_50.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_50.png new file mode 100644 index 0000000..17b0329 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_50.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_52.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_52.png new file mode 100644 index 0000000..b19d799 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_52.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_53.png 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_53.png new file mode 100644 index 0000000..600c1ec Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_53.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_54.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_54.png new file mode 100644 index 0000000..56a5237 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_54.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_55.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_55.png new file mode 100644 index 0000000..81da31e Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_55.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_56.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_56.png new file mode 100644 index 0000000..a2590e7 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_56.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_6.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_6.jpeg new file mode 100644 index 0000000..8bd003e Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_6.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_6.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_6.png new file mode 100644 index 0000000..d893e54 Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_6.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_67.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_67.png new file mode 100644 index 0000000..46ae916 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_67.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_68.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_68.png new file mode 100644 index 0000000..0842c19 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_68.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_69.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_69.png new file mode 100644 index 0000000..2d2b6b9 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_69.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_70.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_70.png new file mode 100644 index 0000000..192a5d7 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_70.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_71.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_71.png new file mode 100644 index 0000000..7609bb2 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_71.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_72.png 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_72.png new file mode 100644 index 0000000..14efc8b Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_72.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_73.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_73.png new file mode 100644 index 0000000..d645b73 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_73.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_74.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_74.png new file mode 100644 index 0000000..439fdb7 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_74.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_75.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_75.png new file mode 100644 index 0000000..7d7c8d2 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_75.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_76.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_76.jpeg new file mode 100644 index 0000000..84cf1b0 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_76.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_76.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_76.png new file mode 100644 index 0000000..263ddf7 Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_76.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_77.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_77.jpeg new file mode 100644 index 0000000..25ed402 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_77.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_78.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_78.jpeg new file mode 100644 index 0000000..0c61dc3 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_78.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_80.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_80.jpeg new file mode 100644 index 0000000..9658fb5 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_80.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_81.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_81.jpeg new file mode 100644 index 0000000..9e69a2c Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_81.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_83.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_83.jpeg new file mode 100644 index 0000000..b98ed0b Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_83.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_86.jpeg 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_86.jpeg new file mode 100644 index 0000000..c3a77b6 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_86.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_88.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_88.jpeg new file mode 100644 index 0000000..0ed0169 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_88.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_89.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_89.jpeg new file mode 100644 index 0000000..8711387 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_89.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_90.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_90.jpeg new file mode 100644 index 0000000..41e9d80 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_90.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_91.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_91.jpeg new file mode 100644 index 0000000..836ffae Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_91.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_92.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_92.jpeg new file mode 100644 index 0000000..a671268 Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_92.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_94.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_94.jpeg new file mode 100644 index 0000000..1cf820e Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_94.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_95.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_95.jpeg new file mode 100644 index 0000000..0e8fcb2 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_95.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_96.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_96.jpeg new file mode 100644 index 0000000..71f9646 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_96.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_97.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_97.jpeg new file mode 100644 index 0000000..782eaea Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_97.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_98.jpeg b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_98.jpeg new file mode 100644 index 0000000..187342c Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/3node_diy_desktop_98.jpeg differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_001.png 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_001.png new file mode 100644 index 0000000..b666fce Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_001.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_002.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_002.png new file mode 100644 index 0000000..415a4b3 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_002.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_003.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_003.png new file mode 100644 index 0000000..47d3e2f Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_003.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_004.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_004.png new file mode 100644 index 0000000..0e1aa75 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_004.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_005.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_005.png new file mode 100644 index 0000000..6acfa59 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_005.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_006.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_006.png new file mode 100644 index 0000000..f874fcf Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_006.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_25.png 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_25.png new file mode 100644 index 0000000..d85e2bf Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_25.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_26.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_26.png new file mode 100644 index 0000000..3a22175 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_26.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_27.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_27.png new file mode 100644 index 0000000..dd04e45 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_27.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_28.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_28.png new file mode 100644 index 0000000..36f2eda Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_28.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_29.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_29.png new file mode 100644 index 0000000..e7d6d40 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_29.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_30.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_30.png new file mode 100644 index 0000000..d810d13 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_30.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_1.png 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_1.png new file mode 100644 index 0000000..95545d8 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_1.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_10.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_10.png new file mode 100644 index 0000000..deec259 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_10.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_11.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_11.png new file mode 100644 index 0000000..ceb60ae Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_11.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_12.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_12.png new file mode 100644 index 0000000..70e8053 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_12.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_13.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_13.png new file mode 100644 index 0000000..1016cb8 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_13.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_14.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_14.png new file mode 100644 index 0000000..7468258 Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_14.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_15.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_15.png new file mode 100644 index 0000000..0a34bd0 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_15.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_16.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_16.png new file mode 100644 index 0000000..1e32275 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_16.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_17.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_17.png new file mode 100644 index 0000000..0ee82ea Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_17.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_18.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_18.png new file mode 100644 index 0000000..b3f4386 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_18.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_19.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_19.png new file mode 100644 index 0000000..74d7963 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_19.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_2.png 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_2.png new file mode 100644 index 0000000..488f970 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_2.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_20.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_20.png new file mode 100644 index 0000000..05a80e8 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_20.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_21.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_21.png new file mode 100644 index 0000000..ab36d45 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_21.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_22.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_22.png new file mode 100644 index 0000000..7cfc6ba Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_22.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_23.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_23.png new file mode 100644 index 0000000..3537771 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_23.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_24.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_24.png new file mode 100644 index 0000000..4b2da96 Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_24.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_3.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_3.png new file mode 100644 index 0000000..349ec17 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_3.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_4.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_4.png new file mode 100644 index 0000000..b717bfe Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_4.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_5.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_5.png new file mode 100644 index 0000000..a022a1f Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_5.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_6.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_6.png new file mode 100644 index 0000000..922b77b Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_6.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_7.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_7.png new file mode 100644 index 0000000..e4bdb6e Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_7.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_8.png 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_8.png new file mode 100644 index 0000000..e8d023c Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_8.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_9.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_9.png new file mode 100644 index 0000000..b6e852a Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_createfarm_9.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_1.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_1.png new file mode 100644 index 0000000..f1b4b1e Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_1.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_10.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_10.png new file mode 100644 index 0000000..e8c547b Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_10.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_11.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_11.png new file mode 100644 index 0000000..d0e58bc Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_11.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_12.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_12.png new file mode 100644 index 0000000..980e33f Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_12.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_13.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_13.png new file mode 100644 index 0000000..c4956c5 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_13.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_14.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_14.png new file mode 100644 index 0000000..f251968 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_14.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_15.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_15.png new file mode 100644 index 0000000..efd806e Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_15.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_16.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_16.png new file mode 100644 index 0000000..72d1994 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_16.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_17.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_17.png new file mode 100644 index 0000000..cc9ba7f Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_17.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_18.png 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_18.png new file mode 100644 index 0000000..8fe5d2f Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_18.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_19.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_19.png new file mode 100644 index 0000000..6c6796c Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_19.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_2.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_2.png new file mode 100644 index 0000000..eccb743 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_2.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_20.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_20.png new file mode 100644 index 0000000..da5c0ca Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_20.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_21.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_21.png new file mode 100644 index 0000000..27abf0c Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_21.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_22.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_22.png new file mode 100644 index 0000000..adde6b0 Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_22.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_23.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_23.png new file mode 100644 index 0000000..aabf0a0 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_23.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_24.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_24.png new file mode 100644 index 0000000..9f09996 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_24.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_25.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_25.png new file mode 100644 index 0000000..c37a89c Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_25.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_26.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_26.png new file mode 100644 index 0000000..5cb2f75 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_26.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_27.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_27.png new file mode 100644 index 0000000..c790ae0 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_27.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_28.png 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_28.png new file mode 100644 index 0000000..2648b65 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_28.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_29.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_29.png new file mode 100644 index 0000000..52a66cd Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_29.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_3.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_3.png new file mode 100644 index 0000000..f0b992c Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_3.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_30.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_30.png new file mode 100644 index 0000000..2e30308 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_30.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_31.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_31.png new file mode 100644 index 0000000..ff1b9ac Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_31.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_32.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_32.png new file mode 100644 index 0000000..ab617bf Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_32.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_33.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_33.png new file mode 100644 index 0000000..3c6d591 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_33.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_34.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_34.png new file mode 100644 index 0000000..944688f Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_34.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_35.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_35.png new file mode 100644 index 0000000..845bd35 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_35.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_36.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_36.png new file mode 100644 index 0000000..b294bcb Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_36.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_37.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_37.png new file mode 100644 index 0000000..c61e06e Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_37.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_38.png 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_38.png new file mode 100644 index 0000000..732f98a Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_38.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_39.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_39.png new file mode 100644 index 0000000..0c78bc0 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_39.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_4.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_4.png new file mode 100644 index 0000000..5047a5c Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_4.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_40.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_40.png new file mode 100644 index 0000000..6651627 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_40.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_41.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_41.png new file mode 100644 index 0000000..839e929 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_41.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_42.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_42.png new file mode 100644 index 0000000..5f84480 Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_42.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_43.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_43.png new file mode 100644 index 0000000..ba1e751 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_43.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_44.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_44.png new file mode 100644 index 0000000..4a10071 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_44.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_45.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_45.png new file mode 100644 index 0000000..0d6e86d Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_45.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_46.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_46.png new file mode 100644 index 0000000..2039941 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_46.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_47.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_47.png new file mode 100644 index 0000000..71ea72e Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_47.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_48.png 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_48.png new file mode 100644 index 0000000..fd357c7 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_48.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_49.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_49.png new file mode 100644 index 0000000..0f82c97 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_49.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_5.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_5.png new file mode 100644 index 0000000..6c960d9 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_5.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_50.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_50.png new file mode 100644 index 0000000..0ecbc06 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_50.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_51.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_51.png new file mode 100644 index 0000000..86ecf04 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_51.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_52.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_52.png new file mode 100644 index 0000000..9d2b5a4 Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_52.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_53.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_53.png new file mode 100644 index 0000000..141c24d Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_53.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_54.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_54.png new file mode 100644 index 0000000..2c97d1b Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_54.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_55.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_55.png new file mode 100644 index 0000000..2d7fe0e Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_55.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_56.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_56.png new file mode 100644 index 0000000..e090e08 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_56.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_57.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_57.png new file mode 100644 index 0000000..b9289a4 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_57.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_58.png 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_58.png new file mode 100644 index 0000000..a78d6d8 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_58.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_59.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_59.png new file mode 100644 index 0000000..4f6e7f2 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_59.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_6.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_6.png new file mode 100644 index 0000000..1ccab0c Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_6.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_7.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_7.png new file mode 100644 index 0000000..be8eea7 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_7.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_8.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_8.png new file mode 100644 index 0000000..1239f37 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_8.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_9.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_9.png new file mode 100644 index 0000000..4a6acad Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_tf_wallet_9.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_10.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_10.png new file mode 100644 index 0000000..7ae95fc Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_10.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_11.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_11.png new file mode 100644 index 0000000..6ce3313 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_11.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_12.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_12.png new file mode 100644 index 0000000..b26559e Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_12.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_13.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_13.png new file mode 100644 index 0000000..9486799 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_13.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_5.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_5.png new file mode 100644 index 0000000..e5fc90b Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_5.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_7.png 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_7.png new file mode 100644 index 0000000..ce1f1ba Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_7.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_8.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_8.png new file mode 100644 index 0000000..b8039d9 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_8.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_9.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_9.png new file mode 100644 index 0000000..882bcf7 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/farming_wallet_9.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/old_backup/farming_createfarm_12.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/old_backup/farming_createfarm_12.png new file mode 100644 index 0000000..0bcdb35 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/old_backup/farming_createfarm_12.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/old_backup/farming_createfarm_13.png b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/old_backup/farming_createfarm_13.png new file mode 100644 index 0000000..477500e Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/old_backup/farming_createfarm_13.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/img/readme.md b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/readme.md new file mode 100644 index 0000000..6f45a2d --- /dev/null +++ 
b/collections/farmers/complete_diy_guides/3node_diy_desktop/img/readme.md @@ -0,0 +1 @@ +# Images of Farming Guide documentation for Threefold Manual 3.0 diff --git a/collections/farmers/complete_diy_guides/3node_diy_desktop/readme.md b/collections/farmers/complete_diy_guides/3node_diy_desktop/readme.md new file mode 100644 index 0000000..362d32f --- /dev/null +++ b/collections/farmers/complete_diy_guides/3node_diy_desktop/readme.md @@ -0,0 +1,19 @@ +# DIY 3node Desktop for the Threefold Manual 3.0 +* The **diy_3node_desktop.md** file + * contains the easiest DIY 3node guide possible + * no upgrades + * no updates + +* The **diy_3node_desktop.pdf** file + * can be used as an offline reference + * can be shared among the Threefold community + +Updates: This DIY Guide will be turned into a short 1 minute video to share. + + +# Threefold Ebooks + +1. FAQ +2. Complete Farming Guide +3. DIY 3node Desktop Computer - Farming Guide +4. DIY 3node Rack Server - Farming Guide \ No newline at end of file diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/3node_diy_rack_server.md b/collections/farmers/complete_diy_guides/3node_diy_rack_server/3node_diy_rack_server.md new file mode 100644 index 0000000..3686915 --- /dev/null +++ b/collections/farmers/complete_diy_guides/3node_diy_rack_server/3node_diy_rack_server.md @@ -0,0 +1,373 @@ +

# Building a DIY 3Node: Rack Server

+ +In the following 3Node DIY guide, you will learn how to turn a Dell server (R620, R720) into a 3Node that farms on the ThreeFold Grid. + +Note that the process is similar for other rack servers. + + 

# Table of Contents

+ + + +- [Setting Up the Hardware](#setting-up-the-hardware) + - [Avoiding Static Discharge](#avoiding-static-discharge) + - [Setting the M.2 NVME SSD Disk with the PCIe Adaptor](#setting-the-m2-nvme-ssd-disk-with-the-pcie-adaptor) + - [Checking the RAM sticks](#checking-the-ram-sticks) + - [General Rules when Installing RAM Sticks](#general-rules-when-installing-ram-sticks) + - [Procedure to Install RAM Sticks](#procedure-to-install-ram-sticks) + - [Installing the SSD Disks](#installing-the-ssd-disks) + - [Plugging the 3node Server](#plugging-the-3node-server) + - [Removing the DVD Optical Drive - Installing a SSD disk in the DVD Optical Drive Slot](#removing-the-dvd-optical-drive---installing-a-ssd-disk-in-the-dvd-optical-drive-slot) + - [Using Onboard Storage - RAID Controller Details](#using-onboard-storage---raid-controller-details) +- [Zero-OS Bootstrap Image](#zero-os-bootstrap-image) + - [Creating a Farm](#creating-a-farm) + - [Using Dashboard](#using-dashboard) + - [Using TF Connect App](#using-tf-connect-app) + - [Wiping All the Disks](#wiping-all-the-disks) + - [Downloading the Zero-OS Bootstrap Image](#downloading-the-zero-os-bootstrap-image) + - [DVD ISO BIOS Image](#dvd-iso-bios-image) + - [USB BIOS Image](#usb-bios-image) +- [BIOS Settings](#bios-settings) + - [Processor Settings](#processor-settings) + - [Boot Settings](#boot-settings) +- [Booting the 3Node](#booting-the-3node) +- [Additional Information](#additional-information) + - [Differences between the R620 and the R720](#differences-between-the-r620-and-the-r720) + - [Different CPUs and RAMs Configurations for 3Node Dell Servers](#different-cpus-and-rams-configurations-for-3node-dell-servers) +- [Closing Words](#closing-words) + +*** + +# Setting Up the Hardware + +![3node_diy_rack_server_1](./img/3node_diy_rack_server_1.png) + +Dell R620 1U server + + +## Avoiding Static Discharge + +![3node_diy_rack_server_2](./img/3node_diy_rack_server_2.png) + + +Some will recommend to wear anti-static 
gloves as shown here. If you don’t have anti-static gloves, remember this: + +> Always touch the metal side of the server before manipulating the hardware. + +Your hands will discharge the static on the outside of the box, which is secure. + +## Setting the M.2 NVME SSD Disk with the PCIe Adaptor + +![3node_diy_rack_server_3](./img/3node_diy_rack_server_3.png) + +Here is one of the two 2TB M.2 NVMe SSDs that we will install on the server. Above the SSD is the PCIe Gen 3 x4 adaptor that we will use to connect the SSD to the server. + +![3node_diy_rack_server_4](./img/3node_diy_rack_server_4.png) + +At the left of the adaptor, you can see a metal piece that can be used to hold the PCIe adaptor and the SSD more firmly. We will remove it for this DIY build. Why? Because it is not necessary: the adaptor alone can hold the weight of the SSD. Also, this metal piece is solid, while the brackets in the server are perforated. Removing it thus ensures better airflow and less heat. + +![3node_diy_rack_server_5](./img/3node_diy_rack_server_5.png) + +We remove the screws with a star screwdriver. + +![3node_diy_rack_server_6](./img/3node_diy_rack_server_6.png) + +![3node_diy_rack_server_7](./img/3node_diy_rack_server_7.png) + +This SSD already has a heatsink. There is no need to use the heatsink included in the PCIe adaptor kit. If you remove the heatsink or the sticker under the SSD, you will void your 5-year warranty. + +![3node_diy_rack_server_8](./img/3node_diy_rack_server_8.png) + +When you put the SSD in the adaptor, make sure the notch in the SSD is aligned with the adaptor's slot. + +![3node_diy_rack_server_9](./img/3node_diy_rack_server_9.png) + +![3node_diy_rack_server_10](./img/3node_diy_rack_server_10.png) + +Fitting in the SSD takes some force. Do not overdo it and take your time! + +![3node_diy_rack_server_11](./img/3node_diy_rack_server_11.png) + +It’s normal for the unscrewed end to lift in the air before you screw the SSD onto the adaptor. 
+ +![3node_diy_rack_server_12](./img/3node_diy_rack_server_12.png) + +To screw the SSD in place, use the screwdriver included in the PCIe adaptor kit. + +![3node_diy_rack_server_13](./img/3node_diy_rack_server_13.png) + +![3node_diy_rack_server_14](./img/3node_diy_rack_server_14.png) + +Now that’s a steady SSD! + +## Checking the RAM sticks + +![3node_diy_rack_server_15](./img/3node_diy_rack_server_15.png) + +It’s now time to get under the hood! Make sure the case lock is in the unlocked position. If you need to turn it to the unlocked position, use a flathead screwdriver or a similar tool. + +![3node_diy_rack_server_16](./img/3node_diy_rack_server_16.png) + +![3node_diy_rack_server_17](./img/3node_diy_rack_server_17.png) + +Lift up the lock and the top server plate should glide to the back. You can then remove the top of the server. + +![3node_diy_rack_server_18](./img/3node_diy_rack_server_18.png) + +Here’s the full story! R620 and all! + +![3node_diy_rack_server_19](./img/3node_diy_rack_server_19.png) + +![3node_diy_rack_server_20](./img/3node_diy_rack_server_20.png) + +![3node_diy_rack_server_21](./img/3node_diy_rack_server_21.png) + +To remove this plastic piece, simply lift with your fingers at the designated spot (follow the blue line!). + +![3node_diy_rack_server_22](./img/3node_diy_rack_server_22.png) + +Here’s the RAM! This R620 came equipped with 256GB of RAM spread across 16x 16GB sticks. If you need to add the RAM sticks yourself, make sure you are doing it correctly. The FAQ covers some basic information on RAM installation. + +![3node_diy_rack_server_23](./img/3node_diy_rack_server_23.png) + +To remove a stick, push on the clips on both sides. You can do it one at a time if you want. Make sure the stick doesn’t pop out and fall onto another component! Once the clips are open, pull out the RAM stick by holding it on the sides. This will ensure it does not get damaged. 
+ +![3node_diy_rack_server_24](./img/3node_diy_rack_server_24.png) + +Here’s the RAM in its purest form! + +![3node_diy_rack_server_25](./img/3node_diy_rack_server_25.png) + +Here you can see that the gap is not in the middle of the RAM stick. You must be careful when inserting the RAM. Make sure the gap is aligned with the key in the RAM slot. + +![3node_diy_rack_server_26](./img/3node_diy_rack_server_26.png) + +When you put a RAM stick in its slot, make sure the plastic holders on the sides are open, then insert the RAM stick. Make sure you align the RAM stick properly. You can then push on one side at a time until the RAM stick clicks in. You can push both sides at once if you are comfortable doing so. + +### General Rules when Installing RAM Sticks + +First, always use RAM sticks of the same size and type. Your motherboard should indicate which slots to populate first. + +As a general guide, there are usually two banks, A and B, each with two memory stick slots. You must then install the RAM sticks in A1 and B1 in order to achieve dual channel, then A2 and B2 if you have more (visual order: A1 A2 B1 B2). + +#### Procedure to Install RAM Sticks + +You want to start with your largest sticks, evenly distributed between both processors, and work your way down to your smallest. + +As an example, let's say you have 2 processors, 4x 16GB sticks and 4x 8GB sticks. The arrangement would be A1-16GB, B1-16GB, A2-16GB, B2-16GB, A3-8GB, B3-8GB, A4-8GB, B4-8GB. + +Avoid odd numbers as well. You optimally want pairs. So if you only have 5x 8GB sticks, only install 4 until you have an even 6. + + +## Installing the SSD Disks + + +![3node_diy_rack_server_27](./img/3node_diy_rack_server_27.png) + +To put back the plastic protector, simply align the plastic piece with the two notches in the metal case. 
+ +![3node_diy_rack_server_28](./img/3node_diy_rack_server_28.png) + +![3node_diy_rack_server_29](./img/3node_diy_rack_server_29.png) + +We will now remove this PCIe riser in order to connect the SSDs. + +![3node_diy_rack_server_30](./img/3node_diy_rack_server_30.png) + +Optional step: put the SSDs and the PCIe riser next to each other so they can talk and break the ice. They will get to know one another before going into the server to farm TFT. + +![3node_diy_rack_server_31](./img/3node_diy_rack_server_31.png) + +Just like with RAM sticks, you want to make sure the adaptor is aligned with the slot. + +![3node_diy_rack_server_32](./img/3node_diy_rack_server_32.png) + +Next, push the adaptor inside the riser’s opening. This takes some force too. If you are well aligned, it should go in with ease. + +![3node_diy_rack_server_33](./img/3node_diy_rack_server_33.png) + +This is what the riser looks like with the two SSDs installed. Now you simply need to put the riser back inside the server. + +![3node_diy_rack_server_34](./img/3node_diy_rack_server_34.png) + +Push down on the riser to insert it properly. + +![3node_diy_rack_server_35](./img/3node_diy_rack_server_35.png) + +Note that the inside of the top plate of the server has helpful pictures showing how to manipulate the hardware. + + + +## Plugging the 3node Server + + + +![3node_diy_rack_server_36](./img/3node_diy_rack_server_36.png) + +Now you will want to plug the power cable into the PSU. Here we show two 495W PSUs. With 256GB of RAM and two NVMe SSDs, it is better to use two 750W PSUs. Note that this server will only use around 100W at idle. There are two power cables for redundancy; the unit does not need more than one to function. + +On a 15A/120V breaker, you can have more than one server. But note that, at full load, this server can use up to 400W. In this case, no more than 3 servers should be plugged into the same breaker. 
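The breaker math above can be sketched as a quick shell calculation. It assumes the common electrical rule of thumb of loading a breaker to at most 80% of its rating for continuous loads; the 80% factor and the 400W full-load figure are assumptions to adjust to your own breaker and servers:

```shell
# Breaker capacity: 15 A x 120 V = 1800 W, derated to 80% for continuous load.
BREAKER_W=$((15 * 120))
SAFE_W=$((BREAKER_W * 80 / 100))       # 1440 W usable
SERVER_FULL_LOAD_W=400                 # assumed worst-case draw of one server
echo $((SAFE_W / SERVER_FULL_LOAD_W))  # prints 3
```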
Make sure you adapt to your current situation (server's power consumption, electric breaker, etc.). + +![3node_diy_rack_server_37](./img/3node_diy_rack_server_37.png) + +Plugging in the power cable is pretty straightforward. Just make sure you have the 3 pins oriented properly! + +![3node_diy_rack_server_38](./img/3node_diy_rack_server_38.png) + +It is highly recommended to plug the power cable into a surge protector. If you have unsteady electricity at your location, it might be good to use a UPS (uninterruptible power supply). A surge protector is essential to prevent power surges from damaging the server. + +![3node_diy_rack_server_39](./img/3node_diy_rack_server_39.png) + +![3node_diy_rack_server_40](./img/3node_diy_rack_server_40.png) + +![3node_diy_rack_server_41](./img/3node_diy_rack_server_41.png) + +Before starting the server, you can plug in the monitor and the keyboard as well as the ethernet cable. Make sure you plug the ethernet cable into one of the four NIC ports. + +![3node_diy_rack_server_42](./img/3node_diy_rack_server_42.png) + +Now, power it on! + +![3node_diy_rack_server_43](./img/3node_diy_rack_server_43.png) + +The server is booting. + + +## Removing the DVD Optical Drive - Installing a SSD disk in the DVD Optical Drive Slot + + +![3node_diy_rack_server_44](./img/3node_diy_rack_server_44.png) + +![3node_diy_rack_server_45](./img/3node_diy_rack_server_45.png) + +If you want to change the DVD optical drive, push where indicated and remove the power and SATA cables. + +It is possible to install an SSD disk in there. To do so, use a **9.5mm** SATA CD/DVD hard drive caddy and put in a SATA III 2.5" disk. The caddy is not strictly necessary: you could simply remove the standard CD/DVD caddy and plug in the SATA disk. + +The hardware part is done. Next, you will want to set the BIOS properly as well as get the bootstrap image of Zero-OS. Before we get into this, let's go over some information on using the onboard storage of your 3node server. 
+ + +## Using Onboard Storage - RAID Controller Details + + +If you want to use the onboard storage on your server, you will probably need to flash the RAID card or make some adjustments in order for Zero-OS to recognize your disks. + +You can use the onboard storage on a server without RAID. You can [re-flash](https://fohdeesha.com/docs/perc.html) the RAID card, turn on HBA/non-RAID mode, or install a different card. It's usually easy to set servers such as an HP ProLiant to HBA mode. + +For Dell servers, you can either cross-flash the RAID controller with an “IT-mode” firmware (see this [video](https://www.youtube.com/watch?v=h5nb09VksYw)) or get a Dell H310 controller (which has the non-RAID option). Otherwise, as shown in this guide, you can install an NVMe SSD with a PCIe adaptor and turn off the RAID controller. + +Note that for the Dell R610 and R710, you can re-flash the RAID card. For the R910, you can’t re-flash the card. In this case, you will need to get an LSI Dell card. + +# Zero-OS Bootstrap Image + +With R620 and R720 Dell servers, UEFI does not work well. You will want to use either a DVD or a USB key in BIOS mode. + +Go to https://bootstrap.grid.tf/ and download the appropriate image: option **ISO** for the DVD and option **USB** for BIOS USB (not UEFI). + +Enter your farm ID and make sure you select production mode. + +## Creating a Farm + +You can create a farm with either the ThreeFold Dashboard or the ThreeFold Connect app. + +### Using Dashboard + +The Dashboard section contains all the information required to [create a farm](../../../dashboard/farms/your_farms.md). + +### Using TF Connect App + +You can [create a ThreeFold farm](../../../threefold_token/storing_tft/tf_connect_app.md) with the ThreeFold Connect App. + +## Wiping All the Disks + +You might need to wipe your disks if they are not brand new. To wipe your disks, read the section [Wipe All the Disks](../../3node_building/4_wipe_all_disks.md) of the ThreeFold Farming Documentation. 
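On a Linux machine, the wipe usually comes down to removing all filesystem and RAID signatures from each disk with `wipefs` (part of util-linux); refer to the linked section for the exact procedure. The sketch below demonstrates the idea on a disk image file for safety. On real disks you would target the device itself (e.g. the hypothetical /dev/sdX, after double-checking with `lsblk`), which destroys all data on it:

```shell
# Safe demonstration on an image file; substitute your real device
# (e.g. /dev/sdX) when wiping actual disks -- this is destructive!
truncate -s 1M demo-disk.img
mkswap demo-disk.img >/dev/null 2>&1   # stand-in: give the "disk" a signature
wipefs demo-disk.img                   # lists the signatures found
wipefs -a demo-disk.img                # erases all known signatures
wipefs demo-disk.img                   # prints nothing: the disk is clean
```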
+ +## Downloading the Zero-OS Bootstrap Image + +You can then download the [Zero-OS bootstrap image](https://v3.bootstrap.grid.tf) for your farm. + +![3node_diy_rack_server_46](./img/3node_diy_rack_server_46.png) + +![3node_diy_rack_server_47](./img/3node_diy_rack_server_47.png) + +Use the ISO image for DVD boot and the USB image for USB BIOS boot (not UEFI). We use the farm ID 1 here as an example. Use your own farm ID. + +### DVD ISO BIOS Image +For the ISO image, download the file and burn it to a DVD. + +### USB BIOS Image +Note: the USB key must be formatted before burning the Zero-OS bootstrap image. + +For Windows, macOS and Linux, you can use [balenaEtcher](https://www.balena.io/etcher/), a free and open-source application that will let you write a bootstrap image to a USB key, while also formatting the USB key at the same time. + +This is the **easiest way** to burn your Zero-OS bootstrap image. All the steps are clearly explained within the software. + +For Windows, you can also use Rufus. + +For the USB image, with Linux, you can also use the terminal and write: + +> dd status=progress if=FILELOCATION.ISO(or .IMG) of=/dev/sdX + +Here the X indicates that you must adjust the device name according to your disk. To see your disks, run **lsblk** in the terminal. Make sure you select the proper disk. + + +# BIOS Settings + +Before starting the server, plug in the USB key with the bootstrap image. You can also insert the DVD once the server is on. + +When you start the server, press F2 to get into System Setup. + +Then, select System BIOS. In System BIOS settings, select Processor Settings. + +Note: More details on BIOS settings are available in this [documentation](../../3node_building/5_set_bios_uefi.md). + +## Processor Settings + +Make sure you have enabled the Logical Processor (called Hyper-Threading on HP servers). This turns 8 cores into 16 virtual cores. You can set QPI Speed to Maximum data rate. Make sure you set Number of Cores per Processor to All. 
You can adjust the Processor Core Speed and Processor Bus Speed for specific uses. + +It is also good to take a look at the Processors section and make sure the hardware is listed correctly. + +## Boot Settings + +Go to System BIOS Settings and select Boot Settings. In Boot Settings, choose BIOS and not UEFI as the Boot Mode. You need to save your preferences and come back to select BIOS Boot Settings. + +Once back in BIOS Boot Settings, go to Boot Sequence. Depending on your Zero-OS bootstrap image, select either the USB key or the Optical Drive CD-DVD option. The USB key may appear as Drive C or another name, depending on the port used and your server model. + +You can also disable the boot options that are not needed. It can be good to have both a DVD and a USB key with the bootstrap images for redundancy. If one boot fails, the server will try the other options in the boot sequence. This can be done with 2 USB keys too. + +With Boot Sequence Retry enabled, the server simply retries the boot sequence if the previous attempt failed. + + +That's it. You've set the BIOS settings properly and now it's time to boot the 3Node and connect to the ThreeFold Grid. + +You can then save your preferences and exit. Your server should restart and load the bootstrap image. + +# Booting the 3Node + +Once you've set the BIOS settings and restarted the server, it will download the Zero-OS bootstrap image. This takes a couple of minutes. + +The first time you boot a 3Node, the screen will display: “This node is not registered (farmer: NameOfFarm)”. This is normal. The Grid will create a node ID and you will be able to see it on screen. This can take a couple of minutes. + +Once you have your node ID, you can also go to the ThreeFold Explorer to see your 3Node and verify that the connection is recognized by the Explorer. + +# Additional Information +## Differences between the R620 and the R720 + +Note that the main difference between the R620 and the R720 is that the former is a 1U server and the latter a 2U.
2U servers are usually less noisy and generate less heat than 1U servers since they have a greater volume. In the R720, the fans are bigger and thus quieter. This can be an important factor to consider. Both offer great performance and work well with Zero-OS. + +## Different CPU and RAM Configurations for 3Node Dell Servers + +Different CPU and RAM configurations are possible for the Dell R620/R720 servers. + +For example, you could replace the E5-2640 v2 CPUs with the E5-2695 v2. This would give you 48 threads. You could then go with 12x32GB DDR3 LRDIMM. You would also need 5TB of SSD in total to keep the proper ratio, which is 100GB of SSD and 8GB of RAM per virtual core (also called a thread or logical core). + +Note that you cannot have more than 16 sticks of ECC DIMM on the R620/R720. For more sticks, you need LRDIMM, as stated above. + +# Closing Words +That's it. You have now built a DIY 3Node and you are farming on the ThreeFold Grid. + +If you encounter errors, you can read the section [Troubleshooting and Error Messages](../../../faq/faq.md#troubleshooting-and-error-messages) of the Farmer FAQ. + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](https://forum.threefold.io/) or on the [ThreeFold Telegram Farmer Group](https://t.me/threefoldfarmers). + +>Welcome to the New Internet!
diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_1.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_1.png new file mode 100644 index 0000000..be135fb Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_1.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_10.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_10.png new file mode 100644 index 0000000..59fd49b Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_10.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_11.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_11.png new file mode 100644 index 0000000..e83f1a7 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_11.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_12.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_12.png new file mode 100644 index 0000000..e74e59f Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_12.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_13.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_13.png new file mode 100644 index 0000000..0aeb0e5 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_13.png differ diff --git 
a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_14.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_14.png new file mode 100644 index 0000000..79b3798 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_14.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_15.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_15.png new file mode 100644 index 0000000..50f9191 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_15.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_16.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_16.png new file mode 100644 index 0000000..5f2c328 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_16.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_17.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_17.png new file mode 100644 index 0000000..92cef03 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_17.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_18.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_18.png new file mode 100644 index 0000000..a78b3f0 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_18.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_19.png 
b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_19.png new file mode 100644 index 0000000..78212d9 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_19.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_2.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_2.png new file mode 100644 index 0000000..d459bf3 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_2.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_20.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_20.png new file mode 100644 index 0000000..5cdb630 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_20.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_21.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_21.png new file mode 100644 index 0000000..84bfe28 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_21.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_22.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_22.png new file mode 100644 index 0000000..62e7ee4 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_22.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_23.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_23.png new 
file mode 100644 index 0000000..bdd4063 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_23.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_24.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_24.png new file mode 100644 index 0000000..c9363fd Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_24.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_25.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_25.png new file mode 100644 index 0000000..54cf093 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_25.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_26.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_26.png new file mode 100644 index 0000000..3c4ee6d Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_26.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_27.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_27.png new file mode 100644 index 0000000..3af18ad Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_27.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_28.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_28.png new file mode 100644 index 0000000..07f7558 Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_28.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_29.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_29.png new file mode 100644 index 0000000..1f610e8 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_29.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_3.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_3.png new file mode 100644 index 0000000..aa7d30d Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_3.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_30.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_30.png new file mode 100644 index 0000000..965f8bf Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_30.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_31.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_31.png new file mode 100644 index 0000000..b72cfd2 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_31.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_32.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_32.png new file mode 100644 index 0000000..143efca Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_32.png 
differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_33.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_33.png new file mode 100644 index 0000000..705e98a Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_33.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_34.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_34.png new file mode 100644 index 0000000..47a2799 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_34.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_35.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_35.png new file mode 100644 index 0000000..68448ab Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_35.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_36.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_36.png new file mode 100644 index 0000000..20324e9 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_36.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_37.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_37.png new file mode 100644 index 0000000..88f7691 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_37.png differ diff --git 
a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_38.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_38.png new file mode 100644 index 0000000..b5210c5 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_38.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_39.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_39.png new file mode 100644 index 0000000..6804b91 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_39.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_4.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_4.png new file mode 100644 index 0000000..d5ae149 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_4.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_40.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_40.png new file mode 100644 index 0000000..671b240 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_40.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_41.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_41.png new file mode 100644 index 0000000..c090dc7 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_41.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_42.png 
b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_42.png new file mode 100644 index 0000000..04909fb Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_42.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_43.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_43.png new file mode 100644 index 0000000..5248051 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_43.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_44.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_44.png new file mode 100644 index 0000000..a6f1d64 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_44.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_45.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_45.png new file mode 100644 index 0000000..1dedf8c Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_45.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_46.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_46.png new file mode 100644 index 0000000..b0c1140 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_46.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_47.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_47.png 
new file mode 100644 index 0000000..9da0919 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_47.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_5.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_5.png new file mode 100644 index 0000000..3b9f8d5 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_5.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_6.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_6.png new file mode 100644 index 0000000..d7da48a Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_6.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_7.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_7.png new file mode 100644 index 0000000..c464eae Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_7.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_8.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_8.png new file mode 100644 index 0000000..e9afd38 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_8.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_9.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_9.png new file mode 100644 index 0000000..0527f58 Binary files /dev/null and 
b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/3node_diy_rack_server_9.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/farming_30.png b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/farming_30.png new file mode 100644 index 0000000..d810d13 Binary files /dev/null and b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/farming_30.png differ diff --git a/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/readme.md b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/readme.md new file mode 100644 index 0000000..b525533 --- /dev/null +++ b/collections/farmers/complete_diy_guides/3node_diy_rack_server/img/readme.md @@ -0,0 +1 @@ +Image folder for /wethreepedia/3node_diy_rack_server diff --git a/collections/farmers/complete_diy_guides/complete_diy_guides_readme.md b/collections/farmers/complete_diy_guides/complete_diy_guides_readme.md new file mode 100644 index 0000000..c3b2360 --- /dev/null +++ b/collections/farmers/complete_diy_guides/complete_diy_guides_readme.md @@ -0,0 +1,10 @@ +

# Complete DIY 3Node Guides

+ +This section of the ThreeFold Farmers book presents two short guides detailing how to build a DIY 3Node. + +The Desktop guide is a perfect start for newcomers. If you want to build a bigger 3Node, the Rack Server guide may be the best fit for you! + +

## Table of Contents

+ +- [3Node Desktop DIY Guide](./3node_diy_desktop/3node_diy_desktop.html) +- [3Node Rack Server DIY Guide](./3node_diy_rack_server/3node_diy_rack_server.html) \ No newline at end of file diff --git a/collections/farmers/farmerbot/farmerbot_information.md b/collections/farmers/farmerbot/farmerbot_information.md new file mode 100644 index 0000000..04e108b --- /dev/null +++ b/collections/farmers/farmerbot/farmerbot_information.md @@ -0,0 +1,452 @@ + +

# Farmerbot Additional Information

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Additional Information](#additional-information) + - [General Considerations](#general-considerations) + - [YAML Configuration File Template](#yaml-configuration-file-template) + - [Supported Commands and Flags](#supported-commands-and-flags) + - [Minimum specs to run the Farmerbot](#minimum-specs-to-run-the-farmerbot) + - [How to Prepare Your Farm for the Farmerbot with WOL](#how-to-prepare-your-farm-for-the-farmerbot-with-wol) + - [WOL Requirements](#wol-requirements) + - [Enabling WOL in the BIOS](#enabling-wol-in-the-bios) + - [ZOS Nodes and NIC](#zos-nodes-and-nic) + - [NIC Firmware and WOL](#nic-firmware-and-wol) + - [How to Move Your Farm to a Different Network](#how-to-move-your-farm-to-a-different-network) + - [The differences between power "state" and power "target"](#the-differences-between-power-state-and-power-target) + - [The differences between uptime, status and power state](#the-differences-between-uptime-status-and-power-state) + - [The sequence of events for a node managed by the Farmerbot](#the-sequence-of-events-for-a-node-managed-by-the-farmerbot) + - [The problematic states of a 3node set with the Farmerbot](#the-problematic-states-of-a-3node-set-with-the-farmerbot) + - [Using the ThreeFold Node Status Bot](#using-the-threefold-node-status-bot) + - [CPU overprovisioning](#cpu-overprovisioning) + - [Seed phrase and HEX secret](#seed-phrase-and-hex-secret) + - [Farmerbot directory tree](#farmerbot-directory-tree) + - [Dedicated Nodes and the Farmerbot](#dedicated-nodes-and-the-farmerbot) + - [Periodic wakeup](#periodic-wakeup) + - [Time period between random wakeups and power target update](#time-period-between-random-wakeups-and-power-target-update) + - [Upgrade to the new Farmerbot](#upgrade-to-the-new-farmerbot) + - [Set the Farmerbot without the mnemonics of a ThreeFold Dashboard account](#set-the-farmerbot-without-the-mnemonics-of-a-threefold-dashboard-account) +- [Maintenance](#maintenance) + - [See 
the power state and power target of 3Nodes](#see-the-power-state-and-power-target-of-3nodes) + - [With GraphQL](#with-graphql) + - [With Grid Proxy](#with-grid-proxy) + - [Change manually the power target of a 3Node](#change-manually-the-power-target-of-a-3node) + - [Properly reboot the node if power target "Down" doesn't work](#properly-reboot-the-node-if-power-target-down-doesnt-work) + - [Add a 3Node to a running Farmerbot](#add-a-3node-to-a-running-farmerbot) + - [Update the Farmerbot with a new release](#update-the-farmerbot-with-a-new-release) +- [Troubleshooting](#troubleshooting) + - [Can't Find the Logs](#cant-find-the-logs) +- [Questions and Feedback](#questions-and-feedback) + +*** + +# Introduction + +We present some general information concerning the Farmerbot as well as some advice for proper maintenance and troubleshooting. + +# Additional Information + +We present additional information to complement the [Quick Guide](farmerbot_quick.md). + +## General Considerations + +The Farmerbot doesn’t have to run physically in the farm since it instructs nodes over RMB to power on and off. The Farmerbot should be running at all times. + +The Farmerbot uses the nodes in the farm to send WOL packets to the node that needs to wake up. For this reason, you need at least one node per farm to be powered on at all times. If you do not specify one node to be always on, the Farmerbot will randomly choose a node to stay on for each cycle. If all nodes in a subnet are powered off, there is no way for nodes in other subnets to power them on again. + +Note that if you run the Farmerbot on your farm, it is logical to set the node running the Farmerbot as always on. In this case, it will always be this node that wakes up the other nodes. + +Currently, you can run only one Farmerbot per farm, which means the Farmerbot runs on only one node at a time.
+ +Since you need at least one node to power up a second node, you can't use the Farmerbot with just one node. You need at least two 3Nodes in your farm to use the Farmerbot correctly. + +The Farmerbot gets its data completely from TFChain. This means that, unlike the previous version, the Farmerbot will not start all the nodes when it restarts. + +## YAML Configuration File Template + +The quick guide showed a simple form of the YAML configuration file. Here are all the parameters that can be set for the configuration file. + +``` +farm_id: "" +# If no nodes are listed under included_nodes, the Farmerbot will include all +# nodes in the farm. The farm should contain at least 2 nodes. +included_nodes: + - "" +excluded_nodes: + - "" +never_shutdown_nodes: + - "" +power: + periodic_wake_up_start: "" + wake_up_threshold: "" + periodic_wake_up_limit: "" + overprovision_cpu: "" +``` + +## Supported Commands and Flags + +We present the different commands for the Farmerbot. + +- `start`: to start (power on) a node + +```bash +farmerbot start --node -m -n dev -d +``` + +Where: + +```bash +Flags: + --node uint32 the node ID you want to use + +Global Flags: +-d, --debug by setting this flag the farmerbot will print debug logs too +-m, --mnemonic string the mnemonic of the account of the farmer +-n, --network string the grid network to use (default "main") +-s, --seed string the hex seed of the account of the farmer +-k, --key-type string key type for mnemonic (default "sr25519") +``` + +- `start all`: to start (power on) all nodes in a farm + +```bash +farmerbot start all --farm -m -n dev -d +``` + +Where: + +```bash +Flags: + --farm uint32 the farm ID you want to start your nodes in + +Global Flags: +-d, --debug by setting this flag the farmerbot will print debug logs too +-m, --mnemonic string the mnemonic of the account of the farmer +-n, --network string the grid network to use (default "main") +-s, --seed string the hex seed of the account of the farmer +-k, --key-type string key type for mnemonic (default "sr25519") +``` + +- `version`: to get the current version of the Farmerbot + +```bash +farmerbot version +``` + +## Minimum specs to run the Farmerbot + +The Farmerbot can run on any computer or server; it could even run on a laptop. As long as it has an internet connection, the Farmerbot will work fine. + +The Farmerbot runs fine on a VM with a single vcore and 2GB of RAM. For storage, you need room for Docker and its dependencies. Thus 1 or 2GB of free storage, with the OS already installed, should be sufficient. + +## How to Prepare Your Farm for the Farmerbot with WOL + +ZOS can utilize 2 NICs (Network Interface Cards) of a node (server, workstation, desktop, etc.). The first NIC on the motherboard will always be what we call the ZOS/dmz NIC; the second one is used for public configs (gateway, public IPs for workloads, etc.). So if you don't have public IPs in your farm, only the first NIC of your ZOS node will be used. This subnet is where the Farmerbot operates. If you do have public IPs, the same applies. + +Wake-on-LAN (WOL) is used to remotely boot (start) a ZOS node that was shut down by the Farmerbot. It works by sending what is called a 'magic packet' to the NIC MAC address of a ZOS node. If that NIC is set up correctly, i.e. 'listening' for the packet, the node will start up, POST and boot ZOS. The Farmerbot keeps a list of MAC addresses for the nodes under its management, so it knows where to send the packet when required. + +## WOL Requirements + +WOL comes with a few requirements. We list them in the sections that follow. + +### Enabling WOL in the BIOS + +Enable WOL in the BIOS of your ZOS node. + +A ZOS node must be capable of WOL; have a look at your node's hardware/BIOS manual. If it is supported, make sure to enable it in the BIOS! A bit of research will quickly tell you how to enable it for your hardware. Some older motherboards do not support this; sometimes you get lucky after a BIOS upgrade, but that is brand/model specific.
Some older motherboards do not support this, sometimes you can be lucky it does after a BIOS upgrade, but that is brand/model specific. + +Some examples: + +![farmerbot_bios_1|517x291](img/farmerbot_bios_1.jpeg) + +![farmerbot_bios_2|499x375](img/farmerbot_bios_2.jpeg) + +### ZOS Nodes and NIC + +All your ZOS nodes and their first NIC (ZOS/dmz) should be in the same network subnet (also called network segment or broadcast domain). + +This requires some basic network knowledge. WOL packets can not be send across different subnets by default, it can but this requires specific configuration on the firewall that connects the two subnets. Though cross-subnet WOL is currently not supported by the farmerbot. + +A 'magic' WOL packet is sent only on networking layer 2 (L2 or the 'data link layer') based on MAC address. So not on L3 based on ip address. This is why all nodes that should be brought up via WOL, need to be in the same subnet. + +You can check if this is the case like this: if for example one node has the ip 192.168.0.20/24, then all other nodes should have an ip between 192.168.0.1 and 192.168.0.254. You can calculate subnet ranges easely here: https://www.tunnelsup.com/subnet-calculator/ + +So for the 192.168.0.0/24 example, you can see the range under 'Usable Host Range': + +![farmerbot_bios_3|499x500](img/farmerbot_bios_3.png) + +### NIC Firmware and WOL + +Some NIC's require WOL to be set on the NIC firmware. + +This is fully handled by ZOS. Every time ZOS boots it will enable WOL on links if they require it. So if a ZOS node then is added to a farmerbot, it will have WOL enabled on its NIC when it's turned off (by the farmerbot). + +Your farmerbot can be run on any system, including on a node. It doesn't have to be on the same network subnet as the nodes from the farm. The nodes of the farm on the other hand have to be in the same LAN. Don't hesitate to ask your technical questions here, we and the community will help you set things up! 
+ +## How to Move Your Farm to a Different Network + +Note that the Farmerbot is currently available for Dev Net, QA Net, Test Net and Main Net. Thus, it might not be necessary to move your farm to a different network. + +To move your farm to a different network, you need to create a new bootstrap image for the new network instead of your current network. You should also wipe your 3Nodes' disks before moving to a different network. + +To download the Zero-OS bootstrap image, go to the usual bootstrap link [https://v3.bootstrap.grid.tf/](https://v3.bootstrap.grid.tf/) and select the network you want. + +![test_net|690x422](img/farmerbot_5.png) + +Once you have your new bootstrap image for the new network, [wipe your disks](../3node_building/4_wipe_all_disks.md), insert the new bootstrap image and reboot the 3Node. + +## The differences between power "state" and power "target" + +The target is set by the Farmerbot, or can be set manually by the farmer on TF Chain. The power state can only be set by the node itself, in response to power targets it observes on chain. + + +## The differences between uptime, status and power state + +There are three distinctly named endpoints or fields that exist in the back-end systems: + +* Uptime + * The number of seconds the node was up, as of its last uptime report. This is the same on GraphQL and Grid Proxy. +* Status + * This is a field that only exists on the Grid Proxy, which corresponds to whether the node sent an uptime report within the last 40 minutes. +* Power state + * This is a field that only exists on GraphQL, and it's the self-reported power state of the node. This only goes to "down" if the node shut itself down at the request of the Farmerbot. + +## The sequence of events for a node managed by the Farmerbot + +The sequence of events for a node managed by the Farmerbot should look like this: + +1. Node is online. Target, state, and status are all "Up". +2. Farmerbot sets node's target to "Down". +3. 
Node sets its state to "Down" and then shuts off. +4. Three hours later the status switches to "Down" because the node's updatedAt field hasn't been updated. +5. At periodic wake up time, Farmerbot sets node's target to "Up". +6. Node receives WoL packet and starts booting. +7. After boot is complete, node sets its state to "Up" and also submits an uptime report. +8. The node's updatedAt field is updated with the time the uptime report was received, and status changes to "Up". + +At that point the cycle is completed and will repeat. + +## The problematic states of a 3Node set with the Farmerbot + +These are problematic states: + +1. Target is set to "Up" but state and status are "Down" for longer than normal boot time (node isn't responding). +2. Target has been set to "Down" for longer than ~23.5 hours (Farmerbot isn't working properly). +3. Target is "Down" but state and status are "Up" (ZOS is potentially not responding to the power target correctly). +4. State is "Up" but status is "Down" (node shut down unexpectedly). + +## Using the ThreeFold Node Status Bot + +You can use the [ThreeFold Node Status Bot](https://t.me/tfnodestatusbot) to see the nodes' status in relation to the Farmerbot. + +## CPU overprovisioning + +In the context of the ThreeFold Grid, overprovisioning a CPU means that you can allocate more than one deployment to one CPU. + +In relation to the Farmerbot, you can set a value between 1 and 4 for how much the CPU can be overprovisioned. For example, a value of 2 means that the Farmerbot can allocate up to 2 deployments to one CPU. + +## Seed phrase and HEX secret + +When setting up the Farmerbot, you will need to enter either the seed phrase or the HEX secret of your farm. For farms created in the TF Connect app, the HEX secret from the app is correct. For farms created in the TF Dashboard, you'll need the seed phrase provided when you created the account. 
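To make the CPU overprovisioning factor described above concrete, here is a tiny sketch; the 8-core node is a hypothetical example, not a value from this guide:

```shell
# A factor of 2 lets the Farmerbot allocate up to 2 deployments per CPU.
# Hypothetical example: a node with 8 CPU cores.
CORES=8
FACTOR=2
echo "maximum deployments across all cores: $(( CORES * FACTOR ))"
```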
+ +## Farmerbot directory tree + +As a general template, the directory tree of the Farmerbot will look like this: + +``` +└── farmerbot_directory + ├── .env + └── conf.yml +``` + +## Dedicated Nodes and the Farmerbot + +Dedicated nodes are managed like any other node. Nodes marked as dedicated can only be rented completely. Whenever a user wants to rent a dedicated node, the user sends a find_node job to the Farmerbot. The Farmerbot will find such a node, power it on if it is down, and reserve the full node (for 30 minutes). The user can then proceed with creating a rent contract for that node. The Farmerbot will get that information and keep that node powered on. It will no longer return that node as a possible node in future find_node jobs. Whenever the rent contract is canceled, the Farmerbot will notice this and shut down the node if the resource usage allows it. + +## Periodic wakeup + +The minimum period between waking up two nodes is currently 5 minutes. This means that during the periodic wakeup, a new node wakes up every 5 minutes. + +Once all nodes are awake, they all shut down at the same time, except the node that stays awake to wake up the others during the next periodic wakeup. + +## Time period between random wakeups and power target update + +The time period between a random wakeup and the moment the power target is set to down is between 30 minutes and one hour. + +Whenever a random wakeup is initiated, the Farmerbot will wait 30 minutes for the node to be up. Once the node is up, the Farmerbot will keep that node up for 30 minutes for the two following reasons: + +* The node can send its uptime report. +* If the node was put online for a given user deployment, this time period gives ample time for the user to deploy their workload. + +This ensures an optimal user experience and reliability in 3Nodes' reports. + +Note that each node managed by the Farmerbot will randomly wake up on average 10 times a month. 
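Given the 5-minute spacing described in the periodic wakeup section above, you can estimate a lower bound on how long a full periodic wakeup takes; a small sketch for a hypothetical 24-node farm:

```shell
# With one node woken every 5 minutes, the last of N nodes starts waking
# at least (N - 1) * 5 minutes after the first one.
NODES=24
echo "minimum periodic wakeup window: $(( (NODES - 1) * 5 )) minutes"
```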
+ +## Upgrade to the new Farmerbot + +If you are still running the old version of the Farmerbot (written in V), you can easily upgrade to the new Farmerbot (written in Go). You simply need to properly stop the old Farmerbot and then follow the new [Farmerbot guide](./farmerbot_quick.md). + +Here are the steps to properly stop the old Farmerbot. + +* Go to the directory with the old Farmerbot docker files and fully stop the old Farmerbot: + ``` + docker compose rm -f -s -v + ``` +* You should also make sure that there are no containers left from the previous runs. First, list all containers: + ``` + docker container ls --all + ``` +* Then delete the remaining containers: + ``` + docker container rm -f -v NAME_OF_CONTAINER + ``` + +Once the old Farmerbot is properly stopped and deleted, follow the new [Farmerbot guide](./farmerbot_quick.md). + +## Set the Farmerbot without the mnemonics of a ThreeFold Dashboard account + +If you've lost the mnemonics associated with an account created on the ThreeFold Dashboard, it is still possible to set up the Farmerbot with this account, but it's easier to simply create a new account and a new farm. Fortunately, the process is simple. + +- Create a new account on the Dashboard. This will generate a new twin. +- Create a new farm and create new bootstrap images for your new farm. +- Reboot your nodes with the new bootstrap images. This will automatically migrate your nodes, with their current node IDs, to the new farm. + +If you are using the Farmerbot, at this point you will be able to set it up with the mnemonics associated with the new farm. + +# Maintenance + +## See the power state and power target of 3Nodes + +### With GraphQL + +You can use [GraphQL](https://graphql.grid.tf/graphql) to see the power state and power target of 3Nodes. 
+ +To find all nodes within one farm, use the following line with the proper farm ID (here we set farm **1** as an example): + +``` +query MyQuery { + nodes(where: {farmID_eq: 1}) { + power { + target + state + } + nodeID + } +} +``` + +To find a specific node, write the following with the proper node ID (here we set node **655** as an example): + +``` +query MyQuery { + nodes(where: {nodeID_eq: 655}) { + power { + state + } + nodeID + } +} +``` + +### With Grid Proxy + +You can also see the power state and power target of a 3Node with Grid Proxy. + +Use the following URL while adjusting the proper node ID (here we set node **1** as an example): + +``` +https://gridproxy.grid.tf/nodes/1 +``` + +Then, in the response, you will see the following: + +``` +"power": { +"state": "string", +"target": "string" +}, +``` + +If the state and target are not defined, the string will be empty. + +## Change manually the power target of a 3Node + +You can use the Polkadot Extrinsics for this. + +* Go to the Polkadot.js.org website's endpoint based on the network of your 3Node: + * [Main net](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/extrinsics) + * [Test net](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.test.grid.tf#/extrinsics) + * [Dev net](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.dev.grid.tf#/extrinsics) + * [QA net](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.qa.grid.tf#/extrinsics) +* Make sure that **Developer -> Extrinsics** is selected +* Select your account +* Select **tfgridModule** +* Select **changepowertarget(nodeId,powerTarget)** +* Select the node whose power target you want to change +* Select the power target (**Up** or **Down**) +* Click **Submit Transaction** at the bottom of the page + + + +## Properly reboot the node if power target "Down" doesn't work + +* Set the power target to "Down" manually +* Reboot the node and wait for it to set its power state to "Down" +* Once power target and state are both set to 
"Down", you can manually power off the node and reboot it + +## Add a 3Node to a running Farmerbot + +If the Farmerbot is running and you want to add a new 3Node to your farm, you can proceed as follows. + +- Boot the new 3Node + - Once the node is registered to the grid, a new node ID will be generated +- If you set the section `included_nodes` in the YAML configuration file + - Add the new node ID to the configuration file +- Restart the Farmerbot with the systemd command `restart` (in this example, the service is called `farmerbot`) + ``` + systemctl restart farmerbot + ``` + +## Update the Farmerbot with a new release + +There are only a few steps needed to update the Farmerbot to a new release. + +- Download the latest [ThreeFold tfgrid-sdk-go release](https://github.com/threefoldtech/tfgrid-sdk-go/releases) and extract the farmerbot for your specific setup (here we use `x86_64`). On the line `wget ...`, make sure to replace `` with the latest Farmerbot release. + ``` + wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download//tfgrid-sdk-go_Linux_x86_64.tar.gz + tar xf tfgrid-sdk-go_Linux_x86_64.tar.gz farmerbot + ``` +- Make a copy of the old version in case you need it in the future: + ``` + mv /usr/local/bin/farmerbot /usr/local/bin/farmerbot_archive + ``` +- Move the new Farmerbot to the local bin + ``` + mv farmerbot /usr/local/bin + ``` +- Restart the bot + ``` + systemctl restart farmerbot + ``` +- Remove the tar file + ``` + rm tfgrid-sdk-go_Linux_x86_64.tar.gz + ``` + +# Troubleshooting + +## Can't Find the Logs + +If you can't find the logs of the Farmerbot, make sure that you ran the bot before! Once the Farmerbot runs, it prints logs in a file called `farmerbot.log` in the directory where it is running. + +You can try a search for any files under the home directory with the `.log` extension in case it's been moved: + +``` +find ~/ -name '*.log' +``` + +If you've deleted the log file while the bot is running, the bot won't recreated it. 
In this case, you will need to restart the bot, e.g. `systemctl restart farmerbot`. The bot will then automatically create a log file. + +# Questions and Feedback + +If you have questions concerning the Farmerbot, feel free to ask for help on the [ThreeFold Forum](https://forum.threefold.io/) or on the [ThreeFold Farmer chat](https://t.me/threefoldfarmers). \ No newline at end of file diff --git a/collections/farmers/farmerbot/farmerbot_intro.md b/collections/farmers/farmerbot/farmerbot_intro.md new file mode 100644 index 0000000..96d19b3 --- /dev/null +++ b/collections/farmers/farmerbot/farmerbot_intro.md @@ -0,0 +1,15 @@ +

# Farmerbot

+ +The Farmerbot is a service that farmers can run in order to automatically manage the nodes in their farms. The behavior of the Farmerbot is customizable through a YAML configuration file. + +We present here a quick guide to accompany farmers in setting up the Farmerbot. This guide contains the essential information to deploy the Farmerbot on the TFGrid. The Additional Information section contains further details on how the Farmerbot works. + +For more information on the Farmerbot, you can visit the [Farmerbot repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/farmerbot) on GitHub. You can also consult the Farmerbot FAQ if needed. + +

## Table of Contents

+ +- [Quick Guide](./farmerbot_quick.md) +- [Additional Information](./farmerbot_information.md) +- [Minting and the Farmerbot](./farmerbot_minting.md) + +> Note: The Farmerbot is an optional feature developed by ThreeFold. Please use at your own risk. While ThreeFold will do its best to fix any issues with the Farmerbot and minting, if minting is affected by the use of the Farmerbot, ThreeFold cannot be held responsible. \ No newline at end of file diff --git a/collections/farmers/farmerbot/farmerbot_minting.md b/collections/farmers/farmerbot/farmerbot_minting.md new file mode 100644 index 0000000..f45d13a --- /dev/null +++ b/collections/farmers/farmerbot/farmerbot_minting.md @@ -0,0 +1,26 @@ +

# Minting and the Farmerbot

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Minting Rules](#minting-rules) +- [Disclaimer](#disclaimer) + +*** + +## Introduction + +We cover essential features of ThreeFold minting in relation to the Farmerbot. + +## Minting Rules + +There are certain minting rules that are very important when it comes to farming on the ThreeFold Grid while using the Farmerbot. + +- The 3Node should wake up within 30 minutes of setting the power target to **Up**. + - If the 3Node does not respect this rule, the 3Node won't mint for the whole minting period. +- The 3Node must wake up at least once every 24 hours. + - If the 3Node does not respect this rule, the 3Node won't mint for a 24-hour period. + +## Disclaimer + +Please note that the Farmerbot is an optional feature developed by ThreeFold. Please use it at your own risk. While ThreeFold will do its best to fix any issues with the Farmerbot and minting, if minting is affected by the use of the Farmerbot, ThreeFold cannot be held responsible. 

# Farmerbot Quick Guide

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Farmerbot Costs on the TFGrid](#farmerbot-costs-on-the-tfgrid) +- [Enable Wake-On-Lan](#enable-wake-on-lan) +- [Deploy a Full VM](#deploy-a-full-vm) +- [Farmerbot Setup](#farmerbot-setup) + - [Download the Farmerbot Binaries](#download-the-farmerbot-binaries) + - [Create the Farmerbot Files](#create-the-farmerbot-files) + - [Run the Farmerbot](#run-the-farmerbot) + - [Set a systemd Service](#set-a-systemd-service) + - [Check the Farmerbot Logs](#check-the-farmerbot-logs) + - [Stop the Farmerbot](#stop-the-farmerbot) +- [Farmerbot Files](#farmerbot-files) + - [Configuration File Template (config.yml)](#configuration-file-template-configyml) + - [Environment Variables File Template (.env)](#environment-variables-file-template-env) +- [Running Multiple Farmerbots on the Same VM](#running-multiple-farmerbots-on-the-same-vm) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this guide, we show how to deploy the [Farmerbot](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/farmerbot) on a full VM running on the TFGrid. + +This guide can be done on bare metal or on a full VM running on the TFGrid. You need at least two 3Nodes on the same farm to make use of the Farmerbot. + +This version of the Farmerbot also works with ARM64. This means that if you have a Pi 3, 4, or Zero 2 with a 64-bit OS, you can download the appropriate release archive and it will work properly. + +Read the [Additional Information](farmerbot_information.md) section for further details concerning the Farmerbot. + +## Prerequisites + +- The TFChain account associated with the farm should have at least 5 TFT (50 TFT is recommended) + +## Farmerbot Costs on the TFGrid + +If you run the Farmerbot on a 3Node on the TFGrid, you will have to pay TFT to deploy on that 3Node. You can run a full VM at minimum specs for the Farmerbot, that is 1 vcore, 15GB of SSD storage and 512MB of RAM. 
Note that you can use the Planetary Network. You do not need to deploy with IPv4. The cost on main net for this kind of workload is around 0.175 TFT/hour (as of 11-07-23). + +Next to that, you will have to pay the transaction fees every time the Farmerbot has to wake up or shut down a node. This means that you need some TFT on the account tied to the twin of your farm. + +For the periodic wakeups, each node in the farm is shut down and powered on once a day, i.e. 30 times per month. Also, there are 10 random wakeups per month for each node. This means that each node is turned off and on 40 times per month on average. In that case, the average cost per month to power on nodes and shut them back down equals: + +> average transaction fees cost per month = 0.001 TFT (extrinsic fee) * amount of nodes * 40 * 2 (one for powering down, one for powering up) + +## Enable Wake-On-Lan + +For a 3Node to work properly with the Farmerbot, the parameter wake-on-LAN must be enabled. Enabling wake-on-LAN on your 3Node may differ depending on your computer model. Please refer to the documentation of your computer if needed. + +Usually the feature will be called Wake-on-LAN and you need to set it as "enabled" in the BIOS/UEFI settings. + +Here are some examples to guide you: + +* Rack Server, Dell R720 + * Go into `System Setup -> Device Settings -> NIC Port -> NIC Configuration` + * Set Wake-on-LAN to `Enable` +* Desktop Computer, HP EliteDesk G1 + * Go to `Power -> Hardware Power Management` + * Disable `S5 Maximum Power Saving` + * Go to `Advanced -> Power-On Options` + * Set `Remote Wake up Boot source` to `Remote Server` + +> Hint: Check the Z-OS monitor screen and make sure that all the 3Nodes are within the same LAN (e.g. all 3Node addresses are between 192.168.15.0 and 192.168.15.255). + +For more information on WOL, [read this section](farmerbot_information.md#how-to-prepare-your-farm-for-the-farmerbot-with-wol). 
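Following the hint above, you can quickly verify that the node addresses you read off the Z-OS monitor screens share one /24 subnet. A small sketch; the IP addresses below are hypothetical:

```shell
# Hypothetical node IPs copied from the Z-OS monitor screens.
NODES="192.168.15.20 192.168.15.21 192.168.15.37"

# For a /24 subnet, all addresses must share the first three octets.
PREFIXES=$(for ip in $NODES; do echo "${ip%.*}"; done | sort -u)
if [ "$(echo "$PREFIXES" | wc -l)" -eq 1 ]; then
  echo "all nodes are in ${PREFIXES}.0/24"
else
  echo "nodes span multiple subnets:" $PREFIXES
fi
```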
+ +## Deploy a Full VM + +For this guide, we run the Farmerbot on a full VM running on the TFGrid. While you do not need to run the Farmerbot on the TFGrid, doing so keeps the whole process very simple, as presented here. + +- Deploy a full VM on the TFGrid +- Update and upgrade the VM + ``` + apt update && apt upgrade + ``` +- Reboot and reconnect to the VM + ``` + reboot + ``` + +## Farmerbot Setup + +We present the different steps to run the Farmerbot using the binaries. + +> For a script that can help automate the steps in this guide, [check this forum post](https://forum.threefold.io/t/new-farmerbot-install-script/4207). + +### Download the Farmerbot Binaries + +- Download the latest [ThreeFold tfgrid-sdk-go release](https://github.com/threefoldtech/tfgrid-sdk-go/releases) and extract the farmerbot for your specific setup (here we use `x86_64`). On the line `wget ...`, make sure to replace `` with the latest Farmerbot release. + ``` + wget https://github.com/threefoldtech/tfgrid-sdk-go/releases/download//tfgrid-sdk-go_Linux_x86_64.tar.gz + tar xf tfgrid-sdk-go_Linux_x86_64.tar.gz farmerbot + ``` +- Move the Farmerbot + ``` + mv farmerbot /usr/local/bin + ``` +- Remove the tar file + ``` + rm tfgrid-sdk-go_Linux_x86_64.tar.gz + ``` + +### Create the Farmerbot Files + +- Create the Farmerbot files directory + ``` + cd ~ + mkdir farmerbotfiles + ``` +- Create the Farmerbot `config.yml` file ([see template below](#configuration-file-template-configyml)) + ``` + nano ~/farmerbotfiles/config.yml + ``` +- Create the environment variables file and set the variables ([see template below](#environment-variables-file-template-env)) + ``` + nano ~/farmerbotfiles/.env + ``` + +### Run the Farmerbot + +We run the Farmerbot with the following command: + +``` +farmerbot run -e ~/farmerbotfiles/.env -c ~/farmerbotfiles/config.yml -d +``` + +For farmers with **ed25519** keys, the flag `-k` should be used. Note that by default, the Farmerbot uses **sr25519** keys. 
+ +``` +farmerbot run -k ed25519 -e ~/farmerbotfiles/.env -c ~/farmerbotfiles/config.yml -d +``` + +For more information on the supported commands, see the [Additional Information section](farmerbot_information.md#supported-commands-and-flags). You can also consult the [Farmerbot repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/farmerbot). + +Once you've verified that the Farmerbot runs properly, you can stop the Farmerbot and go to the next section to set up a Farmerbot service. This step will ensure the Farmerbot keeps running after exiting the VM. + +### Set a systemd Service + +It is highly recommended to set up an Ubuntu systemd service to keep the Farmerbot running after exiting the VM. + +* Create the service file + * ``` + nano /etc/systemd/system/farmerbot.service + ``` +* Set the Farmerbot systemd service + + ``` + [Unit] + Description=ThreeFold Farmerbot + StartLimitIntervalSec=0 + + [Service] + Restart=always + RestartSec=5 + StandardOutput=append:/root/farmerbotfiles/farmerbot.log + StandardError=append:/root/farmerbotfiles/farmerbot.log + ExecStart=/usr/local/bin/farmerbot run -e /root/farmerbotfiles/.env -c /root/farmerbotfiles/config.yml -d + + [Install] + WantedBy=multi-user.target + ``` +* Enable the Farmerbot service + ``` + systemctl daemon-reload + systemctl enable farmerbot + systemctl start farmerbot + ``` +* Verify that the Farmerbot service is properly running + ``` + systemctl status farmerbot + ``` + +### Check the Farmerbot Logs + +Once you've set up a Farmerbot systemd service [as shown above](#set-a-systemd-service), the Farmerbot will start writing logs to the file `farmerbot.log` in the directory `farmerbotfiles`. + +Thus, you can get more details on the operation of the Farmerbot by inspecting the log file. This can also be used to see the **Farmerbot Report Table**, as this table is printed in the Farmerbot log. 
+ +* See all logs so far + ``` + cat ~/farmerbotfiles/farmerbot.log + ``` +* See the last ten lines and new logs as they are generated + ``` + tail -f ~/farmerbotfiles/farmerbot.log + ``` +* See all logs and new lines as they are generated + ``` + tail -f -n +1 ~/farmerbotfiles/farmerbot.log + ``` +* See the last report table + ``` + tac ~/farmerbotfiles/farmerbot.log | grep -B5000 -m1 "Nodes report" | tac + ``` + +### Stop the Farmerbot + +You can stop the Farmerbot with the following command: + +``` +systemctl stop farmerbot +``` + +After stopping the Farmerbot, any nodes in standby mode will remain in standby. To bring them online, use this command: + +``` +farmerbot start all -e /root/farmerbotfiles/.env --farm +``` + +## Farmerbot Files + +### Configuration File Template (config.yml) + +In this example, the farm ID is 1, the Farmerbot manages 4 nodes, node 1 never shuts down, and a periodic wakeup is set at 1:00 PM. + +Note that the timezone of the Farmerbot will be the same as the timezone of the machine the Farmerbot is running on. By default, a full VM on the TFGrid will be set to UTC. + +``` +farm_id: 1 +included_nodes: + - 1 + - 2 + - 3 + - 4 +never_shutdown_nodes: + - 1 +power: + periodic_wake_up_start: 01:00PM +``` + +Note that if the user wants to include all the nodes within a farm, they can simply omit the `included_nodes` section. In this case, all nodes of the farm will be included in the Farmerbot, as shown in the example below. If you are proceeding like this, make sure that you don't have any unused node IDs on your farm, as the Farmerbot would try to wake up nodes that aren't running anymore on the grid. + +``` +farm_id: 1 +never_shutdown_nodes: + - 1 +power: + periodic_wake_up_start: 01:00PM +``` + +For more information on the configuration file, refer to the [Additional Information section](farmerbot_information.md#yaml-configuration-file-template). 
+ +You can also consult the [Farmerbot repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/farmerbot). + +### Environment Variables File Template (.env) + +The network can be either `main`, `test`, `dev` or `qa`. The following example is with the main network. + +``` +MNEMONIC_OR_SEED="word1 word2 word3 ... word12" +NETWORK="main" +``` + +## Running Multiple Farmerbots on the Same VM + +You can run multiple instances of the Farmerbot on the same VM. + +To do so, you need to create a directory for each instance of the Farmerbot. Each directory should contain the configuration and variables files as shown above. Once you've set the files, you can simply execute the Farmerbot `run` command to start each bot in each directory. + +It's recommended to use distinct names for the directories and the services to easily differentiate the multiple Farmerbots running on the VM. + +For example, the directory tree of two Farmerbots could be: + +``` +└── farmerbotfiles + ├── farmerbot1 + │   ├── .env + │   └── config.yml + └── farmerbot2 + ├── .env + └── config.yml +``` + +For example, the services of two Farmerbots could be named as follows: + +``` +farmerbot1.service +farmerbot2.service +``` + +## Questions and Feedback + +This guide is meant to get you started quickly with the Farmerbot. That being said, there is a lot more that can be done with the Farmerbot. + +For more information on the Farmerbot, please refer to the [Additional Information section](./farmerbot_information.md). You can also consult the [official Farmerbot Go repository](https://github.com/threefoldtech/tfgrid-sdk-go/tree/development/farmerbot). + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](https://forum.threefold.io/) or on the [ThreeFold Farmers Chat](https://t.me/threefoldfarmers) on Telegram. + +> This is the new version of the Farmerbot written in Go. If you have any feedback or issues, please let us know! 
\ No newline at end of file diff --git a/collections/farmers/farmerbot/img/farmerbot_4.png b/collections/farmers/farmerbot/img/farmerbot_4.png new file mode 100644 index 0000000..58d4b2c Binary files /dev/null and b/collections/farmers/farmerbot/img/farmerbot_4.png differ diff --git a/collections/farmers/farmerbot/img/farmerbot_5.png b/collections/farmers/farmerbot/img/farmerbot_5.png new file mode 100644 index 0000000..439022d Binary files /dev/null and b/collections/farmers/farmerbot/img/farmerbot_5.png differ diff --git a/collections/farmers/farmerbot/img/farmerbot_bios_1.jpeg b/collections/farmers/farmerbot/img/farmerbot_bios_1.jpeg new file mode 100644 index 0000000..7978e9c Binary files /dev/null and b/collections/farmers/farmerbot/img/farmerbot_bios_1.jpeg differ diff --git a/collections/farmers/farmerbot/img/farmerbot_bios_2.jpeg b/collections/farmers/farmerbot/img/farmerbot_bios_2.jpeg new file mode 100644 index 0000000..207aaef Binary files /dev/null and b/collections/farmers/farmerbot/img/farmerbot_bios_2.jpeg differ diff --git a/collections/farmers/farmerbot/img/farmerbot_bios_3.png b/collections/farmers/farmerbot/img/farmerbot_bios_3.png new file mode 100644 index 0000000..10ae6c9 Binary files /dev/null and b/collections/farmers/farmerbot/img/farmerbot_bios_3.png differ diff --git a/collections/farmers/farmers.md b/collections/farmers/farmers.md new file mode 100644 index 0000000..ccddde8 --- /dev/null +++ b/collections/farmers/farmers.md @@ -0,0 +1,37 @@ +# ThreeFold Farmers + +This section covers all practical information on how to become a cloud service provider (farmer) on the ThreeFold Grid. + +For complementary information on ThreeFold farming, refer to the [Farming](../../knowledge_base/farming/farming_toc.md) section. + +To buy a certified node from an official ThreeFold vendor, check the [ThreeFold Marketplace](https://marketplace.3node.global/). + +

## Table of Contents

+ +- [Build a 3Node](./3node_building/3node_building.md) + - [1. Create a Farm](./3node_building/1_create_farm.md) + - [2. Create a Zero-OS Bootstrap Image](./3node_building/2_bootstrap_image.md) + - [3. Set the Hardware](./3node_building/3_set_hardware.md) + - [4. Wipe All the Disks](./3node_building/4_wipe_all_disks.md) + - [5. Set the BIOS/UEFI](./3node_building/5_set_bios_uefi.md) + - [6. Boot the 3Node](./3node_building/6_boot_3node.md) +- [Farming Optimization](./farming_optimization/farming_optimization.md) + - [GPU Farming](./3node_building/gpu_farming.md) + - [Set Additional Fees](./farming_optimization/set_additional_fees.md) + - [Minting Receipts](./3node_building/minting_receipts.md) + - [Minting Periods](./farming_optimization/minting_periods.md) + - [Room Parameters](./farming_optimization/farm_room_parameters.md) + - [Farming Costs](./farming_optimization/farming_costs.md) + - [Calculate Your ROI](./farming_optimization/calculate_roi.md) + - [Farming Requirements](./farming_optimization/farming_requirements.md) +- [Advanced Networking](./advanced_networking/advanced_networking_toc.md) + - [Networking Overview](./advanced_networking/networking_overview.md) + - [Network Considerations](./advanced_networking/network_considerations.md) + - [Network Setup](./advanced_networking/network_setup.md) +- [Farmerbot](./farmerbot/farmerbot_intro.md) + - [Quick Guide](./farmerbot/farmerbot_quick.md) + - [Additional Information](./farmerbot/farmerbot_information.md) + - [Minting and the Farmerbot](./farmerbot/farmerbot_minting.md) +- [Farmers FAQ](../faq/faq.md#farmers-faq) + +> Note: Bugs in the code (e.g. ZOS or other components) can happen. If this is the case, there might be a loss of tokens during minting which won't be refunded by ThreeFold. If there are minting code errors, ThreeFold will try its best to fix the minting code and remint nodes that were affected by such errors. 
diff --git a/collections/farmers/farming_optimization/calculate_roi.md b/collections/farmers/farming_optimization/calculate_roi.md new file mode 100644 index 0000000..df13bd4 --- /dev/null +++ b/collections/farmers/farming_optimization/calculate_roi.md @@ -0,0 +1,17 @@ +

# Calculate the ROI of a DIY 3Node

+ +To calculate the ROI of a DIY 3Node, we first calculate the Revenue per Month: + +>Revenue per month = TFT price when sold * TFT farmed per month + +The ROI of a DIY 3Node is: + +> Cost of 3Node / Revenue per month = ROI in months + +For example, a Rack Server farming 3000 TFT per month with an initial cost of 1500 USD has the following ROI: + +> 1500 / (3000 * 0.08) = 6.25 months + +This calculation is based on a TFT value of 8 cents. You should adjust this according to the current market price. + +Note that this ROI equation is used to compare efficiency between different DIY 3Nodes. It does not represent real final gains, as additional costs must be taken into consideration, such as electricity for the 3Nodes and the AC system, as well as Internet bandwidth. All those notions are covered in this part of the book. \ No newline at end of file diff --git a/collections/farmers/farming_optimization/farm_room_parameters.md b/collections/farmers/farming_optimization/farm_room_parameters.md new file mode 100644 index 0000000..823d84e --- /dev/null +++ b/collections/farmers/farming_optimization/farm_room_parameters.md @@ -0,0 +1,139 @@ + 

Air Conditioner, Relative Humidity and Air Changes per Hour

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Calculate the Minimum BTU/h Needed for the AC](#calculate-the-minimum-btuh-needed-for-the-ac) + - [How Much BTU/h is Needed?](#how-much-btuh-is-needed) + - [Taking Utilization Into Account](#taking-utilization-into-account) + - [The General BTU/h Equation](#the-general-btuh-equation) +- [Ensure Proper Relative Humidity](#ensure-proper-relative-humidity) +- [Ensure Proper Air Changes per Hour](#ensure-proper-air-changes-per-hour) + +*** + +## Introduction + +In this section of the ThreeFold Farmers book, we cover some important notions concerning the parameters of the room where your 3Nodes are running. We discuss topics such as air conditioning, relative humidity and air changes per hour. + +Planning the building of your ThreeFold farm ahead with these notions in mind will ensure a smooth farming experience. + + + +## Calculate the Minimum BTU/h Needed for the AC + +Let's see how to calculate how powerful your AC unit needs to be when it comes to cooling down your server room. + +As we know, servers generate heat when they are working. While a desktop 3Node will generate under 20W at idle and a server 3Node might use 100W at **idle**, when you pile up several desktop or server 3Nodes in the same location, things can get pretty warm when cultivation on the Grid is happening. Indeed, when your servers are using a lot of power, especially in the summertime, you might need some additional cooling. + +A good thing about servers generating heat is that this can be used as a **heat source in the winter**. Other more advanced techniques can be used to maximize the heat production. But that's for another day! + +Note that for small farms, your current heating and cooling system may suffice. + +So let's do the calculation: + +### How Much BTU/h is Needed? + + +How much BTU/h does your ThreeFold Farm need to cool your servers? + +Calculating this is actually pretty simple. 
You need to keep in mind that **1 kW (1000 W) of power is equivalent to 3413 BTU/h** (British Thermal Units per hour). + +> 1000 W = 1 kW = 3413 BTU/h +> +> 1000 Wh = 1 kWh = 3413 BTU + +So with our idle server example running at 100W, we have 0.1 kW. + +> 100 W = 0.1 kW + +We then multiply our kW by the BTU/h factor **3413** to obtain the result in BTU/h. Here we have 341.3 BTU/h: + +> 0.1 kW * 3413 = 341.3 BTU/h + +Say you have 5 servers with this same configuration. It means you have + +> (# of servers) * (BTU/h per server) = Total BTU/h + +> 5 * 341.3 = 1706.5 BTU/h + +Thus, a 2000 BTU/h air conditioner would be able to compensate for the heat when your servers are at idle. + +> Note that in general for air conditioners, it will often be written BTU instead of BTU/h as a shorthand. + + +Please take note that this does not take into account the energy needed to cool down your environment. You'd need to take into consideration **the heat of the servers and the general heat of your environment** to figure out how much BTU your AC needs on the hottest days of the summer. + +### Taking Utilization Into Account + +But then, what happens at cultivation? Well, say your server needs 400W of power when it's being fully cultivated by some lively ThreeFold Users of the New Internet. In this case, we would say that 400 W is the power consumption at **full load**. + +As we started with 100 W, and we now have 400 W, it means that you'd need four times the amount of BTU/h. + +Here is how to calculate this for any other full load/idle configuration. + +> Full-load / Idle Ratio = Full Load W / Idle W + +> 4 = 400 W / 100 W + +The BTU/h needed in cultivation would be + +> (Full-Load / Idle Ratio) * Idle BTU/h needed = Full Load BTU/h + +> 4 * (1706.5 BTU/h at Idle) = 6826 BTU/h at Full Load + +Thus, you would need 6826 BTU/h from the AC unit for 5 servers each running at 400W. In that case, an 8000 BTU/h AC unit would be sufficient. 
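The idle and full-load arithmetic above can be sketched in Python. This is a hedged illustration: the 100 W idle and 400 W full-load figures are the running example's assumptions; only the 3413 BTU/h-per-kW conversion factor is a physical constant.

```python
# Convert server power draw into the cooling load (BTU/h) an AC unit must match.
KW_TO_BTUH = 3413  # 1 kW of heat output = 3413 BTU/h

def btu_per_hour(watts_per_server: float, num_servers: int = 1) -> float:
    """Heat output in BTU/h for num_servers servers drawing watts_per_server each."""
    return (watts_per_server / 1000) * KW_TO_BTUH * num_servers

idle_load = btu_per_hour(100, num_servers=5)   # 5 servers idling at 100 W each
full_load = btu_per_hour(400, num_servers=5)   # same 5 servers at 400 W full load

print(round(idle_load, 1))  # 1706.5
print(round(full_load, 1))  # 6826.0
```

Swapping in your own wattages and server count gives the minimum AC rating for your setup, before accounting for environment heat.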
If your environment would typically need 4000 BTU/h to cool the room, you'd need about a 12000 BTU/h AC unit for the whole setup. + +> If: BTU/h needed < BTU/h AC Unit, Then: AC Unit is OK for TF farming at full load. + + + +Now you can have a better idea of how much BTU/h is necessary for your AC unit. Of course, this can be a useful piece of data to incorporate in your simulation of Revenue/Cost farming. + +### The General BTU/h Equation + +The **general equation** would then be: + +> Server Power in kW at Full Load * 3413 * Number of Servers = Total Maximum BTU/h needed per ThreeFold Farm + + + +As another example, 7 servers using 120 W of power at idle would need: + +> 0.12 * 3413 * 7 = 2866.92 BTU/h + +During cultivation, these 7 servers might use 480 W. This would be: + +> 0.48 * 3413 * 7 = 11467.68 BTU/h + +To be sure everything's OK, this setup would need a 12000 BTU/h AC unit to compensate for the heat generated by the ThreeFold Farm during full cultivation. This example considers the environment heat to be negligible. + +> 11467.68 < 12000 --> 12K BTU/h AC Unit is OK for farm + + +That's it! It isn't any more complicated than that: straightforward mathematics and some judgment. + +Now, let's compute the costs of running all this! + + + +## Ensure Proper Relative Humidity + +To ensure that the relative humidity in your server room stays within a proper range, look in your server's user manual to know the proper range of relative humidity your server can handle. If necessary, use a hygrometer to measure relative humidity and make sure it stays within an acceptable range for your 3Nodes. + +Depending on your geographical location and your current situation, it could be interesting to consider having an AC unit equipped with a dehumidifier. Read your servers' manual to check the proper relative humidity range and set the unit accordingly. 
The maximum/minimum temperature and relative humidity a 3Node server can handle will depend on the specific server/computer you are using. You should check the server's technical guide/manual to get the proper information. The following is an example. + +We will use here the Dell R720 as an example since it is a popular 3Node choice. In this case, we use the R720's [Technical Guide](https://downloads.dell.com/manuals/all-products/esuprt_ser_stor_net/esuprt_poweredge/poweredge-r720_reference-guide_en-us.pdf) as reference. + +For the R720, between 35˚C and 40˚C (or 95˚F and 104˚F), with 5% to 85% relative humidity, operation is limited to less than 10% of annual operating hours (around 36 days per year), and between 40˚C and 45˚C (or 104˚F and 113˚F), with 5% to 90% relative humidity, it's limited to less than 1% of annual operating hours (around 3.6 days per year). All this assumes that there is no direct sunlight. + +From 10˚C to 35˚C (thus from 50˚F to 95˚F), with relative humidity from 10% to 80%, it's considered standard operating temperature. + +This can give you a good idea of the conditions a 3Node can handle, but make sure you verify with your specific server's manual. + +## Ensure Proper Air Changes per Hour + +To ensure that the air change rate is optimal in your 3Node servers' room, and depending on your current situation, it can be recommended to ventilate the server room in order to disperse or evacuate excess heat and humidity. In those cases, the ventilation flow will be set depending on the air changes per hour (ACPH) needed. Note that the [ASHRAE](https://www.ashrae.org/File%20Library/Technical%20Resources/Standards%20and%20Guidelines/Standards%20Addenda/62-2001/62-2001_Addendum-n.pdf) recommends from 10 to 15 ACPH for a computer room. + +> Note: A good AC unit will be able to regulate the heat and the relative humidity as well as ensure proper air changes per hour. 
\ No newline at end of file diff --git a/collections/farmers/farming_optimization/farming_costs.md b/collections/farmers/farming_optimization/farming_costs.md new file mode 100644 index 0000000..f10e6a8 --- /dev/null +++ b/collections/farmers/farming_optimization/farming_costs.md @@ -0,0 +1,206 @@ +

Calculate the Farming Costs: Power, Internet and Total Costs

+ +

Table of Contents

+ +- [Calculate the Total Electricity Cost of Your Farm](#calculate-the-total-electricity-cost-of-your-farm) +- [Calculate the Proper Bandwidth Needed for Your Farm](#calculate-the-proper-bandwidth-needed-for-your-farm) + - [The Minimum Bandwidth per 3Node Equation](#the-minimum-bandwidth-per-3node-equation) + - [Cost per Month for a Given Bandwidth](#cost-per-month-for-a-given-bandwidth) +- [Calculate Total Cost and Revenue](#calculate-total-cost-and-revenue) + - [Check Revenue with the ThreeFold Simulator](#check-revenue-with-the-threefold-simulator) + - [Economics of Farming](#economics-of-farming) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Calculate the Total Electricity Cost of Your Farm + +The total electricity cost of your farm is the total power used by your system times the price you pay for each kWh of power. + +> Total electricity cost = Total Electricity in kWh * Cost per kWh + +> Total Electricity in kWh = 3Nodes' electricity consumption * Number of 3Nodes + Cooling system electricity consumption + +With our example, we have 5 servers running at 400 W at Full Load and we have a 12K BTU unit that is consuming on average 1000 W. + +We would then have: + +> 5 * 400 W + 1000 W = 3000 W = 3 kW + +To get the kWh per day we simply multiply by 24. + +> kW * (# of hours per day) = daily kWh consumption + +> 3 kW * 24 = 72 kWh / day + +We thus have 72 kWh per day. For 30 days, this would be + +> kWh / day * (# of days in a month) = kWh per month + +> 72 * 30 = 2160 kWh / month + +At a kWh price of 0.10$ USD, we have a cost of 216 $USD per month for the electricity bill of our ThreeFold farm. 
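The electricity arithmetic above can be sketched as a small helper. The wattages, the 0.10 $USD/kWh rate and the 30-day month are the example's assumptions, not fixed values.

```python
# Monthly electricity use and cost for a farm: servers plus cooling.
def monthly_electricity(server_watts: float, num_servers: int,
                        cooling_watts: float, usd_per_kwh: float,
                        days: int = 30) -> tuple:
    """Return (kWh per month, cost in USD per month)."""
    total_kw = (server_watts * num_servers + cooling_watts) / 1000
    kwh = total_kw * 24 * days  # kW * hours/day * days/month
    return kwh, kwh * usd_per_kwh

kwh, cost = monthly_electricity(400, 5, 1000, 0.10)
print(kwh, round(cost, 2))  # 2160.0 216.0
```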
+ +> kWh / month of the farm * kWh Cost = Electricity Bill per month for the farm + +> 2160 * 0.1 = 216$ USD / month for electricity bills + + +## Calculate the Proper Bandwidth Needed for Your Farm + +The bandwidth needed for a given 3Node is not yet set in stone and you are welcome to participate in the ongoing [discussion on this subject](https://forum.threefold.io/t/storage-bandwidth-ratio/1389) on the ThreeFold Forum. + +In this section, we will give general guidelines. The goal is to have a good idea of what constitutes a proper bandwidth available for a given amount of resources utilized on the ThreeFold Grid. + +Starting with a minimum of 1 mbps per Titan, which is 1 TB SSD and 32 GB RAM, we note that this is the lowest limit that gives the opportunity for the most people possible to join the ThreeFold Grid. That being said, we could set that 10 mbps is an acceptable upper limit for 1 TB SSD and 64 GB of RAM. + +Those numbers are empirical and more information will be shared in the future. The ratio 1TB SSD/64GB RAM is in tune with the optimal TFT rewards ratio. It is thus logical to think that farmers will build 3Nodes based on this ratio. Giving general bandwidth guidelines based on this ratio unit could thus be efficient for the current try-and-learn situation. + +### The Minimum Bandwidth per 3Node Equation + + +Here we explore some equations that can give farmers a general idea of the bandwidth needed for their farms. As stated, this is not yet set in stone and the TFDAO will need to discuss and clarify those notions. 
+ +Here is a general equation that gives you a good idea of a correct bandwidth for a 3Node: + +> min Bandwidth per 3Node (mbps) = k * max((Total SSD TB / 1 TB),(Total Threads / 8 Threads),(Total GB / 64 GB)) + k * (Total HDD TB / 2) + +Setting k = 10 mbps, we have: + +> min Bandwidth per 3Node (mbps) = 10 * max((Total SSD TB / 1 TB),(Total Threads / 8 Threads),(Total GB / 64 GB)) + 10 * (Total HDD TB / 2) + +As an example, a Titan, with 1TB SSD, 8 Threads and 64 GB of RAM, would need 10 mbps: + +> 10 * max(1, 1, 1) = 10 * 1 = 10 + +With the last portion of the equation, we can see that for each additional 1TB of HDD storage, you would need to add 5 mbps of bandwidth. + + +Let's take a big server as another example. Say we have a server with 5TB SSD, 48 threads and 384 GB of RAM. We would then need 60 mbps of bandwidth for each of these 3Nodes: + +> 10 * max((5/1), (48/8), (384/64)) = 10 * max(5,6,6) = 10 * 6 = 60 + +This server would need 60 mbps minimum to account for full TF Grid utilization. + +You can easily scale this equation if you have many 3Nodes. + + + +Let's say you have a 1 gbps bandwidth from your Internet Service Provider (ISP). How many of those 3Nodes could your farm have? + +> Floor(Total available bandwidth / Bandwidth needed per 3Node) = Max servers possible + +With our example we have: + +> 1000 / 60 = 16.66... = 16 + +We note that the function Floor rounds down to the nearest integer. + +Thus, a 1 gbps bandwidth farm could have 16 3Nodes, each with 5TB SSD, 48 threads and 384 GB of RAM. + + + +In this section, we used **k = 10 mbps**. If you follow those guidelines, you will most probably have a decent bandwidth for your ThreeFold farm. For the time being, the goal is to have farmers building ThreeFold farms and scaling them reasonably with their available bandwidth. + +Stay tuned for official bandwidth parameters in the future. 
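The guideline equation above, with k = 10 mbps, can be written as a small function. This is a sketch: k and the resource ratios are the empirical values quoted above and may still be revised by the DAO.

```python
import math

# Minimum bandwidth guideline (mbps) for one 3Node, with k = 10 mbps.
def min_bandwidth_mbps(ssd_tb, threads, ram_gb, hdd_tb=0, k=10):
    return k * max(ssd_tb / 1, threads / 8, ram_gb / 64) + k * (hdd_tb / 2)

def max_nodes(total_mbps, per_node_mbps):
    """How many identical 3Nodes a given uplink can support (Floor)."""
    return math.floor(total_mbps / per_node_mbps)

print(min_bandwidth_mbps(1, 8, 64))    # Titan: 10.0 mbps
print(min_bandwidth_mbps(5, 48, 384))  # big server: 60.0 mbps
print(max_nodes(1000, 60))             # 16 such nodes on a 1 gbps uplink
```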
+ + + +### Cost per Month for a Given Bandwidth + +Once you know the general bandwidth needed for your farm, you can check with your ISP the price per month and take this into account when calculating your monthly costs. + +Let's take the example we used with 5 servers at 400 W at Full Load. Let's say these 5 servers have the same parameters we used above. We then need 60 mbps per 3Node. This means we need 300 mbps. For the sake of our example, let's say this is around 100$ USD per month. + + +## Calculate Total Cost and Revenue + + +As the TFT price is fixed for 60 months when you connect your 3Node for the first time on the TF Grid, we will use the period of 60 months, or 5 years, to calculate the total cost and revenue. + +The total cost is equal to: + +> Total Cost = Initial investment + 60 * (electricity + Internet costs per month) + +In our example, we can state that we paid 1500$ USD for each server and that they each generate 3000 TFT per month, with an entry price of 0.08$ USD per TFT. + +The monthly costs are + +> 144$ for the electricity bill +> +> 100$ for the Internet bill +> +> Total: 244$ monthly cost for electricity and Internet + +The revenues are + +> Revenues per month = Number of 3Nodes * TFT farmed per 3Node * Price TFT Sold + +In this example, we have 5 servers generating 3000 TFT per month each at 0.08$ USD per TFT: + +> 5 * 3000 * 0.08$ = 1200$ + +The net revenue per month is thus equal to + +> Net Revenue = Gross revenue - Monthly cost + +We thus have + +> 1200$ - 244$ = 956$ + +This means that we generate a net profit of 956$ per month, without considering the initial investment of building the 3Nodes for the farm. + +In the previous AC example, we calculated that a minimum of 12K BTU was needed for the AC system. Let's say that this would mean buying a 350$ USD 12K BTU AC unit. + +The initial cost is the cost of all the 3Nodes plus the AC system. 
+ +> Total initial investment = Number of 3Nodes * Cost of 3Node + Cost of AC system + +In this case, we have: + +> 5 * 1500 + 350 = 7850 $ + +Thus, a more realistic ROI would be: + +> Total initial investment / Net Revenue per Month = ROI in months + +In our case, we would have: + +> 7850$ / 956$ = Ceiling(8.211...) = 9 + +where the function Ceiling rounds up to the next integer. + +Then within 9 months, this farm would have paid for itself and from then on, it would generate a positive net revenue of 956$ per month. + +We note that this takes into consideration that we are using the AC system 24/7. This would surely not be the case in real life. This means that the real ROI would be even better. It is a common practice to do estimates with stricter parameters. If you predict being profitable with strict parameters, you will surely be profitable in real life, even when "things" happen and not everything goes as planned. As always, this is not financial advice. + +We recall that in the section [Calculate the ROI of a DIY 3Node](./calculate_roi.md), we found a simpler ROI of 6.25 months, say 7 months, that wasn't taking into consideration the additional costs of Internet and electricity. We now have a more realistic ROI of 9 months based on a fixed TFT price of 0.08$ USD. You will need to use the equations and check with your current TF farm and 3Nodes, as well as the current TFT market price. + + +### Check Revenue with the ThreeFold Simulator + +To know how much TFT you will farm per month for a given 3Node, the easiest route is to use the [ThreeFold Simulator](https://simulator.grid.tf/). You can run predictions over 60 months, as the TFT price is locked at the TFT price when you first connect your 3Node, and this for 60 months. 
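Putting the cost, revenue and ROI walkthrough above together as a short script; all figures are the example's assumptions (5 nodes at 1500$, a 350$ AC unit, 3000 TFT per node per month at 0.08$, 144$ electricity and 100$ Internet per month), not fixed values.

```python
import math

# All figures below are the example's assumed values from the text.
nodes, node_cost, ac_cost = 5, 1500, 350  # 5 servers at 1500$, one 350$ AC unit
tft_per_node, tft_price = 3000, 0.08      # TFT farmed per node/month, entry price
monthly_cost = 144 + 100                  # electricity + Internet, in $USD

initial_investment = nodes * node_cost + ac_cost
gross = nodes * tft_per_node * tft_price  # monthly gross revenue
net = gross - monthly_cost                # monthly net revenue
roi_months = math.ceil(initial_investment / net)

print(initial_investment, round(net, 2), roi_months)  # 7850 956.0 9
```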
+ +To know the details of the calculations behind this simulator, you can read [this documentation](https://library.threefold.me/info/threefold#/tfgrid/farming/threefold__farming_reward). + + +### Economics of Farming + +As a brief synthesis, the following equations are used to calculate the total revenues and costs of your farm. + +``` +- Total Monthly Cost = Electricity Cost + Internet Cost +- Total Electricity Used = Electricity per 3Node * Number of 3Nodes + Electricity for Cooling +- Total Monthly Revenue = TFT farmed per 3Node * Number of 3Nodes * TFT price when sold +- Initial Investment = Price of farm (3Nodes) + Price of AC system +- Total Return on Investment = (60 * Monthly Revenue) - (60 * Monthly Cost) - Initial Investment +``` + + +## Questions and Feedback + +This section constitutes a quick synthesis of the costs and revenues when running a ThreeFold Farm. As always, do your own research and don't hesitate to visit the [ThreeFold Forum](https://forum.threefold.io/) or the [ThreeFold Telegram Farmer Group](https://t.me/threefoldfarmers) if you have any questions. diff --git a/collections/farmers/farming_optimization/farming_optimization.md b/collections/farmers/farming_optimization/farming_optimization.md new file mode 100644 index 0000000..eaf0693 --- /dev/null +++ b/collections/farmers/farming_optimization/farming_optimization.md @@ -0,0 +1,14 @@ +

Farming Optimization

+ +The section [Build a 3Node](../3node_building/3node_building.md) covered the notions necessary to build a DIY 3Node server. The following section will give you additional information with the goal of optimizing your farm while also being able to plan ahead for the costs in terms of energy and capital. We also cover how to set up a GPU node and more. + +

Table of Contents

+ +- [GPU Farming](../3node_building/gpu_farming.md) +- [Set Additional Fees](./set_additional_fees.md) +- [Minting Receipts](../3node_building/minting_receipts.md) +- [Minting Periods](./minting_periods.md) +- [Room Parameters](./farm_room_parameters.md) +- [Farming Costs](./farming_costs.md) +- [Calculate Your ROI](./calculate_roi.md) +- [Farming Requirements](./farming_requirements.md) \ No newline at end of file diff --git a/collections/farmers/farming_optimization/farming_requirements.md b/collections/farmers/farming_optimization/farming_requirements.md new file mode 100644 index 0000000..87bc509 --- /dev/null +++ b/collections/farmers/farming_optimization/farming_requirements.md @@ -0,0 +1,28 @@ +

Farming Requirements

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Uptime Requirements](#uptime-requirements) + - [Farmerbot Consideration](#farmerbot-consideration) + +--- + +## Introduction + +This section contains information on the farming requirements. + +## Uptime Requirements + +To be eligible for proof-of-capacity farming rewards, farmers need to ensure that their nodes have a minimum uptime per minting period. + +- 95% uptime requirement for DIY nodes + - This means that nodes have 36 hours of allowed downtime per month +- 98% uptime requirement for certified nodes + - This means that nodes have 14.4 hours of allowed downtime per month + +A minting period is 720 hours. + +### Farmerbot Consideration + +For a node managed by the Farmerbot, minting counts standby time as uptime, as long as the node is healthy. If the node fails to wake within 24 hours, those 24 hours are deducted. This means that if the node misses two different wakeups, it will not have sufficient uptime for this minting period. This applies to both certified and DIY nodes. \ No newline at end of file diff --git a/collections/farmers/farming_optimization/minting_periods.md b/collections/farmers/farming_optimization/minting_periods.md new file mode 100644 index 0000000..851012d --- /dev/null +++ b/collections/farmers/farming_optimization/minting_periods.md @@ -0,0 +1,56 @@ +

Minting Periods

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Minting Period Length](#minting-period-length) +- [2023 Minting Periods](#2023-minting-periods) +- [2024 Minting Periods](#2024-minting-periods) + +*** + +## Introduction + +We discuss the length and the frequencies of the ThreeFold farming minting periods. + +## Minting Period Length + +Each minting period has: 2630880 seconds = 43848 minutes = 730.8 hours. + +## 2023 Minting Periods + +The minting periods for the 12 months of 2023 are the following: + +| Month | Start of the Minting Period | End of the Minting Period | +|----------|---------------------------------|---------------------------------| +| Jan 2023 | December 31, 2022 at 4\:32\:40 am | January 30, 2023 at 3\:20\:40 pm | +| Feb 2023 | January 30, 2023 at 3\:20\:40 pm | March 2, 2023 at 2\:08\:40 am | +| Mar 2023 | March 2, 2023 at 2\:08\:40 am | April 1, 2023 at 12\:56\:40 pm | +| Apr 2023 | April 1, 2023 at 12\:56\:40 pm | May 1, 2023 at 11\:44\:40 pm | +| May 2023 | May 1, 2023 at 11\:44\:40 pm | June 1, 2023 at 10\:32\:40 am | +| Jun 2023 | June 1, 2023 at 10\:32\:40 am | July 1, 2023 at 9\:20\:40 pm | +| Jul 2023 | July 1, 2023 at 9\:20\:40 pm | August 1, 2023 at 8\:08\:40 am | +| Aug 2023 | August 1, 2023 at 8\:08\:40 am | August 31, 2023 at 6\:56\:40 pm | +| Sep 2023 | August 31, 2023 at 6\:56\:40 pm | October 1, 2023 at 5\:44\:40 am | +| Oct 2023 | October 1, 2023 at 5\:44\:40 am | October 31, 2023 at 4\:32\:40 pm | +| Nov 2023 | October 31, 2023 at 4\:32\:40 pm | December 1, 2023 at 3\:20\:40 am | +| Dec 2023 | December 1, 2023 at 3\:20\:40 am | December 31, 2023 at 2\:08\:40 pm | + +## 2024 Minting Periods + +The minting periods for the 12 months of 2024 are the following: + +| Month | Start of the Minting Period | End of the Minting Period | +|----------|---------------------------------|---------------------------------| +| Jan 2024 | December 31, 2023 at 14\:08\:40 | January 31, 2024 at 00\:56\:40 | +| Feb 2024 | January 31, 2024 at 00\:56\:40 | March 
1, 2024 at 11\:44\:40 | +| Mar 2024 | March 1, 2024 at 11\:44\:40 | March 31, 2024 at 22\:32\:40 | +| Apr 2024 | March 31, 2024 at 22\:32\:40 | May 1, 2024 at 09\:20\:40 | +| May 2024 | May 1, 2024 at 09\:20\:40 | May 31, 2024 at 20\:08\:40 | +| Jun 2024 | May 31, 2024 at 20\:08\:40 | July 1, 2024 at 06\:56\:40 | +| Jul 2024 | July 1, 2024 at 06\:56\:40 | July 31, 2024 at 17:44\:40 | +| Aug 2024 | July 31, 2024 at 17\:44\:40 | August 31, 2024 at 04\:32\:40 | +| Sep 2024 | August 31, 2024 at 04\:32\:40 | September 30, 2024 at 15\:20\:40 | +| Oct 2024 | September 30, 2024 at 15\:20\:40 | October 31, 2024 at 02\:08\:40 | +| Nov 2024 | October 31, 2024 at 02\:08\:40 | November 30, 2024 at 12\:56\:40 | +| Dec 2024 | November 30, 2024 at 12\:56\:40 | December 30, 2024 at 23\:44\:40 | diff --git a/collections/farmers/farming_optimization/set_additional_fees.md b/collections/farmers/farming_optimization/set_additional_fees.md new file mode 100644 index 0000000..71d9958 --- /dev/null +++ b/collections/farmers/farming_optimization/set_additional_fees.md @@ -0,0 +1,31 @@ +

Set Additional Fees

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Steps](#steps) +- [TFT Payments](#tft-payments) +- [Dedicated Nodes Notice](#dedicated-nodes-notice) + +*** + +## Introduction + +Farmers can set additional fees for their 3Nodes on the [TF Dashboard](https://dashboard.grid.tf/). By doing so, users will then be able to [reserve the 3Node and use it as a dedicated node](../../dashboard/deploy/node_finder.md#dedicated-nodes). +This can be useful for farmers who provide additional value with their 3Nodes, e.g. a GPU card and/or high-quality hardware. + +## Steps + +Here are the steps to [set additional fees](../../dashboard/farms/your_farms.md#extra-fees) for a 3Node. + +* On the Dashboard, go to **Farms** -> **Your Farms** +* Under the section **Your Nodes**, locate the 3Node and click **Set Additional Fees** under **Actions** +* Set a monthly fee (in USD) and click **Set** + +## TFT Payments + +When a user reserves your 3Node, you will receive TFT payments once every 24 hours. These TFT payments will be sent to the TFChain account of your farm's twin. + +## Dedicated Nodes Notice + +Note that while any 3Node that has no workload can be reserved by a TF user as a dedicated node, when a farmer sets additional fees on a 3Node, this 3Node automatically becomes a dedicated node. For a user to run workloads on this 3Node, the 3Node must then be reserved, i.e. rented as a dedicated node. 
\ No newline at end of file diff --git a/collections/farmers/img/farming_30.png b/collections/farmers/img/farming_30.png new file mode 100644 index 0000000..d810d13 Binary files /dev/null and b/collections/farmers/img/farming_30.png differ diff --git a/collections/farming/.collection b/collections/farming/.collection new file mode 100644 index 0000000..e69de29 diff --git a/collections/farming/_beta/planet_positive_farming.md b/collections/farming/_beta/planet_positive_farming.md new file mode 100644 index 0000000..b7736e1 --- /dev/null +++ b/collections/farming/_beta/planet_positive_farming.md @@ -0,0 +1,18 @@ +# Planet Positive Farming + +The ThreeFold Grid (“Grid”) aims to become a carbon_negative grid by the end of 2022. ThreeFold Farmers (“Farmers”) will be offsetting their carbon emissions three times and, through their Farming process, will be directly involved in initiatives with the goal of regenerating the earth and enhancing the life of local communities. ThreeFold will therefore partner with an organization specialized in climate education that involves students and teachers in its quest to fight against climate change. Users of the ThreeFold Grid will thus purchase carbon-negative internet capacity with ThreeFold Tokens ("TFT"). + +Part of the ThreeFold Token from the farming reward will be allocated for energy compensation and sent to a pool dedicated to the above project. + +The amount will depend on: +- The types of 3Nodes run by the Farmer, as each type of server varies in terms of power utilization +- The location of the 3Node, as each country has a different electricity production structure. + +More variables will be taken into account to ensure the reliability of this voluntary carbon offset. + +> More information will be communicated to the community soon. Stay tuned. 
+ + + + + diff --git a/collections/farming/certified/certified_farming.md b/collections/farming/certified/certified_farming.md new file mode 100644 index 0000000..94a3127 --- /dev/null +++ b/collections/farming/certified/certified_farming.md @@ -0,0 +1,5 @@ +# Certified Farming + +!!!include:farming_certification_benefits + +!!!def alias:certified_farming \ No newline at end of file diff --git a/collections/farming/certified/certified_node.md b/collections/farming/certified/certified_node.md new file mode 100644 index 0000000..8536d20 --- /dev/null +++ b/collections/farming/certified/certified_node.md @@ -0,0 +1,33 @@ +![](img/farming_solutions.jpg) + +## Certified Node + +A Certified Node is a node which comes BIOS locked and does not allow the owner to change how the node boots. + +This makes it impossible for the node owner to make changes to the operating system and ensures that the node will run the right certified version of Zero-OS. + +The Titan V2.1 node is a certified node. Certified nodes are eligible for more [farming rewards](farming_reward). + +### Requirements + +- Node delivered by a certified hardware vendor or through the ThreeFold website. +- The farmer who owns a certified node will have to sign specifically created terms and conditions + - not done yet, will be part of tfgrid 3.0 launch, see [here](farming_certification_terms_conditions). +- For 2.0 Farmers who started in 2020 or before: agreement about vesting, see [vesting_overview](vesting_overview). + +### More Info + +- [Certified Farming](certified_farming) + +### More Technical Details + +- The BIOS gets locked. +- The BIOS gets configured to use TPM2 + - more info about tpm2 [here](https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_raj.pdf). +- TPM2 is a security implementation on the motherboard which allows Zero-OS to securely store private keys. This mechanism is used to identify Zero-OS nodes and make sure they are registered as certified nodes. 
+- If a user resets the BIOS, the TPM private keys are gone as well. +- Starting with TFGrid 3.0, we will use this TPM feature to verify and validate some checks done from tfchain. +- A certified farmer is required to use certified_nodes. + + +!!!def alias:certified_node diff --git a/collections/farming/certified/farming_certification_benefits.md b/collections/farming/certified/farming_certification_benefits.md new file mode 100644 index 0000000..bf29f18 --- /dev/null +++ b/collections/farming/certified/farming_certification_benefits.md @@ -0,0 +1,17 @@ +## Certified Farming Benefits + +This certification program has many benefits for the TFGrid user, for the world and for the farmer itself. + +| For The Planet | For The Cloud User | For The Farmer | +| ------------------------------------------------------ | ---------------------------------------------------- | ------------------------------------------------------ | +| More Green | More Secure | Faster Adoption (easier to get to 30% threshold) | +| **certified carbon neutral** | More Defined Legal Framework | More Farming Rewards (income) | +| Better Global Distribution | More Uptime (SLA) | Lower Operational Cost | +| Sovereignty (legal framework) | Better Performance | Higher Credibility | +| More Protection for Users | Possibility For Support | Custom Pricing CU/SU/NU is possible (*) | +| **[more info](farming_certification_benefits_planet)** | **[more info](farming_certification_benefits_user)** | **[more info](farming_certification_benefits_farmer)** | + +(*) in collaboration with TFTech, and planned for earliest H1 2022. 
+ +- [farming certification requirements](farming_certified_requirements) +- [certified nodes](certified_node) \ No newline at end of file diff --git a/collections/farming/certified/farming_certification_benefits_farmer.md b/collections/farming/certified/farming_certification_benefits_farmer.md new file mode 100644 index 0000000..5629d3f --- /dev/null +++ b/collections/farming/certified/farming_certification_benefits_farmer.md @@ -0,0 +1,63 @@ +![](img/grid_banner.jpg) + +## For The Farmer + +### Higher Cloud Unit Reward = More Income + +A certified farmer gets more farming_rewards because a certified farmer needs to adhere to higher service level agreements and buy certified farming solutions or certify their existing 3Nodes. + +### TF Tech Support + +Direct defect and certified build support for Zero-OS from the software creators for the tfgrid_primitives on TFGrid. + +### Lower Operational Cost + +Certified farms can use the TFTech power management solution which makes sure that 3Nodes are only powered on when there is a need for it. This can save huge amounts of electricity. + +### Easier to get to the minimal required 30% utilization + +The tokens farmed for a 3Node are locked up in a staking pool until the 3Node gets to 30% utilization. + +This is to make sure that people cannot add hardware to the grid which cannot be used and still get farming rewards. + +ThreeFold needs to make sure that there is no abuse and also that 3Nodes are not brought online just for farming, which would burn energy for no reason. + +ThreeFold & ThreeFold Tech will do a lot of promotion and find channel_partners to get the TFGrid to grow as fast as possible and get capacity used. + +It's up to ThreeFold to make sure that certified capacity gets deployed where it is needed first. + +All of this leads to much faster utilization of the TF_Farm IT capacity, which results in the TF_Farmer getting access to their farmed tokens faster. 
+
+### Higher Credibility Because Of Certification
+
+Security advantage, as the farm, location and nodes have been thoroughly checked and documented (the secure boot process guarantees the most stable (best) Zero-OS version). This information gets checked by ThreeFold, which means the TFGrid user will more likely choose a certified farmer.
+
+### Custom Network (\*)
+
+Ability to implement custom networking based on VxLAN or many other networking technologies.
+
+This is needed for deployments in hybrid or even full private mode, where customers have very specific requirements around networking.
+
+### Farmer Bot (\*)
+
+A farmer bot will be made available which makes it easier for a certified farmer to manage their farm in relation to
+
+- power management
+- management of farmed tokens
+- network management tools
+- lockout of bad actors (e.g. deny access for hackers or other bad actors)
+- ...
+
+### Monitoring Integration (\*)
+
+Possibility to integrate custom monitoring solutions.
+
+### Reputation System (\*)
+
+Farmers and TFGrid users are rated by a reputation system.
+These reputation scores will be visible on the TF_Chain.
+
+This allows a farmer to see that the TFGrid users on the TF_Farm can be trusted.
+
+> (\*): planned earliest Q4 2021
+
diff --git a/collections/farming/certified/farming_certification_benefits_planet.md b/collections/farming/certified/farming_certification_benefits_planet.md
new file mode 100644
index 0000000..3d28c69
--- /dev/null
+++ b/collections/farming/certified/farming_certification_benefits_planet.md
@@ -0,0 +1,20 @@
+![](img/grid_banner.jpg)
+
+## For The Planet
+
+### Green
+
+Certified farming capacity will become carbon neutral by end 2021.
+This happens by using less energy and offsetting the remainder of energy usage by buying carbon credits from a climate change action program called TAG = Take Action Global.
+
+### Inclusive
+
+Everyone gets access to this network of capacity everywhere.
+
+Thanks to certification, it's easier to make sure that the TF_Farmer complies with all requirements.
+
+### Sovereignty
+
+It's important to deliver sovereign solutions to the user and to countries.
+Certification allows ThreeFold to guarantee more requirements and have more visibility into the sovereignty requirements.
+
diff --git a/collections/farming/certified/farming_certification_benefits_user.md b/collections/farming/certified/farming_certification_benefits_user.md
new file mode 100644
index 0000000..980d57c
--- /dev/null
+++ b/collections/farming/certified/farming_certification_benefits_user.md
@@ -0,0 +1,52 @@
+![](img/grid_banner.jpg)
+
+## For The ThreeFold Grid User
+
+Users of the ThreeFold Grid benefit from using a farmer who is certified.
+
+### More information about the farm
+
+Whoever wants to use capacity from the grid gets more information about
+
+- connection capabilities to the internet
+- quality of hardware used
+- service level agreements
+- location of the TF_Farm (country, ...)
+- protection mechanisms around fire, water, ...
+
+This information is important for the TFGrid user to select where to host their IT workloads.
+
+TFTech will validate this information.
+
+### More uptime
+
+Being certified means that the farmer will likely have more uptime compared to a non-certified farmer.
+
+### More security
+
+As part of the certification process, TFTech will make sure that the 3Node has secure boot procedures to allow for more security. The Farmer needs to sign an agreement with ThreeFold Foundation where they commit to a set of standards and security requirements.
+
+### More legal protection
+
+Each certified farmer has to adhere to a set of terms & conditions which protect the TFGrid User.
+
+The terms & conditions describe
+
+- privacy & security protections
+- compliance with legal requirements
+- protection against abuse
+
+### Continuation protection
+
+The TF_Farmer has to promise to keep the farm operational until the end date specified on the farming certificate, which gets registered on TFChain.
+
+- [more info about the requirements: see here](farming_certified_requirements)
+
+### TFGrid User Can Have Support
+
+All support inquiries will be handled through blocks of 15 minutes, which are paid for in ThreeFold Support Tokens (TFTS).
+
+Any TFGrid user can ask for support, but only for certified farms.
+
+
+
diff --git a/collections/farming/certified/farming_certification_terms_conditions.md b/collections/farming/certified/farming_certification_terms_conditions.md
new file mode 100644
index 0000000..4ddf2f4
--- /dev/null
+++ b/collections/farming/certified/farming_certification_terms_conditions.md
@@ -0,0 +1,11 @@
+- sign the terms and conditions document with threefold_dubai
+  - we are in the process of formalizing this; it will be done at the latest before the end of 2022.
+- the farmer agrees and acknowledges the following info:
+  - TFGrid operates as a DAO with the help of human councils. This means that no organization manages the operations of the TFGrid.
+  - TFTech, as subcontractor for threefold_dubai, delivers software support for the tfgrid_primitives (only defect support and certified builds).
+  - All information required to be a farmer can be found in our knowledgebase: https://library.threefold.me/
+  - TFT rewards (farming) are the result of the blockchain as operated by the consensus3 concept. If the SLA is not achieved, TFT will NOT be rewarded that month.
+  - TFT rewards are done in line with the [farming reward document](farming_reward).
+  - Measurement of the SLA (see below) is done by the consensus3 engine.
+
+
diff --git a/collections/farming/certified/farming_certified_requirements.md b/collections/farming/certified/farming_certified_requirements.md
new file mode 100644
index 0000000..887f0e7
--- /dev/null
+++ b/collections/farming/certified/farming_certified_requirements.md
@@ -0,0 +1,69 @@
+![](img/grid_banner.jpg)
+
+## Certified Farming Requirements
+
+### Individual Certified Farmer
+
+- Certified Farms are made up of certified_nodes
+- up to 4 certified nodes
+- home or office location
+
+#### Uptime and Network Requirements
+
+- 97% uptime is accepted in home farming situations
+- 1 IP feed (consumer provider)
+- 1 public IP address; NAT allowed
+- enough bandwidth to allow the utilization of the storage/archive (see below)
+- good enough latency (low latency = network performance)
+
+### Professional Certified Farmer
+
+- Certified Farms are made up of certified_nodes
+- more than 4 certified nodes
+- datacenter location
+
+#### Uptime and Network Requirements
+
+- 99.5% uptime
+- minimum bandwidth as required for the workloads hosted on the farm
+- network latency within the maximum required by the hosted workloads
+- more than 1 internet connection (multiple IP feeds)
+- enough IPv4 addresses
+- at least 1 class C IPv4 address block for network farmers
+- enough bandwidth to allow the utilization of the storage/archive (see below)
+- good enough latency (low latency = network performance)
+- [install your network in line with ThreeFold Requirements](tfgrid_networking)
+
+(*) = in case of datacenter or commercial deployment
+
+#### Redundancy Requirements
+
+- protection against fire & water damage
+- enough access to power
+- redundant power systems
+
+#### Terms and Conditions need to be signed
+
+!!!include:farming_certification_terms_conditions
+
+
+### Bandwidth Requirement: archive/storage use case example
+
+A storage use case needs a lot of bandwidth to allow the storage nodes to be filled and also to allow its customers to download the information.
+
+It's the obligation of the farmer to make sure that enough bandwidth is available. We will measure this by doing random upload & download tests to the storage systems.
+
+It should always be possible to have at least 1 Mbit/s per Zero_DB (a storage container running on 1 hard disk or SSD).
+
+### Reputation & Monitoring Engine
+
+The TFGrid has a reputation engine and a monitoring engine to measure uptime & other SLA requirements; see consensus3.
+
+Factors the TFGrid Reputation_engine will look at (Q4 2021, latest Q1 2022):
+
+- Available Bandwidth
+- Latency
+- Utilization
+- Uptime (nodes & network)
+
+The monitoring engine could require farmers to execute certain actions.
\ No newline at end of file
diff --git a/collections/farming/certified/farming_types.md b/collections/farming/certified/farming_types.md
new file mode 100644
index 0000000..c35b1f6
--- /dev/null
+++ b/collections/farming/certified/farming_types.md
@@ -0,0 +1,20 @@
+![](img/farming_solutions.jpg)
+
+# Farming Types
+
+## DIY (Do It Yourself) Farming
+
+- Self-made or bought bare-metal server (compute/storage) capacity (an AMD or Intel system)
+- No license on the software with TF Tech (the company responsible for the ThreeFold Software components).
+- No support possibility from ThreeFold / TF Tech.
+
+### Certified
+
+- Certified, secured server hardware
+- Comes plug and play
+- Highest uptime requirements
+- Certification report given by TFTech or partners to describe the farming situation (H2 2021).
+- see [farming certified requirements](farming_certified_requirements)
+
+!!!include:farming_certification_benefits
+
diff --git a/collections/farming/certified/img/farming_solutions.jpg b/collections/farming/certified/img/farming_solutions.jpg
new file mode 100644
index 0000000..dfc81d8
Binary files /dev/null and b/collections/farming/certified/img/farming_solutions.jpg differ
diff --git a/collections/farming/diy/diy_guide.md b/collections/farming/diy/diy_guide.md
new file mode 100644
index 0000000..87255c5
--- /dev/null
+++ b/collections/farming/diy/diy_guide.md
@@ -0,0 +1,44 @@
+# Do-it-Yourself Farming Guide
+
+Any standard computer can become a 3Node on the ThreeFold Grid. This section covers the compatible systems and setup optimization for anyone who wants to purchase or build their own nodes.
+
+## What kind of hardware is supported?
+
+Any 64-bit hardware with an Intel or AMD processor chip can run Zero-OS and become a 3Node. The following configurations provide guidelines on compatible and recommended setups:
+
+- Server, desktop and mini computer type hardware is compatible.
+- A minimum of 500 GB of SSD and a bare minimum of 2 GB of RAM is required.
+- A ratio of 1:4 between vCPU and RAM (e.g. 8 vCPU and 32 GB of RAM) is recommended.
+- The recommended upper limit is 8 GB of RAM per vCPU, as farming rewards do not increase beyond that ratio.
+- A wired ethernet connection is highly recommended to maximize reliability and the ability to farm TFT.
+
+> Note: The team successfully tested ARM-based devices, but they are not yet supported.
+
+The following configurations are not advised or not supported:
+
+- Laptops are not advised and USB-based external drives are not supported due to reliability concerns.
+- No graphics or display is required, although it may be helpful during boot configuration or troubleshooting if necessary.
+- GPU is not yet supported.
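The sizing guidelines above can be sketched as a quick sanity check. This is illustrative only: the thresholds come from the bullets above, and the function name and messages are our own, not part of any ThreeFold tooling.

```python
def check_3node_config(vcpu: int, ram_gb: int, ssd_gb: int) -> list[str]:
    """Check a candidate 3Node against the DIY hardware guidelines above."""
    notes = []
    if ssd_gb < 500:
        notes.append("below the 500 GB SSD minimum")
    if ram_gb < 2:
        notes.append("below the 2 GB RAM bare minimum")
    if ram_gb < vcpu * 4:
        notes.append("below the recommended 1:4 vCPU-to-RAM ratio")
    if ram_gb > vcpu * 8:
        notes.append("above 8 GB RAM per vCPU; extra RAM earns no additional rewards")
    return notes or ["meets the guidelines"]

# Example: 8 vCPU, 32 GB RAM, 1 TB SSD matches the recommended 1:4 ratio.
print(check_3node_config(vcpu=8, ram_gb=32, ssd_gb=1000))
```

A machine with only 256 GB of SSD, for instance, would be flagged as below the storage minimum even if CPU and RAM are fine.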
+
+## How much power does a node consume?
+
+Power efficiency is important for farmers to spend less on electricity and therefore increase earnings. A small form factor server may be much more power efficient than a gaming PC with similar specs (GPUs are not supported yet).
+
+> Note: Knowing exactly how much power a system will draw can be complicated, but some manufacturers provide more detailed estimates than the watt rating of a power supply.
+
+## What kind of internet connection is needed?
+
+- A wired network connection should be considered essential to maximize a node's reliability. Any domestic high-speed internet plan is adequate for a basic node.
+- If the node provides more than a few terabytes of storage, a gigabit or faster connection may be necessary to support the traffic.
+- The Grid is designed with IPv6 in mind, but IPv4 is sufficient for now.
+
+> A node only needs bandwidth when it is being utilized. That means you could scale up your connectivity as utilization of your node grows.
+
+## How to boot a node with Zero-OS?
+
+Zero-OS can be booted either from a USB stick (the boot image is tiny, so any size drive will do) or over a network via PXE. In either case, the latest software will be downloaded and cryptographically verified before boot. After the first boot, Zero-OS will update itself automatically and requires virtually no maintenance.
+
+When you're ready to start farming, follow [these instructions](https://library.threefold.me/info/manual/#/manual__farming) to bring your 3Node online.
+
+> Note: Occasionally, updates to the boot medium may be required.
\ No newline at end of file
diff --git a/collections/farming/farming_circular.md b/collections/farming/farming_circular.md
new file mode 100644
index 0000000..eedc2e5
--- /dev/null
+++ b/collections/farming/farming_circular.md
@@ -0,0 +1,19 @@
+
+Farming is the process of adding Internet capacity (compute, storage and network) to the ThreeFold Grid.
+
+ThreeFold uses a proof-of-blockstake consensus mechanism. By running Zero-OS on their hardware, Farmers dedicate the computation power and storage capacity of their node to the network, enabling anyone to host data and run IT workloads on a decentralized Internet infrastructure.
+
+![](img/circular_tft3_.jpg ':size=500')
+
+> TODO: we have better one (note: image? could not find)
+
+In decentralized systems like ThreeFold, we need to ensure that everyone is able to provide Internet capacity to the world. Farmers help this happen by connecting hardware that runs Zero-OS. Once booted, the hardware is locked to generate Internet capacity for the network. The capacity is registered on TFChain, securing access to a decentralized Internet for users and rewarding farmers with TFT.
+
+
+
diff --git a/collections/farming/farming_intro.md b/collections/farming/farming_intro.md
new file mode 100644
index 0000000..9ba490e
--- /dev/null
+++ b/collections/farming/farming_intro.md
@@ -0,0 +1,64 @@
+
+# ThreeFold Farming
+
+![](img/farming_intro0.jpeg)
+
+ThreeFold Farming ("Farming") is the process of connecting Internet capacity to the ThreeFold Grid. This process is undertaken by independent people or organizations called ThreeFold Farmers ("Farmers").
+
+## What is Farming?
+
+{{#include farming_circular.md}}
+
+## Who can become a farmer on ThreeFold?
+
+![](img/farming_.png)
+
+Technically, anyone can farm on the ThreeFold Grid using any server-type hardware. By using [Proof-of-Capacity](proof_of_capacity), farming was designed to reward all nodes fairly according to the Internet capacity they provide to the ThreeFold Grid.
+
+## Cost of farming
+
+Anyone can become a Farmer, and there is no technical knowledge required. ThreeFold's autonomous system does all the heavy lifting, making it easy for anyone to join. The main costs to consider are:
+
+- Potential costs of the hardware necessary to provide Internet capacity and maintain a farming setup.
+- Electrical costs to power the farm.
+- Potential cost of equipment to support larger farming setups such as data centers (ventilation, monitoring, electrical wiring, etc.).
+
+To further explore farming rewards, click [here](@farming_reward).
+
+## How is ThreeFold Internet capacity farmed?
+
+1. A farmer provides Internet capacity by booting compatible hardware with Zero-OS.
+2. Once installed, Zero-OS locks the hardware and registers the Internet capacity in TFChain.
+3. Once verified by the [Proof-of-Capacity](proof_of_capacity) algorithm, the Internet capacity is made available to the network via the explorer.
+
+> Note: All the compute and storage data remains off-chain in order to protect the privacy of users. Once Zero-OS is booted, the device is locked in such a way that it no longer has any state or remote access, preventing farmers from accessing user data at the hardware level as well.
+
+## What kind of hardware can become a 3Node?
+
+Any Intel or AMD server-type hardware that contains compute and/or storage can be connected to the ThreeFold Grid. Farmers need to download Zero-OS and boot their hardware.
+
+Learn more [here](@farming_hardware_overview).
+
+Once booted by Zero-OS, the hardware becomes a 3Node, and its total capacity will automatically be detected and registered on the blockchain database. We call this Proof-of-Capacity.
+
+Learn more about Proof-of-Capacity [here](@proof_of_capacity).
+
+
+
+!!!alias become_a_farmer
\ No newline at end of file
diff --git a/collections/farming/farming_reward.md b/collections/farming/farming_reward.md
new file mode 100644
index 0000000..8eba865
--- /dev/null
+++ b/collections/farming/farming_reward.md
@@ -0,0 +1,63 @@

+# Farming Reward
+
+## Table of Contents

+
+- [Introduction](#introduction)
+- [How do farmers earn TFT?](#how-do-farmers-earn-tft)
+- [Proof-of-Capacity](#proof-of-capacity)
+- [What is proof-of-capacity?](#what-is-proof-of-capacity)
+- [Why proof-of-capacity?](#why-proof-of-capacity)
+- [How does Proof-of-Capacity work?](#how-does-proof-of-capacity-work)
+
+***
+
+> Note: Farming rewards will be updated for the next 3.14 grid release. Stay tuned.
+
+## Introduction
+
+The amount of TFT earned by farmers is relative to the amount of compute, storage or network capacity they provide to the ThreeFold Grid, as recorded by the proof-of-capacity algorithm. This section covers some farming and token reward basics.
+
+## How do farmers earn TFT?
+
+ThreeFold Blockchain (TFChain) rewards farmers for providing Internet capacity and expanding the ThreeFold Grid. When successfully verified by proof-of-capacity, farmers earn TFT according to the amount of Internet capacity registered in TFChain.
+
+## Proof-of-Capacity
+
+Proof-of-Capacity records the Internet resources of the 3Node.
+
+The ThreeFold Blockchain (TFChain) uses a work algorithm called "Proof-of-Capacity" to verify the Internet capacity provided by 3Nodes. Put simply, PoC verifies, on an ongoing basis, that farms are honestly representing the Internet capacity they provide to the network.
+
+**See Proof-of-Capacity in action** by visiting the [ThreeFold Grid Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/), which is the best resource to view PoC-related data.
+
+## What is proof-of-capacity?
+
+PoC allows ThreeFold Farmers to earn rewards according to their contribution. Farming is the "work" itself, the act of providing Internet capacity to the network and making it accessible via our TFDAO and TFChain.
+
+The PoC algorithm records four different types of Internet capacity:
+
+- Compute Capacity (CPU)
+- Memory Capacity (RAM)
+- Storage Capacity (SSD/HDD)
+- Network Capacity (Bandwidth, IP Addresses)
+
+## Why proof-of-capacity?
+
+PoC comes with a number of benefits, including:
+
+- Energy efficiency: earning rewards in the form of TFT does not waste energy; farming TFT is a carbon_negative operation.
+- Lower barriers to entry with reduced hardware requirements: no need for elite hardware to stand a chance of earning rewards.
+- Decentralized: allows anyone to connect a 3Node to the network. TFGrid runs as a DAO.
+
+The main advantage of PoC for farmers is that it makes it easy to run a 3Node. It doesn't require huge investments in hardware or energy, and everyone earns a fair reward for their contribution. It is more decentralized, allowing for increased participation, and, unlike in mining, more 3Nodes doesn't mean disproportionately increased returns.
+
+## How does Proof-of-Capacity work?
+
+1. A farmer boots hardware with Zero-OS (multiple boot methods available)
+2. Zero-OS is a low-level OS with no shell; farmers cannot access Zero-OS
+3. Zero-OS reports used IT capacity towards TFChain
+4. TFChain and TFDAO will calculate rewards as required for the farmer (TFGrid 3.1.x)
+5. TFChain will mint the required TFT and send them to the TFFarmer's account on TFChain.
+6. Everyone can use the [ThreeFold Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) to see where capacity is available. This info comes from the TFChain.
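As a rough sketch of the minting flow above: assuming the v3 USD-pegged unit prices (CU 2.4, SU 1, NU 0.03, IPv4 0.005 USD) and the 0.08 USD/TFT registration price mentioned elsewhere in these docs, a month's reward could be computed like this. The variable and function names are our own and purely illustrative; the real calculation is done by TFChain.

```python
TFT_USD_PRICE = 0.08  # example TFT/USD price as registered in TFChain

# USD-pegged unit prices converted to TFT at the registration price
REWARD_TFT = {
    "cu": 2.4 / TFT_USD_PRICE,    # compute unit, per month
    "su": 1.0 / TFT_USD_PRICE,    # storage unit, per month
    "nu": 0.03 / TFT_USD_PRICE,   # network unit, per GB transferred
    "ip": 0.005 / TFT_USD_PRICE,  # public IPv4 address, per hour
}

def monthly_reward(cu: float, su: float, nu_gb: float, ip_hours: float) -> float:
    """Monthly TFT reward for farmed CU/SU plus utilized NU and IP hours."""
    return (cu * REWARD_TFT["cu"] + su * REWARD_TFT["su"]
            + nu_gb * REWARD_TFT["nu"] + ip_hours * REWARD_TFT["ip"])

# Example: 10 CU and 20 SU farmed, 100 GB transferred, one IP used for 720 hours
print(monthly_reward(cu=10, su=20, nu_gb=100, ip_hours=720))
```

Note that CU and SU are rewarded for being farmed, while NU and IP addresses are only rewarded when actually utilized, matching the distinction made above.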
+
+
+{{#include farming_reward_disclaimer.md}}
diff --git a/collections/farming/farming_reward_calculation.md b/collections/farming/farming_reward_calculation.md
new file mode 100644
index 0000000..a56a9f1
--- /dev/null
+++ b/collections/farming/farming_reward_calculation.md
@@ -0,0 +1,53 @@
+## Farming Reward Calculation
+
+Each 3Node has a certain amount of compute, storage and network resources:
+
+- Compute Capacity (CPU)
+- Memory Capacity (RAM)
+- Storage Capacity (SSD/HDD)
+- Network Capacity (Bandwidth, IP Addresses)
+
+For making this Internet Capacity available, Farmers are rewarded with TFT.
+
+The amount of resources available in a 3Node is translated into compute units (CU), storage units (SU), network units (NU) and IP addresses (IPAddr) to calculate farming rewards. See also [Cloud Units Calculation For Farming](../../cloudunits/resource_units_calc_cloudunits.md).
+
+> **Unless explicitly specified otherwise, calculations of "gigabytes" use base
+> 1024. That is, 1 GB is equal to 1073741824 bytes.**
+
+The formula to calculate farming rewards is the following:
+
+```python
+tft_earned_per_month = (
+    cu_farmed * cu_farming_reward
+    + su_farmed * su_farming_reward
+    + nu_used * nu_farming_reward
+    + ipaddr_used * ipaddr_farming_reward
+)
+```
+
+The below table expands on CU, SU, NU and IPAddr and their farming rewards:
+
+| Unit                | description                                                        | v3 farming rewards in TFT |
+| ------------------- | ------------------------------------------------------------------ | ------------------------- |
+| Compute Unit (CU)   | typically 2 vCPU, 4 GB mem, 50 GB storage                          | $REWARD_CU_TFT TFT/month  |
+| Storage Unit (SU)   | typically 1 TB of net usable storage                               | $REWARD_SU_TFT TFT/month  |
+| Network Unit (NU)   | 1 GB of data transferred as used by TFGrid user for Public IP Addr | $REWARD_NU_TFT TFT/GB     |
+| Public IPv4 Address | Public IP Address as used by a TFGrid user                         | $REWARD_IP_TFT TFT/hour   |
+
+The rewards for the above items are linked (pegged) to USD:
+
+| Unit                | USD   | Basis                                    |
+| ------------------- | ----- | ---------------------------------------- |
+| Compute Unit (CU)   | 2.4   | per month                                |
+| Storage Unit (SU)   | 1     | per month                                |
+| Network Unit (NU)   | 0.03  | per GB transferred (as customers use it) |
+| Public IPv4 Address | 0.005 | per IP address, calculated per hour      |
+
+> IMPORTANT: MORE INFO ABOUT DAO RULES IN RELATION TO PROOF OF CAPACITY, SEE BELOW
+
+> **The rewards above are calculated according to the current TFT to USD price in TFChain of $TFTFARMING** ($NOW). TFDAO is responsible for changing this price in accordance with the current market situation and liquidity.
+
+See below for more info about the USD price which will be used to calculate your farming reward, as well as any other specifics in relation to farming calculations.
+
+The above farming rewards apply for 3Nodes registered in TFChain for ThreeFold Grid v3. Anyone can calculate their potential rewards using the [Farming Reward Calculator](https://dashboard.grid.tf/calculator/simulator). The same CU, SU, NU and IPAddr principles apply to the sales of Internet capacity in the form of [cloud units](../../cloudunits/cloudunits.md).
diff --git a/collections/farming/farming_reward_disclaimer.md b/collections/farming/farming_reward_disclaimer.md
new file mode 100644
index 0000000..5485865
--- /dev/null
+++ b/collections/farming/farming_reward_disclaimer.md
@@ -0,0 +1,8 @@
+> DISCLAIMER: ThreeFold Foundation organizes this process. This process is the result of the execution of code written by open source developers (Zero-OS and minting code) and a group of people who check this process voluntarily. No claims can be made, nor damages sought, against any person or group related to ThreeFold Foundation, including but not limited to the different councils. This process changes for TFGrid 3.X once the TFDAO is fully active.
+
+> Important note: The ThreeFold Token (TFT) is not an investment instrument.
+> TFTs are used to buy and sell IT capacity on the ThreeFold Grid.
+> More info: see [Proof of Capacity DAO rules](./poc_dao_rules.md) + + + diff --git a/collections/farming/farming_toc.md b/collections/farming/farming_toc.md new file mode 100644 index 0000000..547b71e --- /dev/null +++ b/collections/farming/farming_toc.md @@ -0,0 +1,12 @@ +# Farming + +This section covers the essential information concerning ThreeFold Farming. + +To farm on the ThreeFold Grid, refer to the [Farmers](../../documentation/farmers/farmers.md) section. + +

+## Table of Contents

+
+- [Farming Rewards](./farming_reward.md)
+- [Proof-of-Capacity](./proof_of_capacity.md)
+- [Proof-of-Utilization](./proof_of_utilization.md)
+- [PoC DAO Rules](./poc_dao_rules.md)
\ No newline at end of file
diff --git a/collections/farming/img/circular_tft3_.jpg b/collections/farming/img/circular_tft3_.jpg
new file mode 100644
index 0000000..0c54072
Binary files /dev/null and b/collections/farming/img/circular_tft3_.jpg differ
diff --git a/collections/farming/img/farming_.png b/collections/farming/img/farming_.png
new file mode 100644
index 0000000..f7a6ff6
Binary files /dev/null and b/collections/farming/img/farming_.png differ
diff --git a/collections/farming/img/farming_intro0.jpeg b/collections/farming/img/farming_intro0.jpeg
new file mode 100644
index 0000000..c60445f
Binary files /dev/null and b/collections/farming/img/farming_intro0.jpeg differ
diff --git a/collections/farming/img/farming_rewards_.png b/collections/farming/img/farming_rewards_.png
new file mode 100644
index 0000000..d67d68b
Binary files /dev/null and b/collections/farming/img/farming_rewards_.png differ
diff --git a/collections/farming/img/grid_new_.png b/collections/farming/img/grid_new_.png
new file mode 100644
index 0000000..5f85f72
Binary files /dev/null and b/collections/farming/img/grid_new_.png differ
diff --git a/collections/farming/img/token_time_to_get_involved_now_.jpg b/collections/farming/img/token_time_to_get_involved_now_.jpg
new file mode 100644
index 0000000..2022990
Binary files /dev/null and b/collections/farming/img/token_time_to_get_involved_now_.jpg differ
diff --git a/collections/farming/img/utilization_process.png b/collections/farming/img/utilization_process.png
new file mode 100644
index 0000000..0016ff6
Binary files /dev/null and b/collections/farming/img/utilization_process.png differ
diff --git a/collections/farming/own_farm_utilization.md b/collections/farming/own_farm_utilization.md
new file mode 100644
index 0000000..724553d
--- /dev/null
+++ b/collections/farming/own_farm_utilization.md
@@ -0,0 +1,10 @@
+# Cost of Utilization of Capacity for a Farmer's Own Farm
+
+We would like to make sure that a farmer can use their provided capacity in a very cost-effective way.
+The ThreeFold DAO will take care of this situation.
+
+The idea is that the farmer will pay only for the burning & validator nodes.
+
+> see [Proof of Capacity DAO rules](poc_dao_rules).
+
+
diff --git a/collections/farming/poc_dao_rules.md b/collections/farming/poc_dao_rules.md
new file mode 100644
index 0000000..0b0475c
--- /dev/null
+++ b/collections/farming/poc_dao_rules.md
@@ -0,0 +1,60 @@

+# ThreeFold DAO Rules for Proof-of-Capacity
+
+## Table of Contents

+
+- [Introduction](#introduction)
+- [Technical Farming Requirements](#technical-farming-requirements)
+- [Suggested: improvements to proof-of-capacity](#suggested-improvements-to-proof-of-capacity)
+- [TFGrid is a DAO](#tfgrid-is-a-dao)
+- [Grid Enhancement Proposal](#grid-enhancement-proposal)
+
+***
+
+> Note: The proof-of-capacity DAO rules will be updated for the next 3.14 grid release. Stay tuned.
+
+## Introduction
+
+- The CU/SU reward gets expressed in TFT and registered in TFChain at 3Node registration time
+  - For certified Nodes, the CU/SU reward was specified at sales/promotion time; this process is managed by ThreeFold Tech.
+- CU/SU rewards are calculated from Resource Units
+  - A Certified Node gets 25% more farming rewards
+  - TFT pricing is pegged to USD (pricing changes in line with the TFT/USD rate)
+- Rewards for NU and IP Addresses are dynamic
+  - TFChain tracks capacity utilization, and as such the reward can be calculated for the Farmer
+- All Internet capacity farmed is rewarded on a monthly basis according to minimum service level agreements
+  - The minimum SLA (Service Level Agreement; see the special section about the SLA) needs to be achieved before TFT can be rewarded
+
+## Technical Farming Requirements
+
+- Make sure you have a minimum of 50 GB SSD capacity available per logical core (physical cores times the number of threads each can run); if not, your calculated CU will be lower.
+- Make sure your network connection is good enough; in the future it will be measured and become part of the Service Level Agreement.
+
+{{#include tfgrid_min_sla.md}}
+
+**Important Information around the TFT USD Price Used at Registration**
+
+This is for mainnet TFGrid 3.0:
+
+- The TFT USD price used at 3Node registration at launch of mainnet is hardcoded in TFChain 3.0 at 0.08 USD per TFT (TFChain 3.0 as used in Jan 2022).
+- Once the DAO is live, a new price will be approved by the DAO voters. The idea is to have this price revisited more or less once a month, or faster if needed.
+- The TFT USD price used at 3Node registration is defined by the TFDAO at least once a month by means of a GEP.
+
+## Suggested: improvements to proof-of-capacity
+
+Suggestions will be made to improve PoC; the DAO will have to come to consensus before changes can be made.
+
+- How to deal with a situation where a 3Node adds or removes compute or storage capacity.
+- ThreeFold is developing a way to detect possible fraud on PoC using the TPM chip and dynamically generated code to execute random PoC checks.
+- If PoC finds fraud, e.g. trying to fake the Internet capacity provided, the 3Node will be disabled automatically by Zero-OS and flagged as fraudulent. The Farmer will then have to re-register with a lower reputation, for transparency to the ecosystem. If TFTs are staked at that time, they will be locked permanently.
+- How to improve the calculation of CU rewards to mitigate the difference in power provided between new and old hardware.
+
+
+## TFGrid is a DAO
+
+- All of the above information is public and can be seen by everyone in the community, per 3Node and Farmer (part of TFChain).
+- The farming rewards methodology can and probably will get revised if the community wants this; DAO consensus needs to be achieved before changes can happen, by means of a GEP.
+
+## Grid Enhancement Proposal
+
+- Changes to the above described mechanism, or any other change request for the TFGrid, are managed by grid enhancement proposals (GEP).
+- Because we are a DAO, everything is open for change as long as consensus of the community in accordance with the TFDAO has been achieved.
\ No newline at end of file
diff --git a/collections/farming/proof_of_capacity.md b/collections/farming/proof_of_capacity.md
new file mode 100644
index 0000000..ae0f9fe
--- /dev/null
+++ b/collections/farming/proof_of_capacity.md
@@ -0,0 +1,93 @@

+# Proof-of-Capacity
+
+## Table of Contents

+ +- [Introduction](#introduction) +- [What is proof-of-capacity?](#what-is-proof-of-capacity) +- [Why proof-of-capacity?](#why-proof-of-capacity) +- [How does Proof-of-Capacity work?](#how-does-proof-of-capacity-work) +- [PoC Rewards](#poc-rewards) +- [Farming Reward Calculation](#farming-reward-calculation) + +*** + +> Note: The proof-of-capacity parameters will be updated for the next 3.14 grid release. Stay tuned. + +## Introduction + +The ThreeFold Blockchain (TFChain) uses work algorythm called "Proof-of-Capacity" to verify the Internet capacity provided by 3Nodes. Put simply, PoC verifies, on an ongoing basis, that farms are honestly representing the Internet capacity they provide to the network. + +## What is proof-of-capacity? + +POC allows ThreeFold Farmers to earn reward according to their contribution. Farming is the "work" itself, the act of providing Internet capacity to the network and making it accessible via our TFDAO and TFChain. + +The PoC algorythm records four different types of Internet capacity: + +- Compute Capacity (CPU) +- Memory Capacity (RAM) +- Storage Capacity (SSD/HDD) +- Network Capacity (Bandwidth, IP Addresses) + +## Why proof-of-capacity? + +PoC comes with a number of benefits, including: + +- Energy efficiency: earning reward in form of TFT does not waste energy. +- Lower barriers to entry with reduced hardware requirements: no need for elite hardware to stand a chance for earning rewards. +- Decentralized: allows anyone to connect a 3node to the network. TFGrid runs as a DAO. + +The main advantage of PoC to farmers it makes it really easy to run a 3Node. It doesn't require huge investments in hardware or energy and everyone earns a fair reward for their contribution. It is more decentralized, allowing for increased participation, and more 3Nodes doesn't mean increased returns, like in mining. + +## How does Proof-of-Capacity work? + +1. A farmer boots hardware with Zero-OS (multiple boot methods available) +2. 
Zero-OS is a low-level OS with no shell; farmers cannot access Zero-OS +3. Zero-OS reports used IT capacity to TFChain +4. TFChain and TFDAO will calculate rewards as required for the farmer (TFGrid 3.1.x) +5. TFChain will mint the required TFT and send them to the farmer's account on TFChain. +6. Everyone can use the [ThreeFold Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) to see where capacity is available. This information comes from TFChain. + + +## PoC Rewards + +100% of the specified [farming rewards](./farming_reward.md) goes to the farmer. + +## Farming Reward Calculation + +Each 3Node has a certain amount of compute, storage and network resources: + +- Compute Capacity (CPU) +- Memory Capacity (RAM) +- Storage Capacity (SSD/HDD) +- Network Capacity (Bandwidth, IP Addresses) + +For making this Internet Capacity available, Farmers are rewarded with TFT. + +The resources available in a 3Node are translated into compute units (CU), storage units (SU), network units (NU) and IP addresses (IPAddr) to calculate farming rewards. See also [Cloud Units Calculation For Farming](../cloud/resource_units_calc_cloudunits.md). + +> **Unless explicitly specified otherwise, calculations of "gigabytes" use base +> 1024. 
That is, 1 GB is equal to 1073741824 bytes.** + +The formula to calculate farming rewards is the following: + +```python +TFT earned per month = + CU farmed * CU farming rewards + + SU farmed * SU farming rewards + + NU used * NU farming rewards + + IPAddr used * IPAddr farming rewards + +``` + +The table below expands on CU, SU, NU and IPAddr and their farming rewards: + +| Unit | Description | v3 farming rewards in TFT | +| ------------------- | --------------------------------------------------------------------- | ------------------------- | +| Compute Unit (CU) | typically 2 vCPU, 4 GB mem, 50 GB storage | 30.00 TFT/month | +| Storage Unit (SU) | typically 1 TB of net usable storage | 12.50 TFT/month | +| Network Unit (NU) | 1 GB of data transferred as used by a TFGrid user via a Public IP Addr | 0.38 TFT/GB | +| Public IPv4 Address | Public IP Address as used by a TFGrid user | 0.06 TFT/hour | + +> **The rewards above are calculated according to the current TFT to USD price in TFChain of 0.08. TFDAO is responsible for changing this price in accordance with the current market and liquidity.** + +The above farming rewards apply to 3Nodes registered in TFChain for ThreeFold Grid v3. Anyone can calculate their potential rewards using the [Farming Reward Simulator](https://dashboard.grid.tf/#/farms/simulator/). The same CU, SU, NU and IPAddr principles apply to the sales of Internet capacity in the form of [cloud units](../cloud/cloudunits.md). diff --git a/collections/farming/proof_of_utilization.md b/collections/farming/proof_of_utilization.md new file mode 100644 index 0000000..e92536e --- /dev/null +++ b/collections/farming/proof_of_utilization.md @@ -0,0 +1,58 @@ +
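As a concrete illustration, the farming reward formula can be sketched in Python. This is only a sketch under assumptions: the function name `monthly_reward_tft` is our own, and the unit prices are the v3 values from the reward table, which the TFDAO can change at any time.

```python
# Illustrative sketch of the v3 farming reward formula -- not official code.
# Unit prices are the v3 values from the table above; the TFDAO may change them.
CU_PRICE = 30.00                      # TFT per CU per month
SU_PRICE = 12.50                      # TFT per SU per month
NU_PRICE = 0.38                       # TFT per GB of public traffic
HOURS_PER_MONTH = 24 * 30             # simplified 30-day month
IPV4_PRICE = 0.06 * HOURS_PER_MONTH   # 0.06 TFT/hour per public IPv4

def monthly_reward_tft(cu, su, nu_gb, ipv4_count):
    """Estimate TFT farmed per month for a 3Node's capacity units."""
    return (cu * CU_PRICE
            + su * SU_PRICE
            + nu_gb * NU_PRICE
            + ipv4_count * IPV4_PRICE)

# Example: a node providing 4 CU and 2 SU, no public traffic or IPs.
print(monthly_reward_tft(4, 2, 0, 0))  # → 145.0
```

The same inputs can be obtained from the Farming Reward Simulator linked below; the sketch simply makes the additive structure of the formula explicit.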

Proof-of-Utilization

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [What is Proof-of-Utilization?](#what-is-proof-of-utilization) +- [How does Proof-of-Utilization work?](#how-does-proof-of-utilization-work) +- [ThreeFold DAO rules in Relation To Proof-of-Utilization](#threefold-dao-rules-in-relation-to-proof-of-utilization) + - [TFGrid Capacity Utilization](#tfgrid-capacity-utilization) + - [Other Ways TFT are Required](#other-ways-tft-are-required) + +*** + +> Note: The proof-of-utilization parameters will be updated for the next 3.14 grid release. Stay tuned. + +## Introduction + +ThreeFold Token ("TFT") is a utility token generated by ThreeFold Farmers; see [proof-of-capacity](./proof_of_capacity.md) for more information. + +Each ThreeFold Grid user can use this capacity. The ThreeFold Chain ("TFChain"), the ThreeFold blockchain, tracks the utilization of this capacity. This process is called Proof-of-Utilization. Each hour, the utilization is tracked on the blockchain and charged to the capacity's user. + +## What is Proof-of-Utilization? + +Proof-of-Utilization is the underlying mechanism that verifies the utilization of Internet capacity on the ThreeFold Grid. + +Every hour, the utilization is recorded in TFChain and the user is charged for the Internet capacity used on the ThreeFold Grid. Discounts are calculated in line with the amount of TFT users hold in their accounts on TFChain. Learn more about the discount [here](../cloud/pricing/staking_discount_levels.md). + +## How does Proof-of-Utilization work? + +1. A user reserves Internet capacity on a given set of 3Nodes. +2. Zero-OS records the reserved and used CU, SU, NU and IPAddresses in correlation with TFChain records. +3. The TFChain DAO will charge the costs to the user in line with the [discount mechanism](../cloud/pricing/staking_discount_levels.md). +4. TFT from the user's account are burned/distributed in line with the table below. 
+ +| Percentage | Description | Remark | +| ---------- | -------------------------------------- | ------------------------------------------------------------------------ | +| 35% | TFT burning | A mechanism used to maintain scarcity in the TFT economy. | +| 10% | ThreeFold Foundation | Funds allocated to promote and grow the ThreeFold Grid. | +| 5% | Validator Staking Pool | Rewards farmers that run TFChain 3.0 validator nodes. | +| 50% | Solution providers & sales channel | Managed by [ThreeFold DAO](../about/dao/dao.md). | + +> Note: While the solution provider program is still active, the plan is to discontinue the program in the near future. We will update the manual as we get more information. We currently do not accept new solution providers. + +## ThreeFold DAO rules in Relation To Proof-of-Utilization + +### TFGrid Capacity Utilization + +- Each solution provider and sales channel gets registered in TFChain, and as such the distribution can be defined and calculated at billing time. +- For billing purposes, ThreeFold DAO will check if the billing comes from a known sales channel or solution provider. If yes, then the billing smart contract code will know how to distribute the TFTs. If the channel or solution provider is not known, then the 50% will go to the ThreeFold Foundation. +- For Certified Farming, [ThreeFold Tech](../about/threefold_tech.md) can define the solution & sales channel parameters; these are channels provided by ThreeFold Tech. +- Burning can be lowered to 25% if too many tokens would be burned; ThreeFold DAO consensus needs to be achieved. + +### Other Ways TFT are Required + +- Anyone building solutions on top of the TFGrid can use TFT as a currency to charge for the added value they provide; this creates significant additional demand for TFT. +- Some will use TFT as a store or exchange of value, like money, because TFT is a valuable commodity. The hoarding of TFT means that those TFT are not available to be used on the TFGrid. 
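The hourly billing split described above can be sketched as follows. This is an illustrative example only, not actual TFChain smart-contract code: the function and key names are hypothetical, while the percentages and the "unknown channel" fallback rule are taken from the table and DAO rules above.

```python
# Illustrative sketch only -- not actual TFChain smart-contract code.
# Distribution percentages come from the proof-of-utilization table above.
DEFAULT_SPLIT = {
    "burned": 0.35,
    "foundation": 0.10,
    "validator_staking_pool": 0.05,
    "solution_providers": 0.50,
}

def distribute_bill(amount_tft, split=DEFAULT_SPLIT, known_channel=True):
    """Split a user's hourly bill in TFT across the destinations.

    Per the DAO rules above, if the sales channel / solution provider
    is unknown, its 50% share goes to the ThreeFold Foundation instead.
    """
    shares = {dest: amount_tft * pct for dest, pct in split.items()}
    if not known_channel:
        shares["foundation"] += shares.pop("solution_providers")
    return shares

# Example: a 100 TFT hourly bill from an unknown sales channel.
print(distribute_bill(100, known_channel=False))
```

For a 100 TFT bill from an unknown channel, 35 TFT are burned, 5 TFT go to the validator staking pool, and the Foundation receives 60 TFT (its own 10% plus the unclaimed 50%).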
+ + diff --git a/collections/farming/tfgrid_min_sla.md b/collections/farming/tfgrid_min_sla.md new file mode 100644 index 0000000..5659485 --- /dev/null +++ b/collections/farming/tfgrid_min_sla.md @@ -0,0 +1,18 @@ +## Minimum Requirement: Service Level Agreement (SLA) + +Minimal SLAs need to be achieved before the farming reward can be earned (uptime, bandwidth, latency, ...). This is not yet fully implemented. + +More service level agreements will be required; the DAO will decide on those changes. +Requests can be made by anyone by means of a GEP. + +Some ideas: + +- minimal uptime +- minimal bandwidth requirement +- minimal network latency requirement +- minimal distance between Certified Nodes for the super node concept +- different uptime requirement for Certified vs DIY nodes + + +If the SLA (Service Level Agreement) is not achieved for 3 consecutive months, then the 3Node will have to re-register, which means the CU/SU reward will be recalculated at that time and re-registered in TFChain for that node, just like a new one. + diff --git a/collections/farming/utility_token_model.md b/collections/farming/utility_token_model.md new file mode 100644 index 0000000..56e4346 --- /dev/null +++ b/collections/farming/utility_token_model.md @@ -0,0 +1,4 @@ +| Utility Token model | | +| -------------------------------------------- | ------------------------------------------ | +| [Proof Of Capacity](proof_of_capacity) | Farming (creation) of TFT | +| [Proof Of Utilization](proof_of_utilization) | Utilization (burning, distribution) of TFT | \ No newline at end of file diff --git a/collections/farming/why_farming.md b/collections/farming/why_farming.md new file mode 100644 index 0000000..be389a5 --- /dev/null +++ b/collections/farming/why_farming.md @@ -0,0 +1,18 @@ + +## Why Become a Farmer? + +### Internet and Its Global Demand + +The Internet represents the largest economy in the world and is growing at a rapid pace. 
+ +![](img/token_time_to_get_involved_now_.jpg) + +The ThreeFold Grid offers the most scalable, secure and sustainable infrastructure to meet the increasing Internet demand. + +Learn more about the ThreeFold Grid [here](grid_intro). + +### Sovereign and Recurrent Wealth + +By participating in the expansion of the ThreeFold Grid, Farmers earn [TFT](threefold_token) on a monthly basis. ThreeFold Token has value - it represents a unit of reservation of Internet Capacity on the ThreeFold Grid. With the continuous expansion of the ThreeFold Grid and the scarcity mechanism of the [TFT](threefold_token), demand will constantly increase while supply decreases, thus providing value to its holders/Farmers. + +Learn more about Farming Rewards [here](farming_reward). \ No newline at end of file diff --git a/collections/manual_legal/.collection b/collections/manual_legal/.collection new file mode 100644 index 0000000..e69de29 diff --git a/collections/manual_legal/definitions_legal.md b/collections/manual_legal/definitions_legal.md new file mode 100644 index 0000000..5afe811 --- /dev/null +++ b/collections/manual_legal/definitions_legal.md @@ -0,0 +1,297 @@ +# Definitions + +## ThreeFold (TF) + +AN INTERNET BUILT FOR EVERYONE, BY EVERYONE. + +ThreeFold is a peer-to-peer network of network, storage and compute capacity for an upgraded internet, laying the foundation for a more sustainable, smart, and sovereign digital world where everyone can participate and prosper. + +All the ideas and content created for this concept are open source and stored on GitHub. +A group of volunteers and the ThreeFold Foundation maintain these repositories. + +> See [https://github.com/threefoldfoundation](https://github.com/threefoldfoundation) + +## ThreeFold Foundation (TFF) + +The ThreeFold Foundation (ThreeFold_Dubai) is a participant in the bigger ThreeFold movement; the purpose of the movement is to bring the world a truly peer-to-peer internet. 
+ +We acknowledge and support the many people and organizations around the world who bring crucial support to the growth and adoption of the ThreeFold_Grid. + +See [ThreeFold Dubai](../about/threefold_dubai.md) + +> Work is ongoing to make the Foundation a global distributed concept with probably more than 1 legal entity. + +## ThreeFold Tech (TFTech, TFTECH) + +TF TECH NV, a Belgian limited liability company, having its registered office at Antwerpse Steenweg 19, B-9080 Lochristi, Belgium, registered with the Belgian Crossroads Bank of Enterprises under company number 0712.845.674 (RLP Gent, district Gent). + +TF Tech is a software tech company and is a major contributor to the software used on the TFGrid. + +See [TFTech](../about/threefold_tech.md) + +## Non For Profit + +Non-for-profit organizations are organizations that do not earn profits for their owners. All of the money earned by or donated to a non-for-profit organization is used in pursuing the organization's objectives and keeping it running. Employees or contributors can be paid for the services provided. + +In the case of TFF, the following remarks might be useful: + +- Many non-for-profits are granted a legal status by the government exempting them from tax. In our case the foundation is in Dubai, where there are no tax implications, so we did not need this status or certification. +- TFF has been funded by its original founders by means of loans or investment in kind or tokens; this money can be returned to the founders whenever cashflow allows (which is not the case yet). +- TFF directors/shareholders do everything they can to operate only in the best interests of the ThreeFold Project. +- A project is underway to formalize the structure with strict governance, e.g. a company called ThreeFold VZW has been created in Belgium with official non-for-profit governance. This company is not used yet. Other alternatives are being researched at this moment (Aug 2020). 
+- ThreeFold_Dubai has farmed tokens which can be used as gifts towards contributors or employees. + +## ThreeFold_Grid (TFG) + +The ThreeFold_Grid is a new, globally neutral and sustainable network of IT infrastructure. On this Grid, IT capacity is indexed and registered on the TFChain for easy discovery by purchasers. + +This Internet capacity is produced and allocated locally - similar to the way electricity and other utilities are purchased today. This allows any digital service or application provider to host their services and applications in proximity to the end user, leading to significantly greater performance, a lower price point and better margins. This is both more cost effective and green. + +## IT Capacity + +- IT = Information Technology. +- IT Capacity is resource availability for running any IT Workloads. +- Examples of IT Workloads which can run on the TFG are + - web applications + - archiving of data + - generic storage (e.g. using the S3 storage interface) + - container workloads (e.g. using the Kubernetes interface) + - artificial intelligence workloads + - big data workloads (processing of data) + - gaming servers + - content delivery + - test workloads for developers + +## ThreeFold_Token (TFT) + +The ThreeFold_Token is a digital Token which allows anyone to buy and sell IT Capacity on the TF Grid. This token only gets issued by the TFChain if a TF Pool gets connected to the TF Grid. + +The TFChain can issue a maximum of 4 billion tokens (gen 2). + +## TFChain + +Group of blockchain related technologies used by ThreeFold to accomplish the following: + +- store & trade your TFTs: uses the Stellar public blockchain platform +- buy/sell capacity on the TFG: TFExplorer +- register capacity of the TFG: TFExplorer +- provision IT workloads on the TFG: TFExplorer +- ... 
+ +> See the following [github repos](https://github.com/threefoldtech) and [https://github.com/threefoldfoundation/tft-stellar](https://github.com/threefoldfoundation/tft-stellar) + +## Zero-OS (ZOS) or Capacity Layer + +Zero-OS is the software which makes it possible to convert any pool of hardware into a pool of resources for the ThreeFold_Grid. + +> See [Zero-OS](https://github.com/threefoldtech/zos) = Ultra Efficient Stateless Operating System + +## Zero-People or Autonomous Layer + +- [Jumpscale](https://github.com/threefoldtech/js-ng) = Automation Framework (self healing, ...) + +## User + +- is the person/organization/company who buys capacity from the TF Grid +- capacity can only be bought by means of TFTs + +## TF Distributed exchange (TFExchange) + +Since March 2020, based on Stellar's integrated decentralized exchange; before that, Atomic Swaps. +A mechanism for people to exchange TFT for other digital currencies in a decentralized way. +Atomic Swaps were difficult to use; this was resolved by switching to the Stellar blockchain. + +# ThreeFold Farming + +## TFNode + +- is a compute/storage server which provides IT Capacity as the source for Cloud Units +- a TFNode is part of a Farming Pool +- 3Nodes are owned by TF Farmers. +- The TFNode runs the TF Operating System and TFChain (TFC). + +## Cloud Units + +Units of IT capacity as sold from the TF Grid to Users. +For more info see [here](../cloud/cloudunits.md) + +## ThreeFold Farming Pool (FP) + +A Pool of storage & compute hardware which allows the provisioning of IT Capacity. + +Each Farming Pool consists of 3Nodes which run the TF Operating System and TF Blockchain Software (TFChain), which allows anyone in the world to use this IT capacity to host their IT workloads (storage apps, archive capacity, web applications, artificial intelligence, IoT, Docker containers, etc). 
To use this IT Capacity, through the TF Grid, people need to own ThreeFold_Tokens (“TFTs”) as they are the only possible mechanism to purchase this capacity on the TF Grid. As such, TFTs represent a true utility. + +## ThreeFold Farmer + +A ThreeFold Farmer is any organization or person who invests in a ThreeFold Farming Pool and connects this capacity to the ThreeFold_Grid. + +As a result of Farming, i.e. creating additional capacity, ThreeFold_Tokens are automatically created by the ThreeFold Chain. + +Farmers can cultivate both managed and/or unmanaged capacity. + +Farmers receive TFTs + +- as part of owning the TF Farming Pool (tf_farming) +- as part of selling capacity from the TF Farming Pool (cultivating) + +Most TF Farmers use a ThreeFold Cooperative to become active because it hugely simplifies the process and often gives them better pricing to purchase the Farming Pool as well as to connect the Farming Pool to the internet. + +The ThreeFold Farmer is the only party who owns the TF Farming Pool. + +## ThreeFold Cooperative + +Any organization who helps a TF Farmer to become active on the TF Grid. + +A Cooperative can supply any or all of the following services: + +- Selling the required hardware kit for the Farming Pool (compute, storage, networking) to the TF Farmer (and logistics around it). +- Installing & testing the TF Operating System on the chosen hardware. +- Burn-in testing of the chosen hardware: making sure the hardware is reliable and works as expected. +- Configuration & Installation of the ThreeFold Farming Pool. +- Registration & Initialization of the ThreeFold Farming Pool. +- Delivering & Executing of the hardware Warranty as specified in the contract. +- Creation and Delivery of the ThreeFold Mobile App for the TF Farmer (allowing people worldwide to order capacity from the Farming Pool using TFTs). 
+- Software support for the Farming Pool +- Training of the TF Farmer about TF Concepts + - how to use the TF Wallet + - how to safely store the TFTs + - how to go from TFTs to fiat currency like USD/EUR (and vice versa) + - how to register pricing info on the TF Grid + - how to integrate a fiat currency payment gateway into an existing ecommerce website for the sale of TFTs or TF IT Capacity (e.g. integration with Stripe or other payment mechanism) + - how to consult/register information on the TFChain +- Hosting Services + - all services related to connectivity to the internet (routing, denial of service, firewalling, ...) + - rackspace & other datacenter services + - monitoring of the infrastructure (hardware and software). + +## Do It Yourself Capacity + +Unmanaged IT Capacity can exist everywhere: in people’s homes, in mobile telephone masts, in utility cabinets, next to railways or motorways, anywhere internet lines meet electrical outlets, any IT Hosting or Datacenter Facility. This capacity is deployed to the TF Grid and has no people involved to manage its operations (apart from the physical and network aspects). Farmers have no access to the 3Nodes purchased. They can only use the capacity produced in the exact same way as any other user, i.e. through the TFChain, in a secure, private and neutral way, equally applicable to all. + +Unmanaged capacity provides the following 3 basic services + +- Storage Capacity = backend storage services which can be used as backend for more high level storage services like S3 +- Compute Capacity = backend compute capacity which can be used as backend for more high level compute services like Kubernetes. +- Network Gateway Services: integration with the ZeroTier network, HTTP(s) reverse proxy, DNS services, TCP port forwarding. + +These basic services are ordered through the TFChain only. +SLAs (service level agreements) cannot be guaranteed on Unmanaged Capacity and as such are not registered in the TFChain. 
+ +## Certified Capacity + +Capacity which received certification as organized by ThreeFold Tech. + +## Managed Capacity + +Managed capacity is capacity that sits in a datacenter or other controlled environment where people operate and maintain supervision of the capacity connected to the TF Grid and published in the TF Directory. SLAs (Service Level Agreements) are provided on this capacity, such as uptime, guaranteed bandwidth, response times, etc. + +TF Farmers have access to the 3Nodes. + +Features Only Available In A Managed Capacity Farming Pool + +- Published & Tracked (monitored) Service Level Agreements + +# Legal + +## The Company + +The Company is defined in the contract that refers to this document but can be any of the following: + +- The company or organization who is selling a service on the ThreeFold_Grid. +- The company who is selling/buying ThreeFold_Tokens (TFTs) as capacity on the ThreeFold_Grid. +- The company who is helping a Farmer to become active on the ThreeFold_Grid = a TF Cooperative. +- The company who is selling the hardware and software required for a Farming Pool + +## The Product + +The Product is the ThreeFold_Token or any service related to the ThreeFold_Grid which can be bought by The Purchaser. +The Product is defined in the contract that refers to this document. + +## The Purchaser + +Is the person, company or organization who buys The Product from The Company. + +## ThreeFold Tech (TFTech) + +Software Technology company in Belgium. + +Has no direct relationship with the TFGrid or TFTokens. TFTech does not farm any ThreeFold_Tokens and has no impact on, nor does it give any direction to, anything happening on the ThreeFold_Grid or in relation to TFTokens. + +TFTech is the company that creates much of the open source software used in the TFGrid. TFTech is also a contributor to the TF Foundation in the form of content or promotion; there is no legal connection in place. 
+ +TFTech's business model is to sell licenses and certify TFGrid farmers if that is what they require. + +# Miscellaneous + +## TF Wallet + +A software application which allows anyone to consult how many TFTs they own and to make transfers of TFTs to other parties. +The TF Wallet works together with Stellar and is nothing but a JavaScript UI. +TF Wallet is part of the ThreeFold Connect app on mobile. + +# Sales Related Definitions + +### “Acceptance” + +means that any Deliverable has successfully completed the Acceptance process set forth in Section 4. Such Acceptance may be either explicit or implicit, i.e. in the absence of an explicit Rejection. + +### “Acceptance Period” + +means fifteen (15) days as from the Delivery Date, unless otherwise agreed to in the Sales Order or as provided under statutory law. + +### “Customer” + +means you or the customer entity identified in the Sales Order, as the case may be. + +### “Deliverables” + +means the Hardware, Software, Services (if any), or any deliverable specified in a Sales Order. + +### “Delivery” + +means the act of making the Deliverables available for reception by the Customer in accordance with Section 4.1. + +### “Delivery Date” + +means the ultimate date on which the Delivery may take place, as determined in the Sales Order. + +### “Documentation” + +means all manuals, instructions and other documents (whether in hard copy, soft copy or web-based form) relating to, or necessary for, the use, operation or maintenance of the Deliverables, together with all enhancements, corrections, modifications and amendments to such documents that are furnished to Customer under this Agreement. + +### “Effective Date” + +means the date when the Agreement starts to operate, corresponding to the issuance date of the Sales Order. + +### “Hardware” + +means any hardware to be provided by Company as specified in a Sales Order or Specific Agreement. 
+ +### “Party” + +means any party to this Agreement; + +### “Rejection” + +means the explicit rejection of Deliverables by Customer, provided that the following cumulative conditions have all been completed: +the Rejection has been notified by Customer to Company within the Acceptance Period (i.e. at the latest on the last day of the Acceptance Period); +Customer has returned to Company all rejected Deliverables immediately after the Rejection notice; +Any rejection that does not meet both aforementioned cumulative conditions shall not qualify as a Rejection and shall be deemed an implicit Acceptance. + +### “Sales Order” + +means any Sales Order generated electronically by Company to allow the Customer to order, including the details specified by Customer in the checkout of the Company website, or any document that the Parties mutually agree upon as the vehicle for procuring Hardware, Software and/or Services pursuant to this Agreement. + +### “Services” + +means any services to be provided by Company to Customer as stipulated in the Sales Order. + +### “Software” + +means the open source software connecting the Hardware to the ThreeFold network, all in machine readable, object code form, together with all enhancements, modifications, corrections and amendments thereto. + +### “Specifications” + +means the technical requirements for, and performance standards of, the Deliverables as set forth in the Sales Order or Documentation provided to Customer. 
+ +{{#include ./terms_conditions/sub/the_single_source_truth.md}} \ No newline at end of file diff --git a/collections/manual_legal/disclaimer.md b/collections/manual_legal/disclaimer.md new file mode 100644 index 0000000..37be9e9 --- /dev/null +++ b/collections/manual_legal/disclaimer.md @@ -0,0 +1,36 @@ +# General Warning And Disclaimer + +Your use of the TFGrid and/or TFTs, as well as the IT capacity made available from the TFGrid to the Internet, as well as any tools provided to work with TFGrid or TFTokens (hereinafter collectively also referred to as the "Services") will be subject to the warnings, limitations, acknowledgements and disclaimers set out hereinafter. These statements and disclaimers are made by and on behalf of (1) the TF Foundation (ThreeFold_Dubai), (2) each individual or entity acting as a ThreeFold Farmer, (3) TFTech NV, (4) any of the companies or individuals related to these entities, and (5) any person contributing or otherwise assisting in developing, marketing or distributing the Services (hereinafter collectively referred to as “ThreeFold”). + +### Disclaimer + +TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW: (A) THE SERVICES ARE PROVIDED ON AN "AS IS" AND “AS AVAILABLE” BASIS WITHOUT WARRANTIES OF ANY KIND, AND THREEFOLD EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES AS TO THE SERVICES, INCLUDING, WITHOUT LIMITATION, IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT; (B) THREEFOLD DOES NOT REPRESENT OR WARRANT THAT THE SERVICES ARE ACCURATE, COMPLETE, RELIABLE, CURRENT OR ERROR-FREE, MEET YOUR REQUIREMENTS, OR THAT DEFECTS IN THE SERVICES WILL BE CORRECTED; AND (C) THREEFOLD CANNOT AND DOES NOT REPRESENT OR WARRANT THAT THE SERVICES, OR THE SERVERS USED TO PROVIDE SUCH SERVICES, ARE FREE OF VIRUSES OR OTHER HARMFUL COMPONENTS. 
+ +By using the Services, you acknowledge that: + +- you have access to all relevant and required information relating to the tokens and the grid at https://library.threefold.me + - e.g. the way tokens are created/minted: token_creation + - all this information is provided on a best-effort basis and does not imply any promise or guarantee +- you have sufficient knowledge and experience in financial and business matters and are capable of evaluating the merits and risks of using or acquiring TFTs, and that you are able to bear the economic risk of such acquisition for an indefinite period of time. +- the Services (including the TFTs) involve risks, all of which you fully and completely assume, including, but not limited to, the risk relating to the possibility of limited or absent liquidity for the TFTs on the secondary markets (including the relevant online exchanges), the risk relating to price fluctuations of the TFTs on such secondary markets (including the relevant online exchanges), etc. +- the Services (including the TFTs) have been or will be created and delivered to you at your sole risk on an "AS IS" basis. +- you have not relied on any representations or warranties made by ThreeFold outside this document, including, but not limited to, conversations of any kind, whether through oral or electronic communication, or any white paper. Without limiting the generality of the foregoing, you assume all risk and liability for the results obtained by the use of any tokens (including TFTs) and regardless of any oral or written statements made by ThreeFold, by way of technical advice or otherwise, related to the use of the tokens. +- you bear sole responsibility for any taxes as a result of the use or acquisition of the Services (including the TFTs), and any future acquisition, ownership, use, sale or other disposition of TFTs held by you. 
To the extent permitted by law, you agree to indemnify, defend and hold ThreeFold harmless for any claim, liability, assessment or penalty with respect to any taxes (other than any net income taxes of ThreeFold that result from the sale of TFTs) associated with or arising from your purchase, use or ownership of TFTs. + +### Release + +To the fullest extent permitted by applicable law, you hereby explicitly release ThreeFold from responsibility, liability, claims, demands and/or damages (actual and consequential) of every kind and nature, known and unknown (including, but not limited to, claims of negligence), arising out of or related to: + +(1) disputes between users of the Services and/or the acts or omissions of third parties; and +(2) your purchase of tokens (if any) from ThreeFold_Dubai (formerly known as ‘GreenITGlobe’) or Bettertoken BV identified in the relevant contracts as ‘Internal ThreeFold_Tokens’ or ‘iTFTs’. + +If and to the extent you have purchased or otherwise acquired tokens from ThreeFold_Dubai (formerly known as ‘GreenITGlobe’) or Bettertoken BV that were identified in the relevant contracts as ‘Internal ThreeFold_Tokens’ or ‘iTFTs’, your use of the Services (including your subsequent receipt or acceptance of TFTs) implies your confirmation that such purchase or acquisition has been duly completed as a result of your receipt of a corresponding amount of TFTs, that all deliverables under the relevant contracts (known as ‘iTFT Purchase Agreement’, ‘TFT Purchase Agreement’ or ‘ITO investment agreement’) have been duly delivered and that there are no further obligations from one of the above-mentioned companies to you in relation to such contracts. + +### Limitation of Liability + +- Exclusion of indirect damages. 
To the fullest extent permissible under applicable law, ThreeFold shall not have any liability with respect to any claims for consequential, exemplary, special, indirect and/or punitive damages (such as, but not limited to, loss of goodwill, loss of actual or anticipated business or contracts, work stoppage, loss as a result of a third party claim, data loss or corruption of data, computer failure or lost profit), arising out of or in any way related to the access or use of the Services or otherwise related to ThreeFold, regardless of the form of action, whether based in contract, or otherwise, even if ThreeFold has been advised of the possibility of such damages. +- Cap on damages. To the extent permissible under the applicable law, the contractual and/or extra-contractual liability of ThreeFold arising out of or related to the use of, or inability to use, the Services, shall be limited to (1) any compensation you paid to ThreeFold for the Services, or (2) 1,000 US Dollars, whichever is greater. This limitation is cumulative and not per incident. It applies to all causes of action and obligations in the aggregate, including, without limitation, any claim of breach of contract and/or negligence. +- Prescription. No action in any form arising out of or in connection with this Agreement may be brought by the Purchaser more than one (1) year after the cause of action has accrued. +- No limitations for own intent. Nothing in this Agreement shall (or shall be deemed to, or construed to) exclude or restrict any liability either Party may incur as a result of fraud, willful intent or for any death or personal injury resulting from its gross negligence or that of its employees, agents or subcontractors. 
+ +{{#include ./terms_conditions/sub/the_single_source_truth.md}} \ No newline at end of file diff --git a/collections/manual_legal/legal.md b/collections/manual_legal/legal.md new file mode 100644 index 0000000..d925da2 --- /dev/null +++ b/collections/manual_legal/legal.md @@ -0,0 +1,17 @@ +

# ThreeFold Legal Wiki

+ +As part of ThreeFold's commitment to transparency and providing a secure and reliable platform, we have a dedicated [ThreeFold Legal Wiki](https://legal.threefold.io) where users can access essential legal articles and documentation. + +At [**legal.threefold.io**](https://legal.threefold.io), users can find important legal resources such as Terms and Conditions (T&C) and disclaimers that govern the usage of the ThreeFold Grid and related services. + +These legal documents outline the rights, responsibilities, and obligations of both ThreeFold and its users, ensuring a clear understanding of the legal framework within which the platform operates. By visiting the legal section, users can familiarize themselves with the legal aspects of engaging with the ThreeFold ecosystem, promoting a trustworthy and accountable environment. + +You're invited to explore the ThreeFold Legal Wiki by visiting [this link](https://library.threefold.me/info/legal/#/). + +- [Disclaimer](../wiki/disclaimer.md) +- [Definitions](../wiki/definitions_legal.md) +- [Privacy Policy](../wiki/privacypolicy.md) +- [Terms & Conditions ThreeFold Related Websites](../wiki/terms_conditions_websites.md) +- [Terms & Conditions TFGrid Users TFGrid 3](../wiki/terms_conditions_griduser.md) + - [TFTA to TFT](../wiki/tfta_to_tft.md) +- [Terms & Conditions TFGrid Farmers TFGrid 3](../wiki/terms_conditions_farmer3.md) \ No newline at end of file diff --git a/collections/manual_legal/privacypolicy.md b/collections/manual_legal/privacypolicy.md new file mode 100644 index 0000000..1d0c4a8 --- /dev/null +++ b/collections/manual_legal/privacypolicy.md @@ -0,0 +1,120 @@ +# Privacy Policy + +*This privacy policy will explain how ThreeFold Movement ("companies", “we”, or “us”) uses the personal data we collect from you when you use any of our:* + +{{#include ./terms_conditions/sub/websites.md}} + +### What data do we collect? 
+ +By default, websites using the ThreeFold Movement Privacy Policy do not collect any data on a personal level. All data being processed is anonymized. When signing up for our newsletter, we collect your *email address*. + +**How do we collect your data?** + +Browsing data: +We automatically collect and process data when you use or view our website via your browser's cookies. + +Newsletter Signups: +Collected only with your permission through our sign-up form that uses a double opt-in mechanism for you to explicitly accept. + +### How will we use your data? + +We use this information to monitor and analyze your use of our website and for the website's technical administration, to increase our website's functionality and user-friendliness, and to better tailor it to our visitors' needs. + +If you agree, our companies will share your data with the following partner companies so that they may offer you or us their products and services: + +* Matomo: offers us services relating to monitoring and measuring website traffic and access, creating user navigation reports, etc. All information processed here is anonymized. We run this service within our own environments. The data being processed does not leave our servers and is not shared with any third parties. + +* Mailerlite: offers us services relating to newsletter sending and monitoring. + +**We do not track individual IPs or any other personal data.** + +The aforementioned processors operate independently from us and have their own privacy policy, which we strongly suggest you review. These processors may use the information collected through their services to evaluate visitors’ activity, as set out in their respective privacy policies. + +### How do we store your data? + +We store the anonymized data in Matomo for us to research usage and improve user experience on our websites. +We store email addresses in Mailerlite's system. 
+ +### Marketing + +We will not use your information for any (re)marketing reasons, nor send you information about products and/or services of ours or any partner companies, unless you have explicitly agreed to sign up for our newsletter. + +### What are your data protection rights? + +We would like to make sure you are fully aware of all of your data protection rights. Every user is entitled to the following: + +#### The right to access + +You have the right to request from us copies of your personal data. We may charge you a small fee for this service. + +#### The right to rectification + +You have the right to request that we correct any information you believe is inaccurate. You also have the right to request that we complete information you believe is incomplete. + +#### The right to erasure + +You have the right to request that we erase your personal data, under certain conditions. + +#### The right to restrict processing + +You have the right to request that we restrict the processing of your personal data, under certain conditions. + +#### The right to object to processing + +You have the right to object to our companies' processing of your personal data, under certain conditions. + +#### The right to data portability + +You have the right to request that we transfer the data that we have collected to another organization, or directly to you, under certain conditions. + +If you make a request, we have one month to respond to you. If you would like to exercise any of these rights, please contact us: + +* email: dataprivacy@threefold.io + +* post address: +{{#include ./threefold_fzc_address.md}} + + +### What are cookies? + +Cookies are text files placed on your computer to collect standard Internet log information and visitor behavior information. When you visit our websites, we may collect information from you automatically through cookies or similar technology. + +For further information, visit: http://allaboutcookies.org/ + +### How do we use cookies? 
+ +We use cookies in a range of ways to improve your experience on our website, including: + +* understanding how you use our website + +* for the website's technical administration + +### What types of cookies do we use? + +There are a number of different types of cookies; however, our websites use: + +* Functionality - Our companies use these cookies so that we recognize you on our website and remember your previously selected preferences. These could include what language you prefer and the location you are in. A mix of first-party and third-party cookies is used. + +* No Advertising - Our companies use these cookies to collect information about your visit to our website, the content you viewed, the links you followed and information about your browser, device, and your IP address. However, we will not share this data with third parties for advertising purposes. + +* Analytics cookies - Our companies use these to monitor how users reached the Site, and how they interact with and move around once on the Site. These cookies let us know what features on the Site are working the best and what features on the Site can be improved. + +### How to manage cookies + +You can set your browser to not accept cookies, and the above website tells you how to remove cookies from your browser. However, in a few cases, some of our website features may not function as a result. + +### Privacy policies of other websites + +Our website contains links to other websites. Our privacy policy applies only to our website, so if you click on a link to another website, you should read their privacy policy. + +### Changes to our privacy policy + +We keep our privacy policy under regular review and place any updates on this web page. This privacy policy was last updated on 16 May 2019. + +### How to contact us + +If you have any questions about our privacy policy, the data we hold on you, or you would like to exercise one of your data protection rights, please do not hesitate to contact us. 
+ +Email us at: dataprivacy@threefold.io + +{{#include ./terms_conditions/sub/the_single_source_truth.md}} \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/sub/parties_threefold.md b/collections/manual_legal/terms_conditions/sub/parties_threefold.md new file mode 100644 index 0000000..1ea6f8f --- /dev/null +++ b/collections/manual_legal/terms_conditions/sub/parties_threefold.md @@ -0,0 +1 @@ +These Terms and Conditions (the "**Agreement**") constitute a legal agreement between you (“**user**," “**you**", or “**yours**”) and [THREEFOLD RELATED COMPANIES](../../about/threefold_companies.md) (“**Threefold**”, “**Company**,” “**us**,” “**we**” or “**our**”) \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/sub/the_company.md b/collections/manual_legal/terms_conditions/sub/the_company.md new file mode 100644 index 0000000..1897d8d --- /dev/null +++ b/collections/manual_legal/terms_conditions/sub/the_company.md @@ -0,0 +1,8 @@ +## The Company + +The Company is defined in the contract that refers to this document and can be any of the following: + +- The company or organization that is selling a service on the ThreeFold_Grid. +- The company that is selling/buying ThreeFold_Tokens (TFTs) as capacity on the ThreeFold_Grid. +- The company that is helping a Farmer to become active on the ThreeFold_Grid (a TF Cooperative). +- The company that is selling the hardware and software required for a Farming Pool. diff --git a/collections/manual_legal/terms_conditions/sub/the_product.md b/collections/manual_legal/terms_conditions/sub/the_product.md new file mode 100644 index 0000000..d10fc8f --- /dev/null +++ b/collections/manual_legal/terms_conditions/sub/the_product.md @@ -0,0 +1,4 @@ +## The Product + +The Product is the ThreeFold_Token or any service related to the ThreeFold_Grid which can be bought by The Purchaser. +The Product is defined in the contract that refers to this document. 
diff --git a/collections/manual_legal/terms_conditions/sub/the_purchaser.md b/collections/manual_legal/terms_conditions/sub/the_purchaser.md new file mode 100644 index 0000000..f13c063 --- /dev/null +++ b/collections/manual_legal/terms_conditions/sub/the_purchaser.md @@ -0,0 +1,3 @@ +## The Purchaser + +The Purchaser is the person, company or organization that buys The Product from The Company. \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/sub/the_single_source_truth.md b/collections/manual_legal/terms_conditions/sub/the_single_source_truth.md new file mode 100644 index 0000000..db24c10 --- /dev/null +++ b/collections/manual_legal/terms_conditions/sub/the_single_source_truth.md @@ -0,0 +1,5 @@ +## SINGLE SOURCE OF TRUTH + +Our single source of truth for our legal docs is stored on [GitHub: https://github.com/threefoldfoundation/info_legal/tree/master/](https://github.com/threefoldfoundation/info_legal) + +> You can see the history of each file on GitHub, which is useful for finding the right version of a file in relation to the date when you signed a document or contract that linked to one of the above documents. \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/sub/websites.md b/collections/manual_legal/terms_conditions/sub/websites.md new file mode 100644 index 0000000..7557344 --- /dev/null +++ b/collections/manual_legal/terms_conditions/sub/websites.md @@ -0,0 +1 @@ +websites/wikis/forums ending with threefold.io, threefold.me, grid.tf, threefold.tech, ThreeFold_Token.com, freeflownation.org, 3bot.org, incubaid.com or consciousinternet.org, or any other website used/promoted by the ThreeFold Foundation, or any other site originating from our open-source git repository on https://github.com/threefoldfoundation. 
diff --git a/collections/manual_legal/terms_conditions/terms_conditions.md b/collections/manual_legal/terms_conditions/terms_conditions.md new file mode 100644 index 0000000..6f8d82b --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions.md @@ -0,0 +1,9 @@ +

# Terms & Conditions

+ +

## Table of Contents

+ +- [Terms & Conditions ThreeFold Related Websites](./terms_conditions_websites.md) +- [Terms & Conditions TFGrid Users TFGrid 3](./terms_conditions_griduser.md) + - [TFTA to TFT](./tfta_to_tft.md) +- [Terms & Conditions TFGrid Farmers TFGrid 3](./terms_conditions_farmer3.md) +- [Terms & Conditions Sales](./terms_conditions_sales.md) \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer3.md b/collections/manual_legal/terms_conditions/terms_conditions_farmer3.md new file mode 100644 index 0000000..0c6ec94 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer3.md @@ -0,0 +1,24 @@ +{{#include ./terms_conditions_farmer_parts/part_0_introduction_tcs.md}} +{{#include ./terms_conditions_farmer_parts/part_1_definitions.md}} +{{#include ./terms_conditions_farmer_parts/part_2_farmer_services.md}} +{{#include ./terms_conditions_farmer_parts/part_3_farmer_grant.md}} +{{#include ./terms_conditions_farmer_parts/part_4_certified_vs_diy.md}} +{{#include ./terms_conditions_farmer_parts/part_5_farmer_responsibilities.md}} +{{#include ./terms_conditions_farmer_parts/part_6_restrictions.md}} +{{#include ./terms_conditions_farmer_parts/part_7_representations_and_warranties.md}} +{{#include ./terms_conditions_farmer_parts/part_8_capacity_measurement_minting3.md}} +{{#include ./terms_conditions_farmer_parts/part_9_capacity_utilization3.md}} +{{#include ./terms_conditions_farmer_parts/part_10_term_termination.md}} +{{#include ./terms_conditions_farmer_parts/part_11_intellectual_property.md}} +{{#include ./terms_conditions_farmer_parts/part_12_indemnification.md}} +{{#include ./terms_conditions_farmer_parts/part_13_disclaimer_limitation_liability.md}} +{{#include ./terms_conditions_farmer_parts/part_14_export_compliance.md}} +{{#include ./terms_conditions_farmer_parts/part_15_agreement_severability_waiver.md}} +{{#include ./terms_conditions_farmer_parts/part_16_governing_law_venue.md}} + + +## APPENDIX + 
+{{#include threefold_companies0.md}} + +{{#include ./sub/the_single_source_truth.md}} \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/img/farmer_tcs_minting_equation.jpg b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/img/farmer_tcs_minting_equation.jpg new file mode 100644 index 0000000..8b13676 Binary files /dev/null and b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/img/farmer_tcs_minting_equation.jpg differ diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_0_introduction_tcs.md b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_0_introduction_tcs.md new file mode 100644 index 0000000..bc27086 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_0_introduction_tcs.md @@ -0,0 +1,4 @@ +**FARMER TERMS AND CONDITIONS** + +THESE TERMS AND CONDITIONS (THE "**AGREEMENT**") CONSTITUTE A LEGAL AGREEMENT BETWEEN YOU (“**FARMER**," “**YOU**", OR “**YOURS**”) AND ONE OF THE THREEFOLD COMPANIES (“**THREEFOLD**”, “**COMPANY**,” “**US**,” “**WE**” OR “**OUR**”), GOVERNING THE TERMS OF YOUR PARTICIPATION AS A FARMER IN THE THREEFOLD GRID. YOU UNDERSTAND AND AGREE THAT BY ACCEPTING THE TERMS OF THIS AGREEMENT, EITHER BY CLICKING TO SIGNIFY ACCEPTANCE, OR BY TAKING ANY ONE OR MORE OF THE FOLLOWING ACTIONS: DOWNLOADING, INSTALLING, RUNNING, AND/OR USING THE APPLICABLE SOFTWARE, YOU AGREE TO BE BOUND BY THE TERMS OF THIS AGREEMENT EFFECTIVE AS OF THE DATE THAT YOU TAKE THE EARLIEST OF ONE OF THE FOREGOING ACTIONS. YOU REPRESENT AND WARRANT THAT YOU ARE 18 YEARS OLD OR OLDER AND HAVE THE RIGHT AND AUTHORITY TO ENTER INTO AND COMPLY WITH THE TERMS OF THIS AGREEMENT. 
+ diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_10_term_termination.md b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_10_term_termination.md new file mode 100644 index 0000000..df594c0 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_10_term_termination.md @@ -0,0 +1,7 @@ +### 10. TERM AND TERMINATION + +This Agreement shall be effective as of the date that you take the earliest of the following actions: your acceptance of this Agreement, either by clicking to signify acceptance, or by taking any one or more of the following actions: downloading, installing, running and/or using the Software. It will continue until terminated per the terms below. + +Either party may terminate this Agreement immediately at any time without notice to the other party. + +In case of termination, the Farmer shall immediately cease using the Software. Any portion of Farmed or Cultivated ThreeFold_Tokens that have not been transferred to the Farmer’s wallet on the date of termination will be irrevocably forfeited. diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_11_intellectual_property.md b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_11_intellectual_property.md new file mode 100644 index 0000000..1eb7a4a --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_11_intellectual_property.md @@ -0,0 +1,7 @@ +### 11. INTELLECTUAL PROPERTY + +No rights are granted to the Farmer hereunder other than as expressly set forth in this Agreement. 
Except for Software subject to the Open Source Licenses, and except for any rights expressly granted under this Agreement, Company and its licensors own and shall retain all right, title, and interest in and to the ThreeFold_Grid and all related software (including any improvements, enhancements, customizations, and modifications thereto), the Documentation, and the Related Data, including, without limitation, all related intellectual property rights therein. For purposes hereof, the term "**Related Data**" means data derived from operation of the 3Node and of the ThreeFold_Grid via the 3Node, and any data that is aggregated by Company (including aggregations with data sourced from other Farmers and other third party data sources), and data and information regarding the Farmer’s access to and participation in the ThreeFold_Grid, including, without limitation, statistical usage data derived from the operation of the 3Node and ThreeFold_Grid and configurations, log data and the performance results related thereto. For the avoidance of doubt, nothing herein shall be construed as prohibiting Company from utilizing Related Data to optimize and improve the ThreeFold_Grid or otherwise operate Company’s business; provided that if Company provides Related Data to third parties, such Related Data shall be de-identified and presented in the aggregate so that it will not disclose the identity of Farmers to any third party. + +The ThreeFold_Grid may include access to various confidential and proprietary third party data that is utilized along with the IT Capacity, and all such data is owned by the applicable third party source or vendor. Farmer may only use such data as part of the ThreeFold_Grid and may not extract or otherwise utilize any such data except as included in and in connection with the ThreeFold_Grid. This data may be compiled from third party sources, including but not limited to, public records, user submissions, and other commercially available data sources. 
These sources may not be accurate, complete, or up to date, and are subject to ongoing and continual change without notice. Neither Company nor its third party data sources make any representations or warranties regarding the data and assume no responsibility for the accuracy, completeness, or currency of the data. + +Company shall have a royalty-free, worldwide, transferable, sublicensable, irrevocable, perpetual license to use or incorporate into the Software and/or the ThreeFold_Grid any suggestions, ideas, enhancement requests, feedback, recommendations or other information provided by Farmers relating to the features, functionality, or operation thereof ("**Feedback**"). Company shall have no obligation to use Feedback, and Farmer shall have no obligation to provide Feedback. diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_12_indemnification.md b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_12_indemnification.md new file mode 100644 index 0000000..69fc66c --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_12_indemnification.md @@ -0,0 +1,7 @@ +### 12. INDEMNIFICATION + +To the fullest extent permitted by applicable law, you will defend, indemnify and hold harmless Company and our respective past, present, and future employees, officers, directors, contractors, consultants, equity holders, suppliers, vendors, service providers, parent companies, subsidiaries, affiliates, agents, representatives, predecessors, successors and assigns (the "**Indemnified Parties**") from and against all claims, damages, costs and expenses (including attorneys’ fees) that arise from or relate to: (i) your use of the Software; (ii) your participation in the ThreeFold_Grid; (iii) any Feedback you provide; or (iv) your breach of this Agreement. 
+ +Company reserves the right to exercise sole control over the defense of any claim subject to indemnification under the paragraph above, at your expense. This indemnity is in addition to, and not in lieu of, any other indemnities set forth in a written agreement between you and Company. + +If the Software becomes, or in Company’s reasonable judgment is likely to become, the subject of a claim of infringement, then Company may in its sole discretion: (a) obtain the right for Farmer to continue using the Software; (b) provide a non-infringing, functionally equivalent replacement; or (c) modify the Software so that it is no longer infringing. If Company, in its sole and reasonable judgment, determines that none of the above options are commercially reasonable, then Company may, without liability, suspend or terminate Farmer’s use of the Software. This Section 12 states Company’s sole liability and Farmer’s exclusive remedy for infringement claims. diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_13_disclaimer_limitation_liability.md b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_13_disclaimer_limitation_liability.md new file mode 100644 index 0000000..c03c6ed --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_13_disclaimer_limitation_liability.md @@ -0,0 +1,17 @@ +### 13. DISCLAIMER AND LIMITATION OF LIABILITY + +The Farmer hereby acknowledges the fact that he/she has been advised that TFTs may qualify as a security and that the offers and sales of TFTs have not been registered under any country’s securities laws and, therefore, cannot be resold except in compliance with the applicable country’s laws. 
+ +The Farmer understands that the use of TFTs, the Software and/or the ThreeFold_Grid involves risks, all of which the Farmer fully and completely assumes, including, but not limited to, the risk that (i) the technology associated with the ThreeFold_Grid, 3Node and/or related Threefold products will not function as intended; (ii) the Threefold project will not be completed; (iii) Threefold will fail to attract sufficient interest from key stakeholders; and (iv) ThreeFold or any related parties may be subject to investigation and punitive actions from governmental authorities. + +Except as explicitly set forth herein, Company makes no representations that the Software is appropriate for use in any jurisdiction. Farmers engaging with the ThreeFold_Grid from any jurisdiction do so at their own risk and are responsible for compliance with local laws. + +The Farmer understands and expressly accepts that the TFTs, the Software and the ThreeFold_Grid were created and delivered to the Farmer at the sole risk of the Farmer on an "AS IS" and “UNDER DEVELOPMENT” basis. + +COMPANY DISCLAIMS ALL REPRESENTATIONS AND WARRANTIES, EXPRESS AND IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, AND NON-INFRINGEMENT. COMPANY MAKES NO WARRANTY THAT THE SOFTWARE, THREEFOLD GRID, OR DOCUMENTATION WILL BE UNINTERRUPTED, ACCURATE, COMPLETE, RELIABLE, CURRENT, ERROR-FREE, VIRUS FREE, OR FREE OF MALICIOUS CODE OR HARMFUL COMPONENTS, OR THAT DEFECTS WILL BE CORRECTED. COMPANY DOES NOT CONTROL, ENDORSE, SPONSOR, OR ADOPT ANY CONTENT AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND REGARDING THE CONTENT STORED ON THE THREEFOLD GRID. COMPANY HAS NO OBLIGATION TO SCREEN, MONITOR, OR EDIT CONTENT AND IS NOT RESPONSIBLE OR LIABLE FOR ANY CONTENT. YOU ACKNOWLEDGE AND AGREE THAT COMPANY HAS NO INDEMNITY, SUPPORT, SERVICE LEVEL, OR OTHER OBLIGATIONS HEREUNDER. 
+ +The Undersigned understands and expressly acknowledges that it has not relied on any representations or warranties made by the Company, TF Tech NV, Bettertoken NV, Kristof De Spiegeleer, any person or entity involved in the development or promotion of the Software and/or the ThreeFold project, or any related parties, including, but not limited to, conversations of any kind, whether through oral or electronic communication or otherwise, or any whitepapers or other documentation. + +WITHOUT LIMITING THE GENERALITY OF THE FOREGOING, THE FARMER ASSUMES ALL RISK AND LIABILITY FOR THE RESULTS OBTAINED BY THE USE OF THE SOFTWARE, THE THREEFOLD GRID AND/OR THE THREEFOLD TOKENS, REGARDLESS OF ANY ORAL OR WRITTEN STATEMENTS MADE BY THREEFOLD, BY WAY OF TECHNICAL ADVICE OR OTHERWISE, RELATED TO THE USE THEREOF. + +COMPANY SHALL NOT BE LIABLE FOR ANY INCIDENTAL, CONSEQUENTIAL, PUNITIVE, SPECIAL, INDIRECT, OR EXEMPLARY DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS, REVENUE, DATA, OR DATA USE, OR DAMAGE TO BUSINESS) HOWEVER CAUSED, WHETHER BY BREACH OF WARRANTY, BREACH OF CONTRACT, IN TORT (INCLUDING NEGLIGENCE) OR ANY OTHER LEGAL OR EQUITABLE CAUSE OF ACTION, EVEN IF PREVIOUSLY ADVISED OF SUCH DAMAGES OR IF SUCH DAMAGES WERE FORESEEABLE, AND COMPANY SHALL ONLY BE LIABLE FOR DIRECT DAMAGES CAUSED BY ITS GROSS NEGLIGENCE. IN NO EVENT WILL COMPANY’S TOTAL AGGREGATE LIABILITY ARISING FROM OR RELATING TO THIS AGREEMENT EXCEED ONE HUNDRED EURO (€ 100.00). diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_14_export_compliance.md b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_14_export_compliance.md new file mode 100644 index 0000000..b580805 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_14_export_compliance.md @@ -0,0 +1,3 @@ +### 14. 
EXPORT COMPLIANCE + +The Software may be subject to export laws and regulations of the European Union, the United States and other jurisdictions. Farmer represents that it is not named on any E.U. or U.S. government denied-party list. Farmer shall not access or use the Software or the ThreeFold_Grid in an E.U.- or U.S.-embargoed or sanctioned country or region, or in violation of any E.U. or U.S. export law or regulation. Farmer shall not use the ThreeFold_Grid to export, re-export, transfer, or make available, whether directly or indirectly, any regulated item or information to anyone outside the E.U. or U.S. in connection with this Agreement without first complying with all export control laws and regulations that may be imposed by the European Union, any EU country or the U.S. Government and any country or organization of nations within whose jurisdiction Farmer operates or does business. diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_15_agreement_severability_waiver.md b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_15_agreement_severability_waiver.md new file mode 100644 index 0000000..d0e2be8 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_15_agreement_severability_waiver.md @@ -0,0 +1,7 @@ +### 15. ENTIRE AGREEMENT, SEVERABILITY, WAIVER + +1. This Agreement sets forth the complete and final agreement of the parties concerning the subject matter hereof, and supersedes and replaces all prior agreements, written and oral, between them concerning the subject matter hereof. If a term of this Agreement is found to be invalid or unenforceable, the remaining provisions will continue in full force and effect. A party’s consent to, or waiver of, enforcement of this Agreement on one occasion will not be deemed a waiver of any other provision or such provision on any other occasion. +2. 
We reserve the right to change this Agreement from time to time in our sole discretion. If we make material changes to this Agreement, we will provide notice of such changes, such as by posting the revised Farmer Terms and Conditions to the Software and on our Websites. By continuing to access or use the Software or otherwise participate in the ThreeFold_Grid after the posted effective date of modifications to this Agreement, you agree to be bound by the revised version of this Agreement. If you do not agree to the modified Agreement, you must stop interacting with the ThreeFold_Grid and disconnect all your 3Nodes. +3. The parties are independent contractors. No agency, partnership, franchise, joint venture, or employment relationship is intended or created by this Agreement. Neither party has the power or authority to create or assume any obligation, or make any representations or warranties, on behalf of the other party. +4. The Farmer agrees that the Company may transfer and assign the Agreement in its sole discretion, provided a notice of such assignment is sent to the Farmer within fifteen days of such assignment. +5. Notices to Company made under this Agreement shall be made by email to legal@threefold.io AND in writing and delivered by registered mail (return receipt requested) or nationally-recognized overnight courier service to ThreeFold_Dubai, with registered office at BA1120 DMCC BUSINESS CENTRE, LEVEL NO 1, JEWELLERY & GEMPLEX 3, DUBAI, UNITED ARAB EMIRATES, attention Legal Department. You agree to receive electronically all communications, agreements, documents, notices, and disclosures that we provide in connection with the Software and/or the ThreeFold_Grid ("**Communications**"). We may provide Communications in a variety of ways, including by e-mail, text, in-app notifications, or by posting them on our websites. 
You agree that all Communications that we provide to you electronically satisfy any legal requirement that such communications be in writing. diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_16_governing_law_venue.md b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_16_governing_law_venue.md new file mode 100644 index 0000000..d9178c9 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_16_governing_law_venue.md @@ -0,0 +1,3 @@ +### 16. GOVERNING LAW AND VENUE + +This Agreement will be governed by Luxembourg law. Any disputes shall be subject to the jurisdiction of the courts of Luxembourg, Grand Duchy of Luxembourg. \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_1_definitions.md b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_1_definitions.md new file mode 100644 index 0000000..e135276 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_1_definitions.md @@ -0,0 +1,39 @@ +### 1. DEFINITIONS + +Unless defined otherwise in this Agreement, capitalized terms in this Agreement shall have the meaning ascribed to them in the linked [Definitions](../definitions_legal.md). + +SPECIAL DEFINITIONS + +- THREEFOLD COMPANIES noted as (“**THREEFOLD**”, “**COMPANY**,” “**US**,” “**WE**” OR “**OUR**”) mean any of the companies mentioned below: + - DUBAI & BVI + - THREEFOLD DMCC, TF Hub Limited + - THREEFOLD FZC (the original ThreeFold in UAE, no longer active) + - THREEFOLD LABS IT + - MAZRAA IS A BRAND NAME OF THREEFOLD LABS IT + - EUROPE + - THREEFOLD VZW + - TFTECH NV BELGIUM + - BETTERTOKEN NV BELGIUM + - THREEFOLD AG +- TFCHAIN ("**TFCHAIN**") + - ThreeFold Blockchain manages the ThreeFold Grid and the 3Nodes as an autonomous piece of software. 
+ - A DAO (decentralized autonomous organization) has been created which manages the behaviour of this Blockchain Software (upgrades & functionalities) + - It is a piece of open-source software used by all of us together. + - TFChain has been introduced since TFGrid 3.x +- Change Request + - Change Requests can be registered on the TFChain. They are a proposal for any request for change. + - Change Requests can be used to trigger a change in protocol, a software update, or changes in software or TFGrid specifications. + - Change Requests need to be approved by a majority of Validators + - Change Requests are being introduced from TFGrid 3.x (x to be defined) +- TFChain Validators ("**VALIDATOR**") + - A Validator is a piece of software running a TFChain Blockchain Function to protect the security and sovereignty of the Blockchain. + - Each Validator has a vote to agree on changes in protocol, software updates, and changes in software or TFGrid specifications. + - Each Owner / Maintainer of a Validator has to stake a certain, to be defined, amount of TFT before voting can happen. + - Validators are being introduced from TFGrid 3.x (x to be defined) + - A majority of Validators has to vote positively on each suggested Change Request before a change can happen. + - Validators are required for the DAO to function. +- DAO + - Decentralized Autonomous Organization + - Implemented on multiple levels, but for release 3.0.x only on level 1, which is the Substrate TFChain level. + - The DAO is the set of rules under which the decentralized organization functions. 
+ - The DAO specifications are or will be available on https://library.threefold.me (our knowledge base) diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_2_farmer_services.md new file mode 100644 index 0000000..8c1f0a7 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_2_farmer_services.md @@ -0,0 +1,10 @@ +### 2. FARMER SERVICES + +The Farmer may provide IT Capacity on the ThreeFold_Grid (the "**Farmer Services**") pursuant to the terms hereof during the Term of this Agreement. The Farmer Services include the features and functionality applicable to the version of the TF Operating System (Zero OS) installed by the Farmer and TF Blockchain Software (together also referred to as the “**Software**”). Company may update the content, functionality, and user interface of the Farmer Services from time to time in its sole discretion. + +By entering into this Agreement you receive a non-exclusive, non-sublicensable, non-transferable right to provide the Farmer Services pursuant to this Agreement during the Term hereof solely for your internal business purposes subject to the limitations set forth herein. + +The Software consists of open source code and is made available to you pursuant to the terms of the open-source license agreement(s) as located on https://github.com/threefoldtech and https://github.com/threefoldfoundation (the "**Open Source License(s)**"). Your use of the Software or any other Content (Information) is conditioned upon your compliance at all times with the terms of all applicable Open Source License(s), including without limitation all provisions governing access to source code, modification, and/or reverse engineering. [Example license for Zero-OS can be found here](https://github.com/threefoldtech/zos/blob/main/LICENSE). 
You are responsible for complying with any applicable documentation, meaning any information that describes the ThreeFold_Grid, provides instructions or recommendations related to the configuration and/or use of the ThreeFold_Grid, or otherwise informs Users of the intended use of the ThreeFold_Grid, including, but not limited to, content provided directly to the User or published at [https://library.threefold.me](https://library.threefold.me), [https://forum.threefold.io](https://forum.threefold.io) or otherwise made available in conjunction with the ThreeFold_Grid, the ThreeFold_Token or the Software (“**Documentation**”) and for satisfying all technical requirements of the Software, including any requirements set forth in the Documentation for ensuring that the Software performs properly. + diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_3_farmer_grant.md new file mode 100644 index 0000000..0a4a093 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_3_farmer_grant.md @@ -0,0 +1,9 @@ +### 3. 
FARMER GRANT OF RIGHT TO IT CAPACITY + +By making available one or more computers, network or storage devices ("**3Nodes**") and connecting such 3Nodes to the TF Grid via the Software, you hereby grant to Company, TFChain and Users the irrevocable right to access and use the 3Nodes as follows: + +- to use storage, compute and network services as delivered by your 3Node(s) +- to store Users' data and materials on your 3Node(s) (the "**Content**") and to access such Content from your 3Node(s) at any time + +in accordance with the capabilities of the software installed on the 3Nodes. + diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_4_certified_vs_diy.md new file mode 100644 index 0000000..30cac6c --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_4_certified_vs_diy.md @@ -0,0 +1,19 @@ +### 4. CERTIFIED VS. DIY-FARMERS + +There are two types of ThreeFold Farmers: + +1. Certified Farmer: Uses hardware from certified sources and signs a contract with TF Tech NV for support and additional benefits. +2. Do It Yourself (DIY) Farmer: Uses any hardware and relies on online support material only. + +Farmers can opt in for certification ("**Certification**"). Certification can be withdrawn in case the relevant 3Node no longer complies with the applicable certification requirements. + +The following criteria or requirements are checked (timing of implementation, see roadmap on wiki): + +- Bandwidth: 24 times a day, random non-local nodes are used to upload a 2 MB file to a 3Node. The bandwidth will be measured in Mbit/sec. +- Utilization: Through the ThreeFold Explorer the true utilization of the 3Node will be checked. It will be displayed as a percentage of the 3Node's total capacity. 
+- Uptime: The uptime per month will be calculated in the ThreeFold Explorer and is presented as a percentage of 3Node availability. + +ThreeFold Foundation or TFTech may give free certification to 3Nodes that benefit the distribution of capacity on the ThreeFold_Grid. + +ThreeFold Foundation or TFTech may also certify certain hardware partners (i.e. certified hardware vendors) as part of this certification process. + diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_5_farmer_responsibilities.md new file mode 100644 index 0000000..7d081a2 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_5_farmer_responsibilities.md @@ -0,0 +1,15 @@ +### 5. FARMER RESPONSIBILITIES + +At all times during the Term of this Agreement or the period when Content is maintained in your 3Node, whichever is longer: + +1. You will comply with the terms of this Agreement, the [Generic Disclaimer](../disclaimer.md), [ThreeFold Website Terms](./terms_conditions_websites.md) and [Privacy Policy](../privacypolicy.md) and any other terms and conditions required in connection herewith, the Open Source Licenses, and the terms of all other agreements to which you are a party in connection with your performance under this Agreement including, without limitation, any agreement you have with a third-party Internet service provider. +2. You will operate the 3Node in strict compliance with the terms of this Agreement and any applicable laws or regulations, and will not take any action not expressly authorized hereunder. +3. 
Without prejudice to your rights under any applicable Open Source license, you will not modify or attempt to modify the Software for any purpose, including but not limited to attempting to circumvent the audit, bypass security, manipulate the performance of, or otherwise disrupt the ThreeFold_Grid for any reason, including but not limited to attempting to increase the amount of data stored or bandwidth utilized or the amount of Farmed TFTs, as defined herein, and you will not otherwise interfere with the operation of the ThreeFold_Grid. +4. You will provide and maintain the 3Node so that, at all times, it will meet the minimum requirements set out for either pre-configured servers (‘certified hardware’) or ‘do-it-yourself’ servers. [Read more here](../../../documentation/farmers/farmers.md). +5. You will implement and maintain adequate administrative, organizational, physical and technical safeguards to ensure the protection, confidentiality, security, and integrity of the 3Node and Content and shall take all reasonable steps to ensure that Content is not disclosed, accessed, used, modified, or distributed except as expressly authorized under this Agreement. +6. You acknowledge and agree that by running the Software on your hardware device and allowing IT Capacity to be made available on the TF Grid to the Users and TFCHAIN, you may act as a cloud service provider under certain circumstances and as such qualify as a processor or sub-processor under the General Data Protection Regulation (Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC) (‘GDPR’). You undertake to comply with any legal obligations which may possibly be applicable to you as a data processor under the GDPR and/or any other applicable data privacy regulations. +7. 
You acknowledge and agree that by running the Software on your hardware device and allowing IT Capacity to be made available on the TF Grid to the Users, your 3Node may be impacted due to additional constraints being placed on it by the Software and the processing of Content. In particular, but without limiting the generality of the foregoing, your 3Node may not operate as quickly as it would without running the Software and making IT Capacity available for use by Users. +8. In connection with your use of the Software and/or operation of a 3Node hereunder, Company may provide updates to the Software, which will be automatically installed. You acknowledge that these updates are done automatically on your 3Node or any other website or portal and you do not have the ability to confirm such an update. This update mechanism might be revisited at the end of 2020 and will be communicated accordingly. These updates need to be done automatically for now, because the ThreeFold_Grid consists of many components which depend on each other and need the right versions to be installed. +9. In connection with your use of the Software and/or operation of a 3Node hereunder, Company may, from time to time, require you to affirm and/or reaffirm your agreement to the terms of this Agreement, and in such case, your continued use of the Software is contingent upon your promptly providing such affirmation as requested by Company. +10. You, as the Farmer, acknowledge that you retain administrative and/or physical control over to whom you grant access to the applicable 3Node. You are responsible for maintaining the physical security of the 3Node. +11. 
Company may suspend Farmer’s participation in the ThreeFold_Grid if Company believes the Farmer to be: (a) violating any term of this Agreement; or (b) using the ThreeFold_Grid in a manner that Company reasonably believes may cause a security risk, a disruption to the ThreeFold_Grid, or liability for Company or any persons involved in the ThreeFold Open Source project. \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_6_restrictions.md new file mode 100644 index 0000000..d37d47b --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_6_restrictions.md @@ -0,0 +1,13 @@ +### 6. RESTRICTIONS + +You will operate the 3Node in strict accordance with the terms of this Agreement and in no other manner. Without limiting the generality of the foregoing, you will not: + +1. access or use the ThreeFold_Grid: (i) in violation of applicable laws; or (ii) in a manner that interferes with or disrupts the integrity or performance of the ThreeFold_Grid (or the data contained therein); +2. with respect to Content (i) reverse engineer any aspect of the Content or do anything that might discover the contents or origin of the Content, (ii) attempt to bypass or circumvent measures employed to prevent or limit access to the Content, including by attempting to defeat any encryption, or (iii) attempt to interfere with the storage or transmission of Content or with our audits of your 3Node(s); +3. manipulate or otherwise attempt to bypass, change, or update any values related to uptime detection outside the programmatic operation of the Software; +4. deliberately or actively limit or otherwise negatively impact download speed such that insufficient bandwidth is available for required audit traffic; +5. 
manipulate or alter the default behavior of the ThreeFold_Grid to artificially increase or decrease the value of any reputation factor of any 3Node; +6. manipulate network responses to any request with unauthorized intent to change the cryptographic signatures, NodeID, or TFT wallet address; +7. attempt to manipulate or falsify the identification of the 3Node by the Software or otherwise bypass the proof of capacity process; +8. retain any Content after the earlier of termination of this Agreement or de-certification of the applicable 3Node at any time; or +9. in any other way attempt to interfere, impede, alter, or otherwise interact in any manner not expressly authorized hereunder with the ThreeFold_Grid or the operation of any other 3Node(s). diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_7_representations_and_warranties.md new file mode 100644 index 0000000..bff2d65 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_7_representations_and_warranties.md @@ -0,0 +1,9 @@ +### 7. REPRESENTATIONS AND WARRANTIES + +You hereby represent, warrant, and covenant that: + +1. You own or control your 3Node(s), and have the right to install the Software on your 3Node(s) and share IT Capacity pursuant to this Agreement, and otherwise comply with all of your obligations under this Agreement and/or applicable laws; +2. You represent and warrant that you are authorized to receive ThreeFold_Tokens (TFT) as remuneration from Users for the usage of your IT Capacity on the ThreeFold_Grid as set forth in this Agreement; +3. You have full power and authority to enter into this Agreement and comply with all terms hereof, and that doing so will not conflict with any obligation you may owe to any third party; +4. 
You have the qualifications, skill, and ability to perform your obligations hereunder without the advice, control, or supervision of Company; and +5. You will at all times comply with all applicable foreign, federal, state, and local laws, orders, rules, and regulations currently in effect or that may come into effect during the term of this Agreement, including but not limited to those regarding data privacy and protection. diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_8_capacity_measurement_minting.md new file mode 100644 index 0000000..a61d2f6 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_8_capacity_measurement_minting.md @@ -0,0 +1,52 @@ +### 8. TFT Minting (Token Creation ‘FARMING of capacity’) TFGrid 2.x + +#### 8.1 General Principle + +Farmers who connect 3Nodes on an ongoing basis to the ThreeFold_Grid by running the Software and making IT Capacity available to the Users get rewarded by receiving ThreeFold_Tokens (TFTs) which are generated by the Software. TFTs are exclusively issued (created) by the TF Chain for each active Capacity Pool which gets and remains connected to the ThreeFold_Grid. Such issuance of TFTs that results from connecting a 3Node to the ThreeFold_Grid and making IT Capacity available on a global scale to Users is called "**Farming**". + +#### 8.2 Calculation of Farmed TFTs + +The details of farming (minting of TFTs) are described in our farming logic for TFGrid 2, and that location serves as the master for the TFT Reward Process. + +The wiki is version controlled (on github), so all changes can be followed. All connected IT Capacity gets registered on the TF Chain, i.e. ThreeFold's blockchain software. 
Each month the TF Chain issues new TFTs and transfers them to Farmers in respect of each 3Node that remained connected to the ThreeFold_Grid during the preceding month, using the following calculation in respect of each 3Node: + +![farmer_tcs_minting_equation](img/farmer_tcs_minting_equation.jpg) + +The concepts of CPR, CPR Price and Difficulty Level are determined in the aforementioned wiki and are incorporated into this Agreement by reference. + +The amount of TFTs that are Farmed hence depends on three variables: + +_1. Proof-of-Capacity_ + +The specs of the Farmer’s relevant 3Node: + +- Compute Capacity (CPU) = CRU +- Memory Capacity (RAM) = MRU +- Storage Capacity (SSD/HDD) = SRU/HRU + +The performance/capability of this hardware is attributed with Cloud Units that are then summarized into a Cloud Production Rate (CPR) (as further described in the abovementioned wiki) for the relevant 3Node. The higher the CPR, the more TFTs are Farmed. + +_2. Difficulty Level_ + +The amount of ThreeFold_Tokens (TFTs) that Farmers receive for Farming also depends on the amount of TFTs that are already in circulation. The more TFTs already exist, the lower the rewards. This follows the principle of diminishing returns. We call this limitation on Farming rewards the "**Difficulty Level**". + +When the amount of existing TFT nears 4 billion, the amount of TFTs received by Farmers will decrease progressively. Once the aggregate amount of "Farmed" TFTs reaches four billion ThreeFold_Tokens (4,000,000,000 TFTs), there will be no more rewards for Farming and no new TFTs will be generated by the Software. + +The Difficulty Level is the same for all Farmers at any given point in time. + +_3. Certification_ + +When connecting reliable hardware, the Farmer can request a certification from TF Tech NV, which leads to increased earnings in TFT for such ‘Certified Farmers’. We automatically measure uptime, bandwidth and the utilization of the node for this certification. 
The details of this certification have to be further defined and will be set out on our wiki. + +#### 8.3 Uptime + +In addition thereto, the Company will determine in its sole discretion the reasonable uptime that each 3Node of the Farmer needs to achieve when they register their 3Node(s) in the TF Chain. ThreeFold expects most 3Nodes to achieve an uptime of more than 98%; commercial providers can go as high as 99.9%. The TF Chain will only issue TFTs if the IT Capacity was connected to the internet and was usable during the last month at least up to the specified uptime guarantee. + +#### 8.4 Taxes + +You will be solely responsible for payment of all applicable taxes (if any) associated with your Farming of ThreeFold_Tokens (TFTA, TFT), including but not limited to value added taxes, taxes on gross receipts and income, Social Security taxes, business license fees and other payment obligations applicable to your business. + +#### 8.5 Modification + +The Company reserves the right to modify the terms of this section 8 (‘_Capacity Measurement and Minting - Farming_’) at any time, including but not limited to the determination of the Difficulty Level. Such amendments will be subject to the approval of a majority of the members of the Company’s ‘Grid Council’ and a majority of the Farmers (whereby majority is measured based on the number of 3Nodes a Farmer has; each 3Node entitles the Farmer to one vote, and Farmers who do not vote have no say in the decision process) who participate in an online poll organized by the Company. In case of modification to these terms, the Company shall inform the Farmer at least one month in advance. In case the Farmer would not agree to such modifications, the Farmer shall have the right to immediately and unilaterally terminate this Agreement by disconnecting the Farming Pool from the ThreeFold_Grid. 
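The TFGrid 2.x minting rules above combine a CPR-based reward, a Difficulty Level that tapers to zero at the four-billion-TFT cap, and an uptime gate. The sketch below is illustrative only: the exact reward formula lives in the referenced farming-logic wiki, and the linear difficulty taper, the function names, and the USD-denominated inputs are assumptions made here for illustration.

```python
TFT_CAP = 4_000_000_000  # farming stops once 4 billion TFT exist (section 8.2)

def difficulty_multiplier(tft_in_circulation: float) -> float:
    # Assumption: a linear taper toward zero at the cap. The real Difficulty
    # Level curve is defined in the farming-logic wiki, not in this Agreement.
    return max(0.0, 1.0 - tft_in_circulation / TFT_CAP)

def monthly_farmed_tft(cpr: float, cpr_price_usd: float, tft_price_usd: float,
                       uptime_pct: float, uptime_guarantee_pct: float,
                       tft_in_circulation: float) -> float:
    # Uptime gate (section 8.3): the TF Chain only issues TFTs if the
    # 3Node met its registered uptime guarantee during the last month.
    if uptime_pct < uptime_guarantee_pct:
        return 0.0
    # Reward scales with the node's CPR and shrinks as more TFT circulate.
    usd_reward = cpr * cpr_price_usd * difficulty_multiplier(tft_in_circulation)
    return usd_reward / tft_price_usd
```

Under this sketch, a node with half the cap already in circulation earns half the reward it would at launch, and a node that misses its uptime guarantee earns nothing for the month.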
+ diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_8_capacity_measurement_minting3.md new file mode 100644 index 0000000..b4bc8b9 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_8_capacity_measurement_minting3.md @@ -0,0 +1,55 @@ +### 8. TFT Minting (Token Creation ‘FARMING of capacity’) For TFGrid 3.x + +#### 8.1 General Principle + +Farmers who connect 3Nodes on an ongoing basis to the ThreeFold_Grid by running the Software and making IT Capacity available to the Users get rewarded by receiving ThreeFold tokens (TFTs) which are generated by the Software. TFTs are exclusively issued (created) by the TFChain for each active Capacity Pool which gets and remains connected to the ThreeFold_Grid. Such issuance of TFTs that results from connecting a 3Node to the ThreeFold_Grid and making IT Capacity available on a global scale to Users is called "**Farming**". + +#### 8.2 Calculation of Farmed TFTs + +The details of farming (minting of TFTs) are described in our [farming logic](../../farming/farming_reward.md), and that location serves as the master for the TFT Reward Process. The wiki is version controlled (on github), so all changes can be followed. All connected IT Capacity gets registered on the TFChain, i.e. ThreeFold's blockchain software. Each month the TFChain issues new TFTs and transfers them to Farmers in respect of each 3Node that remained connected to the ThreeFold_Grid during the preceding month, using the calculation set out in the aforementioned farming logic in respect of each 3Node. + +The amount of TFTs that are Farmed hence depends on three variables: + +_1. Proof-of-Capacity_ + +The specs of the Farmer’s relevant 3Node: + +- Compute Capacity (CPU) = CRU +- Memory Capacity (RAM) = MRU +- Storage Capacity (SSD/HDD) = SRU/HRU + +_2. 
PRICE OF TFT = THREEFOLD TOKEN_ + +The Price of TFT is registered at the point of connection or over an averaged-out period. + +Each farmer needs to register their TFT farming account in the TF Explorer through the TFChain (see manual). + +_3. Certification_ + +When connecting reliable hardware, the Farmer can request a certification from TF Tech NV, which leads to increased earnings in TFT for such ‘Certified Farmers’. We automatically measure uptime, bandwidth and the utilization of the node for this certification. The details of this certification have to be further defined and will be set out on our wiki. + +The specific way farming rewards are calculated is specified in: + +- [Farming Reward](../../farming/farming_reward.md) +- [Proof-of-Capacity](../../farming/proof_of_capacity.md) + + +#### 8.3 Uptime + +In addition thereto, the Company will determine in its sole discretion the reasonable uptime that each 3Node of the Farmer needs to achieve when they register their 3Node(s) in the TFChain. ThreeFold expects most 3Nodes to achieve an uptime of more than 98%; commercial providers can go as high as 99.9%. The TFChain will only issue TFTs if the IT Capacity was connected to the internet and was usable during the last month at least up to the specified uptime guarantee. + +#### 8.4 Taxes + +You will be solely responsible for payment of all applicable taxes (if any) associated with your Farming of ThreeFold tokens (TFTA, TFT), including but not limited to value added taxes, taxes on gross receipts and income, Social Security taxes, business license fees and other payment obligations applicable to your business. + +#### 8.5 Modification + +The Company reserves the right to modify the terms of this section 8 (‘_Capacity Measurement and Minting - Farming_’) at any time. 
Such amendments will be subject to the approval of the majority of the Farmers (whereby majority is measured based on the number of 3Nodes a Farmer has; each 3Node entitles the Farmer to one vote, and Farmers who do not vote have no say in the decision process) who participate in an online poll organized by the Company or the DAO. + +In case of modification to these terms, the Company or DAO shall inform the Farmer at least one month in advance. In case the Farmer would not agree to such modifications, the Farmer shall have the right to immediately and unilaterally terminate this Agreement by disconnecting the Farming Pool from the ThreeFold_Grid. + +The TFChain and TFGrid capabilities & specifications can change over time after getting consensus from the DAO. +The specific requirements and workings of the DAO are or will be published on our wiki system: [https://library.threefold.me](https://library.threefold.me). + +If the DAO (by means of X number of members of the Community) agrees on a change of the protocol used or specifications for the Software or TFGrid, then the Validators can allow and execute an upgrade of the system (TFChain as well as ZERO-OS software). Farmers and Users accept changes introduced this way; they accept that any of the above-mentioned variables can be changed that way. diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_9_capacity_utilization.md new file mode 100644 index 0000000..1a2373c --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_9_capacity_utilization.md @@ -0,0 +1,60 @@ +### 9. CAPACITY SALES (‘UTILIZATION’) FOR TFGrid 2.X + +#### 9.1 General Principles + +Users (such as developers or other persons requiring IT Capacity) can rent IT Capacity from the ThreeFold_Grid in exchange for ThreeFold_Tokens (TFTs), which creates a natural economic demand. 
We call this process of selling IT Capacity on the ThreeFold_Grid "**Utilization**". + +ThreeFold_Tokens (TFTs) are used to buy or sell IT Capacity as delivered by the Capacity Pools on the ThreeFold_Grid. In order to do so, the Farmer shall sell the IT Capacity produced on the ThreeFold_Grid via the "**ThreeFold Directory**" (or “TF Directory”). The TF Directory acts like a ‘marketplace’ for selling IT Capacity generated by the Farmer’s 3Nodes. The TF Directory has been implemented by a tool called TF Explorer; see http://explorer.grid.tf. + +In order to access the TF Directory and sell IT Capacity on the ThreeFold_Grid, the TF Grid user or Solution Provider offering services on the TFGrid must deploy a virtual system administrator, called the "**3Bot**". This 3Bot is used, amongst other things, to sell and buy IT Capacity (raw storage and compute resources) on the ThreeFold_Grid. + +The reservation and use of the Farmer’s IT Capacity by the User is effected through "Smart Contracts for IT". The Smart Contract for IT will then be executed automatically by the software code and the requested IT workload will be deployed. The TF Explorer calculates the required TFT Token price and makes sure that the TF Farmer receives their cultivated tokens. + +#### 9.2 Utilization Mechanism + +IT capacity is expressed in compute & storage units. + +- CU = [Compute Units](../../cloud/cloudunits.md) +- SU = [Storage Units](../../cloud/cloudunits.md) + +The pricing is expressed as follows: + +- CP = Compute Unit price - expressed in USD +- SP = Storage Unit price - expressed in USD + +- T = Token price in USD at time of capacity reservation + +Certified Farmers are free to determine the pricing of their IT Capacity. DIY (Do It Yourself) farmers have to rely on the TF Foundation to set the price of the CU and SU. +TF Foundation will do this with the best intentions in mind. + +The IT capacity is sold through the TF Explorer using the smart contract for IT concept. 
TF Explorer is the inventory of all IT capacity available for consumption on the ThreeFold_Grid. See http://explorer.grid.tf/ + +Each farmer needs to register their TFT wallet in the TF Explorer, and the certified farmers can register the price for the CU/SU on the TF Explorer as well by using their farming 3Bot. + +> Utilization in TFT = (CU × CP + SU × SP) / T × 0.9 + +Utilization in TFT is the amount of TFT (Tokens) the Farmer receives when capacity has been sold as a result of provisioning the IT workload by means of the IT smart contract concept. + +As a result of executing the IT smart contract, 90% of the proceeds (in TFT) of the capacity sold is sent to the wallet of the Farmer and 10% is sent to the TF Foundation Wallet; this is an automatic action. + +#### 9.3 TF Foundation Fee and License Fees TFTech + +The Company receives 10% of the sales of Cloud Units on the TF Explorer (the "**Foundation Fee**"), as described above. The Company will use the revenues from the Foundation Fees to fund its projects and objectives, including, amongst others, to promote, maintain and expand the ThreeFold_Grid. The Company might also decide to burn part of those tokens (TFT) to lower the total amount of tokens in the field (burning means destroying tokens). + +In case the Farmer chooses to purchase a license from TF Tech NV in order to qualify as a Certified Farmer and provide Certified Capacity on the ThreeFold_Grid, the Farmer shall pay the relevant fee (as agreed between the Farmer and TF Tech NV directly) to TF Tech NV (the "**Certification License Fee**"). + +#### 9.4 Taxes + +You (The Farmer) will be responsible for payment of all applicable taxes (if any) associated with your Utilization of ThreeFold_Tokens (i.e. sale of IT Capacity), including but not limited to value added taxes, sales taxes, customs taxes, and taxes on gross receipts and income. 
The Farmer shall seek all necessary tax advice in order to comply with any applicable tax regulations when providing IT Capacity to Users on the ThreeFold_Grid. By way of example, the Farmer acknowledges that within the European Union, as from 1 January 2015, telecommunications, broadcasting and electronically supplied services are always taxed in the country of the customer (i.e. the tax residence of the User) – regardless of whether the User is a business or a consumer. + +In view thereof, the Farmer will determine the applicable Farmer’s Compute Unit Price and Storage Unit Price (as referred to in section 9.2 above) taking into account any aforementioned taxes that may apply. ThreeFold will not be held liable for the Farmer’s failure to comply with its legal obligations, including but not limited to its obligation to pay any applicable taxes, and the Farmer will indemnify and hold harmless the Company for claims against the Company from any tax authorities in respect of such non-compliance by the Farmer. + +#### 9.5 Modification + +The Company reserves the right to modify the terms of this section 9 (‘_Capacity Sales - Utilization_’) at any time, including but not limited to the determination of the Foundation Fee. In case of modification to these terms, the Company shall inform the Farmer at least one month in advance. In case the Farmer would not agree to such modifications, the Farmer shall have the right to immediately and unilaterally terminate this Agreement by disconnecting the Farming Pool from the ThreeFold_Grid. + +#### 9.6 Breach + +In addition to its other rights and remedies under this Agreement, the Farmer forfeits any right to compensation under this Agreement if the Farmer breaches any term thereof. 
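The section 9.2 formula and the automatic 90/10 split executed by the Smart Contract for IT can be sketched as follows. The function name and the two-way return value are illustrative assumptions; the actual accounting is performed by the TF Explorer and the smart contract, not by any client-side code.

```python
def utilization_payout_tft(cu: float, cp: float, su: float, sp: float,
                           t: float) -> tuple[float, float]:
    """Section 9.2 sketch: Utilization in TFT = (CU * CP + SU * SP) / T * 0.9.

    cu/su: compute and storage units reserved; cp/sp: their USD prices;
    t: TFT price in USD at the time of capacity reservation.
    Returns (farmer_share, foundation_share) in TFT.
    """
    # USD value of the reserved capacity, converted to TFT at price t.
    total_tft = (cu * cp + su * sp) / t
    # 90% goes to the Farmer's wallet, 10% to the TF Foundation Wallet.
    return 0.9 * total_tft, 0.1 * total_tft
```

For example, 2 CU at 10 USD plus 4 SU at 5 USD, with a token price of 0.5 USD, is 80 TFT in total: 72 TFT to the Farmer and 8 TFT to the Foundation.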
\ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_9_capacity_utilization3.md b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_9_capacity_utilization3.md new file mode 100644 index 0000000..2c209f3 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_farmer_parts/part_9_capacity_utilization3.md @@ -0,0 +1,53 @@ +### 9. CAPACITY UTILIZATION (‘UTILIZATION’) FOR TFGRID 3.X + +#### 9.1 General Principles + +Users (such as developers or other persons requiring IT Capacity) can use IT Capacity from the ThreeFold_Grid in exchange for ThreeFold_Tokens (TFTs), which creates a natural economic demand. We call this process of using IT Capacity on the ThreeFold_Grid "**Utilization**". + +ThreeFold_Tokens (TFTs) are used to buy IT Capacity as delivered by the 3Nodes (by a process called Farming) on the ThreeFold_Grid. The capacity can be consulted by means of a tool called TF Explorer, see http://explorer.grid.tf. + +#### 9.2 Utilization Mechanism + +IT capacity is expressed in [network, compute & storage units](../../cloud/cloudunits.md). + +- CU = Compute Units +- SU = Storage Units +- NU = Network Units + +TF Explorer is the inventory of all IT capacity available for consumption on the ThreeFold_Grid. See http://explorer.grid.tf/ + +TFT received from people using capacity is distributed as follows: + +| Percentage | Description | Remark | +| ---------- | -------------------------------------- | ------------------------------------------------------------------------ | +| 35% | needs to be burned | results in more TFT being burned than generated once the grid is more mature. | +| 10% | to TF Foundation | used to promote and manage the project. | +| 5% | to Staking Pool for TF Validators | used to reward the people who run the TFChain 3.0 blockchain validators. | +| 50% | for solution providers & sales channel | managed by [ThreeFold DAO](../../about/dao/dao.md). 
| + +The single source of truth for Utilization specifications is available [here](../../farming/proof_of_utilization.md). + + +#### 9.4 Taxes + +You (the Farmer) will be responsible for payment of all applicable taxes (if any) associated with your Utilization of ThreeFold_Tokens (i.e. sale of IT Capacity), including but not limited to value added taxes, sales taxes, customs taxes, and taxes on gross receipts and income. + +The Farmer shall seek all necessary tax advice in order to comply with any applicable tax regulations when providing IT Capacity to Users on the ThreeFold_Grid. By way of example, the Farmer acknowledges that within the European Union, as from 1 January 2015, telecommunications, broadcasting and electronically supplied services are always taxed in the country of the customer (i.e. the tax residence of the User) – regardless of whether the User is a business or a consumer. + +In view thereof, the Farmer will determine the applicable Farmer’s Compute Unit Price and Storage Unit Price (as referred to in section 9.2 above) taking into account any aforementioned taxes that may apply. ThreeFold will not be held liable for the Farmer’s failure to comply with its legal obligations, including but not limited to its obligation to pay any applicable taxes, and the Farmer will indemnify and hold harmless the Company for claims against the Company from any tax authorities in respect of such non-compliance by the Farmer. + +#### 9.5 Modification + +The Company reserves the right to modify the terms of this section 9 (‘_CAPACITY UTILIZATION (‘UTILIZATION’)_’) at any time. Such amendments will be subject to the approval of the TFChain Validators who protect the TFChain (Substrate-based chain on Level 1) through our DAO. + +In case of modification to these terms, the Company or DAO shall inform the Farmers and Community at least one month in advance by means of forum, chat or other mechanisms. 
In case the Farmer does not agree to such modifications, the Farmer shall have the right to immediately and unilaterally terminate this Agreement by disconnecting the Farming Pool from the ThreeFold_Grid. + +The TFChain and TFGrid capabilities & specifications can change over time after consensus is reached in the DAO. +The specific requirements and workings of the DAO are or will be published on our wiki system: https://library.threefold.me + +If the DAO (by means of X nr of members of the Community or Validators) agrees on a change of the protocol used or of the specifications for the Software or TFGrid, then the Validators can allow and execute an upgrade of the system (TFChain as well as Zero-OS software). Farmers and Users accept changes introduced this way and accept that any of the above-mentioned variables can be changed in this manner. + + +#### 9.6 Breach + +In addition to the Company's other rights and remedies under this Agreement, the Farmer forfeits any right to compensation under this Agreement if the Farmer breaches any terms thereof. 
\ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_griduser.md b/collections/manual_legal/terms_conditions/terms_conditions_griduser.md new file mode 100644 index 0000000..3de562f --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_griduser.md @@ -0,0 +1,240 @@ +**USER TERMS AND CONDITIONS** + +{{#include ./sub/parties_threefold.md}} +, governing your usage of the ThreeFold software and related products (the “**TF Products**”), including but not limited to: + +- The ThreeFold software and technologies (the "**Software**"), including: + - "**Zero OS**", a stateless operating system which enables distributed hardware to form the ThreeFold_Grid which generates IT Capacity (storage and compute capacity) + - "**Zero Chain**", the blockchain framework + - and any other related software components which are referenced in github under [https://github.com/threefoldtech](https://github.com/threefoldtech) and [https://github.com/threefoldfoundation](https://github.com/threefoldfoundation) +- Threefold applications (including the ThreeFold Connect app) +- ThreeFold Tokens (TFTs) +- The TFGrid and any related products. + +The terms of your interaction with the websites, social networks or online communication channels maintained by the Company (including but not limited to the posting or publishing of content, information or promotional materials) shall be governed by the ‘Terms of Service’ referred to on [www.threefold.io](https://www.threefold.io), which are incorporated herein by reference. + +The IT Capacity services provided by Farmers on the ThreeFold_Grid are governed by the present Agreement, but may be supplemented by additional terms agreed between the relevant Farmer and the User governing their delivery, access and use of the IT Capacity through the ThreeFold_Grid. This supplementation shall not extend the liability of ThreeFold. 
+ +You understand and agree that by accepting the terms of this Agreement, either by clicking to signify acceptance, or by taking any one or more of the following actions: downloading, installing, running, and/or using the applicable TF Products, you agree to be bound by the terms of this Agreement effective as of the date that you take the earliest of the foregoing actions. You represent and warrant that you are 18 years old or older and have the right and authority to enter into and comply with the terms of this Agreement. + +{{#include ./terms_conditions_farmer_parts/part_1_definitions.md}} + +### 2. USE OF TF PRODUCTS + +By entering into this Agreement you receive a non-exclusive, non-sublicensable, non-transferable right to use the TF Products pursuant to this Agreement during the Term hereof solely for your internal business purposes subject to the limitations set forth herein. + +You acknowledge that the Software consists of open source code which is made available to you pursuant to the terms of the relevant open-source license agreement(s) as specified in github under [https://github.com/threefoldtech](https://github.com/threefoldtech) and [https://github.com/threefoldfoundation](https://github.com/threefoldfoundation) (the "**Open Source License(s)**"). Your use of the Software and the TF Products is conditioned upon your compliance at all times with the terms of all applicable Open Source License(s), including without limitation all provisions governing access to source code, modification, and/or reverse engineering. 
You are responsible for complying with any applicable documentation, meaning any information that describes the TF Products, provides instructions or recommendations related to the configuration and/or use of the TF Products, or otherwise informs Users of their intended use, including, but not limited to content provided directly to User or published or otherwise made available in conjunction with the ThreeFold_Grid, the ThreeFold_Token or the Software (“**Documentation**”) and for satisfying all technical requirements of the TF Products, including any requirements set forth in the Documentation for ensuring that the TF Products perform properly. + +### 3. FARMER GRANT OF RIGHT TO FARMING POOL + +By making available one or more computers, network or storage devices ("**3Nodes**") and connecting such 3Nodes to the TF Grid (as part of a Farming Pool) via the Software, you hereby grant to Company and Users the irrevocable right to access and use the 3Nodes as follows: + +- to use storage, compute and network services as delivered by your 3Node(s) +- to allow Users to store data and materials on your 3Node(s) (the "**Content**") and to access such Content from your 3Node(s) at any time + +in accordance with the capabilities of the software installed on the 3Nodes. + +### 4. REGISTRATION TO THE THREEFOLD CONNECT (FORMERLY 3BOT CONNECT) APPLICATION + +In order to access certain TF Products you will be required to install the ThreeFold Connect application on your device and register your account in this ThreeFold Connect application by creating a username and password. You agree to provide us with accurate, complete, and current registration information about yourself. It is your responsibility to ensure that your password remains confidential and secure. By registering, you agree that you are fully responsible for all activities that occur under your username and password. We may assume that any communications we receive under your account have been made by you. 
If you are a billing owner, an administrator, or if you have confirmed in writing that you have the authority to make decisions on behalf of a User ("**3Bot Administrator**"), you represent and warrant that you are authorized to make decisions on behalf of the User and agree that ThreeFold is entitled to rely on your instructions. + +You are responsible for notifying us at legal@threefold.io if you become aware of any unauthorized use of or access to your 3Bot account. You understand and agree that we may require you to provide information that may be used to confirm your identity and help ensure the security of your account. ThreeFold will not be liable for any loss, damages, liability, expenses or attorneys’ fees that you may incur as a result of someone else using your password or account, either with or without your knowledge and/or authorization, and regardless of whether you have or have not advised us of such unauthorized use. You will be liable for losses, damages, liability, expenses and attorneys’ fees incurred by ThreeFold or a third party due to someone else using your account. In the event that the 3Bot Administrator or User loses access to an account or otherwise requests information about an account, ThreeFold reserves the right to request from the 3Bot Administrator or User any verification it deems necessary before restoring access to or providing information about such account in its sole discretion. + +### 5. USE OF THREEFOLD TOKENS + +ThreeFold_Tokens (TFTs) are a digital token used to buy autonomous and decentralized IT Capacity (compute, storage, network, IT services, applications) on the ThreeFold_Grid. TFTs have a specific commercial utility, since ThreeFold_Tokens were conceived as the designated currency for buying and selling IT Capacity on the ThreeFold_Grid. + +TFTs are exclusively generated through a process called Farming, which means that TFTs are created only when new IT Capacity is added to the ThreeFold_Grid. 
TFTs are registered on a blockchain which is part of the Stellar Network ([https://stellar.org](https://stellar.org/)), Binance Smart Chain, a Cosmos Chain or our own TFChain. + +The first batch of TFTs registered on the blockchain consists of the ‘Genesis Block’ of 685 million TFTs that were Farmed by or on behalf of the Company using an initial Farming Pool, also known as the Genesis Pool. + +Two versions of ThreeFold_Tokens have been issued: + +- A first version of the ThreeFold_Token was issued as from March 2018 on ThreeFold's initial blockchain called Rivine. These TFTs are also referred to as "TFTv1" or “TFTA”. These tokens have now been migrated to the Stellar blockchain. +- A second version of the ThreeFold_Token was issued as from May 2020 on the Stellar blockchain. These TFTs are also referred to as "TFTv2". This token is available on multiple blockchains; the total amount of tokens farmed is the same regardless of the blockchain used, and TFBridges are used to migrate TFTs between blockchains. + +While the original TFTv1 kept all the same properties and benefits, they are now called TFTA on the Stellar blockchain. Since the creation of the TFTv2, TFTAs have become ThreeFold's voluntary staking pool of Tokens, which means these TFTs can only be used to buy IT Capacity and cannot be traded otherwise. However, any User can convert TFTAs into TFTs (i.e. TFTv2) by implementing a few simple migration steps which can be found [here](./tfta_to_tft.md). Once converted, any TFTs can be traded or transferred by various means as the User deems fit, as further explained [here](../../../documentation/threefold_token/buy_sell_tft/buy_sell_tft.md). + +The TF Foundation has chosen to use multiple blockchain technologies for storing and managing the TFT. You can use any wallet that supports the chosen blockchain, including but not limited to the wallet included in the ThreeFold Connect app. 
+ +Your use of the TF Products and/or TFTs constitutes your acknowledgement of the aforementioned general principles relating to the ThreeFold_Tokens. As a User you furthermore acknowledge that: + +- each TFT constitutes a value of exchange on the ThreeFold_Grid; +- you have been advised that the ThreeFold_Tokens are the result of farming which means a Farmer connects IT Capacity to the ThreeFold_Grid; +- as such the TFTs have not been registered under any country’s securities laws; +- TFTs are neither securities nor an investment instrument; and +- the purchase, creation or use of TFTs involve risks, all of which the User fully and completely assumes by entering into this Agreement. + +### 6. MODIFICATIONS TO THE TF PRODUCTS + +We reserve the right, in our sole discretion, to modify or discontinue, temporarily or permanently, any TF Products (or any features or functionality thereof) at any time without notice and without obligation or liability to you. You agree that ThreeFold shall not be liable to you or any third party for any modification, suspension, or discontinuance of the TF Products or related services. + +### 7. USER RESPONSIBILITIES + +At all times during the Term of this Agreement, the period when you use any TF Products, the period when your Content is maintained in a 3Node, or the period when TFTs are maintained in a wallet address of which you hold the private keys, whichever is longest: + +1. You will comply with the terms of this Agreement and the [Privacy Policy](../privacypolicy.md) and any other terms and conditions required in connection herewith, the Open Source Licenses, and the terms of all other agreements to which you are a party in connection with your performance under this Agreement including, without limitation, any agreement you have with a third-party service provider. +2. 
You will use and operate the TF Products in strict compliance with the terms of this Agreement and any applicable laws or regulations, and you will not take any action not expressly authorized hereunder. +3. Without prejudice to your rights under any applicable Open Source license, you will not modify or attempt to modify the TF Products for any purpose including but not limited to attempting to circumvent the audit, bypass security, manipulate the performance of, or otherwise disrupt the TF Products for any reason, including but not limited to attempting to increase the amount of data stored or bandwidth utilized, as defined herein, and you will not otherwise interfere with the operation of the ThreeFold_Grid. +4. You acknowledge and agree that the Company has no practical access to your data or knowledge of the nature of the data stored on your 3Bot or the 3Nodes (including but not limited to Content) and that it does not retain any Content or other data that you process using the ThreeFold_Grid. The Company has no control over the management of such Content or data, nor any influence over the specific processing procedures. The Company will never pursue changes to TF Products that could make User data accessible to Company or third parties. We are a neutral intermediary and do not act on behalf of a Farmer, User or any other party to process Content or User data and thus you acknowledge and agree that we should not be qualified as a data processor or sub-processor under the General Data Protection Regulation (Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC) (‘GDPR’). +5. 
Company may suspend User’s use of the TF Products if Company believes the User to be: (a) violating any term of this Agreement; or (b) using the TF Products in a manner that Company reasonably believes may cause a security risk, a disruption to the TF Products (including the ThreeFold_Grid), or liability for Company or any persons involved in the ThreeFold Open Source project. + +### 8. USAGE RESTRICTIONS (ACCEPTABLE USE) + +You will use the TF Products in strict accordance with the terms of this Agreement and in no other manner. Without limiting the generality of the foregoing, you will not: + +1. access or use the TF Products: (i) in violation of applicable laws; (ii) to send or store material knowingly or intentionally containing software viruses, worms, Trojan horses or other harmful computer code, files, or scripts; (iii) in a manner that interferes with or disrupts the integrity or performance of the ThreeFold_Grid (or the data contained therein); or (iv) in any manner that could interfere with, disrupt, negatively affect or inhibit other Users from fully enjoying their use of the TF Products or that could damage, disable, overburden or impair the functioning of the TF Products in any manner. +2. delete or otherwise render Content unavailable for recovery independent of the programmatic functionality of the relevant software; +3. violate, infringe or misappropriate any intellectual property or other third-party right or commit a tort; +4. modify, copy (other than standard page caching), publicly perform, publicly display, sell, rent, lease, timeshare or otherwise distribute the TF Products, in whole or in part. This restriction does not apply to open source software we release, which you can use subject to the applicable open source software license terms; +5. 
attempt to bypass or circumvent measures employed to prevent or limit access to any content, area or functionality on the TF Products, without providing prior notice to Company of the method used to bypass or circumvent; +6. in any other way attempt to interfere, impede, alter, or otherwise interact with the TF Products in any manner not expressly authorized hereunder; +7. use any of the TF Products other than for its intended purposes; or +8. use the TF Products to engage in or promote any activity that violates these User Terms and Conditions. + +### 9. CONTENT RESTRICTIONS + +The IT Capacity that is made available on the ThreeFold_Grid can be used by any Users to create, post, upload, share or store Content, such as text, graphics, photos, videos, sound, data or other information and materials submitted or provided by Users. + +We have no access to, nor do we control, own, or endorse any Content that you transmit, store or process via the ThreeFold_Grid or any other TF Products. You are solely responsible for any Content stored using the ThreeFold_Grid or any TF Products, and for any data that you have entered using the ThreeFold Connect application. You hereby represent and warrant that (1) you own all intellectual property rights (or have obtained all necessary permissions) to provide your Content and other data; (2) such Content and other data will not violate any agreements or confidentiality obligations; and (3) such Content and data will not violate, infringe or misappropriate any intellectual property right or other proprietary rights of any person or third party. + +You will not create, post, share or store Content or other data that: + +1. is unlawful, libelous, defamatory, harassing, threatening, invasive of privacy or publicity rights, abusive, inflammatory, fraudulent or otherwise objectionable; +2. 
would constitute, encourage or provide instructions for a criminal offense, violate the rights of any party, otherwise create liability or violate any local, state, national or international law; +3. intentionally misleads by containing or depicting any statements, remarks or claims that do not reflect your honest views and experiences; +4. impersonates, or misrepresents your affiliation with, any person or entity (including Company); +5. references or depicts Company, our TF Products or any related services but fails to disclose any material connection to us that may exist; +6. contains any unsolicited promotions, political campaigning, advertising or solicitations; +7. contains any viruses, corrupted data or other harmful, disruptive or destructive files or content; or +8. in our sole judgment, is objectionable, restricts or inhibits any other person from using or enjoying the TF Products, or may expose Company or others to any harm or liability of any type. + +### 10. REPRESENTATIONS AND WARRANTIES + +You hereby represent, warrant, and covenant that: + +1. You have full legal capacity, power and authority to execute and deliver this Agreement and to perform your obligations hereunder. This Agreement constitutes valid and binding obligations of the User, enforceable in accordance with its terms. + +2. You are NOT a target person or entity under any restrictive measures in the framework of the EU Common Foreign and Security Policy and you are NOT named on any E.U. or U.S. government denied-party list. + +3. You are, and have at all times been, in compliance in all material respects with each legal requirement that is applicable to you or the ownership of your assets, relating to money laundering (including but not limited to applicable anti-money laundering regulations). + +4. 
You have been advised and acknowledge that the ThreeFold_Tokens or TFTs are NOT issued by any company or organization and are only registered in the TF BlockChain as an automatic action when a Farmer connects IT Capacity to the ThreeFold_Grid, as such they have not been registered under any country’s securities laws. + +5. You understand that the TFTs involve risks, all of which you fully and completely assume. You understand and expressly accept that the TFTs will be created and delivered to you at your sole risk on an "AS IS" basis. You understand and expressly accept that you have not relied on any representations or warranties made by the Company outside this Agreement, including, but not limited to, conversations of any kind, whether through oral or electronic communication, or any white paper. Without limiting the generality of the foregoing, you assume all risk and liability for the results obtained by any use of TFTs and regardless of any oral or written statements made by or on behalf of the Company, by way of technical advice or otherwise, related to the use of the TFTs. + +6. You have such knowledge and experience in technology, financial and business matters that you are capable of evaluating the merits and risks of such purchase of TFTs and corresponding IT Capacity and are able to bear the economic risk of such acquisition for an indefinite period of time. + +7. You are executing this Agreement for your own account, not as a nominee or agent. + +8. You understand that you have no right against the Company or any other person related to the ThreeFold project except in the event of the Company’s breach of this agreement or intentional fraud. + +9. You understand that you bear sole responsibility for any taxes as a result of the matters and transactions the subject of this Agreement, and any future acquisition, ownership, use, sale or other disposition of TFTs. 
To the extent permitted by law, you agree to indemnify, defend and hold the Company or any of its affiliates, employees or agents (including developers, auditors, contractors or founders) harmless for any claim, liability, assessment or penalty with respect to any taxes (other than any net income taxes of the Company) associated with or arising from your acquisition, use or ownership of TFTs hereunder. + +### 11. TERM AND TERMINATION + +This Agreement shall be effective as of the earliest date on which you accept it, either by clicking to signify acceptance, or by downloading, installing, running and/or using any TF Product. It will continue until terminated per the terms below. + +Either party may terminate this Agreement immediately at any time without notice to the other party. + +In case of termination, the User shall immediately cease using the TF Products. + +### 12. FEEDBACK + +Company shall have a royalty-free, worldwide, transferable, sublicensable, irrevocable, perpetual license to use or incorporate into the TF Products any suggestions, ideas, enhancement requests, feedback, recommendations or other information provided by Users relating to the features, functionality, or operation thereof ("**Feedback**"). Company shall have no obligation to use Feedback, and User shall have no obligation to provide Feedback. + +### 13. 
INDEMNIFICATION + +To the fullest extent permitted by applicable law, you will defend, indemnify and hold harmless Company and our respective past, present, and future employees, officers, directors, contractors, consultants, equity holders, suppliers, vendors, service providers, parent companies, subsidiaries, affiliates, agents, representatives, predecessors, successors and assigns (the "**Indemnified Parties**") from and against all claims, damages, costs and expenses (including attorneys’ fees) that arise from or relate to: (i) your use of the TF Products; (ii) any Feedback you provide; or (iii) your breach of this Agreement. + +Company reserves the right to exercise sole control over the defense of any claim subject to indemnification under the paragraph above, at your expense. This indemnity is in addition to, and not in lieu of, any other indemnities set forth in a written agreement between you and Company. + +If any TF Product becomes, or in Company’s reasonable judgment is likely to become, the subject of a claim of infringement, then Company may in its sole discretion: (a) obtain the right for User to continue using such TF Product; (b) provide a non-infringing functionally equivalent replacement; or (c) modify such TF Product so that it is no longer infringing. If Company, in its sole and reasonable judgment, determines that none of the above options are commercially reasonable, then Company may, without liability, suspend or terminate User’s use of the relevant TF Product. This Section 13 states Company’s sole liability and User’s exclusive remedy for infringement claims. + +### 14. DISCLAIMER AND LIMITATION OF LIABILITY + +The User hereby acknowledges the fact that he/she has been advised that TFTs may qualify as a security and that the offers and sales of TFTs have not been registered under any country’s securities laws and, therefore, cannot be resold except in compliance with the applicable country’s laws. 
+ +The User understands that the use of TFTs, the other TF Products and/or the ThreeFold_Grid involves risks, all of which the User fully and completely assumes, including, but not limited to, the risk that (i) the technology associated with the ThreeFold_Grid, 3Nodes, 3Bot and/or related TF Products will not function as intended; (ii) the Threefold project will not be completed; (iii) Threefold will fail to attract sufficient interest from key stakeholders; and (iv) ThreeFold or any related parties may be subject to investigation and punitive actions from governmental authorities. + +Except as explicitly set forth herein, Company makes no representations that the TF Products are appropriate for use in any jurisdictions. Users engaging with the TF Products from any jurisdictions do so at their own risk and are responsible for compliance with local laws. + +The User understands and expressly accepts that the TFTs, ThreeFold Connect App, the ThreeFold_Grid and other TF Products were created and made available to the User at its sole risk on an "AS IS" and “UNDER DEVELOPMENT” basis. + +COMPANY DISCLAIMS ALL REPRESENTATIONS AND WARRANTIES, EXPRESS AND IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, AND NON-INFRINGEMENT. COMPANY MAKES NO WARRANTY THAT THE TF PRODUCTS OR DOCUMENTATION WILL BE UNINTERRUPTED, ACCURATE, COMPLETE, RELIABLE, CURRENT, ERROR-FREE, VIRUS FREE, OR FREE OF MALICIOUS CODE OR HARMFUL COMPONENTS, OR THAT DEFECTS WILL BE CORRECTED. COMPANY DOES NOT CONTROL, ENDORSE, SPONSOR, OR ADOPT ANY CONTENT AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND REGARDING THE CONTENT STORED ON THE THREEFOLD GRID. COMPANY HAS NO OBLIGATION TO SCREEN, MONITOR, OR EDIT CONTENT AND IS NOT RESPONSIBLE OR LIABLE FOR ANY CONTENT. YOU ACKNOWLEDGE AND AGREE THAT COMPANY HAS NO INDEMNITY, SUPPORT, SERVICE LEVEL, OR OTHER OBLIGATIONS HEREUNDER. 
+ +User understands and expressly acknowledges that it has not relied on any representations or warranties made by the Company, TF Tech NV, Bettertoken NV, Kristof De Spiegeleer, any person or entity involved in the development or promotion of the TF Products and/or the ThreeFold project, or any related parties, including, but not limited to, conversations of any kind, whether through oral or electronic communication or otherwise, or any whitepapers or other documentation. + +WITHOUT LIMITING THE GENERALITY OF THE FOREGOING, THE USER ASSUMES ALL RISK AND LIABILITY FOR THE RESULTS OBTAINED BY THE USE OF THE TF PRODUCTS (INCLUDING THE THREEFOLD TOKENS) AND REGARDLESS OF ANY ORAL OR WRITTEN STATEMENTS MADE BY THREEFOLD, BY WAY OF TECHNICAL ADVICE OR OTHERWISE, RELATED TO THE USE THEREOF. + +COMPANY SHALL NOT BE LIABLE FOR ANY INCIDENTAL, CONSEQUENTIAL, PUNITIVE, SPECIAL, INDIRECT, OR EXEMPLARY DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS, REVENUE, DATA, OR DATA USE, OR DAMAGE TO BUSINESS) HOWEVER CAUSED, WHETHER BY BREACH OF WARRANTY, BREACH OF CONTRACT, IN TORT (INCLUDING NEGLIGENCE) OR ANY OTHER LEGAL OR EQUITABLE CAUSE OF ACTION, EVEN IF ADVISED OF SUCH DAMAGES IN ADVANCE OR IF SUCH DAMAGES WERE FORESEEABLE, AND COMPANY SHALL ONLY BE LIABLE FOR DIRECT DAMAGES CAUSED BY ITS GROSS NEGLIGENCE. IN NO EVENT WILL COMPANY’S TOTAL AGGREGATE LIABILITY ARISING FROM OR RELATING TO THIS AGREEMENT EXCEED ONE HUNDRED EURO (€ 100.00). + +### 15. 
RELEASE + +To the fullest extent permitted by applicable law, you hereby explicitly release (1) the TF Foundation (ThreeFold_Dubai), (2) each individual or entity acting as a Farmer, (3) TF Tech NV, (4) any of the companies or individuals related to these entities, and (5) any person contributing or otherwise assisting in developing, marketing or distributing the TF Products (hereinafter collectively referred to as "**ThreeFold Relatives**") from responsibility, liability, claims, demands and/or damages (actual and consequential) of every kind and nature, known and unknown (including, but not limited to, claims of negligence), arising out of or related to: + +- the acts or omissions of third parties; and + +- your purchase of TFTs (if any) from ThreeFold Relatives, regardless of the names or former names under which such TFTs may have been identified in the relevant contracts (e.g. ‘Internal ThreeFold_Tokens’, ‘iTFTs’, TFTs, etc.). + +If and to the extent you have purchased or otherwise acquired TFTs from ThreeFold Relatives that were identified in the relevant contracts as ‘Internal ThreeFold_Tokens’ or ‘iTFTs’, your use of the TF Products (including your subsequent receipt or acceptance of TFTs) implies your confirmation that such purchase or acquisition has been duly completed as a result of your receipt of a corresponding amount of TFTs, and that all deliverables under the relevant contracts (known as ‘iTFT Purchase Agreement’, ‘TFT Purchase Agreement’ or ‘ITO investment agreement’) have been duly delivered and that there are no further obligations from any ThreeFold Relatives to you in relation to such contracts. + +### 16. EXPORT COMPLIANCE + +The TF Products may be subject to export laws and regulations of the European Union, the United States and other jurisdictions. The User represents that it is not named on any E.U. or U.S. government denied-party list. The User shall not access or use the TF Products in an E.U. 
or U.S.-embargoed or any sanctioned country or region or in violation of any E.U. or U.S. export law or regulation. User shall not use the TF Products to export, re-export, transfer, or make available, whether directly or indirectly, any regulated item or information to anyone outside the E.U. or U.S. in connection with this Agreement without first complying with all export control laws and regulations that may be imposed by the European Union, any EU country or the U.S. Government and any country or organization of nations within whose jurisdiction the User operates or does business. + +### 17. ENTIRE AGREEMENT, SEVERABILITY, WAIVER + +1. This Agreement sets forth the complete and final agreement of the parties concerning the subject matter hereof, and supersedes and replaces all prior agreements, written and oral, between them concerning the subject matter hereof. If a term of this Agreement is held to be invalid or unenforceable, the remaining provisions will continue in full force and effect. A party’s consent to, or waiver of, enforcement of this Agreement on one occasion will not be deemed a waiver of any other provision or such provision on any other occasion. + +2. We reserve the right to change this Agreement from time to time in our sole discretion. If we make material changes to this Agreement, we will provide notice of such changes, such as by posting the revised User Terms and Conditions on our websites or in the ThreeFold Connect application. By continuing to access or use the TF Products or otherwise participate in the ThreeFold_Grid after the posted effective date of modifications to this Agreement, you agree to be bound by the revised version of this Agreement. If you do not agree to the modified Agreement, you must stop using and interacting with the TF Products. + +3. The parties are independent contractors. No agency, partnership, franchise, joint venture, or employment relationship is intended or created by this Agreement. 
Neither party has the power or authority to create or assume any obligation, or make any representations or warranties, on behalf of the other party. + +4. The User agrees that the Company may transfer and assign the Agreement in its sole discretion, provided a notice of such assignment is sent to the User within fifteen days of such assignment. + +5. Notices to Company made under this Agreement shall be made by email to legal@threefold.io AND in writing and delivered by registered mail (return receipt requested) or nationally-recognized overnight courier service to ThreeFold_Dubai, United Arab Emirates, attention Legal Department. You agree to receive electronically all communications, agreements, documents, notices, and disclosures that we provide in connection with the Software and/or the ThreeFold_Grid ("**Communications**"). We may provide Communications in a variety of ways, including by e-mail, text, in-app notifications, or by posting them on our websites. You agree that all Communications that we provide to you electronically satisfy any legal requirement that such communications be in writing. + +### 18. OPERATIONS OF THIRD-PARTY'S DIGITAL ASSET PROTOCOLS AND SERVICES + +1. TF Tech NV (“TF Tech”) does not provide (investment) advice regarding cryptocurrencies. You acknowledge that any information provided as part of TF Tech’s ThreeFold Connect App and/or ThreeFold Wallet (the “Platforms”) is not intended as a (personal) recommendation to buy, sell or hold (cryptocurrency) assets. All trading services offered on or through TF Tech’s Platforms are offered on the basis of “execution only”. Your orders are executed automatically by our systems. + +2. You hereby confirm that you are aware of and accept the risks associated with the purchase, sale and holding of cryptocurrencies and agree not to enter into transactions that can lead to losses that you cannot bear. + +3. 
TF Tech does not have control over the delivery, quality, legality, safety or any other aspects of any digital assets or services provided to you by third parties. + +4. TF Tech assumes no responsibility for the operation of the underlying software protocols which govern the operation of the cryptocurrencies other than the ThreeFold Token (‘TFT’) which may be displayed or referred to on its ‘software’ and services, including but not limited to, its Platforms. + +5. TF Tech NV does not own or control the underlying software protocols which govern the operation of third party’s digital assets supported on Platforms. Generally, the underlying protocols are controlled by third-party services. TF Tech NV assumes no responsibility for the operation of the underlying protocols and is not able to guarantee the functionality, availability, or security of network operations. In particular, the underlying protocols may be subject to sudden changes in operating rules (including “forks”). Any such material operating changes may materially affect the availability, value, functionality, and/or the name of the third party’s digital asset you store in TF Tech’s Platforms. + +6. TF Tech does not control the timing and features of these material operating changes. It is your responsibility to make yourself aware of upcoming operating changes and you must carefully consider publicly available information and information that may be provided by TF Tech NV in determining whether to continue to use the ThreeFold Connect App account or “Platforms” for the affected third-party’s digital asset. In the event of any such operational change, TF Tech NV reserves the right to take such steps as may be necessary to protect the security and safety of assets held on ThreeFold’s “Platforms”, including temporarily suspending operations, and other necessary steps. 
TF Tech NV will use its best efforts to provide you notice of its response to any material operating change; however, such changes are outside of TF Tech’s control and may occur without notice to TF Tech NV. TF Tech NV’s response to any material operating change is subject to its sole discretion and includes deciding not to support a third-party’s digital asset fork, or other actions. + +7. You acknowledge and accept the risks of operating changes to third-party’s digital asset protocols and agree that TF Tech NV is not responsible for such operating changes and not liable for any loss of value you may experience as a result of such changes in operating rules. You acknowledge and accept that TF Tech NV has sole discretion to determine its response to any operating change and that we have no responsibility to assist you with third-party’s digital assets, operational changes, unsupported currencies, or protocols. + +### 19. TAXES + +1. You shall be solely responsible for and shall pay (and shall indemnify TF Tech against any liability with respect to any failure by you to pay) all income taxes, value added taxes, goods and services taxes and any and all other taxes or sums due to national, federal, state or local governments (as the case may be) as a result of your use of the Platforms. + +2. You acknowledge that only you are responsible for the provision of information to tax authorities where such is required. Notwithstanding the above, upon request from a tax authority, TF Tech will provide information relating to you to the tax authorities. + +### 20. EXCLUSION OF LIABILITY + +1. 
For the avoidance of doubt and notwithstanding the generality of the liability limitations set out in the applicable terms and conditions governing your use of the Platforms and/or other software or services provided by TF Tech, no liability shall exist in any manner whatsoever for: + - differences in prices resulting from delayed processing of buy- or sell orders; + - cancellation of orders by reason of clearly misquoted prices; + - any damage incurred relating to the ThreeFold Wallet feature; + - any losses resulting from hacks, system failures and/or regulatory actions; and + - any indirect loss (including consequential loss, loss of income and profit, loss of data and non-material loss). + +2. Except in case of intentional misconduct by TF Tech, the liability of TF Tech in respect of the Platforms shall in all cases be limited to the amount paid by you for the Platforms during the month prior to the moment the cause of the damage occurred. + +### 21. GOVERNING LAW AND VENUE + +This Agreement will be governed by Luxembourg law. Any disputes shall be subject to the jurisdiction of the courts of Luxembourg, Grand Duchy of Luxembourg. + + +## APPENDIX + +{{#include threefold_companies0.md}} + +{{#include ./sub/the_single_source_truth.md}} \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_sales.md b/collections/manual_legal/terms_conditions/terms_conditions_sales.md new file mode 100644 index 0000000..643aa46 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_sales.md @@ -0,0 +1,92 @@ +# Terms & Conditions Sales + +These terms and conditions (the “Agreement”) constitute a legal agreement between you (“farmer”, “customer”, “you”, or “your”) and TF Tech NV, with registered office at Antwerpsesteenweg 19, B-9080 Lochristi, Belgium (company number KBO 0712.845.674) (“we”, “our” or the “Company”) regarding our sale of a service or product to you. 
+ +## DEFINITIONS AND RELATED TERMS + +The definitions which apply to this Agreement, unless stated otherwise herein, can be found [here](../definitions_legal.md), and are incorporated herein by reference. + + +## TF TECH GENERAL TERMS AND CONDITIONS OF SALE + +Unless explicitly stated otherwise in a specific agreement, these General Terms and Conditions of Sale, together with the Sales Order, represent the entire agreement (“Agreement”) between you or the entity you represent, and TF Tech NV, a Belgian limited liability company with registered office at Antwerpsesteenweg 19, B-9080 Lochristi, Belgium, registered with the Crossroads Bank of Enterprises under company number 0712.845.674 (RPR Gent, district Gent) (the “Company”). + +### 1. SALES ORDERS + +Company will issue a Sales Order (either electronically or otherwise in writing), and such Sales Order must be explicitly accepted by the Customer. In order to qualify as a Sales Order, the relevant document shall specify, without limitation, at least: + +- The identity of the Customer; +- The Deliverables; +- The Effective Date; +- The Acceptance Period; and +- The prices and fees to be paid in respect of the Deliverables. + +In addition, each Sales Order issued may set forth (i) the applicable quantities, (ii) the unit prices, (iii) the bill-to address, (iv) the site(s) where any Services are to be performed (if applicable), and (v) any additional special terms or instructions. + +### 2. PRICING AND PAYMENT + +The prices and fees for Deliverables shall be as set forth in the Sales Order. All prices will be displayed in the Customer’s local currency where they have chosen that option. In such case prices are displayed in the Customer’s local currency for information purposes only. The final contracted price shall be in EUR and may be subject to bank charges and/or currency exchange fees which will be borne by Customer. + +Company shall issue an invoice against each accepted Sales Order (the “Invoice”). 
The Invoice shall be made available either electronically, in writing or online through the Company’s electronic invoicing system (if any). + +Each Invoice shall automatically become due and payable on the date of issuance of the Sales Order. + +In the event that Customer fails to make any payment of undisputed amounts on or prior to the applicable invoice due date (as determined in accordance with this Section 2), then those undisputed amounts shall accrue interest from the due date at a rate of eight percent (8%) per annum (or such lesser rate as may be the maximum permissible rate allowed under applicable law), calculated from the first day when such amount became due and owing until the date on which such amount is paid. The Company’s right to claim additional damages shall remain unaffected. + +Customer shall pay all federal, state or local sales or use taxes and any other government taxes, fees, duties or charges that are imposed upon the fees and charges paid by Customer to Company pursuant to this Agreement. Company will be responsible for all other taxes arising from the transactions contemplated by this Agreement, including, without limitation, any taxes based upon Company’s property, net income or gross receipts. Customer shall pay all such amounts directly to the taxing authority unless the taxing authority requires that Company collect and remit payment, in which event Customer shall pay such amounts to Company and Company shall remit such amounts to the authority and provide Customer with a certificate stating that such amounts were so remitted. Customer and Company shall reasonably cooperate in order to take actions to minimize, or to qualify for exemptions from, any applicable taxes, duties or tariffs. Such cooperation shall include, without limitation, the furnishing of certifications that purchases by Customer are for purposes of resale, if applicable, and must be used in accordance with any local and international laws. 
Customer and Company shall each have the right to protest or appeal any tax or charge assessed against it by any taxing authority with respect to the subject matter of this Agreement. + +### 3. DELIVERY AND ACCEPTANCE + +Delivery. Unless otherwise agreed, prices and delivery are Ex Works (INCOTERMS 2010) from the production premises of the Company in Lochristi (Belgium) or the third party designated in the Sales Order. Any charges Company may be required to pay or collect on the sale, purchase, delivery, storage, use or transportation of the goods shall be paid by Customer. + + +Export regulations. Customer is responsible for complying with all applicable export and/or re-export restrictions and regulations. + +Title; Risk of Loss. Unless otherwise agreed, the risk of loss passes to Customer on the date when the goods are delivered to the carrier, as described in INCOTERMS 2010 (the “Delivery Date”). Where the risk of loss has passed to Customer, Customer must obtain redress for freight losses, shortages or damages from the carrier or its insurer. Company is not responsible for any such losses. Notwithstanding any provision of INCOTERMS 2010 or contained herein, equitable title and accession to the goods shall, where permitted by law, remain with Company until Customer has paid in full. This shall be the case even if legal title to the goods shall be deemed by law to have passed to Customer at the time of delivery and prior to performance of all of Company’s obligations. Customer shall grant, and by acceptance of the goods is deemed to have granted, to Company a first security interest in all goods to secure payment of amounts owed by Customer. In certain circumstances, for instance for very large orders, Customer agrees to execute a financing statement at Company's request. Company may reclaim any goods delivered or in transit if Customer fails to make payment when due. + +Inspection and Acceptance. 
Customer will evaluate any Deliverable that has been delivered to Customer or performed in accordance with this Agreement to determine whether it complies with all applicable Specifications. Customer shall give Company written notice of Acceptance or Rejection within the Acceptance Period. +Upon notice of Rejection, Customer may: +- immediately return to Company the relevant Deliverables, provided such Deliverables are in good working condition and without damage, in which case the Invoice will be cancelled through a credit note; +- direct Company to correct the nonconformity, in which case Company (at no cost to Customer) shall correct the nonconformity within thirty (30) days of Customer’s request; or +- upon mutual agreement of the Parties, pay Company a reduced amount for the nonconforming item (in which case the Invoice will be partially cancelled accordingly through a credit note). +Deliverables provided to Customer will be deemed accepted: +- in the absence of any notice of Rejection within or at the expiration of the Acceptance Period; or +- in the absence of any return of the Deliverables within the Acceptance Period. + +### 4. USAGE RESTRICTIONS + +Customer acknowledges that any use or purchase of the Hardware or Services for fraudulent or illegal purposes, or any purchase of Hardware or Services in a fraudulent manner, will irrevocably invalidate any Agreement between the Customer and Company and may lead to prosecution. + +### 5. WARRANTIES + +Hardware Warranty by Company to Customer. The Hardware supplied by Company pursuant to this Agreement is manufactured and/or developed by third party vendors and will carry the warranties specified by the applicable third party vendor, which warranties Company shall extend to Customer to the full extent permissible under such warranties or as provided under statutory law. + +Services Warranties. 
Company warrants that all Services performed hereunder shall be performed in a timely, professional and workmanlike manner, in conformance with industry practices, and Company warrants the workmanship of such Services for a period of ninety (90) days from the date on which the applicable Services are provided. + +Eligibility. Any warranties shall be invalid and the Company shall have no responsibility or liability whatsoever for any Hardware or Software, or part thereof, that (a) has had the Serial Number, Model Number, or other identification markings altered, removed or rendered illegible; (b) has been damaged by or subject to improper installation or operation, misuse, accident or neglect; (c) has become defective or inoperative due to its integration or assembly with any equipment or products not supplied by Company; (d) has been repaired, modified or otherwise altered by anyone other than Company and/or has been subject to the opening of the Hardware without Company’s prior written consent; or (e) has had any item removed from the Hardware, including any storage device such as USB drives. If any warranty claim by Customer falls within any of the foregoing exceptions, Customer shall pay Company its then current rates and charges for such services. + +Termination. Cancellation or termination of this Agreement by either Company or Customer shall void this warranty. + +Remedies and repair. Company’s liability and responsibility under this warranty is limited to the obligation, at Company’s option, to either repair or replace the relevant Deliverables. In the event that after repeated efforts Company is unable to repair or replace a defective Deliverable, then Customer’s exclusive remedy and Company’s entire liability in contract, tort, or otherwise shall be the payment by Company of Customer's actual damages after mitigation, but shall not exceed the purchase price or fee actually paid by Customer for the relevant defective Deliverable. 
+The Company shall have no obligation to repair, replace, or refund the relevant Deliverable until the Customer returns the defective Deliverable to the Company. Before returning any Deliverable to the Company, the Customer must contact the Company for a return authorization and other appropriate instructions. +The Company warrants a repaired Deliverable only for the unexpired term of the original warranty for the defective Deliverable. The Company warrants parts exchanged in connection with a repair only for the unexpired term of the original warranty for the defective Deliverable. + +Warranty Procedure. The Customer shall notify the Company immediately in writing of any obvious or potential defects in the Deliverables, following the acceptance thereof, as soon as such defects have been discovered in the ordinary course of business within the aforementioned warranty terms. Company shall only remedy defective Deliverables under these Warranty provisions provided the defects are notified to Company within the relevant Warranty period. + +Disclaimer. THE ABOVE WARRANTY IS IN LIEU OF ALL OTHER WARRANTIES, EXPRESS OR IMPLIED, INCLUDING THOSE OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF WHICH ARE EXPRESSLY DISCLAIMED. COMPANY SHALL NOT BE LIABLE FOR ANY CONSEQUENTIAL, INCIDENTAL, SPECIAL, OR EXEMPLARY DAMAGES (WHETHER FOR LOSS OF PROFIT, LOSS OF BUSINESS, LOSS OF OPPORTUNITY, MISSED SAVINGS, DEPLETION OF GOODWILL, RECALL, DISMANTLING OR OTHERWISE), EVEN IF IT HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. + +Limitations. 
To the fullest extent permitted by law, Company’s total and aggregate liability in respect to direct damages, whether in contract or tort (including negligence or breach of statutory duty), for each respective breach or series of related breaches or any and all losses, shall not exceed in the aggregate (i) the actual amount paid for the specific Deliverables giving rise to the claim; or (ii) EUR 10,000, whichever amount is lower. The existence of one or more claims under this Agreement shall not enlarge the limit. +The limitations and exclusions referred to in this clause will not apply in the event the liability results from the Company’s deliberate intent (or that of its subordinates or assistants). +The limitations and exclusions of liability, as well as indemnity stipulated for Company itself in the above paragraphs are also stipulated for and on behalf of its directors, employees, agents and other intermediaries and/or any other person employed by it or delivering services to it within the framework of the Agreement. + +### 6. MISCELLANEOUS + +Severability. If any provision of this Agreement will be held to be invalid or unenforceable for any reason, the remaining provisions will continue to be valid and enforceable. If a court finds that any provision of this Agreement is invalid or unenforceable, but that by limiting such provision, it would become valid and enforceable, then such provision will be deemed to be written, construed, and enforced as so limited. + +Governing Law. All disputes will be governed by the laws of Belgium. The venue for litigation will be the appropriate courts of Ghent, Belgium. Choice of law rules of any jurisdiction and the United Nations Convention on Contracts for the International Sale of Goods will not apply to any dispute. + +Changes. TF Tech NV reserves the right to vary these T&Cs at any time. 
If the T&Cs change, TF Tech NV will inform customers by email, using the contact email provided at the time of purchase or any email subsequently provided by the customer as their main email. The customer agrees to ensure that such email address is always up to date and monitored. Any variations to the T&Cs will be deemed to have been accepted unless TF Tech NV is informed to the contrary. + +## APPENDIX + +{{#include ./sub/the_single_source_truth.md}} \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_toc.md b/collections/manual_legal/terms_conditions/terms_conditions_toc.md new file mode 100644 index 0000000..de9638f --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_toc.md @@ -0,0 +1,9 @@ +# Terms & Conditions + +

Table of Contents

+ +- [Terms & Conditions ThreeFold Related Websites](./terms_conditions_websites.md) +- [Terms & Conditions TFGrid Users TFGrid 3](./terms_conditions_griduser.md) + - [TFTA to TFT](./tfta_to_tft.md) +- [Terms & Conditions TFGrid Farmers TFGrid 3](./terms_conditions_farmer3.md) +- [Terms & Conditions Sales](./terms_conditions_sales.md) \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites.md b/collections/manual_legal/terms_conditions/terms_conditions_websites.md new file mode 100644 index 0000000..7eaa7c7 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites.md @@ -0,0 +1,28 @@ +**TERMS OF USE** + +{{#include ./terms_conditions_websites/part_0_agreement_terms.md}} +{{#include ./terms_conditions_websites/part_1_ip_rights.md}} +{{#include ./terms_conditions_websites/part_2_user_representations.md}} +{{#include ./terms_conditions_websites/part_3_user_registration.md}} +{{#include ./terms_conditions_websites/part_4_prohibited_activities.md}} +{{#include ./terms_conditions_websites/part_5_user_generated_contributions.md}} +{{#include ./terms_conditions_websites/part_6_contribution_license.md}} +{{#include ./terms_conditions_websites/part_7_social_media.md}} +{{#include ./terms_conditions_websites/part_8_submission.md}} +{{#include ./terms_conditions_websites/part_9_thirdparty_websites_content.md}} +{{#include ./terms_conditions_websites/part_10_site_management.md}} +{{#include ./terms_conditions_websites/part_11_privacy_policy.md}} +{{#include ./terms_conditions_websites/part_12_dispute_resolution.md}} +{{#include ./terms_conditions_websites/part_13_disclaimer.md}} +{{#include ./terms_conditions_websites/part_14_limitations_liability.md}} +{{#include ./terms_conditions_websites/part_15_indemnification.md}} +{{#include ./terms_conditions_websites/part_16_user_data.md}} +{{#include ./terms_conditions_websites/part_17_electronic_comms_transactions_signatures.md}} +{{#include 
./terms_conditions_websites/part_18_miscellaneous.md}} +{{#include ./terms_conditions_websites/part_19_contact_us.md}} + +## APPENDIX + +{{#include threefold_companies0.md}} + +{{#include ./sub/the_single_source_truth.md}} \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_0_agreement_terms.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_0_agreement_terms.md new file mode 100644 index 0000000..93cf2e5 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_0_agreement_terms.md @@ -0,0 +1,15 @@ +**AGREEMENT TO TERMS** + + These Terms of Use constitute a legally binding agreement made between you, whether personally or on behalf of an entity ("you") and ThreeFold, doing business as ThreeFold ("**ThreeFold**", “**we**”, “**us**”, or “**our**”), concerning your access to and use of the threefold related websites: + +{{#include ../sub/websites.md}} + +as well as any other media form, media channel, forum, mobile website or mobile application related, linked, or otherwise connected thereto (collectively, the “Site”). + +You agree that by accessing the Site, you have read, understood, and agreed to be bound by all of these Terms of Use. IF YOU DO NOT AGREE WITH ALL OF THESE TERMS OF USE, THEN YOU ARE EXPRESSLY PROHIBITED FROM USING THE SITE AND YOU MUST DISCONTINUE USE IMMEDIATELY. + +Supplemental terms and conditions or documents that may be posted on the Site from time to time are hereby expressly incorporated herein by reference. We reserve the right, in our sole discretion, to make changes or modifications to these Terms of Use at any time and for any reason. We will alert you about any changes by updating the "Last updated" date of these Terms of Use, and you waive any right to receive specific notice of each such change. It is your responsibility to periodically review these Terms of Use to stay informed of updates. 
You will be subject to, and will be deemed to have been made aware of and to have accepted, the changes in any revised Terms of Use by your continued use of the Site after the date such revised Terms of Use are posted. + +The information provided on the Site is not intended for distribution to or use by any person or entity in any jurisdiction or country where such distribution or use would be contrary to law or regulation or which would subject us to any registration requirement within such jurisdiction or country. Accordingly, those persons who choose to access the Site from other locations do so on their own initiative and are solely responsible for compliance with local laws, if and to the extent local laws are applicable. + +The Site is intended for users who are at least 18 years old. Persons under the age of 18 are not permitted to use or register for the Site. \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_10_site_management.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_10_site_management.md new file mode 100644 index 0000000..1b12bf6 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_10_site_management.md @@ -0,0 +1,3 @@ +**SITE MANAGEMENT** + +We reserve the right, but not the obligation, to: (1) monitor the Site for violations of these Terms of Use; (2) take appropriate legal action against anyone who, in our sole discretion, violates the law or these Terms of Use, including without limitation, reporting such user to law enforcement authorities; (3) in our sole discretion and without limitation, refuse, restrict access to, limit the availability of, or disable (to the extent technologically feasible) any of your Contributions or any portion thereof; (4) in our sole discretion and without limitation, notice, or liability, to remove from the Site or otherwise disable all files and content that are excessive in size or 
are in any way burdensome to our systems; and (5) otherwise manage the Site in a manner designed to protect our rights and property and to facilitate the proper functioning of the Site. \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_11_privacy_policy.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_11_privacy_policy.md new file mode 100644 index 0000000..5db0439 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_11_privacy_policy.md @@ -0,0 +1,22 @@ +**PRIVACY POLICY** + +We care about data privacy and security. Please review our Privacy Policy: privacypolicy . + +By using the Site, you agree to be bound by our Privacy Policy, which is incorporated into these Terms of Use. Please be advised the Site is hosted in the United States. If you access the Site from the European Union, Asia, or any other region of the world with laws or other requirements governing personal data collection, use, or disclosure that differ from applicable laws in the United States, then through your continued use of the Site, you are transferring your data to the United States, and you expressly consent to have your data transferred to and processed in the United States. Further, we do not knowingly accept, request, or solicit information from children or knowingly market to children. Therefore, in accordance with the U.S. Children’s Online Privacy Protection Act, if we receive actual knowledge that anyone under the age of 13 has provided personal information to us without the requisite and verifiable parental consent, we will delete that information from the Site as quickly as is reasonably practical. + +**TERM AND TERMINATION** + +These Terms of Use shall remain in full force and effect while you use the Site. 
WITHOUT LIMITING ANY OTHER PROVISION OF THESE TERMS OF USE, WE RESERVE THE RIGHT TO, IN OUR SOLE DISCRETION AND WITHOUT NOTICE OR LIABILITY, DENY ACCESS TO AND USE OF THE SITE (INCLUDING BLOCKING CERTAIN IP ADDRESSES), TO ANY PERSON FOR ANY REASON OR FOR NO REASON, INCLUDING WITHOUT LIMITATION FOR BREACH OF ANY REPRESENTATION, WARRANTY, OR COVENANT CONTAINED IN THESE TERMS OF USE OR OF ANY APPLICABLE LAW OR REGULATION. WE MAY TERMINATE YOUR USE OR PARTICIPATION IN THE SITE OR DELETE YOUR ACCOUNT AND ANY CONTENT OR INFORMATION THAT YOU POSTED AT ANY TIME, WITHOUT WARNING, IN OUR SOLE DISCRETION. + + If we terminate or suspend your account for any reason, you are prohibited from registering and creating a new account under your name, a fake or borrowed name, or the name of any third party, even if you may be acting on behalf of the third party. In addition to terminating or suspending your account, we reserve the right to take appropriate legal action, including without limitation pursuing civil, criminal, and injunctive redress. + +**MODIFICATIONS AND INTERRUPTIONS** + + We reserve the right to change, modify, or remove the contents of the Site at any time or for any reason at our sole discretion without notice. However, we have no obligation to update any information on our Site. We also reserve the right to modify or discontinue all or part of the Site without notice at any time. We will not be liable to you or any third party for any modification, price change, suspension, or discontinuance of the Site. + + +We cannot guarantee the Site will be available at all times. We may experience hardware, software, or other problems or need to perform maintenance related to the Site, resulting in interruptions, delays, or errors. We reserve the right to change, revise, update, suspend, discontinue, or otherwise modify the Site at any time or for any reason without notice to you. 
You agree that we have no liability whatsoever for any loss, damage, or inconvenience caused by your inability to access or use the Site during any downtime or discontinuance of the Site. Nothing in these Terms of Use will be construed to obligate us to maintain and support the Site or to supply any corrections, updates, or releases in connection therewith. + +**GOVERNING LAW** + +These Terms of Use and your use of the Site are governed by and construed in accordance with the laws of Belgium, without regard to its conflict of law principles. \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_12_dispute_resolution.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_12_dispute_resolution.md new file mode 100644 index 0000000..0483b50 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_12_dispute_resolution.md @@ -0,0 +1,7 @@ +**DISPUTE RESOLUTION** + +Any legal action of whatever nature brought by either you or us (collectively, the "Parties" and individually, a “Party”) shall be commenced or prosecuted in courts located in Ghent, Belgium, and the Parties hereby consent to, and waive all defenses of lack of personal jurisdiction and forum non conveniens with respect to, venue and jurisdiction in such courts. Application of the United Nations Convention on Contracts for the International Sale of Goods and the Uniform Computer Information Transactions Act (UCITA) is excluded from these Terms of Use. In no event shall any claim, action, or proceeding brought by either Party related in any way to the Site be commenced more than one (1) year after the cause of action arose. + +**CORRECTIONS** + +There may be information on the Site that contains typographical errors, inaccuracies, or omissions, including descriptions, pricing, availability, and various other information. 
We reserve the right to correct any errors, inaccuracies, or omissions and to change or update the information on the Site at any time, without prior notice. \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_13_disclaimer.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_13_disclaimer.md new file mode 100644 index 0000000..19472d0 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_13_disclaimer.md @@ -0,0 +1,3 @@ +**DISCLAIMER** + +THE SITE IS PROVIDED ON AN AS-IS AND AS-AVAILABLE BASIS. YOU AGREE THAT YOUR USE OF THE SITE AND OUR SERVICES WILL BE AT YOUR SOLE RISK. TO THE FULLEST EXTENT PERMITTED BY LAW, WE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, IN CONNECTION WITH THE SITE AND YOUR USE THEREOF, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. WE MAKE NO WARRANTIES OR REPRESENTATIONS ABOUT THE ACCURACY OR COMPLETENESS OF THE SITE’S CONTENT OR THE CONTENT OF ANY WEBSITES LINKED TO THE SITE AND WE WILL ASSUME NO LIABILITY OR RESPONSIBILITY FOR ANY (1) ERRORS, MISTAKES, OR INACCURACIES OF CONTENT AND MATERIALS, (2) PERSONAL INJURY OR PROPERTY DAMAGE, OF ANY NATURE WHATSOEVER, RESULTING FROM YOUR ACCESS TO AND USE OF THE SITE, (3) ANY UNAUTHORIZED ACCESS TO OR USE OF OUR SECURE SERVERS AND/OR ANY AND ALL PERSONAL INFORMATION AND/OR FINANCIAL INFORMATION STORED THEREIN, (4) ANY INTERRUPTION OR CESSATION OF TRANSMISSION TO OR FROM THE SITE, (5) ANY BUGS, VIRUSES, TROJAN HORSES, OR THE LIKE WHICH MAY BE TRANSMITTED TO OR THROUGH THE SITE BY ANY THIRD PARTY, AND/OR (6) ANY ERRORS OR OMISSIONS IN ANY CONTENT AND MATERIALS OR FOR ANY LOSS OR DAMAGE OF ANY KIND INCURRED AS A RESULT OF THE USE OF ANY CONTENT POSTED, TRANSMITTED, OR OTHERWISE MADE AVAILABLE VIA THE SITE. 
WE DO NOT WARRANT, ENDORSE, GUARANTEE, OR ASSUME RESPONSIBILITY FOR ANY PRODUCT OR SERVICE ADVERTISED OR OFFERED BY A THIRD PARTY THROUGH THE SITE, ANY HYPERLINKED WEBSITE, OR ANY WEBSITE OR MOBILE APPLICATION FEATURED IN ANY BANNER OR OTHER ADVERTISING, AND WE WILL NOT BE A PARTY TO OR IN ANY WAY BE RESPONSIBLE FOR MONITORING ANY TRANSACTION BETWEEN YOU AND ANY THIRD-PARTY PROVIDERS OF PRODUCTS OR SERVICES. AS WITH THE PURCHASE OF A PRODUCT OR SERVICE THROUGH ANY MEDIUM OR IN ANY ENVIRONMENT, YOU SHOULD USE YOUR BEST JUDGMENT AND EXERCISE CAUTION WHERE APPROPRIATE. \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_14_limitations_liability.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_14_limitations_liability.md new file mode 100644 index 0000000..9adccbf --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_14_limitations_liability.md @@ -0,0 +1,3 @@ +**LIMITATIONS OF LIABILITY** + + IN NO EVENT WILL WE OR OUR DIRECTORS, EMPLOYEES, OR AGENTS BE LIABLE TO YOU OR ANY THIRD PARTY FOR ANY DIRECT, INDIRECT, CONSEQUENTIAL, EXEMPLARY, INCIDENTAL, SPECIAL, OR PUNITIVE DAMAGES, INCLUDING LOST PROFIT, LOST REVENUE, LOSS OF DATA, OR OTHER DAMAGES ARISING FROM YOUR USE OF THE SITE, EVEN IF WE HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. NOTWITHSTANDING ANYTHING TO THE CONTRARY CONTAINED HEREIN, OUR LIABILITY TO YOU FOR ANY CAUSE WHATSOEVER AND REGARDLESS OF THE FORM OF THE ACTION, WILL AT ALL TIMES BE LIMITED TO $1.00 USD. CERTAIN STATE LAWS DO NOT ALLOW LIMITATIONS ON IMPLIED WARRANTIES OR THE EXCLUSION OR LIMITATION OF CERTAIN DAMAGES. IF THESE LAWS APPLY TO YOU, SOME OR ALL OF THE ABOVE DISCLAIMERS OR LIMITATIONS MAY NOT APPLY TO YOU, AND YOU MAY HAVE ADDITIONAL RIGHTS. 
\ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_15_indemnification.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_15_indemnification.md new file mode 100644 index 0000000..447c022 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_15_indemnification.md @@ -0,0 +1,3 @@ +**INDEMNIFICATION** + +You agree to defend, indemnify, and hold us harmless, including our subsidiaries, affiliates, and all of our respective officers, agents, partners, and employees, from and against any loss, damage, liability, claim, or demand, including reasonable attorneys’ fees and expenses, made by any third party due to or arising out of: (1) your Contributions; (2) use of the Site; (3) breach of these Terms of Use; (4) any breach of your representations and warranties set forth in these Terms of Use; (5) your violation of the rights of a third party, including but not limited to intellectual property rights; or (6) any overt harmful act toward any other user of the Site with whom you connected via the Site. Notwithstanding the foregoing, we reserve the right, at your expense, to assume the exclusive defense and control of any matter for which you are required to indemnify us, and you agree to cooperate, at your expense, with our defense of such claims. We will use reasonable efforts to notify you of any such claim, action, or proceeding which is subject to this indemnification upon becoming aware of it. 
\ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_16_user_data.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_16_user_data.md new file mode 100644 index 0000000..80ecde7 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_16_user_data.md @@ -0,0 +1,3 @@ +**USER DATA** + +We will maintain certain data that you transmit to the Site for the purpose of managing the performance of the Site, as well as data relating to your use of the Site. Although we perform regular routine backups of data, you are solely responsible for all data that you transmit or that relates to any activity you have undertaken using the Site. You agree that we shall have no liability to you for any loss or corruption of any such data, and you hereby waive any right of action against us arising from any such loss or corruption of such data. diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_17_electronic_comms_transactions_signatures.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_17_electronic_comms_transactions_signatures.md new file mode 100644 index 0000000..da54ef8 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_17_electronic_comms_transactions_signatures.md @@ -0,0 +1,3 @@ +**ELECTRONIC COMMUNICATIONS, TRANSACTIONS, AND SIGNATURES** + +Visiting the Site, sending us emails, and completing online forms constitute electronic communications. You consent to receive electronic communications, and you agree that all agreements, notices, disclosures, and other communications we provide to you electronically, via email and on the Site, satisfy any legal requirement that such communication be in writing. 
YOU HEREBY AGREE TO THE USE OF ELECTRONIC SIGNATURES, CONTRACTS, ORDERS, AND OTHER RECORDS, AND TO ELECTRONIC DELIVERY OF NOTICES, POLICIES, AND RECORDS OF TRANSACTIONS INITIATED OR COMPLETED BY US OR VIA THE SITE. You hereby waive any rights or requirements under any statutes, regulations, rules, ordinances, or other laws in any jurisdiction which require an original signature or delivery or retention of non-electronic records, or to payments or the granting of credits by any means other than electronic means. \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_18_miscellaneous.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_18_miscellaneous.md new file mode 100644 index 0000000..633bbe3 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_18_miscellaneous.md @@ -0,0 +1,3 @@ +**MISCELLANEOUS** + + These Terms of Use and any policies or operating rules posted by us on the Site or in respect to the Site constitute the entire agreement and understanding between you and us. Our failure to exercise or enforce any right or provision of these Terms of Use shall not operate as a waiver of such right or provision. These Terms of Use operate to the fullest extent permissible by law. We may assign any or all of our rights and obligations to others at any time. We shall not be responsible or liable for any loss, damage, delay, or failure to act caused by any cause beyond our reasonable control. If any provision or part of a provision of these Terms of Use is determined to be unlawful, void, or unenforceable, that provision or part of the provision is deemed severable from these Terms of Use and does not affect the validity and enforceability of any remaining provisions. There is no joint venture, partnership, employment or agency relationship created between you and us as a result of these Terms of Use or use of the Site. 
You agree that these Terms of Use will not be construed against us by virtue of having drafted them. You hereby waive any and all defenses you may have based on the electronic form of these Terms of Use and the lack of signing by the parties hereto to execute these Terms of Use. \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_19_contact_us.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_19_contact_us.md new file mode 100644 index 0000000..82f7d36 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_19_contact_us.md @@ -0,0 +1,9 @@ +**CONTACT US** + + In order to resolve a complaint regarding the Site or to receive further information regarding use of the Site, please contact us at: + + **ThreeFold FCZ** + +BA1120 DMCC BUSINESS CENTRE, LEVEL NO 1, JEWELLERY & GEMPLEX 3, DUBAI, UNITED ARAB EMIRATES + +info@threefold.io \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_1_ip_rights.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_1_ip_rights.md new file mode 100644 index 0000000..4a56c39 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_1_ip_rights.md @@ -0,0 +1,5 @@ +**INTELLECTUAL PROPERTY RIGHTS** + + Unless otherwise indicated, the Site is our proprietary property and all source code, databases, functionality, software, website designs, audio, video, text, photographs, and graphics on the Site (collectively, the "Content") and the trademarks, service marks, and logos contained therein (the “Marks”) are owned or controlled by us or licensed to us, and are protected by copyright and trademark laws and various other intellectual property rights and competition laws of the EU, foreign jurisdictions, and international conventions. 
The Content and the Marks are provided on the Site “AS IS” for your information and personal use only. Except as expressly provided in these Terms of Use, no part of the Site and no Content or Marks may be copied, reproduced, aggregated, republished, uploaded, posted, publicly displayed, encoded, translated, transmitted, distributed, sold, licensed, or otherwise exploited for any commercial purpose whatsoever, without our express prior written permission. + +Provided that you are eligible to use the Site, you are granted a limited license to access and use the Site and to download or print a copy of any portion of the Content to which you have properly gained access solely for your personal, non-commercial use. We reserve all rights not expressly granted to you in and to the Site, the Content and the Marks. \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_2_user_representations.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_2_user_representations.md new file mode 100644 index 0000000..281bcc0 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_2_user_representations.md @@ -0,0 +1,5 @@ +**USER REPRESENTATIONS** + + By using the Site, you represent and warrant that: (1) all registration information you submit will be true, accurate, current, and complete; (2) you will maintain the accuracy of such information and promptly update such registration information as necessary; (3) you have the legal capacity and you agree to comply with these Terms of Use; (4) you are not a minor in the jurisdiction in which you reside; (5) you will not access the Site through automated or non-human means, whether through a bot, script, or otherwise; (6) you will not use the Site for any illegal or unauthorized purpose; and (7) your use of the Site will not violate any applicable law or regulation. 
+ +If you provide any information that is untrue, inaccurate, not current, or incomplete, we have the right to suspend or terminate your account and refuse any and all current or future use of the Site (or any portion thereof). \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_3_user_registration.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_3_user_registration.md new file mode 100644 index 0000000..24f5115 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_3_user_registration.md @@ -0,0 +1,3 @@ +**USER REGISTRATION** + +You may be required to register with the Site. You agree to keep your password confidential and will be responsible for all use of your account and password. We reserve the right to remove, reclaim, or change a username you select if we determine, in our sole discretion, that such username is inappropriate, obscene, or otherwise objectionable. diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_4_prohibited_activities.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_4_prohibited_activities.md new file mode 100644 index 0000000..dc86d3f --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_4_prohibited_activities.md @@ -0,0 +1,18 @@ +**PROHIBITED ACTIVITIES** + +You may not access or use the Site for any purpose other than that for which we make the Site available. The Site may not be used in connection with any commercial endeavors except those that are specifically endorsed or approved by us. + + As a user of the Site, you agree not to: + +1. Systematically retrieve data or other content from the Site to create or compile, directly or indirectly, a collection, compilation, database, or directory without written permission from us. +2. 
Circumvent, disable, or otherwise interfere with security-related features of the Site, including features that prevent or restrict the use or copying of any Content or enforce limitations on the use of the Site and/or the Content contained therein. +3. Engage in unauthorized framing of or linking to the Site. +4. Trick, defraud, or mislead us and other users, especially in any attempt to learn sensitive account information such as user passwords. +5. Engage in any automated use of the system, such as using scripts to send comments or messages, or using any data mining, robots, or similar data gathering and extraction tools. +6. Interfere with, disrupt, or create an undue burden on the Site or the networks or services connected to the Site. +7. Use the Site as part of any effort to compete with us or otherwise use the Site and/or the Content for any revenue-generating endeavor or commercial enterprise. +8. Decipher, decompile, disassemble, or reverse engineer any of the software comprising or in any way making up a part of the Site. +9. Upload or transmit (or attempt to upload or to transmit) viruses, Trojan horses, or other material, including excessive use of capital letters and spamming (continuous posting of repetitive text), that interferes with any party’s uninterrupted use and enjoyment of the Site or modifies, impairs, disrupts, alters, or interferes with the use, features, functions, operation, or maintenance of the Site. +10. Upload or transmit (or attempt to upload or to transmit) any material that acts as a passive or active information collection or transmission mechanism, including without limitation, clear graphics interchange formats ("gifs"), 1×1 pixels, web bugs, cookies, or other similar devices (sometimes referred to as “spyware” or “passive collection mechanisms” or “pcms”). +11. 
Except as may be the result of standard search engine or Internet browser usage, use, launch, develop, or distribute any automated system, including without limitation, any spider, robot, cheat utility, scraper, or offline reader that accesses the Site, or use or launch any unauthorized script or other software. +12. Use the Site in a manner inconsistent with any applicable laws or regulations. diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_5_user_generated_contributions.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_5_user_generated_contributions.md new file mode 100644 index 0000000..15b00c4 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_5_user_generated_contributions.md @@ -0,0 +1,22 @@ +**USER GENERATED CONTRIBUTIONS** + +The Site may invite you to chat, contribute to, or participate in blogs, message boards, online forums, and other functionality, and may provide you with the opportunity to create, submit, post, display, transmit, perform, publish, distribute, or broadcast content and materials to us or on the Site, including but not limited to text, writings, video, audio, photographs, graphics, comments, suggestions, or personal information or other material (collectively, "Contributions"). Contributions may be viewable by other users of the Site and through third-party websites. As such, any Contributions you transmit may be treated as non-confidential and non-proprietary. When you create or make available any Contributions, you thereby represent and warrant that: + + + +1. The creation, distribution, transmission, public display, or performance, and the accessing, downloading, or copying of your Contributions do not and will not infringe the proprietary rights, including but not limited to the copyright, patent, trademark, trade secret, or moral rights of any third party. +2. 
You are the creator and owner of or have the necessary licenses, rights, consents, releases, and permissions to use and to authorize us, the Site, and other users of the Site to use your Contributions in any manner contemplated by the Site and these Terms of Use. +3. You have the written consent, release, and/or permission of each and every identifiable individual person in your Contributions to use the name or likeness of each and every such identifiable individual person to enable inclusion and use of your Contributions in any manner contemplated by the Site and these Terms of Use. +4. Your Contributions are not false, inaccurate, or misleading. +5. Your Contributions are not unsolicited or unauthorized advertising, promotional materials, pyramid schemes, chain letters, spam, mass mailings, or other forms of solicitation. +6. Your Contributions are not obscene, lewd, lascivious, filthy, violent, harassing, libelous, slanderous, or otherwise objectionable (as determined by us). +7. Your Contributions do not ridicule, mock, disparage, intimidate, or abuse anyone. +8. Your Contributions do not advocate the violent overthrow of any government or incite, encourage, or threaten physical harm against another. +9. Your Contributions do not violate any applicable law, regulation, or rule. +10. Your Contributions do not violate the privacy or publicity rights of any third party. +11. Your Contributions do not contain any material that solicits personal information from anyone under the age of 18 or exploits people under the age of 18 in a sexual or violent manner. +12. Your Contributions do not violate any federal or state law concerning child pornography or any law otherwise intended to protect the health or well-being of minors. +13. Your Contributions do not include any offensive comments that are connected to race, national origin, gender, sexual preference, or physical handicap. +14. 
Your Contributions do not otherwise violate, or link to material that violates, any provision of these Terms of Use, or any applicable law or regulation. + +Any use of the Site in violation of the foregoing violates these Terms of Use and may result in, among other things, termination or suspension of your rights to use the Site. \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_6_contribution_license.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_6_contribution_license.md new file mode 100644 index 0000000..689ec3d --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_6_contribution_license.md @@ -0,0 +1,9 @@ +**CONTRIBUTION LICENSE** + +By posting your Contributions to any part of the Site or making Contributions accessible to the Site by linking your account from the Site to any of your social networking accounts, you automatically grant, and you represent and warrant that you have the right to grant, to us an unrestricted, unlimited, irrevocable, perpetual, non-exclusive, transferable, royalty-free, fully-paid, worldwide right, and license to host, use, copy, reproduce, disclose, sell, resell, publish, broadcast, retitle, archive, store, cache, publicly perform, publicly display, reformat, translate, transmit, excerpt (in whole or in part), and distribute such Contributions (including, without limitation, your image and voice) for any purpose, commercial, advertising, or otherwise, and to prepare derivative works of, or incorporate into other works, such Contributions, and grant and authorize sublicenses of the foregoing. The use and distribution may occur in any media formats and through any media channels. 
+ +This license will apply to any form, media, or technology now known or hereafter developed, and includes our use of your name, company name, and franchise name, as applicable, and any of the trademarks, service marks, trade names, logos, and personal and commercial images you provide. You waive all moral rights in your Contributions, and you warrant that moral rights have not otherwise been asserted in your Contributions. + +We do not assert any ownership over your Contributions. You retain full ownership of all of your Contributions and any intellectual property rights or other proprietary rights associated with your Contributions. We are not liable for any statements or representations in your Contributions provided by you in any area on the Site. You are solely responsible for your Contributions to the Site and you expressly agree to exonerate us from any and all responsibility and to refrain from any legal action against us regarding your Contributions. + +We have the right, in our sole and absolute discretion, (1) to edit, redact, or otherwise change any Contributions; (2) to re-categorize any Contributions to place them in more appropriate locations on the Site; and (3) to pre-screen or delete any Contributions at any time and for any reason, without notice. We have no obligation to monitor your Contributions. 
\ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_7_social_media.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_7_social_media.md new file mode 100644 index 0000000..56d92b7 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_7_social_media.md @@ -0,0 +1,5 @@ +**SOCIAL MEDIA** + +As part of the functionality of the Site, you may link your account with online accounts you have with third-party service providers (each such account, a "Third-Party Account") by either: (1) providing your Third-Party Account login information through the Site; or (2) allowing us to access your Third-Party Account, as is permitted under the applicable terms and conditions that govern your use of each Third-Party Account. You represent and warrant that you are entitled to disclose your Third-Party Account login information to us and/or grant us access to your Third-Party Account, without breach by you of any of the terms and conditions that govern your use of the applicable Third-Party Account, and without obligating us to pay any fees or making us subject to any usage limitations imposed by the third-party service provider of the Third-Party Account. By granting us access to any Third-Party Accounts, you understand that (1) we may access, make available, and store (if applicable) any content that you have provided to and stored in your Third-Party Account (the “Social Network Content”) so that it is available on and through the Site via your account, including without limitation any friend lists and (2) we may submit to and receive from your Third-Party Account additional information to the extent you are notified when you link your account with the Third-Party Account. 
Depending on the Third-Party Accounts you choose and subject to the privacy settings that you have set in such Third-Party Accounts, personally identifiable information that you post to your Third-Party Accounts may be available on and through your account on the Site. Please note that if a Third-Party Account or associated service becomes unavailable or our access to such Third-Party Account is terminated by the third-party service provider, then Social Network Content may no longer be available on and through the Site. You will have the ability to disable the connection between your account on the Site and your Third-Party Accounts at any time. PLEASE NOTE THAT YOUR RELATIONSHIP WITH THE THIRD-PARTY SERVICE PROVIDERS ASSOCIATED WITH YOUR THIRD-PARTY ACCOUNTS IS GOVERNED SOLELY BY YOUR AGREEMENT(S) WITH SUCH THIRD-PARTY SERVICE PROVIDERS. We make no effort to review any Social Network Content for any purpose, including but not limited to, for accuracy, legality, or non-infringement, and we are not responsible for any Social Network Content. You acknowledge and agree that we may access your email address book associated with a Third-Party Account and your contacts list stored on your mobile device or tablet computer solely for purposes of identifying and informing you of those contacts who have also registered to use the Site. You can deactivate the connection between the Site and your Third-Party Account by contacting us using the contact information below or through your account settings (if applicable). We will attempt to delete any information stored on our servers that was obtained through such Third-Party Account, except the username and profile picture that become associated with your account. 
diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_8_submission.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_8_submission.md new file mode 100644 index 0000000..d390911 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_8_submission.md @@ -0,0 +1,3 @@ +**SUBMISSIONS** + +You acknowledge and agree that any questions, comments, suggestions, ideas, feedback, or other information regarding the Site ("Submissions") provided by you to us are non-confidential and shall become our sole property. We shall own exclusive rights, including all intellectual property rights, and shall be entitled to the unrestricted use and dissemination of these Submissions for any lawful purpose, commercial or otherwise, without acknowledgment or compensation to you. You hereby waive all moral rights to any such Submissions, and you hereby warrant that any such Submissions are original with you or that you have the right to submit such Submissions. You agree there shall be no recourse against us for any alleged or actual infringement or misappropriation of any proprietary right in your Submissions. 
\ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/terms_conditions_websites/part_9_thirdparty_websites_content.md b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_9_thirdparty_websites_content.md new file mode 100644 index 0000000..7f69b25 --- /dev/null +++ b/collections/manual_legal/terms_conditions/terms_conditions_websites/part_9_thirdparty_websites_content.md @@ -0,0 +1,3 @@ +**THIRD-PARTY WEBSITES AND CONTENT** + +The Site may contain (or you may be sent via the Site) links to other websites ("Third-Party Websites") as well as articles, photographs, text, graphics, pictures, designs, music, sound, video, information, applications, software, and other content or items belonging to or originating from third parties ("Third-Party Content"). Such Third-Party Websites and Third-Party Content are not investigated, monitored, or checked for accuracy, appropriateness, or completeness by us, and we are not responsible for any Third-Party Websites accessed through the Site or any Third-Party Content posted on, available through, or installed from the Site, including the content, accuracy, offensiveness, opinions, reliability, privacy practices, or other policies of or contained in the Third-Party Websites or the Third-Party Content. Inclusion of, linking to, or permitting the use or installation of any Third-Party Websites or any Third-Party Content does not imply approval or endorsement thereof by us. If you decide to leave the Site and access the Third-Party Websites or to use or install any Third-Party Content, you do so at your own risk, and you should be aware these Terms of Use no longer govern. You should review the applicable terms and policies, including privacy and data gathering practices, of any website to which you navigate from the Site or relating to any applications you use or install from the Site. 
Any purchases you make through Third-Party Websites will be through other websites and from other companies, and we take no responsibility whatsoever in relation to such purchases which are exclusively between you and the applicable third party. You agree and acknowledge that we do not endorse the products or services offered on Third-Party Websites and you shall hold us harmless from any harm caused by your purchase of such products or services. Additionally, you shall hold us harmless from any losses sustained by you or harm caused to you relating to or resulting in any way from any Third-Party Content or any contact with Third-Party Websites. \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/tfta_to_tft.md b/collections/manual_legal/terms_conditions/tfta_to_tft.md new file mode 100644 index 0000000..33a89ce --- /dev/null +++ b/collections/manual_legal/terms_conditions/tfta_to_tft.md @@ -0,0 +1,9 @@ +## Convert TFTA to TFT + +TFTA is a voluntary staking pool for people to show that they have no intent to sell in the near term. + +If you would like to migrate TFTA to TFT, it's easy: just send your TFTA to the following address: + +> GBUT4GP5GJ6B3XW5PXENHQA7TXJI5GOPW3NF4W3ZIW6OOO4ISY6WNLN2 + +and it will return as TFT. We suggest that you try with 1 TFTA first! \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions/threefold_companies0.md b/collections/manual_legal/terms_conditions/threefold_companies0.md new file mode 100644 index 0000000..a626914 --- /dev/null +++ b/collections/manual_legal/terms_conditions/threefold_companies0.md @@ -0,0 +1,29 @@ +

ThreeFold Related Companies

+ +The following companies are related parties to ThreeFold. Our terms and conditions apply. + +| THREEFOLD RELATED COMPANIES | Description | +| --------------------------------------------------------------- | ---------------------------------------------------------------------------------- | +| [ThreeFold Dubai or ThreeFold Cloud](../../about/threefold_dubai.md) | Promotion of TFGrid + Delivery of ThreeFold Cloud | +| [Threefold_Tech](../../about/threefold_tech.md) | Belgium-based tech company that owns the IP (Intellectual Property) of the tech, which is open source | +| [ThreeFold_VZW](../../about/threefold_vzw.md) | Not-for-profit organization in BE, intended to be used for grants work. | +| [ThreeFold_AG](../../about/threefold_ag.md) | ThreeFold in Zug, Switzerland | +| TF Hub Limited | ThreeFold in BVI | +| Codescalers | Egypt-based software development team, creates a lot of code for ThreeFold | + + +| FARMING COOPERATIVES | | +| ------------------------------------ | ------------------------------------------------ | +| [Mazraa](../../about/mazraa.md) | A farmer in the Middle East who is part of ThreeFold_Dubai | +| [BetterToken](../../about/bettertoken.md) | BetterToken is the very first ThreeFold Farming Cooperative in Europe | + + +| SOME LARGER FARMERS | | +| ------------------- | ---------------------------------------------------------------- | +| Green Edge | Early ThreeFold Farmer providing decentralized compute & storage | +| Bancadati | Large ThreeFold Farmer in Switzerland | +| Moresi | A neutral, technologically advanced data center in Switzerland | +| there are many more | ... | + +> Please note, ThreeFold Grid 3.x operates as a [DAO](../../about/dao/dao.md): every party who wants to participate in the ThreeFold Grid uses the [TFChain](../../about/tfchain.md) and our Forums. 
+> [Click here for more info about our DAO](../../about/dao/tfdao.md) \ No newline at end of file diff --git a/collections/manual_legal/terms_conditions_all3.md b/collections/manual_legal/terms_conditions_all3.md new file mode 100644 index 0000000..07fb1ec --- /dev/null +++ b/collections/manual_legal/terms_conditions_all3.md @@ -0,0 +1,15 @@ +# Legal + +## TFGRID USER and/or FARMER TERMS AND CONDITIONS TFGRID 3.X + +THESE TERMS AND CONDITIONS (THE "**AGREEMENTS**") CONSTITUTE A LEGAL AGREEMENT BETWEEN YOU (“TFGRID **USER**,” “TFGRID **FARMER**,” “**YOU**,” OR “**YOURS**”) AND TRC (“**THREEFOLD**”, “**COMPANY**,” “**US**,” “**WE**” OR “**OUR**”), GOVERNING THE TERMS OF YOUR PARTICIPATION AS A PARTNER, CUSTOMER, FARMER OR USER IN THE THREEFOLD GRID. YOU UNDERSTAND AND AGREE THAT BY ACCEPTING THE TERMS OF THIS AGREEMENT, EITHER BY CLICKING TO SIGNIFY ACCEPTANCE, OR BY TAKING ANY ONE OR MORE OF THE FOLLOWING ACTIONS: DOWNLOADING, INSTALLING, RUNNING, AND/OR USING THE APPLICABLE SOFTWARE, YOU AGREE TO BE BOUND BY THE TERMS OF THIS AGREEMENT EFFECTIVE AS OF THE DATE THAT YOU TAKE THE EARLIEST OF ONE OF THE FOREGOING ACTIONS. YOU REPRESENT AND WARRANT THAT YOU ARE 18 YEARS OLD OR OLDER AND HAVE THE RIGHT AND AUTHORITY TO ENTER INTO AND COMPLY WITH THE TERMS OF THIS AGREEMENT. 
+ +> BY USING THE TFGRID OR ANY OF THE THREEFOLD PROVIDED SOFTWARE OR SERVICES YOU ACCEPT THE FOLLOWING AGREEMENTS: + +- [X] [Disclaimer](./disclaimer.md) +- [X] [Definitions](./definitions_legal.md) +- [X] [Privacy Policy](./privacypolicy.md) +- [X] [Terms & Conditions ThreeFold Related Websites](./terms_conditions/terms_conditions_websites.md) +- [X] [Terms & Conditions TFGrid Users TFGrid 3](./terms_conditions/terms_conditions_griduser.md) +- [X] [Terms & Conditions TFGrid Farmers TFGrid 3](./terms_conditions/terms_conditions_farmer3.md) +- [X] [Terms & Conditions TFGrid Sales](./terms_conditions/terms_conditions_sales.md) \ No newline at end of file diff --git a/collections/manual_legal/threefold_fzc_address.md b/collections/manual_legal/threefold_fzc_address.md new file mode 100644 index 0000000..7ae3c9e --- /dev/null +++ b/collections/manual_legal/threefold_fzc_address.md @@ -0,0 +1,2 @@ +Q1-07-038/B SAIF Zone, Sharjah +United Arab Emirates \ No newline at end of file diff --git a/collections/system_administrators/.collection b/collections/system_administrators/.collection new file mode 100644 index 0000000..e69de29 diff --git a/collections/system_administrators/advanced/advanced.md b/collections/system_administrators/advanced/advanced.md new file mode 100644 index 0000000..2a01387 --- /dev/null +++ b/collections/system_administrators/advanced/advanced.md @@ -0,0 +1,20 @@ +

TFGrid Advanced

+ +In this section, we delve into sophisticated topics and powerful functionalities that empower you to harness the full potential of TFGrid 3.0. Whether you're an experienced user seeking to deepen your understanding or a trailblazer venturing into uncharted territories, this manual is your gateway to mastering advanced concepts on the ThreeFold Grid. + +

Table of Contents

+ +- [Token Transfer Keygenerator](./token_transfer_keygenerator.md) +- [Cancel Contracts](./cancel_contracts.md) +- [Contract Bills Reports](./contract_bill_report.md) +- [Listing Free Public IPs](./list_public_ips.md) +- [Cloud Console](./cloud_console.md) +- [Redis](./grid3_redis.md) +- [IPFS](./ipfs/ipfs_toc.md) + - [IPFS on a Full VM](./ipfs/ipfs_fullvm.md) + - [IPFS on a Micro VM](./ipfs/ipfs_microvm.md) +- [Hummingbot](./hummingbot.md) +- [AI & ML Workloads](./ai_ml_workloads.md) +- [Ecommerce](./ecommerce/ecommerce.md) + - [WooCommerce](./ecommerce/woocommerce.md) + - [nopCommerce](./ecommerce/nopcommerce.md) diff --git a/collections/system_administrators/advanced/ai_ml_workloads.md b/collections/system_administrators/advanced/ai_ml_workloads.md new file mode 100644 index 0000000..bc5760e --- /dev/null +++ b/collections/system_administrators/advanced/ai_ml_workloads.md @@ -0,0 +1,125 @@ +

AI & ML Workloads

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Prepare the System](#prepare-the-system) +- [Install the GPU Driver](#install-the-gpu-driver) +- [Set a Python Virtual Environment](#set-a-python-virtual-environment) +- [Install PyTorch and Test Cuda](#install-pytorch-and-test-cuda) +- [Set and Access Jupyter Notebook](#set-and-access-jupyter-notebook) +- [Run AI/ML Workloads](#run-aiml-workloads) + +*** + +## Introduction + +We present a basic method to deploy artificial intelligence (AI) and machine learning (ML) workloads on the TFGrid. For this, we make use of dedicated nodes and GPU support. + +In the first part, we show the steps to install the Nvidia driver for a GPU card on a full Ubuntu 22.04 VM running on the TFGrid. + +In the second part, we show how to use PyTorch to run AI/ML tasks. + +## Prerequisites + +You need to reserve a [dedicated GPU node](../../dashboard/deploy/node_finder.md#dedicated-nodes) on the ThreeFold Grid. + +## Prepare the System + +- Update the system + ``` + dpkg --add-architecture i386 + apt-get update + apt-get dist-upgrade + reboot + ``` +- Check the GPU info + ``` + lspci | grep VGA + lshw -c video + ``` + +## Install the GPU Driver + +- Download the latest Nvidia driver + - Check which driver is recommended + ``` + apt install ubuntu-drivers-common + ubuntu-drivers devices + ``` + - Install the recommended driver (e.g. with 535) + ``` + apt install nvidia-driver-535 + ``` + - Reboot and reconnect to the VM +- Check the GPU status + ``` + nvidia-smi + ``` + +Now that the GPU node is set, let's work on setting up PyTorch to run AI/ML workloads. + +## Set a Python Virtual Environment + +Before installing Python packages with pip, you should create a virtual environment. 
+ +- Install the prerequisites + ``` + apt update + apt install python3-pip python3-dev + pip3 install --upgrade pip + pip3 install virtualenv + ``` +- Create a virtual environment + ``` + mkdir ~/python_project + cd ~/python_project + virtualenv python_project_env + source python_project_env/bin/activate + ``` + +## Install PyTorch and Test Cuda + +Once you've created and activated a virtual environment for Python, you can install different Python packages. + +- Install PyTorch and upgrade Numpy + ``` + pip3 install torch + pip3 install numpy --upgrade + ``` + +Before going further, you can check if Cuda is properly installed on your machine. + +- Check that Cuda is available in Python with PyTorch by running the following lines: + ``` + import torch + torch.cuda.is_available() + torch.cuda.device_count() # the output should be 1 + torch.cuda.current_device() # the output should be 0 + torch.cuda.device(0) + torch.cuda.get_device_name(0) + ``` + +## Set and Access Jupyter Notebook + +You can run Jupyter Notebook on the remote VM and access it on your local browser. + +- Install Jupyter Notebook + ``` + pip3 install notebook + ``` +- Run Jupyter Notebook in no-browser mode and take note of the URL and the token + ``` + jupyter notebook --no-browser --port=8080 --ip=0.0.0.0 + ``` +- On your local machine, copy and paste the given URL into a browser, making sure to replace `127.0.0.1` with the WireGuard IP (here it is `10.20.4.2`) and to set the correct token. + ``` + http://10.20.4.2:8080/tree?token= + ``` + +## Run AI/ML Workloads + +After following the steps above, you should now be able to run Python code that makes use of your GPU node to compute AI and ML workloads. + +Feel free to explore different ways to use this feature. For example, the [HuggingFace course](https://huggingface.co/learn/nlp-course/chapter1/1) on natural language processing is a good introduction to machine learning. 
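As a small convenience for the Jupyter step above, the URL substitution can also be scripted. This is a sketch: the token is a made-up placeholder, and `10.20.4.2` is the example WireGuard IP used in this guide.

```shell
# Rewrite the URL printed by the Jupyter server so it is reachable over
# WireGuard. The token below is a placeholder; substitute the one printed
# by "jupyter notebook".
JUPYTER_URL="http://127.0.0.1:8080/tree?token=PLACEHOLDER"
WG_IP="10.20.4.2"
# sed swaps the loopback address for the WireGuard IP of the VM.
REMOTE_URL=$(printf '%s' "$JUPYTER_URL" | sed "s/127\.0\.0\.1/$WG_IP/")
echo "$REMOTE_URL"
# → http://10.20.4.2:8080/tree?token=PLACEHOLDER
```

Open the printed URL in your local browser while the WireGuard tunnel is up.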
\ No newline at end of file diff --git a/collections/system_administrators/advanced/cancel_contracts.md b/collections/system_administrators/advanced/cancel_contracts.md new file mode 100644 index 0000000..7b466a0 --- /dev/null +++ b/collections/system_administrators/advanced/cancel_contracts.md @@ -0,0 +1,48 @@ +

Cancel Contracts

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Using the Dashboard](#using-the-dashboard) +- [Using GraphQL and Polkadot UI](#using-graphql-and-polkadot-ui) +- [Using grid3\_client\_ts](#using-grid3_client_ts) + +*** + +## Introduction + +We present different methods to delete contracts on the TFGrid. + +## Using the Dashboard + +To cancel contracts with the Dashboard, consult the [Contracts List](../../dashboard/deploy/your_contracts.md) documentation. + +## Using GraphQL and Polkadot UI + +From the GraphQL service, execute the following query. + +``` +query MyQuery { + nodeContracts(where: {twinId_eq: TWIN_ID, state_eq: Created}) { + contractId + } +} +``` + +Replace `TWIN_ID` with your twin ID. The information should be available on the [Dashboard](../../dashboard/dashboard.md). + +Then, from the [Polkadot UI](https://polkadot.js.org/apps/), add the TFChain endpoint to development. + +![](img/polka_web_add_development_url.png) + +Go to `Extrinsics`, choose the `smartContract` module and `cancelContract` extrinsic and use the IDs from GraphQL to execute the cancellation. + +![](img/polka_web_cancel_contracts.jpg) + +## Using grid3_client_ts + +In order to use the `grid3_client_ts` module, it is essential to first clone our official mono-repo containing the module and then navigate to it. If you are looking for a quick and efficient way to cancel contracts, we offer a code-based solution that can be found [here](https://github.com/threefoldtech/tfgrid-sdk-ts/blob/development/packages/grid_client/scripts/delete_all_contracts.ts). + +To make the most of `grid_client`, we highly recommend following our [Grid-Client guide](https://github.com/threefoldtech/tfgrid-sdk-ts/blob/development/packages/grid_client/README.md) for a comprehensive overview of the many advanced capabilities offered by this powerful tool. With features like contract creation, modification, and retrieval, `grid_client` provides an intuitive and easy-to-use solution for managing your contracts effectively. 
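For completeness, the node-contracts query above can also be sent over plain HTTP with `curl`, without the GraphQL playground or Polkadot UI. This is a sketch: the endpoint URL in the comment is an assumption (use the GraphQL URL listed for your network on the Dashboard), and `1234` is a placeholder twin ID.

```shell
# Build the JSON payload for the nodeContracts query shown above.
# TWIN_ID is a placeholder — substitute your own twin ID.
TWIN_ID=1234
QUERY="query { nodeContracts(where: {twinId_eq: ${TWIN_ID}, state_eq: Created}) { contractId } }"
PAYLOAD="{\"query\": \"${QUERY}\"}"
echo "$PAYLOAD"

# Send it to the GraphQL endpoint (the URL below is an assumption — check
# the Dashboard for your network's GraphQL URL):
# curl -s -X POST -H 'Content-Type: application/json' -d "$PAYLOAD" https://graphql.grid.tf/graphql
```

The response lists the `contractId` values you can then feed to the `cancelContract` extrinsic.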
diff --git a/collections/system_administrators/advanced/cloud_console.md b/collections/system_administrators/advanced/cloud_console.md new file mode 100644 index 0000000..ee8d15e --- /dev/null +++ b/collections/system_administrators/advanced/cloud_console.md @@ -0,0 +1,33 @@ +

Cloud Console

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Overview](#overview) +- [Connect to Cloud Console](#connect-to-cloud-console) + +--- + +## Introduction + +Cloud console is a tool to view machine logs and interact with the machines you have deployed. We show the basics of cloud-console and how to access it via a browser during deployment. + +## Overview + +Cloud console always runs on the router IP of the machine's private network, on a port number equal to `20000 + last octet` of the machine's private IP. For example, if the machine IP is `10.20.2.2/24`, `cloud-console` is running on `10.20.2.1:20002`. + +For cloud-console to run, we need to start the cloud-hypervisor with the option "--serial pty" instead of tty. This allows us to interact with the VM from another process, `cloud-console` in our case. + +## Connect to Cloud Console + +You can easily connect to cloud console on the TFGrid. + +- Deploy a VM on the TFGrid with the WireGuard network +- Set the WireGuard configuration file +- Start the WireGuard connection: + ``` + wg-quick up wireguard.conf + ``` +- Go to your browser with the network router IP `10.20.2.1:20002` to access cloud console. + +> Note: You might need to create a user/password in the VM first before connecting to cloud-console if the image used does not have a default user. \ No newline at end of file diff --git a/collections/system_administrators/advanced/contract_bill_report.md b/collections/system_administrators/advanced/contract_bill_report.md new file mode 100644 index 0000000..1269df2 --- /dev/null +++ b/collections/system_administrators/advanced/contract_bill_report.md @@ -0,0 +1,63 @@ +

Contract Bills Reports

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Contract Billing Report (GraphQL)](#contract-billing-report-graphql) +- [Consumption](#consumption) + +*** + +## Introduction + +Now you can check the billing rate of your contracts directly from the `Contracts` tab in the Dashboard. + +> It takes an hour for the contract to display the billing rate (until it reaches the first billing cycle). + +The `Billing Rate` is displayed in `TFT/Hour`. + +![image](img/billing_rate.png) + +## Contract Billing Report (GraphQL) + +- You need to find the contract ID +- Ask GraphQL for the consumption + +> Example query for all contracts + +```graphql +query MyQuery { + contractBillReports { + contractId + amountBilled + discountReceived + } +} +``` + +And for a specific contract + +```graphql +query MyQuery { + contractBillReports(where: { contractId_eq: 10 }) { + amountBilled + discountReceived + contractId + } +} +``` + +## Consumption + +```graphql +query MyQuery { + consumptions(where: { contractId_eq: 10 }) { + contractId + cru + sru + mru + hru + nru + } +} +``` diff --git a/collections/system_administrators/advanced/ecommerce/ecommerce.md b/collections/system_administrators/advanced/ecommerce/ecommerce.md new file mode 100644 index 0000000..63a65ee --- /dev/null +++ b/collections/system_administrators/advanced/ecommerce/ecommerce.md @@ -0,0 +1,8 @@ +

Ecommerce

+ +You can easily deploy a free and open-source ecommerce on the TFGrid. We present here two of the most popular options. + +

Table of Contents

+ +- [WooCommerce](./woocommerce.md) +- [nopCommerce](./nopcommerce.md) \ No newline at end of file diff --git a/collections/system_administrators/advanced/ecommerce/img/nopcommerce_1.png b/collections/system_administrators/advanced/ecommerce/img/nopcommerce_1.png new file mode 100644 index 0000000..cc68f2a Binary files /dev/null and b/collections/system_administrators/advanced/ecommerce/img/nopcommerce_1.png differ diff --git a/collections/system_administrators/advanced/ecommerce/img/nopcommerce_2.png b/collections/system_administrators/advanced/ecommerce/img/nopcommerce_2.png new file mode 100644 index 0000000..e2da8b2 Binary files /dev/null and b/collections/system_administrators/advanced/ecommerce/img/nopcommerce_2.png differ diff --git a/collections/system_administrators/advanced/ecommerce/img/nopcommerce_3.png b/collections/system_administrators/advanced/ecommerce/img/nopcommerce_3.png new file mode 100644 index 0000000..15967b0 Binary files /dev/null and b/collections/system_administrators/advanced/ecommerce/img/nopcommerce_3.png differ diff --git a/collections/system_administrators/advanced/ecommerce/img/nopcommerce_4.png b/collections/system_administrators/advanced/ecommerce/img/nopcommerce_4.png new file mode 100644 index 0000000..c686452 Binary files /dev/null and b/collections/system_administrators/advanced/ecommerce/img/nopcommerce_4.png differ diff --git a/collections/system_administrators/advanced/ecommerce/img/woocommerce_1.png b/collections/system_administrators/advanced/ecommerce/img/woocommerce_1.png new file mode 100644 index 0000000..6b295de Binary files /dev/null and b/collections/system_administrators/advanced/ecommerce/img/woocommerce_1.png differ diff --git a/collections/system_administrators/advanced/ecommerce/img/woocommerce_2.png b/collections/system_administrators/advanced/ecommerce/img/woocommerce_2.png new file mode 100644 index 0000000..93512ec Binary files /dev/null and 
b/collections/system_administrators/advanced/ecommerce/img/woocommerce_2.png differ diff --git a/collections/system_administrators/advanced/ecommerce/img/woocommerce_3.png b/collections/system_administrators/advanced/ecommerce/img/woocommerce_3.png new file mode 100644 index 0000000..c975a87 Binary files /dev/null and b/collections/system_administrators/advanced/ecommerce/img/woocommerce_3.png differ diff --git a/collections/system_administrators/advanced/ecommerce/nopcommerce.md b/collections/system_administrators/advanced/ecommerce/nopcommerce.md new file mode 100644 index 0000000..47e9f8e --- /dev/null +++ b/collections/system_administrators/advanced/ecommerce/nopcommerce.md @@ -0,0 +1,269 @@ +

Ecommerce on the TFGrid

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Deploy a Full VM](#deploy-a-full-vm) +- [Create an SSH Tunnel](#create-an-ssh-tunnel) +- [Preparing the VM](#preparing-the-vm) +- [Set a Firewall](#set-a-firewall) +- [Download nopCommerce](#download-nopcommerce) +- [Access nopCommerce](#access-nopcommerce) +- [Install nopCommerce](#install-nopcommerce) +- [Access the Ecommerce from the Public Internet](#access-the-ecommerce-from-the-public-internet) + - [Set a DNS Record](#set-a-dns-record) + - [Access the Ecommerce](#access-the-ecommerce) + - [HTTPS with Caddy](#https-with-caddy) + - [Manage with Systemd](#manage-with-systemd) +- [Access Admin Panel](#access-admin-panel) +- [Manage nopCommerce with Systemd](#manage-nopcommerce-with-systemd) +- [References](#references) +- [Questions and Feedback](#questions-and-feedback) +--- + +## Introduction + +We show how to deploy a free and open-source ecommerce on the ThreeFold Grid. We will be deploying on a full VM with an IPv4 address. + +[nopCommerce](https://www.nopcommerce.com/en) is an open-source ecommerce platform based on Microsoft's ASP.NET Core framework and MS SQL Server 2012 (or higher) backend Database. + +## Prerequisites + +- [A TFChain account](../../../dashboard/wallet_connector.md) +- TFT in your TFChain account + - [Buy TFT](../../../threefold_token/buy_sell_tft/buy_sell_tft.md) + - [Send TFT to TFChain](../../../threefold_token/tft_bridges/tfchain_stellar_bridge.md) + +## Deploy a Full VM + +We start by deploying a full VM on the ThreeFold Dashboard. 
+ +* On the [Threefold Dashboard](https://dashboard.grid.tf/#/), go to the [full virtual machine deployment page](https://dashboard.grid.tf/#/deploy/virtual-machines/full-virtual-machine/) +* Deploy a full VM (Ubuntu 22.04) with an IPv4 address and at least the minimum specs for a full VM + * IPv4 Address + * Minimum vcores: 1vcore + * Minimum RAM: 512MB + * Minimum storage: 15GB +* After deployment, note the VM IPv4 address + +## Create an SSH Tunnel + +We create an SSH tunnel mapping local port 5432 to port 80 on the VM, since this is the combination we will set for nopCommerce in the docker-compose file. + +- Open a terminal and create an SSH tunnel + ``` + ssh -4 -L 5432:127.0.0.1:80 root@VM_IPv4_address + ``` + +Simply leave this window open and follow the next steps. + +## Preparing the VM + +We prepare the full VM to run nopCommerce. + +* Connect to the VM via SSH + ``` + ssh root@VM_IPv4_address + ``` +* Update the VM + ``` + apt update + ``` +* [Install Docker](../../computer_it_basics/docker_basics.html#install-docker-desktop-and-docker-engine) +* Install docker-compose + ``` + apt install docker-compose -y + ``` + +## Set a Firewall + +You can set a firewall on your VM for further security. This should be used in production mode. 
+ +* Add the permissions + * ``` + ufw allow 80 + ufw allow 443 + ``` +* Enable the firewall + * ``` + ufw enable + ``` +* Verify the firewall status + * ``` + ufw status verbose + ``` + +## Download nopCommerce + +* Clone the repository + ``` + git clone https://github.com/nopSolutions/nopCommerce.git + cd nopCommerce + ``` +* Build the image + ``` + docker-compose -f ./postgresql-docker-compose.yml build + ``` +* Run the image + ``` + docker-compose -f ./postgresql-docker-compose.yml up + ``` + +## Access nopCommerce + +You can access the nopCommerce interface on a browser with port 5432 via the SSH tunnel: + +``` +localhost:5432 +``` + +![](./img/nopcommerce_1.png) + +For more information on how to use nopCommerce, refer to the [nopCommerce docs](https://docs.nopcommerce.com/en/index.html). + +## Install nopCommerce + +You will need to set your ecommerce store and database information. + +- Enter an email for your website (e.g. `admin@example.com`) +- For the database, choose PostgreSQL and check both options `Create a database` and `Enter raw connection`. Enter the following information (as per the docker-compose information) + ``` + Server=nopcommerce_database;Port=5432;Database=nop;User Id=postgres;Password=nopCommerce_db_password; + ``` +- Note: For production, you will need to set your own username and password. + +## Access the Ecommerce from the Public Internet + +### Set a DNS Record + +* Go to your domain name registrar + * In the section **Advanced DNS**, add a **DNS A Record** to your domain and link it to the IP address of the VM you deployed on: + * Type: A Record + * Host: @ + * Value: + * TTL: Automatic + * It might take up to 30 minutes to set the DNS properly. + * To check if the A record has been registered, you can use a common DNS checker: + * ``` + https://dnschecker.org/#A/example.com + ``` + +### Access the Ecommerce + +You can now open a web browser and access your website via your domain, e.g. `example.com`. 
+ +![](./img/nopcommerce_2.png) + +### HTTPS with Caddy + +We set HTTPS with Caddy. + +- Install Caddy + ``` + apt install -y debian-keyring debian-archive-keyring apt-transport-https curl + curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg + curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' > /etc/apt/sources.list.d/caddy-stable.list + apt update + apt install caddy + ``` +- Set a reverse proxy on port 80 with your own domain + ``` + caddy reverse-proxy -r --from example.com --to :80 + ``` + +You should see in the logs that it successfully obtains an SSL certificate, and after that you can try navigating to your site's domain again to verify it's working. Using a private window or adding `https://` specifically might be necessary until your browser drops its cache. + +![](./img/nopcommerce_3.png) + +When you're satisfied that everything looks good, hit `ctrl-c` to exit Caddy and we'll proceed to making this persistent. + +#### Manage with Systemd + +We create a systemd service to always run the reverse proxy for port 80. + +- Create a caddy service + ```bash + nano /etc/systemd/system/caddy.service + ``` +- Set the service with your own domain + ``` + [Unit] + Description=Caddy Service + StartLimitIntervalSec=0 + + [Service] + Restart=always + RestartSec=5 + ExecStart=caddy reverse-proxy -r --from example.com --to :80 + + [Install] + WantedBy=multi-user.target + ``` +- Enable the service + ``` + systemctl daemon-reload + systemctl enable caddy + systemctl start caddy + ``` +- Verify that the Caddy service is properly running + ``` + systemctl status caddy + ``` + +Systemd will start up Caddy immediately, restart it if it ever crashes, and start it up automatically after any reboots. + +## Access Admin Panel + +You can access the admin panel by clicking on `Log in` and providing the admin username and password set during the nopCommerce installation. 
+ +![](./img/nopcommerce_4.png) + +In `Add your store info`, you can set the HTTPS address of your domain and enable SSL. + +You will need to properly configure your ecommerce instance for your own needs and products. Read the nopCommerce docs for more information. + +## Manage nopCommerce with Systemd + +We create a systemd service to always run the nopCommerce docker-compose file. + +- Create a nopcommerce service + ```bash + nano /etc/systemd/system/nopcommerce.service + ``` +- Set the service + ``` + [Unit] + Description=nopCommerce Service + StartLimitIntervalSec=0 + + [Service] + Restart=always + RestartSec=5 + StandardOutput=append:/root/nopcommerce.log + StandardError=append:/root/nopcommerce.log + ExecStart=docker-compose -f /root/nopCommerce/postgresql-docker-compose.yml up + + [Install] + WantedBy=multi-user.target + ``` +- Enable the service + ``` + systemctl daemon-reload + systemctl enable nopcommerce + systemctl start nopcommerce + ``` +- Verify that the nopCommerce service is properly running + ``` + systemctl status nopcommerce + ``` + +Systemd will start up the nopCommerce docker-compose file, restart it if it ever crashes, and start it up automatically after any reboots. + +## References + +For further information on how to set nopCommerce, read the [nopCommerce documentation](https://docs.nopcommerce.com/en/index.html?showChildren=false). + +## Questions and Feedback + +If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel. \ No newline at end of file diff --git a/collections/system_administrators/advanced/ecommerce/woocommerce.md b/collections/system_administrators/advanced/ecommerce/woocommerce.md new file mode 100644 index 0000000..edc6f6e --- /dev/null +++ b/collections/system_administrators/advanced/ecommerce/woocommerce.md @@ -0,0 +1,157 @@ +

WooCommerce on the TFGrid

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Deploy Wordpress](#deploy-wordpress) +- [Set a DNS Record](#set-a-dns-record) +- [HTTPS with Caddy](#https-with-caddy) + - [Adjust the Firewall](#adjust-the-firewall) + - [Manage with zinit](#manage-with-zinit) +- [Access Admin Panel](#access-admin-panel) +- [Install WooCommerce](#install-woocommerce) +- [Troubleshooting](#troubleshooting) +- [References](#references) +- [Questions and Feedback](#questions-and-feedback) +--- + +## Introduction + +We show how to deploy a free and open-source ecommerce on the ThreeFold Grid. We will be deploying on a micro VM with an IPv4 address. + +[WooCommerce](https://woocommerce.com/) is the open-source ecommerce platform for [WordPress](https://wordpress.com/). The platform is free, flexible, and amplified by a global community. The freedom of open-source means you retain full ownership of your store’s content and data forever. + +## Prerequisites + +- [A TFChain account](../../../dashboard/wallet_connector.md) +- TFT in your TFChain account + - [Buy TFT](../../../threefold_token/buy_sell_tft/buy_sell_tft.md) + - [Send TFT to TFChain](../../../threefold_token/tft_bridges/tfchain_stellar_bridge.md) + +## Deploy Wordpress + +We start by deploying Wordpress on the ThreeFold Dashboard. 
+ +* On the [Threefold Dashboard](https://dashboard.grid.tf/#/), go to the [Wordpress deployment page](https://dashboard.test.grid.tf/#/deploy/applications/wordpress/) +* Deploy a Wordpress instance with an IPv4 address and sufficient resources to run Wordpress + * IPv4 Address + * Minimum vcores: 2vcore + * Minimum RAM: 4GB + * Minimum storage: 50GB +* After deployment, note the VM IPv4 address + +## Set a DNS Record + +* Go to your domain name registrar + * In the section **Advanced DNS**, add a **DNS A Record** to your domain and link it to the IP address of the VM you deployed on: + * Type: A Record + * Host: @ + * Value: + * TTL: Automatic + * It might take up to 30 minutes to set the DNS properly. + * To check if the A record has been registered, you can use a common DNS checker: + * ``` + https://dnschecker.org/#A/example.com + ``` + +## HTTPS with Caddy + +We set HTTPS with Caddy. + +- Install Caddy + ``` + apt install -y debian-keyring debian-archive-keyring apt-transport-https curl + curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg + curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' > /etc/apt/sources.list.d/caddy-stable.list + apt update + apt install caddy + ``` +- Set a reverse proxy on port 80 with your own domain + ``` + caddy reverse-proxy -r --from example.com --to :80 + ``` + +You should see in the logs that it successfully obtains an SSL certificate, and after that you can try navigating to your site's domain again to verify it's working. Using a private window or adding `https://` specifically might be necessary until your browser drops its cache. + +When you're satisfied that everything looks good, hit `ctrl-c` to exit Caddy and we'll proceed to making this persistent. + +### Adjust the Firewall + +By default, ufw is set up on the Wordpress application deployed from the Dashboard. To use Caddy and set HTTPS, we want to allow port 443. 
+ +* Add the permissions + * ``` + ufw allow 443 + ``` + +### Manage with zinit + +We manage Caddy with zinit. + +- Open the file for editing + ```bash + nano /etc/zinit/caddy.yaml + ``` +- Insert the following line with your own domain and save the file + ``` + exec: caddy reverse-proxy -r --from example.com --to :80 + ``` +- Add the new Caddy file to zinit + ```bash + zinit monitor caddy + ``` + +Zinit will start up Caddy immediately, restart it if it ever crashes, and start it up automatically after any reboots. Assuming you tested the Caddy invocation above and used the same form here, that should be all there is to it. + +Here are some other zinit commands that can be helpful to troubleshoot issues: + +- See the status of all services (same as `zinit list`) + ``` + zinit + ``` +- Get logs for a service + ``` + zinit log caddy + ``` +- Restart a service (to test configuration changes, for example) + ``` + zinit stop caddy + zinit start caddy + ``` + +## Access Admin Panel + +You can access the admin panel by clicking on `Admin panel` under `Actions` on the Dashboard. You can also use the following template on a browser with your own domain: + +``` +example.com/wp-admin +``` + +If you've forgotten your credentials, just open the WordPress info window on the Dashboard. + +## Install WooCommerce + +On the WordPress admin panel, go to `Plugins`, search for WooCommerce and install the plugin. + +![](./img/woocommerce_1.png) + +Once this is done, you can open WooCommerce on the left-side menu. + +![](./img/woocommerce_2.png) + +You can then set up your store and start your online business! + +![](./img/woocommerce_3.png) + +## Troubleshooting + +You might need to deactivate some plugins that aren't compatible with WooCommerce, such as `MailPoet`. + +## References + +Make sure to read the [WordPress and WooCommerce documentation](https://woocommerce.com/document/woocommerce-self-service-guide) to set up your ecommerce store. 
+ +## Questions and Feedback + +If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel. \ No newline at end of file diff --git a/collections/system_administrators/advanced/grid3_redis.md b/collections/system_administrators/advanced/grid3_redis.md new file mode 100644 index 0000000..0d3377e --- /dev/null +++ b/collections/system_administrators/advanced/grid3_redis.md @@ -0,0 +1,46 @@ +

Redis

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Install Redis](#install-redis) + - [Linux](#linux) + - [MacOS](#macos) +- [Run Redis](#run-redis) + +*** + +## Introduction + +Redis is an open-source, in-memory data structure store that is widely used as a caching layer, message broker, and database. It is known for its speed, versatility, and support for a wide range of data structures. Redis is designed to deliver high-performance data access by storing data in memory, which allows for fast read and write operations. It supports various data types, including strings, lists, sets, hashes, and more, and provides a rich set of commands for manipulating and querying the data. + +Redis is widely used in various use cases, including caching, session management, real-time analytics, leaderboards, task queues, and more. Its simplicity, speed, and flexibility make it a popular choice for developers who need a fast and reliable data store for their applications. In the context of the ThreeFold ecosystem, Redis can be used as a backend mechanism to communicate with the nodes on the ThreeFold Grid using the Reliable Message Bus. + + + +## Install Redis + +### Linux + +On most Linux distributions, Redis can be installed directly from the package manager (for example, `sudo apt install redis-server` on Ubuntu or Debian). If you don't find Redis in your Linux distro's package manager, check the [Redis downloads](https://redis.io/download) page for the source code and installation instructions. + +### MacOS + +On MacOS, [Homebrew](https://brew.sh/) can be used to install Redis. The steps are as follows: + +``` +brew update +brew install redis +``` + +Alternatively, it can be built from source, using the same [download page](https://redis.io/download/) as shown above. 
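Once installed, you can run a quick smoke test to confirm that the server and client binaries work together; this assumes `redis-server` and `redis-cli` are now on your PATH:

```shell
# Start a throwaway Redis instance in the background,
# check that it answers, then shut it down again.
redis-server --daemonize yes
redis-cli ping        # replies PONG when the server is reachable
redis-cli shutdown
```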
+ + + +## Run Redis + +You can launch the Redis server with the following command: + +``` +redis-server +``` \ No newline at end of file diff --git a/collections/system_administrators/advanced/grid3_stellar_tfchain_bridge.md b/collections/system_administrators/advanced/grid3_stellar_tfchain_bridge.md new file mode 100644 index 0000000..f74cbdb --- /dev/null +++ b/collections/system_administrators/advanced/grid3_stellar_tfchain_bridge.md @@ -0,0 +1,39 @@ +

Transferring TFT Between Stellar and TFChain

+ +

Table of Contents

+ +- [Usage](#usage) +- [Prerequisites](#prerequisites) +- [Stellar to TFChain](#stellar-to-tfchain) +- [TFChain to Stellar](#tfchain-to-stellar) + +*** + +## Usage + +This document will explain how you can transfer TFT from TFChain to Stellar and back. + +For more information on TFT bridges, read [this documentation](../threefold_token/tft_bridges/tft_bridges.md). + +## Prerequisites + +- [Stellar wallet](../threefold_token/storing_tft/storing_tft.md) + +- [Account on TFChain (use TF Dashboard to create one)](../dashboard/wallet_connector.md) + +![](./img/bridge.png) + +## Stellar to TFChain + +You can deposit TFT to TFChain using the bridge page on the TF Dashboard. Click **Deposit**: + +![bridge](./img/bridge_deposit.png) + +## TFChain to Stellar + +You can bridge back to Stellar using the bridge page on the Dashboard. Click **Withdraw**: + +![withdraw](./img/bridge_withdraw.png) + +A withdrawal fee of 1 TFT will be charged, so make sure to send an amount larger than 1 TFT. +The amount withdrawn from TFChain will be sent to your Stellar wallet. diff --git a/collections/system_administrators/advanced/hummingbot.md b/collections/system_administrators/advanced/hummingbot.md new file mode 100644 index 0000000..1a154b7 --- /dev/null +++ b/collections/system_administrators/advanced/hummingbot.md @@ -0,0 +1,80 @@ +

Hummingbot on a Full VM

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Deploy a Full VM](#deploy-a-full-vm) +- [Preparing the VM](#preparing-the-vm) +- [Setting Hummingbot](#setting-hummingbot) +- [References](#references) + +--- + +## Introduction + +Hummingbot is an open-source platform that helps you design, backtest, and deploy fleets of automated crypto trading bots. + +In this guide, we go through the basic steps to deploy a [Hummingbot](https://hummingbot.org/) instance on a full VM running on the TFGrid. + + +## Prerequisites + +- [A TFChain account](../../../dashboard/wallet_connector.md) +- TFT in your TFChain account + - [Buy TFT](../../../threefold_token/buy_sell_tft/buy_sell_tft.md) + - [Send TFT to TFChain](../../../threefold_token/tft_bridges/tfchain_stellar_bridge.md) + +## Deploy a Full VM + +We start by deploying a full VM on the ThreeFold Dashboard. + +* On the [Threefold Dashboard](https://dashboard.grid.tf/#/), go to the [full virtual machine deployment page](https://dashboard.grid.tf/#/deploy/virtual-machines/full-virtual-machine/) +* Deploy a full VM (Ubuntu 22.04) with an IPv4 address and at least the minimum specs for Hummingbot + * IPv4 Address + * Minimum vcores: 1vcore + * Minimum RAM: 4096MB + * Minimum storage: 15GB +* After deployment, note the VM IPv4 address +* Connect to the VM via SSH + * ``` + ssh root@VM_IPv4_address + ``` + +## Preparing the VM + +We prepare the full VM to run Hummingbot. + +* Update the VM + ``` + apt update + ``` +* [Install Docker](../computer_it_basics/docker_basics.html#install-docker-desktop-and-docker-engine) + +## Setting Hummingbot + +We clone the Hummingbot repo and start it via Docker. + +* Clone the Hummingbot repository + ``` + git clone https://github.com/hummingbot/hummingbot.git + cd hummingbot + ``` +* Start Hummingbot + ``` + docker compose up -d + ``` +* Attach to the instance + ``` + docker attach hummingbot + ``` + +You should now see the Hummingbot page. 
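If the Hummingbot interface does not come up, it can help to first confirm that the container is actually running; a quick check, assuming the compose project was started from the `hummingbot` directory as above:

```shell
cd hummingbot
# List the compose services and their state; hummingbot should be "running".
docker compose ps
# Tail recent logs if the container failed to start.
docker compose logs --tail 20
```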
+ +![](./img/hummingbot.png) + +## References + +The information to install Hummingbot has been taken directly from their [documentation](https://hummingbot.org/installation/docker/). + +For any advanced configurations, you may refer to the Hummingbot documentation. \ No newline at end of file diff --git a/collections/system_administrators/advanced/img/advanced_.png b/collections/system_administrators/advanced/img/advanced_.png new file mode 100644 index 0000000..0065213 Binary files /dev/null and b/collections/system_administrators/advanced/img/advanced_.png differ diff --git a/collections/system_administrators/advanced/img/billing_rate.png b/collections/system_administrators/advanced/img/billing_rate.png new file mode 100644 index 0000000..b22a35a Binary files /dev/null and b/collections/system_administrators/advanced/img/billing_rate.png differ diff --git a/collections/documentation/dashboard/img/bridge.png b/collections/system_administrators/advanced/img/bridge.png similarity index 100% rename from collections/documentation/dashboard/img/bridge.png rename to collections/system_administrators/advanced/img/bridge.png diff --git a/collections/documentation/dashboard/img/bridge_deposit.png b/collections/system_administrators/advanced/img/bridge_deposit.png similarity index 100% rename from collections/documentation/dashboard/img/bridge_deposit.png rename to collections/system_administrators/advanced/img/bridge_deposit.png diff --git a/collections/documentation/dashboard/img/bridge_withdraw.png b/collections/system_administrators/advanced/img/bridge_withdraw.png similarity index 100% rename from collections/documentation/dashboard/img/bridge_withdraw.png rename to collections/system_administrators/advanced/img/bridge_withdraw.png diff --git a/collections/system_administrators/advanced/img/contracts_list.png b/collections/system_administrators/advanced/img/contracts_list.png new file mode 100644 index 0000000..a252229 Binary files /dev/null and 
b/collections/system_administrators/advanced/img/contracts_list.png differ diff --git a/collections/system_administrators/advanced/img/hummingbot.png b/collections/system_administrators/advanced/img/hummingbot.png new file mode 100644 index 0000000..ab81cfa Binary files /dev/null and b/collections/system_administrators/advanced/img/hummingbot.png differ diff --git a/collections/system_administrators/advanced/img/ipfs_logo.png b/collections/system_administrators/advanced/img/ipfs_logo.png new file mode 100644 index 0000000..03a5db4 Binary files /dev/null and b/collections/system_administrators/advanced/img/ipfs_logo.png differ diff --git a/collections/system_administrators/advanced/img/minio_1.png b/collections/system_administrators/advanced/img/minio_1.png new file mode 100644 index 0000000..58d1627 Binary files /dev/null and b/collections/system_administrators/advanced/img/minio_1.png differ diff --git a/collections/system_administrators/advanced/img/minio_2.png b/collections/system_administrators/advanced/img/minio_2.png new file mode 100644 index 0000000..3db775d Binary files /dev/null and b/collections/system_administrators/advanced/img/minio_2.png differ diff --git a/collections/system_administrators/advanced/img/polka_web_add_development_url.png b/collections/system_administrators/advanced/img/polka_web_add_development_url.png new file mode 100644 index 0000000..159e7a1 Binary files /dev/null and b/collections/system_administrators/advanced/img/polka_web_add_development_url.png differ diff --git a/collections/system_administrators/advanced/img/polka_web_cancel_contracts.jpg b/collections/system_administrators/advanced/img/polka_web_cancel_contracts.jpg new file mode 100644 index 0000000..b704900 Binary files /dev/null and b/collections/system_administrators/advanced/img/polka_web_cancel_contracts.jpg differ diff --git a/collections/system_administrators/advanced/img/swap_to_stellar.png b/collections/system_administrators/advanced/img/swap_to_stellar.png new 
file mode 100644 index 0000000..0473664 Binary files /dev/null and b/collections/system_administrators/advanced/img/swap_to_stellar.png differ diff --git a/collections/system_administrators/advanced/ipfs/ipfs_fullvm.md b/collections/system_administrators/advanced/ipfs/ipfs_fullvm.md new file mode 100644 index 0000000..e61a173 --- /dev/null +++ b/collections/system_administrators/advanced/ipfs/ipfs_fullvm.md @@ -0,0 +1,190 @@ +

IPFS on a Full VM

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Deploy a Full VM](#deploy-a-full-vm) +- [Create a Root-Access User](#create-a-root-access-user) +- [Set a Firewall](#set-a-firewall) + - [Additional Ports](#additional-ports) +- [Install IPFS](#install-ipfs) +- [Set IPFS](#set-ipfs) +- [Final Verification](#final-verification) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this ThreeFold guide, we explore how to set up an IPFS node on a Full VM using the ThreeFold Playground. + +## Deploy a Full VM + +We start by deploying a full VM on the ThreeFold Playground. + +* Go to the [Threefold Playground](https://playground.grid.tf/#/) +* Deploy a full VM (Ubuntu 20.04) with an IPv4 address and at least the minimum specs + * IPv4 Address + * Minimum vcores: 1vcore + * Minimum RAM: 1024MB + * Minimum storage: 50GB +* After deployment, note the VM IPv4 address +* Connect to the VM via SSH + * ``` + ssh root@VM_IPv4_address + ``` + +## Create a Root-Access User + +We create a root-access user. Note that this step is optional. + +* Once connected, create a new user with root access (for this guide we use "newuser") + * ``` + adduser newuser + ``` + * You should now see the new user directory + * ``` + ls /home + ``` + * Give sudo privileges to the new user + * ``` + usermod -aG sudo newuser + ``` + * Switch to the new user + * ``` + su - newuser + ``` + * Create a directory to store the public key + * ``` + mkdir ~/.ssh + ``` + * Give read, write and execute permissions for the directory to the new user + * ``` + chmod 700 ~/.ssh + ``` + * Add your SSH public key to the file **authorized_keys** and save it + * ``` + nano ~/.ssh/authorized_keys + ``` +* Exit the VM + * ``` + exit + ``` +* Reconnect with the new user + * ``` + ssh newuser@VM_IPv4_address + ``` + +## Set a Firewall + +We set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. 
As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw). +For our security rules, we want to allow SSH (port 22) and the IPFS swarm port (4001). +We thus add the following rules: +* Allow SSH (port 22) + * ``` + sudo ufw allow ssh + ``` +* Allow port 4001 + * ``` + sudo ufw allow 4001 + ``` +* To enable the firewall, write the following: + * ``` + sudo ufw enable + ``` +* To see the current security rules, write the following: + * ``` + sudo ufw status verbose + ``` +You have now enabled the firewall with proper security rules for your IPFS deployment. + +### Additional Ports + +We covered the basic firewall ports for your IPFS instance. More advanced configurations are possible. +If you want to access your IPFS node remotely, you can allow **port 5001**. This will allow anyone to access your IPFS node. Make sure that you know what you are doing if you go this route. You should, for example, restrict which external IP addresses can access port 5001. +If you want to run your deployment as a gateway node, you should allow **port 8080**. Read the IPFS documentation for more information on this. +If you want to run pubsub capabilities, you need to allow **port 8081**. For more information, read the [IPFS documentation](https://blog.ipfs.tech/25-pubsub/). + +## Install IPFS + +We install the [IPFS Kubo binary](https://docs.ipfs.tech/install/command-line/#install-official-binary-distributions). +* Download the binary + * ``` + wget https://dist.ipfs.tech/kubo/v0.24.0/kubo_v0.24.0_linux-amd64.tar.gz + ``` +* Unzip the file + * ``` + tar -xvzf kubo_v0.24.0_linux-amd64.tar.gz + ``` +* Change directory + * ``` + cd kubo + ``` +* Run the install script + * ``` + sudo bash install.sh + ``` +* Verify that IPFS Kubo is properly installed + * ``` + ipfs --version + ``` + +## Set IPFS + +We initialize IPFS and run the IPFS daemon. 
+ +* Initialize IPFS + * ``` + ipfs init --profile server + ``` +* Increase the storage capacity (optional) + * ``` + ipfs config Datastore.StorageMax 30GB + ``` +* Run the IPFS daemon to check that everything works, then stop it with `Ctrl-C` + * ``` + ipfs daemon + ``` +* Set an Ubuntu systemd service to keep the IPFS daemon running after exiting the VM + * ``` + sudo nano /etc/systemd/system/ipfs.service + ``` +* Enter the systemd info + * ``` + [Unit] + Description=IPFS Daemon + [Service] + Type=simple + ExecStart=/usr/local/bin/ipfs daemon --enable-gc + User=newuser + Group=newuser + Restart=always + Environment="IPFS_PATH=/home/newuser/.ipfs" + [Install] + WantedBy=multi-user.target + ``` +* Reload systemd, then enable and start the service + * ``` + sudo systemctl daemon-reload + sudo systemctl enable ipfs + sudo systemctl start ipfs + ``` +* Verify that the IPFS daemon is properly running + * ``` + sudo systemctl status ipfs + ``` +## Final Verification +As a final verification, we reboot the VM, reconnect, and check that IPFS is running properly. +* Reboot the VM + * ``` + sudo reboot + ``` +* Reconnect to the VM + * ``` + ssh newuser@VM_IPv4_address + ``` +* Check that the IPFS daemon is running + * ``` + ipfs swarm peers + ``` +## Questions and Feedback +If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel. \ No newline at end of file diff --git a/collections/system_administrators/advanced/ipfs/ipfs_microvm.md b/collections/system_administrators/advanced/ipfs/ipfs_microvm.md new file mode 100644 index 0000000..2f58f16 --- /dev/null +++ b/collections/system_administrators/advanced/ipfs/ipfs_microvm.md @@ -0,0 +1,167 @@ +

IPFS on a Micro VM

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Deploy a Micro VM](#deploy-a-micro-vm) +- [Install the Prerequisites](#install-the-prerequisites) +- [Set a Firewall](#set-a-firewall) + - [Additional Ports](#additional-ports) +- [Install IPFS](#install-ipfs) +- [Set IPFS](#set-ipfs) +- [Set IPFS with zinit](#set-ipfs-with-zinit) +- [Final Verification](#final-verification) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this ThreeFold guide, we explore how to set up an IPFS node on a micro VM using the ThreeFold Playground. + +## Deploy a Micro VM + +We start by deploying a micro VM on the ThreeFold Playground. + +* Go to the [Threefold Playground](https://playground.grid.tf/#/) +* Deploy a micro VM (Ubuntu 22.04) with an IPv4 address + * IPv4 Address + * Minimum vcores: 1vcore + * Minimum RAM: 1024MB + * Minimum storage: 50GB +* After deployment, note the VM IPv4 address +* Connect to the VM via SSH + * ``` + ssh root@VM_IPv4_address + ``` + +## Install the Prerequisites + +We install the prerequisites before installing and setting up IPFS. + +* Update Ubuntu + * ``` + apt update + ``` +* Install nano and ufw + * ``` + apt install nano ufw -y + ``` + +## Set a Firewall + +We set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw). + +For our security rules, we want to allow SSH (port 22) and the IPFS swarm port (4001). + +We thus add the following rules: + +* Allow SSH (port 22) + * ``` + ufw allow ssh + ``` +* Allow port 4001 + * ``` + ufw allow 4001 + ``` +* To enable the firewall, write the following: + * ``` + ufw enable + ``` + +* To see the current security rules, write the following: + * ``` + ufw status verbose + ``` + +You have enabled the firewall with proper security rules for your IPFS deployment. 
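The firewall steps above can also be grouped into a single run, which is convenient when preparing more than one VM; `--force` is only used here to skip the interactive confirmation prompt:

```shell
#!/bin/sh
# One-shot ufw setup for the IPFS micro VM (same rules as above).
ufw allow ssh        # port 22, so the current SSH session is not cut off
ufw allow 4001       # IPFS swarm port
ufw --force enable   # skip the interactive "y/n" confirmation
ufw status verbose
```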
+ +### Additional Ports + +We provided the basic firewall ports for your IPFS instance. There are other more advanced configurations possible. + +If you want to access your IPFS node remotely, you can allow **port 5001**. This will allow anyone to access your IPFS node. Make sure that you know what you are doing if you go this route. You should, for example, restrict which external IP address can access port 5001. + +If you want to run your deployment as a gateway node, you should allow **port 8080**. Read the IPFS documentation for more information on this. + +If you want to run pubsub capabilities, you need to allow **port 8081**. For more information, read the [IPFS documentation](https://blog.ipfs.tech/25-pubsub/). + +## Install IPFS + +We install the [IPFS Kubo binary](https://docs.ipfs.tech/install/command-line/#install-official-binary-distributions). + +* Download the binary + * ``` + wget https://dist.ipfs.tech/kubo/v0.24.0/kubo_v0.24.0_linux-amd64.tar.gz + ``` +* Unzip the file + * ``` + tar -xvzf kubo_v0.24.0_linux-amd64.tar.gz + ``` +* Change directory + * ``` + cd kubo + ``` +* Run the install script + * ``` + bash install.sh + ``` +* Verify that IPFS Kubo is properly installed + * ``` + ipfs --version + ``` + +## Set IPFS + +We initialize IPFS and run the IPFS daemon. + +* Initialize IPFS + * ``` + ipfs init --profile server + ``` +* Increase the storage capacity (optional) + * ``` + ipfs config Datastore.StorageMax 30GB + ``` +* Run the IPFS daemon + * ``` + ipfs daemon + ``` + +## Set IPFS with zinit + +We set the IPFS daemon with zinit. This will make sure that the IPFS daemon starts at each VM reboot or if it stops functioning momentarily. 
+ +* Create the yaml file + * ``` + nano /etc/zinit/ipfs.yaml + ``` +* Set the execution command + * ``` + exec: /usr/local/bin/ipfs daemon + ``` +* Run the IPFS daemon with the zinit monitor command + * ``` + zinit monitor ipfs + ``` +* Verify that the IPFS daemon is running + * ``` + ipfs swarm peers + ``` + +## Final Verification + +We reboot and reconnect to the VM and verify that IPFS is properly running as a final verification. + +* Reboot the VM + * ``` + reboot -f + ``` +* Reconnect to the VM and verify that the IPFS daemon is running + * ``` + ipfs swarm peers + ``` + +## Questions and Feedback + +If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel. \ No newline at end of file diff --git a/collections/system_administrators/advanced/ipfs/ipfs_toc.md b/collections/system_administrators/advanced/ipfs/ipfs_toc.md new file mode 100644 index 0000000..15d9c4a --- /dev/null +++ b/collections/system_administrators/advanced/ipfs/ipfs_toc.md @@ -0,0 +1,6 @@ +

IPFS and ThreeFold

+ +

Table of Contents

+ +- [IPFS on a Full VM](./ipfs_fullvm.md) +- [IPFS on a Micro VM](./ipfs_microvm.md) \ No newline at end of file diff --git a/collections/system_administrators/advanced/list_public_ips.md b/collections/system_administrators/advanced/list_public_ips.md new file mode 100644 index 0000000..80fb72a --- /dev/null +++ b/collections/system_administrators/advanced/list_public_ips.md @@ -0,0 +1,22 @@ +

Listing Public IPs

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) + +*** + +## Introduction + +Listing public IPs can be done by querying GraphQL for all IPs that have `contractId = 0`, i.e. IPs that are not reserved by any contract. + +## Example + +```graphql +query MyQuery { + publicIps(where: {contractId_eq: 0}) { + ip + } +} +``` diff --git a/collections/system_administrators/advanced/minio_helm3.md b/collections/system_administrators/advanced/minio_helm3.md new file mode 100644 index 0000000..471043e --- /dev/null +++ b/collections/system_administrators/advanced/minio_helm3.md @@ -0,0 +1,112 @@ +

MinIO Operator with Helm 3

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Create an SSH Tunnel](#create-an-ssh-tunnel) +- [Set the VM](#set-the-vm) +- [Set MinIO](#set-minio) +- [Access the MinIO Operator](#access-the-minio-operator) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +We show how to deploy a Kubernetes cluster and set up a [MinIO](https://min.io/) Operator with [Helm 3](https://helm.sh/). + +MinIO is a high-performance, S3-compatible object store. It is built for +large-scale AI/ML, data lake and database workloads. Helm is a package manager for Kubernetes that allows developers and operators to more easily package, configure, and deploy applications and services onto Kubernetes clusters. + +## Prerequisites + +- TFChain account with TFT +- [Deploy Kubernetes cluster with one master and one worker (IPv4)](../../dashboard/solutions/k8s.md) +- [Make sure you can connect via SSH on the terminal](../../system_administrators/getstarted/ssh_guide/ssh_openssh.md) + +## Create an SSH Tunnel + +To access the MinIO Operator, we need to create an SSH tunnel on port 9090. + +- Open a terminal and create an SSH tunnel + ``` + ssh -4 -L 9090:127.0.0.1:9090 root@ + ``` + +Simply leave this window open and follow the next steps. + +## Set the VM + +We set up the master VM to access the MinIO Operator. + +- Install the prerequisites: + ``` + apt update + apt install git -y + apt install wget -y + apt install jq -y + ``` +- Install Helm + ``` + wget https://get.helm.sh/helm-v3.14.3-linux-amd64.tar.gz + tar -xvf helm-v3.14.3-linux-amd64.tar.gz + mv linux-amd64/helm /usr/local/bin/helm + ``` +- Install yq + ``` + wget https://github.com/mikefarah/yq/releases/download/v4.43.1/yq_linux_amd64.tar.gz + tar -xvf yq_linux_amd64.tar.gz + mv yq_linux_amd64 /usr/bin/yq + ``` + +## Set MinIO + +We can then set up the MinIO Operator. 
For this step, we mainly follow the MinIO documentation [here](https://min.io/docs/minio/kubernetes/upstream/operations/install-deploy-manage/deploy-operator-helm.html). + +- Add the MinIO repo + ``` + helm repo add minio-operator https://operator.min.io + ``` +- Validate the MinIO repo content + ``` + helm search repo minio-operator + ``` +- Install the operator + ``` + helm install \ + --namespace minio-operator \ + --create-namespace \ + operator minio-operator/operator + ``` +- Verify the operator installation + ``` + kubectl get all -n minio-operator + ``` + +## Access the MinIO Operator + +You can then access the MinIO Operator in your local browser (port 9090): + +``` +localhost:9090 +``` + +To log in to the MinIO Operator, you will need to enter a token. To see the token, run the following line: + +``` +kubectl get secret/console-sa-secret -n minio-operator -o json | jq -r ".data.token" | base64 -d +``` + +Enter the token on the login page: + +![minio_1](./img/minio_1.png) + +You then have access to the MinIO Operator: + +![minio_2](./img/minio_2.png) + + +## Questions and Feedback + +If you have any questions, feel free to ask for help on the [ThreeFold Forum](https://forum.threefold.io/). \ No newline at end of file diff --git a/collections/system_administrators/advanced/token_transfer_keygenerator.md b/collections/system_administrators/advanced/token_transfer_keygenerator.md new file mode 100644 index 0000000..38c9ce1 --- /dev/null +++ b/collections/system_administrators/advanced/token_transfer_keygenerator.md @@ -0,0 +1,88 @@ + 

Transfer TFT Between Networks by Using the Keygenerator

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) + - [Keypair](#keypair) + - [Stellar to TFChain](#stellar-to-tfchain) + - [Alternative Transfer to TF Chain](#alternative-transfer-to-tf-chain) +- [TFChain to Stellar](#tfchain-to-stellar) + +*** + +## Introduction + +With this method, transfers are only possible between accounts that you own and that were generated in the same manner. Please find the keygen tooling for it below. + +## Prerequisites + +### Keypair + +- ed25519 keypair +- Go installed on your local computer + +Create a keypair with the following tool: + +```sh +go build . +./keygen +``` + +### Stellar to TFChain + +Create a Stellar wallet from the key that you generated. +Transfer the TFT from your wallet to the bridge address. A deposit fee of 1 TFT will be charged, so make sure to send an amount larger than 1 TFT. + +Bridge addresses: + +- On Mainnet: `GBNOTAYUMXVO5QDYWYO2SOCOYIJ3XFIP65GKOQN7H65ZZSO6BK4SLWSC` on [Stellar Mainnet](https://stellar.expert/explorer/public). +- On Testnet: `GA2CWNBUHX7NZ3B5GR4I23FMU7VY5RPA77IUJTIXTTTGKYSKDSV6LUA4` on [Stellar Testnet](https://stellar.expert/explorer/testnet) + +The deposited amount minus the 1 TFT fee will be transferred over the bridge to the TFChain account. + +The effect will be the following: + +- Transferred TFTs from Stellar will be sent to a Stellar vault account representing all tokens on TFChain +- TFTs will be minted on the TFChain for the transferred amount + +### Alternative Transfer to TF Chain + +We also enabled deposits to TF Grid objects. The following objects can be deposited to: + +- Twin +- Farm +- Node +- Entity + +To deposit to any of these objects, a memo text in the format `object_objectID` must be passed on the deposit to the bridge wallet. Example: `twin_1`. + +To deposit to a TF Grid object, this object **must** exist. If the object is not found on chain, a refund is issued. + +## TFChain to Stellar + +Create a TFChain account from the key that you generated 
(the TF Chain raw seed). +Browse to: + +- For Mainnet: +- For Testnet: +- For Devnet: https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.dev.grid.tf#/accounts + +-> Add Account -> Click on mnemonic and select `Raw Seed` -> Paste the raw TF Chain seed. + +Select `Advanced creation options` -> Change `keypair crypto type` to `Edwards (ed25519)`. Click `I have saved my mnemonic seed safely` and proceed. + +Choose a name and password and proceed. + +Browse to the [extrinsics](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.test.grid.tf#/extrinsics), select the module `tftBridgeModule` and the extrinsic `swap_to_stellar`. Provide the target Stellar address and the amount to transfer. Sign using your password. +Again, a withdrawal fee of 1 TFT will be charged, so make sure to send an amount larger than 1 TFT. + +The amount withdrawn from TFChain will be sent to your Stellar wallet. + +Behind the scenes, the following will happen: + +- The transferred TFTs will be sent from the Stellar vault account to the user's Stellar account +- TFTs will be burned on the TFChain for the transferred amount + +Example: ![swap_to_stellar](img/swap_to_stellar.png ':size=400') diff --git a/collections/system_administrators/computer_it_basics/cli_scripts_basics.md b/collections/system_administrators/computer_it_basics/cli_scripts_basics.md new file mode 100644 index 0000000..dec08b2 --- /dev/null +++ b/collections/system_administrators/computer_it_basics/cli_scripts_basics.md @@ -0,0 +1,1138 @@ + 

CLI and Scripts Basics

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Basic Commands](#basic-commands) + - [Update and upgrade packages](#update-and-upgrade-packages) + - [Test the network connectivity of a domain or an IP address with ping](#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) + - [Install Go](#install-go) + - [Install Brew](#install-brew) + - [Brew basic commands](#brew-basic-commands) + - [Install Terraform with Brew](#install-terraform-with-brew) + - [Yarn basic commands](#yarn-basic-commands) + - [Set default terminal](#set-default-terminal) + - [See the current path](#see-the-current-path) + - [List hidden files](#list-hidden-files) + - [Display the content of a directory](#display-the-content-of-a-directory) + - [Vim modes and basic commands](#vim-modes-and-basic-commands) + - [Check the listening ports using netstat](#check-the-listening-ports-using-netstat) + - [See the disk usage of different folders](#see-the-disk-usage-of-different-folders) + - [Verify the application version](#verify-the-application-version) + - [Find the path of a file with only the file name](#find-the-path-of-a-file-with-only-the-file-name) + - [Become the superuser (su) on Linux](#become-the-superuser-su-on-linux) + - [Exit a session](#exit-a-session) + - [Know the current user](#know-the-current-user) + - [Set the path of a package](#set-the-path-of-a-package) + - [See the current path](#see-the-current-path-1) + - [Find the current shell](#find-the-current-shell) + - [SSH into Remote Server](#ssh-into-remote-server) + - [Replace a string by another string in a text file](#replace-a-string-by-another-string-in-a-text-file) + - [Replace extensions of files in a folder](#replace-extensions-of-files-in-a-folder) + - [Remove extension of files in a folder](#remove-extension-of-files-in-a-folder) + - [See the current date and time on Linux](#see-the-current-date-and-time-on-linux) + - [Special variables in Bash Shell](#special-variables-in-bash-shell) + - [Gather DNS information of 
a website](#gather-dns-information-of-a-website) + - [Partition and mount a disk](#partition-and-mount-a-disk) +- [Encryption](#encryption) + - [Encrypt files with Gocryptfs](#encrypt-files-with-gocryptfs) + - [Encrypt files with Veracrypt](#encrypt-files-with-veracrypt) +- [Network-related Commands](#network-related-commands) + - [See the network connections and ports](#see-the-network-connections-and-ports) + - [See identity and info of IP address](#see-identity-and-info-of-ip-address) + - [ip basic commands](#ip-basic-commands) + - [Display socket statistics](#display-socket-statistics) + - [Query or control network driver and hardware settings](#query-or-control-network-driver-and-hardware-settings) + - [See if ethernet port is active](#see-if-ethernet-port-is-active) + - [Add IP address to hardware port (ethernet)](#add-ip-address-to-hardware-port-ethernet) + - [Private IP address range](#private-ip-address-range) + - [Set IP Address manually](#set-ip-address-manually) +- [Basic Scripts](#basic-scripts) + - [Run a script with arguments](#run-a-script-with-arguments) + - [Print all arguments](#print-all-arguments) + - [Iterate over arguments](#iterate-over-arguments) + - [Count lines in files given as arguments](#count-lines-in-files-given-as-arguments) + - [Find path of a file](#find-path-of-a-file) + - [Print how many arguments are passed in a script](#print-how-many-arguments-are-passed-in-a-script) +- [Linux](#linux) + - [Install Terraform](#install-terraform) +- [MAC](#mac) + - [Enable remote login on MAC](#enable-remote-login-on-mac) + - [Find Other storage on MAC](#find-other-storage-on-mac) + - [Sort files by size and extension on MAC](#sort-files-by-size-and-extension-on-mac) +- [Windows](#windows) + - [Install Chocolatey](#install-chocolatey) + - [Install Terraform with Chocolatey](#install-terraform-with-chocolatey) + - [Find the product key](#find-the-product-key) + - [Find Windows license type](#find-windows-license-type) +- 
[References](#references) + +*** + +## Introduction + +We present here a quick guide on different command-line interface (CLI) commands as well as some basic scripts. + +The main goal of this guide is to demonstrate that having some core understanding of CLI and scripts can drastically increase efficiency and speed when it comes to deploying and managing workloads on the TFGrid. + +## Basic Commands + +### Update and upgrade packages + +The command **update** ensures that you have access to the latest versions of packages available. + +``` +sudo apt update +``` + +The command **upgrade** downloads and installs the updates for each outdated package and dependency on your system. + +``` +sudo apt upgrade +``` + + + +### Test the network connectivity of a domain or an IP address with ping + +To test the network connectivity of a domain or an IP address, you can use `ping` on Linux, MAC and Windows: + +* Template + ``` + ping + ``` +* Example + ``` + ping threefold.io + ``` + +On Windows, by default, the command will send 4 packets. On MAC and Linux, it will keep on sending packets, so you will need to press `Ctrl-C` to stop the command from running. + +You can also set the number of packets sent with `-c` on Linux and MAC and `-n` on Windows. + +* Send a given number of packets on Linux and MAC (e.g. 5 packets) + ``` + ping -c 5 threefold.io + ``` +* Send a given number of packets on Windows (e.g. 5 packets) + ``` + ping -n 5 threefold.io + ``` + +*** + +### Install Go + +Here are the steps to install [Go](https://go.dev/). + +* Install Go + * ``` + sudo apt install golang-go + ``` +* Verify that Go is properly installed + * ``` + go version + ``` + + + +### Install Brew + +Follow these steps to install [Brew](https://brew.sh/): + +* Installation command from Brew: + * ``` + /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" + ``` +* Add Brew to your **.profile** file. Replace by your username.
+ * ``` + echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> /home//.profile + ``` +* Evaluate the following: + * ``` + eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)" + ``` +* Verify the installation + * ``` + brew doctor + ``` + + + +### Brew basic commands + +* To update brew in general: + * ``` + brew update + ``` +* To upgrade a specific package: + * ``` + brew upgrade + ``` +* To install a package: + * ``` + brew install + ``` +* To uninstall a package: + * ``` + brew uninstall + ``` +* To search a package: + * ``` + brew search + ``` +* [Uninstall Brew](https://github.com/homebrew/install#uninstall-homebrew) + * ``` + /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/uninstall.sh)" + ``` + + + +### Install Terraform with Brew + +Installing Terraform with Brew is simple; just follow the [Terraform documentation](https://developer.hashicorp.com/terraform/downloads). + +* Compile HashiCorp software on Homebrew's infrastructure + * ``` + brew tap hashicorp/tap + ``` +* Install Terraform + * ``` + brew install hashicorp/tap/terraform + ``` + + + +### Yarn basic commands + +* Add a package + * ``` + yarn add + ``` +* Initialize the development of a package + * ``` + yarn init + ``` +* Install all the dependencies in the **package.json** file + * ``` + yarn install + ``` +* Publish a package to a package manager + * ``` + yarn publish + ``` +* Remove an unused package from the current package + * ``` + yarn remove + ``` +* Clean the cache + * ``` + yarn cache clean + ``` + + + +### Set default terminal + +``` +update-alternatives --config x-terminal-emulator +``` + +### See the current path + +``` +pwd +``` + + + +### List hidden files + +``` +ls -ld .?* +``` + + + +### Display the content of a directory + +You can use **tree** to display the files and organization of a directory: + +* General command + * ``` + tree + ``` +* View hidden files + * ``` + tree -a + ``` + + + +### Vim modes and basic commands + 
+[Vim](https://www.vim.org/) is a free and open-source, screen-based text editor program. + +With Vim, you can use two modes. + +* Insert mode - normal text editor + * Press **i** +* Command mode - commands to the editor + * Press **ESC** + +Here are some basic commands: + +* Delete characters + * **x** +* Undo last command + * **u** +* Undo the whole line + * **U** +* Go to the end of line + * **A** +* Save and exit + * **:wq** +* Discard all changes + * **:q!** +* Move cursor to the start of the line + * **0** +* Delete the current word + * **dw** +* Delete the current line + * **dd** + + + +### Check the listening ports using netstat + +Use the command: + +``` +netstat +``` + + + + +### See the disk usage of different folders + +``` +du -sh * +``` + + + + +### Verify the application version + +``` +which +``` + + + +### Find the path of a file with only the file name + +On MAC and Linux, you can use **coreutils** and **realpath** from Brew: + +* ``` + brew install coreutils + ``` +* ``` + realpath file_name + ``` + + + +### Become the superuser (su) on Linux + +You can use either command: + +* Option 1 + * ``` + sudo -i + ``` +* Option 2 + * ``` + sudo -s + ``` + + + +### Exit a session + +You can use either command depending on your shell: + +* ``` + exit + ``` +* ``` + logout + ``` + + + +### Know the current user + +You can use the following command: + +* ``` + whoami + ``` + + + +### See the path of a package + +To see the path of a package, you can use the following command: + +* ``` + whereis + ``` + + + +### Set the path of a package + +``` +export PATH=$PATH:/snap/bin + +``` + + + + +### See the current path + +``` +pwd +``` + + + +### Find the current shell + +* Compact version + * ``` + echo $SHELL + ``` +* Detailed version + * ``` + ls -l /proc/$$/exe + ``` + + + +### SSH into Remote Server + +* Create SSH key pair + * ``` + ssh-keygen + ``` +* Install openssh-client on the local computer* + * ``` + sudo apt install openssh-client + ``` +* Install 
openssh-server on the remote computer* + * ``` + sudo apt install openssh-server + ``` +* Copy public key + * ``` + cat ~/.ssh/id_rsa.pub + ``` +* Create the ssh directory on the remote computer + * ``` + mkdir ~/.ssh + ``` +* Add public key in the file **authorized_keys** on the remote computer + * ``` + nano ~/.ssh/authorized_keys + ``` +* Check openssh-server status + * ``` + sudo service ssh status + ``` +* SSH into the remote machine + * ``` + ssh @ + ``` + +\*Note: For MAC, you can install **openssh-server** and **openssh-client** with Brew: **brew install openssh-server** and **brew install openssh-client**. + +To enable remote login on a MAC, [read this section](#enable-remote-login-on-mac). + + + +### Replace a string by another string in a text file + +* Replace one string by another (e.g. **old_string**, **new_string**) + * ``` + sed -i 's/old_string/new_string/g' / + ``` +* Use environment variables (double quotes) + * ``` + sed -i "s/old_string/$env_variable/g" / + ``` + + + +### Replace extensions of files in a folder + +Replace **ext1** and **ext2** by the extensions in question. + +``` +find ./ -depth -name "*.ext1" -exec sh -c 'mv "$1" "${1%.ext1}.ext2"' _ {} \; +``` + + + +### Remove extension of files in a folder + +Replace **ext** with the extension in question. + +```bash +for file in *.ext; do + mv -- "$file" "${file%%.ext}" +done +``` + + + +### See the current date and time on Linux + +``` +date +``` + + + +### Special variables in Bash Shell + +| Special Variables | Descriptions | +| ---------------- | ----------------------------------------------- | +| $0 | Name of the bash script | +| $1, $2...$n | Bash script arguments | +| $$ | Process id of the current shell | +| $* | String containing every command-line argument | +| $# | Total number of arguments passed to the script | +| $@ | Value of all the arguments passed to the script | +| $? | Exit status of the last executed command | +| $! 
| Process id of the last executed command | +| $- | Print current set of option in current shell | + + + +### Gather DNS information of a website + +You can use [Dig](https://man.archlinux.org/man/dig.1) to gather DNS information of a website + +* Template + * ``` + dig + ``` +* Example + * ``` + dig threefold.io + ``` + +You can also use online tools such as [DNS Checker](https://dnschecker.org/). + + + +### Partition and mount a disk + +We present one of many ways to partition and mount a disk. + +* Create partition with [gparted](https://gparted.org/) + * ``` + sudo gparted + ``` +* Find the disk you want to mount (e.g. **sdb**) + * ``` + sudo fdisk -l + ``` +* Create a directory to mount the disk to + * ``` + sudo mkdir /mnt/disk + ``` +* Open fstab + * ``` + sudo nano /etc/fstab + ``` +* Append the following to the fstab with the proper disk path (e.g. **/dev/sdb**) and mount point (e.g. **/mnt/disk**) + * ``` + /dev/sdb /mnt/disk ext4 defaults 0 0 + ``` +* Mount the disk + * ``` + sudo mount /mnt/disk + ``` +* Add permissions (as needed) + * ``` + sudo chmod -R 0777 /mnt/disk + ``` + + + +## Encryption + +### Encrypt files with Gocryptfs + +You can use [gocryptfs](https://github.com/rfjakob/gocryptfs) to encrypt files. + +* Install gocryptfs + * ``` + apt install gocryptfs + ``` +* Create a vault directory (e.g. **vaultdir**) and a mount directory (e.g. **mountdir**) + * ``` + mkdir vaultdir mountdir + ``` +* Initiate the vault + * ``` + gocryptfs -init vaultdir + ``` +* Mount the mount directory with the vault + * ``` + gocryptfs vaultdir mountdir + ``` +* You can now create files in the folder. 
For example: + * ``` + touch mountdir/test.txt + ``` +* The new file **test.txt** is now encrypted in the vault + * ``` + ls vaultdir + ``` +* To unmount the mounted vault folder: + * Option 1 + * ``` + fusermount -u mountdir + ``` + * Option 2 + * ``` + rmdir mountdir + ``` + + +### Encrypt files with Veracrypt + +To encrypt files, you can use [Veracrypt](https://www.veracrypt.fr/en/Home.html). Let's see how to download and install Veracrypt. + +* Veracrypt GUI + * Download the package + * ``` + wget https://launchpad.net/veracrypt/trunk/1.25.9/+download/veracrypt-1.25.9-Ubuntu-22.04-amd64.deb + ``` + * Install the package + * ``` + dpkg -i ./veracrypt-1.25.9-Ubuntu-22.04-amd64.deb + ``` +* Veracrypt console only + * Download the package + * ``` + wget https://launchpad.net/veracrypt/trunk/1.25.9/+download/veracrypt-console-1.25.9-Ubuntu-22.04-amd64.deb + ``` + * Install the package + * ``` + dpkg -i ./veracrypt-console-1.25.9-Ubuntu-22.04-amd64.deb + ``` + +You can visit the [Veracrypt download page](https://www.veracrypt.fr/en/Downloads.html) to get the newest releases. + +* To run Veracrypt + * ``` + veracrypt + ``` +* The Veracrypt documentation is very thorough. To begin using the application, visit the [Beginner's Tutorial](https://www.veracrypt.fr/en/Beginner%27s%20Tutorial.html). 
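Unmounting with **fusermount** fails if the directory is not actually mounted. As a small sketch (the `safe_unmount` helper name is our own, and only the util-linux **mountpoint** tool is assumed), you can guard the unmount:

```shell
#!/bin/bash
# Unmount a gocryptfs mount directory only if it is really mounted;
# otherwise just report that nothing needs to be done.
safe_unmount() {
    if mountpoint -q "$1"; then
        fusermount -u "$1"
    else
        echo "$1 is not mounted"
    fi
}

safe_unmount mountdir
```

If `mountdir` is not mounted, this simply prints `mountdir is not mounted` instead of a fusermount error.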
+ + + +## Network-related Commands + +### See the network connections and ports + +``` +ifconfig +``` + + + +### See identity and info of IP address + +* See abuses related to an IP address: + * ``` + https://www.abuseipdb.com/check/ + ``` +* See general information of an IP address: + * ``` + https://www.whois.com/whois/ + ``` + + + +### ip basic commands + +* Manage and display the state of all network interfaces + * ``` + ip link + ``` +* Display IP Addresses and property information (abbreviation of address) + * ``` + ip addr + ``` +* Display and alter the routing table + * ``` + ip route + ``` +* Manage and display multicast IP addresses + * ``` + ip maddr + ``` +* Show neighbour objects + * ``` + ip neigh + ``` +* Display a list of commands and arguments for +each subcommand + * ``` + ip help + ``` +* Add an address + * Template + * ``` + ip addr add + ``` + * Example: set IP address on device **enp0** + * ``` + ip addr add 192.168.3.4/24 dev enp0 + ``` +* Delete an address + * Template + * ``` + ip addr del + ``` + * Example: delete the IP address from device **enp0** + * ``` + ip addr del 192.168.3.4/24 dev enp0 + ``` +* Alter the status of an interface + * Template + * ``` + ip link set + ``` + * Example 1: Bring interface online (here device **em2**) + * ``` + ip link set em2 up + ``` + * Example 2: Bring interface offline (here device **em2**) + * ``` + ip link set em2 down + ``` +* Add a multicast address + * Template + * ``` + ip maddr add + ``` + * Example: set multicast address on device **em2** + * ``` + ip maddr add 33:32:00:00:00:01 dev em2 + ``` +* Delete a multicast address + * Template + * ``` + ip maddr del + ``` + * Example: delete the multicast address from device **em2** + * ``` + ip maddr del 33:32:00:00:00:01 dev em2 + ``` +* Add a routing table entry + * Template + * ``` + ip route add + ``` + * Example 1: Add a default route (for all addresses) via a local gateway + * ``` + ip route add default via 192.168.1.1 dev em1 + ``` + * Example 2: Add a route to 192.168.3.0/24 via the gateway at
192.168.3.2 + * ``` + ip route add 192.168.3.0/24 via 192.168.3.2 + ``` + * Example 3: Add a route to 192.168.1.0/24 that can be reached on +device em1 + * ``` + ip route add 192.168.1.0/24 dev em1 + ``` +* Delete a routing table entry + * Template + * ``` + ip route delete + ``` + * Example: Delete the route for 192.168.1.0/24 via the gateway at +192.168.1.1 + * ``` + ip route delete 192.168.1.0/24 via 192.168.1.1 + ``` +* Replace, or add, a route + * Template + * ``` + ip route replace + ``` + * Example: Replace the defined route for 192.168.1.0/24 to use +device em1 + * ``` + ip route replace 192.168.1.0/24 dev em1 + ``` +* Display the route an address will take + * Template + * ``` + ip route get + ``` + * Example: Display the route taken for IP 192.168.18.25 + * ``` + ip route get 192.168.18.25 + ``` + + + +References: https://www.commandlinux.com/man-page/man8/ip.8.html + + + +### Display socket statistics + +* Show all sockets + * ``` + ss -a + ``` +* Show detailed socket information + * ``` + ss -e + ``` +* Show timer information + * ``` + ss -o + ``` +* Do not resolve addresses + * ``` + ss -n + ``` +* Show the process using the socket + * ``` + ss -p + ``` + +Note: You can combine parameters, e.g. **ss -aeo**. + +References: https://www.commandlinux.com/man-page/man8/ss.8.html + + + +### Query or control network driver and hardware settings + +* Display ring buffer for a device (e.g. **eth0**) + * ``` + ethtool -g eth0 + ``` +* Display driver information for a device (e.g. **eth0**) + * ``` + ethtool -i eth0 + ``` +* Identify eth0 by sight, e.g. by causing LEDs to blink on the network port + * ``` + ethtool -p eth0 + ``` +* Display network and driver statistics for a device (e.g. 
**eth0**) + * ``` + ethtool -S eth0 + ``` + +References: https://man.archlinux.org/man/ethtool.8.en + + + +### See if ethernet port is active + +Replace with the proper device: + +``` +cat /sys/class/net//carrier +``` + + + +### Add IP address to hardware port (ethernet) + +* Find ethernet port ID on both computers + * ``` + ip a + ``` +* Add IP address (DHCP or static) + * Computer 1 + * ``` + ip addr add /24 dev + ``` + * Computer 2 + * ``` + ip addr add /24 dev + ``` + +* [Ping](#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the address to confirm the connection + * ``` + ping + ``` + +To set and view the address for either DHCP or static, go to **Networks** then **Details**. + + + +### Private IP address range + +The private IP ranges are the following: + +* 10.0.0.0–10.255.255.255 +* 172.16.0.0–172.31.255.255 +* 192.168.0.0–192.168.255.255 + + + +### Set IP Address manually + +You can use the following template when you set an IP address manually: + +* Address + * +* Netmask + * 255.255.255.0 +* Gateway + * optional + + + +## Basic Scripts + +### Run a script with arguments + +You can use the following template to add arguments when running a script: + +* Option 1 + * ``` + ./example_script.sh arg1 arg2 + ``` +* Option 2 + * ``` + sh example_script.sh "arg1" "arg2" + ``` + +### Print all arguments + +* Write a script + * File: `example_script.sh` + * ```bash + #!/bin/sh + echo $@ + ``` +* Give permissions + * ```bash + chmod +x ./example_script.sh + ``` +* Run the script with arguments + * ```bash + sh example_script.sh arg1 arg2 + ``` + + +### Iterate over arguments + +* Write the script + * ```bash + #!/bin/bash + # iterate_script.sh + for i; do + echo $i + done + ``` +* Give permissions + * ``` + chmod +x ./iterate_script.sh + ``` +* Run the script with arguments + * ``` + sh iterate_script.sh arg1 arg2 + ``` + +* The following script is equivalent + * ```bash + #!/bin/bash + # iterate_script.sh + for i in $*; do + echo $i + done + ``` + + 
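Note that the last two scripts are only equivalent for simple arguments: once an argument contains spaces, the quoted forms `"$@"` and `"$*"` behave differently. A minimal sketch (the `show_args` function name is just for illustration):

```shell
#!/bin/bash
# "$@" expands to each argument as a separate word;
# "$*" joins all arguments into a single word.
show_args() {
    for i in "$@"; do echo "at: [$i]"; done
    for i in "$*"; do echo "star: [$i]"; done
}

show_args "arg one" arg2
```

Running it prints `at: [arg one]` and `at: [arg2]`, but a single `star: [arg one arg2]`, which is why `"$@"` is usually what you want when looping over arguments.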
+ +### Count lines in files given as arguments + +* Write the script + * ```bash + # count_lines.sh + #!/bin/bash + for i in $*; do + nlines=$(wc -l < $i) + echo "There are $nlines lines in $i" + done + ``` +* Give permissions + * ``` + chmod +x ./count_lines.sh + ``` +* Run the script with arguments (files). Here we use the script itself as an example. + * ``` + sh count_lines.sh count_lines.sh + ``` + + + +### Find path of a file + +* Write the script + * ```bash + # find.sh + #!/bin/bash + + find / -iname $1 2> /dev/null + ``` +* Run the script + * ``` + sh find.sh + ``` + + + +### Print how many arguments are passed in a script + +* Write the script + * ```bash + # print_qty_args.sh + #!/bin/bash + echo This script was passed $# arguments + ``` +* Run the script + * ``` + sh print_qty_args.sh + ``` + + +## Linux + +### Install Terraform + +Here are the steps to install Terraform on Linux based on the [Terraform documentation](https://developer.hashicorp.com/terraform/downloads). + +``` +wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg +``` +``` +echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list +``` +``` +sudo apt update && sudo apt install terraform +``` + +Note that the Terraform documentation also covers other methods to install Terraform on Linux. + +## MAC + +### Enable remote login on MAC + +* Option 1: + * Use the following command line: + * ``` + systemsetup -setremotelogin on + ``` +* Option 2 + * Use **System Preferences** + * Go to **System Preferences** -> **Sharing** -> **Enable Remote Login**. + + + +### Find Other storage on MAC + +* Open **Finder** \> **Go** \> **Go to Folder** +* Paste this path + * ``` + ~/Library/Caches + ``` + + + +### Sort files by size and extension on MAC + +* From your desktop, press **Command-F**. 
+* Click **This Mac**. +* Click the first dropdown menu field and select **Other**. +* From the **Search Attributes** window + * tick **File Size** and **File Extension**. + + + +## Windows + +### Install Chocolatey + +To install Chocolatey on Windows, we follow the [official Chocolatey website](https://chocolatey.org/install) instructions. + +* Run PowerShell as Administrator +* Check if **Get-ExecutionPolicy** is restricted + * ``` + Get-ExecutionPolicy + ``` + * If it is restricted, run the following command: + * ``` + Set-ExecutionPolicy AllSigned + ``` +* Install Chocolatey + * ``` + Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1')) + ``` +* Note: You might need to restart PowerShell to use Chocolatey + + + +### Install Terraform with Chocolatey + +Once you've installed Chocolatey on Windows, installing Terraform is as simple as can be: + +* Install Terraform with Chocolatey + * ``` + choco install terraform + ``` + + + +### Find the product key + +Write the following in **Command Prompt** (run as administrator): + +``` +wmic path SoftwareLicensingService get OA3xOriginalProductKey +``` + + + +### Find Windows license type + +Write the following in **Command Prompt**: + +``` +slmgr /dli +``` + + + +## References + +* GNU Bash Manual - https://www.gnu.org/software/bash/manual/bash.html \ No newline at end of file diff --git a/collections/system_administrators/computer_it_basics/computer_it_basics.md b/collections/system_administrators/computer_it_basics/computer_it_basics.md new file mode 100644 index 0000000..c28712c --- /dev/null +++ b/collections/system_administrators/computer_it_basics/computer_it_basics.md @@ -0,0 +1,16 @@ +
# Computer and IT Basics
+ +Welcome to the *Computer and IT Basics* section of the ThreeFold Manual! + +In this section, tailored specifically for system administrators, we'll delve into fundamental concepts and tools that form the backbone of managing and securing infrastructure. Whether you're a seasoned sysadmin or just starting your journey, these basics are essential for navigating the intricacies of the ThreeFold Grid. + +
## Table of Contents
+ +- [CLI and Scripts Basics](./cli_scripts_basics.md) +- [Docker Basics](./docker_basics.md) +- [Git and GitHub Basics](./git_github_basics.md) +- [Firewall Basics](./firewall_basics/firewall_basics.md) + - [UFW Basics](./firewall_basics/ufw_basics.md) + - [Firewalld Basics](./firewall_basics/firewalld_basics.md) +- [File Transfer](./file_transfer.md) +- [Screenshots](./screenshots.md) \ No newline at end of file diff --git a/collections/system_administrators/computer_it_basics/docker_basics.md b/collections/system_administrators/computer_it_basics/docker_basics.md new file mode 100644 index 0000000..5a7f297 --- /dev/null +++ b/collections/system_administrators/computer_it_basics/docker_basics.md @@ -0,0 +1,458 @@ +
# Docker Basic Commands
+ +
## Table of Contents
+ +- [Introduction](#introduction) +- [Basic Commands](#basic-commands) + - [Install Docker Desktop and Docker Engine](#install-docker-desktop-and-docker-engine) + - [Remove completely Docker](#remove-completely-docker) + - [List containers](#list-containers) + - [Pull an image](#pull-an-image) + - [Push an image](#push-an-image) + - [Inspect and pull an image with GHCR](#inspect-and-pull-an-image-with-ghcr) + - [See a docker image (no download)](#see-a-docker-image-no-download) + - [Build a container](#build-a-container) + - [List all available docker images](#list-all-available-docker-images) + - [Run a container](#run-a-container) + - [Run a new command in an existing container](#run-a-new-command-in-an-existing-container) + - [Bash shell into container](#bash-shell-into-container) + - [Pass arguments with a bash script and a Dockerfile](#pass-arguments-with-a-bash-script-and-a-dockerfile) + - [Copy files from a container to the local computer](#copy-files-from-a-container-to-the-local-computer) + - [Delete all the containers, images and volumes](#delete-all-the-containers-images-and-volumes) + - [Kill all the Docker processes](#kill-all-the-docker-processes) + - [Output full logs for all containers](#output-full-logs-for-all-containers) +- [Resources Usage](#resources-usage) + - [Examine containers with size](#examine-containers-with-size) + - [Examine disks usage](#examine-disks-usage) +- [Wasted Resources](#wasted-resources) + - [Prune the Docker logs](#prune-the-docker-logs) + - [Prune the Docker containers](#prune-the-docker-containers) + - [Remove unused and untagged local container images](#remove-unused-and-untagged-local-container-images) + - [Clean up and delete all unused container images](#clean-up-and-delete-all-unused-container-images) + - [Clean up container images based on a given timeframe](#clean-up-container-images-based-on-a-given-timeframe) +- [Command Combinations](#command-combinations) + - [Kill all running 
containers](#kill-all-running-containers) + - [Stop all running containers](#stop-all-running-containers) + - [Delete all stopped containers](#delete-all-stopped-containers) + - [Delete all images](#delete-all-images) + - [Update and stop a container in a crash-loop](#update-and-stop-a-container-in-a-crash-loop) +- [References](#references) + +*** + +## Introduction + +We present here a quick introduction to Docker. We cover basic commands, as well as command combinations. Understanding the following should give system administrators confidence when it comes to using Docker efficiently. + +The following can serve as a quick reference guide when deploying workloads on the ThreeFold Grid and using Docker in general. + +We invite the readers to consult the [official Docker documentation](https://docs.docker.com/) for more information. + + + +## Basic Commands + +### Install Docker Desktop and Docker Engine + +You can install [Docker Desktop](https://docs.docker.com/get-docker/) and [Docker Engine](https://docs.docker.com/engine/install/) for Linux, MAC and Windows. Follow the official Docker documentation for the details. + +Note that the quickest way to install Docker Engine is to use the convenience script: + +``` +curl -fsSL https://get.docker.com -o get-docker.sh +sudo sh get-docker.sh +``` + + + +### Remove completely Docker + +To completely remove docker from your machine, you can follow these steps: + +* List the docker packages + * ``` + dpkg -l | grep -i docker + ``` +* Purge and autoremove docker + * ``` + apt-get purge -y docker-engine docker docker.io docker-ce docker-ce-cli docker-compose-plugin + apt-get autoremove -y --purge docker-engine docker docker.io docker-ce docker-compose-plugin + ``` +* Remove the docker files and folders + * ``` + rm -rf /var/lib/docker /etc/docker + rm /etc/apparmor.d/docker + groupdel docker + rm -rf /var/run/docker.sock + ``` + +You can also use the command **whereis docker** to see if any Docker folders and files remain. 
If so, remove them as well. + + + +### List containers + +* List only running containers + * ``` + docker ps + ``` +* List all containers (running + stopped) + * ``` + docker ps -a + ``` + + + +### Pull an image + +To pull an image from [Docker Hub](https://hub.docker.com/): + +* Pull an image + * ``` + docker pull + ``` +* Pull an image with the tag + * ``` + docker pull :tag + ``` +* Pull all tags of an image + * ``` + docker pull -a + ``` + + + +### Push an image + +To push an image to [Docker Hub](https://hub.docker.com/): + +* Push an image + * ``` + docker push + ``` +* Push an image with the tag + * ``` + docker push :tag + ``` +* Push all tags of an image + * ``` + docker push -a + ``` + + + +### Inspect and pull an image with GHCR + +* Inspect the docker image + * ``` + docker inspect ghcr.io//: + ``` +* Pull the docker image + * ``` + docker pull ghcr.io//: + ``` + + + +### See a docker image (no download) + +If you want to see a docker image without downloading the image itself, you can use Quay's [Skopeo tool](https://github.com/containers/skopeo), a command line utility that performs various operations on container images and image repositories. + +``` +docker run --rm quay.io/skopeo/stable list-tags docker://ghcr.io// +``` + +Make sure to write the proper information for the repository and the image. + +To install Skopeo, read [this documentation](https://github.com/containers/skopeo/blob/main/install.md). + + + + +### Build a container + +Use **docker build** to build a container based on a Dockerfile. + +* Build a container based on the current directory Dockerfile + * ``` + docker build . 
+ ``` +* Build a container and store the image with a given name + * Template + * ``` + docker build -t ":" + ``` + * Example + * ``` + docker build -t newimage:latest + ``` +* Build a docker container without using the cache + * ``` + docker build --no-cache + ``` + + + +### List all available docker images + +``` +docker images +``` + + + +### Run a container + +To run a container based on an image, use the command **docker run**. + +* Run an image + * ``` + docker run + ``` +* Run an image in the background (run and detach) + * ``` + docker run -d + ``` +* Run an image with CLI input + * ``` + docker run -it + ``` + +You can combine arguments, e.g. **docker run -itd**. + +You can also specify the shell, e.g. **docker run -it /bin/bash**. + + + +### Run a new command in an existing container + +To run a new command in an existing container, use **docker exec**. + +* Execute an interactive shell on the container + * ``` + docker exec -it sh + ``` + + + +### Bash shell into container + +* Bash shell into a container + * ``` + docker exec -i -t /bin/bash + ``` +* Bash shell into a container with root + * ``` + docker exec -i -t -u root /bin/bash + ``` + +Note: if bash is not available, you can use `/bin/sh`. + + + +### Pass arguments with a bash script and a Dockerfile + +You can do the following to pass arguments with a bash script and a Dockerfile. + +* File `script_example.sh` + +```sh +#!/bin/sh +# script_example.sh + +echo This is the domain: $env_domain +echo This is the name: $env_name +echo This is the password: $env_password + +``` +* File `Dockerfile` + +```Dockerfile +FROM ubuntu:latest + +ARG domain + +ARG name + +ARG password + +ENV env_domain $domain + +ENV env_name $name + +ENV env_password $password + +COPY script_example.sh . 
RUN chmod +x /script_example.sh

CMD ["/script_example.sh"]
```

### Copy files from a container to the local computer

```
docker cp <container_id>:<path_in_container> <local_path>
```

### Delete all the containers, images and volumes

* To delete all containers:
  * ```
    docker compose rm -f -s -v
    ```
* To delete all images:
  * ```
    docker rmi -f $(docker images -aq)
    ```
* To delete all volumes:
  * ```
    docker volume rm $(docker volume ls -qf dangling=true)
    ```
* To delete all containers, images and volumes:
  * ```
    docker compose rm -f -s -v && docker rmi -f $(docker images -aq) && docker volume rm $(docker volume ls -qf dangling=true)
    ```

### Kill all the Docker processes

* To kill all processes:
  * ```
    killall Docker && open /Applications/Docker.app
    ```

### Output full logs for all containers

The following command outputs the full logs of all containers to the file **containers.log**:

```
docker compose logs > containers.log
```

## Resources Usage

### Examine containers with size

```
docker ps -s
```

### Examine disk usage

* Basic mode
  * ```
    docker system df
    ```
* Verbose mode
  * ```
    docker system df -v
    ```

## Wasted Resources

### Prune unused Docker data

```
docker system prune
```

### Prune the Docker containers

You can use the prune function to delete all stopped containers:

```
docker container prune
```

### Remove unused and untagged local container images

The following is useful if you want to clean up the local filesystem:

```
docker image prune
```

### Clean up and delete all unused container images

```
docker image prune -a
```

### Clean up container images based on a given timeframe

To clean up container images created more than a given number of hours ago, you can use the following template (replace `<hours>` with a number):

```
docker image prune -a --force --filter "until=<hours>h"
```

To clean up container images created before a given date, you can use the following template (replace `<date>` with the complete date):

```
docker image prune -a --force --filter "until=<date>"
```

Note: An example of a complete date would be `2023-01-04T00:00:00`

## Command Combinations

### Kill all running containers

```
docker kill $(docker ps -q)
```

### Stop all running containers

```
docker stop $(docker ps -a -q)
```

### Delete all stopped containers

```
docker rm $(docker ps -a -q)
```

### Delete all images

```
docker rmi $(docker images -q)
```

### Update and stop a container in a crash-loop

```
docker update --restart=no <container_id> && docker stop <container_id>
```

## References

* Docker Manual - https://docs.docker.com/
* Code Notary - https://codenotary.com/blog/extremely-useful-docker-commands
\ No newline at end of file
diff --git a/collections/system_administrators/computer_it_basics/file_transfer.md b/collections/system_administrators/computer_it_basics/file_transfer.md
new file mode 100644
index 0000000..41718dc
--- /dev/null
+++ b/collections/system_administrators/computer_it_basics/file_transfer.md
@@ -0,0 +1,271 @@

# File Transfer

+ +

## Table of Contents

- [Introduction](#introduction)
- [SCP](#scp)
  - [File transfer with IPv4](#file-transfer-with-ipv4)
  - [File transfer with IPv6](#file-transfer-with-ipv6)
- [Rsync](#rsync)
  - [File transfer](#file-transfer)
  - [Adjust reorganization of files and folders before running rsync](#adjust-reorganization-of-files-and-folders-before-running-rsync)
  - [Automate backup with rsync](#automate-backup-with-rsync)
  - [Parameters --checksum and --ignore-times with rsync](#parameters---checksum-and---ignore-times-with-rsync)
  - [Trailing slashes with rsync](#trailing-slashes-with-rsync)
- [SFTP](#sftp)
  - [SFTP on the Terminal](#sftp-on-the-terminal)
  - [SFTP Basic Commands](#sftp-basic-commands)
  - [SFTP File Transfer](#sftp-file-transfer)
- [SFTP with FileZilla](#sftp-with-filezilla)
  - [Install FileZilla](#install-filezilla)
  - [Add a Private Key](#add-a-private-key)
  - [FileZilla SFTP Connection](#filezilla-sftp-connection)
- [Questions and Feedback](#questions-and-feedback)

***

## Introduction

Deploying on the TFGrid with tools such as the Playground and Terraform is easy, and it is also possible to quickly transfer files between your local machine and VMs deployed on 3Nodes on the TFGrid. In this section, we cover different ways to transfer files between local and remote machines.

## SCP

### File transfer with IPv4

* From local to remote, write the following on the local terminal:
  * ```
    scp <path>/<filename> <user>@<remote_IP>:<path>/<filename>
    ```
* From remote to local, you can write the following on the local terminal (more secure):
  * ```
    scp <user>@<remote_IP>:<path>/<filename> <path>/<filename>
    ```
* From remote to local, you can also write the following on the remote terminal:
  * ```
    scp <path>/<filename> <user>@<local_IP>:<path>/<filename>
    ```

### File transfer with IPv6

For IPv6, it is similar to IPv4, but you need to add `-6` after scp and add `\[` before and `\]` after the IPv6 address.
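The IPv4/IPv6 distinction above can be wrapped in a small helper script. The sketch below is illustrative only — the `build_scp_cmd` function and the sample addresses are our own invention, not part of any ThreeFold tooling — but it shows how the `-6` flag and brackets come into play:

```shell
#!/bin/sh
# Illustrative helper: print the scp command for an IPv4 or IPv6 target.
# IPv6 addresses contain ":", so they get the -6 flag and square brackets.
build_scp_cmd() {
  ip="$1"; src="$2"; dest="$3"
  case "$ip" in
    *:*) echo "scp -6 $src root@[$ip]:$dest" ;;  # IPv6
    *)   echo "scp $src root@$ip:$dest" ;;       # IPv4
  esac
}

build_scp_cmd "185.206.122.31" "./file.txt" "/root/"
# → scp ./file.txt root@185.206.122.31:/root/
build_scp_cmd "2a02:1802:5e::100" "./file.txt" "/root/"
# → scp -6 ./file.txt root@[2a02:1802:5e::100]:/root/
```

Note that inside a quoted shell string the brackets do not need to be escaped; the `\[` and `\]` escapes are only needed when typing the address directly on the command line.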
## Rsync

### File transfer

[rsync](https://rsync.samba.org/) is a utility for efficiently transferring and synchronizing files between a computer and a storage drive, and across networked computers, by comparing the modification times and sizes of files.

We show here how to transfer files between two computers. Note that at least one of the two computers must be local. This will transfer the content of the source directory into the destination directory.

* From local to remote
  * ```
    rsync -avz --progress --delete /path/to/local/directory/ remote_user@<remote_IP>:/path/to/remote/directory
    ```
* From remote to local
  * ```
    rsync -avz --progress --delete remote_user@<remote_IP>:/path/to/remote/directory/ /path/to/local/directory
    ```

Here is a short description of the parameters used:

* **-a**: archive mode, preserving the attributes of the files and directories
* **-v**: verbose mode, displaying the progress of the transfer
* **-z**: compress mode, compressing the data before transferring
* **--progress**: tells rsync to print information showing the progress of the transfer
* **--delete**: tells rsync to delete files that aren't on the sending side

### Adjust reorganization of files and folders before running rsync

[rsync-sidekick](https://github.com/m-manu/rsync-sidekick) propagates changes from the source directory to the destination directory. You can run rsync-sidekick before running rsync. Make sure that Go is installed on your machine.
* Install rsync-sidekick
  * ```
    sudo go install github.com/m-manu/rsync-sidekick@latest
    ```
* Reorganize the files and folders with rsync-sidekick
  * ```
    rsync-sidekick /path/to/local/directory/ username@IP_Address:/path/to/remote/directory
    ```
* Transfer and update files and folders with rsync
  * ```
    sudo rsync -avz --progress --delete --log-file=/path/to/local/directory/rsync_storage.log /path/to/local/directory/ username@IP_Address:/path/to/remote/directory
    ```

### Automate backup with rsync

We show how to automate file transfers between two computers using rsync.

* Create the script file
  * ```
    nano rsync_backup.sh
    ```
* Write the following script with the proper paths. Note that the shebang (`#!/bin/bash`) must be the first line of the script. Here the log is saved in the same directory.
  * ```
    #!/bin/bash
    # filename: rsync_backup.sh

    sudo rsync -avz --progress --delete --log-file=/path/to/local/directory/rsync_storage.log /path/to/local/directory/ username@IP_Address:/path/to/remote/directory
    ```
* Give permission
  * ```
    sudo chmod +x /path/to/script/rsync_backup.sh
    ```
* Set a cron job to run the script periodically
  * Copy your .sh file to **/root**:
    ```
    sudo cp path/to/script/rsync_backup.sh /root
    ```
* Open the cron file
  * ```
    sudo crontab -e
    ```
* Add the following to run the script every day. For this example, we set the time at 18:00.
  * ```
    0 18 * * * /root/rsync_backup.sh
    ```

### Parameters --checksum and --ignore-times with rsync

Depending on your situation, the parameters **--checksum** or **--ignore-times** can be quite useful. Note that adding either parameter will slow the transfer.

* With **--ignore-times**, you ignore both the modification time and the size of each file. This means that you transfer all files from source to destination.
  * ```
    rsync --ignore-times source_folder/ destination_folder
    ```
* With **--checksum**, you verify with a checksum that the files from source and destination are the same.
This means that you transfer all files that have a different checksum between source and destination.
  * ```
    rsync --checksum source_folder/ destination_folder
    ```

### Trailing slashes with rsync

rsync does not behave the same way depending on whether or not the source path ends with a trailing slash ("/").

* Copy the content of **source_folder** into **destination_folder** to obtain the result: **destination_folder/source_folder_content**
  * ```
    rsync source_folder/ destination_folder
    ```
* Copy **source_folder** into **destination_folder** to obtain the result: **destination_folder/source_folder/source_folder_content**
  * ```
    rsync source_folder destination_folder
    ```

## SFTP

### SFTP on the Terminal

Using SFTP for file transfer on the terminal is very quick since the SSH connection is already enabled by default when deploying workloads on the TFGrid.

If you can use the following command to connect to a VM on the TFGrid:

```
ssh root@VM_IP
```

then it means you can use SFTP to access the same VM:

```
sftp root@VM_IP
```

Once connected to the server via SFTP, you can list all the available commands with `help` or `?`:

```
help
```

### SFTP Basic Commands

Here are some common commands for SFTP.
| Command                     | Function                            |
| --------------------------- | ----------------------------------- |
| bye                         | Quit sftp                           |
| cd path                     | Change remote directory to 'path'   |
| help                        | Display this help text              |
| pwd                         | Display remote working directory    |
| lpwd                        | Print local working directory       |
| ls [-1afhlnrSt] [path]      | Display remote directory listing    |
| mkdir path                  | Create remote directory             |
| put [-afpR] local [remote]  | Upload file                         |
| get [-afpR] remote [local]  | Download file                       |
| quit                        | Quit sftp                           |
| rm path                     | Delete remote file                  |
| rmdir path                  | Remove remote directory             |
| version                     | Show SFTP version                   |
| !command                    | Execute 'command' in local shell    |

### SFTP File Transfer

Using SFTP to transfer a file from the local machine to the remote VM is as simple as the following line:

```
put /local/path/file
```

This will transfer the file to the current user's home directory on the remote VM.

To transfer the file to a given directory, use the following:

```
put /local/path/file /remote/path/
```

To transfer a file from the remote VM to the local machine, you can use the command `get`:

```
get /remote/path/file /local/path
```

To transfer (`get` or `put`) all the files within a directory, use the `-r` argument, as shown in the following example:

```
get -r /remote/path/to/directory /local/path
```

## SFTP with FileZilla

[FileZilla](https://filezilla-project.org/) is a free and open-source, cross-platform FTP application, consisting of FileZilla Client and FileZilla Server.

It is possible to use FileZilla Client to transfer files between your local machine and a remote VM on the TFGrid.

Since SSH is already set up, the user basically only needs to add the private key in FileZilla and enter the VM credentials to connect using SFTP in FileZilla.

### Install FileZilla

FileZilla is available on Linux, MAC and Windows on the [FileZilla website](https://filezilla-project.org/download.php?type=client).
Simply follow the steps to properly download and install FileZilla Client.

### Add a Private Key

To prepare a connection using FileZilla, you need to add the private key of the SSH key pair.

Simply add your private key file (usually `id_rsa`) in the **SFTP** settings.

- Open FileZilla Client
- Go to **Edit** -> **Settings** -> **Connection** -> **SFTP**
- Then click on **Add key file...**
  - Search for the `id_rsa` file, usually located at `~/.ssh/id_rsa`
- Click on **OK**

### FileZilla SFTP Connection

You can set a connection between your local machine and a remote VM on a 3Node with FileZilla by using **root** as **Username** and the VM IP address as **Host**.

- Enter the credentials
  - Host
    - `VM_IP_Address`
  - Username
    - `root`
  - Password
    - As set by the user. Can be empty.
  - Port
    - `22`
- Click on **Quickconnect**

You can now transfer files between the local machine and the remote VM with FileZilla.

## Questions and Feedback

If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram.
\ No newline at end of file
diff --git a/collections/system_administrators/computer_it_basics/firewall_basics/firewall_basics.md b/collections/system_administrators/computer_it_basics/firewall_basics/firewall_basics.md
new file mode 100644
index 0000000..9b45f1b
--- /dev/null
+++ b/collections/system_administrators/computer_it_basics/firewall_basics/firewall_basics.md
@@ -0,0 +1,8 @@

# Firewall Basics

In this section, we cover basic information on firewall usage on Linux. Most notably, we give basic commands and information on UFW and Firewalld.

## Table of Contents

+ +- [UFW Basics](./ufw_basics.md) +- [Firewalld Basics](./firewalld_basics.md) \ No newline at end of file diff --git a/collections/system_administrators/computer_it_basics/firewall_basics/firewalld_basics.md b/collections/system_administrators/computer_it_basics/firewall_basics/firewalld_basics.md new file mode 100644 index 0000000..53c12cd --- /dev/null +++ b/collections/system_administrators/computer_it_basics/firewall_basics/firewalld_basics.md @@ -0,0 +1,149 @@ +

# Firewalld Basic Commands

+ +

## Table of Contents

- [Introduction](#introduction)
- [Firewalld Basic Commands](#firewalld-basic-commands)
  - [Install Firewalld](#install-firewalld)
  - [See the Status of Firewalld](#see-the-status-of-firewalld)
  - [Enable Firewalld](#enable-firewalld)
  - [Stop Firewalld](#stop-firewalld)
  - [Start Firewalld](#start-firewalld)
  - [Disable Firewalld](#disable-firewalld)
  - [Mask Firewalld](#mask-firewalld)
  - [Unmask Firewalld](#unmask-firewalld)
  - [Add a Service to Firewalld](#add-a-service-to-firewalld)
  - [Remove a Service from Firewalld](#remove-a-service-from-firewalld)
  - [Remove the Files of a Service from Firewalld](#remove-the-files-of-a-service-from-firewalld)
  - [See if a Service is Available](#see-if-a-service-is-available)
  - [Reload Firewalld](#reload-firewalld)
  - [Display the Services and the Open Ports for the Public Zone](#display-the-services-and-the-open-ports-for-the-public-zone)
  - [Display the Open Ports by Services and Port Numbers](#display-the-open-ports-by-services-and-port-numbers)
  - [Add a Port for tcp](#add-a-port-for-tcp)
  - [Add a Port for udp](#add-a-port-for-udp)
  - [Add a Port for tcp and udp](#add-a-port-for-tcp-and-udp)
- [References](#references)

## Introduction

We present a quick introduction to [Firewalld](https://firewalld.org/), a free and open-source firewall management tool for Linux operating systems. This guide can be useful for users of the TFGrid deploying on full and micro VMs, as well as for other types of deployment.
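The individual commands are presented in the next section. As a preview of how they combine, here is an illustrative sketch of a script that opens a list of ports. The port list and the `valid_port` helper are our own, and the actual `firewall-cmd` calls are commented out so that the sketch dry-runs safely on any machine; uncomment them on a server where Firewalld is installed.

```shell
#!/bin/sh
# Illustrative sketch: open a list of tcp ports with firewalld.
# The firewall-cmd lines are commented out so this is a safe dry run.
valid_port() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;   # reject empty or non-numeric values
  esac
  [ "$1" -ge 1 ] && [ "$1" -le 65535 ]
}

for port in 22 80 443; do
  valid_port "$port" || continue
  echo "opening $port/tcp"
  # firewall-cmd --zone=public --add-port="$port"/tcp --permanent
done
# firewall-cmd --reload
```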
## Firewalld Basic Commands

### Install Firewalld

* ```
  apt install firewalld -y
  ```

### See the Status of Firewalld

* ```
  firewall-cmd --state
  ```

### Enable Firewalld

* ```
  systemctl enable firewalld
  ```

### Stop Firewalld

* ```
  systemctl stop firewalld
  ```

### Start Firewalld

* ```
  systemctl start firewalld
  ```

### Disable Firewalld

* ```
  systemctl disable firewalld
  ```

### Mask Firewalld

* ```
  systemctl mask --now firewalld
  ```

### Unmask Firewalld

* ```
  systemctl unmask --now firewalld
  ```

### Add a Service to Firewalld

* Temporary
  * ```
    firewall-cmd --add-service=<service>
    ```
* Permanent
  * ```
    firewall-cmd --add-service=<service> --permanent
    ```

### Remove a Service from Firewalld

* Temporary
  * ```
    firewall-cmd --remove-service=<service>
    ```
* Permanent
  * ```
    firewall-cmd --remove-service=<service> --permanent
    ```

### Remove the Files of a Service from Firewalld

* ```
  rm -f /etc/firewalld/services/<service>.xml*
  ```

### See if a Service is Available

* ```
  firewall-cmd --info-service=<service>
  ```

### Reload Firewalld

* ```
  firewall-cmd --reload
  ```

### Display the Services and the Open Ports for the Public Zone

* ```
  firewall-cmd --list-all --zone=public
  ```

### Display the Open Ports by Services and Port Numbers

* By services
  * ```
    firewall-cmd --list-services
    ```
* By port numbers
  * ```
    firewall-cmd --list-ports
    ```

### Add a Port for tcp

* ```
  firewall-cmd --zone=public --add-port=<port>/tcp
  ```

### Add a Port for udp

* ```
  firewall-cmd --zone=public --add-port=<port>/udp
  ```

### Add a Port for tcp and udp

* ```
  firewall-cmd --zone=public --add-port=<port>/tcp --add-port=<port>/udp
  ```

## References

Firewalld man pages - https://firewalld.org/documentation/man-pages/firewalld.html
\ No newline at end of file
diff --git a/collections/system_administrators/computer_it_basics/firewall_basics/ufw_basics.md
b/collections/system_administrators/computer_it_basics/firewall_basics/ufw_basics.md new file mode 100644 index 0000000..c9e5076 --- /dev/null +++ b/collections/system_administrators/computer_it_basics/firewall_basics/ufw_basics.md @@ -0,0 +1,256 @@ + +

# Uncomplicated Firewall (ufw) Basic Commands

+ +

## Table of Contents

- [Introduction](#introduction)
- [Basic Commands](#basic-commands)
  - [Install ufw](#install-ufw)
  - [Enable ufw](#enable-ufw)
  - [Disable ufw](#disable-ufw)
  - [Reset ufw](#reset-ufw)
  - [Reload ufw](#reload-ufw)
  - [Deny Incoming Connections](#deny-incoming-connections)
  - [Allow Outgoing Connections](#allow-outgoing-connections)
  - [Allow a Specific IP address](#allow-a-specific-ip-address)
  - [Allow a Specific IP Address to a Given Port](#allow-a-specific-ip-address-to-a-given-port)
  - [Allow a Port for tcp](#allow-a-port-for-tcp)
  - [Allow a Port for udp](#allow-a-port-for-udp)
  - [Allow a Port for tcp and udp](#allow-a-port-for-tcp-and-udp)
  - [Allow Ports: Examples](#allow-ports-examples)
  - [Allow Port Ranges](#allow-port-ranges)
  - [Allow a Subnet](#allow-a-subnet)
  - [Allow a Subnet to a Given Port](#allow-a-subnet-to-a-given-port)
  - [Deny a Port](#deny-a-port)
  - [Deny an IP Address](#deny-an-ip-address)
  - [Deny a Subnet](#deny-a-subnet)
  - [Block Incoming Connections to a Network Interface](#block-incoming-connections-to-a-network-interface)
  - [Check Rules](#check-rules)
  - [Check Rules (Numbered)](#check-rules-numbered)
  - [Delete a Rule with Number](#delete-a-rule-with-number)
  - [Delete a Rule with the Rule Name and Parameters](#delete-a-rule-with-the-rule-name-and-parameters)
  - [List the Available Profiles](#list-the-available-profiles)
  - [Get App Info](#get-app-info)
  - [Set ufw in Verbose Mode](#set-ufw-in-verbose-mode)
  - [Allow a Specific App](#allow-a-specific-app)
- [References](#references)

## Introduction

We present a quick introduction to [Uncomplicated Firewall (ufw)](https://wiki.ubuntu.com/UncomplicatedFirewall), a free and open-source firewall management tool for Linux operating systems. This guide can be useful for users of the TFGrid deploying on full and micro VMs, as well as for other types of deployment.

## Basic Commands

We show here basic commands to set up a firewall on Linux with Uncomplicated Firewall (ufw).
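As a preview of how the commands below fit together, here is an illustrative baseline: deny incoming traffic, allow outgoing traffic, open SSH and enable the firewall. The `apply_ufw_baseline` function and the `RUN=echo` dry-run switch are our own conventions, not ufw features; set `RUN=` (empty) on a real server where ufw is installed to actually apply the rules.

```shell
#!/bin/sh
# Illustrative ufw baseline. RUN=echo prints each command instead of
# executing it; set RUN= (empty) to apply the rules for real.
RUN="echo"

apply_ufw_baseline() {
  $RUN ufw default deny incoming
  $RUN ufw default allow outgoing
  $RUN ufw allow 22/tcp        # keep SSH reachable BEFORE enabling the firewall
  $RUN ufw --force enable      # --force skips the interactive prompt
}

apply_ufw_baseline
```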
### Install ufw

* Update
  * ```
    apt update
    ```
* Install ufw
  * ```
    apt install ufw
    ```

### Enable ufw

* ```
  ufw enable
  ```

### Disable ufw

* ```
  ufw disable
  ```

### Reset ufw

* ```
  ufw reset
  ```

### Reload ufw

* ```
  ufw reload
  ```

### Deny Incoming Connections

* ```
  ufw default deny incoming
  ```

### Allow Outgoing Connections

* ```
  ufw default allow outgoing
  ```

### Allow a Specific IP address

* ```
  ufw allow from <ip_address>
  ```

### Allow a Specific IP Address to a Given Port

* ```
  ufw allow from <ip_address> to any port <port>
  ```

### Allow a Port for tcp

* ```
  ufw allow <port>/tcp
  ```

### Allow a Port for udp

* ```
  ufw allow <port>/udp
  ```

### Allow a Port for tcp and udp

* ```
  ufw allow <port>
  ```

### Allow Ports: Examples

Here are some typical examples of ports to allow with ufw:

* Allow SSH (port 22)
  * ```
    ufw allow ssh
    ```
* Allow HTTP (port 80)
  * ```
    ufw allow http
    ```
* Allow HTTPS (port 443)
  * ```
    ufw allow https
    ```
* Allow mysql (port 3306)
  * ```
    ufw allow 3306
    ```

### Allow Port Ranges

Note that you must specify the protocol (`tcp` or `udp`) when allowing a port range.

* Template
  * ```
    ufw allow <start_port>:<end_port>/tcp
    ```
* Example
  * ```
    ufw allow 6000:6005/tcp
    ```

### Allow a Subnet

* ```
  ufw allow from <subnet>
  ```

### Allow a Subnet to a Given Port

* ```
  ufw allow from <subnet> to any port <port>
  ```

### Deny a Port

* ```
  ufw deny <port>
  ```

### Deny an IP Address

* ```
  ufw deny <ip_address>
  ```

### Deny a Subnet

* ```
  ufw deny from <subnet>
  ```

### Block Incoming Connections to a Network Interface

* ```
  ufw deny in on <interface> from <ip_address>
  ```

### Check Rules

Use **status** to check the current firewall configurations. Add **verbose** for more details.

* ```
  ufw status
  ```
* ```
  ufw status verbose
  ```

### Check Rules (Numbered)

It can be useful to see the numbering of the rules, for example to remove a rule more easily.
* ```
  ufw status numbered
  ```

### Delete a Rule with Number

Once you know the number of a rule, you can delete it with the following command:

* ```
  ufw delete <rule_number>
  ```

### Delete a Rule with the Rule Name and Parameters

You can also delete a rule by writing directly the rule name you used to add the rule.

* Template
  * ```
    ufw delete <rule>
    ```
* Example
  * ```
    ufw delete allow ssh
    ```
  * ```
    ufw delete allow 22
    ```

You can always check the current rules with **ufw status** to see if the rules are properly removed.

### List the Available Profiles

* ```
  ufw app list
  ```

This command will give you the names of the application profiles present on the server. You can then use **ufw app info** to get information on an app, or allow an app with **ufw allow**.

### Get App Info

* ```
  ufw app info <app_name>
  ```

### Set ufw in Verbose Mode

To display detailed information on the current rules and policies, use the status command in verbose mode:

* ```
  ufw status verbose
  ```

### Allow a Specific App

* Template
  * ```
    ufw allow "<app_name>"
    ```
* Example
  * ```
    ufw allow "NGINX Full"
    ```

## References

ufw man pages - https://manpages.ubuntu.com/manpages/trusty/man8/ufw.8.html
\ No newline at end of file
diff --git a/collections/system_administrators/computer_it_basics/git_github_basics.md b/collections/system_administrators/computer_it_basics/git_github_basics.md
new file mode 100644
index 0000000..dca25e4
--- /dev/null
+++ b/collections/system_administrators/computer_it_basics/git_github_basics.md
@@ -0,0 +1,450 @@

# Git and GitHub Basics

+ +

## Table of Contents

- [Introduction](#introduction)
- [Install Git](#install-git)
  - [Install on Linux](#install-on-linux)
  - [Install on MAC](#install-on-mac)
  - [Install on Windows](#install-on-windows)
- [Basic Commands](#basic-commands)
  - [Check Git version](#check-git-version)
  - [Clone a repository](#clone-a-repository)
  - [Clone a single branch](#clone-a-single-branch)
  - [Check all available branches](#check-all-available-branches)
  - [Check the current branch](#check-the-current-branch)
  - [Go to another branch](#go-to-another-branch)
  - [Add your changes to a local branch](#add-your-changes-to-a-local-branch)
  - [Push changes of a local branch to the remote Github branch](#push-changes-of-a-local-branch-to-the-remote-github-branch)
  - [Count the differences between two branches](#count-the-differences-between-two-branches)
  - [See the default branch](#see-the-default-branch)
  - [Force a push](#force-a-push)
  - [Merge a branch to a different branch](#merge-a-branch-to-a-different-branch)
  - [Clone completely one branch to another branch locally then push the changes to Github](#clone-completely-one-branch-to-another-branch-locally-then-push-the-changes-to-github)
  - [The 3 levels of the command reset](#the-3-levels-of-the-command-reset)
  - [Reverse modifications to a file where changes haven't been staged yet](#reverse-modifications-to-a-file-where-changes-havent-been-staged-yet)
  - [Download binaries from Github](#download-binaries-from-github)
  - [Resolve conflicts between branches](#resolve-conflicts-between-branches)
  - [Download all repositories of an organization](#download-all-repositories-of-an-organization)
  - [Revert a push commited with git](#revert-a-push-commited-with-git)
  - [Make a backup of a branch](#make-a-backup-of-a-branch)
  - [Revert to a backup branch](#revert-to-a-backup-branch)
  - [Start over local branch and pull remote branch](#start-over-local-branch-and-pull-remote-branch)
  - [Overwrite local files and pull remote branch](#overwrite-local-files-and-pull-remote-branch)
  - [Stash command and parameters](#stash-command-and-parameters)
- [Code Editors](#code-editors)
  - [VS-Code](#vs-code)
  - [VS-Codium](#vs-codium)
- [References](#references)

***

## Introduction

In this section, we cover basic commands and aspects of [GitHub](https://github.com/) and [Git](https://git-scm.com/).
Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

GitHub is a platform and cloud-based service for software development and version control using Git, allowing developers to store and manage their code.

## Install Git

You can install git on MAC, Windows and Linux. You can consult Git's documentation to learn how to [install git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).

### Install on Linux

* Fedora distribution
  * ```
    dnf install git-all
    ```
* Debian-based distribution
  * ```
    apt install git-all
    ```
* Click [here](https://git-scm.com/download/linux) for other Linux distributions

### Install on MAC

* With Homebrew
  * ```
    brew install git
    ```

### Install on Windows

You can download Git for Windows at [this link](https://git-scm.com/download/win).

## Basic Commands

### Check Git version

```
git --version
```

### Clone a repository

```
git clone <repository_url>
```

### Clone a single branch

```
git clone --branch <branch_name> --single-branch <repository_url>
```

### Check all available branches

```
git branch -r
```

### Check the current branch

```
git branch
```

### Go to another branch

```
git checkout <branch_name>
```

### Add your changes to a local branch

* Add all changes
  * ```
    git add .
    ```
* Add changes of a specific file
  * ```
    git add <path>/<file>
    ```

### Push changes of a local branch to the remote Github branch

To push changes to Github, you can use the following commands:

* ```
  git add .
  ```
* ```
  git commit -m "write your changes here in comment"
  ```
* ```
  git push
  ```

### Count the differences between two branches

Replace **branch1** and **branch2** with the appropriate branch names.
```
git rev-list --count branch1..branch2
```

### See the default branch

```
git symbolic-ref refs/remotes/origin/HEAD | sed 's@^refs/remotes/origin/@@'
```

### Force a push

```
git push --force
```

### Merge a branch to a different branch

* Checkout the branch you want to copy content TO
  * ```
    git checkout branch_name
    ```
* Merge the branch you want content FROM
  * ```
    git merge origin/dev_mermaid
    ```
* Push the changes
  * ```
    git push -u origin HEAD
    ```

### Clone completely one branch to another branch locally then push the changes to Github

For this example, we copy **branchB** into **branchA**.

* See available branches
  * ```
    git branch -r
    ```
* Go to **branchA**
  * ```
    git checkout branchA
    ```
* Copy **branchB** into **branchA**
  * ```
    git reset --hard branchB
    ```
* Force the push
  * ```
    git push --force
    ```

### The 3 levels of the command reset

* ```
  git reset --soft
  ```
  * Brings the History to the Stage/Index
  * Discards the last commit
* ```
  git reset --mixed
  ```
  * Brings the History to the Working Directory
  * Discards the last commit and the staging (git add)
* ```
  git reset --hard
  ```
  * Brings the History to the Working Directory
  * Discards the last commit, the staging (git add) and any changes you made to the code

Note 1: If you're using **--hard**, make sure to run **git status** to verify that your directory is clean, otherwise you will lose your uncommitted changes.

Note 2: The argument **--mixed** is the default option, so **git reset** is equivalent to **git reset --mixed**.
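The three levels can be observed in a throwaway repository. The following sketch is safe to run: it operates entirely inside a temporary directory created with `mktemp` and touches nothing else.

```shell
#!/bin/sh
# Demo of git reset levels in a throwaway repo (safe: uses a temp dir).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "first"
echo "change" > file.txt
git add file.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "second"

git reset --soft HEAD~1   # discard the commit, keep file.txt staged
git status --short        # prints "A  file.txt"
git reset --mixed         # also unstage; the file stays in the working tree
git status --short        # prints "?? file.txt"
```

Running `git reset --hard` at this point would not delete `file.txt`, since it is now untracked; `--hard` only discards changes to tracked files.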
### Reverse modifications to a file where changes haven't been staged yet

You can use the following to reverse the modifications of a file that hasn't been staged:

```
git checkout <file>
```

### Download binaries from Github

* Template:
  * ```
    wget -O <binary_name> https://raw.githubusercontent.com/<user>/<repository>/<branch>/<path_to_binary>
    ```

### Resolve conflicts between branches

We show how to resolve conflicts in a development branch (e.g. **branch_dev**) and then merge the development branch into the main branch (e.g. **branch_main**).

* Clone the repo
  * ```
    git clone <repository_url>
    ```
* Pull changes and potential conflicts
  * ```
    git pull origin branch_main
    ```
* Checkout the development branch
  * ```
    git checkout branch_dev
    ```
* Resolve conflicts in a text editor
* Save changes in the files
* Add the changes
  * ```
    git add .
    ```
* Commit the changes
  * ```
    git commit -m "your message here"
    ```
* Push the changes
  * ```
    git push
    ```

### Download all repositories of an organization

* Log in to gh
  * ```
    gh auth login
    ```
* Clone all repositories. Replace `<organization>` with the organization in question.
  * ```
    gh repo list <organization> --limit 1000 | while read -r repo _; do
      gh repo clone "$repo" "$repo"
    done
    ```

### Revert a push commited with git

* Find the commit ID
  * ```
    git log -p
    ```
* Revert the commit
  * ```
    git revert <commit_id>
    ```
* Push the changes
  * ```
    git push
    ```

### Make a backup of a branch

```
git clone -b <branch> --single-branch https://github.com/<user>/<repository>.git
```

### Revert to a backup branch

* Checkout the branch you want to update
  * ```
    git checkout <branch>
    ```
* Do a reset of your current branch based on the backup branch
  * ```
    git reset --hard <backup_branch>
    ```

### Start over local branch and pull remote branch

To start over your local branch and pull the remote branch to your working environment, thus ignoring local changes in the branch, you can do the following:

```
git fetch
git reset --hard
git pull
```

Note that this will not work for untracked and new files. See below for untracked and new files.

### Overwrite local files and pull remote branch

This method can be used to overwrite local files. This will work even if you have untracked and new files.

* Save local changes on a stash
  * ```
    git stash --include-untracked
    ```
* Discard local changes
  * ```
    git reset --hard
    ```
* Discard untracked and new files
  * ```
    git clean -fd
    ```
* Pull the remote branch
  * ```
    git pull
    ```

Then, to delete the stash, you can use **git stash drop**.

### Stash command and parameters

The stash command is used to record the current state of the working directory.
* Stash a branch (equivalent to **git stash push**)
  * ```
    git stash
    ```
* List the changes in the stash
  * ```
    git stash list
    ```
* Inspect the changes in the stash
  * ```
    git stash show
    ```
* Remove a single stashed state from the stash list and apply it on top of the current working tree state
  * ```
    git stash pop
    ```
* Apply the stash on top of the current working tree state without removing the state from the stash list
  * ```
    git stash apply
    ```
* Drop a stash
  * ```
    git stash drop
    ```

## Code Editors

There are many code editors that can work well when working with git.

### VS-Code

[VS-Code](https://code.visualstudio.com/) is a source-code editor made by Microsoft with the Electron Framework, for Windows, Linux and macOS.

To download VS-Code, visit their website and follow the given instructions.

### VS-Codium

[VS-Codium](https://vscodium.com/) is a community-driven, freely-licensed binary distribution of Microsoft's editor VS Code.

There are many ways to install VS-Codium. Visit the [official website](https://vscodium.com/#install) for more information.

* Install on MAC
  * ```
    brew install --cask vscodium
    ```
* Install on Linux
  * ```
    snap install codium --classic
    ```
* Install on Windows
  * ```
    choco install vscodium
    ```

## References

Git Documentation - https://git-scm.com/docs/user-manual
\ No newline at end of file
diff --git a/collections/system_administrators/computer_it_basics/screenshots.md b/collections/system_administrators/computer_it_basics/screenshots.md
new file mode 100644
index 0000000..5d23b74
--- /dev/null
+++ b/collections/system_administrators/computer_it_basics/screenshots.md
@@ -0,0 +1,75 @@

# Screenshots

+ +

## Table of Contents

- [Introduction](#introduction)
- [Linux](#linux)
- [MAC](#mac)
- [Windows](#windows)

***

## Introduction

In this section, we show how to easily take screenshots on Linux, MAC and Windows.

## Linux

Note that the exact shortcuts can vary between Linux distributions and desktop environments; the following applies to GNOME-based desktops.

- Copy to the clipboard a full screenshot
```
PrintScreen
```
- Copy to the clipboard a screenshot of an active window
```
Alt + PrintScreen
```
- Copy to the clipboard a screenshot of an active app
```
Control + Alt + PrintScreen
```
- Copy to the clipboard a screenshot of a selected area
```
Shift + PrintScreen
```

## MAC

- Save to the desktop a full screenshot
```
Shift + Command (⌘) + 3
```
- Copy to the clipboard a full screenshot
```
Shift + Control + Command (⌘) + 3
```
- Save to the desktop a screenshot of an active window
```
Shift + Command (⌘) + 4 + Spacebar
```
- Save to the desktop a screenshot of a selected area
```
Shift + Command (⌘) + 4
```
- Copy to the clipboard a screenshot of a selected area
```
Shift + Control + Command (⌘) + 4
```

## Windows

- Copy to the clipboard a full screenshot
```
PrintScreen
```
- Save to the pictures directory a full screenshot
```
Windows key + PrintScreen
```
- Copy to the clipboard a screenshot of an active window
```
Alt + PrintScreen
```
- Copy to the clipboard a selected area of the screen
```
Windows key + Shift + S
```
\ No newline at end of file
diff --git a/collections/system_administrators/getstarted/TF_Connect/README.md b/collections/system_administrators/getstarted/TF_Connect/README.md
new file mode 100644
index 0000000..098eb0e
--- /dev/null
+++ b/collections/system_administrators/getstarted/TF_Connect/README.md
@@ -0,0 +1,4 @@
# ThreeFold Connect Basics Tutorial

* Create an account
* Create a wallet
\ No newline at end of file
diff --git a/collections/system_administrators/getstarted/TF_Connect/TF_Connect.md b/collections/system_administrators/getstarted/TF_Connect/TF_Connect.md
new file mode 100644 index
0000000..043d1dc --- /dev/null +++ b/collections/system_administrators/getstarted/TF_Connect/TF_Connect.md @@ -0,0 +1,148 @@ +

ThreeFold Connect: Create a ThreeFold Connect Account and Wallet

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Download the ThreeFold Connect App](#download-the-threefold-connect-app) +- [Create a ThreeFold Connect Account](#create-a-threefold-connect-account) +- [Verify Your Email](#verify-your-email) +- [Create a ThreeFold Connect Wallet](#create-a-threefold-connect-wallet) + +*** + +## Introduction + +The ThreeFold Connect app is a dynamic and essential companion for anyone seeking seamless access to the ThreeFold ecosystem on the go. Available as a free download on both iOS and Android, the TF Connect app lets users engage with the ThreeFold Grid, manage their digital assets, make secure transactions, and explore decentralized financial opportunities, all within a unified mobile experience. + +In this tutorial, we show you how to create a ThreeFold Connect account and wallet. The main steps are simple and you will be done in no time. If you have any questions, feel free to write a post on the [ThreeFold Forum](http://forum.threefold.io/). + +## Download the ThreeFold Connect App + +The ThreeFold Connect app is available for [Android](https://play.google.com/store/apps/details?id=org.jimber.threebotlogin&hl=en&gl=US) and [iOS](https://apps.apple.com/us/app/threefold-connect/id1459845885). + +- Note that for Android phones, you need at minimum Android 8.1 +- Note that for iOS phones, you need at minimum iOS 15 + +Either use the links above, or search for the ThreeFold Connect app on the App Store or the Google Play Store. Then install and open the app. If you want to leave a 5-star review of the app, no one here will stop you! + +![farming_tf_wallet_1](./img/farming_tf_wallet_1.png) +![farming_tf_wallet_2](./img/farming_tf_wallet_2.png) + +If you get an error message such as "Error in initialization in Flagsmith..." when you try to open the app, you might need to upgrade your phone to a newer software version (Android 8.1 or iOS 15). 
+ +*** + +## Create a ThreeFold Connect Account + +Once you are in the app, you will see some introduction pages to help you familiarize yourself with the TF Connect app. You will also be asked to read and accept ThreeFold's Terms and Conditions. + +![farming_tf_wallet_3](./img/farming_tf_wallet_3.png) +![farming_tf_wallet_4](./img/farming_tf_wallet_4.png) + +You will then be asked to either *SIGN UP* or *RECOVER ACCOUNT*. To create a new account, click *SIGN UP*. + +![farming_tf_wallet_5](./img/farming_tf_wallet_5.png) + +Then, choose a *ThreeFold Connect Id*. This 3bot ID will be used, along with the seed phrase, when you want to recover an account. Choose wisely, and do not forget it! Here we will use TFExample as an example. + +![farming_tf_wallet_6](./img/farming_tf_wallet_6.png) + +Next, you need to add a valid email address. You will need to access your email and confirm the ThreeFold validation email to fully use the ThreeFold Connect app. + +![farming_tf_wallet_7](./img/farming_tf_wallet_7.png) + +The next step is crucial! Make sure no one is around looking at your screen. You will be shown your seed phrase. Keep it in a secure and offline place. You will need the 3bot ID and the seed phrase to recover your account. This seed phrase is of the utmost importance. Do not lose it or give it to anyone. + +![farming_tf_wallet_8](./img/farming_tf_wallet_8.png) + +Once you've hit *Next*, you will be asked to write down 3 random words of your seed phrase. This is a necessary step to ensure you have taken the time to write down your seed phrase. + +![farming_tf_wallet_9](./img/farming_tf_wallet_9.png) + +Then, you'll be asked to confirm your TF 3bot ID and the associated email. + +![farming_tf_wallet_10](./img/farming_tf_wallet_10.png) + +Finally, you will be asked to choose a 4-digit pin. This will be needed to use the ThreeFold Connect app. If you ever forget this 4-digit pin, you will need to recover your account from your 3bot name and your seed phrase. 
You will need to confirm the new pin in the next step. + +![farming_tf_wallet_11](./img/farming_tf_wallet_11.png) + +That's it! You've created your ThreeFold Connect account. You can press the hamburger menu on the top left to explore the ThreeFold Connect app. + +![farming_tf_wallet_12](./img/farming_tf_wallet_12.png) + +In the next step, we will create a ThreeFold Connect wallet. You'll see, it's very simple! + +But first, let's see how to verify your email. + +*** + +## Verify Your Email + +Once you've created your account, an email will be sent to the email address you've chosen in the account creation process. To verify your email, go to your email inbox and open the email sent by *info@openkyc.live* with the subject *Verify your email address*. + +In this email, click on the link *Verify my email address*. This will lead you to a *login.threefold.me* link. The process should be automatic. Once this is done, you will receive a confirmation on screen, as well as on your phone. + +![farming_tf_wallet_39](./img/farming_tf_wallet_39.png) + +![farming_tf_wallet_40](./img/farming_tf_wallet_40.png) + +![farming_tf_wallet_41](./img/farming_tf_wallet_41.png) + +If, for some reason, you did not receive the verification email, simply click on *Verify* and another email will be sent. + +![farming_tf_wallet_42](./img/farming_tf_wallet_42.png) + +![farming_tf_wallet_43](./img/farming_tf_wallet_43.png) + +That's it! You've now created a ThreeFold Connect account. + +All that is left to do is to create a ThreeFold Connect wallet. This is very simple. + +Let's go! + +*** + +## Create a ThreeFold Connect Wallet + +To create a wallet, click on the ThreeFold Connect app menu, then choose *Wallet*. + +![farming_tf_wallet_13](./img/farming_tf_wallet_13.png) + +Once you are in the section *Wallet*, click on *Create Initial Wallet*. If it doesn't work the first time, retry. If you have trouble creating a wallet, make sure your connection is reliable. You can try again a few minutes later if it still doesn't work. With a reliable connection, there shouldn't be any problem. Contact TF Support if problems persist. + +![farming_tf_wallet_14](./img/farming_tf_wallet_14.png) + +This is what you see when the TF Grid is initializing your wallet. + +![farming_tf_wallet_15](./img/farming_tf_wallet_15.png) + +Once your wallet is initialized, you will see *No balance found for this wallet*. You can click on this button to enter the wallet. + +![farming_tf_wallet_16](./img/farming_tf_wallet_16.png) + +Once inside your wallet, this is what you see. + +![farming_tf_wallet_17](./img/farming_tf_wallet_17.png) + +We will now see where to find the Stellar and TFChain addresses and secrets. We will also change the wallet name. To do so, click on the *encircled i* at the bottom right of the screen. + +On this page, you can access your Stellar and TFChain addresses as well as your Stellar and TFChain secret keys. + +![farming_tf_wallet_18](./img/farming_tf_wallet_18.png) + +To change the name of your wallet, click on the button next to *Wallet Name*. Here we use TFWalletExample. Note that you can also use alphanumeric characters. + +![farming_tf_wallet_19](./img/farming_tf_wallet_19.png) + +![farming_tf_wallet_20](./img/farming_tf_wallet_20.png) + +At the top of the section *Wallet*, we can see that the name has changed. + +![farming_tf_wallet_21](./img/farming_tf_wallet_21.png) + +That's it! You now have a ThreeFold Connect account and wallet. +This will be very useful for your TFT transactions on the ThreeFold ecosystem. 
+ +*** \ No newline at end of file diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_1.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_1.png new file mode 100644 index 0000000..2e5caee Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_1.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_10.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_10.png new file mode 100644 index 0000000..cfa7edc Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_10.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_11.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_11.png new file mode 100644 index 0000000..1a0d5b9 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_11.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_12.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_12.png new file mode 100644 index 0000000..95a606e Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_12.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_13.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_13.png new file mode 100644 index 0000000..2e50989 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_13.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_14.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_14.png new file mode 100644 index 
0000000..8ed439d Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_14.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_15.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_15.png new file mode 100644 index 0000000..120be6a Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_15.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_16.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_16.png new file mode 100644 index 0000000..cbd433b Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_16.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_17.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_17.png new file mode 100644 index 0000000..51130a3 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_17.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_18.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_18.png new file mode 100644 index 0000000..8fe5d2f Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_18.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_19.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_19.png new file mode 100644 index 0000000..6c6796c Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_19.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_2.png 
b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_2.png new file mode 100644 index 0000000..93d3a01 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_2.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_20.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_20.png new file mode 100644 index 0000000..70cf1cb Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_20.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_21.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_21.png new file mode 100644 index 0000000..7f9e454 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_21.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_3.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_3.png new file mode 100644 index 0000000..0a45687 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_3.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_39.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_39.png new file mode 100644 index 0000000..0c78bc0 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_39.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_4.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_4.png new file mode 100644 index 0000000..4f86de2 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_4.png differ diff 
--git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_40.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_40.png new file mode 100644 index 0000000..6651627 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_40.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_41.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_41.png new file mode 100644 index 0000000..839e929 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_41.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_42.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_42.png new file mode 100644 index 0000000..5f84480 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_42.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_43.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_43.png new file mode 100644 index 0000000..eb3a017 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_43.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_5.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_5.png new file mode 100644 index 0000000..2dd845d Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_5.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_6.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_6.png new file mode 100644 index 0000000..ce39ca7 Binary files /dev/null and 
b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_6.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_7.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_7.png new file mode 100644 index 0000000..c9256be Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_7.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_8.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_8.png new file mode 100644 index 0000000..0901cf6 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_8.png differ diff --git a/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_9.png b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_9.png new file mode 100644 index 0000000..a913e4b Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Connect/img/farming_tf_wallet_9.png differ diff --git a/collections/system_administrators/getstarted/TF_Dashboard/TF_Dashboard.md b/collections/system_administrators/getstarted/TF_Dashboard/TF_Dashboard.md new file mode 100644 index 0000000..6dc12c7 --- /dev/null +++ b/collections/system_administrators/getstarted/TF_Dashboard/TF_Dashboard.md @@ -0,0 +1,148 @@ +

ThreeFold Dashboard: Create an Account and Transfer TFT

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Create Polkadot Extension Account](#create-polkadot-extension-account) +- [Transfer TFT from Stellar Chain to TFChain](#transfer-tft-from-stellar-chain-to-tfchain) + +## Introduction + +In this section, we will create an account on the TFChain and transfer TFT from the Stellar chain to TFChain. We will then be able to use the TFT and deploy workloads on the Threefold Playground. + +## Create Polkadot Extension Account + +Go to the Threefold Dashboard: [dashboard.grid.tf](https://dashboard.grid.tf/) + +If you don't have the Polkadot extension installed on your browser, you will be able to click on the download link directly on the Threefold Dashboard page: + +![image](./img/dashboard_1.png) + +This link will lead you to the Polkadot extension download page: https://polkadot.js.org/extension/ + +![image](./img/dashboard_2.png) + +Then, simply click on "Add to Chrome". + +![image](./img/dashboard_3.png) + +Then, confirm by clicking on "Add extension". + +![image](./img/dashboard_4.png) + +You can now access the extension by clicking on the browser's extension button on the top right of the screen, and then clicking on *polkadot{.js} extension*: + +![image](./img/dashboard_5.png) + +Make sure to carefully read the Polkadot message, then click on **Understood, let me continue**: + +![image](./img/dashboard_6.png) + +Then click on the **plus** symbol to create a new account: + +![image](./img/dashboard_7.png) + +For this next step, you should be very careful. Your seed phrase is your only access to your account. Make sure to keep a copy somewhere safe and offline. + +![image](./img/dashboard_8.png) + +Next, choose a name for your account and a password: + +![image](./img/dashboard_9.png) + +Your account is now created. 
You can see it when you open the Polkadot extension on your browser: + +![image](./img/dashboard_10.png) + +Now, when you go to the [Threefold Dashboard](https://dashboard.grid.tf/), you can click on the **Connect** button in the top right corner: + +![image](./img/dashboard_11.png) + +You will then need to grant the Threefold Dashboard access to your Polkadot account. + +![image](./img/dashboard_12.png) + +Then, simply click on your account name to access the Threefold Dashboard: + +![image](./img/dashboard_14.png) + +Read and accept the Terms and Conditions. + +![image](./img/dashboard_15.png) + +You will be asked to confirm the transaction: enter your password and click on **Sign the transaction** to confirm. + +![image](./img/dashboard_13.png) + +Once you open your account, you can choose a relay for it, then click on **Create**. + +![image](./img/dashboard_relay.png) + +You will also be asked to confirm the transaction. + +![image](./img/dashboard_13.png) + +That's it! You've successfully created an account on the TFChain thanks to the Polkadot extension. You can now access the Threefold Dashboard. + +On to the next section, where we will transfer (or swap) TFT from the Stellar Chain on your Threefold Connect app wallet to the TFChain on the Threefold Dashboard. + +You'll see, this is very easy thanks to the Threefold Dashboard configuration. + +*** + +## Transfer TFT from Stellar Chain to TFChain + +On the [Threefold Dashboard](https://dashboard.grid.tf/), click on **Portal**, then click on **Swap**. + +Make sure the chain **stellar** is selected. Then click **Deposit**, as we want to deposit TFT from the Stellar Chain to the TFChain. + +![image](./img/dashboard_16.png) + +Next, you will scan the QR code shown on the screen with the Threefold Connect app. + +> Note that you can also manually enter the Stellar Chain address and the Twin ID. 
+ +![image](./img/dashboard_17.png) + +To scan the QR code on the Threefold Connect app, follow these steps: + +Click on the menu button: + +![image](./img/dashboard_18.png) + +Click on **Wallet**: + +![image](./img/dashboard_19.png) + +Then, click on **Send Coins**: + +![image](./img/dashboard_20.png) + +On the next page, select the **Stellar** chain, then click on **SCAN QR**: + +![image](./img/dashboard_21.png) + +This will automatically fill in the correct address and twin ID. + +You can now enter the amount of TFT you wish to send, and then click **SEND**. + +> We recommend trying with a small amount of TFT first to make sure everything is OK. +> +> The transfer fee is 1 TFT per transfer. + +![image](./img/dashboard_22.png) + +You will then simply need to confirm the transaction. This is a good opportunity to make sure everything is OK. + +![image](./img/dashboard_23.png) + +You should then receive your TFT on your Dashboard account within a few minutes. + +You can see your TFT balance at the top of the screen. Here's an example of what it could look like: + +![image](./img/dashboard_24.png) + +> Note: You might need to refresh (reload) the webpage to see the new TFT added to the account. + +That's it! You've swapped TFT from Stellar Chain to TFChain. 
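The fee arithmetic above is simple, and it can help to know in advance what will arrive on TFChain. Here is a small illustrative Python helper (hypothetical, not part of any ThreeFold tooling) based on the flat 1 TFT fee mentioned in the note:

```python
# Illustrative helper: TFT credited on TFChain after the flat 1 TFT
# transfer fee noted above. Hypothetical sketch, for sanity-checking only.
TRANSFER_FEE_TFT = 1.0

def received_after_fee(amount_sent: float) -> float:
    """Return the TFT credited on TFChain for a given Stellar-side send."""
    if amount_sent <= TRANSFER_FEE_TFT:
        raise ValueError("amount must exceed the 1 TFT transfer fee")
    return amount_sent - TRANSFER_FEE_TFT

# Trying with a small amount first, as recommended:
print(received_after_fee(5.0))  # 4.0
```

Sending less than the fee would leave nothing to credit, so the helper rejects such amounts outright.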
diff --git a/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_1.png b/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_1.png new file mode 100644 index 0000000..0db09d3 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_1.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_2.png b/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_2.png new file mode 100644 index 0000000..b7b21f2 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_2.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_3.png b/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_3.png new file mode 100644 index 0000000..85b01b2 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_3.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_4.png b/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_4.png new file mode 100644 index 0000000..753c0dd Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_4.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_5.png b/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_5.png new file mode 100644 index 0000000..073a4a7 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_ethereum/img/tft_on_ethereum_image_5.png differ diff --git 
a/collections/system_administrators/getstarted/TF_Token/tft_ethereum/tft_ethereum.md b/collections/system_administrators/getstarted/TF_Token/tft_ethereum/tft_ethereum.md new file mode 100644 index 0000000..67ecd6e --- /dev/null +++ b/collections/system_administrators/getstarted/TF_Token/tft_ethereum/tft_ethereum.md @@ -0,0 +1,94 @@ +

TFT on Ethereum

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [TFT Ethereum-Stellar Bridge](#tft-ethereum-stellar-bridge) +- [TFT and Metamask](#tft-and-metamask) + - [Add TFT to Metamask](#add-tft-to-metamask) + - [Buy TFT on Metamask](#buy-tft-on-metamask) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +The TFT Stellar-Ethereum bridge serves as a vital link between the Stellar and Ethereum blockchains, enabling the seamless transfer of TFT tokens between these two networks. This bridge enhances interoperability and expands the utility of TFT by allowing users to leverage the strengths of both platforms. With the bridge in place, TFT holders can convert their tokens from the Stellar network to the Ethereum network and vice versa, unlocking new possibilities for engagement with decentralized applications, smart contracts, and the vibrant Ethereum ecosystem. This bridge promotes liquidity, facilitates cross-chain transactions, and encourages collaboration between the Stellar and Ethereum communities. + +*** + +## TFT Ethereum-Stellar Bridge + +The easiest way to transfer TFT between Ethereum and Stellar is to use the [TFT Ethereum Bridge](https://bridge.eth.threefold.io). We present here the main steps on how to use this bridge. + +When you go to the [TFT Ethereum-Stellar bridge website](https://bridge.eth.threefold.io/), connect your Ethereum wallet. Then the bridge will present a QR code which you scan with your Stellar wallet. This will populate a transaction with the bridge wallet as the destination and an encoded form of your Ethereum address as the memo. The bridge will scan the transaction, decode the Ethereum wallet address, and deliver newly minted TFT on Ethereum, minus the bridge fees. + +For the reverse operation, going from Ethereum to Stellar, there is a smart contract interaction that burns TFT on Ethereum while embedding your Stellar wallet address. 
The bridge will scan that transaction and release TFT from its vault wallet to the specified Stellar address, again minus the bridge fees. + +Note that the contract address for TFT on Ethereum is the following: `0x395E925834996e558bdeC77CD648435d620AfB5b`. + +To see the ThreeFold Token on Etherscan, check [this link](https://etherscan.io/token/0x395E925834996e558bdeC77CD648435d620AfB5b). + +*** + +## TFT and Metamask + +The ThreeFold Token (TFT) is available on Ethereum. +It is implemented as a wrapped asset with the following token address: + +``` +0x395E925834996e558bdeC77CD648435d620AfB5b +``` + +We present here the basic steps to add TFT to Metamask. We also show how to buy TFT on Metamask. Finally, we present the simple steps to use the [TFT Ethereum Bridge](https://bridge.eth.threefold.io/). + +*** + +### Add TFT to Metamask + +Open Metamask and import the ThreeFold Token. First click on `import tokens`: + +![Metamask-Main|297x500](./img/tft_on_ethereum_image_1.png) + +Then, choose `Custom Token`: + +![Metamask-ImportToken|298x500](./img/tft_on_ethereum_image_2.png) + +To add the ThreeFold Token, paste its Ethereum address in the `Token contract address` field. The address is the following: + +``` +0x395E925834996e558bdeC77CD648435d620AfB5b +``` + +Once you paste the TFT contract address, the `Token symbol` parameter should automatically be filled with `TFT`. + +Click on the button `Add Custom Token`. + +![Metamask-importCustomToken|297x500](./img/tft_on_ethereum_image_3.png) + +To confirm, click on the button `Import tokens`: + +![Metamask-ImporttokensQuestion|298x500](./img/tft_on_ethereum_image_4.png) + +TFT is now added to Metamask. + +*** + +### Buy TFT on Metamask + +Liquidity is present on Ethereum, so you can use the "Swap" functionality in Metamask directly, or go to [Uniswap](https://app.uniswap.org/#/swap) to swap ETH, or any other token, for TFT. 
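Whenever you paste a token contract address by hand, it is worth sanity-checking that it is a well-formed Ethereum address: `0x` followed by exactly 40 hexadecimal characters. A minimal illustrative Python check (format only; it does not verify the EIP-55 checksum or that the contract actually exists on-chain):

```python
import re

# Format check only: "0x" followed by exactly 40 hex characters.
# This does NOT verify the EIP-55 checksum or that the contract exists.
ETH_ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def looks_like_eth_address(addr: str) -> bool:
    """Return True if addr has the shape of an Ethereum address."""
    return bool(ETH_ADDRESS_RE.fullmatch(addr))

# The TFT contract address from this guide:
tft = "0x395E925834996e558bdeC77CD648435d620AfB5b"
print(looks_like_eth_address(tft))       # True
print(looks_like_eth_address(tft[:-1]))  # False (only 39 hex characters)
```

A truncated or mistyped address fails this check immediately, which catches the most common copy-paste mistakes before any funds move.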
+ +When using Uniswap, paste the TFT token address in the field `Select a token` to select TFT on Ethereum. The TFT token address is the following: + +``` +0x395E925834996e558bdeC77CD648435d620AfB5b +``` + +![Uniswap-selecttoken|315x500](./img/tft_on_ethereum_image_5.png) + +*** + +## Questions and Feedback + +If you have any question, feel free to write a post on the [Threefold Forum](https://forum.threefold.io/). \ No newline at end of file diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_1.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_1.png new file mode 100644 index 0000000..265dcc1 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_1.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_10.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_10.png new file mode 100644 index 0000000..37e04bb Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_10.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_11.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_11.png new file mode 100644 index 0000000..6f05038 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_11.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_12.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_12.png new file mode 100644 index 0000000..ad81a3d Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_12.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_13.png 
b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_13.png new file mode 100644 index 0000000..8d808cc Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_13.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_14.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_14.png new file mode 100644 index 0000000..014edde Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_14.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_15.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_15.png new file mode 100644 index 0000000..895d432 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_15.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_16.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_16.png new file mode 100644 index 0000000..b8ca3c9 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_16.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_17.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_17.png new file mode 100644 index 0000000..5919d0c Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_17.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_18.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_18.png new file mode 100644 index 0000000..8ea142f Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_18.png differ diff --git 
a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_19.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_19.png new file mode 100644 index 0000000..14688ab Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_19.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_2.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_2.png new file mode 100644 index 0000000..cb638bd Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_2.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_20.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_20.png new file mode 100644 index 0000000..b072502 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_20.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_21.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_21.png new file mode 100644 index 0000000..709e50a Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_21.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_22.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_22.png new file mode 100644 index 0000000..6e588cd Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_22.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_23.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_23.png new file mode 100644 index 0000000..b47c4f0 Binary files /dev/null and 
b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_23.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_24.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_24.png new file mode 100644 index 0000000..df06bec Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_24.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_25.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_25.png new file mode 100644 index 0000000..7ba5402 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_25.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_26.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_26.png new file mode 100644 index 0000000..f34d4ff Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_26.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_27.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_27.png new file mode 100644 index 0000000..1de6ee5 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_27.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_28.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_28.png new file mode 100644 index 0000000..c3e8cd0 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_28.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_29.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_29.png new 
file mode 100644 index 0000000..888067f Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_29.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_3.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_3.png new file mode 100644 index 0000000..4a18f4c Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_3.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_30.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_30.png new file mode 100644 index 0000000..f28e697 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_30.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_31.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_31.png new file mode 100644 index 0000000..84fe32e Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_31.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_32.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_32.png new file mode 100644 index 0000000..3ab05eb Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_32.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_33.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_33.png new file mode 100644 index 0000000..b30050a Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_33.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_34.png 
b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_34.png new file mode 100644 index 0000000..553db13 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_34.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_4.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_4.png new file mode 100644 index 0000000..b2a0d03 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_4.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_5.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_5.png new file mode 100644 index 0000000..2b28aef Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_5.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_6.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_6.png new file mode 100644 index 0000000..a3601b3 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_6.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_7.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_7.png new file mode 100644 index 0000000..879a735 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_7.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_8.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_8.png new file mode 100644 index 0000000..b1a9321 Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_8.png differ diff --git 
a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_9.png b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_9.png new file mode 100644 index 0000000..eb2e80e Binary files /dev/null and b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/img/gettft_9.png differ diff --git a/collections/system_administrators/getstarted/TF_Token/tft_lobstr/tft_lobstr.md b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/tft_lobstr.md new file mode 100644 index 0000000..ddca2d4 --- /dev/null +++ b/collections/system_administrators/getstarted/TF_Token/tft_lobstr/tft_lobstr.md @@ -0,0 +1,216 @@ +

# Threefold Token: Buy TFT on Lobstr

+ +
+ +
+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Download the App and Create an Account](#download-the-app-and-create-an-account) +- [Connect Your TF Connect App Wallet](#connect-your-tf-connect-app-wallet) +- [Buy XLM with Fiat Currency](#buy-xlm-with-fiat-currency) +- [Swap XLM for TFT](#swap-xlm-for-tft) + +*** + +## Introduction + +The Threefold token (TFT) is the utility token of the Threefold Grid, a decentralized and open-source project offering network, compute and storage capacity. + +Threefold Tokens (TFT) are created (minted) by the ThreeFold Blockchain (TFChain) only when new Internet capacity is added to the ThreeFold Grid by farmers. For this reason, TFT is a pure utility token, as minting is solely the result of farming on the Threefold Grid. + +* To **farm** TFT, read the [complete farming guide](https://forum.threefold.io/t/threefold-farming-guide-part-1/2989). + +* To **buy** TFT, follow this guide. + +There are many ways to buy TFT: + +* You can buy TFT on [Lobstr](https://lobstr.co/) + +* You can buy TFT at [GetTFT.com](https://gettft.com/gettft/) + +* You can buy TFT on [Pancake Swap](https://pancakeswap.finance/swap?inputCurrency=BNB&outputCurrency=0x8f0FB159380176D324542b3a7933F0C2Fd0c2bbf) + +In this guide, we show how to buy TFT on the [Lobstr app](https://lobstr.co/). +The process is simple. + +Note that it is possible to follow these steps without connecting the Lobstr wallet to the TF Connect app wallet. But connecting them has a clear advantage: when you buy and swap on Lobstr, the TFT is directly accessible in the TF Connect app wallet. + +Here we go! + +*** + +## Download the App and Create an Account + +Go to [www.lobstr.co](https://www.lobstr.co) and download the Lobstr app. +You can download it for Android or iOS. + +![image](./img/gettft_1.png) + +We show here the steps for Android; the process is very similar on iOS.
+Once you've clicked on the Android button, you can click **Install** on the Google Play Store page: + +![image](./img/gettft_2.png) + +Once the app is downloaded, open it: + +![image](./img/gettft_3.png) + +On the Lobstr app, click on **Create Account**: + +![image](./img/gettft_4.png) + +You will then need to enter your email address: + +![image](./img/gettft_5.png) + +Then, choose a safe password for your account: + +![image](./img/gettft_6.png) + +Once this is done, you will need to verify your email. + +Click on **Verify Email** and then go check your email inbox. + +![image](./img/gettft_7.png) + +Simply click on **Verify Email** in the email you've received. + +![image](./img/gettft_8.png) + +Once your email is verified, you can sign in to your Lobstr account: + +![image](./img/gettft_9.png) + +![image](./img/gettft_10.png) + +*** + +## Connect Your TF Connect App Wallet + +You will then need to either create a new wallet or connect an existing wallet. + +Since we are working on the Threefold ecosystem, it is very easy and practical to simply connect your Threefold Connect app wallet. You can also create a new wallet. + +Using the TF Connect wallet is very useful and quick. When you buy XLM and swap XLM tokens for TFTs, they will be directly available in your TF Connect app wallet. + +![image](./img/gettft_11.png) + +To connect your TF Connect app wallet, you will need to find your Stellar address and secret key. +This is very simple to do. + +Click on **I have a public or secret key**.
+ +![image](./img/gettft_12.png) + +As you can see on this next picture, you need the Stellar address and secret key to properly connect your TF Connect app wallet to Lobstr: + +![image](./img/gettft_18.png) + +To find your Stellar address and secret key, go to the TF Connect app and select the **Wallet** section: + +![image](./img/gettft_13.png) + +At the top of the section, click on the **copy** button to copy your Stellar Address: + +![image](./img/gettft_17.png) + +Now, we will find the Stellar secret key. +At the bottom of the section, click on the encircled **i** button: + +![image](./img/gettft_14.png) + +Next, click on the **eye** button to reveal your secret key: + +![image](./img/gettft_15.png) + +You can now simply click on the **copy** button on the right: + +![image](./img/gettft_16.png) + +That's it! You've now connected your TF Connect app wallet to your Lobstr account. + +## Buy XLM with Fiat Currency + +Now, all we need to do is buy XLM and then swap it for TFT. +It will be directly available in your TF Connect App wallet. + +On the Lobstr app, click on the top right menu button: + +![image](./img/gettft_19.png) + +Then, click on **Buy Crypto**: + +![image](./img/gettft_20.png) + +By default, the crypto selected is XLM. This is alright for us as we will quickly swap the XLM for TFT. + +On the Buy Crypto page, you can choose the type of fiat currency you want. +By default it is in USD. To select another fiat currency, click on **ALL** to see the available fiat currencies: + +![image](./img/gettft_21.png) + +You can search or select the currency you want for the transfer: + +![image](./img/gettft_22.png) + +You will then need to decide how much XLM you want to buy. Note that there can be a minimum amount. +Once you've chosen the desired amount, click on **Continue**. + +![image](./img/gettft_23.png) + +Lobstr will then ask you to choose a payment method. In this case, it is Moonpay.
+Note that in some cases, your credit card won't accept Moonpay payments. If this applies, you will simply need to confirm with your bank or credit card company that you agree to transact with Moonpay. This can be done by phone. + +![image](./img/gettft_24.png) + +Once you've set up your Moonpay payment method, you will need to process and confirm the transaction: + +![image](./img/gettft_25.png) +![image](./img/gettft_26.png) + +You will then see a processing window. +This process is usually fast. Within a few minutes, you should receive your XLM. + +![image](./img/gettft_27.png) + +Once the XLM is delivered, you will receive a notification: + +![image](./img/gettft_28.png) + +When your transaction is complete, you will see this message: + +![image](./img/gettft_29.png) + +On the Trade History page, you can choose to download a CSV file of your transaction: + +![image](./img/gettft_30.png) + +That's it! You've bought XLM on Lobstr and Moonpay. + +## Swap XLM for TFT + +Now we want to swap the XLM tokens for Threefold tokens (TFT). +This is even easier than the previous steps. + +Go to the Lobstr Home menu and select **Swap**: + +![image](./img/gettft_31.png) + +On the **Swap** page, type "tft" and select the Threefold token: + +![image](./img/gettft_32.png) + +Select the amount of XLM you want to swap. It is recommended to keep at least 1 XLM in your wallet for transaction fees. + +![image](./img/gettft_33.png) + +Within a few seconds, you will receive a confirmation that your swap is complete. +Note that the TFT is sent directly to your TF Connect app wallet. + +![image](./img/gettft_34.png) + +That's it. You've swapped XLM for TFT. + +You can now use your TFT to deploy workloads on the Threefold Grid.
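As a rule of thumb for the swap step above, keep the recommended 1 XLM reserve out of the amount you swap. Here is a small sketch of that arithmetic; the balance is just an example value:

```shell
# Sketch: how much XLM can be swapped while keeping the recommended
# 1 XLM reserve for transaction fees. BALANCE is an example value.
BALANCE=25
RESERVE=1
if [ "$BALANCE" -gt "$RESERVE" ]; then
  SWAPPABLE=$((BALANCE - RESERVE))
else
  SWAPPABLE=0
fi
echo "You can swap up to ${SWAPPABLE} XLM"
```

With a balance of 25 XLM this prints `You can swap up to 24 XLM`; with a balance at or below the reserve, it reports 0.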
diff --git a/collections/system_administrators/getstarted/TF_Token/tft_toc.md b/collections/system_administrators/getstarted/TF_Token/tft_toc.md new file mode 100644 index 0000000..2a25e32 --- /dev/null +++ b/collections/system_administrators/getstarted/TF_Token/tft_toc.md @@ -0,0 +1,6 @@ +

# ThreeFold Token

+ +

## Table of Contents

+ +- [TFT on Lobstr](../TF_Token/tft_lobstr/tft_lobstr.html) +- [TFT on Ethereum](../TF_Token/tft_ethereum/tft_ethereum.html) \ No newline at end of file diff --git a/collections/system_administrators/getstarted/img/endlessscalable.png b/collections/system_administrators/getstarted/img/endlessscalable.png new file mode 100644 index 0000000..90cede7 Binary files /dev/null and b/collections/system_administrators/getstarted/img/endlessscalable.png differ diff --git a/collections/system_administrators/getstarted/img/network_concepts_.jpg b/collections/system_administrators/getstarted/img/network_concepts_.jpg new file mode 100644 index 0000000..c08deae Binary files /dev/null and b/collections/system_administrators/getstarted/img/network_concepts_.jpg differ diff --git a/collections/system_administrators/getstarted/img/peer2peer_net_.jpg b/collections/system_administrators/getstarted/img/peer2peer_net_.jpg new file mode 100644 index 0000000..bbc21f0 Binary files /dev/null and b/collections/system_administrators/getstarted/img/peer2peer_net_.jpg differ diff --git a/collections/system_administrators/getstarted/img/stfgrid3_storage_concepts_.jpg b/collections/system_administrators/getstarted/img/stfgrid3_storage_concepts_.jpg new file mode 100644 index 0000000..67f7fd8 Binary files /dev/null and b/collections/system_administrators/getstarted/img/stfgrid3_storage_concepts_.jpg differ diff --git a/collections/system_administrators/getstarted/img/webgw_.jpg b/collections/system_administrators/getstarted/img/webgw_.jpg new file mode 100644 index 0000000..f555d0f Binary files /dev/null and b/collections/system_administrators/getstarted/img/webgw_.jpg differ diff --git a/collections/system_administrators/getstarted/planetarynetwork.md b/collections/system_administrators/getstarted/planetarynetwork.md new file mode 100644 index 0000000..cd28491 --- /dev/null +++ b/collections/system_administrators/getstarted/planetarynetwork.md @@ -0,0 +1,224 @@ + +

# Planetary Network

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Install](#install) +- [Run](#run) + - [Linux](#linux) + - [MacOS](#macos) +- [Test Connectivity](#test-connectivity) +- [Firewalls](#firewalls) + - [Linux](#linux-1) + - [MacOS](#macos-1) +- [Get Yggdrasil IP](#get-yggdrasil-ip) +- [Add Peers](#add-peers) +- [Peers](#peers) + - [Central Europe](#central-europe) + - [Ghent](#ghent) + - [Austria](#austria) +- [Peers config for usage in every Yggdrasil - Planetary Network client](#peers-config-for-usage-in-every-yggdrasil---planetary-network-client) + +*** + +## Introduction + +To get started, you first need to join the Planetary Network by running [Yggdrasil](https://yggdrasil-network.github.io) from the command line. + +Yggdrasil is an implementation of a fully end-to-end encrypted IPv6 network. It is lightweight, self-arranging, supported on multiple platforms, and allows pretty much any IPv6-capable application to communicate securely with other nodes on the network. Yggdrasil does not require you to have IPv6 Internet connectivity - it also works over IPv4. + +## Install + +Yggdrasil is necessary for communication between your local machine and the nodes on the Grid that you deploy to. Binaries and packages are available for all major operating systems, or it can be built from source. Find installation instructions here. + +After installation, you'll need to add at least one publicly available peer to your Yggdrasil configuration file. By default on Unix-based systems, you'll find the file at `/etc/yggdrasil.conf`. To find peers, check this site, which compiles and displays the peer information available on GitHub. + +Add peers to your configuration file like so: + +``` +Peers: ["PEER_URL:PORT", "PEER_URL:PORT", ...]
+``` + +Please consult the [Yggdrasil installation page](https://yggdrasil-network.github.io/installation.html) for more information and available clients. + +## Run + +### Linux + +On Linux with `systemd`, Yggdrasil can be started and enabled as a service, or run manually from the command line: + +``` +sudo yggdrasil -useconffile /etc/yggdrasil.conf +``` + +Get your IPv6 address with the following command: + +``` +yggdrasilctl getSelf +``` + +### MacOS + +The MacOS package will automatically install and start the `launchd` service. After adding peers to your config file, restart Yggdrasil by stopping the service (it will be restarted automatically): + +``` +sudo launchctl stop yggdrasil +``` + +Get your IPv6 address with the following command: + +``` +sudo yggdrasilctl getSelf +``` + +## Test Connectivity + +To ensure that you have successfully connected to the Yggdrasil network, try loading this address in your browser: + +``` +http://[319:3cf0:dd1d:47b9:20c:29ff:fe2c:39be]/ +``` + +## Firewalls + +Creating deployments on the Grid also requires that nodes can reach your machine. This means that a local firewall preventing inbound connections will cause deployments to fail. + +### Linux + +On systems using `iptables`, check: +``` +sudo ip6tables -S INPUT +``` + +If the first line is `-P INPUT DROP`, then all inbound connections over IPv6 will be blocked. To open inbound connections, run: + +``` +sudo ip6tables -P INPUT ACCEPT +``` + +To make this persist after a reboot, run: + +``` +sudo ip6tables-save +``` + +If you'd rather close the firewall again after you're done, use: + +``` +sudo ip6tables -P INPUT DROP +``` + +### MacOS + +The MacOS system firewall is disabled by default. You can check your firewall settings according to instructions here.
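The peer-editing step from the Install section can also be scripted. Below is a minimal sketch that splices two peers into a throwaway copy of the config file; the file path and peer addresses here are examples only, so substitute live peers from the public peers list and edit the real `/etc/yggdrasil.conf` with root privileges.

```shell
# Sketch only: practice on a throwaway copy rather than /etc/yggdrasil.conf.
# The peer addresses are examples - pick live ones from the public peers list.
CONF=./yggdrasil.conf.example

# Start from a minimal config with an empty Peers list,
# similar to what 'yggdrasil -genconf' produces.
cat > "$CONF" <<'EOF'
{
  Peers: []
}
EOF

# Splice two example peers into the empty Peers list.
sed -i 's|Peers: \[\]|Peers: [ tls://54.37.137.221:11129 tcp://gent01.grid.tf:9943 ]|' "$CONF"

cat "$CONF"
```

Once the real config is updated the same way, restart Yggdrasil (for example, `systemctl restart yggdrasil` on Linux) so the new peers take effect.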
+ +## Get Yggdrasil IP + +Once Yggdrasil is installed, you can find your Yggdrasil IP address using this command on both Linux and Mac: + +``` +yggdrasil -useconffile /etc/yggdrasil.conf -address +``` + +You'll need this address when registering your twin on TFChain later. + + +## Add Peers + + + - Add the needed [peers](https://publicpeers.neilalexander.dev/) to the generated config file under `Peers`. + + **Example**: +``` + Peers: + [ + tls://54.37.137.221:11129 + ] +``` +- Restart Yggdrasil: + + systemctl restart yggdrasil + +## Peers + +### Central Europe + +#### Ghent + +- tcp://gent01.grid.tf:9943 +- tcp://gent02.grid.tf:9943 +- tcp://gent03.grid.tf:9943 +- tcp://gent04.grid.tf:9943 +- tcp://gent01.test.grid.tf:9943 +- tcp://gent02.test.grid.tf:9943 +- tcp://gent01.dev.grid.tf:9943 +- tcp://gent02.dev.grid.tf:9943 + +#### Austria + +- tcp://gw291.vienna1.greenedgecloud.com:9943 +- tcp://gw293.vienna1.greenedgecloud.com:9943 +- tcp://gw294.vienna1.greenedgecloud.com:9943 +- tcp://gw297.vienna1.greenedgecloud.com:9943 +- tcp://gw298.vienna1.greenedgecloud.com:9943 +- tcp://gw299.vienna2.greenedgecloud.com:9943 +- tcp://gw300.vienna2.greenedgecloud.com:9943 +- tcp://gw304.vienna2.greenedgecloud.com:9943 +- tcp://gw306.vienna2.greenedgecloud.com:9943 +- tcp://gw307.vienna2.greenedgecloud.com:9943 +- tcp://gw309.vienna2.greenedgecloud.com:9943 +- tcp://gw313.vienna2.greenedgecloud.com:9943 +- tcp://gw324.salzburg1.greenedgecloud.com:9943 +- tcp://gw326.salzburg1.greenedgecloud.com:9943 +- tcp://gw327.salzburg1.greenedgecloud.com:9943 +- tcp://gw328.salzburg1.greenedgecloud.com:9943 +- tcp://gw330.salzburg1.greenedgecloud.com:9943 +- tcp://gw331.salzburg1.greenedgecloud.com:9943 +- tcp://gw333.salzburg1.greenedgecloud.com:9943 +- tcp://gw422.vienna2.greenedgecloud.com:9943 +- tcp://gw423.vienna2.greenedgecloud.com:9943 +- tcp://gw424.vienna2.greenedgecloud.com:9943 +- tcp://gw425.vienna2.greenedgecloud.com:9943 + +## Peers config for usage in every Yggdrasil -
Planetary Network client + +``` + Peers: + [ + # Threefold Lochrist + tcp://gent01.grid.tf:9943 + tcp://gent02.grid.tf:9943 + tcp://gent03.grid.tf:9943 + tcp://gent04.grid.tf:9943 + tcp://gent01.test.grid.tf:9943 + tcp://gent02.test.grid.tf:9943 + tcp://gent01.dev.grid.tf:9943 + tcp://gent02.dev.grid.tf:9943 + # GreenEdge + tcp://gw291.vienna1.greenedgecloud.com:9943 + tcp://gw293.vienna1.greenedgecloud.com:9943 + tcp://gw294.vienna1.greenedgecloud.com:9943 + tcp://gw297.vienna1.greenedgecloud.com:9943 + tcp://gw298.vienna1.greenedgecloud.com:9943 + tcp://gw299.vienna2.greenedgecloud.com:9943 + tcp://gw300.vienna2.greenedgecloud.com:9943 + tcp://gw304.vienna2.greenedgecloud.com:9943 + tcp://gw306.vienna2.greenedgecloud.com:9943 + tcp://gw307.vienna2.greenedgecloud.com:9943 + tcp://gw309.vienna2.greenedgecloud.com:9943 + tcp://gw313.vienna2.greenedgecloud.com:9943 + tcp://gw324.salzburg1.greenedgecloud.com:9943 + tcp://gw326.salzburg1.greenedgecloud.com:9943 + tcp://gw327.salzburg1.greenedgecloud.com:9943 + tcp://gw328.salzburg1.greenedgecloud.com:9943 + tcp://gw330.salzburg1.greenedgecloud.com:9943 + tcp://gw331.salzburg1.greenedgecloud.com:9943 + tcp://gw333.salzburg1.greenedgecloud.com:9943 + tcp://gw422.vienna2.greenedgecloud.com:9943 + tcp://gw423.vienna2.greenedgecloud.com:9943 + tcp://gw424.vienna2.greenedgecloud.com:9943 + tcp://gw425.vienna2.greenedgecloud.com:9943 + ] +``` + diff --git a/collections/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md b/collections/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md new file mode 100644 index 0000000..6a08a66 --- /dev/null +++ b/collections/system_administrators/getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md @@ -0,0 +1,186 @@ +

# Deploy a Full VM and Run Cockpit, a Web-based Interface for Servers

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Deploy a Full VM and Create a Root-Access User](#deploy-a-full-vm-and-create-a-root-access-user) +- [Set the VM and Install Cockpit](#set-the-vm-and-install-cockpit) +- [Change the Network System Daemon](#change-the-network-system-daemon) +- [Set a Firewall](#set-a-firewall) +- [Access Cockpit](#access-cockpit) +- [Conclusion](#conclusion) +- [Acknowledgements and References](#acknowledgements-and-references) + +*** + +## Introduction + +In this Threefold Guide, we show how easy it is to deploy a full VM and access Cockpit, a web-based interface to manage servers. For more information on Cockpit, visit the [Cockpit website](https://cockpit-project.org/). + +For more information on deploying a full VM and using an SSH remote connection, read [this SSH guide](../../ssh_guide/ssh_guide.md). + +If you are new to the Threefold ecosystem and you want to deploy workloads on the Threefold Grid, read the [Get Started section](../../tfgrid3_getstarted.md) of the Threefold Manual. + +Note that the two sections [Change the Network System Daemon](#change-the-network-system-daemon) and [Set a Firewall](#set-a-firewall) are optional. That said, they add features and security to the deployment. + + + +## Deploy a Full VM and Create a Root-Access User + +To start, you must [deploy and SSH into a full VM](../../ssh_guide/ssh_guide.md). + +* Go to the [Threefold dashboard](https://dashboard.grid.tf/#/) +* Deploy a full VM (e.g.
Ubuntu 22.04) + * With an IPv4 Address +* After deployment, copy the IPv4 address +* Connect to the VM via SSH + * ``` + ssh root@VM_IPv4_address + ``` +* Create a new user with root access + * Here we use `newuser` as an example + * ``` + adduser newuser + ``` + * To see the directory of the new user + * ``` + ls /home + ``` + * Give sudo privileges to the new user + * ``` + usermod -aG sudo newuser + ``` + * Make the new user accessible by SSH + * ``` + su - newuser + ``` + * ``` + mkdir ~/.ssh + ``` + * ``` + nano ~/.ssh/authorized_keys + ``` + * Add your authorized public key to the file, then save and quit + * Exit the VM and reconnect with the new user + * ``` + exit + ``` + * ``` + ssh newuser@VM_IPv4_address + ``` + + + +## Set the VM and Install Cockpit + +* Update and upgrade the VM + * ``` + sudo apt update -y && sudo apt upgrade -y && sudo apt-get update -y + ``` +* Install Cockpit + * ``` + . /etc/os-release && sudo apt install -t ${UBUNTU_CODENAME}-backports cockpit -y + ``` + + + +## Change the Network System Daemon + +We now change the system daemon that manages network configurations. We will be using [NetworkManager](https://networkmanager.dev/) instead of [networkd](https://wiki.archlinux.org/title/systemd-networkd). This will give us more options in Cockpit. + +* Install NetworkManager. Note that it might already be installed.
+ * ``` + sudo apt install network-manager -y + ``` +* Update the `.yaml` file + * Go to netplan's directory + * ``` + cd /etc/netplan + ``` + * Search for the proper `.yaml` file name + * ``` + ls -l + ``` + * Update the `.yaml` file + * ``` + sudo nano 50-cloud-init.yaml + ``` + * Add the following lines under `network:` + * ``` + version: 2 + renderer: NetworkManager + ``` + * Note that these two lines should be aligned with `ethernets:` + * Remove `version: 2` at the bottom of the file + * Save and exit the file +* Disable networkd and enable NetworkManager + * ``` + sudo systemctl disable systemd-networkd + ``` + * ``` + sudo systemctl enable NetworkManager + ``` +* Apply netplan to set NetworkManager + * ``` + sudo netplan apply + ``` +* Reboot the system to load the new kernel and to properly set NetworkManager + * ``` + sudo reboot + ``` +* Reconnect to the VM + * ``` + ssh newuser@VM_IPv4_address + ``` + + +## Set a Firewall + +We now set up a firewall. Note that [ufw](https://wiki.ubuntu.com/UncomplicatedFirewall) is not compatible with Cockpit; for this reason, we will be using [firewalld](https://firewalld.org/).
+ +* Install firewalld + * ``` + sudo apt install firewalld -y + ``` + +* Add Cockpit to firewalld + * ``` + sudo firewall-cmd --add-service=cockpit + ``` + * ``` + sudo firewall-cmd --add-service=cockpit --permanent + ``` +* See if Cockpit is available + * ``` + sudo firewall-cmd --info-service=cockpit + ``` + +* See the status of firewalld + * ``` + sudo firewall-cmd --state + ``` + + + +## Access Cockpit + +* In your web browser, enter the following URL with the proper VM IPv4 address + * ``` + VM_IPv4_Address:9090 + ``` +* Enter the username and password of the root-access user +* You might need to grant administrative access to the user + * On the top right of the Cockpit window, click on `Limited access` + * Enter the root-access user password, then click `Authenticate` + + + +## Conclusion + +You now have access to a web-based graphical interface to manage your VM. You can read [Cockpit's documentation](https://cockpit-project.org/documentation.html) to explore this interface further. + + + +## Acknowledgements and References + +A big thank you to Drew Smith for his [advice on using NetworkManager](https://forum.threefold.io/t/cockpit-managed-ubuntu-vm/3376) instead of networkd with Cockpit. \ No newline at end of file diff --git a/collections/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md b/collections/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md new file mode 100644 index 0000000..d41569e --- /dev/null +++ b/collections/system_administrators/getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md @@ -0,0 +1,184 @@ +

# Deploy a Full VM and Run Apache Guacamole (RDP Connection, Remote Desktop)

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Deploy a Full VM and Create a Root-Access User](#deploy-a-full-vm-and-create-a-root-access-user) +- [SSH with Root-Access User, Install Prerequisites and Apache Guacamole](#ssh-with-root-access-user-install-prerequisites-and-apache-guacamole) +- [Access Apache Guacamole and Create Admin-Access User](#access-apache-guacamole-and-create-admin-access-user) +- [Download the Desktop Environment and Run xrdp](#download-the-desktop-environment-and-run-xrdp) +- [Create an RDP Connection and Access the Server Remotely](#create-an-rdp-connection-and-access-the-server-remotely) +- [Feedback and Questions](#feedback-and-questions) +- [References](#references) + +*** + +## Introduction + +In this guide, we deploy a full virtual machine (Ubuntu 20.04) on the Threefold Grid with IPv4. We install and run [Apache Guacamole](https://guacamole.apache.org/) and access the VM over a remote desktop connection using [xrdp](https://www.xrdp.org/). + +The Apache Guacamole instance uses two-factor authentication to further secure the deployment. + +With Apache Guacamole, a user can access different deployments and manage servers remotely, with desktop access. + +This guide can be done on a Windows, macOS, or Linux computer. For more information on deploying a full VM and using an SSH remote connection, read this [SSH guide](../../ssh_guide/ssh_guide.md). + +If you are new to the Threefold ecosystem and you want to deploy workloads on the Threefold Grid, read the [Get Started section](../../tfgrid3_getstarted.md) of the Threefold Manual.
+ + + +## Deploy a Full VM and Create a Root-Access User + +* Go to the [Threefold Dashboard](https://dashboard.grid.tf/#/) +* Deploy a full VM (Ubuntu 20.04) with at least the minimum specs for a desktop environment + * IPv4 Address + * Minimum vcores: 2 + * Minimum RAM: 4Gb + * Minimum storage: 15Gb +* After deployment, note the VM IPv4 address +* Connect to the VM via SSH + * ``` + ssh root@VM_IPv4_address + ``` +* Once connected, create a new user with root access (for this guide we use "newuser") + * ``` + adduser newuser + ``` + * You should now see the new user directory + * ``` + ls /home + ``` + * Give sudo capacity to the new user + * ``` + usermod -aG sudo newuser + ``` + * Make the new user accessible by SSH + * ``` + su - newuser + ``` + * ``` + mkdir ~/.ssh + ``` + * Add the authorized public key in the file and save it + * ``` + nano ~/.ssh/authorized_keys + ``` +* Exit the VM and reconnect with the new user + + + +## SSH with Root-Access User, Install Prerequisites and Apache Guacamole + +* SSH into the VM + * ``` + ssh newuser@VM_IPv4_address + ``` +* Update and upgrade Ubuntu + * ``` + sudo apt update && sudo apt upgrade -y && sudo apt-get install software-properties-common -y + ``` +* Download and run Apache Guacamole + * ``` + wget -O guac-install.sh https://git.io/fxZq5 + ``` + * ``` + chmod +x guac-install.sh + ``` + * ``` + sudo ./guac-install.sh + ``` + + + +## Access Apache Guacamole and Create Admin-Access User + +* On your local computer, open a browser and enter the following URL with the proper IPv4 address + * ``` + http://VM_IPv4_address:8080/guacamole + ``` + * On Guacamole, enter the following for both the username and the password + * ``` + guacadmin + ``` + * Download the [TOTP](https://totp.app/) app on your Android or iOS device + * Scan the QR Code + * Enter the code + * The next time you log in + * Go to the TOTP app and enter the given code +* Go to the Guacamole Settings + * Users + * Create a new user with all admin 
privileges +* Log out of the session +* Log in with the new admin user +* Go to Settings + * Users + * Delete the default user +* Apache Guacamole is now installed + + + +## Download the Desktop Environment and Run xrdp + +* Download an Ubuntu desktop environment on the VM + * ``` + sudo apt install tasksel -y && sudo apt install lightdm -y + ``` + * Choose lightdm + * Run tasksel and choose `ubuntu desktop` + * ``` + sudo tasksel + ``` + +* Download and run xrdp + * ``` + wget https://c-nergy.be/downloads/xRDP/xrdp-installer-1.4.6.zip + ``` + * ``` + unzip xrdp-installer-1.4.6.zip + ``` + * ``` + bash xrdp-installer-1.4.6.sh + ``` + + + +## Create an RDP Connection and Access the Server Remotely + +* Create an RDP connection on Guacamole + * Open Guacamole + * ``` + http://VM_IPv4_address:8080/guacamole/ + ``` + * Go to Settings + * Click on Connections + * Click on New Connection + * Write the following parameters + * Name: Choose a name for the connection + * Location: ROOT + * Protocol: RDP + * Network + * Hostname: VM_IPv4_Address + * Port: 3389 + * Authentication + * Username: your root-access username (newuser) + * Password: your root-access user password + * Security mode: Any + * Ignore server certificate: Yes + * Click Save + * Go to the Apache Guacamole Home menu (top right button) + * Click on the new connection + * The remote desktop connection is now established + + + +## Feedback and Questions + +If you have any questions, let us know by writing a post on the [Threefold Forum](https://forum.threefold.io/). 
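As a side note, the `~/.ssh` setup for the new user from the first section can also be done non-interactively. The sketch below uses a scratch directory created with `mktemp` so it is safe to try anywhere; on the VM you would target `/home/newuser` instead, and the public key shown is a placeholder:

```shell
# Stand-in for /home/newuser on the VM; mktemp keeps this safe to try anywhere.
HOMEDIR="$(mktemp -d)"
# Placeholder public key; use your own real key on the VM.
PUBKEY="ssh-rsa AAAAB3Nza...example user@example.com"

# sshd expects ~/.ssh to be 0700 and authorized_keys 0600 for key login to work.
install -d -m 700 "$HOMEDIR/.ssh"
printf '%s\n' "$PUBKEY" >> "$HOMEDIR/.ssh/authorized_keys"
chmod 600 "$HOMEDIR/.ssh/authorized_keys"
```

On the VM, remember to also run `chown -R newuser:newuser /home/newuser/.ssh` when doing this as root.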
+ + + +## References + +Apache Guacamole for Secure Remote Access to your Computers, [https://discussion.scottibyte.com/t/apache-guacamole-for-secure-remote-access-to-your-computers/32](https://discussion.scottibyte.com/t/apache-guacamole-for-secure-remote-access-to-your-computers/32) + +MysticRyuujin's guac-install, [https://github.com/MysticRyuujin/guac-install](https://github.com/MysticRyuujin/guac-install) \ No newline at end of file diff --git a/collections/system_administrators/getstarted/remote-desktop_gui/remote-desktop_gui.md b/collections/system_administrators/getstarted/remote-desktop_gui/remote-desktop_gui.md new file mode 100644 index 0000000..d801b48 --- /dev/null +++ b/collections/system_administrators/getstarted/remote-desktop_gui/remote-desktop_gui.md @@ -0,0 +1,11 @@ +# Remote Desktop and GUI + +This section of the Threefold Guide provides different methods to access your 3node servers with either a remote desktop protocol or a graphical user interface (GUI). + +If you have any questions, or if you would like to see a specific guide on remote desktop connection or GUI, please let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/). + +

+## Table of Contents

+ +- [Cockpit: a Web-based Graphical Interface for Servers](./cockpit_guide/cockpit_guide.md) +- [XRDP: an Open-Source Remote Desktop Protocol](./xrdp_guide/xrdp_guide.md) +- [Apache Guacamole: a Clientless Remote Desktop Gateway](./guacamole_guide/guacamole_guide.md) diff --git a/collections/system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md new file mode 100644 index 0000000..e1648ab --- /dev/null +++ b/collections/system_administrators/getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md @@ -0,0 +1,168 @@ +

+# Deploy a Full VM and Run XRDP for Remote Desktop Connection

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [Server Side: Deploy the Full VM, install a desktop and XRDP](#server-side-deploy-the-full-vm-install-a-desktop-and-xrdp) +- [Client Side: Install Remote Desktop Connection for Windows, MAC or Linux](#client-side-install-remote-desktop-connection-for-windows-mac-or-linux) + - [Download the App](#download-the-app) + - [Connect Remotely](#connect-remotely) +- [Conclusion](#conclusion) + +*** + +## Introduction + +In this guide, we learn how to deploy a full virtual machine on a 3node on the Threefold Grid. +We access Ubuntu with a desktop environment to offer a graphical user interface (GUI). + +This guide can be done on a Windows, MAC, or Linux computer. The only difference will be in the Remote Desktop app. The steps are very similar. + +For more information on deploying a full VM and using SSH remote connection, read this [SSH guide](../../ssh_guide/ssh_guide.md). + +If you are new to the Threefold ecosystem and you want to deploy workloads on the Threefold Grid, read the [Get Started section](../../tfgrid3_getstarted.md) of the Threefold Manual. 
+ + + +## Server Side: Deploy the Full VM, install a desktop and XRDP + +* Go to the [Threefold Dashboard](https://dashboard.grid.tf/#/) +* Deploy a full VM (Ubuntu 20.04) + * With an IPv4 Address +* After deployment, copy the IPv4 address +* To SSH into the VM, write in the terminal + * ``` + ssh root@VM_IPv4_address + ``` +* Once connected, update, upgrade and install the desktop environment + * Update + * ``` + sudo apt update -y && sudo apt upgrade -y + ``` + * Install a light-weight desktop environment (Xfce) + * ``` + sudo apt install xfce4 xfce4-goodies -y + ``` +* Create a user with root access + * ``` + adduser newuser + ``` + * ``` + ls /home + ``` + * You should see the newuser directory + * Give sudo capacity to newuser + * ``` + usermod -aG sudo newuser + ``` + * Make newuser accessible by SSH + * ``` + su - newuser + ``` + * ``` + mkdir ~/.ssh + ``` + * ``` + nano ~/.ssh/authorized_keys + ``` + * Add the authorized public key in the file and save it + * Exit the VM and reconnect with the new user + * ``` + exit + ``` +* Reconnect to the VM terminal and install XRDP + * ``` + ssh newuser@VM_IPv4_address + ``` +* Install XRDP + * ``` + sudo apt install xrdp -y + ``` +* Check XRDP status + * ``` + sudo systemctl status xrdp + ``` + * If not running, run manually: + * ``` + sudo systemctl start xrdp + ``` +* If needed, configure xrdp (optional) + * ``` + sudo nano /etc/xrdp/xrdp.ini + ``` +* Create a session with the root-access user + * Go to the home directory of the root-access user + * ``` + cd ~ + ``` +* Create the session + * ``` + echo "xfce4-session" | tee .xsession + ``` +* Restart the server + * ``` + sudo systemctl restart xrdp + ``` + +* Find your local computer IP address + * On your local computer terminal, write + * ``` + curl ifconfig.me + ``` + +* On the VM terminal, allow the client computer through the firewall (ufw) on the RDP port + * ``` + sudo ufw allow from your_local_ip/32 to any port 3389 + ``` +* Allow SSH connection to your firewall + * ``` + sudo ufw 
allow ssh + ``` +* Verify status of the firewall + * ``` + sudo ufw status + ``` + * If not active, do the following: + * ``` + sudo ufw disable + ``` + * ``` + sudo ufw enable + ``` + * Then the ufw status should show changes + * ``` + sudo ufw status + ``` + + +## Client Side: Install Remote Desktop Connection for Windows, MAC or Linux + +For the client side (the local computer accessing the VM remotely), you can use remote desktop connection for Windows, MAC and Linux. The process is very similar in all three cases. + +Simply download the app, open it and write the IPv4 address of the VM. You will then need to enter the username and password to access your VM. + +### Download the App + +* Client side Remote app + * Windows + * [Remote Desktop Connection app](https://apps.microsoft.com/store/detail/microsoft-remote-desktop/9WZDNCRFJ3PS?hl=en-ca&gl=ca&rtc=1) + * MAC + * Download from the App Store + * [Microsoft Remote Desktop Connection app](https://apps.apple.com/ca/app/microsoft-remote-desktop/id1295203466?mt=12) + * Linux + * [Remmina RDP Client](https://remmina.org/) + +### Connect Remotely + +* General process + * In the Remote app, enter the following: + * the IPv4 Address of the VM + * the VM root-access username and password + * You now have a remote desktop connection to your VM + + + +## Conclusion + +You now have remote access to the desktop environment of your VM. If you have any questions, let us know by writing a post on the [Threefold Forum](https://forum.threefold.io/).
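As a recap, the server-side commands above can be gathered into a single script. The sketch below is a dry run: it only collects and prints the commands instead of executing them, and the client IP is a placeholder for the output of `curl ifconfig.me`:

```shell
# Dry-run recap of the server-side XRDP setup: the commands are collected in a
# variable and printed, not executed. MY_IP is a placeholder for the client's
# public IP address (the output of `curl ifconfig.me`).
MY_IP="203.0.113.7"

CMDS="sudo apt install xfce4 xfce4-goodies xrdp -y
echo xfce4-session | tee ~/.xsession
sudo systemctl restart xrdp
sudo ufw allow from ${MY_IP}/32 to any port 3389
sudo ufw allow ssh"

printf '%s\n' "$CMDS"
```

To apply the setup for real, run the printed commands one by one on the VM as described in the section above.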
\ No newline at end of file diff --git a/collections/system_administrators/getstarted/sidebar.md b/collections/system_administrators/getstarted/sidebar.md new file mode 100644 index 0000000..b7edb51 --- /dev/null +++ b/collections/system_administrators/getstarted/sidebar.md @@ -0,0 +1,4 @@ +- [**Manual Home**](@manual3_home_new) +--------- +**Get Started** +!!!include:getstarted_toc \ No newline at end of file diff --git a/collections/system_administrators/getstarted/ssh_guide/ssh_guide.md b/collections/system_administrators/getstarted/ssh_guide/ssh_guide.md new file mode 100644 index 0000000..020e2ad --- /dev/null +++ b/collections/system_administrators/getstarted/ssh_guide/ssh_guide.md @@ -0,0 +1,10 @@ +

+# SSH Remote Connection

+ +SSH is a secure protocol used as the primary means of connecting to Linux servers remotely. It provides a text-based interface by spawning a remote shell. After connecting, all commands you type in your local terminal are sent to the remote server and executed there. + +

+## Table of Contents

+ +- [SSH with OpenSSH](./ssh_openssh.md) +- [SSH with PuTTY](./ssh_putty.md) +- [SSH with WSL](./ssh_wsl.md) +- [WireGuard Access](./ssh_wireguard.md) \ No newline at end of file diff --git a/collections/system_administrators/getstarted/ssh_guide/ssh_openssh.md b/collections/system_administrators/getstarted/ssh_guide/ssh_openssh.md new file mode 100644 index 0000000..87daa22 --- /dev/null +++ b/collections/system_administrators/getstarted/ssh_guide/ssh_openssh.md @@ -0,0 +1,281 @@ +

+# SSH Remote Connection with OpenSSH

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [Main Steps and Prerequisites](#main-steps-and-prerequisites) +- [Step-by-Step Process with OpenSSH](#step-by-step-process-with-openssh) + - [Linux](#linux) + - [SSH into a 3Node with IPv4 on Linux](#ssh-into-a-3node-with-ipv4-on-linux) + - [SSH into a 3Node with the Planetary Network on Linux](#ssh-into-a-3node-with-the-planetary-network-on-linux) + - [MAC](#mac) + - [SSH into a 3Node with IPv4 on MAC](#ssh-into-a-3node-with-ipv4-on-mac) + - [SSH into a 3Node with the Planetary Network on MAC](#ssh-into-a-3node-with-the-planetary-network-on-mac) + - [Windows](#windows) + - [SSH into a 3Node with IPv4 on Windows](#ssh-into-a-3node-with-ipv4-on-windows) + - [SSH into a 3Node with the Planetary Network on Windows](#ssh-into-a-3node-with-the-planetary-network-on-windows) +- [Questions and Feedback](#questions-and-feedback) + +*** + +# Introduction + +In this Threefold Guide, we show how easy it is to deploy a full virtual machine (VM) and SSH into a 3Node with [OpenSSH](https://www.openssh.com/) on Linux, MAC and Windows with both an IPv4 and a Planetary Network connection. To connect to the 3Node with WireGuard, read [this documentation](./ssh_wireguard.md). + +To deploy different workloads, the SSH connection process should be very similar. + +If you have any questions, feel free to write a post on the [Threefold Forum](http://forum.threefold.io/). + + +# Main Steps and Prerequisites + +Make sure to [read the introduction](../tfgrid3_getstarted.md#get-started---your-first-deployment) before going further. + +The main steps for the whole process are the following: + +* Create an SSH Key pair +* Deploy a 3Node + * Choose IPv4 or the Planetary Network +* SSH into the 3Node + * For the Planetary Network, download the Planetary Network Connector + + + +# Step-by-Step Process with OpenSSH + +## Linux + +### SSH into a 3Node with IPv4 on Linux + +Here are the steps to SSH into a 3Node with IPv4 on Linux. 
+ +* To create the SSH key pair, write in the terminal + * ``` + ssh-keygen + ``` + * Save in default location + * Write a password (optional) +* To see the public key, write in the terminal + * ``` + cat ~/.ssh/id_rsa.pub + ``` + * Select and copy the public key when needed +* To deploy a full VM + * On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine + * Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb + * Select IPv4 in `Network` + * In `Node Selection`, click on `Load Nodes` + * Click `Deploy` +* To SSH into the VM once the 3Node is deployed + * Copy the IPv4 address + * Open the terminal, write the following with the deployment address and write **yes** to confirm + * ``` + ssh root@IPv4_address + ``` + +You now have an SSH connection on Linux with IPv4. + + + +### SSH into a 3Node with the Planetary Network on Linux + +Here are the steps to SSH into a 3Node with the Planetary Network on Linux. 
+ +* Set a [Planetary Network connection](../planetarynetwork.md) +* To create the SSH key pair, write in the terminal + * ``` + ssh-keygen + ``` + * Save in default location + * Write a password (optional) +* To see the public key, write in the terminal + * ``` + cat ~/.ssh/id_rsa.pub + ``` + * Select and copy the public key when needed +* To deploy a full VM + * On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine + * Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb + * Select Planetary Network in `Network` + * In `Node Selection`, click on `Load Nodes` + * Click `Deploy` +* To SSH into the VM once the 3Node is deployed + * Copy the Planetary Network address + * Open the terminal, write the following with the deployment address and write **yes** to confirm + * ``` + ssh root@planetary_network_address + ``` + +You now have an SSH connection on Linux with the Planetary Network. + + + +## MAC + +### SSH into a 3Node with IPv4 on MAC + +Here are the steps to SSH into a 3Node with IPv4 on MAC. 
+ +* To create the SSH key pair, in the terminal write + * ``` + ssh-keygen + ``` + * Save in default location + * Write a password (optional) +* To see the public key, write in the terminal + * ``` + cat ~/.ssh/id_rsa.pub + ``` + * Select and copy the public key when needed +* To deploy a full VM + * On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine + * Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb + * Select IPv4 in `Network` + * In `Node Selection`, click on `Load Nodes` + * Click `Deploy` +* To SSH into the VM once the 3Node is deployed + * Copy the IPv4 address + * Open the terminal, write the following with the deployment address and write **yes** to confirm + * ``` + ssh root@IPv4_address + ``` + +You now have an SSH connection on MAC with IPv4. + + + +### SSH into a 3Node with the Planetary Network on MAC + +Here are the steps to SSH into a 3Node with the Planetary Network on MAC. 
+ +* Set a [Planetary Network connection](../planetarynetwork.md) +* To create the SSH key pair, write in the terminal + * ``` + ssh-keygen + ``` + * Save in default location + * Write a password (optional) +* To see the public key, write in the terminal + * ``` + cat ~/.ssh/id_rsa.pub + ``` + * Select and copy the public key when needed +* To deploy a full VM + * On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine + * Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb + * Select Planetary Network in `Network` + * In `Node Selection`, click on `Load Nodes` + * Click `Deploy` +* To SSH into the VM once the 3Node is deployed + * Copy the Planetary Network address + * Open the terminal, write the following with the deployment address and write **yes** to confirm + * ``` + ssh root@planetary_network_address + ``` + +You now have an SSH connection on MAC with the Planetary Network. 
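On Linux and MAC, the key-creation step above can also be run non-interactively. A minimal sketch, assuming the OpenSSH client is installed — it uses a scratch directory and an empty passphrase, and the file names are examples only:

```shell
# Create a 4096-bit RSA key pair non-interactively, in a scratch directory.
KEYDIR="$(mktemp -d)"
ssh-keygen -q -t rsa -b 4096 -N "" -f "$KEYDIR/id_rsa"

# This is the public key to paste into the Dashboard before deploying:
cat "$KEYDIR/id_rsa.pub"
```

Afterwards, you can point SSH at that key explicitly with `ssh -i "$KEYDIR/id_rsa" root@IPv4_address`.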
+ + + +## Windows + +### SSH into a 3Node with IPv4 on Windows + +* To download OpenSSH client and OpenSSH server + * Open the `Settings` and select `Apps` + * Click `Apps & Features` + * Click `Optional Features` + * Verify if OpenSSH Client and OpenSSH Server are there + * If not + * Click `Add a feature` + * Search OpenSSH + * Install OpenSSH Client and OpenSSH Server +* To create the SSH key pair, open `PowerShell` and write + * ``` + ssh-keygen + ``` + * Save in default location + * Write a password (optional) +* To see the public key, write in `PowerShell` + * ``` + cat ~/.ssh/id_rsa.pub + ``` + * Select and copy the public key when needed +* To deploy a full VM + * On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine + * Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb + * Select IPv4 in `Network` + * In `Node Selection`, click on `Load Nodes` + * Click `Deploy` +* To SSH into the VM once the 3Node is deployed + * Copy the IPv4 address + * Open `PowerShell`, write the following with the deployment address and write **yes** to confirm + * ``` + ssh root@IPv4_address + ``` + +You now have an SSH connection on Windows with IPv4. 
+ + + +### SSH into a 3Node with the Planetary Network on Windows + +* Set a [Planetary Network connection](../planetarynetwork.md) +* To download OpenSSH client and OpenSSH server + * Open the `Settings` and select `Apps` + * Click `Apps & Features` + * Click `Optional Features` + * Verify if OpenSSH Client and OpenSSH Server are there + * If not + * Click `Add a feature` + * Search OpenSSH + * Install OpenSSH Client and OpenSSH Server +* To create the SSH key pair, open `PowerShell` and write + * ``` + ssh-keygen + ``` + * Save in default location + * Write a password (optional) +* To see the public key, write in `PowerShell` + * ``` + cat ~/.ssh/id_rsa.pub + ``` + * Select and copy the public key when needed +* To deploy a full VM + * On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine + * Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb + * Select Planetary Network in `Network` + * In `Node Selection`, click on `Load Nodes` + * Click `Deploy` +* To SSH into the VM once the 3Node is deployed + * Copy the Planetary Network address + * Open `PowerShell`, write the following with the deployment address and write **yes** to confirm + * ``` + ssh root@planetary_network_address + ``` + +You now have an SSH connection on Windows with the Planetary Network. + + + +# Questions and Feedback + +If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/). \ No newline at end of file diff --git a/collections/system_administrators/getstarted/ssh_guide/ssh_putty.md new file mode 100644 index 0000000..c829f75 --- /dev/null +++ b/collections/system_administrators/getstarted/ssh_guide/ssh_putty.md @@ -0,0 +1,81 @@ +

+# SSH Remote Connection with PuTTY

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [Main Steps and Prerequisites](#main-steps-and-prerequisites) +- [SSH with PuTTY on Windows](#ssh-with-putty-on-windows) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this Threefold Guide, we show how easy it is to deploy a full virtual machine (VM) and SSH into a 3Node on Windows with [PuTTY](https://www.putty.org/). + +To deploy different workloads, the SSH connection process should be very similar. + +If you have any questions, feel free to write a post on the [Threefold Forum](http://forum.threefold.io/). + + + +## Main Steps and Prerequisites + +Make sure to [read the introduction](../tfgrid3_getstarted.md#get-started---your-first-deployment) before going further. + +The main steps for the whole process are the following: + +* Create an SSH Key pair +* Deploy a 3Node + * Choose IPv4 or the Planetary Network +* SSH into the 3Node + * For the Planetary Network, set a [Planetary Network connection](../planetarynetwork.md) + + + +## SSH with PuTTY on Windows + +Here are the main steps to SSH into a full VM using PuTTY on a Windows machine. 
+ +* Download [PuTTY](https://www.putty.org/) + * You can download the Windows Installer in .msi format [here](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) + * This will add both PuTTY and PuTTYgen to your computer + * Make sure that you have the latest version of PuTTY to avoid potential issues +* Generate an SSH key pair + * Open PuTTYgen + * In `Parameters`, you can set the type of key to `RSA` or to `EdDSA` + * Click on `Generate` + * Add a passphrase for your private key (optional) + * Take note of the generated SSH public key + * You will need to paste it to the Dashboard later + * Click `Save private key` +* To deploy a full VM + * Go to the following section of the [Threefold Dashboard](https://dashboard.grid.tf/): Deploy -> Virtual Machines -> Full Virtual Machine + * Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb + * Select IPv4 in `Network` + * In `Node Selection`, click on `Load Nodes` + * Click `Deploy` +* To SSH into the VM once the 3Node is deployed + * Take note of the IPv4 address +* Connect to the full VM with PuTTY + * Open PuTTY + * Go to the section `Session` + * Add the VM IPv4 address under `Host Name (or IP address)` + * Make sure `Connection type` is set to `SSH` + * Go to the section `Connection` -> `SSH` -> `Auth` -> `Credentials` + * Under `Private key file for authentication`, click on `Browse...` + * Look for the generated SSH private key in .ppk format and click `Open` + * In the main `PuTTY` window, click `Open` + * In the PuTTY terminal window, enter `root` as the login parameter + * Enter the passphrase for the private key if you set one + +You now have an SSH connection on Windows using PuTTY. + + + +## Questions and Feedback + +If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/). 
\ No newline at end of file diff --git a/collections/system_administrators/getstarted/ssh_guide/ssh_wireguard.md b/collections/system_administrators/getstarted/ssh_guide/ssh_wireguard.md new file mode 100644 index 0000000..260fab0 --- /dev/null +++ b/collections/system_administrators/getstarted/ssh_guide/ssh_wireguard.md @@ -0,0 +1,129 @@ +

+# WireGuard Access

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Deploy a Weblet with WireGuard Access](#deploy-a-weblet-with-wireguard-access) +- [Install WireGuard](#install-wireguard) +- [Set the WireGuard Configurations](#set-the-wireguard-configurations) + - [Linux and MAC](#linux-and-mac) + - [Windows](#windows) +- [Test the WireGuard Connection](#test-the-wireguard-connection) +- [SSH into the Deployment with Wireguard](#ssh-into-the-deployment-with-wireguard) +- [Questions and Feedback](#questions-and-feedback) + +*** + +# Introduction + +In this Threefold Guide, we show how to set up [WireGuard](https://www.wireguard.com/) to access a 3Node deployment with an SSH connection. + +Note that WireGuard provides the connection to the 3Node deployment. It is up to you to decide which SSH client you want to use. This means that the steps to SSH into a 3Node deployment will be similar to the steps proposed in the guides for [OpenSSH](./ssh_openssh.md), [PuTTY](ssh_putty.md) and [WSL](./ssh_wsl.md). Please refer to [this documentation](./ssh_guide.md) if you have any questions concerning SSH clients. The main difference will be that we connect to the 3Node deployment using a WireGuard connection instead of an IPv4 or a Planetary Network connection. + + + +# Prerequisites + +Make sure to [read the introduction](../tfgrid3_getstarted.md#get-started---your-first-deployment) before going further. + +* SSH client of your choice + * [OpenSSH](./ssh_openssh.md) + * [PuTTY](ssh_putty.md) + * [WSL](./ssh_wsl.md) + + + +# Deploy a Weblet with WireGuard Access + +For this guide on WireGuard access, we deploy a [Full VM](../../../dashboard/solutions/fullVm.md). Note that the whole process is similar for other types of ThreeFold weblets on the Dashboard. 
+ +* On the [Threefold Dashboard](https://dashboard.grid.tf/), go to: Deploy -> Virtual Machines -> Full Virtual Machine +* Choose the parameters you want + * Minimum CPU: 1 vCore + * Minimum Memory: 512 Mb + * Minimum Disk Size: 15 Gb +* Select `Add WireGuard Access` in `Network` +* In `Node Selection`, click on `Load Nodes` +* Click `Deploy` + +Once the Full VM is deployed, a window named **Details** will appear. You will need to take note of the **WireGuard Config** to set the WireGuard configurations and the **WireGuard IP** to SSH into the deployment. + +> Note: At anytime, you can open the **Details** window by clicking on the button **Show Details** under **Actions** on the Dashboard weblet page. + + + +# Install WireGuard + +To install WireGuard, please refer to the official [WireGuard installation documentation](https://www.wireguard.com/install/). + + + +# Set the WireGuard Configurations + +When it comes to setting the WireGuard configurations, the steps are similar for Linux and MAC, but differ slightly for Windows. For Linux and MAC, we will be using the CLI. For Windows, we will be using the WireGuard GUI app. + +## Linux and MAC + +To set the WireGuard connection on Linux or MAC, create a WireGuard configuration file and run WireGuard via the command line: + +* Copy the content **WireGuard Config** from the Dashboard **Details** window +* Paste the content to a file with the extension `.conf` (e.g. **wg.conf**) in the directory `/etc/wireguard` + * ``` + sudo nano /etc/wireguard/wg.conf + ``` +* Start WireGuard with the command **wg-quick** and, as a parameter, pass the configuration file without the extension (e.g. 
*wg.conf -> wg*) + * ``` + wg-quick up wg + ``` + * Note that you can also specify a config file by path, stored in any location + * ``` + wg-quick up /etc/wireguard/wg.conf + ``` +* If you want to stop the WireGuard service, you can write the following in the terminal + * ``` + wg-quick down wg + ``` + +> Note: If it doesn't work and you already did a WireGuard connection with the same file, write on the terminal `wg-quick down wg`, then `wg-quick up wg` to reset the connection with new configurations. + +## Windows + +To set the WireGuard connection on Windows, add and activate a tunnel with the WireGuard app: + +* Open the WireGuard GUI app +* Click on **Add Tunnel** and then **Add empty tunnel** +* Choose a name for the tunnel +* Erase the content of the main window and paste the content **WireGuard Config** from the Dashboard **Details** window +* Click **Save** and then click on **Activate**. + + + + +# Test the WireGuard Connection + +As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP address of the VM to make sure the WireGuard connection is properly established. Make sure to replace `VM_WireGuard_IP` with the proper WireGuard IP address: + +* Ping the deployment + * ``` + ping VM_WireGuard_IP + ``` + + + +# SSH into the Deployment with Wireguard + +To SSH into the deployment with Wireguard, use the **WireGuard IP** shown in the Dashboard **Details** window. + +* SSH into the deployment + * ``` + ssh root@VM_WireGuard_IP + ``` + +You now have access to the deployment over a WireGuard SSH connection. + + + +# Questions and Feedback + +If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/) or by reaching out to the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. 
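For orientation, the **WireGuard Config** shown in the Details window follows the standard WireGuard configuration format. The sketch below uses placeholder keys, addresses and endpoint only — always use the exact content provided by the Dashboard:

```
[Interface]
# The WireGuard IP assigned to your machine (placeholder value)
Address = 100.64.0.2/32
PrivateKey = <private_key_from_the_dashboard_config>

[Peer]
PublicKey = <node_public_key>
# Networks reachable through the tunnel (placeholder ranges)
AllowedIPs = 10.20.0.0/16, 100.64.0.0/16
# Public IP and port of the node (placeholder)
Endpoint = 203.0.113.10:5566
PersistentKeepalive = 25
```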
\ No newline at end of file diff --git a/collections/system_administrators/getstarted/ssh_guide/ssh_wsl.md b/collections/system_administrators/getstarted/ssh_guide/ssh_wsl.md new file mode 100644 index 0000000..793f877 --- /dev/null +++ b/collections/system_administrators/getstarted/ssh_guide/ssh_wsl.md @@ -0,0 +1,89 @@ +

+# SSH Remote Connection with WSL

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [SSH Key Generation](#ssh-key-generation) +- [Connect to Remote Host with SSH](#connect-to-remote-host-with-ssh) +- [Enable Port 22 in Windows Firewall](#enable-port-22-in-windows-firewall) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this Threefold Guide, we show how easy it is to SSH into a 3Node on Windows with [Windows Subsystem for Linux (WSL)](https://ubuntu.com/wsl). + +If you have any questions, feel free to write a post on the [Threefold Forum](http://forum.threefold.io/). + +## SSH Key Generation + +Make sure SSH is installed by entering the following command at the command prompt: + +```sh +sudo apt install openssh-client +``` + +The key generation process is identical to the process on a native Linux or Ubuntu installation. +With SSH installed, run the SSH key generator by typing the following: + +```sh +ssh-keygen -t rsa +``` + +Then choose the key name and passphrase or simply press return twice to accept the default values (`key name = id_rsa` and `no passphrase`). +When the process has finished, the private key and the public key can be found in the `~/.ssh` directory accessible from the Ubuntu terminal. +You can also access the key from the Windows file manager in the following folder: + +```sh +\\wsl$\Ubuntu\home\<username>\.ssh\ +``` + +Your private key will be generated using the default name (`id_rsa`) or the filename you specified. +The corresponding public key will be generated using the same filename but with a `.pub` extension added. 
+If you open the public key in a text editor it should contain something similar to this: + +``` +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNqqi1mHLnryb1FdbePrSZQdmXRZxGZbo0gTfglysq6KMNUNY2VhzmYN9JYW39yNtjhVxqfW6ewc+eHiL+IRRM1P5ecDAaL3V0ou6ecSurU+t9DR4114mzNJ5SqNxMgiJzbXdhR+j55GjfXdk0FyzxM3a5qpVcGZEXiAzGzhHytUV51+YGnuLGaZ37nebh3UlYC+KJev4MYIVww0tWmY+9GniRSQlgLLUQZ+FcBUjaqhwqVqsHe4F/woW1IHe7mfm63GXyBavVc+llrEzRbMO111MogZUcoWDI9w7UIm8ZOTnhJsk7jhJzG2GpSXZHmly/a/buFaaFnmfZ4MYPkgJD username@example.com +``` + +By copying the entire text, you can specify your public SSH key while connecting your wallet before deploying a VM. + +## Connect to Remote Host with SSH + +With the SSH key, you should be able to SSH to your account on the remote system from the computer that has your private key using the following command: + +```sh +ssh username@remote_IP_host +``` + +If the private key you're using does not have the default name, or is not stored in the default path (not `~/.ssh/id_rsa`), you must explicitly invoke it! +On the SSH command line, add the `-i` flag and the path to your private key. +For example, to invoke the private key `my_key`, stored in the `~/.ssh/keys` directory, when connecting to your account on a remote host, enter: + +```sh +ssh -i ~/.ssh/keys/my_key username@remote_IP_host +``` + +## Enable Port 22 in Windows Firewall + +Port 22 is used for Secure Shell (SSH) communication and allows remote administration access to the VM. +If needed, it can be unblocked as follows: + +- open Windows Firewall Advanced Settings +- click on `New Rule…` under `Inbound Rules` to create a new firewall rule +- under `Rule Type` select `Port` +- under `Protocol and Ports` select `TCP`, `Specific local Ports` and enter `22` +- under `Action` select `Allow the connection` +- under `Profile` make sure to only select `Domain` and `Private` + +NB: do not select `Public` unless you absolutely require a direct connection from the outside world. 
+This is not recommended, especially for portable devices (laptops, tablets) that connect to random Wi-Fi hotspots. + +- under `Name` + - Name: `SSH Server` + - Description: `SSH Server` + +## Questions and Feedback + +If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/). \ No newline at end of file diff --git a/collections/system_administrators/getstarted/tfgrid3_getstarted.md b/collections/system_administrators/getstarted/tfgrid3_getstarted.md new file mode 100644 index 0000000..dcfd756 --- /dev/null +++ b/collections/system_administrators/getstarted/tfgrid3_getstarted.md @@ -0,0 +1,32 @@ +# TFGrid Manual - Get Started + +## Get Started - Your First Deployment + +It's easy to get started on the TFGrid and deploy applications. + +- [Create a TFChain Account](../../dashboard/wallet_connector.md) +- [Get TFT](../../threefold_token/buy_sell_tft/buy_sell_tft.md) +- [Bridge TFT to TFChain](../../threefold_token/tft_bridges/tft_bridges.md) +- [Deploy an Application](../../dashboard/deploy/deploy.md) +- [SSH Remote Connection](./ssh_guide/ssh_guide.md) + - [SSH with OpenSSH](./ssh_guide/ssh_openssh.md) + - [SSH with PuTTY](./ssh_guide/ssh_putty.md) + - [SSH with WSL](./ssh_guide/ssh_wsl.md) + - [SSH and WireGuard](./ssh_guide/ssh_wireguard.md) + +## Grid Platforms + +- [TF Dashboard](../../dashboard/dashboard.md) +- [TF Flist Hub](../../developers/flist/flist_hub/zos_hub.md) + +## TFGrid Services and Resources + +- [TFGrid Services](./tfgrid_services/tf_grid_services_readme.md) + +## Advanced Deployment Techniques + +- [Advanced Topics](../advanced/advanced.md) + +*** + +If you have any questions, feel free to ask for help on the [Threefold Forum](https://forum.threefold.io/c/threefold-grid-utilization/support/). 
\ No newline at end of file diff --git a/collections/system_administrators/getstarted/tfgrid3_network_concepts.md b/collections/system_administrators/getstarted/tfgrid3_network_concepts.md new file mode 100644 index 0000000..b3cb580 --- /dev/null +++ b/collections/system_administrators/getstarted/tfgrid3_network_concepts.md @@ -0,0 +1,23 @@ +![ ](./getstarted/img/network_concepts_.jpg) + +# TFGrid Network Concepts + +## Peer 2 Peer Private Network + +![ ](./getstarted/img/peer2peer_net_.jpg) + +All Zmachines (or Kubernetes nodes) are connected to each other over private networks. + +When you use IaC tools or our clients, you can manually specify the network name and the IP network address to be used. + +## The Planetary Network + +## Web Gateway (experts only) + +![ ](./getstarted/img/webgw_.jpg) + +## More Info + +- [Planetary Network](https://library.threefold.me/info/threefold#/technology/threefold__planetary_network) +- [Web Gateway](https://library.threefold.me/info/threefold#/technology/threefold__webgw) +- [Z-Net = secure network between Z-Machines](https://library.threefold.me/info/threefold#/technology/threefold__znet) \ No newline at end of file diff --git a/collections/system_administrators/getstarted/tfgrid3_storage_concepts.md b/collections/system_administrators/getstarted/tfgrid3_storage_concepts.md new file mode 100644 index 0000000..22ce7bd --- /dev/null +++ b/collections/system_administrators/getstarted/tfgrid3_storage_concepts.md @@ -0,0 +1,8 @@ +# TFGrid Storage Concepts + +![ ](./getstarted/img/stfgrid3_storage_concepts_.jpg) + + +## ThreeFold Flist HUB + +See https://hub.grid.tf/ diff --git a/collections/system_administrators/getstarted/tfgrid3_what_to_know.md b/collections/system_administrators/getstarted/tfgrid3_what_to_know.md new file mode 100644 index 0000000..300caa1 --- /dev/null +++ b/collections/system_administrators/getstarted/tfgrid3_what_to_know.md @@ -0,0 +1,58 @@ +# TFGrid 3.0: What's There To Know + +- [Storage Concepts](./tfgrid3_storage_concepts.md) +- [Network Concepts](./tfgrid3_network_concepts.md) + +## Networking + +### Private network (ZNET) + +For a project that needs a private network spanning multiple nodes, this can be achieved with the network workload reservation. See [Network](/getstarted/tfgrid3_network_concepts.md) for details. + +### Planetary network + +For a project that wants its workloads directly connected to the planetary network, the planetary option must be enabled when deploying a VM or Kubernetes cluster. Check [Planetary network](https://library.threefold.me/info/threefold#/technology/threefold__planetary_network) for more info. + +### Public IPs +When you want a public IP assigned to your workload, you need to reserve the number of IPs along with your contract, and then you can attach it to the VM workload. + +## Exposing the workloads to the public + +Typically, if you reserved a public IP, you can do that directly and create a domain referencing your public IP.
ThreeFold also provides [Webgateway technology](https://library.threefold.me/info/threefold#/technology/threefold__webgw), a very cost-efficient technology to help with exposing your workloads. + +### How it works +Basically, you create a `domain reservation` that can be: +- `prefix` based, e.g. `mywebsite`, which will internally translate to `mywebsite.ghent01.devnet.grid.tf` +- `full domain`, e.g. `mysuperwebsite.com` (this needs to point to the gateway IP) + +Then you need to specify the Yggdrasil IP of your backend service, so the gateway knows where to redirect the traffic. + +#### TLS +As a user, you have two options: +- let the gateway terminate the TLS traffic for you and communicate with your workloads directly +- let the gateway forward the traffic to your backend and do the termination yourself (the recommended way if you are doing any sensitive business) + + +## Compute + +The VM workload is the only workload you will need, whether to run a full-blown VM or an [flist-based](/flist_hub/flist_hub.md) container. + +### How can I create an flist? + +The easiest way is by converting existing Docker images using [the hub](https://hub.grid.tf/docker-convert). + + +### How does an flist-based container run in a VM? +ZOS injects its own generic kernel while booting the container, based on the content of the filesystem. + +### Kubernetes +We leverage the VM primitive to allow provisioning Kubernetes clusters across multiple nodes, based on the k3os flist.
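Going back to the TLS options above: if you choose to terminate TLS on your own backend, you can test the setup with a self-signed certificate first. This is an illustrative sketch only (`mysuperwebsite.com` reuses the example domain above; use a CA-signed certificate in production):

```sh
# Generate a throwaway self-signed certificate for a backend that
# terminates TLS itself (for testing only)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/backend.key -out /tmp/backend.crt \
  -days 365 -subj "/CN=mysuperwebsite.com"
# Inspect the certificate subject to confirm it was created
openssl x509 -in /tmp/backend.crt -noout -subject
```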
+ + +## Exploring the capacity +You can easily check the available capacity using the [explorer-ui](dashboard/explorer/explorer_home.md). To plan your deployment, you can also use these [example queries](dashboard/explorer/explorer_graphql_examples.md). + +## Getting started + +Please check [Getting started](/getstarted/tfgrid3_getstarted.md) to get the necessary software and configurations. + diff --git a/collections/system_administrators/getstarted/tfgrid_services/tf_grid_services_readme.md b/collections/system_administrators/getstarted/tfgrid_services/tf_grid_services_readme.md new file mode 100644 index 0000000..0e732c3 --- /dev/null +++ b/collections/system_administrators/getstarted/tfgrid_services/tf_grid_services_readme.md @@ -0,0 +1,95 @@ +

ThreeFold Grid Services

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Devnet](#devnet) +- [QAnet](#qanet) +- [Testnet](#testnet) +- [Mainnet](#mainnet) + - [Supported Planetary Network Nodes](#supported-planetary-network-nodes) + +*** + +## Introduction + +In this article, we have aggregated a list of all the services running on the ThreeFold Grid 3 infrastructure, for your convenience. + +> Note: the usage of `dev` indicates a devnet service, +> and the usage of `test` indicates a testnet service. + +## Devnet + +- [TFChain](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.dev.grid.tf#/explorer) `wss://tfchain.dev.grid.tf` +- [GraphQL](https://graphql.dev.grid.tf/graphql) +- [Activation Service](https://activation.dev.grid.tf/activation/) +- [TFGrid Proxy](https://gridproxy.dev.grid.tf) +- [Grid Dashboard](https://dashboard.dev.grid.tf) + + +## QAnet + +- [TFChain](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.qa.grid.tf#/explorer) `wss://tfchain.qa.grid.tf` +- [GraphQL](https://graphql.qa.grid.tf/graphql) +- [Activation Service](https://activation.qa.grid.tf/activation/) +- [TFGrid Proxy](https://gridproxy.qa.grid.tf) +- [Grid Dashboard](https://dashboard.qa.grid.tf) + +## Testnet + +- [TFChain](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.test.grid.tf#/explorer) `wss://tfchain.test.grid.tf` +- [GraphQL](https://graphql.test.grid.tf/graphql) +- [Activation Service](https://activation.test.grid.tf/activation/) +- [TFGrid Proxy](https://gridproxy.test.grid.tf) +- [Grid Dashboard](https://dashboard.test.grid.tf) + +## Mainnet + +- [TFChain](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2F/tfchain.grid.tf#/explorer) `wss://tfchain.grid.tf` +- [GraphQL](https://graphql.grid.tf/graphql) +- [Activation Service](https://activation.grid.tf/activation/) +- [TFChain-Stellar Bridge](https://bridge.bsc.threefold.io/) +- [TFChain-Ethereum Bridge](https://bridge.eth.threefold.io/) +- [TFGrid Proxy](https://gridproxy.grid.tf) +- [Grid Dashboard](https://dashboard.grid.tf) + +### Supported 
Planetary Network Nodes + +``` + Peers: + [ + # Threefold Lochrist + tcp://gent01.grid.tf:9943 + tcp://gent02.grid.tf:9943 + tcp://gent03.grid.tf:9943 + tcp://gent04.grid.tf:9943 + tcp://gent01.test.grid.tf:9943 + tcp://gent02.test.grid.tf:9943 + tcp://gent01.dev.grid.tf:9943 + tcp://gent02.dev.grid.tf:9943 + # GreenEdge + tcp://gw291.vienna1.greenedgecloud.com:9943 + tcp://gw293.vienna1.greenedgecloud.com:9943 + tcp://gw294.vienna1.greenedgecloud.com:9943 + tcp://gw297.vienna1.greenedgecloud.com:9943 + tcp://gw298.vienna1.greenedgecloud.com:9943 + tcp://gw299.vienna2.greenedgecloud.com:9943 + tcp://gw300.vienna2.greenedgecloud.com:9943 + tcp://gw304.vienna2.greenedgecloud.com:9943 + tcp://gw306.vienna2.greenedgecloud.com:9943 + tcp://gw307.vienna2.greenedgecloud.com:9943 + tcp://gw309.vienna2.greenedgecloud.com:9943 + tcp://gw313.vienna2.greenedgecloud.com:9943 + tcp://gw324.salzburg1.greenedgecloud.com:9943 + tcp://gw326.salzburg1.greenedgecloud.com:9943 + tcp://gw327.salzburg1.greenedgecloud.com:9943 + tcp://gw328.salzburg1.greenedgecloud.com:9943 + tcp://gw330.salzburg1.greenedgecloud.com:9943 + tcp://gw331.salzburg1.greenedgecloud.com:9943 + tcp://gw333.salzburg1.greenedgecloud.com:9943 + tcp://gw422.vienna2.greenedgecloud.com:9943 + tcp://gw423.vienna2.greenedgecloud.com:9943 + tcp://gw424.vienna2.greenedgecloud.com:9943 + tcp://gw425.vienna2.greenedgecloud.com:9943 + ] +``` diff --git a/collections/system_administrators/gpu/gpu.md b/collections/system_administrators/gpu/gpu.md new file mode 100644 index 0000000..4d1c4bf --- /dev/null +++ b/collections/system_administrators/gpu/gpu.md @@ -0,0 +1,123 @@ +

GPU Support

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Filter and Reserve a GPU Node](#filter-and-reserve-a-gpu-node) + - [Filter Nodes](#filter-nodes) + - [Reserve a Node](#reserve-a-node) +- [Deploy a VM with GPU](#deploy-a-vm-with-gpu) +- [Install the GPU Driver](#install-the-gpu-driver) + - [AMD Driver](#amd-driver) + - [Nvidia Driver](#nvidia-driver) + - [With an AI Model](#with-an-ai-model) +- [Troubleshooting](#troubleshooting) +- [GPU Support Links](#gpu-support-links) + +*** + +## Introduction + +This section covers the essential information to deploy a node with a GPU. We also provide links to other parts of the manual covering GPU support. + +To use a GPU on the TFGrid, users need to rent a dedicated node. Once they have rented a dedicated node equipped with a GPU, users can deploy workloads on it. + + +## Filter and Reserve a GPU Node + +You can filter and reserve a GPU node using the [Dedicated Nodes section](../../dashboard/deploy/node_finder.md#dedicated-nodes) of the **ThreeFold Dashboard**. + +### Filter Nodes + +* Filter nodes using the vendor name + * In **Filters**, select **GPU's vendor name** + * A new window will appear below named **GPU'S VENDOR NAME** + * Write the name of the desired vendor (e.g. **nvidia**, **amd**) + +![image](./img/gpu_8.png) + +* Filter nodes using the device name + * In **Filters**, select **GPU's device name** + * A new window will appear below named **GPU'S DEVICE NAME** + * Write the name of the desired device (e.g. **GT218**) + +![image](./img/gpu_9.png) + +### Reserve a Node + +When you have decided which node to reserve, click on **Reserve** under the column **Actions**. Once you've rented a dedicated node that has a GPU, you can deploy GPU workloads. + +![image](./img/gpu_2.png) + + +## Deploy a VM with GPU + +Now that you've reserved a dedicated GPU node, it's time to deploy a VM to make use of the GPU! There are many ways to proceed. 
You can use the [Dashboard](../../dashboard/solutions/fullVm.md), [Go](../../developers/go/grid3_go_gpu.md), [Terraform](../terraform/terraform_gpu_support.md), etc. + +For example, deploying a VM with GPU on the Dashboard is easy. Simply set the GPU option and make sure to select your dedicated node, as shown here: +![image](./img/gpu_3.png) + +## Install the GPU Driver + +Once you've deployed a VM with GPU, you want to SSH into the VM and install the GPU driver. + +- SSH to the VM and get your system updated: +```bash +dpkg --add-architecture i386 +apt-get update +apt-get dist-upgrade +reboot +``` +- Find your driver installer + - [AMD driver](https://www.amd.com/en/support/linux-drivers) + - [Nvidia driver](https://www.nvidia.com/download/index.aspx) + +- You can see the node card details on the ThreeFold Dashboard or by using the following command lines: +```bash +lspci | grep VGA +lshw -c video +``` + +### AMD Driver + +- Download the GPU driver using `wget` + - For example: `wget https://repo.radeon.com/amdgpu-install/23.30.2/ubuntu/focal/amdgpu-install_5.7.50702-1_all.deb` +- Install the GPU driver using `apt-get`. Make sure to update ``. +```bash +apt-get install ./amdgpu-install_.deb +amdgpu-install --usecase="dkms,graphics,opencl,hip,rocm,rocmdev,opencl,hiplibsdk,mllib,mlsdk" --opencl=rocr --vulkan=pro --opengl=mesa +``` +- To verify that the GPU is properly installed, use the following command lines: +```bash +rocm-smi +rocminfo +``` +- You should see something like this: +![image](./img/gpu_4.png) +![image](./img/gpu_5.png) + +### Nvidia Driver + +For Nvidia, you can follow [these steps](https://linuxize.com/post/how-to-nvidia-drivers-on-ubuntu-20-04/#installing-the-nvidia-drivers-using-the-command-line). +- To verify that the GPU is properly installed, you can use `nvidia-smi`. 
You should see something like this: + + ![image](./img/gpu_6.png) + +### With an AI Model + +You can also try this [AI model](https://github.com/invoke-ai/InvokeAI#getting-started-with-invokeai) to install your driver. + +## Troubleshooting + +Here are some useful links to troubleshoot your GPU installation. + +- [Steps to install the driver](https://amdgpu-install.readthedocs.io/en/latest/index.html) +- Changing kernel version + - [Link 1](https://linux.how2shout.com/how-to-change-default-kernel-in-ubuntu-22-04-20-04-lts/) + - [Link 2](https://gist.github.com/chaiyujin/c08e59752c3e238ff3b1a5098322b363) + +> Note: It is recommended to use Ubuntu 22.04.2 LTS (GNU/Linux 5.18.13-051813-generic x86_64). + +## GPU Support Links + +You can consult the [GPU Table of Contents](./gpu_toc.md) to see all available GPU support links on the ThreeFold Manual. \ No newline at end of file diff --git a/collections/system_administrators/gpu/gpu_toc.md b/collections/system_administrators/gpu/gpu_toc.md new file mode 100644 index 0000000..073c79e --- /dev/null +++ b/collections/system_administrators/gpu/gpu_toc.md @@ -0,0 +1,19 @@ +

GPU on the TFGrid

+ +The ThreeFold Manual covers many ways to use a GPU node on the TFGrid. A good place to start would be the **GPU Introduction** section. + +Feel free to explore the different possibilities! + +

Table of Contents

+ +- [GPU Support](./gpu.md) +- [Node Finder and GPU](../../dashboard/deploy/node_finder.md#gpu-support) +- [Javascript Client and GPU](../../developers/javascript/grid3_javascript_gpu_support.md) +- [GPU and Go](../../developers/go/grid3_go_gpu.md) + - [GPU Support](../../developers/go/grid3_go_gpu_support.md) + - [Deploy a VM with GPU](../../developers/go/grid3_go_vm_with_gpu.md) +- [TFCMD and GPU](../../developers/tfcmd/tfcmd_vm.md#deploy-a-vm-with-gpu) +- [Terraform and GPU](../terraform/terraform_gpu_support.md) +- [Full VM and GPU](../../dashboard/solutions/fullVm.md) +- [Zero-OS API and GPU](../../developers/internals/zos/manual/api.md#gpus) +- [GPU Farming](../../farmers/3node_building/gpu_farming.md) \ No newline at end of file diff --git a/collections/system_administrators/gpu/img/gpu_1.png b/collections/system_administrators/gpu/img/gpu_1.png new file mode 100644 index 0000000..226811d Binary files /dev/null and b/collections/system_administrators/gpu/img/gpu_1.png differ diff --git a/collections/system_administrators/gpu/img/gpu_2.png b/collections/system_administrators/gpu/img/gpu_2.png new file mode 100644 index 0000000..c755a8e Binary files /dev/null and b/collections/system_administrators/gpu/img/gpu_2.png differ diff --git a/collections/system_administrators/gpu/img/gpu_3.png b/collections/system_administrators/gpu/img/gpu_3.png new file mode 100644 index 0000000..865e543 Binary files /dev/null and b/collections/system_administrators/gpu/img/gpu_3.png differ diff --git a/collections/system_administrators/gpu/img/gpu_4.png b/collections/system_administrators/gpu/img/gpu_4.png new file mode 100644 index 0000000..de931fd Binary files /dev/null and b/collections/system_administrators/gpu/img/gpu_4.png differ diff --git a/collections/system_administrators/gpu/img/gpu_5.png b/collections/system_administrators/gpu/img/gpu_5.png new file mode 100644 index 0000000..2738475 Binary files /dev/null and b/collections/system_administrators/gpu/img/gpu_5.png 
differ diff --git a/collections/system_administrators/gpu/img/gpu_6.png b/collections/system_administrators/gpu/img/gpu_6.png new file mode 100644 index 0000000..9bc04f8 Binary files /dev/null and b/collections/system_administrators/gpu/img/gpu_6.png differ diff --git a/collections/system_administrators/gpu/img/gpu_7.png b/collections/system_administrators/gpu/img/gpu_7.png new file mode 100644 index 0000000..3a6fecb Binary files /dev/null and b/collections/system_administrators/gpu/img/gpu_7.png differ diff --git a/collections/system_administrators/gpu/img/gpu_8.png b/collections/system_administrators/gpu/img/gpu_8.png new file mode 100644 index 0000000..483c514 Binary files /dev/null and b/collections/system_administrators/gpu/img/gpu_8.png differ diff --git a/collections/system_administrators/gpu/img/gpu_9.png b/collections/system_administrators/gpu/img/gpu_9.png new file mode 100644 index 0000000..3cdd3ee Binary files /dev/null and b/collections/system_administrators/gpu/img/gpu_9.png differ diff --git a/collections/system_administrators/mycelium/api_yaml.md b/collections/system_administrators/mycelium/api_yaml.md new file mode 100644 index 0000000..db405ba --- /dev/null +++ b/collections/system_administrators/mycelium/api_yaml.md @@ -0,0 +1,431 @@ +

API

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [File Example](#file-example) + +*** + +## Introduction + +We provide an example of a YAML API file. + + +## File Example + +``` + +openapi: 3.0.2 +info: + version: '1.0.0' + + title: Mycelium management + contact: + url: 'https://github.com/threefoldtech/mycelium' + license: + name: Apache 2.0 + url: 'https://github.com/threefoldtech/mycelium/blob/master/LICENSE' + + description: | + This is the specification of the **mycelium** management API. It is used to perform + administrative tasks on the system. + +externalDocs: + description: For full documentation, check out the mycelium github repo. + url: 'https://github.com/threefoldtech/mycelium' + +tags: + - name: Admin + description: Administrative operations + - name: Peer + description: Operations related to peer management + - name: Message + description: Operations on the embedded message subsystem + +servers: + - url: 'http://localhost:8989' + +paths: + '/api/v1/peers': + get: + tags: + - Admin + - Peer + summary: List known peers + description: | + List all peers known in the system, and info about their connection. + This includes the endpoint, how we know about the peer, the connection state, and if the connection is alive the amount + of bytes we've sent to and received from the peer. + operationId: getPeers + responses: + '200': + description: Success + content: + application/json: + schema: + type: array + items: + $ref: '#/components/schemas/PeerStats' + + '/api/v1/messages': + get: + tags: + - Message + summary: Get a message from the inbound message queue + description: | + Get a message from the inbound message queue. By default, the message is removed from the queue and won't be shown again. + If the peek query parameter is set to true, the message will be peeked, and the next call to this endpoint will show the same message. 
+ This method returns immediately by default: a message is returned if one is ready, and if there isn't, nothing is returned. If the timeout + query parameter is set, this call won't return for the given amount of seconds, unless a message is received. + operationId: popMessage + parameters: + - in: query + name: peek + required: false + schema: + type: boolean + description: Whether to peek the message or not. If this is true, the message won't be removed from the inbound queue when it is read + example: true + - in: query + name: timeout + required: false + schema: + type: integer + format: int64 + minimum: 0 + description: | + Amount of seconds to wait for a message to arrive if one is not available. Setting this to 0 is valid and will return + a message if present, or return immediately if there isn't + example: 60 + - in: query + name: topic + required: false + schema: + type: string + format: byte + minLength: 0 + maxLength: 340 + description: | + Optional filter for loading messages. If set, the system checks if the message has the given string at the start. This way + a topic can be encoded. + example: example.topic + responses: + '200': + description: Message retrieved + content: + application/json: + schema: + $ref: '#/components/schemas/InboundMessage' + '204': + description: No message ready + post: + tags: + - Message + summary: Submit a new message to the system. + description: | + Push a new message to the system's outbound message queue. The system will continuously attempt to send the message until + it is either fully transmitted, or the send deadline is expired. + operationId: pushMessage + parameters: + - in: query + name: reply_timeout + required: false + schema: + type: integer + format: int64 + minimum: 0 + description: | + Amount of seconds to wait for a reply to this message to come in. If not set, the system won't wait for a reply and return + the ID of the message, which can be used later. 
If set, the system will wait for at most the given amount of seconds for a reply + to come in. If a reply arrives, it is returned to the client. If not, the message ID is returned for later use. + example: 120 + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PushMessageBody' + responses: + '200': + description: We received a reply within the specified timeout + content: + application/json: + schema: + $ref: '#/components/schemas/InboundMessage' + + '201': + description: Message pushed successfully, and not waiting for a reply + content: + application/json: + schema: + $ref: '#/components/schemas/PushMessageResponseId' + '408': + description: The system timed out waiting for a reply to the message + content: + application/json: + schema: + $ref: '#/components/schemas/PushMessageResponseId' + + '/api/v1/messages/reply/{id}': + post: + tags: + - Message + summary: Reply to a message with the given ID + description: | + Submits a reply message to the system, where ID is an id of a previously received message. If the sender is waiting + for a reply, it will bypass the queue of open messages. + operationId: pushMessageReply + parameters: + - in: path + name: id + required: true + schema: + type: string + format: hex + minLength: 16 + maxLength: 16 + example: abcdef0123456789 + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PushMessageBody' + responses: + '204': + description: successfully submitted the reply + + '/api/v1/messages/status/{id}': + get: + tags: + - Message + summary: Get the status of an outbound message + description: | + Get information about the current state of an outbound message. This can be used to check the transmission + state, size and destination of the message. 
+ operationId: getMessageInfo + parameters: + - in: path + name: id + required: true + schema: + type: string + format: hex + minLength: 16 + maxLength: 16 + example: abcdef0123456789 + responses: + '200': + description: Success + content: + application/json: + schema: + $ref: '#/components/schemas/MessageStatusResponse' + '404': + description: Message not found + + +components: + schemas: + Endpoint: + description: Identification to connect to a peer + type: object + properties: + proto: + description: Protocol used + type: string + enum: + - 'tcp' + - 'quic' + example: tcp + socketAddr: + description: The socket address used + type: string + example: 192.0.2.6:9651 + + PeerStats: + description: Info about a peer + type: object + properties: + endpoint: + $ref: '#/components/schemas/Endpoint' + type: + description: How we know about this peer + type: string + enum: + - 'static' + - 'inbound' + - 'linkLocalDiscovery' + example: static + connectionState: + description: The current state of the connection to the peer + type: string + enum: + - 'alive' + - 'connecting' + - 'dead' + example: alive + connectionTxBytes: + description: The amount of bytes transmitted on the current connection + type: integer + format: int64 + minimum: 0 + example: 464531564 + connectionRxBytes: + description: The amount of bytes received on the current connection + type: integer + format: int64 + minimum: 0 + example: 64645089 + + InboundMessage: + description: A message received by the system + type: object + properties: + id: + description: Id of the message, hex encoded + type: string + format: hex + minLength: 16 + maxLength: 16 + example: 0123456789abcdef + srcIp: + description: Sender overlay IP address + type: string + format: ipv6 + example: 249:abcd:0123:defa::1 + srcPk: + description: Sender public key, hex encoded + type: string + format: hex + minLength: 64 + maxLength: 64 + example: fedbca9876543210fedbca9876543210fedbca9876543210fedbca9876543210 + dstIp: + description: 
Receiver overlay IP address + type: string + format: ipv6 + example: 34f:b680:ba6e:7ced:355f:346f:d97b:eecb + dstPk: + description: Receiver public key, hex encoded. This is the public key of the system + type: string + format: hex + minLength: 64 + maxLength: 64 + example: 02468ace13579bdf02468ace13579bdf02468ace13579bdf02468ace13579bdf + topic: + description: An optional message topic + type: string + format: byte + minLength: 0 + maxLength: 340 + example: hpV+ + payload: + description: The message payload, encoded in standard alphabet base64 + type: string + format: byte + example: xuV+ + + PushMessageBody: + description: A message to send to a given receiver + type: object + properties: + dst: + $ref: '#/components/schemas/MessageDestination' + topic: + description: An optional message topic + type: string + format: byte + minLength: 0 + maxLength: 340 + example: hpV+ + payload: + description: The message to send, base64 encoded + type: string + format: byte + example: xuV+ + + MessageDestination: + oneOf: + - description: An IP in the subnet of the receiver node + type: object + properties: + ip: + description: The target IP of the message + format: ipv6 + example: 249:abcd:0123:defa::1 + - description: The hex encoded public key of the receiver node + type: object + properties: + pk: + description: The hex encoded public key of the target node + type: string + minLength: 64 + maxLength: 64 + example: bb39b4a3a4efd70f3e05e37887677e02efbda14681d0acd3882bc0f754792c32 + + PushMessageResponseId: + description: The ID generated for a message after pushing it to the system + type: object + properties: + id: + description: Id of the message, hex encoded + type: string + format: hex + minLength: 16 + maxLength: 16 + example: 0123456789abcdef + + MessageStatusResponse: + description: Information about an outbound message + type: object + properties: + dst: + description: IP address of the receiving node + type: string + format: ipv6 + example: 249:abcd:0123:defa::1 + 
state: + $ref: '#/components/schemas/TransmissionState' + created: + description: Unix timestamp of when this message was created + type: integer + format: int64 + example: 1649512789 + deadline: + description: Unix timestamp of when this message will expire. If the message is not received before this, the system will give up + type: integer + format: int64 + example: 1649513089 + msgLen: + description: Length of the message in bytes + type: integer + minimum: 0 + example: 27 + + TransmissionState: + description: The state of an outbound message in its lifetime + oneOf: + - type: string + enum: ['pending', 'received', 'read', 'aborted'] + example: 'received' + - type: object + properties: + sending: + type: object + properties: + pending: + type: integer + minimum: 0 + example: 5 + sent: + type: integer + minimum: 0 + example: 17 + acked: + type: integer + minimum: 0 + example: 3 + example: 'received' + + +``` \ No newline at end of file diff --git a/collections/system_administrators/mycelium/data_packet.md b/collections/system_administrators/mycelium/data_packet.md new file mode 100644 index 0000000..529a4b5 --- /dev/null +++ b/collections/system_administrators/mycelium/data_packet.md @@ -0,0 +1,66 @@ + 

Data Packet

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Packet Header](#packet-header) +- [Body](#body) + +*** + +## Introduction + + +A `data packet` contains user-specified data. This can be any data, as long as the sender and receiver +both understand what it is, without further help. Intermediate hops, which route the data, have sufficient +information from the header to know where to forward the packet. In practice, the data will be encrypted +to avoid eavesdropping by intermediate hops. + +## Packet Header + +The packet header has a fixed size of 36 bytes, with the following layout: + +``` + 0 1 2 3 + 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 ++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ +|Reserved | Length | Hop Limit | ++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ +| | ++ + +| | ++ Source IP + +| | ++ + +| | ++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ +| | ++ + +| | ++ Destination IP + +| | ++ + +| | ++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ +``` + +The first 5 bits are reserved and must be set to 0. + +The next 19 bits are used to specify the length of the body. It is expected that +the actual length of a packet does not exceed 256K right now, so the 19th bit is +only needed because we have to account for some overhead related to the encryption. + +The next byte is the hop-limit. Every node decrements this value by 1 before sending +the packet. If a node decrements this value to 0, the packet is discarded. + +The next 16 bytes contain the sender IP address. + +The final 16 bytes contain the destination IP address. + +## Body + +Following the header is a variable-length body. The protocol does not impose any requirements on the +body's content; the only requirement is that the body is as long as specified in the header length +field. It is technically legal according to the protocol to transmit a data packet without a body, +i.e. a body length of 0. 
This is useless however, as there will not be any data to interpret. diff --git a/collections/system_administrators/mycelium/information.md b/collections/system_administrators/mycelium/information.md new file mode 100644 index 0000000..dccefe1 --- /dev/null +++ b/collections/system_administrators/mycelium/information.md @@ -0,0 +1,193 @@ + +

Additional Information

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Connect to Other Nodes](#connect-to-other-nodes) +- [Hosted Public Nodes](#hosted-public-nodes) +- [Default Port](#default-port) +- [Check Network Information](#check-network-information) +- [Test the Network](#test-the-network) +- [Key Pair](#key-pair) +- [Running without TUN interface](#running-without-tun-interface) +- [API](#api) +- [Message System](#message-system) +- [Inspecting Node Keys](#inspecting-node-keys) +- [Troubleshooting](#troubleshooting) + - [Root Access](#root-access) + - [Enable IPv6 at the OS Level](#enable-ipv6-at-the-os-level) + - [VPN Can Block Mycelium](#vpn-can-block-mycelium) + - [Add Peers](#add-peers) + +*** + +## Introduction + +We provide additional information concerning Mycelium and how to properly use it. + +## Connect to Other Nodes + +If you want to connect to other nodes, you can specify their listening address as +part of the command, combined with the protocol they are listening on (usually TCP): + +```sh +mycelium --peers tcp://83.231.240.31:9651 quic://185.206.122.71:9651 +``` + +If another TUN interface is already in use, e.g. utun3 (the default), you can set a different TUN interface: + +```sh +mycelium --peers tcp://83.231.240.31:9651 quic://185.206.122.71:9651 --tun-name utun9 +``` + +## Hosted Public Nodes + +A couple of public nodes are provided, which can be freely connected to. This allows +anyone to join the global network. These are hosted in 3 geographic regions, on both +IPv4 and IPv6, and support both the TCP and QUIC protocols. 
The nodes are the +following: + +| Node ID | Region | IPv4 | IPv6 | TCP port | QUIC port | +| --- | --- | --- | --- | --- | --- | +| 01 | DE | 188.40.132.242 | 2a01:4f8:221:1e0b::2 | 9651 | 9651 | +| 02 | DE | 136.243.47.186 | 2a01:4f8:212:fa6::2 | 9651 | 9651 | +| 03 | BE | 185.69.166.7 | 2a02:1802:5e:0:8478:51ff:fee2:3331 | 9651 | 9651 | +| 04 | BE | 185.69.166.8 | 2a02:1802:5e:0:8c9e:7dff:fec9:f0d2 | 9651 | 9651 | +| 05 | FI | 65.21.231.58 | 2a01:4f9:6a:1dc5::2 | 9651 | 9651 | +| 06 | FI | 65.109.18.113 | 2a01:4f9:5a:1042::2 | 9651 | 9651 | + +These nodes are all interconnected, so 2 peers who each connect to a different node +(or set of disjoint nodes) will still be able to reach each other. For optimal performance, +however, it is recommended to connect to all of the above at once. An example connection +string could be: + +`--peers tcp://188.40.132.242:9651 "tcp://[2a01:4f8:212:fa6::2]:9651" quic://185.69.166.7:9651 "tcp://[2a02:1802:5e:0:8c9e:7dff:fec9:f0d2]:9651" tcp://65.21.231.58:9651 "quic://[2a01:4f9:5a:1042::2]:9651"` + +It is up to the user to decide which peers to use, and over which protocol. +Note that quoting may or may not be required, depending on which shell is being +used. + +## Default Port + +By default, the node will listen on port `9651`, though this can be overridden with the `-p` flag. + +## Check Network Information + +You can check your Mycelium network information by running the following line: + +```bash +mycelium inspect --json +``` + +A typical output would be: + +``` +{ + "publicKey": "abd16194646defe7ad2318a0f0a69eb2e3fe939c3b0b51cf0bb88bb8028ecd1d", + "address": "3c4:c176:bf44:b2ab:5e7e:f6a:b7e2:11ca" +} +``` + +## Test the Network + +You can easily test that the network works by pinging anyone in the network. + +``` +ping6 3c4:c176:bf44:b2ab:5e7e:f6a:b7e2:11ca +``` + +## Key Pair + +The node uses an `x25519` key pair from which its identity is derived. 
The private key of this key pair +is saved in a local file (32 bytes in binary format). You can specify the path to this file with the +`-k` flag. By default, the file is saved in the current working directory as `priv_key.bin`. + +## Running without TUN interface + +It is possible to run the system without creating a TUN interface, by starting with the `--no-tun` flag. +Obviously, this means that your node won't be able to send or receive L3 traffic. There is no interface +to send packets on, and consequently no interface to send received packets out of. From the point of +view of other nodes, your node will simply drop all incoming L3 traffic destined for it. The node **will still +route traffic** as normal. It takes part in routing, exchanges route info, and forwards packets not +intended for itself. + +The node also still allows access to the [message subsystem](#message-system). + +## API + +The node starts an HTTP API, which by default listens on `localhost:8989`. A different listening address +can be specified on the CLI when starting the system through the `--api-server-addr` flag. The API +allows access to [send and receive messages](#message-system), and will later be expanded to allow +admin functionality on the system. Note that messages are sent using the identity of the node, and a +future admin API can be used to change the system behavior. As such, care should be taken that this +API is not accessible to unauthorized users. + +## Message System + +A message system is provided which allows users to send a message, which is essentially just "some data", +to a remote. Since the system is end-to-end encrypted, a receiver of a message is sure of the authenticity +and confidentiality of the content. The system does not interpret the data in any way and handles it +as an opaque block of bytes. Messages are sent with a deadline. This means the system continuously +tries to send (part of) the message, until it either succeeds or the deadline expires. 
This happens +similarly to the way TCP handles data. Messages are transmitted in chunks, which are embedded in the +same data stream used by L3 packets. As such, intermediate nodes can't distinguish between regular L3 +and message data. + +The primary way to interact with the message system is through [the API](#api). The message API is +documented [here](./api_yaml.md). For more info about how to +use the message system, see [the Message section](./message.md). + + +## Inspecting Node Keys + +Using the `inspect` subcommand, you can view the address associated with a public key. If no public key is provided, the node will show +its own public key. In either case, the derived address is also printed. You can specify the path to the private key with the `-k` flag. +If the file does not exist, a new private key will be generated. The optional `--json` flag can be used to print the information in JSON +format. + +```sh +mycelium inspect a47c1d6f2a15b2c670d3a88fbe0aeb301ced12f7bcb4c8e3aa877b20f8559c02 +``` + +The output could be something like this: + +```sh +Public key: a47c1d6f2a15b2c670d3a88fbe0aeb301ced12f7bcb4c8e3aa877b20f8559c02 +Address: 27f:b2c5:a944:4dad:9cb1:da4:8bf7:7e65 +``` + +## Troubleshooting + +### Root Access + +You might need to run Mycelium as root. A typical error message in this case is `Error: NixError(EPERM)`. + +### Enable IPv6 at the OS Level + +You need to enable IPv6 at the OS level. A typical error message in this case is `Permission denied (os error 13)`. + +- Check if IPv6 is enabled + - If disabled, the output is 1; if enabled, the output is 0 + ``` + sysctl net.ipv6.conf.all.disable_ipv6 + ``` +- Enable IPv6 + ``` + sudo sysctl net.ipv6.conf.all.disable_ipv6=0 + ``` + +Here are some commands to troubleshoot IPv6: + +``` +sudo ip6tables -S INPUT +sudo ip6tables -S OUTPUT +``` + +### VPN Can Block Mycelium + +You might need to disconnect your VPN when using Mycelium. + +### Add Peers + +It can help to connect to other peers. 
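For instance, here is a small Python helper to sanity-check peer endpoint strings before passing them to `--peers`. It is illustrative only; `mycelium` itself does its own parsing and validation.

```python
from urllib.parse import urlsplit

# Transport protocols mentioned in this document; mycelium may accept others.
ALLOWED_SCHEMES = {"tcp", "quic"}


def check_peer(uri: str) -> str:
    """Validate a peer endpoint such as tcp://188.40.132.242:9651 or
    quic://[2a01:4f8:212:fa6::2]:9651 and return it unchanged."""
    parts = urlsplit(uri)
    if parts.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"unsupported protocol: {parts.scheme!r}")
    # .port raises ValueError on a malformed port, and is None if absent.
    if parts.hostname is None or parts.port is None:
        raise ValueError(f"peer must include host and port: {uri!r}")
    return uri


peers = [
    "tcp://188.40.132.242:9651",
    "quic://[2a01:4f8:212:fa6::2]:9651",
]
print(" ".join(check_peer(p) for p in peers))
```

Note that IPv6 peer addresses must be wrapped in square brackets, as in the connection string example above.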
Check the Mycelium repository for [peers](https://github.com/threefoldtech/mycelium?tab=readme-ov-file#hosted-public-nodes). \ No newline at end of file diff --git a/collections/system_administrators/mycelium/installation.md b/collections/system_administrators/mycelium/installation.md new file mode 100644 index 0000000..b3bafa9 --- /dev/null +++ b/collections/system_administrators/mycelium/installation.md @@ -0,0 +1,116 @@ + +

Installation

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Considerations](#considerations) +- [Set Mycelium](#set-mycelium) +- [Start Mycelium](#start-mycelium) +- [Use Mycelium](#use-mycelium) +- [Mycelium Service (optional)](#mycelium-service-optional) + +*** + +## Introduction + +In this section, we cover how to install Mycelium. This guide can be followed on a local machine as well as on a full VM running on the TFGrid. + +Currently, Linux, macOS and Windows are supported. On Windows, you must have `wintun.dll` in the same directory you are executing the binary from. + +## Considerations + +You might need to run Mycelium as root, enable IPv6 at the OS level and disconnect your VPN. + +Read the [Troubleshooting](./information.md#troubleshooting) section for more information. + +## Set Mycelium + +- Update the system + ``` + apt update + ``` +- Download the latest Mycelium release: [https://github.com/threefoldtech/mycelium/releases/latest](https://github.com/threefoldtech/mycelium/releases/latest) + ``` + wget https://github.com/threefoldtech/mycelium/releases/download/v0.4.0/mycelium-x86_64-unknown-linux-musl.tar.gz + ``` +- Extract Mycelium + ``` + tar -xvf mycelium-x86_64-unknown-linux-musl.tar.gz + ``` +- Move Mycelium to your path + ``` + mv mycelium /usr/local/bin + ``` + +## Start Mycelium + +You can now start Mycelium as follows. + +- Start Mycelium + ``` + mycelium --peers tcp://83.231.240.31:9651 quic://185.206.122.71:9651 --tun-name utun2 + ``` +- Open another terminal +- Check the Mycelium connection information (address and public key) + ``` + mycelium inspect --json + ``` + +## Use Mycelium + +Once you've set up Mycelium, you can use it to ping other addresses and to connect to VMs running on the TFGrid. 
+ +- Ping the VM from another machine with IPv6 + ``` + ping6 mycelium_address + ``` +- SSH into a VM running on the TFGrid + ``` + ssh root@vm_mycelium_address + ``` + +## Mycelium Service (optional) + +You can create a systemd service to make sure Mycelium is always enabled and running. + +- Create a Mycelium service + ```bash + nano /etc/systemd/system/mycelium.service + ``` +- Set the service and save the file + ``` + [Unit] + Description=End-2-end encrypted IPv6 overlay network + Wants=network.target + After=network.target + Documentation=https://github.com/threefoldtech/mycelium + + [Service] + ProtectHome=true + ProtectSystem=true + SyslogIdentifier=mycelium + CapabilityBoundingSet=CAP_NET_ADMIN + StateDirectory=mycelium + StateDirectoryMode=0700 + ExecStartPre=+-/sbin/modprobe tun + ExecStart=/usr/local/bin/mycelium --tun-name mycelium -k %S/mycelium/key.bin --peers tcp://146.185.93.83:9651 quic://83.231.240.31:9651 quic://185.206.122.71:9651 tcp://[2a04:f340:c0:71:28cc:b2ff:fe63:dd1c]:9651 tcp://[2001:728:1000:402:78d3:cdff:fe63:e07e]:9651 quic://[2a10:b600:1:0:ec4:7aff:fe30:8235]:9651 + Restart=always + RestartSec=5 + TimeoutStopSec=5 + + [Install] + WantedBy=multi-user.target + ``` +- Enable the service + ``` + systemctl daemon-reload + systemctl enable mycelium + systemctl start mycelium + ``` +- Verify that the Mycelium service is properly running + ``` + systemctl status mycelium + ``` + +Systemd will start Mycelium, restart it if it ever crashes, and start it automatically after any reboot. \ No newline at end of file diff --git a/collections/system_administrators/mycelium/message.md b/collections/system_administrators/mycelium/message.md new file mode 100644 index 0000000..f132367 --- /dev/null +++ b/collections/system_administrators/mycelium/message.md @@ -0,0 +1,90 @@ +

Message Subsystem

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Curl Examples](#curl-examples) +- [Mycelium Binary Examples](#mycelium-binary-examples) + +*** + +## Introduction + +The message subsystem can be used to send arbitrary-length messages to receivers. A receiver is any +other node in the network. It can be identified either by its public key or by an IP address in its announced +range. The message subsystem can be accessed either via the HTTP API, which is +[documented here](./api_yaml.md), or via the `mycelium` binary. By default, the system does not interpret +the message data in any way. When using the binary, the message is slightly modified to include an optional +topic at the start of the message. Note that in the HTTP API, all messages are encoded in base64. This +might make it difficult to consume these messages without additional tooling. + +## Curl Examples + +These examples assume you have at least 2 nodes running, and that they are both part of the same network. + +Send a message on node1, waiting up to 2 minutes for a possible reply: + +```bash +curl -v -H 'Content-Type: application/json' -d '{"dst": {"pk": "bb39b4a3a4efd70f3e05e37887677e02efbda14681d0acd3882bc0f754792c32"}, "payload": "xuV+"}' http://localhost:8989/api/v1/messages\?reply_timeout\=120 +``` + +Listen for a message on node2. Note that messages received while nothing is listening are added to +a queue for later consumption. Wait for up to 1 minute. 
+ +```bash +curl -v http://localhost:8989/api/v1/messages\?timeout\=60 +``` + +The system will (immediately) receive our previously sent message: + +```json +{"id":"e47b25063912f4a9","srcIp":"34f:b680:ba6e:7ced:355f:346f:d97b:eecb","srcPk":"955bf6bea5e1150fd8e270c12e5b2fc08f08f7c5f3799d10550096cc137d671b","dstIp":"2e4:9ace:9252:630:beee:e405:74c0:d876","dstPk":"bb39b4a3a4efd70f3e05e37887677e02efbda14681d0acd3882bc0f754792c32","payload":"xuV+"} +``` + +To send a reply, we can post a message on the reply path, with the received message `id` (still on +node2): + +```bash +curl -H 'Content-Type: application/json' -d '{"dst": {"pk":"955bf6bea5e1150fd8e270c12e5b2fc08f08f7c5f3799d10550096cc137d671b"}, "payload": "xuC+"}' http://localhost:8989/api/v1/messages/reply/e47b25063912f4a9 +``` + +If you did this fast enough, the initial sender (node1) will now receive the reply. + +## Mycelium Binary Examples + +As explained above, when using the binary the message is slightly modified to insert the optional +topic. As such, when using the binary to send messages, it is suggested to make sure the receiver is +also using the binary to listen for messages. The options discussed here do not cover all possibilities; +use the `--help` flag (`mycelium message send --help` and `mycelium message receive --help`) for a +full overview. + +Once again, send a message. This time using a topic (example.topic). Note that there are no constraints +on what a valid topic is, other than that it must be valid UTF-8 and at most 255 bytes in size. The `--wait` +flag can be used to indicate that we are waiting for a reply. If it is set, we can also use an additional +`--timeout` flag to govern exactly how long (in seconds) to wait. The default is to wait forever. + +```bash +mycelium message send 2e4:9ace:9252:630:beee:e405:74c0:d876 'this is a message' -t example.topic --wait +``` + +On the second node, listen for messages with this topic. 
If a different topic is used, the previous +message won't be received. If no topic is set, all messages are received. An optional timeout flag +can be specified, which indicates how long to wait for. Absence of this flag will cause the binary +to wait forever. + +```bash +mycelium message receive -t example.topic +``` + +Again, if the previous command was executed a message will be received immediately: + +```json +{"id":"4a6c956e8d36381f","topic":"example.topic","srcIp":"34f:b680:ba6e:7ced:355f:346f:d97b:eecb","srcPk":"955bf6bea5e1150fd8e270c12e5b2fc08f08f7c5f3799d10550096cc137d671b","dstIp":"2e4:9ace:9252:630:beee:e405:74c0:d876","dstPk":"bb39b4a3a4efd70f3e05e37887677e02efbda14681d0acd3882bc0f754792c32","payload":"this is a message"} +``` + +And once again, we can use the ID from this message to reply to the original sender, who might be waiting +for this reply (notice we used the hex encoded public key to identify the receiver here, rather than an IP): + +```bash +mycelium message send 955bf6bea5e1150fd8e270c12e5b2fc08f08f7c5f3799d10550096cc137d671b "this is a reply" --reply-to 4a6c956e8d36381f +``` diff --git a/collections/system_administrators/mycelium/mycelium_toc.md b/collections/system_administrators/mycelium/mycelium_toc.md new file mode 100644 index 0000000..817771d --- /dev/null +++ b/collections/system_administrators/mycelium/mycelium_toc.md @@ -0,0 +1,14 @@ + +

Mycelium

+ +In this section, we present [Mycelium](https://github.com/threefoldtech/mycelium), an end-to-end encrypted IPv6 overlay network. + +

Table of Contents

+ +- [Overview](./overview.md) +- [Installation](./installation.md) +- [Additional Information](./information.md) +- [Message](./message.md) +- [Packet](./packet.md) +- [Data Packet](./data_packet.md) +- [API YAML](./api_yaml.md) \ No newline at end of file diff --git a/collections/system_administrators/mycelium/overview.md b/collections/system_administrators/mycelium/overview.md new file mode 100644 index 0000000..d7778a6 --- /dev/null +++ b/collections/system_administrators/mycelium/overview.md @@ -0,0 +1,33 @@ + +

Overview

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Features](#features) +- [Testing](#testing) + +*** + +## Introduction + +Mycelium is an end-to-end encrypted IPv6 overlay network written in Rust. Each node that joins the overlay network receives an overlay network IP in the 400::/7 range. + +The overlay network uses some of the core principles of the [Babel routing protocol](https://www.irif.fr/~jch/software/babel). + + +## Features + +- Mycelium is locality aware: it will look for the shortest path between nodes +- All traffic between the nodes is end-to-end encrypted +- Traffic can be routed over nodes of friends (location aware) +- If a physical link goes down, Mycelium will automatically reroute your traffic +- The IP address is IPv6 and linked to the node's private key +- A simple, reliable message bus is implemented on top of Mycelium +- Mycelium supports multiple transport protocols (QUIC, TCP, ...), and we are working on hole punching for QUIC, which means P2P traffic without middlemen for NATted networks, e.g. most homes +- Scalability is very important for us: we tried many overlay networks before and got stuck on all of them, so we are trying to design a network which scales to a planetary level +- You can run Mycelium without a TUN interface and only use it as a reliable message bus. + +## Testing + +We are looking for lots of testers to push the system. Visit the [Mycelium repository](https://github.com/threefoldtech/mycelium) to contribute. \ No newline at end of file diff --git a/collections/system_administrators/mycelium/packet.md b/collections/system_administrators/mycelium/packet.md new file mode 100644 index 0000000..4ee2215 --- /dev/null +++ b/collections/system_administrators/mycelium/packet.md @@ -0,0 +1,33 @@ + +

Packet

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Packet Header](#packet-header) + +*** + +## Introduction + + +A `Packet` is the largest communication object between established `peers`. All communication is done +via these `packets`. The `packet` itself consists of a fixed size header, and a variable size body. +The body contains a more specific type of data. + +## Packet Header + +The packet header has a fixed size of 4 bytes, with the following layout: + +``` + 0 1 2 3 + 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 ++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ +| Version | Type | Reserved | ++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ +``` + +The first byte is used to indicate the version of the protocol. Currently, only version 1 is supported +(0x01). The next byte is used to indicate the type of the body. `0x00` indicates a data packet, while +`0x01` indicates a control packet. The remaining 16 bits are currently reserved, and should be set to +all 0. diff --git a/collections/system_administrators/pulumi/img/pulumi_logo.svg b/collections/system_administrators/pulumi/img/pulumi_logo.svg new file mode 100644 index 0000000..26d4c65 --- /dev/null +++ b/collections/system_administrators/pulumi/img/pulumi_logo.svg @@ -0,0 +1,7 @@ + + + + + + + diff --git a/collections/system_administrators/pulumi/pulumi_deployment_details.md b/collections/system_administrators/pulumi/pulumi_deployment_details.md new file mode 100644 index 0000000..5dc808c --- /dev/null +++ b/collections/system_administrators/pulumi/pulumi_deployment_details.md @@ -0,0 +1,449 @@ +

Deployment Details

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Installation](#installation) +- [Essential Workflow](#essential-workflow) + - [State](#state) + - [Creating an Empty Stack](#creating-an-empty-stack) + - [Bringing up the Infrastructure](#bringing-up-the-infrastructure) + - [Destroy the Infrastructure](#destroy-the-infrastructure) + - [Pulumi Makefile](#pulumi-makefile) +- [Creating a Network](#creating-a-network) + - [Pulumi File](#pulumi-file) +- [Creating a Virtual Machine](#creating-a-virtual-machine) +- [Kubernetes](#kubernetes) +- [Creating a Domain](#creating-a-domain) + - [Example of a Simple Domain Prefix](#example-of-a-simple-domain-prefix) + - [Example of a Fully Controlled Domain](#example-of-a-fully-controlled-domain) +- [Conclusion](#conclusion) + +*** + +## Introduction + +We present here noteworthy details concerning the different types of deployments that are possible with the ThreeFold Pulumi plugin. + +Please note that the Pulumi plugin for ThreeFold Grid is not yet officially published. We look forward to your feedback on this project. + +## Installation + +If you haven't already done so, [install Pulumi](./pulumi_install.md) on your machine. + +## Essential Workflow + +### State + +We create a state directory and tell Pulumi to use that local directory to manage the state; for the sake of testing, there is no need to use a cloud backend managed by Pulumi or other providers. + +```sh + mkdir ${current_dir}/state + pulumi login --cloud-url file://${current_dir}/state +``` + +### Creating an Empty Stack + +```sh + pulumi stack init test +``` + +### Bringing up the Infrastructure + +```sh + pulumi up --yes +``` + +Here we create an empty stack named `test` using `stack init`; +then, to bring up the infrastructure, we execute `pulumi up --yes`. 
+ +> The `pulumi up` command shows the plan before executing it + +### Destroy the Infrastructure + +```sh + pulumi destroy --yes + pulumi stack rm --yes + pulumi logout +``` + +### Pulumi Makefile + +In every example directory, you will find a project file `Pulumi.yaml` and a `Makefile` to reduce the amount of typing: + +```Makefile +current_dir = $(shell pwd) + +run: + rm -rf ${current_dir}/state + mkdir ${current_dir}/state + pulumi login --cloud-url file://${current_dir}/state + pulumi stack init test + pulumi up --yes + +destroy: + pulumi destroy --yes + pulumi stack rm --yes + pulumi logout +``` + +This means that, to deploy, you just need to type `make run`, and to destroy, you need to type `make destroy`. + +## Creating a Network + +We address here how to create a [network](https://github.com/threefoldtech/pulumi-provider-grid/blob/development/examples/network). + +### Pulumi File + +You can find the original file [here](https://github.com/threefoldtech/pulumi-provider-grid/blob/development/examples/network/Pulumi.yaml). + +```yml +name: pulumi-provider-grid +runtime: yaml + +plugins: + providers: + - name: grid + path: ../.. + +resources: + provider: + type: pulumi:providers:grid + properties: + mnemonic: + + scheduler: + type: grid:internal:Scheduler + options: + provider: ${provider} + properties: + farm_ids: [1] + + network: + type: grid:internal:Network + options: + provider: ${provider} + dependsOn: + - ${scheduler} + properties: + name: testing + description: test network + nodes: + - ${scheduler.nodes[0]} + ip_range: 10.1.0.0/16 + +outputs: + node_deployment_id: ${network.node_deployment_id} + nodes_ip_range: ${network.nodes_ip_range} +``` + +We will now go through this file section by section to properly understand what is happening. + +```yml +name: pulumi-provider-grid +runtime: yaml +``` + +- **name** is the project name (it can be anything) +- **runtime** is the runtime we are using; the program can be written in YAML, Python, Go, etc. 
+ +```yml +plugins: + providers: + - name: grid + path: ../.. + +``` + +Here, we define the plugins we are using within our project and their locations. Note that we use `../..` due to the repository hierarchy. + +```yml +resources: + provider: + type: pulumi:providers:grid + properties: + mnemonic: + +``` + +We then start by initializing the resources. The provider which we loaded in the plugins section is also a resource that has properties (the main one for now is just the TFChain mnemonic). + +```yaml + scheduler: + type: grid:internal:Scheduler + options: + provider: ${provider} + properties: + farm_ids: [1] +``` + +Then, we create a scheduler `grid:internal:Scheduler` that does the planning for us. Instead of being too specific about node IDs, we just give it some generic information. For example, "I want to work against these data centers (farms)". As long as the necessary criteria are provided, the scheduler can be more specific in the planning and select the appropriate resources available on the TFGrid. + +```yaml + network: + type: grid:internal:Network + options: + provider: ${provider} + dependsOn: + - ${scheduler} + properties: + name: testing + description: test network + nodes: + - ${scheduler.nodes[0]} + ip_range: 10.1.0.0/16 +``` + +Now that we've created the scheduler, we can go ahead and create the network resource `grid:internal:Network`. Note that the network depends on the scheduler's existence; this is why we have the `dependsOn` section. If we removed it, the scheduler and the network would be created in parallel. We then specify the network resource properties, e.g. the name, the description, the nodes to deploy our network on, and the IP range of the network. In our case, we only choose one node. + +To access information related to our deployment, we set the **outputs** section. This will display results that we can use, or reuse, while we develop our infrastructure further. 
+ +```yaml +outputs: + node_deployment_id: ${network.node_deployment_id} + nodes_ip_range: ${network.nodes_ip_range} +``` + +## Creating a Virtual Machine + +Now, we will check an [example](https://github.com/threefoldtech/pulumi-provider-grid/blob/development/examples/virtual_machine) on how to create a virtual machine. + +Just like we've seen above, we will have two files `Makefile` and `Pulumi.yaml` where we describe the infrastructure. + +```yml +name: pulumi-provider-grid +runtime: yaml + +plugins: + providers: + - name: grid + path: ../.. + +resources: + provider: + type: pulumi:providers:grid + properties: + mnemonic: + + scheduler: + type: grid:internal:Scheduler + options: + provider: ${provider} + properties: + mru: 256 + sru: 2048 + farm_ids: [1] + + network: + type: grid:internal:Network + options: + provider: ${provider} + dependsOn: + - ${scheduler} + properties: + name: test + description: test network + nodes: + - ${scheduler.nodes[0]} + ip_range: 10.1.0.0/16 + + deployment: + type: grid:internal:Deployment + options: + provider: ${provider} + dependsOn: + - ${network} + properties: + node_id: ${scheduler.nodes[0]} + name: deployment + network_name: test + vms: + - name: vm + flist: https://hub.grid.tf/tf-official-apps/base:latest.flist + entrypoint: "/sbin/zinit init" + network_name: test + cpu: 2 + memory: 256 + planetary: true + mounts: + - disk_name: data + mount_point: /app + env_vars: + SSH_KEY: + + disks: + - name: data + size: 2 + +outputs: + node_deployment_id: ${deployment.node_deployment_id} + ygg_ip: ${deployment.vms_computed[0].ygg_ip} +``` + +We have a scheduler, and a network just like before. But now, we also have a deployment `grid:internal:Deployment` object that can have one or more disks and virtual machines. 
+ +```yaml +deployment: + type: grid:internal:Deployment + options: + provider: ${provider} + dependsOn: + - ${network} + properties: + node_id: ${scheduler.nodes[0]} + name: deployment + network_name: test + vms: + - name: vm + flist: https://hub.grid.tf/tf-official-apps/base:latest.flist + entrypoint: "/sbin/zinit init" + network_name: test + cpu: 2 + memory: 256 + planetary: true + mounts: + - disk_name: data + mount_point: /app + env_vars: + SSH_KEY: + + disks: + - name: data + size: 2 +``` + +The deployment can be linked to a network using `network_name` and can have virtual machines in the `vms` section, and disks in the `disks` section. The disk can be linked and mounted in the VM if `disk_name` is used in the `mounts` section of the VM. + +We also specify a couple of essential properties, like how many virtual cores, how much memory, what FList to use, and the environment variables in the `env_vars` section. + +That's it! You can now execute `make run` to bring the infrastructure up. + +## Kubernetes + +We now see how to deploy a [Kubernetes cluster using Pulumi](https://github.com/threefoldtech/pulumi-provider-grid/blob/development/examples/kubernetes/Pulumi.yaml). + +```yaml + content was removed for brevity + kubernetes: + type: grid:internal:Kubernetes + options: + provider: ${provider} + dependsOn: + - ${network} + properties: + master: + name: kubernetes + node: ${scheduler.nodes[0]} + disk_size: 2 + planetary: true + cpu: 2 + memory: 2048 + + workers: + - name: worker1 + node: ${scheduler.nodes[0]} + disk_size: 2 + cpu: 2 + memory: 2048 + - name: worker2 + node: ${scheduler.nodes[0]} + disk_size: 2 + cpu: 2 + memory: 2048 + + token: t123456789 + network_name: test + ssh_key: + +outputs: + node_deployment_id: ${kubernetes.node_deployment_id} + ygg_ip: ${kubernetes.master_computed.ygg_ip} +``` + +Now, we define the Kubernetes resource `grid:internal:Kubernetes` that has master and workers blocks. 
You define almost everything as you would for a normal VM, except for the FList. Also note that the token is the `cluster token`; it ensures that the workers and the master communicate properly. + +## Creating a Domain + +The ThreeFold Pulumi repository also covers examples on [how to work with TFGrid gateways](https://github.com/threefoldtech/pulumi-provider-grid/blob/development/examples/gateway_name/Pulumi.yaml). + +The basic idea is that you have a virtual machine workload on a specific IP, e.g. public IPv4, IPv6, or Planetary Network, and you want to access it using domains. + +There are two ways to achieve this: a simple and a fully controlled version. + +- Simple domain version: + - subdomain.gent01.dev.grid.tf + - This is a generous service from ThreeFold to reserve a subdomain on a set of defined gateway domains like **gent01.dev.grid.tf**. +- Fully controlled domain version: + - e.g. `mydomain.com` where you manage the domain with the name provider. + +### Example of a Simple Domain Prefix + +We present here the file for a simple domain prefix. + +```yml + content was removed for brevity + scheduler: + type: grid:internal:Scheduler + options: + provider: ${provider} + properties: + mru: 256 + farm_ids: [1] + ipv4: true + free_ips: 1 + + gatewayName: + type: grid:internal:GatewayName + options: + provider: ${provider} + dependsOn: + - ${scheduler} + properties: + name: pulumi + node_id: ${scheduler.nodes[0]} + backends: + - "http://69.164.223.208" + +outputs: + node_deployment_id: ${gatewayName.node_deployment_id} + fqdn: ${gatewayName.fqdn} + +``` + +In this example, we create a gateway name resource `grid:internal:GatewayName` for the name `pulumi.gent01.dev.grid.tf`. + +Some things to note: + +- **pulumi** is the prefix we want to reserve. +- It assumes that the gateway domain received from the scheduler is the one managed by Freefarm, `gent01.dev.grid.tf`. 
+- **backends:** defines a list of IPs to load balance against when a request for `pulumi.gent01.dev.grid.tf` is received on the gateway. + +### Example of a Fully Controlled Domain + +Here's an [example](https://github.com/threefoldtech/pulumi-provider-grid/blob/development/examples/gateway_fqdn/Pulumi.yaml) of a more complicated, but fully controlled domain. + +```yml + code removed for brevity + gatewayFQDN: + type: grid:internal:GatewayFQDN + options: + provider: ${provider} + dependsOn: + - ${deployment} + properties: + name: testing + node_id: 14 + fqdn: mydomain.com + backends: + - http://[${deployment.vms_computed[0].ygg_ip}]:9000 +``` + +Here, we informed the gateway that any request coming for the domain `mydomain.com` needs to be balanced through the backends. + +> Note: You need to create an A record for your domain (here `mydomain.com`) pointing to the gateway IP. + +## Conclusion + +We covered in this guide some basic details concerning the use of the ThreeFold Pulumi plugin. + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/system_administrators/pulumi/pulumi_examples.md b/collections/system_administrators/pulumi/pulumi_examples.md new file mode 100644 index 0000000..73363c3 --- /dev/null +++ b/collections/system_administrators/pulumi/pulumi_examples.md @@ -0,0 +1,89 @@ +

# Deployment Examples

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Set the Environment Variables](#set-the-environment-variables) +- [Test the Plugin](#test-the-plugin) +- [Destroy the Deployment](#destroy-the-deployment) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +[Pulumi](https://www.pulumi.com/) is an infrastructure as code platform that allows you to use familiar programming languages and tools to build, deploy, and manage cloud infrastructure. + +We present here the basic steps to test the examples within the [ThreeFold Pulumi](https://github.com/threefoldtech/pulumi-threefold) plugin repository. Once you've set up the plugin and exported the necessary variables, the deployment process from one example to another is very similar. + +Please note that the Pulumi plugin for ThreeFold Grid is not yet officially published. We look forward to your feedback on this project. + +## Prerequisites + +There are a few things to set up before exploring Pulumi. Since we will be using the examples in the ThreeFold Pulumi repository, we must clone the repository before going further. + +* [Install Pulumi](./pulumi_install.md) on your machine +* Clone the **Pulumi-ThreeFold** repository + * ``` + git clone https://github.com/threefoldtech/pulumi-threefold + ``` +* Change directory + * ``` + cd ./pulumi-threefold + ``` + +## Set the Environment Variables + +You can export the environment variables before deploying workloads. + +* Export the network (**dev**, **qa**, **test**, **main**). Note that we are using the **dev** network by default. + * ``` + export NETWORK="Enter the network" + ``` +* Export your mnemonics. + * ``` + export MNEMONIC="Enter the mnemonics" + ``` +* Export the SSH_KEY (public key). + * ``` + export SSH_KEY="Enter the public key" + ``` + +## Test the Plugin + +Once you've properly set the prerequisites, you can test many of the examples by simply going into the proper directory and running **make run**. 
+ +The different examples that work simply by running **make run** are the following: + +* virtual_machine +* kubernetes +* network +* zdb +* gateway_name + +We give an example with **virtual_machine**. + +* Go to the directory **virtual_machine** + * ``` + cd examples/virtual_machine + ``` +* Deploy the Pulumi workload with **make** + * ``` + make run + ``` + +Note: To test **gateway_fqdn**, you will have to update the fqdn in **Pulumi.yaml** and create an A record for your domain pointing to the gateway IP. + + +## Destroy the Deployment + +You can destroy your Pulumi deployment at any time with the following make command: + +``` +make destroy +``` + +## Questions and Feedback + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/system_administrators/pulumi/pulumi_install.md b/collections/system_administrators/pulumi/pulumi_install.md new file mode 100644 index 0000000..93262a8 --- /dev/null +++ b/collections/system_administrators/pulumi/pulumi_install.md @@ -0,0 +1,44 @@ +

# Installing Pulumi

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Installation](#installation) +- [Verification](#verification) + +*** + +## Introduction + +You can install [Pulumi](https://www.pulumi.com/) on Linux, macOS and Windows. + +To install Pulumi, simply follow the steps provided in the [Pulumi documentation](https://www.pulumi.com/docs/install/). We cover the basic steps here for convenience. + +## Installation + +* Install on Linux + * ``` + curl -fsSL https://get.pulumi.com | sh + ``` +* Install on macOS + * ``` + brew install pulumi/tap/pulumi + ``` +* Install on Windows + * ``` + choco install pulumi + ``` + +For Linux, if you prefer to inspect the shell script before executing it, you can download and review it first. + +For Windows, note that there are other installation methods. Read the [Pulumi documentation](https://www.pulumi.com/docs/install/) for more information. + +## Verification + +To verify that Pulumi is properly installed on your machine, use the following command: + +``` +pulumi version +``` + +If you need more in-depth information, e.g. installing a specific version or migrating from an older version, please check the [installation documentation](https://www.pulumi.com/docs/install/). \ No newline at end of file diff --git a/collections/system_administrators/pulumi/pulumi_intro.md b/collections/system_administrators/pulumi/pulumi_intro.md new file mode 100644 index 0000000..4595724 --- /dev/null +++ b/collections/system_administrators/pulumi/pulumi_intro.md @@ -0,0 +1,130 @@ +

# Introduction to Pulumi

+ +With Pulumi, you can express your infrastructure requirements using the languages you know and love, creating a seamless bridge between development and operations. Let's go! + +

## Table of Contents

+ +- [Introduction](#introduction) +- [Benefits of Using Pulumi](#benefits-of-using-pulumi) +- [Declarative vs. Imperative Programming](#declarative-vs-imperative-programming) + - [Declaration Programming Example](#declaration-programming-example) + - [Benefits of declarative programming in IaC](#benefits-of-declarative-programming-in-iac) +- [Concepts](#concepts) + - [Pulumi Project](#pulumi-project) + - [Project File](#project-file) + - [Stacks](#stacks) + - [Resources](#resources) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +[ThreeFold Grid](https://threefold.io) is a decentralized cloud infrastructure platform that provides developers with a secure and scalable way to deploy and manage their applications. It is based on a peer-to-peer network of nodes that are distributed around the world. + +[Pulumi](https://www.pulumi.com/) is a cloud-native infrastructure as code (IaC) platform that allows developers to manage their infrastructure using code. It supports a wide range of cloud providers, including ThreeFold Grid. + +The [Pulumi plugin for ThreeFold Grid](https://github.com/threefoldtech/pulumi-provider-grid) provides developers with a way to deploy and manage their ThreeFold Grid resources using Pulumi. This means that developers can benefit from all of the features that Pulumi offers, such as cross-cloud support, type safety, preview and diff, and parallel execution (still in the works). + +Please note that the Pulumi plugin for ThreeFold Grid is not yet officially published. We look forward to your feedback on this project. + +## Benefits of Using Pulumi + +Here are some additional benefits of using the Pulumi plugin for ThreeFold Grid: + +- Increased productivity: Pulumi allows developers to manage their infrastructure using code, which can significantly increase their productivity. 
+- Reduced errors: Pulumi's type safety and preview and diff features can help developers catch errors early, which can reduce the number of errors that occur in production. +- Improved collaboration: Pulumi programs can be shared with other developers, which can make it easier to collaborate on infrastructure projects. + +The Pulumi plugin for ThreeFold Grid is a powerful tool that can be used to deploy and manage a wide range of ThreeFold Grid applications. It is a good choice for developers who want to manage their ThreeFold Grid infrastructure using code and benefit from all of the features that Pulumi offers. + +## Declarative vs. Imperative Programming + +Declarative programming and imperative programming are two different ways to write code. Declarative programming focuses on describing the desired outcome, while imperative programming focuses on describing the steps needed to achieve that outcome. + +In the context of infrastructure as code (IaC), declarative programming allows you to describe your desired infrastructure state, and the IaC tool will figure out how to achieve it. Imperative programming, on the other hand, requires you to describe the steps needed to create and configure your infrastructure. + +### Declaration Programming Example + +Say I want an infrastructure of two virtual machines with X disks. With a declarative tool, I simply describe that desired end state, and the tool carries out the imperative steps for me: + +1. Connect to the backend services. +2. Send the requests to create the virtual machines. +3. Sign the requests. +4. Execute the requests in the correct order. + +As you can see, the declarative description is much simpler and easier to read than the steps it replaces. It also makes it easier to make changes to your infrastructure, as you only need to change the desired state, and the IaC tool will figure out how to achieve it. + +### Benefits of declarative programming in IaC + +There are several benefits to using declarative programming in IaC: + +- Simpler code: Declarative code is simpler and easier to read than imperative code. 
This is because declarative code focuses on describing the desired outcome, rather than the steps needed to achieve it. +- More concise code: Declarative code is also more concise than imperative code. This is because declarative code does not need to specify the steps needed to achieve the desired outcome. +- Easier to make changes: Declarative code makes it easier to make changes to your infrastructure. This is because you only need to change the desired state, and the IaC tool will figure out how to achieve it. +- More reliable code: Declarative code is more reliable than imperative code. This is because declarative code does not need to worry about the order in which the steps are executed. The IaC tool will take care of that. + +We will look at a couple of examples later on, linking each example's source directory and going through it. But first, let's cover some concepts. + +## Concepts + +### Pulumi Project + +A Pulumi project is any folder that contains a **Pulumi.yaml** file. When in a subfolder, the closest enclosing folder with a **Pulumi.yaml** file determines the current project. A new project can be created with `pulumi new`. A project specifies which runtime to use and determines where to look for the program that should be executed during deployments. Supported runtimes are nodejs, python, dotnet, go, java, and yaml. + +### Project File + +The **Pulumi.yaml** project file specifies metadata about your project. The file name must begin with a capitalized P, although either a **.yml** or **.yaml** extension will work. + +A typical Pulumi.yaml file looks like the following: + +```yaml +name: my-project +runtime: + name: go + options: + binary: mybinary +description: A minimal Go Pulumi program +``` + +or + +```yaml +name: my-project +runtime: yaml +resources: + bucket: + type: aws:s3:Bucket + +``` + +For more on projects and project files, please check the [Pulumi documentation](https://www.pulumi.com/docs/concepts/projects/). 
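To connect these concepts back to the ThreeFold provider, here is a sketch of what a minimal project file for a grid deployment could look like, reusing the `grid:internal:Scheduler` resource type from the gateway examples earlier. The exact property set is an assumption to verify against the provider documentation:

```yml
name: grid-scheduler-demo
runtime: yaml
description: A minimal ThreeFold Grid Pulumi program (sketch)

resources:
  # provider resource omitted for brevity, as in the examples above
  scheduler:
    type: grid:internal:Scheduler
    options:
      provider: ${provider}
    properties:
      mru: 256
      farm_ids: [1]

outputs:
  node: ${scheduler.nodes[0]}
```

Running `pulumi up` in the folder containing this file would create one stack resource (the scheduler) and export the selected node ID as a stack output.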
+ +### Stacks + +Every Pulumi program is deployed to a [stack](https://www.pulumi.com/docs/concepts/stack/). A stack is an isolated, independently configurable instance of a Pulumi program. Stacks are commonly used to denote different phases of development (such as development, staging, and production) or feature branches (such as feature-x-dev). + +A project can have as many stacks as you need. By default, Pulumi creates a stack for you when you start a new project using the **pulumi new** command. + +### Resources + +Resources represent the fundamental units that make up your cloud infrastructure, such as a compute instance, a storage bucket, or a Kubernetes cluster. + +All infrastructure resources are described by one of two subclasses of the Resource class. These two subclasses are: + +- CustomResource: A custom resource is a cloud resource managed by a resource provider such as AWS, Microsoft Azure, Google Cloud, or Kubernetes. +- ComponentResource: A component resource is a logical grouping of other resources that creates a larger, higher-level abstraction that encapsulates its implementation details. + +Here's an example: + +```yaml +resources: + res: + type: the:resource:Type + properties: ...args + options: ...options +``` + +## Questions and Feedback + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/system_administrators/pulumi/pulumi_readme.md b/collections/system_administrators/pulumi/pulumi_readme.md new file mode 100644 index 0000000..0cc318c --- /dev/null +++ b/collections/system_administrators/pulumi/pulumi_readme.md @@ -0,0 +1,12 @@ +

# Pulumi Plugin

+ +Welcome to the *Pulumi Plugin* section of the ThreeFold Manual! + +In this section, we will explore the dynamic world of infrastructure as code (IaC) through the lens of Pulumi, a versatile tool that empowers you to define, deploy, and manage infrastructure using familiar programming languages. + +

## Table of Contents

+ +- [Introduction to Pulumi](./pulumi_intro.md) +- [Installing Pulumi](./pulumi_install.md) +- [Deployment Examples](./pulumi_examples.md) +- [Deployment Details](./pulumi_deployment_details.md) \ No newline at end of file diff --git a/collections/system_administrators/system_administrators.md b/collections/system_administrators/system_administrators.md new file mode 100644 index 0000000..2c5074b --- /dev/null +++ b/collections/system_administrators/system_administrators.md @@ -0,0 +1,95 @@ +# ThreeFold System Administrators + +This section covers all practical tutorials for system administrators working on the ThreeFold Grid. + +For complementary information on ThreeFold grid and its cloud component, refer to the [Cloud](../../knowledge_base/cloud/cloud_toc.md) section. + +

## Table of Contents

+ +- [Getting Started](./getstarted/tfgrid3_getstarted.md) + - [SSH Remote Connection](./getstarted/ssh_guide/ssh_guide.md) + - [SSH with OpenSSH](./getstarted/ssh_guide/ssh_openssh.md) + - [SSH with PuTTY](./getstarted/ssh_guide/ssh_putty.md) + - [SSH with WSL](./getstarted/ssh_guide/ssh_wsl.md) + - [WireGuard Access](./getstarted/ssh_guide/ssh_wireguard.md) + - [Remote Desktop and GUI](./getstarted/remote-desktop_gui/remote-desktop_gui.md) + - [Cockpit: a Web-based Interface for Servers](./getstarted/remote-desktop_gui/cockpit_guide/cockpit_guide.md) + - [XRDP: an Open-Source Remote Desktop Protocol](./getstarted/remote-desktop_gui/xrdp_guide/xrdp_guide.md) + - [Apache Guacamole: a Clientless Remote Desktop Gateway](./getstarted/remote-desktop_gui/guacamole_guide/guacamole_guide.md) +- [Planetary Network](./getstarted/planetarynetwork.md) +- [TFGrid Services](./getstarted/tfgrid_services/tf_grid_services_readme.md) +- [GPU](./gpu/gpu_toc.md) + - [GPU Support](./gpu/gpu.md) +- [Terraform](./terraform/terraform_toc.md) + - [Overview](./terraform/terraform_readme.md) + - [Installing Terraform](./terraform/terraform_install.md) + - [Terraform Basics](./terraform/terraform_basics.md) + - [Full VM Deployment](./terraform/terraform_full_vm.md) + - [GPU Support](./terraform/terraform_gpu_support.md) + - [Resources](./terraform/resources/terraform_resources_readme.md) + - [Using Scheduler](./terraform/resources/terraform_scheduler.md) + - [Virtual Machine](./terraform/resources/terraform_vm.md) + - [Web Gateway](./terraform/resources/terraform_vm_gateway.md) + - [Kubernetes Cluster](./terraform/resources/terraform_k8s.md) + - [ZDB](./terraform/resources/terraform_zdb.md) + - [Zlogs](./terraform/resources/terraform_zlogs.md) + - [Quantum Safe Filesystem](./terraform/resources/terraform_qsfs.md) + - [QSFS on Micro VM](./terraform/resources/terraform_qsfs_on_microvm.md) + - [QSFS on Full VM](./terraform/resources/terraform_qsfs_on_full_vm.md) + - 
[CapRover](./terraform/resources/terraform_caprover.md) + - [Advanced](./terraform/advanced/terraform_advanced_readme.md) + - [Terraform Provider](./terraform/advanced/terraform_provider.md) + - [Terraform Provisioners](./terraform/advanced/terraform_provisioners.md) + - [Mounts](./terraform/advanced/terraform_mounts.md) + - [Capacity Planning](./terraform/advanced/terraform_capacity_planning.md) + - [Updates](./terraform/advanced/terraform_updates.md) + - [SSH Connection with Wireguard](./terraform/advanced/terraform_wireguard_ssh.md) + - [Set a Wireguard VPN](./terraform/advanced/terraform_wireguard_vpn.md) + - [Synced MariaDB Databases](./terraform/advanced/terraform_mariadb_synced_databases.md) + - [Nomad](./terraform/advanced/terraform_nomad.md) + - [Nextcloud Deployments](./terraform/advanced/terraform_nextcloud_toc.md) + - [Nextcloud All-in-One Deployment](./terraform/advanced/terraform_nextcloud_aio.md) + - [Nextcloud Single Deployment](./terraform/advanced/terraform_nextcloud_single.md) + - [Nextcloud Redundant Deployment](./terraform/advanced/terraform_nextcloud_redundant.md) + - [Nextcloud 2-Node VPN Deployment](./terraform/advanced/terraform_nextcloud_vpn.md) +- [Pulumi](./pulumi/pulumi_readme.md) + - [Introduction to Pulumi](./pulumi/pulumi_intro.md) + - [Installing Pulumi](./pulumi/pulumi_install.md) + - [Deployment Examples](./pulumi/pulumi_examples.md) + - [Deployment Details](./pulumi/pulumi_deployment_details.md) +- [Mycelium](./mycelium/mycelium_toc.md) + - [Overview](./mycelium/overview.md) + - [Installation](./mycelium/installation.md) + - [Additional Information](./mycelium/information.md) + - [Message](./mycelium/message.md) + - [Packet](./mycelium/packet.md) + - [Data Packet](./mycelium/data_packet.md) + - [API YAML](./mycelium/api_yaml.md) +- [Computer and IT Basics](./computer_it_basics/computer_it_basics.md) + - [CLI and Scripts Basics](./computer_it_basics/cli_scripts_basics.md) + - [Docker Basics](./computer_it_basics/docker_basics.md) 
+ - [Git and GitHub Basics](./computer_it_basics/git_github_basics.md) + - [Firewall Basics](./computer_it_basics/firewall_basics/firewall_basics.md) + - [UFW Basics](./computer_it_basics/firewall_basics/ufw_basics.md) + - [Firewalld Basics](./computer_it_basics/firewall_basics/firewalld_basics.md) + - [File Transfer](./computer_it_basics/file_transfer.md) + - [Screenshots](./computer_it_basics/screenshots.md) +- [Advanced](./advanced/advanced.md) + - [Token Transfer Keygenerator](./advanced/token_transfer_keygenerator.md) + - [Cancel Contracts](./advanced/cancel_contracts.md) + - [Contract Bills Reports](./advanced/contract_bill_report.md) + - [Listing Free Public IPs](./advanced/list_public_ips.md) + - [Cloud Console](./advanced/cloud_console.md) + - [Redis](./advanced/grid3_redis.md) + - [IPFS](./advanced/ipfs/ipfs_toc.md) + - [IPFS on a Full VM](./advanced/ipfs/ipfs_fullvm.md) + - [IPFS on a Micro VM](./advanced/ipfs/ipfs_microvm.md) + - [Hummingbot](./advanced/hummingbot.md) + - [AI & ML Workloads](./advanced/ai_ml_workloads.md) + - [Ecommerce](./advanced/ecommerce/ecommerce.md) + - [WooCommerce](./advanced/ecommerce/woocommerce.md) + - [nopCommerce](./advanced/ecommerce/nopcommerce.md) diff --git a/collections/system_administrators/terraform/advanced/img/terraform_.png b/collections/system_administrators/terraform/advanced/img/terraform_.png new file mode 100644 index 0000000..6caa9fb Binary files /dev/null and b/collections/system_administrators/terraform/advanced/img/terraform_.png differ diff --git a/collections/system_administrators/terraform/advanced/terraform_advanced_readme.md b/collections/system_administrators/terraform/advanced/terraform_advanced_readme.md new file mode 100644 index 0000000..67168e0 --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_advanced_readme.md @@ -0,0 +1,18 @@ +

# Terraform Advanced

+ +

## Table of Contents

+ +- [Terraform Provider](./terraform_provider.md) +- [Terraform Provisioners](./terraform_provisioners.md) +- [Mounts](./terraform_mounts.md) +- [Capacity Planning](./terraform_capacity_planning.md) +- [Updates](./terraform_updates.md) +- [SSH Connection with Wireguard](./terraform_wireguard_ssh.md) +- [Set a Wireguard VPN](./terraform_wireguard_vpn.md) +- [Synced MariaDB Databases](./terraform_mariadb_synced_databases.md) +- [Nomad](./terraform_nomad.md) +- [Nextcloud Deployments](./terraform_nextcloud_toc.md) + - [Nextcloud All-in-One Deployment](./terraform_nextcloud_aio.md) + - [Nextcloud Single Deployment](./terraform_nextcloud_single.md) + - [Nextcloud Redundant Deployment](./terraform_nextcloud_redundant.md) + - [Nextcloud 2-Node VPN Deployment](./terraform_nextcloud_vpn.md) \ No newline at end of file diff --git a/collections/system_administrators/terraform/advanced/terraform_capacity_planning.md b/collections/system_administrators/terraform/advanced/terraform_capacity_planning.md new file mode 100644 index 0000000..63c96ad --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_capacity_planning.md @@ -0,0 +1,159 @@ +

# Capacity Planning

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) +- [Preparing the Requests](#preparing-the-requests) + +*** + +## Introduction + +In this [example](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/simple-dynamic/main.tf) we will discuss capacity planning on top of the TFGrid. + +## Example + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} +provider "grid" { +} + +locals { + name = "testvm" +} + +resource "grid_scheduler" "sched" { + requests { + name = "node1" + cru = 3 + sru = 1024 + mru = 2048 + node_exclude = [33] # exclude node 33 from your search + public_ips_count = 0 # this deployment needs 0 public ips + public_config = false # this node does not need to have public config + } +} + +resource "grid_network" "net1" { + name = local.name + nodes = [grid_scheduler.sched.nodes["node1"]] + ip_range = "10.1.0.0/16" + description = "newer network" +} +resource "grid_deployment" "d1" { + name = local.name + node = grid_scheduler.sched.nodes["node1"] + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 2 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "PUT YOUR SSH KEY HERE" + } + planetary = true + } + vms { + name = "anothervm" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "PUT YOUR SSH KEY HERE" + } + planetary = true + } +} +output "vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "vm1_ygg_ip" { + value = grid_deployment.d1.vms[0].ygg_ip +} + +output "vm2_ip" { + value = grid_deployment.d1.vms[1].ip +} +output "vm2_ygg_ip" { + value = grid_deployment.d1.vms[1].ygg_ip +} +``` + +## Preparing the Requests + +```terraform +resource "grid_scheduler" "sched" { + # a machine for the first server instance + requests { + name = 
"server1" + cru = 1 + sru = 256 + mru = 256 + } + # a machine for the second server instance + requests { + name = "server2" + cru = 1 + sru = 256 + mru = 256 + } + # a name workload + requests { + name = "gateway" + public_config = true + } +} +``` + +Here we define a `list` of requests, each request has a name and filter options e.g `cru`, `sru`, `mru`, `hru`, having `public_config` or not, `public_ips_count` for this deployment, whether or not this node should be `dedicated`, whether or not this node should be `distinct` from other nodes in this plannder, `farm_id` to search in, nodes to exlude from search in `node_exclude`, and whether or not this node should be `certified`. + +The full docs for the capacity planner `scheduler` are found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/scheduler.md) + +And after that in our code we can reference the grid_scheduler object with the request name to be used instead of node_id. + +For example: + +```terraform +resource "grid_deployment" "server1" { + node = grid_scheduler.sched.nodes["server1"] + network_name = grid_network.net1.name + ip_range = lookup(grid_network.net1.nodes_ip_range, grid_scheduler.sched.nodes["server1"], "") + vms { + name = "firstserver" + flist = "https://hub.grid.tf/omar0.3bot/omarelawady-simple-http-server-latest.flist" + cpu = 1 + memory = 256 + rootfs_size = 256 + entrypoint = "/main.sh" + env_vars = { + SSH_KEY = "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52" + } + env_vars = { + PATH = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" + } + + planetary = true + } +} +``` + +> Note: you need to call `distinct` while specifying the nodes in the network, because the scheduler may assign server1, server2 on the same node. Example: + +```terraform + resource "grid_network" "net1" { + name = local.name + nodes = distinct(values(grid_scheduler.sched.nodes)) + ip_range = "10.1.0.0/16" + description = "newer network" + } +``` diff --git a/collections/system_administrators/terraform/advanced/terraform_mariadb_synced_databases.md b/collections/system_administrators/terraform/advanced/terraform_mariadb_synced_databases.md new file mode 100644 index 0000000..2fe3394 --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_mariadb_synced_databases.md @@ -0,0 +1,585 @@ +

# MariaDB Synced Databases Between Two VMs

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Main Steps](#main-steps) +- [Prerequisites](#prerequisites) +- [Find Nodes with the ThreeFold Explorer](#find-nodes-with-the-threefold-explorer) +- [Set the VMs](#set-the-vms) + - [Create a Two Servers Wireguard VPN with Terraform](#create-a-two-servers-wireguard-vpn-with-terraform) + - [Create the Terraform Files](#create-the-terraform-files) + - [Deploy the 3Nodes with Terraform](#deploy-the-3nodes-with-terraform) + - [SSH into the 3Nodes](#ssh-into-the-3nodes) + - [Preparing the VMs for the Deployment](#preparing-the-vms-for-the-deployment) + - [Test the Wireguard Connection](#test-the-wireguard-connection) +- [Configure the MariaDB Database](#configure-the-mariadb-database) + - [Download MariaDB and Configure the Database](#download-mariadb-and-configure-the-database) + - [Create User with Replication Grant](#create-user-with-replication-grant) + - [Verify the Access of the User](#verify-the-access-of-the-user) + - [Set the VMs to accept the MariaDB Connection](#set-the-vms-to-accept-the-mariadb-connection) + - [TF Template Worker Server Data](#tf-template-worker-server-data) + - [TF Template Master Server Data](#tf-template-master-server-data) + - [Set the MariaDB Databases on Both 3Nodes](#set-the-mariadb-databases-on-both-3nodes) +- [Install and Set GlusterFS](#install-and-set-glusterfs) +- [Conclusion](#conclusion) + +*** + +# Introduction + +In this ThreeFold Guide, we show how to deploy a VPN with Wireguard and create a synced MariaDB database between the two servers using GlusterFS, a scalable network filesystem. Any change in one VM's database will be echoed in the other VM's database. This kind of deployment can lead to useful server architectures. + + + +# Main Steps + +This guide might seem overwhelming, but the steps are carefully explained. Take your time and it will all work out! 
+ +To get an overview of the whole process, we present the main steps: + +* Download the dependencies +* Find two 3Nodes on the TFGrid +* Deploy and set the VMs with Terraform +* Create a MariaDB database +* Set GlusterFS + + + +# Prerequisites + +* [Install Terraform](https://developer.hashicorp.com/terraform/downloads) +* [Install Wireguard](https://www.wireguard.com/install/) + +You need to properly download and install Terraform and Wireguard on your local computer. Simply follow the documentation for your operating system (Linux, macOS or Windows). + + + +# Find Nodes with the ThreeFold Explorer + +We first need to decide on which 3Nodes we will be deploying our workload. + +We thus start by finding two 3Nodes with sufficient resources. For this current MariaDB guide, we will be using 1 CPU, 2 GB of RAM and 50 GB of storage. We are also looking for a 3Node with a public IPv4 address. + +* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net) +* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID` +* For proper understanding, we give further information on some relevant columns: + * `ID` refers to the node ID + * `Free Public IPs` refers to available IPv4 public IP addresses + * `HRU` refers to HDD storage + * `SRU` refers to SSD storage + * `MRU` refers to RAM (memory) + * `CRU` refers to virtual cores (vcores) +* To quicken the process of finding proper 3Nodes, you can narrow down the search by adding filters: + * At the top left of the screen, in the `Filters` box, select the parameter(s) you want. + * For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes. + * `Free SRU (GB)`: 50 + * `Free MRU (GB)`: 2 + * `Total CRU (Cores)`: 1 + * `Free Public IP`: 2 + * Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. 
This ensures that the shown 3Nodes have viable IP addresses. + +Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files. + + + +# Set the VMs +## Create a Two Servers Wireguard VPN with Terraform + +For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads. + +To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file references the environment variables (e.g. `var.size` for the disk size), so you do not need to change this file. +Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is. + +On your local computer, create a new folder named `terraform` and a subfolder called `deployment-synced-db`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`. + +Modify the variable files to take into account your own seed phrase and SSH keys. You should also specify the node IDs of the two 3Nodes you will be deploying on. + + + +### Create the Terraform Files + +Open the terminal. + +* Go to the home folder + * ``` + cd ~ + ``` + +* Create the folder `terraform` and the subfolder `deployment-synced-db`: + * ``` + mkdir -p terraform/deployment-synced-db + ``` + * ``` + cd terraform/deployment-synced-db + ``` +* Create the `main.tf` file: + * ``` + nano main.tf + ``` + +* Copy the `main.tf` content and save the file. 
+ +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +variable "mnemonics" { + type = string +} + +variable "SSH_KEY" { + type = string +} + +variable "tfnodeid1" { + type = string +} + +variable "tfnodeid2" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +provider "grid" { + mnemonics = var.mnemonics + network = "main" +} + +locals { + name = "tfvm" +} + +resource "grid_network" "net1" { + name = local.name + nodes = [var.tfnodeid1, var.tfnodeid2] + ip_range = "10.1.0.0/16" + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + disks { + name = "disk1" + size = var.size + } + name = local.name + node = var.tfnodeid1 + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +} + +resource "grid_deployment" "d2" { + disks { + name = "disk2" + size = var.size + } + name = local.name + node = var.tfnodeid2 + network_name = grid_network.net1.name + + vms { + name = "vm2" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk2" + mount_point = "/disk2" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "node1_zmachine2_ip" { + value = grid_deployment.d2.vms[0].ip +} + +output "ygg_ip1" { + value = grid_deployment.d1.vms[0].ygg_ip +} +output "ygg_ip2" { + value = grid_deployment.d2.vms[0].ygg_ip +} + 
+output "ipv4_vm1" { + value = grid_deployment.d1.vms[0].computedip +} + +output "ipv4_vm2" { + value = grid_deployment.d2.vms[0].computedip +} + +``` + +In this file, we name the first VM `vm1` and the second VM `vm2`. For ease of communication, in this guide we refer to `vm1` as the master VM and to `vm2` as the worker VM. + +In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. This might be different during your own deployment. If so, adjust the commands in this guide accordingly. + +* Create the `credentials.auto.tfvars` file: + * ``` + nano credentials.auto.tfvars + ``` + +* Copy the `credentials.auto.tfvars` content and save the file. + * ``` + mnemonics = "..." + SSH_KEY = "..." + + tfnodeid1 = "..." + tfnodeid2 = "..." + + size = "50" + cpu = "1" + memory = "2048" + ``` + +Make sure to add your own seed phrase and SSH public key. You will also need to specify the two node IDs of the servers used. Simply replace the three dots with your own values. Obviously, you can decide to increase or modify the quantity in the variables `size`, `cpu` and `memory`. + + + +### Deploy the 3Nodes with Terraform + +We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-synced-db` with the main and variables files. + +* Initialize Terraform: + * ``` + terraform init + ``` + +* Apply Terraform to deploy the VPN: + * ``` + terraform apply + ``` + +After deployment, take note of the 3Nodes' IPv4 addresses. You will need those addresses to SSH into the 3Nodes.
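Before SSHing, note that the `ipv4_vm1` and `ipv4_vm2` outputs may include a CIDR suffix (e.g. `185.206.122.31/24`), which `ssh` does not accept as a host. A minimal sketch, assuming that output format (the sample address below is made up), showing how to strip the suffix in the shell:

```shell
# Hypothetical sample of the ipv4_vm1 output; substitute your own value.
ipv4_vm1="185.206.122.31/24"

# Parameter expansion '%%/*' drops everything from the first '/',
# leaving a bare address that ssh accepts.
ssh_host="${ipv4_vm1%%/*}"
echo "$ssh_host"
```

You can then connect with `ssh root@"$ssh_host"`.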
+ +Note that, at any moment, if you want to see the information on your Terraform deployments, write the following: + * ``` + terraform show + ``` + + + +### SSH into the 3Nodes + +* To [SSH into the 3Nodes](../../getstarted/ssh_guide/ssh_guide.md), write the following while making sure to set the proper IP address for each VM: + * ``` + ssh root@3node_IPv4_Address + ``` + + + +### Preparing the VMs for the Deployment + +* Update and upgrade the system + * ``` + apt update && apt upgrade -y && apt install apache2 -y + ``` +* After the upgrade, you might need to reboot the system for the changes to be fully taken into account + * ``` + reboot + ``` +* Reconnect to the VMs + + + +### Test the Wireguard Connection + +We now want to ping the VMs using Wireguard. This will ensure the connection is properly established. + +First, we set Wireguard with the Terraform output. + +* On your local computer, take Terraform's `wg_config` output and create the file `/usr/local/etc/wireguard/wg.conf`. + * ``` + nano /usr/local/etc/wireguard/wg.conf + ``` + +* Paste the content provided by the Terraform deployment. You can use `terraform show` to see the Terraform output. The WireGuard configuration stands between the `EOT` markers. + +* Start WireGuard on your local computer: + * ``` + wg-quick up wg + ``` + +* To stop the WireGuard service: + * ``` + wg-quick down wg + ``` + +> Note: If it doesn't work and you already established a WireGuard connection with the same file from Terraform (from a previous deployment perhaps), do `wg-quick down wg`, then `wg-quick up wg`. +This should set everything properly.
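The copy-paste step above can also be scripted. Since the configuration sits between the `EOT` markers in the `terraform show` output, a small `awk` filter can extract it. A sketch under that assumption, demonstrated on a saved sample (the addresses and keys below are made up); on a real deployment you would pipe `terraform show` into the same `awk` command:

```shell
# Sample of what terraform show prints for wg_config (fake values).
cat > sample_show.txt <<'SAMPLE'
wg_config = <<EOT
[Interface]
Address = 100.64.1.2/32
PrivateKey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=
[Peer]
PublicKey = yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy=
AllowedIPs = 10.1.0.0/16, 100.64.0.0/16
PersistentKeepalive = 25
Endpoint = 185.206.122.31:3011
EOT
SAMPLE

# Keep only the lines between the EOT markers and write them to wg.conf.
awk '/<<EOT/{grab=1; next} /^EOT/{grab=0} grab' sample_show.txt > wg.conf
head -n 1 wg.conf
```

On a real deployment, `terraform show | awk '/<<EOT/{grab=1; next} /^EOT/{grab=0} grab' > /usr/local/etc/wireguard/wg.conf` would fill the file in one step.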
+ +* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP addresses of both VMs to make sure the Wireguard connection is correct: + * ``` + ping 10.1.3.2 + ``` + * ``` + ping 10.1.4.2 + ``` + +If you correctly receive the packets from the two VMs, you know that the VPN is properly set. + +For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md). + + + +# Configure the MariaDB Database + +## Download MariaDB and Configure the Database + +* Download the MariaDB server and client on both the master VM and the worker VM + * ``` + apt install mariadb-server mariadb-client -y + ``` +* Configure the MariaDB database + * ``` + nano /etc/mysql/mariadb.conf.d/50-server.cnf + ``` + * Do the following changes + * Add `#` in front of + * `bind-address = 127.0.0.1` + * Remove `#` in front of the following lines and replace `X` by `1` for the master VM and by `2` for the worker VM + ``` + #server-id = X + #log_bin = /var/log/mysql/mysql-bin.log + ``` + * Below the lines shown above add the following line: + ``` + binlog_do_db = tfdatabase + ``` + +* Restart MariaDB + * ``` + systemctl restart mysql + ``` + +* Launch MariaDB + * ``` + mysql + ``` + + + +## Create User with Replication Grant + +* Do the following on both the master and the worker + * ``` + CREATE USER 'repuser'@'%' IDENTIFIED BY 'password'; + GRANT REPLICATION SLAVE ON *.* TO 'repuser'@'%' ; + FLUSH PRIVILEGES; + show master status\G; + ``` + + + +## Verify the Access of the User +* Verify the access of the `repuser` user + ``` + SELECT host FROM mysql.user WHERE User = 'repuser'; + ``` + * You want to see `%` in Host + + + +## Set the VMs to Accept the MariaDB Connection + +### TF Template Worker Server Data + +* Write the following in the Worker VM + * ``` + CHANGE MASTER TO MASTER_HOST='10.1.3.2', + MASTER_USER='repuser', +
MASTER_PASSWORD='password', + MASTER_LOG_FILE='mysql-bin.000001', + MASTER_LOG_POS=328; + ``` + * ``` + start slave; + ``` + * ``` + show slave status\G; + ``` + + + +### TF Template Master Server Data + +* Write the following in the Master VM + * ``` + CHANGE MASTER TO MASTER_HOST='10.1.4.2', + MASTER_USER='repuser', + MASTER_PASSWORD='password', + MASTER_LOG_FILE='mysql-bin.000001', + MASTER_LOG_POS=328; + ``` + * ``` + start slave; + ``` + * ``` + show slave status\G; + ``` + + + +## Set the MariaDB Databases on Both 3Nodes + +We now set the MariaDB database. You should choose your own username and password. The password should be the same for the master and worker VMs. + +* On the master VM, write: + ``` + CREATE DATABASE tfdatabase; + CREATE USER 'ncuser'@'%'; + GRANT ALL PRIVILEGES ON tfdatabase.* TO ncuser@'%' IDENTIFIED BY 'password1234'; + FLUSH PRIVILEGES; + ``` + +* On the worker VM, write: + ``` + CREATE USER 'ncuser'@'%'; + GRANT ALL PRIVILEGES ON tfdatabase.* TO ncuser@'%' IDENTIFIED BY 'password1234'; + FLUSH PRIVILEGES; + ``` + +* To see the databases, write the following: + ``` + show databases; + ``` +* To see users on MariaDB: + ``` + select user from mysql.user; + ``` +* To exit MariaDB: + ``` + exit; + ``` + + + +# Install and Set GlusterFS + +We will now install and set [GlusterFS](https://www.gluster.org/), a free and open-source, scalable network filesystem.
+ +* Install GlusterFS on both the master and worker VMs + * ``` + add-apt-repository ppa:gluster/glusterfs-7 -y && apt install glusterfs-server -y + ``` +* Start the GlusterFS service on both VMs + * ``` + systemctl start glusterd.service && systemctl enable glusterd.service + ``` +* Probe the worker from the master VM: + * ``` + gluster peer probe 10.1.4.2 + ``` + +* See the peer status on the worker VM: + * ``` + gluster peer status + ``` + +* Create the replicated volume with the master and worker IP addresses on the master VM: + * ``` + gluster volume create vol1 replica 2 10.1.3.2:/gluster-storage 10.1.4.2:/gluster-storage force + ``` + +* Start the Gluster volume: + * ``` + gluster volume start vol1 + ``` + +* Check the status on the worker VM: + * ``` + gluster volume status + ``` + +* Mount the server with the master IP on the master VM: + * ``` + mount -t glusterfs 10.1.3.2:/vol1 /var/www + ``` + +* See if the mount is there on the master VM: + * ``` + df -h + ``` + +* Mount the server with the worker IP on the worker VM: + * ``` + mount -t glusterfs 10.1.4.2:/vol1 /var/www + ``` + +* See if the mount is there on the worker VM: + * ``` + df -h + ``` + +We now make the mount permanent with the file `/etc/fstab` on both the master and the worker. + +* To prevent the mount from being aborted if the server reboots, write the following on both servers: + * ``` + nano /etc/fstab + ``` + * Add the following line in the `fstab` file to set the master VM with the master virtual IP (here it is 10.1.3.2): + * ``` + 10.1.3.2:/vol1 /var/www glusterfs defaults,_netdev 0 0 + ``` + + * Add the following line in the `fstab` file to set the worker VM with the worker virtual IP (here it is 10.1.4.2): + * ``` + 10.1.4.2:/vol1 /var/www glusterfs defaults,_netdev 0 0 + ``` + +The databases of both VMs are accessible in `/var/www`. This means that any change in either folder `/var/www` of each VM will be reflected in the same folder of the other VM. In other words, the databases are now synced in real-time.
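Before rebooting, it can be worth checking the shape of the new `fstab` entries: a malformed line can leave a VM without its `/var/www` mount at boot. The `_netdev` option is the important part here, as it tells the system to wait for the network before mounting the Gluster volume. A minimal local sanity check, mirroring the master VM's entry:

```shell
# The fstab entry added above for the master VM.
line="10.1.3.2:/vol1 /var/www glusterfs defaults,_netdev 0 0"

# Split the entry into whitespace-separated fields; a valid fstab
# entry has exactly six (device, mount point, type, options, dump, pass).
set -- $line
echo "fields: $#"
echo "filesystem type: $3"

# Without _netdev, mounting is attempted before the network is up.
case "$4" in
  *_netdev*) echo "_netdev option present" ;;
  *)         echo "WARNING: _netdev missing" ;;
esac
```

The same check applies to the worker VM's entry with 10.1.4.2.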
+ + + +# Conclusion + +You now have two VMs syncing their MariaDB databases. This can be very useful for a plethora of projects requiring redundancy in storage. + +You should now have a basic understanding of the Threefold Grid, the ThreeFold Explorer, Wireguard, Terraform, MariaDB and GlusterFS. + +As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. + diff --git a/collections/system_administrators/terraform/advanced/terraform_mounts.md b/collections/system_administrators/terraform/advanced/terraform_mounts.md new file mode 100644 index 0000000..510e543 --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_mounts.md @@ -0,0 +1,86 @@ +

# Deploying a VM with Mounts Using Terraform

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) +- [More Info](#more-info) + +*** + +## Introduction + +In this [example](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/mounts/main.tf), we will see how to deploy a VM on the TFGrid and mount disks on it. + +## Example + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +resource "grid_network" "net1" { + nodes = [2, 4] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" +} +resource "grid_deployment" "d1" { + node = 2 + network_name = grid_network.net1.name + ip_range = lookup(grid_network.net1.nodes_ip_range, 2, "") + disks { + name = "data" + size = 10 + description = "volume holding app data" + } + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + publicip = true + memory = 1024 + entrypoint = "/sbin/zinit init" + mounts { + disk_name = "data" + mount_point = "/app" + } + env_vars = { + SSH_KEY = "PUT YOUR SSH KEY HERE" + } + } + vms { + name = "anothervm" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "PUT YOUR SSH KEY HERE" + } + } +} +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "node1_zmachine2_ip" { + value = grid_deployment.d1.vms[1].ip +} +output "public_ip" { + value = grid_deployment.d1.vms[0].computedip +} +``` + +## More Info + +A complete list of Mount workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-vmsmounts).
diff --git a/collections/system_administrators/terraform/advanced/terraform_nextcloud_aio.md b/collections/system_administrators/terraform/advanced/terraform_nextcloud_aio.md new file mode 100644 index 0000000..16f3390 --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_nextcloud_aio.md @@ -0,0 +1,140 @@ +

# Nextcloud All-in-One Deployment

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Deploy a Full VM](#deploy-a-full-vm) +- [Set a Firewall](#set-a-firewall) +- [Set the DNS Record for Your Domain](#set-the-dns-record-for-your-domain) +- [Install Nextcloud All-in-One](#install-nextcloud-all-in-one) +- [Set BorgBackup](#set-borgbackup) +- [Conclusion](#conclusion) + +*** + +## Introduction + +We present a quick way to install Nextcloud All-in-One on the TFGrid. This guide is based heavily on the Nextcloud documentation available [here](https://nextcloud.com/blog/how-to-install-the-nextcloud-all-in-one-on-linux/). It's mostly a simple adaptation to the TFGrid with some additional information on how to correctly set the firewall and the DNS record for your domain. + + + +## Deploy a Full VM + +* Deploy a Full VM with the [TF Dashboard](../../getstarted/ssh_guide/ssh_openssh.md) or [Terraform](../terraform_full_vm.md) + * Minimum specs: + * IPv4 Address + * 2 vcores + * 4096 MB of RAM + * 50 GB of Storage +* Take note of the VM IP address +* SSH into the Full VM + + + +## Set a Firewall + +We set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw). + +It should already be installed on your system. If it is not, install it with the following command: + +``` +apt install ufw +``` + +For our security rules, we want to allow SSH (port 22), HTTP (port 80) and HTTPS (port 443), as well as port 8443 for the AIO interface.
+ +We thus add the following rules: + +* Allow SSH (port 22) + * ``` + ufw allow ssh + ``` +* Allow HTTP (port 80) + * ``` + ufw allow http + ``` +* Allow HTTPS (port 443) + * ``` + ufw allow https + ``` +* Allow port 8443 + * ``` + ufw allow 8443 + ``` +* Allow port 3478 for Nextcloud Talk + * ``` + ufw allow 3478 + ``` + +* To enable the firewall, write the following: + * ``` + ufw enable + ``` + +* To see the current security rules, write the following: + * ``` + ufw status verbose + ``` + +You now have enabled the firewall with proper security rules for your Nextcloud deployment. + + + +## Set the DNS Record for Your Domain + +* Go to your domain name registrar (e.g. Namecheap) + * In the section **Advanced DNS**, add a **DNS A Record** to your domain and link it to the IP address of the VM you deployed on: + * Type: A Record + * Host: @ + * Value: the IPv4 address of your VM + * TTL: Automatic + * It might take up to 30 minutes to set the DNS properly. + * To check if the A record has been registered, you can use a common DNS checker and append your domain to the URL: + * ``` + https://dnschecker.org/#A/ + ``` + + + +## Install Nextcloud All-in-One + +For the rest of the guide, we follow the steps available on the Nextcloud website's tutorial [How to Install the Nextcloud All-in-One on Linux](https://nextcloud.com/blog/how-to-install-the-nextcloud-all-in-one-on-linux/).
+ +* Install Docker + * ``` + curl -fsSL get.docker.com | sudo sh + ``` +* Install Nextcloud AIO + * ``` + sudo docker run \ + --sig-proxy=false \ + --name nextcloud-aio-mastercontainer \ + --restart always \ + --publish 80:80 \ + --publish 8080:8080 \ + --publish 8443:8443 \ + --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \ + --volume /var/run/docker.sock:/var/run/docker.sock:ro \ + nextcloud/all-in-one:latest + ``` +* Reach the AIO interface on your browser at your domain on port 8443: + * ``` + https://<domain>:8443 + ``` + * Example: `https://nextcloudwebsite.com:8443` +* Take note of the Nextcloud password +* Log in with the given password +* Add your domain name and click `Submit` +* Click `Start containers` +* Click `Open your Nextcloud` + +You can now easily access Nextcloud AIO with your domain URL! + + +## Set BorgBackup + +On the AIO interface, you can easily set BorgBackup. Since we are using Linux, we use the mounting directory `/mnt/backup`. Make sure to take note of the backup password. + +## Conclusion + +Most of the information in this guide can be found on the Nextcloud official website. We presented this guide to show another way to deploy Nextcloud on the TFGrid. \ No newline at end of file diff --git a/collections/system_administrators/terraform/advanced/terraform_nextcloud_redundant.md new file mode 100644 index 0000000..940a270 --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_nextcloud_redundant.md @@ -0,0 +1,908 @@ +

# Nextcloud Redundant Deployment

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Main Steps](#main-steps) +- [Prerequisites](#prerequisites) +- [Find Nodes with the ThreeFold Explorer](#find-nodes-with-the-threefold-explorer) +- [Set the VMs](#set-the-vms) + - [Create a Two Servers Wireguard VPN with Terraform](#create-a-two-servers-wireguard-vpn-with-terraform) + - [Create the Terraform Files](#create-the-terraform-files) + - [Deploy the 3nodes with Terraform](#deploy-the-3nodes-with-terraform) + - [SSH into the 3nodes](#ssh-into-the-3nodes) + - [Preparing the VMs for the Deployment](#preparing-the-vms-for-the-deployment) + - [Test the Wireguard Connection](#test-the-wireguard-connection) +- [Create the MariaDB Database](#create-the-mariadb-database) + - [Download MariaDB and Configure the Database](#download-mariadb-and-configure-the-database) + - [Create User with Replication Grant](#create-user-with-replication-grant) + - [Verify the Access of the User](#verify-the-access-of-the-user) + - [Set the VMs to Accept the MariaDB Connection](#set-the-vms-to-accept-the-mariadb-connection) + - [TF Template Worker Server Data](#tf-template-worker-server-data) + - [TF Template Master Server Data](#tf-template-master-server-data) + - [Set the Nextcloud User and Database](#set-the-nextcloud-user-and-database) +- [Install and Set GlusterFS](#install-and-set-glusterfs) +- [Install PHP and Nextcloud](#install-php-and-nextcloud) +- [Create a Subdomain with DuckDNS](#create-a-subdomain-with-duckdns) + - [Worker File for DuckDNS](#worker-file-for-duckdns) +- [Set Apache](#set-apache) +- [Access Nextcloud on a Web Browser with the Subdomain](#access-nextcloud-on-a-web-browser-with-the-subdomain) +- [Enable HTTPS](#enable-https) + - [Install Certbot](#install-certbot) + - [Set the Certbot with the DNS Domain](#set-the-certbot-with-the-dns-domain) + - [Verify HTTPS Automatic Renewal](#verify-https-automatic-renewal) +- [Set a Firewall](#set-a-firewall) +- [Conclusion](#conclusion) +- [Acknowledgements and 
References](#acknowledgements-and-references) + +*** + +# Introduction + +In this Threefold Guide, we deploy a redundant [Nextcloud](https://nextcloud.com/) instance that is continually synced on two different 3node servers running on the [Threefold Grid](https://threefold.io/). + +We will learn how to deploy two full virtual machines (Ubuntu 22.04) with [Terraform](https://www.terraform.io/). The Terraform deployment will be composed of a virtual private network (VPN) using [Wireguard](https://www.wireguard.com/). The two VMs will thus be connected in a private and secure network. Once this is done, we will link the two VMs together by setting up a [MariaDB](https://mariadb.org/) database and using [GlusterFS](https://www.gluster.org/). Then, we will install and deploy Nextcloud. We will add a DDNS (dynamic DNS) domain to the Nextcloud deployment. It will then be possible to connect to the Nextcloud instance over the public internet. Nextcloud will be available on your computer and even on your smartphone! We will also set HTTPS for the DDNS domain in order to make the Nextcloud instance as secure as possible. You are free to explore different DDNS options. In this guide, we will be using [DuckDNS](https://www.duckdns.org/) for simplicity. + +The advantage of this redundant Nextcloud deployment is obvious: if one of the two VMs goes down, the Nextcloud instance will still be accessible, as the other VM will take the lead. Also, the two VMs will be continually synced in real-time. If the master node goes down, the data will be synced to the worker node, and the worker node will become the master node. Once the master VM goes back online, the data will be synced to the master node and the master node will retake the lead as the master node. + +This kind of real-time backup of the database is not only limited to Nextcloud. You can use the same architecture to deploy different workloads while having the redundancy over two 3node servers.
This architecture could be deployed over more than two 3nodes. Feel free to explore and let us know in the [Threefold Forum](http://forum.threefold.io/) if you come up with exciting and different variations of this kind of deployment. + +As always, if you have questions concerning this guide, you can write a post on the [Threefold Forum](http://forum.threefold.io/). + +Let's go! + + + +# Main Steps + +This guide might seem overwhelming, but the steps are carefully explained. Take your time and it will all work out! + +To get an overview of the whole process, we present the main steps: + +* Download the dependencies +* Find two 3nodes on the TF Grid +* Deploy and set the VMs with Terraform +* Create a MariaDB database +* Download and set GlusterFS +* Install PHP and Nextcloud +* Create a subdomain with DuckDNS +* Set Apache +* Access Nextcloud +* Add HTTPS protection +* Set a firewall + + + +# Prerequisites + +* [Install Terraform](../terraform_install.md) +* [Install Wireguard](https://www.wireguard.com/install/) + +You need to properly download and install Terraform and Wireguard on your local computer. Simply follow the documentation depending on your operating system (Linux, macOS and Windows). + + + +# Find Nodes with the ThreeFold Explorer + +We first need to decide on which 3Nodes we will be deploying our workload. + +We thus start by finding two 3Nodes with sufficient resources. For this current Nextcloud guide, we will be using 1 CPU, 2 GB of RAM and 50 GB of storage. We are also looking for 3Nodes each with a public IPv4 address.
+ +* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net) +* Find two 3Nodes with suitable resources for the deployment and take note of their node IDs on the leftmost column `ID` +* For proper understanding, we give further information on some relevant columns: + * `ID` refers to the node ID + * `Free Public IPs` refers to available IPv4 public IP addresses + * `HRU` refers to HDD storage + * `SRU` refers to SSD storage + * `MRU` refers to RAM (memory) + * `CRU` refers to virtual cores (vcores) +* To quicken the process of finding proper 3Nodes, you can narrow down the search by adding filters: + * At the top left of the screen, in the `Filters` box, select the parameter(s) you want. + * For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes. + * `Free SRU (GB)`: 50 + * `Free MRU (GB)`: 2 + * `Total CRU (Cores)`: 1 + * `Free Public IP`: 2 + * Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses. + +Once you've found two 3Nodes, take note of their node IDs. You will need to use those IDs when creating the Terraform files. + + + +# Set the VMs +## Create a Two Servers Wireguard VPN with Terraform + +For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads. + +To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to run the deployment with the `main.tf` file as is.
+ +On your local computer, create a new folder named `terraform` and a subfolder called `deployment-nextcloud`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`. + +Modify the variable files to take into account your own seed phrase and SSH keys. You should also specify the node IDs of the two 3nodes you will be deploying on. + +### Create the Terraform Files + +Open the terminal. + +* Go to the home folder + * ``` + cd ~ + ``` + +* Create the folder `terraform` and the subfolder `deployment-nextcloud`: + * ``` + mkdir -p terraform/deployment-nextcloud + ``` + * ``` + cd terraform/deployment-nextcloud + ``` +* Create the `main.tf` file: + * ``` + nano main.tf + ``` + +* Copy the `main.tf` content and save the file. + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +variable "mnemonics" { + type = string +} + +variable "SSH_KEY" { + type = string +} + +variable "tfnodeid1" { + type = string +} + +variable "tfnodeid2" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +provider "grid" { + mnemonics = var.mnemonics + network = "main" +} + +locals { + name = "tfvm" +} + +resource "grid_network" "net1" { + name = local.name + nodes = [var.tfnodeid1, var.tfnodeid2] + ip_range = "10.1.0.0/16" + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + disks { + name = "disk1" + size = var.size + } + name = local.name + node = var.tfnodeid1 + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +} + +resource "grid_deployment" "d2" { + disks { + name = "disk2" + size = var.size + } +
name = local.name + node = var.tfnodeid2 + network_name = grid_network.net1.name + + vms { + name = "vm2" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk2" + mount_point = "/disk2" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "node1_zmachine2_ip" { + value = grid_deployment.d2.vms[0].ip +} + +output "ygg_ip1" { + value = grid_deployment.d1.vms[0].ygg_ip +} +output "ygg_ip2" { + value = grid_deployment.d2.vms[0].ygg_ip +} + +output "ipv4_vm1" { + value = grid_deployment.d1.vms[0].computedip +} + +output "ipv4_vm2" { + value = grid_deployment.d2.vms[0].computedip +} + +``` + +In this file, we name the first VM `vm1` and the second VM `vm2`. In the guide, we refer to `vm1` as the master VM and to `vm2` as the worker VM. + +In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. This might be different during your own deployment. If so, adjust the commands in this guide accordingly. + +* Create the `credentials.auto.tfvars` file: + * ``` + nano credentials.auto.tfvars + ``` + +* Copy the `credentials.auto.tfvars` content and save the file. + * ``` + mnemonics = "..." + SSH_KEY = "..." + + tfnodeid1 = "..." + tfnodeid2 = "..." + + size = "50" + cpu = "1" + memory = "2048" + ``` + +Make sure to add your own seed phrase and SSH public key. You will also need to specify the two node IDs of the servers used. Simply replace the three dots with your own values. Obviously, you can decide to set more storage (size). The memory and CPU should be sufficient for the Nextcloud deployment with the above numbers. + +### Deploy the 3nodes with Terraform + +We now deploy the VPN with Terraform.
Make sure that you are in the correct folder `terraform/deployment-nextcloud` with the main and variables files. + +* Initialize Terraform: + * ``` + terraform init + ``` + +* Apply Terraform to deploy the VPN: + * ``` + terraform apply + ``` + +After deployment, take note of the 3nodes' IPv4 addresses. You will need those addresses to SSH into the 3nodes. + +### SSH into the 3nodes + +* To [SSH into the 3nodes](../../getstarted/ssh_guide/ssh_guide.md), write the following: + * ``` + ssh root@VM_IPv4_Address + ``` + +### Preparing the VMs for the Deployment + +* Update and upgrade the system + * ``` + apt update && apt upgrade -y && apt-get install apache2 -y + ``` +* After the upgrade, reboot the system + * ``` + reboot + ``` +* Reconnect to the VMs + + + +### Test the Wireguard Connection + +We now want to ping the VMs using Wireguard. This will ensure the connection is properly established. + +For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md). + +First, we set Wireguard with the Terraform output. + +* On your local computer, take Terraform's `wg_config` output and create the file `/etc/wireguard/wg.conf`. + * ``` + nano /etc/wireguard/wg.conf + ``` + +* Paste the content provided by the Terraform deployment. You can use `terraform show` to see the Terraform output. The WireGuard configuration stands between the `EOT` markers. + +* Start Wireguard on your local computer: + * ``` + wg-quick up wg + ``` + +* To stop the WireGuard service: + * ``` + wg-quick down wg + ``` + +If it doesn't work and you already established a WireGuard connection with the same file from Terraform (from a previous deployment perhaps), do `wg-quick down wg`, then `wg-quick up wg`. +This should set everything properly.
+ +* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP addresses of both VMs to make sure the Wireguard connection is correct: + * ``` + ping 10.1.3.2 + ``` + * ``` + ping 10.1.4.2 + ``` + +If you correctly receive the packets from the two VMs, you know that the VPN is properly set. + + + +# Create the MariaDB Database + +## Download MariaDB and Configure the Database + +* Download MariaDB's server and client on both VMs + * ``` + apt install mariadb-server mariadb-client -y + ``` +* Configure the MariaDB database + * ``` + nano /etc/mysql/mariadb.conf.d/50-server.cnf + ``` + * Do the following changes + * Add `#` in front of + * `bind-address = 127.0.0.1` + * Remove `#` in front of the following lines and replace `X` by `1` on the master VM and by `2` on the worker VM + ``` + #server-id = X + #log_bin = /var/log/mysql/mysql-bin.log + ``` + * Below the lines shown above add the following line: + ``` + binlog_do_db = nextcloud + ``` + +* Restart MariaDB + * ``` + systemctl restart mysql + ``` + +* Launch MariaDB + * ``` + mysql + ``` + +## Create User with Replication Grant + +* Do the following on both VMs + * ``` + CREATE USER 'repuser'@'%' IDENTIFIED BY 'password'; + GRANT REPLICATION SLAVE ON *.* TO 'repuser'@'%' ; + FLUSH PRIVILEGES; + show master status\G; + ``` + +## Verify the Access of the User +* Verify the access of the user + ``` + SELECT host FROM mysql.user WHERE User = 'repuser'; + ``` + * You want to see `%` in Host + +## Set the VMs to Accept the MariaDB Connection + +### TF Template Worker Server Data + +* Write the following in the worker VM + * ``` + CHANGE MASTER TO MASTER_HOST='10.1.3.2', + MASTER_USER='repuser', + MASTER_PASSWORD='password', + MASTER_LOG_FILE='mysql-bin.000001', + MASTER_LOG_POS=328; + ``` + * ``` + start slave; + ``` + * ``` + show slave status\G; + ``` +### TF Template Master Server Data + +* Write the following in the 
master VM + * ``` + CHANGE MASTER TO MASTER_HOST='10.1.4.2', + MASTER_USER='repuser', + MASTER_PASSWORD='password', + MASTER_LOG_FILE='mysql-bin.000001', + MASTER_LOG_POS=328; + ``` + * ``` + start slave; + ``` + * ``` + show slave status\G; + ``` + +## Set the Nextcloud User and Database + +We now set the Nextcloud database. You should choose your own username and password. The password should be the same for the master and worker VMs. + +* On the master VM, write: + ``` + CREATE DATABASE nextcloud; + CREATE USER 'ncuser'@'%'; + GRANT ALL PRIVILEGES ON nextcloud.* TO ncuser@'%' IDENTIFIED BY 'password1234'; + FLUSH PRIVILEGES; + ``` + +* On the worker VM, write: + ``` + CREATE USER 'ncuser'@'%'; + GRANT ALL PRIVILEGES ON nextcloud.* TO ncuser@'%' IDENTIFIED BY 'password1234'; + FLUSH PRIVILEGES; + ``` + +* To see the databases, write: + ``` + show databases; + ``` +* To see users, write: + ``` + select user from mysql.user; + ``` +* To exit MariaDB, write: + ``` + exit; + ``` + + + +# Install and Set GlusterFS + +We will now install and set [GlusterFS](https://www.gluster.org/), a free and open-source, scalable network filesystem.
+ +* Install GlusterFS on both the master and worker VMs + * ``` + echo | add-apt-repository ppa:gluster/glusterfs-7 && apt install glusterfs-server -y + ``` +* Start the GlusterFS service on both VMs + * ``` + systemctl start glusterd.service && systemctl enable glusterd.service + ``` +* Set the master to worker probe IP on the master VM: + * ``` + gluster peer probe 10.1.4.2 + ``` + +* See the peer status on the worker VM: + * ``` + gluster peer status + ``` + +* Set the master and worker IP address on the master VM: + * ``` + gluster volume create vol1 replica 2 10.1.3.2:/gluster-storage 10.1.4.2:/gluster-storage force + ``` + +* Start GlusterFS on the master VM: + * ``` + gluster volume start vol1 + ``` + +* Check the status on the worker VM: + * ``` + gluster volume status + ``` + +* Mount the server with the master IP on the master VM: + * ``` + mount -t glusterfs 10.1.3.2:/vol1 /var/www + ``` + +* See if the mount is there on the master VM: + * ``` + df -h + ``` + +* Mount the server with the worker IP on the worker VM: + * ``` + mount -t glusterfs 10.1.4.2:/vol1 /var/www + ``` + +* See if the mount is there on the worker VM: + * ``` + df -h + ``` + +We now update the mount with the fstab file on both VMs.
+ +* To prevent the mount from being aborted if the server reboots, write the following on both servers: + * ``` + nano /etc/fstab + ``` + +* Add the following line in the `fstab` file to set the master VM with the master virtual IP (here it is 10.1.3.2): + * ``` + 10.1.3.2:/vol1 /var/www glusterfs defaults,_netdev 0 0 + ``` + +* Add the following line in the `fstab` file to set the worker VM with the worker virtual IP (here it is 10.1.4.2): + * ``` + 10.1.4.2:/vol1 /var/www glusterfs defaults,_netdev 0 0 + ``` + + + +# Install PHP and Nextcloud + +* Install PHP and the PHP modules for Nextcloud on both the master and the worker: + * ``` + apt install php -y && apt-get install php zip libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip php-mysql php-bcmath php-gmp zip -y + ``` + +We will now install Nextcloud. This is done only on the master VM. + +* On both the master and worker VMs, go to the folder `/var/www`: + * ``` + cd /var/www + ``` + +* To install the latest Nextcloud version, go to the Nextcloud homepage: + * See the latest [Nextcloud releases](https://download.nextcloud.com/server/releases/). + +* We now download Nextcloud on the master VM. + * ``` + wget https://download.nextcloud.com/server/releases/nextcloud-27.0.1.zip + ``` + +Since you set up a peer-to-peer connection, you only need to download Nextcloud on the master VM; it will also be accessible on the worker VM. + +* Then, extract the `.zip` file. This will take a couple of minutes. We use 7z to track progress: + * ``` + apt install p7zip-full -y + ``` + * ``` + 7z x nextcloud-27.0.1.zip -o/var/www/ + ``` + +* After the download, see if the Nextcloud file is there on the worker VM: + * ``` + ls + ``` + +* Then, we grant permissions to the folder. Do this on both the master VM and the worker VM.
+ * ``` + chown www-data:www-data /var/www/nextcloud/ -R + ``` + + + +# Create a Subdomain with DuckDNS + +We want to create a subdomain to access Nextcloud over the public internet. + +For this guide, we use DuckDNS to create a subdomain for our Nextcloud deployment. Note that this can be done with other services. We use DuckDNS for simplicity. We invite users to explore other methods as they see fit. + +We create a public subdomain with DuckDNS. To set DuckDNS, you simply need to follow the steps on their website. Make sure to do this for both VMs. + +* First, sign in on the website: [https://www.duckdns.org/](https://www.duckdns.org/). +* Then go to [https://www.duckdns.org/install.jsp](https://www.duckdns.org/install.jsp) and follow the steps. For this guide, we use `linux cron` as the operating system. + +Hint: make sure to save the DuckDNS folder in the home directory. Write `cd ~` before creating the folder to be sure. + +## Worker File for DuckDNS + +In our current scenario, we want to make sure the master VM stays the main IP address for the DuckDNS subdomain as long as the master VM is online. To do so, we add an `if` statement in the worker VM's `duck.sh` file. The process is as follows: the worker VM pings the master VM and, if it sees that the master VM is offline, it runs the command to update DuckDNS's subdomain with the worker VM's IP address. When the master VM goes back online, it will run the `duck.sh` file within 5 minutes and the DuckDNS subdomain will be updated with the master VM's IP address. + +The content of the `duck.sh` file for the worker VM is the following. Make sure to replace the line `echo ...` with the line provided by DuckDNS and to replace `mastervm_IPv4_address` with the master VM's IP address. + +``` +ping -c 2 mastervm_IPv4_address + +if [ $?
!= 0 ] +then + + echo url="https://www.duckdns.org/update?domains=exampledomain&token=a7c4d0ad-114e-40ef-ba1d-d217904a50f2&ip=" | curl -k -o ~/duckdns/duck.log -K - + +fi + +``` + +Note: When the master VM goes offline, after 5 minutes maximum DuckDNS will change the IP address from the master’s to the worker’s. Without clearing the DNS cache, your browser might have some difficulties connecting to the updated IP address when reaching the URL `subdomain.duckdns.org`. Thus you might need to [clear your DNS cache](https://blog.hubspot.com/website/flush-dns). You can also use the [Tor browser](https://www.torproject.org/) to connect to Nextcloud. If the IP address changes, you can simply leave the browser and reopen another session as the browser will automatically clear the DNS cache. + + + +# Set Apache + +We now want to tell Apache where to store the Nextcloud data. To do this, we will create a file called `nextcloud.conf`. + +* On both the master and worker VMs, write the following: + * ``` + nano /etc/apache2/sites-available/nextcloud.conf + ``` + +The file should look like this, with your own subdomain instead of `subdomain`: + +``` +<VirtualHost *:80> + DocumentRoot "/var/www/nextcloud" + ServerName subdomain.duckdns.org + ServerAlias www.subdomain.duckdns.org + + ErrorLog ${APACHE_LOG_DIR}/nextcloud.error + CustomLog ${APACHE_LOG_DIR}/nextcloud.access combined + + <Directory /var/www/nextcloud/> + Require all granted + Options FollowSymlinks MultiViews + AllowOverride All + + <IfModule mod_dav.c> + Dav off + </IfModule> + + SetEnv HOME /var/www/nextcloud + SetEnv HTTP_HOME /var/www/nextcloud + Satisfy Any + </Directory> +</VirtualHost> +``` + +* On both the master VM and the worker VM, write the following to set the Nextcloud database with Apache and to enable the new virtual host file: + * ``` + a2ensite nextcloud.conf && a2enmod rewrite headers env dir mime setenvif ssl + ``` + +* Then, reload and restart Apache: + * ``` + systemctl reload apache2 && systemctl restart apache2 + ``` + + + +# Access Nextcloud on a Web Browser with the Subdomain + +We now access
Nextcloud over the public Internet. + +* Go to a web browser and write the subdomain name created with DuckDNS (adjust with your own subdomain): + * ``` + subdomain.duckdns.org + ``` + +Note: HTTPS isn't yet enabled. If you can't access the website, make sure to enable HTTP websites on your browser. + +* Choose a name and a password. For this guide, we use the following: + * ``` + ncadmin + password1234 + ``` + +* Enter the Nextcloud Database information created with MariaDB and click install: + * ``` + Database user: ncuser + Database password: password1234 + Database name: nextcloud + Database location: localhost + ``` + +Nextcloud will then proceed to complete the installation. + +We use `localhost` as the database location. You do not need to specify MariaDB's port (`3306`), as it is already configured within the database. + +After the installation, you can now access Nextcloud. To provide further security, we want to enable HTTPS for the subdomain. + + + +# Enable HTTPS + +## Install Certbot + +We will now enable HTTPS. This needs to be done on the master VM as well as the worker VM. This section can be done simultaneously on the two VMs, but make sure to do the next section on setting the Certbot with only one VM at a time. + +To enable HTTPS, first install `letsencrypt` with `certbot`: + +Install certbot by following the steps here: [https://certbot.eff.org/](https://certbot.eff.org/) + +* See if you have the latest version of snap: + * ``` + snap install core; snap refresh core + ``` + +* Remove certbot-auto: + * ``` + apt-get remove certbot + ``` + +* Install certbot: + * ``` + snap install --classic certbot + ``` + +* Ensure that certbot can be run: + * ``` + ln -s /snap/bin/certbot /usr/bin/certbot + ``` + +* Then, install certbot-apache: + * ``` + apt install python3-certbot-apache -y + ``` + +## Set the Certbot with the DNS Domain + +To avoid errors, set HTTPS with the master VM and power off the worker VM.
+ +* To do so with a 3node, you can simply comment the `vms` section of the worker VM in the Terraform `main.tf` file and do `terraform apply` on the terminal. + * Put `/*` one line above the section, and `*/` one line below the section `vms`: +``` +/* + vms { + name = "vm2" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk2" + mount_point = "/disk2" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +*/ +``` +* Put `#` in front of the appropriate lines, as shown below: +``` +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} +#output "node1_zmachine2_ip" { +# value = grid_deployment.d2.vms[0].ip +#} + +output "ygg_ip1" { + value = grid_deployment.d1.vms[0].ygg_ip +} +#output "ygg_ip2" { +# value = grid_deployment.d2.vms[0].ygg_ip +#} + +output "ipv4_vm1" { + value = grid_deployment.d1.vms[0].computedip +} + +#output "ipv4_vm2" { +# value = grid_deployment.d2.vms[0].computedip +#} +``` + +* To add the HTTPS protection, write the following line on the master VM with your own subdomain: + * ``` + certbot --apache -d subdomain.duckdns.org -d www.subdomain.duckdns.org + ``` + +* Once the HTTPS is set, you can reset the worker VM: + * To reset the worker VM, simply remove `/*`, `*/` and `#` on the main file and redo `terraform apply` on the terminal. + +Note: You then need to redo the same process with the worker VM. This time, make sure to set the master VM offline to avoid errors. This means that you should comment the section `vms` of `vm1` instead of `vm2`. + +## Verify HTTPS Automatic Renewal + +* Make a dry run of the certbot renewal to verify that it is correctly set up. + * ``` + certbot renew --dry-run + ``` + +You now have HTTPS security on your Nextcloud instance. + +# Set a Firewall + +Finally, we want to set a firewall to monitor and control incoming and outgoing network traffic.
To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw). + +It should already be installed on your system. If it is not, install it with the following command: + +``` +apt install ufw +``` + +For our security rules, we want to allow SSH, HTTP and HTTPS. + +We thus add the following rules: + + +* Allow SSH (port 22) + * ``` + ufw allow ssh + ``` +* Allow HTTP (port 80) + * ``` + ufw allow http + ``` +* Allow HTTPS (port 443) + * ``` + ufw allow https + ``` + +* To enable the firewall, write the following: + * ``` + ufw enable + ``` + +* To see the current security rules, write the following: + * ``` + ufw status verbose + ``` + +You now have enabled the firewall with proper security rules for your Nextcloud deployment. + + + +# Conclusion + +If everything went smoothly, you should now be able to access Nextcloud over the Internet with HTTPS security from any computer or smart phone! + +The Nextcloud database is synced in real-time on two different 3nodes. When one 3node goes offline, the database is still synchronized on the other 3node. Once the powered-off 3node goes back online, the database is synced automatically with the node that was powered off. + +You can now [install Nextcloud](https://nextcloud.com/install/) on your local computer. You will then be able to "use the desktop clients to keep your files synchronized between your Nextcloud server and your desktop". You can also do regular backups with Nextcloud to ensure maximum resilience of your data. Check Nextcloud's [documentation](https://docs.nextcloud.com/server/latest/admin_manual/maintenance/backup.html) for more information on this. + +You should now have a basic understanding of the Threefold Grid, the ThreeFold Explorer, Wireguard, Terraform, MariaDB, GlusterFS, PHP and Nextcloud.
Now, you know how to deploy workloads on the Threefold Grid with an efficient architecture in order to ensure redundancy. This is just the beginning. The Threefold Grid has nearly limitless potential when it comes to deployments, workloads, architectures and server projects. Let's see where it goes from here! + +This Nextcloud deployment could be improved in many ways and other guides might be published in the future with enhanced functionalities. Stay tuned for more Threefold Guides. If you have ideas on how to improve this guide, please let us know. We learn best when sharing knowledge. + + + +# Acknowledgements and References + +A big thank you to [Scott Yeager](https://github.com/scottyeager) for his help on brainstorming, troubleshooting and creating this tutorial. This guide wouldn't have been properly done without his time and dedication. This really is a team effort! + +The main reference for this guide is this [amazing video](https://youtu.be/ARsqxUw1ONc) by NETVN82. Many steps were modified or added to make this suitable for Wireguard and the Threefold Grid. Other configurations are possible. We invite you to explore the possibilities offered by the Threefold Grid! + +This guide has been inspired by Weynand Kuijpers' [great tutorial](https://youtu.be/DIhfSRKAKHw) on how to deploy Nextcloud with Terraform. \ No newline at end of file diff --git a/collections/system_administrators/terraform/advanced/terraform_nextcloud_single.md b/collections/system_administrators/terraform/advanced/terraform_nextcloud_single.md new file mode 100644 index 0000000..5ad8116 --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_nextcloud_single.md @@ -0,0 +1,594 @@ +

<h1>Nextcloud Single Deployment</h1>

+ +

<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Main Steps](#main-steps) +- [Prerequisites](#prerequisites) +- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer) +- [Set the Full VM](#set-the-full-vm) + - [Overview](#overview) + - [Create the Terraform Files](#create-the-terraform-files) + - [Deploy the Full VM with Terraform](#deploy-the-full-vm-with-terraform) + - [SSH into the 3Node](#ssh-into-the-3node) + - [Prepare the Full VM](#prepare-the-full-vm) +- [Create the MariaDB Database](#create-the-mariadb-database) + - [Download MariaDB and Configure the Database](#download-mariadb-and-configure-the-database) + - [Set the Nextcloud User and Database](#set-the-nextcloud-user-and-database) +- [Install PHP and Nextcloud](#install-php-and-nextcloud) +- [Create a Subdomain with DuckDNS](#create-a-subdomain-with-duckdns) +- [Set Apache](#set-apache) +- [Access Nextcloud on a Web Browser](#access-nextcloud-on-a-web-browser) +- [Enable HTTPS](#enable-https) + - [Install Certbot](#install-certbot) + - [Set the Certbot with the DNS Domain](#set-the-certbot-with-the-dns-domain) + - [Verify HTTPS Automatic Renewal](#verify-https-automatic-renewal) +- [Set a Firewall](#set-a-firewall) +- [Conclusion](#conclusion) +- [Acknowledgements and References](#acknowledgements-and-references) + +*** + +# Introduction + +In this Threefold Guide, we deploy a [Nextcloud](https://nextcloud.com/) instance on a full VM running on the [Threefold Grid](https://threefold.io/). + +We will learn how to deploy a full virtual machine (Ubuntu 22.04) with [Terraform](https://www.terraform.io/). We will install and deploy Nextcloud. We will add a DDNS (dynamic DNS) domain to the Nextcloud deployment. It will then be possible to connect to the Nextcloud instance over the public internet. Nextcloud will be available on your computer and even on your smartphone! We will also set HTTPS for the DDNS domain in order to make the Nextcloud instance as secure as possible.
You are free to explore different DDNS options. In this guide, we will be using [DuckDNS](https://www.duckdns.org/) for simplicity. + +As always, if you have questions concerning this guide, you can write a post on the [Threefold Forum](http://forum.threefold.io/). + +Let's go! + + + +# Main Steps + +This guide might seem overwhelming, but the steps are carefully explained. Take your time and it will all work out! + +To get an overview of the whole process, we present the main steps: + +* Download the dependencies +* Find a 3Node on the TF Grid +* Deploy and set the VM with Terraform +* Install PHP and Nextcloud +* Create a subdomain with DuckDNS +* Set Apache +* Access Nextcloud +* Add HTTPS protection +* Set a firewall + + + +# Prerequisites + +- [Install Terraform](../terraform_install.md) + +You need to properly download and install Terraform on your local computer. Simply follow the documentation depending on your operating system (Linux, macOS and Windows). + + + +# Find a 3Node with the ThreeFold Explorer + +We first need to decide on which 3Node we will be deploying our workload. + +We thus start by finding a 3Node with sufficient resources. For this current Nextcloud guide, we will be using 1 CPU, 2 GB of RAM and 50 GB of storage. We are also looking for a 3Node with a public IPv4 address.
+ +* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net) +* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID` +* For proper understanding, we give further information on some relevant columns: + * `ID` refers to the node ID + * `Free Public IPs` refers to available IPv4 public IP addresses + * `HRU` refers to HDD storage + * `SRU` refers to SSD storage + * `MRU` refers to RAM (memory) + * `CRU` refers to virtual cores (vcores) +* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters: + * At the top left of the screen, in the `Filters` box, select the parameter(s) you want. + * For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Node. + * `Free SRU (GB)`: 50 + * `Free MRU (GB)`: 2 + * `Total CRU (Cores)`: 1 + * `Free Public IP`: 2 + * Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses. + +Once you've found a 3Node, take note of its node ID. You will need to use this ID when creating the Terraform files. + + + +# Set the Full VM + +## Overview + +For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workload. + +To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is. + +On your local computer, create a new folder named `terraform` and a subfolder called `deployment-single-nextcloud`. 
In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`. + +Modify the variable files to take into account your own seed phrase and SSH keys. You should also specify the node ID of the 3Node you will be deploying on. + +## Create the Terraform Files + +Open the terminal and follow these steps. + +* Go to the home folder + * ``` + cd ~ + ``` + +* Create the folder `terraform` and the subfolder `deployment-single-nextcloud`: + * ``` + mkdir -p terraform/deployment-single-nextcloud + ``` + * ``` + cd terraform/deployment-single-nextcloud + ``` +* Create the `main.tf` file: + * ``` + nano main.tf + ``` + +* Copy the `main.tf` content and save the file. + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +variable "mnemonics" { + type = string +} + +variable "SSH_KEY" { + type = string +} + +variable "tfnodeid1" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +provider "grid" { + mnemonics = var.mnemonics + network = "main" +} + +locals { + name = "tfvm" +} + +resource "grid_network" "net1" { + name = local.name + nodes = [var.tfnodeid1] + ip_range = "10.1.0.0/16" + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + disks { + name = "disk1" + size = var.size + } + name = local.name + node = var.tfnodeid1 + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} + +output "ygg_ip1" { + value =
grid_deployment.d1.vms[0].ygg_ip +} + +output "ipv4_vm1" { + value = grid_deployment.d1.vms[0].computedip +} + +``` + +In this file, we name the full VM as `vm1`. + +* Create the `credentials.auto.tfvars` file: + * ``` + nano credentials.auto.tfvars + ``` + +* Copy the `credentials.auto.tfvars` content and save the file. + * ``` + mnemonics = "..." + SSH_KEY = "..." + + tfnodeid1 = "..." + + size = "50" + cpu = "1" + memory = "2048" + ``` + +Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the 3Node. Simply replace the three dots by the appropriate content. Obviously, you can decide to set more storage (size). The memory and CPU should be sufficient for the Nextcloud deployment with the above numbers. + +## Deploy the Full VM with Terraform + +We now deploy the full VM with Terraform. Make sure that you are in the correct folder `terraform/deployment-single-nextcloud` with the main and variables files. + +* Initialize Terraform: + * ``` + terraform init + ``` + +* Apply Terraform to deploy the full VM: + * ``` + terraform apply + ``` + +After deployments, take note of the 3Node's IPv4 address. You will need this address to SSH into the 3Node. 
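If you need the address again later, Terraform keeps it in the deployment state. A small sketch, assuming the deployment above was run from this folder; note that the provider may return the public IP with a CIDR suffix (e.g. `185.206.122.31/24`), which `cut` strips:

```shell
# Print the public IPv4 output defined in main.tf
terraform output -raw ipv4_vm1
# Keep only the bare address (drop a possible "/24" suffix)
terraform output -raw ipv4_vm1 | cut -d/ -f1
```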
+ +## SSH into the 3Node + +* To [SSH into the 3Node](../../getstarted/ssh_guide/ssh_guide.md), write the following: + * ``` + ssh root@VM_IPv4_Address + ``` + +## Prepare the Full VM + +* Update and upgrade the system + * ``` + apt update && apt upgrade && apt-get install apache2 + ``` +* After the upgrade, reboot the system + * ``` + reboot + ``` +* Reconnect to the VM + + + +# Create the MariaDB Database + +## Download MariaDB and Configure the Database + +* Download MariaDB's server and client + * ``` + apt install mariadb-server mariadb-client + ``` +* Configure the MariaDB database + * ``` + nano /etc/mysql/mariadb.conf.d/50-server.cnf + ``` + * Do the following changes + * Add `#` in front of + * `bind-address = 127.0.0.1` + * Remove `#` in front of the following lines and make sure the variable `server-id` is set to `1` + ``` + #server-id = 1 + #log_bin = /var/log/mysql/mysql-bin.log + ``` + * Below the lines shown above add the following line: + ``` + binlog_do_db = nextcloud + ``` + +* Restart MariaDB + * ``` + systemctl restart mysql + ``` + +* Launch MariaDB + * ``` + mysql + ``` + +## Set the Nextcloud User and Database + +We now set the Nextcloud database. You should choose your own username and password. + +* On the full VM, write: + ``` + CREATE DATABASE nextcloud; + CREATE USER 'ncuser'@'%'; + GRANT ALL PRIVILEGES ON nextcloud.* TO ncuser@'%' IDENTIFIED BY 'password1234'; + FLUSH PRIVILEGES; + ``` + +* To see the databases, write: + ``` + show databases; + ``` +* To see users, write: + ``` + select user from mysql.user; + ``` +* To exit MariaDB, write: + ``` + exit; + ``` + + +# Install PHP and Nextcloud + +* Install PHP and the PHP modules for Nextcloud on the full VM: + * ``` + apt install php && apt-get install php zip libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip php-mysql php-bcmath php-gmp zip + ``` + +We will now install Nextcloud.
+ +* On the full VM, go to the folder `/var/www`: + * ``` + cd /var/www + ``` + +* To install the latest Nextcloud version, go to the Nextcloud homepage: + * See the latest [Nextcloud releases](https://download.nextcloud.com/server/releases/). + +* We now download Nextcloud on the full VM. + * ``` + wget https://download.nextcloud.com/server/releases/nextcloud-27.0.1.zip + ``` + +* Then, extract the `.zip` file. This will take a couple of minutes. We use 7z to track progress: + * ``` + apt install p7zip-full + ``` + * ``` + 7z x nextcloud-27.0.1.zip -o/var/www/ + ``` +* Then, we grant permissions to the folder. + * ``` + chown www-data:www-data /var/www/nextcloud/ -R + ``` + + + +# Create a Subdomain with DuckDNS + +We want to create a subdomain to access Nextcloud over the public internet. + +For this guide, we use DuckDNS to create a subdomain for our Nextcloud deployment. Note that this can be done with other services. We use DuckDNS for simplicity. We invite users to explore other methods as they see fit. + +We create a public subdomain with DuckDNS. To set DuckDNS, you simply need to follow the steps on their website. + +* First, sign in on the website: [https://www.duckdns.org/](https://www.duckdns.org/). +* Then go to [https://www.duckdns.org/install.jsp](https://www.duckdns.org/install.jsp) and follow the steps. For this guide, we use `linux cron` as the operating system. + +Hint: make sure to save the DuckDNS folder in the home menu. Write `cd ~` before creating the folder to be sure. + + + +# Set Apache + +We now want to tell Apache where to store the Nextcloud data. To do this, we will create a file called `nextcloud.conf`. 
+ +* On the full VM, write the following: + * ``` + nano /etc/apache2/sites-available/nextcloud.conf + ``` + +The file should look like this, with your own subdomain instead of `subdomain`: + +``` +<VirtualHost *:80> + DocumentRoot "/var/www/nextcloud" + ServerName subdomain.duckdns.org + ServerAlias www.subdomain.duckdns.org + + ErrorLog ${APACHE_LOG_DIR}/nextcloud.error + CustomLog ${APACHE_LOG_DIR}/nextcloud.access combined + + <Directory /var/www/nextcloud/> + Require all granted + Options FollowSymlinks MultiViews + AllowOverride All + + <IfModule mod_dav.c> + Dav off + </IfModule> + + SetEnv HOME /var/www/nextcloud + SetEnv HTTP_HOME /var/www/nextcloud + Satisfy Any + </Directory> +</VirtualHost> +``` + +* On the full VM, write the following to set the Nextcloud database with Apache and to enable the new virtual host file: + * ``` + a2ensite nextcloud.conf && a2enmod rewrite headers env dir mime setenvif ssl + ``` + +* Then, reload and restart Apache: + * ``` + systemctl reload apache2 && systemctl restart apache2 + ``` + + + +# Access Nextcloud on a Web Browser + +We now access Nextcloud over the public Internet. + +* Go to a web browser and write the subdomain name created with DuckDNS (adjust with your own subdomain): + * ``` + subdomain.duckdns.org + ``` + +Note: HTTPS isn't yet enabled. If you can't access the website, make sure to enable HTTP websites on your browser. + +* Choose a name and a password. For this guide, we use the following: + * ``` + ncadmin + password1234 + ``` + +* Enter the Nextcloud Database information created with MariaDB and click install: + * ``` + Database user: ncuser + Database password: password1234 + Database name: nextcloud + Database location: localhost + ``` + +Nextcloud will then proceed to complete the installation. + +We use `localhost` as the database location. You do not need to specify MariaDB's port (`3306`), as it is already configured within the database. + +After the installation, you can now access Nextcloud. To provide further security, we want to enable HTTPS for the subdomain.
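Before moving on to HTTPS, it can help to confirm that the subdomain really resolves to this VM and that the Apache vhost answers over plain HTTP, since certbot's Apache plugin relies on this. A quick hedged check (replace `subdomain` with your own; the exact status code may vary):

```shell
# Fetch only the HTTP status line from the Nextcloud vhost
curl -sI http://subdomain.duckdns.org/ | head -n 1
```

A `200` or a `30x` redirect to the Nextcloud setup page both indicate the vhost is reachable.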
+ + + +# Enable HTTPS + +## Install Certbot + +We will now enable HTTPS on the full VM. + +To enable HTTPS, first install `letsencrypt` with `certbot`: + +Install certbot by following the steps here: [https://certbot.eff.org/](https://certbot.eff.org/) + +* See if you have the latest version of snap: + * ``` + snap install core; snap refresh core + ``` + +* Remove certbot-auto: + * ``` + apt-get remove certbot + ``` + +* Install certbot: + * ``` + snap install --classic certbot + ``` + +* Ensure that certbot can be run: + * ``` + ln -s /snap/bin/certbot /usr/bin/certbot + ``` + +* Then, install certbot-apache: + * ``` + apt install python3-certbot-apache + ``` + +## Set the Certbot with the DNS Domain + +We now set the certbot with the DNS domain. + +* To add the HTTPS protection, write the following line on the full VM with your own subdomain: + * ``` + certbot --apache -d subdomain.duckdns.org -d www.subdomain.duckdns.org + ``` + +## Verify HTTPS Automatic Renewal + +* Make a dry run of the certbot renewal to verify that it is correctly set up. + * ``` + certbot renew --dry-run + ``` + +You now have HTTPS security on your Nextcloud instance. + +# Set a Firewall + +Finally, we want to set a firewall to monitor and control incoming and outgoing network traffic. To do so, we will define predetermined security rules. As a firewall, we will be using [Uncomplicated Firewall](https://wiki.ubuntu.com/UncomplicatedFirewall) (ufw). + +It should already be installed on your system. If it is not, install it with the following command: + +``` +apt install ufw +``` + +For our security rules, we want to allow SSH, HTTP and HTTPS. 
+ +We thus add the following rules: + + +* Allow SSH (port 22) + * ``` + ufw allow ssh + ``` +* Allow HTTP (port 80) + * ``` + ufw allow http + ``` +* Allow HTTPS (port 443) + * ``` + ufw allow https + ``` + +* To enable the firewall, write the following: + * ``` + ufw enable + ``` + +* To see the current security rules, write the following: + * ``` + ufw status verbose + ``` + +You now have enabled the firewall with proper security rules for your Nextcloud deployment. + + + +# Conclusion + +If everything went smoothly, you should now be able to access Nextcloud over the Internet with HTTPS security from any computer or smart phone! + +You can now [install Nextcloud](https://nextcloud.com/install/) on your local computer. You will then be able to "use the desktop clients to keep your files synchronized between your Nextcloud server and your desktop". You can also do regular backups with Nextcloud to ensure maximum resilience of your data. Check Nextcloud's [documentation](https://docs.nextcloud.com/server/latest/admin_manual/maintenance/backup.html) for more information on this. + +You should now have a basic understanding of the Threefold Grid, the ThreeFold Explorer, Terraform, MariaDB, PHP and Nextcloud. + +This Nextcloud deployment could be improved in many ways and other guides might be published in the future with enhanced functionalities. Stay tuned for more Threefold Guides. If you have ideas on how to improve this guide, please let us know. We learn best when sharing knowledge. + + + +# Acknowledgements and References + +A big thank you to [Scott Yeager](https://github.com/scottyeager) for his help on brainstorming, troubleshooting and creating this tutorial. This guide wouldn't have been properly done without his time and dedication. This really is a team effort! + +This guide has been inspired by Weynand Kuijpers' [great tutorial](https://youtu.be/DIhfSRKAKHw) on how to deploy Nextcloud with Terraform.
+ +This single Nextcloud instance guide is an adaptation from the [Nextcloud Redundant Deployment guide](terraform_nextcloud_redundant.md). The inspiration to make a single instance deployment guide comes from [RobertL](https://forum.threefold.io/t/threefold-guide-nextcloud-redundant-deployment-on-two-3node-servers/3915/3) on the ThreeFold Forum. + +Thanks to everyone who helped shape this guide. \ No newline at end of file diff --git a/collections/system_administrators/terraform/advanced/terraform_nextcloud_toc.md b/collections/system_administrators/terraform/advanced/terraform_nextcloud_toc.md new file mode 100644 index 0000000..4152838 --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_nextcloud_toc.md @@ -0,0 +1,10 @@ +

Nextcloud Deployments

+ +We present here different Nextcloud deployments. While this section is focused on Nextcloud, those deployment architectures can be used as templates for other kinds of deployments on the TFGrid. + +

Table of Contents

+ +- [Nextcloud All-in-One Deployment](./terraform_nextcloud_aio.md) +- [Nextcloud Single Deployment](./terraform_nextcloud_single.md) +- [Nextcloud Redundant Deployment](./terraform_nextcloud_redundant.md) +- [Nextcloud 2-Node VPN Deployment](./terraform_nextcloud_vpn.md) \ No newline at end of file diff --git a/collections/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md b/collections/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md new file mode 100644 index 0000000..4045078 --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_nextcloud_vpn.md @@ -0,0 +1,343 @@ +

Nextcloud 2-Node VPN Deployment

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [2-Node Terraform Deployment](#2-node-terraform-deployment) + - [Create the Terraform Files](#create-the-terraform-files) + - [Variables File](#variables-file) + - [Main File](#main-file) + - [Deploy the 2-Node VPN](#deploy-the-2-node-vpn) +- [Nextcloud Setup](#nextcloud-setup) +- [Nextcloud VM Prerequisites](#nextcloud-vm-prerequisites) +- [Prepare the VMs for the Rsync Daily Backup](#prepare-the-vms-for-the-rsync-daily-backup) +- [Create a Cron Job for the Rsync Daily Backup](#create-a-cron-job-for-the-rsync-daily-backup) +- [Future Projects](#future-projects) +- [Questions and Feedback](#questions-and-feedback) + +*** + +# Introduction + +This guide is a proof-of-concept showing that, with two VMs connected by a WireGuard VPN, it is possible to run a Nextcloud AIO instance on the TFGrid on the first VM, with a daily backup and update handled by Borgbackup, while the second VM keeps a daily backup of the first VM's backup. In other words, we have two virtual machines: one VM with the Nextcloud instance and its backup, and another VM with a copy of that backup. + +This architecture leads to a higher redundancy level, since we can afford to lose one of the two VMs and still be able to retrieve the Nextcloud database. Note that to achieve this, we are creating a virtual private network (VPN) with WireGuard. This will connect the two VMs and allow for file transfers. While there are many ways to proceed, for this guide we will be using [ssh-keygen](https://linux.die.net/man/1/ssh-keygen), [Rsync](https://linux.die.net/man/1/rsync) and [Cron](https://linux.die.net/man/1/crontab). + +Note that, in order to reduce the deployment cost, we set the minimum CPU and memory requirements for the Backup VM. We do not need high CPU and memory for this VM since it is only used for storage. + +Note that this guide also makes use of the ThreeFold gateway. 
For this reason, this deployment can be set on any two 3Nodes on the TFGrid, i.e. there is no need for IPv4 on the 2 nodes we are deploying on, as long as we set a gateway on a gateway node. + +For now, let's see how to achieve this redundant deployment with Rsync! + +# 2-Node Terraform Deployment + +For this guide, we are deploying a Nextcloud AIO instance along with a Backup VM, enabling daily backups of both VMs. The two VMs are connected by a WireGuard VPN. The deployment will be using the [Nextcloud FList](https://github.com/threefoldtech/tf-images/tree/development/tfgrid3/nextcloud) available in the **tf-images** ThreeFold Tech repository. + +## Create the Terraform Files + +For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads. + +To facilitate the deployment, only the environment variables file needs to be adjusted. The **main.tf** file references the environment variables (e.g. **var.size** for the disk size) and thus you do not need to change this file. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the main.tf as is. + +For this example, we will be deploying the Nextcloud instance with a ThreeFold gateway and a gateway domain. Other configurations are possible. + +### Variables File + +* Copy the following content and save the file under the name `credentials.auto.tfvars`: + +``` +mnemonics = "..." +SSH_KEY = "..." +network = "main" + +size_vm1 = "50" +cpu_vm1 = "2" +memory_vm1 = "4096" + +size_vm2 = "50" +cpu_vm2 = "1" +memory_vm2 = "512" + +gateway_id = "50" +vm1_id = "5453" +vm2_id = "12" + +deployment_name = "nextcloudgatewayvpn" +nextcloud_flist = "https://hub.grid.tf/tf-official-apps/threefoldtech-nextcloudaio-latest.flist" +``` + +Make sure to add your own seed phrase and SSH public key. Simply replace the three dots with your own content. 
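As a quick sanity check before deploying, you can verify that the value you paste into `SSH_KEY` looks like an OpenSSH public key. This is only a rough sketch, not a full validation, and the key file name in the example is an assumption (adjust it to your own key type):

```shell
# check_ssh_key: rough check that a string looks like an OpenSSH public key
check_ssh_key() {
  case "$1" in
    ssh-*|ecdsa-sha2-*) echo "valid" ;;
    *) echo "invalid" ;;
  esac
}

# Example usage (assumes an ed25519 key pair already exists):
# check_ssh_key "$(cat ~/.ssh/id_ed25519.pub)"
check_ssh_key "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA example@host"   # prints "valid"
```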
Note that you can deploy on a different node than node 5453 for the **vm1** node. If you want to deploy on another node than node 50 for the **gateway** node, make sure that you choose a gateway node. To find a gateway node, go on the [ThreeFold Dashboard](https://dashboard.grid.tf/) Nodes section of the Explorer and select **Gateways (Only)**. + +You can also increase or modify the CPU, memory and size variables as needed. Note that we set the minimum CPU and memory parameters for the Backup VM (**vm2**). This will reduce the cost of the deployment. Since the Backup VM is only used for storage, we don't need to set the CPU and memory higher. + +### Main File + +* Copy the following content and save the file under the name `main.tf`: + +``` +variable "mnemonics" { + type = string + default = "your mnemonics" +} + +variable "network" { + type = string + default = "main" +} + +variable "SSH_KEY" { + type = string + default = "your SSH pub key" +} + +variable "deployment_name" { + type = string +} + +variable "size_vm1" { + type = string +} + +variable "cpu_vm1" { + type = string +} + +variable "memory_vm1" { + type = string +} + +variable "size_vm2" { + type = string +} + +variable "cpu_vm2" { + type = string +} + +variable "memory_vm2" { + type = string +} + +variable "nextcloud_flist" { + type = string +} + +variable "gateway_id" { + type = string +} + +variable "vm1_id" { + type = string +} + +variable "vm2_id" { + type = string +} + + +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { + mnemonics = var.mnemonics + network = var.network +} + +data "grid_gateway_domain" "domain" { + node = var.gateway_id + name = var.deployment_name +} + +resource "grid_network" "net" { + nodes = [var.gateway_id, var.vm1_id, var.vm2_id] + ip_range = "10.1.0.0/16" + name = "network" + description = "My network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + node = var.vm1_id + 
network_name = grid_network.net.name + + disks { + name = "data" + size = var.size_vm1 + } + + vms { + name = "vm1" + flist = var.nextcloud_flist + cpu = var.cpu_vm1 + memory = var.memory_vm1 + rootfs_size = 15000 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + GATEWAY = "true" + IPV4 = "false" + NEXTCLOUD_DOMAIN = data.grid_gateway_domain.domain.fqdn + } + mounts { + disk_name = "data" + mount_point = "/mnt/data" + } + } +} + +resource "grid_deployment" "d2" { + disks { + name = "disk2" + size = var.size_vm2 + } + node = var.vm2_id + network_name = grid_network.net.name + + vms { + name = "vm2" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu_vm2 + mounts { + disk_name = "disk2" + mount_point = "/disk2" + } + memory = var.memory_vm2 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + planetary = true + } +} + +resource "grid_name_proxy" "p1" { + node = var.gateway_id + name = data.grid_gateway_domain.domain.name + backends = [format("http://%s:80", grid_deployment.d1.vms[0].ip)] + network = grid_network.net.name + tls_passthrough = false +} + +output "wg_config" { + value = grid_network.net.access_wg_config +} + +output "vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} + +output "vm2_ip" { + value = grid_deployment.d2.vms[0].ip +} + + +output "fqdn" { + value = data.grid_gateway_domain.domain.fqdn +} +``` + +## Deploy the 2-Node VPN + +We now deploy the 2-node VPN with Terraform. Make sure that you are in the correct folder containing the main and variables files. 
+ +* Initialize Terraform: + * ``` + terraform init + ``` + +* Apply Terraform to deploy Nextcloud: + * ``` + terraform apply + ``` + +Note that, at any moment, if you want to see the information on your Terraform deployment, write the following: + * ``` + terraform show + ``` + +# Nextcloud Setup + +* Access Nextcloud Setup + * Once you've deployed Nextcloud, you can access the Nextcloud Setup page by pasting into a browser the URL displayed on the line `fqdn = "..."` of the `terraform show` output. For more information on this, [read this documentation](../../../dashboard/solutions/nextcloud.md#nextcloud-setup). +* Create a backup and set a daily backup and update + * Make sure to create a backup with `/mnt/backup` as the mount point, and set a daily update and backup for your Nextcloud VM. For more information, [read this documentation](../../../dashboard/solutions/nextcloud.md#backups-and-updates). + +> Note: By default, the daily Borgbackup is set at 4:00 UTC. If you change this parameter, make sure to adjust the time at which the [Rsync backup](#create-a-cron-job-for-the-rsync-daily-backup) is done. + +# Nextcloud VM Prerequisites + +We need to install a few things on the Nextcloud VM before going further. 
+ +* Update the Nextcloud VM + * ``` + apt update + ``` +* Install ping on the Nextcloud VM if you want to test the VPN connection (Optional) + * ``` + apt install iputils-ping -y + ``` +* Install Rsync on the Nextcloud VM + * ``` + apt install rsync + ``` +* Install nano on the Nextcloud VM + * ``` + apt install nano + ``` +* Install Cron on the Nextcloud VM + * ``` + apt install cron + ``` + +# Prepare the VMs for the Rsync Daily Backup + +* Test the VPN (Optional) by pinging the Nextcloud VM WireGuard IP (shown as **vm1_ip** in the Terraform outputs) with [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) + * ``` + ping 10.1.3.2 + ``` +* Generate an SSH key pair on the Backup VM + * ``` + ssh-keygen + ``` +* Take note of the public key in the Backup VM + * ``` + cat ~/.ssh/id_rsa.pub + ``` +* Add the public key of the Backup VM in the Nextcloud VM + * ``` + nano ~/.ssh/authorized_keys + ``` + +> Make sure to put the Backup VM SSH public key before the public key already present in the file **authorized_keys** of the Nextcloud VM. + +# Create a Cron Job for the Rsync Daily Backup + +We now set a daily cron job that will make a backup between the Nextcloud VM and the Backup VM using Rsync. + +* Open the crontab on the Backup VM + * ``` + crontab -e + ``` +* Add the cron job at the end of the file + * ``` + 0 8 * * * rsync -avz --no-perms -O --progress --delete --log-file=/root/rsync_storage.log root@10.1.3.2:/mnt/backup/ /mnt/backup/ + ``` + +> Note: By default, the Nextcloud automatic backup is set at 4:00 UTC. For this reason, we set the Rsync daily backup at 8:00 UTC. + +> Note: To set Rsync with a script, [read this documentation](../../computer_it_basics/file_transfer.md#automate-backup-with-rsync). + +# Future Projects + +This concept can be expanded in many directions. We could write a script to facilitate the process, ship a script directly in an FList for minimal user configuration, or explore MariaDB and GlusterFS instead of Rsync. 
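As a first step toward such a script, the Rsync command used in the cron job above can be wrapped in a small shell function. This is only a sketch: the log file path is a hypothetical choice, and in the actual deployment the source would be the Nextcloud VM path (e.g. `root@10.1.3.2:/mnt/backup/`):

```shell
# backup_sync SRC DEST: mirror SRC into DEST with rsync, logging to a file
backup_sync() {
  src="$1"
  dest="$2"
  rsync -az --no-perms -O --delete --log-file=/tmp/rsync_storage.log "$src" "$dest"
}

# Example usage for the deployment described above:
# backup_sync root@10.1.3.2:/mnt/backup/ /mnt/backup/
```

A function like this could then be called from cron or extended with error handling and notifications.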
+ +As a generic deployment, we can develop a weblet that makes a daily backup of any other ThreeFold Playground weblet. + +# Questions and Feedback + +We invite others to propose ideas and code if they feel inspired! + +If you have any questions or feedback, please let us know by either writing a post on the [ThreeFold Forum](https://forum.threefold.io/), or by chatting with us on the [TF Grid Tester Community](https://t.me/threefoldtesting) Telegram channel. \ No newline at end of file diff --git a/collections/system_administrators/terraform/advanced/terraform_nomad.md b/collections/system_administrators/terraform/advanced/terraform_nomad.md new file mode 100644 index 0000000..debc309 --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_nomad.md @@ -0,0 +1,359 @@ +

Deploy a Nomad Cluster

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [What is Nomad?](#what-is-nomad) +- [Prerequisites](#prerequisites) +- [Create the Terraform Files](#create-the-terraform-files) + - [Main File](#main-file) + - [Credentials File](#credentials-file) +- [Deploy the Nomad Cluster](#deploy-the-nomad-cluster) +- [SSH into the Client and Server Nodes](#ssh-into-the-client-and-server-nodes) + - [SSH with the Planetary Network](#ssh-with-the-planetary-network) + - [SSH with WireGuard](#ssh-with-wireguard) +- [Destroy the Nomad Deployment](#destroy-the-nomad-deployment) +- [Conclusion](#conclusion) + +*** + +## Introduction + +In this ThreeFold Guide, we will learn how to deploy a Nomad cluster on the TFGrid with Terraform. We cover a basic Nomad cluster with three server nodes and two client nodes. After completing this guide, you will have sufficient knowledge to build your own personalized Nomad cluster. + + + +## What is Nomad? + +[Nomad](https://www.nomadproject.io/) is a simple and flexible scheduler and orchestrator to deploy and manage containers and non-containerized applications across on-premises and clouds at scale. + +In the dynamic world of cloud computing, managing and orchestrating workloads across diverse environments can be a daunting task. Nomad emerges as a powerful solution, simplifying and streamlining the deployment, scheduling, and management of applications. + +Nomad's elegance lies in its lightweight architecture and ease of use. It operates as a single binary, minimizing resource consumption and complexity. Its intuitive user interface and straightforward configuration make it accessible to a wide range of users, from novices to experienced DevOps practitioners. + +Nomad's versatility extends beyond its user-friendliness. It seamlessly handles a wide array of workloads, including legacy applications, microservices, and batch jobs. Its adaptability extends to diverse environments, effortlessly orchestrating workloads across on-premises infrastructure and public clouds. 
You can think of it as Kubernetes for humans! + + + +## Prerequisites + +* [Install Terraform](https://developer.hashicorp.com/terraform/downloads) +* [Install WireGuard](https://www.wireguard.com/install/) + +You need to properly download and install Terraform and WireGuard on your local computer. Simply follow the documentation depending on your operating system (Linux, macOS and Windows). + +If you are new to Terraform, feel free to read this basic [Terraform Full VM guide](../terraform_full_vm.md) to get you started. + + + +## Create the Terraform Files + +For this guide, we use two files to deploy with Terraform: a main file and a variables file. The variables file contains the environment variables and the main file contains the necessary information to deploy your workload. + +To facilitate the deployment, only the environment variables file needs to be adjusted. The file `main.tf` will be using the environment variables from the variables file (e.g. `var.cpu` for the CPU parameter) and thus you do not need to change this file. + +Of course, you can adjust the two files based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the main file as is. + +Also note that this deployment uses both the Planetary network and WireGuard. + +### Main File + +We start by creating the main file for our Nomad cluster. 
+ +* Create a directory for your Terraform Nomad cluster + * ``` + mkdir nomad + ``` + * ``` + cd nomad + ``` +* Create the `main.tf` file + * ``` + nano main.tf + ``` + +* Copy the following `main.tf` template and save the file + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +variable "mnemonics" { + type = string +} + +variable "SSH_KEY" { + type = string +} + +variable "tfnodeid" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +provider "grid" { + mnemonics = var.mnemonics + network = "main" +} + +locals { + name = "nomadcluster" +} + +resource "grid_network" "net1" { + name = local.name + nodes = [var.tfnodeid] + ip_range = "10.1.0.0/16" + description = "nomad network" + add_wg_access = true +} +resource "grid_deployment" "d1" { + disks { + name = "disk1" + size = var.size + } + name = local.name + node = var.tfnodeid + network_name = grid_network.net1.name + vms { + name = "server1" + flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-server-latest.flist" + cpu = var.cpu + memory = var.memory + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + entrypoint = "/sbin/zinit init" + ip = "10.1.3.2" + env_vars = { + SSH_KEY = var.SSH_KEY + } + planetary = true + } + vms { + name = "server2" + flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-server-latest.flist" + cpu = var.cpu + memory = var.memory + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + FIRST_SERVER_IP = "10.1.3.2" + } + planetary = true + } + vms { + name = "server3" + flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-server-latest.flist" + cpu = var.cpu + memory = var.memory + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + 
FIRST_SERVER_IP = "10.1.3.2" + } + planetary = true + } + vms { + name = "client1" + flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-client-latest.flist" + cpu = var.cpu + memory = var.memory + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + FIRST_SERVER_IP = "10.1.3.2" + } + planetary = true + } + vms { + name = "client2" + flist = "https://hub.grid.tf/aelawady.3bot/abdulrahmanelawady-nomad-client-latest.flist" + cpu = var.cpu + memory = var.memory + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + FIRST_SERVER_IP = "10.1.3.2" + } + planetary = true + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} + +output "server1_wg_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "server2_wg_ip" { + value = grid_deployment.d1.vms[1].ip +} +output "server3_wg_ip" { + value = grid_deployment.d1.vms[2].ip +} +output "client1_wg_ip" { + value = grid_deployment.d1.vms[3].ip +} +output "client2_wg_ip" { + value = grid_deployment.d1.vms[4].ip +} + +output "server1_planetary_ip" { + value = grid_deployment.d1.vms[0].ygg_ip +} +output "server2_planetary_ip" { + value = grid_deployment.d1.vms[1].ygg_ip +} +output "server3_planetary_ip" { + value = grid_deployment.d1.vms[2].ygg_ip +} +output "client1_planetary_ip" { + value = grid_deployment.d1.vms[3].ygg_ip +} +output "client2_planetary_ip" { + value = grid_deployment.d1.vms[4].ygg_ip +} +``` + +### Credentials File + +We create a credentials file that will contain the environment variables. This file should be in the same directory as the main file. + +* Create the `credentials.auto.tfvars` file + * ``` + nano credentials.auto.tfvars + ``` + +* Copy the `credentials.auto.tfvars` content and save the file + * ``` + mnemonics = "..." + SSH_KEY = "..." + + tfnodeid = "..." 
+ + size = "50" + cpu = "2" + memory = "1024" + ``` + +Make sure to replace the three dots with your own information for `mnemonics` and `SSH_KEY`. You will also need to find a suitable node for your deployment and set its node ID (`tfnodeid`). Feel free to adjust the parameters `size`, `cpu` and `memory` if needed. + + + +## Deploy the Nomad Cluster + +We now deploy the Nomad Cluster with Terraform. Make sure that you are in the directory containing the `main.tf` file. + +* Initialize Terraform + * ``` + terraform init + ``` + +* Apply Terraform to deploy the Nomad cluster + * ``` + terraform apply + ``` + + + +## SSH into the Client and Server Nodes + +You can now SSH into the client and server nodes using both the Planetary network and WireGuard. + +Note that the IP addresses will be shown under `Outputs` after running the command `terraform apply`, with `planetary_ip` for the Planetary network and `wg_ip` for WireGuard. + +### SSH with the Planetary Network + +* To [SSH with the Planetary network](../../getstarted/ssh_guide/ssh_openssh.md), write the following with the proper IP address + * ``` + ssh root@planetary_ip + ``` + +You now have SSH access over the Planetary network to the client and server nodes of your Nomad cluster. + +### SSH with WireGuard + +To SSH with WireGuard, we first need to set the proper WireGuard configurations. + +* Create a file named `wg.conf` in the directory `/etc/wireguard` + * ``` + nano /etc/wireguard/wg.conf + ``` + +* Paste the content provided by the Terraform deployment in the file `wg.conf` and save it. + * Note that you can use `terraform show` to see the Terraform output. The WireGuard configuration (`wg_config`) stands between the two `EOT` markers. 
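If you prefer not to copy the block by hand, you can extract it from a saved `terraform show` dump. This is a sketch assuming you first saved that output to a file (the file name `show.txt` here is a hypothetical choice) and that the configuration sits between the `<<EOT` and closing `EOT` markers as described above:

```shell
# extract_wg_config FILE: print the lines between the "<<EOT" marker and the
# closing "EOT" line of FILE (f=1 turns printing on, f=0 turns it off)
extract_wg_config() {
  awk '/<<-?EOT/ {f=1; next} /^[[:space:]]*EOT[[:space:]]*$/ {f=0} f' "$1"
}

# Example usage, assuming the output was saved first:
# terraform show > show.txt
# extract_wg_config show.txt > /etc/wireguard/wg.conf
```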
+ +* Start WireGuard on your local computer + * ``` + wg-quick up wg + ``` +* As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the WireGuard IP of a node to make sure the connection is correct + * ``` + ping wg_ip + ``` + +We are now ready to SSH into the client and server nodes with WireGuard. + +* To SSH with WireGuard, write the following with the proper IP address: + * ``` + ssh root@wg_ip + ``` + +You now have SSH access over WireGuard to the client and server nodes of your Nomad cluster. For more information on connecting with WireGuard, read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md). + + + +## Destroy the Nomad Deployment + +If you want to destroy the Nomad deployment, write the following in the terminal: + +* ``` + terraform destroy + ``` + * Then write `yes` to confirm. + +Make sure that you are in the corresponding Terraform folder when writing this command. + + +## Conclusion + +You now have the basic knowledge to deploy a Nomad cluster on the TFGrid. Feel free to explore the many possibilities that come with Nomad. + +You can now use a Nomad cluster to deploy your workloads. For more information on this, read this documentation on [how to deploy a Redis workload on the Nomad cluster](https://developer.hashicorp.com/nomad/tutorials/get-started/gs-deploy-job). + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/system_administrators/terraform/advanced/terraform_provider.md b/collections/system_administrators/terraform/advanced/terraform_provider.md new file mode 100644 index 0000000..eafe66c --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_provider.md @@ -0,0 +1,53 @@ +

Terraform Provider

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) +- [Environment Variables](#environment-variables) +- [Remarks](#remarks) + +*** + +## Introduction + +We present the basics of the Terraform Provider. + +## Example + +``` terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} +provider "grid" { + mnemonics = "FROM THE CREATE TWIN STEP" + network = "main" # grid network, one of: dev, test, qa, main + key_type = "sr25519" # key type registered on substrate (ed25519 or sr25519) + relay_url = "wss://relay.dev.grid.tf" + rmb_timeout = 60 # timeout duration in seconds for rmb calls + substrate_url = "wss://tfchain.dev.grid.tf/ws" +} +``` + +## Environment Variables + +The provider settings should be recognizable as environment variables too: + +- `MNEMONICS` +- `NETWORK` +- `SUBSTRATE_URL` +- `KEY_TYPE` +- `RELAY_URL` +- `RMB_TIMEOUT` + +The `*_URL` variables can be used to override the default URLs associated with the specified network. + +## Remarks + +- The Grid Terraform provider is hosted on the Terraform registry [here](https://registry.terraform.io/providers/threefoldtech/grid/latest/docs?pollNotifications=true) +- All provider input variables and their descriptions can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/index.md) +- Capitalized environment variables can be used instead of writing them in the provider (e.g. MNEMONICS) diff --git a/collections/system_administrators/terraform/advanced/terraform_provisioners.md b/collections/system_administrators/terraform/advanced/terraform_provisioners.md new file mode 100644 index 0000000..3bae3ea --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_provisioners.md @@ -0,0 +1,119 @@ +

Terraform and Provisioner

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) +- [Params docs](#params-docs) + - [Requirements](#requirements) + - [Connection Block](#connection-block) + - [Provisioner Block](#provisioner-block) + - [More Info](#more-info) + +*** + +## Introduction + +In this [example](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/external_provisioner/remote-exec_hello-world/main.tf), we will see how to deploy a VM and apply provisioner commands on it on the TFGrid. + +## Example + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +locals { + name = "myvm" +} + +resource "grid_network" "net1" { + nodes = [1] + ip_range = "10.1.0.0/24" + name = local.name + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + name = local.name + node = 1 + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/grid3_ubuntu20.04-latest.flist" + entrypoint = "/init.sh" + cpu = 2 + memory = 1024 + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + planetary = true + } + connection { + type = "ssh" + user = "root" + agent = true + host = grid_deployment.d1.vms[0].ygg_ip + } + + provisioner "remote-exec" { + inline = [ + "echo 'Hello world!' 
> /root/readme.txt" + ] + } +} +``` + +## Params docs + +### Requirements + +- the machine should have an SSH server running +- the machine should have `scp` installed + +### Connection Block + +- defines how we will connect to the deployed machine + +``` terraform + connection { + type = "ssh" + user = "root" + agent = true + host = grid_deployment.d1.vms[0].ygg_ip + } +``` + +- `type`: defines the service used to connect +- `user`: the user to connect as +- `agent`: if set to true, the provisioner will use the default key to connect to the remote machine +- `host`: the IP address or hostname of the remote machine + +### Provisioner Block + +- defines the actual provisioner behaviour + +``` terraform + provisioner "remote-exec" { + inline = [ + "echo 'Hello world!' > /root/readme.txt" + ] + } +``` + +- `remote-exec`: the provisioner type we want to use; it can be remote, local or another type +- inline: This is a list of command strings. They are executed in the order they are provided. This cannot be provided with script or scripts. +- script: This is a path (relative or absolute) to a local script that will be copied to the remote resource and then executed. This cannot be provided with inline or scripts. +- scripts: This is a list of paths (relative or absolute) to local scripts that will be copied to the remote resource and then executed. They are executed in the order they are provided. This cannot be provided with inline or script. + +### More Info + +A complete list of provisioner parameters can be found [here](https://www.terraform.io/language/resources/provisioners/remote-exec). diff --git a/collections/system_administrators/terraform/advanced/terraform_updates.md b/collections/system_administrators/terraform/advanced/terraform_updates.md new file mode 100644 index 0000000..e3c2b66 --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_updates.md @@ -0,0 +1,55 @@ +

Updating

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Updating with Terraform](#updating-with-terraform) +- [Adjustments](#adjustments) + +*** + +## Introduction + +We present ways to update using Terraform. Note that this is not fully supported. + +Some of the updates work, but the code is not finished; use at your own risk. + +## Updating with Terraform + +Updates are triggered by changing the deployment's fields. +So for example, if you have the following network resource: + +```terraform +resource "grid_network" "net" { + nodes = [2] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" +} +``` + +Then you decide to add a node: + +```terraform +resource "grid_network" "net" { + nodes = [2, 4] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" +} +``` + +After calling `terraform apply`, the provider does the following: + +- Add node 4 to the network. +- Update the version of the workload. +- Update the version of the deployment. +- Update the hash in the contract (the contract id will stay the same) + +## Adjustments + +There are workloads that don't support in-place updates (e.g. Zmachines). To change them, there are a couple of options (all of them perform a destroy/create, so data can be lost): + +1. `terraform taint grid_deployment.d1` (next apply will destroy ALL workloads within grid_deployment.d1 and create a new deployment) +2. `terraform destroy --target grid_deployment.d1 && terraform apply --target grid_deployment.d1` (same as above) +3. Remove the VM, then execute `terraform apply`, then add the VM with the new config (this performs two updates but keeps neighboring workloads inside the same deployment intact). 
(CAUTION: this should be done only if the VM is the last one in the list of VMs, otherwise undesired behavior will occur) diff --git a/collections/system_administrators/terraform/advanced/terraform_wireguard_ssh.md b/collections/system_administrators/terraform/advanced/terraform_wireguard_ssh.md new file mode 100644 index 0000000..174bc3a --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_wireguard_ssh.md @@ -0,0 +1,280 @@ +

SSH Into a 3Node with Wireguard

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer) +- [Create the Terraform Files](#create-the-terraform-files) +- [Deploy the Micro VM with Terraform](#deploy-the-micro-vm-with-terraform) +- [Set the Wireguard Connection](#set-the-wireguard-connection) +- [SSH into the 3Node with Wireguard](#ssh-into-the-3node-with-wireguard) +- [Destroy the Terraform Deployment](#destroy-the-terraform-deployment) +- [Conclusion](#conclusion) + +*** + +## Introduction + +In this ThreeFold Guide, we show how simple it is to deploy a micro VM on the ThreeFold Grid with Terraform and to make an SSH connection with Wireguard. + + + +## Prerequisites + +* [Install Terraform](../terraform_install.md) +* [Install Wireguard](https://www.wireguard.com/install/) + +You need to properly download and install Terraform and WireGuard on your local computer. Simply follow the linked documentation depending on your operating system (Linux, macOS and Windows). + + + +## Find a 3Node with the ThreeFold Explorer + +We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a micro VM on the TFGrid. + +We show here how to find a suitable 3Node using the ThreeFold Explorer. 
+ +* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net) to find a 3Node +* Find a 3Node with suitable resources for the deployment and take note of its node ID in the leftmost column `ID` +* For proper understanding, we give further information on some relevant columns: + * `ID` refers to the node ID + * `Free Public IPs` refers to available IPv4 public IP addresses + * `HRU` refers to HDD storage + * `SRU` refers to SSD storage + * `MRU` refers to RAM (memory) + * `CRU` refers to virtual cores (vcores) +* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters: + * At the top left of the screen, in the `Filters` box, select the parameter(s) you want. + * For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes. Here's what would work for our current situation: + * `Free SRU (GB)`: 15 + * `Free MRU (GB)`: 1 + * `Total CRU (Cores)`: 1 + +Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files. + + + +## Create the Terraform Files + +For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads. + +To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file. + +Of course, you can adjust the deployments based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is. + +On your local computer, create a new folder named `terraform` and a subfolder called `deployment-wg-ssh`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`. + +Modify the variable file to take into account your own seed phrase and SSH keys. 
You should also specify the node ID of the 3Node you will be deploying on. + +Now let's create the Terraform files. + +* Open the terminal and go to the home directory + * ``` + cd ~ + ``` + +* Create the folder `terraform` and the subfolder `deployment-wg-ssh`: + * ``` + mkdir -p terraform/deployment-wg-ssh + ``` + * ``` + cd terraform/deployment-wg-ssh + ``` +* Create the `main.tf` file: + * ``` + nano main.tf + ``` + +* Copy the `main.tf` content and save the file. + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +variable "mnemonics" { + type = string +} + +variable "SSH_KEY" { + type = string +} + +variable "tfnodeid1" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +provider "grid" { + mnemonics = var.mnemonics + network = "main" +} + +locals { + name = "tfvm" +} + +resource "grid_network" "net1" { + name = local.name + nodes = [var.tfnodeid1] + ip_range = "10.1.0.0/16" + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + disks { + name = "disk1" + size = var.size + } + name = local.name + node = var.tfnodeid1 + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} + +``` + +* Create the `credentials.auto.tfvars` file: + * ``` + nano credentials.auto.tfvars + ``` + +* Copy the `credentials.auto.tfvars` content, set the node ID as well as your mnemonics and SSH public key, then save the file. + * ``` + mnemonics = "..." + SSH_KEY = "..." + + tfnodeid1 = "..." 
+ + size = "15" + cpu = "1" + memory = "512" + ``` + +Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the 3Node server you wish to deploy on. Simply replace the three dots by the proper content. + + + +## Deploy the Micro VM with Terraform + +We now deploy the micro VM with Terraform. Make sure that you are in the correct folder `terraform/deployment-wg-ssh` containing the main and variables files. + +* Initialize Terraform: + * ``` + terraform init + ``` + +* Apply Terraform to deploy the micro VM: + * ``` + terraform apply + ``` + * Terraform will then present the actions it will perform. Write `yes` to confirm the deployment. + + +Note that, at any moment, if you want to see the information on your Terraform deployments, write the following: + * ``` + terraform show + ``` + + + +## Set the Wireguard Connection + +To set the Wireguard connection, on your local computer, you will need to take the Terraform `wg_config` output and create a `wg.conf` file in the directory `/usr/local/etc/wireguard/wg.conf`. Note that the Terraform output starts and ends with EOT. + +For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md). + +* Create a file named `wg.conf` in the directory `/usr/local/etc/wireguard/wg.conf`: + * ``` + nano /usr/local/etc/wireguard/wg.conf + ``` + * Paste the content between the two `EOT` markers displayed after you run `terraform apply`. + +* Start Wireguard: + * ``` + wg-quick up wg + ``` + +If you want to stop the Wireguard service, write the following on your terminal: + +* ``` + wg-quick down wg + ``` + +> Note: If it doesn't work and you already made a Wireguard connection with the same file from Terraform (from a previous deployment), write on the terminal `wg-quick down wg`, then `wg-quick up wg`. 
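For reference, the `wg_config` output you paste into `wg.conf` follows the standard WireGuard INI layout. The keys, IP ranges, endpoint and port below are illustrative placeholders only, not values from a real deployment:

```
[Interface]
# Address and private key generated for your deployment
Address = 100.64.1.2/32
PrivateKey = <generated private key>

[Peer]
# The 3Node acting as the Wireguard access point
PublicKey = <node public key>
AllowedIPs = 10.1.0.0/16, 100.64.1.0/24
PersistentKeepalive = 25
Endpoint = <node public IP>:<port>
```

If `wg-quick up wg` succeeds but pings fail, a good first check is that `AllowedIPs` covers the deployment's `ip_range` (here `10.1.0.0/16`).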
+ +As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP address of the VM to make sure the Wireguard connection is correct. Make sure to replace `vm_wg_ip` with the proper IP address: +* ``` + ping vm_wg_ip + ``` + * Note that, with this Terraform deployment, the Wireguard IP address of the micro VM is named `node1_zmachine1_ip`. + + +## SSH into the 3Node with Wireguard + +To SSH into the 3Node with Wireguard, simply write the following in the terminal with the proper Wireguard IP address: + +``` +ssh root@vm_wg_ip +``` + +You now have access to the VM over a Wireguard SSH connection. + + + +## Destroy the Terraform Deployment + +If you want to destroy the Terraform deployment, write the following in the terminal: + +* ``` + terraform destroy + ``` + * Then write `yes` to confirm. + +Make sure that you are in the corresponding Terraform folder when writing this command. In this guide, the folder is `deployment-wg-ssh`. + + + +## Conclusion + +In this simple ThreeFold Guide, you learned how to SSH into a 3Node with Wireguard and Terraform. Feel free to explore further Terraform and Wireguard. + +As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/system_administrators/terraform/advanced/terraform_wireguard_vpn.md b/collections/system_administrators/terraform/advanced/terraform_wireguard_vpn.md new file mode 100644 index 0000000..d8d27ea --- /dev/null +++ b/collections/system_administrators/terraform/advanced/terraform_wireguard_vpn.md @@ -0,0 +1,345 @@ +

Deploy Micro VMs and Set a Wireguard VPN

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer) +- [Create a Two Servers Wireguard VPN with Terraform](#create-a-two-servers-wireguard-vpn-with-terraform) +- [Deploy the Micro VMs with Terraform](#deploy-the-micro-vms-with-terraform) +- [Set the Wireguard Connection](#set-the-wireguard-connection) +- [SSH into the 3Node](#ssh-into-the-3node) +- [Destroy the Terraform Deployment](#destroy-the-terraform-deployment) +- [Conclusion](#conclusion) + +*** + +## Introduction + +In this ThreeFold Guide, we will learn how to deploy two micro virtual machines (Ubuntu 22.04) with Terraform. The Terraform deployment will be composed of a virtual private network (VPN) using Wireguard. The two VMs will thus be connected in a private and secure network. + +Note that this concept can be extended with more than two micro VMs. Once you understand this guide, you will be able to adjust and deploy your own personalized Wireguard VPN on the ThreeFold Grid. + + +## Prerequisites + +* [Install Terraform](../terraform_install.md) +* [Install Wireguard](https://www.wireguard.com/install/) + +You need to properly download and install Terraform and Wireguard on your local computer. Simply follow the linked documentation for your operating system (Linux, macOS or Windows). + + + +## Find a 3Node with the ThreeFold Explorer + +We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a micro VM on the TFGrid. We are also looking for a 3Node with a public IPv4 address. + +We show here how to find a suitable 3Node using the ThreeFold Explorer. 
+ +* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net) +* Find a 3Node with suitable resources for the deployment and take note of its node ID in the leftmost column `ID` +* For proper understanding, we give further information on some relevant columns: + * `ID` refers to the node ID + * `Free Public IPs` refers to available IPv4 public IP addresses + * `HRU` refers to HDD storage + * `SRU` refers to SSD storage + * `MRU` refers to RAM (memory) + * `CRU` refers to virtual cores (vcores) +* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters: + * At the top left of the screen, in the `Filters` box, select the parameter(s) you want. + * For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes. + * `Free SRU (GB)`: 15 + * `Free MRU (GB)`: 1 + * `Total CRU (Cores)`: 1 + * `Free Public IP`: 2 + * Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses. + +Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files. + + + +## Create a Two Servers Wireguard VPN with Terraform + +For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workloads. + +To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file contains the environment variables (e.g. `var.size` for the disk size) and thus you do not need to change this file. + +Of course, you can adjust the deployments based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` as is. 
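As mentioned in the introduction, the VPN extends naturally beyond two machines, and relative to the `main.tf` presented below this only requires two kinds of additions: the extra node goes into the network's `nodes` list, and it gets its own deployment resource. A hedged sketch of the additions for a hypothetical third node (the variable `tfnodeid3` and resource `d3` are not part of this guide's files):

```terraform
# Extend the network to a third node (tfnodeid3 is a hypothetical extra variable)
resource "grid_network" "net1" {
  name          = local.name
  nodes         = [var.tfnodeid1, var.tfnodeid2, var.tfnodeid3]
  ip_range      = "10.1.0.0/16"
  description   = "newer network"
  add_wg_access = true
}

# Give the third VM its own deployment on that node
resource "grid_deployment" "d3" {
  name         = local.name
  node         = var.tfnodeid3
  network_name = grid_network.net1.name
  vms {
    name       = "vm3"
    flist      = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist"
    cpu        = var.cpu
    memory     = var.memory
    entrypoint = "/sbin/zinit init"
    env_vars = {
      SSH_KEY = var.SSH_KEY
    }
  }
}
```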
+ +On your local computer, create a new folder named `terraform` and a subfolder called `deployment-wg-vpn`. In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`. + +Modify the variable file to take into account your own seed phrase and SSH keys. You should also specify the node IDs of the two 3Nodes you will be deploying on. + +Now let's create the Terraform files. + + +* Open the terminal and go to the home directory + * ``` + cd ~ + ``` + +* Create the folder `terraform` and the subfolder `deployment-wg-vpn`: + * ``` + mkdir -p terraform && cd $_ + ``` + * ``` + mkdir deployment-wg-vpn && cd $_ + ``` +* Create the `main.tf` file: + * ``` + nano main.tf + ``` + +* Copy the `main.tf` content and save the file. + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +variable "mnemonics" { + type = string +} + +variable "SSH_KEY" { + type = string +} + +variable "tfnodeid1" { + type = string +} + +variable "tfnodeid2" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +provider "grid" { + mnemonics = var.mnemonics + network = "main" +} + +locals { + name = "tfvm" +} + +resource "grid_network" "net1" { + name = local.name + nodes = [var.tfnodeid1, var.tfnodeid2] + ip_range = "10.1.0.0/16" + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + disks { + name = "disk1" + size = var.size + } + name = local.name + node = var.tfnodeid1 + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +} + +resource "grid_deployment" "d2" { + disks { + name = "disk2" + size = var.size + } + name = local.name + node = var.tfnodeid2 + network_name = grid_network.net1.name + + vms { + name = "vm2" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk2" + mount_point = "/disk2" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "node1_zmachine2_ip" { + value = grid_deployment.d2.vms[0].ip +} + +output "ygg_ip1" { + value = grid_deployment.d1.vms[0].ygg_ip +} +output "ygg_ip2" { + value = grid_deployment.d2.vms[0].ygg_ip +} + +output "ipv4_vm1" { + value = grid_deployment.d1.vms[0].computedip +} + +output "ipv4_vm2" { + value = grid_deployment.d2.vms[0].computedip +} + +``` + +In this guide, the virtual IP for `vm1` is 10.1.3.2 and the virtual IP for `vm2` is 10.1.4.2. This might be different during your own deployment. Change the code in this guide accordingly. + +* Create the `credentials.auto.tfvars` file: + * ``` + nano credentials.auto.tfvars + ``` + +* Copy the `credentials.auto.tfvars` content and save the file. + * ``` + mnemonics = "..." + SSH_KEY = "..." + + tfnodeid1 = "..." + tfnodeid2 = "..." + + size = "15" + cpu = "1" + memory = "512" + ``` + +Make sure to add your own seed phrase and SSH public key. You will also need to specify the two node IDs of the servers used. Simply replace the three dots by the proper content. + +Set the parameters for your VMs as you wish. The two servers will have the same parameters. For this example, we use the minimum parameters. + + +## Deploy the Micro VMs with Terraform + +We now deploy the VPN with Terraform. Make sure that you are in the correct folder `terraform/deployment-wg-vpn` containing the main and variables files. 
+ +* Initialize Terraform by writing the following in the terminal: + * ``` + terraform init + ``` +* Apply the Terraform deployment: + * ``` + terraform apply + ``` + * Terraform will then present the actions it will perform. Write `yes` to confirm the deployment. + +Note that, at any moment, if you want to see the information on your Terraform deployments, write the following: + * ``` + terraform show + ``` + + + +## Set the Wireguard Connection + +To set the Wireguard connection, on your local computer, you will need to take the Terraform `wg_config` output and create a `wg.conf` file in the directory `/usr/local/etc/wireguard/wg.conf`. Note that the Terraform output starts and ends with EOT. + +For more information on WireGuard, notably in relation to Windows, please read [this documentation](../../getstarted/ssh_guide/ssh_wireguard.md). + +* Create a file named `wg.conf` in the directory `/usr/local/etc/wireguard/wg.conf`: + * ``` + nano /usr/local/etc/wireguard/wg.conf + ``` + * Paste the content between the two `EOT` markers displayed after you run `terraform apply`. + +* Start Wireguard: + * ``` + wg-quick up wg + ``` + +If you want to stop the Wireguard service, write the following on your terminal: + +* ``` + wg-quick down wg + ``` + +> Note: If it doesn't work and you already made a Wireguard connection with the same file from Terraform (from a previous deployment), write on the terminal `wg-quick down wg`, then `wg-quick up wg`. + +As a test, you can [ping](../../computer_it_basics/cli_scripts_basics.md#test-the-network-connectivity-of-a-domain-or-an-ip-address-with-ping) the virtual IP address of the VMs to make sure the Wireguard connection is correct. Make sure to replace `wg_vm_ip` with the proper IP address for each VM: + +* ``` + ping wg_vm_ip + ``` + + + +## SSH into the 3Node + +You can now SSH into the 3Nodes with either Wireguard or IPv4. 
+ +To SSH with Wireguard, write the following with the proper IP address for each 3Node: + +``` +ssh root@vm_wg_ip +``` + +To SSH with IPv4, write the following for each 3Node: + +``` +ssh root@vm_IPv4 +``` + +You now have SSH access to the VMs over Wireguard and IPv4. + + + +## Destroy the Terraform Deployment + +If you want to destroy the Terraform deployment, write the following in the terminal: + +* ``` + terraform destroy + ``` + * Then write `yes` to confirm. + +Make sure that you are in the corresponding Terraform folder when writing this command. In this guide, the folder is `deployment-wg-vpn`. + + + +## Conclusion + +In this ThreeFold Guide, we learned how easy it is to deploy a VPN with Wireguard and Terraform. You can adjust the parameters as you like and explore different possibilities. + +As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. 
\ No newline at end of file diff --git a/collections/system_administrators/terraform/grid3_terraform_home.md b/collections/system_administrators/terraform/grid3_terraform_home.md new file mode 100644 index 0000000..82e6155 --- /dev/null +++ b/collections/system_administrators/terraform/grid3_terraform_home.md @@ -0,0 +1 @@ +!!!include:terraform_readme diff --git a/collections/system_administrators/terraform/grid_terraform.md b/collections/system_administrators/terraform/grid_terraform.md new file mode 100644 index 0000000..60f4dbe --- /dev/null +++ b/collections/system_administrators/terraform/grid_terraform.md @@ -0,0 +1,270 @@ +# Grid provider for terraform + + - A resource, and a data source (`internal/provider/`), + - Examples (`examples/`) + +## Requirements + +- [Terraform](https://www.terraform.io/downloads.html) >= 0.13.x +- [Go](https://golang.org/doc/install) >= 1.15 + +## Building The Provider + +Note: please clone all of the following repos in the same directory +- clone github.com/threefoldtech/zos (switch to master-3 branch) +- Clone github.com/threefoldtech/tf_terraform_provider (deployment_resource branch) +- Enter the repository directory + +```bash +go get +mkdir -p ~/.terraform.d/plugins/threefoldtech.com/providers/grid/0.1/linux_amd64 +go build -o terraform-provider-grid +mv terraform-provider-grid ~/.terraform.d/plugins/threefoldtech.com/providers/grid/0.1/linux_amd64 +``` + + +## example deployment + + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" {} + + +resource "grid_deployment" "d1" { + node = 2 + disks { + name = "mydisk1" + size = 2 + description = "this is my disk description1" + + } + disks { + name = "mydisk2" + size=2 + description = "this is my disk2" + } + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 2048 + entrypoint = "/sbin/zinit init" + mounts { + disk_name = "mydisk1" + mount_point = "/opt" + } + 
mounts { + disk_name = "mydisk2" + mount_point = "/test" + } + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTwULSsUubOq3VPWL6cdrDvexDmjfznGydFPyaNcn7gAL9lRxwFbCDPMj7MbhNSpxxHV2+/iJPQOTVJu4oc1N7bPP3gBCnF51rPrhTpGCt5pBbTzeyNweanhedkKDsCO2mIEh/92Od5Hg512dX4j7Zw6ipRWYSaepapfyoRnNSriW/s3DH/uewezVtL5EuypMdfNngV/u2KZYWoeiwhrY/yEUykQVUwDysW/xUJNP5o+KSTAvNSJatr3FbuCFuCjBSvageOLHePTeUwu6qjqe+Xs4piF1ByO/6cOJ8bt5Vcx0bAtI8/MPApplUU/JWevsPNApvnA/ntffI+u8DCwgP" + } + + } +} + +``` + +## Using the provider + +to create your twin please check [grid substrate getting started](grid_substrate_getting_started) + +```bash +./msgbusd --twin #run message bus with your twin id +cd examples/resources +export MNEMONICS="" +terraform init && terraform apply +``` +## Destroying deployment +```bash +terraform destroy +``` + +## More examples + +a two machine deployment with the first using a public ip + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +resource "grid_network" "net1" { + nodes = [2] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + node = 2 + network_name = grid_network.net1.name + ip_range = grid_network.net1.nodes_ip_range["2"] + + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + publicip = true + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52" + } + + } + vms { + name = "anothervm" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52" + } + + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} + +output "node1_vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} + +output "node1_vm2_ip" { + value = grid_deployment.d1.vms[1].ip +} + +output "public_ip" { + value = grid_deployment.d1.vms[0].computedip +} +``` + +multinode deployments +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +resource "grid_network" "net1" { + nodes = [4, 2] + ip_range = "172.20.0.0/16" + name = "net1" + description = "new network" +} + +resource "grid_deployment" "d1" { + node = 4 + network_name = 
grid_network.net1.name + ip_range = grid_network.net1.deployment_info[0].ip_range + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52" + } + + } + +} + +resource "grid_deployment" "d2" { + node = 2 + network_name = grid_network.net1.name + ip_range = grid_network.net1.nodes_ip_range["2"] + vms { + name = "vm3" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52" + } + + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} + +output "node1_vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} + + +output "node2_vm1_ip" { + value = grid_deployment.d2.vms[0].ip +} + + +``` + +zds + 
+``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +resource "grid_deployment" "d1" { + node = 2 + + zdbs{ + name = "zdb1" + size = 1 + description = "zdb1 description" + password = "zdbpasswd1" + mode = "user" + } + zdbs{ + name = "zdb2" + size = 2 + description = "zdb2 description" + password = "zdbpasswd2" + mode = "seq" + } +} + +output "deployment_id" { + value = grid_deployment.d1.id +} +``` \ No newline at end of file diff --git a/collections/system_administrators/terraform/img/terraform_install.png b/collections/system_administrators/terraform/img/terraform_install.png new file mode 100644 index 0000000..a6026ae Binary files /dev/null and b/collections/system_administrators/terraform/img/terraform_install.png differ diff --git a/collections/system_administrators/terraform/img/terraform_works.png b/collections/system_administrators/terraform/img/terraform_works.png new file mode 100644 index 0000000..8cd0757 Binary files /dev/null and b/collections/system_administrators/terraform/img/terraform_works.png differ diff --git a/collections/system_administrators/terraform/img/weblets_contracts.png b/collections/system_administrators/terraform/img/weblets_contracts.png new file mode 100644 index 0000000..cc2cdbf Binary files /dev/null and b/collections/system_administrators/terraform/img/weblets_contracts.png differ diff --git a/collections/system_administrators/terraform/resources/img/graphql_publicconf.png b/collections/system_administrators/terraform/resources/img/graphql_publicconf.png new file mode 100644 index 0000000..80fe4f5 Binary files /dev/null and b/collections/system_administrators/terraform/resources/img/graphql_publicconf.png differ diff --git a/collections/system_administrators/terraform/resources/terraform_caprover.md b/collections/system_administrators/terraform/resources/terraform_caprover.md new file mode 100644 index 
0000000..a0302a6 --- /dev/null +++ b/collections/system_administrators/terraform/resources/terraform_caprover.md @@ -0,0 +1,506 @@ +

Terraform CapRover

+ +

Table of Contents

+ +- [What is CapRover?](#what-is-caprover) +- [Features of CapRover](#features-of-caprover) +- [Prerequisites](#prerequisites) +- [How to Run CapRover on ThreeFold Grid 3](#how-to-run-caprover-on-threefold-grid-3) + - [Clone the Project Repo](#clone-the-project-repo) + - [A) leader node deployment/setup:](#a-leader-node-deploymentsetup) + - [Step 1: Deploy a Leader Node](#step-1-deploy-a-leader-node) + - [Step 2: Connect Root Domain](#step-2-connect-root-domain) + - [Note](#note) + - [Step 3: CapRover Root Domain Configurations](#step-3-caprover-root-domain-configurations) + - [Step 4: Access the Captain Dashboard](#step-4-access-the-captain-dashboard) + - [To allow cluster mode](#to-allow-cluster-mode) + - [B) Worker Node Deployment/setup:](#b-worker-node-deploymentsetup) +- [Implementations Details:](#implementations-details) + +*** + +## What is CapRover? + +[CapRover](https://caprover.com/) is an easy-to-use app/database deployment and web server manager that works for a variety of applications such as Node.js, Ruby, PHP, Postgres, and MongoDB. It runs fast and is very robust, as it uses Docker, Nginx, LetsEncrypt, and NetData under the hood behind its user-friendly interface. +Here's a link to CapRover's open source repository on [GitHub](https://github.com/caprover/caprover). + +## Features of CapRover + +- CLI for automation and scripting +- Web GUI for ease of access and convenience +- No lock-in: Remove CapRover and your apps keep working! +- Docker Swarm under the hood for containerization and clustering. +- Nginx (fully customizable template) under the hood for load-balancing. +- Let's Encrypt under the hood for free SSL (HTTPS). +- **One-Click Apps**: Deploying one-click apps is a matter of seconds! MongoDB, Parse, MySQL, WordPress, Postgres and many more. +- **Fully Customizable**: Optionally fully customizable nginx config allowing you to enable HTTP2, specific caching logic, custom SSL certs, etc. 
+- **Cluster Ready**: Attach more nodes and create a cluster in seconds! CapRover automatically configures nginx to load balance. +- **Increase Productivity**: Focus on your apps, not the bells and whistles; just run your apps. +- **Easy Deploy**: Many ways to deploy. You can upload your source from the dashboard, use the command line (`caprover deploy`), or use webhooks and build upon `git push`. + +## Prerequisites + +- Domain Name: + after installation, you will need to point a wildcard DNS entry to your CapRover IP address. + Note that you can use CapRover without a domain too, but you won't be able to set up HTTPS or add a `Self hosted Docker Registry`. +- Terraform installed to provision, adjust and tear down infrastructure using the tf configuration files provided here. +- Yggdrasil installed and enabled for end-to-end encrypted IPv6 networking. +- An account created on [Polkadot](https://polkadot.js.org/apps/?rpc=wss://tfchain.dev.threefold.io/ws#/accounts) with a twin ID, and your mnemonics saved. +- TFTs in your account balance (on the development network, transfer some test TFTs from the ALICE account). + +## How to Run CapRover on ThreeFold Grid 3 + +In this guide, we will use CapRover to set up your own private Platform as a Service (PaaS) on TFGrid 3 infrastructure. 
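Since the wildcard DNS entry mentioned in the prerequisites trips up many first-time CapRover users, here is what such a record looks like in a typical DNS zone, using the root domain `roverapps.grid.tf` from the example below and a placeholder IP address (adjust both to your own domain and your VM's public IPv4):

```
;; Illustrative wildcard A record (BIND zone file syntax, placeholder IP)
*.roverapps.grid.tf.   IN   A   203.0.113.10
```

With this in place, CapRover can serve subdomains such as `captain.roverapps.grid.tf` for the dashboard and one subdomain per deployed app.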
+ +### Clone the Project Repo + +```sh +git clone https://github.com/freeflowuniverse/freeflow_caprover.git +``` + +### A) leader node deployment/setup: + +#### Step 1: Deploy a Leader Node + +Create a leader CapRover node using Terraform. Here's an example: + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { + mnemonics = "" + network = "dev" # or test to use testnet +} + +resource "grid_network" "net0" { + nodes = [4] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d0" { + node = 4 + network_name = grid_network.net0.name + ip_range = lookup(grid_network.net0.nodes_ip_range, 4, "") + disks { + name = "data0" + # will hold images, volumes etc. modify the size according to your needs + size = 20 + description = "volume holding docker data" + } + disks { + name = "data1" + # will hold data related to caprover conf, nginx stuff, lets encrypt stuff. 
+ size = 5 + description = "volume holding captain data" + } + + vms { + name = "caprover" + flist = "https://hub.grid.tf/samehabouelsaad.3bot/abouelsaad-caprover-tf_10.0.1_v1.0.flist" + # modify the cores according to your needs + cpu = 4 + publicip = true + # modify the memory according to your needs + memory = 8192 + entrypoint = "/sbin/zinit init" + mounts { + disk_name = "data0" + mount_point = "/var/lib/docker" + } + mounts { + disk_name = "data1" + mount_point = "/captain" + } + env_vars = { + "PUBLIC_KEY" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9MI7fh4xEOOEKL7PvLvXmSeRWesToj6E26bbDASvlZnyzlSKFLuYRpnVjkr8JcuWKZP6RQn8+2aRs6Owyx7Tx+9kmEh7WI5fol0JNDn1D0gjp4XtGnqnON7d0d5oFI+EjQQwgCZwvg0PnV/2DYoH4GJ6KPCclPz4a6eXrblCLA2CHTzghDgyj2x5B4vB3rtoI/GAYYNqxB7REngOG6hct8vdtSndeY1sxuRoBnophf7MPHklRQ6EG2GxQVzAOsBgGHWSJPsXQkxbs8am0C9uEDL+BJuSyFbc/fSRKptU1UmS18kdEjRgGNoQD7D+Maxh1EbmudYqKW92TVgdxXWTQv1b1+3dG5+9g+hIWkbKZCBcfMe4nA5H7qerLvoFWLl6dKhayt1xx5mv8XhXCpEC22/XHxhRBHBaWwSSI+QPOCvs4cdrn4sQU+EXsy7+T7FIXPeWiC2jhFd6j8WIHAv6/rRPsiwV1dobzZOrCxTOnrqPB+756t7ANxuktsVlAZaM= sameh@sameh-inspiron-3576" + # SWM_NODE_MODE env var is required, should be "leader" or "worker" + # leader: will run sshd, containerd, dockerd as zinit services plus the caprover service in leader mode, which starts the caprover, lets encrypt and nginx containers. + # worker: will run sshd, containerd, dockerd as zinit services plus the caprover service in worker mode, which only joins the swarm cluster. Check the worker terraform file example.
+ "SWM_NODE_MODE" = "leader" + # CAPROVER_ROOT_DOMAIN is optional env var, by providing it you can access the captain dashboard after vm initilization by visiting http://captain.your-root-domain + # otherwise you will have to add the root domain manually from the captain dashboard by visiting http://{publicip}:3000 to access the dashboard + "CAPROVER_ROOT_DOMAIN" = "roverapps.grid.tf" + } + } +} + +output "wg_config" { + value = grid_network.net0.access_wg_config +} +output "ygg_ip" { + value = grid_deployment.d0.vms[0].ygg_ip +} +output "vm_ip" { + value = grid_deployment.d0.vms[0].ip +} +output "vm_public_ip" { + value = grid_deployment.d0.vms[0].computedip +} +``` + +```bash +cd freeflow_caprover/terraform/leader/ +vim main.tf +``` + +- In `provider` Block, add your `mnemonics` and specify the grid network to deploy on. +- In `resource` Block, update the disks size, memory size, and cores number to fit your needs or leave as it is for testing. +- In the `PUBLIC_KEY` env var value put your ssh public key . +- In the `CAPROVER_ROOT_DOMAIN` env var value put your root domain, this is optional and you can add it later from the dashboard put it will save you the extra step and allow you to access your dashboard using your domain name directly after the deployment. + +- save the file, and execute the following commands: + + ```bash + terraform init + terraform apply + ``` + +- wait till you see `apply complete`, and note the VM public ip in the final output. 
+ +- Verify the status of the VM: + + ```bash + ssh root@{public_ip_address} + zinit list + zinit log caprover + ``` + + You will see output like this: + + ```bash + root@caprover:~ # zinit list + sshd: Running + containerd: Running + dockerd: Running + sshd-init: Success + caprover: Running + root@caprover:~ # zinit log caprover + [+] caprover: CapRover Root Domain: newapps.grid.tf + [+] caprover: { + [+] caprover: "namespace": "captain", + [+] caprover: "customDomain": "newapps.grid.tf" + [+] caprover: } + [+] caprover: CapRover will be available at http://captain.newapps.grid.tf after installation + [-] caprover: docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?. + [-] caprover: See 'docker run --help'. + [-] caprover: Unable to find image 'caprover/caprover:latest' locally + [-] caprover: latest: Pulling from caprover/caprover + [-] caprover: af4c2580c6c3: Pulling fs layer + [-] caprover: 4ea40d27a2cf: Pulling fs layer + [-] caprover: 523d612e9cd2: Pulling fs layer + [-] caprover: 8fee6a1847b0: Pulling fs layer + [-] caprover: 60cce3519052: Pulling fs layer + [-] caprover: 4bae1011637c: Pulling fs layer + [-] caprover: ecf48b6c1f43: Pulling fs layer + [-] caprover: 856f69196742: Pulling fs layer + [-] caprover: e86a512b6f8c: Pulling fs layer + [-] caprover: cecbd06d956f: Pulling fs layer + [-] caprover: cdd679ff24b0: Pulling fs layer + [-] caprover: d60abbe06609: Pulling fs layer + [-] caprover: 0ac0240c1a59: Pulling fs layer + [-] caprover: 52d300ad83da: Pulling fs layer + [-] caprover: 8fee6a1847b0: Waiting + [-] caprover: e86a512b6f8c: Waiting + [-] caprover: 60cce3519052: Waiting + [-] caprover: cecbd06d956f: Waiting + [-] caprover: cdd679ff24b0: Waiting + [-] caprover: 4bae1011637c: Waiting + [-] caprover: d60abbe06609: Waiting + [-] caprover: 0ac0240c1a59: Waiting + [-] caprover: 52d300ad83da: Waiting + [-] caprover: 856f69196742: Waiting + [-] caprover: ecf48b6c1f43: Waiting + [-] caprover:
523d612e9cd2: Verifying Checksum + [-] caprover: 523d612e9cd2: Download complete + [-] caprover: 4ea40d27a2cf: Verifying Checksum + [-] caprover: 4ea40d27a2cf: Download complete + [-] caprover: af4c2580c6c3: Verifying Checksum + [-] caprover: af4c2580c6c3: Download complete + [-] caprover: 4bae1011637c: Verifying Checksum + [-] caprover: 4bae1011637c: Download complete + [-] caprover: 8fee6a1847b0: Verifying Checksum + [-] caprover: 8fee6a1847b0: Download complete + [-] caprover: 856f69196742: Verifying Checksum + [-] caprover: 856f69196742: Download complete + [-] caprover: ecf48b6c1f43: Verifying Checksum + [-] caprover: ecf48b6c1f43: Download complete + [-] caprover: e86a512b6f8c: Verifying Checksum + [-] caprover: e86a512b6f8c: Download complete + [-] caprover: cdd679ff24b0: Verifying Checksum + [-] caprover: cdd679ff24b0: Download complete + [-] caprover: d60abbe06609: Verifying Checksum + [-] caprover: d60abbe06609: Download complete + [-] caprover: cecbd06d956f: Download complete + [-] caprover: 0ac0240c1a59: Verifying Checksum + [-] caprover: 0ac0240c1a59: Download complete + [-] caprover: 60cce3519052: Verifying Checksum + [-] caprover: 60cce3519052: Download complete + [-] caprover: af4c2580c6c3: Pull complete + [-] caprover: 52d300ad83da: Download complete + [-] caprover: 4ea40d27a2cf: Pull complete + [-] caprover: 523d612e9cd2: Pull complete + [-] caprover: 8fee6a1847b0: Pull complete + [-] caprover: 60cce3519052: Pull complete + [-] caprover: 4bae1011637c: Pull complete + [-] caprover: ecf48b6c1f43: Pull complete + [-] caprover: 856f69196742: Pull complete + [-] caprover: e86a512b6f8c: Pull complete + [-] caprover: cecbd06d956f: Pull complete + [-] caprover: cdd679ff24b0: Pull complete + [-] caprover: d60abbe06609: Pull complete + [-] caprover: 0ac0240c1a59: Pull complete + [-] caprover: 52d300ad83da: Pull complete + [-] caprover: Digest: sha256:39c3f188a8f425775cfbcdc4125706cdf614cd38415244ccf967cd1a4e692b4f + [-] caprover: Status: Downloaded newer 
image for caprover/caprover:latest + [+] caprover: Captain Starting ... + [+] caprover: Overriding skipVerifyingDomains from /captain/data/config-override.json + [+] caprover: Installing Captain Service ... + [+] caprover: + [+] caprover: Installation of CapRover is starting... + [+] caprover: For troubleshooting, please see: https://caprover.com/docs/troubleshooting.html + [+] caprover: + [+] caprover: + [+] caprover: + [+] caprover: + [+] caprover: + [+] caprover: >>> Checking System Compatibility <<< + [+] caprover: Docker Version passed. + [+] caprover: Ubuntu detected. + [+] caprover: X86 CPU detected. + [+] caprover: Total RAM 8339 MB + [+] caprover: Pulling: nginx:1 + [+] caprover: Pulling: caprover/caprover-placeholder-app:latest + [+] caprover: Pulling: caprover/certbot-sleeping:v1.6.0 + [+] caprover: October 12th 2021, 12:49:26.301 pm Fresh installation! + [+] caprover: October 12th 2021, 12:49:26.309 pm Starting swarm at 185.206.122.32:2377 + [+] caprover: Swarm started: z06ymksbcoren9cl7g2xzw9so + [+] caprover: *** CapRover is initializing *** + [+] caprover: Please wait at least 60 seconds before trying to access CapRover. + [+] caprover: =================================== + [+] caprover: **** Installation is done! ***** + [+] caprover: CapRover is available at http://captain.newapps.grid.tf + [+] caprover: Default password is: captain42 + [+] caprover: =================================== + ``` + + Wait until you see `**** Installation is done! *****` in the caprover service log. + +#### Step 2: Connect Root Domain + +After the container runs, you will now need to connect your CapRover instance to a Root Domain. + +Let’s say you own example.com. You can set \*.something.example.com as an A-record in your DNS settings to point to the IP address of the server where you installed CapRover. To do this, go to the DNS settings on your domain provider's website, and set a wildcard A record entry.
+ +For example: Type: A, Name (or host): \*.something.example.com, IP (or Points to): `110.122.131.141` where this is the IP address of your CapRover machine. + +```yaml +TYPE: A record +HOST: *.something.example.com +POINTS TO: (IP Address of your server) +TTL: (doesn’t really matter) +``` + +To confirm, go to https://mxtoolbox.com/DNSLookup.aspx and enter `somethingrandom.something.example.com` and check if the IP address resolves to the IP you set in your DNS. + +##### Note + +`somethingrandom` is needed because you set a wildcard entry in your DNS by setting `*.something.example.com` as your host, not `something.example.com`. + +#### Step 3: CapRover Root Domain Configurations + +Skip this step if you provided your root domain in the Terraform configuration file. + +Once CapRover is initialized, you can visit `http://[IP_OF_YOUR_SERVER]:3000` in your browser and log in to CapRover using the default password `captain42`. You can change your password later. + +In the UI, enter your root domain and press the `Update Domain` button. + +#### Step 4: Access the Captain Dashboard + +Once you set your root domain as caprover.example.com, you will be redirected to captain.caprover.example.com. + +Now CapRover is ready and running in a single node.
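The URL layout above is fixed: the dashboard always lives on the `captain` subdomain of your root domain, and each deployed app gets its own subdomain. A tiny sketch (the domain and app name are illustrative):

```shell
# CapRover URL layout for a given root domain (domain and app name are illustrative).
root_domain="caprover.example.com"
echo "dashboard: http://captain.${root_domain}"
echo "app 'myapp': http://myapp.${root_domain}"
```

This is why the wildcard DNS entry from Step 2 is required: every new app resolves through the same `*` record.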
+ +##### To allow cluster mode + +- Enable HTTPS + + - Go to the CapRover `Dashboard` tab, then in `CapRover Root Domain Configurations` press `Enable HTTPS`; you will then be asked to enter your email address + +- Docker Registry Configuration + + - Go to the CapRover `Cluster` tab, then in the `Docker Registry Configuration` section, press `Self hosted Docker Registry` or add your `Remote Docker Registry` + +- Run the following command in the ssh session: + + ```bash + docker swarm join-token worker + ``` + + It will output something like this: + + ```bash + docker swarm join --token SWMTKN-1-0892ds1ney7pa0hymi3qwph7why1d9r3z6bvwtin51r14hcz3t-cjsephnu4f2ezfpdd6svnnbq7 185.206.122.33:2377 + ``` + +- To add a worker node to this swarm, you need: + + - The generated token `SWMTKN-1-0892ds1ney7pa0hymi3qwph7why1d9r3z6bvwtin51r14hcz3t-cjsephnu4f2ezfpdd6svnnbq7` + - The leader node public IP `185.206.122.33` + +This information is required in the next section to run CapRover in cluster mode. + +### B) Worker Node Deployment/Setup + +We show how to deploy a worker node by providing an example worker Terraform file. + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { + mnemonics = "" + network = "dev" # or test to use testnet +} + +resource "grid_network" "net2" { + nodes = [4] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" +} + +resource "grid_deployment" "d2" { + node = 4 + network_name = grid_network.net2.name + ip_range = lookup(grid_network.net2.nodes_ip_range, 4, "") + disks { + name = "data2" + # will hold images, volumes etc.
modify the size according to your needs + size = 20 + description = "volume holding docker data" + } + + vms { + name = "caprover" + flist = "https://hub.grid.tf/samehabouelsaad.3bot/abouelsaad-caprover-tf_10.0.1_v1.0.flist" + # modify the cores according to your needs + cpu = 2 + publicip = true + # modify the memory according to your needs + memory = 2048 + entrypoint = "/sbin/zinit init" + mounts { + disk_name = "data2" + mount_point = "/var/lib/docker" + } + env_vars = { + "PUBLIC_KEY" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9MI7fh4xEOOEKL7PvLvXmSeRWesToj6E26bbDASvlZnyzlSKFLuYRpnVjkr8JcuWKZP6RQn8+2aRs6Owyx7Tx+9kmEh7WI5fol0JNDn1D0gjp4XtGnqnON7d0d5oFI+EjQQwgCZwvg0PnV/2DYoH4GJ6KPCclPz4a6eXrblCLA2CHTzghDgyj2x5B4vB3rtoI/GAYYNqxB7REngOG6hct8vdtSndeY1sxuRoBnophf7MPHklRQ6EG2GxQVzAOsBgGHWSJPsXQkxbs8am0C9uEDL+BJuSyFbc/fSRKptU1UmS18kdEjRgGNoQD7D+Maxh1EbmudYqKW92TVgdxXWTQv1b1+3dG5+9g+hIWkbKZCBcfMe4nA5H7qerLvoFWLl6dKhayt1xx5mv8XhXCpEC22/XHxhRBHBaWwSSI+QPOCvs4cdrn4sQU+EXsy7+T7FIXPeWiC2jhFd6j8WIHAv6/rRPsiwV1dobzZOrCxTOnrqPB+756t7ANxuktsVlAZaM= sameh@sameh-inspiron-3576" + } + # SWM_NODE_MODE env var is required, should be "leader" or "worker" + # leader: check the leader terraform file example. + # worker: will run sshd, containerd, dockerd as zinit services plus the caprover service in worker mode, which only joins the swarm cluster.
+ + "SWM_NODE_MODE" = "worker" + # from the leader node (the one running caprover) run `docker swarm join-token worker` + # you must add the generated token to SWMTKN env var and the leader public ip to LEADER_PUBLIC_IP env var + + "SWMTKN"="SWMTKN-1-522cdsyhknmavpdok4wi86r1nihsnipioc9hzfw9dnsvaj5bed-8clrf4f2002f9wziabyxzz32d" + "LEADER_PUBLIC_IP" = "185.206.122.38" + + } +} + +output "wg_config" { + value = grid_network.net2.access_wg_config +} +output "ygg_ip" { + value = grid_deployment.d2.vms[0].ygg_ip +} +output "vm_ip" { + value = grid_deployment.d2.vms[0].ip +} +output "vm_public_ip" { + value = grid_deployment.d2.vms[0].computedip +} +``` + +```bash +cd freeflow_caprover/terraform/worker/ +vim main.tf +``` + +- In `provider` Block, add your `mnemonics` and specify the grid network to deploy on. +- In `resource` Block, update the disks size, memory size, and cores number to fit your needs or leave as it is for testing. +- In the `PUBLIC_KEY` env var value put your ssh public key. +- In the `SWMTKN` env var value put the previously generated token. +- In the `LEADER_PUBLIC_IP` env var value put the leader node public ip. + +- Save the file, and execute the following commands: + + ```bash + terraform init + terraform apply + ``` + +- Wait till you see `apply complete`, and note the VM public ip in the final output. + +- Verify the status of the VM. + + ```bash + ssh root@{public_ip_address} + zinit list + zinit log caprover + ``` + + You will see output like this: + + ```bash + root@caprover:~# zinit list + caprover: Success + dockerd: Running + containerd: Running + sshd: Running + sshd-init: Success + root@caprover:~# zinit log caprover + [-] caprover: Cannot connect to the Docker daemon at unix:///var/run/ docker.sock. Is the docker daemon running? + [+] caprover: This node joined a swarm as a worker. + ``` + +This means that your worker node is now ready and have joined the cluster successfully. 
+ +You can also verify this from the CapRover dashboard in the `Cluster` tab: check the `Nodes` section, where you should see the new worker node. + +Now CapRover is ready in cluster mode (more than one server). + +To run One-Click Apps, please follow this [tutorial](https://caprover.com/docs/one-click-apps.html). + +## Implementation Details + +- We use Ubuntu 18.04 to minimize production issues, as CapRover is tested on Ubuntu 18.04 and Docker 19.03. +- In a standard installation, CapRover has to be installed on a machine with a public IP address. +- Services are managed by the `zinit` service manager, which brings these processes back up in case of any failure: + + - sshd-init: runs once to add the user's public key to the VM's SSH authorized keys. + - containerd: maintains the container runtime needed by Docker. + - caprover: runs the caprover container (runs once). + - dockerd: runs the Docker daemon. + - sshd: maintains the SSH server daemon. + +- We adjust the OOM priority of the Docker daemon so that it is less likely to be killed than other processes on the system: + ```bash + echo -500 >/proc/self/oom_score_adj + ``` diff --git a/collections/system_administrators/terraform/resources/terraform_k8s.md b/collections/system_administrators/terraform/resources/terraform_k8s.md new file mode 100644 index 0000000..2b3f992 --- /dev/null +++ b/collections/system_administrators/terraform/resources/terraform_k8s.md @@ -0,0 +1,210 @@ +

Kubernetes Cluster

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Example](#example) +- [Grid Kubernetes Resource](#grid-kubernetes-resource) + - [Kubernetes Outputs](#kubernetes-outputs) +- [More Info](#more-info) +- [Demo Video](#demo-video) + +*** + +## Introduction + +While Kubernetes deployments can be quite difficult and can require lots of experience, we provide here a very simple way to provision a Kubernetes cluster on the TFGrid. + +## Example + +An example for deploying a Kubernetes cluster can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/k8s/main.tf). + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +resource "grid_scheduler" "sched" { + requests { + name = "master_node" + cru = 2 + sru = 512 + mru = 2048 + distinct = true + public_ips_count = 1 + } + requests { + name = "worker1_node" + cru = 2 + sru = 512 + mru = 2048 + distinct = true + } + requests { + name = "worker2_node" + cru = 2 + sru = 512 + mru = 2048 + distinct = true + } + requests { + name = "worker3_node" + cru = 2 + sru = 512 + mru = 2048 + distinct = true + } +} + +locals { + solution_type = "Kubernetes" + name = "myk8s" +} +resource "grid_network" "net1" { + solution_type = local.solution_type + name = local.name + nodes = distinct(values(grid_scheduler.sched.nodes)) + ip_range = "10.1.0.0/16" + description = "newer network" + add_wg_access = true +} + +resource "grid_kubernetes" "k8s1" { + solution_type = local.solution_type + name = local.name + network_name = grid_network.net1.name + token = "12345678910122" + ssh_key = "PUT YOUR SSH KEY HERE" + + master { + disk_size = 2 + node = grid_scheduler.sched.nodes["master_node"] + name = "mr" + cpu = 2 + publicip = true + memory = 2048 + } + workers { + disk_size = 2 + node = grid_scheduler.sched.nodes["worker1_node"] + name = "w0" + cpu = 2 + memory = 2048 + } + workers { + disk_size = 2 + node = 
grid_scheduler.sched.nodes["worker2_node"] + name = "w2" + cpu = 2 + memory = 2048 + } + workers { + disk_size = 2 + node = grid_scheduler.sched.nodes["worker3_node"] + name = "w3" + cpu = 2 + memory = 2048 + } +} + +output "computed_master_public_ip" { + value = grid_kubernetes.k8s1.master[0].computedip +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} +``` + +Everything looks similar to our first example, the global terraform section, the provider section and the network section. + +## Grid Kubernetes Resource + +```terraform +resource "grid_kubernetes" "k8s1" { + solution_type = local.solution_type + name = local.name + network_name = grid_network.net1.name + token = "12345678910122" + ssh_key = "PUT YOUR SSH KEY HERE" + + master { + disk_size = 2 + node = grid_scheduler.sched.nodes["master_node"] + name = "mr" + cpu = 2 + publicip = true + memory = 2048 + } + workers { + disk_size = 2 + node = grid_scheduler.sched.nodes["worker1_node"] + name = "w0" + cpu = 2 + memory = 2048 + } + workers { + disk_size = 2 + node = grid_scheduler.sched.nodes["worker2_node"] + name = "w2" + cpu = 2 + memory = 2048 + } + workers { + disk_size = 2 + node = grid_scheduler.sched.nodes["worker3_node"] + name = "w3" + cpu = 2 + memory = 2048 + } +} +``` + +It requires + +- Network name that would contain the cluster +- A cluster token to work as a key for other nodes to join the cluster +- SSH key to access the cluster VMs. 
+ +Then, we describe the master and worker nodes in terms of: + +- name within the deployment +- disk size +- node to deploy it on +- cpu +- memory +- whether or not this node needs a public IP + +### Kubernetes Outputs + +```terraform +output "master_public_ip" { + value = grid_kubernetes.k8s1.master[0].computedip +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} + +``` + +We are mainly interested in the master node's public IP (`computedip`) and the WireGuard configuration. + +## More Info + +A complete list of k8s resource parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/kubernetes.md). + +## Demo Video + +Here is a video showing how to deploy k8s with Terraform. + +
+ +
\ No newline at end of file diff --git a/collections/system_administrators/terraform/resources/terraform_k8s_demo.md b/collections/system_administrators/terraform/resources/terraform_k8s_demo.md new file mode 100644 index 0000000..2e49eb1 --- /dev/null +++ b/collections/system_administrators/terraform/resources/terraform_k8s_demo.md @@ -0,0 +1,7 @@ +

Demo Video Showing Deploying k8s with Terraform

+ +
+ +
+ + diff --git a/collections/system_administrators/terraform/resources/terraform_qsfs.md b/collections/system_administrators/terraform/resources/terraform_qsfs.md new file mode 100644 index 0000000..3dcffc7 --- /dev/null +++ b/collections/system_administrators/terraform/resources/terraform_qsfs.md @@ -0,0 +1,20 @@ +

Quantum Safe Filesystem (QSFS)

+ +

Table of Contents

+ +- [QSFS on Micro VM](./terraform_qsfs_on_microvm.md) +- [QSFS on Full VM](./terraform_qsfs_on_full_vm.md) + +*** + +## Introduction + +Quantum Storage is a FUSE filesystem that uses mechanisms of forward error correction (Reed-Solomon codes) to make sure that data (files and metadata) is stored in multiple remote locations, such that a number of locations can be lost without losing the data. + +The aim is to support unlimited local storage with remote backends for offload and backup which cannot be broken, even by a quantum computer. + +## QSFS Workload Parameters and Documentation + +A complete list of QSFS workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-qsfs). + +The [quantum-storage](https://github.com/threefoldtech/quantum-storage) repo contains a more thorough description of QSFS operation. \ No newline at end of file diff --git a/collections/system_administrators/terraform/resources/terraform_qsfs_on_full_vm.md b/collections/system_administrators/terraform/resources/terraform_qsfs_on_full_vm.md new file mode 100644 index 0000000..8fe3628 --- /dev/null +++ b/collections/system_administrators/terraform/resources/terraform_qsfs_on_full_vm.md @@ -0,0 +1,211 @@ +

QSFS on Full VM

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Create the Terraform Files](#create-the-terraform-files) +- [Full Example](#full-example) +- [Mounting the QSFS Disk](#mounting-the-qsfs-disk) +- [Debugging](#debugging) + +*** + +## Introduction + +This short ThreeFold Guide will teach you how to deploy a full VM with a QSFS disk on the TFGrid using Terraform. For this guide, we will be deploying an Ubuntu 22.04 based cloud-init image. + +The steps are very simple. You first need to create the Terraform files, and then deploy the full VM and the QSFS workloads. After the deployment is done, you will need to SSH into the full VM and manually mount the QSFS disk. + +The main goal of this guide is to show you all the necessary steps to deploy a full VM with a QSFS disk on the TFGrid using Terraform. + + + +## Prerequisites + +- [Install Terraform](../terraform_install.md) + +You need to properly download and install Terraform. Simply follow the documentation depending on your operating system (Linux, MacOS and Windows). + + + +## Create the Terraform Files + +Deploying a full VM is a bit different from deploying a micro VM. Let's first take a look at the differences: +- Full VMs use `cloud-init` images and, unlike micro VMs, need at least one disk attached to the VM to copy the image to; this disk serves as the root filesystem of the VM. +- The QSFS disk is based on `virtiofs`, and you can't use a QSFS disk as the first mount in a full VM; instead, you need a regular disk. +- Any extra disks/mounts will be available on the VM, but unlike mounts on micro VMs, extra disks won't be mounted automatically. You will need to mount them manually after the deployment. + +Let's modify the QSFS-on-micro-VM [example](./terraform_qsfs_on_microvm.md) to deploy QSFS on a full VM this time: + +- Inside the `grid_deployment` resource, we will need to add a disk for the VM root filesystem.
+ + ```terraform + disks { + name = "rootfs" + size = 10 + description = "root fs" + } + ``` + +- We also need to add an extra mount inside the `grid_deployment` resource in the `vms` block. It must be the first mounts block in the VM: + + ```terraform + mounts { + disk_name = "rootfs" + mount_point = "/" + } + ``` + +- We also need to specify the flist for our full VM. Inside the `grid_deployment` in the `vms` block, change the flist field to use this image: + - https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist + + + +## Full Example +The full example would be like this: + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +locals { + metas = ["meta1", "meta2", "meta3", "meta4"] + datas = ["data1", "data2", "data3", "data4"] +} + +resource "grid_network" "net1" { + nodes = [11] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" +} + +resource "grid_deployment" "d1" { + node = 11 + dynamic "zdbs" { + for_each = local.metas + content { + name = zdbs.value + description = "description" + password = "password" + size = 10 + mode = "user" + } + } + dynamic "zdbs" { + for_each = local.datas + content { + name = zdbs.value + description = "description" + password = "password" + size = 10 + mode = "seq" + } + } +} + +resource "grid_deployment" "qsfs" { + node = 11 + network_name = grid_network.net1.name + disks { + name = "rootfs" + size = 10 + description = "rootfs" + } + qsfs { + name = "qsfs" + description = "description6" + cache = 10240 # 10 GB + minimal_shards = 2 + expected_shards = 4 + redundant_groups = 0 + redundant_nodes = 0 + max_zdb_data_dir_size = 512 # 512 MB + encryption_algorithm = "AES" + encryption_key = "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af" + compression_algorithm = "snappy" + metadata { + type = "zdb" + prefix = "hamada" + encryption_algorithm = "AES" + encryption_key = 
"4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af" + dynamic "backends" { + for_each = [for zdb in grid_deployment.d1.zdbs : zdb if zdb.mode != "seq"] + content { + address = format("[%s]:%d", backends.value.ips[1], backends.value.port) + namespace = backends.value.namespace + password = backends.value.password + } + } + } + groups { + dynamic "backends" { + for_each = [for zdb in grid_deployment.d1.zdbs : zdb if zdb.mode == "seq"] + content { + address = format("[%s]:%d", backends.value.ips[1], backends.value.port) + namespace = backends.value.namespace + password = backends.value.password + } + } + } + } + vms { + name = "vm" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = 2 + memory = 1024 + entrypoint = "/sbin/zinit init" + planetary = true + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9MI7fh4xEOOEKL7PvLvXmSeRWesToj6E26bbDASvlZnyzlSKFLuYRpnVjkr8JcuWKZP6RQn8+2aRs6Owyx7Tx+9kmEh7WI5fol0JNDn1D0gjp4XtGnqnON7d0d5oFI+EjQQwgCZwvg0PnV/2DYoH4GJ6KPCclPz4a6eXrblCLA2CHTzghDgyj2x5B4vB3rtoI/GAYYNqxB7REngOG6hct8vdtSndeY1sxuRoBnophf7MPHklRQ6EG2GxQVzAOsBgGHWSJPsXQkxbs8am0C9uEDL+BJuSyFbc/fSRKptU1UmS18kdEjRgGNoQD7D+Maxh1EbmudYqKW92TVgdxXWTQv1b1+3dG5+9g+hIWkbKZCBcfMe4nA5H7qerLvoFWLl6dKhayt1xx5mv8XhXCpEC22/XHxhRBHBaWwSSI+QPOCvs4cdrn4sQU+EXsy7+T7FIXPeWiC2jhFd6j8WIHAv6/rRPsiwV1dobzZOrCxTOnrqPB+756t7ANxuktsVlAZaM= sameh@sameh-inspiron-3576" + } + mounts { + disk_name = "rootfs" + mount_point = "/" + } + mounts { + disk_name = "qsfs" + mount_point = "/qsfs" + } + } +} +output "metrics" { + value = grid_deployment.qsfs.qsfs[0].metrics_endpoint +} +output "ygg_ip" { + value = grid_deployment.qsfs.vms[0].ygg_ip +} +``` + +**note**: the `grid_deployment.qsfs.name` should be the same as the qsfs disk name in `grid_deployment.vms.mounts.disk_name`. + + + +## Mounting the QSFS Disk +After applying this terraform file, you will need to manually mount the disk. 
+SSH into the VM, create a mount point and mount the QSFS disk: + +```bash +mkdir /qsfs +mount -t virtiofs qsfs /qsfs +``` + + + +## Debugging + +During deployment, you might encounter the following error when using the mount command: + +`mount: /qsfs: wrong fs type, bad option, bad superblock on qsfs3, missing codepage or helper program, or other error.` + +- **Explanation**: Most likely you used a QSFS deployment/disk name that does not match the one from the QSFS deployment. +- **Solution**: Double check your Terraform file, and make sure the name you are using as the QSFS deployment/disk name matches the one you are trying to mount on your VM. diff --git a/collections/system_administrators/terraform/resources/terraform_qsfs_on_microvm.md b/collections/system_administrators/terraform/resources/terraform_qsfs_on_microvm.md new file mode 100644 index 0000000..9b3a609 --- /dev/null +++ b/collections/system_administrators/terraform/resources/terraform_qsfs_on_microvm.md @@ -0,0 +1,348 @@ +

QSFS on Micro VM with Terraform

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Find a 3Node](#find-a-3node) +- [Create the Terraform Files](#create-the-terraform-files) + - [Create the Files with the Provider](#create-the-files-with-the-provider) + - [Create the Files Manually](#create-the-files-manually) +- [Deploy the Micro VM with Terraform](#deploy-the-micro-vm-with-terraform) +- [SSH into the 3Node](#ssh-into-the-3node) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +In this ThreeFold Guide, we will learn how to deploy a Quantum Safe Filesystem (QSFS) workload with Terraform. The main template for this example can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/examples/resources/qsfs/main.tf). + + +## Prerequisites + +In this guide, we will be using Terraform to deploy a QSFS workload on a micro VM that runs on the TFGrid. Make sure to have the latest Terraform version. + +- [Install Terraform](../terraform_install.md) + + + + +## Find a 3Node + +We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a micro VM on the TFGrid. We are also looking for a 3Node with a public IPv4 address. + +We show here how to find a suitable 3Node using the ThreeFold Explorer.
+ +* Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net) +* Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID` +* For proper understanding, we give further information on some relevant columns: + * `ID` refers to the node ID + * `Free Public IPs` refers to available IPv4 public IP addresses + * `HRU` refers to HDD storage + * `SRU` refers to SSD storage + * `MRU` refers to RAM (memory) + * `CRU` refers to virtual cores (vcores) +* To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters: + * At the top left of the screen, in the `Filters` box, select the parameter(s) you want. + * For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes. + * `Free SRU (GB)`: 15 + * `Free MRU (GB)`: 1 + * `Total CRU (Cores)`: 1 + * `Free Public IP`: 2 + * Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses. + +Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files. + + + +## Create the Terraform Files + +We present two different methods to create the Terraform files. In the first method, we will create the Terraform files using the [TFGrid Terraform Provider](https://github.com/threefoldtech/terraform-provider-grid). In the second method, we will create the Terraform files manually. Feel free to choose the method that suits you best. + +### Create the Files with the Provider + +Creating the Terraform files is very straightforward. We want to clone the repository `terraform-provider-grid` locally and run some simple commands to properly set up and start the deployment.
+ +* Clone the repository `terraform-provider-grid` + * ``` + git clone https://github.com/threefoldtech/terraform-provider-grid + ``` +* Go to the subdirectory containing the examples + * ``` + cd terraform-provider-grid/examples/resources/qsfs + ``` +* Set your own mnemonics (replace `mnemonics words` with your own mnemonics) + * ``` + export MNEMONICS="mnemonics words" + ``` +* Set the network (replace `network` with the desired network, e.g. `dev`, `qa`, `test` or `main`) + * ``` + export NETWORK="network" + ``` +* Initialize the Terraform deployment + * ``` + terraform init + ``` +* Apply the Terraform deployment + * ``` + terraform apply + ``` +* At any moment, you can destroy the deployment with the following line + * ``` + terraform destroy + ``` + +When using this method, you might need to change some parameters within the `main.tf` depending on your specific deployment. + +### Create the Files Manually + +For this method, we use two files to deploy with Terraform. The first file contains the environment variables (**credentials.auto.tfvars**) and the second file contains the parameters to deploy our workloads (**main.tf**). To facilitate the deployment, only the environment variables file needs to be adjusted. The **main.tf** file references these environment variables (e.g. `var.size` for the disk size), so you only need to change the file **credentials.auto.tfvars**, not **main.tf**. + +* Open the terminal and go to the home directory (optional) + * ``` + cd ~ + ``` + +* Create the folder `terraform` and the subfolder `deployment-qsfs-microvm`: + * ``` + mkdir -p terraform && cd $_ + ``` + * ``` + mkdir deployment-qsfs-microvm && cd $_ + ``` +* Create the `main.tf` file: + * ``` + nano main.tf + ``` + +* Copy the `main.tf` content and save the file.
+ + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +# Variables + +variable "mnemonics" { + type = string +} + +variable "SSH_KEY" { + type = string +} + +variable "network" { + type = string +} + +variable "tfnodeid1" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +variable "minimal_shards" { + type = string +} + +variable "expected_shards" { + type = string +} + +provider "grid" { + mnemonics = var.mnemonics + network = var.network +} + +locals { + metas = ["meta1", "meta2", "meta3", "meta4"] + datas = ["data1", "data2", "data3", "data4"] +} + +resource "grid_network" "net1" { + nodes = [var.tfnodeid1] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" +} + +resource "grid_deployment" "d1" { + node = var.tfnodeid1 + dynamic "zdbs" { + for_each = local.metas + content { + name = zdbs.value + description = "description" + password = "password" + size = var.size + mode = "user" + } + } + dynamic "zdbs" { + for_each = local.datas + content { + name = zdbs.value + description = "description" + password = "password" + size = var.size + mode = "seq" + } + } +} + +resource "grid_deployment" "qsfs" { + node = var.tfnodeid1 + network_name = grid_network.net1.name + qsfs { + name = "qsfs" + description = "description6" + cache = 10240 # 10 GB + minimal_shards = var.minimal_shards + expected_shards = var.expected_shards + redundant_groups = 0 + redundant_nodes = 0 + max_zdb_data_dir_size = 512 # 512 MB + encryption_algorithm = "AES" + encryption_key = "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af" + compression_algorithm = "snappy" + metadata { + type = "zdb" + prefix = "hamada" + encryption_algorithm = "AES" + encryption_key = "4d778ba3216e4da4231540c92a55f06157cabba802f9b68fb0f78375d2e825af" + dynamic "backends" { + for_each = [for zdb in grid_deployment.d1.zdbs : zdb if 
zdb.mode != "seq"] + content { + address = format("[%s]:%d", backends.value.ips[1], backends.value.port) + namespace = backends.value.namespace + password = backends.value.password + } + } + } + groups { + dynamic "backends" { + for_each = [for zdb in grid_deployment.d1.zdbs : zdb if zdb.mode == "seq"] + content { + address = format("[%s]:%d", backends.value.ips[1], backends.value.port) + namespace = backends.value.namespace + password = backends.value.password + } + } + } + } + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = var.cpu + memory = var.memory + entrypoint = "/sbin/zinit init" + planetary = true + env_vars = { + SSH_KEY = var.SSH_KEY + } + mounts { + disk_name = "qsfs" + mount_point = "/qsfs" + } + } +} +output "metrics" { + value = grid_deployment.qsfs.qsfs[0].metrics_endpoint +} +output "ygg_ip" { + value = grid_deployment.qsfs.vms[0].ygg_ip +} +``` + +Note that we named the VM as **vm1**. + +* Create the `credentials.auto.tfvars` file: + * ``` + nano credentials.auto.tfvars + ``` + +* Copy the `credentials.auto.tfvars` content and save the file. + * ```terraform + # Network + network = "main" + + # Credentials + mnemonics = "..." + SSH_KEY = "..." + + # Node Parameters + tfnodeid1 = "..." + size = "15" + cpu = "1" + memory = "512" + + # QSFS Parameters + minimal_shards = "2" + expected_shards = "4" + ``` + +Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the 3Node you want to deploy on. Simply replace the three dots with the proper content. If you want to deploy on the Test net, you can replace **main** with **test**. + +Set the parameters for your VMs as you wish. For this example, we use the minimum parameters. + +In the QSFS Parameters section, you can decide over how many ZDB backends your data will be sharded (**expected_shards**) and the minimum number of backends needed to recover the whole of your data (**minimal_shards**).
For example, a 16 minimum, 20 expected configuration will disperse your data over 20 backends, and at any time the deployment will need only 16 of them to recover the whole of your data. This gives resilience and redundancy to your storage. A 2 minimum, 4 expected configuration is given here for the main template. + + + +## Deploy the Micro VM with Terraform + +We now deploy the QSFS workload with Terraform. Make sure that you are in the correct folder `terraform/deployment-qsfs-microvm` containing the main and variables files. + +* Initialize Terraform by writing the following in the terminal: + * ``` + terraform init + ``` +* Apply the Terraform deployment: + * ``` + terraform apply + ``` + * Terraform will then present you the actions it will perform. Write `yes` to confirm the deployment. + +Note that, at any moment, if you want to see the information on your Terraform deployments, write the following: + * ``` + terraform show + ``` + + + +## SSH into the 3Node + +You can now SSH into the 3Node over the Planetary Network. + +To SSH over the Planetary Network, write the following: + +``` +ssh root@planetary_IP +``` + +Note that the IP address should be the value of the parameter **ygg_ip** from the Terraform outputs. + +You now have SSH access to the VM over the Planetary Network. + + + +## Questions and Feedback + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/system_administrators/terraform/resources/terraform_resources_readme.md b/collections/system_administrators/terraform/resources/terraform_resources_readme.md new file mode 100644 index 0000000..1b6405b --- /dev/null +++ b/collections/system_administrators/terraform/resources/terraform_resources_readme.md @@ -0,0 +1,14 @@ +

+<h1> Terraform Resources </h1> + +<h2>Table of Contents</h2>
+ +- [Using Scheduler](./terraform_scheduler.md) +- [Virtual Machine](./terraform_vm.md) +- [Web Gateway](./terraform_vm_gateway.md) +- [Kubernetes Cluster](./terraform_k8s.md) +- [ZDB](./terraform_zdb.md) +- [Zlogs](./terraform_zlogs.md) +- [Quantum Safe Filesystem](./terraform_qsfs.md) + - [QSFS on Micro VM](./terraform_qsfs_on_microvm.md) + - [QSFS on Full VM](./terraform_qsfs_on_full_vm.md) +- [CapRover](./terraform_caprover.md) diff --git a/collections/system_administrators/terraform/resources/terraform_scheduler.md b/collections/system_administrators/terraform/resources/terraform_scheduler.md new file mode 100644 index 0000000..dc51676 --- /dev/null +++ b/collections/system_administrators/terraform/resources/terraform_scheduler.md @@ -0,0 +1,153 @@ +

+<h1> Scheduler Resource </h1> + +<h2>Table of Contents</h2>
+ +- [Introduction](#introduction) +- [How the Scheduler Works](#how-the-scheduler-works) +- [Quick Example](#quick-example) + +*** + + +## Introduction + +Using the TFGrid scheduler enables users to automatically get the nodes that match their criteria. We present here some basic information on this resource. + + + +## How the Scheduler Works + +To better understand the scheduler, we summarize the main process: + +- First, if `farm_id` is specified, the scheduler will check whether this farm has the Farmerbot enabled + - If so, it will try to find a suitable node using the Farmerbot. +- If the Farmerbot is not enabled, the scheduler will use the Grid Proxy to find a suitable node. + + + +## Quick Example + +Let's take a look at the following example: + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + version = "1.8.1-dev" + } + } +} +provider "grid" { +} + +locals { + name = "testvm" +} + +resource "grid_scheduler" "sched" { + requests { + farm_id = 53 + name = "node1" + cru = 3 + sru = 1024 + mru = 2048 + node_exclude = [33] # exclude node 33 from your search + public_ips_count = 0 # this deployment needs 0 public ips + public_config = false # this node does not need to have public config + } +} + +resource "grid_network" "net1" { + name = local.name + nodes = [grid_scheduler.sched.nodes["node1"]] + ip_range = "10.1.0.0/16" + description = "newer network" +} +resource "grid_deployment" "d1" { + name = local.name + node = grid_scheduler.sched.nodes["node1"] + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 2 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + planetary = true + } + vms { + name = "anothervm" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + 
planetary = true + } +} +output "vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "vm1_ygg_ip" { + value = grid_deployment.d1.vms[0].ygg_ip +} + +output "vm2_ip" { + value = grid_deployment.d1.vms[1].ip +} +output "vm2_ygg_ip" { + value = grid_deployment.d1.vms[1].ygg_ip +} + +``` + +From the example above, we take a closer look at the following section: + +``` +resource "grid_scheduler" "sched" { + requests { + name = "node1" + cru = 3 + sru = 1024 + mru = 2048 + node_exclude = [33] # exclude node 33 from your search + public_ips_count = 0 # this deployment needs 0 public ips + public_config = false # this node does not need to have public config + } +} +``` + +In this case, the user specifies the requirements that the node must match for the deployments. + +Later on, the user can reference the result of the scheduler, which contains the matched `nodes`, in the deployments: + +``` +resource "grid_network" "net1" { + name = local.name + nodes = [grid_scheduler.sched.nodes["node1"]] + ... +} + +``` + +and + +``` +resource "grid_deployment" "d1" { + name = local.name + node = grid_scheduler.sched.nodes["node1"] + network_name = grid_network.net1.name + vms { + name = "vm1" + ... + } + ... +} +``` + diff --git a/collections/system_administrators/terraform/resources/terraform_vm.md b/collections/system_administrators/terraform/resources/terraform_vm.md new file mode 100644 index 0000000..b349c5e --- /dev/null +++ b/collections/system_administrators/terraform/resources/terraform_vm.md @@ -0,0 +1,282 @@ +

+<h1> VM Deployment </h1> + +<h2>Table of Contents</h2>
+ +- [Introduction](#introduction) +- [Template](#template) +- [Using Scheduler](#using-scheduler) +- [Using Grid Explorer](#using-grid-explorer) +- [Describing the overlay network for the project](#describing-the-overlay-network-for-the-project) +- [Describing the deployment](#describing-the-deployment) +- [Which flists to use](#which-flists-to-use) +- [Remark: Multiple VMs](#remark-multiple-vms) +- [Reference](#reference) + +*** + +## Introduction + +The following provides the basic information to deploy a VM with Terraform on the TFGrid. + +## Template + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + version = "1.8.1-dev" + } + } +} + +provider "grid" { + mnemonics = "FROM THE CREATE TWIN STEP" + network = "dev" # or test to use testnet +} + +locals { + name = "testvm" +} + +resource "grid_scheduler" "sched" { + requests { + name = "node1" + cru = 3 + sru = 1024 + mru = 2048 + node_exclude = [33] # exclude node 33 from your search + public_ips_count = 0 # this deployment needs 0 public ips + public_config = false # this node does not need to have public config + } +} + +resource "grid_network" "net1" { + name = local.name + nodes = [grid_scheduler.sched.nodes["node1"]] + ip_range = "10.1.0.0/16" + description = "newer network" +} +resource "grid_deployment" "d1" { + name = local.name + node = grid_scheduler.sched.nodes["node1"] + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 2 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + planetary = true + } + vms { + name = "anothervm" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + planetary = true + } +} +output "vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "vm1_ygg_ip" { + value
= grid_deployment.d1.vms[0].ygg_ip +} + +output "vm2_ip" { + value = grid_deployment.d1.vms[1].ip +} +output "vm2_ygg_ip" { + value = grid_deployment.d1.vms[1].ygg_ip +} + +``` + +## Using Scheduler + +- If the user decides to use the [scheduler](terraform_scheduler.md) to find a suitable node, then the node returned from the scheduler is used, as in the example above + +## Using Grid Explorer + +- If not, the user can still specify a node directly, using the Grid Explorer to find a node that matches the requirements + +## Describing the overlay network for the project + +```terraform +resource "grid_network" "net1" { + nodes = [grid_scheduler.sched.nodes["node1"]] + ip_range = "10.1.0.0/16" + name = "network" + description = "some network" + add_wg_access = true +} +``` + +We tell Terraform we will have a network on one node (the node ID returned from the scheduler), using the IP range `10.1.0.0/16`, and we add WireGuard access for this network + +## Describing the deployment + +```terraform +resource "grid_deployment" "d1" { + name = local.name + node = grid_scheduler.sched.nodes["node1"] + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 2 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + planetary = true + } + vms { + name = "anothervm" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + planetary = true + } + +} +``` + +It's a bit long for sure, but let's try to dissect it a bit + +```terraform + node = grid_scheduler.sched.nodes["node1"] + network_name = grid_network.net1.name + ip_range = lookup(grid_network.net1.nodes_ip_range, 2, "") +``` + +- `node = grid_scheduler.sched.nodes["node1"]` means this deployment will happen on the node returned from the scheduler.
Otherwise, the user can specify the node directly, e.g. `node = 2`; in this case, the choice of the node is completely up to the user, who needs to do the capacity planning. Check the [Node Finder](../../../dashboard/deploy/node_finder.md) to know which nodes fit your deployment criteria. +- `network_name` which network to deploy our project on, and here we choose the `name` of network `net1` +- `ip_range` here we [lookup](https://www.terraform.io/docs/language/functions/lookup.html) the IP range of node `2` and initially load it with `""` + +> Advanced note: Direct map access fails during the planning if the key doesn't exist, which happens in cases like adding a node to the network and a new deployment on this node. So `lookup` is used to provide a default empty value that passes the planning validation; the value is validated anyway inside the plugin. + +## Which flists to use + +See the [list of flists](../../../developers/flist/grid3_supported_flists.md). + +## Remark: Multiple VMs + +In Terraform, you can define repeated items of a list like the following: + +``` +listname { + +} +listname { + +} +``` + +So to add a VM: + +```terraform + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + publicip = true + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY ="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCeq1MFCQOv3OCLO1HxdQl8V0CxAwt5AzdsNOL91wmHiG9ocgnq2yipv7qz+uCS0AdyOSzB9umyLcOZl2apnuyzSOd+2k6Cj9ipkgVx4nx4q5W1xt4MWIwKPfbfBA9gDMVpaGYpT6ZEv2ykFPnjG0obXzIjAaOsRthawuEF8bPZku1yi83SDtpU7I0pLOl3oifuwPpXTAVkK6GabSfbCJQWBDSYXXM20eRcAhIMmt79zo78FNItHmWpfPxPTWlYW02f7vVxTN/LUeRFoaNXXY+cuPxmcmXp912kW0vhK9IvWXqGAEuSycUOwync/yj+8f7dRU7upFGqd6bXUh67iMl7 ahmed@ahmedheaven" + } + + } +``` + +- We give it a name within our deployment `vm1` +- `flist` is used to define the flist to run within the VM.
Check the [list of flists](../../../developers/flist/grid3_supported_flists.md) +- `cpu` and `memory` are used to define the CPU and memory +- `publicip` is used to define whether the VM requires a public IP or not +- `entrypoint` is used to define the entrypoint, which in most cases is `/sbin/zinit init`, but in the case of full VM flists it can be specific to each flist +- `env_vars` are used to define the environment variables; in this example, we define `SSH_KEY` to authorize SSH access to the machine + Here we say we will have this deployment on the chosen node, using the overlay network defined before (`grid_network.net1.name`) and the IP range allocated to that specific node + +The file describes only the desired state, which is a deployment of two VMs and their specifications in terms of CPU and memory, and some environment variables, e.g. an SSH key to SSH into the machine + +## Reference + +A complete list of VM workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-vms). The following is a complete example:
+ +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +resource "grid_network" "net1" { + nodes = [8] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" + add_wg_access = true +} +resource "grid_deployment" "d1" { + node = 8 + network_name = grid_network.net1.name + ip_range = lookup(grid_network.net1.nodes_ip_range, 8, "") + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 2 + publicip = true + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52" + } + planetary = true + } + vms { + name = "anothervm" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtCuUUCZGLZ4NoihAiUK8K0kSoTR1WgIaLQKqMdQ/99eocMLqJgQMRIp8lueFG7SpcgXVRzln8KNKZX1Hm8lcrXICr3dnTW/0bpEnF4QOGLYZ/qTLF5WmoCgKyJ6WO96GjWJBsZPads+RD0WeiijV7jj29lALsMAI8CuOH0pcYUwWsRX/I1z2goMPNRY+PBjknMYFXEqizfUXqUnpzF3w/bKe8f3gcrmOm/Dxh1nHceJDW52TJL/sPcl6oWnHZ3fY4meTiAS5NZglyBF5oKD463GJnMt/rQ1gDNl8E4jSJUArN7GBJntTYxFoFo6zxB1OsSPr/7zLfPG420+9saBu9yN1O9DlSwn1ZX+Jg0k7VFbUpKObaCKRmkKfLiXJdxkKFH/+qBoCCnM5hfYxAKAyQ3YCCP/j9wJMBkbvE1QJMuuoeptNIvSQW6WgwBfKIK0shsmhK2TDdk0AHEnzxPSkVGV92jygSLeZ4ur/MZqWDx/b+gACj65M3Y7tzSpsR76M= omar@omar-Predator-PT315-52" + } + 
} +} +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "node1_zmachine2_ip" { + value = grid_deployment.d1.vms[1].ip +} +output "public_ip" { + value = grid_deployment.d1.vms[0].computedip +} + +output "ygg_ip" { + value = grid_deployment.d1.vms[0].ygg_ip +} +``` diff --git a/collections/system_administrators/terraform/resources/terraform_vm_gateway.md b/collections/system_administrators/terraform/resources/terraform_vm_gateway.md new file mode 100644 index 0000000..d7feb5d --- /dev/null +++ b/collections/system_administrators/terraform/resources/terraform_vm_gateway.md @@ -0,0 +1,172 @@ +

+<h1> Terraform Web Gateway With VM </h1> + +<h2>Table of Contents</h2>
+ +- [Introduction](#introduction) +- [Expose with Prefix](#expose-with-prefix) +- [Expose with Full Domain](#expose-with-full-domain) +- [Using Gateway Name on Private Networks (WireGuard)](#using-gateway-name-on-private-networks-wireguard) + +*** + +## Introduction + +In this section, we provide the basic information for a VM web gateway using Terraform on the TFGrid. + +## Expose with Prefix + +A complete list of gateway name workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/name_proxy.md). + +``` + terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +# this data source is used to break circular dependency in cases similar to the following: +# vm: needs to know the domain in its init script +# gateway_name: needs the ip of the vm to use as backend. +# - the fqdn can be computed from grid_gateway_domain for the vm +# - the backend can reference the vm ip directly +data "grid_gateway_domain" "domain" { + node = 7 + name = "ashraf" +} +resource "grid_network" "net1" { + nodes = [8] + ip_range = "10.1.0.0/24" + name = "network" + description = "newer network" + add_wg_access = true +} +resource "grid_deployment" "d1" { + node = 8 + network_name = grid_network.net1.name + ip_range = lookup(grid_network.net1.nodes_ip_range, 8, "") + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/strm-helloworld-http-latest.flist" + cpu = 2 + publicip = true + memory = 1024 + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTwULSsUubOq3VPWL6cdrDvexDmjfznGydFPyaNcn7gAL9lRxwFbCDPMj7MbhNSpxxHV2+/iJPQOTVJu4oc1N7bPP3gBCnF51rPrhTpGCt5pBbTzeyNweanhedkKDsCO2mIEh/92Od5Hg512dX4j7Zw6ipRWYSaepapfyoRnNSriW/s3DH/uewezVtL5EuypMdfNngV/u2KZYWoeiwhrY/yEUykQVUwDysW/xUJNP5o+KSTAvNSJatr3FbuCFuCjBSvageOLHePTeUwu6qjqe+Xs4piF1ByO/6cOJ8bt5Vcx0bAtI8/MPApplUU/JWevsPNApvnA/ntffI+u8DCwgP ashraf@thinkpad" + } + planetary = true + } +} 
+resource "grid_name_proxy" "p1" { + node = 7 + name = "ashraf" + backends = [format("http://%s", split("/", grid_deployment.d1.vms[0].computedip)[0])] + tls_passthrough = false +} +output "fqdn" { + value = data.grid_gateway_domain.domain.fqdn +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "public_ip" { + value = split("/",grid_deployment.d1.vms[0].computedip)[0] +} + +output "ygg_ip" { + value = grid_deployment.d1.vms[0].ygg_ip +} + +``` + +Please note that to use `grid_name_proxy` you should choose a node that has a public config with a domain, like node 7 in the following example. +![ ](./img/graphql_publicconf.png) + +Here + +- we created a grid domain resource `ashraf` to be deployed on gateway node `7` to end up with a domain `ashraf.ghent01.devnet.grid.tf` +- we created a proxy for the gateway to send the traffic coming to `ashraf.ghent01.devnet.grid.tf` to the VM as a backend. We set `tls_passthrough = false` to let the gateway terminate the traffic; if you replace it with `true`, your backend service needs to be able to do the TLS termination itself + +## Expose with Full Domain + +A complete list of gateway fqdn workload parameters can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/fqdn_proxy.md). + +It is similar to the above example; the only difference is that you need to create an `A record` with your DNS provider, pointing `remote.omar.grid.tf` to the IPv4 address of gateway node `7`. + +``` + +resource "grid_fqdn_proxy" "p1" { + node = 7 + name = "workloadname" + fqdn = "remote.omar.grid.tf" + backends = [format("http://%s", split("/", grid_deployment.d1.vms[0].computedip)[0])] + tls_passthrough = true +} + +output "fqdn" { + value = grid_fqdn_proxy.p1.fqdn +} +``` + +## Using Gateway Name on Private Networks (WireGuard) + +It is possible to create a VM with a private IP (WireGuard) and use it as a backend for a gateway contract.
This is done as follows: + +- Create a gateway domain data source. This data source will construct the full domain so we can use it afterwards + +``` +data "grid_gateway_domain" "domain" { + node = grid_scheduler.sched.nodes["node1"] + name = "examp123456" +} +``` + +- Create a network resource + +``` +resource "grid_network" "net1" { + nodes = [grid_scheduler.sched.nodes["node1"]] + ip_range = "10.1.0.0/16" + name = "mynet" + description = "newer network" +} +``` + +- Create a VM to host your service + +``` +resource "grid_deployment" "d1" { + name = "vm1" + node = grid_scheduler.sched.nodes["node1"] + network_name = grid_network.net1.name + vms { + ... + } +} +``` + +- Create a `grid_name_proxy` resource using the network created above and the WireGuard IP of the VM that hosts the service. Also make sure to set the correct port for your backend + +``` +resource "grid_name_proxy" "p1" { + node = grid_scheduler.sched.nodes["node1"] + name = "examp123456" + backends = [format("http://%s:9000", grid_deployment.d1.vms[0].ip)] + network = grid_network.net1.name + tls_passthrough = false +} +``` + +- To see the full domain created by the data source above, you can output it via + +``` +output "fqdn" { + value = data.grid_gateway_domain.domain.fqdn +} +``` + +- Now visit the domain; you should be able to reach your service hosted on the VM diff --git a/collections/system_administrators/terraform/resources/terraform_zdb.md b/collections/system_administrators/terraform/resources/terraform_zdb.md new file mode 100644 index 0000000..8efa916 --- /dev/null +++ b/collections/system_administrators/terraform/resources/terraform_zdb.md @@ -0,0 +1,64 @@ +

+<h1> Deploying a ZDB with Terraform </h1> + +<h2>Table of Contents</h2>
+ +- [Introduction](#introduction) +- [Example](#example) + +*** + +## Introduction + +We provide a basic template for ZDB deployment with Terraform on the TFGrid. + +A brief description of zdb fields can be found [here](https://github.com/threefoldtech/terraform-provider-grid/blob/development/docs/resources/deployment.md#nested-schema-for-zdbs). + +A more thorough description of zdb operation can be found in its parent [repo](https://github.com/threefoldtech/0-db). + +## Example + +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +resource "grid_deployment" "d1" { + node = 4 + + zdbs{ + name = "zdb1" + size = 10 + description = "zdb1 description" + password = "zdbpasswd1" + mode = "user" + } + zdbs{ + name = "zdb2" + size = 2 + description = "zdb2 description" + password = "zdbpasswd2" + mode = "seq" + } +} + +output "deployment_id" { + value = grid_deployment.d1.id +} + +output "zdb1_endpoint" { + value = format("[%s]:%d", grid_deployment.d1.zdbs[0].ips[0], grid_deployment.d1.zdbs[0].port) +} + +output "zdb1_namespace" { + value = grid_deployment.d1.zdbs[0].namespace +} +``` + + diff --git a/collections/system_administrators/terraform/resources/terraform_zlogs.md b/collections/system_administrators/terraform/resources/terraform_zlogs.md new file mode 100644 index 0000000..afe344b --- /dev/null +++ b/collections/system_administrators/terraform/resources/terraform_zlogs.md @@ -0,0 +1,119 @@ +

+<h1> Zlogs </h1> + +<h2>Table of Contents</h2>
+ +- [Introduction](#introduction) +- [Using Zlogs](#using-zlogs) + - [Creating a server](#creating-a-server) + - [Streaming logs](#streaming-logs) + +--- + +## Introduction + +Zlogs is a utility that allows you to stream VM logs to a remote location. You can find the full description [here](https://github.com/threefoldtech/zos/tree/main/docs/manual/zlogs). + +## Using Zlogs + +In Terraform, a VM has a `zlogs` field, which should contain a list of target URLs to stream logs to. + +Valid protocols are: `ws`, `wss`, and `redis`. + +For example, to deploy two VMs named "vm1" and "vm2", with vm1 streaming logs to vm2, this is what main.tf looks like: +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +provider "grid" { +} + +resource "grid_network" "net1" { + nodes = [2, 4] + ip_range = "10.1.0.0/16" + name = "network" + description = "some network description" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + node = 2 + network_name = grid_network.net1.name + ip_range = lookup(grid_network.net1.nodes_ip_range, 2, "") + vms { + name = "vm1" #streaming logs + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + entrypoint = "/sbin/zinit init" + cpu = 2 + memory = 1024 + env_vars = { + SSH_KEY = "PUT YOUR SSH KEY HERE" + } + zlogs = tolist([ + format("ws://%s:5000", replace(grid_deployment.d2.vms[0].computedip, "//.*/", "")), + ]) + } +} + +resource "grid_deployment" "d2" { + node = 4 + network_name = grid_network.net1.name + ip_range = lookup(grid_network.net1.nodes_ip_range, 4, "") + vms { + name = "vm2" #receiving logs + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 2 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = "PUT YOUR SSH KEY HERE" + } + publicip = true + } +} +``` + +At this point, two VMs are deployed, and vm1 is ready to stream logs to vm2.
+But what is missing here is that vm1 is not actually producing any logs, and vm2 is not listening for incoming messages. + +### Creating a server + +- First, we will create a server on vm2. This should be a websocket server listening on port 5000, as per our zlogs definition in main.tf ```ws://%s:5000```. + +- A simple Python websocket server looks like this: +``` +import asyncio +import websockets +import gzip + + +async def echo(websocket): + async for message in websocket: + data = gzip.decompress(message).decode('utf-8') + f = open("output.txt", "a") + f.write(data) + f.close() + +async def main(): + async with websockets.serve(echo, "0.0.0.0", 5000, ping_interval=None): + await asyncio.Future() + +asyncio.run(main()) +``` +- Note that incoming messages are decompressed, since zlogs compresses all messages using gzip. +- After a message is decompressed, it is appended to `output.txt`. + +### Streaming logs + +- Zlogs streams anything written to stdout of the zinit process on a VM. +- So, simply running ```echo "to be streamed" 1>/proc/1/fd/1``` on vm1 should successfully stream this message to vm2, and we should be able to see it in `output.txt`.
+- Also, if we want to stream a service's logs, a service definition file should be created in ```/etc/zinit/example.yaml``` on vm1 and should look like this: +``` +exec: sh -c "echo 'to be streamed'" +log: stdout +``` + diff --git a/collections/system_administrators/terraform/sidebar.md new file mode 100644 index 0000000..19759ad --- /dev/null +++ b/collections/system_administrators/terraform/sidebar.md @@ -0,0 +1,30 @@ +- [**Home**](@threefold:threefold_home) +- [**Manual 3 Home**](@manual3_home_new) + +--- + +**Terraform** + +- [Read Me First](@terraform_readme) +- [Install](@terraform_install) +- [Basics](@terraform_basics) +- [Tutorial](@terraform_get_started) +- [Delete](@terraform_delete) + +--- + +**Resources** + +- [Using Scheduler](@terraform_scheduler) +- [Virtual Machine](@terraform_vm) +- [Web Gateway](@terraform_vm_gateway) +- [Kubernetes Cluster](@terraform_k8s) +- [ZDB](@terraform_zdb) +- [Quantum Filesystem](@terraform_qsfs) +- [CapRover](@terraform_caprover) + **Advanced** +- [Terraform Provider](@terraform_provider) +- [Terraform Provisioners](@terraform_provisioners) +- [Mounts](@terraform_mounts) +- [Capacity Planning](@terraform_capacity_planning) +- [Updates](@terraform_updates) diff --git a/collections/system_administrators/terraform/terraform_basic_example.md new file mode 100644 index 0000000..60c1fca --- /dev/null +++ b/collections/system_administrators/terraform/terraform_basic_example.md @@ -0,0 +1,44 @@ +```terraform +resource "grid_network" "net" { + nodes = [2] + ip_range = "10.1.0.0/16" + name = "network" + description = "newer network" +} + +resource "grid_deployment" "d1" { + node = 2 + network_name = grid_network.net.name + ip_range = lookup(grid_network.net.nodes_ip_range, 2, "") + disks { + name = "mydisk1" + size = 2 + description = "this is my disk description1" + + } + disks { + name = "mydisk2" +
size=2 + description = "this is my disk2" + } + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 1 + memory = 1024 + entrypoint = "/sbin/zinit init" + mounts { + disk_name = "mydisk1" + mount_point = "/opt" + } + mounts { + disk_name = "mydisk2" + mount_point = "/test" + } + env_vars = { + SSH_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTwULSsUubOq3VPWL6cdrDvexDmjfznGydFPyaNcn7gAL9lRxwFbCDPMj7MbhNSpxxHV2+/iJPQOTVJu4oc1N7bPP3gBCnF51rPrhTpGCt5pBbTzeyNweanhedkKDsCO2mIEh/92Od5Hg512dX4j7Zw6ipRWYSaepapfyoRnNSriW/s3DH/uewezVtL5EuypMdfNngV/u2KZYWoeiwhrY/yEUykQVUwDysW/xUJNP5o+KSTAvNSJatr3FbuCFuCjBSvageOLHePTeUwu6qjqe+Xs4piF1ByO/6cOJ8bt5Vcx0bAtI8/MPApplUU/JWevsPNApvnA/ntffI+u8DCwgP" + } + + } +} +``` diff --git a/collections/system_administrators/terraform/terraform_basics.md b/collections/system_administrators/terraform/terraform_basics.md new file mode 100644 index 0000000..0d200d6 --- /dev/null +++ b/collections/system_administrators/terraform/terraform_basics.md @@ -0,0 +1,187 @@ +

+<h1> Terraform Basics </h1>

+ +

+<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Requirements](#requirements) +- [Basic Commands](#basic-commands) +- [Find A Node](#find-a-node) +- [Preparation](#preparation) +- [Main File Details](#main-file-details) + - [Initializing the Provider](#initializing-the-provider) +- [Export Environment Variables](#export-environment-variables) + - [Output Section](#output-section) +- [Start a Deployment](#start-a-deployment) +- [Delete a Deployment](#delete-a-deployment) +- [Available Flists](#available-flists) +- [Full and Micro Virtual Machines](#full-and-micro-virtual-machines) +- [Tips on Managing Resources](#tips-on-managing-resources) +- [Conclusion](#conclusion) + +*** + +## Introduction + +We cover some important aspects of Terraform deployments on the ThreeFold Grid. + +For a complete guide on deploying a full VM on the TFGrid, read [this documentation](./terraform_full_vm.md). + +## Requirements + +Here are the requirements to use Terraform on the TFGrid: + +- [Set your TFGrid account](../getstarted/tfgrid3_getstarted.md) +- [Install Terraform](../terraform/terraform_install.md) + +## Basic Commands + +Here are some very useful commands to use with Terraform: + +- Initialize the repo: `terraform init` +- Execute a Terraform file: `terraform apply` +- See the output: `terraform output` + - This is useful when you want to output variables such as the public IP, planetary network IP, wireguard configurations, etc. +- See the state: `terraform show` +- Destroy: `terraform destroy` + +## Find A Node + +There are two options when it comes to finding a node to deploy on. You can use the scheduler or search for a node with the Nodes Explorer. + +- Use the [scheduler](resources/terraform_scheduler.md) + - The scheduler will help you find a node that matches your criteria +- Use the Nodes Explorer + - You can check the [Node Finder](../../dashboard/deploy/node_finder.md) to know which nodes fit your deployment criteria.
+ - Make sure you choose a node which has enough capacity and is available (up and running). + +## Preparation + +We cover the basic preparations before explaining the main file. + +- Make a directory for your project + - ``` + mkdir myfirstproject + ``` +- Change directory + - ``` + cd myfirstproject + ``` +- Create a main file and insert content + - ``` + nano main.tf + ``` + + +## Main File Details + +Here is a concrete example of a Terraform main file. + +### Initializing the Provider + + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + version = "1.8.1" + } + } +} + +``` +- You can provide a version to choose a specific release of the provider, e.g. `1.8.1-dev` to use version `1.8.1` on devnet +- If `version = "1.8.1"` is omitted, the provider will fetch the latest version, but for environments other than main you have to specify the version explicitly +- For devnet, qanet and testnet, use versions with the suffixes `-dev`, `-qa` and `-rcx` respectively + +Providers can have different arguments, e.g. which identity to use when deploying, which Substrate network to create contracts on, etc. This can be done in the provider section, as shown below: + +```terraform +provider "grid" { + mnemonics = "FROM THE CREATE TWIN STEP" + network = "dev" # or test to use testnet + +} +``` + +## Export Environment Variables + +When writing the main file, you can decide to leave a variable's content empty. In this case, you can export the content as environment variables. + +* Export your mnemonics + * ``` + export MNEMONICS="..." + ``` +* Export the network + * ``` + export NETWORK="..." + ``` + +For more info, consult the [Provider Manual](./advanced/terraform_provider.md).
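Because Terraform only reads these variables at run time, a forgotten export usually surfaces as a confusing provider error halfway through `terraform apply`. A tiny guard in a wrapper script can fail fast instead (a hypothetical convenience sketch, not part of the provider; `require_env` is a name made up here):

```shell
#!/bin/sh
# Abort early if any required environment variable is missing or empty.
require_env() {
  for name in "$@"; do
    if [ -z "$(printenv "$name")" ]; then
      echo "error: $name is not set; export it before running terraform" >&2
      return 1
    fi
  done
  return 0
}

# Typical use before a deployment:
# require_env MNEMONICS NETWORK && terraform init && terraform apply
```

The check costs nothing and saves a half-applied run when a fresh shell is missing the exports.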
+ +### Output Section + +The output section is useful to find information such as: + +- the overlay wireguard network configurations +- the private IPs of the VMs +- the public IP of the VM (exposed under `computedip`) + + +The output section will look something like this: + +```terraform +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "node1_vm2_ip" { + value = grid_deployment.d1.vms[1].ip +} +output "public_ip" { + value = grid_deployment.d1.vms[0].computedip +} + +``` + +## Start a Deployment + +To start a deployment, run the following command: `terraform init && terraform apply`. + +## Delete a Deployment + +To delete a deployment, run the following command: + +``` +terraform destroy +``` + +## Available Flists + +You can consult the [list of Flists](../../developers/flist/flist.md) to learn more about the available flists to use with a virtual machine. + +## Full and Micro Virtual Machines + +There are some key distinctions to take into account when it comes to deploying full or micro VMs on the TFGrid: + +* Only the flist determines if we get a full or a micro VM +* Full VMs ignore the **rootfs** field and use the first mount as their root filesystem (rootfs) +* We can upgrade a full VM by tearing it down, leaving the disk in a detached state, and then reattaching the disk to a new VM + * For more information on this, read [this documentation](https://forum.threefold.io/t/full-vm-recovery-tool/4152). + +## Tips on Managing Resources + +As general advice, you can use multiple accounts on TFChain and group your resources per account.
+ +This gives you the following benefits: + +- More control over TFT spending +- Easier to delete all your contracts +- Less chance to make mistakes +- Can use an account to share access with multiple people + +## Conclusion + +This was a quick introduction to Terraform. For a complete guide, please read [this documentation](./terraform_full_vm.md). For advanced tutorials and deployments, read [this section](./advanced/terraform_advanced_readme.md). To learn more about the different resources to deploy with Terraform on the TFGrid, read [this section](./resources/terraform_resources_readme.md). \ No newline at end of file diff --git a/collections/system_administrators/terraform/terraform_full_vm.md new file mode 100644 index 0000000..42ceee2 --- /dev/null +++ b/collections/system_administrators/terraform/terraform_full_vm.md @@ -0,0 +1,280 @@ +

+<h1> Terraform Complete Full VM Deployment </h1>

+ +

+<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Main Process](#main-process) +- [Prerequisites](#prerequisites) +- [Find a 3Node with the ThreeFold Explorer](#find-a-3node-with-the-threefold-explorer) + - [Using the Grid Scheduler](#using-the-grid-scheduler) + - [Using the Grid Explorer](#using-the-grid-explorer) +- [Create the Terraform Files](#create-the-terraform-files) +- [Deploy the Full VM with Terraform](#deploy-the-full-vm-with-terraform) +- [SSH into the 3Node](#ssh-into-the-3node) +- [Delete the Deployment](#delete-the-deployment) +- [Conclusion](#conclusion) + +*** + +## Introduction + +This short ThreeFold Guide will teach you how to deploy a Full VM on the TFGrid using Terraform. For this guide, we will be deploying Ubuntu 22.04. + +The steps are very simple. You first need to create the Terraform files, the variables file and the deployment file, and then deploy the full VM. After the deployment is done, you can SSH into the full VM. + +The main goal of this guide is to show you all the necessary steps to deploy a Full VM on the TFGrid using Terraform. Once you get acquainted with this first basic deployment, you should be able to explore on your own the possibilities that the TFGrid and Terraform combined provide. + + + +## Main Process + +For this guide, we use two files to deploy with Terraform. The first file contains the environment variables and the second file contains the parameters to deploy our workload. + +To facilitate the deployment, only the environment variables file needs to be adjusted. The `main.tf` file references these variables (e.g. `var.size` for the disk size) and thus you do not need to change this file. Of course, you can adjust the deployment based on your preferences. That being said, it should be easy to deploy the Terraform deployment with the `main.tf` file as is. + +On your local computer, create a new folder named `terraform` and a subfolder called `deployment-full-vm`.
In the subfolder, store the files `main.tf` and `credentials.auto.tfvars`. + +Modify the variable file to take into account your own seed phrase and SSH keys. You should also specify the node ID of the 3Node you will be deploying on. + +Once this is done, initialize and apply Terraform to deploy your workload, then SSH into the Full VM. That's it! Now let's go through all these steps in further detail. + + + +## Prerequisites + +- [Install Terraform](./terraform_install.md) + +You need to properly download and install Terraform. Simply follow the documentation depending on your operating system (Linux, MAC and Windows). + + + +## Find a 3Node with the ThreeFold Explorer + +We want to find a proper 3Node to deploy our workload. For this guide, we want a 3Node with at least 15GB of storage, 1 vcore and 512MB of RAM, which are the minimum specifications for a micro VM on the TFGrid. We are also looking for a 3Node with a public IPv4 address. + +We present two options to find a suitable node: the scheduler and the TFGrid Explorer. + + + +### Using the Grid Scheduler + +Using the TFGrid scheduler can be very efficient depending on what you are trying to achieve. To learn more about the scheduler, please refer to this [Scheduler Guide](resources/terraform_scheduler.md). + + + +### Using the Grid Explorer + +We show here how to find a suitable 3Node using the ThreeFold Explorer.
+ +- Go to the ThreeFold Grid [Node Finder](https://dashboard.grid.tf/#/deploy/node-finder/) (Main Net) +- Find a 3Node with suitable resources for the deployment and take note of its node ID on the leftmost column `ID` +- For proper understanding, we give further information on some relevant columns: + - `ID` refers to the node ID + - `Free Public IPs` refers to available IPv4 public IP addresses + - `HRU` refers to HDD storage + - `SRU` refers to SSD storage + - `MRU` refers to RAM (memory) + - `CRU` refers to virtual cores (vcores) +- To quicken the process of finding a proper 3Node, you can narrow down the search by adding filters: + - At the top left of the screen, in the `Filters` box, select the parameter(s) you want. + - For each parameter, a new field will appear where you can enter a minimum number requirement for the 3Nodes. + - `Free SRU (GB)`: 15 + - `Free MRU (GB)`: 1 + - `Total CRU (Cores)`: 1 + - `Free Public IP`: 2 + - Note: if you want a public IPv4 address, it is recommended to set the parameter `FREE PUBLIC IP` to at least 2 to avoid false positives. This ensures that the shown 3Nodes have viable IP addresses. + +Once you've found a proper node, take note of its node ID. You will need to use this ID when creating the Terraform files. + + + +## Create the Terraform Files + +Open the terminal. + +- Go to the home folder + + - ``` + cd ~ + ``` + +- Create the folder `terraform` and the subfolder `deployment-full-vm`: + - ``` + mkdir -p terraform/deployment-full-vm + ``` + - ``` + cd terraform/deployment-full-vm + ``` +- Create the `main.tf` file: + + - ``` + nano main.tf + ``` + +- Copy the `main.tf` content and save the file.
+ +``` +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + } + } +} + +variable "mnemonics" { + type = string +} + +variable "SSH_KEY" { + type = string +} + +variable "tfnodeid1" { + type = string +} + +variable "size" { + type = string +} + +variable "cpu" { + type = string +} + +variable "memory" { + type = string +} + +provider "grid" { + mnemonics = var.mnemonics + network = "main" +} + +locals { + name = "tfvm" +} + +resource "grid_network" "net1" { + name = local.name + nodes = [var.tfnodeid1] + ip_range = "10.1.0.0/16" + description = "newer network" + add_wg_access = true +} + +resource "grid_deployment" "d1" { + disks { + name = "disk1" + size = var.size + } + name = local.name + node = var.tfnodeid1 + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04.flist" + cpu = var.cpu + mounts { + disk_name = "disk1" + mount_point = "/disk1" + } + memory = var.memory + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = var.SSH_KEY + } + publicip = true + planetary = true + } +} + +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_zmachine1_ip" { + value = grid_deployment.d1.vms[0].ip +} + +output "ygg_ip1" { + value = grid_deployment.d1.vms[0].ygg_ip +} + +output "ipv4_vm1" { + value = grid_deployment.d1.vms[0].computedip +} + +``` + +In this file, we name the VM as `vm1`. + +- Create the `credentials.auto.tfvars` file: + + - ``` + nano credentials.auto.tfvars + ``` + +- Copy the `credentials.auto.tfvars` content and save the file. + +``` +mnemonics = "..." +SSH_KEY = "..." + +tfnodeid1 = "..." + +size = "15" +cpu = "1" +memory = "512" +``` + +Make sure to add your own seed phrase and SSH public key. You will also need to specify the node ID of the server used. Simply replace the three dots by the content. + +We set here the minimum specs for a full VM, but you can adjust these parameters. 
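If you would rather not paste secrets into the file by hand, the same variables file can be rendered from your shell environment (a hypothetical helper script; the resulting layout is the file shown above, and `MNEMONICS`, `SSH_KEY` and `TFNODEID1` are environment variable names assumed here):

```shell
#!/bin/sh
# Render credentials.auto.tfvars from environment variables,
# so the seed phrase never has to be typed into an editor.
cat > credentials.auto.tfvars <<EOF
mnemonics = "${MNEMONICS}"
SSH_KEY = "${SSH_KEY}"

tfnodeid1 = "${TFNODEID1}"

size = "15"
cpu = "1"
memory = "512"
EOF
echo "wrote credentials.auto.tfvars"
```

Export the three variables first; Terraform automatically loads any `*.auto.tfvars` file found in the working directory.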
+ + + +## Deploy the Full VM with Terraform + +We now deploy the full VM with Terraform. Make sure that you are in the correct folder `terraform/deployments` containing the main and variables files. + +- Initialize Terraform: + + - ``` + terraform init + ``` + +- Apply Terraform to deploy the full VM: + - ``` + terraform apply + ``` + +After deployments, take note of the 3Node' IPv4 address. You will need this address to SSH into the 3Node. + + + +## SSH into the 3Node + +- To [SSH into the 3Node](../getstarted/ssh_guide/ssh_guide.md), write the following: + - ``` + ssh root@VM_IPv4_Address + ``` + + + +## Delete the Deployment + +To stop the Terraform deployment, you simply need to write the following line in the terminal: + +``` +terraform destroy +``` + +Make sure that you are in the Terraform directory you created for this deployment. + + + +## Conclusion + +You now have the basic knowledge and know-how to deploy on the TFGrid using Terraform. + +As always, if you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. diff --git a/collections/system_administrators/terraform/terraform_get_started.md b/collections/system_administrators/terraform/terraform_get_started.md new file mode 100644 index 0000000..a4693da --- /dev/null +++ b/collections/system_administrators/terraform/terraform_get_started.md @@ -0,0 +1,87 @@ +![ ](./advanced/img//terraform_.png) + +## Using Terraform + +- make a directory for your project `mkdir myfirstproject` +- `cd myfirstproject` +- create `main.tf` <- creates the terraform main file + +## Create + +to start the deployment `terraform init && terraform apply` + +## Destroying + +can be done using `terraform destroy` + +And that's it!! 
You managed to deploy 2 VMs on the ThreeFold Grid v3. + +## How to use a Terraform File + +### Initializing the provider + +In Terraform's global section: + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtech/grid" + version = "1.8.1" + } + } +} + +``` + +- You can provide a version to choose a specific release of the provider, e.g. `1.8.1-dev` to use version `1.8.1` on devnet +- If `version = "1.8.1"` is omitted, the provider will fetch the latest version, but for environments other than main you have to specify the version explicitly +- For devnet, qanet and testnet, use versions with the suffixes `-dev`, `-qa` and `-rcx` respectively + +Providers can have different arguments, e.g. which identity to use when deploying, which Substrate network to create contracts on, etc. This can be done in the provider section: + +```terraform +provider "grid" { + mnemonics = "FROM THE CREATE TWIN STEP" + network = "dev" # or test to use testnet + +} +``` + +Please note you can leave its content empty and export everything as environment variables: + +``` +export MNEMONICS="....." +export NETWORK="....."
+ +``` + +For more info, see the [Provider Manual](./advanced/terraform_provider.md). + +### Output section + +```terraform +output "wg_config" { + value = grid_network.net1.access_wg_config +} +output "node1_vm1_ip" { + value = grid_deployment.d1.vms[0].ip +} +output "node1_vm2_ip" { + value = grid_deployment.d1.vms[1].ip +} +output "public_ip" { + value = grid_deployment.d1.vms[0].computedip +} + +``` + +Output parameters show what has been done: + +- the overlay wireguard network configurations +- the private IPs of the VMs +- the public IP of the VM (exposed under `computedip`) + +### Which flists to use in a VM + +See the [list of flists](../manual3_iac/grid3_supported_flists.md). diff --git a/collections/system_administrators/terraform/terraform_gpu_support.md new file mode 100644 index 0000000..345b5e1 --- /dev/null +++ b/collections/system_administrators/terraform/terraform_gpu_support.md @@ -0,0 +1,55 @@ +

+<h1> GPU Support and Terraform </h1>

+ +

+<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Example](#example) + +*** + +## Introduction + +The TFGrid now supports GPUs. We present here a quick example. This section will be expanded as new information comes in. + + + +## Example + +```terraform +terraform { + required_providers { + grid = { + source = "threefoldtechdev.com/providers/grid" + } + } +} +provider "grid" { +} +locals { + name = "testvm" +} +resource "grid_network" "net1" { + name = local.name + nodes = [93] + ip_range = "10.1.0.0/16" + description = "newer network" +} +resource "grid_deployment" "d1" { + name = local.name + node = 93 + network_name = grid_network.net1.name + vms { + name = "vm1" + flist = "https://hub.grid.tf/tf-official-apps/base:latest.flist" + cpu = 2 + memory = 1024 + entrypoint = "/sbin/zinit init" + env_vars = { + SSH_KEY = file("~/.ssh/id_rsa.pub") + } + planetary = true + gpus = [ + "0000:0e:00.0/1002/744c" + ] + } +} +``` \ No newline at end of file diff --git a/collections/system_administrators/terraform/terraform_install.md new file mode 100644 index 0000000..f125b7d --- /dev/null +++ b/collections/system_administrators/terraform/terraform_install.md @@ -0,0 +1,53 @@ +

+<h1> Installing Terraform </h1>

+ +

+<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Install Terraform](#install-terraform) + - [Install Terraform on Linux](#install-terraform-on-linux) + - [Install Terraform on MAC](#install-terraform-on-mac) + - [Install Terraform on Windows](#install-terraform-on-windows) +- [ThreeFold Terraform Plugin](#threefold-terraform-plugin) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +There are many ways to install Terraform depending on your operating system. Terraform is available for Linux, MAC and Windows. + +## Install Terraform + +You can get Terraform from the Terraform website [download page](https://www.terraform.io/downloads.html). You can also install it using your system package manager. The Terraform [installation manual](https://learn.hashicorp.com/tutorials/terraform/install-cli) contains the essential information for a proper installation. + +We cover here the basic steps for Linux, MAC and Windows for convenience. Refer to the official Terraform documentation if needed. + +### Install Terraform on Linux + +To install Terraform on Linux, we follow the official [Terraform documentation](https://developer.hashicorp.com/terraform/downloads). + +* [Install Terraform on Linux](../computer_it_basics/cli_scripts_basics.md#install-terraform) + +### Install Terraform on MAC + +To install Terraform on MAC, install Brew and then install Terraform. + +* [Install Brew](../computer_it_basics/cli_scripts_basics.md#install-brew) +* [Install Terraform with Brew](../computer_it_basics/cli_scripts_basics.md#install-terraform-with-brew) + +### Install Terraform on Windows + +To install Terraform on Windows, a quick way is to first install Chocolatey and then install Terraform. 
+ +* [Install Chocolatey](../computer_it_basics/cli_scripts_basics.md#install-chocolatey) +* [Install Terraform with Chocolatey](../computer_it_basics/cli_scripts_basics.md#install-terraform-with-chocolatey) + +## ThreeFold Terraform Plugin + +The ThreeFold [Terraform plugin](https://github.com/threefoldtech/terraform-provider-grid) is supported on Linux, MAC and Windows. + +There's no need to specifically install the ThreeFold Terraform plugin. Terraform will automatically load it from an online directory according to the instructions within the deployment file. + +## Questions and Feedback + +If you have any questions, let us know by writing a post on the [Threefold Forum](http://forum.threefold.io/) or by reaching out to the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. \ No newline at end of file diff --git a/collections/system_administrators/terraform/terraform_readme.md new file mode 100644 index 0000000..17b9463 --- /dev/null +++ b/collections/system_administrators/terraform/terraform_readme.md @@ -0,0 +1,45 @@ + + +

+<h1> Terraform </h1>

+ +Welcome to the *Terraform* section of the ThreeFold Manual! + +In this section, we'll embark on a journey to explore the powerful capabilities of Terraform within the ThreeFold Grid ecosystem. Terraform, a cutting-edge infrastructure as code (IaC) tool, empowers you to define and provision your infrastructure efficiently and consistently. + +

+<h2>Table of Contents</h2>

+ +- [What is Terraform?](#what-is-terraform) +- [Terraform on ThreeFold Grid: Unleashing Power and Simplicity](#terraform-on-threefold-grid-unleashing-power-and-simplicity) +- [Get Started](#get-started) +- [Features](#features) +- [What is Not Supported](#what-is-not-supported) + +*** + +## What is Terraform? + +Terraform is an open-source tool that enables you to describe and deploy infrastructure using a declarative configuration language. With Terraform, you can define your infrastructure components, such as virtual machines, networks, and storage, in a human-readable configuration file. This file, often referred to as the Terraform script, becomes a blueprint for your entire infrastructure. + +The beauty of Terraform lies in its ability to automate the provisioning and management of infrastructure across various cloud providers, ensuring that your deployments are reproducible and scalable. It promotes collaboration, version control, and the ability to treat your infrastructure as code, providing a unified and seamless approach to managing complex environments. + +## Terraform on ThreeFold Grid: Unleashing Power and Simplicity + +Within the ThreeFold Grid ecosystem, Terraform plays a pivotal role in streamlining the deployment and orchestration of decentralized, peer-to-peer infrastructure. Leveraging the unique capabilities of the ThreeFold Grid, you can use Terraform to define and deploy your workloads, tapping into the TFGrid decentralized architecture for unparalleled scalability, reliability, and sustainability. + +This manual will guide you through the process of setting up, configuring, and managing your infrastructure on the ThreeFold Grid using Terraform. Whether you're a seasoned developer, a DevOps professional, or someone exploring the world of decentralized computing for the first time, this guide is designed to provide clear and concise instructions to help you get started. 
+ +## Get Started + +![ ](../terraform/img//terraform_works.png) + +Threefold loves Open Source! In v3.0 we are integrating one of the most popular 'Infrastructure as Code' (IaC) tools of the cloud industry, [Terraform](https://terraform.io). Using Terraform with the ThreeFold Grid v3 gives a consistent workflow and a familiar experience to everyone coming from different backgrounds. Terraform describes the desired state of the deployment instead of imperatively describing the low-level details and the mechanics of how things should be glued together. + +## Features + +- All basic primitives from the ThreeFold Grid can be deployed, which is a lot. +- Terraform can destroy a deployment +- Terraform shows all the outputs + +## What is Not Supported + +- Updates and upgrades are not supported: if you want to change the properties of running instances or move to another node, you need to destroy the deployment and re-create it. Adding a VM to an existing deployment, however, shouldn't affect the other running VMs, and the same goes for decommissioning a VM from a deployment. diff --git a/collections/system_administrators/terraform/terraform_toc.md new file mode 100644 index 0000000..55d5771 --- /dev/null +++ b/collections/system_administrators/terraform/terraform_toc.md @@ -0,0 +1,38 @@ +

+<h1> Terraform </h1>

+ +

+<h2>Table of Contents</h2>

+ +- [Overview](./terraform_readme.md) +- [Installing Terraform](./terraform_install.md) +- [Terraform Basics](./terraform_basics.md) +- [Full VM Deployment](./terraform_full_vm.md) +- [GPU Support](./terraform_gpu_support.md) +- [Resources](./resources/terraform_resources_readme.md) + - [Using Scheduler](./resources/terraform_scheduler.md) + - [Virtual Machine](./resources/terraform_vm.md) + - [Web Gateway](./resources/terraform_vm_gateway.md) + - [Kubernetes Cluster](./resources/terraform_k8s.md) + - [ZDB](./resources/terraform_zdb.md) + - [Zlogs](./resources/terraform_zlogs.md) + - [Quantum Safe Filesystem](./resources/terraform_qsfs.md) + - [QSFS on Micro VM](./resources/terraform_qsfs_on_microvm.md) + - [QSFS on Full VM](./resources/terraform_qsfs_on_full_vm.md) + - [CapRover](./resources/terraform_caprover.md) +- [Advanced](./advanced/terraform_advanced_readme.md) + - [Terraform Provider](./advanced/terraform_provider.md) + - [Terraform Provisioners](./advanced/terraform_provisioners.md) + - [Mounts](./advanced/terraform_mounts.md) + - [Capacity Planning](./advanced/terraform_capacity_planning.md) + - [Updates](./advanced/terraform_updates.md) + - [SSH Connection with Wireguard](./advanced/terraform_wireguard_ssh.md) + - [Set a Wireguard VPN](./advanced/terraform_wireguard_vpn.md) + - [Synced MariaDB Databases](./advanced/terraform_mariadb_synced_databases.md) + - [Nomad](./advanced/terraform_nomad.md) + - [Nextcloud Deployments](./advanced/terraform_nextcloud_toc.md) + - [Nextcloud All-in-One Deployment](./advanced/terraform_nextcloud_aio.md) + - [Nextcloud Single Deployment](./advanced/terraform_nextcloud_single.md) + - [Nextcloud Redundant Deployment](./advanced/terraform_nextcloud_redundant.md) + - [Nextcloud 2-Node VPN Deployment](./advanced/terraform_nextcloud_vpn.md) \ No
newline at end of file diff --git a/collections/technology/architecture/cloud_architecture_.md new file mode 100644 index 0000000..accb833 --- /dev/null +++ b/collections/technology/architecture/cloud_architecture_.md @@ -0,0 +1,20 @@ +![](img/architecture_why_us.jpg) + +## Architecture Overview + +**More info:** + +- QSSS + - QSFS +- [Quantum Safe Network Concept](sdk:archi_qsnetwork) + - [Zero-OS Network](sdk:capacity_network) + - [ThreeFold Network = Planetary Network](sdk:archi_psnw) + - [Web Gateway](sdk:archi_webgateway) +- TFGrid + - [3Node](3node) + - [ThreeFold Connect](tfconnect) + + diff --git a/collections/technology/architecture/cloud_wallet_.md new file mode 100644 index 0000000..ceff391 --- /dev/null +++ b/collections/technology/architecture/cloud_wallet_.md @@ -0,0 +1,43 @@ +# Wallet on Stellar Network + +![](img/3bot_wallet_detail.jpg) + +### Prepaid Wallets + +The VDC has a built-in __prepaid wallet__, which is the wallet used for paying the capacity requested in the VDC. This wallet expresses in TFT the remaining balance available for ensuring the operational continuity of the reserved capacity. + +This wallet is registered on the Stellar network, and is exclusively used for capacity reservation aligned with the chosen VDC size. +Both the TFGrid testnet and mainnet are connected to the Stellar mainnet, so the TFTs used are the same. Testnet prices are substantially lower than mainnet prices, though there's no guarantee about continuity of operation: testnet is reset at regular times, and available capacity is also lower than on mainnet. + +### A public key and a shared private key + +The wallet is characterized by 2 strings: +- A public address, starting with a 'G', is the address that can be shared with anyone, as it is the address to be mentioned when transferring tokens TO the wallet.
+- A private key, starting with an 'S', is the secret that gives control over the wallet, and which is needed to generate outgoing transfers. + +### Payment for Capacity Process + +The Prepaid Wallet which is set up within your VDC is exclusively used for this purpose. The private key of this wallet is shared between you and the VDC provider: +- The VDC provider needs the private key to pay the farmer on a rolling basis: every hour an amount is transferred to the farmer(s) that own the reserved hardware capacity, so it stays reserved for the next 2 weeks. These 2 weeks are meant as a 'grace period': when the balance of the prepaid wallet becomes zero, you have 2 weeks to top up the wallet. You will get notified of this, while the workload remains operational. +If the wallet hasn't been topped up by the end of this 2-week grace period, the workload will be removed and the capacity will be made available again for new reservations. + +## Top-up a Wallet + +Please read the [Top-up](evdc_wallet_topup) page for instructions. + +## Viewing Your Balance + +Simply click on one of your existing wallets to see its details. + +![](img/3bot_wallet_detail.jpg) + +## Withdraw TFTs from the wallet + +The private key allows you to transfer tokens from the prepaid wallet to your personal TFT wallet. Evidently, transferring tokens has a direct impact on the expiration date of your VDC. + +### Your VDC Wallet Details + +- The Network is the Stellar mainnet network (indicated with `STD` on the wallet information) +- [Trustlines](https://www.stellar.org/developers/guides/concepts/assets.html) are specific to the Stellar network to indicate that a user is 'trusting' the asset / crypto issuer, in our case trusting ThreeFold Dubai as issuer of TFT. +Trustlines are specific to the network, so they need to be established on both testnet and mainnet, and for all the tokens that someone intends to hold. Without a trustline, a wallet address can't receive tokens.
+To make it easier for the user, trustlines are established automatically when creating a wallet for TFT in the admin panel as well as in the ThreeFold Connect app. However, if you use a third-party Stellar wallet for your tokens, you need to create the trustlines yourself. \ No newline at end of file diff --git a/collections/technology/architecture/evdc_qsfs_get_started_.md b/collections/technology/architecture/evdc_qsfs_get_started_.md new file mode 100644 index 0000000..50d7ab1 --- /dev/null +++ b/collections/technology/architecture/evdc_qsfs_get_started_.md @@ -0,0 +1,79 @@ +## Getting started + +Any Quantum-Safe File System has 4 storage layers: +- An etcd metadata storage layer +- Local storage +- ZDB-FS fuse layer +- ZSTOR for the dispersed storage + +There are 2 ways to run the zstor filesystem: +- In self-management mode for the metadata; +- In 'Quantum Storage Enabled' mode. + +The first mode combines the local storage, ZDB-FS and ZSTOR, but requires an etcd metadata layer to be manually run and managed. +The second mode is enabled by the `ENABLE QUANTUM STORAGE` button and provisions etcd to manage the metadata. Here all 4 layers are available (hence it consumes slightly more storage from your VDC). + +### Manually Managed Metadata Mode + +This Planetary Secure File System uses a ThreeFold VDC's storage nodes to back data up to the ThreeFold Grid. Below you'll find instructions for using an executable bootstrap file that runs on Linux or in a Linux Docker container to set up the complete environment necessary to use the file system. + +Please note that this solution is currently for testing only, and some important features are still under development. + +#### VDC and Z-Stor Config + +If you haven't already, go ahead and [deploy a VDC](evdc_deploy). Then download the Z-Stor config file, found in the upper right corner of the `VDC Storage Nodes` screen.
Unless you know that IPv6 works on your machine and within Docker, choose the IPv4 version of the file. + +![](img/planetaryfs_zstor_config.jpg) + +As described in [Manage Storage Nodes](evdc_storage), this file contains the necessary information to connect with the 0-DBs running on the storage nodes in your VDC. It also includes an encryption key used to encrypt uploaded data and a field to specify your etcd endpoints. Using the defaults here is fine. + +#### Bootstrap Executable + +Now download the zstor filesystem bootstrap, available [here](https://github.com/threefoldtech/quantum-storage/releases/download/v0.0.1/planetaryfs-bootstrap-linux-amd64). + + +> __Remark__: +For now, the bootstrap executable is only available for Linux. We'll cover how to use it within an Ubuntu container in Docker, which will also work on macOS. +First, we'll start an Ubuntu container with Docker, enabling fuse file system capabilities. In a terminal window, run: + +`docker run -it --name zdbfs --cap-add SYS_ADMIN --device /dev/fuse ubuntu:20.04` + +Next, we'll copy the Z-Stor config file and the bootstrap executable into the running container. In a separate terminal window, navigate to where you downloaded the files and run: + +`docker cp planetaryfs-bootstrap-linux-amd64 zdbfs:/root/` +`docker cp <your Z-Stor config file> zdbfs:/root/` + +Back in the container's terminal window, `cd /root` and confirm that the two files are there with `ls`. Then make the bootstrap executable and run it, specifying your config file: + +`chmod u+x planetaryfs-bootstrap-linux-amd64` +`./planetaryfs-bootstrap-linux-amd64 <your Z-Stor config file>` + +Running the bootstrap will start up all necessary components and show you that the back-end is ready for dispersing the data. + +![](img/planetaryfs_bootstrap_ready.jpg ':size=600') + +After that, your Planetary Secure File System will be mounted at `/root/.threefold/mnt/zdbfs`. Files copied there will automatically be stored on the grid incrementally as fragments of a certain size are filled, by default 32 MB.
In a future release, this will no longer be a limitation. + +### Provisioned Metadata Mode + +Users who also want the metadata available out of the box, ready for use in the Kubernetes cluster, need to press the `ENABLE QUANTUM STORAGE` button. This provisions etcd key-value stores in the VDC, which can also be used within a Kubernetes cluster. + +![](img/planetaryfs_enable_qs.jpg) + +Once Quantum Storage mode is enabled, an etcd store is provisioned for you automatically. + +**Remark**: this action can't be undone in your VDC: the etcd stores can be filled immediately, and deleting them could result in data loss. This is why a 'Disable Quantum Storage' option is considered too risky and is not available. + +### Add node + +Adding storage nodes manually is simple: press the `+ ADD NODE` button. + +![](img/planetaryfs_add_node.jpg) + +You'll be asked to deploy this storage node either on the same farm or on another one. The choice affects resilience: having the data in multiple locations makes it more resilient against disaster. + +![](img/planetaryfs_farm.jpg ':size=600') + +If you choose `Yes`, select the farm of your choice, and then pay for the extra capacity.
+ +![](img/planetaryfs_pay.jpg ':size=600') \ No newline at end of file diff --git a/collections/technology/architecture/img/3bot_wallet_detail.jpg b/collections/technology/architecture/img/3bot_wallet_detail.jpg new file mode 100644 index 0000000..ab79ec2 Binary files /dev/null and b/collections/technology/architecture/img/3bot_wallet_detail.jpg differ diff --git a/collections/technology/architecture/img/3layers_tf_.jpg b/collections/technology/architecture/img/3layers_tf_.jpg new file mode 100644 index 0000000..1f244d7 Binary files /dev/null and b/collections/technology/architecture/img/3layers_tf_.jpg differ diff --git a/collections/technology/architecture/img/architecture_why_us.jpg b/collections/technology/architecture/img/architecture_why_us.jpg new file mode 100644 index 0000000..0c144df Binary files /dev/null and b/collections/technology/architecture/img/architecture_why_us.jpg differ diff --git a/collections/technology/architecture/img/planet_fs.jpg b/collections/technology/architecture/img/planet_fs.jpg new file mode 100644 index 0000000..7d2d814 Binary files /dev/null and b/collections/technology/architecture/img/planet_fs.jpg differ diff --git a/collections/technology/architecture/img/planetaryfs_add_node.jpg b/collections/technology/architecture/img/planetaryfs_add_node.jpg new file mode 100644 index 0000000..89f259c Binary files /dev/null and b/collections/technology/architecture/img/planetaryfs_add_node.jpg differ diff --git a/collections/technology/architecture/img/planetaryfs_bootstrap_ready.jpg b/collections/technology/architecture/img/planetaryfs_bootstrap_ready.jpg new file mode 100644 index 0000000..c59c889 Binary files /dev/null and b/collections/technology/architecture/img/planetaryfs_bootstrap_ready.jpg differ diff --git a/collections/technology/architecture/img/planetaryfs_enable_qs.jpg b/collections/technology/architecture/img/planetaryfs_enable_qs.jpg new file mode 100644 index 0000000..166edad Binary files /dev/null and 
b/collections/technology/architecture/img/planetaryfs_enable_qs.jpg differ diff --git a/collections/technology/architecture/img/planetaryfs_farm.jpg b/collections/technology/architecture/img/planetaryfs_farm.jpg new file mode 100644 index 0000000..45fca5b Binary files /dev/null and b/collections/technology/architecture/img/planetaryfs_farm.jpg differ diff --git a/collections/technology/architecture/img/planetaryfs_pay.jpg b/collections/technology/architecture/img/planetaryfs_pay.jpg new file mode 100644 index 0000000..0dfc38d Binary files /dev/null and b/collections/technology/architecture/img/planetaryfs_pay.jpg differ diff --git a/collections/technology/architecture/img/planetaryfs_zstor_config.jpg b/collections/technology/architecture/img/planetaryfs_zstor_config.jpg new file mode 100644 index 0000000..4467329 Binary files /dev/null and b/collections/technology/architecture/img/planetaryfs_zstor_config.jpg differ diff --git a/collections/technology/architecture/img/quantum_safe_storage.jpg b/collections/technology/architecture/img/quantum_safe_storage.jpg new file mode 100644 index 0000000..4d99c48 Binary files /dev/null and b/collections/technology/architecture/img/quantum_safe_storage.jpg differ diff --git a/collections/technology/architecture/img/quantum_safe_storage_scale.jpg b/collections/technology/architecture/img/quantum_safe_storage_scale.jpg new file mode 100644 index 0000000..b785a79 Binary files /dev/null and b/collections/technology/architecture/img/quantum_safe_storage_scale.jpg differ diff --git a/collections/technology/architecture/threefold_filesystem.md b/collections/technology/architecture/threefold_filesystem.md new file mode 100644 index 0000000..45b12fe --- /dev/null +++ b/collections/technology/architecture/threefold_filesystem.md @@ -0,0 +1,34 @@ +![](img/planet_fs.jpg) + +# ThreeFold zstor filesystem (zstor) + +Part of the eVDC is a set of Storage Nodes, which can be used as a storage infrastructure for files in any format. 
+ +## Mount Any Files in your Storage Infrastructure + +The QSFS is a mechanism to mount any file system (in any format) on the grid, in a quantum-secure way. + +This storage layer relies on 3 primitives of the ThreeFold technology: + +- [0-db](https://github.com/threefoldtech/0-db) is the storage engine. +It is an append-only database, which stores objects in an immutable format. It allows keeping the history out-of-the-box, good performance on disk, low overhead, easy data structure and easy backup (linear copy and immutable files). + +- [0-stor-v2](https://github.com/threefoldtech/0-stor_v2) is used to disperse the data into chunks by performing 'forward-looking error-correcting code' (FLECC) on it and sending the fragments to safe locations. +It takes files in any format as input, encrypts the file with AES based on a user-defined key, then FLECC-encodes the file and spreads out the result +to multiple 0-DBs. The number of generated chunks is configurable to make it more or less robust against data loss through unavailable fragments. Even if some 0-DBs are unreachable, you can still retrieve the original data, and missing 0-DBs can even be rebuilt to have full consistency. It's an essential element of the operational backup. + +- [0-db-fs](https://github.com/threefoldtech/0-db-fs) is the filesystem driver which uses 0-DB as a primary storage engine. It manages the storage of directories and metadata in a dedicated namespace and file payloads in another dedicated namespace. + +Together they form a storage layer that is quantum secure: even the most powerful computer can't hack the system because no single node contains all of the information needed to reconstruct the data. + +![](img/quantum_safe_storage.jpg) + +This concept scales forever, and you can bring any file system on top of it: +- S3 storage +- any backup system +- an FTP server +- IPFS and Hypercore distributed file sharing protocols +- ...
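The dispersal idea described above can be illustrated with a small, self-contained sketch. This is *not* the actual 0-stor-v2 implementation (which uses configurable, AES-encrypted erasure coding rather than simple parity); it is a toy model showing why a missing fragment doesn't mean lost data: with k data chunks plus one XOR parity chunk, any single missing chunk can be rebuilt from the others.

```python
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(data, k):
    """Split data into k equal-size chunks plus one XOR parity chunk."""
    size = -(-len(data) // k)                    # ceil division
    padded = data.ljust(k * size, b"\x00")       # pad so all chunks are equal
    chunks = [padded[i * size:(i + 1) * size] for i in range(k)]
    chunks.append(reduce(xor_bytes, chunks))     # parity = XOR of all chunks
    return chunks                                # k data chunks + 1 parity

def recover(chunks, orig_len, k):
    """Rebuild the original data, tolerating one missing (None) chunk."""
    missing = [i for i, c in enumerate(chunks) if c is None]
    assert len(missing) <= 1, "single XOR parity tolerates only one loss"
    if missing:
        # XOR of all surviving chunks reproduces the missing one
        chunks[missing[0]] = reduce(xor_bytes, [c for c in chunks if c is not None])
    return b"".join(chunks[:k])[:orig_len]

data = b"dispersed quantum safe storage"
fragments = split_with_parity(data, k=4)
fragments[1] = None                              # one 0-DB becomes unreachable
assert recover(fragments, len(data), k=4) == data
```

Real FLECC encoding generalizes this: the number of data and parity fragments is configurable, so several 0-DBs can be unreachable at once while the data remains recoverable and the missing fragments can be rebuilt.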
+ +![](img/quantum_safe_storage_scale.jpg) + diff --git a/collections/technology/concepts/buying_storing_tft.md b/collections/technology/concepts/buying_storing_tft.md new file mode 100644 index 0000000..8019cb7 --- /dev/null +++ b/collections/technology/concepts/buying_storing_tft.md @@ -0,0 +1,32 @@ +![](./img/tft.png) + +# Buying and Storing TFTs + +If you're looking to navigate the [TFT Ecosystem](https://library.threefold.me/info/manual/#/tokens/threefold__tft_ecosystem), this collection of tutorials and manuals is here to help. Learn how to purchase, trade, and securely store your TFTs with ease. + +For a comprehensive introduction to TFT, we recommend exploring the [TFT Home Section in the ThreeFold Library](https://library.threefold.me/info/threefold#/tokens/threefold__tokens_home). + +## Manuals on How to Buy TFT +Discover step-by-step instructions on buying and storing TFTs across different platforms. Our manuals cover: + +- [BSC - Pancake Swap](https://library.threefold.me/info/manual/#/tokens/threefold__tft_binance_defi) +- [BSC - 1inch.io](https://library.threefold.me/info/manual/#/tokens/threefold__tft_1inch) +- [GetTFT.com](https://gettft.com/gettft/#how-it-works) +- [Albedo Wallet](https://library.threefold.me/info/manual/#/tokens/threefold__albedo) +- [Solar Wallet](https://library.threefold.me/info/manual/#/tokens/threefold__solar_wallet) +- [Lobstr Wallet](https://library.threefold.me/info/manual/#/tokens/threefold__lobstr_wallet) +- [StellarTerm](https://library.threefold.me/info/manual/#/tokens/threefold__tft_stellarterm) +- [Interstellar](https://library.threefold.me/info/manual/#/tokens/threefold__tft_interstellar) +- [BTC-Alpha Exchange](https://library.threefold.me/info/manual/#/tokens/threefold__tft_btc_alpha) +- [StellarX Exchange](https://library.threefold.me/info/manual/#/tokens/threefold__tft_stellarx) +- [TF Live Desk (OTC)](https://library.threefold.me/info/manual/#/tokens/threefold__tft_otc) +- [Mazraa 
(Farmers)](https://library.threefold.me/info/manual/#/tokens/threefold__tft_mazraa) +- [Bettertoken (Farmers)](https://library.threefold.me/info/manual/#/tokens/threefold__tft_bettertoken) + +## Other Related Manuals on TFT +- [Store TFTs on Hardware Wallets](../threefold_token/storing_tft/hardware_wallet.md) +- [Storing TFTs on TF Connect App](../threefold_token/storing_tft/tf_connect_app.md) +- [TFT Bridges](../threefold_token/tft_bridges/tft_bridges.md) + - [TFChain-Stellar Bridge](../threefold_token/tft_bridges/tfchain_stellar_bridge.md) + - [BSC-Stellar Bridge](../threefold_token/tft_bridges/bsc_stellar_bridge.md) + - [Ethereum-Stellar Bridge](../threefold_token/tft_bridges/tft_ethereum/tft_ethereum.md) diff --git a/collections/technology/concepts/concepts_readme.md b/collections/technology/concepts/concepts_readme.md new file mode 100644 index 0000000..303c1c3 --- /dev/null +++ b/collections/technology/concepts/concepts_readme.md @@ -0,0 +1,20 @@ +# ThreeFold Grid Concepts + +In this section we will explore the fundamental principles and concepts behind the ThreeFold Grid. This comprehensive resource will take you on a journey through the core technologies that underpin the ThreeFold Grid, empowering you to understand and leverage the decentralized nature of this groundbreaking infrastructure.
+ +## Learn the Basics + +- [Zero-OS](./zos.md) +- [TFGrid Primitives](./grid_primitives.md) +- [TFGrid Component List](./grid3_components.md) +- [ThreeFold's Infrastructure as Code (IaC)](./grid3_iac.md) +- [Proof of Utilization](./proof_of_utilization.md) +- [Contract Grace Period](./contract_grace_period.md) +- [What's New on TFGrid v3.x](./grid3_whatsnew.md) +- [TFChain](./tfchain.md) +- [TFGrid by Design](./tfgrid_by_design.md) + +## Take an In-Depth Look + +- [TF Technology](../technology_toc.md) +- [What's New on TFGrid v3.x](./grid3_whatsnew.md) \ No newline at end of file diff --git a/collections/technology/concepts/contract_grace_period.md b/collections/technology/concepts/contract_grace_period.md new file mode 100644 index 0000000..7dc818a --- /dev/null +++ b/collections/technology/concepts/contract_grace_period.md @@ -0,0 +1,89 @@ +

+# Grace Period: Ensuring Seamless Operations

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [What is the Grace Period?](#what-is-the-grace-period) +- [How does it work?](#how-does-it-work) +- [When does the Grace Period kick in?](#when-does-the-grace-period-kick-in) +- [How to resume your workloads:](#how-to-resume-your-workloads) +- [Grace Period Contract State: Easily Accessible Information](#grace-period-contract-state-easily-accessible-information) + - [Grid Weblets:](#grid-weblets) + - [ThreeFold Grid Proxy:](#threefold-grid-proxy) +- [TFChain GraphQL:](#tfchain-graphql) + - [Node Contract](#node-contract) + - [Rent Contract](#rent-contract) +- [PolkadotJS UI:](#polkadotjs-ui) + +*** + +## Introduction + +__The Grace Period__ serves as a crucial aspect of the ThreeFold ecosystem, providing a safety net for users when their funds run low. Let's explore the key details in a user-friendly manner: + +## What is the Grace Period? + +When a contract owner exhausts their wallet funds required for their deployment, the contract enters a Grace Period. During this time, the deployment becomes temporarily inaccessible to the user. However, once the wallet is replenished with TFT (ThreeFold Tokens), the contract resumes normal operation. + +It's important to note that if the Grace Period expires (typically after 2 weeks), the user's deployment and data will be deleted from the node. + +## How does it work? + +When a ``twin`` (a user account) depletes its funds, all linked contracts enter a Grace Period during the next billing cycle. +By default, the Grace Period lasts for 14 days. Throughout this period, users cannot utilize any deployments associated with the twin. + +Additionally, users cannot delete contracts during the Grace Period, whether they are related to nodes, names, or rent. +Workloads become usable again when the twin is funded with the required amount of TFT. + +If the twin is not funded during the Grace Period, the contracts will be automatically deleted after this period. 
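The rules above can be summarized as a tiny state machine. The sketch below is illustrative only: the state names mirror those reported by tooling such as the Grid Proxy, but the `next_state` helper is our own invention, with the 14-day default taken from the text.

```python
# Illustrative model of the grace-period lifecycle (not TFChain's real
# billing code): a contract enters GracePeriod when its twin's balance
# hits zero, resumes when funded, and is deleted after 14 unfunded days.

GRACE_PERIOD_DAYS = 14

def next_state(state, balance_tft, days_in_grace=0):
    if state == "Created" and balance_tft <= 0:
        return "GracePeriod"          # deployment paused, data kept
    if state == "GracePeriod":
        if balance_tft > 0:
            return "Created"          # twin funded: workloads resume
        if days_in_grace >= GRACE_PERIOD_DAYS:
            return "Deleted"          # grace expired: workload removed
    return state

assert next_state("Created", balance_tft=0) == "GracePeriod"
assert next_state("GracePeriod", balance_tft=25) == "Created"
assert next_state("GracePeriod", balance_tft=0, days_in_grace=14) == "Deleted"
```

Note that, as the text explains, contracts cannot be deleted by the user while in `GracePeriod`; the only transitions out are funding the twin or letting the period expire.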
+ +## When does the Grace Period kick in? +The Grace Period commences when the twin balance falls below the minimum required for the respective deployments or workloads. + +## How to resume your workloads: +To regain access to workloads within the Grace Period, it is essential to fund your twin with sufficient TFT tokens. This action ensures the resumption of operations and allows you to continue your work seamlessly. + +The Grace Period feature acts as a safeguard, providing users with the opportunity to manage their funds effectively and maintain uninterrupted operations within the ThreeFold ecosystem. + +## Grace Period Contract State: Easily Accessible Information + +Checking the state of your contracts within the ``Grace Period`` is simple and convenient. Here's how you can do it: + +### Grid Weblets: +The Contracts tab on the Dashboard provides an easy way to monitor your contracts. Here, you can find comprehensive details about the desired ``contract``, including its ``State`` and ``Expiration date`` if the contract is in the Grace Period. + +![](./img/manual__grace_period_weblets.png) + +### ThreeFold Grid Proxy: +Access the Grace Period contracts through the following endpoint: + +``https://gridproxy.grid.tf/contracts?state=GracePeriod&twin_id=`` + +![](./img/manual__grace_period_gridproxy.png) + +This allows you to retrieve information about contracts that are currently in the Grace Period. + +## TFChain GraphQL: +You can also check the Contract State using [__GraphQL queries__](https://graphql.grid.tf/graphql). Depending on the contract type, utilize the appropriate queries available for ``Node Contract`` and ``Rent Contract``. + +### Node Contract +![](./img/manual__grace_period_graphql_node.png) + +### Rent Contract +These queries provide insights into the status and details of the contracts.
+ +![](./img/manual__grace_period_graphql_rent.png) + +## PolkadotJS UI: +Another option is to check the Contract state using the [__PolkadotJS UI__](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftfchain.grid.tf#/chainstate). Simply navigate to ``chainstate`` -> ``SmartContractModule`` -> ``Contracts(ID_OF_CONTRACT)`` to view the relevant contract information. + +![](./img/manual__grace_period_polkadot_ui.png) + +With these user-friendly options at your disposal, you can effortlessly track and monitor the state of your contracts within the Grace Period. + + + + + + + diff --git a/collections/technology/concepts/grid3_components.md b/collections/technology/concepts/grid3_components.md new file mode 100644 index 0000000..c18fe19 --- /dev/null +++ b/collections/technology/concepts/grid3_components.md @@ -0,0 +1,351 @@ +

+# TFGrid Component List (Last Updated May 2023)

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [TFGrid Components (Alphabetical Orders)](#tfgrid-components-alphabetical-orders) + - [TF Admin Portal](#tf-admin-portal) + - [AtomicSwap](#atomicswap) + - [Builders](#builders) + - [TF Capacity Explorer](#tf-capacity-explorer) + - [Cloud Container](#cloud-container) + - [Cloud Console](#cloud-console) + - [TF Dashboard](#tf-dashboard) + - [Farm Management](#farm-management) + - [TF Farming Calculator](#tf-farming-calculator) + - [Farmerbot](#farmerbot) + - [Freeflow Twin Main App or Freeflow Connect (previously Uhuru)](#freeflow-twin-main-app-or-freeflow-connect-previously-uhuru) + - [GetTFT Shop](#gettft-shop) + - [TF Grid3 Client TS](#tf-grid3-client-ts) + - [TF Grid Proxy](#tf-grid-proxy) + - [TF Grid-SDK-Go](#tf-grid-sdk-go) + - [TF Grid-SDK-TS](#tf-grid-sdk-ts) + - [TF Grid Simulator](#tf-grid-simulator) + - [TF Grid Stats](#tf-grid-stats) + - [JS-SDK](#js-sdk) + - [JS-NG](#js-ng) + - [Itenv\_TFGridv2](#itenv_tfgridv2) + - [Libp2p-relay](#libp2p-relay) + - [Minting v3](#minting-v3) + - [Node-Pilot](#node-pilot) + - [Oauth-Proxy](#oauth-proxy) + - [TF Planetary Network Tool](#tf-planetary-network-tool) + - [TF Dashboard and Weblets](#tf-dashboard-and-weblets) + - [QSFS](#qsfs) + - [Reliable Message Bus Relay (RMB-RS)](#reliable-message-bus-relay-rmb-rs) + - [RMB-SDK-Go](#rmb-sdk-go) + - [Terraform Provider](#terraform-provider) + - [TCP-Router](#tcp-router) + - [TFChain](#tfchain) + - [TFChain Activation Service](#tfchain-activation-service) + - [TFChain Explorer](#tfchain-explorer) + - [TFChain Block Explorer](#tfchain-block-explorer) + - [TFChain-GraphQL](#tfchain-graphql) + - [TFChain TFT Bridge](#tfchain-tft-bridge) + - [3Bot or Threebot](#3bot-or-threebot) + - [Threebot-deployer or 3Bot Deployer](#threebot-deployer-or-3bot-deployer) + - [ThreeFold Wallet](#threefold-wallet) + - [ThreeFold Connect App](#threefold-connect-app) + - [Zinit](#zinit) + - [0-OS or ZOS](#0-os-or-zos) + - [0-bootstrap](#0-bootstrap) + - [0-Bus
or ZBus](#0-bus-or-zbus) + - [0-DB](#0-db) + - [0-DB-FS](#0-db-fs) + - [0-Flist](#0-flist) + - [0-Hub](#0-hub) + - [0-InitramFS](#0-initramfs) + - [0-stor\_v2](#0-stor_v2) + +*** + +## Introduction + +This list serves as a comprehensive glossary that provides an overview of the various components and tools within the ThreeFold Grid ecosystem. It is a valuable reference for developers, stakeholders, and enthusiasts who want to gain a deeper understanding of the building blocks that power the ThreeFold Grid. + +The glossary covers a wide range of components, including infrastructure elements, software tools, protocols, and services that are integral to the functioning and expansion of the grid. From blockchain-based technologies like TFChain and TFGrid Explorer to networking components like RMB-RS and Zinit, the TFGrid Component List offers concise explanations of each component's purpose and functionality. + +*** + +## TFGrid Components (Alphabetical Orders) + +### TF Admin Portal +A tool within the TF Dashboard provided by ThreeFold for administrators to manage and monitor various aspects of the ThreeFold Grid ecosystem. It serves as a central hub where administrators can access and control different components of the grid, including nodes, capacity, workloads, and user management. + +The TF Admin Portal provides a comprehensive set of tools and features to configure, deploy, and monitor resources within the grid, ensuring efficient management and utilization of the decentralized infrastructure. Through the portal, administrators can track the performance and health of the grid, allocate resources, manage user permissions, and gain insights into the grid's utilization and usage patterns.
+ +> [Component Repository on Github (Archived)](https://github.com/threefoldtech/tfgrid_dashboard) + +### AtomicSwap +A component within the ThreeFold ecosystem that refers to Atomic Swaps, a cryptographic technology that enables the peer-to-peer exchange of cryptocurrencies or digital assets between different blockchain networks without the need for intermediaries. Atomic swaps use smart contracts to facilitate trustless and secure transactions, ensuring that both parties involved in the swap fulfill their obligations. By leveraging atomic swaps, users can seamlessly exchange digital assets across different blockchains, fostering interoperability and eliminating the reliance on centralized exchanges. + +> [Component Repository on Github](https://github.com/threefoldtech/atomicswap) + +### Builders +A Docker-based component within the ThreeFold Grid ecosystem. This particular aspect of Builders involves leveraging Docker containers to package and deploy applications and services on the ThreeFold Grid. Docker is an open-source platform that enables developers to build, package, and distribute applications as lightweight, portable containers. + +By using Builders as a Docker-based component, developers can easily containerize their applications, ensuring consistency and compatibility across different environments. This approach simplifies the deployment process, allowing developers to quickly deploy their applications on the ThreeFold Grid with minimal configuration and setup. The Builders component takes care of managing the underlying infrastructure and orchestrating the deployment of Docker containers, making it an efficient and convenient way to leverage the capabilities of the ThreeFold Grid for hosting and running applications. 
+ +> [Component Repository on Github](https://github.com/threefoldtech/builders) + +### TF Capacity Explorer +A tool within the TF Dashboard provided by ThreeFold that allows users to explore and analyze the available capacity within the ThreeFold Grid. It provides insights into the distributed computing resources, including storage, processing power, and network bandwidth, that are available for utilization within the ThreeFold network. + +The TF Capacity Explorer enables users to discover and assess the capacity of different nodes and data centers within the ThreeFold Grid, helping them make informed decisions when deploying their workloads or applications. + +> [Component Repository on Github (Archived)](https://github.com/threefoldtech/tfgrid_dashboard) + +### Cloud Container +A containerization technology provided by ThreeFold that enables the deployment and management of applications and services in a cloud environment. It offers a lightweight and isolated execution environment for running applications, ensuring scalability, portability, and efficient resource utilization. + +With ThreeFold's Cloud Container, developers and organizations can package their applications along with their dependencies and configurations, making it easier to deploy and manage them in a cloud-native manner. The Cloud Container technology provides features such as automated scaling, load balancing, and resource allocation, allowing for efficient utilization of computing resources and optimal performance of applications. + +> [Component Repository on Github](https://github.com/threefoldtech/cloud-container) + +### Cloud Console +A web-based graphical user interface (GUI) provided by ThreeFold that allows users to manage and control their cloud infrastructure and resources. It serves as a central hub for managing various aspects of the cloud environment, including virtual machines, storage, networking, and other services. 
+ +Through the cloud console, users can perform a wide range of tasks, such as provisioning and configuring virtual machines, managing storage volumes, creating and managing networks, monitoring resource usage, and accessing logs and metrics. It provides an intuitive and user-friendly interface that simplifies the management and administration of the cloud infrastructure. + +> [Component Repository on Github](https://github.com/threefoldtech/cloud-console) + +### TF Dashboard +A graphical user interface (GUI) provided by ThreeFold for users to access and manage their ThreeFold Grid resources. It serves as a centralized control panel where users can monitor and control various aspects of their infrastructure, including their deployed workloads, storage capacity, network connectivity, and overall system health. The TF Dashboard provides real-time statistics, logs, and metrics to help users gain insights into the performance and utilization of their resources. It also offers tools for managing user accounts, configuring security settings, and accessing support and documentation. + +> [Component Repository on Github (Archived)](https://github.com/threefoldtech/tfgrid_dashboard) + +### Farm Management +A set of tools, processes, and functionalities provided by ThreeFold to manage and operate farms within the ThreeFold Grid. Farms are the physical locations where ThreeFold Farmers deploy and maintain the infrastructure that powers the decentralized network. TF Farm Management offers a comprehensive suite of features that enable farmers to efficiently manage their resources, monitor the health and performance of their infrastructure, and handle various administrative tasks. This includes functionalities such as capacity allocation, monitoring and reporting tools, farmer reputation management, billing and invoicing systems, and overall farm administration. + +Note: This is a feature that involves multiple component repositories.
It is listed here to give a complete picture of ThreeFold's component list. + +### TF Farming Calculator +A tool provided by ThreeFold that allows users to estimate and calculate potential earnings from farming on the ThreeFold Grid. Farming refers to the process of providing computing resources, such as storage and processing power, to the ThreeFold Grid and earning tokens in return. The tf-farming-calculator takes into account various factors, including the amount of resources contributed, the duration of farming, and the current market conditions, to provide users with an estimate of their potential earnings in terms of ThreeFold Tokens (TFT). + +> [Component Repository on Github](https://github.com/threefoldtech/tf-farming-calculator) + +### Farmerbot +A software tool developed by ThreeFold that serves as a management and monitoring system for ThreeFold farmers. It is designed to automate various tasks related to operating and managing the ThreeFold Grid infrastructure. The TF Farmerbot helps farmers to efficiently manage their resources, including storage capacity, compute power, and network bandwidth. It provides real-time monitoring of the farmer's nodes, ensuring optimal performance and availability. + +> [Component Repository on Github](https://github.com/threefoldtech/farmerbot) + +### Freeflow Twin Main App or Freeflow Connect (previously Uhuru) +FFTwin is a component of the ThreeFold ecosystem that serves as the main interface for users to access and utilize the features of Freeflow Twin. Freeflow Twin is a decentralized communication and collaboration platform developed by ThreeFold. + +The Twin Main App allows users to securely communicate, share files, and collaborate with others in a decentralized manner, ensuring privacy and data sovereignty. Users can create chat channels, join communities, and engage in real-time messaging with end-to-end encryption. The app also supports file sharing, voice and video calls, and other collaborative features. 
With the Freeflow Twin Main App, users can experience a decentralized and secure communication platform that empowers them to connect and collaborate with others while maintaining control over their data. + +> [Component Repository on Github](https://github.com/threefoldtech/freeflow_twin_main_app) + +### GetTFT Shop +An official, online platform provided by ThreeFold where users can purchase ThreeFold Tokens (TFT) directly. It serves as a dedicated marketplace for individuals and organizations to buy TFT tokens using various payment methods. The GetTFT Shop ensures a seamless and user-friendly experience for acquiring TFT, which is the native cryptocurrency of the ThreeFold ecosystem. + +Note: This repository is private. You can visit the GetTFT Shop [here](https://gettft.com/gettft/shop/). + +### TF Grid3 Client TS +A software component that serves as a client library for interacting with the Grid3 platform. It provides developers with a set of tools, functions, and interfaces to communicate with the ThreeFold Grid and utilize its resources. The Grid3 Client TS allows users to perform various operations, such as creating and managing virtual machines, deploying applications, accessing storage services, and interacting with the decentralized network. It acts as a bridge between developers and the ThreeFold Grid, enabling them to leverage the platform's decentralized infrastructure and harness its capabilities programmatically. + +### TF Grid Proxy +A fundamental component which serves as a gateway that allows external applications and users to interact with the grid. Acting as a bridge between the decentralized infrastructure of the ThreeFold Grid and external networks, GridProxy facilitates seamless communication and integration. It provides a standardized interface for accessing and managing resources within the grid, enabling developers, businesses, and users to leverage the power and scalability of the ThreeFold Grid in their applications and workflows. 
By abstracting the complexities of the grid infrastructure, GridProxy simplifies the process of interacting with the grid, making it more accessible and user-friendly. + +> [Component Repository on Github (Archived)](https://github.com/threefoldtech/tfgridclient_proxy) + +### TF Grid-SDK-Go +ThreeFold Grid Software Development Kit (SDK) for the Go programming language. It is a collection of tools, libraries, and APIs provided by ThreeFold to facilitate the development and integration of applications with the ThreeFold Grid. The TFGrid-SDK-Go allows developers to interact with the ThreeFold Grid infrastructure, such as provisioning and managing compute resources, accessing storage, and interacting with blockchain-based services. It provides a standardized and efficient way to leverage the features and capabilities of the ThreeFold Grid within Go applications. + +> [Component Repository on Github](https://github.com/threefoldtech/tfgrid-sdk-go) + +### TF Grid-SDK-TS +ThreeFold Grid Software Development Kit (SDK) for TypeScript. It is a set of tools, libraries, and APIs provided by ThreeFold to simplify the development and integration of applications with the ThreeFold Grid. The TFGrid-SDK-TS enables developers to interact with the ThreeFold Grid infrastructure, such as provisioning and managing compute resources, accessing storage, and interacting with the blockchain-based services. It provides a standardized and convenient way to leverage the features and capabilities of the ThreeFold Grid within TypeScript applications. + +> [Component Repository on Github](https://github.com/threefoldtech/tfgrid-sdk-ts) + +### TF Grid Simulator +A component or tool within the ThreeFold ecosystem that allows for the simulation of the ThreeFold Grid infrastructure. It provides a simulated environment where users can test and evaluate the behavior and performance of the grid without the need for actual hardware or network resources. 
The tfgrid_simulator mimics the functionalities of the real ThreeFold Grid, enabling users to experiment with various configurations, scenarios, and workloads. This simulation tool is valuable for developers, administrators, and users who want to understand and optimize the behavior of the ThreeFold Grid, test applications, and evaluate the impact of different factors on grid performance. It helps in fine-tuning the grid setup and ensuring optimal resource allocation and utilization. + +> [Component Repository on Github](https://github.com/threefoldtech/tfgrid_simulator) + +### TF Grid Stats +A component or tool within the ThreeFold ecosystem that is designed to gather and provide statistics and metrics related to the ThreeFold Grid. It collects data on various aspects of the grid, such as the number of active nodes, their capacities, network performance, usage patterns, and other relevant information. tfgrid_stats allows users and administrators to monitor the health and performance of the grid, track its growth and utilization, and make informed decisions based on the collected data. + +> [Component Repository on Github (Archived)](https://github.com/threefoldtech/tfgrid_stats) + +### JS-SDK +A software development kit (SDK) provided by ThreeFold that enables developers to interact with and utilize the ThreeFold Grid infrastructure using Python (the "JS" in its name stands for JumpScale). It provides a set of libraries, tools, and APIs that simplify the integration and interaction with various ThreeFold services and functionalities. + +With the JS-SDK, developers can programmatically manage and deploy resources, interact with the ThreeFold Grid's decentralized storage, perform transactions on the ThreeFold Chain blockchain, and access other platform features. The JS-SDK empowers developers to build decentralized applications (dApps), create custom automation scripts, and leverage the capabilities of the ThreeFold Grid from Python code.
+ +> [Component Repository on Github](https://github.com/threefoldtech/js-sdk) + +### JS-NG +JumpScale Next-Generation (js-ng) framework, a modern Python framework for building automation tooling and backend services. It provides developers with a set of tools, libraries, and utilities to streamline the development process and create high-performance, scalable, and maintainable applications. The js-ng framework incorporates modern Python features and best practices, allowing developers to write clean and efficient code. It offers a modular architecture, allowing for easy integration of third-party libraries and extensions. + +With js-ng, developers can manage configuration, handle data management, perform client-server communication, and implement various functionalities required for robust applications. The framework promotes code reusability, testability, and code organization, and serves as the foundation on which the JS-SDK is built. + +> [Component Repository on Github](https://github.com/threefoldtech/js-ng) + +### Itenv_TFGridv2 +The development and testing environment for the TFGrid v2, which is the second version of the ThreeFold Grid. It is a comprehensive set of tools, configurations, and resources that enable developers to create, test, and deploy applications on the ThreeFold Grid infrastructure. The itenv_tfgridv2 environment provides developers with the necessary tools and utilities to set up a local development environment that closely resembles the production environment of the ThreeFold Grid. It includes various components such as virtual machines, containers, networking configurations, and monitoring tools, all specifically tailored for the development and testing of applications on the ThreeFold Grid. + +Note: This repository is private. + +### Libp2p-relay +A component within the ThreeFold ecosystem that refers to the libp2p relay functionality.
libp2p is a modular networking stack that allows peer-to-peer communication and data transfer between nodes in a decentralized network. The libp2p-relay component specifically focuses on providing relay services, which enable nodes that are behind firewalls or NATs (Network Address Translators) to establish direct connections with other nodes in the network. This relaying functionality helps overcome network obstacles and facilitates seamless communication between nodes, ensuring that the ThreeFold Grid operates efficiently and nodes can interact with each other effectively. + +> [Component Repository on Github](https://github.com/threefoldtech/libp2p-relay) + +### Minting v3 +The third version of the ThreeFold Token (TFT) minting process. It is a protocol implemented by ThreeFold to create new TFT tokens and manage the token supply. TF Minting v3 incorporates various features and improvements over its previous versions to enhance the functionality and security of token creation. It involves the issuance of new TFT tokens according to predefined rules and algorithms, such as token distribution, inflation rates, and token unlocking schedules. TF Minting v3 ensures a fair and transparent distribution of tokens while maintaining the integrity and stability of the ThreeFold ecosystem. + +> [Component Repository on Github](https://github.com/threefoldtech/minting_v3) + +### Node-Pilot +A software package provided by ThreeFold for running and managing individual nodes on the ThreeFold Grid. It is designed to enable users to set up and operate their own decentralized infrastructure nodes. TFNode-Pilot provides the necessary tools and functionality to deploy, configure, and monitor nodes, allowing users to contribute their computing resources to the ThreeFold Grid and participate in the decentralized ecosystem. 
+ +With TFNode-Pilot, users can easily transform their hardware into powerful nodes that contribute to the storage, compute, and networking capabilities of the ThreeFold Grid. The software package includes features such as node management, resource monitoring, security measures, and integration with other components of the ThreeFold ecosystem. + +> [Component Repository on Github](https://github.com/threefoldtech/node-pilot-light) + +### Oauth-Proxy +A component specifically developed by ThreeFold to enhance security and facilitate the authentication process for accessing ThreeFold services and resources. It acts as a middleware between users, applications, and the ThreeFold infrastructure, implementing the OAuth protocol. By using the oauth-proxy, applications can securely obtain authorization to access protected resources on the ThreeFold network without directly handling user credentials. The oauth-proxy handles the authentication flow, obtaining consent from users, and issuing access tokens to authorized applications. This helps ensure that access to ThreeFold's services and resources is controlled and secure, protecting user data and privacy. + +> [Component Repository on Github](https://github.com/threefoldtech/oauth-proxy) + +### TF Planetary Network Tool +A software application or platform that provides users with the necessary tools and functionalities to interact with and utilize the ThreeFold Planetary Network. The ThreeFold Planetary Network is a decentralized and distributed infrastructure network that spans across the globe. It is built on the principles of autonomy, neutrality, and sustainability. The network consists of a vast number of interconnected computing resources, including servers, storage devices, and networking equipment, which are owned and operated by individuals and organizations called farmers. 
+ +> [Component Repository on Github](https://github.com/threefoldtech/planetary_network) + +### TF Dashboard and Weblets +TF Dashboard and TF Weblets are two interconnected components of the ThreeFold ecosystem. TF Dashboard is a user-friendly web-based interface that serves as a sandbox environment for developers, allowing them to experiment, test, and deploy their applications on the ThreeFold Grid. It provides an intuitive interface where users can write, compile, and execute code, explore various programming languages and frameworks, and interact with the ThreeFold infrastructure. + +TF Weblets, on the other hand, are modular, lightweight applications that run on the ThreeFold Grid. They are designed to be decentralized, secure, and easily deployable, enabling users to create and deploy their own web-based services and applications on the ThreeFold network. + +> [Component Repository on Github (Archived)](https://github.com/threefoldtech/grid_weblets) + +### QSFS +It is ThreeFold's innovative storage solution designed to address the security challenges posed by quantum computing. QSFS employs advanced cryptographic techniques that are resistant to attacks from quantum computers, ensuring the confidentiality and integrity of stored data. By utilizing quantum-resistant algorithms, QSFS offers long-term data protection, even in the face of quantum threats. This technology is crucial in a future where quantum computers could potentially break traditional encryption methods. With ThreeFold's QSFS, users can have peace of mind knowing that their data is safeguarded against emerging quantum computing risks, reinforcing the security and resilience of the ThreeFold ecosystem. + +> [Component Repository on Github](https://github.com/threefoldtech/quantum-storage) + +### Reliable Message Bus Relay (RMB-RS) +A component or system that facilitates the reliable and secure transfer of messages between different entities or systems within the ThreeFold ecosystem. 
It acts as a relay or intermediary, ensuring that messages are delivered accurately and efficiently, even in the presence of network disruptions or failures. The RMB-RS employs robust protocols and mechanisms to guarantee message reliability, integrity, and confidentiality. It plays a crucial role in enabling seamless communication and data exchange between various components, applications, or nodes within the ThreeFold network, enhancing the overall reliability and performance of the system. + +> [Component Repository on Github](https://github.com/threefoldtech/rmb-rs) + +### RMB-SDK-Go +Software development kit (SDK) for interacting with the Reliable Message Bus (RMB) in the Go programming language. The Reliable Message Bus is a messaging system used within the ThreeFold ecosystem to enable reliable and secure communication between different components and services. The rmb-sdk-go provides a set of tools, libraries, and APIs that developers can use to integrate their Go applications with the RMB infrastructure. It simplifies the process of sending and receiving messages, managing subscriptions, and handling the reliability and security aspects of messaging within the ThreeFold environment. + +> [Component Repository on Github](https://github.com/threefoldtech/rmb-sdk-go) + +### Terraform Provider +A software tool that integrates with the popular infrastructure-as-code platform, Terraform. It enables users to provision and manage resources on the ThreeFold Grid using Terraform's declarative configuration language. The provider acts as a bridge between Terraform and the ThreeFold Grid, allowing users to define and deploy infrastructure components such as virtual machines, storage, and networking resources with ease. This integration simplifies the process of building and managing infrastructure on the ThreeFold Grid, offering users the familiar and powerful capabilities of Terraform while leveraging the decentralized and scalable nature of the ThreeFold technology. 
+ +> [Component Repository on Github](https://github.com/threefoldtech/terraform-provider-grid) + +### TCP-Router +A component of the ThreeFold technology stack that acts as a TCP (Transmission Control Protocol) router and load balancer. It serves as a network gateway for incoming TCP connections, routing them to the appropriate destinations based on predefined rules and configurations. The TCP-Router component is responsible for distributing incoming network traffic across multiple backend services or nodes, ensuring efficient load balancing and high availability. It helps optimize network performance by evenly distributing the workload and preventing any single node from being overwhelmed. By managing and balancing TCP connections, tcprouter contributes to the overall scalability, reliability, and performance of applications running on the ThreeFold Grid. + +> [Component Repository on Github](https://github.com/threefoldtech/tcprouter) + +### TFChain +A blockchain developed by the ThreeFold Foundation. It serves as the underlying technology for managing the ThreeFold Grid. TFChain is built on Parity Substrate. It is responsible for storing information related to the ThreeFold Grid, including identity information of entities, 3Node and farmer details, reputation information, digital twin registry, and more. TFChain also acts as the backend for the TFChain database and supports smart contracts for provisioning workloads on top of the ThreeFold Grid. + +> [Component Repository on Github](https://github.com/threefoldtech/tfchain) + +### TFChain Activation Service +A component within the ThreeFold ecosystem that facilitates the activation of TFChain accounts. TFChain is a blockchain developed by ThreeFold that serves as the backbone of the ThreeFold Grid. The Activation Service provides the necessary infrastructure and processes to activate and onboard users onto the TFChain network. 
It ensures that users can securely create and manage their TFChain accounts, including generating cryptographic keys, validating user identities, and enabling the activation of TFChain functionalities. + +> [Component Repository on Github](https://github.com/threefoldtech/tfchain_activation_service) + +### TFChain Explorer +A web-based tool that allows users to explore and interact with the TFChain blockchain. It provides a graphical interface where users can view transaction history, account balances, smart contracts, and other blockchain-related information. The TFChain Explorer offers transparency and visibility into the TFChain ecosystem, enabling users to track transactions, verify balances, and monitor the overall health of the network. + +> [Component Repository on Github](https://github.com/threefoldtech/tfchain_explorer) + +### TFChain Block Explorer +A web-based tool provided by ThreeFold that allows users to explore and interact with the TFChain blockchain. It provides a user-friendly interface to browse through blocks, transactions, and addresses on the TFChain network. Users can view detailed information about individual blocks and transactions, including timestamps, transaction amounts, and involved addresses. The block explorer also enables searching for specific transactions or addresses, making it easier to track and verify transactions on the TFChain blockchain. With the TFChain Block Explorer, users can gain transparency and visibility into the TFChain network, facilitating better understanding and analysis of blockchain activities. + +> [Component Repository on Github](https://github.com/threefoldtech/tfchain_block_explorer) + +### TFChain-GraphQL +The integration of GraphQL, a query language for APIs, with TFChain, the blockchain technology used by ThreeFold. It enables developers and users to interact with the TFChain blockchain using GraphQL queries and mutations. 
GraphQL provides a flexible and efficient way to retrieve and manipulate data from the TFChain blockchain, allowing for customized and precise data retrieval. With TFChain-GraphQL, users can easily query blockchain information, such as transaction details, account balances, or smart contract data, and perform mutations, such as submitting transactions or updating contract states. + +> [Component Repository on Github](https://github.com/threefoldtech/tfchain_graphql) + +### TFChain TFT Bridge +The bridge mechanism that enables the conversion of TFT tokens between different blockchain networks, specifically between the ThreeFold Chain (TFChain) and other blockchain networks such as Ethereum or Stellar. The TFChain TFT Bridge allows TFT tokens to be transferred seamlessly and securely across different blockchain platforms, maintaining their value and integrity. This bridge plays a crucial role in interoperability, enabling users to leverage TFT tokens on multiple blockchain networks, unlocking new possibilities for decentralized applications and token ecosystems. + +> [Component Repository on Github](https://github.com/threefoldtech/tfchain_tft_bridge) + +### 3Bot or Threebot +3Bot is a component of the ThreeFold ecosystem that refers to a personal digital assistant. It is a software entity that acts as a virtual representation of an individual or organization, providing various services and performing tasks on their behalf. The 3Bot is designed to be decentralized and secure, running on the ThreeFold Grid infrastructure. It can handle functions such as managing personal data, interacting with other digital entities, executing transactions, and offering a range of services through its customizable capabilities. The 3Bot component enables individuals and organizations to have their own private and secure digital assistant, tailored to their specific needs and preferences. + +Note: This is a feature that involves multiple component repositories. 
It is listed here to give a complete picture of ThreeFold's component list. + +### Threebot-deployer or 3Bot Deployer +A tool provided by ThreeFold that facilitates the deployment of ThreeFold's ThreeBot applications. A ThreeBot is a personal digital assistant that can perform various tasks and provide services to users. The threebot-deployer simplifies the process of setting up and configuring a ThreeBot instance by automating many of the steps involved. It allows users to specify the desired configuration and parameters for their ThreeBot, such as the domain name, authentication settings, and available services. The threebot-deployer then handles the deployment process, ensuring that the ThreeBot is properly installed and configured according to the specified parameters. This tool streamlines the deployment process and enables users to quickly and easily set up their own personalized ThreeBot instances. + +> [Component Repository on Github](https://github.com/threefoldtech/threebot-deployer) + +### ThreeFold Wallet +A digital wallet designed to securely store and manage ThreeFold Tokens (TFT) and other digital assets inside the ThreeFold Connect App (TFConnect App). It provides users with a convenient and user-friendly interface to interact with their tokens, perform transactions, and track their token balances. The ThreeFold Wallet offers features such as wallet creation, private key management, token transfers, and transaction history. It ensures the security of users' assets through encryption and various authentication methods. The wallet serves as a gateway for users to access and engage with the ThreeFold ecosystem, enabling them to participate in token transactions, staking, and other activities. + +> [Component Repository on Github](https://github.com/threefoldtech/threefold_connect_wallet) + +### ThreeFold Connect App +Mobile application developed by ThreeFold that serves as a gateway to the ThreeFold Grid. 
It provides users with a secure and user-friendly wallet interface to access and manage their digital assets, such as ThreeFold Tokens (TFT), and interact with various services and applications within the ThreeFold ecosystem. The ThreeFold Connect App also provides an authenticator feature that ensures secure access and authentication to various services within the ThreeFold ecosystem. As an authenticator, it verifies the identity of users and provides them with secure access to their accounts and associated resources. + +> [Component Repository on Github](https://github.com/threefoldtech/threefold_connect) + +### Zinit +A lightweight, fast, and versatile process manager (init system), written in Rust, that runs as PID 1 and supervises the services that make up a ThreeFold node. Services are declared in simple YAML configuration files, and Zinit takes care of starting them in the correct order, tracking the dependencies between them, and restarting them automatically if they fail. This keeps a node's essential daemons running reliably with minimal overhead. + +> [Component Repository on Github](https://github.com/threefoldtech/zinit) + +### 0-OS or ZOS +ZOS (Zero Operating System) is a lightweight and secure operating system designed specifically for running workloads on the ThreeFold Grid. ZOS provides a minimalistic and containerized environment for applications, enabling efficient resource allocation and management. With ZOS, developers can deploy their applications easily and take advantage of the scalability and resilience offered by the ThreeFold Grid. + +> [Component Repository on Github](https://github.com/threefoldtech/zos) + +### 0-bootstrap +Also known as Zero-Bootstrap, is a component of the ThreeFold Grid infrastructure.
It serves as the initial bootstrap mechanism for setting up and initializing the ThreeFold Grid network. 0-bootstrap provides the necessary tools and processes to deploy the core components of the ThreeFold Grid, including the Zero-OS operating system and other essential services. It helps in establishing the foundational layer of the grid network, enabling the deployment and management of compute resources, storage, and other decentralized services. + +> [Component Repository on Github](https://github.com/threefoldtech/0-bootstrap) + +### 0-Bus or ZBus +A component that facilitates interprocess communication (IPC) within the ThreeFold technology stack. It provides a lightweight and efficient messaging system that allows different software components or services to communicate with each other in a distributed environment. zbus implements a message bus architecture, where components can publish messages to topics and subscribe to receive messages from those topics. It enables decoupled and asynchronous communication between various parts of the system, promoting modularity and scalability. zbus plays a crucial role in enabling communication and coordination between different components of the ThreeFold infrastructure, such as the ThreeBot, ThreeFold Chain, and storage services, allowing them to work together seamlessly to deliver the desired functionality. + +> [Component Repository on Github](https://github.com/threefoldtech/zbus) + +### 0-DB +A distributed key-value database system. It is designed to provide efficient and secure storage for data in a decentralized environment. +In 0-db, data is stored as key-value pairs, allowing for fast and efficient retrieval of information. It provides high-performance read and write operations, making it suitable for applications that require quick access to data. The distributed nature of 0-db ensures that data is replicated and stored across multiple nodes, enhancing data availability and durability.
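The key-value model can be illustrated with a short sketch. This is a conceptual toy only, not the real 0-db (which is a persistent, disk-backed store that speaks a Redis-compatible protocol); the namespace idea and the `NSNEW`-style command name are recalled from the 0-db README and should be treated as illustrative:

```python
# Toy in-memory model of 0-db's namespaced key-value interface.
# The real 0-db persists data on disk and is accessed over a
# Redis-compatible protocol; this sketch only mirrors the data model.

class ToyZdb:
    def __init__(self):
        # 0-db always has a default namespace available.
        self.namespaces = {"default": {}}

    def nsnew(self, name):
        # Create an isolated namespace (loosely like 0-db's NSNEW).
        self.namespaces.setdefault(name, {})

    def set(self, key, value, ns="default"):
        self.namespaces[ns][key] = value

    def get(self, key, ns="default"):
        return self.namespaces[ns].get(key)

db = ToyZdb()
db.nsnew("backups")
db.set(b"config", b'{"retention": 7}', ns="backups")
print(db.get(b"config", ns="backups"))  # prints b'{"retention": 7}'
```

Namespaces give each application or tenant its own isolated keyspace on the same store, which is how higher-level components such as 0-db-fs and 0-stor carve up a single 0-db instance.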
+ +> [Component Repository on Github](https://github.com/threefoldtech/0-db) + +### 0-DB-FS +A storage system that allows for efficient and secure storage of files on the ThreeFold Grid. 0-db-fs is built on top of 0-db, which is a key-value store optimized for high performance and scalability. It provides a decentralized and distributed approach to file storage, ensuring data redundancy and availability. With 0-db-fs, users can securely store and retrieve files, benefiting from the decentralized nature of the ThreeFold Grid, which enhances data privacy, security, and resilience. + +> [Component Repository on Github](https://github.com/threefoldtech/0-db-fs) + +### 0-Flist +Also known as Zero-Flist, is a file system image format used in the ThreeFold Grid infrastructure. It represents a compressed and immutable snapshot of a specific file system configuration or application stack. 0-Flist files are used to package and distribute software, data, and configurations within the ThreeFold Grid. They contain all the necessary files and dependencies required to run an application or service. 0-Flist files are lightweight, portable, and easy to distribute, making them ideal for deploying applications across the decentralized network. + +> [Component Repository on Github](https://github.com/threefoldtech/0-flist) + +### 0-Hub +Also known as Zero-Hub, is a key component of the ThreeFold Grid infrastructure. It serves as the central hub or entry point for users and applications to connect with the decentralized network. 0-hub provides a user-friendly interface and API endpoints that allow users to interact with the ThreeFold Grid and access its resources. It acts as a bridge between the users and the underlying infrastructure, enabling them to deploy and manage their workloads, access decentralized storage, and utilize other services provided by the ThreeFold Grid. 
0-hub also plays a crucial role in facilitating peer-to-peer communication and collaboration within the network, connecting users and allowing them to share and exchange resources securely. + +> [Component Repository on Github](https://github.com/threefoldtech/0-hub) + +### 0-InitramFS +Initial RAM file system used in the ThreeFold ecosystem. An initramfs is a temporary file system that is loaded into memory during the boot process before the root file system is mounted. It contains essential files and utilities needed to initialize the system and prepare it for the boot process. In the context of ThreeFold, the 0-initramfs is a customized initial RAM file system specifically designed for the ThreeFold Grid infrastructure. It includes necessary components and configurations to ensure a smooth and efficient boot process for ThreeFold nodes. By utilizing the 0-initramfs, the ThreeFold ecosystem can optimize the boot sequence and ensure the proper initialization of the system components before transitioning to the main operating system. + +> [Component Repository on Github](https://github.com/threefoldtech/0-initramfs) + +### 0-stor_v2 +A component of the ThreeFold technology stack that refers to the second version of the 0-stor storage system. 0-stor_v2 is a distributed and decentralized storage solution that enables data storage and retrieval on the ThreeFold Grid. It utilizes erasure coding and sharding techniques to distribute data across multiple storage nodes, ensuring high availability and data redundancy. The 0-stor_v2 component provides an efficient and secure way to store data on the ThreeFold Grid, with features such as data encryption, replication, and integrity checks. It is designed to be scalable and fault-tolerant, allowing for the seamless expansion of storage capacity as needed. 
Developers and users can leverage 0-stor_v2 to store and manage their data in a decentralized and resilient manner, ensuring data privacy and accessibility on the ThreeFold Grid. + +> [Component Repository on Github](https://github.com/threefoldtech/0-stor_v2) + + diff --git a/collections/technology/concepts/grid3_iac.md b/collections/technology/concepts/grid3_iac.md new file mode 100644 index 0000000..321d059 --- /dev/null +++ b/collections/technology/concepts/grid3_iac.md @@ -0,0 +1,47 @@ +

# Infrastructure as Code (IaC)

+ +

## Table of Contents

+ +- [What is IaC?](#what-is-iac) +- [Benefits of IaC](#benefits-of-iac) +- [ThreeFold's IaC](#threefolds-iac) +- [How it Works](#how-it-works) +- [Dive Deeper](#dive-deeper) +- [Manuals](#manuals) + +*** + +## What is IaC? +Infrastructure as Code (IaC) is a concept that revolutionizes the way infrastructure is provisioned and managed in the world of IT. It involves the use of declarative scripts or configuration files to define and automate the deployment, configuration, and management of infrastructure resources. With IaC, organizations can treat infrastructure provisioning as code, applying software development principles and practices to infrastructure management. This approach brings numerous benefits, such as increased efficiency, consistency, scalability, and reproducibility. + +## Benefits of IaC +Using IaC in the ThreeFold ecosystem brings several benefits. + +- __Streamlined operations and increased efficiency__: One of the key benefits of Infrastructure as Code (IaC) is the significant improvement in speed and consistency. By eliminating manual processes and automating infrastructure provisioning through code, tasks can be completed faster and with greater accuracy. There is no longer a need to wait for IT administrators to manually perform tasks or worry about their availability. This allows for quicker iterations and faster deployments, enabling organizations to be more agile in their development processes. Consistency is also enhanced as infrastructure configurations are defined in code, ensuring that the same setup is replicated across environments and reducing the risk of configuration drift. + +- __Empowered software development lifecycle__: IaC places more control in the hands of developers, enabling them to focus on application development rather than spending time on infrastructure management. With reliable and consistent infrastructure provisioning, developers can leverage reusable code scripts to deploy and manage resources efficiently. 
This streamlines the software development lifecycle, enabling faster development cycles and reducing the time and effort spent on manual infrastructure tasks. Developers can quickly spin up development and testing environments, experiment with different configurations, and roll out changes with ease. + +- __Reduced management overhead__: IaC eliminates the need for multiple roles dedicated to managing different layers of hardware and middleware in traditional data center environments. With IaC, the management overhead is significantly reduced as infrastructure is defined and managed through code. This frees up administrators to focus on exploring and implementing new technologies and innovations, rather than being tied down by routine maintenance tasks. It simplifies the operational structure and allows for a more efficient allocation of resources, ultimately leading to cost savings and increased productivity. + +Overall, IaC brings faster speed, improved consistency, efficient software development, and reduced management overhead. It empowers organizations to accelerate their deployment processes, enhance collaboration between development and operations teams, and optimize resource utilization. By adopting IaC practices, organizations can achieve greater agility, scalability, and cost efficiency in their infrastructure management, enabling them to stay competitive in today's fast-paced digital landscape. + +## ThreeFold's IaC +At ThreeFold, IaC plays a crucial role in the deployment and management of the ThreeFold Grid infrastructure. ThreeFold leverages popular IaC tools and methodologies to enable users to define and manage their infrastructure resources in a programmatic and scalable way. The IaC mechanism in ThreeFold involves components like __Terraform__, __TypeScript Client__, __GraphQL Client__, and __Grid Proxy REST API__. 
+ +__Terraform__ acts as the foundation for infrastructure provisioning, allowing users to define their desired infrastructure state using declarative configuration files. __The TypeScript and GraphQL clients__ provide interfaces for interacting with the ThreeFold Grid and managing resources programmatically, while the __Grid Proxy REST API__ enables integration with external systems and applications. + +## How it Works +Firstly, __Terraform__ acts as the primary infrastructure provisioning tool. It provides a declarative language for defining infrastructure resources and their configurations, enabling users to express their infrastructure requirements in a human-readable and version-controlled manner. + +__TypeScript Client and GraphQL Client__ serve as interfaces for interacting with the ThreeFold Grid, allowing users to create, update, and manage their infrastructure resources programmatically. These clients offer rich functionality and flexibility for managing various aspects of the ThreeFold Grid, including node deployment, capacity allocation, networking, and more. + +__Grid Proxy REST API__ further enhances the extensibility of the IaC mechanism by enabling integration with external systems and applications, allowing for seamless automation and orchestration of infrastructure tasks. Together, these components form a robust and efficient IaC framework within the ThreeFold ecosystem, empowering users to manage their infrastructure as code with ease and precision. 
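The Grid Proxy REST API can be exercised with any HTTP client. Below is a minimal Python sketch that builds a node query and filters a sample response; the endpoint path, query parameters, and response field names are assumptions based on the public Grid Proxy instance and may differ from the deployed API.

```python
import json
from urllib.parse import urlencode

# Base URL of a Grid Proxy instance. The public mainnet proxy is reachable
# at a similar address, but verify before depending on it.
GRIDPROXY_URL = "https://gridproxy.grid.tf"

def nodes_query(status="up", free_mru=None, free_sru=None):
    """Build a /nodes query URL. Parameter names are assumptions based on
    the public Grid Proxy and may differ between versions."""
    params = {"status": status}
    if free_mru is not None:
        params["free_mru"] = free_mru  # minimum free memory, in bytes (assumed unit)
    if free_sru is not None:
        params["free_sru"] = free_sru  # minimum free SSD storage, in bytes (assumed unit)
    return f"{GRIDPROXY_URL}/nodes?{urlencode(params)}"

def pick_node(nodes, min_cru):
    """Return the id of the first node with enough CPU cores, else None."""
    for node in nodes:
        if node["total_resources"]["cru"] >= min_cru:
            return node["nodeId"]
    return None

# Abridged sample of a /nodes response; field names are illustrative.
sample = json.loads('[{"nodeId": 42, "total_resources": {"cru": 4}},'
                    ' {"nodeId": 77, "total_resources": {"cru": 16}}]')

print(nodes_query(free_mru=2 * 1024**3))
print(pick_node(sample, min_cru=8))
```

The same filtering is what the TypeScript and Go clients do for you when they select candidate nodes for a deployment.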
+ +## Dive Deeper +- [How the Grid Works](../grid3_howitworks.md) +- [ThreeFold's Component List](./grid3_components.md) +- [Grace Periods](./contract_grace_period.md) + +## Manuals +- [Terraform](../../../documentation/system_administrators/terraform/terraform_toc.md) +- [Typescript Client](../../../documentation/developers/javascript/grid3_javascript_readme.md) \ No newline at end of file diff --git a/collections/technology/concepts/grid3_whatsnew.md b/collections/technology/concepts/grid3_whatsnew.md new file mode 100644 index 0000000..d97fca3 --- /dev/null +++ b/collections/technology/concepts/grid3_whatsnew.md @@ -0,0 +1,61 @@ +

What's New on ThreeFold Grid v3.x

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [TFChain v3.x](#tfchain-v3x) + - [Key Features of TFChain v3.x:](#key-features-of-tfchain-v3x) +- [Proof of Utilization: Enhancing Your Cloud Experience](#proof-of-utilization-enhancing-your-cloud-experience) + - [Key Features:](#key-features) + - [New TFGrid Explorer UI](#new-tfgrid-explorer-ui) + +*** + +## Introduction + +The ThreeFold Grid v3.x is packed with exciting new features and enhancements. This marks a significant milestone in the evolution of our decentralized grid infrastructure, bringing even more power, flexibility, and innovation to our users. In this introduction, we will highlight some of the key new features that make ThreeFold Grid v3.x a game-changer in the world of decentralized technologies. + +> Click [here](../concepts/grid3_components.md) to see the complete TFGrid Component List + +## TFChain v3.x + +[__TFChain v3.x__](../concepts/tfchain.md) is a decentralized chain that holds all the information about the entities comprising the ThreeFold Grid. It operates on the Parity Substrate blockchain infrastructure. + +### Key Features of TFChain v3.x: + +- Your identity and proofs/reputation are stored on our blockchain. +- All information about TFGrid, including nodes and farmers, is made available. +- A GraphQL interface that allows easy querying of the blockchain. +- Side chain support, enabling unlimited scalability and allowing others to run their own blockchains. +- TFT now exists on TFChain, addressing scalability issues with Stellar. +- A bridge facilitates the transfer of TFT between Stellar and TFChain. +- Blockchain-based provisioning process. +- A TFChain API that is available in JavaScript, Golang, and Vlang. +- 'Infrastructure as Code' (IaC) framework support for: + - Terraform + - Kubernetes, Helm + - Ansible (planned) +- Support for app deployment using CapRover +- The use of RMB (Reliable Message Bus), which ensures secure peer-to-peer communication with Zero-OS. 
+ +Please note that the above list summarizes the key features introduced in TFChain v3.x. + +## Proof of Utilization: Enhancing Your Cloud Experience + +Experience the benefits of Proof of Utilization, a user-friendly feature that optimizes your cloud usage and rewards your pre-purchases. Here are the key features: + +### Key Features: + +- __Hourly Resource Utilization__: Your resource utilization is accurately captured and calculated on an hourly basis, ensuring transparency and precision in tracking your cloud usage. + +- __Secure Storage on TFChain__: Your resource utilization data is securely stored in TFChain, our dedicated blockchain, providing a reliable and tamper-proof record of your usage history. + +- __Automated Discount System__: Our innovative automated discount system acknowledges your pre-purchased cloud needs. Based on the amount of TFT (ThreeFold Tokens) you hold in your account and the duration you maintain these tokens, you can enjoy significant price discounts. + +- __Personalized Discounts__: The discount you receive is customized to your TFT holdings and usage patterns. For instance, if you hold TFT tokens equivalent to 12 months' worth of usage, you receive a generous 40% discount. Holding TFT tokens equivalent to 36 months' worth of usage unlocks an impressive 60% discount on your cloud services. + + +### New TFGrid Explorer UI + +- The TFGrid Explorer v3.x has an updated user interface that is cleaner and easier to use. +- It utilizes the GraphQL layer of TFChain. diff --git a/collections/technology/concepts/grid_primitives.md b/collections/technology/concepts/grid_primitives.md new file mode 100644 index 0000000..24812df --- /dev/null +++ b/collections/technology/concepts/grid_primitives.md @@ -0,0 +1,60 @@ +

ThreeFold Grid Primitives: Empowering Your Solutions

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Compute](#compute) +- [Storage](#storage) +- [Network](#network) +- [Zero-OS Advantages](#zero-os-advantages) +- [Conclusion](#conclusion) + +*** + +## Introduction + +Within the ThreeFold Grid, we offer a range of __low-level constructs known as Primitives__. These powerful functionalities enable you to create diverse and customized solutions atop the grid, opening up a world of possibilities. It's important to note that any application compatible with Linux can seamlessly run on the ThreeFold Grid, ensuring maximum flexibility. + +## Compute + +Harness the power of computation with our compute primitives, measured in [compute units](../../cloud/cloudunits.md) (CU). + +- [ZKube](../../technology/primitives/compute/zkube.md): Deploy and manage Kubernetes clusters effortlessly. +- [ZMachine](../../technology/primitives/compute/zmachine.md): Run your applications within containers or virtual machines powered by the Zero-OS operating system. +- [CoreX](../../technology/primitives/compute/corex.md) (optional): Gain remote access to your ZMachine by utilizing the CoreX process manager. + +## Storage + +Leverage our robust storage Primitives, measured in [storage units](../../cloud/cloudunits.md) (SU), to meet your data storage requirements efficiently. + +- [ZOS Filesystem](../../technology/primitives/storage/zos_fs.md): Enjoy a deduplicated and immutable filesystem for secure and reliable data storage. +- [ZOS Mount](../../technology/primitives/storage/zmount.md): Utilize a portion of a high-speed SSD (Solid State Drive) as a mounted disk directly accessible within your ZMachine. +- [Quantum Safe Filesystem](../../technology/primitives/storage/qsfs.md): Secure your data with an unbreakable storage system, ideal for secondary storage needs. +- [Zero-DB](../../technology/primitives/storage/zdb.md): Experience a powerful key-value storage mechanism that serves as the foundation for other storage mechanisms. 
+- [Zero-Disk](../../technology/primitives/storage/zdisk.md) (OEM only): Employ a virtual disk format designed exclusively for original equipment manufacturers. + +## Network + +Harness our network Primitives, measured in [network units](../../cloud/cloudunits.md) (NU), to enable seamless communication and connectivity. + +- [ZNET](../../technology/primitives/network/znet.md): Establish private networks between ZMachines, ensuring secure and efficient communication. +- [ZNIC](../../technology/primitives/network/znic.md): Access and manage network interfaces within the Planetary Network, enabling efficient data transfer and communication. +- [WebGateway](../../technology/primitives/network/webgw3.md): Connect your ZNET to the internet with ease, facilitating seamless integration with the wider network. + +## Zero-OS Advantages + +Enjoy the [numerous advantages](../../technology/zos/benefits/zos_advantages.md) that Zero-OS brings to the table. + +- [Zero Install](../../technology/zos/benefits/zos_advantages.md#zero-os-installation): Experience hassle-free deployment without the need for complex installations. +- [Unbreakable Storage](../../technology/zos/benefits/zos_advantages.md#unbreakable-storage): Ensure the integrity and security of your data with our robust storage mechanisms. +- [Zero Hacking Surface](../../technology/zos/benefits/zos_advantages.md#zero-hacking-surface): Benefit from a minimized attack surface, bolstering the security of your infrastructure. +- [Zero Boot](../../technology/zos/benefits/zos_advantages.md#zero-boot): Enjoy lightning-fast boot times, allowing for swift and efficient system initialization. +- [Deterministic Deployment](../../technology/zos/benefits/zos_advantages.md#deterministic-deployment): Achieve consistent and predictable deployments, streamlining your development process. 
+- [ZOS Protect](../../technology/zos/benefits/zos_advantages.md#zero-os-protect): Experience enhanced protection and security measures to safeguard your infrastructure. + +## Conclusion + +With these powerful Primitives and Zero-OS advantages, the ThreeFold Grid empowers you to build, scale, and secure your solutions with ease. Unleash your creativity and unlock limitless possibilities within the ThreeFold ecosystem. + + + diff --git a/collections/technology/concepts/img/layers.jpeg b/collections/technology/concepts/img/layers.jpeg new file mode 100644 index 0000000..fa1d017 Binary files /dev/null and b/collections/technology/concepts/img/layers.jpeg differ diff --git a/collections/technology/concepts/img/manual__grace_period_graphql_node.png b/collections/technology/concepts/img/manual__grace_period_graphql_node.png new file mode 100644 index 0000000..e25e4ec Binary files /dev/null and b/collections/technology/concepts/img/manual__grace_period_graphql_node.png differ diff --git a/collections/technology/concepts/img/manual__grace_period_graphql_rent.png b/collections/technology/concepts/img/manual__grace_period_graphql_rent.png new file mode 100644 index 0000000..cc0db2c Binary files /dev/null and b/collections/technology/concepts/img/manual__grace_period_graphql_rent.png differ diff --git a/collections/technology/concepts/img/manual__grace_period_gridproxy.png b/collections/technology/concepts/img/manual__grace_period_gridproxy.png new file mode 100644 index 0000000..d21ce2d Binary files /dev/null and b/collections/technology/concepts/img/manual__grace_period_gridproxy.png differ diff --git a/collections/technology/concepts/img/manual__grace_period_polkadot_ui.png b/collections/technology/concepts/img/manual__grace_period_polkadot_ui.png new file mode 100644 index 0000000..ccd17c8 Binary files /dev/null and b/collections/technology/concepts/img/manual__grace_period_polkadot_ui.png differ diff --git 
a/collections/technology/concepts/img/manual__grace_period_weblets.png b/collections/technology/concepts/img/manual__grace_period_weblets.png new file mode 100644 index 0000000..939f40c Binary files /dev/null and b/collections/technology/concepts/img/manual__grace_period_weblets.png differ diff --git a/collections/technology/concepts/img/payment.png b/collections/technology/concepts/img/payment.png new file mode 100644 index 0000000..68b2e1d Binary files /dev/null and b/collections/technology/concepts/img/payment.png differ diff --git a/collections/technology/concepts/img/tft.png b/collections/technology/concepts/img/tft.png new file mode 100644 index 0000000..03d884e Binary files /dev/null and b/collections/technology/concepts/img/tft.png differ diff --git a/collections/technology/concepts/proof_of_utilization.md b/collections/technology/concepts/proof_of_utilization.md new file mode 100644 index 0000000..c86b4c3 --- /dev/null +++ b/collections/technology/concepts/proof_of_utilization.md @@ -0,0 +1,29 @@ +

Proof of Utilization

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Overview](#overview) +- [CU, SU, NU](#cu-su-nu) + +*** + +## Introduction + +The ThreeFold Grid employs a unique mechanism called __Proof of Utilization__ to track and measure resource utilization within its decentralized network. We provide here an overview of this mechanism. + +## Overview + +Proof of Utilization is a system that records resource usage on an hourly basis and serves as a transparent and reliable way to validate and verify the utilization of various components of the grid. + +The Proof of Utilization concept encompasses the monitoring and tracking of three key types of resources within the ThreeFold Grid: __Compute Resources (CU)__, __Storage Resources (SU)__, and __Network Resources (NU)__. These resources are essential for supporting the diverse needs of users and applications on the grid. + +## CU, SU, NU + +__Compute resources (CU)__ refer to the computational power and processing capabilities provided by the ThreeFold Grid. This includes the ability to run applications, execute tasks, and perform complex computations in a distributed and decentralized manner. + +__Storage resources (SU)__ encompass the capacity to store and manage data within the ThreeFold Grid. They enable users to securely store and retrieve their data, ensuring reliable and scalable data management solutions. + +__Network resources (NU)__ focus on the network connectivity and bandwidth available within the ThreeFold Grid. This includes the transmission of data, communication between nodes, and facilitating the seamless flow of information across the decentralized network. + +In addition to the resources mentioned above, the Proof of Utilization system also tracks network utilization parameters such as __IPv4 addresses__, __DNS services__, and __name-on-web gateways__. These elements play a crucial role in enabling effective communication and accessibility within the ThreeFold Grid. 
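As a rough illustration of this hourly accounting, the sketch below tallies an hourly cost from CU, SU, and NU consumption and applies the tiered discounts quoted in this documentation (a balance covering 12 months of usage yields 40%, 36 months yields 60%). The unit prices and function names are invented for the example; they are not the actual TFChain pricing.

```python
# Conceptual sketch only -- not the actual TFChain billing implementation.
# Unit prices are invented; the discount tiers follow the figures quoted
# in this documentation (12 months of balance -> 40%, 36 months -> 60%).

HOURS_PER_MONTH = 24 * 30

def hourly_cost(cu, su, nu, price_cu=1.0, price_su=0.5, price_nu=0.25):
    """Cost of one hour of utilization in TFT (illustrative prices)."""
    return cu * price_cu + su * price_su + nu * price_nu

def discount(balance_tft, hourly_tft):
    """Tiered discount based on how many months of usage the balance covers."""
    if hourly_tft <= 0:
        return 0.0
    months_covered = balance_tft / (hourly_tft * HOURS_PER_MONTH)
    if months_covered >= 36:
        return 0.60
    if months_covered >= 12:
        return 0.40
    return 0.0

hour = hourly_cost(cu=2, su=4, nu=1)  # 2*1.0 + 4*0.5 + 1*0.25 = 4.25 TFT
rate = discount(balance_tft=hour * HOURS_PER_MONTH * 12, hourly_tft=hour)
print(hour, rate)  # a 12-month balance lands in the 40% tier
```

The real system evaluates utilization per hour and records the result on TFChain; this sketch only captures the tiering logic.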
\ No newline at end of file diff --git a/collections/technology/concepts/tfchain.md b/collections/technology/concepts/tfchain.md new file mode 100644 index 0000000..ee95510 --- /dev/null +++ b/collections/technology/concepts/tfchain.md @@ -0,0 +1,37 @@ +

ThreeFold Chain

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Overview](#overview) +- [Key Functionalities](#key-functionalities) + +*** + +## Introduction + +__TFChain__, also known as __ThreeFold Chain__, is the blockchain at the core of managing the ThreeFold Grid, which operates as a decentralized autonomous organization (DAO). Built on Substrate, TFChain provides the infrastructure to support the seamless functioning of the ThreeFold Grid ecosystem. + +## Overview + +One of the key features of TFChain is its __compatibility with multiple blockchains__. The native token of the ThreeFold ecosystem, TFT, can be utilized across different blockchain networks including TFChain, Stellar, and Binance Smart Chain. This compatibility enables users to transfer their TFT tokens seamlessly between these blockchains, providing flexibility and convenience. + +To leverage the Internet Capacity available on the ThreeFold Grid, users are required to transfer funds to their TFChain account. This ensures that users have the necessary financial resources to access and utilize the storage, compute, and network services offered by the ThreeFold Grid. By transferring money to their TFChain account, users can seamlessly tap into the vast potential of the decentralized Internet Capacity provided by the ThreeFold Grid. + +TFChain serves as the backbone of the ThreeFold ecosystem, facilitating efficient transactions, secure transfers of value, and the management of user accounts. It plays a vital role in supporting the seamless interaction between users and the ThreeFold Grid, enabling them to leverage the available Internet Capacity and contribute to the growth of the decentralized network. + +## Key Functionalities + +__TFChain (Threefold Chain)__ is a powerful blockchain that orchestrates the interactions within the ThreeFold Grid ecosystem, providing users with a range of key functions. 
Let's explore some of these user-friendly functions: + +- __User Registration__: TFChain allows seamless registration for users to join the ThreeFold Grid. By creating an account, users can easily become part of the decentralized network and access the various resources and services available. + +- __Farm Management__: TFChain simplifies the process of managing node farms within the ThreeFold Grid. It provides a streamlined registration system, enabling farmers to register their nodes and effectively contribute to the grid. Additionally, TFChain facilitates IP management, allowing farmers to efficiently manage and allocate IP addresses to their nodes. + +- __Fund Transfers__: TFChain supports secure and efficient fund transfers within the ThreeFold ecosystem. Users can seamlessly transfer funds, including the native TFT token, between accounts on TFChain. This feature enables easy financial transactions and fosters a thriving economy within the ThreeFold Grid. + +- __Billing and Consumption Reports__: TFChain offers detailed billing and consumption reports, providing users with insights into their resource usage and associated costs. Users can easily track their consumption, monitor usage patterns, and access comprehensive reports, ensuring transparency and accountability in resource management. + +__And More__: TFChain is continuously evolving and expanding its functionality. In addition to the key functions mentioned above, TFChain provides a robust foundation for other essential features within the ThreeFold Grid. This includes __facilitating secure transactions, maintaining a transparent ledger, enabling governance mechanisms,__ and supporting various interactions and operations within the decentralized ecosystem. + +TFChain's user-friendly functions empower users to participate actively in the ThreeFold Grid, seamlessly manage their resources, and engage in secure and efficient transactions. 
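The fund transfer and billing functions above can be modelled with a toy ledger. The sketch below is purely conceptual: the class, method names, and flat hourly price are invented for illustration and do not reflect the actual Substrate pallets.

```python
# Toy model of TFChain account bookkeeping -- purely illustrative and not
# the actual Substrate pallet API. It mirrors two of the functions described
# above: fund transfers between accounts and billing for consumption reports.

class Ledger:
    def __init__(self):
        self.balances = {}  # account name -> TFT balance

    def register(self, account):
        """User registration: create an account with a zero balance."""
        self.balances.setdefault(account, 0.0)

    def transfer(self, src, dst, amount):
        """Fund transfer between two TFChain accounts."""
        if self.balances.get(src, 0.0) < amount:
            raise ValueError("insufficient funds")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0.0) + amount

    def bill(self, account, tft_per_hour, hours):
        """Deduct a consumption report covering `hours` of utilization."""
        self.transfer(account, "treasury", tft_per_hour * hours)

ledger = Ledger()
ledger.register("alice")
ledger.register("treasury")
ledger.balances["alice"] = 100.0  # funded, e.g., via the Stellar<->TFChain bridge
ledger.bill("alice", tft_per_hour=2.5, hours=4)
print(ledger.balances["alice"])  # 90.0 TFT remaining after the report
```

On the real chain, the "treasury" side of a billing deduction is governed by the tokenomics rules rather than a single account; the sketch only shows the accounting flow.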
\ No newline at end of file diff --git a/collections/technology/concepts/tfgrid_by_design.md b/collections/technology/concepts/tfgrid_by_design.md new file mode 100644 index 0000000..02672f7 --- /dev/null +++ b/collections/technology/concepts/tfgrid_by_design.md @@ -0,0 +1,152 @@ +

TFGrid by Design: Deployment Architectures and Solution Categories

+ +

Table of Contents

+ +- [Introduction](#introduction) + - [TFGrid by Design](#tfgrid-by-design) + - [Capacity and Connectivity](#capacity-and-connectivity) +- [TFGrid Main Components Overview and Examples](#tfgrid-main-components-overview-and-examples) + - [Storage Units](#storage-units) + - [0-DB-FS](#0-db-fs) + - [0-stor\_v2](#0-stor_v2) + - [QSFS](#qsfs) + - [Compute Units](#compute-units) + - [Virtual CPUs (vCPUs)](#virtual-cpus-vcpus) + - [Kubernetes](#kubernetes) + - [TF Grid-SDK-Go and TF Grid-SDK-TS](#tf-grid-sdk-go-and-tf-grid-sdk-ts) + - [Network Units](#network-units) + - [Reliable Message Bus Relay (RMB-RS)](#reliable-message-bus-relay-rmb-rs) + - [TCP-Router](#tcp-router) +- [Solution Categories](#solution-categories) + - [DIY Workloads](#diy-workloads) + - [Independent Commercial Offerings](#independent-commercial-offerings) + - [ThreeFold Commercial Offerings](#threefold-commercial-offerings) +- [Best Practices](#best-practices) +- [Questions and Feedback](#questions-and-feedback) + +*** + +# Introduction + +Before starting a project on the TFGrid, it can be a good idea to consider the overall design of the grid itself, and to ponder the potential solution designs you can come up with to ensure reliable and resilient deployments. This text will explore some of the main components of the TFGrid as well as its inherent design in order to provide sufficient information for system administrators to deploy effective and reliable solutions. We will also cover the three main solution categories that can be built on top of the TFGrid. + +## TFGrid by Design + +At its core, the TFGrid is composed of thousands of 3Nodes. 3Nodes provide storage, compute and network units to the TFGrid. By design, 3Nodes are not reliable in themselves, in the sense that a 3Node online today could be offline tomorrow if hardware or connection failures arise. This reality is inherent to any cloud enterprise. But this does not mean that reliability is not possible on the TFGrid. 
On the contrary, the TFGrid is composed of different components that can be utilized to provide reliability in all aspects of the grid: storage, compute and network. It is the role of system administrators to develop solutions that will be reliable in themselves. + +A myriad of possibilities and configurations are possible within the TFGrid ecosystem and, by understanding the interconnectedness between the grid components, one can knowingly build a solid deployment that will respond to the needs of a given project. + +## Capacity and Connectivity + +When it comes to deployments, we must consider two major aspects of the Internet infrastructure: capacity and connectivity. While capacity can be thought of as the individual 3Nodes composing the TFGrid, where information is processed and stored within the 3Nodes, connectivity can be thought of as the links and information transfers between the 3Nodes and the public Internet. + +As a general consideration, the TFGrid works mostly on the capacity side, whereas a 3Node will always be connected to the Internet by way of different Internet Service Providers (ISPs) depending on the farmer's location and resources. The 3Nodes provide storage and compute units where users can store information on SSD and HDD disks and where they can generate compute processes with CPUs. Another major component of the TFGrid is network units. While, as said before, the TFGrid does not directly provide connectivity in the way traditional ISP services do, elements such as gateways and Wireguard VPNs are more closely related to network units than to compute or storage units. + +To build a reliable deployment on the TFGrid, you need to take into consideration the three different types of units on the TFGrid: storage, compute and network. Let's delve into these a little bit more. + +# TFGrid Main Components Overview and Examples + +We provide here an overview of some of the main components of the TFGrid. 
We also provide examples for each of those components in order for the reader to obtain a clear understanding of the TF Ecosystem. By understanding the different components of the TFGrid, system administrators will be able to deploy resilient, redundant and reliable solutions on the grid. + +For a complete list of the TFGrid components, read [this documentation](./grid3_components.md). + +## Storage Units + +Storage units are related to the data stored in SSD and HDD disks. The Quantum Safe Filesystem (QSFS) technology developed by ThreeFold ensures redundancy and resilience in storage units. If one disk of the QSFS array goes offline, the rest of the system can still function properly. By contrast, if a user stores information on one single 3Node and this 3Node suffers a drastic disk failure, the user will lose the data. Another way to achieve redundancy in the storage category would be to deploy a solution with real-time synced databases on two or more 3Nodes connected via a Wireguard VPN. Note that other configurations offering reliability and redundancy are possible. + +Let's explore some storage components of the ThreeFold Grid. + +### 0-DB-FS + +[0-DB-FS](./grid3_components.md#0-db-fs) is a storage system that allows for efficient and secure storage of files on the ThreeFold Grid. 0-DB-FS is built on top of 0-DB, which is a key-value store optimized for high performance and scalability. It provides a decentralized and distributed approach to file storage, ensuring data redundancy and availability. + +### 0-stor_v2 + +[0-stor_v2](./grid3_components.md#0-stor_v2) is a distributed and decentralized storage solution that enables data storage and retrieval on the ThreeFold Grid. + +### QSFS + +[Quantum Safe Filesystem (QSFS)](./grid3_components.md#qsfs) is ThreeFold's innovative storage solution designed to address the security challenges posed by quantum computing. 
QSFS employs advanced cryptographic techniques that are resistant to attacks from quantum computers, ensuring the confidentiality and integrity of stored data. By its design, QSFS also offers a high level of redundancy. + +## Compute Units + +Compute units are related to the CPUs doing calculations during the deployment. If a user deploys on a 3Node and the node's CPUs experience failure, the user will lose compute power. As a main example, a way to achieve redundancy in the compute category would be to deploy a solution via Kubernetes. In this case, the CPU workload is balanced between the different 3Nodes of the Kubernetes cluster and if one 3Node fails, the deployment can still function properly. + +Let's explore some compute components of the ThreeFold Grid. + +### Virtual CPUs (vCPUs) + +Virtual CPUs (vCPUs) are virtual representations of physical CPUs that allow multiple virtual machines (VMs) to run concurrently on a single physical server or host. Virtualization platforms allocate vCPUs to each VM, enabling them to execute tasks and run applications as if they were running on dedicated physical hardware. The number of vCPUs assigned to a VM determines its processing power and capacity to handle workloads. On the TFGrid, the number of vCPUs is limited to the physical number of CPUs on the host (i.e. the 3Node). Since this limit applies per VM, a node with 8 cores can still host 2 VMs, each with 8 vCPUs. + +### Kubernetes + +[Kubernetes](../../../documentation/dashboard/solutions/k8s.md) is an open-source container orchestration system for automating software deployment, scaling, and management. On the TFGrid, Kubernetes clusters can be deployed out of the box. Thus, system administrators can seamlessly deploy solutions on the TFGrid that are reliable in terms of compute units. 
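The per-VM vCPU limit described above can be sketched as a simple placement check; the function name is illustrative and not part of any ThreeFold API.

```python
# Sketch of the per-VM vCPU rule: each VM may request at most the host's
# physical core count, and because the cap applies per VM, the total across
# VMs may oversubscribe the host.

def can_place_vm(host_cores, requested_vcpus):
    """A single VM may not request more vCPUs than the host has cores."""
    return 0 < requested_vcpus <= host_cores

host_cores = 8
vms = [8, 8]  # two VMs, each requesting 8 vCPUs

print(all(can_place_vm(host_cores, v) for v in vms))  # both VMs are allowed
print(sum(vms) > host_cores)  # together they oversubscribe the 8 physical cores
print(can_place_vm(host_cores, 16))  # a single 16-vCPU VM is rejected
```

This is why sizing a deployment requires looking at the host's physical cores, not just the free capacity reported for it.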
+ +### TF Grid-SDK-Go and TF Grid-SDK-TS + +The [TFGrid-SDK-Go](./grid3_components.md#tf-grid-sdk-go) and [TFGrid-SDK-TS](./grid3_components.md#tf-grid-sdk-ts) enable developers to interact with the ThreeFold Grid infrastructure, such as provisioning and managing compute resources, accessing storage, and interacting with the blockchain-based services. They provide a standardized and convenient way to leverage the features and capabilities of the ThreeFold Grid within Go and TypeScript applications. + +## Network Units + +Network units are related to the data transmitted over the Internet. While TFGrid does not provide direct ISP services, elements such as gateways are clearly related to the network. [Gateways](../../../documentation/system_administrators/terraform/resources/terraform_vm_gateway.md) can be used to balance network workloads. A deployment could consist of two different gateways with a master node gateway and a worker node gateway. If the master gateway fails, the worker gateway takes over and becomes the master gateway. Deploying solutions with several gateways can help system administrators build reliable solutions. + +Note that it is also possible to deploy a Wireguard virtual private network (VPN) between different 3Nodes and synchronize their databases. This provides resilience and redundancy. Read more on VPN and synced databases [here](../../../documentation/system_administrators/terraform/advanced/terraform_mariadb_synced_databases.md). + +Let's explore some network components of the ThreeFold Grid. + +### Reliable Message Bus Relay (RMB-RS) + +[Reliable Message Bus Relay (RMB-RS)](./grid3_components.md#reliable-message-bus-relay-rmb-rs) is a component or system that facilitates the reliable and secure transfer of messages between different entities or systems within the ThreeFold ecosystem. 
It acts as a relay or intermediary, ensuring that messages are delivered accurately and efficiently, even in the presence of network disruptions or failures. The RMB-RS employs robust protocols and mechanisms to guarantee message reliability, integrity, and confidentiality. + +### TCP-Router + +[TCP-Router](./grid3_components.md#tcp-router) is a component of the ThreeFold technology stack that acts as a TCP (Transmission Control Protocol) router and load balancer. It serves as a network gateway for incoming TCP connections, routing them to the appropriate destinations based on predefined rules and configurations. The TCP-Router component is responsible for distributing incoming network traffic across multiple backend services or nodes, ensuring efficient load balancing and high availability. + +# Solution Categories + +There are three main solution categories on the TFGrid: DIY workloads, independent commercial offerings, and ThreeFold commercial offerings. Let's take a look at them and discuss their basic properties. + +## DIY Workloads + +Out-of-the-box applications are available on the [TF Dashboard](../../../documentation/dashboard/deploy/applications.md) and [Terraform](../../../documentation/system_administrators/terraform/terraform_toc.md), where anyone can [buy TFTs](../../../documentation/threefold_token/buy_sell_tft/buy_sell_tft.md) and deploy on the decentralized and open-source grid. The reliability of those deployments depends on the capacity and resources of each DIY system administrator. + +In essence, when you deploy on the decentralized and open-source TFGrid, you act as a centralized entity building the solution architecture. You must design the solution so that it can be reliable, with high-availability and resilience levels that suit the needs of your project. + +Note that when you deploy on the ThreeFold Grid, you are doing so in accordance with the [ThreeFold Terms and Conditions](../../legal/terms_conditions_all3.md). 
+ +## Independent Commercial Offerings + +Since the TFGrid is open-source, anyone could decide to build a commercial offering on top of the grid. In this case, it would be recommended that the commercial offering provide Terms and Conditions, clear support, a website to advertise the product, and a marketing strategy to attract customers. + +Here, the commercial offering is the centralized entity, and if the company makes a mistake, it is liable to the users to the extent discussed in the T&C. + +The ThreeFold Manual already contains a wealth of useful information on how to [deploy applications](../../../documentation/dashboard/deploy/applications.md) on the TFGrid. We invite everyone to develop independent commercial offerings on top of the ThreeFold Grid. + +## ThreeFold Commercial Offerings + +ThreeFold is building commercial offerings on top of the TFGrid. Those commercial offerings are for-profit organizations. Each of those organizations would function as a centralized entity. + +ThreeFold Ventures will be the branch exploring this aspect of the TF Ecosystem. A major project on the way is [ThreeFold Cloud](https://cloud.threefold.io/). ThreeFold Cloud is thus a centralized entity that will generate its own Terms and Conditions, support, marketing and website strategy. Furthermore, ThreeFold Cloud will be liable to its users to the extent developed in the ThreeFold Cloud Terms and Conditions. + +# Best Practices + +This text provided an introduction to some deployment architectures and solution categories possible on the TFGrid. In the future, we will expand on some of the main TFGrid best practices. Stay tuned for more on this topic. + +Some of the best practices to be covered are the following: + +* Use Kubernetes to deploy redundant workloads +* Use multi-gateway deployments for redundancy + * Manually deploy two VMs + * Use two web gateways to access the VMs + * Choose a data replication strategy to have content in both places (e.g.
syncing databases) +* Use continuous deployment and integration workflows + * Deploy on two different VMs + * Ensure continuous deployment and integration when changes occur +* Use DNS with redundancy +* Use QSFS for storage resilience and redundancy + +These are only a few of the many possibilities that the TFGrid offers. We invite everyone to explore the TFGrid and share their experience and learnings. + +# Questions and Feedback + +If you have any questions or feedback, we invite you to discuss with the ThreeFold community on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) chat on Telegram. diff --git a/collections/technology/concepts/zos.md b/collections/technology/concepts/zos.md new file mode 100644 index 0000000..493b8a2 --- /dev/null +++ b/collections/technology/concepts/zos.md @@ -0,0 +1,29 @@ +

<h1> Zero-OS </h1>

+ +

<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Overview of Zero-OS](#overview-of-zero-os) +- [Zero-OS Advantages](#zero-os-advantages) + +*** + +## Introduction + +Z-OS (Zero Operating System) is a lightweight and secure operating system designed specifically for running workloads on the ThreeFold Grid. Z-OS provides a minimalistic and containerized environment for applications, enabling efficient resource allocation and management. With Z-OS, developers can deploy their applications easily and take advantage of the scalability and resilience offered by the ThreeFold Grid. + +## Overview of Zero-OS + +ThreeFold built this decentralized autonomous operating system (OS) from scratch, starting with just a Linux kernel, for the purpose of dedicating hardware capacity to users of the TF Grid. + +Based on ThreeFold’s open-source technology, Zero-OS is a stateless and lightweight operating system that allows for an improved efficiency of up to 10x for certain workloads. Our OS achieves unparalleled levels of efficiency and security. With no remote shell or login and an extremely small footprint, Zero-OS ensures that hosted workloads are protected from administrative exploits and human intervention. + +All 3Nodes are booted with Zero-OS to provide the storage, compute and network primitives for our open-source peer-to-peer Internet infrastructure. Due to the unique design of Zero-OS, any server-like hardware with an AMD or Intel processor can be booted and dedicated to the network. + +Zero-OS runs autonomously on 3Nodes once booted, requiring no maintenance or administration. The process is quite simple, enabling even people without technical skills to join the TFGrid by connecting a node in their home or office with full data sovereignty and security. + +## Zero-OS Advantages + +Zero-OS provides many advantages: deterministic deployment, zero hacking surface, unbreakable storage and more.
+ +To learn more about the Zero-OS advantages, read [this section](./grid_primitives.md#zero-os-advantages). \ No newline at end of file diff --git a/collections/technology/consensus3_mechanism/consensus3.md b/collections/technology/consensus3_mechanism/consensus3.md new file mode 100644 index 0000000..9ba3105 --- /dev/null +++ b/collections/technology/consensus3_mechanism/consensus3.md @@ -0,0 +1,19 @@ +![](img/grid_header.jpg) + +# DAO Consensus Engine + +!!!include:dao_info + +## DAO Engine + +On TFGrid 3.0 ThreeFold has implemented a DAO consensus engine using Polkadot/Substrate blockchain technology. + +This is a powerful blockchain construct which allows us to run our TFGrid and maintain consensus on a global scale. + +This system has been designed to be compatible with multiple blockchains. + +!!!include:consensus3_overview_graph + +!!!include:consensus3_toc + +!!!def alias:consensus3,consensus_engine \ No newline at end of file diff --git a/collections/technology/consensus3_mechanism/consensus3_engine_farming.md b/collections/technology/consensus3_mechanism/consensus3_engine_farming.md new file mode 100644 index 0000000..1774781 --- /dev/null +++ b/collections/technology/consensus3_mechanism/consensus3_engine_farming.md @@ -0,0 +1,17 @@ +![](img/grid_header.jpg) + +### Consensus Engine in Relation to TFT Farming Rewards in TFGrid 3.0 + +!!!include:consensus3_overview_graph + +The consensus engine checks the farming rules as defined in: + +- [farming logic 3.0](farming_reward) +- [farming reward calculator](farming_calculator) + +- if uptime is at least 98% per month, then the TFT will be rewarded to the farmer (for TFGrid 3.0; this can change later).
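The uptime rule above can be sketched as a simple eligibility check. This is an illustrative sketch only: the 98% monthly threshold comes from the rule above, while the function and variable names are hypothetical.

```python
# Illustrative sketch of the farming reward rule described above.
# The 98% monthly uptime threshold comes from the text; all names are hypothetical.

UPTIME_THRESHOLD = 0.98  # minimum fraction of the month the 3Node must be up


def reward_eligible(seconds_up: int, seconds_in_month: int) -> bool:
    """Return True when a 3Node's monthly uptime qualifies it for TFT rewards."""
    return seconds_up / seconds_in_month >= UPTIME_THRESHOLD


# Example: a 30-day month has 2,592,000 seconds.
month = 30 * 24 * 3600
print(reward_eligible(month - 3600, month))       # one hour of downtime -> True
print(reward_eligible(int(month * 0.9), month))   # 90% uptime -> False
```

In practice the consensus engine, not the farmer, performs this check against the uptime reports collected on TFChain.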
+ +All the data of the farmers and their 3Nodes is registered on TFChain + + +!!!include:consensus3_toc \ No newline at end of file diff --git a/collections/technology/consensus3_mechanism/consensus3_oracles.md b/collections/technology/consensus3_mechanism/consensus3_oracles.md new file mode 100644 index 0000000..0af143f --- /dev/null +++ b/collections/technology/consensus3_mechanism/consensus3_oracles.md @@ -0,0 +1,44 @@ + +## Consensus 3.X Oracles Used + +Oracles are external sources of information. + +The TFChain captures and holds that information so that we have more certainty about its accuracy. + +We have oracles for price and reputation, e.g. for TF farmers and 3Nodes. + +These oracles are implemented on TFChain for TFGrid 3.0. + +```mermaid + + +graph TB + subgraph Digital Currency Ecosystem + money_blockchain[Money Blockchain Explorers] + Exch1[Money Blockchain Decentralized Exchange] + OracleEngine --> Exch1 + OracleEngine --> ExchP[Polkadot] + OracleEngine --> ExchM[Money Blockchain Exchange] + OracleEngine --> Exch2[Binance Exchange] + OracleEngine --> Exch3[other...
exchanges] + end + subgraph ThreeFold Grid + Monitor_Engine --> 3Node1 + Monitor_Engine --> 3Node2 + Monitor_Engine --> 3Node3 + end + subgraph TFChainNode1[TFGrid Blockchain Node] + Monitor_Engine + Explorers[TFChain Explorers]-->TFGridDB --> BCNode + Explorers --> BCNode + ConsensusEngine1-->BCNode[Blockchain Validator Node] + ConsensusEngine1 --> money_blockchain[Money Blockchain] + ConsensusEngine1 --> ReputationEngine[Reputation Engine] + ReputationEngine --> Monitor_Engine[Monitor Engine] + ConsensusEngine1 --> OracleEngine[Oracle For Pricing Digital Currencies] + end + +``` + +> TODO: outdated info + +!!!include:consensus3_toc \ No newline at end of file diff --git a/collections/technology/consensus3_mechanism/consensus3_overview_graph.md b/collections/technology/consensus3_mechanism/consensus3_overview_graph.md new file mode 100644 index 0000000..8221c40 --- /dev/null +++ b/collections/technology/consensus3_mechanism/consensus3_overview_graph.md @@ -0,0 +1,51 @@ + +```mermaid +graph TB + subgraph Money Blockchain + money_blockchain --> account1 + money_blockchain --> account2 + money_blockchain --> account3 + click money_blockchain "/threefold/#money_blockchain" + end + subgraph TFChainNode1[TFChain BCNode] + Explorer1-->BCNode1 + ConsensusEngine1-->BCNode1 + ConsensusEngine1 --> money_blockchain + ConsensusEngine1 --> ReputationEngine1 + ReputationEngine1 --> Monitor_Engine1 + click ReputationEngine1 "/info/threefold/#reputationengine" + click ConsensusEngine1 "/info/threefold/#consensusengine" + click BCNode1 "/info/threefold/#bcnode" + click Explorer1 "/info/threefold/#tfexplorer" + end + subgraph TFChainNode2[TFChain BCNode] + Explorer2-->BCNode2 + ConsensusEngine2-->BCNode2 + ConsensusEngine2 --> money_blockchain + ConsensusEngine2 --> ReputationEngine2 + ReputationEngine2 --> Monitor_Engine2 + click ReputationEngine2 "/info/threefold/#reputationengine" + click ConsensusEngine2 "/info/threefold/#consensusengine" + click BCNode2
"/info/threefold/#bcnode" + click Explorer2 "/info/threefold/#tfexplorer" + + end + Monitor_Engine1 --> 3Node1 + Monitor_Engine1 --> 3Node2 + Monitor_Engine1 --> 3Node3 + Monitor_Engine2 --> 3Node1 + Monitor_Engine2 --> 3Node2 + Monitor_Engine2 --> 3Node3 + click 3Node1 "/info/threefold/#3node" + click 3Node2 "/info/threefold/#3node" + click 3Node3 "/info/threefold/#3node" + click Monitor_Engine1 "/info/threefold/#monitorengine" + click Monitor_Engine2 "/info/threefold/#monitorengine" + + +``` + +*click on the parts of the image, they will go to more info* + +> TODO: outdated info + diff --git a/collections/technology/consensus3_mechanism/consensus3_principles.md b/collections/technology/consensus3_mechanism/consensus3_principles.md new file mode 100644 index 0000000..27ab007 --- /dev/null +++ b/collections/technology/consensus3_mechanism/consensus3_principles.md @@ -0,0 +1,45 @@ +# Consensus Mechanism + +## Blockchain node components + +!!!include:consensus3_overview_graph + +- A Blockchain node (= Substrate node) called TF-Chain, containing all entities interacting with each other on the TF-Grid +- An explorer = a Rest + GraphQL interface to TF-Chain (Graphql is a nice query language to make it easy for everyone to query for info) +- Consensus Engine + - is a Multisignature Engine running on TF-Chain + - The multisignature is done for the Money BlockchainAccounts + - It checks the AccountMetadata versus reality and if ok, will sign, which allows transactions to happen after validation of the "smart contract" +- SLA & reputation engine + - Each node uptime is being checked by Monitor_Engine + - Also bandwidth will be checked in the future (starting 3.x) + +### Remarks + +- Each Monitor_Engine checks uptime of X nr of nodes (in beginning it can do all nodes), and stores the info in local DB (to keep history of check) + +## Principle + +- We keep things as simple as we can + - Money Blockchain blockchain used to hold the money + - Money Blockchain has all required 
features to allow users to manage their money, like wallet support, decentralized exchange, good reporting, low transaction fees, ... + - The Substrate-based TFChain holds the metadata for the accounts, which expresses what we need to know per account to allow the smart contracts to execute. + - Smart Contracts are implemented using the multisignature feature on the Money Blockchain in combination with the multisignature done by the Consensus_Engine. +- on money_blockchain: + - each user has Money BlockchainAccounts (each of them holds money) + - there are normal Accounts (meaning people can freely transfer money from these accounts) as well as RestrictedAccounts. Money cannot be transferred out of RestrictedAccounts unless consensus has been achieved by the ConsensusEngine. +- Restricted_Account + - On Stellar we use the multisignature feature to make sure that locked/vesting or FarmingPool accounts cannot transfer money unless consensus is achieved by the ConsensusEngine + +- Each account on money_blockchain (Money BlockchainAccount) has an account record in TFChain, which is needed for advanced features like: + - lockup + - vesting + - minting (rewards to farmers) + - TFTA to TFT conversion + +- The Account record in TFGrid_DB is called AccountMetadata. + - The AccountMetadata describes all the info required for the consensus engine to determine what to do for advanced features like vesting, locking, ...
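The restricted-account rule described above can be sketched as follows. This is a minimal illustration of the idea only, not the actual TFChain implementation; all names are hypothetical.

```python
# Minimal sketch of the restricted-account rule described above: money can only
# leave a RestrictedAccount once the ConsensusEngine has co-signed the transfer.
# All names are hypothetical; this is NOT the actual TFChain implementation.

from dataclasses import dataclass


@dataclass
class Account:
    balance: int
    restricted: bool = False  # RestrictedAccounts need consensus to transfer


def can_transfer(account: Account, amount: int, consensus_signed: bool) -> bool:
    """A normal account transfers freely; a restricted one needs the co-signature."""
    if amount > account.balance:
        return False
    if account.restricted and not consensus_signed:
        return False
    return True


free = Account(balance=100)
vesting = Account(balance=100, restricted=True)
print(can_transfer(free, 50, consensus_signed=False))     # True
print(can_transfer(vesting, 50, consensus_signed=False))  # False
print(can_transfer(vesting, 50, consensus_signed=True))   # True
```

The real mechanism uses the Money Blockchain's multisignature feature, so the "co-signature" here stands in for a signature the ConsensusEngine adds only after checking the AccountMetadata.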
+ +> TODO: outdated info + +!!!include:consensus3_toc \ No newline at end of file diff --git a/collections/technology/consensus3_mechanism/consensus3_toc.md b/collections/technology/consensus3_mechanism/consensus3_toc.md new file mode 100644 index 0000000..ec2109c --- /dev/null +++ b/collections/technology/consensus3_mechanism/consensus3_toc.md @@ -0,0 +1,13 @@ + +## Consensus Engine Information + +- [Consensus Engine Homepage](consensus3) +- [Principles TFChain 3.0 Consensus](consensus3_principles) +- [Consensus Engine Farming 3.0](consensus3_engine_farming) +- [TFGrid 3.0 wallets](tfgrid3_wallets) +- Architecture: + - [Money Blockchains/Substrate architecture](money_blockchain_partity_link) + + +> implemented in TFGrid 3.0 + diff --git a/collections/technology/consensus3_mechanism/img/grid_header.jpg b/collections/technology/consensus3_mechanism/img/grid_header.jpg new file mode 100644 index 0000000..d2808e3 Binary files /dev/null and b/collections/technology/consensus3_mechanism/img/grid_header.jpg differ diff --git a/collections/technology/consensus3_mechanism/img/limitedsupply_.png b/collections/technology/consensus3_mechanism/img/limitedsupply_.png new file mode 100644 index 0000000..b9220ec Binary files /dev/null and b/collections/technology/consensus3_mechanism/img/limitedsupply_.png differ diff --git a/collections/technology/consensus3_mechanism/money_blockchain_partity_link.md b/collections/technology/consensus3_mechanism/money_blockchain_partity_link.md new file mode 100644 index 0000000..ae8845e --- /dev/null +++ b/collections/technology/consensus3_mechanism/money_blockchain_partity_link.md @@ -0,0 +1,53 @@ + +## Link between different Money Blockchain & TFChain + +TF-Chain is the ThreeFold blockchain infrastructure, set up in the Substrate framework. + +We are building a consensus layer which allows us to easily bridge between different money blockchains. + +Main blockchain for TFT remains the Stellar network for now. 
A secure bridging mechanism exists, able to transfer TFT between the different blockchains. +Active bridges as of the TFGrid 3.0 release: +- Stellar <> Binance Smart Chain +- Stellar <> Parity Substrate +More bridges are under development. + +```mermaid + + +graph TB + subgraph Money Blockchain + money_blockchain --- account1a + money_blockchain --- account2a + money_blockchain --- account3a + account1a --> money_user_1 + account2a --> money_user_2 + account3a --> money_user_3 + click money_blockchain "/info/threefold/#money_blockchain" + end + subgraph ThreeFold Blockchain On Parity + TFBlockchain --- account1b[account 1] + TFBlockchain --- account2b[account 2] + TFBlockchain --- account3b[account 3] + account1b --- smart_contract_data_1 + account2b --- smart_contract_data_2 + account3b --- smart_contract_data_3 + click TFBlockchain "/info/threefold/#tfchain" + end + account1b ---- account1a[account 1] + account2b ---- account2a[account 2] + account3b ---- account3a[account 3] + + consensus_engine --> smart_contract_data_1[fa:fa-ban smart contract metadata] + consensus_engine --> smart_contract_data_2[fa:fa-ban smart contract metadata ] + consensus_engine --> smart_contract_data_3[fa:fa-ban smart contract metadata] + consensus_engine --> account1a + consensus_engine --> account2a + consensus_engine --> account3a + click consensus_engine "/info/threefold/#consensus_engine" + + +``` + +The above diagram shows how our consensus engine can deal with Substrate and multiple Money Blockchains at the same time.
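The bridging idea can be sketched with the generic lock-and-mint pattern that cross-chain token bridges commonly follow. This is an illustration of the pattern only — it is not how the TFT bridges are actually implemented, and all names are hypothetical.

```python
# Generic lock-and-mint sketch of a cross-chain token bridge: tokens are
# debited (locked) on the source chain and an equivalent amount is credited
# (minted) on the target chain. This is NOT the actual TFT bridge
# implementation; all names are hypothetical.

class Chain:
    def __init__(self, name: str):
        self.name = name
        self.balances: dict[str, int] = {}

    def credit(self, account: str, amount: int) -> None:
        self.balances[account] = self.balances.get(account, 0) + amount

    def debit(self, account: str, amount: int) -> None:
        if self.balances.get(account, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[account] -= amount


def bridge_transfer(src: Chain, dst: Chain, account: str, amount: int) -> None:
    """Lock on the source chain, then issue the same amount on the target chain."""
    src.debit(account, amount)   # tokens locked on the source chain
    dst.credit(account, amount)  # equivalent tokens issued on the target chain


stellar, substrate = Chain("stellar"), Chain("substrate")
stellar.credit("alice", 100)
bridge_transfer(stellar, substrate, "alice", 40)
print(stellar.balances["alice"], substrate.balances["alice"])  # 60 40
```

The key invariant is that the total supply across both chains stays constant, which is what lets the bridged token keep a 1:1 peg.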
+ +!!!include:consensus3_toc \ No newline at end of file diff --git a/collections/technology/consensus3_mechanism/roadmap_tfchain3.md b/collections/technology/consensus3_mechanism/roadmap_tfchain3.md new file mode 100644 index 0000000..28f484f --- /dev/null +++ b/collections/technology/consensus3_mechanism/roadmap_tfchain3.md @@ -0,0 +1,52 @@ + +# Roadmap for our TFChain and ThreeFold DAO + +![](img/limitedsupply_.png) + +## TFChain / DAO 3.0.2 + +For this phase our TFChain and TFDAO have been implemented using Parity/Substrate. + +Features + +- poc +- pou +- identity management +- consensus for upgrades of DAO and TFChain (code) +- capacity tracking (how much capacity is used) +- uptime achieved +- capacity utilization +- smart contract for IT +- storage of value = TFT +- request/approval for adding a validator + +All the basic DAO concepts are in place. + +## TFChain / DAO 3.0.x + +TBD version nr, planned Q1 2022 + +NEW + +- proposals for TFChain/DAO/TFGrid changes (request for change) = we call them TFCRP (ThreeFold Change Request Proposal) +- voting on proposals = we call them TFCRV (ThreeFold Change Request Vote) + + +## TFChain / DAO 3.1.x + +TBD version nr, planned Q1 2022 + +This version adds more layers to our existing DAO and prepares for an even more scalable future. + +NEW + +- Cosmos-based chain on L2 +- Validator Nodes for TFGrid and TFChain. +- Cosmos-based HUB = security for all TFChains + +> For more info about our DAO strategy, see TFDAO. + + + +!!!def alias:tfchain_roadmap,dao_roadmap,tfdao_roadmap + diff --git a/collections/technology/consensus3_mechanism/tfgrid3_wallets.md b/collections/technology/consensus3_mechanism/tfgrid3_wallets.md new file mode 100644 index 0000000..f054c65 --- /dev/null +++ b/collections/technology/consensus3_mechanism/tfgrid3_wallets.md @@ -0,0 +1,73 @@ + +# TFGrid 3.0 Wallets + +ThreeFold has a mobile wallet which can be used with the TFChain backend (Substrate) as well as any other Money Blockchain it supports.
+ +This provides for a very secure digital currency infrastructure with lots of advantages. + +- [X] ultra flexible smart contracts possible +- [X] super safe +- [X] compatible with multiple blockchains (money blockchains) +- [X] ultra scalable + +```mermaid + + +graph TB + + subgraph Money Blockchain + money_blockchain[Money Blockchain Explorers] + money_blockchain --- money_blockchain_node_1 & money_blockchain_node_2 + money_blockchain_node_1 + money_blockchain_node_2 + end + + subgraph ThreeFold Wallets + mobile_wallet[Mobile Wallet] + desktop_wallet[Desktop Wallet] + mobile_wallet & desktop_wallet --> money_blockchain + mobile_wallet & desktop_wallet --> Explorers + money_blockchain_wallet[Any Money Blockchain Wallet] --> money_blockchain + end + + + subgraph TFChain[TFGrid Blockchain on Substrate] + Explorers[TFChain Explorers]-->TFGridDB --> BCNode + Explorers --> BCNode + end + + +``` + +Generic overview: + +```mermaid + +graph TB + + subgraph TFChain[TFGrid Chain] + guardian1[TFChain Node 1] + guardian2[TFChain Node 2] + guardian3[TFChain Node 3...9] + end + + User_wallet[User Wallet] --> money_blockchain_account + User_wallet[User Wallet] --> money_blockchain_restricted_account + + subgraph Money Blockchain Ecosystem + money_blockchain_account + money_blockchain_restricted_account --- guardian1 & guardian2 & guardian3 + end + + subgraph consensus[Consensus Layer on Substrate] + guardian1 --> ReputationEngine & PricingOracle + guardian1 --> contract1[Smart Contract Vesting] + guardian1 --> contract2[Smart Contract Minting/Farming] + end + + + + +``` + +!!!include:consensus3_toc \ No newline at end of file diff --git a/collections/technology/consensus3_mechanism/tfgrid_db_models.v b/collections/technology/consensus3_mechanism/tfgrid_db_models.v new file mode 100644 index 0000000..b34cf6d --- /dev/null +++ b/collections/technology/consensus3_mechanism/tfgrid_db_models.v @@ -0,0 +1,52 @@ + +// - vesting +// - startdate: epoch +// - currency: USD +// - 
[[$month_nr,$minprice_unlock,$TFT_to_vest],...] +// - if 48 months then list will have 48 parts +// - month 0 = first month +// - e.g. [[0,0.11,10000],[1,0.12,10000],[2,0.13,10000],[3,0.14,10000]...] + +//information stored at account level in TFGridDB +struct AccountMeta{ + //corresponds to unique address on money_blockchain + money_blockchain_address string + vesting Vesting[] + unlocked_TFT int +} + +struct Vesting{ + startdate int + //which currency is used to execute on the acceleration in the vesting + //if price above certain level (which is currency + amount of that currency) the auto unlock + currency CurrencyEnum + months []VestingMonth +} + +struct VestingMonth{ + month_nr int + //if 0 then will not unlock based on price + unlock_price f32 + tft_amount int +} + +enum CurrencyEnum{ + usd + eur + egp + gbp + aed +} + +//this is stored in the TFGridDB +fn (mut v AccountMeta) serialize() string{ + //todo code which does serialization see above + return "" +} + + +//write minting pool + + +//REMARKS +// if unlock triggered because of month or price then that record in the VestingMonth[] goes away and TFT go to unlocked_TFT \ No newline at end of file diff --git a/collections/technology/grid3_howitworks.md b/collections/technology/grid3_howitworks.md new file mode 100644 index 0000000..041e68d --- /dev/null +++ b/collections/technology/grid3_howitworks.md @@ -0,0 +1,39 @@ +

<h1> How It Works </h1>

+ +Welcome to the ThreeFold ecosystem, your gateway to a global and sustainable network! + +## TFGrid in a Nutshell + +The ThreeFold Grid is a remarkable network sustained by dedicated individuals, known as **ThreeFold farmers**, who offer network, storage and compute (CPU, GPU) resources to users via 3Nodes, specialized computers that run the innovative Zero-OS software. + +> [Become a ThreeFold farmer](../../documentation/farmers/farmers.md) + +## If it runs on Linux, it runs on the Grid! + +The ThreeFold Grid supports any application that can run on Linux, guaranteeing compatibility and flexibility. Moreover, it offers additional benefits, including enhanced privacy, security, proximity to end-users, and a significantly lower cost compared to traditional alternatives. + +> [Deploy on the TFGrid](../../documentation/system_administrators/getstarted/tfgrid3_getstarted.md) + +## Z-OS: A New Operating System + +Z-OS (Zero Operating System) is a lightweight and secure operating system designed specifically for running workloads on the ThreeFold Grid. Z-OS provides a minimalistic and containerized environment for applications, enabling efficient resource allocation and management. + +> [Learn more about Z-OS](./concepts/zos.md) + +## Internet as a Resource + +In a similar manner to purchasing electricity or other utilities, the internet capacity provided by the ThreeFold Grid is produced and allocated locally. This decentralized approach empowers digital service and application providers to host their offerings closer to end-users, resulting in exceptional performance, competitive pricing, and improved profit margins. The TFGrid is fueled by the ThreeFold Token. + +> [Learn more about TFT](../about/token_overview/token_overview.md) + +## TFChain: The Backbone Blockchain Infrastructure + +__TFChain__, also known as __ThreeFold Chain__, is the powerful blockchain that orchestrates the interactions within the ThreeFold Grid ecosystem. 
TFChain is like the control center of the ThreeFold Grid, providing users and farmers with a wide range of key functionalities. + +> [Learn more about TFChain](../technology/concepts/tfchain.md) + +## ThreeFold Dashboard + +The [**ThreeFold Dashboard**](https://dashboard.grid.tf/) serves as an indispensable tool for farmers and users of the ThreeFold Grid, facilitating node registration, resource management, workload deployments and much more. + +> [Learn more about the ThreeFold Dashboard](../../documentation/dashboard/dashboard.md) diff --git a/collections/technology/img/layer0_.jpg b/collections/technology/img/layer0_.jpg new file mode 100644 index 0000000..fa1d017 Binary files /dev/null and b/collections/technology/img/layer0_.jpg differ diff --git a/collections/technology/img/tech_architecture1.jpg b/collections/technology/img/tech_architecture1.jpg new file mode 100644 index 0000000..8595e3a Binary files /dev/null and b/collections/technology/img/tech_architecture1.jpg differ diff --git a/collections/technology/img/tech_header.jpg b/collections/technology/img/tech_header.jpg new file mode 100644 index 0000000..31cf3fb Binary files /dev/null and b/collections/technology/img/tech_header.jpg differ diff --git a/collections/technology/img/technology_home_.jpg b/collections/technology/img/technology_home_.jpg new file mode 100644 index 0000000..9d975f3 Binary files /dev/null and b/collections/technology/img/technology_home_.jpg differ diff --git a/collections/technology/layers/autonomous_layer_intro.md b/collections/technology/layers/autonomous_layer_intro.md new file mode 100644 index 0000000..4a52295 --- /dev/null +++ b/collections/technology/layers/autonomous_layer_intro.md @@ -0,0 +1,12 @@ +## Autonomous Layer + +### Digital Twin + +>TODO: + +### 3Bot + +3Bot is a virtual system administrator that manages the user's IT workloads under a private key. 
This ensures an immutable record of any workload as well as a self-healing functionality to restore these workloads if/when needed. Also, all 3Bot IDs are registered on a modern type of phone book that uses blockchain technology. This phone book, also referred to as the Threefold Grid Blockchain, allows all 3Bots to find each other, connect and exchange information or resources in a fully end-to-end encrypted way. Here as well, there are "zero people" involved, as 3Bots operate autonomously in the network, and only under the user's commands. + +3Bot is equipped with a cryptographic 2-factor authentication mechanism. You can log in to your 3Bot via the ThreeFold Connect app on your device which contains your private key. The 3Bot is a very powerful tool that allows you to automate & manage thousands of virtual workloads on the ThreeFold_Grid. + diff --git a/collections/technology/layers/capacity_layer_intro.md b/collections/technology/layers/capacity_layer_intro.md new file mode 100644 index 0000000..31f60f3 --- /dev/null +++ b/collections/technology/layers/capacity_layer_intro.md @@ -0,0 +1,54 @@ +## Capacity Layer + +### Zero-OS + +ThreeFold has built its own operating system, Zero-OS, starting from a Linux kernel, with the purpose of removing all the unnecessary complexity found in contemporary OSes. + +Zero-OS supports a small number of primitives, and performs low-level functions natively. + +It delivers 3 primitive functions: +- storage capacity +- compute capacity +- network capacity + +There is no shell, local or remote, attached to Zero-OS. It does not allow inbound network connections to the core. Also, given its shell-less nature, the people and organizations, called farmers, that run 3Nodes cannot issue any commands or access its features. In that sense, Zero-OS enables a "zero people" (autonomous) Internet, meaning hackers cannot get in, while also eliminating human error from the paradigm.
+ +### 3Node + +The ThreeFold_Grid needs hardware/servers to function. Servers of all shapes and sizes can be added to the grid by anyone, anywhere in the world. The production of Internet Capacity on the Threefold Grid is called Farming and people who add these servers to the grid are called Farmers. This is a fully decentralized process and they get rewarded by means of TFT. + +Farmers download the Zero-OS operating system and boot their servers themselves. Once booted, these servers become 3Nodes. The 3Nodes will register themselves in a database called the TF_Explorer. Once registered in the TF_Explorer, the capacity of the 3Nodes will become available on the TF Grid Explorer. Also, given the autonomous nature of the ThreeFold_Grid, there is no need for any intermediaries between the user and 3Nodes. + +This enables a complete peer-to-peer environment for people to reserve their Internet Capacity directly from the hardware. + +### Smart Contract for IT + +The purpose of the smart contract for IT is to create and enable autonomous IT. Autonomous self-driving IT is possible when we adhere to two principles from the start: + +1. Information technology architectures are configured and installed by bots (a ‘smart contract agent’), not people. +2. Human beings cannot have access to these architectures and change things. + +Sticking to these principles provides the basis to describe everything in a contract-type format and to deploy any self-driving and self-healing application on the ThreeFold_Grid. + +Once the smart contract for IT is created, it will be registered in the Blockchain Database in a complete end-to-end process. It will also leave instructions for the 3Nodes in a digital notary system for them to grab the necessary instructions and complete the smart contract. + +Learn more about smart contract for IT [here](smartcontract_it).
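The registration-and-pickup flow described above can be sketched as follows: a contract is registered on chain, and the target 3Node later picks up its own pending instructions and deploys them, with no human touching the node. This is an illustrative sketch only; all names and fields are hypothetical, not the actual implementation.

```python
# Illustrative sketch of the smart-contract-for-IT flow described above.
# All names and fields are hypothetical; this is NOT the actual implementation.

from dataclasses import dataclass


@dataclass
class WorkloadContract:
    contract_id: int
    node_id: int          # the 3Node that must execute this contract
    workload: dict        # e.g. {"type": "zmachine", "cpu": 2, "memory_gb": 4}
    deployed: bool = False


class Registry:
    """Stands in for the blockchain database / digital notary system."""

    def __init__(self):
        self.contracts: list[WorkloadContract] = []

    def register(self, contract: WorkloadContract) -> None:
        self.contracts.append(contract)

    def pending_for(self, node_id: int) -> list[WorkloadContract]:
        return [c for c in self.contracts if c.node_id == node_id and not c.deployed]


registry = Registry()
registry.register(WorkloadContract(1, node_id=42, workload={"type": "zmachine", "cpu": 2}))

# The 3Node autonomously polls for its own contracts and deploys them.
for contract in registry.pending_for(42):
    contract.deployed = True

print(len(registry.pending_for(42)))  # 0
```

The point of the sketch is the direction of control: the node pulls its instructions from the registry, so no person (or external system) ever pushes commands into the node.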
+ +### TFChain + +A blockchain running on the TFGrid stores the following information (TFGrid 3.0): + +- registry for all digital twins (identity system, aka phonebook) +- registry for all farmers & 3Nodes +- registry for our reputation system +- info as required for the Smart Contract for IT + +This is the heart of the operational system of the TFGrid. + +### Peer-to-Peer Network + +The peer-to-peer network allows any ZMachine or user to connect securely with other ZMachines or users on the TF Grid, creating a private shortest-path peer-to-peer network. + +### Web Gateway + + The Web Gateway is a mechanism to connect the private (overlay) networks to the open Internet. By not providing an open and direct path into the private network, a lot of malicious phishing and hacking attempts are stopped at the Web Gateway level for container applications. \ No newline at end of file diff --git a/collections/technology/layers/experience_layer_intro.md b/collections/technology/layers/experience_layer_intro.md new file mode 100644 index 0000000..1992925 --- /dev/null +++ b/collections/technology/layers/experience_layer_intro.md @@ -0,0 +1 @@ +## Experience Layer diff --git a/collections/technology/layers/technology_layers.md b/collections/technology/layers/technology_layers.md new file mode 100644 index 0000000..e6ab9c5 --- /dev/null +++ b/collections/technology/layers/technology_layers.md @@ -0,0 +1,7 @@ + +!!!include:capacity_layer_intro + +!!!include:autonomous_layer_intro + +!!!include:experience_layer_intro + diff --git a/collections/technology/primitives/compute/beyond_containers.md b/collections/technology/primitives/compute/beyond_containers.md new file mode 100644 index 0000000..18b419c --- /dev/null +++ b/collections/technology/primitives/compute/beyond_containers.md @@ -0,0 +1,17 @@ +## Beyond Containers + +![](img/container_native.jpg) + + +Default features: + +- compatible with Docker +- compatible with any Linux workload + +We have the following unique advantages: + +- 
no need to work with images, we work with our unique zos_fs. +- every container runs in a dedicated virtual machine providing more security. +- the containers talk to each other over a private network: zos_net. +- the containers can use web_gw to allow users on the internet to connect to the applications running in their secure containers. +- can use core-x to manage the workload. diff --git a/collections/technology/primitives/compute/compute_toc.md b/collections/technology/primitives/compute/compute_toc.md new file mode 100644 index 0000000..2e52fa6 --- /dev/null +++ b/collections/technology/primitives/compute/compute_toc.md @@ -0,0 +1,7 @@ +# Compute + +

Table of Contents

+ +- [ZKube](./zkube.md) +- [ZMachine](./zmachine.md) +- [CoreX](./corex.md) \ No newline at end of file diff --git a/collections/technology/primitives/compute/corex.md b/collections/technology/primitives/compute/corex.md new file mode 100644 index 0000000..e599792 --- /dev/null +++ b/collections/technology/primitives/compute/corex.md @@ -0,0 +1,21 @@ + +

CoreX

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [ZMachine Process Manager](#zmachine-process-manager) + +*** + +## Introduction + +CoreX allows you to manage your ZMachine over web remotely. + +## ZMachine Process Manager + +- Provide a web interface and a REST API to control your processes. +- Allow to watch the logs of your processes. +- Or use it as a web terminal (access over https to your terminal)! + +![](img/corex.jpg) \ No newline at end of file diff --git a/collections/technology/primitives/compute/img/container_native.jpg b/collections/technology/primitives/compute/img/container_native.jpg new file mode 100644 index 0000000..c763329 Binary files /dev/null and b/collections/technology/primitives/compute/img/container_native.jpg differ diff --git a/collections/technology/primitives/compute/img/corex.jpg b/collections/technology/primitives/compute/img/corex.jpg new file mode 100644 index 0000000..977ab61 Binary files /dev/null and b/collections/technology/primitives/compute/img/corex.jpg differ diff --git a/collections/technology/primitives/compute/img/kubernetes_0_.jpg b/collections/technology/primitives/compute/img/kubernetes_0_.jpg new file mode 100644 index 0000000..f976a52 Binary files /dev/null and b/collections/technology/primitives/compute/img/kubernetes_0_.jpg differ diff --git a/collections/technology/primitives/compute/img/tfgrid_compute_.jpg b/collections/technology/primitives/compute/img/tfgrid_compute_.jpg new file mode 100644 index 0000000..37ee664 Binary files /dev/null and b/collections/technology/primitives/compute/img/tfgrid_compute_.jpg differ diff --git a/collections/technology/primitives/compute/img/zkube_architecture_.jpg b/collections/technology/primitives/compute/img/zkube_architecture_.jpg new file mode 100644 index 0000000..f8fc59a Binary files /dev/null and b/collections/technology/primitives/compute/img/zkube_architecture_.jpg differ diff --git a/collections/technology/primitives/compute/img/zmachine_zos_.jpg 
b/collections/technology/primitives/compute/img/zmachine_zos_.jpg new file mode 100644 index 0000000..b8dec4f Binary files /dev/null and b/collections/technology/primitives/compute/img/zmachine_zos_.jpg differ diff --git a/collections/technology/primitives/compute/tfgrid_compute.md b/collections/technology/primitives/compute/tfgrid_compute.md new file mode 100644 index 0000000..d8bd51d --- /dev/null +++ b/collections/technology/primitives/compute/tfgrid_compute.md @@ -0,0 +1,25 @@ + +## TFGrid Compute Layer + +![](img/tfgrid_compute_.jpg) + +We are more than just container or VM technology, see [our Beyond Containers document](beyond_containers). + +A 3Node is a Zero-OS enabled computer which is hosted by any of the TF_Farmers. + +There are 4 storage mechanisms which can be used to store your data: + +- ZOS_FS is our unique deduplicating filesystem; it replaces Docker images. +- ZOS_Mount is a mounted disk location on SSD, which can be used as a faster storage location. +- QSFS is a unique storage system in which data can never be lost or corrupted. Please note that this storage layer is only meant to be used for secondary storage applications. +- ZOS_Disk, a virtual disk technology, only for TFTech OEM partners. + +There are 4 ways networks can be connected to a Z-Machine: + +- Planetary_network: a planetary scalable network; we have clients for Windows, macOS, Android and iOS. +- zos_net: a fast end2end encrypted network technology that keeps the traffic between your z_machines 100% private. +- zos_bridge: connection to a public IP address +- web_gw: web gateway, a secure way to allow internet traffic to reach your secure Z-Machine. + + + diff --git a/collections/technology/primitives/compute/zkube.md b/collections/technology/primitives/compute/zkube.md new file mode 100644 index 0000000..e933059 --- /dev/null +++ b/collections/technology/primitives/compute/zkube.md @@ -0,0 +1,40 @@ +

ZKube

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Unique for our Kubernetes implementation](#unique-for-our-kubernetes-implementation) +- [Features](#features) +- [ZMachine Benefits](#zmachine-benefits) +- [Architecture](#architecture) + +*** + +## Introduction + +TFGrid is compatible with Kubernetes technology. + +Each eVDC as shown above is a full-blown Kubernetes deployment. + +## Unique for our Kubernetes implementation + +- The Kubernetes networks run on top of our [ZNet](../network/znet.md) technology, which means all traffic between containers and Kubernetes hosts is end2end encrypted, independent of where your Kubernetes nodes are deployed. +- You can mount a QSFS underneath a Kubernetes Node (VM), which means that you can deploy containers on top of QSFS to host unlimited amounts of storage in a super safe way. +- Your Kubernetes environment is 100% decentralized: you define where you want to deploy your Kubernetes nodes, and only you have access to the deployed workloads on the TFGrid. + +## Features + +* integration with znet (efficient, secure encrypted network between the zmachines) +* can be easily deployed at the edge +* single-tenant! + +## ZMachine Benefits + +* [ZOS Protect](../../zos/benefits/zos_advantages.md#zero-os-protect): no hacking surface to the Zero-Nodes, integrates a silicon root of trust + + +![](img/kubernetes_0_.jpg) + +## Architecture + +![](img/zkube_architecture_.jpg) \ No newline at end of file diff --git a/collections/technology/primitives/compute/zmachine.md b/collections/technology/primitives/compute/zmachine.md new file mode 100644 index 0000000..3c88994 --- /dev/null +++ b/collections/technology/primitives/compute/zmachine.md @@ -0,0 +1,30 @@ +

ZMachine

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Features](#features) +- [Architecture](#architecture) + +*** + +## Introduction + +ZMachine is a unified container/virtual machine type. It can be used to start a virtual machine on a zos node. + +## Features + +* import from docker (the market standard for containers) +* can be easily deployed at the edge (edge cloud) +* single-tenant, fully decentralized! +* can deploy unlimited amounts of storage using our qsfs. +* [ZOS Protect](../../zos/benefits/zos_advantages.md#zero-os-protect): no hacking surface to the Zero-Nodes, integrates a silicon root of trust +* [ZOS Filesystem](../storage/zos_fs.md): dedupe, zero-install, hacker-proof +* [WebGateway](../network/webgw3.md): intelligent connection between web (internet) and container services +* integration with [ZNet](../network/znet.md) (efficient, secure encrypted network between the zmachines) + +## Architecture + +![](img/zmachine_zos_.jpg) + +A ZMachine runs as a virtual machine on top of Zero-OS. diff --git a/collections/technology/primitives/network/img/overlay_net1.jpg b/collections/technology/primitives/network/img/overlay_net1.jpg new file mode 100644 index 0000000..9dd8138 Binary files /dev/null and b/collections/technology/primitives/network/img/overlay_net1.jpg differ diff --git a/collections/technology/primitives/network/img/planet_net_.jpg b/collections/technology/primitives/network/img/planet_net_.jpg new file mode 100644 index 0000000..afba046 Binary files /dev/null and b/collections/technology/primitives/network/img/planet_net_.jpg differ diff --git a/collections/technology/primitives/network/img/planetary_lan.jpg b/collections/technology/primitives/network/img/planetary_lan.jpg new file mode 100644 index 0000000..f60faa5 Binary files /dev/null and b/collections/technology/primitives/network/img/planetary_lan.jpg differ diff --git a/collections/technology/primitives/network/img/planetary_net.jpg b/collections/technology/primitives/network/img/planetary_net.jpg new
file mode 100644 index 0000000..f4be658 Binary files /dev/null and b/collections/technology/primitives/network/img/planetary_net.jpg differ diff --git a/collections/technology/primitives/network/img/redundant_net.jpg b/collections/technology/primitives/network/img/redundant_net.jpg new file mode 100644 index 0000000..1975ef7 Binary files /dev/null and b/collections/technology/primitives/network/img/redundant_net.jpg differ diff --git a/collections/technology/primitives/network/img/webgateway.jpg b/collections/technology/primitives/network/img/webgateway.jpg new file mode 100644 index 0000000..ce67712 Binary files /dev/null and b/collections/technology/primitives/network/img/webgateway.jpg differ diff --git a/collections/technology/primitives/network/img/webgw_scaling.jpg b/collections/technology/primitives/network/img/webgw_scaling.jpg new file mode 100644 index 0000000..ad8819c Binary files /dev/null and b/collections/technology/primitives/network/img/webgw_scaling.jpg differ diff --git a/collections/technology/primitives/network/img/znet_redundancy.jpg b/collections/technology/primitives/network/img/znet_redundancy.jpg new file mode 100644 index 0000000..af7dcb3 Binary files /dev/null and b/collections/technology/primitives/network/img/znet_redundancy.jpg differ diff --git a/collections/technology/primitives/network/img/znet_znic.jpg b/collections/technology/primitives/network/img/znet_znic.jpg new file mode 100644 index 0000000..f258522 Binary files /dev/null and b/collections/technology/primitives/network/img/znet_znic.jpg differ diff --git a/collections/technology/primitives/network/img/znet_znic1.jpg b/collections/technology/primitives/network/img/znet_znic1.jpg new file mode 100644 index 0000000..f258522 Binary files /dev/null and b/collections/technology/primitives/network/img/znet_znic1.jpg differ diff --git a/collections/technology/primitives/network/img/zos_network_overlay.jpg b/collections/technology/primitives/network/img/zos_network_overlay.jpg new 
file mode 100644 index 0000000..cf652ad Binary files /dev/null and b/collections/technology/primitives/network/img/zos_network_overlay.jpg differ diff --git a/collections/technology/primitives/network/network_toc.md b/collections/technology/primitives/network/network_toc.md new file mode 100644 index 0000000..9b85fb4 --- /dev/null +++ b/collections/technology/primitives/network/network_toc.md @@ -0,0 +1,7 @@ +# Network + +

Table of Contents

+ +- [ZNET](./znet.md) +- [ZNIC](./znic.md) +- [WebGateway](./webgw3.md) \ No newline at end of file diff --git a/collections/technology/primitives/network/p2pagent.md b/collections/technology/primitives/network/p2pagent.md new file mode 100644 index 0000000..97f6d77 --- /dev/null +++ b/collections/technology/primitives/network/p2pagent.md @@ -0,0 +1,5 @@ +# Peer2Peer Agent + +>TODO + +!!!include:zos_toc diff --git a/collections/technology/primitives/network/planetary_network.md b/collections/technology/primitives/network/planetary_network.md new file mode 100644 index 0000000..e3e3c5b --- /dev/null +++ b/collections/technology/primitives/network/planetary_network.md @@ -0,0 +1,54 @@ +![](img/planetary_lan.jpg) + +# Planetary Network + +![](img/planet_net_.jpg) + + +The planetary network is an overlay network which lives on top of the existing internet or other peer2peer networks. In this network, everyone is connected to everyone. There is end-to-end encryption between users of an app and the app running behind the network wall. + +Each user end network point is strongly authenticated and uniquely identified, independent of the network carrier used. There is no need for centralized firewall or VPN solutions, as circle-based network security is in place. + +Benefits: +- It finds the shortest possible paths between peers +- There's full security through end-to-end encrypted messaging +- It allows for peer2peer links like meshed wireless +- It can survive broken internet links and re-route when needed +- It resolves the shortage of IPv4 addresses + + +Whereas current computer networks depend heavily on very centralized design and configuration, this networking concept breaks that mould by making use of a global spanning tree to form a scalable IPv6 encrypted mesh network. This is a peer2peer implementation of a networking protocol. 
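A defining property of such mesh networks (Yggdrasil included) is that a node's IPv6 address is derived from its public key, so the address itself authenticates the node. The sketch below is a toy derivation, not Yggdrasil's actual algorithm; it only illustrates that the address is a pure function of the key, so no central registry has to hand out addresses:

```python
import hashlib
import ipaddress

def derive_ipv6(pubkey: bytes) -> ipaddress.IPv6Address:
    """Toy sketch: bind an IPv6 address to a node's public key.

    The real Yggdrasil derivation differs in detail; the point is only
    that owning the key proves ownership of the address.
    """
    digest = hashlib.sha512(pubkey).digest()
    # 0x02 as the first byte places the address in the 0200::/7 range
    # that such overlay networks use.
    return ipaddress.IPv6Address(bytes([0x02]) + digest[:15])
```

Two nodes hashing the same key always agree on the same address, so routing information can be verified cryptographically instead of trusted.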
+ +The following table illustrates high-level differences between traditional networks like the internet, and the planetary threefold network: + +| Characteristic | Traditional | Planetary Network | +| --------------------------------------------------------------- | ----------- | ----------------- | +| End-to-end encryption for all traffic across the network | No | Yes | +| Decentralized routing information shared using a DHT | No | Yes | +| Cryptographically-bound IPv6 addresses | No | Yes | +| Node is aware of its relative location to other nodes | No | Yes | +| IPv6 address remains with the device even if moved | No | Yes | +| Topology extends gracefully across different mediums, i.e. mesh | No | Yes | + +## What are the problems solved here? + +The internet as we know it today doesn’t conform to a well-defined topology. This has largely happened over time - as the internet has grown, more and more networks have been “bolted together”. The lack of defined topology gives us some unavoidable problems: + +- The routing tables that hold a “map” of the internet are huge and inefficient +- There isn’t really any way for a computer to know where it is located on the internet relative to anything else +- It’s difficult to examine where a packet will go on its journey from source to destination without actually sending it +- It’s very difficult to install reliable networks into locations that change often or are non-static, i.e. wireless mesh networks + +These problems have been partially mitigated (but not really solved) through centralization - rather than your computers at home holding a copy of the global routing table, your ISP does it for you. Your computers and network devices are configured just to “send it upstream” and to let your ISP decide where it goes from there, but this does leave you entirely at the mercy of your ISP, who can redirect your traffic anywhere they like and inspect, manipulate or intercept it. 
+ +In addition, wireless meshing requires you to know a lot about the network around you, which would not typically be the case when you have outsourced this knowledge to your ISP. Many existing wireless mesh routing schemes are not scalable or efficient, and do not bridge well with existing networks. + +![](img/planetary_net.jpg) + +The planetary network is a continuation & implementation of the [Yggdrasil](https://yggdrasil-network.github.io/about.html) network initiative. This technology is in beta but has already been proven to work quite well. + +!!!def alias:planet_net,planetary_net,planetary_network,pan + +!!!include:zos_toc + +> Click [here](manual:planetary_network_connector) to read more about Planetary Network Connector Installation. Click [here](manual:yggdrasil_client) to read more about Planetary Network Installation (advanced). \ No newline at end of file diff --git a/collections/technology/primitives/network/tfgrid_network.md b/collections/technology/primitives/network/tfgrid_network.md new file mode 100644 index 0000000..d3277a4 --- /dev/null +++ b/collections/technology/primitives/network/tfgrid_network.md @@ -0,0 +1,7 @@ +# TFGrid networking + +- znet : private network between zmachines +- [Planetary Network](planetary_network) : peer2peer end2end encrypted global network +- znic : interface to planetary network +- [WebGateway](webgw) : interface between internet and znet + diff --git a/collections/technology/primitives/network/webgw.md b/collections/technology/primitives/network/webgw.md new file mode 100644 index 0000000..4abd665 --- /dev/null +++ b/collections/technology/primitives/network/webgw.md @@ -0,0 +1,60 @@ + + +# WebGW 2.0 + +The Web Gateway is a mechanism to connect the private networks to the open Internet, in such a way that there is no direct connection between internet and the secure workloads running in the ZMachines. + +![](img/webgateway.jpg) + + +- Separation between where compute workloads are and where services are exposed. 
+- Better Security +- Redundant + - Each app can be exposed on multiple webgateways at once. +- Support for many interfaces... +- Helps resolve shortage of IPv4 addresses + +If (parts of) this private overlay network need to be reachable from the Internet, the zmachines initiate a secure connection *to* the Web Gateway. + +### Implementation + +It is important to mention that this connection is not a standard network connection; it is a [network socket](https://en.wikipedia.org/wiki/Network_socket) initiated by the container or VM to the web gateway. The container calls out to one or more web gateways and sets up a secure & private socket connection to the web gateway. The type of connection required is defined on the smart contract for IT layer and as such is very secure. No IP (TCP/UDP) traffic comes from the internet towards the containers, providing more security. + +Up to the Web Gateway, Internet traffic follows the same route as for any other network endpoint: a DNS entry tells the consumer's client which IP address to send traffic to. This endpoint is the public interface of the Web Gateway. That interface accepts the HTTP(S) (or any other TCP) packets and forwards the packet payload over the secure socket connection (initiated by the container) to the container. + +No open pipe (NAT plus port forwarding) from the public internet to specific containers in the private (overlay) network exists. + +Web Gateways are created by so-called network farmers. Network farmers are people and companies that have access to good connectivity and a large number of publicly routable IP networks. They provide the facilities (hardware) for Web Gateways to run and terminate a lot of the public inbound and outbound traffic for the TF Grid. Examples of network farmers are ISPs, regional, national and international telcos, internet exchanges, etc. 
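The implementation described above boils down to a reverse tunnel: the workload dials out, and inbound traffic is relayed back over that same outbound connection. A minimal single-request sketch (real gateways multiplex many streams and encrypt them; hosts and message framing here are illustrative):

```python
import socket
import threading

def run_gateway(ports: dict, ready: threading.Event):
    # The gateway owns BOTH listening sockets; the workload listens on nothing.
    tun_srv = socket.socket()
    tun_srv.bind(("127.0.0.1", 0))
    tun_srv.listen(1)
    pub_srv = socket.socket()
    pub_srv.bind(("127.0.0.1", 0))
    pub_srv.listen(1)
    ports["tunnel"] = tun_srv.getsockname()[1]
    ports["public"] = pub_srv.getsockname()[1]
    ready.set()
    tun, _ = tun_srv.accept()        # the workload dialed OUT to us
    cli, _ = pub_srv.accept()        # an internet client arrives
    tun.sendall(cli.recv(1024))      # forward the request payload inward
    cli.sendall(tun.recv(1024))      # relay the workload's reply back out
    for s in (tun, cli, tun_srv, pub_srv):
        s.close()

def run_workload(tunnel_port: int):
    # Outbound connection only: no NAT hole, no port forwarding.
    tun = socket.create_connection(("127.0.0.1", tunnel_port))
    tun.sendall(b"reply:" + tun.recv(1024))
    tun.close()
```

Because the workload never listens on a public port, there is nothing for an internet scanner to probe; the only reachable surface is the gateway.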
+ +### Security + +By not providing an open and direct path into the private network, many malicious phishing and hacking attempts are stopped at the Web Gateway. By design any private network is meant to have multiple webgateways, and by design these Web Gateways exist on different infrastructure in a different location. Sniffing around and finding out what can be done with a Web Gateway might (and will) happen, but it will not compromise the containers in your private network. + +### Redundant Network Connection + +![](img/redundant_net.jpg) + + +### Unlimited Scale + +![](img/webgw_scaling.jpg) + + +The network architecture is a pure scale-out network system; it can scale to unlimited size, there is simply no bottleneck. Network "supply" is created by network farmers, and network "demand" comes from TF Grid users. Supply and demand scale independently: for supply there can be unlimited network farmers providing the web gateways on their own 3nodes and unlimited compute farmers providing 3nodes for compute and storage. The demand side is driven by developers creating software that runs on the grid and system integrators creating solutions for enterprises, and this demand side is growing exponentially for data processing and storage use cases. + +### Network Wall (future) + +see [Network Wall](network_wall) + +## Roadmap + +The Web Gateway described above is for 2.0. + +For 3.0 we start with an HTTP(S) proxy over a Planetary network connection. Not all features from WebGW 2.0 have been ported. + +Further in the future, we envisage support for many other protocols: SQL, Redis, UDP, ... + +!!!def alias:web_gw,zos_web_gateway + +!!!include:zos_toc + diff --git a/collections/technology/primitives/network/webgw3.md b/collections/technology/primitives/network/webgw3.md new file mode 100644 index 0000000..eef289b --- /dev/null +++ b/collections/technology/primitives/network/webgw3.md @@ -0,0 +1,50 @@ +

WebGW 2.0

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Implementation](#implementation) +- [Security](#security) +- [Redundant Network Connection](#redundant-network-connection) +- [Unlimited Scale](#unlimited-scale) + +*** + +## Introduction + +The Web Gateway is a mechanism to connect the private networks to the open Internet, in such a way that there is no direct connection between internet and the secure workloads running in the ZMachines. + +![](img/webgateway.jpg) + + +- Separation between where compute workloads are and where services are exposed. +- Redundant + - Each app can be exposed on multiple webgateways at once. +- Support for many interfaces... +- Helps resolve shortage of IPv4 addresses + +## Implementation + +Some 3Nodes support gateway functionality (configured by the farmers). A 3Node with a gateway config can accept gateway workloads and forward traffic to ZMachines that only have Yggdrasil (planetary network) or IPv6 addresses. + +A gateway workload consists of a name (prefix) that needs to be reserved on the blockchain first, plus the list of backend IPs. Other flags can be set to control automatic TLS (please check the Terraform documentation for the exact details of a reservation). + +Once the 3Node receives this workload, the network configures a proxy for this name and the Yggdrasil IPs. + +## Security + +A ZMachine has to have an Yggdrasil IP or any other IPv6 address (IPv4 is also accepted), but this means that anyone connected to the Yggdrasil network can also reach the ZMachine without the need for a proxy. + +So it's up to the ZMachine owner/maintainer to make sure it is secured and only the required ports are open. + +## Redundant Network Connection + +![](img/redundant_net.jpg) + + +## Unlimited Scale + +![](img/webgw_scaling.jpg) + + +The network architecture is a pure scale-out network system; it can scale to unlimited size, there is simply no bottleneck. 
Network "supply" is created by network farmers, and network "demand" comes from TF Grid users. Supply and demand scale independently: for supply there can be unlimited network farmers providing the web gateways on their own 3nodes and unlimited compute farmers providing 3nodes for compute and storage. The demand side is driven by developers creating software that runs on the grid and system integrators creating solutions for enterprises, and this demand side is growing exponentially for data processing and storage use cases. diff --git a/collections/technology/primitives/network/znet.md b/collections/technology/primitives/network/znet.md new file mode 100644 index 0000000..b4c7f21 --- /dev/null +++ b/collections/technology/primitives/network/znet.md @@ -0,0 +1,39 @@ +

ZNET

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Secure mesh overlay network (peer2peer)](#secure-mesh-overlay-network-peer2peer) +- [Redundancy](#redundancy) +- [Interfaces in Zero-OS](#interfaces-in-zero-os) + +*** + +## Introduction + +ZNET is a decentralized networking platform allowing any compute and storage workload to be connected together on a private (overlay) network and exposed to the existing internet network. The peer2peer network platform allows any workload to be connected over secure encrypted networks which will look for the shortest path between the nodes. + +![](img/zos_network_overlay.jpg) + +## Secure mesh overlay network (peer2peer) + +Z_NET is the foundation of any architecture running on the TF Grid. It can be seen as a virtual private datacenter, and the network allows each of the *N* containers to connect to all of the *(N-1)* other containers. Any network connection is a secure network connection between your containers and creates a peer2peer network between containers. + +![](img/overlay_net1.jpg) + +No connection is made with the internet. The ZNet is a single-tenant network and by default not connected to the public internet. Everything stays private. For connecting to the public internet, a Web Gateway is included in the product to allow for public access if and when required. + +## Redundancy + +As integrated with [WebGW](./webgw3.md): + +![](img/znet_redundancy.jpg) + +- Any app can get (securely) connected to the internet by any chosen IP address made available by ThreeFold network farmers through WebGW. +- An app can be connected to multiple web gateways at once; the DNS round robin principle will provide load balancing and redundancy. +- An easy clustering mechanism where web gateways and nodes can be lost and the public service will still be up and running. +- Easy maintenance. When containers are moved or re-created, the same end-user connection can be reused as that connection is terminated on the Web Gateway. 
The moved or newly created container will recreate the socket to the Web Gateway and receive inbound traffic. + +## Interfaces in Zero-OS + +![](img/znet_znic1.jpg) diff --git a/collections/technology/primitives/network/znic.md b/collections/technology/primitives/network/znic.md new file mode 100644 index 0000000..fbb8daa --- /dev/null +++ b/collections/technology/primitives/network/znic.md @@ -0,0 +1,24 @@ +
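The redundancy bullets above reduce to a simple client-side pattern: resolve the service name to several gateway addresses and use the first one that answers. A hedged sketch of that failover logic (addresses and ports are illustrative, not part of any ThreeFold API):

```python
import socket

def connect_first_alive(addrs, timeout=0.5):
    """Try each (host, port) gateway in turn and skip dead ones.

    DNS round robin hands the address list out in rotating order, so load
    is spread across gateways and a lost gateway is simply skipped.
    """
    for host, port in addrs:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue  # this gateway is down; try the next one
    raise OSError("no gateway reachable")
```

Because the end-user connection terminates on the gateway rather than on the container, the container can be moved without the client noticing anything beyond a reconnect.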

ZNIC

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Use Cases](#use-cases) +- [Overview](#overview) + +*** + +## Introduction + +ZNIC is the network interface which is connected to a Z_Machine. + +## Use Cases + +It can be implemented as an interface to: + +- the planetary_network +- a public IP address on Zero-OS + +## Overview + +![](img/znet_znic.jpg) \ No newline at end of file diff --git a/collections/technology/primitives/primitives_toc.md b/collections/technology/primitives/primitives_toc.md new file mode 100644 index 0000000..d8e313a --- /dev/null +++ b/collections/technology/primitives/primitives_toc.md @@ -0,0 +1,19 @@ +# Primitives + +

Table of Contents

+ +- [Compute](./compute/compute_toc.md) + - [ZKube](./compute/zkube.md) + - [ZMachine](./compute/zmachine.md) + - [CoreX](./compute/corex.md) +- [Storage](./storage/storage_toc.md) + - [ZOS Filesystem](./storage/zos_fs.md) + - [ZOS Mount](./storage/zmount.md) + - [Quantum Safe File System](./storage/qsfs.md) + - [Zero-DB](./storage/zdb.md) + - [Zero-Disk](./storage/zdisk.md) +- [Network](./network/network_toc.md) + - [ZNET](./network/znet.md) + - [ZNIC](./network/znic.md) + - [WebGateway](./network/webgw3.md) +- [Zero-OS Advantages](../zos/benefits/zos_advantages.md) \ No newline at end of file diff --git a/collections/technology/primitives/storage/img/zdb_arch.jpg b/collections/technology/primitives/storage/img/zdb_arch.jpg new file mode 100644 index 0000000..39f51d6 Binary files /dev/null and b/collections/technology/primitives/storage/img/zdb_arch.jpg differ diff --git a/collections/technology/primitives/storage/img/zmount.jpg b/collections/technology/primitives/storage/img/zmount.jpg new file mode 100644 index 0000000..35011b0 Binary files /dev/null and b/collections/technology/primitives/storage/img/zmount.jpg differ diff --git a/collections/technology/primitives/storage/img/zos_zstor.jpg b/collections/technology/primitives/storage/img/zos_zstor.jpg new file mode 100644 index 0000000..35f0b62 Binary files /dev/null and b/collections/technology/primitives/storage/img/zos_zstor.jpg differ diff --git a/collections/technology/primitives/storage/qsfs.md b/collections/technology/primitives/storage/qsfs.md new file mode 100644 index 0000000..5b3dac8 --- /dev/null +++ b/collections/technology/primitives/storage/qsfs.md @@ -0,0 +1,34 @@ +

Quantum Safe Filesystem

+ +

Table of Contents

+ +- [Introduction](#introduction) + - [Benefits](#benefits) + - [Use Cases](#use-cases) + - [Implementation](#implementation) + +*** + +## Introduction + +The quantum safe filesystem presents itself as a filesystem to the ZMachine. + +### Benefits + +- Safe +- Hacker Proof +- Ultra Reliable +- Low Overhead +- Ultra Scalable +- Self Healing = recovers service automatically in the event of an outage, with no human intervention + +![](img/zos_zstor.jpg) + +### Use Cases + +- Backup and archive system +- Blockchain Storage Backend (OEM ONLY) + +### Implementation + +> QSFS uses QSSS inside. \ No newline at end of file diff --git a/collections/technology/primitives/storage/storage_toc.md b/collections/technology/primitives/storage/storage_toc.md new file mode 100644 index 0000000..d95b3a1 --- /dev/null +++ b/collections/technology/primitives/storage/storage_toc.md @@ -0,0 +1,9 @@ +# Storage + +

Table of Contents

+ +- [ZOS Filesystem](./zos_fs.md) +- [ZOS Mount](./zmount.md) +- [Quantum Safe File System](./qsfs.md) +- [Zero-DB](./zdb.md) +- [Zero-Disk](./zdisk.md) \ No newline at end of file diff --git a/collections/technology/primitives/storage/zdb.md b/collections/technology/primitives/storage/zdb.md new file mode 100644 index 0000000..7b0855c --- /dev/null +++ b/collections/technology/primitives/storage/zdb.md @@ -0,0 +1,21 @@ +
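The "self healing" benefit claimed for QSFS above rests on dispersing data redundantly, so any lost piece can be rebuilt from the others. QSFS's real dispersal algorithm is more sophisticated (forward-looking error-correcting codes spread over many ZDBs); this toy XOR-parity sketch only illustrates the principle:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_fragments(data: bytes, k: int):
    """Split data into k fragments plus one XOR parity fragment.

    Toy scheme: losing any ONE fragment is survivable. Real dispersal
    codes tolerate multiple losses with tunable overhead.
    """
    assert len(data) % k == 0
    size = len(data) // k
    frags = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = reduce(xor_bytes, frags)
    return frags, parity

def rebuild(frags, parity, lost: int) -> bytes:
    """Recover the lost fragment by XOR-ing the parity with the survivors."""
    survivors = [f for i, f in enumerate(frags) if i != lost]
    return reduce(xor_bytes, survivors, parity)
```

Self-healing is then just a background loop that notices a missing fragment and writes the rebuilt copy to a fresh storage device, with no human intervention.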

ZOS-DB (ZDB)

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Use Cases](#use-cases) +- [Overview](#overview) + +*** + +## Introduction + +0-db is a fast and efficient redis-protocol-compatible key-value store which persists data in an always-append datafile, with namespace support. + +## Use Cases + +> ZDB is being used as backend storage for the [Quantum Safe Filesystem](./qsfs.md). + +## Overview + +![](img/zdb_arch.jpg) diff --git a/collections/technology/primitives/storage/zdisk.md b/collections/technology/primitives/storage/zdisk.md new file mode 100644 index 0000000..02b3297 --- /dev/null +++ b/collections/technology/primitives/storage/zdisk.md @@ -0,0 +1,18 @@ +
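"Redis-protocol compatible" above means a 0-db client speaks RESP, the same wire format Redis clients use (0-db implements only a subset of the Redis command set). A minimal encoder showing what a `SET` looks like on the wire:

```python
def resp_command(*parts: bytes) -> bytes:
    """Encode one command as a RESP array of bulk strings,
    the wire format shared by Redis and 0-db clients."""
    out = b"*%d\r\n" % len(parts)          # array header: element count
    for p in parts:
        out += b"$%d\r\n%s\r\n" % (len(p), p)  # bulk string: length, payload
    return out
```

Because the wire format is shared, any standard Redis client library can talk to a 0-db instance directly, subject to 0-db's supported command subset.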

ZOS Disk

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Roadmap](#roadmap) + +*** + +## Introduction + +The virtual disk primitive makes it possible to create and use virtual disks which can be attached to containers (and virtual machines). + +The technology is designed to be redundant without you having to do anything. + +## Roadmap + +- The virtual disk technology is available for OEMs only; contact TF_Tech. \ No newline at end of file diff --git a/collections/technology/primitives/storage/zmount.md b/collections/technology/primitives/storage/zmount.md new file mode 100644 index 0000000..d8cc663 --- /dev/null +++ b/collections/technology/primitives/storage/zmount.md @@ -0,0 +1,18 @@ +

ZOS Mount

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Overview](#overview) + +*** + +## Introduction + +ZOS Mount is an SSD storage location which can be written to from inside a ZMachine or ZKube. + +## Overview + +The SSD storage location is mounted on a chosen path inside your Z-Machine. + +![](img/zmount.jpg) \ No newline at end of file diff --git a/collections/technology/primitives/storage/zos_fs.md b/collections/technology/primitives/storage/zos_fs.md new file mode 100644 index 0000000..3f17724 --- /dev/null +++ b/collections/technology/primitives/storage/zos_fs.md @@ -0,0 +1,39 @@ + +

ZOS FileSystem (ZOS-FS)

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Uses Flist Inside](#uses-flist-inside) + - [Why this ZFlist Concept](#why-this-zflist-concept) +- [Benefits](#benefits) + +*** + +## Introduction + +ZOS-FS is a deduplicated filesystem which is more efficient than the images used in other virtual machine technologies. + +## Uses Flist Inside + +In Zero-OS, `flist` is the format used to store zmachine images. This format is made to provide +a complete mountable remote filesystem while downloading only the file contents that you actually need. + +In practice, an Flist itself is a small database which contains metadata about files and directories; the file payloads are stored on a TFGrid hub. You only download a payload when you need it, which dramatically reduces zmachine boot time, bandwidth and disk overhead. + +### Why this ZFlist Concept + +Have you ever been in the following situation: you need two small files but they are embedded in a large archive. How do you get to those 2 files in an efficient way? What a disappointment when you see this archive is 4 GB large and you only need 2 files of 2 MB inside. You'll need to download the full archive and store it somewhere to extract only what you need. Time, effort and bandwidth wasted. + +You want to start a Docker container and the base image you want to use is 2 GB. What do you need to do before being able to use your container? Wait for the 2 GB to download. This problem exists everywhere, but in Europe and the US the bandwidth speeds are such that this does not present a real problem anymore, hence none of the leading (current) tech companies are looking for solutions for this. + +We believe that there should be a smarter way of dealing with this than simply throwing more bandwidth at the problem: what if you could download only the files you actually want and not the full blob (archive, image, whatever...)? + +ZFList splits metadata and data. 
Metadata is referential information about everything you need to know about the content of the archive, but without the payload. Payload is the content of the referred files. The ZFlist is exactly that: it consists of metadata with references that point to where to get the payload itself. So if you don't need it, you won't get it. + +As soon as you have the flist mounted, you can see the full directory tree and walk around it. The files are only downloaded and presented at the moment you try to access them. In other words, every time you want to read or modify a file, Zero FS will download it, so that the data is available. You only download on the fly what you need, which dramatically reduces the bandwidth requirement. + + +## Benefits + +- Efficient use of bandwidth lets this service perform well even with little bandwidth available \ No newline at end of file diff --git a/collections/technology/qsss/img/filesystem_abstract.jpg b/collections/technology/qsss/img/filesystem_abstract.jpg new file mode 100644 index 0000000..b366139 Binary files /dev/null and b/collections/technology/qsss/img/filesystem_abstract.jpg differ diff --git a/collections/technology/qsss/img/qsss_intro_0_.jpg b/collections/technology/qsss/img/qsss_intro_0_.jpg new file mode 100644 index 0000000..7bb72cd Binary files /dev/null and b/collections/technology/qsss/img/qsss_intro_0_.jpg differ diff --git a/collections/technology/qsss/interfaces_usecases/img/filesystem.jpg b/collections/technology/qsss/interfaces_usecases/img/filesystem.jpg new file mode 100644 index 0000000..c28a9b7 Binary files /dev/null and b/collections/technology/qsss/interfaces_usecases/img/filesystem.jpg differ diff --git a/collections/technology/qsss/interfaces_usecases/img/http.jpg b/collections/technology/qsss/interfaces_usecases/img/http.jpg new file mode 100644 index 0000000..87fa682 Binary files /dev/null and b/collections/technology/qsss/interfaces_usecases/img/http.jpg differ diff --git
a/collections/technology/qsss/interfaces_usecases/img/hyperdrive.jpg b/collections/technology/qsss/interfaces_usecases/img/hyperdrive.jpg new file mode 100644 index 0000000..d07db0d Binary files /dev/null and b/collections/technology/qsss/interfaces_usecases/img/hyperdrive.jpg differ diff --git a/collections/technology/qsss/interfaces_usecases/img/ipfs.jpg b/collections/technology/qsss/interfaces_usecases/img/ipfs.jpg new file mode 100644 index 0000000..0927468 Binary files /dev/null and b/collections/technology/qsss/interfaces_usecases/img/ipfs.jpg differ diff --git a/collections/technology/qsss/interfaces_usecases/img/nft_architecture.jpg b/collections/technology/qsss/interfaces_usecases/img/nft_architecture.jpg new file mode 100644 index 0000000..0c09946 Binary files /dev/null and b/collections/technology/qsss/interfaces_usecases/img/nft_architecture.jpg differ diff --git a/collections/technology/qsss/interfaces_usecases/img/nft_storage.jpg b/collections/technology/qsss/interfaces_usecases/img/nft_storage.jpg new file mode 100644 index 0000000..5f758d0 Binary files /dev/null and b/collections/technology/qsss/interfaces_usecases/img/nft_storage.jpg differ diff --git a/collections/technology/qsss/interfaces_usecases/img/storage_architecture_1.jpg b/collections/technology/qsss/interfaces_usecases/img/storage_architecture_1.jpg new file mode 100644 index 0000000..3a0d3fe Binary files /dev/null and b/collections/technology/qsss/interfaces_usecases/img/storage_architecture_1.jpg differ diff --git a/collections/technology/qsss/interfaces_usecases/img/syncthing.jpg b/collections/technology/qsss/interfaces_usecases/img/syncthing.jpg new file mode 100644 index 0000000..cd1a17e Binary files /dev/null and b/collections/technology/qsss/interfaces_usecases/img/syncthing.jpg differ diff --git a/collections/technology/qsss/interfaces_usecases/nft_storage.md b/collections/technology/qsss/interfaces_usecases/nft_storage.md new file mode 100644 index 0000000..fc41947 --- /dev/null 
+++ b/collections/technology/qsss/interfaces_usecases/nft_storage.md @@ -0,0 +1,97 @@ +# Quantum Safe Storage System for NFT + +![](img/nft_architecture.jpg) + +The owner of the NFT can upload the data using one of our supported interfaces: + +- http upload (everything possible on https://nft.storage/ is also possible on our system) +- filesystem + +Every person in the world can retrieve the NFT (if allowed) and the data will be verified when doing so. The data is available everywhere in the world, again using multiple interfaces (IPFS, HTTP(S), ...). Caching happens on a global level. No special software or account on ThreeFold is needed to do this. + +The NFT system uses a highly reliable storage system underneath which is sustainable for the planet (green) and ultra secure and private. The NFT owner also owns the data. + + +## Benefits + +#### Persistence = owned by the data user (as represented by digital twin) + +![](img/nft_storage.jpg) + +It is not based on a shared-all architecture. + +Whoever stores the data has full control over: + +- where data is stored (specific locations) +- the redundancy policy used +- how long the data should be kept +- the CDN policy (where the data should be available and for how long) + + +#### Reliability + +- data cannot be corrupted +- data cannot be lost +- each time data is fetched back, its hash (fingerprint) is checked; if issues are found, auto-recovery happens +- all data is encrypted and compressed (unique per storage owner) +- the data owner chooses the level of redundancy + +#### Lookup + +- multi URL & storage network support (see the interfaces section further on) +- IPFS, HyperDrive URL schema +- unique DNS schema (with a long key which is globally unique) + +#### CDN support (with caching) + +Each file (movie, image) stored is available in many places worldwide. + +Each file gets a unique URL pointing to the data, which can be retrieved at all locations. + +Caching happens on each endpoint.
+ +#### Self Healing & Auto Correcting Storage Interface + +Any corruption, e.g. bitrot, gets automatically detected and corrected. + +In case of an HD crash or storage node crash, the data will automatically be expanded again to fit the chosen redundancy policy. + +#### Storage Algorithm = Uses Quantum Safe Storage System as base + +Not even a quantum computer can hack data stored on our QSSS. + +The QSSS is a highly innovative storage system which works at planetary scale and has many benefits compared to shared and/or replicated storage systems. + +It uses forward-looking error-correcting codes inside. + +#### Green + +Storage uses up to 10x less energy compared to classic replicated systems. + +#### Multi Interface + +The stored data is available over multiple interfaces at once. + +| interface | | +| -------------------------- | ----------------------- | +| IPFS | ![](img/ipfs.jpg) | +| HyperDrive / HyperCore | ![](img/hyperdrive.jpg) | +| http(s) on top of FreeFlow | ![](img/http.jpg) | +| syncthing | ![](img/syncthing.jpg) | +| filesystem | ![](img/filesystem.jpg) | + +This allows ultimate flexibility from an end-user perspective. + +The object (video, image) can easily be embedded in any website or other representation which supports HTTP. + + +## More Info + +* [Zero-OS overview](zos) +* [Quantum Safe Storage System](qsss_home) +* [Quantum Safe Storage Algorithm](qss_algorithm) +* [Smart Contract For IT Layer](smartcontract_it) + + + +!!!def alias:nft_storage,nft_storage_system \ No newline at end of file diff --git a/collections/technology/qsss/interfaces_usecases/qss_use_cases.md b/collections/technology/qsss/interfaces_usecases/qss_use_cases.md new file mode 100644 index 0000000..5134fe9 --- /dev/null +++ b/collections/technology/qsss/interfaces_usecases/qss_use_cases.md @@ -0,0 +1,12 @@ +## Quantum Safe Storage use cases + +### Backup + +A perfect use case for QSS is backup. QSS provides, as core features, several capabilities needed for a proper backup policy.
Characteristics of QSS that make backups secure, scalable, efficient and sustainable are: +- physical storage devices are always-append. The lowest level of the storage devices, ZDBs, are storage engines that work by design as always-append storage devices. +- easy provisioning of these ZDBs makes them almost like old-fashioned tape devices that you keep on a rotary schedule. This capability makes it possible to use, store and phase out stored data in a way that is auditable and can be made very transparent + +### Archiving + +### \ No newline at end of file diff --git a/collections/technology/qsss/interfaces_usecases/s3_interface.md b/collections/technology/qsss/interfaces_usecases/s3_interface.md new file mode 100644 index 0000000..267ee74 --- /dev/null +++ b/collections/technology/qsss/interfaces_usecases/s3_interface.md @@ -0,0 +1,15 @@ +# S3 Service + +If you would like an S3 interface, you can deploy one on top of our eVDC; it works very well together with our [quantumsafe_filesystem](quantumsafe_filesystem). + +A good open-source solution delivering an S3 interface is [min.io](https://min.io/). + +Thanks to our quantum safe storage layer, you could build fast, robust and reliable storage and archiving solutions. + +A typical setup would look like: + +![](img/storage_architecture_1.jpg) + +> TODO: link to manual on cloud how to deploy minio, using helm (3.0 release) + +!!!def alias:s3_storage \ No newline at end of file diff --git a/collections/technology/qsss/manual/qsfs_setup.md b/collections/technology/qsss/manual/qsfs_setup.md new file mode 100644 index 0000000..e8bce76 --- /dev/null +++ b/collections/technology/qsss/manual/qsfs_setup.md @@ -0,0 +1,297 @@ +# QSFS getting started on Ubuntu + +## Get components + +The following steps can be followed to set up a qsfs instance on a fresh +Ubuntu instance.
+ +- Install the fuse kernel module (`apt-get update && apt-get install fuse3`) +- Install the individual components by downloading the latest release from the + respective release pages: + - 0-db-fs: https://github.com/threefoldtech/0-db-fs/releases + - 0-db: https://github.com/threefoldtech/0-db, if multiple binaries + are available in the assets, choose the one ending in `static` + - 0-stor: https://github.com/threefoldtech/0-stor_v2/releases, if + multiple binaries are available in the assets, choose the one + ending in `musl` +- Make sure all binaries are executable (`chmod +x $binary`) + +## Setup and run 0-stor + +There are instructions below for a local 0-stor configuration. You can also deploy an eVDC and use the [provided 0-stor configuration](evdc_storage) for a simple cloud hosted solution. + +We will run 7 0-db instances as backends for 0-stor: 4 are used for the +metadata, 3 are used for the actual data. The metadata always consists +of 4 nodes, while the number of data backends can be increased. You can choose to either +run 7 separate 0-db processes, or a single process with 7 namespaces.
+For the purpose of this setup, we will start 7 separate processes, as +such: + +> This assumes you have moved the downloaded 0-db binary to `/tmp/0-db` + +```bash +/tmp/0-db --background --mode user --port 9990 --data /tmp/zdb-meta/zdb0/data --index /tmp/zdb-meta/zdb0/index +/tmp/0-db --background --mode user --port 9991 --data /tmp/zdb-meta/zdb1/data --index /tmp/zdb-meta/zdb1/index +/tmp/0-db --background --mode user --port 9992 --data /tmp/zdb-meta/zdb2/data --index /tmp/zdb-meta/zdb2/index +/tmp/0-db --background --mode user --port 9993 --data /tmp/zdb-meta/zdb3/data --index /tmp/zdb-meta/zdb3/index + +/tmp/0-db --background --mode seq --port 9980 --data /tmp/zdb-data/zdb0/data --index /tmp/zdb-data/zdb0/index +/tmp/0-db --background --mode seq --port 9981 --data /tmp/zdb-data/zdb1/data --index /tmp/zdb-data/zdb1/index +/tmp/0-db --background --mode seq --port 9982 --data /tmp/zdb-data/zdb2/data --index /tmp/zdb-data/zdb2/index +``` + +Now that the data storage is running, we can create the config file for +0-stor.
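Optionally, before moving on: 0-db will typically create its data and index paths itself, but pre-creating them makes a typo in a path fail early and visibly. A small sketch, assuming the same `/tmp/zdb-meta` and `/tmp/zdb-data` paths as the start commands above:

```shell
# Pre-create the index/data directories used by the 7 0-db instances.
# Paths mirror the start commands above; adjust if you used different ones.
BASE_META=/tmp/zdb-meta
BASE_DATA=/tmp/zdb-data

# 4 metadata instances (zdb0..zdb3)
for i in 0 1 2 3; do
    mkdir -p "${BASE_META}/zdb${i}/data" "${BASE_META}/zdb${i}/index"
done

# 3 data instances (zdb0..zdb2)
for i in 0 1 2; do
    mkdir -p "${BASE_DATA}/zdb${i}/data" "${BASE_DATA}/zdb${i}/index"
done

echo "created directories under ${BASE_META} and ${BASE_DATA}"
```

This is purely a convenience step; if you skip it, verify the `--data` and `--index` paths printed by each 0-db process on startup instead.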
The (minimal) config for this example setup will look as follows: + +```toml +minimal_shards = 2 +expected_shards = 3 +redundant_groups = 0 +redundant_nodes = 0 +socket = "/tmp/zstor.sock" +prometheus_port = 9100 +zdb_data_dir_path = "/tmp/zdbfs/data/zdbfs-data" +max_zdb_data_dir_size = 25600 + +[encryption] +algorithm = "AES" +key = "000001200000000001000300000004000a000f00b00000000000000000000000" + +[compression] +algorithm = "snappy" + +[meta] +type = "zdb" + +[meta.config] +prefix = "someprefix" + +[meta.config.encryption] +algorithm = "AES" +key = "0101010101010101010101010101010101010101010101010101010101010101" + +[[meta.config.backends]] +address = "[::1]:9990" + +[[meta.config.backends]] +address = "[::1]:9991" + +[[meta.config.backends]] +address = "[::1]:9992" + +[[meta.config.backends]] +address = "[::1]:9993" + +[[groups]] +[[groups.backends]] +address = "[::1]:9980" + +[[groups.backends]] +address = "[::1]:9981" + +[[groups.backends]] +address = "[::1]:9982" +``` + +> A full explanation of all options can be found in the 0-stor readme: +https://github.com/threefoldtech/0-stor_v2/#config-file-explanation + +This guide assumes the config file is saved as `/tmp/zstor_config.toml`. + +Now `zstor` can be started. Assuming the downloaded binary was saved as +`/tmp/zstor`: + +`/tmp/zstor -c /tmp/zstor_config.toml monitor`. If you don't want the +process to block your terminal, you can start it in the background: +`nohup /tmp/zstor -c /tmp/zstor_config.toml monitor &`. + +## Setup and run 0-db + +First we will get the hook script. The hook script can be found in the +[quantum_storage repo on github](https://github.com/threefoldtech/quantum-storage). +A slightly modified version is found here: + +```bash +#!/usr/bin/env bash +set -ex + +action="$1" +instance="$2" +zstorconf="/tmp/zstor_config.toml" +zstorbin="/tmp/zstor" + +if [ "$action" == "ready" ]; then + ${zstorbin} -c ${zstorconf} test + exit $? 
+fi + +if [ "$action" == "jump-index" ]; then + namespace=$(basename $(dirname $3)) + if [ "${namespace}" == "zdbfs-temp" ]; then + # skipping temporary namespace + exit 0 + fi + + tmpdir=$(mktemp -p /tmp -d zdb.hook.XXXXXXXX.tmp) + dirbase=$(dirname $3) + + # upload dirty index files + for dirty in $5; do + file=$(printf "i%d" $dirty) + cp ${dirbase}/${file} ${tmpdir}/ + done + + ${zstorbin} -c ${zstorconf} store -s -d -f ${tmpdir} -k ${dirbase} & + + exit 0 +fi + +if [ "$action" == "jump-data" ]; then + namespace=$(basename $(dirname $3)) + if [ "${namespace}" == "zdbfs-temp" ]; then + # skipping temporary namespace + exit 0 + fi + + # backup data file + ${zstorbin} -c ${zstorconf} store -s --file "$3" + + exit 0 +fi + +if [ "$action" == "missing-data" ]; then + # restore missing data file + ${zstorbin} -c ${zstorconf} retrieve --file "$3" + exit $? +fi + +# unknown action +exit 1 +``` + +> This guide assumes the file is saved as `/tmp/zdbfs/zdb-hook.sh`. Make sure the +> file is executable, i.e. `chmod +x /tmp/zdbfs/zdb-hook.sh` + +The local 0-db which is used by 0-db-fs can be started as follows: + +```bash +/tmp/0-db \ + --index /tmp/zdbfs/index \ + --data /tmp/zdbfs/data \ + --datasize 67108864 \ + --mode seq \ + --hook /tmp/zdbfs/zdb-hook.sh \ + --background +``` + +## Setup and run 0-db-fs + +Finally, we will start 0-db-fs. This guide opts to mount the fuse +filesystem in `/mnt`. Again, assuming the 0-db-fs binary was saved as +`/tmp/0-db-fs`: + +```bash +/tmp/0-db-fs /mnt -o autons -o background +``` + +You should now have the qsfs filesystem mounted at `/mnt`. As you write +data, it is saved in the local 0-db, and its data containers will +be periodically encoded and uploaded to the backend data storage 0-db's. +The data files in the local 0-db will never occupy more than 25GiB of +space (as configured in the 0-stor config file).
If a data container is +removed due to space constraints, and data inside of it needs to be +accessed by the filesystem (e.g. a file is being read), then the data +container is recovered from the backend storage 0-db's by 0-stor, and +0-db can subsequently serve this data to 0-db-fs. + +### 0-db-fs limitations + +Any workload should be supported on this filesystem, with some exceptions: + +- Opening a file in 'always append mode' will not have the expected behavior +- There is no support for O_TMPFILE in the fuse layer, a feature required by + overlayfs; thus overlayfs (used by Docker, for example) is not supported. + +## Docker setup + +It is possible to run zstor in a Docker container. First, create a data directory +on your host. Then, save the config file in the data directory as `zstor.toml`. Ensure +the storage 0-db's are running as described above. Then, run the Docker container +as such: + +``` +docker run -ti --privileged --rm --network host --name fstest -v /path/to/data:/data -v /mnt:/mnt:shared azmy/qsfs +``` + +The filesystem is now available in `/mnt`. + +## Autorepair + +Autorepair automatically repairs objects stored in the backend when one or more shards +are not reachable anymore. It does this by periodically checking if all the backends +are still reachable. If it detects that one or more of the backends used by an encoded +object are not reachable, the healthy shards are downloaded, the object is restored +and encoded again (possibly with a new config, if it has since changed), and uploaded +again. + +Autorepair does not validate the integrity of individual shards. This is protected +against by having multiple spare (redundant) shards for an object. Corrupt shards +are detected when the object is rebuilt, and removed before attempting to rebuild. +Autorepair also does not repair the metadata of objects. + +## Monitoring, alerting and statistics + +0-stor collects metrics about the system.
It can be configured with a 0-db-fs mountpoint, +which will trigger 0-stor to collect 0-db-fs statistics, in addition to some 0-db statistics +which are always collected. If the `prometheus_port` config option is set, 0-stor +will serve metrics on this port for scraping by Prometheus. You can then set up +graphs and alerts in Grafana. Some examples include: disk space used vs available +per 0-db backend, total entries in 0-db backends, which backends are tracked, ... +When 0-db-fs monitoring is enabled, statistics are also exported about the filesystem +itself, such as read/write speeds, syscalls, and internal metrics. + +For a full overview of all available stats, you can set up a Prometheus scraper against +a running instance, and use the embedded PromQL to see everything available. + +## Data safety + +As explained in the autorepair section, data is periodically checked and rebuilt if +0-db backends become unreachable. This ensures that data, once stored, remains available, +as long as the metadata is still present. When needed, the system can be expanded with more +0-db backends, and the encoding config can be changed if needed (e.g. to change encryption keys). + +## Performance + +QSFS is not a high-speed filesystem, nor is it a distributed filesystem. It is intended to +be used for archive purposes. For this reason, the qsfs stack focuses on data safety first. +Where needed, reliability is chosen over availability (i.e. we won't write data if we can't +guarantee all the conditions in the required storage profile are met). + +With that being said, there are currently 2 limiting factors in the setup: +- speed of the disk on which the local 0-db is running +- network + +The first is the speed of the disk for the local 0-db. This imposes a hard limit on +the throughput of the filesystem. Performance testing has shown that write speeds +on the filesystem reach roughly one third of the raw write performance of the +disk, and one half of the raw read performance.
Note that in the case of _very_ +fast disks (mostly NVMe SSDs), the CPU might become a bottleneck if it is old and +has a low clock speed, though this should rarely be a problem. + +The network is more of a soft cap. All 0-db data files will be encoded and distributed +over the network. This means that the upload speed of the node needs to be able to +handle this data throughput. In the case of random data (which is not compressible), +the required upload speed would be the write speed of the 0-db-fs, increased by the +overhead generated by the storage policy. There is no feedback to 0-db-fs if the upload +of data is lagging behind. This means that in cases where a sustained high-speed write +load is applied, the local 0-db might eventually grow bigger than the configured size limit +until the upload manages to catch up. If this happens for prolonged periods of time, it +is technically possible to run out of space on the disk. For this reason, you should +always have some extra space available on the disk to account for temporary cache +excess. + +When encoded data needs to be recovered from backend nodes (if it is not in cache), +the read speed will be equal to the connection speed of the slowest backend, as all +shards are recovered before the data is rebuilt. This means that recovery of historical +data will generally be a slow process. Since we primarily focus on archive storage, +we do not consider this a priority.
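Once the full stack is up, a quick sanity check is to write a file through the mountpoint and verify it reads back intact. A minimal sketch; `QSFS_MNT` defaults to the `/mnt` mountpoint used in this guide (the temp-dir fallback is only there so the script can be dry-run without a mounted qsfs):

```shell
# Hedged smoke test: write a random file into the qsfs mount and verify
# its checksum after reading it back.
QSFS_MNT="${QSFS_MNT:-/mnt}"
# Fall back to a temp dir if the mountpoint isn't writable (dry-run mode).
[ -w "$QSFS_MNT" ] || QSFS_MNT=$(mktemp -d)

# Create a 4 MiB file of random (incompressible) data.
SRC=$(mktemp)
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null

# Write through the filesystem, then read back and compare fingerprints.
cp "$SRC" "$QSFS_MNT/smoke-test.bin"
ORIG_SUM=$(sha256sum "$SRC" | cut -d' ' -f1)
BACK_SUM=$(sha256sum "$QSFS_MNT/smoke-test.bin" | cut -d' ' -f1)

if [ "$ORIG_SUM" = "$BACK_SUM" ]; then
    echo "qsfs smoke test: OK"
else
    echo "qsfs smoke test: checksum mismatch" >&2
    exit 1
fi

rm -f "$SRC" "$QSFS_MNT/smoke-test.bin"
```

Note that on a real mount, the first read after the local 0-db cache has been trimmed will exercise the recovery path described above and will be noticeably slower.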
diff --git a/collections/technology/qsss/product/concept/img/create_png b/collections/technology/qsss/product/concept/img/create_png new file mode 100755 index 0000000..b232adf --- /dev/null +++ b/collections/technology/qsss/product/concept/img/create_png @@ -0,0 +1,9 @@ +#!/bin/bash + +for name in ./*.mmd +do + output=$(basename $name mmd)png + echo $output + mmdc -i $name -o $output -w 4096 -H 2160 -b transparant + echo $name +done diff --git a/collections/technology/qsss/product/concept/img/data_origin.mmd b/collections/technology/qsss/product/concept/img/data_origin.mmd new file mode 100644 index 0000000..52c61dc --- /dev/null +++ b/collections/technology/qsss/product/concept/img/data_origin.mmd @@ -0,0 +1,13 @@ +graph TD + subgraph Data Origin + file[Large chunk of data = part_1part_2part_3part_4] + parta[part_1] + partb[part_2] + partc[part_3] + partd[part_4] + file -.- |split part_1|parta + file -.- |split part_2|partb + file -.- |split part 3|partc + file -.- |split part 4|partd + parta --> partb --> partc --> partd + end \ No newline at end of file diff --git a/collections/technology/qsss/product/concept/img/data_substitution.mmd b/collections/technology/qsss/product/concept/img/data_substitution.mmd new file mode 100644 index 0000000..89a1212 --- /dev/null +++ b/collections/technology/qsss/product/concept/img/data_substitution.mmd @@ -0,0 +1,20 @@ +graph TD + subgraph Data Substitution + parta[part_1] + partb[part_2] + partc[part_3] + partd[part_4] + parta -.-> vara[ A = part_1] + partb -.-> varb[ B = part_2] + partc -.-> varc[ C = part_3] + partd -.-> vard[ D = part_4] + end + subgraph Create equations with the data parts + eq1[A + B + C + D = 6] + eq2[A + B + C - D = 3] + eq3[A + B - C - D = 10] + eq4[ A - B - C - D = -4] + eq5[ A - B + C + D = 0] + eq6[ A - B - C + D = 5] + vara & varb & varc & vard --> eq1 & eq2 & eq3 & eq4 & eq5 & eq6 + end \ No newline at end of file diff --git a/collections/technology/qsss/product/concept/img/qsfs_principle.mmd 
b/collections/technology/qsss/product/concept/img/qsfs_principle.mmd new file mode 100644 index 0000000..e11fe92 --- /dev/null +++ b/collections/technology/qsss/product/concept/img/qsfs_principle.mmd @@ -0,0 +1,44 @@ +graph TD + subgraph Data Origin + file[Large chunk of data = part_1part_2part_3part_4] + parta[part_1] + partb[part_2] + partc[part_3] + partd[part_4] + file -.- |split part_1|parta + file -.- |split part_2|partb + file -.- |split part 3|partc + file -.- |split part 4|partd + parta --> partb --> partc --> partd + parta -.-> vara[ A = part_1] + partb -.-> varb[ B = part_2] + partc -.-> varc[ C = part_3] + partd -.-> vard[ D = part_4] + end + subgraph Create equations with the data parts + eq1[A + B + C + D = 6] + eq2[A + B + C - D = 3] + eq3[A + B - C - D = 10] + eq4[ A - B - C - D = -4] + eq5[ A - B + C + D = 0] + eq6[ A - B - C + D = 5] + vara & varb & varc & vard --> eq1 & eq2 & eq3 & eq4 & eq5 & eq6 + end + subgraph Disk 1 + eq1 --> |store the unique equation, not the parts|zdb1[A + B + C + D = 6] + end + subgraph Disk 2 + eq2 --> |store the unique equation, not the parts|zdb2[A + B + C - D = 3] + end + subgraph Disk 3 + eq3 --> |store the unique equation, not the parts|zdb3[A + B - C - D = 10] + end + subgraph Disk 4 + eq4 --> |store the unique equation, not the parts|zdb4[A - B - C - D = -4] + end + subgraph Disk 5 + eq5 --> |store the unique equation, not the parts|zdb5[ A - B + C + D = 0] + end + subgraph Disk 6 + eq6 --> |store the unique equation, not the parts|zdb6[A - B - C + D = 5] + end \ No newline at end of file diff --git a/collections/technology/qsss/product/concept/img/quantum_safe_architecture.mmd b/collections/technology/qsss/product/concept/img/quantum_safe_architecture.mmd new file mode 100644 index 0000000..3f7a2eb --- /dev/null +++ b/collections/technology/qsss/product/concept/img/quantum_safe_architecture.mmd @@ -0,0 +1,34 @@ +graph TD + subgraph Local laptop, computer or server + user[End User] + protocol[Storage protocol] + 
qsfs[Filesystem on local OS] + 0store[Quantum Safe storage engine] + end + subgraph Grid storage - metadata + etcd1[ETCD-1] + etcd2[ETCD-2] + etcd3[ETCD-3] + end + subgraph Grid storage - zero proof data + zdb1[ZDB-1] + zdb2[ZDB-2] + zdb3[ZDB-3] + zdb4[ZDB-4] + zdb5[ZDB-5] + zdb6[ZDB-6] + zdb7[ZDB-7] + user -.- protocol + protocol -.- qsfs + qsfs --- 0store + 0store --- etcd1 + 0store --- etcd2 + 0store --- etcd3 + 0store <-.-> zdb1[ZDB-1] + 0store <-.-> zdb2[ZDB-2] + 0store <-.-> zdb3[ZDB-3] + 0store <-.-> zdb4[ZDB-4] + 0store <-.-> zdb5[ZDB-5] + 0store <-.-> zdb6[ZDB-...] + 0store <-.-> zdb7[ZDB-N] + end \ No newline at end of file diff --git a/collections/technology/qsss/product/file_system/img/create_png b/collections/technology/qsss/product/file_system/img/create_png new file mode 100755 index 0000000..b232adf --- /dev/null +++ b/collections/technology/qsss/product/file_system/img/create_png @@ -0,0 +1,9 @@ +#!/bin/bash + +for name in ./*.mmd +do + output=$(basename $name mmd)png + echo $output + mmdc -i $name -o $output -w 4096 -H 2160 -b transparant + echo $name +done diff --git a/collections/technology/qsss/product/file_system/img/qsss_intro_.jpg b/collections/technology/qsss/product/file_system/img/qsss_intro_.jpg new file mode 100644 index 0000000..25fda06 Binary files /dev/null and b/collections/technology/qsss/product/file_system/img/qsss_intro_.jpg differ diff --git a/collections/technology/qsss/product/file_system/img/qsstorage_architecture.jpg b/collections/technology/qsss/product/file_system/img/qsstorage_architecture.jpg new file mode 100644 index 0000000..811e6ab Binary files /dev/null and b/collections/technology/qsss/product/file_system/img/qsstorage_architecture.jpg differ diff --git a/collections/technology/qsss/product/file_system/qss_filesystem.md b/collections/technology/qsss/product/file_system/qss_filesystem.md new file mode 100644 index 0000000..54c3638 --- /dev/null +++ b/collections/technology/qsss/product/file_system/qss_filesystem.md 
@@ -0,0 +1,39 @@ + + +![](img/qsss_intro_.jpg) + +# Quantum Safe Filesystem + +A redundant filesystem that can store PBs (millions of gigabytes) of information. + +Unique features: + +- Unlimitedly scalable (many petabytes) filesystem +- Quantum Safe: + - On the TFGrid, no farmer knows what the data is about + - Even a quantum computer cannot decrypt +- Data can't be lost + - Protection against [datarot](datarot): data will auto-repair +- Data is kept forever +- Data is dispersed over multiple sites +- Sites can go down without data loss +- Up to 10x more efficient than storing on classic storage cloud systems +- Can be mounted as a filesystem on any OS or any deployment system (OSX, Linux, Windows, Docker, Kubernetes, TFGrid, ...) +- Compatible with all data workloads (though not with high-performance data-driven workloads like a database) +- Self-healing: when a node or disk is lost, the storage system can restore the original redundancy level +- Helps with compliance to regulations like GDPR (as the hosting facility has no view on what is stored; information is encrypted and incomplete) +- Hybrid: can be installed onsite, public, private, ... +- Read-write caching on the encoding node (the front end) + + +## Architecture + +By using our filesystem inside a virtual machine or Kubernetes, the TFGrid user can deploy any storage application on top, e.g. Minio for S3 storage or OwnCloud as an online file server. + +![](img/qsstorage_architecture.jpg) + +Any storage workload can be deployed on top of zstor.
+ +!!!def alias:quantumsafe_filesystem,planetary_fs,planet_fs,quantumsafe_file_system,zstor,qsfs + +!!!include:qsss_toc \ No newline at end of file diff --git a/collections/technology/qsss/product/file_system/qss_system.mmd b/collections/technology/qsss/product/file_system/qss_system.mmd new file mode 100644 index 0000000..5c8e8b7 --- /dev/null +++ b/collections/technology/qsss/product/file_system/qss_system.mmd @@ -0,0 +1,14 @@ +graph TD +subgraph Data Ingress and Egress +qss[Quantum Safe Storage Engine] +end +subgraph Physical Data storage +st1[Virtual Storage Device 1] +st2[Virtual Storage Device 2] +st3[Virtual Storage Device 3] +st4[Virtual Storage Device 4] +st5[Virtual Storage Device 5] +st6[Virtual Storage Device 6] +st7[Virtual Storage Device 7] +qss -.-> st1 & st2 & st3 & st4 & st5 & st6 & st7 +end \ No newline at end of file diff --git a/collections/technology/qsss/product/img/create_png b/collections/technology/qsss/product/img/create_png new file mode 100755 index 0000000..b232adf --- /dev/null +++ b/collections/technology/qsss/product/img/create_png @@ -0,0 +1,9 @@ +#!/bin/bash + +for name in ./*.mmd +do + output=$(basename $name mmd)png + echo $output + mmdc -i $name -o $output -w 4096 -H 2160 -b transparant + echo $name +done diff --git a/collections/technology/qsss/product/img/data_origin.mmd b/collections/technology/qsss/product/img/data_origin.mmd new file mode 100644 index 0000000..52c61dc --- /dev/null +++ b/collections/technology/qsss/product/img/data_origin.mmd @@ -0,0 +1,13 @@ +graph TD + subgraph Data Origin + file[Large chunk of data = part_1part_2part_3part_4] + parta[part_1] + partb[part_2] + partc[part_3] + partd[part_4] + file -.- |split part_1|parta + file -.- |split part_2|partb + file -.- |split part 3|partc + file -.- |split part 4|partd + parta --> partb --> partc --> partd + end \ No newline at end of file diff --git a/collections/technology/qsss/product/img/data_substitution.mmd 
b/collections/technology/qsss/product/img/data_substitution.mmd new file mode 100644 index 0000000..89a1212 --- /dev/null +++ b/collections/technology/qsss/product/img/data_substitution.mmd @@ -0,0 +1,20 @@ +graph TD + subgraph Data Substitution + parta[part_1] + partb[part_2] + partc[part_3] + partd[part_4] + parta -.-> vara[ A = part_1] + partb -.-> varb[ B = part_2] + partc -.-> varc[ C = part_3] + partd -.-> vard[ D = part_4] + end + subgraph Create equations with the data parts + eq1[A + B + C + D = 6] + eq2[A + B + C - D = 3] + eq3[A + B - C - D = 10] + eq4[ A - B - C - D = -4] + eq5[ A - B + C + D = 0] + eq6[ A - B - C + D = 5] + vara & varb & varc & vard --> eq1 & eq2 & eq3 & eq4 & eq5 & eq6 + end \ No newline at end of file diff --git a/collections/technology/qsss/product/img/qsfs_principle.mmd b/collections/technology/qsss/product/img/qsfs_principle.mmd new file mode 100644 index 0000000..15a8b2f --- /dev/null +++ b/collections/technology/qsss/product/img/qsfs_principle.mmd @@ -0,0 +1,44 @@ +graph TD + subgraph Data Origin + file[Large chunk of data = part_1part_2part_3part_4] + parta[part_1] + partb[part_2] + partc[part_3] + partd[part_4] + file -.- |split part_1|parta + file -.- |split part_2|partb + file -.- |split part 3|partc + file -.- |split part 4|partd + parta --> partb --> partc --> partd + parta -.-> vara[ A = part_1] + partb -.-> varb[ B = part_2] + partc -.-> varc[ C = part_3] + partd -.-> vard[ D = part_4] + end + subgraph Create equations with the data parts + eq1[A + B + C + D = 6] + eq2[A + B + C - D = 3] + eq3[A + B - C - D = 10] + eq4[ A - B - C - D = -4] + eq5[ A - B + C + D = 0] + eq6[ A - B - C + D = 5] + vara & varb & varc & vard --> eq1 & eq2 & eq3 & eq4 & eq5 & eq6 + end + subgraph Disk 1 + eq1 --> |store the unique equation, not the parts|zdb1[A + B + C + D = 6] + end + subgraph Disk 2 + eq2 --> |store the unique equation, not the parts|zdb2[A + B + C - D = 3] + end + subgraph Disk 3 + eq3 --> |store the unique equation, not the
parts|zdb3[A + B - C - D = 10] + end + subgraph Disk 4 + eq4 --> |store the unique equation, not the parts|zdb4[A - B - C - D = -4] + end + subgraph Disk 5 + eq5 --> |store the unique equation, not the parts|zdb5[ A - B + C + D = 0] + end + subgraph Disk 6 + eq6 --> |store the unique equation, not the parts|zdb6[A - B - C + D = 5] + end \ No newline at end of file diff --git a/collections/technology/qsss/product/img/qss_system.jpg b/collections/technology/qsss/product/img/qss_system.jpg new file mode 100644 index 0000000..3c4656e Binary files /dev/null and b/collections/technology/qsss/product/img/qss_system.jpg differ diff --git a/collections/technology/qsss/product/img/quantumsafe_storage_algo.jpg b/collections/technology/qsss/product/img/quantumsafe_storage_algo.jpg new file mode 100644 index 0000000..448c7a7 Binary files /dev/null and b/collections/technology/qsss/product/img/quantumsafe_storage_algo.jpg differ diff --git a/collections/technology/qsss/product/img/tf_banner_grid_.jpg b/collections/technology/qsss/product/img/tf_banner_grid_.jpg new file mode 100644 index 0000000..fb093de Binary files /dev/null and b/collections/technology/qsss/product/img/tf_banner_grid_.jpg differ diff --git a/collections/technology/qsss/product/qss_algorithm.md b/collections/technology/qsss/product/qss_algorithm.md new file mode 100644 index 0000000..b92dfaa --- /dev/null +++ b/collections/technology/qsss/product/qss_algorithm.md @@ -0,0 +1,82 @@ +# Quantum Safe Storage Algoritm + +![](img/tf_banner_grid_.jpg) + +The Quantum Safe Storage Algorithm is the heart of the Storage engine. The storage engine takes the original data objects and creates data part descriptions that it stores over many virtual storage devices (ZDB/s) + + +![](../img/.jpg) + +Data gets stored over multiple ZDB's in such a way that data can never be lost. 
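Append-only semantics are what make the "can never be lost" claim possible: writes only ever add records, so history stays recoverable. A minimal sketch of the idea (ours, purely illustrative; the real ZDB is a separate project with its own on-disk format):

```python
# Minimal append-only key-value store sketch (illustrative only,
# not the actual ZDB implementation).
class AppendOnlyStore:
    def __init__(self):
        self._log = []  # every write is appended; nothing is overwritten

    def put(self, key, value):
        self._log.append((key, value))

    def get(self, key):
        # the latest appended record for a key wins
        for k, v in reversed(self._log):
            if k == key:
                return v
        return None

    def history(self, key):
        # older versions remain in the log and stay recoverable
        return [v for k, v in self._log if k == key]

store = AppendOnlyStore()
store.put("a", 1)
store.put("a", 2)
assert store.get("a") == 2        # latest value
assert store.history("a") == [1, 2]  # old value still present
```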
+ +Unique features + +- data is always appended and can never be lost +- even a quantum computer cannot decrypt the data +- data is spread over multiple sites; sites can be lost and the data will still be available +- protects against [datarot](datarot) + +### Why + +Today we produce more data than ever before. We cannot keep making full copies of data to make sure it is stored reliably. That simply does not scale. We need to move from securing the whole dataset to securing all the objects that make up a dataset. + +ThreeFold is using space technology to store data (fragments) over multiple devices (physical storage devices in 3Nodes). The solution does not distribute and store parts of an object (file, photo, movie...) but stores descriptions of the parts of an object. This can be visualized by thinking of it as equations. + + +### Details + +Let a,b,c,d.... be the parts of that original object. You could create endless unique equations using these parts. A simple example: let's assume we have an original object with 3 parts that have the following values: +``` +a=1 +b=2 +c=3 +``` +(For reference: a part of a real-world object is not a simple number like `1` but a unique digital number describing the part, like its binary code `110101011101011101010111101110111100001010101111011.....`.) With these numbers we could create an endless number of equations: +``` +1: a+b+c=6 +2: c-b-a=0 +3: b-c+a=0 +4: 2b+a-c=2 +5: 5c-b-a=12 +...... +``` +Mathematically we only need 3 equations to describe the content (=value) of the fragments, but creating more adds reliability. Now store those equations distributed (one equation per physical storage device) and forget the original object. We no longer have access to the values of a, b and c; we just remember the locations of all the equations created with the original data fragments. Mathematically we need three equations (any 3 of the total) to recover the original values of a, b and c.
So request 3 of the many equations, and the first 3 to arrive are good enough to recalculate the original values. Three randomly retrieved equations are: + +``` +5c-b-a=12 +b-c+a=0 +2b+a-c=2 +``` +And this is a mathematical system we can solve: +- First: `b-c+a=0 -> b=c-a` +- Second: `2b+a-c=2 -> c=2b+a-2 -> c=2(c-a)+a-2 -> c=2c-a-2 -> c=a+2` +- Third: `5c-b-a=12 -> 5(a+2)-(c-a)-a=12 -> 5a+10-(a+2)+a-a=12 -> 4a+8=12 -> 4a=4 -> a=1` + +Now that we know `a=1` we can solve the rest: `c=a+2=3` and `b=c-a=2`. From 3 random equations we have regenerated the original fragments and can now recreate the original object. + +The redundancy and reliability in such a system come from creating (more than needed) equations and storing them. As shown, any sufficiently large subset of these equations, in any order, can recreate the original fragments, so redundancy comes at a much lower overhead. + +### Example of 16/4 + +![](img/quantumsafe_storage_algo.jpg) + + +Each object is fragmented into 16 parts. So we have 16 original fragments for which we need 16 equations to mathematically describe them. Now let's make 20 equations and store them dispersedly on 20 devices. To recreate the original object we only need 16 equations, the first 16 that we find and collect, which allows us to recover the fragments and, in the end, the original object. We could lose any 4 of those 20 equations. + +The likelihood of losing 4 independent, dispersed storage devices at the same time is very low. Since we continuously monitor all of the stored equations, we can create additional equations immediately when one of them goes missing, giving us auto-regeneration of lost data and a self-repairing storage system. The overhead in this example is 4 out of 20, a mere **20%**, instead of (up to) **400%**. + +### Content Distribution Policy (10/50) + +This system can be used as a backend for content delivery networks.
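In both the 16/4 example and this policy, retrieval amounts to solving a small linear system with whichever equations arrive first. As a sanity check, the three retrieved equations from the worked example above can be solved in a few lines of Python (our illustrative sketch, not part of the product):

```python
import numpy as np

# The three retrieved equations, rewritten as coefficient rows for (a, b, c):
#   b - c + a  = 0   ->  [ 1,  1, -1] = 0
#   2b + a - c = 2   ->  [ 1,  2, -1] = 2
#   5c - b - a = 12  ->  [-1, -1,  5] = 12
A = np.array([[1, 1, -1],
              [1, 2, -1],
              [-1, -1, 5]], dtype=float)
rhs = np.array([0, 2, 12], dtype=float)

a, b, c = np.linalg.solve(A, rhs)  # recovers the fragments a=1, b=2, c=3
```

Any other 3 of the stored equations would do; the solver neither knows nor cares which equations answered first.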
+ +Imagine a movie stored in 60 locations of which we can lose 50 at the same time. + +If someone now wants to download the data, the 10 locations that answer fastest will provide enough of the data parts to rebuild the data. + +The overhead here is much higher than in the previous example, but still an order of magnitude lower than in other CDN systems. + + +!!!def alias:quantumsafe_storage_algo,quantumsafe_storage_algorithm,space_algo,space_algorithm,quantum_safe_storage_algo,qs_algo,qs_codec + +!!!include:qsss_toc \ No newline at end of file diff --git a/collections/technology/qsss/product/qss_datarot.md b/collections/technology/qsss/product/qss_datarot.md new file mode 100644 index 0000000..9ae4e4e --- /dev/null +++ b/collections/technology/qsss/product/qss_datarot.md @@ -0,0 +1,8 @@ +# Datarot Cannot Happen on our Storage System + +Datarot is the phenomenon of data storage degrading over time and becoming unreadable, e.g. on a hard disk. +The storage system provided by ThreeFold intercepts this silent data corruption so that it cannot pass by unnoticed. + +> see also https://en.wikipedia.org/wiki/Data_degradation + +!!!def alias:bitrot,datarot \ No newline at end of file diff --git a/collections/technology/qsss/product/qss_zero_knowledge_proof.md b/collections/technology/qsss/product/qss_zero_knowledge_proof.md new file mode 100644 index 0000000..71f4de3 --- /dev/null +++ b/collections/technology/qsss/product/qss_zero_knowledge_proof.md @@ -0,0 +1,11 @@ + +# Zero Knowledge Proof Storage System + +The quantum safe storage system is zero-knowledge-proof compliant. The storage system is split into 2 components: the actual storage devices used to store the data (ZDBs) and the Quantum Safe Storage engine.
+ + +![](img/qss_system.jpg) + +The zero-knowledge-proof compliance comes from the fact that all the physical storage nodes (3Nodes) can prove that they store a valid part of the data that the quantum safe storage engine (QSSE) has stored on multiple independent devices. The QSSE can validate that all the QSSE storage devices hold a valid part of the original information. The storage devices, however, have no idea what the original stored data is, as they only hold a part (description) of the original data and have no access to the original data part or the complete original data objects. + +!!!def \ No newline at end of file diff --git a/collections/technology/qsss/qss_benefits.md b/collections/technology/qsss/qss_benefits.md new file mode 100644 index 0000000..7fb099b --- /dev/null +++ b/collections/technology/qsss/qss_benefits.md @@ -0,0 +1,12 @@ +![](img/filesystem_abstract.jpg) + +# Quantum Safe Storage System benefits + +!!!include:qss_benefits_ + +!!!include:qsss_toc + + + + + diff --git a/collections/technology/qsss/qss_benefits_.md b/collections/technology/qsss/qss_benefits_.md new file mode 100644 index 0000000..bb27444 --- /dev/null +++ b/collections/technology/qsss/qss_benefits_.md @@ -0,0 +1,6 @@ +- Up to 10x more efficient (power and usage of hardware) +- Ultra reliable, data cannot be lost +- Ultra safe & private +- Ultra scalable +- Sovereign, data is close to you in the country of your choice +- Truly peer-to-peer, by everyone for everyone diff --git a/collections/technology/qsss/qsss.md b/collections/technology/qsss/qsss.md new file mode 100644 index 0000000..e9b34c4 --- /dev/null +++ b/collections/technology/qsss/qsss.md @@ -0,0 +1,2 @@ + +!!!include:qsss_home \ No newline at end of file diff --git a/collections/technology/qsss/qsss2_home.md b/collections/technology/qsss/qsss2_home.md new file mode 100644 index 0000000..d62400a --- /dev/null +++ b/collections/technology/qsss/qsss2_home.md @@ -0,0 +1,21 @@ +![](img/qsstorage_architecture.jpg) + +# Quantum
Safe Storage System + +Imagine a storage system with the following benefits: + +!!!include:qss_benefits_ + +> This is not a dream: it exists and is the underpinning of the TFGrid. + +Our storage architecture follows the true peer-to-peer design of the TF Grid. Any participating node only stores small incomplete parts of objects (files, photos, movies, databases...) by offering a slice of its present (local) storage devices. Managing the storage and retrieval of all of these distributed fragments is done by software that creates development or end-user interfaces for this storage algorithm. We call this '**dispersed storage**'. + +Peer-to-peer provides the unique proposition of selecting storage providers that match your application and service or business criteria. For example, you might be looking to store data for your application in a certain geographic area (for governance and compliance reasons). Also, you might want to use different "storage policies" for different types of data, for example live versus archived data. All of these use cases are possible with this storage architecture and can be built by using the same building blocks produced by farmers and consumed by developers or end-users. + + +!!!include:qsss_toc + +!!!def alias:qsss,quantum_safe_storage_system + + + diff --git a/collections/technology/qsss/qsss_home.md b/collections/technology/qsss/qsss_home.md new file mode 100644 index 0000000..a6efb37 --- /dev/null +++ b/collections/technology/qsss/qsss_home.md @@ -0,0 +1,34 @@ + + +

<h1> Quantum Safe Storage System </h1> + +<h2>Table of Contents</h2>
+ +- [Introduction](#introduction) +- [QSS Benefits](#qss-benefits) +- [Peer-to-Peer Design](#peer-to-peer-design) +- [Overview](#overview) + +*** + +## Introduction + +ThreeFold offers a quantum safe storage system (QSS). QSS is a decentralized, globally distributed data storage system. It is unbreakable, self-healing, append-only and immutable. + +## QSS Benefits + +Imagine a storage system with the following benefits: + +{{#include ./qss_benefits_.md}} + +This is not a dream: it exists and is the underpinning of the TFGrid. + +## Peer-to-Peer Design + +Our storage architecture follows the true peer-to-peer design of the TF Grid. Any participating node only stores small incomplete parts of objects (files, photos, movies, databases...) by offering a slice of its present (local) storage devices. Managing the storage and retrieval of all of these distributed fragments is done by software that creates development or end-user interfaces for this storage algorithm. We call this '**dispersed storage**'. + +Peer-to-peer provides the unique proposition of selecting storage providers that match your application and service or business criteria. For example, you might be looking to store data for your application in a certain geographic area (for governance and compliance reasons). Also, you might want to use different "storage policies" for different types of data, for example live versus archived data. All of these use cases are possible with this storage architecture and can be built by using the same building blocks produced by farmers and consumed by developers or end-users. + +## Overview + +![](img/qsss_intro_0_.jpg) \ No newline at end of file diff --git a/collections/technology/qsss/qsss_toc.md b/collections/technology/qsss/qsss_toc.md new file mode 100644 index 0000000..57670da --- /dev/null +++ b/collections/technology/qsss/qsss_toc.md @@ -0,0 +1,7 @@ +

<h1> Quantum Safe Storage More Info </h1> + +<h2>Table of Contents</h2>
+ +- [Quantum Safe Storage Overview](./qsss_home.md) +- [Quantum Safe Filesystem](qss_filesystem) +- [Quantum Safe Storage Algorithm](quantumsafe_storage_algo) \ No newline at end of file diff --git a/collections/technology/qsss/roadmap/img/roadmap.jpg b/collections/technology/qsss/roadmap/img/roadmap.jpg new file mode 100644 index 0000000..ccb52e9 Binary files /dev/null and b/collections/technology/qsss/roadmap/img/roadmap.jpg differ diff --git a/collections/technology/qsss/roadmap/quantumsafe_roadmap.md b/collections/technology/qsss/roadmap/quantumsafe_roadmap.md new file mode 100644 index 0000000..3613d08 --- /dev/null +++ b/collections/technology/qsss/roadmap/quantumsafe_roadmap.md @@ -0,0 +1,7 @@ +![roadmap](img/roadmap.jpg) + +# Roadmap + +>TODO: to be filled in + +> See Quantum Safe Storage project [kanban](https://github.com/orgs/threefoldtech/projects/152). \ No newline at end of file diff --git a/collections/technology/qsss/sidebar.md b/collections/technology/qsss/sidebar.md new file mode 100644 index 0000000..ec25d46 --- /dev/null +++ b/collections/technology/qsss/sidebar.md @@ -0,0 +1,20 @@ +- [**Home**](@threefold_home) +- [**Technology**](@technology) +------------ +**Quantum Safe Filesystem** + +- [Home](@qsss_home) +- [Filesystem](@qss_filesystem) +- [Algorithm](@qss_algorithm) + + + + + + + + diff --git a/collections/technology/qsss/specs_todo/img/specs_header.jpg b/collections/technology/qsss/specs_todo/img/specs_header.jpg new file mode 100644 index 0000000..e84a21f Binary files /dev/null and b/collections/technology/qsss/specs_todo/img/specs_header.jpg differ diff --git a/collections/technology/qsss/specs_todo/policy.md b/collections/technology/qsss/specs_todo/policy.md new file mode 100644 index 0000000..3d58d59 --- /dev/null +++ b/collections/technology/qsss/specs_todo/policy.md @@ -0,0 +1,3 @@ +# zstor filesystem (zstor) Policy + +Describe how it works...
diff --git a/collections/technology/qsss/specs_todo/qss_specs.md b/collections/technology/qsss/specs_todo/qss_specs.md new file mode 100644 index 0000000..0e66004 --- /dev/null +++ b/collections/technology/qsss/specs_todo/qss_specs.md @@ -0,0 +1,68 @@ +![specs](img/specs_header.jpg) + +# System requirements + +A system that makes it easy to provision storage capacity on the TF Grid: +- user can create X storage nodes in random or specific locations +- user can list their storage nodes +- check node status/info in some shape or form in a monitoring solution +- external authentication/payment system using the ThreeFold Connect app +- user can delete their storage nodes +- user can provision more storage nodes +- user can increase the total size of storage solutions +- user can install the quantum safe filesystem on any Linux based system, physical or virtual + +# Non-functional requirements + +- How many expected concurrent users: not applicable - each user will have their own local binary and software install. +- How many users on the system: 10000-100000 +- Data store: FUSE filesystem plus local and grid based ZDBs +- How critical is the system? It needs to be alive all the time. +- What do we know about the external payment system? +ThreeFold Connect; use a QR code for payments and validate on the blockchain +- Life cycle of the storage nodes? How does the user keep their nodes alive? The local binary / application has a wallet from which it can pay for the existing and new storage devices. This wallet needs to be kept topped up. +- When the user is asked to sign the deployment of 20 storage nodes: + - will the user sign each single reservation, or should the system itself sign it for the user and show the QR code only for payments? +- Payments should be made to a specific user wallet, with a background service that extends the user pools or an /extend command in the bot conversation (to be resolved) +- Configuration and all metadata should be stored as a hash / private key.
With this information you are able to regain access to your stored data from anywhere. + + +# Components mapping / SALs + +- Entities: User, Storage Node +- ReservationBuilder: builds the reservation for the user to sign (note the QR code data size limit is 3KB) +- We need to define how many nodes we can deploy at a time; the data shouldn't exceed 3KB for the QR code. If it exceeds the limit, should we split the reservations? + +- UserInfo: user info is loaded from the ThreeFold login system +- Blockchain Node (role, configurations) +- Interface to ThreeFold Connect (authentication+payment) /identify + generate payments +- User notifications / topup +- Monitoring: monitoring + redeployment of the solutions if they go down. When redeploying, who owns the reservation to delete (can be fixed with the delete signers field) and redeploy? To deploy we need the user identity, or should we inform the user in Telegram and ask them to /redeploy? +- Logging + +# Tech stack + +- [JS-SDK](https://github.com/threefoldtech/js-sdk) (?)
+ +- [0-db](https://github.com/threefoldtech/0-db) +- [0-db-fs](https://github.com/threefoldtech/0-db-fs) +- [0-stor_v2](https://github.com/threefoldtech/0-stor_v2) +- [quantum_storage](https://github.com/threefoldtech/quantum-storage) + + + +# Blockers + + +Idea from blockchain jukebox brainstorm: + +## payments +- QR code contains threebot://signandpay/#https://tf.grid/api/a6254a4a-bdf4-11eb-8529-0242ac130003 (can also be a universal link) +- App gets URL +- URL gives data +- { DataToSign : {RESERVATIONDETAILS}, Payment: {PAYMENTDETAILS}, CallbackUrl: {CALLBACKURL} } +- App signs reservation, makes payment, calls callbackurl { SignedData : {SIGNEDRESERVATION}, Payment: {FINISHED_PAYMENTDETAILS}} + +Full flow: +- User logs in using the normal login flow +- User scans QR +- User confirms reservation and payment in the app diff --git a/collections/technology/qsss/specs_todo/quantum_safe_filesystem.md b/collections/technology/qsss/specs_todo/quantum_safe_filesystem.md new file mode 100644 index 0000000..cb47e70 --- /dev/null +++ b/collections/technology/qsss/specs_todo/quantum_safe_filesystem.md @@ -0,0 +1,3 @@ +# Specs zstor filesystem + +- [Quantum Safe File System](quantum_safe_filesystem_2_6) \ No newline at end of file diff --git a/collections/technology/qsss/specs_todo/quantum_safe_filesystem_2_6.md b/collections/technology/qsss/specs_todo/quantum_safe_filesystem_2_6.md new file mode 100644 index 0000000..0051d2c --- /dev/null +++ b/collections/technology/qsss/specs_todo/quantum_safe_filesystem_2_6.md @@ -0,0 +1,20 @@ +# zstor filesystem 2.6 + +## requirements + +- redundancy/uptime + - data can never be lost if older than 20 min (avg will be 7.5 min, because we use 15 min push) + - if a datacenter or node goes down and we are within the storage policy, the storage stays available +- reliability + - data cannot have hidden corruption; in case of bitrot the FS will automatically recover +- self healing + - when the data policy is lower than the required level, the system should re-silver (means
make sure policy is intact again) + +## NEW + +- 100% redundancy + +## architecture + +!!!include:quantum_safe_filesystem_architecture +!!!include:quantum_safe_filesystem_sequence_graph diff --git a/collections/technology/qsss/specs_todo/quantum_safe_filesystem_architecture.md b/collections/technology/qsss/specs_todo/quantum_safe_filesystem_architecture.md new file mode 100644 index 0000000..ef373f7 --- /dev/null +++ b/collections/technology/qsss/specs_todo/quantum_safe_filesystem_architecture.md @@ -0,0 +1,37 @@ + +## zstor Architecture + +```mermaid +graph TD + subgraph TFGridLoc2 + ZDB5 + ZDB6 + ZDB7 + ZDB8 + ETCD3 + end + subgraph TFGridLoc1 + ZDB1 + ZDB2 + ZDB3 + ZDB4 + ETCD1 + ETCD2 + KubernetesController --> ETCD1 + KubernetesController --> ETCD2 + KubernetesController --> ETCD3 + end + + + subgraph eVDC + PlanetaryFS --> ETCD1 & ETCD2 & ETCD3 + PlanetaryFS --> MetadataStor + PlanetaryFS --> ReadWriteCache + MetadataStor --> LocalZDB + ReadWriteCache --> LocalZDB + LocalZDB & PlanetaryFS --> ZeroStor + ZeroStor --> ZDB1 & ZDB2 & ZDB3 & ZDB4 & ZDB5 & ZDB6 & ZDB7 & ZDB8 + end + + +``` \ No newline at end of file diff --git a/collections/technology/qsss/specs_todo/quantum_safe_filesystem_sequence_graph.md b/collections/technology/qsss/specs_todo/quantum_safe_filesystem_sequence_graph.md new file mode 100644 index 0000000..28b43d7 --- /dev/null +++ b/collections/technology/qsss/specs_todo/quantum_safe_filesystem_sequence_graph.md @@ -0,0 +1,40 @@ +## zstor Sequence Diagram + +```mermaid +sequenceDiagram + participant user as user + participant fs as 0-fs + participant lzdb as local 0-db + participant zstor as 0-stor + participant etcd as ETCD + participant zdbs as backend 0-dbs + participant mon as Monitor + + alt Writing data + user->>fs: write data to files + fs->>lzdb: write data blocks + opt Datafile is full + lzdb->>zstor: encode and chunk data file + zstor->>zdbs: write encoded datafile chunks to the different backends + zstor->>etcd: write metadata about 
encoded file to metadata storage + end + else Reading data + user->>fs: read data from file + fs->>lzdb: read data blocks + opt Datafile is missing + lzdb->>zstor: request retrieval of data file + zstor->>etcd: load file encoding and storage metadata + zstor->>zdbs: read encoded datafile chunks from multiple backends and rebuilds original datafile + zstor->>lzdb: replaces the missing datafile + end + end + + loop Monitor action + mon->>lzdb: delete local data files which are full and have been encoded, AND have not been accessed for some time + mon->>zdbs: monitors health of used namespaces + opt Namespace is lost or corrupted + mon->>zstor: checks storage configuration + mon->>zdbs: rebuilds missing shard on new namespace from storage config + end + end +``` diff --git a/collections/technology/qsss/testplan/img/failure_points.jpg b/collections/technology/qsss/testplan/img/failure_points.jpg new file mode 100644 index 0000000..5bb3924 Binary files /dev/null and b/collections/technology/qsss/testplan/img/failure_points.jpg differ diff --git a/collections/technology/qsss/testplan/img/testplan_points.mmd b/collections/technology/qsss/testplan/img/testplan_points.mmd new file mode 100644 index 0000000..7ffae96 --- /dev/null +++ b/collections/technology/qsss/testplan/img/testplan_points.mmd @@ -0,0 +1,34 @@ +graph TD + subgraph Local laptop, computer or server + user[End User *11* ] + protocol[Storage Protocol *6*] + qsfs[Filesystem *7*] + 0store[Storage Engine *8*] + end + subgraph Grid storage - metadata + etcd1[ETCD-1 *9*] + etcd2[ETCD-2 *9*] + etcd3[ETCD-3 *9*] + end + subgraph Grid storage - zero proof data + zdb1[ZDB-1 *10*] + zdb2[ZDB-2 *10*] + zdb3[ZDB-3 *10*] + zdb4[ZDB-4 *10*] + zdb5[ZDB-5 *10*] + zdb6[ZDB-... 
*10*] + zdb7[ZDB-N *10*] + user -.- |-1-| protocol + protocol -.- |-2-| qsfs + qsfs --- |-3-| 0store + 0store --- |-4-| etcd1 + 0store --- |-4-| etcd2 + 0store --- |-4-| etcd3 + 0store <-.-> |-5-| zdb1 + 0store <-.-> |-5-| zdb2 + 0store <-.-> |-5-| zdb3 + 0store <-.-> |-5-| zdb4 + 0store <-.-> |-5-| zdb5 + 0store <-.-> |-5-| zdb6 + 0store <-.-> |-5-| zdb7 + end \ No newline at end of file diff --git a/collections/technology/qsss/testplan/testplan.md b/collections/technology/qsss/testplan/testplan.md new file mode 100644 index 0000000..4b4a431 --- /dev/null +++ b/collections/technology/qsss/testplan/testplan.md @@ -0,0 +1,131 @@ +## Quantum Safe Storage Testplan + +### Prerequisites +The quantum safe storage system runs on the following platforms: +- bare metal Linux installation +- Kubernetes cluster with Helm installation scripts + +### Installation +For instructions on installing the QSFS please see the manual [here](../manual/README.md). + +#### Bare metal Linux +The software comes as a single binary which will install all the necessary components (local) to run the quantum safe file system. The server is the storage front end and the TF Grid is the storage backend. The storage backend configuration can be provided in two different ways: +- the user has access to the eVDC facility of ThreeFold and is able to download the Kubernetes configuration file. +- the binary has built-in options to ask for backend storage components to be provisioned and delivered. + +### Architecture and failure modes + +Quantum Safe Storage is built from a number of components. The components are connected and interact, and therefore there are a number of failure modes that need to be considered for the test plan.
+ +Failure modes which we have test plans and cases for: + + - [Enduser](#enduser) + - [Storage protocol](#storage-protocol) + - [Filesystem](#filesystem) + - [Storage engine](#storage-engine) + - [Metadata store](#metadata-store) + - [Physical storage devices](#physical-storage-devices) + - [Interaction Enduser - Storage Protocol](#enduser-to-storage-protocol) + - [Interaction Storage Protocol - Filesystem](#storage-protocol---filesystem) + - [Interaction Filesystem - Storage Engine](#filesystem-to-storage-engine) + - [Interaction Storage Engine - Physical Storage Device](#storage-engine-to-physical-storage-device) + - [Interaction Storage Engine - Metadata Store](#storage-engine-to-metadata-store) + +![](img/failure_points.jpg) + +#### Enduser + +Failure scenarios +- End user enters weird / wrong data during QSS install +- End user deletes / changes things on the QSS engine host + - End user stops / deletes the storage protocol application or any of its configuration / temp storage facilities + - End user deletes the quantum safe file system and / or its configuration files + - End user deletes the storage engine and / or its configuration and temp storage files + + +Tests to conduct +#### Storage protocol + + +Failure scenarios + +The storage protocol can be anything from IPFS, minio and sFTP to all the other protocols available. The possible failure modes for these are endless to test. For a couple of well-known protocols we will do some basic testing: +- **minio** +- **sFTP** +- **syncthing** +- **scp** + +For all these protocols a number of simple tests can be done: + - stop the protocol binary while data is being pushed in, restart the binary and see if normal operation resumes (data loss, e.g. data in transfer when the failure happened, is acceptable). + - make changes to the config file (policy, parameters, etc.) and see if normal operation resumes.
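The stop/restart case above can be given a concrete shape. The sketch below exercises the pattern against a stand-in writer process; the real targets would be minio, sFTP, etc., and the helper names are ours:

```python
import os, subprocess, sys, tempfile, time

# Stand-in for a storage-protocol daemon: it appends one record every 10 ms.
WRITER = """
import sys, time
i = 0
while True:
    with open(sys.argv[1], "a") as f:
        f.write(str(i) + "\\n")
    i += 1
    time.sleep(0.01)
"""

def run_kill_restart_test():
    with tempfile.TemporaryDirectory() as d:
        data = os.path.join(d, "data.log")
        # push data through the "protocol", then kill it mid-stream
        p = subprocess.Popen([sys.executable, "-c", WRITER, data])
        time.sleep(0.3)
        p.kill(); p.wait()
        committed = open(data).read()
        # restart and check that normal operation resumes
        p = subprocess.Popen([sys.executable, "-c", WRITER, data])
        time.sleep(0.3)
        p.kill(); p.wait()
        after = open(data).read()
        # data written before the failure must survive intact;
        # losing in-flight data is acceptable, silent corruption is not
        assert after.startswith(committed)
        assert len(after) > len(committed)
        return True

if __name__ == "__main__":
    run_kill_restart_test()
```

The same harness shape applies to the config-change test: rewrite the config between the two runs instead of simply restarting.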
+ +Tests to conduct +#### Filesystem + +Direct access to the filesystem eliminates the dependency on the interface protocol. The filesystem provides a well-known interface to browse, create, store and retrieve data in an easy and structured way. + +Tests to conduct. Testing is required to see if the filesystem can deal with: + - create a large number of nested directories and see if this is causing problems + - create a large number of small files and see if this is creating problems + - create a number of very large files (1GB+) and see if this is causing any problems + - delete a (large) number of files + - delete a (large) number of directories + - move a (large) number of files + - move a (large) number of directories + +#### Storage engine + +The storage engine takes files (data) and runs a "forward error correcting algorithm" on the data. The algorithm requires a "storage" policy to specify how to treat the inbound data and where to store the resulting data descriptions. This engine is non-redundant at this point in time and we should test how it behaves with certain failure modes. + +Tests to conduct: + - storage policy (example - 16:4) is changed during operation of the storage engine + - physical storage devices are added + - physical storage devices are deleted + - other configuration components are changed + - physical storage access passwords + - encryption key + +#### Metadata store + +The metadata store keeps the information needed to retrieve the part descriptions that make up an original piece of data. These metadata stores are redundant (3x?) and store all data required. + +Testing needs to be done on: + - corruption / deletion of one out of the three metadata stores + - corruption / deletion of two out of three metadata stores + - rebuilding failed metadata stores + - creating high workloads of adding new data and retrieving stored data that is no longer available in a local cache.
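The corruption and rebuild cases above can be modeled with a toy 3-way replicated store (illustrative only; the real metadata stores are ETCD instances, and the class and method names here are ours):

```python
class ReplicatedMetadataStore:
    """Toy 3-way replicated key-value metadata store (not the real ETCD setup)."""

    def __init__(self, replicas=3):
        self.replicas = [dict() for _ in range(replicas)]

    def put(self, key, value):
        # every write goes to all replicas
        for r in self.replicas:
            r[key] = value

    def get(self, key):
        # any surviving replica can answer
        for r in self.replicas:
            if key in r:
                return r[key]
        raise KeyError(key)

    def corrupt(self, index):
        # simulate losing one replica entirely
        self.replicas[index].clear()

    def resilver(self):
        # rebuild failed replicas from a surviving one
        source = max(self.replicas, key=len)
        for r in self.replicas:
            r.update(source)

store = ReplicatedMetadataStore()
store.put("file.bin", {"shards": 20, "needed": 16})
store.corrupt(0)
store.corrupt(1)                                # two of three replicas lost
assert store.get("file.bin")["needed"] == 16    # metadata still readable
store.resilver()                                # failed replicas rebuilt
assert all("file.bin" in r for r in store.replicas)
```

The real tests then repeat the same sequence against live ETCD members under write load.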
+ +#### Physical storage devices + +Physical storage devices are ZDBs on one of the TF Grid networks (Mainnet, Testnet, Devnet). ZDBs manage slices of physical disk space on HDDs and SSDs. They have a very simple operational model and API interface. ZDBs operate in "append only" mode by default. + +Testing needs to be done: + - create a high workload writing and reading at the same time + - get + +#### Enduser to Storage Protocol + +Failure scenarios + +Tests to conduct +#### Storage Protocol to Filesystem + +Failure scenarios + +Tests to conduct +#### Filesystem to Storage Engine + +Failure scenarios + +Tests to conduct +#### Storage Engine to Physical Storage Device + +Failure scenarios + +Tests to conduct +#### Storage Engine to Metadata Store + +Failure scenarios + +Tests to conduct diff --git a/collections/technology/smartcontract_it/img/iac_overview.jpg b/collections/technology/smartcontract_it/img/iac_overview.jpg new file mode 100644 index 0000000..2687c63 Binary files /dev/null and b/collections/technology/smartcontract_it/img/iac_overview.jpg differ diff --git a/collections/technology/smartcontract_it/img/smart_contract_it_.jpg b/collections/technology/smartcontract_it/img/smart_contract_it_.jpg new file mode 100644 index 0000000..e3c86d1 Binary files /dev/null and b/collections/technology/smartcontract_it/img/smart_contract_it_.jpg differ diff --git a/collections/technology/smartcontract_it/img/smartcontract3_flow.jpg b/collections/technology/smartcontract_it/img/smartcontract3_flow.jpg new file mode 100644 index 0000000..801a591 Binary files /dev/null and b/collections/technology/smartcontract_it/img/smartcontract3_flow.jpg differ diff --git a/collections/technology/smartcontract_it/img/smartcontract_3bot.jpg b/collections/technology/smartcontract_it/img/smartcontract_3bot.jpg new file mode 100644 index 0000000..c58ad70 Binary files /dev/null and b/collections/technology/smartcontract_it/img/smartcontract_3bot.jpg differ diff --git
a/collections/technology/smartcontract_it/img/smartcontract_iac.jpg b/collections/technology/smartcontract_it/img/smartcontract_iac.jpg new file mode 100644 index 0000000..0718469 Binary files /dev/null and b/collections/technology/smartcontract_it/img/smartcontract_iac.jpg differ diff --git a/collections/technology/smartcontract_it/img/smartcontract_it.jpg b/collections/technology/smartcontract_it/img/smartcontract_it.jpg new file mode 100644 index 0000000..f4e4c88 Binary files /dev/null and b/collections/technology/smartcontract_it/img/smartcontract_it.jpg differ diff --git a/collections/technology/smartcontract_it/smartcontract_3bot.md b/collections/technology/smartcontract_it/smartcontract_3bot.md new file mode 100644 index 0000000..9d262b1 --- /dev/null +++ b/collections/technology/smartcontract_it/smartcontract_3bot.md @@ -0,0 +1,25 @@ +

Smart Contract For IT 3Bot Integration

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Use Cases](#use-cases) +- [Overview](#overview) + +*** + +## Introduction + +The Smart Contract for IT allows you, your company, or your community to store your files and run your applications securely, with consensus and automatic billing. + +## Use Cases + +With the 3Bot integration, you can use smart contracts to define different sets of Internet resources you need for various types of work, such as storing files, running applications, and communicating across the network. + +You can also define a consensus mechanism and create multi-signatures for smart contract execution or completion to deliver appropriate digital services. + +## Overview + +![](img/smartcontract_3bot.jpg) \ No newline at end of file diff --git a/collections/technology/smartcontract_it/smartcontract_iac.md b/collections/technology/smartcontract_it/smartcontract_iac.md new file mode 100644 index 0000000..841d268 --- /dev/null +++ b/collections/technology/smartcontract_it/smartcontract_iac.md @@ -0,0 +1,21 @@ +

Infrastructure As Code

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Overview](#overview) +- [Smart Contract](#smart-contract) + +*** + +## Introduction + +Infrastructure as Code (IaC) builds on DevOps, a process framework that ensures collaboration between development and operations teams to deploy code to production environments faster, in a repeatable and automated way. In simple terms, DevOps can be defined as an alignment between development and IT operations, with better communication and collaboration. + +## Overview + +![](img/iac_overview.jpg) + +## Smart Contract + +![](img/smartcontract_iac.jpg) \ No newline at end of file diff --git a/collections/technology/smartcontract_it/smartcontract_it.md b/collections/technology/smartcontract_it/smartcontract_it.md new file mode 100644 index 0000000..bec47ca --- /dev/null +++ b/collections/technology/smartcontract_it/smartcontract_it.md @@ -0,0 +1,50 @@ +# Smart Contract for IT + +The Smart Contract for IT gives every developer the ability to launch IT workloads on the TFGrid using our TFGrid primitives. + +![](img/smart_contract_it_.jpg) + + + +## Smart Contract Together with 3bot + +This applies to TFGrid 2.0. + +![](img/smartcontract_it.jpg) + +3Bot is your virtual system administrator and can execute IT tasks on your behalf. + +**STEP 1: IT Experts create smart contracts:** + +IT experts create smart contracts describing what needs to be done in order to deploy a given architecture. The smart contract has to be specific and describe every little detail of the IT architecture. The experts create knowledge for the 3bots (much like DNA defines the behavior of our cells). + +**STEP 2: Business and/or end-user customers consume smart contracts**: + + Users have digital needs, and in order to procure services for those needs they will find smart contracts describing applications (application setups) that meet them. Consumers will instruct their 3bot to deploy an IT workload following their requirements by using a smart contract created by the experts: + +* e.g. give me an archive of 1 PB in CH, e.g. 
deploy a CRM for 100 users, … +* e.g. deploy my new banking app feature X +* e.g. deploy my artificial intelligence data mining job for … + +**STEP 3: The 3bot executes the smart contract:** + +The 3bot creates and registers the “IT” smart contract in the BCDB (Blockchain Database). This is a complete end-to-end deployment cycle for all sorts of IT deployments - both simple and complicated, bound to one location or many. The 3bot will provision all the compute and storage capacity needed to meet the IT architecture’s requirements and do all the commercial trades required to get this. It will then leave instructions for the nodes in a digital notary system, so that nodes can grab instructions on what they have to do in order to meet smart contract completion. The 3bot remains the orchestrator for this smart contract execution and will store and secure all intermediate and final state information in the notary service (blockchain database). + +**STEP 4: Business IT Workload Stakeholders:** + +This step is optional, but when required, stakeholders can be defined to give consensus and sign off on the successful execution of the “IT smart contract” delivering the appropriate digital service. Stakeholders can be defined in a “multi signature” blockchain to provide sign-off on regulatory, commercial and other business requirements. Approvals can include an IT expert checking the quality of the code, a legal advisor checking GDPR compliance, a business person checking the budget, etc. + +**STEP 5: The capacity layer: 3Nodes…** + +* thousands of 3Nodes can work together to execute and deliver the “IT Smart Contract” (if required) +* verify if consensus was reached between the business stakeholders +* verify the validity of the smart contract and download the “IT workload definition” +* download the right files to execute the smart contract, and each file gets verified (signature) +* run the required processes, and again signatures are checked to make sure the workload is pure. 
+* ensures that no person (hacker or IT person) can ever gain access to or influence over the execution process. + + +## Remarks + +- in TFGrid 2.x, smart contract for IT is implemented using the ThreeFold Explorer and multisignature capabilities. +- in TFGrid 3.0, this is being re-implemented on TF-Chain, on the Parity/Substrate blockchain, to become a fully decentralized process. See [here](smartcontract_tfgrid3). diff --git a/collections/technology/smartcontract_it/smartcontract_tfgrid3.md b/collections/technology/smartcontract_it/smartcontract_tfgrid3.md new file mode 100644 index 0000000..a5818b5 --- /dev/null +++ b/collections/technology/smartcontract_it/smartcontract_tfgrid3.md @@ -0,0 +1,97 @@ +

Smart Contract on TFGrid 3.0

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Architecture](#architecture) +- [1: To deploy a workload, the user interacts with this smart contract pallet and calls: `create_contract` with the input being:](#1-to-deploy-a-workload-the-user-interacts-with-this-smart-contract-pallet-and-calls-create_contract-with-the-input-being) +- [2: The user sends the contractID and workload through the RMB to the destination Node.](#2-the-user-sends-the-contractid-and-workload-through-the-rmb-to-the-destination-node) +- [3: The Node sends consumption reports to the chain](#3-the-node-sends-consumption-reports-to-the-chain) +- [Notes](#notes) + +*** + +## Introduction + +From TFGrid 3.0, the 'Smart Contract for IT' concept for reserving capacity is fully decentralized and runs on TF-Chain, the ThreeFold blockchain infrastructure on Parity Substrate. + +## Architecture + +Two main components play a role in achieving a decentralised consensus between a user and a farmer: + +- TFGrid Substrate Database Pallet TFGrid +- TFGrid Smart Contract + +The TF-Grid Substrate Database will keep a record of all Entities, Twins, Nodes and Farmers in the TF-Grid network. This also makes it easy to integrate the Smart Contract on Substrate, since we can read from that storage at runtime. + +![flow](img/smartcontract3_flow.jpg) + +The Smart Contract on Substrate works as follows: + +## 1: To deploy a workload, the user interacts with this smart contract pallet and calls: `create_contract` with the input being: + +The user must instruct his twin to create the contract. *This program containing his digital twin is yet to be defined.* A contract will always belong to a twin and to a node. This relationship is important because only the user's twin and target node's twin can update the contract. + +```js +contract = { + version: contractVersion, + contract_id: contractID, + twin_id: NumericTwinID for the contract, + // node_id is the target node's ID. 
+ node_id: NumericNodeID + // data is the encrypted deployment body. The deployment is encrypted with the **USER** public key, so only the user can read this data later on (or any other key that he keeps safe). + // this data part is read only by the user and can actually hold any information to help him reconstruct his deployment, or it can be left empty. + data: encrypted(deployment) // optional + // deployment_hash is the predictable hash of the deployment. The node must use the same method to calculate the challenge (bytes) to compute this same hash. + // used for validating the deployment from the node side. + deployment_hash: hash(deployment), + // public_ips: number of ips that need to be reserved by the contract and used by the deployment + public_ips: 0, + state: ContractState (created, deployed), + public_ips_list: list of public ips on this contract +} +``` + +The `node_id` field is the target node's ID. A user can look up a node to find its corresponding ID. + +The workload data is encrypted by the user and contains the workload definition for the node. + +If `public_ips` is specified, the contract will reserve the requested number of public ips on the node's corresponding farm. If there are not enough ips available, an error will be returned. If the contract is canceled by either the user or the node, the IPs for that contract will be freed. + +This pallet saves this data to storage and returns the user a `contract_id`. + +## 2: The user sends the contractID and workload through the RMB to the destination Node. + +The Node reads from the [RMB](https://github.com/threefoldtech/rmb) and sees a deploy command; it reads the contractID and workload definition from the payload. +It decodes the workload and reads the contract from the chain using the contract ID. The Node then checks whether the user that created the contract and the deployment hash on the contract match what the Node received over RMB. If all things check out, the Node deploys the workload. 
+ +## 3: The Node sends consumption reports to the chain + +The Node periodically sends consumption reports back to the chain for each deployed contract. The chain will compute how much is being used and will bill the user based on the farmer's prices (the chain can read these prices by querying the farmer's storage and reading the pricing data). See [PricingPolicy](https://github.com/threefoldtech/substrate-pallets/blob/03a5823ce79200709d525ec182036b47a60952ef/pallet-tfgrid/src/types.rs#L120). + +A report looks like: + +```json +{ + "contract_id": contractID, + "timestamp": "timestampOfReport", + "cru": cpus, + "sru": ssdInBytes, + "hru": hddInBytes, + "mru": memInBytes, + "nru": trafficInBytes +} +``` + +The node can call `add_reports` on this module to submit reports in batches. + +Usage of SU, CU and NU will be computed based on the prices and the rules that Threefold set out for cloud pricing. + +Billing will be done in Database Tokens, the main currency of this chain, and will be sent to the corresponding farmer. If the user runs out of funds, the chain will set the contract state to `cancelled` or it will be removed from storage. The Node needs to act on this 'contract cancelled' event and decommission the workload. + +More information on this is explained here: TODO + +## Notes + +Sending the workloads encrypted to the chain makes sure that nobody except the user can read his deployment data. It also gives the user a way to recreate his workload data from the chain. \ No newline at end of file diff --git a/collections/technology/smartcontract_it/smartcontract_toc.md b/collections/technology/smartcontract_it/smartcontract_toc.md new file mode 100644 index 0000000..f047e9a --- /dev/null +++ b/collections/technology/smartcontract_it/smartcontract_toc.md @@ -0,0 +1,7 @@ +

Smart Contract IT

+ +

Table of Contents

+ +- [Introduction](./smartcontract_tfgrid3.md) +- [Infrastructure As Code (IAC)](./smartcontract_iac.md) +- [3Bot Integration](./smartcontract_3bot.md) \ No newline at end of file diff --git a/collections/technology/technology_toc.md b/collections/technology/technology_toc.md new file mode 100644 index 0000000..d496344 --- /dev/null +++ b/collections/technology/technology_toc.md @@ -0,0 +1,16 @@ +

Technology

+ +This section covers the technology behind the ThreeFold Grid, a technology stack that allows anyone to host their applications and data close to them. + +To build on the ThreeFold Grid, refer to the [Developers](../../documentation/developers/developers.md) section. + +

Table of Contents

+ +- [How It Works](./grid3_howitworks.md) +- [Grid Concepts](./concepts/concepts_readme.md) +- [Primitives](./primitives/primitives_toc.md) +- [Quantum Safe Storage](./qsss/qsss_home.md) +- [Smart Contract IT](./smartcontract_it/smartcontract_toc.md) + + + diff --git a/collections/technology/tfgrid_primitives.md b/collections/technology/tfgrid_primitives.md new file mode 100644 index 0000000..de46ec7 --- /dev/null +++ b/collections/technology/tfgrid_primitives.md @@ -0,0 +1,44 @@ +![](img/layer0_.jpg) + +# TFGrid Low Level Functions = Primitives + +The following are the low level constructs which can be deployed on the TFGrid. + +These functionalities allow you to create any solution on top of the grid. +Any application which can run on Linux can run on the TFGrid. + +### Compute + +- [ZKube](zkube) : Kubernetes deployment +- [ZMachine](zmachine) : the container or virtual machine running inside ZOS +- [CoreX](corex) : process manager (optional), can be used to get remote access to your zmachine + +Uses [Compute Units = CU](cloudunits). + +### Storage (uses SU) + +- [ZOS Filesystem](zos_fs) : deduped immutable filesystem +- [ZOS Mount](zmount) : a part of an SSD (fast disk), mounted underneath your zmachine +- [Quantum Safe Filesystem](qsfs) : unbreakable storage system (secondary storage only) +- [Zero-DB](zdb) : the lowest level storage primitive, a key-value store, typically used underneath other storage mechanisms +- [Zero-Disk](zdisk) : OEM only, virtual disk format + +Uses [Storage Units = SU](cloudunits). + +### Network (uses NU) + +- znet : private network between zmachines +- [Planetary Network](planetary_network) : peer2peer end2end encrypted global network +- znic : interface to planetary network +- [WebGateway](webgw) : interface between internet and znet + + +Uses [Network Units = NU](cloudunits). 
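To make the unit accounting above concrete, here is a small sketch of how a deployment that mixes primitives ends up consuming all three cloud unit types. It is illustrative only; the names come from the lists above, not from any real grid client API.

```javascript
// Which cloud unit each primitive consumes, per the Compute/Storage/Network lists above.
const unitFor = {
  zkube: "CU", zmachine: "CU", corex: "CU",
  zos_fs: "SU", zmount: "SU", qsfs: "SU", zdb: "SU", zdisk: "SU",
  znet: "NU", planetary_network: "NU", znic: "NU", webgw: "NU",
};

// A typical solution combines primitives from all three groups...
const deployment = ["zmachine", "zmount", "znet"];

// ...and is therefore billed in Compute, Storage and Network Units.
const billedUnits = [...new Set(deployment.map((p) => unitFor[p]))];
console.log(billedUnits); // [ 'CU', 'SU', 'NU' ]
```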
+ +## Zero-OS Advantages + +!!!include:zos_advantages + + +!!!def alias:tfgrid_primitives,grid_primitives + diff --git a/collections/technology/zos/benefits/deterministic_deployment.md b/collections/technology/zos/benefits/deterministic_deployment.md new file mode 100644 index 0000000..c312a37 --- /dev/null +++ b/collections/technology/zos/benefits/deterministic_deployment.md @@ -0,0 +1,25 @@ + +## Deterministic Deployment + +- flists concept (deduped vfilesystem, no install, ...) + +The deduped flist filesystem uses FUSE, an interface which allows you to create a filesystem interface in user space; it is a virtual filesystem. +Metadata is exposed: the system sees the full tree of the image, but the data itself is not there; data is downloaded whenever it is accessed. + +There are multiple ways to create an flist: + - Convert an existing docker image which is hosted on the docker hub + - Push an archive (e.g. a tgz) to the hub + - A library and CLI tool exist to build the flist from scratch: doing it this way, the directory is locally populated, and the flist is then created from the CLI tool. + - A [GitHub action](https://github.com/threefoldtech/publish-flist) allows building an flist directly from a GitHub action, useful for developers on GitHub + +Be aware that the flist system works a bit differently than the usual deployment of containers (docker), which mounts volumes from your local disk into the container for configuration; flists do not. +With flists you need to modify your image to get configuration from the environment. This is basically how docker was originally intended to be used. + + - Smart contract for IT + The smart contract for IT concept is applicable to any workload: containers, VMs, all gateway primitives, volumes, Kubernetes and network. + It is a static agreement between farmer and user about deployment of an IT workload. 
+ + - no dynamic behavior for deployment at runtime + + - no process can start unless the files are 100% described on flist level + \ No newline at end of file diff --git a/collections/technology/zos/benefits/docker_compatibility.md b/collections/technology/zos/benefits/docker_compatibility.md new file mode 100644 index 0000000..c2e0221 --- /dev/null +++ b/collections/technology/zos/benefits/docker_compatibility.md @@ -0,0 +1,14 @@ +# Docker compatibility + + +Docker is recognized as the market leader in containerization technology. Many enterprises and software developers have adopted Docker's technology stack to build a devops (development and operations, more information [here](https://en.wikipedia.org/wiki/DevOps)) "train" (internal process, a way of developing and delivering software) for delivering updates to applications and new applications. Regardless of how this devops "train" is organised, it always spits out docker (application) images and deployment methods. Hercules is built with 100% backwards compatibility with these docker images and deployment methods in mind. + +A major step in accepting and importing Docker images is to transpose docker images to the [ZOS Filesystem](zos_fs). + +## Features + +- 100% backwards compatible with all existing and newly created docker images. 
+- Easy import and transpose facility +- deduplicated application deployment simplifying application image management and versioning + +!!!include:zos_toc \ No newline at end of file diff --git a/collections/technology/zos/benefits/img/network_architecture.jpg b/collections/technology/zos/benefits/img/network_architecture.jpg new file mode 100644 index 0000000..4732889 Binary files /dev/null and b/collections/technology/zos/benefits/img/network_architecture.jpg differ diff --git a/collections/technology/zos/benefits/img/network_architecture2.jpg b/collections/technology/zos/benefits/img/network_architecture2.jpg new file mode 100644 index 0000000..c5308ed Binary files /dev/null and b/collections/technology/zos/benefits/img/network_architecture2.jpg differ diff --git a/collections/technology/zos/benefits/network_wall.md b/collections/technology/zos/benefits/network_wall.md new file mode 100644 index 0000000..3e47799 --- /dev/null +++ b/collections/technology/zos/benefits/network_wall.md @@ -0,0 +1,38 @@ +# Network wall + +![](img/webgateway.jpg) + + +> the best security = no network = no incoming tcp/ip from internet to containers + +This is done via sockets. + +- The TCP router client opens up a socket to the TCP router server, residing on the web gateway. +- When http arrives on this tcp router server, the payload of the http request is brought back over the socket to the tcp router client. +- The TCP router client replays the http request to the server residing in the container. +- No TCP comes from the outside world to the container, creating the most secure connection possible. +- The TCP router client opens the socket; the TCP router server that received the http request puts it on the socket. +- On the socket only data comes in, which is replayed; the TCP router client makes the https request. + +This mechanism is now implemented for https, but in the future other protocols such as sql, redis, http … can also be supported. + +The end result is that only data goes over the network. 
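The reverse-connection mechanism above can be sketched with a simplified in-memory model. This is a hypothetical illustration only; the real tcp router works over actual sockets:

```javascript
// Simplified model of the network wall: the gateway can only hand requests
// to tunnels that a tcp router client has already opened FROM the inside.
function makeGateway() {
  const tunnels = []; // outbound connections opened by tcp router clients

  return {
    // called when a tcp router client dials out to the gateway
    accept(tunnel) { tunnels.push(tunnel); },
    // called when an http request arrives from the internet
    handleHttp(payload) {
      if (tunnels.length === 0) {
        // nothing has dialed out yet: there is literally nothing to connect to
        throw new Error("no tunnel available");
      }
      return tunnels[0](payload); // replay the payload over the existing socket
    },
  };
}

// The server inside the container, reachable only through the tunnel.
const containerServer = (request) => `response for ${request}`;

const gateway = makeGateway();
// The tcp router client dials OUT and forwards replayed payloads to the container.
gateway.accept((payload) => containerServer(payload));

console.log(gateway.handleHttp("GET /")); // "response for GET /"
```

The key property is visible in the model: without a client-initiated tunnel, the gateway has nowhere to send traffic, so no inbound tcp path to the container exists at all.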
+If the container can no longer reach the local tcp stack but can only make outgoing connections to the gateway, then no tcp comes in from outside anymore. + +This is what we call the 'Network wall'. +As a consequence, no tcp/ip is coming in AT ALL, making the full set-up reach unprecedented security. + + +## More detailed explanation + +- Containers are behind NAT. We don’t allow traffic coming in. +- All connections need to originate from the container towards the outside world, which is very neat from a network security perspective. +- Connections need to be outwards; this secures against DDoS and other hacking attempts, as there is nothing to connect to. +- How to access it? Drive traffic inside the container: a proxy or load balancer is exposed publicly, and the rest of the traffic stays in the private network, not accessible. +- So you can limit the number of containers that are really accessible from the outside world. +- You don’t have to worry about ‘how to secure my DB’ as the DB is not exposed, only accessible in the Wireguard private network. +- In containers you can specify a specific IPv6 address, so you deploy a reverse proxy in a container which has public access (the entry point in the network) and deploy a reverse tcp connection (the tcp router client), which connects to the gateways and allows incoming connections. + +!!!def alias:network_wall,net_wall + +!!!include:zos_toc diff --git a/collections/technology/zos/benefits/p2p_networking.md b/collections/technology/zos/benefits/p2p_networking.md new file mode 100644 index 0000000..a2ed92c --- /dev/null +++ b/collections/technology/zos/benefits/p2p_networking.md @@ -0,0 +1,30 @@ +![](img/network_architecture2.jpg) + +# Peer2Peer Network Concept + +## Introduction + +True peer-to-peer is a principle that exists everywhere within Threefold's technology stack, especially in its network architecture. Farmers produce IT capacity by connecting hardware to the network and installing Zero-OS. The peer-to-peer network of devices forms the TF Grid. 
This TF Grid is a universal substrate on which a large variety of IT workloads exist and run. + +## Peer-to-peer networking + +The TF Grid is built by 3Nodes (hardware + Zero-OS) that are connected to the internet by using the IPv6 protocol. To future-proof this grid, IPv6 has been chosen as ThreeFold Grid's native networking technology. The TF Grid operates on IPv6 (where available) and creates peer-to-peer network connections between all the containers (and other primitives). Please find more about Zero-OS primitives in our [SDK manual](manual3_home). + +This creates a many-to-many web of (encrypted) point-to-point network connections which together make a (private) secure **overlay network**. This network is completely private and connects only the primitives that have been deployed in your network. + +TF Network Characteristics: + +- Connects all containers point-to-point; +- All traffic is encrypted; +- High performance; +- The shortest path between two end-points, multi-homed containers; +- Can span large geographical areas and create virtual data centers; +- All created and made operational **without** public access from the internet. + +## Existing Enterprise Private Networks + +At Threefold, we are aware of the existence of private networks, IPsec, VPNs, WANs and more. We have the facility to create bridges to make those networks part of the deployed private overlay networks. This is in an early stage of development, but with the right level(s) of interest this could be built out and carried out in the near future. 
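The "many-to-many web" described above is simply a full mesh: every pair of deployed primitives gets its own point-to-point (encrypted) link. A toy sketch of the pairing logic, illustrative only:

```javascript
// Build the full mesh of point-to-point links for a private overlay network.
function meshLinks(peers) {
  const links = [];
  for (let i = 0; i < peers.length; i++) {
    for (let j = i + 1; j < peers.length; j++) {
      links.push([peers[i], peers[j]]); // one encrypted tunnel per pair
    }
  }
  return links;
}

// Three containers in one overlay network -> 3 point-to-point links.
console.log(meshLinks(["vm-a", "vm-b", "vm-c"]).length); // 3
// In general, n peers need n*(n-1)/2 links; none of them is
// reachable from the public internet.
```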
+ +![](img/network_architecture.jpg) + +!!!def alias:quantumsafe_network_concept,qsn_concept \ No newline at end of file diff --git a/collections/technology/zos/benefits/unbreakable_storage.md b/collections/technology/zos/benefits/unbreakable_storage.md new file mode 100644 index 0000000..3bef57f --- /dev/null +++ b/collections/technology/zos/benefits/unbreakable_storage.md @@ -0,0 +1,14 @@ + +## Unbreakable Storage + +- Unlimited history +- Survives network, datacenter or node breakdown +- No silent corruption possible +- Quantum safe (data cannot be decrypted by quantum computers) as long as the quantum computer has no access to the metadata +- Self-healing & autocorrecting + + +If you deploy a container with simple disk access, you don’t get these guarantees. +Performance is around 50 MB/s; if a bit more CPU is given to the distributed storage encoder, we achieve this performance. + +For more information, read [this documentation](../../primitives/storage/qsfs.md). \ No newline at end of file diff --git a/collections/technology/zos/benefits/zero_boot.md b/collections/technology/zos/benefits/zero_boot.md new file mode 100644 index 0000000..ea605fd --- /dev/null +++ b/collections/technology/zos/benefits/zero_boot.md @@ -0,0 +1,40 @@ +## Zero Boot + +> Zero Boot = Zero-OS boot process + +ZOS Boot is a boot facility that allows 3nodes to boot from network boot servers located in the TF Grid. This boot mechanism creates as little operational and administration overhead as possible. ZOS Boot is a crucial part for enabling autonomy by *not* having the operating system installed on local disks on 3nodes. 
With a boot network facility and no local operating system files you immediately erase a number of operational and administration tasks: + +- to install the operating system to start with +- to keep track of which systems run which version of the operating system (especially in large setups this is a complicated and error prone task) +- to keep track of patches and bug fixes that have been applied to systems + +That's just the administration and operational part of maintaining a server estate with a locally installed operating system. On the security side of things the benefits are even greater: +- many hacking activities are geared towards adding to or changing parts of the operating system files. This is a threat from local physical access to servers as well as over the network. When there are no local operating system files installed this threat does not exist. +- accidental overwrite, delete or corruption of operating system files. Servers run many processes and many of these processes have administrative access to be able to do what they need to do. Accidental deletion or overwrites of crucial files on disk will make the server fail a reboot. +- access control. If there is no local operating system installed, access control, user rights, etc. are unnecessary functions and features and do not have to be implemented. + +### How + +A small partition is mounted in memory to start booting the machine; iPXE downloads what it needs, and then 0-OS boots. +After that, the node goes to the hub and downloads the different flists. + +There is one main flist that triggers the download of multiple flists. Read more [here](../../../flist/flist.md). +It contains all the components/daemons that are part of 0-OS. +The download of the zos-bins, i.e. external binaries, is also triggered this way (https://hub.grid.tf/tf-zos-bins). 
+ +The core components of zero-os can be found in the [Zero-OS repo](https://github.com/threefoldtech/zos/tree/master/bins/packages). If something changes in the directory, a workflow is triggered to rebuild the full flist and push it to the hub. + +When a node discovers there is a new version of one of these flists on the hub, it downloads it and restarts the daemon with the new version. +Over the lifetime of the node, it keeps pulling the hub directories to check whether new daemons/flists/binaries are available and whether things need to get upgraded. + +### Features + +The features of ZOS Boot are: + +- no local operating system installed +- network boot from the grid to get on the grid +- decreased administrative and operational work, allowing for autonomous operations +- increased security +- increased efficiency (deduplication, only one version of the OS stored for thousands of servers) +- all server storage space is available for enduser workloads (average operating system size around 10GB) +- bootloader is less than 1MB in size and can be presented to the servers as a PXE script, USB boot device, ISO boot image. \ No newline at end of file diff --git a/collections/technology/zos/benefits/zero_hacking_surface.md b/collections/technology/zos/benefits/zero_hacking_surface.md new file mode 100644 index 0000000..10a41cf --- /dev/null +++ b/collections/technology/zos/benefits/zero_hacking_surface.md @@ -0,0 +1,6 @@ +## Zero Hacking Surface + +Zero does not mean hacking is not possible, but we use this term to specify that we minimized the attack surface for hackers. + +- There is no shell/server interface on zero-os level (our operating system) +- There are no hidden or unintended processes running which are not prevalidated. One comment: there is still an ssh server running with the keys of a few people on each server, not yet disabled. It will be disabled in the near future; for now it is still useful for debugging, but it is a backdoor. 
The creation of a new primitive, where the farmer agrees to give access to administrators, is under analysis. This way, when a reservation is sent to a node, an ssh server is booted up with a chosen key to allow admins to go in. diff --git a/collections/technology/zos/benefits/zero_install.md b/collections/technology/zos/benefits/zero_install.md new file mode 100644 index 0000000..48e8b08 --- /dev/null +++ b/collections/technology/zos/benefits/zero_install.md @@ -0,0 +1,19 @@ +## Zero-OS Installation + +The Zero-OS is delivered to the 3Nodes over the internet (network boot) and does not need to be installed. + +### 3Node Install + +1. Acquire a computer (server). +2. Configure a farm on the TFGrid explorer. +3. Download the bootloader and put it on a USB stick, or configure a network boot device. +4. Power on the computer and connect to the internet. +5. Boot! The computer will automatically download the components of the operating system (Zero-OS). + +The actual bootloader is very small. It brings up the network interface of your computer and queries the TFGrid for the remainder of the boot files needed. + +The operating system is not installed on any local storage medium (hard disk, ssd). Zero-OS is stateless. + +The mechanism that allows this to work in a safe and efficient manner is a ThreeFold innovation called our container virtual filesystem. + +For more information on setting up a 3Node, please refer to the [Farmers documentation](../../../farmers/farmers.md). \ No newline at end of file diff --git a/collections/technology/zos/benefits/zos_advantages.md b/collections/technology/zos/benefits/zos_advantages.md new file mode 100644 index 0000000..7569789 --- /dev/null +++ b/collections/technology/zos/benefits/zos_advantages.md @@ -0,0 +1,134 @@ +

Zero-OS Advantages

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Zero-OS Installation](#zero-os-installation) + - [3Node Install](#3node-install) +- [Unbreakable Storage](#unbreakable-storage) +- [Zero Hacking Surface](#zero-hacking-surface) +- [Zero Boot](#zero-boot) + - [How](#how) + - [Features](#features) +- [Deterministic Deployment](#deterministic-deployment) +- [Zero-OS Protect](#zero-os-protect) + +## Introduction + +We present the many advantages of Zero-OS. + +## Zero-OS Installation + +The Zero-OS is delivered to the 3Nodes over the internet (network boot) and does not need to be installed. + +### 3Node Install + +1. Acquire a computer (server). +2. Configure a farm on the TFGrid explorer. +3. Download the bootloader and put it on a USB stick, or configure a network boot device. +4. Power on the computer and connect to the internet. +5. Boot! The computer will automatically download the components of the operating system (Zero-OS). + +The actual bootloader is very small. It brings up the network interface of your computer and queries the TFGrid for the remainder of the boot files needed. + +The operating system is not installed on any local storage medium (hard disk, ssd). Zero-OS is stateless. + +The mechanism that allows this to work in a safe and efficient manner is a ThreeFold innovation called our container virtual filesystem. + +For more information on setting up a 3Node, please refer to the [Farmers documentation](../../../../documentation/farmers/farmers.md). + + +## Unbreakable Storage + +- Unlimited history +- Survives network, datacenter or node breakdown +- No silent corruption possible +- Quantum safe (data cannot be decrypted by quantum computers) as long as the quantum computer has no access to the metadata +- Self-healing & autocorrecting + + +If you deploy a container with simple disk access, you don’t get these guarantees. +Performance is around 50 MB/s; if a bit more CPU is given to the distributed storage encoder, we achieve this performance. 
+ +For more information, read [this documentation](../../primitives/storage/qsfs.md). + +## Zero Hacking Surface + +Zero does not mean attacks are impossible; we use this term to signify that we have minimized the attack surface for hackers. + +- There is no shell/server interface on the zero-os level (our operating system) +- There are no hidden or unintended processes running which are not prevalidated + +One comment: there is still an SSH server running with the keys of a few people on each server; it is not yet disabled. It will be disabled in the near future; for now it is still useful for debugging, but it is a backdoor. The creation of a new primitive where the farmer agrees to give access to administrators is under analysis. This way, when a reservation is sent to a node, an SSH server is booted up with a chosen key to allow admins in. + +## Zero Boot + +> Zero Boot = Zero-OS boot process + +ZOS Boot is a boot facility that allows 3nodes to boot from network boot servers located in the TF Grid. This boot mechanism creates as little operational and administration overhead as possible. ZOS Boot is a crucial part of enabling autonomy, achieved by *not* having the operating system installed on local disks in 3nodes. With a network boot facility and no local operating system files, you immediately eliminate a number of operational and administration tasks: + +- installing the operating system to start with +- keeping track of which systems run which version of the operating system (especially in large setups this is a complicated and error-prone task) +- keeping track of patches and bug fixes that have been applied to systems + +That's just the administration and operational part of maintaining a server estate with locally installed operating systems. On the security side of things the benefits are even greater: +- many hacking activities are geared towards adding to or changing parts of the operating system files. This is a threat from local physical access to servers as well as over the network.
When there are no local operating system files installed, this threat does not exist. +- accidental overwrite, deletion or corruption of operating system files. Servers run many processes, and many of these processes have administrative access to be able to do what they need to do. Accidental deletion or overwriting of crucial files on disk will make the server fail to reboot. +- access control. If there is no local operating system installed, access control, user rights, etc. are unnecessary functions and features and do not have to be implemented. + +### How + +To start booting the machine, a small partition is mounted in memory; iPXE downloads what it needs, and then 0-OS boots. After that, the node goes to the hub and downloads the different flists. + +There is one main flist that triggers downloads of multiple flists. Read more [here](../../../../documentation/developers/flist/flist.md). +It contains all the components/daemons that make up 0-OS. +The download of the zos-bins, i.e. external binaries, is also triggered this way (https://hub.grid.tf/tf-zos-bins). + +The core components of zero-os can be found in the [Zero-OS repo](https://github.com/threefoldtech/zos/tree/master/bins/packages). If something changes in that directory, a workflow is triggered to rebuild the full flist and push it to the hub. + +When a node discovers there is a new version of one of these flists on the hub, it downloads it and restarts the daemon with the new version. +Over the lifetime of the node, it keeps polling the hub directories to check whether new daemons/flists/binaries are available and whether things need to be upgraded.
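The upgrade flow above reduces to comparing published hashes with what is running or downloaded. A hypothetical sketch (the helper names are ours, and the hub may use a different hash algorithm than the SHA-256 shown here):

```python
import hashlib

def verify_component(payload: bytes, published_hash: str) -> bool:
    """Accept a downloaded flist/binary only if it matches the hash
    the hub published for it."""
    return hashlib.sha256(payload).hexdigest() == published_hash

def needs_upgrade(running_hash: str, hub_hash: str) -> bool:
    """A node polls the hub; a changed hash means a new version to fetch."""
    return running_hash != hub_hash
```

A node would loop over its components: fetch the hub's current hash, call `needs_upgrade`, download the new version, check it with `verify_component`, and only then restart the daemon.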
+ +### Features + +The features of ZOS Boot are: + +- no local operating system installed +- network boot from the grid to get on the grid +- decreased administrative and operational work, allowing for autonomous operations +- increased security +- increased efficiency (deduplication, only one version of the OS stored for thousands of servers) +- all server storage space is available for end-user workloads (average operating system size around 10GB) +- bootloader is less than 1MB in size and can be presented to the servers as a PXE script, USB boot device or ISO boot image. + + +## Deterministic Deployment + +- flists concept (deduped vfilesystem, no install, ...) + +The deduped flist filesystem uses FUSE, an interface which allows you to create a filesystem in user space; it is a virtual filesystem. +Metadata is exposed: the system sees the full tree of the image, but the data itself is not there; data is downloaded whenever it is accessed. + +There are multiple ways to create an flist: + - Convert an existing Docker image which is hosted on the Docker Hub + - Push an archive (e.g. a tgz) to the hub + - A library and CLI tool exist to build the flist from scratch: doing it this way, the directory is locally populated, and the flist is then created with the CLI tool. + - A [GitHub action](https://github.com/threefoldtech/publish-flist) allows building an flist directly from a GitHub workflow, useful for developers on GitHub + +Be aware that the flist system works a bit differently from the usual deployment of containers (Docker): it does not mount volumes from your local disk into the container for configuration. +With flists you need to modify your image to get its configuration from the environment. This is basically how Docker was originally intended to be used. + + - Smart contract for IT + The smart contract for IT concept is applicable to any workload: containers, VMs, all gateway primitives, volumes, Kubernetes and networks.
+ It is a static agreement between farmer and user about the deployment of an IT workload. + + - no dynamic behavior for deployment at runtime + + - no process can start unless the files are 100% described on the flist level + +## Zero-OS Protect + +- The operating system of the 3node (Zero-OS) is made to exist in environments without the presence of technical knowhow. 3nodes are made to exist everywhere a network meets a power socket. The OS does not have a login shell and does not allow people to log in with physical access to a keyboard and screen, nor does it allow logins over the network. There is no way the 3node accepts user-initiated login attempts. +- For certified capacity, a group of known strategic vendors are able to lock the [BIOS](https://en.wikipedia.org/wiki/BIOS) of their server range and make sure no one but them can unlock and change features present in the BIOS. Some vendors have an even higher degree of security and can store private keys in chips inside the computer to provide unique identification based on private keys, or have mechanisms to check whether the server has been opened / tampered with during transportation from the factory / vendor to the Farmer. All of this leads to maximum protection on the hardware level. +- 3nodes boot from a network facility. This means that they do not have locally installed operating system files. They also do not have a local username / password file or database. Viruses and hackers have very little to work with if there are no local files to plant viruses or trojan horses in. The boot facility also provides hashes for the files sent to the booting 3node, so that the 3node can check whether it receives the intended file, eliminating man-in-the-middle attacks. +- The zos_fs provides the same hash and file check mechanism. Every application file presented to a booting container has a hash describing it, and the 3node on which the container is booting can verify that the received file matches the previously received hash.
+- Every deployment of one or more applications starts with the creation of a (private) [znet](../../primitives/network/znet.md). This private overlay network is single tenant and not connected to the public internet. Every application or service that is started in a container in this overlay network is connected to all of the other containers via point-to-point, encrypted network connections. \ No newline at end of file diff --git a/collections/technology/zos/benefits/zos_advantages_toc.md new file mode 100644 index 0000000..5c18b95 --- /dev/null +++ b/collections/technology/zos/benefits/zos_advantages_toc.md @@ -0,0 +1,10 @@ +# Zero-OS Advantages + +

Table of Contents

+ +- [Zero-OS Installation](./zero_install.md) +- [Unbreakable Storage](./unbreakable_storage.md) +- [Zero Hacking Surface](./zero_hacking_surface.md) +- [Booting Process](./zero_boot.md) +- [Deterministic Deployment](./deterministic_deployment.md) +- [Zero-OS Protect](./zos_protect.md) \ No newline at end of file diff --git a/collections/technology/zos/benefits/zos_monitoring.md new file mode 100644 index 0000000..90fc551 --- /dev/null +++ b/collections/technology/zos/benefits/zos_monitoring.md @@ -0,0 +1,39 @@ +# ZOS Monitoring + + +ZOS collects data from deployed solutions and applications and presents it in a well-known open-source monitoring solution called Prometheus. + +Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012, many companies and organizations have adopted Prometheus, and the project has a very active developer and user community. It is now a standalone open source project and maintained independently of any company.
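Prometheus pulls ("scrapes") metrics over HTTP from configured targets. A minimal, illustrative scrape configuration (the job name and target address below are placeholders, not actual ZOS endpoints):

```yaml
# prometheus.yml (illustrative sketch)
global:
  scrape_interval: 15s          # how often to pull metrics

scrape_configs:
  - job_name: "zos-node"        # placeholder job name
    static_configs:
      - targets: ["198.51.100.10:9100"]   # placeholder node address
```

Each target listed under `static_configs` exposes its metrics on an HTTP endpoint, and the Prometheus server collects and stores them as time series.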
+ +For more elaborate overviews of Prometheus, see [here](https://prometheus.io/). + +### Features + +- a multi-dimensional data model with time series data identified by metric name and key/value pairs +- PromQL, a flexible query language to leverage this dimensionality +- no reliance on distributed storage; single server nodes are autonomous +- time series collection happens via a pull model over HTTP +- pushing time series is supported via an intermediary gateway +- targets are discovered via service discovery or static configuration +- multiple modes of graphing and dashboarding support + +### Components + +The Prometheus ecosystem consists of multiple components, many of which are optional: + +- the main Prometheus server which scrapes and stores time series data +- client libraries for instrumenting application code +- a push gateway for supporting short-lived jobs +- special-purpose exporters for services like HAProxy, StatsD, Graphite, etc. +- an alertmanager to handle alerts +- various support tools + + +### Roadmap + +- ONLY for OEM partners today + + +!!!def alias:zos_monitoring + +!!!include:zos_toc diff --git a/collections/technology/zos/benefits/zos_protect.md new file mode 100644 index 0000000..040d8c4 --- /dev/null +++ b/collections/technology/zos/benefits/zos_protect.md @@ -0,0 +1,7 @@ +# ZOS Protect + +- The operating system of the 3node (Zero-OS) is made to exist in environments without the presence of technical knowhow. 3nodes are made to exist everywhere a network meets a power socket. The OS does not have a login shell and does not allow people to log in with physical access to a keyboard and screen, nor does it allow logins over the network. There is no way the 3node accepts user-initiated login attempts.
+- For certified capacity, a group of known strategic vendors are able to lock the [BIOS](https://en.wikipedia.org/wiki/BIOS) of their server range and make sure no one but them can unlock and change features present in the BIOS. Some vendors have an even higher degree of security and can store private keys in chips inside the computer to provide unique identification based on private keys, or have mechanisms to check whether the server has been opened / tampered with during transportation from the factory / vendor to the Farmer. All of this leads to maximum protection on the hardware level. +- 3nodes boot from a network facility. This means that they do not have locally installed operating system files. They also do not have a local username / password file or database. Viruses and hackers have very little to work with if there are no local files to plant viruses or trojan horses in. The boot facility also provides hashes for the files sent to the booting 3node, so that the 3node can check whether it receives the intended file, eliminating man-in-the-middle attacks. +- The zos_fs provides the same hash and file check mechanism. Every application file presented to a booting container has a hash describing it, and the 3node on which the container is booting can verify that the received file matches the previously received hash. +- Every deployment of one or more applications starts with the creation of a (private) [znet](../../primitives/network/znet.md). This private overlay network is single tenant and not connected to the public internet. Every application or service that is started in a container in this overlay network is connected to all of the other containers via point-to-point, encrypted network connections.
\ No newline at end of file diff --git a/collections/technology/zos/img/zero_os_overview.jpg new file mode 100644 index 0000000..0550294 Binary files /dev/null and b/collections/technology/zos/img/zero_os_overview.jpg differ diff --git a/collections/technology/zos/img/zos22.png new file mode 100644 index 0000000..a40cfa3 Binary files /dev/null and b/collections/technology/zos/img/zos22.png differ diff --git a/collections/technology/zos/img/zos_network_overview.jpg new file mode 100644 index 0000000..febb1d6 Binary files /dev/null and b/collections/technology/zos/img/zos_network_overview.jpg differ diff --git a/collections/technology/zos/img/zos_overview_compute_storage.jpg new file mode 100644 index 0000000..23257e5 Binary files /dev/null and b/collections/technology/zos/img/zos_overview_compute_storage.jpg differ diff --git a/collections/technology/zos/zos.md new file mode 100644 index 0000000..5522e68 --- /dev/null +++ b/collections/technology/zos/zos.md @@ -0,0 +1,44 @@ +![](img/zos22.png) + +# Zero-OS + +![](img/zero_os_overview.jpg) + + +!!!include:whatis_zos + +### Imagine an operating system with the following benefits + +- up to 10x more efficient for certain workloads (e.g.
storage) +- no install required +- all files are deduped for the VMs, containers and the ZOS itself, no more data duplicated filesystems +- the hacking footprint is super small, which leads to much safer systems + - every file is fingerprinted and gets checked at launch time of an application + - there is no shell or server interface on the operating system + - the networks are end2end encrypted between all Nodes +- there is the possibility to completely disconnect the compute/storage from the network service part, which means hackers have a lot less chance to get to the data. +- a smart contract for IT layer allows groups of people to deploy IT workloads with consensus and full control +- all workloads which can run on Linux can run on Zero-OS, but in a much more controlled, private and safe way + +> ThreeFold has created an operating system from scratch. We used the Linux kernel and its components and then built further on them; this is how we have been able to achieve all of the above benefits. + +## The requirements for our TFGrid based on Zero OS are: + +- **Autonomy**: TF Grid needs to create compute, storage and networking capacity everywhere. We could not rely on remote (or local) maintenance of the operating system by owners or operating system administrators. +- **Simplicity**: An operating system should be simple, able to exist anywhere, for anyone, good for the planet. +- **Stateless**. In a grid (peer-to-peer) setup, the sum of the components provides a stable basis that lets single elements fail without bringing the whole system down. Therefore, it is necessary for single elements to be stateless, and the state needs to be stored within the grid.
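The dedup benefit above boils down to content-addressed storage: files are referenced by the hash of their content, so identical files are stored once no matter how many VMs or containers use them. A toy sketch of the idea (ours, not the zos_fs implementation):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical content is kept exactly
    once, however many file paths reference it (illustrative only)."""

    def __init__(self):
        self.blobs = {}    # content hash -> content, stored once
        self.files = {}    # path -> content hash, many paths can share a blob

    def put(self, path: str, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        self.blobs.setdefault(digest, content)   # dedup happens here
        self.files[path] = digest
        return digest

    def get(self, path: str) -> bytes:
        return self.blobs[self.files[path]]
```

Storing `/vm1/bin/sh` and `/vm2/bin/sh` with identical content results in one blob and two metadata entries, which is also why fingerprint checks at launch time are cheap: the hash is already the file's identity.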
+ + + +!!!def alias:zos,zero-os,threefold_operating_system,tf_os,threefold_os + + + + \ No newline at end of file diff --git a/collections/technology/zos/zos_compute_storage.md new file mode 100644 index 0000000..54985b7 --- /dev/null +++ b/collections/technology/zos/zos_compute_storage.md @@ -0,0 +1,6 @@ +## ZOS compute storage overview + +![](img/zos_overview_compute_storage.jpg) + + +!!!include:zos_toc \ No newline at end of file diff --git a/collections/technology/zos/zos_install.md new file mode 100644 index 0000000..1f32caf --- /dev/null +++ b/collections/technology/zos/zos_install.md @@ -0,0 +1 @@ +# Zero-OS Install diff --git a/collections/technology/zos/zos_install_mechanism.md new file mode 100644 index 0000000..4bc9043 --- /dev/null +++ b/collections/technology/zos/zos_install_mechanism.md @@ -0,0 +1,21 @@ +### Zero OS install + +Zero-OS is delivered to the 3Nodes over the internet (network boot) and does not need to be installed. + +# Zero-OS Install Mechanism + +## Stateless Install + +1. Acquire a computer (server). +2. Configure a farm on the TFGrid explorer. +3. Download the bootloader and put it on a USB stick, or configure a network boot device. +4. Power on the computer and connect it to the internet. +5. Boot! The computer will automatically download the components of the operating system (Zero-OS). + +The actual bootloader is very small. It brings up the network interface of your computer and queries the TFGrid for the remainder of the boot files needed. + +The operating system is not installed on any local storage medium (hard disk, SSD). Zero-OS is stateless. + +The mechanism that allows this to work in a safe and efficient manner is a ThreeFold innovation: our container virtual filesystem.
This is explained in more detail [here](flist) + +!!!def alias:zero_os_install \ No newline at end of file diff --git a/collections/technology/zos/zos_network.md b/collections/technology/zos/zos_network.md new file mode 100644 index 0000000..004175b --- /dev/null +++ b/collections/technology/zos/zos_network.md @@ -0,0 +1,6 @@ +# ZOS network overview + +![](img/zos_network_overview.jpg) + + +!!!include:zos_toc diff --git a/collections/technology/zos_advantages.md b/collections/technology/zos_advantages.md new file mode 100644 index 0000000..3a61099 --- /dev/null +++ b/collections/technology/zos_advantages.md @@ -0,0 +1,6 @@ +- [Zero Install](zero_install) +- [Unbreakable Storage](unbreakable_storage) +- [Zero Hacking Surface](zero_hacking_surface) +- [Zero Boot](zero_boot) +- [Deterministic Deployment](deterministic_deployment) +- [ZOS Protect](zos_protect) \ No newline at end of file diff --git a/collections/technology/zos_toc.md b/collections/technology/zos_toc.md new file mode 100644 index 0000000..29f3202 --- /dev/null +++ b/collections/technology/zos_toc.md @@ -0,0 +1,24 @@ +## Zero OS as generator for Compute, Storage, Network capacity. 
+ +### Compute (uses CU) + +- [ZKube](zkube) : Kubernetes deployment +- [ZMachine](zmachine) : the container or virtual machine running inside ZOS +- [CoreX](corex) : process manager (optional), can be used to get remote access to your zmachine + +### Storage (uses SU) + +- [ZOS Filesystem](zos_fs) : deduped immutable filesystem +- [ZOS Mount](zmount) : a part of an SSD (fast disk), mounted underneath your zmachine +- [Quantum Safe Filesystem](!@qsss_home) : unbreakable storage system (secondary storage only) +- [Zero-DB](zdb) : the lowest-level storage primitive, a key-value store, typically used underneath other storage mechanisms +- [Zero-Disk](zdisk) : OEM only, virtual disk format + +### Network (uses NU) + +- zos_net : private network between zmachines +- [Planetary Network](planetary_network) : peer2peer end2end encrypted global network +- [WebGateway](webgw) : interface between internet and znet +- zos_bridge: network interface to planetary_net or public IP address + + diff --git a/collections/technology/zos_tools/flist_hub.md new file mode 100644 index 0000000..45caca6 --- /dev/null +++ b/collections/technology/zos_tools/flist_hub.md @@ -0,0 +1,4 @@ +!!!include:flist_hub_ + +!!!def alias:tfhub,zflist_hub,flist_hub + diff --git a/collections/technology/zos_tools/flist_hub_.md new file mode 100644 index 0000000..3f0a572 --- /dev/null +++ b/collections/technology/zos_tools/flist_hub_.md @@ -0,0 +1,12 @@ + +## TFGrid Flist Hub + +We provide a public hub you can use to upload your own flist or filesystem directly and take advantage of flists out of the box on Zero-OS. +You can convert an existing Docker image the same way. + +Public hub: [hub.grid.tf](https://hub.grid.tf) + +If you want to experiment with the hub and its features, you can use the [playground hub](https://playground.hub.grid.tf). +This hub can be reset at any time; don't put sensitive or production code there.
+ + diff --git a/collections/threefold_token/.collection b/collections/threefold_token/.collection new file mode 100644 index 0000000..e69de29 diff --git a/collections/threefold_token/buy_sell_tft/albedo_buy.md b/collections/threefold_token/buy_sell_tft/albedo_buy.md new file mode 100644 index 0000000..ef6d18e --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/albedo_buy.md @@ -0,0 +1,50 @@ +

Get TFT (Stellar) on Albedo Wallet

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Get Started](#get-started) + - [Get TFT By Token Swapping](#get-tft-by-token-swapping) +- [Important Notice](#important-notice) +- [Disclaimer](#disclaimer) +*** +## Introduction + +Welcome to our guide on how to buy TFT tokens (Stellar) via the [Albedo wallet](https://albedo.link/)! Albedo is a secure and trustworthy keystore web app and browser extension designed for Stellar token accounts. With Albedo, you can safely manage and transact with your Stellar account without having to share your secret key with any third parties. + +In this tutorial, we will walk you through the process of buying Stellar TFT tokens using the Albedo wallet. +*** +## Prerequisites + +- **XLM**: When buying TFT tokens using the Albedo wallet, the process involves swapping XLM (Stellar Lumens) or other Stellar tokens into TFT. Please note that a certain amount of XLM funding is required to facilitate the sending and receiving of assets on the Stellar network. + +- **Create a Wallet and Add TFT Asset**: Create an Albedo Wallet Account and add TFT as an asset. Read [**here**](../storing_tft/storing_tft.md) for the complete manual on how to create an Albedo Wallet. +*** +## Get Started + +### Get TFT By Token Swapping + +Once you have completed the prerequisites, you can get TFT on Albedo by clicking 'Swap' and swapping your existing tokens to TFT, for example, XLM or USDC. + +Insert the amount of TFT you'd like to buy or the amount of XLM you'd like to swap for TFT. Click 'Swap' to confirm the transaction. + +Congratulations. You just swapped some XLM to TFT. Go to the 'Balance' page to see your recently purchased TFT tokens. +*** +## Important Notice + +If you are looking for ways to provide liquidity for TFT (Stellar) on Albedo, you will find the corresponding information [here](../liquidity/liquidity_albedo.md).
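A small arithmetic note on swaps: the amount of TFT you actually receive depends on the quoted price and on any price movement between quote and execution. A hypothetical helper (the function and its parameters are ours for illustration, not part of Albedo) that computes the worst-case amount received for a given slippage tolerance:

```python
def min_received(amount_in_xlm: float, price_tft_per_xlm: float,
                 slippage_pct: float) -> float:
    """Worst-case TFT received: the quoted amount reduced by the
    slippage tolerance you are willing to accept."""
    quoted_tft = amount_in_xlm * price_tft_per_xlm
    return quoted_tft * (1 - slippage_pct / 100)
```

For example, swapping 100 XLM at a quoted 2 TFT per XLM with a 1% tolerance means you should not accept fewer than roughly 198 TFT.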
+*** +## Disclaimer + +The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. + +**The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. + + + + + + + + diff --git a/collections/threefold_token/buy_sell_tft/bettertoken.md b/collections/threefold_token/buy_sell_tft/bettertoken.md new file mode 100644 index 0000000..3dafa10 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/bettertoken.md @@ -0,0 +1,36 @@ +

Get TFT Stellar via BetterToken

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Get Started](#get-started) +- [Important Notice](#important-notice) +- [Disclaimer](#disclaimer) + +*** +## Introduction + +Alternatively, you can get TFT from [BetterToken](https://bettertoken.com/). + +**BetterToken** is the first-ever ThreeFold Farming Cooperative in Europe that aims to make it easy for individuals to engage and become part of the ThreeFold Movement. Their primary goal is to enable anyone to become a farmer in simple ways and assist customers in getting started with their services and applications on the ThreeFold Grid. + +As a ThreeFold Farming Cooperative, BetterToken is dedicated to promoting participation in the ThreeFold ecosystem. They provide support and resources for individuals who want to join the farming community and contribute to the decentralized and sustainable ThreeFold Grid. +*** +## Get Started + +If you are interested in purchasing TFT as a future capacity reservation through BetterToken.com, you can do so using a wire transfer. It's important to note that BetterToken handles orders of $1000 or more. To explore purchase possibilities, you can contact BetterToken directly at **orders@bettertoken.com**. + +By reaching out to them, you can inquire about the process, requirements, and any additional details related to purchasing TFT as a future capacity reservation. BetterToken will be able to provide you with the necessary information and assist you throughout the purchase process. + +Remember to provide them with your specific requirements and any relevant details to ensure a smooth transaction. +*** +## Important Notice + +Remember to exercise caution and verify the authenticity of any communication or transaction related to purchasing TFT. Scammers may try to impersonate BetterToken or ThreeFold representatives, so it's crucial to ensure you are dealing with the official channels and contacts provided by BetterToken.
+*** +## Disclaimer + +The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. + +**The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. + diff --git a/collections/threefold_token/buy_sell_tft/btc_alpha.md b/collections/threefold_token/buy_sell_tft/btc_alpha.md new file mode 100644 index 0000000..c80d633 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/btc_alpha.md @@ -0,0 +1,125 @@ +

Get TFT (Stellar) on BTC-Alpha

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Get Started](#get-started) + - [Sign up for an Account](#sign-up-for-an-account) + - [Secure Your Account](#secure-your-account) + - [Deposit Funds to Account](#deposit-funds-to-account) + - [Start Trading](#start-trading) + - [Important Notice](#important-notice) +- [Disclaimer](#disclaimer) + +*** + +## Introduction + +Welcome to our tutorial on how to get TFT (Stellar) using [**BTC-Alpha**](https://btc-alpha.com/en)! + +BTC-Alpha is a cryptocurrency exchange platform that provides a secure and user-friendly environment for trading various digital assets, including TFT (Stellar). With its robust features and intuitive interface, BTC-Alpha offers a convenient way to buy and sell cryptocurrencies. + +In this guide, we will walk you through the process of buying TFT on the BTC-Alpha exchange. + +## Prerequisites + +- **ID for verification:** To get TFT (Stellar) on the BTC-Alpha exchange, you will need to have your identification (ID) ready for the verification process. This is a standard requirement to ensure compliance with regulatory guidelines and maintain a secure trading environment. Your ID may include documents such as a valid passport or government-issued identification card. + +- **BTC-Alpha Account**: You must have an active account on the BTC-Alpha exchange. If you don't have one, you can sign up on their website and complete the registration process. Make sure to secure your account with strong passwords and two-factor authentication for enhanced security. + +- **Funds for Deposit**: To buy TFT (Stellar), you will need to have funds in your BTC-Alpha account. BTC-Alpha supports various deposit options, including USDT (Tether), USDC (USD Coin), BTC (Bitcoin), and other tokens listed on the exchange. You can deposit these tokens from your external wallet or exchange into your BTC-Alpha account to use for purchasing TFT. 
+ +## Get Started + +### Sign up for an Account + +**Sign up for a BTC-Alpha Account:** Visit the BTC-Alpha website [https://btc-alpha.com/](https://btc-alpha.com/) and click on the "**Sign up**" button. Fill in the required information, including your email address, a secure password, and any additional details requested for the account creation process. + +**Login to your account**: You will then receive a notification that allows you to log in to your new account. + +Now, you can proceed to log in to your account and start exploring the platform. Follow these steps to log in: + +Visit the BTC-Alpha website (https://btc-alpha.com/) on your web browser. Click on the "Login" button located on the top-right corner of the website. + +Enter the email address and password you used during the registration process in the respective fields. + +**Verify Your Account:** After completing the registration, you may need to verify your account by providing some personal identification information. This is a standard procedure for most cryptocurrency exchanges to ensure compliance with regulations and security measures. Follow the instructions provided by BTC-Alpha to complete the verification process. + +Congratulations, you have completed the registration process for your BTC-Alpha account and are successfully logged in! + +### Secure Your Account + +**Secure Your Account**: Set up two-factor authentication (2FA) to add an extra layer of security to your BTC-Alpha account. This typically involves linking your account to a 2FA app, such as Google Authenticator or Authy, and enabling it for login and transaction verification. + +To enable two-factor authentication (2FA) on your BTC-Alpha account, follow these steps: + +Install either the [2FA Alp Authenticator](https://play.google.com/store/apps/details?id=com.alp.two_fa) or [Google Authenticator](https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2&hl=en&gl=US) app on your mobile phone.
You can find these apps on the Google Play Store or the Apple App Store. + +Once you have installed the app, log in to your BTC-Alpha account. + +Click on your name or profile icon located at the top-right corner of the website to access the account settings, then look for the option "**Enable Two-factor Authentication**" and click on it. + +Follow the instructions provided to link your BTC-Alpha account with the authenticator app. This usually involves scanning a QR code or manually entering a code provided by the app. + +Once the link is established, the app will start generating unique codes that you will need to enter during the login process for additional security. + +By enabling two-factor authentication, you add an extra layer of security to your BTC-Alpha account, helping to protect your funds and personal information. Make sure to keep your mobile device with the authenticator app secure, as it will be required each time you log in to your BTC-Alpha account. + +### Deposit Funds to Account + +To begin trading and acquiring TFT on the BTC-Alpha exchange, you will first need to deposit funds into your account. The specific token you choose to deposit will depend on your preference and the available options on the exchange. + +In this guide, we will focus on depositing BTC (Bitcoin) into your BTC-Alpha account. BTC is a widely recognized and popular cryptocurrency, making it a convenient choice for funding your account. + +To deposit BTC into your BTC-Alpha account, follow these steps: + +Log in to your BTC-Alpha account and click on the "Wallets" tab located in the top menu. + +Search for "**BTC**" or "Bitcoin" in the list of available cryptocurrencies. + +You will be provided with a BTC deposit address. Copy this address or scan the QR code associated with it.
+ +Use your personal external BTC wallet or the wallet of another exchange to initiate a withdrawal to the provided deposit address. +Ensure that you specify the correct deposit address and double-check it before confirming the transaction. + +Wait for the transaction to be confirmed on the Bitcoin network. This may take some time depending on network congestion. +Once the transaction is confirmed, the BTC will be credited to your BTC-Alpha account balance. + +You can check your account balance by clicking on the "**Wallets**" tab or by navigating to the "**Balances**" section. + +Please note that the exact steps for depositing BTC may vary depending on the specific wallet or exchange you are using to send the funds. It is essential to double-check the deposit address and follow the instructions provided by your wallet or exchange to ensure a successful deposit. + +### Start Trading + +Once you have signed up for a BTC-Alpha account and completed the necessary verification steps, you will be ready to proceed with depositing funds and buying TFT (Stellar) on the exchange. In this example, we will buy TFT using BTC on the BTC-Alpha exchange. + +To trade BTC for TFT on the BTC-Alpha exchange, follow these steps: + +Log in to your BTC-Alpha account and click on the "**Trade**" > "**Spot Trading Terminal**" tab located in the top menu. + +In the trading interface, find and select the trading pair that represents the BTC-TFT market at the top right corner. For example, it could be "BTC/TFT" or "TFT/BTC". + +You will be presented with the trading chart and order book for the selected market at the bottom left of the page. Take some time to familiarize yourself with the interface and the available options. In the "Limit" section, enter the desired amount of TFT you want to acquire. You can choose to specify the quantity or the total value in BTC. + +Review the order details, including the price and fees, before proceeding.
+ +Click on the "**Limit Buy TFT**" button to place your order. Once the order is placed, it will be processed by the exchange. You can monitor the status of your order in the "Open Orders" or "Order History" section. + +If your order is successfully executed, the TFT tokens will be credited to your BTC-Alpha account balance. You can check your account balance by clicking on the "**Wallets**" tab or by navigating to the "**Balances**" section. + +### Important Notice + +While it is possible to keep your TFT in your exchange wallet on BTC-Alpha, it is generally not recommended to store your funds there for an extended period. Public exchanges are more susceptible to security breaches and hacking attempts compared to personal wallets. + +To ensure the safety and security of your TFT holdings, it is advisable to transfer them to a dedicated TFT wallet. There are several options available for creating a TFT wallet, each with its own unique features and benefits. + +To explore different TFT wallet options and choose the one that best suits your needs, you can refer to our comprehensive [**TFT Wallet guide**](../storing_tft/storing_tft.md) that provides a list of recommended TFT wallets. This guide will help you understand the features, security measures, and compatibility of each wallet, enabling you to make an informed decision on where to store your TFT securely. + +Remember, maintaining control over your private keys and taking precautions to protect your wallet information are essential for safeguarding your TFT investments. + +## Disclaimer + +> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. 
+> +> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. \ No newline at end of file diff --git a/collections/threefold_token/buy_sell_tft/buy_sell_tft.md b/collections/threefold_token/buy_sell_tft/buy_sell_tft.md new file mode 100644 index 0000000..a97b90c --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/buy_sell_tft.md @@ -0,0 +1,20 @@ +

# Buy and Sell TFT

+ +> If you're looking for a simple way to get TFT with crypto or fiat, check out the [Quick Start guide](./tft_lobstr/tft_lobstr_short_guide.md)! + +There are multiple ways to buy and sell TFT depending on your preferences and the blockchain network you choose to transact on. + +You can buy and sell TFT on Stellar Chain, Ethereum Chain and BNB Smart Chain, and you can use the [TFT bridges](../tft_bridges/tft_bridges.md) to go from one chain to another. + +With TFTs, you can [deploy workloads](../../system_administrators/getstarted/tfgrid3_getstarted.md) on the ThreeFold Grid and benefit from [staking discounts](../../../knowledge_base/cloud/pricing/staking_discount_levels.md) of up to 60%! + +

## Table of Contents

+ +- [Quick Start (Stellar)](./tft_lobstr/tft_lobstr_short_guide.md) +- [Lobstr Wallet (Stellar)](./tft_lobstr/tft_lobstr_complete_guide.md) +- [MetaMask (BSC & ETH)](./tft_metamask/tft_metamask.md) +- [Pancake Swap (BSC)](./pancakeswap.md) + +## More on TFT + +The [ThreeFold Token (TFT)](../threefold_token.md) is the token of the ThreeFold Grid, a decentralized and open-source project offering network, compute and storage capacity. TFTs are created by TFChain, the ThreeFold blockchain, only when new Internet capacity is added to the ThreeFold Grid by cloud service providers (farmers) deploying 3Nodes, a process we call [farming](../../farmers/farmers.md). \ No newline at end of file diff --git a/collections/threefold_token/buy_sell_tft/buy_sell_tft_archive.md b/collections/threefold_token/buy_sell_tft/buy_sell_tft_archive.md new file mode 100644 index 0000000..9e16570 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/buy_sell_tft_archive.md @@ -0,0 +1,37 @@ + +

# Buy and Sell TFT: Getting Started

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Set up a Stellar Address](#set-up-a-stellar-address) +- [Methods to Get TFT](#methods-to-get-tft) +- [Disclaimer](#disclaimer) + +*** + +## Introduction + +There are multiple ways to acquire [TFT](../threefold_token.md) depending on your preferences and the blockchain network you choose to transact on. Note that you can use the [TFT bridges](../tft_bridges/tft_bridges.md) to go from one chain to another. To start, you need to have a supporting wallet to [store your TFT](../storing_tft/storing_tft.md). + +It's important to explore the available options and select the most convenient and secure method for acquiring TFT. Always exercise caution and ensure the legitimacy and reliability of the platforms or individuals you engage with to obtain TFT. + +## Set up a Stellar Address + +In general, to set up a Stellar address to transact TFT on Stellar chain, you can use any Stellar wallet that has a TFT trustline enabled. Note that on Stellar chain, fees are paid in XLM. + +The easiest way is to simply create an account on the [ThreeFold Connect App](../storing_tft/tf_connect_app.md) (for [Android](https://play.google.com/store/apps/details?id=org.jimber.threebotlogin) and [iOS](https://apps.apple.com/us/app/threefold-connect/id1459845885)) and to then use the TFT wallet of the app, which has by default a TFT trustline on Stellar chain and also comes with 1 XLM sponsored by Threefold for transaction fees. + +## Methods to Get TFT + +The ThreeFold manual covers numerous methods [to buy and sell TFT](./buy_sell_tft_methods.md). For a complete tutorial on getting TFT with crypto or fiat, read the [Lobstr guide](./tft_lobstr/tft_lobstr_complete_guide.md). + +If you're interested in trading or swapping other cryptocurrencies for TFT, you can visit various crypto exchanges that list TFT. 
Additionally, you can leverage swapping services available on decentralized exchanges (DEXs) or automated market makers (AMMs) to exchange your tokens for TFT (BSC). + +Moreover, you can purchase TFT using fiat currency directly from ThreeFold's official Live Desk, or from [ThreeFold's official TFT Shop](https://gettft.com/). Another option is to obtain TFT from ThreeFold Farmers. You can engage with Farmers to purchase TFT directly from them, contributing to the growth and decentralization of the ThreeFold network. + +## Disclaimer + +> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. +> +> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. \ No newline at end of file diff --git a/collections/threefold_token/buy_sell_tft/buy_sell_tft_methods.md b/collections/threefold_token/buy_sell_tft/buy_sell_tft_methods.md new file mode 100644 index 0000000..f7d7175 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/buy_sell_tft_methods.md @@ -0,0 +1,36 @@ +# All Methods + +There are many ways to buy and sell TFT on the different chains where it lives. Explore the possibilities and find the best way for you to transact the token. + +

## Table of Contents

+ +- Ethereum and BSC + - [MetaMask](./tft_metamask/tft_metamask.md) + - [1inch.io](./oneinch.md) + + +- BSC + - [Pancake Swap](./pancakeswap.md) + +- Stellar Chain + - [Lobstr Wallet](./tft_lobstr/tft_lobstr.md) + - [Lobstr Wallet: Short Guide](./tft_lobstr/tft_lobstr_short_guide.md) + - [Lobstr Wallet: Complete Guide](./tft_lobstr/tft_lobstr_complete_guide.md) + - [GetTFT.com](./gettft.md) + - [Albedo Wallet](./albedo_buy.md) + - [Solar Wallet](./solar_buy.md) + - [Coinbase (XLM)](./coinbase_xlm.md) + - [StellarTerm](./stellarterm.md) + - [Interstellar](./interstellar.md) + +- CEX + - [BTC-Alpha](./btc_alpha.md) + +- OTC + - [ThreeFold Live Desk](./tf_otc.md) + +- Farmers + - [BetterToken Farmers](./bettertoken.md) + - [Mazraa Farmers](./mazraa.md) + +> Note: You can [use TFT bridges](../tft_bridges/tft_bridges.md) to move from one chain to another. \ No newline at end of file diff --git a/collections/threefold_token/buy_sell_tft/coinbase_xlm.md b/collections/threefold_token/buy_sell_tft/coinbase_xlm.md new file mode 100644 index 0000000..a9113cc --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/coinbase_xlm.md @@ -0,0 +1,47 @@ +

# Buy XLM on Coinbase

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Tutorial](#tutorial) +- [Important Notice](#important-notice) +- [Withdrawing XLM From Coinbase to Your Stellar Wallet](#withdrawing-xlm-from-coinbase-to-your-stellar-wallet) +*** +## Introduction + +In order to engage in seamless TFT transactions on the Stellar network, it is essential to acquire XLM (Stellar Lumens) as a prerequisite. XLM serves as the native cryptocurrency of the Stellar network, enabling fast and low-cost cross-border transactions. + +This section of the manual will guide you through the process of obtaining XLM using [**Coinbase Exchange**](https://www.coinbase.com/exchange/), a reputable cryptocurrency exchange service. Coinbase offers a user-friendly platform that simplifies the buying, selling, and storing of various cryptocurrencies, including XLM. + +By acquiring XLM, you will have the necessary funds to cover transaction fees on the Stellar network, ensuring smooth and efficient TFT transactions within your chosen Stellar wallet. While this guide focuses on **Coinbase Exchange** as the preferred exchange platform, please note that XLM can also be purchased from other exchanges that support it. + +Follow the steps outlined in this manual to acquire XLM on Coinbase and lay the foundation for your TFT transactions. Let's get started on this crucial prerequisite to enable your seamless engagement with TFT on the Stellar network. +*** +## Prerequisites + +- **Coinbase Exchange Account**: You will need a Coinbase Exchange account to get started using Coinbase. Click [here](https://help.coinbase.com/en/coinbase/getting-started) for the complete guide to getting started with Coinbase. + +- **Fiat Payment Methods**: Additionally, to purchase XLM (Stellar Lumens) on Coinbase Exchange, you will need to link a fiat payment method to your Coinbase account, such as a bank account or a credit card. This fiat currency will be used to exchange for XLM.
Click [here](https://help.coinbase.com/en/coinbase/getting-started/add-a-payment-method/how-do-i-add-a-payment-method-when-using-the-mobile-app) for the complete guide to adding a payment method to Coinbase. +*** +## Tutorial + +Coinbase Exchange has taken steps to simplify the process of purchasing Stellar Lumens (XLM) by providing this [**comprehensive XLM Purchase guide**](https://www.coinbase.com/how-to-buy/stellar). + +This guide offers detailed instructions on how to buy XLM directly from the Coinbase platform. By following the steps outlined in the guide, you can navigate through the buying process seamlessly. + +> Read about the difference between Coinbase and Coinbase Wallet [here](https://help.coinbase.com/en/wallet/getting-started/what-s-the-difference-between-coinbase-com-and-wallet) +*** +## Important Notice + +Please be aware that this tutorial specifically focuses on buying XLM (Stellar Lumens) from the **Coinbase Exchange** on [https://www.coinbase.com/exchange/](https://www.coinbase.com/exchange/), not the Coinbase Wallet. It is important to note that as of February 2023, [Coinbase Wallet no longer supports XLM](https://help.coinbase.com/en/wallet/other-topics/move-unsupported-assets). However, rest assured that these two platforms, Coinbase Exchange and Coinbase Wallet, are distinct and serve different purposes. + +Remember to exercise caution and verify the compatibility of your chosen wallet or exchange with XLM before initiating any transactions. Keeping this distinction in mind will help ensure a seamless experience and prevent any potential confusion between the different platforms. +*** +## Withdrawing XLM From Coinbase to Your Stellar Wallet + +After successfully purchasing XLM from Coinbase, you can begin the process of withdrawing your XLM to another Stellar wallet of your choice. This allows you to have full control over your XLM and engage in transactions within the Stellar network.
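Stellar account addresses carry a built-in safety net worth knowing about before you initiate a withdrawal: a `G...` address is the base32 encoding of a version byte, the 32-byte public key, and a 2-byte CRC16-XModem checksum (Stellar's StrKey format), so a mistyped destination address almost always fails to decode. The stdlib-only sketch below is illustrative, not part of Coinbase's withdrawal flow:

```python
import base64

def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0x0000), used by Stellar's StrKey format."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def looks_like_stellar_address(addr: str) -> bool:
    """Check that a 'G...' Stellar account ID decodes and its checksum matches."""
    try:
        raw = base64.b32decode(addr)
    except Exception:
        return False
    # Layout: 1 version byte (0x30 -> leading 'G') + 32-byte key + 2-byte checksum
    if len(raw) != 35 or raw[0] != 0x30:
        return False
    return crc16_xmodem(raw[:33]) == int.from_bytes(raw[33:], "little")
```

As with any checksum, a pass only means the address is syntactically valid; it cannot tell you whether the address actually belongs to your own Stellar wallet, so compare the full string against the one shown in your wallet app.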
+ +To learn more about the steps involved in withdrawing XLM from Coinbase to another Stellar wallet, click [here](https://help.coinbase.com/en/exchange/trading-and-funding/withdraw-funds). This resource will provide you with detailed instructions and guidelines on how to initiate the withdrawal process and ensure a smooth transfer of your XLM to your preferred Stellar wallet. + +> Get a TFT (Stellar) Wallet of your choice [here](../storing_tft/storing_tft.md)! diff --git a/collections/threefold_token/buy_sell_tft/gettft.md b/collections/threefold_token/buy_sell_tft/gettft.md new file mode 100644 index 0000000..8153ce8 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/gettft.md @@ -0,0 +1,76 @@ +

# Get TFT (Stellar) on GetTFT.com

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Get Started](#get-started) + - [Link TF Wallet to GetTFT.com](#link-tf-wallet-to-gettftcom) + - [Get TFT using BTC](#get-tft-using-btc) + - [Get TFT using Fiat Currency (USD / EUR)](#get-tft-using-fiat-currency-usd--eur) +- [Important Notice](#important-notice) +- [Disclaimer](#disclaimer) +*** +## Introduction + +Welcome to our tutorial on how to buy TFT tokens (Stellar / Native) via [GetTFT.com](https://gettft.com/)! + +GetTFT.com provides a seamless experience for purchasing TFT tokens (Stellar / Native), allowing you to be a part of the ThreeFold ecosystem. As a product developed by the ThreeFold team, GetTFT.com ensures a reliable and secure platform to facilitate the acquisition of TFT tokens. Whether you're an investor looking to support the ThreeFold mission or a technology enthusiast interested in participating in the decentralized internet revolution, this guide will walk you through the process of buying TFT tokens on GetTFT.com. + +On GetTFT.com, you have the flexibility to buy TFT tokens using either Bitcoin (BTC) or fiat currency such as US Dollars (USD) or Euros (EUR). This allows you to choose the currency that is most convenient for you. Whether you already hold Bitcoin, prefer to use traditional fiat currencies, or want to explore various options, GetTFT.com caters to your needs. +*** +## Prerequisites + +Before you can buy TFT on [GetTFT.com](https://gettft.com/), there are a few prerequisites you need to fulfill. Here's what you'll need: + +- **TF Connect Wallet on TF Connect App**: Download and install the TF Connect app on your iOS or Android device. Create a TF Connect wallet within the app, following the provided instructions [here](../storing_tft/tf_connect_app.md#create-a-wallet). This wallet will be used to receive and store your TFT tokens purchased on GetTFT.com.
+ +- **BTC Wallet**: If you prefer to purchase TFT tokens using Bitcoin (BTC), make sure you have a BTC wallet of your own preference. This can be a hardware wallet, software wallet, or any other secure wallet that supports BTC transactions. You will need this wallet to send BTC from your personal wallet to GetTFT.com for the purchase. + +- **Mercuryo.io Account**: If you plan to buy TFT tokens using fiat currency (such as USD or EUR), you will need to create an account on [Mercuryo.io](https://mercuryo.io). Mercuryo.io is a fiat payment provider that allows you to purchase cryptocurrencies using traditional fiat currencies. This account will enable you to complete the fiat-to-TFT purchase process on GetTFT.com. +*** +## Get Started + +### Link TF Wallet to GetTFT.com + +After completing the prerequisites, you can directly go to [GetTFT.com](https://gettft.com/) and click on the 'Get TFT' button to start the process. + +You will then be redirected to the TF wallet connection page. Link your mobile TF Connect App wallet to the transaction page by clicking 'Login via ThreeFold Connect'. Follow the further instructions on your TF Connect App and complete the linking process. + +#### Get TFT using BTC + +After successfully logging in, you will be redirected to the transaction page. Select the amount of TFT you would like to buy and choose the 'BTC' icon as your payment method. Verify that you're providing the correct Stellar TFT wallet address (the wallet you have previously linked), read the terms and conditions of the purchase, and hit 'Submit' once everything is verified. + +You will then be redirected to the BTC transfer page. On this page you will see instructions on how much BTC you should send and the BTC deposit address for your TFT purchase. Transfer the exact BTC amount via your BTC wallet by entering this information manually or by scanning the QR code provided.
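Since the page asks for the exact BTC amount, note that BTC amounts carry up to 8 decimal places (1 BTC = 100,000,000 satoshis) and should never be handled as binary floats, which round silently. A small illustrative sketch (not GetTFT.com code) of a safe string-to-satoshi conversion:

```python
from decimal import Decimal

def btc_to_satoshi(amount: str) -> int:
    """Convert an exact BTC amount, kept as a string, to integer satoshis."""
    sats = Decimal(amount) * 100_000_000
    if sats != sats.to_integral_value():
        raise ValueError("BTC amounts have at most 8 decimal places")
    return int(sats)

# Floats would be risky here: 0.1 + 0.2 evaluates to 0.30000000000000004 in
# binary floating point, while Decimal("0.1") + Decimal("0.2") is exactly 0.3.
```

Working in whole satoshis (or `Decimal`) is the standard way wallets avoid off-by-one-satoshi mismatches when an exact amount must be matched.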
+ +After completing your transfer, you will receive a notification on the page confirming the successful transaction. It is important to stay on the page until you receive this confirmation to ensure that the transaction is processed correctly. Open the TF Connect App on your mobile device to check your newly purchased TFT. + +Please note that it may take some time, typically up to 30 minutes, for the TFT tokens to appear in your TF Connect wallet app after the transfer process. This delay is normal and can be attributed to various factors, including network congestion and blockchain confirmations. + +During this waiting period, it is important to remain patient and refrain from making multiple purchase attempts or transactions. Rest assured that the TFT tokens will be delivered to your TF Connect wallet as soon as the process is completed. + +#### Get TFT using Fiat Currency (USD / EUR) + +When purchasing TFT tokens using fiat currency on GetTFT.com, the transaction is facilitated through the Mercuryo.io exchange platform. The process involves first buying Bitcoin (BTC) through the fiat gateway provided by Mercuryo.io. After the successful purchase of BTC, GetTFT.com will automatically convert the BTC amount into TFT tokens and send them to your TF Connect wallet. Please note that there is a minimum purchase amount of 110 USD for TFT tokens when using this platform. + +To get started, complete the same login process described in the '**Get Started: Link TF Wallet to GetTFT.com**' section above. After successfully logging in, you will be redirected to the transaction page. Select the amount of TFT you would like to buy and choose the 'USD' or 'EUR' icon as your payment method. Verify that you're providing the correct Stellar TFT wallet address (the wallet you have previously linked), read the terms and conditions of the purchase, and hit '**Submit**' once everything is verified.
+ +Once clicked, you will be redirected to the [Mercuryo widget](https://exchange.mercuryo.io/). Please read [Mercuryo.io's FAQ](https://help.mercuryo.io/) to learn more about the Mercuryo widget. Follow the prompts to complete the purchase, which typically involves connecting your bank account or using a supported payment method to transfer the fiat equivalent of the desired amount of BTC. + +Once the BTC purchase is complete, the GetTFT.com platform will automatically convert the BTC amount into TFT tokens based on the prevailing exchange rate. The TFT tokens will be sent directly to your TF Connect wallet on the TF Connect app. + +Please note that it may take some time, typically up to 30 minutes, for the TFT tokens to appear in your TF Connect wallet app after the automatic conversion and transfer process. This delay is normal and can be attributed to various factors, including network congestion and blockchain confirmations. + +During this waiting period, it is important to remain patient and refrain from making multiple purchase attempts or transactions. Rest assured that the TFT tokens will be delivered to your TF Connect wallet as soon as the process is completed. +*** +## Important Notice + +If, for any reason, you encounter any issues or face difficulties during the purchase process, we recommend contacting our customer support team via the popup chat box on the page or by going to [support.grid.tf](https://support.grid.tf/). They will be able to assist you and provide the necessary guidance to resolve any problems you may encounter. + +Remember to reach out to the support team promptly and provide them with relevant details regarding your issue, such as your account information, public wallet address and transaction details. +*** +## Disclaimer + +The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only.
Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. + +**The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. \ No newline at end of file diff --git a/collections/threefold_token/buy_sell_tft/interstellar.md b/collections/threefold_token/buy_sell_tft/interstellar.md new file mode 100644 index 0000000..e12ef25 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/interstellar.md @@ -0,0 +1,51 @@ +

# Get TFT (Stellar) on Interstellar

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Get Started](#get-started) + - [Get TFT By Trading](#get-tft-by-trading) +- [Store TFT on Interstellar](#store-tft-on-interstellar) +- [Disclaimer](#disclaimer) + +*** + +## Introduction +Welcome to our guide on how to get TFT tokens (Stellar) via [**Interstellar**](https://interstellar.exchange/)! + +Interstellar is a decentralized exchange built on the Stellar network that enables users to trade various assets, including TFT (Stellar). As an intuitive and user-friendly platform, Interstellar provides a seamless trading experience for Stellar users. With its focus on security and privacy, Interstellar ensures that users maintain control over their funds and private keys. + +In this guide, we will walk you through the process of buying TFT on the Interstellar exchange, allowing you to participate in the vibrant Stellar ecosystem. + +## Prerequisites + +- **XLM**: To get TFT tokens using Interstellar, a certain amount of XLM funding is required to facilitate the sending and receiving of assets on the Stellar network. + +- **Create Interstellar Account and Add TFT Asset**: Create an Interstellar Account via desktop, and add TFT as an asset. See [**here**](../storing_tft/interstellar_store.md) for the complete manual on how to create an Interstellar Account. + +## Get Started + +### Get TFT By Trading + +Once you have completed the prerequisites, signed up and added TFT as an asset, you can get TFT on Interstellar by opening the menu bar and clicking '**Trading**' on the left menu of your homepage. + +On this page, we will trade some XLM for TFT. Click on the **custom** icon. + +Find the **XLM** and **TFT** assets in the '**All (unverified)**' list. + +Once all is set (XLM and TFT), click on '**Go**' to start trading. + +You can choose to fulfill sell orders, or create your own buy order. Once the buy order or trade has been fulfilled, your TFT will show up in your wallet.
Please remember to review and double-check all order details before confirming the trade. + +## Store TFT on Interstellar + +If you are looking for ways to store TFT on Interstellar, you will find the relevant information [here](../storing_tft/interstellar_store.md). + +To explore different TFT wallet options and choose the one that best suits your needs, you can refer to our comprehensive [**TFT Wallet guide**](../storing_tft/storing_tft.md) that provides a list of recommended TFT wallets. This guide will help you understand the features, security measures, and compatibility of each wallet, enabling you to make an informed decision on where to store your TFT securely. + +## Disclaimer + +The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. + +**The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. \ No newline at end of file diff --git a/collections/threefold_token/buy_sell_tft/mazraa.md b/collections/threefold_token/buy_sell_tft/mazraa.md new file mode 100644 index 0000000..9d1af85 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/mazraa.md @@ -0,0 +1,73 @@ +

# Get TFT (Stellar) from Mazraa Farmers

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Get Started](#get-started) + - [Get TFT via Mazraa.io Website](#get-tft-via-mazraaio-website) + - [Get TFT via Wire Transfer](#get-tft-via-wire-transfer) + - [Get TFT via PayPal](#get-tft-via-paypal) + - [Get TFT via Credit Card (Transcoin)](#get-tft-via-credit-card-transcoin) +- [Important Notice](#important-notice) +- [Disclaimer](#disclaimer) +*** +## Introduction + +Alternatively, you can get TFT from [Mazraa](https://mazraa.io) farmers. + +Mazraa is a pioneering provider of Peer2Peer Internet capacity on the ThreeFold Grid. Founded in the United Arab Emirates in early 2016, Mazraa has been at the forefront of leveraging the ThreeFold technology to offer decentralized and peer-to-peer compute and storage resources. + +Through Mazraa, individuals and organizations can access the capacity pool of storage and compute resources on the ThreeFold Grid. These resources can be utilized for various purposes, such as developing workloads, running applications, or storing data in a secure and decentralized manner. + +By purchasing TFT (ThreeFold Tokens) from Mazraa, individuals can not only support the growth and expansion of the ThreeFold Grid but also gain access to the capacity pool provided by Mazraa. This allows them to leverage the decentralized computing resources for their specific needs. +*** +## Get Started + +If you are interested in purchasing TFT from Mazraa, there are a few options available: + +### Get TFT via Mazraa.io Website + +Go to Mazraa.io and click on the 'Purchase TFT' button. + +From there, you will be redirected to the '**Contact**' page where you can send Mazraa staff the details of your TFT purchase request. +You will then be contacted by Mazraa's official email **connect@Mazraa.io** to proceed with the next steps. + +### Get TFT via Wire Transfer +Wire Transfer: To make a purchase via wire transfer, you can reach out to Mazraa directly at connect@Mazraa.io. They will provide you with the necessary instructions and facilitate the process for you. + +### Get TFT via PayPal +PayPal (Soon): Mazraa is planning to introduce the option to buy tokens using PayPal. If you prefer this payment method, you can express your interest by contacting Mazraa at connect@Mazraa.io. They will provide you with further details once the option becomes available. + +It's important to note that if you are from the U.S.A., specific instructions may apply. In such cases, it is recommended to reach out to Mazraa directly at connect@Mazraa.io for further guidance and support. + +### Get TFT via Credit Card (Transcoin) + +If you prefer to pay for TFT using a credit card, Mazraa has partnered with Transcoin to offer a convenient and quick purchasing process. Here are the steps to follow: + +Visit Mazraa's Official Transcoin Checkout on [**https://transcoin.me/pay_from_api**](https://transcoin.me/pay_from_api), fill in your contact details, and click on "**Send Verification Code**." This step is necessary for account verification. + +Double-check that you are entering the exact web address provided. Beware of scammers pretending to be Mazraa's Transcoin. + +Verification Code: Check your email for the verification code and enter it on the Transcoin Checkout page. + +KYC Procedure: Complete the KYC (Know Your Customer) procedure by providing the required documents to Transcoin for verification. The specific documents needed will vary for each user. + +Specify the amount in euros (€) that you would like to purchase and provide the TFT wallet address where you want to receive the TFT. Your wallet address can be found in the [ThreeFold Connect app wallet](../storing_tft/tf_connect_app.md#create-a-wallet) or any other [TFT (Stellar) Wallet](../storing_tft/storing_tft.md) you prefer. + +Select your preferred payment method from the options provided by Transcoin.
+ +Once Mazraa receives the transaction details from Transcoin, they will initiate the transfer of TFT tokens to the wallet address you provided. Please note that it may take some time for the tokens to be received in your wallet. + +It's important to keep in mind that the process and timeline may vary, and additional verification steps may be required based on the regulations and policies of both Mazraa and Transcoin. +*** +## Important Notice + +Remember to exercise caution and verify the authenticity of any communication or transaction related to purchasing TFT. Scammers may try to impersonate Mazraa or ThreeFold representatives, so it's crucial to ensure you are dealing with the official channels and contacts provided by Mazraa. +*** +## Disclaimer + +The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. + +**The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. + + diff --git a/collections/threefold_token/buy_sell_tft/oneinch.md b/collections/threefold_token/buy_sell_tft/oneinch.md new file mode 100644 index 0000000..b0aef23 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/oneinch.md @@ -0,0 +1,69 @@ +

# Get TFT on 1inch.io (TFT-BSC)

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [How to Get TFT on 1inch.io (TFT-BSC)](#how-to-get-tft-on-1inchio-tft-bsc) + - [Connect a BSC Wallet to 1inch.io](#connect-a-bsc-wallet-to-1inchio) + - [Swapping tokens to TFT](#swapping-tokens-to-tft) + - [Important Notice](#important-notice) +- [Disclaimer](#disclaimer) +*** +## Introduction + +Welcome to our tutorial on how to buy TFT on Binance Smart Chain (BSC) using [1inch.io](https://1inch.io/)! + +1inch is a decentralized exchange (DEX) aggregator that aims to provide the best possible trading rates for users by sourcing liquidity from various DEX platforms. It combines smart contract technology with an intuitive user interface to enable users to swap tokens at the most favorable rates with minimal slippage and fees. 1inch.io scans multiple liquidity sources, including popular DEXs like Uniswap and SushiSwap, to ensure users get the most competitive prices for their trades. For more details about 1inch, you can find a review of the 1inch exchange [here](https://www.coinbureau.com/review/1inch-exchange/). + +By following the steps outlined in this guide, you'll be able to purchase TFT tokens on 1inch.io seamlessly and take advantage of the benefits offered by the Binance Smart Chain network. +*** +## Prerequisites + +Before you can buy TFT on 1inch.io, there are a few prerequisites you need to fulfill. Here's what you'll need: + +- **BSC Wallet**: To interact with the Binance Smart Chain and 1inch.io, you'll need a BSC-compatible wallet. [Trust Wallet](https://trustwallet.com/) and [MetaMask](https://metamask.io/) are popular options that support BSC. Make sure to set up and secure your wallet before proceeding. In this tutorial, we will use MetaMask as our connecting wallet. 
+ +> [Set up a MetaMask Wallet](../storing_tft/metamask.md) +> +> [Set up a Trust Wallet](../storing_tft/trustwallet.md) + +- **Get BNB Tokens**: As the native cryptocurrency of Binance Smart Chain, BNB is required to pay for transaction fees on the network. Ensure you have some BNB tokens in your BSC wallet to cover these fees when buying TFT on 1inch.io. Read [this tutorial](https://fortunly.com/articles/how-to-buy-bnb/) to learn where you can buy BNB and transfer it to your BSC wallet. + +> [Get BNB Tokens](https://docs.pancakeswap.finance/readme/get-started/bep20-guide) +*** +## How to Get TFT on 1inch.io (TFT-BSC) + +By utilizing 1inch.io, you can easily convert your existing crypto assets on the BSC network into TFT-BSC by using the Swap function. Once you have obtained TFT-BSC, you have the option to bridge it into TFT Native on the Stellar network by utilizing the [TFT-Stellar bridge](../tft_bridges/tfchain_stellar_bridge.md). Let's swap some tokens! + +### Connect a BSC Wallet to 1inch.io + +To get started, head to 1inch.io and click on the '**Launch dApp**' icon on the homepage. + +Click on '**Connect Wallet**' to connect your BSC wallet. + +A pop-up window will appear, and you will be asked to select the network and wallet you would like to use to connect to your 1inch.io account. In this case, since we would like to trade TFT on BSC, we choose the '**BNB Chain**' icon (another name for BSC) for the network, and **MetaMask** for the wallet. You can also connect another BSC-supported wallet of your preference, such as Trust Wallet. + +You will now be redirected to your MetaMask wallet page, or you can click on the 'MetaMask' icon in your browser manually to accept the 1inch.io connection request. Select the BSC wallet you'd like to connect to your 1inch.io account and click '**Next**'. On the next step, click '**Connect**' to finalize the connection. 
+ +Once your wallet is connected, you will see a MetaMask icon in the top right corner of the 1inch.io homepage. Before swapping tokens to TFT, make sure that the BNB Chain icon is shown as the selected network in the top right corner of your page; it is usually selected automatically if you previously connected your MetaMask BNB wallet. If not, you can change it by clicking on the icon and selecting BNB Chain from the list. + +### Swapping tokens to TFT + +To start swapping tokens to TFT, click on the '**Select Token**' button on the Swap page. Depending on which existing BSC-supported tokens you have in your MetaMask wallet, you can swap them to TFT as long as they're listed on 1inch.io. In this tutorial we will swap some BUSD tokens to TFT. + +Type '**TFT on BSC**' on the listing page, and click on the result shown. + +Once selected, define the amount of BUSD you'd like to swap to TFT and click the '**Confirm Swap**' button. + +Wait for the banner in the upper-right corner informing you about the success of your transaction. You will be notified once the swap is successful. Congrats! You have just swapped some BUSD to TFT. + +### Important Notice + +If you are looking for ways to provide liquidity for TFT on Binance Smart Chain on 1inch.io, you will find the corresponding information [here](../liquidity/liquidity_1inch.md). +*** +## Disclaimer + +The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. + +**The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. 
The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. \ No newline at end of file diff --git a/collections/threefold_token/buy_sell_tft/pancakeswap.md b/collections/threefold_token/buy_sell_tft/pancakeswap.md new file mode 100644 index 0000000..ed09435 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/pancakeswap.md @@ -0,0 +1,55 @@ +

# Get TFT: Pancake Swap (BSC)

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Buy TFT on PancakeSwap](#buy-tft-on-pancakeswap) +- [Important Notice](#important-notice) +- [Disclaimer](#disclaimer) + +*** + +## Introduction + +In this guide, we present how to buy and sell ThreeFold Tokens on BNB Smart Chain (BSC) using [Pancake Swap](https://pancakeswap.finance/). + +**BNB Smart Chain** is a blockchain network that enables the execution of smart contracts and decentralized applications, while **Pancake Swap** is a popular decentralized exchange (DEX) built on BSC. + +## Prerequisites + +Before you can buy TFT on Pancake Swap, there are a few prerequisites you need to fulfill. Here's what you'll need: + +- **BSC Wallet**: To interact with the BNB Smart Chain and Pancake Swap, you'll need a BSC-compatible wallet. [MetaMask](https://metamask.io/) is a popular option that supports BSC. Make sure to set up and secure your wallet before proceeding. + +> [Set up a MetaMask Wallet](../storing_tft/metamask.md) + +- **Connect BSC Wallet to Pancake Swap**: Visit the Pancake Swap website and connect your BSC wallet to your Pancake Swap account. + +> [Connect Wallet to Pancake Swap](https://docs.pancakeswap.finance/readme/get-started/connection-guide) + +- **Get BNB Tokens**: As the native cryptocurrency of BNB Smart Chain, BNB is required to pay for transaction fees on the network. Ensure you have some BNB tokens in your BSC wallet to cover these fees when buying TFT on Pancake Swap. Read [this tutorial](https://fortunly.com/articles/how-to-buy-bnb/) to know where you can buy BNB and transfer them to your BSC Wallet. + +> [Get BNB Tokens](https://docs.pancakeswap.finance/readme/get-started/bep20-guide) + +## Buy TFT on PancakeSwap + +On Pancake Swap, you can easily convert your existing crypto assets on BSC network into TFT-BSC by using the Swap function. 
Once you have obtained TFT-BSC, you have the option to bridge it into TFT Native on the Stellar network by utilizing the [TFT-Stellar bridge](../tft_bridges/tfchain_stellar_bridge.md). Let's swap some tokens! + +Now that you're all set, go to your [PancakeSwap homepage](https://pancakeswap.finance/) and click on the **Trade > Swap** button. Make sure you're on the **BNB Smart Chain** network. + +On the Swap page, the default pair displayed is BNB to CAKE. Here we'd like to change that to swap from BNB (or any other token you have) to TFT. Click on '**CAKE**', find TFT on the token listing pop-up page, and click '**Import**'. + +Once selected, define how much BNB or other token you would like to swap into TFT, and click on the '**Swap**' button. + +That's it! You have officially swapped BNB into TFT. + +## Important Notice + +If you are looking for ways to provide liquidity for TFT on BNB Smart Chain on Pancake Swap, you will find the corresponding information [here](../liquidity/liquidity_pancake.md). + +## Disclaimer + +> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. +> +> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](../../../knowledge_base/legal/disclaimer.md) and seek advice from a qualified financial professional if needed. 
diff --git a/collections/threefold_token/buy_sell_tft/quick_start_lobstr.md b/collections/threefold_token/buy_sell_tft/quick_start_lobstr.md new file mode 100644 index 0000000..05cf8c1 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/quick_start_lobstr.md @@ -0,0 +1 @@ +# Quick Start diff --git a/collections/threefold_token/buy_sell_tft/solar_buy.md b/collections/threefold_token/buy_sell_tft/solar_buy.md new file mode 100644 index 0000000..5155d82 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/solar_buy.md @@ -0,0 +1,47 @@ +

# Get TFT (Stellar) on Solar Wallet

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Get Started](#get-started) + - [Get TFT by Trading](#get-tft-by-trading) +- [Disclaimer](#disclaimer) +*** +## Introduction + +Welcome to our guide on how to buy TFT tokens (Stellar) via the [**Solar Wallet**](https://solarwallet.io/)! + +**Solar Wallet** is a user-friendly wallet designed for storing and managing Stellar-based assets like the ThreeFold Token (TFT). It provides a secure way to store your TFT tokens and access them conveniently. With Solar Wallet, you have full control over your assets and can interact with various Stellar services and decentralized applications. Solar Wallet is available as a web-based wallet and also offers mobile versions for iOS and Android devices. This guide explains how to get TFT (Stellar) on Solar Wallet. +*** +## Prerequisites + +- **XLM**: When getting TFT tokens using the Solar wallet, the process involves swapping XLM (Stellar Lumens) or other Stellar tokens into TFT. Please note that a certain amount of XLM funding is required to facilitate the sending and receiving of assets on the Stellar network. + +- **Create a Wallet and Add TFT Asset**: Create a Solar Wallet account via the mobile app or desktop, and add TFT as an asset. Read [**here**](../storing_tft/storing_tft.md) for the complete manual on how to create a Solar Wallet. + +## Get Started + +### Get TFT by Trading + +You can start trading TFT on Solar by opening the menu bar and clicking the '**TFT**' icon or 'My Account' on your wallet homepage to trade your existing tokens, for example XLM or USDC, to TFT. + +You will now be redirected to your asset list. Click on the **TFT asset** to start trading. + +You will now be redirected to the TFT asset info page. Start trading TFT by clicking '**Trade**'. + +To start buying TFT, click '**Buy**' on the asset trading page. 
+ +Choose the trading pair token that you want to trade with; in this tutorial we will be trading XLM to TFT. Specify the amount of XLM you would like to sell, and click '**Place order**' to start trading. + +Confirm your trade in the pop-up box shown. + +Wait until your order is successfully placed. You will then be redirected to the wallet homepage. Congratulations! The new TFT asset has been successfully added to your Solar Wallet account. + +> Read the full details about Solar's trading feature in [Solar Wallet's knowledge base](https://docs.solarwallet.io/guide/08-dex.html#trade-view). +*** +## Disclaimer + +The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. + +**The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. \ No newline at end of file diff --git a/collections/threefold_token/buy_sell_tft/stellarterm.md b/collections/threefold_token/buy_sell_tft/stellarterm.md new file mode 100644 index 0000000..be4aba0 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/stellarterm.md @@ -0,0 +1,72 @@ +

# Get TFT (Stellar) on StellarTerm

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Get Started](#get-started) + - [Login with an Existing Wallet](#login-with-an-existing-wallet) + - [Adding TFT as an Asset](#adding-tft-as-an-asset) + - [Get TFT by Trading](#get-tft-by-trading) + - [Get TFT by Swapping](#get-tft-by-swapping) +- [Disclaimer](#disclaimer) +*** +## Introduction + +Welcome to our tutorial on how to get TFT (Stellar) using [**StellarTerm**](https://stellarterm.com/)! + +StellarTerm is a user-friendly decentralized exchange (DEX) built on the Stellar blockchain. It allows you to trade and manage various assets directly from your Stellar wallet. With StellarTerm, you have full control over your funds as it operates in a non-custodial manner, meaning you retain ownership of your private keys and maintain complete security. + +In this tutorial, we will walk you through the process of buying TFT (ThreeFold Token) on StellarTerm by connecting your preferred Stellar wallet. By following the steps outlined in this guide, you will be able to access the StellarTerm DEX, navigate the trading interface, and execute transactions to acquire TFT tokens seamlessly. +*** +## Prerequisites + +- **An external Stellar Wallet**: You should have a Stellar wallet of your choice set up so that it can be connected to StellarTerm. Read [**here**](../storing_tft/storing_tft.md) for the complete list of Stellar wallets you can use. + +- **XLM**: When buying TFT tokens using StellarTerm, the process involves swapping XLM (Stellar Lumens) or other Stellar tokens into TFT. Please note that a certain amount of XLM funding is required to facilitate the sending and receiving of assets on the Stellar network. 
+*** +## Get Started + +### Login with an Existing Wallet + +Once you have completed the prerequisites, you can get TFT on StellarTerm by going to [https://stellarterm.com/](https://stellarterm.com/), clicking the '**LOGIN**' button in the top right corner, and selecting your desired wallet provider from the list of options. + +Once connected, you will see your wallet's account details and balances on the dashboard. If you have already added TFT as an asset in your wallet, it will show TFT on the asset list. + +### Adding TFT as an Asset + +- If you don't already have the TFT asset in your wallet, you can add it by going to [https://stellarterm.com/markets/](https://stellarterm.com/markets/), scrolling to the very bottom of the page, finding the '**Exchange Pair**' section, and selecting the trading pair you wish to trade. For example, if you want to trade TFT for XLM, choose the TFT/XLM pair and click on '**Start Trading**'. + +- You can also add TFT by manually entering the TFT asset info under 'Markets' > 'Custom Exchange Pair' at the very bottom of the page: + - Asset Code: TFT + - Issuer Account ID or federation: `GBOVQKJYHXRR3DX6NOX2RRYFRCUMSADGDESTDNBDS6CDVLGVESRTAC47` + + Then select the trading pair you wish to trade, for example TFT/XLM, and click on it. + +- Or you can start trading by directly going to [https://stellarterm.com/exchange/TFT-threefold.io/XLM-native](https://stellarterm.com/exchange/TFT-threefold.io/XLM-native) + +**IMPORTANT**: Make sure that you also see the name "**threefold.io**" next to the logo, the correct asset code, and the correct Issuer Account ID or federation, as this verifies that you are selecting the genuine TFT asset associated with ThreeFold. **Beware of imposters or fraudulent assets that may attempt to mimic TFT.** ThreeFold cannot assume responsibility for any errors or mistakes made by users during the trustline creation process. 
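Beyond comparing the issuer string character by character, you can also sanity-check that an address you copied is at least a well-formed Stellar account ID: 56 base32 characters, the `G` version byte, and a valid CRC16 checksum. The snippet below is an illustrative, standalone check using only the Python standard library (the function names are our own, not part of StellarTerm or any wallet's API); it verifies formatting only, not that the address actually belongs to ThreeFold:

```python
import base64

def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0x0000), the checksum used by Stellar strkeys."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def is_valid_account_id(address: str) -> bool:
    """Check length, base32 encoding, version byte, and checksum of a 'G...' account ID."""
    if len(address) != 56:
        return False
    try:
        raw = base64.b32decode(address)
    except ValueError:
        return False
    payload, checksum = raw[:-2], raw[-2:]
    if payload[0] != 6 << 3:  # strkey version byte for ed25519 public keys ('G' prefix)
        return False
    # The checksum is stored little-endian after the payload
    return crc16_xmodem(payload) == int.from_bytes(checksum, "little")

TFT_ISSUER = "GBOVQKJYHXRR3DX6NOX2RRYFRCUMSADGDESTDNBDS6CDVLGVESRTAC47"
print(is_valid_account_id(TFT_ISSUER))             # True: well-formed account ID
print(is_valid_account_id(TFT_ISSUER[:-1] + "6"))  # False: one changed character breaks the checksum
```

A passing check only rules out copy-paste corruption; a scam asset can still use a perfectly valid account ID, so always verify the issuer against official ThreeFold sources.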
If you have any uncertainties or doubts, it is always recommended to seek assistance from official support channels or trusted sources to ensure the accuracy of the trustline configuration. + +### Get TFT by Trading + +Once you have completed the previous step, you can start trading TFT on StellarTerm by placing a new trade offer in the 'Create new offer' section of the TFT/XLM exchange page, or any other trading pair you prefer. Remember to review and double-check all order details before confirming the trade. + +> Read the full details about StellarTerm's trading feature in [StellarTerm's knowledge base](https://stellarterm.freshdesk.com/support/solutions/articles/151000012428-basics-how-to-place-an-offer-on-sdex-with-stellarterm). + +### Get TFT by Swapping + +You can also get TFT on StellarTerm by using its '**Swap**' feature: go to [https://stellarterm.com/swap/](https://stellarterm.com/swap/) and select the trading pair you wish to trade. For example, if you want to swap XLM for TFT, choose the XLM/TFT pair. + +Enter the amount of the selected token you wish to swap from XLM to TFT. Review the details of the swap, including the estimated rate and any applicable fees. + +Click on the "Swap" button to initiate the swap. + +StellarTerm will generate a transaction for you to review. Confirm the transaction details, including the amount and destination address. If everything appears correct, approve and sign the transaction using your connected wallet. + +StellarTerm will process the swap transaction on the Stellar network. Once the transaction is confirmed, you will see the acquired TFT tokens in your wallet's balance. +*** +## Disclaimer + +The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. 
Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. + +**The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. \ No newline at end of file diff --git a/collections/threefold_token/buy_sell_tft/tf_otc.md b/collections/threefold_token/buy_sell_tft/tf_otc.md new file mode 100644 index 0000000..137bd80 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/tf_otc.md @@ -0,0 +1,40 @@ +

# Get TFT (Stellar) via ThreeFold's Live Desk

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Get Started](#get-started) +- [Disclaimer](#disclaimer) + +*** + +## Introduction + +If you have a large TFT purchase or a non-standard purchase/sell request that may be challenging to process through other methods, ThreeFold offers an Over the Counter (OTC) service. The OTC service allows you to make custom purchase or sell requests for TFT directly through ThreeFold. + +## Get Started + +Here's how you can proceed with an OTC purchase or sell request: + +**Contact the OTC Team**: To initiate an OTC purchase/sell request, send an email to **otc@gettft.com**. Make sure to cc **info@threefold.io** as well. + +**Provide the Required Information**: In your email, include the following details: + +- **Your Name:** Provide your full name for identification purposes. +- **Residency:** Specify your country of residency. +- **Amount of TFT:** Indicate the quantity of TFT you wish to purchase or sell. +- **Preferred Purchase/Sell Options:** State your preferred method or options for the transaction. +- **Preferred Price:** If applicable, mention your desired price for the purchase or sale. + +**Double-check the Email Address:** It's essential to ensure that you send your email to the correct address, which is **otc@gettft.com**, with a cc to **info@threefold.io**. + +Be cautious and vigilant of potential scams or fraudulent individuals pretending to be ThreeFold employees. Please note that ThreeFold employees never sell TFT personally on a 1-1 basis, and **sales are not conducted via Telegram, WhatsApp, or any other chat platform.** + +Take the necessary precautions and exercise due diligence when engaging in any financial transactions. ThreeFold is committed to maintaining the security and integrity of its services, and the OTC service is designed to handle larger or non-standard TFT transactions efficiently. + +Remember to follow the provided instructions, include accurate information, and be cautious of potential fraudulent activities. 
By adhering to the official process, you can proceed with your TFT purchase/sell request through the OTC service provided by ThreeFold. + +## Disclaimer + +> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. +> +> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. \ No newline at end of file diff --git a/collections/threefold_token/buy_sell_tft/tft_getting_started.md b/collections/threefold_token/buy_sell_tft/tft_getting_started.md new file mode 100644 index 0000000..0667376 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/tft_getting_started.md @@ -0,0 +1,53 @@ + +

# Getting Started

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Buying and Storing TFTs](#buying-and-storing-tfts) +- [How to set up a Stellar address for TFT transactions](#how-to-set-up-a-stellar-address-for-tft-transactions) +- [Disclaimer](#disclaimer) + - [Learn All the Methods](#learn-all-the-methods) + +*** + +## Introduction + +There are multiple ways to acquire TFT depending on your preferences and the blockchain network you choose to transact on. To start, you need to have a supporting wallet to store your TFTs. Read more about how to store your TFTs [here](../storing_tft/storing_tft.md). + +If you're interested in trading or swapping other cryptocurrencies for TFT, you can visit various crypto exchanges that list TFT, as shown in the next section of this page. Additionally, you can leverage swapping services available on decentralized exchanges (DEXs) or automated market makers (AMMs) to exchange your tokens for TFT (BSC). + +Moreover, you can purchase TFT using fiat currency directly from ThreeFold's Live Desk, or [ThreeFold's official TFT Shop](https://gettft.com/). Another option is to obtain TFT from ThreeFold Farmers. You can engage with Farmers to purchase TFT directly from them, contributing to the growth and decentralization of the ThreeFold network. + +It's important to explore the available options and select the most convenient and secure method for acquiring TFT. Always exercise caution and ensure the legitimacy and reliability of the platforms or individuals you engage with to obtain TFT. + + + +## Buying and Storing TFTs + +Discover step-by-step instructions on buying and storing TFTs across different platforms. + +If you're looking to navigate the [TFT Ecosystem](https://library.threefold.me/info/manual/#/tokens/threefold__tft_ecosystem), this collection of tutorials and manuals is here to help. Learn how to purchase, trade, and securely store your TFTs with ease. 
+ +For a comprehensive introduction to TFT, we recommend exploring the [TFT Home Section in the ThreeFold Library](https://library.threefold.me/info/threefold#/tokens/threefold__tokens_home). + + + +## How to set up a Stellar address for TFT transactions + +In general, to set up a Stellar address to transact TFT on the Stellar chain, you can use any Stellar wallet that has a TFT trustline enabled. Note that on the Stellar chain, fees are paid in XLM. + +The easiest way is to simply create an account on the ThreeFold Connect App (for [Android](https://play.google.com/store/apps/details?id=org.jimber.threebotlogin) and [iOS](https://apps.apple.com/us/app/threefold-connect/id1459845885)) and to then use the TFT wallet of the app, which has a TFT trustline on the Stellar chain by default and also comes with 1 XLM sponsored by ThreeFold for transaction fees. + + + +## Disclaimer + +The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. + +**The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. + + +### Learn All the Methods + +You can learn [all the different ways to transact TFT](./buy_sell_tft.md). 
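Whichever Stellar wallet you pick, enabling TFT always comes down to the same handful of trustline parameters. The fragment below restates them as a small Python mapping for quick reference; the field names are our own shorthand (not any wallet's actual API), and the issuer ID is the TFT issuer account quoted in these guides — always verify it against official ThreeFold sources before adding the trustline:

```python
# Illustrative summary of the TFT trustline parameters on Stellar.
# Field names are shorthand for this guide, not a wallet API.
TFT_TRUSTLINE = {
    "asset_code": "TFT",
    # TFT issuer account ID as quoted in these guides -- verify it
    # against ThreeFold's official channels before use.
    "asset_issuer": "GBOVQKJYHXRR3DX6NOX2RRYFRCUMSADGDESTDNBDS6CDVLGVESRTAC47",
    "network": "Stellar public network",
    "fee_asset": "XLM",  # Stellar transaction fees are paid in XLM
}

for key, value in TFT_TRUSTLINE.items():
    print(f"{key}: {value}")
```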
\ No newline at end of file diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_1.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_1.png new file mode 100644 index 0000000..265dcc1 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_1.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_10.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_10.png new file mode 100644 index 0000000..37e04bb Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_10.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_11.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_11.png new file mode 100644 index 0000000..6f05038 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_11.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_12.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_12.png new file mode 100644 index 0000000..ad81a3d Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_12.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_13.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_13.png new file mode 100644 index 0000000..8d808cc Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_13.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_14.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_14.png new file mode 100644 index 0000000..014edde Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_14.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_15.png 
b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_15.png new file mode 100644 index 0000000..895d432 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_15.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_16.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_16.png new file mode 100644 index 0000000..b8ca3c9 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_16.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_17.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_17.png new file mode 100644 index 0000000..5919d0c Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_17.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_18.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_18.png new file mode 100644 index 0000000..8ea142f Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_18.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_19.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_19.png new file mode 100644 index 0000000..14688ab Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_19.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_2.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_2.png new file mode 100644 index 0000000..cb638bd Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_2.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_20.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_20.png new file mode 100644 index 0000000..b072502 Binary files /dev/null and 
b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_20.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_21.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_21.png new file mode 100644 index 0000000..709e50a Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_21.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_22.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_22.png new file mode 100644 index 0000000..6e588cd Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_22.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_23.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_23.png new file mode 100644 index 0000000..b47c4f0 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_23.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_24.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_24.png new file mode 100644 index 0000000..df06bec Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_24.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_25.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_25.png new file mode 100644 index 0000000..7ba5402 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_25.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_26.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_26.png new file mode 100644 index 0000000..f34d4ff Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_26.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_27.png 
b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_27.png new file mode 100644 index 0000000..1de6ee5 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_27.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_28.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_28.png new file mode 100644 index 0000000..c3e8cd0 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_28.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_29.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_29.png new file mode 100644 index 0000000..888067f Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_29.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_3.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_3.png new file mode 100644 index 0000000..4a18f4c Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_3.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_30.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_30.png new file mode 100644 index 0000000..f28e697 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_30.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_31.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_31.png new file mode 100644 index 0000000..84fe32e Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_31.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_32.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_32.png new file mode 100644 index 0000000..3ab05eb Binary files /dev/null and 
b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_32.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_33.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_33.png new file mode 100644 index 0000000..b30050a Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_33.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_34.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_34.png new file mode 100644 index 0000000..553db13 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_34.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_4.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_4.png new file mode 100644 index 0000000..b2a0d03 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_4.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_5.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_5.png new file mode 100644 index 0000000..2b28aef Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_5.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_6.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_6.png new file mode 100644 index 0000000..a3601b3 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_6.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_7.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_7.png new file mode 100644 index 0000000..879a735 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_7.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_8.png 
b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_8.png new file mode 100644 index 0000000..b1a9321 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_8.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_9.png b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_9.png new file mode 100644 index 0000000..eb2e80e Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/gettft_9.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/lobstr_swap.jpeg b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/lobstr_swap.jpeg new file mode 100644 index 0000000..0ca13b9 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/lobstr_swap.jpeg differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/lobstr_trade.jpeg b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/lobstr_trade.jpeg new file mode 100644 index 0000000..28a5ad0 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/lobstr_trade.jpeg differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/img/threefold__lobstr_swap_tft_.jpg b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/threefold__lobstr_swap_tft_.jpg new file mode 100644 index 0000000..d8955a6 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_lobstr/img/threefold__lobstr_swap_tft_.jpg differ diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr.md b/collections/threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr.md new file mode 100644 index 0000000..25fdda7 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr.md @@ -0,0 +1,10 @@ +

# Get TFT on Lobstr Wallet

+ +It is very easy to purchase TFT on Lobstr Wallet. + +We present here a quick guide with the essential information to purchase TFT, and a complete guide that walks you through the process step by step. + +

## Table of Contents

+ +- [Quick Guide](./tft_lobstr_short_guide.md) +- [Complete Guide](./tft_lobstr_complete_guide.md) \ No newline at end of file diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr_complete_guide.md b/collections/threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr_complete_guide.md new file mode 100644 index 0000000..e1fd2f6 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr_complete_guide.md @@ -0,0 +1,200 @@ +

# Get TFT: Lobstr Wallet (Stellar)

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Download the App and Create an Account](#download-the-app-and-create-an-account) +- [Connect Your TF Connect App Wallet](#connect-your-tf-connect-app-wallet) +- [Buy XLM with Fiat Currency](#buy-xlm-with-fiat-currency) +- [Swap XLM for TFT](#swap-xlm-for-tft) +- [Disclaimer](#disclaimer) + +*** + +## Introduction + +In this guide, you'll learn how to buy ThreeFold Tokens with [Lobstr](https://lobstr.co/) using a credit or debit card and a TF Connect wallet. This is a simple method that works well for small to medium purchases. + +Lobstr is an app for managing digital assets like TFT on the Stellar blockchain. In this case, we'll first obtain Stellar's native currency, Lumens (XLM), and swap them for TFT. + +> Note that it is possible to do these steps without connecting the Lobstr wallet to the TF Connect App wallet (see the [quick guide](./tft_lobstr_short_guide.md)). But connecting has a clear advantage: when you buy and swap on Lobstr, the TFT is directly accessible in the TF Connect app wallet. + +## Download the App and Create an Account + +Go to [www.lobstr.co](https://www.lobstr.co) and download the Lobstr app. +You can download it for Android or iOS. + +![image](./img/gettft_1.png) + +We show the steps for Android here, but the process is very similar on iOS. +Once you've clicked on the Android button, you can click **Install** on the Google Play Store page: + +![image](./img/gettft_2.png) + +Once the app is downloaded, open it: + +![image](./img/gettft_3.png) + +On the Lobstr app, click on **Create Account**: + +![image](./img/gettft_4.png) + +You will then need to enter your email address: + +![image](./img/gettft_5.png) + +Then, choose a safe password for your account: + +![image](./img/gettft_6.png) + +Once this is done, you will need to verify your email. + +Click on **Verify Email** and then check your email inbox. + +![image](./img/gettft_7.png) + +Simply click on **Verify Email** in the email you've received. 
+ +![image](./img/gettft_8.png) + +Once your email is verified, you can sign in to your Lobstr account: + +![image](./img/gettft_9.png) + +![image](./img/gettft_10.png) + +## Connect Your TF Connect App Wallet + +You will then need to either create a new wallet or connect an existing wallet. + +For this guide, we will show how to connect your TF Connect app wallet, but please note that you can create a new wallet on Lobstr by clicking **Create Stellar Wallet**. + +Since we are working with the ThreeFold ecosystem, it is very easy and practical to simply connect your TF Connect app wallet to Lobstr. This way, when you buy XLM and swap XLM tokens for TFT, the tokens will be directly available in your TF Connect app wallet. + +![image](./img/gettft_11.png) + +To connect your TF Connect app wallet, you will need to find your Stellar address and secret key. +This is very simple to do. + +Click on **I have a public or secret key**. + +![image](./img/gettft_12.png) + +As you can see in this next picture, you need the Stellar address and secret key to properly connect your TF Connect app wallet to Lobstr: + +![image](./img/gettft_18.png) + +To find your Stellar address and secret key, go to the TF Connect app and select the **Wallet** section: + +![image](./img/gettft_13.png) + +At the top of the section, click on the **copy** button to copy your Stellar address: + +![image](./img/gettft_17.png) + +Now, we will find the Stellar secret key. +At the bottom of the section, click on the encircled **i** button: + +![image](./img/gettft_14.png) + +Next, click on the **eye** button to reveal your secret key: + +![image](./img/gettft_15.png) + +You can now simply click on the **copy** button on the right: + +![image](./img/gettft_16.png) + +That's it! You've now connected your TF Connect app wallet to your Lobstr account. + +## Buy XLM with Fiat Currency + +Now, all we need to do is buy XLM and then swap it for TFT. 
+It will be directly available in your TF Connect app wallet. + +On the Lobstr app, click on the top right menu button: + +![image](./img/gettft_19.png) + +Then, click on **Buy Crypto**: + +![image](./img/gettft_20.png) + +By default, the crypto selected is XLM. This is fine for us, as we will quickly swap the XLM for TFT. + +On the Buy Crypto page, you can choose the fiat currency you want. +By default, it is USD. To select another fiat currency, you can click on **ALL** and see the available fiat currencies: + +![image](./img/gettft_21.png) + +You can search for or select the currency you want for the transfer: + +![image](./img/gettft_22.png) + +You will then need to decide how much XLM you want to buy. Note that there can be a minimum amount. +Once you've chosen the desired amount, click on **Continue**. + +![image](./img/gettft_23.png) + +Lobstr will then ask you to choose a payment method. In this case, it is Moonpay. +Note that in some cases, your credit card won't accept Moonpay payments by default. You will simply need to confirm that you agree to transact with Moonpay. This can be done by phone. Check with your bank and credit card company if this applies. + +![image](./img/gettft_24.png) + +Once you've set up your Moonpay payment method, you will need to process and confirm the transaction: + +![image](./img/gettft_25.png) +![image](./img/gettft_26.png) + +You will then see a processing window. +This process is usually fast. Within a few minutes, you should receive your XLM. + +![image](./img/gettft_27.png) + +Once the XLM is delivered, you will receive a notification: + +![image](./img/gettft_28.png) + +When your transaction is complete, you will see this message: + +![image](./img/gettft_29.png) + +On the Trade History page, you can choose to download a CSV file of your transaction: + +![image](./img/gettft_30.png) + +That's it! You've bought XLM with Lobstr and Moonpay. 
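If you'd like to double-check that the purchased XLM actually landed in the connected wallet, Stellar's public Horizon API exposes account balances as plain JSON. The sketch below is illustrative only: the sample payload is made-up data, and the account ID you would pass to `fetch_xlm_balance` is the `G...` address copied from the TF Connect app.

```python
import json
from urllib.request import urlopen

HORIZON = "https://horizon.stellar.org"

def xlm_balance(account: dict) -> str:
    """Extract the native (XLM) balance from a Horizon account record."""
    for entry in account.get("balances", []):
        if entry.get("asset_type") == "native":
            return entry["balance"]
    return "0"

def fetch_xlm_balance(account_id: str) -> str:
    """Fetch an account from Horizon and return its XLM balance."""
    with urlopen(f"{HORIZON}/accounts/{account_id}") as resp:
        return xlm_balance(json.load(resp))

# Example with a Horizon-shaped payload (no network call needed):
sample = {"balances": [
    {"asset_type": "credit_alphanum4", "asset_code": "TFT", "balance": "100.0000000"},
    {"asset_type": "native", "balance": "25.5000000"},
]}
print(xlm_balance(sample))  # → 25.5000000
```

Running `fetch_xlm_balance` with your own Stellar address should reflect the new XLM once Moonpay completes the order.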
+ +## Swap XLM for TFT + +Now we want to swap the XLM for ThreeFold Tokens (TFT). +This is even easier than the previous steps. + +Go to the Lobstr Home menu and select **Swap**: + +![image](./img/gettft_31.png) + +On the **Swap** page, write "tft" and select the ThreeFold token: + +![image](./img/gettft_32.png) + +Select the amount of XLM you want to swap. It is recommended to keep at least 1 XLM in your wallet for transaction fees. + +![image](./img/gettft_33.png) + +Within a few seconds, you will receive a confirmation that your swap is completed. +Note that the TFT is sent directly to your TF Connect app wallet. + +![image](./img/gettft_34.png) + +That's it. You've swapped XLM for TFT. + +You can now use your TFT to deploy workloads on the ThreeFold Grid. + +## Disclaimer + +> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. +> +> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](../../../../knowledge_base/legal/disclaimer.md) and seek advice from a qualified financial professional if needed. diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr_short_guide.md b/collections/threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr_short_guide.md new file mode 100644 index 0000000..e1fd2f6 --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr_short_guide.md @@ -0,0 +1,76 @@ +

# Get TFT: Quick Start (Stellar)

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Install Lobstr](#install-lobstr) +- [Create Wallet](#create-wallet) +- [Buy XLM](#buy-xlm) +- [Swap XLM for TFT](#swap-xlm-for-tft) +- [Disclaimer](#disclaimer) + +*** + +## Introduction + +In this guide, we show how to buy and sell ThreeFold Tokens with [Lobstr wallet](https://lobstr.co/) using a credit or a debit card. This is a simple method that works well for small to medium purchases. You can buy TFT on Lobstr with your smartphone (Android and iOS) or your computer. + +Lobstr is an app for managing digital assets like TFT on the Stellar blockchain. In this case, we'll first obtain Stellar's native currency, Lumens (XLM), and swap them for TFT. + +> [Moonpay](https://www.moonpay.com/) is the service integrated in Lobstr that enables users to buy digital assets like XLM with fiat currencies like US Dollars and Euros. Moonpay is available in most regions, but there are some [exceptions](https://support.moonpay.com/hc/en-gb/articles/6557330712721-What-are-our-non-supported-countries-states-and-territories-for-on-ramp-product). If your country or state isn't supported by Moonpay, you will need to find another cryptocurrency exchange or on-ramp to obtain XLM. From there, you can follow the rest of this guide. + +## Install Lobstr + +To get Lobstr, just head to [lobstr.co](https://lobstr.co/), where you'll find buttons to download the app for both Android and iOS. Once the app is installed, open it to continue. To use Lobstr directly in your browser, simply click on **Get Started** in the top menu of the Lobstr website. This allows you to use Lobstr on your computer instead of your smartphone. + +Hit the **Create Account** button and proceed to create an account by entering your email address and choosing a password. Verify your email address and then sign in. + +## Create Wallet + +After you sign in to Lobstr for the first time, you'll be prompted to create or connect a Stellar wallet. 
If you already have the TF Connect App, follow the instructions at [this link](./tft_lobstr_complete_guide.html#connect-your-tf-connect-app-wallet) to import your existing wallet into Lobstr. Otherwise, just press the **Create Stellar Wallet** button. + +Lobstr will then present you with 12 words to write down and store safely. Keep in mind that anyone who can access these words can also access any funds in your account. They can also be used to recover your account later. Be sure not to lose them. + +## Buy XLM + +In this step, we'll buy XLM with a credit or debit card (actually a few forms of bank transfer and other payment methods are [supported](https://support.moonpay.com/customers/docs/moonpays-supported-payment-methods-1) too). + +On the Lobstr app, click on the hamburger menu button: + +![image](./img/gettft_19.png) + +Then click on **Buy Crypto**: + +![image](./img/gettft_20.png) + +By default XLM is selected to buy, which is what we want. Above you can choose how much to buy in the currency of your choosing. Press **ALL** to see a full list of available currencies. + +Once you've entered the amount you want to buy, hit **Continue** and proceed through the checkout process with Moonpay. On the final screen, you'll see a message that it can take some time to complete the order: + +![image](./img/gettft_27.png) + +Usually this happens quickly and you'll receive a notification if notifications are enabled for the Lobstr app. + +## Swap XLM for TFT + +Once you have the XLM, use the hamburger menu again and this time select **Swap**: + +![image](./img/gettft_31.png) + +Enter TFT in the search bar and select the entry that includes **threefold.io**. + +> Be careful at this step! There are fake scam coins using the ThreeFold logo. We can't remove these tokens from Stellar, unfortunately, so you will need to be sure to choose the right one. + +![image](./img/gettft_32.png) + +On the next screen, XLM will be automatically selected as the currency to trade for TFT. 
The amount that's available to trade will be shown in blue. You can just tap this amount to trade the maximum. A couple of XLM will be reserved to keep your account open and pay for future Stellar transaction fees. + +Hit the green button at the bottom to complete the trade. There will be a few more prompts and potentially some warnings about scam tokens (again, look for **threefold.io**). + +When you're finished, you'll see a screen that says **Swap completed**. Congrats, you just bought TFT! + +## Disclaimer + +> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. +> +> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](../../../../knowledge_base/legal/definitions_legal.md) and seek advice from a qualified financial professional if needed. \ No newline at end of file diff --git a/collections/threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr_short_guide_archive.md b/collections/threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr_short_guide_archive.md new file mode 100644 index 0000000..7d30e5e --- /dev/null +++ b/collections/threefold_token/buy_sell_tft/tft_lobstr/tft_lobstr_short_guide_archive.md @@ -0,0 +1,58 @@ +

# Lobstr Wallet: Quick Guide

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Get Started](#get-started) + - [Get TFT By Swapping](#get-tft-by-swapping) + - [Get TFT by Trading](#get-tft-by-trading) +- [Disclaimer](#disclaimer) + +*** + +## Introduction + +In this tutorial, we will guide you through the process of getting TFT on the [Lobstr Wallet](https://lobstr.co/). The wallet is available for both desktop and mobile, as a web and mobile app. + +Lobstr Wallet is a secure and user-friendly wallet designed specifically for the Stellar blockchain. It allows you to store, manage, and transact with your Stellar-based assets, including TFT (ThreeFold Token). + +## Prerequisites + +- **XLM**: When getting TFT tokens using the Lobstr wallet, the process involves swapping XLM (Stellar Lumens) or other Stellar tokens into TFT. Please note that a certain amount of XLM funding is required to facilitate the sending and receiving of assets on the Stellar network. + +- **Create a Lobstr Wallet and Add TFT Asset**: Create a [Lobstr Wallet](https://lobstr.co/) account via the mobile app or desktop, and [add TFT as an asset](https://lobstr.freshdesk.com/support/solutions/articles/151000001061-adding-custom-assets-on-lobstr) (with either the code `TFT` or the home domain `threefold.io`). + +## Get Started + +### Get TFT By Swapping + +In this tutorial, we will be using the mobile app to guide you through the process of buying TFT via Lobstr. + +Once you have completed the prerequisites, you can get TFT on Lobstr by opening the menu bar and clicking '**Swap**' to start swapping your existing tokens, for example XLM or USDC, to TFT. + +![](./img/lobstr_swap.jpeg) + +Insert the amount of TFT you'd like to buy or the amount of XLM you'd like to swap for TFT. Click '**Swap XLM to TFT**' to confirm the transaction. + +![](img/threefold__lobstr_swap_tft_.jpg) + +Congratulations. You just swapped some XLM for TFT. 
Go to the 'Assets' page from the menu bar to see your recently purchased TFT tokens. + +### Get TFT by Trading + +For advanced traders, Lobstr provides access to full orderbook trading functionality in the Trade section. + +You can start trading TFT on Lobstr by opening the menu bar and clicking '**Trade**' to start trading your existing tokens, for example XLM or USDC, for TFT. + +![](./img/lobstr_trade.jpeg) + +You can choose to fulfill sell orders, or create your own buy order. Once the buy order or trade has been fulfilled, your TFT will show up in your wallet. + +> Read the full details about Lobstr's trading feature on [Lobstr's knowledge base](https://lobstr.freshdesk.com/support/solutions/articles/151000001080-trading-in-lobstr-wallet). + +## Disclaimer + +> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. +> +> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](../../../wiki/disclaimer.md) and seek advice from a qualified financial professional if needed. 
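For the orderbook trading route described above, it can help to estimate what a buy order will actually cost before placing it. The sketch below walks a list of sell offers, cheapest first, and totals the XLM needed for a target amount of TFT; the quotes are invented for illustration and do not reflect live Lobstr or Stellar orderbook data.

```python
from decimal import Decimal

def cost_to_buy(asks, amount):
    """Walk sell offers (price in XLM per TFT, TFT available), cheapest
    first, and return the total XLM needed to buy `amount` TFT."""
    remaining = Decimal(amount)
    total = Decimal("0")
    for price, available in sorted(asks, key=lambda a: Decimal(a[0])):
        take = min(remaining, Decimal(available))
        total += take * Decimal(price)
        remaining -= take
        if remaining == 0:
            return total
    raise ValueError("orderbook too shallow for requested amount")

# Hypothetical asks: (price in XLM per TFT, TFT offered)
asks = [("0.12", "500"), ("0.11", "200"), ("0.13", "1000")]
print(cost_to_buy(asks, "600"))  # 200*0.11 + 400*0.12 = 70 XLM
```

This is the same calculation an exchange performs when it fills a market buy against the book: cheap offers are consumed first, so larger orders pay a higher average price.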
diff --git a/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_1.png b/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_1.png new file mode 100644 index 0000000..0db09d3 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_1.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_2.png b/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_2.png new file mode 100644 index 0000000..b7b21f2 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_2.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_3.png b/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_3.png new file mode 100644 index 0000000..85b01b2 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_3.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_4.png b/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_4.png new file mode 100644 index 0000000..753c0dd Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_4.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_5.png b/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_5.png new file mode 100644 index 0000000..073a4a7 Binary files /dev/null and b/collections/threefold_token/buy_sell_tft/tft_metamask/img/tft_on_ethereum_image_5.png differ diff --git a/collections/threefold_token/buy_sell_tft/tft_metamask/tft_metamask.md b/collections/threefold_token/buy_sell_tft/tft_metamask/tft_metamask.md new file mode 100644 index 0000000..99fea4c --- /dev/null +++ 
b/collections/threefold_token/buy_sell_tft/tft_metamask/tft_metamask.md @@ -0,0 +1,91 @@ +

# Get TFT: MetaMask (BSC & ETH)

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [TFT Addresses](#tft-addresses) + - [Ethereum Chain Address](#ethereum-chain-address) + - [BSC Address](#bsc-address) +- [Add TFT to Metamask](#add-tft-to-metamask) +- [Buy TFT on Metamask](#buy-tft-on-metamask) +- [Disclaimer](#disclaimer) + +*** + +## Introduction + +In this guide, we present how to buy and sell ThreeFold Tokens on BNB Smart Chain and Ethereum using [Metamask](https://metamask.io/). + +**BNB Smart Chain** and **Ethereum** are blockchain networks that enable the execution of smart contracts and decentralized applications, while **MetaMask** is a software cryptocurrency wallet used to interact with Ethereum and BNB Smart Chain. + +## TFT Addresses + +With MetaMask, you can buy and sell TFT on both BNB Smart Chain and the Ethereum chain. Make sure to use the correct TFT address when making transactions. + +### Ethereum Chain Address + +The ThreeFold Token (TFT) is available on Ethereum. +It is implemented as a wrapped asset with the following token address: + +``` +0x395E925834996e558bdeC77CD648435d620AfB5b +``` + +### BSC Address + +The ThreeFold Token (TFT) is available on BSC. +It is implemented as a wrapped asset with the following token address: + +``` +0x8f0FB159380176D324542b3a7933F0C2Fd0c2bbf +``` + +## Add TFT to Metamask + +We present the steps on the Ethereum chain. Make sure to switch to BSC and use the TFT BSC address if you want to proceed on BSC. + +Open Metamask and import the ThreeFold Token. First, click on `Import tokens`: + +![Metamask-Main|297x500](./img/tft_on_ethereum_image_1.png) + +Then, choose `Custom Token`: + +![Metamask-ImportToken|298x500](./img/tft_on_ethereum_image_2.png) + +To add the ThreeFold Token, paste its Ethereum address in the `Token contract address` field. The address is the following: + +``` +0x395E925834996e558bdeC77CD648435d620AfB5b +``` + +Once you paste the TFT contract address, the `Token symbol` field should automatically be filled with `TFT`. 
+ +Click on the button `Add Custom Token`. + +![Metamask-importCustomToken|297x500](./img/tft_on_ethereum_image_3.png) + +To confirm, click on the button `Import tokens`: + +![Metamask-ImporttokensQuestion|298x500](./img/tft_on_ethereum_image_4.png) + +TFT is now added to Metamask. + +## Buy TFT on Metamask + +Liquidity is present on Ethereum, so you can use the "Swap" functionality in MetaMask directly, or go to [Uniswap](https://app.uniswap.org/#/swap) to swap ETH, or any other token, for TFT. + +When using Uniswap, paste the TFT token address in the `Select a token` field to select TFT on Ethereum. The TFT token address is the following: + +``` +0x395E925834996e558bdeC77CD648435d620AfB5b +``` + +![Uniswap-selecttoken|315x500](./img/tft_on_ethereum_image_5.png) + +## Disclaimer + +> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. +> +> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](../../../../knowledge_base/legal/disclaimer.md) and seek advice from a qualified financial professional if needed. 
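A practical note for anyone checking a TFT balance with a script or block explorer rather than through MetaMask: ERC-20/BEP-20 contracts report balances as integers in the token's smallest unit, scaled by the contract's `decimals()` value. The sketch below assumes 7 decimals for TFT (matching the Stellar asset); that assumption should be verified against the contract's own `decimals()` before relying on it.

```python
from decimal import Decimal

def to_base_units(amount: str, decimals: int = 7) -> int:
    """Convert a human-readable token amount to integer base units."""
    scaled = Decimal(amount) * (10 ** decimals)
    if scaled != scaled.to_integral_value():
        raise ValueError("amount has more precision than the token supports")
    return int(scaled)

def from_base_units(raw: int, decimals: int = 7) -> Decimal:
    """Convert integer base units back to a human-readable amount."""
    return Decimal(raw) / (10 ** decimals)

print(to_base_units("12.5"))       # → 125000000
print(from_base_units(125000000))  # → 12.5
```

Using `Decimal` instead of floats avoids rounding surprises when converting amounts for on-chain transfers.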
\ No newline at end of file diff --git a/collections/threefold_token/img/albedo_balance.png b/collections/threefold_token/img/albedo_balance.png new file mode 100644 index 0000000..52cef5e Binary files /dev/null and b/collections/threefold_token/img/albedo_balance.png differ diff --git a/collections/threefold_token/img/albedo_home.png b/collections/threefold_token/img/albedo_home.png new file mode 100644 index 0000000..52cef5e Binary files /dev/null and b/collections/threefold_token/img/albedo_home.png differ diff --git a/collections/threefold_token/img/threefold__albedo_send_receive.png b/collections/threefold_token/img/threefold__albedo_send_receive.png new file mode 100644 index 0000000..3f3b773 Binary files /dev/null and b/collections/threefold_token/img/threefold__albedo_send_receive.png differ diff --git a/collections/threefold_token/img/token_distribution.png b/collections/threefold_token/img/token_distribution.png new file mode 100644 index 0000000..a05ad5a Binary files /dev/null and b/collections/threefold_token/img/token_distribution.png differ diff --git a/collections/threefold_token/liquidity/img/1inch_pool_details.png b/collections/threefold_token/liquidity/img/1inch_pool_details.png new file mode 100644 index 0000000..07521c7 Binary files /dev/null and b/collections/threefold_token/liquidity/img/1inch_pool_details.png differ diff --git a/collections/threefold_token/liquidity/img/1inch_pool_total.png b/collections/threefold_token/liquidity/img/1inch_pool_total.png new file mode 100644 index 0000000..de818e7 Binary files /dev/null and b/collections/threefold_token/liquidity/img/1inch_pool_total.png differ diff --git a/collections/threefold_token/liquidity/img/1inch_provide.png b/collections/threefold_token/liquidity/img/1inch_provide.png new file mode 100644 index 0000000..cb577ca Binary files /dev/null and b/collections/threefold_token/liquidity/img/1inch_provide.png differ diff --git a/collections/threefold_token/liquidity/img/1inch_submit.png 
b/collections/threefold_token/liquidity/img/1inch_submit.png new file mode 100644 index 0000000..93dead2 Binary files /dev/null and b/collections/threefold_token/liquidity/img/1inch_submit.png differ diff --git a/collections/threefold_token/liquidity/img/1inch_tftpool.png b/collections/threefold_token/liquidity/img/1inch_tftpool.png new file mode 100644 index 0000000..b8cd2ce Binary files /dev/null and b/collections/threefold_token/liquidity/img/1inch_tftpool.png differ diff --git a/collections/threefold_token/liquidity/img/albedo_confirm.png b/collections/threefold_token/liquidity/img/albedo_confirm.png new file mode 100644 index 0000000..d09eefc Binary files /dev/null and b/collections/threefold_token/liquidity/img/albedo_confirm.png differ diff --git a/collections/threefold_token/liquidity/img/albedo_liquidity.png b/collections/threefold_token/liquidity/img/albedo_liquidity.png new file mode 100644 index 0000000..b27aef6 Binary files /dev/null and b/collections/threefold_token/liquidity/img/albedo_liquidity.png differ diff --git a/collections/threefold_token/liquidity/img/assets.png b/collections/threefold_token/liquidity/img/assets.png new file mode 100644 index 0000000..96ed489 Binary files /dev/null and b/collections/threefold_token/liquidity/img/assets.png differ diff --git a/collections/threefold_token/liquidity/img/liquidity_1inch_unlock.png b/collections/threefold_token/liquidity/img/liquidity_1inch_unlock.png new file mode 100644 index 0000000..e449cce Binary files /dev/null and b/collections/threefold_token/liquidity/img/liquidity_1inch_unlock.png differ diff --git a/collections/threefold_token/liquidity/img/liquidity_approve.jpeg b/collections/threefold_token/liquidity/img/liquidity_approve.jpeg new file mode 100644 index 0000000..e4fd5b6 Binary files /dev/null and b/collections/threefold_token/liquidity/img/liquidity_approve.jpeg differ diff --git a/collections/threefold_token/liquidity/img/liquidity_busd.jpeg 
b/collections/threefold_token/liquidity/img/liquidity_busd.jpeg new file mode 100644 index 0000000..6721463 Binary files /dev/null and b/collections/threefold_token/liquidity/img/liquidity_busd.jpeg differ diff --git a/collections/threefold_token/liquidity/img/pancake_liquidity.png b/collections/threefold_token/liquidity/img/pancake_liquidity.png new file mode 100644 index 0000000..c16b551 Binary files /dev/null and b/collections/threefold_token/liquidity/img/pancake_liquidity.png differ diff --git a/collections/threefold_token/liquidity/img/threefold__confirmation.jpg b/collections/threefold_token/liquidity/img/threefold__confirmation.jpg new file mode 100644 index 0000000..b9c1f11 Binary files /dev/null and b/collections/threefold_token/liquidity/img/threefold__confirmation.jpg differ diff --git a/collections/threefold_token/liquidity/img/threefold__lp_tokens.jpg b/collections/threefold_token/liquidity/img/threefold__lp_tokens.jpg new file mode 100644 index 0000000..a7b0432 Binary files /dev/null and b/collections/threefold_token/liquidity/img/threefold__lp_tokens.jpg differ diff --git a/collections/threefold_token/liquidity/img/threefold__supply.jpg b/collections/threefold_token/liquidity/img/threefold__supply.jpg new file mode 100644 index 0000000..1b2eda6 Binary files /dev/null and b/collections/threefold_token/liquidity/img/threefold__supply.jpg differ diff --git a/collections/threefold_token/liquidity/liquidity_1inch.md b/collections/threefold_token/liquidity/liquidity_1inch.md new file mode 100644 index 0000000..99845ad --- /dev/null +++ b/collections/threefold_token/liquidity/liquidity_1inch.md @@ -0,0 +1,79 @@ +

Provide TFT (BSC) Liquidity on 1inch.io

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Become a TFT LP on 1inch.io](#become-a-tft-lp-on-1inchio) +- [Important Notice](#important-notice) +- [Disclaimer](#disclaimer) + +*** + +## Introduction + +In the case of TFT on Binance Smart Chain (BSC) and [1inch.io](https://1inch.io/), becoming a liquidity provider involves providing TFT(BSC) and another token (such as BUSD) to the TFT-BSC liquidity pool on 1inch.io. + +By adding liquidity to this pool, the LP helps to ensure that there is a consistent and sufficient supply of TFT available for trading on the BSC network. This contributes to the overall liquidity of the TFT token on 1inch.io, making it easier for users to buy and sell TFT(BSC) without experiencing significant price slippage. + +By participating in the liquidity provision process, you actively contribute to the growth and adoption of the TFT token. As more users trade TFT(BSC) on 1inch.io, the liquidity and trading volume increase, which can attract more traders and investors to the token. This increased activity can lead to a broader awareness of TFT and potentially drive its value and market presence. + +## Prerequisites + +Before you can become a TFT LP on 1inch.io, there are a few prerequisites you need to fulfill. Here's what you'll need: + +- **BSC Wallet**: To interact with the Binance Smart Chain and 1inch.io, you'll need a BSC-compatible wallet. [MetaMask](https://metamask.io/) is a popular option that supports BSC. Make sure to set up and secure your wallet before proceeding. In this tutorial, we will use MetaMask as our connecting wallet. + +> [Set up a Metamask Wallet](../storing_tft/metamask.md) + +- **Get BNB Tokens**: As the native cryptocurrency of Binance Smart Chain, BNB is required to pay for transaction fees on the network. Ensure you have some BNB tokens in your BSC wallet to cover these fees when providing TFT liquidity on 1inch.io.
Read [this tutorial](https://fortunly.com/articles/how-to-buy-bnb/) to learn where you can buy BNB and transfer it to your BSC Wallet. + +> [Get BNB Tokens](https://docs.pancakeswap.finance/readme/get-started/bep20-guide) + +## Become a TFT LP on 1inch.io + +Anyone who fulfills the prerequisites can provide TFT(BSC) liquidity on 1inch.io. As of June 2023, there is one existing BUSD-TFT(BSC) liquidity pool on 1inch.io where you can provide liquidity, at the following address: + +[**https://app.1inch.io/#/56/earn/pools?filter=TFT**](https://app.1inch.io/#/56/earn/pools?filter=TFT). + +Click on the arrow as shown below to see the details of the BUSD-TFT pool. + +![](./img/1inch_tftpool.png) + +Next, click on the '**Provide Liquidity**' button in the lower right corner. A new window will open as shown below. Click the 'Provide Liquidity' icon to start the transaction process. + +![](./img/1inch_pool_details.png) + +A new window will open that will ask you to enter the amount of BUSD tokens or the amount of TFT(BSC) tokens you want to provide. As you can see in the screenshot below, before you can proceed, you have to unlock the tokens. This is required and will allow 1inch to execute smart contract transactions on your behalf. Unlocking tokens costs a small BNB fee. +To unlock, click on the "Unlock" button and follow the instructions given by 1inch. You will have to confirm the transaction in your wallet. + +![](./img/liquidity_1inch_unlock.png) + +Once the tokens are unlocked, you can provide liquidity. To do so, enter the amount of BUSD tokens or the amount of TFT(BSC) tokens you want to provide. The amount of the other token will adjust accordingly. + +To make the process a bit easier to understand, you can also enter the USD$ value in the corresponding field and it will adjust the token amounts automatically, according to the amount you specify.
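The proportional adjustment described above — entering one token amount and having the other side follow the pool's current reserve ratio — can be sketched as follows. The reserve figures and function names are illustrative, not 1inch API calls:

```python
# Sketch of pool-proportional deposits: the ratio of the two tokens you
# add must match the pool's current reserve ratio. Hypothetical numbers.

def paired_amount(deposit_a, reserve_a, reserve_b):
    """Amount of token B required alongside deposit_a of token A."""
    return deposit_a * reserve_b / reserve_a

def pool_share(deposit_a, reserve_a):
    """Fraction of the pool you own after depositing deposit_a of token A."""
    return deposit_a / (reserve_a + deposit_a)

busd_reserve, tft_reserve = 250_000.0, 20_000_000.0  # hypothetical pool
busd_in = 2_100.0
print(paired_amount(busd_in, busd_reserve, tft_reserve))  # TFT needed
print(pool_share(busd_in, busd_reserve))                  # your pool share
```

With this made-up 250,000 BUSD / 20,000,000 TFT pool, matching a 2,100 BUSD deposit requires 168,000 TFT — the same mechanics the 1inch form applies when it auto-fills the second field.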
You can also click on the "Max:" link to provide the maximum amount you have available. + +In the screenshot example below, we are providing $2,100 into the pool, which comes to roughly 169,354.8378 TFT(BSC) tokens, meaning we also have to provide around 2,100 BUSD tokens (note that these numbers will be different when you do this, as prices fluctuate). +Once you have dialed in the number of tokens you want to provide to the pool, click on the "**Provide liquidity**" button. + +![](./img/1inch_provide.png) + +In the next step, you will have to confirm the transaction with your wallet. Once confirmed, you will get the screen shown below. Make sure you bookmark the link to the Etherscan transaction; you will need it to confirm everything went according to plan. Once the transaction is confirmed on chain (see Etherscan), you are basically done. + +![](./img/1inch_submit.png) + +Once done, you can go back to the TFT pool section on the 1inch.io Pools page at [**https://app.1inch.io/#/56/earn/pools?filter=TFT**](https://app.1inch.io/#/56/earn/pools?filter=TFT) and expand the BUSD-TFT pool entry. There you can see your liquidity in the pool. + +![](./img/1inch_pool_total.png) + +Note that the number of TFT and BUSD tokens will change constantly and will not be the same as you initially provided. This is because of the way liquidity pools work and is absolutely normal. + +## Important Notice + +It's important to note that being a liquidity provider involves certain risks, such as impermanent loss, which occurs when the value of the tokens in the liquidity pool fluctuates. However, if you believe in the potential of TFT and want to actively contribute to the liquidity ecosystem on 1inch.io, becoming a liquidity provider can be a rewarding opportunity to earn fees and support the growth of the platform. + +## Disclaimer + +> The information provided in this tutorial or any related discussion is not intended as investment advice.
The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. +> +> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](../../../knowledge_base/legal/disclaimer.md) and seek advice from a qualified financial professional if needed. diff --git a/collections/threefold_token/liquidity/liquidity_albedo.md b/collections/threefold_token/liquidity/liquidity_albedo.md new file mode 100644 index 0000000..7528868 --- /dev/null +++ b/collections/threefold_token/liquidity/liquidity_albedo.md @@ -0,0 +1,58 @@ +

Provide TFT (Stellar) Liquidity on Albedo Wallet

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Get Started](#get-started) + - [Provide Liquidity](#provide-liquidity) +- [Important Notice](#important-notice) +- [Disclaimer](#disclaimer) + +*** + +## Introduction + +Becoming a TFT liquidity provider on the Albedo wallet allows you to actively participate in the ecosystem by contributing to the liquidity of TFT tokens. As a liquidity provider, you play a crucial role in facilitating efficient trading and swapping of TFT tokens for other assets. By providing liquidity, you can earn rewards in the form of transaction fees and other incentives. This tutorial will guide you through the process of becoming a TFT liquidity provider on the Albedo wallet, enabling you to contribute to the vibrant TFT ecosystem while potentially earning additional benefits for your participation. + +## Prerequisites + +To become a liquidity provider on Albedo and join Stellar DEX Liquidity Pools, you'll need the following prerequisites: + +- **Albedo Wallet and TFT asset setup**: Set up an Albedo wallet and add the TFT (ThreeFold Token) asset to your Albedo wallet. + +- **XLM for Transactions**: Have a sufficient amount of XLM (Stellar Lumens) in your Albedo wallet to cover transaction fees. XLM is required to execute transactions and interact with the Stellar network. + +- **Sufficient TFT**: To become a liquidity provider on Albedo, you will need a sufficient amount of TFT (Stellar) tokens. You can acquire these tokens by either depositing them from another wallet into your Albedo wallet or swapping other tokens within your Albedo wallet to obtain TFT. Additionally, you'll need an equivalent value of another token that you want to contribute to the desired liquidity pool. Your deposit represents your share in the liquidity pool and allows you to earn a portion of the trading fees. + +## Get Started + +Anyone who fulfills the prerequisites can provide liquidity through the Albedo wallet.
In this guide, we will focus on depositing TFT and USDC to provide liquidity for the pool as an example. + +After creating and activating your account on Albedo, the next step is to activate both the TFT and USDC assets within your Albedo wallet. Ensure that you have an adequate balance of both assets for the liquidity provision. Once you have activated and balanced your assets, you are ready to add liquidity to the TFT <> USDC pool on Albedo. + +### Provide Liquidity + +You can become an LP by signing in to your [Albedo wallet homepage](https://albedo.link/) and clicking the **Liquidity** button on the navbar as shown. Please make sure you're on the **Public** network. Select the two assets you'd like to provide liquidity to, in this case TFT & USDC. + +Set the amount of TFT and USDC you would like to provide to this pool. + +![](./img/albedo_liquidity.png) + +Click the '**Deposit Liquidity to the Pool**' button to confirm the transaction. + +![](./img/albedo_confirm.png) + +Congrats, you’ve just added liquidity to the TFT <> USDC pool on Albedo Wallet. + +## Important Notice + +It's important to note that being a liquidity provider involves certain risks, such as impermanent loss, which occurs when the value of the tokens in the liquidity pool fluctuates. However, if you believe in the potential of TFT and want to actively contribute to the liquidity ecosystem on the Stellar DEX, becoming a liquidity provider can be a rewarding opportunity to earn fees and support the growth of the platform. + +## Disclaimer + +> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. +> +> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument.
The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](../../../knowledge_base/legal/disclaimer.md) and seek advice from a qualified financial professional if needed. + + diff --git a/collections/threefold_token/liquidity/liquidity_pancake.md b/collections/threefold_token/liquidity/liquidity_pancake.md new file mode 100644 index 0000000..b2d378f --- /dev/null +++ b/collections/threefold_token/liquidity/liquidity_pancake.md @@ -0,0 +1,73 @@ +

Provide TFT Liquidity on Pancake Swap

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Become a TFT - BSC LP](#become-a-tft---bsc-lp) +- [Important Notice](#important-notice) +- [Disclaimer](#disclaimer) +*** +## Introduction + +In the case of TFT on Binance Smart Chain (BSC) and PancakeSwap, becoming a liquidity provider involves providing TFT and another token (such as BNB) to the TFT-BNB liquidity pool on PancakeSwap. + +By adding liquidity to this pool, the LP helps to ensure that there is a consistent and sufficient supply of TFT(BSC) available for trading on the BSC network. This contributes to the overall liquidity of the TFT token on PancakeSwap, making it easier for users to buy and sell TFT(BSC) without experiencing significant price slippage. + +Becoming a liquidity provider for TFT(BSC) on PancakeSwap can have several benefits. Firstly, as an LP, you earn trading fees proportional to the amount of liquidity you have provided. **Anyone interested in providing Liquidity to the ThreeFold pools on PancakeSwap will be incentivized and rewarded with trading fees. 0.17% of all trading fees of all transactions go to Liquidity providers.** These fees are distributed to LPs based on their share of the total liquidity pool. + +Additionally, by participating in the liquidity provision process, you actively contribute to the growth and adoption of the TFT token. As more users trade TFT on PancakeSwap, the liquidity and trading volume increase, which can attract more traders and investors to the token. This increased activity can lead to a broader awareness of TFT and potentially drive its value and market presence. + +## Prerequisites + +Before you can become a TFT(BSC) LP on Pancake Swap, there are a few prerequisites you need to fulfill. Here's what you'll need: + +- **BSC Wallet**: To interact with the Binance Smart Chain and Pancake Swap, you'll need a BSC-compatible wallet. [MetaMask](https://metamask.io/) is a popular option that supports BSC. 
Make sure to set up and secure your wallet before proceeding. + +> [Set up a Metamask Wallet](../storing_tft/metamask.md) + +- **Connect BSC Wallet to Pancake Swap**: Visit the Pancake Swap website and connect your BSC wallet to your Pancake Swap account. + +> [Connect Wallet to Pancake Swap](https://docs.pancakeswap.finance/readme/get-started/connection-guide) + +- **Get BNB Tokens**: As the native cryptocurrency of Binance Smart Chain, BNB is required to pay for transaction fees on the network. Ensure you have some BNB tokens in your BSC wallet to cover these fees when buying TFT(BSC) on Pancake Swap. Read [this tutorial](https://fortunly.com/articles/how-to-buy-bnb/) to learn where you can buy BNB and transfer it to your BSC Wallet. + +> [Get BNB Tokens](https://docs.pancakeswap.finance/readme/get-started/bep20-guide) + +- **Sufficient TFT**: To participate as a liquidity provider on PancakeSwap, you will need to acquire **TFT(BSC)** tokens and deposit them into your BSC Wallet, along with an equivalent value of another token, into the desired liquidity pool. The LP tokens you receive represent your share of the liquidity pool and entitle you to a portion of the trading fees generated by the platform. + +## Become a TFT - BSC LP + +Anyone who fulfills the prerequisites can provide liquidity on Pancake Swap. + +Now that you're all set, you can become an LP by going to your [PancakeSwap homepage](https://pancakeswap.finance/) and clicking the **Trade > Liquidity** button on the navbar as shown. Please make sure you're on the **BNB Smart Chain** network. + +![](./img/pancake_liquidity.png) + +You will now be directed to the Liquidity page. When providing liquidity for TFT(BSC) on PancakeSwap, we recommend pairing it with BUSD (Binance USD), which already has significant liquidity. The ratio between TFT(BSC) and BUSD in the existing pool is fixed by market prices. Simply choose the amount of liquidity you want to provide for both currencies.
Consider the potential risks associated with impermanent loss. Join the TFT pool on PancakeSwap to contribute and trade TFT(BSC) tokens. + +![](./img/liquidity_busd.jpeg) + +Click “approve” for both currencies, then click 'confirm' to accept the wallet prompts and fees. + +![](./img/liquidity_approve.jpeg) + +![](./img/threefold__confirmation.jpg) + +Click “supply” and confirm the wallet prompt and fee. + +![](./img/threefold__supply.jpg) + +Congratulations! You have now exchanged the selected amounts of both currencies for LP (liquidity) tokens, and you are providing liquidity and earning a part of the transaction fees. + +![](./img/threefold__lp_tokens.jpg) + +## Important Notice + +It's important to note that being a liquidity provider involves certain risks, such as impermanent loss, which occurs when the value of the tokens in the liquidity pool fluctuates. However, if you believe in the potential of TFT and want to actively contribute to the liquidity ecosystem on PancakeSwap, becoming a liquidity provider can be a rewarding opportunity to earn fees and support the growth of the platform. + +## Disclaimer + +> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. +> +> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](../../../knowledge_base/legal/disclaimer.md) and seek advice from a qualified financial professional if needed.
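The impermanent loss mentioned in the notice above has a standard closed form for constant-product pools: IL(r) = 2√r / (1 + r) − 1, where r is the change in the relative price of the two pooled tokens since the deposit. A small sketch of that formula:

```python
import math

def impermanent_loss(price_ratio):
    """Value of an LP position relative to simply holding, minus one.

    price_ratio = (new relative price of the pooled tokens) /
                  (relative price at the time of deposit).
    Negative result = LP position is worth less than just holding.
    """
    return 2 * math.sqrt(price_ratio) / (1 + price_ratio) - 1

# If TFT doubles in price relative to the paired token:
print(f"{impermanent_loss(2.0):.2%}")  # about -5.72%
```

Note that the loss is "impermanent" because it goes to zero if the price ratio returns to its value at deposit time (`impermanent_loss(1.0)` is exactly 0); trading fees earned may offset it.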
diff --git a/collections/threefold_token/liquidity/liquidity_readme.md b/collections/threefold_token/liquidity/liquidity_readme.md new file mode 100644 index 0000000..3a1d592 --- /dev/null +++ b/collections/threefold_token/liquidity/liquidity_readme.md @@ -0,0 +1,33 @@ +

Become a TFT Liquidity Provider

+ +

Table of Contents

+ +- [Liquidity Provider on Pancake Swap](./liquidity_pancake.md) +- [Liquidity Provider on 1inch.io](./liquidity_1inch.md) +- [Liquidity Provider on Albedo](./liquidity_albedo.md) + + +*** + +## Introduction + +A **liquidity provider** (LP) is an individual or entity that contributes liquidity to a decentralized exchange or automated market maker (AMM) platform. In the context of cryptocurrency, a liquidity provider typically deposits an equal value of two different tokens into a liquidity pool. By doing so, they enable traders to easily swap between these tokens at any time, ensuring there is sufficient liquidity in the market. + +Indeed, becoming a TFT liquidity provider can vary based on the platform, wallet, or exchange, as well as the underlying blockchain. The pairing token also plays a role in determining the available liquidity pools for TFT. + +For instance, on PancakeSwap, you can become a TFT(BSC) liquidity provider by pairing it with BNB (Binance Coin) on the Binance Smart Chain (BSC). On the other hand, platforms like 1inch.io may offer different pairing options, such as TFT(BSC) with BUSD (Binance USD) or other compatible tokens. + +Similarly, the blockchain being used can affect the liquidity provision options. For example, you can become a TFT liquidity provider on the Stellar blockchain using wallets like Albedo, specifically designed for Stellar assets. However, you cannot become a TFT(Stellar) liquidity provider on PancakeSwap, but you can become a TFT(BSC) liquidity provider instead, since it currently supports assets on the Binance Smart Chain (BSC), not the Stellar blockchain. + +It is important to consider the specific platform, wallet, or exchange you are using, as well as the blockchain and available pairing tokens, to determine the appropriate method for becoming a TFT liquidity provider. Ensure you follow the guidelines and instructions provided by the respective platforms to successfully participate in the desired liquidity pool. 
+ +*** +## Important Notice + +It's important to note that being a liquidity provider involves certain risks, such as impermanent loss, which occurs when the value of the tokens in the liquidity pool fluctuates. However, if you believe in the potential of TFT and want to actively contribute to the liquidity ecosystem on PancakeSwap, becoming a liquidity provider can be a rewarding opportunity to earn fees and support the growth of the platform. +*** +## Disclaimer + +The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. + +**The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. \ No newline at end of file diff --git a/collections/threefold_token/storing_tft/albedo_store.md b/collections/threefold_token/storing_tft/albedo_store.md new file mode 100644 index 0000000..b849c0b --- /dev/null +++ b/collections/threefold_token/storing_tft/albedo_store.md @@ -0,0 +1,102 @@ +

Store TFT (Stellar) on Albedo Wallet

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Get Started](#get-started) + - [Create and Fund Albedo Wallet](#create-and-fund-albedo-wallet) + - [Add TFT asset to Albedo Wallet](#add-tft-asset-to-albedo-wallet) + - [Storing / Receiving and Sending TFT](#storing--receiving-and-sending-tft) +- [Important Notice](#important-notice) +- [Disclaimer](#disclaimer) +*** +## Introduction + +Welcome to our guide on how to store TFT tokens (Stellar) via the [Albedo wallet](https://albedo.link/)! + +Albedo is a secure and trustworthy keystore web app and browser extension designed for Stellar token accounts. With Albedo, you can safely manage and transact with your Stellar account without having to share your secret key with any third parties. +Developed by the creators of stellar.expert explorer, Albedo offers a range of features, including storage, swaps, and participation in liquidity pools. It is an open-source solution that can be accessed directly from your browser or installed as a browser add-on, currently available for Chrome and Firefox. + +In this tutorial, we will walk you through the process of storing Stellar TFT tokens using the Albedo wallet. +*** +## Prerequisites + +- **XLM**: When storing TFT tokens using the Albedo wallet, a certain amount of XLM funding is required to facilitate the sending and receiving of assets on the Stellar network. + +There are multiple ways to acquire XLM and send it to your wallet. One option is to utilize XLM-supported exchanges, which provide a convenient platform for purchasing XLM. Click [here](https://www.coinlore.com/coin/stellar/exchanges) to access a comprehensive list of exchanges that support XLM. + +As an example, we have created a tutorial specifically focusing on how to buy XLM on Coinbase, one of the popular cryptocurrency exchanges. This tutorial provides step-by-step instructions on the process of purchasing XLM on **Coinbase Exchange**. 
You can find the tutorial [**here**](../buy_sell_tft/coinbase_xlm.md). +*** +## Get Started + +### Create and Fund Albedo Wallet + +Go to [https://albedo.link/signup](https://albedo.link/signup) to start your sign up process. + +![](img/albedo_signup.png) + +Ensure you save your 24-word recovery passphrase. This is very important! +Click “**I saved recovery phrase**” (Again, it is critical that you save this recovery phrase somewhere, and do so securely). This key is the only way you can recover your account if you ever lose access to it. + +![](img/albedo_secret.png) + +Congrats! You just created an Albedo wallet. To get started swapping tokens into TFT, we first need to fund your wallet with XLM by clicking '**fund it**' on the homepage. + +![](img/albedo_fund.png) + +Send some XLM from a third-party wallet or Stellar exchange of your choice to your Albedo XLM wallet. + +![](img/albedo_receive.png) + +Once you have sent the XLM, you will see the balance added to the homepage. Now we are ready to do some token transactions! + +![](img/albedo_home.png) + +### Add TFT asset to Albedo Wallet + +To store TFT in our Albedo Wallet, we need to add the TFT asset to our account. This is done by creating a trustline for TFT. Creating a trustline means granting permission for your Albedo wallet to recognize and interact with TFT tokens on the Stellar network. It allows you to view your TFT balance, send and receive TFT tokens, and engage in swapping or trading activities involving TFT within the Albedo wallet. + +To add a trustline, go to the “**Balance**” section and click the “**Add trustline**” button. + +![](img/albedo_activate.png) + +A popup window will appear presenting Albedo's asset list. In the search field that appears, type in '**TFT**' as the asset you want to add. + +**It is important to ensure that you also see the name "threefold.io" next to the logo**, as this verifies that you are selecting the genuine TFT asset associated with ThreeFold.
**Beware of imposters or fraudulent assets that may attempt to mimic TFT.** ThreeFold cannot assume responsibility for any errors or mistakes made during the trustline creation process. If you have any uncertainties or doubts, it is always recommended to seek assistance from official support channels or trusted sources to ensure the accuracy of the trustline configuration. + +![](img/albedo_select_asset.png) + +Confirm the selected asset by pressing “**Add trustline**”. + +![](img/albedo_trustline.png) + +Congrats! You have now successfully added TFT as an asset to your Albedo wallet. + +### Storing / Receiving and Sending TFT + +You can now receive TFT from another wallet: copy your public Albedo wallet address and share it with the sender. + +You can also transfer TFT to another Stellar wallet by clicking the '**Transfer**' icon on your wallet navbar, specifying the amount of TFT, the recipient's wallet address, and memo (if any), then clicking the 'Transfer' button to finish the transaction. + +![](img/albedo_send_receive.png) +*** +## Important Notice + +If you are looking for ways to get / purchase TFT (Stellar) on Albedo, you will find the corresponding information [here](../buy_sell_tft/albedo_buy.md). +*** +## Disclaimer + +The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. + +**The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument.
We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. + + + + + + + + + + diff --git a/collections/threefold_token/storing_tft/btc_alpha_deposit.md b/collections/threefold_token/storing_tft/btc_alpha_deposit.md new file mode 100644 index 0000000..bd65ebc --- /dev/null +++ b/collections/threefold_token/storing_tft/btc_alpha_deposit.md @@ -0,0 +1,131 @@ +

Store TFT on BTC-Alpha Exchange

+ +

Table of Contents

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Get Started](#get-started) + - [Sign up for an Account](#sign-up-for-an-account) + - [Secure Your Account](#secure-your-account) +- [Deposit TFT(Stellar) to Account](#deposit-tftstellar-to-account) + - [Important Notice](#important-notice) +- [Disclaimer](#disclaimer) +*** +## Introduction + +Welcome to our tutorial on how to store TFT (Stellar) on [**BTC-Alpha**](https://btc-alpha.com/en)! + + +BTC-Alpha is a cryptocurrency exchange platform that provides a secure and user-friendly environment for trading various digital assets, including TFT (Stellar). With its robust features and intuitive interface, BTC-Alpha offers a convenient way to buy and sell cryptocurrencies. + +In this guide, we will walk you through the process of storing TFT on the BTC-Alpha exchange by depositing TFT from your external wallet to your BTC-Alpha account. + +> If you are looking for ways to get / purchase TFT (Stellar) on BTC-Alpha by trading, you will find the corresponding information [here](../buy_sell_tft/btc_alpha.md). +*** +## Prerequisites + +- **ID for verification:** To get TFT (Stellar) on the BTC-Alpha exchange, you will need to have your identification (ID) ready for the verification process. This is a standard requirement to ensure compliance with regulatory guidelines and maintain a secure trading environment. Your ID may include documents such as a valid passport or government-issued identification card. + +- **BTC-Alpha Account**: You must have an active account on the BTC-Alpha exchange. If you don't have one, you can sign up on their website and complete the registration process. Make sure to secure your account with strong passwords and two-factor authentication for enhanced security. +*** +## Get Started + +### Sign up for an Account + +**Sign up for a BTC-Alpha Account:** Visit the BTC-Alpha website [https://btc-alpha.com/](https://btc-alpha.com/) and click on the "**Sign up**" button.
Fill in the required information, including your email address, a secure password, and any additional details requested for the account creation process. +![](img/alpha_signup.png) + +**Log in to your account**: You will then receive a notification that allows you to log in to your new account. + +![](img/alpha_login.png) + +Now, you can proceed to log in to your account and start exploring the platform. Follow these steps to log in: + +Visit the BTC-Alpha website (https://btc-alpha.com/) in your web browser. Click on the "Login" button located at the top-right corner of the website. + +Enter the email address and password you used during the registration process in the respective fields. + +![](img/alpha_email.png) + +**Verify Your Account:** After completing the registration, you may need to verify your account by providing some personal identification information. This is a standard procedure for most cryptocurrency exchanges to ensure compliance with regulations and security measures. Follow the instructions provided by BTC-Alpha to complete the verification process. + +![](img/alpha_verify.png) + +Congratulations, you have completed the registration process and are now successfully logged in to your BTC-Alpha account! + +![](img/alpha_home.png) + +### Secure Your Account + +**Secure Your Account**: Set up two-factor authentication (2FA) to add an extra layer of security to your BTC-Alpha account. This typically involves linking your account to a 2FA app, such as Google Authenticator or Authy, and enabling it for login and transaction verification. + +To enable two-factor authentication (2FA) on your BTC-Alpha account, follow these steps: + +Install either the [2FA Alp Authenticator](https://play.google.com/store/apps/details?id=com.alp.two_fa) or [Google Authenticator](https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2&hl=en&gl=US) app on your mobile phone. 
You can find both apps on the Google Play Store or the Apple App Store. + +Once you have installed the app, log in to your BTC-Alpha account. + +Click on your name or profile icon located at the top-right corner of the website to access the account settings, then look for the option "**Enable Two-factor Authentication**" and click on it. + +![](img/alpha_auth.png) + +Follow the instructions provided to link your BTC-Alpha account with your authenticator app. This usually involves scanning a QR code or manually entering a code provided by the app. + +![](img/alpha_2fa.png) + +Once the link is established, the app will start generating unique codes that you will need to enter during the login process for additional security. + +By enabling two-factor authentication, you add an extra layer of security to your BTC-Alpha account, helping to protect your funds and personal information. Make sure to keep your mobile device with the authenticator app secure, as it will be required each time you log in to your BTC-Alpha account. +*** +## Deposit TFT(Stellar) to Account + +To begin storing **TFT** on the BTC-Alpha exchange, you will need to deposit some TFT funds into your account. + +To deposit TFT into your BTC-Alpha account, follow these steps: + +Log in to your BTC-Alpha account and click on the "Wallets" tab located in the top menu. + +![](img/alpha_wallet.png) + +Search for "**TFT**" in the list of available cryptocurrencies. + +![](./img/alpha_tft.png) + +Make sure you choose **ThreeFold Token (Stellar)** as your deposit method. You will then be provided with a TFT deposit address. + +To deposit, copy this address or scan its QR code, and copy the memo as well. + +Use your personal external TFT wallet or the wallet of another exchange to initiate a withdrawal to the provided deposit address. + +Ensure that you specify the correct deposit address and memo. 
Double-check them before confirming the transaction. + +**IMPORTANT**: It is crucial to always include the correct memo when sending TFT (Stellar) tokens to ensure the transaction is processed accurately. Failing to include the memo or entering an incorrect memo can lead to the loss of TFT tokens. + +Wait for the transaction to be confirmed on the Stellar network. This may take some time depending on network congestion. +Once the transaction is confirmed, the TFT will be credited to your BTC-Alpha account balance. + +You can check your account balance by clicking on the "**Wallets**" tab or by navigating to the "**Balances**" section. + +Please note that the exact steps for depositing TFT may vary depending on the specific wallet or exchange you are using to send the funds. It is essential to double-check the deposit address and follow the instructions provided by your wallet or exchange to ensure a successful deposit. + +### Important Notice + +While it is possible to keep your TFT in your exchange wallet on BTC-Alpha, it is generally not recommended to store your funds there for an extended period. Public exchanges are more susceptible to security breaches and hacking attempts compared to personal wallets. + +To ensure the safety and security of your TFT holdings, it is advisable to transfer them to a dedicated TFT wallet. There are several options available for creating a TFT wallet, each with its own unique features and benefits. + +To explore different TFT wallet options and choose the one that best suits your needs, you can refer to our comprehensive [**TFT Wallet guide**](../storing_tft/tf_connect_app.md#create-a-wallet) that provides a list of recommended TFT wallets. This guide will help you understand the features, security measures, and compatibility of each wallet, enabling you to make an informed decision on where to store your TFT securely. 
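As a technical aside to the deposit steps above: Stellar addresses (the "G..." strings) carry a built-in checksum, which is why a typo almost always produces an invalid address rather than someone else's account. The sketch below is purely illustrative and independent of BTC-Alpha; it assumes the StrKey layout (version byte, 32-byte key, CRC16-XMODEM checksum, base32-encoded) and only checks that an address is well-formed. It cannot confirm the address belongs to the right account, and a memo has no checksum at all, so always copy both directly from the exchange page.

```python
import base64
import struct

def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM, the checksum used by Stellar's StrKey encoding."""
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

def encode_stellar_pubkey(raw32: bytes) -> str:
    """Encode a 32-byte ed25519 public key as a 'G...' Stellar address."""
    payload = bytes([6 << 3]) + raw32              # version byte 0x30 -> 'G'
    checksum = struct.pack("<H", crc16_xmodem(payload))
    return base64.b32encode(payload + checksum).decode("ascii")

def is_valid_stellar_address(address: str) -> bool:
    """Structural check: 56 chars, 'G' prefix, valid base32, checksum matches."""
    if len(address) != 56 or not address.startswith("G"):
        return False
    try:
        decoded = base64.b32decode(address)
    except Exception:
        return False
    payload, checksum = decoded[:-2], decoded[-2:]
    return struct.pack("<H", crc16_xmodem(payload)) == checksum

# Round-trip demo with a dummy all-zero key -- not a real account.
addr = encode_stellar_pubkey(bytes(32))
print(addr, is_valid_stellar_address(addr))
```

Changing even one character of a valid address makes the checksum fail, which is the property that protects against copy-paste typos.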
+ +Remember, maintaining control over your private keys and taking precautions to protect your wallet information are essential for safeguarding your TFT investments. + +If you are looking for ways to get or purchase TFT (Stellar) on BTC-Alpha by trading, you will find the relevant information [here](../buy_sell_tft/btc_alpha.md). +*** +## Disclaimer + +The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. + +**The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. + + diff --git a/collections/threefold_token/storing_tft/hardware_wallet.md b/collections/threefold_token/storing_tft/hardware_wallet.md new file mode 100644 index 0000000..a00c9f9 --- /dev/null +++ b/collections/threefold_token/storing_tft/hardware_wallet.md @@ -0,0 +1,75 @@ + +

# Hardware Wallet

+ +

## Table of Contents

+ +- [Introduction](#introduction) +- [Setting up a TFT Trustline on Stellar Blockchain](#setting-up-a-tft-trustline-on-stellar-blockchain) +- [Conclusion](#conclusion) +- [Disclaimer](#disclaimer) + +*** + +## Introduction + +You can store TFT on a hardware wallet for optimal security. + +If you are a farmer, instead of using the ThreeFold Connect app to receive your farming rewards (TFT), you can also have your farming rewards sent to a hardware wallet. In this case, you will need to enable a TFT trustline to receive TFT. + +You can store TFT on a hardware wallet on the Stellar chain, BSC, or the Ethereum chain. For this guide, we show how to store TFT on a hardware wallet on the Stellar chain. + +To store TFT on Stellar, you can use any hardware wallet that supports the Stellar Blockchain, such as a [Ledger](https://www.ledger.com/). + +## Setting up a TFT Trustline on Stellar Blockchain + +Setting up a trustline is simple. + +We will show an example with a Ledger hardware wallet. The process is similar for other hardware wallets. Just make sure that your hardware wallet is compatible with the Stellar blockchain, as the TFT from farming rewards will be sent on the Stellar Blockchain. + +* First, [download Ledger Live](https://www.ledger.com/ledger-live/download) and download the Stellar application in **Manager**. +* Go to an official Stellar website such as [Stellarterm](https://stellarterm.com/) or [StellarX](https://www.stellarx.com/). For this example, we will be using Stellarterm.com. + +* Unlock your Ledger by entering your PIN code. + +* Select the Stellar App. + * In our case, we do not need to create an account on Stellarterm, since we are using a hardware wallet. + +* On Stellarterm.com, click on the button **LOGIN**. + +![farming_wallet_7](./img/farming_wallet_7.png) + + +* At the bottom of the page, select the option **Ledger** or another option if you are using a different hardware wallet. 
+ +![farming_wallet_8](./img/farming_wallet_8.png) + +* Click **Connect with Ledger**. + +![farming_wallet_9](./img/farming_wallet_9.png) + + +* Read and accept the Terms of Use. + +![farming_wallet_10](./img/farming_wallet_10.png) + + +* On the main page, click on **Assets**. + +![farming_wallet_11](./img/farming_wallet_11.png) + +* Scroll down and write **Threefold.io** in the text box. Select **TFT Threefold.io**. Click **Accept**. Then follow the steps presented on your hardware wallet to confirm the trustline. + +![farming_wallet_12](./img/farming_wallet_12.png) +![farming_wallet_13](./img/farming_wallet_13.png) + +## Conclusion + +You now have a TFT trustline on the Stellar Blockchain and can receive TFT on this wallet. This is a perfect setup to farm TFT safely. + +When it comes to choosing where to send your farming rewards, you simply need to enter the address associated with your hardware wallet. + +## Disclaimer + +> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. +> +> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](../../../knowledge_base/legal/disclaimer.md) and seek advice from a qualified financial professional if needed. 
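For readers who want to double-check the result programmatically: a trustline is simply an entry in your account's on-chain balances list, which Stellar's public Horizon API exposes at `https://horizon.stellar.org/accounts/{account_id}`. The sketch below is an illustration, not part of the Stellarterm flow; the issuer string in the demo is a placeholder, so always take the real TFT issuer address from an official ThreeFold source before relying on such a check.

```python
import json
from urllib.request import urlopen

def has_trustline(balances: list, code: str, issuer: str) -> bool:
    """Return True if the balances list contains a trustline for code:issuer."""
    return any(
        b.get("asset_code") == code and b.get("asset_issuer") == issuer
        for b in balances
    )

def fetch_balances(account_id: str) -> list:
    """Fetch an account's balances from the public Horizon API (network call)."""
    url = f"https://horizon.stellar.org/accounts/{account_id}"
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)["balances"]

# Offline demo with a hand-written balances payload; the issuer below is a
# placeholder, NOT the real TFT issuer.
sample = [
    {"balance": "12.5000000", "asset_type": "native"},
    {"balance": "100.0000000", "asset_type": "credit_alphanum4",
     "asset_code": "TFT", "asset_issuer": "G...ISSUER_PLACEHOLDER"},
]
print(has_trustline(sample, "TFT", "G...ISSUER_PLACEHOLDER"))
```

In practice you would call `fetch_balances(your_address)` and pass the result to `has_trustline`; if it returns True, the wallet is ready to receive TFT.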
diff --git a/collections/threefold_token/storing_tft/img/INTER_TFT.png b/collections/threefold_token/storing_tft/img/INTER_TFT.png new file mode 100644 index 0000000..6d2ff0a Binary files /dev/null and b/collections/threefold_token/storing_tft/img/INTER_TFT.png differ diff --git a/collections/threefold_token/storing_tft/img/albedo_activate.png b/collections/threefold_token/storing_tft/img/albedo_activate.png new file mode 100644 index 0000000..4fe338e Binary files /dev/null and b/collections/threefold_token/storing_tft/img/albedo_activate.png differ diff --git a/collections/threefold_token/storing_tft/img/albedo_balance.jpeg b/collections/threefold_token/storing_tft/img/albedo_balance.jpeg new file mode 100644 index 0000000..ca1d07e Binary files /dev/null and b/collections/threefold_token/storing_tft/img/albedo_balance.jpeg differ diff --git a/collections/threefold_token/storing_tft/img/albedo_fund.png b/collections/threefold_token/storing_tft/img/albedo_fund.png new file mode 100644 index 0000000..a0bcc0c Binary files /dev/null and b/collections/threefold_token/storing_tft/img/albedo_fund.png differ diff --git a/collections/threefold_token/storing_tft/img/albedo_home.png b/collections/threefold_token/storing_tft/img/albedo_home.png new file mode 100644 index 0000000..52cef5e Binary files /dev/null and b/collections/threefold_token/storing_tft/img/albedo_home.png differ diff --git a/collections/threefold_token/storing_tft/img/albedo_receive.png b/collections/threefold_token/storing_tft/img/albedo_receive.png new file mode 100644 index 0000000..bf1f2a0 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/albedo_receive.png differ diff --git a/collections/threefold_token/storing_tft/img/albedo_secret.png b/collections/threefold_token/storing_tft/img/albedo_secret.png new file mode 100644 index 0000000..7e76183 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/albedo_secret.png differ diff --git 
a/collections/threefold_token/storing_tft/img/albedo_select_asset.png b/collections/threefold_token/storing_tft/img/albedo_select_asset.png new file mode 100644 index 0000000..f06f2e3 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/albedo_select_asset.png differ diff --git a/collections/threefold_token/storing_tft/img/albedo_send_receive.png b/collections/threefold_token/storing_tft/img/albedo_send_receive.png new file mode 100644 index 0000000..3f3b773 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/albedo_send_receive.png differ diff --git a/collections/threefold_token/storing_tft/img/albedo_signup.png b/collections/threefold_token/storing_tft/img/albedo_signup.png new file mode 100644 index 0000000..f70733b Binary files /dev/null and b/collections/threefold_token/storing_tft/img/albedo_signup.png differ diff --git a/collections/threefold_token/storing_tft/img/albedo_swap.png b/collections/threefold_token/storing_tft/img/albedo_swap.png new file mode 100644 index 0000000..f7e0943 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/albedo_swap.png differ diff --git a/collections/threefold_token/storing_tft/img/albedo_swap1.png b/collections/threefold_token/storing_tft/img/albedo_swap1.png new file mode 100644 index 0000000..d215a22 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/albedo_swap1.png differ diff --git a/collections/threefold_token/storing_tft/img/albedo_trustline.png b/collections/threefold_token/storing_tft/img/albedo_trustline.png new file mode 100644 index 0000000..8e43505 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/albedo_trustline.png differ diff --git a/collections/threefold_token/storing_tft/img/alpha_2fa.png b/collections/threefold_token/storing_tft/img/alpha_2fa.png new file mode 100644 index 0000000..2714d3e Binary files /dev/null and b/collections/threefold_token/storing_tft/img/alpha_2fa.png differ diff --git 
a/collections/threefold_token/storing_tft/img/alpha_auth.png b/collections/threefold_token/storing_tft/img/alpha_auth.png new file mode 100644 index 0000000..df46d5a Binary files /dev/null and b/collections/threefold_token/storing_tft/img/alpha_auth.png differ diff --git a/collections/threefold_token/storing_tft/img/alpha_deposit.png b/collections/threefold_token/storing_tft/img/alpha_deposit.png new file mode 100644 index 0000000..8039284 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/alpha_deposit.png differ diff --git a/collections/threefold_token/storing_tft/img/alpha_email.png b/collections/threefold_token/storing_tft/img/alpha_email.png new file mode 100644 index 0000000..84fceea Binary files /dev/null and b/collections/threefold_token/storing_tft/img/alpha_email.png differ diff --git a/collections/threefold_token/storing_tft/img/alpha_home.png b/collections/threefold_token/storing_tft/img/alpha_home.png new file mode 100644 index 0000000..faacb0b Binary files /dev/null and b/collections/threefold_token/storing_tft/img/alpha_home.png differ diff --git a/collections/threefold_token/storing_tft/img/alpha_limit.png b/collections/threefold_token/storing_tft/img/alpha_limit.png new file mode 100644 index 0000000..f0b361a Binary files /dev/null and b/collections/threefold_token/storing_tft/img/alpha_limit.png differ diff --git a/collections/threefold_token/storing_tft/img/alpha_login.png b/collections/threefold_token/storing_tft/img/alpha_login.png new file mode 100644 index 0000000..3c93fdb Binary files /dev/null and b/collections/threefold_token/storing_tft/img/alpha_login.png differ diff --git a/collections/threefold_token/storing_tft/img/alpha_search.png b/collections/threefold_token/storing_tft/img/alpha_search.png new file mode 100644 index 0000000..3880273 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/alpha_search.png differ diff --git a/collections/threefold_token/storing_tft/img/alpha_signup.png 
b/collections/threefold_token/storing_tft/img/alpha_signup.png new file mode 100644 index 0000000..5840e77 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/alpha_signup.png differ diff --git a/collections/threefold_token/storing_tft/img/alpha_tft.png b/collections/threefold_token/storing_tft/img/alpha_tft.png new file mode 100644 index 0000000..227bff3 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/alpha_tft.png differ diff --git a/collections/threefold_token/storing_tft/img/alpha_trade.png b/collections/threefold_token/storing_tft/img/alpha_trade.png new file mode 100644 index 0000000..ef7c9cb Binary files /dev/null and b/collections/threefold_token/storing_tft/img/alpha_trade.png differ diff --git a/collections/threefold_token/storing_tft/img/alpha_verify.png b/collections/threefold_token/storing_tft/img/alpha_verify.png new file mode 100644 index 0000000..8065339 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/alpha_verify.png differ diff --git a/collections/threefold_token/storing_tft/img/alpha_wallet.png b/collections/threefold_token/storing_tft/img/alpha_wallet.png new file mode 100644 index 0000000..01b2916 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/alpha_wallet.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_1.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_1.png new file mode 100644 index 0000000..6e1f766 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_1.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_10.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_10.png new file mode 100644 index 0000000..d990568 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_10.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_11.png 
b/collections/threefold_token/storing_tft/img/farming_tf_wallet_11.png new file mode 100644 index 0000000..1a0d5b9 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_11.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_12.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_12.png new file mode 100644 index 0000000..9ffb19d Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_12.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_13.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_13.png new file mode 100644 index 0000000..1b96f81 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_13.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_14.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_14.png new file mode 100644 index 0000000..c0b301f Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_14.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_15.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_15.png new file mode 100644 index 0000000..120be6a Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_15.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_16.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_16.png new file mode 100644 index 0000000..781bf30 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_16.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_17.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_17.png new file mode 100644 index 0000000..51130a3 Binary files /dev/null and 
b/collections/threefold_token/storing_tft/img/farming_tf_wallet_17.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_18.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_18.png new file mode 100644 index 0000000..8fe5d2f Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_18.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_19.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_19.png new file mode 100644 index 0000000..6c6796c Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_19.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_2.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_2.png new file mode 100644 index 0000000..564b312 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_2.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_20.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_20.png new file mode 100644 index 0000000..da5c0ca Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_20.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_21.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_21.png new file mode 100644 index 0000000..27abf0c Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_21.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_22.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_22.png new file mode 100644 index 0000000..adde6b0 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_22.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_23.png 
b/collections/threefold_token/storing_tft/img/farming_tf_wallet_23.png new file mode 100644 index 0000000..aabf0a0 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_23.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_24.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_24.png new file mode 100644 index 0000000..01ea640 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_24.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_25.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_25.png new file mode 100644 index 0000000..72566b2 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_25.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_26.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_26.png new file mode 100644 index 0000000..e9931d3 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_26.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_27.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_27.png new file mode 100644 index 0000000..b20e7c1 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_27.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_28.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_28.png new file mode 100644 index 0000000..ddace3b Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_28.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_29.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_29.png new file mode 100644 index 0000000..52a66cd Binary files /dev/null and 
b/collections/threefold_token/storing_tft/img/farming_tf_wallet_29.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_3.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_3.png new file mode 100644 index 0000000..0a45687 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_3.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_30.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_30.png new file mode 100644 index 0000000..2e30308 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_30.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_31.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_31.png new file mode 100644 index 0000000..ff1b9ac Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_31.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_32.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_32.png new file mode 100644 index 0000000..ab617bf Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_32.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_33.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_33.png new file mode 100644 index 0000000..3c6d591 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_33.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_34.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_34.png new file mode 100644 index 0000000..944688f Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_34.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_35.png 
b/collections/threefold_token/storing_tft/img/farming_tf_wallet_35.png new file mode 100644 index 0000000..845bd35 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_35.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_36.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_36.png new file mode 100644 index 0000000..b294bcb Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_36.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_37.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_37.png new file mode 100644 index 0000000..c61e06e Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_37.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_38.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_38.png new file mode 100644 index 0000000..732f98a Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_38.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_39.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_39.png new file mode 100644 index 0000000..0c78bc0 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_39.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_4.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_4.png new file mode 100644 index 0000000..11ef6d7 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_4.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_40.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_40.png new file mode 100644 index 0000000..6651627 Binary files /dev/null and 
b/collections/threefold_token/storing_tft/img/farming_tf_wallet_40.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_41.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_41.png new file mode 100644 index 0000000..839e929 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_41.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_42.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_42.png new file mode 100644 index 0000000..5f84480 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_42.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_43.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_43.png new file mode 100644 index 0000000..eb3a017 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_43.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_44.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_44.png new file mode 100644 index 0000000..4a10071 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_44.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_45.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_45.png new file mode 100644 index 0000000..0d6e86d Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_45.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_46.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_46.png new file mode 100644 index 0000000..2039941 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_46.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_47.png 
b/collections/threefold_token/storing_tft/img/farming_tf_wallet_47.png new file mode 100644 index 0000000..71ea72e Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_47.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_48.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_48.png new file mode 100644 index 0000000..fd357c7 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_48.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_49.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_49.png new file mode 100644 index 0000000..0f82c97 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_49.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_5.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_5.png new file mode 100644 index 0000000..2dd845d Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_5.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_50.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_50.png new file mode 100644 index 0000000..0ecbc06 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_50.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_51.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_51.png new file mode 100644 index 0000000..86ecf04 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_51.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_52.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_52.png new file mode 100644 index 0000000..9d2b5a4 Binary files /dev/null and 
b/collections/threefold_token/storing_tft/img/farming_tf_wallet_52.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_53.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_53.png new file mode 100644 index 0000000..141c24d Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_53.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_54.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_54.png new file mode 100644 index 0000000..2c97d1b Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_54.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_55.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_55.png new file mode 100644 index 0000000..2d7fe0e Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_55.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_56.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_56.png new file mode 100644 index 0000000..e090e08 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_56.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_57.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_57.png new file mode 100644 index 0000000..b9289a4 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_57.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_58.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_58.png new file mode 100644 index 0000000..a78d6d8 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_58.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_59.png 
b/collections/threefold_token/storing_tft/img/farming_tf_wallet_59.png new file mode 100644 index 0000000..4f6e7f2 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_59.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_6.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_6.png new file mode 100644 index 0000000..5766b18 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_6.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_7.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_7.png new file mode 100644 index 0000000..c9256be Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_7.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_8.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_8.png new file mode 100644 index 0000000..0901cf6 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_8.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_tf_wallet_9.png b/collections/threefold_token/storing_tft/img/farming_tf_wallet_9.png new file mode 100644 index 0000000..a913e4b Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_tf_wallet_9.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_wallet_10.png b/collections/threefold_token/storing_tft/img/farming_wallet_10.png new file mode 100644 index 0000000..7ae95fc Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_wallet_10.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_wallet_11.png b/collections/threefold_token/storing_tft/img/farming_wallet_11.png new file mode 100644 index 0000000..6ce3313 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_wallet_11.png 
differ diff --git a/collections/threefold_token/storing_tft/img/farming_wallet_12.png b/collections/threefold_token/storing_tft/img/farming_wallet_12.png new file mode 100644 index 0000000..b26559e Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_wallet_12.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_wallet_13.png b/collections/threefold_token/storing_tft/img/farming_wallet_13.png new file mode 100644 index 0000000..9486799 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_wallet_13.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_wallet_5.png b/collections/threefold_token/storing_tft/img/farming_wallet_5.png new file mode 100644 index 0000000..9a83dd3 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_wallet_5.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_wallet_7.png b/collections/threefold_token/storing_tft/img/farming_wallet_7.png new file mode 100644 index 0000000..ce1f1ba Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_wallet_7.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_wallet_8.png b/collections/threefold_token/storing_tft/img/farming_wallet_8.png new file mode 100644 index 0000000..b8039d9 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_wallet_8.png differ diff --git a/collections/threefold_token/storing_tft/img/farming_wallet_9.png b/collections/threefold_token/storing_tft/img/farming_wallet_9.png new file mode 100644 index 0000000..882bcf7 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/farming_wallet_9.png differ diff --git a/collections/threefold_token/storing_tft/img/inter_deposit.png b/collections/threefold_token/storing_tft/img/inter_deposit.png new file mode 100644 index 0000000..e59bb17 Binary files /dev/null and 
b/collections/threefold_token/storing_tft/img/inter_deposit.png differ diff --git a/collections/threefold_token/storing_tft/img/inter_funded.png b/collections/threefold_token/storing_tft/img/inter_funded.png new file mode 100644 index 0000000..3fe4ef6 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/inter_funded.png differ diff --git a/collections/threefold_token/storing_tft/img/inter_home.png b/collections/threefold_token/storing_tft/img/inter_home.png new file mode 100644 index 0000000..382c8c3 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/inter_home.png differ diff --git a/collections/threefold_token/storing_tft/img/inter_home1.png b/collections/threefold_token/storing_tft/img/inter_home1.png new file mode 100644 index 0000000..fb71577 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/inter_home1.png differ diff --git a/collections/threefold_token/storing_tft/img/inter_key.png b/collections/threefold_token/storing_tft/img/inter_key.png new file mode 100644 index 0000000..42ec8ea Binary files /dev/null and b/collections/threefold_token/storing_tft/img/inter_key.png differ diff --git a/collections/threefold_token/storing_tft/img/inter_login.png b/collections/threefold_token/storing_tft/img/inter_login.png new file mode 100644 index 0000000..108bced Binary files /dev/null and b/collections/threefold_token/storing_tft/img/inter_login.png differ diff --git a/collections/threefold_token/storing_tft/img/inter_ok.png b/collections/threefold_token/storing_tft/img/inter_ok.png new file mode 100644 index 0000000..8c3cc6d Binary files /dev/null and b/collections/threefold_token/storing_tft/img/inter_ok.png differ diff --git a/collections/threefold_token/storing_tft/img/inter_pass.png b/collections/threefold_token/storing_tft/img/inter_pass.png new file mode 100644 index 0000000..d3a3fce Binary files /dev/null and b/collections/threefold_token/storing_tft/img/inter_pass.png differ diff --git 
a/collections/threefold_token/storing_tft/img/inter_popup.png b/collections/threefold_token/storing_tft/img/inter_popup.png new file mode 100644 index 0000000..5954232 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/inter_popup.png differ diff --git a/collections/threefold_token/storing_tft/img/inter_success.png b/collections/threefold_token/storing_tft/img/inter_success.png new file mode 100644 index 0000000..dc3ad7a Binary files /dev/null and b/collections/threefold_token/storing_tft/img/inter_success.png differ diff --git a/collections/threefold_token/storing_tft/img/lobstr_asset.jpeg b/collections/threefold_token/storing_tft/img/lobstr_asset.jpeg new file mode 100644 index 0000000..6ffbf2b Binary files /dev/null and b/collections/threefold_token/storing_tft/img/lobstr_asset.jpeg differ diff --git a/collections/threefold_token/storing_tft/img/lobstr_assets.png b/collections/threefold_token/storing_tft/img/lobstr_assets.png new file mode 100644 index 0000000..f7a6715 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/lobstr_assets.png differ diff --git a/collections/threefold_token/storing_tft/img/lobstr_download.png b/collections/threefold_token/storing_tft/img/lobstr_download.png new file mode 100644 index 0000000..3dd7f3c Binary files /dev/null and b/collections/threefold_token/storing_tft/img/lobstr_download.png differ diff --git a/collections/threefold_token/storing_tft/img/lobstr_fed.png b/collections/threefold_token/storing_tft/img/lobstr_fed.png new file mode 100644 index 0000000..74b1d5d Binary files /dev/null and b/collections/threefold_token/storing_tft/img/lobstr_fed.png differ diff --git a/collections/threefold_token/storing_tft/img/lobstr_key.png b/collections/threefold_token/storing_tft/img/lobstr_key.png new file mode 100644 index 0000000..af2322e Binary files /dev/null and b/collections/threefold_token/storing_tft/img/lobstr_key.png differ diff --git 
a/collections/threefold_token/storing_tft/img/lobstr_new.png b/collections/threefold_token/storing_tft/img/lobstr_new.png new file mode 100644 index 0000000..7994dad Binary files /dev/null and b/collections/threefold_token/storing_tft/img/lobstr_new.png differ diff --git a/collections/threefold_token/storing_tft/img/lobstr_pass.png b/collections/threefold_token/storing_tft/img/lobstr_pass.png new file mode 100644 index 0000000..b72a626 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/lobstr_pass.png differ diff --git a/collections/threefold_token/storing_tft/img/lobstr_verify.png b/collections/threefold_token/storing_tft/img/lobstr_verify.png new file mode 100644 index 0000000..846de25 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/lobstr_verify.png differ diff --git a/collections/threefold_token/storing_tft/img/lobstr_web.png b/collections/threefold_token/storing_tft/img/lobstr_web.png new file mode 100644 index 0000000..318c0cc Binary files /dev/null and b/collections/threefold_token/storing_tft/img/lobstr_web.png differ diff --git a/collections/threefold_token/storing_tft/img/metamask_add.png b/collections/threefold_token/storing_tft/img/metamask_add.png new file mode 100644 index 0000000..bae4fd7 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/metamask_add.png differ diff --git a/collections/threefold_token/storing_tft/img/metamask_choose.png b/collections/threefold_token/storing_tft/img/metamask_choose.png new file mode 100644 index 0000000..d86ac35 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/metamask_choose.png differ diff --git a/collections/threefold_token/storing_tft/img/metamask_confirm.png b/collections/threefold_token/storing_tft/img/metamask_confirm.png new file mode 100644 index 0000000..dd75dea Binary files /dev/null and b/collections/threefold_token/storing_tft/img/metamask_confirm.png differ diff --git 
a/collections/threefold_token/storing_tft/img/metamask_custom.png b/collections/threefold_token/storing_tft/img/metamask_custom.png new file mode 100644 index 0000000..8255593 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/metamask_custom.png differ diff --git a/collections/threefold_token/storing_tft/img/metamask_import.png b/collections/threefold_token/storing_tft/img/metamask_import.png new file mode 100644 index 0000000..5c02164 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/metamask_import.png differ diff --git a/collections/threefold_token/storing_tft/img/metamask_installed.png b/collections/threefold_token/storing_tft/img/metamask_installed.png new file mode 100644 index 0000000..8fc1c32 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/metamask_installed.png differ diff --git a/collections/threefold_token/storing_tft/img/metamask_new.png b/collections/threefold_token/storing_tft/img/metamask_new.png new file mode 100644 index 0000000..4efc236 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/metamask_new.png differ diff --git a/collections/threefold_token/storing_tft/img/metamask_pass.png b/collections/threefold_token/storing_tft/img/metamask_pass.png new file mode 100644 index 0000000..069a091 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/metamask_pass.png differ diff --git a/collections/threefold_token/storing_tft/img/metamask_privatekey.png b/collections/threefold_token/storing_tft/img/metamask_privatekey.png new file mode 100644 index 0000000..d584de2 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/metamask_privatekey.png differ diff --git a/collections/threefold_token/storing_tft/img/metamask_success.png b/collections/threefold_token/storing_tft/img/metamask_success.png new file mode 100644 index 0000000..dd75dea Binary files /dev/null and b/collections/threefold_token/storing_tft/img/metamask_success.png 
differ diff --git a/collections/threefold_token/storing_tft/img/metamask_switch.png b/collections/threefold_token/storing_tft/img/metamask_switch.png new file mode 100644 index 0000000..0d05255 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/metamask_switch.png differ diff --git a/collections/threefold_token/storing_tft/img/metamask_tft.png b/collections/threefold_token/storing_tft/img/metamask_tft.png new file mode 100644 index 0000000..dd75dea Binary files /dev/null and b/collections/threefold_token/storing_tft/img/metamask_tft.png differ diff --git a/collections/threefold_token/storing_tft/img/metamask_tft_auto.png b/collections/threefold_token/storing_tft/img/metamask_tft_auto.png new file mode 100644 index 0000000..996499c Binary files /dev/null and b/collections/threefold_token/storing_tft/img/metamask_tft_auto.png differ diff --git a/collections/threefold_token/storing_tft/img/metamask_tft_home.png b/collections/threefold_token/storing_tft/img/metamask_tft_home.png new file mode 100644 index 0000000..d0a2dda Binary files /dev/null and b/collections/threefold_token/storing_tft/img/metamask_tft_home.png differ diff --git a/collections/threefold_token/storing_tft/img/solar_account.png b/collections/threefold_token/storing_tft/img/solar_account.png new file mode 100644 index 0000000..55d157f Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_account.png differ diff --git a/collections/threefold_token/storing_tft/img/solar_add.png b/collections/threefold_token/storing_tft/img/solar_add.png new file mode 100644 index 0000000..10e5eca Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_add.png differ diff --git a/collections/threefold_token/storing_tft/img/solar_asset.png b/collections/threefold_token/storing_tft/img/solar_asset.png new file mode 100644 index 0000000..c47ede4 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_asset.png differ diff --git 
a/collections/threefold_token/storing_tft/img/solar_confirm.png b/collections/threefold_token/storing_tft/img/solar_confirm.png new file mode 100644 index 0000000..e5a03b8 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_confirm.png differ diff --git a/collections/threefold_token/storing_tft/img/solar_copy.jpeg b/collections/threefold_token/storing_tft/img/solar_copy.jpeg new file mode 100644 index 0000000..266484f Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_copy.jpeg differ diff --git a/collections/threefold_token/storing_tft/img/solar_desktop.jpeg b/collections/threefold_token/storing_tft/img/solar_desktop.jpeg new file mode 100644 index 0000000..27f14f5 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_desktop.jpeg differ diff --git a/collections/threefold_token/storing_tft/img/solar_home.jpeg b/collections/threefold_token/storing_tft/img/solar_home.jpeg new file mode 100644 index 0000000..4898bbf Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_home.jpeg differ diff --git a/collections/threefold_token/storing_tft/img/solar_home.png b/collections/threefold_token/storing_tft/img/solar_home.png new file mode 100644 index 0000000..530adb4 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_home.png differ diff --git a/collections/threefold_token/storing_tft/img/solar_install.png b/collections/threefold_token/storing_tft/img/solar_install.png new file mode 100644 index 0000000..380f6d1 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_install.png differ diff --git a/collections/threefold_token/storing_tft/img/solar_receive.jpeg b/collections/threefold_token/storing_tft/img/solar_receive.jpeg new file mode 100644 index 0000000..4898bbf Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_receive.jpeg differ diff --git 
a/collections/threefold_token/storing_tft/img/solar_search.png b/collections/threefold_token/storing_tft/img/solar_search.png new file mode 100644 index 0000000..0247b7e Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_search.png differ diff --git a/collections/threefold_token/storing_tft/img/solar_secret.jpeg b/collections/threefold_token/storing_tft/img/solar_secret.jpeg new file mode 100644 index 0000000..7a677a6 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_secret.jpeg differ diff --git a/collections/threefold_token/storing_tft/img/solar_success.png b/collections/threefold_token/storing_tft/img/solar_success.png new file mode 100644 index 0000000..57eb039 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_success.png differ diff --git a/collections/threefold_token/storing_tft/img/solar_tft.png b/collections/threefold_token/storing_tft/img/solar_tft.png new file mode 100644 index 0000000..b94af2a Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_tft.png differ diff --git a/collections/threefold_token/storing_tft/img/solar_tftinfo.png b/collections/threefold_token/storing_tft/img/solar_tftinfo.png new file mode 100644 index 0000000..a5df0f1 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_tftinfo.png differ diff --git a/collections/threefold_token/storing_tft/img/solar_verify.png b/collections/threefold_token/storing_tft/img/solar_verify.png new file mode 100644 index 0000000..39a462e Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_verify.png differ diff --git a/collections/threefold_token/storing_tft/img/solar_xlm.jpeg b/collections/threefold_token/storing_tft/img/solar_xlm.jpeg new file mode 100644 index 0000000..75612ea Binary files /dev/null and b/collections/threefold_token/storing_tft/img/solar_xlm.jpeg differ diff --git 
a/collections/threefold_token/storing_tft/img/threefold__trustwallet_overview.jpg b/collections/threefold_token/storing_tft/img/threefold__trustwallet_overview.jpg new file mode 100644 index 0000000..e8e84e4 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/threefold__trustwallet_overview.jpg differ diff --git a/collections/threefold_token/storing_tft/img/threefold__trustwallet_tft_added.jpg b/collections/threefold_token/storing_tft/img/threefold__trustwallet_tft_added.jpg new file mode 100644 index 0000000..23cc0a3 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/threefold__trustwallet_tft_added.jpg differ diff --git a/collections/threefold_token/storing_tft/img/threefold__trustwallet_tft_config.jpg b/collections/threefold_token/storing_tft/img/threefold__trustwallet_tft_config.jpg new file mode 100644 index 0000000..a62a447 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/threefold__trustwallet_tft_config.jpg differ diff --git a/collections/threefold_token/storing_tft/img/threefold__xlm_solar_tft_manual_image_19.jpg b/collections/threefold_token/storing_tft/img/threefold__xlm_solar_tft_manual_image_19.jpg new file mode 100644 index 0000000..7a677a6 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/threefold__xlm_solar_tft_manual_image_19.jpg differ diff --git a/collections/threefold_token/storing_tft/img/trust_backup.png b/collections/threefold_token/storing_tft/img/trust_backup.png new file mode 100644 index 0000000..8d2d170 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/trust_backup.png differ diff --git a/collections/threefold_token/storing_tft/img/trust_create.png b/collections/threefold_token/storing_tft/img/trust_create.png new file mode 100644 index 0000000..f98174a Binary files /dev/null and b/collections/threefold_token/storing_tft/img/trust_create.png differ diff --git a/collections/threefold_token/storing_tft/img/trust_created.png 
b/collections/threefold_token/storing_tft/img/trust_created.png new file mode 100644 index 0000000..f1a9fc5 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/trust_created.png differ diff --git a/collections/threefold_token/storing_tft/img/trust_notfound.jpeg b/collections/threefold_token/storing_tft/img/trust_notfound.jpeg new file mode 100644 index 0000000..fe46b8b Binary files /dev/null and b/collections/threefold_token/storing_tft/img/trust_notfound.jpeg differ diff --git a/collections/threefold_token/storing_tft/img/trust_notfound.png b/collections/threefold_token/storing_tft/img/trust_notfound.png new file mode 100644 index 0000000..c70f2af Binary files /dev/null and b/collections/threefold_token/storing_tft/img/trust_notfound.png differ diff --git a/collections/threefold_token/storing_tft/img/trust_recover.png b/collections/threefold_token/storing_tft/img/trust_recover.png new file mode 100644 index 0000000..7b4a869 Binary files /dev/null and b/collections/threefold_token/storing_tft/img/trust_recover.png differ diff --git a/collections/threefold_token/storing_tft/img/trust_verify.png b/collections/threefold_token/storing_tft/img/trust_verify.png new file mode 100644 index 0000000..5600e1a Binary files /dev/null and b/collections/threefold_token/storing_tft/img/trust_verify.png differ diff --git a/collections/threefold_token/storing_tft/interstellar_store.md b/collections/threefold_token/storing_tft/interstellar_store.md new file mode 100644 index 0000000..3c85bd5 --- /dev/null +++ b/collections/threefold_token/storing_tft/interstellar_store.md @@ -0,0 +1,116 @@ +

# Interstellar (TFT-Stellar)

## Table of Contents

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Get Started](#get-started)
  - [Create a New Interstellar account](#create-a-new-interstellar-account)
  - [Create a New Wallet](#create-a-new-wallet)
  - [Adding TFT as an Asset](#adding-tft-as-an-asset)
- [Purchase TFT on Interstellar](#purchase-tft-on-interstellar)
- [Disclaimer](#disclaimer)

***

## Introduction

Welcome to our guide on how to store TFT tokens (Stellar) via [**Interstellar**](https://interstellar.exchange/)!

Interstellar is a decentralized exchange built on the Stellar network that enables users to trade various assets, including TFT (Stellar). As an intuitive and user-friendly platform, Interstellar provides a seamless trading experience for Stellar users. With its focus on security and privacy, Interstellar ensures that users maintain control over their funds and private keys.

In this guide, we will walk you through the process of storing TFT on the Interstellar exchange, allowing you to participate in the vibrant Stellar ecosystem.

> If you are looking for ways to get or purchase TFT (Stellar) on Interstellar by trading, you will find the relevant information [here](../buy_sell_tft/interstellar.md).

## Prerequisites

- **XLM**: To get TFT tokens using Interstellar, a certain amount of XLM is required to facilitate the sending and receiving of assets on the Stellar network.

If you already have some XLM stored in another Stellar wallet or exchange, you can simply withdraw it to your new Interstellar account after you complete the signup process (we will explain how to do this later on). If not, there are multiple ways to acquire XLM and send it to your wallet. One option is to use an XLM-supported exchange, which provides a convenient platform for purchasing XLM. Click [**here**](https://www.coinlore.com/coin/stellar/exchanges) to access a comprehensive list of exchanges that support XLM.
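For readers who want to confirm their wallet is funded without opening an exchange UI, Stellar's public Horizon API exposes account balances as JSON at the documented `/accounts/{account_id}` endpoint. The sketch below is a minimal illustration; the `native_balance` and `fetch_balance` helpers are our own names, not part of any SDK:

```python
import json
from urllib.request import urlopen

def native_balance(account_json: dict) -> float:
    """Extract the native XLM balance from a Horizon account record."""
    for entry in account_json.get("balances", []):
        if entry.get("asset_type") == "native":  # "native" means XLM
            return float(entry["balance"])
    return 0.0

def fetch_balance(account_id: str) -> float:
    """Fetch an account from the public Horizon server and return its XLM."""
    url = f"https://horizon.stellar.org/accounts/{account_id}"
    with urlopen(url) as resp:
        return native_balance(json.load(resp))

# Offline example of the JSON shape Horizon returns for an account:
sample = {"balances": [{"asset_type": "native", "balance": "10000.0000000"}]}
print(native_balance(sample))  # → 10000.0
```

The same `balances` array also lists any non-native assets (such as TFT) once a trustline exists, each entry carrying an `asset_code` and `asset_issuer` instead of `"asset_type": "native"`.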
As an example, we have created a tutorial specifically focused on how to buy XLM on Coinbase, one of the popular cryptocurrency exchanges. It provides step-by-step instructions on purchasing XLM on the **Coinbase Exchange**. You can find the tutorial [**here**](../buy_sell_tft/coinbase_xlm.md).

## Get Started

### Create a New Interstellar account

For this guide, we will focus on creating an Interstellar account from a desktop. Go to [https://interstellar.exchange/](https://interstellar.exchange/) in your desktop browser and click the '**Login/signup**' button.

![](img/inter_login.png)

After selecting your preferred language and theme (dark/light) in the login dialogue, you will be redirected to the Interstellar homepage. Click '**Enter Account**' to start the signup process.

![](img/inter_home.png)

Create a new password for your Interstellar account and click '**Login**'.

![](img/inter_pass.png)

### Create a New Wallet

You are now redirected to your new Interstellar wallet homepage. You can either generate a new wallet randomly or create a custom wallet. In this case, we will generate a new wallet randomly by clicking the '**Generate Random Wallet**' button.

![](img/inter_home1.png)

Write down your Secret Key, which is needed to recover access to your account in case of a password loss or if your device is lost or stolen. The word order is very important.

Note: It is essential to save your secret key securely during the process of creating a new account or importing an existing one. The secret key is a critical component that grants access to your wallet and funds. Make sure to store it in a safe and offline location, such as a password manager or a physical backup, to prevent unauthorized access. Do not share your secret key with anyone and exercise caution to protect your assets.

Tick all the boxes and click '**Go to Account**' to proceed.

![](img/inter_key.png)

Congratulations!
You have successfully created a new Interstellar wallet.

![](img/inter_success.png)

Next, you will have to send funds (digital currencies that run on the Stellar blockchain only) to one of the wallets attached to your account.

For this manual, we have funded our wallet with 10,000 XLM (about $408), which now appears in the account overview.

![](img/inter_funded.png)

This step is obligatory because you will need to pay some XLM in order to add the TFT asset to your wallet in the following section.

### Adding TFT as an Asset

To store and trade TFT in your Interstellar wallet, you need to add TFT as an asset to your account. This will allow you to view your TFT balance, send and receive TFT tokens, and engage in trading activities involving TFT within the Interstellar wallet.

To add the TFT (ThreeFold Token) asset to your Interstellar account, follow these steps:

Click the '**Add Token**' button on your wallet homepage.

![](img/inter_success.png)

In the 'Select Asset' pop-up window, click the 'All (unverified)' option. From there, search for **TFT** and click the TFT icon in the results.

**IMPORTANT**: Make sure that you also see the name "**threefold.io**" next to the logo, as this verifies that you are selecting the genuine TFT asset associated with ThreeFold. **Beware of imposters or fraudulent assets that may attempt to mimic TFT.** ThreeFold cannot assume responsibility for any errors or mistakes made during the trustline creation process done by users. If you have any uncertainties or doubts, it is always recommended to seek assistance from official support channels or trusted sources to ensure the accuracy of the trustline configuration.

![](img/inter_popup.png)

Click '**Trust Asset**' once you have confirmed that the TFT asset you're adding is the official one from https://threefold.io.
Remember that you must first fund your wallet with some XLM; otherwise, this step will not complete successfully.

![](img/INTER_TFT.png)

Congratulations! The TFT asset has been successfully added to your account.

![](img/inter_ok.png)

You can now store TFT in your Interstellar account by clicking on the TFT asset icon and then clicking 'Receive' to deposit TFT from another wallet into your TFT asset wallet.

![](img/inter_deposit.png)

## Purchase TFT on Interstellar

If you are looking for ways to get or purchase TFT (Stellar) on Interstellar by trading, you will find the relevant information [here](../buy_sell_tft/interstellar.md).

## Disclaimer

The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions.

**The ThreeFold Token (TFT)** is not to be considered a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed.

diff --git a/collections/threefold_token/storing_tft/lobstr_wallet.md b/collections/threefold_token/storing_tft/lobstr_wallet.md new file mode 100644 index 0000000..f43f174 --- /dev/null +++ b/collections/threefold_token/storing_tft/lobstr_wallet.md @@ -0,0 +1,103 @@

# Lobstr Wallet

## Table of Contents

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Get Started](#get-started)
  - [Create a New Lobstr Wallet](#create-a-new-lobstr-wallet)
  - [Fund XLM to Wallet](#fund-xlm-to-wallet)
  - [Adding TFT Asset to Lobstr Wallet](#adding-tft-asset-to-lobstr-wallet)
- [Important Notice](#important-notice)
- [Disclaimer](#disclaimer)

***

## Introduction

Welcome to our guide on how to store TFT tokens (Stellar) via the [**Lobstr Wallet**](https://lobstr.co/)!

Lobstr Wallet is a secure and user-friendly wallet designed specifically for the Stellar blockchain. It allows you to store, manage, and transact with your Stellar-based assets, including TFT (ThreeFold Token). In this tutorial, we will guide you through the process of setting up Lobstr Wallet and storing your TFT (Stellar) tokens. By following the steps outlined in this guide, you will be able to access and manage your TFT assets with ease and confidence using the Lobstr Wallet platform.

## Prerequisites

- **XLM**: When storing TFT tokens using the Lobstr wallet, a certain amount of XLM is required to facilitate the sending and receiving of assets on the Stellar network.

If you already have some XLM stored in another Stellar wallet or exchange, you can simply withdraw it to your new Lobstr wallet account after you complete the signup process. If not, there are multiple ways to acquire XLM and send it to your wallet. One option is to use an XLM-supported exchange, which provides a convenient platform for purchasing XLM. Click [**here**](https://www.coinlore.com/coin/stellar/exchanges) to access a comprehensive list of exchanges that support XLM.

You can also purchase XLM directly inside your new Lobstr wallet account using a fiat payment method such as a credit or debit card.

## Get Started

### Create a New Lobstr Wallet

For this guide, we will focus on the mobile app version of Lobstr Wallet.
Go to [https://lobstr.co/](https://lobstr.co/) from your mobile device and download the Lobstr Wallet app to your phone.

![](./img/lobstr_download.png)

You can also create and use the web version of Lobstr Wallet by signing up on the official website at [https://lobstr.co/](https://lobstr.co/). See [Lobstr's knowledge base](https://lobstr.freshdesk.com/support/solutions/articles/151000001052-how-to-create-an-account-in-lobstr-) for the web signup tutorial.

Insert your email address and choose a strong, unique password.

![](./img/lobstr_pass.png)

Verify your account by clicking on the '**Verify Email**' button in the message sent to your email address.

![](./img/lobstr_verify.png)

Press the '**Create Wallet**' button if you'd like to create a new Stellar wallet.

![](./img/lobstr_new.png)

Write down your Recovery Phrase, which is needed to recover access to your account if you lose your password or your phone is lost or stolen. The word order is very important.

Note: It is essential to save your secret key securely when creating a new account or importing an existing one. The secret key is a critical component that grants access to your wallet and funds. Store it in a safe and offline location, such as a password manager or a physical backup, to prevent unauthorized access. Do not share your secret key with anyone and exercise caution to protect your assets.

![](./img/lobstr_key.png)

(Optional) Set a federation address, which is a unique name for your new Stellar wallet that can be shared with others to receive funds, and click '**Save**'.

![](./img/lobstr_fed.png)

Congratulations! You just created a new Lobstr wallet account.

### Fund XLM to Wallet

To start storing TFT in Lobstr, the first step is to fund your wallet with XLM (Stellar Lumens). Transfer at least 1 XLM to your wallet to activate it.
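The 1 XLM activation minimum comes from Stellar's reserve rules: an account must always hold a number of base reserves that grows with each "subentry" (such as an asset trustline). A minimal sketch of the arithmetic, assuming the commonly published base reserve of 0.5 XLM (a network parameter that can change, so verify against the official Stellar documentation):

```python
# Stellar minimum-balance sketch. BASE_RESERVE_XLM is an assumption here:
# 0.5 XLM is the widely published value, but it is a network parameter.
BASE_RESERVE_XLM = 0.5

def min_balance_xlm(num_trustlines: int) -> float:
    """Minimum XLM an account must hold: (2 + subentries) * base reserve.
    Each asset trustline, such as the TFT asset, counts as one subentry."""
    return (2 + num_trustlines) * BASE_RESERVE_XLM

print(min_balance_xlm(0))  # 1.0 -> why at least 1 XLM is needed to activate
print(min_balance_xlm(1))  # 1.5 -> after adding the TFT trustline
```

This is also why you should send slightly more than the bare minimum: transaction fees and the trustline you add later both eat into the spendable balance.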
To purchase Stellar Lumens with a credit or debit card, you can use the '[**Buy Lumens**](https://lobstr.co/buy-xlm/)' option on the Lobstr web and mobile apps.

Note: Credit card payments are handled by [Moonpay](https://www.moonpay.com/), a separate platform owned by a third party. Click [here](https://lobstr.freshdesk.com/support/solutions/articles/151000001053-buying-crypto-with-lobstr-wallet) to read the full tutorial.

Once the XLM is successfully sent, it will appear on your Lobstr wallet homepage.

### Adding TFT Asset to Lobstr Wallet

To store TFT in your Lobstr Wallet, you need to add the TFT asset to your account. This will allow you to view your TFT balance, send and receive TFT tokens, and engage in trading activities involving TFT within the Lobstr wallet.

To add the TFT (ThreeFold Token) asset to your Lobstr Wallet account, follow these steps:

Open the "Assets" screen from the left side menu.

![](./img/lobstr_assets.png)

Use the search bar at the top of the "Assets" screen to search for TFT, then click the Add button next to it.

**IMPORTANT**: Make sure you also see the name "**threefold.io**" next to the logo, as this verifies that you are selecting the genuine TFT asset associated with ThreeFold. **Beware of imposters or fraudulent assets that may attempt to mimic TFT.** ThreeFold cannot assume responsibility for any errors or mistakes made by users during the trustline creation process. If you have any uncertainties or doubts, seek assistance from official support channels or trusted sources to ensure the accuracy of the trustline configuration.

![](./img/lobstr_asset.jpeg)

Congratulations! The TFT asset has been successfully added to your Lobstr Wallet account.
If you've performed the steps above correctly and no assets are displayed, contact [Lobstr support](https://lobstr.freshdesk.com/support/tickets/new) with the details and the home domain of the asset.

You can now store TFT by depositing it from another wallet on your TFT asset page.

## Important Notice

If you are looking for ways to get or purchase TFT (Stellar) on Lobstr Wallet by trading or swapping, you will find the relevant information [here](../buy_sell_tft/tft_lobstr/tft_lobstr_complete_guide.md).

## Disclaimer

> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions.
>
> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](../../../knowledge_base/legal/disclaimer.md) and seek advice from a qualified financial professional if needed.

diff --git a/collections/threefold_token/storing_tft/metamask.md b/collections/threefold_token/storing_tft/metamask.md new file mode 100644 index 0000000..a70c8d6 --- /dev/null +++ b/collections/threefold_token/storing_tft/metamask.md @@ -0,0 +1,142 @@

# MetaMask Wallet


## Table of Contents

- [Introduction](#introduction)
- [TFT Addresses](#tft-addresses)
  - [Ethereum Chain Address](#ethereum-chain-address)
  - [BSC Address](#bsc-address)
- [Get Started](#get-started)
  - [Install \& Create Metamask Account](#install--create-metamask-account)
  - [Configure a BSC Wallet (Mainnet) for your TFT](#configure-a-bsc-wallet-mainnet-for-your-tft)
  - [Switch Network from ETH to BSC (Auto Add)](#switch-network-from-eth-to-bsc-auto-add)
  - [Add TFT Asset to BSC Wallet (Mainnet)](#add-tft-asset-to-bsc-wallet-mainnet)
- [Important Notice](#important-notice)
- [Disclaimer](#disclaimer)

***

## Introduction

[MetaMask](https://metamask.io/) is a popular browser extension wallet that allows users to interact with various blockchain networks, including Binance Smart Chain (BSC) and the Ethereum network. By following the steps outlined in this guide, you'll be able to configure your MetaMask wallet to support TFT tokens on BSC and seamlessly participate in the TFT ecosystem. So let's dive in and explore how to set up TFT on MetaMask with Binance Smart Chain.

## TFT Addresses

With MetaMask, you can store TFT on both Binance Smart Chain and the Ethereum chain. Make sure to use the correct TFT address for the chain you are transacting on.

### Ethereum Chain Address

The ThreeFold Token (TFT) is available on Ethereum. It is implemented as a wrapped asset with the following token address:

```
0x395E925834996e558bdeC77CD648435d620AfB5b
```

### BSC Address

The ThreeFold Token (TFT) is available on BSC. It is implemented as a wrapped asset with the following token address:

```
0x8f0FB159380176D324542b3a7933F0C2Fd0c2bbf
```

## Get Started

We present the steps on BSC. Make sure to switch to Ethereum and use the TFT Ethereum address if you want to proceed on Ethereum instead.

### Install & Create Metamask Account

Go to the [MetaMask website](https://metamask.io/) to download and install the MetaMask extension for your browser.
Follow [this tutorial](https://support.metamask.io/hc/en-us/articles/360015489531-Getting-started-with-MetaMask) to install MetaMask in your preferred browser and create a wallet.

Once you've installed MetaMask, you'll see the small fox icon at the top right of your screen, and a notification will appear letting you know that the install was successful.

![](./img/metamask_installed.png)

**Create a MetaMask account**

![](./img/metamask_new.png)

Assuming you are a new user, click the "**Create a Wallet**" button on the right to get started with a new wallet.

![](./img/metamask_pass.png)

This is your password for access to your wallet on your local device. As with all online accounts, make sure you use a strong password. On the next page, you'll be given a set of 12 words that act as your **private key**.

![](./img/metamask_privatekey.png)

You can think of these words as your master key to all your digital assets. If you have lost or forgotten your password, you can use this set of 12 words to restore your wallet on any device. However, if anyone were to gain access to these 12 words, they could do the same.

This passphrase is very important, so be sure to keep it safe. Once you have written the words down, click the "Next" button. Then, just to make sure that you have written them down, MetaMask asks you to confirm your secret recovery phrase by selecting the words in the order they were originally given. You must enter your 12 words in the correct order.

Once you've entered those words, you can click "Continue." Then, you're done! It's that easy.

### Configure a BSC Wallet (Mainnet) for your TFT

MetaMask supports various networks, including Ethereum, Binance Smart Chain (BSC), and more. By adding the BSC network, you can seamlessly manage and transact with BSC-based assets like TFT tokens directly within your MetaMask wallet.
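For reference, the BSC mainnet network details are commonly published as below. This is a sketch shaped like the parameter object MetaMask's `wallet_addEthereumChain` RPC method (EIP-3085) expects; the RPC URL and explorer are the commonly used public endpoints, so verify them against the official BNB Chain documentation before relying on them:

```python
# Commonly published BSC mainnet parameters (verify against official
# BNB Chain docs), laid out as a wallet_addEthereumChain-style object.
BSC_MAINNET = {
    "chainId": "0x38",  # 56 in decimal
    "chainName": "BNB Smart Chain",
    "nativeCurrency": {"name": "BNB", "symbol": "BNB", "decimals": 18},
    "rpcUrls": ["https://bsc-dataseed.binance.org/"],
    "blockExplorerUrls": ["https://bscscan.com"],
}

print(int(BSC_MAINNET["chainId"], 16))  # 56
```

The auto-add flow described next fills these fields in for you; entering them manually under "Add Network" should produce the same result.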
To configure a BSC wallet on MetaMask, you add the BSC network by specifying network details such as the chain ID, RPC URL, and currency symbol.

### Switch Network from ETH to BSC (Auto Add)

You might notice straight away that we're still dealing with an Ethereum wallet, but we want a wallet on the BSC network. To add a new network, click the network selector to open a drop-down menu and hit the "**Add Network**" tab to proceed to the next step.

![](./img/metamask_add.png)

You can add BNB Smart Chain from the list of popular networks by clicking "Add" and approving the request once it pops up.

![](./img/metamask_choose.png)

After it's added, you will see that your mainnet network has turned into "**Smart Chain**." If you wish to change back to the Ethereum network, simply press the "Smart Chain" menu at the top.

![](./img/metamask_switch.png)

### Add TFT Asset to BSC Wallet (Mainnet)

Now that you have successfully created a BSC wallet, you need to add TFT to it.

To do so, go to your 'Assets' page and click on '**Import Tokens**' at the bottom of the page.

![](./img/metamask_import.png)

You can add TFT to your wallet automatically by typing '**TFT ON BSC (TFT)**' on the '**search**' page,

![](./img/metamask_tft_auto.png)

or by going to the '**custom token**' page and entering the TFT details manually. Make sure you enter the right details below to add TFT to your wallet successfully:

- **Token Contract Address**: ```0x8f0fb159380176d324542b3a7933f0c2fd0c2bbf```

MetaMask will fill in the rest of the details. Once added, click 'Add custom token'.

![](./img/metamask_custom.png)

After that, MetaMask will ask you to confirm the token import. Click '**Import Tokens**'.

![](./img/metamask_confirm.png)

Congratulations, you have successfully added TFT to your BSC wallet on MetaMask.
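You may notice the token address appears mixed-case in one place and lower-case in another. Both refer to the same token: hex addresses are case-insensitive, and mixed case only encodes the optional EIP-55 checksum. A minimal sketch (helper name is our own) of how to compare them safely:

```python
# Hex EVM addresses compare equal case-insensitively; mixed case is only
# an optional checksum encoding (EIP-55), not a different address.
def same_address(a: str, b: str) -> bool:
    return a.lower() == b.lower()

TFT_BSC = "0x8f0FB159380176D324542b3a7933F0C2Fd0c2bbf"
TFT_ETH = "0x395E925834996e558bdeC77CD648435d620AfB5b"

print(same_address(TFT_BSC, "0x8f0fb159380176d324542b3a7933f0c2fd0c2bbf"))  # True
print(same_address(TFT_BSC, TFT_ETH))  # False: different chains, different tokens
```

The second comparison is the one that matters in practice: the BSC and Ethereum TFT addresses are genuinely different contracts, so always match the address to the network you are on.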
![](./img/metamask_tft.png)

You should see TFT listed in the 'Assets' section of your wallet homepage. Now that TFT is listed, you can do TFT transactions via MetaMask.

![](./img/metamask_tft_home.png)

## Important Notice

To deposit TFT tokens to your MetaMask BSC wallet, **you can only initiate a transfer or swap from another wallet or exchange platform that operates on the Binance Smart Chain (BSC) network.** Ensure that the platform you are using is on BSC to avoid the risk of losing tokens.

For example, you cannot transfer TFT tokens directly from the TF Connect app to MetaMask, because TFT tokens on TF Connect operate on the Stellar network, while TFT on MetaMask lives on the Binance Smart Chain (BSC) network.

But don't worry! You can still swap your Stellar TFT into BSC TFT and vice versa by bridging them using our [TFT BSC Bridge](https://bridge.bsc.threefold.io/). See the tutorial [here](../tft_bridges/bsc_stellar_bridge.md).

You can also buy TFT on BSC-supported exchanges like [PancakeSwap](https://pancakeswap.finance/). See the tutorial [here](../buy_sell_tft/pancakeswap.md).

## Disclaimer

> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions.
>
> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding.
We strongly encourage you to read our [full disclaimer](../../../knowledge_base/legal/disclaimer.md) and seek advice from a qualified financial professional if needed.

diff --git a/collections/threefold_token/storing_tft/solar_wallet.md b/collections/threefold_token/storing_tft/solar_wallet.md new file mode 100644 index 0000000..3cd8d62 --- /dev/null +++ b/collections/threefold_token/storing_tft/solar_wallet.md @@ -0,0 +1,122 @@

# Solar Wallet (TFT-Stellar)


## Table of Contents

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Get Started](#get-started)
  - [Create a New Solar Wallet](#create-a-new-solar-wallet)
  - [Fund XLM to New Wallet](#fund-xlm-to-new-wallet)
  - [Adding TFT Asset to Solar Wallet](#adding-tft-asset-to-solar-wallet)
- [Storing / Receiving and Sending TFT](#storing--receiving-and-sending-tft)
- [Important Notice](#important-notice)
- [Disclaimer](#disclaimer)

***

## Introduction

Welcome to our guide on how to store TFT tokens (Stellar) via the [**Solar Wallet**](https://solarwallet.io/)!

**Solar Wallet** is a user-friendly wallet designed for storing and managing Stellar-based assets like the ThreeFold Token (TFT). It provides a secure way to store your TFT tokens and access them conveniently. With Solar Wallet, you have full control over your assets and can interact with various Stellar services and decentralized applications. Solar Wallet is available as a web-based wallet and also offers mobile versions for iOS and Android devices. This guide explains how to store TFT (Stellar) on Solar Wallet, including setup, adding tokens, and important security tips.

***

## Prerequisites

- **XLM**: When storing TFT tokens using the Solar wallet, a certain amount of XLM is required to facilitate the sending and receiving of assets on the Stellar network.

There are multiple ways to acquire XLM and send it to your wallet. One option is to use an XLM-supported exchange, which provides a convenient platform for purchasing XLM. Click [**here**](https://www.coinlore.com/coin/stellar/exchanges) to access a comprehensive list of exchanges that support XLM.

As an example, we have created a tutorial specifically focusing on how to buy XLM on Coinbase, one of the popular cryptocurrency exchanges. This tutorial provides step-by-step instructions on purchasing XLM on **Coinbase Exchange**. You can find the tutorial [**here**](../buy_sell_tft/coinbase_xlm.md).
***

## Get Started

### Create a New Solar Wallet

For this guide, we will focus on using the desktop version of Solar Wallet for macOS. You can download the appropriate version for your operating system by visiting the official website at [https://solarwallet.io/](https://solarwallet.io/) and clicking '**Get Wallet Now**'.

![](img/solar_home.png)

![](img/solar_desktop.jpeg)

Note: It is essential to save your secret key securely when creating a new account or importing an existing one. The secret key is a critical component that grants access to your wallet and funds. Store it in a safe and offline location, such as a password manager or a physical backup, to prevent unauthorized access. Do not share your secret key with anyone and exercise caution to protect your assets.

![](img/solar_secret.jpeg)

After the download is finished, locate the downloaded file on your computer and double-click on it to initiate the installation process. Follow the on-screen instructions to install Solar Wallet on your macOS device. This may involve dragging the application to your Applications folder or following prompts from the installation wizard.

![](img/solar_install.png)

Once installed, open your Solar wallet. Congratulations! You just created a new Solar wallet account.

![](img/solar_home.jpeg)

### Fund XLM to New Wallet

To start storing TFT in Solar, the first step is to fund your wallet with XLM (Stellar Lumens). By funding your wallet with XLM, you will have the necessary cryptocurrency to cover future transaction fees and engage in token transactions on Solar Wallet. Follow these steps:

On the Solar Wallet homepage, click on the '**Receive**' button. This will provide you with a wallet address to receive XLM.

![](img/solar_receive.jpeg)

Copy the generated wallet address or use the provided QR code to receive XLM from your preferred source.
You can use an external exchange, another wallet, or any platform that supports XLM transfers.

![](img/solar_copy.jpeg)

Send the desired amount of XLM to the generated wallet address. Ensure that you are sending XLM from a compatible source, and double-check the address to prevent any errors. Wait for the XLM transaction to be confirmed on the Stellar network. This typically takes a few moments, but it may vary depending on network congestion.

Once the XLM is successfully sent, it will appear on your Solar homepage.

![](img/solar_xlm.jpeg)

### Adding TFT Asset to Solar Wallet

To store TFT in your Solar Wallet, you need to add the TFT asset to your account. This will allow you to view your TFT balance, send and receive TFT tokens, and engage in trading activities involving TFT within the Solar wallet.

To add the TFT (ThreeFold Token) asset to your Solar Wallet account, follow these steps:

Click on the top-right menu icon in your Solar Wallet interface. This will open a dropdown menu with various options. From the dropdown menu, select "**Assets & Balances.**" This will navigate you to the Assets & Balances section of your wallet.

![](img/solar_asset.png)

In the Assets & Balances section, click on the "**+ Add Asset To Your Account**" button. This will allow you to add a new asset to your wallet.

![](img/solar_account.png)

A search box will appear. Type "**TFT**" in the search box to find the ThreeFold Token asset. Click on the search result as shown below.

![](img/solar_search.png)

**IMPORTANT**: Make sure you also see the name "**threefold.io**" next to the logo, as this verifies that you are selecting the genuine TFT asset associated with ThreeFold. **Beware of imposters or fraudulent assets that may attempt to mimic TFT.** ThreeFold cannot assume responsibility for any errors or mistakes made by users during the trustline creation process.
If you have any uncertainties or doubts, seek assistance from official support channels or trusted sources to ensure the accuracy of the trustline configuration.

Once you see the TFT asset, click on the "**Add Asset To Account**" button. Please note that adding the asset will require a small amount of XLM to set up a trustline for the TFT asset.

![](img/solar_tftinfo.png)

Confirm adding the TFT asset to your wallet.

![](img/solar_confirm.png)

After confirming the transaction, wait until the process completes.

![](img/solar_success.png)

The TFT icon will now be displayed in your wallet overview.

![](img/solar_tft.png)

Congratulations! The TFT asset has been successfully added to your Solar Wallet account.

***

## Storing / Receiving and Sending TFT

You can now store TFT by depositing it from another wallet: click '**Receive**' on your TFT asset page, copy your public Solar wallet address, and send it to the withdrawer.

You can also transfer TFT to another Stellar wallet by clicking the '**Send**' icon on your wallet navbar and following the instructions.

***

## Important Notice

If you are looking for ways to get or purchase TFT (Stellar) on Solar Wallet, you will find the relevant information [here](../buy_sell_tft/solar_buy.md).

***

## Disclaimer

The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions.

**The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns.
Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed.

diff --git a/collections/threefold_token/storing_tft/storing_tft.md b/collections/threefold_token/storing_tft/storing_tft.md new file mode 100644 index 0000000..7d05dd6 --- /dev/null +++ b/collections/threefold_token/storing_tft/storing_tft.md @@ -0,0 +1,16 @@

# Storing TFT

In this section, we present different ways to store ThreeFold Tokens.

The simplest way is to use the ThreeFold Connect app to store your TFT. You can also use a hardware wallet for additional security.

There are various options available to store TFTs, each with its own benefits and considerations. When it comes to storing TFTs (ThreeFold Tokens), users have the flexibility to choose between different wallet options depending on the blockchain platform on which the tokens are issued.

If TFTs are issued on the Stellar blockchain, users can opt to store them in Stellar-compatible wallets. On the other hand, if TFTs are bridged and swapped from Stellar to the Binance Smart Chain (BSC) or the Ethereum network, users can utilize BSC-compatible and ETH-compatible wallets to store their tokens.

## Table of Contents

- [ThreeFold Connect App (Stellar)](./tf_connect_app.md)
- [Lobstr Wallet (Stellar)](./lobstr_wallet.md)
- [MetaMask (BSC & ETH)](./metamask.md)
- [Hardware Wallet](./hardware_wallet.md)

diff --git a/collections/threefold_token/storing_tft/tf_connect_app.md b/collections/threefold_token/storing_tft/tf_connect_app.md new file mode 100644 index 0000000..03bcdda --- /dev/null +++ b/collections/threefold_token/storing_tft/tf_connect_app.md @@ -0,0 +1,312 @@

# ThreeFold Connect App Wallet


## Table of Contents

- [Introduction](#introduction)
- [ThreeFold Connect Installation](#threefold-connect-installation)
  - [Verify your identity by Email](#verify-your-identity-by-email)
  - [Change email associated with TF account](#change-email-associated-with-tf-account)
- [Create a Wallet](#create-a-wallet)
- [See ThreeFold Connect App and Wallet Transactions](#see-threefold-connect-app-and-wallet-transactions)
- [Connect to the Planetary Network](#connect-to-the-planetary-network)
- [Show Seed Phrase - Remove Account from TF Connect App](#show-seed-phrase---remove-account-from-tf-connect-app)
- [Recover Account on the ThreeFold Connect App](#recover-account-on-the-threefold-connect-app)
- [Import Farm from the TF Connect App to the TF Dashboard](#import-farm-from-the-tf-connect-app-to-the-tf-dashboard)
- [Import TF Dashboard Wallet to the TF Connect App Wallet](#import-tf-dashboard-wallet-to-the-tf-connect-app-wallet)
- [Send and Receive TFT between TF Connect Wallets](#send-and-receive-tft-between-tf-connect-wallets)
  - [Send TFT](#send-tft)
  - [Receive TFT](#receive-tft)
  - [Send TFT to Hardware Wallet on Stellar Blockchain](#send-tft-to-hardware-wallet-on-stellar-blockchain)
- [Disclaimer](#disclaimer)

***

## Introduction

In this section, we cover the basics of the ThreeFold Connect app.

This app is available for [Android](https://play.google.com/store/apps/details?id=org.jimber.threebotlogin&hl=en&gl=US) and [iOS](https://apps.apple.com/us/app/threefold-connect/id1459845885).

- Note that for Android phones, you need at minimum the Android 8.0 software version.
- Note that for iOS phones, you need at minimum iOS 14.5. It will soon be available for iOS 13.

![farming_wallet_5](./img/farming_wallet_5.png)

## ThreeFold Connect Installation

Either use the links above, or search for the ThreeFold Connect app on the Apple App Store or the Google Play Store. Then install and open the app.
If you want to leave a 5-star review of the app, no one here will stop you!

![farming_tf_wallet_1](./img/farming_tf_wallet_1.png)
![farming_tf_wallet_2](./img/farming_tf_wallet_2.png)

When you open the app, if you get an error message such as "Error in initialization in Flagsmith...", you might need to upgrade your phone to a newer software version (8.0 for Android and 13 for iOS).

Once you are in the application, you will see some introduction pages to help you familiarize yourself with the TF Connect app. You will also be asked to read and accept ThreeFold's terms and conditions.

![farming_tf_wallet_3](./img/farming_tf_wallet_3.png)
![farming_tf_wallet_4](./img/farming_tf_wallet_4.png)

You will then be asked to either *SIGN UP* or *RECOVER ACCOUNT*. For now, we will show how to sign up. Later in the guide, we will show you how to recover an account.

![farming_tf_wallet_5](./img/farming_tf_wallet_5.png)

You will then be asked to choose a *ThreeFold Connect ID*. This ID will be used, along with the seed phrase, when you want to recover an account. Choose wisely, and do not forget it! Here we use TFExample as an example.

![farming_tf_wallet_6](./img/farming_tf_wallet_6.png)

Next, you need to add a valid email address. This will be used as a broad KYC. You will need to access your email and confirm the validation email from ThreeFold to properly use the TF Connect app wallet.

![farming_tf_wallet_7](./img/farming_tf_wallet_7.png)

The next step is crucial! Make sure no one is around looking at your screen. You will be shown your seed phrase. Keep it in a secure and offline place. You will need the 3bot ID and the seed phrase to recover your account. This seed phrase is of utmost importance. Do not lose it or give it to anyone.

![farming_tf_wallet_8](./img/farming_tf_wallet_8.png)

Once you've hit Next, you will be asked to write down 3 random words of your seed phrase.
This is a necessary step to ensure you have taken the time to write down your seed phrase.

![farming_tf_wallet_9](./img/farming_tf_wallet_9.png)

Then, you'll be asked to confirm your TF 3bot name and the associated email.

![farming_tf_wallet_10](./img/farming_tf_wallet_10.png)

Finally, you will be asked to choose a 4-digit PIN. This will be needed to use the ThreeFold Connect app. If you ever forget this 4-digit PIN, you will need to recover your account from your 3bot name and your seed phrase. You will need to confirm the new PIN in the next step.

![farming_tf_wallet_11](./img/farming_tf_wallet_11.png)

That's it! You've created your ThreeFold Connect account. You can press the hamburger menu on the top left to explore the ThreeFold Connect app.

![farming_tf_wallet_12](./img/farming_tf_wallet_12.png)

In the next step, we will create a ThreeFold Connect wallet. You'll see, it's very simple!

### Verify your identity by Email

Once you've created your account, an email will be sent to the email address you chose during the account creation process.

To verify your email, go to your email account and open the email sent by *info@openkyc.live* with the subject *Verify your email address*.

In this email, click on the link *Verify my email address*. This will lead you to a *login.threefold.me* link. The process should be automatic. Once this is done, you will receive a confirmation on screen, as well as on your phone.

![farming_tf_wallet_39](./img/farming_tf_wallet_39.png)

![farming_tf_wallet_40](./img/farming_tf_wallet_40.png)

![farming_tf_wallet_41](./img/farming_tf_wallet_41.png)

If for some reason you did not receive the verification email, simply click on *Verify* and another email will be sent.
+ +![farming_tf_wallet_42](./img/farming_tf_wallet_42.png) + +![farming_tf_wallet_43](./img/farming_tf_wallet_43.png) + +### Change email associated with TF account + +If you want to change your email, simply click on the *pencil* next to your email and write another email. You will need to redo the KYC verification process. + +![farming_tf_wallet_44](./img/farming_tf_wallet_44.png) + + +## Create a Wallet + +To create a wallet, click on the ThreeFold Connect app menu. This is what you see. Choose *Wallet*. + +![farming_tf_wallet_13](./img/farming_tf_wallet_13.png) + +Once you are in the section *Wallet*, click on *Create Initial Wallet*. If it doesn't work the first time, retry some more. If you have trouble creating a wallet, make sure your connection is reliable. You can try a couple of minutes later if it still doesn't work. With a reliable connection, there shouldn't be any problem. Contact TF Support if problems persist. + +![farming_tf_wallet_14](./img/farming_tf_wallet_14.png) + +This is what you see when the TF Grid is initializing your wallet. + +![farming_tf_wallet_15](./img/farming_tf_wallet_15.png) + +Once your wallet is initialized, you will see *No blanace found for this wallet*. You can click on this button to enter the wallet. + +![farming_tf_wallet_16](./img/farming_tf_wallet_16.png) + +Once inside your wallet, this is what you see. + +![farming_tf_wallet_17](./img/farming_tf_wallet_17.png) + +We will now see where the Stellar and the TF Chain Addresses and Secrets are to be found. We will also changing the wallet name. To do so, click on the *circled i* at the bottom right of the screen. + +![farming_tf_wallet_18](./img/farming_tf_wallet_18.png) + +![farming_tf_wallet_19](./img/farming_tf_wallet_19.png) + +You can choose the name you want for your wallet. Here we use TFWalletExample. Note that you can also use alphanumeric characters. 
+ +![farming_tf_wallet_20](./img/farming_tf_wallet_20.png) + +At the top of the section *Wallet*, we can see that the name has changed. + +![farming_tf_wallet_21](./img/farming_tf_wallet_21.png) + +Now, if you want to copy your Stellar Address, simply click on the button presented with the green circle. To access the TF Chain address, click on the button presented with the red circle. When your phone has copied the address, the TF Connect app will give show a confirmation message as shown below. + +![farming_tf_wallet_22](./img/farming_tf_wallet_22.png) + +In some situations, you will want to access the Stellar and TF Chain secrets. To do so, simply click on the "eye" button of the desired chain, and then copy the secret. + +![farming_tf_wallet_23](./img/farming_tf_wallet_23.png) + +## See ThreeFold Connect App and Wallet Transactions + +To see your transactions, simply click on the two arrows at the bottom of the screen, as shown below. + +![farming_tf_wallet_29](./img/farming_tf_wallet_29.png) + +## Connect to the Planetary Network + +To connect to the Planetary Network, click on the Planetary Network on the TF menu as shown below. + +![farming_tf_wallet_30](./img/farming_tf_wallet_30.png) + +Connecting to the Planetary Network couldn't be easier. Simply click on the connection button and you will see *Connected* on the screen once the connection is settled. + +![farming_tf_wallet_31](./img/farming_tf_wallet_31.png) + +## Show Seed Phrase - Remove Account from TF Connect App + +To see your seed phrase or remove your account from the TF Connect app, choose *Settings* in the ThreeFold Connect app menu. + +![farming_tf_wallet_32](./img/farming_tf_wallet_32.png) + +First, to see your seed phrase, click on this button as shown below: + +![farming_tf_wallet_31](./img/farming_tf_wallet_38.png) + +You will then be able to see your seed phrase. You can make sure you have your seed phrase somewhere safe, offline, before removing your account. 
+ +Now, we will remove the account from the ThreeFold Connect app. In Settings, click on the arrow circled in green and click on the red button with a white dash in it. Beware: once done, you can only recover your account with your **3bot ID** and your **seed phrase**. + +![farming_tf_wallet_33](./img/farming_tf_wallet_33.png) + +You will be asked to confirm your action as a security check. + +![farming_tf_wallet_34](./img/farming_tf_wallet_34.png) + +## Recover Account on the ThreeFold Connect App + +Once you've removed your account, if you want to recover it, choose the option *RECOVER ACCOUNT* on the opening screen of the app. + +![farming_tf_wallet_35](./img/farming_tf_wallet_35.png) + +You will be asked to enter your *3bot ID* as well as your *seed phrase*. + +![farming_tf_wallet_36](./img/farming_tf_wallet_36.png) + +You will then be asked to choose and confirm a new 4-digit pin code. Once this is done, you will receive the following confirmation: + +![farming_tf_wallet_37](./img/farming_tf_wallet_37.png) + +That's it! You've recovered your account. + + +## Import Farm from the TF Connect App to the TF Dashboard + +If you want to import your farm from the ThreeFold Connect app to the ThreeFold Dashboard, follow these steps. You will need to use the old version of the Dashboard for this ([https://old.dashboard.grid.tf](https://old.dashboard.grid.tf)). + +Note that as of now, you cannot import your farm from the TF Dashboard to the ThreeFold Connect app, but it is possible to import your wallet. + +First, you need to find the TF Chain Secret, which is, in short, a hex version of the private key. To find the secret, head over to the *Farmer migration* section (via the TF menu). + +In the *Farmer migration* section, click on the arrow (in green here) of the farm you want to export on the ThreeFold Dashboard. + +![farming_tf_wallet_45](./img/farming_tf_wallet_45.png) + +Then, click on the arrow (in green) to see your TF Chain Secret. 
+ +![farming_tf_wallet_46](./img/farming_tf_wallet_46.png) + +Click on the button to copy the Secret. The app will show a confirmation message. + +![farming_tf_wallet_47](./img/farming_tf_wallet_47.png) + +Now head over to the browser with your polkadot.js extension. For more information on this, check the section [Creating a Polkadot.js account](#1-creating-a-polkadotjs-account). + +On your browser, click on the extension button (in green). + +![farming_tf_wallet_48](./img/farming_tf_wallet_48.png) + +Select the polkadot{.js} extension. + +![farming_tf_wallet_49](./img/farming_tf_wallet_49.png) + +Click on the *plus* button as shown in green. + +![farming_tf_wallet_50](./img/farming_tf_wallet_50.png) + +Choose the option *Import account from pre-existing seed*. + +![farming_tf_wallet_51](./img/farming_tf_wallet_51.png) + +In the box *EXISTING 12 OR 24-WORD MNEMONIC SEED*, paste the TF Chain Secret. Note that this Secret is a HEX version of your seed phrase. + +![farming_tf_wallet_52](./img/farming_tf_wallet_52.png) + +Choose a name and a password for your account. + +![farming_tf_wallet_53](./img/farming_tf_wallet_53.png) + +When you go to the [ThreeFold Dashboard](https://old.dashboard.grid.tf), you will now see your newly added account. Click on it. + +![farming_tf_wallet_54](./img/farming_tf_wallet_54.png) + +In the Farm section, you can now see your farm. You have successfully moved the farm from the ThreeFold Connect app to the ThreeFold Dashboard. + +![farming_tf_wallet_55](./img/farming_tf_wallet_55.png) + +You can see here that the farming reward address is the same as before. + +![farming_tf_wallet_56](./img/farming_tf_wallet_56.png) + +That's it! You have successfully imported the farm from the ThreeFold Connect app to the ThreeFold Dashboard. + + + +## Import TF Dashboard Wallet to the TF Connect App Wallet + +Now that we've seen how to go from the TF Connect app to the ThreeFold Dashboard, we will now show how to go the other way around. 
This method is very simple. You will need your TF Dashboard seed phrase handy. + +Go to the Wallet section of the ThreeFold Connect app and click on the import button at the bottom right (in green). + +![farming_tf_wallet_57](./img/farming_tf_wallet_57.png) + +Then simply name your wallet and enter the TF Dashboard seed phrase. + +![farming_tf_wallet_58](./img/farming_tf_wallet_58.png) + +Then in the Wallet section, you will now see the wallet. + +![farming_tf_wallet_59](./img/farming_tf_wallet_59.png) + + + +## Send and Receive TFT between TF Connect Wallets + +To send and receive TFT between TF Connect wallets, go to *Wallet* and select the wallet you want to use. Remember that you must always send and receive TFT on the same chain, so choose either Stellar or TFChain. + +### Send TFT + +To send tokens, select *Send Coins* in the wallet section. To send TFT, you can scan the QR code of the address you wish to send tokens to. This will automatically enter the necessary information. Make sure to double-check that the information is correct to avoid any complications. Otherwise, you can simply enter the correct address in the section *To*. Choose the amount you want to send. Then click on *SEND TOKENS*. + +Note that, for such transactions, there is a maximum fee of 0.10 TFT on the Stellar blockchain, and a maximum fee of 0.01 TFT on the TFChain. This fee is taken from the amount you are sending; it is not deducted separately from your wallet. + +### Receive TFT + +To receive tokens, select *Receive Coins* in the wallet section. To receive TFT, you can generate a QR code to share with the person who wants to send you tokens. Otherwise, the sender can simply use your Stellar or TFChain address and send you TFT. + +To generate the QR code, select the chain you want to use, Stellar or TFChain, enter the amount and the message if needed, and click on *GENERATE QR CODE*. Note that there is no message option for TFChain, only for Stellar. 
This will generate a QR Code that can be scanned by other devices. + + +### Send TFT to Hardware Wallet on Stellar Blockchain + +Before sending TFT to a hardware wallet, make sure the hardware wallet has a TFT trustline on the Stellar Blockchain. For more information, read [this section](./hardware_wallet.md). + +## Disclaimer + +> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. +> +> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](../../../knowledge_base/legal/disclaimer.md) and seek advice from a qualified financial professional if needed. diff --git a/collections/threefold_token/storing_tft/trustwallet.md b/collections/threefold_token/storing_tft/trustwallet.md new file mode 100644 index 0000000..bbd044d --- /dev/null +++ b/collections/threefold_token/storing_tft/trustwallet.md @@ -0,0 +1,84 @@ +

+# Store TFT-BSC on Trust Wallet (BSC)

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [How to Store TFT on Trust Wallet (BSC)](#how-to-store-tft-on-trust-wallet-bsc) + - [Download and Create a Trust Wallet Account](#download-and-create-a-trust-wallet-account) + - [Add TFT to Trust Wallet](#add-tft-to-trust-wallet) +- [Important Notice](#important-notice) +- [Disclaimer](#disclaimer) +*** +## Introduction + +[Trust Wallet](https://trustwallet.com/) is a widely used, free, and non-custodial mobile wallet designed for storing cryptocurrencies and non-fungible tokens (NFTs). It operates as a hot wallet, meaning it is connected directly to the internet, and is available on both iOS and Android devices. Since its acquisition by Binance in 2018, Trust Wallet has become Binance's official decentralized wallet. It supports a vast array of digital assets (over 4.5 million) and is compatible with more than 65 blockchains. Trusted by millions of users, Trust Wallet stands out for its seamless integration with platforms on the Binance Smart Chain (BEP-20), including popular ones like PancakeSwap. +*** +## How to Store TFT on Trust Wallet (BSC) + +### Download and Create a Trust Wallet Account + +Once you have downloaded the app (iOS / Android) via [https://trustwallet.com/](https://trustwallet.com/), select “Create a new wallet” and press “Continue” to accept the terms. Get a pencil and paper ready because the warning Trust Wallet gives you is real: If you lose your recovery words (sometimes also known as a seed phrase or recovery phrase) you may lose access to your wallet and the crypto within it forever. + +![](./img/trust_create.png) + +A new screen will appear, prompting you to write down your recovery phrase. It is important that you write it down manually and keep it in a safe, private place. Keeping the words in digital form is less secure and not recommended. + +![](./img/trust_backup.png) + +The recovery phrase for Trust Wallet consists of 12 words. 
These words will be used in case you lose access to your wallet, and they are the only way to regain access to the wallet. So we’ll say it again: Keep them in a safe, private place. + +![](./img/trust_recover.png) + +To verify that you backed up your recovery phrase, Trust Wallet will prompt you to write the words in sequential order as you’ve received them. + +![](./img/trust_verify.png) + +You will get a screen stating, “Your wallet was successfully created.” + +![](./img/trust_created.png) + +### Add TFT to Trust Wallet + +To add TFT to your Trust Wallet, you need to configure it manually as a 'custom token'. In Trust Wallet, a custom token refers to a token that is not natively supported or pre-listed on the wallet's default token list. + +On the 'Tokens' page, click on the 'settings' icon in the upper right corner to start adding a custom token. + +![](./img/threefold__trustwallet_overview.jpg) + +Search for TFT, and you will see a “No Asset Found” message with an *Add Custom Token* button. Click on the 'Add Custom Token' button. You will be directed to the Custom Token page. + +![](./img/trust_notfound.png) + +On the Custom Token page, configure TFT in the wallet by completing the following info: + +- Network: Smart Chain +- **Contract Address: 0x8f0FB159380176D324542b3a7933F0C2Fd0c2bbf** +- Name: TFT +- Symbol: TFT +- Decimals: 7 + +![](./img/threefold__trustwallet_tft_config.jpg) + +Once this configuration is done, your TFT wallet is ready. + +![](./img/threefold__trustwallet_tft_added.jpg) +*** +## Important Notice + +To deposit TFT tokens to your Trust BSC wallet, **you can only initiate a transfer or swap from any other wallet or exchange platform that operates on the Binance Smart Chain (BSC) network.** Ensure that the platform you are using is on BSC to avoid the risk of losing tokens. 
+ +For example, you cannot transfer TFT tokens directly from the TF Connect app to MetaMask, because TFT in the TF Connect wallet operates on the Stellar network, while TFT on MetaMask lives on the Binance Smart Chain (BSC) network. + +But don't worry! You can still swap your Stellar TFT into BSC TFT and vice versa by bridging them using our [Stellar-BSC Bridge](https://bridge.bsc.threefold.io/). See the tutorial [here](../tft_bridges/bsc_stellar_bridge.md). + +You can also buy and swap TFT on BSC-supported exchanges by connecting your Trust Wallet to platforms like [Pancake Swap](https://pancakeswap.finance/). See the tutorial [here](../buy_sell_tft/pancakeswap.md). +*** +## Disclaimer + +The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. + +**The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](https://library.threefold.me/info/legal/#/legal__disclaimer) and seek advice from a qualified financial professional if needed. + + + + diff --git a/collections/threefold_token/tft_bridges/bsc_stellar_bridge.md b/collections/threefold_token/tft_bridges/bsc_stellar_bridge.md new file mode 100644 index 0000000..88f5152 --- /dev/null +++ b/collections/threefold_token/tft_bridges/bsc_stellar_bridge.md @@ -0,0 +1,116 @@ +

+# BSC-Stellar Bridge

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [How to Use the BSC-Stellar Bridge](#how-to-use-the-bsc-stellar-bridge) + - [Bridge from Stellar to BSC](#bridge-from-stellar-to-bsc) + - [Bridge from BSC to Stellar](#bridge-from-bsc-to-stellar) +- [Setting Up TFT on Metamask](#setting-up-tft-on-metamask) +- [Bridge Fees](#bridge-fees) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +We present here the main steps to use the BSC-Stellar Bridge. + + + +## How to Use the BSC-Stellar Bridge + +To use the BSC-Stellar Bridge, follow the steps below. If this is your first time with MetaMask and BSC, read the section [Setting up TFT on Metamask](#setting-up-tft-on-metamask). + +It will cost 100 TFT* to bridge from Stellar to BSC, and 1 TFT to bridge from BSC to Stellar. There is also a fixed fee of 0.01 TFT when using the Stellar blockchain. Those fees are taken from the total of what you are bridging. + +*For example, if you bridge 200 TFT from Stellar to BSC, you will receive 100 TFT. + +> Note: The bridge will process deposits/withdrawals within 48 hours. + + +### Bridge from Stellar to BSC + +**Pre-requisites:** + +* Metamask account +* TF Connect App+Wallet +* TFT on Stellar Blockchain + +**Steps** + +1. Go to the BSC-Stellar [Bridge website](https://bridge.bsc.threefold.io/). +2. Connect your MetaMask Wallet. +3. Sign in with MetaMask. +4. Choose the option *Deposit from Stellar*. +5. Agree to the *ThreeFold Terms*. +6. Read and tick the box of the *Warning Message*. +7. On your phone, open up your ThreeFold Connect App and go to the wallet section. +8. Select the option *Send*. +9. Select the *Stellar* chain. +10. Click on the *Scan QR Code* button. The QR code option automatically fills in your *MESSAGE*. +11. Scan the QR code that appears on the Bridge window (or write the information manually). +12. Make sure the *MESSAGE* is correctly entered. +13. Press *Send Tokens*. +14. Press *Confirm*. 
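The fee arithmetic described above is a simple deduction from the bridged amount. As a minimal sketch (the constant names are illustrative; the flat fees are the values quoted in this guide, and the extra 0.01 TFT Stellar network fee is ignored here):

```python
# Bridge fees are deducted from the bridged amount itself,
# not charged separately to your wallet.
STELLAR_TO_BSC_FEE_TFT = 100
BSC_TO_STELLAR_FEE_TFT = 1

def amount_received(amount_tft: float, fee_tft: float) -> float:
    """Return what arrives on the destination chain after the flat bridge fee."""
    if amount_tft <= fee_tft:
        raise ValueError("amount must exceed the bridge fee")
    return amount_tft - fee_tft

# Bridging 200 TFT from Stellar to BSC yields 100 TFT:
print(amount_received(200, STELLAR_TO_BSC_FEE_TFT))  # 100
```

This also makes clear why small Stellar-to-BSC transfers are uneconomical: anything at or below 100 TFT is consumed entirely by the fee.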
+ +In this method, you use the Bridge directly. Thus, it is normal if you do not see your standard MetaMask address. Your MetaMask address is on BSC, while the TFT you are sending is on Stellar. You are sending TFT to the bridge's Stellar address, and the bridge then sends the TFT to your MetaMask wallet. + + + +### Bridge from BSC to Stellar + +**Pre-requisites:** + +* Metamask account +* TF Connect App+Wallet +* BNB for gas fees +* TFT + +**Steps** + +1. Go to the BSC-Stellar [Bridge website](https://bridge.bsc.threefold.io/). +2. Connect your MetaMask Wallet. +3. Sign in with MetaMask. +4. Choose the option *Deposit from BSC*. +5. Agree to the *ThreeFold Terms*. +6. Read and tick the box of the *Warning Message*. +7. On your phone, open up your ThreeFold Connect App and go to the wallet section. +8. Copy your Stellar address. +9. Paste your Stellar address in the proper field on the BSC-Stellar Bridge. +10. Enter the amount of TFT you want to bridge. +11. Click on *Withdraw*. +12. Follow the instructions on your Metamask Wallet. + +**General Tips** + +* It's a good idea to start with a small amount the first time. +* The process is usually quick, but it can take up to 48h. If in doubt, contact [TF Support](https://threefoldfaq.crisp.help/en/). +* Going from Stellar to BSC costs 100 TFT. +* Going from BSC to Stellar costs 1 TFT. +* There is also a fixed fee of 0.01 TFT when using the Stellar Blockchain. +* Gas fees on BSC are usually around 5-20 gwei. +* You can try the bridge later if gas fees are high at the moment of your transaction. + + + +## Setting Up TFT on Metamask + +* Download Metamask [here](https://metamask.io/download/). Then, install the Metamask extension in your local browser. +* Create a Metamask account. +* Switch the network to `Binance Chain`. 
You will have to create a new network with the following information: + * Mainnet + * Network Name: Smart Chain + * New RPC URL: https://bsc-dataseed.binance.org/ + * ChainID: 56 + * Symbol: BNB + * Block Explorer URL: [https://bscscan.com](https://bscscan.com/) +* Add TFT token in Metamask -> custom token -> contract address = `0x8f0FB159380176D324542b3a7933F0C2Fd0c2bbf` + +## Bridge Fees + +To learn more about the bridge fees, read [this documentation](../transaction_fees.md). + +## Questions and Feedback + +If you have any questions, feel free to write a post on the [Threefold Forum](https://forum.threefold.io/). diff --git a/collections/threefold_token/tft_bridges/bsc_stellar_bridge_verification.md b/collections/threefold_token/tft_bridges/bsc_stellar_bridge_verification.md new file mode 100644 index 0000000..92948a3 --- /dev/null +++ b/collections/threefold_token/tft_bridges/bsc_stellar_bridge_verification.md @@ -0,0 +1,98 @@ +

+# BSC-Stellar Bridge Verification

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [From Stellar to BSC](#from-stellar-to-bsc) +- [From BSC to Stellar](#from-bsc-to-stellar) +- [Conclusion](#conclusion) + +*** + +## Introduction + +In this guide, we show how to verify transactions on the BSC-Stellar bridge. + +When using the TFT bridge to Binance Chain (BSC), it's not simple to verify that tokens arrived at the destination wallet. The core reason is that it's not a regular token transfer, and so it doesn't show up that way in blockchain explorers. + +Instead, the result of using the bridge is a contract call that actually mints wrapped TFT on Binance Chain. The corresponding tokens are vaulted on Stellar, and when the bridge is used in the opposite direction, TFT on Binance Chain are burnt and then released on Stellar. Thus, the total number of TFT in circulation is constant throughout these operations. + +What we can do, instead of looking at token transfers, is to look for the mint events themselves. By piecing together data from a few different sources, we can verify that tokens sent to the bridge address on Stellar indeed arrived at their destination on Binance Chain. + +## From Stellar to BSC + +We start with a bridge example going from Stellar to BSC. For this tutorial, we'll use an example transaction found by looking at the transaction history from the [bridge wallet](https://stellar.expert/explorer/public/account/GBFFWXWBZDILJJAMSINHPJEUJKB3H4UYXRWNB4COYQAF7UUQSWSBUXW5). This wallet handles both directions of bridging. In our case, we want to look for an inbound transaction. Here's an example: + +![](./img/bsc_stellar_picture_1.png) + +The first thing to do is decode the destination wallet address on Binance Chain, which is contained in the memo we see here on Stellar. 
It's encoded in base 64 and we can convert back to the original hex using a tool like this [Base64 to Hex Converter](https://base64.guru/converter/decode/hex): + +![](./img/bsc_stellar_picture_2.png) + +The output is the destination address on Binance Chain. Since we usually write hex values with a leading `0x`, the full address in the normal format is `0x64df465bbcee5db45131e9406662818e8ba34fc0`. + +The other thing to note is the date and time of the original Stellar transaction. There are sometimes delays on the bridge, but we know that the outbound transaction on Binance Chain will always happen after the inbound transaction on Stellar. In this case, we are looking at the most recent transaction on the Stellar side of the bridge, so we can just look for the most recent transaction on the Binance side too. + +To do that, we'll go to the [Bitquery explorer](https://explorer.bitquery.io/bsc) for BSC. We're looking for the token contract for TFT on Binance Chain, which you can find in our documentation: `0x8f0fb159380176d324542b3a7933f0c2fd0c2bbf`. + +On the [contract page](https://explorer.bitquery.io/bsc/token/0x8f0fb159380176d324542b3a7933f0c2fd0c2bbf), click **Events**: + +![](./img/bsc_stellar_picture_3.png) + +Then on the row **Mint** events, click on the icon aligned with the **Event Count** column: + +![](./img/bsc_stellar_picture_4.png) + +We then arrive at [this page](https://explorer.bitquery.io/bsc/txs/events?contract=0x8f0fb159380176d324542b3a7933f0c2fd0c2bbf&event=85a66b9141978db9980f7e0ce3b468cebf4f7999f32b23091c5c03e798b1ba7a): + +![](./img/bsc_stellar_picture_5.png) + +You can use the **Date range** selector here to look for events in the past. In this case, we'll just look for the latest one, since that's what we're using for our example. 
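Two small computations recur in this verification: decoding the base64 memo into hex, and shifting TFT's 7 decimal places when reading raw on-chain amounts. A minimal Python sketch, using the example address decoded above (function names are illustrative):

```python
import base64

# A bridge memo on Stellar is the base64 encoding of the raw bytes of
# the destination: a 20-byte BSC address (or a 32-byte tx hash in the
# BSC-to-Stellar direction).
def memo_to_hex(memo_b64: str) -> str:
    return "0x" + base64.b64decode(memo_b64).hex()

# TFT uses 7 decimal places, so raw on-chain amounts are divided by 10**7.
def raw_to_tft(raw_amount: int) -> float:
    return raw_amount / 10**7

# Round-trip the example destination address from this guide:
addr_hex = "64df465bbcee5db45131e9406662818e8ba34fc0"
memo = base64.b64encode(bytes.fromhex(addr_hex)).decode()
print(memo_to_hex(memo))           # 0x64df465bbcee5db45131e9406662818e8ba34fc0
print(raw_to_tft(25_000_000_000))  # 2500.0
```

This is the same operation the online converter performs; doing it locally avoids pasting wallet data into third-party sites.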
Click the transaction link and then copy the transaction hash from the next page: + +![](./img/bsc_stellar_picture_6.png) + +To get a look into the contract call, we switch over to [BscScan](https://bscscan.com/) at this point for a better view. Search for the transaction hash and then select the event log. We will then see the output address and the amount of TFT minted in the data below: + +![](./img/bsc_stellar_picture_7.png) + +We can see that the address matches the one we decoded from the Stellar memo. As for the **tokens** amount, we need to account for the fact that TFT uses 7 decimal places. For this reason, we move the decimal place by dividing by 1e7 (i.e. 1x10⁷): + +![](./img/bsc_stellar_picture_8.png) + +The original transaction on Stellar was for 2600 TFT, and the output after subtracting the 100 TFT bridge fee is 2500 TFT. + +## From BSC to Stellar + +Now, we will see a bridge example going from BSC to Stellar. This time, we start at the BscScan explorer. + +Here is an example bridge transaction, as seen from the account transactions view, which is the default view if you search for a wallet address: + +![](./img/BSC%20to%20Stellar1.jpeg) + +We can identify it because it's using the **Withdraw** method in a transaction to the TFT contract address on BSC. + +If we open the [Transaction Details page](https://bscscan.com/tx/0xae2a9b5cdad652ecb1e6252ee44a7f0e3c5fc9cdf1df9fddff3b0c100c4b3cb5) by clicking on the transaction hash and switch to the **Logs** view, we can see more details: + +![](./img/BSC%20to%20Stellar2.png) + +In particular, this shows us the destination address on Stellar and the TFT amount. To get the decimal form, we once again divide by 1e7 (i.e. 1x10⁷). + +Back on StellarExpert, we can find a transaction on the same date just shortly after the transaction on BSC, for the same amount of TFT minus the 1 TFT bridge fee. 
It originates from the bridge address on Stellar and the destination is the address we see in the contract call above: + +![](./img/BSC%20to%20Stellar3.png) + +As a final step, we double-check that the transaction we see on Stellar is actually the result of the bridge interaction we saw on BSC. It's possible, after all, that the user has sent multiple transactions with the same amount. To do this, we look at the memo on the Stellar transaction. As above, we need to convert from base 64 to hex. To do so, we can once again use the [Base64 to HEX Converter](https://base64.guru/converter/decode/hex): + +![](./img/BSC%20to%20Stellar4.png) + +If the output hex doesn't already look familiar, you can compare it to the transaction hash from above, while remembering that `0x` is just a formatting convention indicating that hex data follows. Indeed, we can even search it on BscScan, to come full circle back to the transaction details page we saw before. + +We have made a direct link between the use of the bridge contract on BSC and the resulting payment from the bridge on Stellar. + +## Conclusion + +In this guide, we covered how to verify bridge transactions going from BSC to Stellar and from Stellar to BSC. + +In the world of public blockchains, all data is recorded and accessible, but sometimes it takes some investigation to find what we are looking for. + +If you have any questions, you can ask the ThreeFold community for help on the [ThreeFold Forum](http://forum.threefold.io/) or on the [ThreeFold Grid Tester Community](https://t.me/threefoldtesting) on Telegram. 
\ No newline at end of file diff --git a/collections/threefold_token/tft_bridges/img/BSC to Stellar1.jpeg b/collections/threefold_token/tft_bridges/img/BSC to Stellar1.jpeg new file mode 100644 index 0000000..4a8d7ab Binary files /dev/null and b/collections/threefold_token/tft_bridges/img/BSC to Stellar1.jpeg differ diff --git a/collections/threefold_token/tft_bridges/img/BSC to Stellar2.png b/collections/threefold_token/tft_bridges/img/BSC to Stellar2.png new file mode 100644 index 0000000..fe848ed Binary files /dev/null and b/collections/threefold_token/tft_bridges/img/BSC to Stellar2.png differ diff --git a/collections/threefold_token/tft_bridges/img/BSC to Stellar3.png b/collections/threefold_token/tft_bridges/img/BSC to Stellar3.png new file mode 100644 index 0000000..ab0c2c3 Binary files /dev/null and b/collections/threefold_token/tft_bridges/img/BSC to Stellar3.png differ diff --git a/collections/threefold_token/tft_bridges/img/BSC to Stellar4.png b/collections/threefold_token/tft_bridges/img/BSC to Stellar4.png new file mode 100644 index 0000000..7720da1 Binary files /dev/null and b/collections/threefold_token/tft_bridges/img/BSC to Stellar4.png differ diff --git a/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_1.png b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_1.png new file mode 100644 index 0000000..a429ecd Binary files /dev/null and b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_1.png differ diff --git a/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_2.png b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_2.png new file mode 100644 index 0000000..3594fd2 Binary files /dev/null and b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_2.png differ diff --git a/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_3.png b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_3.png new file mode 100644 index 0000000..4034630 Binary 
files /dev/null and b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_3.png differ diff --git a/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_4.png b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_4.png new file mode 100644 index 0000000..e0b9e59 Binary files /dev/null and b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_4.png differ diff --git a/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_5.png b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_5.png new file mode 100644 index 0000000..19706aa Binary files /dev/null and b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_5.png differ diff --git a/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_6.png b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_6.png new file mode 100644 index 0000000..3400567 Binary files /dev/null and b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_6.png differ diff --git a/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_7.png b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_7.png new file mode 100644 index 0000000..b53b23f Binary files /dev/null and b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_7.png differ diff --git a/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_8.png b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_8.png new file mode 100644 index 0000000..b0297fe Binary files /dev/null and b/collections/threefold_token/tft_bridges/img/bsc_stellar_picture_8.png differ diff --git a/collections/threefold_token/tft_bridges/img/tft_bridges_diagram.png b/collections/threefold_token/tft_bridges/img/tft_bridges_diagram.png new file mode 100644 index 0000000..b2d666f Binary files /dev/null and b/collections/threefold_token/tft_bridges/img/tft_bridges_diagram.png differ diff --git 
a/collections/threefold_token/tft_bridges/tfchain_stellar_bridge.md b/collections/threefold_token/tft_bridges/tfchain_stellar_bridge.md new file mode 100644 index 0000000..a09fd29 --- /dev/null +++ b/collections/threefold_token/tft_bridges/tfchain_stellar_bridge.md @@ -0,0 +1,43 @@ +

+# TFChain-Stellar Bridges: Main Net and Test Net

+ +

+## Table of Contents

+ +- [Introduction](#introduction) +- [How to Use the TFChain-Stellar Bridge](#how-to-use-the-tfchain-stellar-bridge) +- [Bridge Fees](#bridge-fees) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +We present here the general steps to use the TFChain-Stellar Bridge. Note that the only difference between using the Main net or Test net TFChain-Stellar bridge lies in the ThreeFold Dashboard URL. + +Read the [Dashboard section](../../dashboard/tfchain/tf_token_bridge.md) for more information. + + + +## How to Use the TFChain-Stellar Bridge + +1. Go to the ThreeFold Dashboard + 1. [Main Net](https://dashboard.grid.tf/) + 2. [Test Net](https://dashboard.test.grid.tf/) +2. Go to **TFChain** -> **TF Token Bridge** +3. 2-Way Bridge: + * Transfer TFT from Stellar to TFChain + * Click on *Deposit* + * Transfer TFT from TFChain to Stellar + * Click on *Withdraw* + +Note: If you are on the ThreeFold Connect app, you can export your account to the Polkadot extension. Look at the section [Move Farm from the TF app to the TF Portal (polkadot.js)](../storing_tft/tf_connect_app.md#move-farm-from-the-tf-connect-app-to-the-tf-portal-polkadotjs). + + + +## Bridge Fees + +To learn more about the bridge fees, read [this documentation](../transaction_fees.md). + +## Questions and Feedback + +If you have any questions, feel free to write a post on the [Threefold Forum](https://forum.threefold.io/). + diff --git a/collections/threefold_token/tft_bridges/tft_bridges.md b/collections/threefold_token/tft_bridges/tft_bridges.md new file mode 100644 index 0000000..509c380 --- /dev/null +++ b/collections/threefold_token/tft_bridges/tft_bridges.md @@ -0,0 +1,69 @@ +

+# TFT Bridges

+ +

+## Table of Contents

+ +- [TFChain-Stellar Bridge](./tfchain_stellar_bridge.md) +- [BSC-Stellar Bridge](./bsc_stellar_bridge.md) + - [BSC-Stellar Bridge Verification](./bsc_stellar_bridge_verification.md) +- [Ethereum-Stellar Bridge](./tft_ethereum/tft_ethereum.md) +- [Bridge Fees](../transaction_fees.md) + +*** + +## Introduction + +The ThreeFold Token (TFT) exists on different chains. To transfer TFTs between chains, you can use different TFT Bridges. + +The following diagram shows the different bridges and ways to transfer ThreeFold Tokens (TFT) from one chain to another. + +> Note: You can click on a given bridge to access its related guide. + +```mermaid + +graph LR + A((TFChain-MainNet)) === id1(Stellar / TFChain MainNet Bridge) === B((Stellar Chain)); + C((TFChain-TestNet)) === id2(Stellar / TFChain TestNet Bridge) === B((Stellar Chain)); + B((Stellar Chain)) === id3(Stellar / BSC Bridge) === E((Binance Smart Chain)); + B((Stellar Chain)) === id4(Stellar / Eth Bridge) === D((Ethereum Chain)); + + click id1 "./tfchain_stellar_bridge.html" + click id2 "./tfchain_stellar_bridge.html" + click id3 "./bsc_stellar_bridge.html" + click id4 "./tft_ethereum/tft_ethereum.html" + +``` + +## Links + +The links to the bridges for TFT are the following: + +* Stellar-Ethereum Bridge + * This bridge is accessible at the following link: [https://bridge.eth.threefold.io/](https://bridge.eth.threefold.io/) + * Read [this guide](./tft_ethereum/tft_ethereum.md) for more information +* Stellar-BSC Bridge + * This bridge is accessible at the following link: [https://bridge.bsc.threefold.io/](https://bridge.bsc.threefold.io/) + * Read [this guide](./bsc_stellar_bridge.md) for more information +* The TFChain Main net Bridge + * This bridge is accessible on the ThreeFold Main Net Dashboard: [https://dashboard.grid.tf/](https://dashboard.grid.tf/). 
+ * Read [this guide](./tfchain_stellar_bridge.html) for more information +* The TFChain Test net Bridge + * This bridge is accessible on the ThreeFold Test Net Dashboard: [https://dashboard.test.grid.tf/](https://dashboard.test.grid.tf/). + * Read [this guide](./tfchain_stellar_bridge.html) for more information + +## Chains Functions + +The different bridges help you move your TFT and achieve different goals: + +* The TFChain-Stellar Bridge is used to go between the Stellar Chain and TF Chain for Main net and Test net. +* The BSC-Stellar Bridge is used to go between the Stellar Chain and Binance Smart Chain (BSC). +* The Stellar-Ethereum Bridge is used to go between the Stellar Chain and the Ethereum blockchain. + +As shown in the diagram, to go from BSC to TF Chain, or from TF Chain to BSC, you first need to use the BSC-Stellar bridge, then the Stellar-TFChain bridge. To go from the Ethereum blockchain to TFChain, you need to use the Ethereum-Stellar bridge, then the Stellar-TFChain bridge. + +BSC, Stellar and Ethereum can be used to buy and sell TFT, while TFChain can be used to deploy Dapps on the [ThreeFold Dashboard](https://dashboard.grid.tf). The TFT minting process happens on the Stellar blockchain. + +> Note: You should always start with a small amount the first time you try a bridge. + +## Bridge Details + +When you bridge TFT from Stellar to another chain, the TFT on Stellar is vaulted. When you bridge TFT back to Stellar, the TFT on the other chain is burned and the vaulted TFT is released.
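The vault-and-burn mechanism described under Bridge Details can be sketched as a small toy model. This is purely illustrative (the `ToyBridge` class and its methods are hypothetical, not part of any ThreeFold API): bridging out of Stellar locks TFT in a vault and mints the same amount on the target chain; bridging back burns the tokens on the target chain and releases the vaulted TFT, so the total circulating supply is conserved (fees ignored).

```python
# Toy model of the bridge mechanics described above (not a real API):
# Stellar -> other chain: TFT is vaulted on Stellar, minted on the target chain.
# Other chain -> Stellar: TFT is burned on the target chain, released from the vault.

class ToyBridge:
    def __init__(self, stellar_supply):
        self.balances = {"Stellar": stellar_supply, "BSC": 0, "Ethereum": 0}
        self.vault = 0  # TFT locked on Stellar while circulating elsewhere

    def bridge_out(self, chain, amount):
        """Vault TFT on Stellar and mint the same amount on `chain`."""
        assert self.balances["Stellar"] >= amount
        self.balances["Stellar"] -= amount
        self.vault += amount
        self.balances[chain] += amount

    def bridge_back(self, chain, amount):
        """Burn TFT on `chain` and release vaulted TFT on Stellar."""
        assert self.balances[chain] >= amount and self.vault >= amount
        self.balances[chain] -= amount
        self.vault -= amount
        self.balances["Stellar"] += amount

    def circulating(self):
        return sum(self.balances.values())

bridge = ToyBridge(1000)
bridge.bridge_out("BSC", 300)
bridge.bridge_back("BSC", 100)
# The vaulted TFT (200) exactly backs the TFT still circulating on BSC,
# and the total supply is unchanged.
print(bridge.balances, bridge.vault)  # {'Stellar': 800, 'BSC': 200, 'Ethereum': 0} 200
```

Note how the invariant `vault == sum of balances outside Stellar` holds after every operation; that is what lets the bridge guarantee a 1:1 peg across chains.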
\ No newline at end of file diff --git a/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_1.png b/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_1.png new file mode 100644 index 0000000..0db09d3 Binary files /dev/null and b/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_1.png differ diff --git a/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_2.png b/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_2.png new file mode 100644 index 0000000..b7b21f2 Binary files /dev/null and b/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_2.png differ diff --git a/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_3.png b/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_3.png new file mode 100644 index 0000000..85b01b2 Binary files /dev/null and b/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_3.png differ diff --git a/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_4.png b/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_4.png new file mode 100644 index 0000000..753c0dd Binary files /dev/null and b/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_4.png differ diff --git a/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_5.png b/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_5.png new file mode 100644 index 0000000..073a4a7 Binary files /dev/null and b/collections/threefold_token/tft_bridges/tft_ethereum/img/tft_on_ethereum_image_5.png differ diff --git a/collections/threefold_token/tft_bridges/tft_ethereum/tft_ethereum.md b/collections/threefold_token/tft_bridges/tft_ethereum/tft_ethereum.md new file mode 100644 index 0000000..def5d6a --- /dev/null +++ 
b/collections/threefold_token/tft_bridges/tft_ethereum/tft_ethereum.md @@ -0,0 +1,36 @@ +
<h1>Ethereum-Stellar Bridge</h1>
+
+<h2>Table of Contents</h2>
+ +- [Introduction](#introduction) +- [How to Use the Ethereum-Stellar Bridge](#how-to-use-the-ethereum-stellar-bridge) +- [Bridge Fees](#bridge-fees) +- [Questions and Feedback](#questions-and-feedback) + +*** + +## Introduction + +The TFT Stellar-Ethereum bridge links the Stellar and Ethereum blockchains, enabling the seamless transfer of TFT tokens between these two networks. With the bridge in place, TFT holders can convert their tokens from the Stellar network to the Ethereum network and vice versa, unlocking access to decentralized applications, smart contracts, and the wider Ethereum ecosystem. The bridge promotes liquidity, facilitates cross-chain transactions, and encourages collaboration between the Stellar and Ethereum communities. + + + +## How to Use the Ethereum-Stellar Bridge + +The easiest way to transfer TFT between Ethereum and Stellar is to use the [TFT Ethereum Bridge](https://bridge.eth.threefold.io). We present here the main steps to use this bridge. + +When you go to the [TFT Ethereum-Stellar bridge website](https://bridge.eth.threefold.io/), connect your Ethereum wallet. The bridge will then present a QR code which you scan with your Stellar wallet. This will populate a transaction with the bridge wallet as the destination and an encoded form of your Ethereum address as the memo. The bridge will scan the transaction, decode the Ethereum wallet address, and deliver newly minted TFT on Ethereum, minus the bridge fees. + +For the reverse operation, going from Ethereum to Stellar, there is a smart contract interaction that burns TFT on Ethereum while embedding your Stellar wallet address. The bridge will scan that transaction and release TFT from its vault wallet to the specified Stellar address, again minus the bridge fees.
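As an illustration of why an Ethereum address can travel inside a Stellar memo at all: a raw Ethereum address is 20 bytes, and its base64 form is 28 characters, which is exactly the maximum size of a Stellar text memo. The sketch below shows such an encoding and its reverse. This is an assumption for illustration only; the bridge website generates the actual memo for you, and the exact encoding it uses may differ.

```python
import base64

def eth_address_to_memo(eth_address: str) -> str:
    """Pack a 20-byte Ethereum address into a 28-character Stellar text memo.

    Illustrative sketch only: the real bridge builds the memo for you via
    its QR code, and its exact encoding may differ from this one.
    """
    raw = bytes.fromhex(eth_address.removeprefix("0x"))
    assert len(raw) == 20, "an Ethereum address is 20 bytes"
    memo = base64.b64encode(raw).decode("ascii")
    assert len(memo) == 28  # exactly the Stellar text-memo size limit
    return memo

def memo_to_eth_address(memo: str) -> str:
    """The reverse step the bridge performs when scanning the Stellar transaction."""
    return "0x" + base64.b64decode(memo).hex()

# TFT contract address from this page, used here just as a sample 20-byte value:
addr = "0x395E925834996e558bdeC77CD648435d620AfB5b"
memo = eth_address_to_memo(addr)
assert memo_to_eth_address(memo).lower() == addr.lower()
```

The round trip recovers the original address (modulo hex case), which is all the bridge needs to know where to deliver the minted TFT.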
+ +Note that the contract address for TFT on Ethereum is the following: `0x395E925834996e558bdeC77CD648435d620AfB5b`. + +To see the ThreeFold Token on Etherscan, check [this link](https://etherscan.io/token/0x395E925834996e558bdeC77CD648435d620AfB5b). + +## Bridge Fees + +To learn more about the bridge fees, read [this documentation](../../transaction_fees.md). + +## Questions and Feedback + +If you have any questions, feel free to write a post on the [ThreeFold Forum](https://forum.threefold.io/). \ No newline at end of file diff --git a/collections/threefold_token/tft_intro.md b/collections/threefold_token/tft_intro.md new file mode 100644 index 0000000..2472a94 --- /dev/null +++ b/collections/threefold_token/tft_intro.md @@ -0,0 +1,13 @@ +# An Introduction to the ThreeFold Token (TFT) + +ThreeFold tokens, or TFTs, are exclusively generated when new capacity is added to the TF Grid. There is no centralized issuer; tokens are not created out of thin air. + +While the ThreeFold Grid can expand, a maximum of 4 billion TFTs can ever be in circulation. This limit ensures stability of value and incentivization for all stakeholders. + +TFT lives on the Stellar blockchain. TFT holders benefit from a large ecosystem of proven wallets and mediums of exchange. + +Thanks to Stellar technology, TFT transactions and smart contracts are powered by one of the most energy-efficient blockchains available. Furthermore, TFT is the medium of exchange on the greenest internet network in the world. The market for farming, cultivating and trading TFT is open to all. + +Anyone with an internet connection, a power supply and the necessary hardware can become a Farmer or trade ThreeFold tokens (TFT). + +By farming, buying, holding, and utilizing ThreeFold Tokens, you are actively supporting the expansion of the ThreeFold Grid and its use cases, creating a more sustainable, fair, and equally accessible Internet.
\ No newline at end of file diff --git a/collections/threefold_token/threefold_token.md b/collections/threefold_token/threefold_token.md new file mode 100644 index 0000000..de0079f --- /dev/null +++ b/collections/threefold_token/threefold_token.md @@ -0,0 +1,95 @@ +
<h1>ThreeFold Token</h1>
+
+<h2>Table of Contents</h2>
+ +- [Introduction to TFT](#introduction-to-tft) +- [Chains with TFT](#chains-with-tft) +- [TFT Contract Addresses on Chains](#tft-contract-addresses-on-chains) +- [Bridges Between Chains](#bridges-between-chains) +- [Storing TFT](#storing-tft) +- [Buy and Sell TFT](#buy-and-sell-tft) +- [Liquidity Provider (LP)](#liquidity-provider-lp) +- [Transaction Fees](#transaction-fees) +- [Deploy on the TFGrid with TFT](#deploy-on-the-tfgrid-with-tft) +- [Disclaimer](#disclaimer) + +*** + +## Introduction to TFT + +The ThreeFold Token (TFT) is a decentralized digital currency used to buy autonomous and decentralized Internet services (compute, storage, and applications) on the ThreeFold Grid. + +ThreeFold Tokens are generated through a process called “Farming”. Farming happens when active internet capacity is added to the ThreeFold Grid. Independent farmers earn ThreeFold Tokens (TFT) by providing neutral and decentralized internet capacity, thus expanding the usable TF Grid. Therefore, no central entity controls the internet. + +> [Get an overview of the ThreeFold token](../../knowledge_base/about/token_overview/token_overview.md) + +## Chains with TFT + +TFT lives on 4 different chains: TFChain, Stellar chain, Ethereum chain and Binance Smart Chain. + +- TFT is minted on the Stellar chain. +- TFT is used to deploy workloads on TFChain. +- TFT can be transacted on the Ethereum chain, Binance Smart Chain and Stellar chain.
+ +## TFT Contract Addresses on Chains + +The TFT contract addresses on the different chains are the following: + +- [TFT Contract address on Stellar](https://stellarchain.io/assets/TFT-GBOVQKJYHXRR3DX6NOX2RRYFRCUMSADGDESTDNBDS6CDVLGVESRTAC47) + - ``` + TFT-GBOVQKJYHXRR3DX6NOX2RRYFRCUMSADGDESTDNBDS6CDVLGVESRTAC47 + ``` +- [TFT Contract address on Ethereum](https://etherscan.io/token/0x395E925834996e558bdeC77CD648435d620AfB5b) + - ``` + 0x395E925834996e558bdeC77CD648435d620AfB5b + ``` +- [TFT Contract address on BSC](https://bscscan.com/address/0x8f0FB159380176D324542b3a7933F0C2Fd0c2bbf) + - ``` + 0x8f0FB159380176D324542b3a7933F0C2Fd0c2bbf + ``` + +## Bridges Between Chains + +[Bridges](./tft_bridges/tft_bridges.md) are available to easily navigate between the chains. + +- [TFChain-Stellar Bridge](./tft_bridges/tfchain_stellar_bridge.html) +- [BSC-Stellar Bridge](./tft_bridges/bsc_stellar_bridge.html) +- [Ethereum-Stellar Bridge](./tft_bridges/tft_ethereum/tft_ethereum.html) + +## Storing TFT + +There are many ways to store TFT. The [TF Connect app wallet](./storing_tft/tf_connect_app.md) and a [hardware wallet](./storing_tft/hardware_wallet.md) are two of the many possibilities. + +> [Easily Store TFT](./storing_tft/storing_tft.md) + +## Buy and Sell TFT + +You can [buy and sell TFT](./buy_sell_tft/buy_sell_tft.md) with cryptocurrencies on Stellar Chain, Ethereum Chain and BNB Smart Chain. + +Using Lobstr, you can buy TFT with fiat or crypto in no time: + +> [Get TFT: Quick Guide](./buy_sell_tft/tft_lobstr/tft_lobstr_short_guide.md) + +## Liquidity Provider (LP) + +A liquidity provider (LP) is an individual or entity that contributes liquidity to a decentralized exchange or automated market maker (AMM) platform. + +> [Become a Liquidity Provider](./liquidity/liquidity_readme.md) + +## Transaction Fees + +Each time a transaction is done on a chain, transaction fees apply.
+ +> Learn about [Transaction Fees](./transaction_fees.md) + +## Deploy on the TFGrid with TFT + +You can do almost anything on the TFGrid: as long as you're doing Linux stuff, ZOS has your back! + +> [Get Started on the TFGrid](../system_administrators/getstarted/tfgrid3_getstarted.md) + +## Disclaimer + +> The information provided in this tutorial or any related discussion is not intended as investment advice. The purpose is to provide educational and informational content only. Investing in cryptocurrencies or any other assets carries inherent risks, and it is crucial to conduct your own research and exercise caution before making any investment decisions. +> +> **The ThreeFold Token (TFT)** is not to be considered as a traditional investment instrument. The value of cryptocurrencies can be volatile, and there are no guarantees of profits or returns. Always be aware of the risks involved and make informed choices based on your own assessment and understanding. We strongly encourage you to read our [full disclaimer](../../knowledge_base/legal/disclaimer.md) and seek advice from a qualified financial professional if needed. \ No newline at end of file diff --git a/collections/threefold_token/transaction_fees.md b/collections/threefold_token/transaction_fees.md new file mode 100644 index 0000000..0684559 --- /dev/null +++ b/collections/threefold_token/transaction_fees.md @@ -0,0 +1,56 @@ +
<h1>TFT Transaction Fees</h1>
+
+<h2>Table of Contents</h2>
+ +- [Introduction](#introduction) +- [Bridge Fees](#bridge-fees) +- [Chain Fees](#chain-fees) +- [Notes](#notes) + +*** + +## Introduction + +Here are the TFT transaction fees. Note that these values are subject to change. We will do our best to update this section of the manual as needed. + +## Bridge Fees + +The following are the fees for each type of bridge transfer. + +- BSC-Stellar bridge + - From Stellar to BSC + - 100 TFT + - From BSC to Stellar + - 1 TFT +- Eth-Stellar bridge + - From Stellar to Eth + - 2000 TFT + - From Eth to Stellar + - 1 TFT +- TFChain-Stellar bridge + - From Stellar to TFChain + - 1 TFT + - From TFChain to Stellar + - 1 TFT + +## Chain Fees + +The following are the transaction fees for each chain. Every time a transaction is executed on a given chain, a fee is charged for the on-chain operation. + +- Stellar chain + - 0.01 TFT +- TFChain + - 0.001 TFT +- Ethereum chain + - Fees are the gas price + - Consult the Ethereum official documentation for more details +- BSC + - Fees are the gas price + - Consult the BSC official documentation for more details + +## Notes + +Here are some notes to take into account when doing TFT transfers: + +* The fees paid directly by the users will be shown in the user's wallet at transaction time, not on the bridge page. +* The bridge fees can vary based on current on-chain gas prices. The current fee will always be shown on the bridge page before a transaction is initiated.
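As a quick sanity check on the flat bridge fees listed above, here is a small helper that computes how much TFT arrives after a bridge transfer. This is a hypothetical snippet, not part of any ThreeFold tooling, and it hard-codes the fee values from the table above, which can change; the bridge page always shows the current fee before a transaction is initiated.

```python
# Flat bridge fees from the table above, in TFT (subject to change):
BRIDGE_FEES = {
    ("Stellar", "BSC"): 100,
    ("BSC", "Stellar"): 1,
    ("Stellar", "Ethereum"): 2000,
    ("Ethereum", "Stellar"): 1,
    ("Stellar", "TFChain"): 1,
    ("TFChain", "Stellar"): 1,
}

def net_received(amount: float, source: str, destination: str) -> float:
    """TFT delivered on the destination chain after the flat bridge fee."""
    fee = BRIDGE_FEES[(source, destination)]
    if amount <= fee:
        raise ValueError("transfer amount must exceed the bridge fee")
    return amount - fee

# Sending 500 TFT from Stellar to BSC delivers 400 TFT:
print(net_received(500, "Stellar", "BSC"))  # 400
```

The helper also makes the asymmetry of the table obvious: bridging toward Stellar is cheap (1 TFT), while bridging away from Stellar toward Ethereum carries the largest flat fee (2000 TFT), which is why small test transfers are best done in the cheap direction first.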
\ No newline at end of file diff --git a/heroscript/manual/book_collections.md b/heroscript/manual/book_collections.md index 38a27b0..4412742 100644 --- a/heroscript/manual/book_collections.md +++ b/heroscript/manual/book_collections.md @@ -8,6 +8,29 @@ !!doctree.add url:'https://git.ourworld.tf/tfgrid/info_tfgrid/src/branch/development_manual/collections/dashboard' +!!doctree.add + url:'https://git.ourworld.tf/tfgrid/info_tfgrid/src/branch/development_manual/collections/developers' +!!doctree.add + url:'https://git.ourworld.tf/tfgrid/info_tfgrid/src/branch/development_manual/collections/farmers' +!!doctree.add + url:'https://git.ourworld.tf/tfgrid/info_tfgrid/src/branch/development_manual/collections/system_administrators' +!!doctree.add + url:'https://git.ourworld.tf/tfgrid/info_tfgrid/src/branch/development_manual/collections/threefold_token' +!!doctree.add + url:'https://git.ourworld.tf/tfgrid/info_tfgrid/src/branch/development_manual/collections/faq' + +!!doctree.add + url:'https://git.ourworld.tf/tfgrid/info_tfgrid/src/branch/development_manual/collections/about' +!!doctree.add + url:'https://git.ourworld.tf/tfgrid/info_tfgrid/src/branch/development_manual/collections/technology' +!!doctree.add + url:'https://git.ourworld.tf/tfgrid/info_tfgrid/src/branch/development_manual/collections/manual_legal' +!!doctree.add + url:'https://git.ourworld.tf/tfgrid/info_tfgrid/src/branch/development_manual/collections/farming' +!!doctree.add + url:'https://git.ourworld.tf/tfgrid/info_tfgrid/src/branch/development_manual/collections/cloud' +!!doctree.add + url:'https://git.ourworld.tf/tfgrid/info_tfgrid/src/branch/development_manual/collections/collaboration' ```