- [Master and Workers tabs](#master-and-workers-tabs)
- [Kubeconfig](#kubeconfig)
- [Manage Workers](#manage-workers)
***
## Introduction
Kubernetes is the standard container orchestration tool.
On the TF grid, Kubernetes clusters can be deployed out of the box. We have implemented [K3S](https://k3s.io/), a fully featured Kubernetes distribution that uses roughly half the memory footprint of standard Kubernetes. It is packaged as a single binary and kept lightweight so it can run workloads in resource-constrained locations (e.g. IoT, edge, and ARM workloads).
- `Cluster Token`: Used to authenticate your worker nodes with the master node. You can use the auto-generated token or type your own.
## Master and Workers tabs
![](./img/solutions_k8s_master.png)
![](./img/solutions_k8s_workers.png)
> Currently, we only support "single-master, multi-worker" k8s clusters. You can add more worker nodes at any time by clicking the **+** in the ***Workers*** tab.
## Kubeconfig
Once the cluster is ready, you can SSH into it using `ssh root@IP`.
Once connected via SSH, you can execute commands on the cluster, e.g. `kubectl get nodes`. The kubeconfig is located at `/root/.kube/config`.
> If it doesn't exist at `/root/.kube/config`, it may be located at `/etc/rancher/k3s/k3s.yaml`.
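
As a minimal sketch (assuming the cluster's planetary or public IP is reachable from your workstation, and using `CLUSTER_IP` and the local file name `~/.kube/tfgrid-k3s.yaml` purely as illustrative placeholders), you could copy the kubeconfig to your own machine like this:

```bash
# Copy the kubeconfig from the master node to your local machine.
# Replace CLUSTER_IP with your cluster's planetary or public IP.
scp root@CLUSTER_IP:/etc/rancher/k3s/k3s.yaml ~/.kube/tfgrid-k3s.yaml

# If that path does not exist, try the other location mentioned above:
# scp root@CLUSTER_IP:/root/.kube/config ~/.kube/tfgrid-k3s.yaml
```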
If you want to use kubectl from another machine, change the line `server: https://127.0.0.1:6443` to `server: https://PLANETARYIP_OR_PUBLICIP:6443`,
replacing PLANETARYIP_OR_PUBLICIP with the IP you want to reach the cluster through.
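
For example, the following sketch patches the copied kubeconfig and verifies access from your local machine (the file name `~/.kube/tfgrid-k3s.yaml` and the `CLUSTER_IP` placeholder continue the assumed example above):

```bash
# Point the kubeconfig at the cluster's reachable IP instead of 127.0.0.1.
sed -i 's|server: https://127.0.0.1:6443|server: https://CLUSTER_IP:6443|' ~/.kube/tfgrid-k3s.yaml

# Verify the connection without touching your default kubeconfig.
kubectl --kubeconfig ~/.kube/tfgrid-k3s.yaml get nodes
```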
## Manage Workers
Add or remove workers in any **Kubernetes cluster**.