---
sidebar_position: 3
---

# Deployment Tutorials

Learn by example with these practical deployment tutorials for Mycelium Cloud.

## Prerequisites

Before starting these tutorials, ensure you have:

- ✅ Deployed cluster with kubectl access
- ✅ Mycelium running for network access
- ✅ kubectl configured with your cluster

```bash
# Verify your setup
kubectl get nodes

# Should show your cluster nodes
```
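
If `kubectl get nodes` cannot reach the cluster, it is usually a context issue. Two optional checks, assuming kubectl reads your usual kubeconfig:

```bash
# Show which context and cluster kubectl is currently talking to
kubectl config current-context
kubectl cluster-info
```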
## Tutorial 1: Hello World with Nginx

Deploy a simple web server to verify your cluster is working.

### Step 1: Create the Deployment

Save as `hello-world-deploy.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
```

Apply it:

```bash
kubectl apply -f hello-world-deploy.yaml
```
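
Optionally, wait for the rollout to finish before moving on (the Deployment name comes from the manifest above):

```bash
# Blocks until all replicas of hello-world are ready
kubectl rollout status deployment/hello-world
```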
### Step 2: Expose the Service

Save as `hello-world-svc.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
spec:
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
```

Apply it:

```bash
kubectl apply -f hello-world-svc.yaml
```
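
To confirm the Service actually picked up the pod, check its endpoints; an empty `ENDPOINTS` column usually means the selector and the pod labels do not match:

```bash
kubectl get service hello-world-service
kubectl get endpoints hello-world-service
```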
### Step 3: Access Your Application

```bash
# Port forward to local machine
kubectl port-forward service/hello-world-service 8080:80
```

Open `http://localhost:8080` - you should see the Nginx welcome page!
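
If you prefer to check from the terminal (with the port-forward from the previous step still running), a quick curl works too:

```bash
# Expect HTTP 200 and the "Welcome to nginx!" page
curl -i http://localhost:8080
```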
### Cleanup

```bash
kubectl delete -f hello-world-deploy.yaml
kubectl delete -f hello-world-svc.yaml
```
## Tutorial 2: Python Servers with Load Balancing

Deploy multiple Python HTTP servers to demonstrate load balancing.

### Step 1: Create the Deployments

Save as `python-servers.yaml` (three nearly identical Deployments that differ only in name, `server-id` label, and the server name they report):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-server-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: python-server
      server-id: "1"
  template:
    metadata:
      labels:
        app: python-server
        server-id: "1"
    spec:
      containers:
      - name: python-server
        image: python:3.9-slim
        command: ["python", "-c"]
        args:
        - |
          import http.server, socketserver, json, socket
          class Handler(http.server.SimpleHTTPRequestHandler):
              def do_GET(self):
                  self.send_response(200)
                  self.send_header('Content-type', 'application/json')
                  self.end_headers()
                  response = {"server": "Python Server 1", "pod": socket.gethostname()}
                  self.wfile.write(json.dumps(response).encode())
          with socketserver.TCPServer(("", 8000), Handler) as httpd:
              httpd.serve_forever()
        ports:
        - containerPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-server-2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: python-server
      server-id: "2"
  template:
    metadata:
      labels:
        app: python-server
        server-id: "2"
    spec:
      containers:
      - name: python-server
        image: python:3.9-slim
        command: ["python", "-c"]
        args:
        - |
          import http.server, socketserver, json, socket
          class Handler(http.server.SimpleHTTPRequestHandler):
              def do_GET(self):
                  self.send_response(200)
                  self.send_header('Content-type', 'application/json')
                  self.end_headers()
                  response = {"server": "Python Server 2", "pod": socket.gethostname()}
                  self.wfile.write(json.dumps(response).encode())
          with socketserver.TCPServer(("", 8000), Handler) as httpd:
              httpd.serve_forever()
        ports:
        - containerPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-server-3
spec:
  replicas: 2
  selector:
    matchLabels:
      app: python-server
      server-id: "3"
  template:
    metadata:
      labels:
        app: python-server
        server-id: "3"
    spec:
      containers:
      - name: python-server
        image: python:3.9-slim
        command: ["python", "-c"]
        args:
        - |
          import http.server, socketserver, json, socket
          class Handler(http.server.SimpleHTTPRequestHandler):
              def do_GET(self):
                  self.send_response(200)
                  self.send_header('Content-type', 'application/json')
                  self.end_headers()
                  response = {"server": "Python Server 3", "pod": socket.gethostname()}
                  self.wfile.write(json.dumps(response).encode())
          with socketserver.TCPServer(("", 8000), Handler) as httpd:
              httpd.serve_forever()
        ports:
        - containerPort: 8000
```
### Step 2: Create Load Balancing Service

Save as `python-lb-service.yaml`. The selector matches only `app: python-server`, so the Service spreads traffic across the pods of all three Deployments regardless of their `server-id`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: python-lb
spec:
  selector:
    app: python-server
  ports:
  - port: 80
    targetPort: 8000
  type: ClusterIP
```
### Step 3: Deploy Everything

```bash
kubectl apply -f python-servers.yaml
kubectl apply -f python-lb-service.yaml

# Check pod status - all six pods (2 replicas x 3 Deployments) should reach Running
kubectl get pods -l app=python-server
```
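
If you want a command that actually blocks until everything is ready, rather than re-running `kubectl get pods`, `kubectl wait` can do it:

```bash
# Waits up to two minutes for all python-server pods to become Ready
kubectl wait --for=condition=ready pod -l app=python-server --timeout=120s
```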
### Step 4: Test Load Balancing

```bash
# Port forward the service
kubectl port-forward service/python-lb 8080:80

# In another terminal, send a few requests
for i in {1..10}; do
  curl http://localhost:8080
  echo
done
```

Note that `kubectl port-forward` tunnels to a single pod behind the Service, so these responses will all come from the same pod. That makes it a useful smoke test, but it does not demonstrate load balancing; to see requests spread across servers and pods, query the Service from inside the cluster, as shown below.
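
A minimal in-cluster check, using a throwaway Alpine pod (the pod name `lb-tally` is just an example):

```bash
# Query the Service 20 times from inside the cluster and tally which
# server/pod combination answered each request
kubectl run -it --rm lb-tally --image=alpine --restart=Never -- sh -c \
  'for i in $(seq 1 20); do wget -qO- http://python-lb; echo; done | sort | uniq -c'
```

The counts should be spread over several distinct server/pod values - that is the Service's load balancing in action. Exact proportions vary from run to run.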
### Cleanup

```bash
kubectl delete -f python-servers.yaml
kubectl delete -f python-lb-service.yaml
```
## Tutorial 3: Stateful Application with Persistent Storage

Deploy a simple application that persists data.

### Step 1: Create Persistent Volume Claim

Save as `storage.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
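
This claim does not set a `storageClassName`, so it relies on the cluster's default StorageClass. If you are not sure one exists, check before applying:

```bash
# One StorageClass should be marked "(default)"
kubectl get storageclass
```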
### Step 2: Create Stateful Deployment

Save as `stateful-app.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: data-app
  template:
    metadata:
      labels:
        app: data-app
    spec:
      containers:
      - name: app
        image: busybox:latest
        command: ["sh", "-c"]
        args:
        - |
          echo "Starting data application..."
          while true; do
            date >> /data/log.txt
            echo "Written at $(date)" >> /data/log.txt
            sleep 10
          done
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data-pvc
```
### Step 3: Deploy and Verify

```bash
kubectl apply -f storage.yaml
kubectl apply -f stateful-app.yaml

# Wait for pod to be ready
kubectl get pods -l app=data-app

# Check the logs being written
kubectl exec -it $(kubectl get pod -l app=data-app -o name) -- cat /data/log.txt
```
### Step 4: Test Persistence

```bash
# Delete the pod
kubectl delete pod -l app=data-app

# Wait for new pod to start
kubectl get pods -l app=data-app

# Check data persisted
kubectl exec -it $(kubectl get pod -l app=data-app -o name) -- cat /data/log.txt
```

The data from before the pod deletion should still be there!
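
If the file is missing after the restart, the usual suspect is the PersistentVolumeClaim not binding. Two quick checks using the names from this tutorial:

```bash
# STATUS should be "Bound"
kubectl get pvc data-pvc

# The events at the bottom explain provisioning or attach failures
kubectl describe pvc data-pvc
```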
### Cleanup

```bash
kubectl delete -f stateful-app.yaml
kubectl delete -f storage.yaml
```
## Tutorial 4: Multi-Tier Application

Deploy a simple web app with a database backend.

### Step 1: Deploy Redis Database

Save as `redis.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7-alpine
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379
  type: ClusterIP
```
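
If you would like to bring Redis up on its own and confirm it is healthy before adding the frontend, something like this works (optional - Step 3 below applies both files anyway):

```bash
kubectl apply -f redis.yaml
kubectl rollout status deployment/redis

# Ping Redis inside its pod; expect "PONG"
kubectl exec deploy/redis -- redis-cli ping
```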
### Step 2: Deploy Frontend Application

Save as `frontend.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: nginx:alpine
        ports:
        - containerPort: 80
        env:
        - name: REDIS_HOST
          value: "redis"
        - name: REDIS_PORT
          value: "6379"
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
```
### Step 3: Deploy and Access

```bash
kubectl apply -f redis.yaml
kubectl apply -f frontend.yaml

# Wait for all pods
kubectl get pods

# Port forward to access frontend
kubectl port-forward service/frontend 8080:80
```

Open `http://localhost:8080` to reach the frontend. In this example the frontend is a stock Nginx image, so you will see the default welcome page; a real application would read the `REDIS_HOST` and `REDIS_PORT` environment variables to talk to the Redis backend through the `redis` Service name.
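
To confirm the backend tier is reachable through its Service DNS name from another pod, a throwaway Redis client pod works well (the pod name `redis-check` is just an example):

```bash
# Expect the reply "PONG"
kubectl run -it --rm redis-check --image=redis:7-alpine --restart=Never -- \
  redis-cli -h redis ping
```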
### Cleanup

```bash
kubectl delete -f frontend.yaml
kubectl delete -f redis.yaml
```
## Best Practices

### Resource Limits

Always set resource requests and limits on your containers:

```yaml
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
```
### Health Checks

Add liveness and readiness probes, making sure the paths and port match endpoints your application actually serves:

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
```
### Labels and Annotations

Use descriptive labels:

```yaml
metadata:
  labels:
    app: myapp
    version: v1.0
    environment: production
```
## Troubleshooting

### Pods Not Starting

```bash
# Check pod status
kubectl describe pod <pod-name>

# Check logs
kubectl logs <pod-name>

# Check events
kubectl get events --sort-by='.lastTimestamp'
```
### Service Not Accessible

```bash
# Check service
kubectl describe service <service-name>

# Check endpoints
kubectl get endpoints <service-name>

# Test from within cluster
kubectl run -it --rm debug --image=alpine --restart=Never -- sh
# Then: wget -O- http://<service-name>
```
### Resource Issues

Note that `kubectl top` relies on the metrics-server add-on; if it is not installed, the first two commands return an error.

```bash
# Check node resources
kubectl top nodes

# Check pod resources
kubectl top pods

# Check resource requests/limits
kubectl describe nodes
```
## Next Steps

Now that you've completed these tutorials:

- Deploy your own applications
- Explore [Helm](https://helm.sh/) for package management
- Learn about [Ingress controllers](https://kubernetes.io/docs/concepts/services-networking/ingress/) for advanced routing
- Study [StatefulSets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) for databases
- Explore [ConfigMaps and Secrets](https://kubernetes.io/docs/concepts/configuration/) for configuration management

## Resources

- **Kubernetes Documentation**: [kubernetes.io/docs](https://kubernetes.io/docs/)
- **kubectl Cheat Sheet**: [kubernetes.io/docs/reference/kubectl/cheatsheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/)
- **Mycelium Cloud FAQ**: [codescalers.github.io/www_kubecloud/faq](https://codescalers.github.io/www_kubecloud/faq)
- **Community**: [Telegram](https://t.me/threefold/1)

---

:::tip Keep Learning

These tutorials cover the basics. The real power of Kubernetes comes from combining these concepts to build complex, scalable applications on the ThreeFold Grid!

:::