---
sidebar_position: 3
---

# Deployment Tutorials

Learn by example with these practical deployment tutorials for Mycelium Cloud.

## Prerequisites

Before starting these tutorials, ensure you have:

- A deployed cluster with kubectl access
- Mycelium running locally for network access to the cluster
- kubectl configured with your cluster's kubeconfig

```bash
# Verify your setup
kubectl get nodes

# Should show your cluster nodes
```
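If `kubectl get nodes` cannot reach the cluster, it is usually worth confirming which context kubectl is pointed at:

```bash
# Show the context kubectl is currently using
kubectl config current-context

# List all configured contexts
kubectl config get-contexts
```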

## Tutorial 1: Hello World with Nginx

Deploy a simple web server to verify your cluster is working.

### Step 1: Create the Deployment

Save as `hello-world-deploy.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
```

Apply it:

```bash
kubectl apply -f hello-world-deploy.yaml
```

### Step 2: Expose the Service

Save as `hello-world-svc.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
spec:
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
```

Apply it:

```bash
kubectl apply -f hello-world-svc.yaml
```
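To confirm the Service has picked up the Deployment's pod, you can check its endpoints (the names below match the manifests above):

```bash
# The ENDPOINTS column should list the pod IP on port 80
kubectl get endpoints hello-world-service

# The pod backing the Service
kubectl get pods -l app=hello-world -o wide
```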

### Step 3: Access Your Application

```bash
# Port forward to local machine
kubectl port-forward service/hello-world-service 8080:80
```

Open http://localhost:8080 - you should see the Nginx welcome page!
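If you prefer the command line, a quick check from a second terminal (assuming `curl` is installed locally) might look like this:

```bash
# Expect the default "Welcome to nginx!" page in the response
curl -s http://localhost:8080 | grep -i "welcome to nginx"
```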

### Cleanup

```bash
kubectl delete -f hello-world-deploy.yaml
kubectl delete -f hello-world-svc.yaml
```

## Tutorial 2: Python Servers with Load Balancing

Deploy multiple Python HTTP servers to demonstrate load balancing.

### Step 1: Create the Deployments

Save as `python-servers.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-server-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: python-server
      server-id: "1"
  template:
    metadata:
      labels:
        app: python-server
        server-id: "1"
    spec:
      containers:
      - name: python-server
        image: python:3.9-slim
        command: ["python", "-c"]
        args:
        - |
          import http.server, socketserver, json, socket
          class Handler(http.server.SimpleHTTPRequestHandler):
              def do_GET(self):
                  self.send_response(200)
                  self.send_header('Content-type', 'application/json')
                  self.end_headers()
                  response = {"server": "Python Server 1", "pod": socket.gethostname()}
                  self.wfile.write(json.dumps(response).encode())
          with socketserver.TCPServer(("", 8000), Handler) as httpd:
              httpd.serve_forever()
        ports:
        - containerPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-server-2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: python-server
      server-id: "2"
  template:
    metadata:
      labels:
        app: python-server
        server-id: "2"
    spec:
      containers:
      - name: python-server
        image: python:3.9-slim
        command: ["python", "-c"]
        args:
        - |
          import http.server, socketserver, json, socket
          class Handler(http.server.SimpleHTTPRequestHandler):
              def do_GET(self):
                  self.send_response(200)
                  self.send_header('Content-type', 'application/json')
                  self.end_headers()
                  response = {"server": "Python Server 2", "pod": socket.gethostname()}
                  self.wfile.write(json.dumps(response).encode())
          with socketserver.TCPServer(("", 8000), Handler) as httpd:
              httpd.serve_forever()
        ports:
        - containerPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-server-3
spec:
  replicas: 2
  selector:
    matchLabels:
      app: python-server
      server-id: "3"
  template:
    metadata:
      labels:
        app: python-server
        server-id: "3"
    spec:
      containers:
      - name: python-server
        image: python:3.9-slim
        command: ["python", "-c"]
        args:
        - |
          import http.server, socketserver, json, socket
          class Handler(http.server.SimpleHTTPRequestHandler):
              def do_GET(self):
                  self.send_response(200)
                  self.send_header('Content-type', 'application/json')
                  self.end_headers()
                  response = {"server": "Python Server 3", "pod": socket.gethostname()}
                  self.wfile.write(json.dumps(response).encode())
          with socketserver.TCPServer(("", 8000), Handler) as httpd:
              httpd.serve_forever()
        ports:
        - containerPort: 8000
```

### Step 2: Create Load Balancing Service

Save as `python-lb-service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: python-lb
spec:
  selector:
    app: python-server
  ports:
  - port: 80
    targetPort: 8000
  type: ClusterIP
```

### Step 3: Deploy Everything

```bash
kubectl apply -f python-servers.yaml
kubectl apply -f python-lb-service.yaml

# Wait for pods to be ready
kubectl get pods -l app=python-server
```
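Rather than polling `kubectl get pods`, you could wait for readiness in a single command, for example:

```bash
# Block until all python-server pods report Ready, or time out after 2 minutes
kubectl wait --for=condition=Ready pod -l app=python-server --timeout=120s
```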

### Step 4: Test Load Balancing

```bash
# Port forward the service
kubectl port-forward service/python-lb 8080:80

# In another terminal, test the load balancing
for i in {1..10}; do
  curl http://localhost:8080
done

You'll see responses from different servers and pods, showing the load balancing in action!
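To see the distribution more clearly, one option is to tally the responses over a larger number of requests, for example:

```bash
# Send 30 requests and count how many times each server/pod answered
for i in {1..30}; do curl -s http://localhost:8080; echo; done | sort | uniq -c
```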

### Cleanup

```bash
kubectl delete -f python-servers.yaml
kubectl delete -f python-lb-service.yaml
```

## Tutorial 3: Stateful Application with Persistent Storage

Deploy a simple application that persists data.

### Step 1: Create Persistent Volume Claim

Save as `storage.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
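This PVC relies on the cluster's default StorageClass. If the claim stays in `Pending`, you may need to set `spec.storageClassName` explicitly; you can list the available classes with:

```bash
# Show available storage classes; the default one is marked "(default)"
kubectl get storageclass
```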

### Step 2: Create Stateful Deployment

Save as `stateful-app.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: data-app
  template:
    metadata:
      labels:
        app: data-app
    spec:
      containers:
      - name: app
        image: busybox:latest
        command: ["sh", "-c"]
        args:
        - |
          echo "Starting data application..."
          while true; do
            date >> /data/log.txt
            echo "Written at $(date)" >> /data/log.txt
            sleep 10
          done
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data-pvc
```

### Step 3: Deploy and Verify

```bash
kubectl apply -f storage.yaml
kubectl apply -f stateful-app.yaml

# Wait for pod to be ready
kubectl get pods -l app=data-app

# Check the logs being written
kubectl exec -it $(kubectl get pod -l app=data-app -o name) -- cat /data/log.txt
```

### Step 4: Test Persistence

```bash
# Delete the pod
kubectl delete pod -l app=data-app

# Wait for new pod to start
kubectl get pods -l app=data-app

# Check data persisted
kubectl exec -it $(kubectl get pod -l app=data-app -o name) -- cat /data/log.txt
```

The data from before the pod deletion should still be there!
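One way to sanity-check this is to note the number of log entries before deleting the pod and confirm the count only grows afterwards, for example:

```bash
# Count log entries; run before and after deleting the pod
kubectl exec $(kubectl get pod -l app=data-app -o name) -- wc -l /data/log.txt
```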

### Cleanup

```bash
kubectl delete -f stateful-app.yaml
kubectl delete -f storage.yaml
```

## Tutorial 4: Multi-Tier Application

Deploy a simple web app with a database backend.

### Step 1: Deploy Redis Database

Save as `redis.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7-alpine
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379
  type: ClusterIP
```

### Step 2: Deploy Frontend Application

Save as `frontend.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: nginx:alpine
        ports:
        - containerPort: 80
        env:
        - name: REDIS_HOST
          value: "redis"
        - name: REDIS_PORT
          value: "6379"
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
```

### Step 3: Deploy and Access

```bash
kubectl apply -f redis.yaml
kubectl apply -f frontend.yaml

# Wait for all pods
kubectl get pods

# Port forward to access frontend
kubectl port-forward service/frontend 8080:80
```

Open http://localhost:8080 to reach the frontend. In this example the frontend is a stock Nginx placeholder, so you will see the default welcome page; the REDIS_HOST and REDIS_PORT environment variables show how a real frontend would discover the Redis backend through its Service name.
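To verify both tiers independently, you can ping Redis and check that the `redis` Service name resolves from a frontend pod (the `nslookup` check assumes BusyBox networking tools are present in the image, which is normally the case for `nginx:alpine`):

```bash
# Redis should answer PONG
kubectl exec deployment/redis -- redis-cli ping

# The frontend pods should resolve the "redis" Service by name
kubectl exec deployment/frontend -- nslookup redis
```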

### Cleanup

```bash
kubectl delete -f frontend.yaml
kubectl delete -f redis.yaml
```

## Best Practices

### Resource Limits

Always set resource requests and limits on your containers:

```yaml
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
```
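As a concrete illustration, here is how the Nginx container from Tutorial 1 might look with these settings applied (the values are just the sample numbers above, not a tuned recommendation):

```yaml
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
```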

### Health Checks

Add liveness and readiness probes:

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
```
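Note that `/health` and `/ready` on port 8080 are placeholders for an application that exposes such endpoints. The stock Nginx image from Tutorial 1 does not serve them, so a working probe there would simply hit `/` on port 80, for example:

```yaml
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10
```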

### Labels and Annotations

Use descriptive labels:

```yaml
metadata:
  labels:
    app: myapp
    version: v1.0
    environment: production
```
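Consistent labels pay off when you filter resources later; the label selectors used throughout these tutorials work the same way with any labels you define, for example:

```bash
# List only the production pods of myapp (labels from the example above)
kubectl get pods -l app=myapp,environment=production

# Delete everything belonging to one app in one go
kubectl delete all -l app=myapp
```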

## Troubleshooting

### Pods Not Starting

```bash
# Check pod status
kubectl describe pod <pod-name>

# Check logs
kubectl logs <pod-name>

# Check events
kubectl get events --sort-by='.lastTimestamp'
```

### Service Not Accessible

```bash
# Check service
kubectl describe service <service-name>

# Check endpoints
kubectl get endpoints <service-name>

# Test from within cluster
kubectl run -it --rm debug --image=alpine --restart=Never -- sh
# Then: wget -O- http://<service-name>
```

### Resource Issues

Note that `kubectl top` requires the metrics-server add-on to be available in the cluster.

```bash
# Check node resources
kubectl top nodes

# Check pod resources
kubectl top pods

# Check resource requests/limits
kubectl describe nodes
```
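`kubectl describe nodes` is verbose; if you only want each node's allocation summary, a quick filter such as the following can help (assumes a grep that supports `-A`):

```bash
# Show only each node's resource allocation summary
kubectl describe nodes | grep -A 8 "Allocated resources"
```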

## Next Steps

Now that you've completed these tutorials, you can combine these building blocks to deploy your own applications on your Mycelium Cloud cluster.

:::tip Keep Learning

These tutorials cover the basics. The real power of Kubernetes comes from combining these concepts to build complex, scalable applications on the ThreeFold Grid!

:::