Mycelium Cloud - Redis Cache Example
A complete, production-ready example for deploying a Redis cache with a web interface on a Mycelium Cloud Kubernetes cluster. It features a multi-container pod architecture, visual Redis management, and practical caching patterns.
📁 What This Contains
This directory contains everything you need to deploy a Redis cache system:
- redis-cache.md - This comprehensive guide
- redis-cache-deployment.yaml - Multi-container pod deployment
- redis-cache-service.yaml - LoadBalancer service configuration
- redis.conf - Redis server configuration
- web-interface.py - Python Flask web interface code (mounted via ConfigMap)
🚀 Quick Start (3 minutes)
# 1. Deploy the application (creates ConfigMap, Deployment, and Service)
kubectl apply -f redis-cache-deployment.yaml
kubectl apply -f redis-cache-service.yaml
# 2. Wait for pods to be ready
kubectl wait --for=condition=ready pod -l app=redis-cache --timeout=120s
# 3. Access Redis via CLI
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-cache -- redis-cli -h localhost -p 6379 ping
# 4. Test data storage
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-cache -- redis-cli -h localhost -p 6379 SET test "Hello from Mycelium Cloud!"
# 5. Access web interface
kubectl port-forward service/redis-cache-service 8380:8080 &
curl http://localhost:8380
Expected Result: Redis responds with "PONG" and stores/retrieves data successfully. Web interface displays Redis statistics and key management tools.
📋 What You'll Learn
- ✅ Advanced Kubernetes patterns (multi-container pods)
- ✅ Redis deployment and configuration
- ✅ ConfigMap usage for application code
- ✅ LoadBalancer services on Mycelium Cloud
- ✅ Port-forwarding for multiple services (Redis + Web)
- ✅ Production caching patterns
- ✅ Web-based Redis management
- ✅ Resource limits and container orchestration
🏗️ Architecture
This example uses a multi-container pod pattern with a two-file approach:
- redis-cache-deployment.yaml - Contains ConfigMap, Deployment with 2 containers
- redis-cache-service.yaml - Contains networking configuration
Network Flow:
kubectl port-forward → LoadBalancer Service → Pod (redis-cache + redis-web-interface)
Multi-Container Architecture:
- redis-cache: Redis 7-alpine server (port 6379)
- redis-web-interface: Python Flask web app (port 8080)
- ConfigMap: Mounted web interface code (web-interface.py)
🔧 Files Explanation
redis-cache-deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-cache
  template:
    metadata:
      labels:
        app: redis-cache
    spec:
      containers:
      - name: redis-cache
        image: redis:7-alpine
        ports:
        - containerPort: 6379
          name: redis
        command: ["redis-server"]
        args: ["--bind", "0.0.0.0", "--protected-mode", "no", "--maxmemory", "64mb"]
        resources:
          requests:
            memory: "32Mi"
            cpu: "100m"
          limits:
            memory: "64Mi"
            cpu: "200m"
      - name: redis-web-interface
        image: python:3.11-alpine
        ports:
        - containerPort: 8080
          name: web
        command: ["/bin/sh", "-c"]
        args:
        - |
          pip install flask redis &&
          python /app/web-interface.py
        volumeMounts:
        - name: web-interface
          mountPath: /app
        resources:
          requests:
            memory: "16Mi"
            cpu: "50m"
          limits:
            memory: "32Mi"
            cpu: "100m"
      volumes:
      - name: web-interface
        configMap:
          name: redis-web-interface
```
What it does:
- Creates a multi-container pod with a Redis server plus a web interface
- Mounts the web interface code from the redis-web-interface ConfigMap
- Sets resource requests and limits for both containers
- Caps Redis memory at 64 MB via --maxmemory
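Because both containers run in the same pod, they share a network namespace, so the web interface reaches Redis on localhost with no Service involved. A minimal sketch of that connection, assuming the redis-py client that the container's pip install step provides:

```python
import redis

# Containers in the same pod share localhost, so no Service DNS is needed here.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("greeting", "Hello from the sidecar container!")
print(r.get("greeting"))  # -> Hello from the sidecar container!
print(r.ping())           # -> True
```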
redis-cache-service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-cache-service
spec:
  selector:
    app: redis-cache
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  - name: web
    port: 8080
    targetPort: 8080
  type: LoadBalancer
```
What it does:
- Creates LoadBalancer service
- Exposes both Redis (6379) and web (8080) ports
- Routes traffic to multi-container pod
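Other pods in the cluster can reach Redis through the Service name instead of localhost. A hedged sketch of cluster-internal access, assuming the client pod runs in the same namespace and has redis-py installed:

```python
import redis

# From another pod in the same namespace, the Service name resolves via cluster DNS.
# (Use redis-cache-service.<namespace>.svc.cluster.local from other namespaces.)
r = redis.Redis(host="redis-cache-service", port=6379, decode_responses=True)

r.set("cached:answer", "42", ex=300)  # cache the value for 5 minutes
print(r.get("cached:answer"))
```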
🌐 Access Methods
Method 1: Port-Forward (Recommended for Mycelium Cloud)
Access Redis CLI:
# Keep terminal open, forward Redis port
kubectl port-forward service/redis-cache-service 6379:6379
# In another terminal, test Redis
redis-cli -h localhost -p 6379 ping
redis-cli -h localhost -p 6379 set mykey "Hello Redis!"
redis-cli -h localhost -p 6379 get mykey
Access Web Interface:
# Forward web interface port
kubectl port-forward service/redis-cache-service 8380:8080
# Access via browser: http://localhost:8380
curl http://localhost:8380
Background Mode:
# Start both port-forwards in background
nohup kubectl port-forward service/redis-cache-service 6379:6379 8380:8080 > redis-access.log 2>&1 &
# Test access
curl http://localhost:8380
redis-cli -h localhost -p 6379 ping
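With both port-forwards running, you can smoke-test Redis and the web interface from your workstation. A small sketch using the standard library plus redis-py (assumes a local pip install redis); the ports match the forwards above:

```python
import urllib.request

import redis

# Redis through the forwarded 6379 port
r = redis.Redis(host="localhost", port=6379, decode_responses=True)
assert r.ping(), "Redis did not answer PING"
r.set("smoke:test", "ok", ex=60)
print("Redis:", r.get("smoke:test"))

# Web interface through the forwarded 8380 port
with urllib.request.urlopen("http://localhost:8380", timeout=5) as resp:
    print("Web interface HTTP status:", resp.status)
```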
Method 2: Direct Pod Access (Inside Cluster)
Redis CLI Access:
# Execute commands directly in Redis container
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-cache -- redis-cli
# Or run single commands
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-cache -- redis-cli -h localhost -p 6379 ping
Web Interface Test:
# Test web interface from within pod
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-web-interface -- python3 -c "import urllib.request; r = urllib.request.urlopen('http://localhost:8080', timeout=5); print(f'Web interface: {r.status}')"
Method 3: LoadBalancer IP Access (If Available)
# Get LoadBalancer IP (may be internal on Mycelium Cloud)
kubectl get svc redis-cache-service
# Access Redis (if external IP available)
redis-cli -h <external-ip> -p 6379 ping
# Access web interface (if external IP available)
curl http://<external-ip>:8080
📊 Web Interface Features
The Redis web interface provides:
Dashboard:
- Real-time Redis statistics
- Memory usage monitoring
- Connected clients count
- Uptime tracking
Key Management:
- View all Redis keys
- Add/update/delete keys
- Search with patterns (e.g. `user:*`, `session:*`)
- TTL management
Cache Examples:
- API cache simulation
- Session storage demo
- Counter/rate limiting examples
- Memory usage patterns
Quick Actions:
- Add sample data
- Clear all data
- Refresh statistics
- Memory information
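The actual web-interface.py is referenced at the end of this guide; the sketch below is not that file, just an illustration of how a couple of Flask endpoints could surface the dashboard statistics and key listing described above, assuming Flask and redis-py (both installed by the container's startup command):

```python
from flask import Flask, jsonify
import redis

app = Flask(__name__)
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

@app.route("/stats")
def stats():
    """Return a small subset of the dashboard statistics."""
    info = r.info()
    return jsonify({
        "used_memory_human": info.get("used_memory_human"),
        "connected_clients": info.get("connected_clients"),
        "uptime_in_seconds": info.get("uptime_in_seconds"),
        "total_keys": r.dbsize(),
    })

@app.route("/keys/<pattern>")
def keys(pattern):
    """List keys matching a pattern, e.g. /keys/user:*"""
    return jsonify(sorted(r.scan_iter(match=pattern)))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```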
🔍 Troubleshooting
Check Deployment Status
# Check pods status (should show 2/2 Ready)
kubectl get pods -l app=redis-cache
# Check service details
kubectl get svc redis-cache-service
# Check ConfigMap
kubectl get configmap redis-web-interface
# Check events
kubectl get events --field-selector involvedObject.name=$(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}')
Common Issues
Pod Not Starting
# Check pod status and events
kubectl describe pod -l app=redis-cache
# Check container logs
kubectl logs -l app=redis-cache
kubectl logs -l app=redis-cache -c redis-cache
kubectl logs -l app=redis-cache -c redis-web-interface --previous
Web Interface Failing
# Check web interface logs
kubectl logs -l app=redis-cache -c redis-web-interface
# Verify ConfigMap is mounted
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-web-interface -- ls /app/
# Verify Flask is installed in the container
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-web-interface -- python3 -c "import flask; print('Flask available')"
Redis Connection Issues
# Test Redis connectivity from web interface container
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-web-interface -- python3 -c "import redis; r = redis.Redis(host='localhost', port=6379); print(r.ping())"
# Check Redis configuration
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-cache -- redis-cli config get "*"
Port Conflicts
# Check if ports are in use
lsof -i :6379
lsof -i :8080
# Kill conflicting processes
kill -9 $(lsof -ti:6379)
kill -9 $(lsof -ti:8080)
🛠️ Common Operations
Scaling
# Keep a single replica (Redis is single-instance by nature; extra replicas would be independent, unsynchronized caches)
kubectl scale deployment redis-cache --replicas=1
# Check distribution
kubectl get pods -o wide
Updates
# Update Redis image
kubectl set image deployment/redis-cache redis-cache=redis:7.2-alpine
# Restart deployment
kubectl rollout restart deployment/redis-cache
# Check rollout status
kubectl rollout status deployment/redis-cache
Data Management
# Access Redis CLI
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-cache -- redis-cli
# Common Redis commands inside pod:
# KEYS *
# INFO memory
# FLUSHALL
# DBSIZE
Monitoring
# View logs from both containers
kubectl logs -f deployment/redis-cache
kubectl logs -f deployment/redis-cache -c redis-cache
kubectl logs -f deployment/redis-cache -c redis-web-interface
# Monitor resource usage
kubectl top pod -l app=redis-cache
# Check Redis info
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-cache -- redis-cli INFO
🧹 Cleanup
When you're done testing:
# Delete the application and service
kubectl delete -f redis-cache-deployment.yaml -f redis-cache-service.yaml
# Wait for cleanup
kubectl wait --for=delete pod -l app=redis-cache --timeout=60s
# Kill any port-forwards
lsof -ti:6379 | xargs kill -9 2>/dev/null || true
lsof -ti:8080 | xargs kill -9 2>/dev/null || true
# Verify cleanup
kubectl get all -l app=redis-cache
kubectl get configmap redis-web-interface 2>/dev/null || echo "ConfigMap deleted"
🎯 What This Demonstrates
This example shows:
- Advanced Kubernetes patterns - multi-container pods, ConfigMaps
- Production Redis deployment - memory limits, configuration management
- Web-based management - Flask interface for Redis operations
- Mycelium Cloud networking - LoadBalancer services, port-forwarding
- Container orchestration - resource management, health monitoring
- Development workflows - testing, debugging, scaling patterns
🔗 Next Steps
Once you understand this example, try:
- Redis Clustering - Multiple Redis instances with data sharding
- Persistent Storage - Redis persistence with volumes
- Redis Sentinel - High availability Redis setup
- Cache Patterns - Implement cache-aside and write-through patterns (see the sketch after this list)
- Integration - Connect web applications to Redis cache
- Monitoring - Add Prometheus metrics for Redis
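As a starting point for the cache patterns item above, here is a hedged sketch of cache-aside plus a simple fixed-window rate limiter, assuming redis-py and a hypothetical load_user_from_db() placeholder standing in for your real data source:

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_user_from_db(user_id):
    # Hypothetical placeholder for a real database query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id, ttl=300):
    """Cache-aside: try the cache, fall back to the database, then populate the cache."""
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    user = load_user_from_db(user_id)
    r.set(key, json.dumps(user), ex=ttl)
    return user

def allow_request(client_id, limit=100, window=60):
    """Fixed-window rate limiting with INCR + EXPIRE."""
    key = f"rate:{client_id}"
    count = r.incr(key)
    if count == 1:
        r.expire(key, window)
    return count <= limit

print(get_user(1001))             # first call populates the cache
print(allow_request("client-a"))  # True until the limit is reached
```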
📚 More Examples
Other available examples:
- hello-world/ - Basic web application deployment
- nginx-static/ - Static website hosting
- python-flask/ - Python API server with multiple endpoints
💡 Pro Tips
- Multi-Container Access: Use `-c container-name` to access specific containers
- ConfigMap Updates: Modify web-interface.py and restart the deployment
- Redis Testing: Use `redis-cli` for quick testing and monitoring
- Web Interface: Great for visual debugging and demonstrating Redis concepts
- Memory Management: Redis memory limits prevent resource exhaustion
- Network Testing: Use `kubectl exec` for internal cluster testing
- Background Services: Combine multiple port-forwards with `&`
🔧 Redis-Specific Tips
Data Types Demo
# Strings
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-cache -- redis-cli SET user:1001 "Alice Johnson"
# Hashes
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-cache -- redis-cli HSET user:1002 name "Bob Smith" age "30"
# Lists
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-cache -- redis-cli LPUSH queue:tasks "task1" "task2"
# Sets
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-cache -- redis-cli SADD tags:web "redis" "caching" "performance"
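The same data-type operations can be driven from Python instead of redis-cli; a sketch assuming redis-py and a connection to localhost:6379 (for example via the port-forward described earlier):

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("user:1001", "Alice Johnson")                              # string
r.hset("user:1002", mapping={"name": "Bob Smith", "age": "30"})  # hash
r.lpush("queue:tasks", "task1", "task2")                         # list
r.sadd("tags:web", "redis", "caching", "performance")            # set

print(r.get("user:1001"))
print(r.hgetall("user:1002"))
print(r.lrange("queue:tasks", 0, -1))
print(r.smembers("tags:web"))
```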
Performance Testing
# Simple load test
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-cache -- redis-cli --latency-history -i 1
# Memory usage
kubectl exec -it $(kubectl get pod -l app=redis-cache -o jsonpath='{.items[0].metadata.name}') -c redis-cache -- redis-cli INFO memory
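For a rough client-side number to complement redis-cli --latency-history, a small timing loop with redis-py (measurements taken through a port-forward include tunnelling overhead, so treat them as relative, not absolute):

```python
import time

import redis

r = redis.Redis(host="localhost", port=6379)

N = 1000
start = time.perf_counter()
for i in range(N):
    r.set(f"bench:{i}", "x")
elapsed = time.perf_counter() - start

print(f"{N} SETs in {elapsed:.3f}s ({N / elapsed:.0f} ops/sec)")
r.delete(*(f"bench:{i}" for i in range(N)))  # clean up the benchmark keys
```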
🎉 Success Indicators
You'll know everything is working when:
- ✅ `kubectl get pods` shows "2/2 Running" for the redis-cache pod
- ✅ `kubectl get svc` shows redis-cache-service with LoadBalancer type
- ✅ `redis-cli -h localhost -p 6379 ping` returns "PONG"
- ✅ `kubectl exec` commands work in both containers
- ✅ Web interface accessible at http://localhost:8380
- ✅ Redis operations (SET/GET) work via CLI and web interface
- ✅ No errors in `kubectl get events`
Congratulations! You've successfully deployed a production-ready Redis cache system on Mycelium Cloud! 🚀
📖 File Contents
For reference, here are the complete file contents:
redis-cache-deployment.yaml
[Complete deployment configuration with multi-container pod setup]
redis-cache-service.yaml
[Complete service configuration with LoadBalancer type]
redis.conf
[Redis server configuration with memory limits and performance settings]
web-interface.py
[Complete Flask web application for Redis management]
🆘 Support
If you encounter issues:
- Check the troubleshooting section above
- Verify your kubeconfig is set correctly: `kubectl get nodes`
- Ensure your cluster is healthy: `kubectl get pods --all-namespaces`
- Check Redis logs: `kubectl logs -l app=redis-cache -c redis-cache`
- Test web interface functionality via browser at http://localhost:8380
- Verify ConfigMap mounting: `kubectl exec -it <pod-name> -c redis-web-interface -- ls /app/`
For more help, visit our documentation or contact support.