# eu-staging Environment: Minikube with its own ArgoCD
## What We Built
A staging environment on a dedicated Minikube profile, following the same
isolated-ArgoCD pattern as eu-prod-minikube. This completes the three-tier pipeline.
All environments run the same image versions: svc1:nginx:1.27, svc2:nginx:1.26.
Promotion means bumping these tags in Git - ArgoCD on each cluster picks up the change.
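As a sketch, promoting svc2 to a newer nginx looks like an ordinary Git edit. The file path below is a guess at the repo layout, not the actual overlay structure:

```shell
# Hypothetical path - adjust to wherever the staging overlay pins image tags
sed -i '' 's|nginx:1.26|nginx:1.27|' envs/eu-staging/svc2/deployment.yaml
git commit -am "eu-staging: promote svc2 to nginx:1.27"
git push   # ArgoCD on the eu-staging cluster syncs the change on its next poll
```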
## What’s Different from eu-prod-minikube
| | eu-prod-minikube | eu-staging-minikube |
|---|---|---|
| Minikube profile | eu-prod | eu-staging |
| kubectl context | eu-prod | eu-staging |
| Replicas | 2 | 1 |
| Namespace | alpha-prod | alpha-staging |
| Hostnames | *.eu-prod-minikube.ravikrs.local | *.eu-staging-minikube.ravikrs.local |
| Access method | kubectl port-forward | kubectl port-forward |
## Port Access: Why kubectl port-forward Instead of minikube tunnel
On macOS, Rancher Desktop’s Lima SSH tunnel holds ports 80 and 443 on all interfaces
(*:80, *:443). When minikube tunnel assigns 127.0.0.1 as the LoadBalancer IP
(Docker driver behaviour), any request to 127.0.0.1:443 hits Rancher Desktop’s Traefik
rather than Minikube’s - producing a 404.
kubectl port-forward sidesteps this entirely: it creates a direct tunnel through the
Kubernetes API server to the Traefik pod, on ports the host doesn’t own (8880/8443).
Full debug walkthrough: docs/13-debugging-port-conflict-minikube-rancher.md
## Cluster Setup
### 1. Start the Minikube profile
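A minimal start command, assuming the Docker driver; the CPU and memory flags are illustrative and should match whatever eu-prod uses:

```shell
minikube start -p eu-staging --driver=docker --cpus=2 --memory=2048
kubectl config use-context eu-staging   # minikube creates the context automatically
```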
If Minikube fails with `docker: No such file or directory` (this happens after Rancher Desktop
restarts - the Docker socket path shifts), re-point the Docker CLI at Rancher Desktop's socket.
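A sketch of that re-pointing - the socket path is an assumption, verify yours with `docker context ls`:

```shell
# Rancher Desktop's socket normally lives under ~/.rd
export DOCKER_HOST=unix://$HOME/.rd/docker.sock
# or switch the Docker CLI context instead:
docker context use rancher-desktop
```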
Memory constraint: Each Minikube profile uses ~1.5–2GB of the Rancher Desktop VM. Stop any other Minikube profile before starting eu-staging to avoid API server OOM crashes:
```shell
minikube stop -p eu-prod
```
### 2. Install ArgoCD
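A standard upstream install; `stable` is shown as a stand-in for whatever version the repo pins:

```shell
kubectl --context eu-staging create namespace argocd
kubectl --context eu-staging apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```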
### 3. Register the GitHub repo via CLI
The Kubernetes Secret (bootstrap/repo-secret.yaml) alone is not reliably picked up by
ArgoCD v3. Register the repo explicitly through the ArgoCD API before applying the bootstrap:
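A sketch of the explicit registration. The repo URL is a placeholder, and at this point the admin password is still the initial one (argocd-config only overrides it later, at wave 3):

```shell
# Initial admin password, before argocd-config replaces it during bootstrap
PASS=$(kubectl --context eu-staging -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d)
# Reach the API server directly while the ingress isn't up yet
kubectl --context eu-staging -n argocd port-forward svc/argocd-server 8080:443 &
argocd login localhost:8080 --username admin --password "$PASS" --insecure
argocd repo add https://github.com/<org>/<repo>.git   # private repos also need credentials
```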
### 4. Bootstrap
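The exact entry point depends on the repo, but the pattern is an app-of-apps manifest applied once; the path is assumed from the `bootstrap/` directory mentioned above:

```shell
kubectl --context eu-staging -n argocd apply -f bootstrap/
```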
ArgoCD deploys everything in sync-wave order:
| Wave | What deploys |
|---|---|
| 0 | cert-manager |
| 1 | cert-manager-config (ClusterIssuer), reloader |
| 2 | Traefik |
| 3 | argocd-config (ingress + password + insecure mode) |
| 4 | ApplicationSet → svc1, svc2 |
Watch progress:
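Two standard ways to watch the sync:

```shell
kubectl --context eu-staging -n argocd get applications -w
# or, from the CLI session opened in step 3:
argocd app list
```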
All apps reach Synced. Services and Traefik will show Progressing health indefinitely
- this is expected because the Docker driver never assigns a LoadBalancer IP. Services still work via port-forward.
### 5. Add hostnames to /etc/hosts
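The three hostnames from the Access table below, pointed at localhost since all traffic rides the port-forward:

```shell
sudo tee -a /etc/hosts <<'EOF'
127.0.0.1 argocd.eu-staging-minikube.ravikrs.local
127.0.0.1 svc1.eu-staging-minikube.ravikrs.local
127.0.0.1 svc2.eu-staging-minikube.ravikrs.local
EOF
```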
### 6. Start port-forward (keep running in a terminal)
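A sketch matching the ports described earlier; Traefik's namespace and service name are assumptions - confirm with `kubectl get svc -A`:

```shell
# Host ports 8880/8443 → Traefik's HTTP/HTTPS entrypoints
kubectl --context eu-staging -n traefik port-forward svc/traefik 8880:80 8443:443
```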
### 7. Access
| Service | URL |
|---|---|
| ArgoCD | https://argocd.eu-staging-minikube.ravikrs.local:8443 |
| svc1 | https://svc1.eu-staging-minikube.ravikrs.local:8443 |
| svc2 | https://svc2.eu-staging-minikube.ravikrs.local:8443 |
Login: admin / admin
## Cluster Lifecycle
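The usual Minikube profile lifecycle applies:

```shell
minikube stop -p eu-staging     # free the Rancher Desktop VM's memory
minikube start -p eu-staging    # resume; cluster state persists
minikube delete -p eu-staging   # tear down the profile completely
```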
## Why Not k3d?
k3d was the original plan (lighter, faster startup, Docker-based port mapping). On macOS with Rancher Desktop, two issues blocked it:
- **inotify limits**: Rancher Desktop’s Lima VM defaults to `fs.inotify.max_user_instances=128`. k3s inside a Docker container needs hundreds of inotify instances for its kubelet and containerd file watchers; without raising the limit (`sysctl -w fs.inotify.max_user_instances=512`) the k3s node never registers. The fix is temporary - it resets on VM restart.
- **API server instability under load**: Even after fixing inotify, the k3s API server inside the k3d container became unresponsive once ArgoCD started reconciling (TLS handshake timeouts, `http2: client connection force closed`) - likely resource pressure from running k3s plus ArgoCD all inside Docker containers on a shared VM.
Minikube avoids both problems: it runs upstream (kubeadm-provisioned) Kubernetes rather than k3s, in its own node container with dedicated resources, without the extra layer of nesting that triggered the inotify and load issues.