# eu-staging Environment: Minikube with its own ArgoCD

Ravi Singh
Learning ArgoCD - This article is part of a series.
Part 3: This Article


## What We Built

A staging environment on a dedicated Minikube profile, following the same isolated-ArgoCD pattern as eu-prod-minikube. This completes the three-tier pipeline:

```
eu-dev-rancher  →  eu-staging-minikube  →  eu-prod-minikube
  (Rancher/k3s)      (Minikube)              (Minikube)
  ArgoCD (dev)       ArgoCD (staging)        ArgoCD (prod)
```

All environments run the same image versions: `svc1` on `nginx:1.27`, `svc2` on `nginx:1.26`. Promotion means bumping these tags in Git - ArgoCD on each cluster picks up the change.
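Concretely, if each environment is a kustomize overlay (the path and layout here are assumptions, not shown in this post), a promotion commit is a one-line tag bump:

```yaml
# environments/eu-staging-minikube/apps/svc1/kustomization.yaml (hypothetical path)
images:
  - name: nginx
    newTag: "1.27"   # bump this tag to promote; ArgoCD syncs the commit
```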


## What’s Different from eu-prod-minikube

|                  | eu-prod-minikube | eu-staging-minikube |
|------------------|------------------|---------------------|
| Minikube profile | `eu-prod` | `eu-staging` |
| kubectl context  | `eu-prod` | `eu-staging` |
| Replicas         | 2 | 1 |
| Namespace        | `alpha-prod` | `alpha-staging` |
| Hostnames        | `*.eu-prod-minikube.ravikrs.local` | `*.eu-staging-minikube.ravikrs.local` |
| Access method    | `kubectl port-forward` | `kubectl port-forward` |

## Port Access: Why `kubectl port-forward` Instead of `minikube tunnel`

On macOS, Rancher Desktop’s Lima SSH tunnel holds ports 80 and 443 on all interfaces (`*:80`, `*:443`). When `minikube tunnel` assigns `127.0.0.1` as the LoadBalancer IP (Docker driver behaviour), any request to `127.0.0.1:443` hits Rancher Desktop’s Traefik rather than Minikube’s - producing a 404.

`kubectl port-forward` sidesteps this entirely: it creates a direct tunnel through the Kubernetes API server to the Traefik pod, on ports the host doesn’t own (8880/8443).

```
Browser:8443 → kubectl port-forward → kube-apiserver → Traefik pod → Ingress → Pod
```

Full debug walkthrough: `docs/13-debugging-port-conflict-minikube-rancher.md`


## Cluster Setup

### 1. Start the Minikube profile

```shell
minikube start -p eu-staging
kubectl config use-context eu-staging
kubectl get nodes   # wait for Ready
```

If Minikube fails with `docker: No such file or directory` (this happens after Rancher Desktop restarts - the Docker socket path shifts):

```shell
DOCKER_HOST=unix:///Users/ravisingh/.rd/docker.sock minikube start -p eu-staging
```
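Rather than hard-coding the path, a small helper can pick the right socket at run time - a sketch (the `docker_host` function name and the fallback path are my own, not from the repo):

```shell
# Prefer Rancher Desktop's Docker socket when it exists; otherwise fall
# back to the default daemon socket. Both paths are assumptions about a
# typical macOS / Linux setup.
docker_host() {
  if [ -S "$HOME/.rd/docker.sock" ]; then
    echo "unix://$HOME/.rd/docker.sock"
  else
    echo "unix:///var/run/docker.sock"
  fi
}

# Export the result before starting the profile:
#   DOCKER_HOST="$(docker_host)" minikube start -p eu-staging
DOCKER_HOST="$(docker_host)"
export DOCKER_HOST
echo "$DOCKER_HOST"
```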

**Memory constraint:** each Minikube profile uses ~1.5–2 GB of the Rancher Desktop VM. Stop any other Minikube profile before starting `eu-staging` to avoid API-server OOM crashes: `minikube stop -p eu-prod`

### 2. Install ArgoCD

```shell
kubectl create namespace argocd
kubectl apply -n argocd --server-side --force-conflicts \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl wait --for=condition=Ready pods --all -n argocd --timeout=180s
```

### 3. Register the GitHub repo via CLI

The Kubernetes Secret (`bootstrap/repo-secret.yaml`) alone is not reliably picked up by ArgoCD v3. Register the repo explicitly through the ArgoCD API before applying the bootstrap:

```shell
# Port-forward ArgoCD server
kubectl port-forward svc/argocd-server -n argocd 18080:443 &

# Get the auto-generated initial password (before wave 3 overwrites it)
INITIAL_PW=$(kubectl get secret argocd-initial-admin-secret \
  -n argocd -o jsonpath='{.data.password}' | base64 -d)

# Login and add repo
argocd login localhost:18080 --username admin --password "$INITIAL_PW" --insecure
argocd repo add https://github.com/ravikrs/learning-argocd \
  --username ravikrs \
  --password <github-pat>
argocd repo list   # confirm STATUS = Successful
```
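For reference, the declarative alternative that proved unreliable here follows ArgoCD’s documented repository-secret shape - roughly this (the Secret name is a guess; the label and fields come from ArgoCD’s declarative-setup docs):

```yaml
# bootstrap/repo-secret.yaml (shape assumed)
apiVersion: v1
kind: Secret
metadata:
  name: repo-learning-argocd
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/ravikrs/learning-argocd
  username: ravikrs
  password: <github-pat>
```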

### 4. Bootstrap

```shell
kubectl apply -f environments/eu-staging-minikube/bootstrap.yaml
```

ArgoCD deploys everything in sync-wave order:

| Wave | What deploys |
|------|--------------|
| 0 | cert-manager |
| 1 | cert-manager-config (ClusterIssuer), reloader |
| 2 | Traefik |
| 3 | argocd-config (ingress + password + insecure mode) |
| 4 | ApplicationSet → svc1, svc2 |
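Sync-wave ordering comes from a standard ArgoCD annotation on each Application (or resource); the Traefik app, for example, would carry something like:

```yaml
# The wave is an integer; lower waves sync (and must become healthy) first.
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "2"
```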

Watch progress:

```shell
watch kubectl get applications -n argocd
```

All apps reach Synced. The Services and Traefik will show Progressing health indefinitely - this is expected, because the Docker driver never assigns a LoadBalancer IP. The services still work via port-forward.

### 5. Add hostnames to `/etc/hosts`

```shell
sudo tee -a /etc/hosts <<'EOF'
127.0.0.1  svc1.eu-staging-minikube.ravikrs.local
127.0.0.1  svc2.eu-staging-minikube.ravikrs.local
127.0.0.1  argocd.eu-staging-minikube.ravikrs.local
EOF
```

### 6. Start port-forward (keep running in a terminal)

```shell
kubectl --context eu-staging port-forward -n ingress svc/traefik 8880:80 8443:443
```

### 7. Access

| Service | URL |
|---------|-----|
| ArgoCD | https://argocd.eu-staging-minikube.ravikrs.local:8443 |
| svc1 | https://svc1.eu-staging-minikube.ravikrs.local:8443 |
| svc2 | https://svc2.eu-staging-minikube.ravikrs.local:8443 |

Login: `admin` / `admin`
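The `admin`/`admin` login works because the wave-3 `argocd-config` app overrides the initial password. ArgoCD stores the admin password as a bcrypt hash in `argocd-secret`, so that override presumably looks something like this (hash truncated to a placeholder; field names from ArgoCD’s admin-password FAQ):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
  namespace: argocd
stringData:
  admin.password: "$2a$10$..."                 # bcrypt hash of "admin" (placeholder)
  admin.passwordMtime: "2024-01-01T00:00:00Z"  # any ISO timestamp
```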


## Cluster Lifecycle

```shell
# Stop (preserves state)
minikube stop -p eu-staging

# Start again
minikube start -p eu-staging

# Delete completely
minikube delete -p eu-staging

# List all profiles
minikube profile list
```

## Why Not k3d?

k3d was the original plan (lighter, faster startup, Docker-based port mapping). On macOS with Rancher Desktop, two issues blocked it:

  1. **inotify limits:** Rancher Desktop’s Lima VM defaults to `fs.inotify.max_user_instances=128`. k3s inside a Docker container needs hundreds of inotify instances for the kubelet and containerd file watchers. Without raising this limit (`sysctl -w fs.inotify.max_user_instances=512`) the k3s node never registers. The fix is temporary - it resets on VM restart.

  2. **API server instability under load:** Even after fixing inotify, the k3s API server inside the k3d container became unresponsive once ArgoCD started reconciling (TLS handler timeouts, `http2: client connection force closed`). Likely resource pressure from running k3s + ArgoCD all inside Docker containers on a shared VM.

Minikube gives each profile a dedicated node running vanilla Kubernetes (not k3s), with its own resource allocation - in practice this avoided the inotify and load issues the k3s-in-Docker setup hit.
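As an aside, if you did want the inotify fix to survive VM restarts, Lima (which Rancher Desktop builds on) supports provisioning scripts via an override file - a sketch, assuming a default Rancher Desktop install on macOS:

```yaml
# ~/Library/Application Support/rancher-desktop/lima/_config/override.yaml
provision:
  - mode: system
    script: |
      #!/bin/sh
      sysctl -w fs.inotify.max_user_instances=512
```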
