# eu-prod Environment: Minikube with its own ArgoCD

## Architecture Decision: Isolated ArgoCD per Cluster

### The Two Models
When you have multiple Kubernetes clusters, there are two ways to run ArgoCD:
**Model A - Single ArgoCD, multiple cluster targets.**
One pane of glass. You register external clusters with argocd cluster add.
ArgoCD on Rancher Desktop reaches into Minikube’s API server to apply resources.
**Model B - One ArgoCD per cluster (what we're doing).**
Each cluster is fully self-contained. Dev and prod are isolated - a broken dev ArgoCD cannot affect prod deployments, and vice versa.
### Why Model B for This Repo
- Blast-radius isolation: prod failures don’t bleed into dev control plane
- Closer to real org patterns: many companies run isolated ArgoCD per tier (dev/staging/prod) or per team, especially when prod needs stricter access
- Simpler cluster registration: each ArgoCD only manages its own cluster via `https://kubernetes.default.svc` - no external cluster credentials needed
- Prepares for GitOps promotion: to promote from dev → prod, you update Git and wait for prod's ArgoCD to pick it up - the promotion is a pull, not a push
The trade-off: no single pane of glass. You need to open two ArgoCD UIs to see both environments.
## What Changed from eu-dev-rancher
| | eu-dev-rancher | eu-prod-minikube |
|---|---|---|
| Cluster | Rancher Desktop | Minikube (-p eu-prod) |
| kubectl context | rancher-desktop | eu-prod |
| ArgoCD | Shared with host | Own instance |
| Replicas | 1 | 2 |
| svc1 image | nginx:1.27 | nginx:1.27 |
| svc2 image | nginx:1.26 | nginx:1.26 |
| Hostnames | *.eu-dev-rancher.ravikrs.local | *.eu-prod-minikube.ravikrs.local |
Note: All three environments (dev/staging/prod) were normalized to the same image tags (`svc1:1.27`, `svc2:1.26`) as a baseline. Promotion is modelled by bumping a tag in one environment's values file and pushing to Git.
## LoadBalancer + minikube tunnel Explained

### What LoadBalancer means in Kubernetes
When you create a Service with `type: LoadBalancer`, Kubernetes asks the
underlying cloud provider to provision an external load balancer (an AWS ELB/NLB,
a GCP network load balancer, etc.) and assign a public IP. That IP is written back into
`Service.status.loadBalancer.ingress[0].ip`.
On Rancher Desktop, this works automatically: its bundled k3s ships the built-in ServiceLB (klipper-lb), which handles LoadBalancer Services.
On Minikube, there is no cloud provider - so the Service gets created, but
`EXTERNAL-IP` stays `<pending>` forever:
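A quick check (assuming Traefik's Service is named `traefik` in the `traefik` namespace; the output shape is illustrative):

```shell
kubectl --context eu-prod -n traefik get svc traefik
# NAME      TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
# traefik   LoadBalancer   10.96.x.x    <pending>     80:3xxxx/TCP,443:3xxxx/TCP
```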
### What minikube tunnel does
`minikube tunnel` runs a process on your Mac that:
- Watches for LoadBalancer services in the Minikube cluster
- Assigns each one a local IP (typically `127.0.0.1`)
- Sets up routing so traffic to that IP reaches the Minikube VM
Once the tunnel is running, the Service gets a real external IP:
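Re-running the Service check (assuming Traefik's Service is `traefik` in namespace `traefik`) now shows an address:

```shell
kubectl --context eu-prod -n traefik get svc traefik
# EXTERNAL-IP now shows the tunnel-assigned address
# (typically 127.0.0.1 with the Docker driver on macOS)
```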
The tunnel needs sudo (it modifies routing tables) and must stay running -
if you kill it, the external IP disappears and Traefik becomes unreachable.
### Running the tunnel
In a dedicated terminal (leave it open):
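A minimal sketch, assuming the `eu-prod` profile name used throughout this setup:

```shell
# Runs until interrupted; prompts for sudo to modify routing
minikube tunnel -p eu-prod
```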
It will ask for your sudo password. Keep this terminal open as long as you want to reach the prod cluster.
### Add /etc/hosts entries
Once the tunnel assigns 127.0.0.1:
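A hedged sketch - the `argocd`, `svc1`, and `svc2` hostnames follow this environment's `*.eu-prod-minikube.ravikrs.local` pattern, but the exact set depends on your services:

```shell
sudo tee -a /etc/hosts <<'EOF'
127.0.0.1 argocd.eu-prod-minikube.ravikrs.local
127.0.0.1 svc1.eu-prod-minikube.ravikrs.local
127.0.0.1 svc2.eu-prod-minikube.ravikrs.local
EOF
```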
## Bootstrap Process

### 1. Start the Minikube profile
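A minimal sketch; CPU/memory sizing flags are omitted and the Docker driver is assumed (as used later in this doc):

```shell
minikube start -p eu-prod --driver=docker
kubectl config use-context eu-prod
```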
If Minikube fails with `docker: No such file or directory`, the Docker socket path has
shifted after a Rancher Desktop restart. Point Minikube at the explicit socket path:
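One possible workaround, assuming Rancher Desktop's default user socket at `~/.rd/docker.sock`:

```shell
export DOCKER_HOST="unix://$HOME/.rd/docker.sock"
minikube start -p eu-prod --driver=docker
```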
### 2. Install ArgoCD
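A standard install sketch using the upstream manifest (pin a specific version in real use):

```shell
kubectl --context eu-prod create namespace argocd
kubectl --context eu-prod apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```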
### 3. Register the GitHub repo
The K8s Secret alone is not reliably picked up by ArgoCD. Use the CLI:
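A sketch using the `argocd` CLI over a temporary port-forward; the repo URL is a placeholder for this repo's actual URL:

```shell
kubectl --context eu-prod -n argocd port-forward svc/argocd-server 8080:443 &
argocd login localhost:8080 --username admin --insecure \
  --password "$(kubectl --context eu-prod -n argocd get secret argocd-initial-admin-secret \
    -o jsonpath='{.data.password}' | base64 -d)"
argocd repo add https://github.com/<your-user>/<your-repo>.git
```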
### 4. Bootstrap
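The root Application's path is repo-specific; a hypothetical app-of-apps apply looks like:

```shell
kubectl --context eu-prod apply -n argocd -f <path-to-root-application>.yaml
```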
ArgoCD then self-manages from Git:
- Wave 0: cert-manager
- Wave 1: cert-manager-config (ClusterIssuers), reloader
- Wave 2: traefik
- Wave 3: argocd-config (argocd-server insecure mode, admin password, ingress)
- Wave 4: appset (discovers services from `environments/eu-prod-minikube/services/`)
## Gotcha: Port Conflict with Rancher Desktop (Docker Driver)
Minikube with the Docker driver runs inside Rancher Desktop’s Docker daemon.
When `minikube tunnel` assigns `127.0.0.1` as the LoadBalancer external IP,
it does NOT create a kernel network route - instead it relies on Docker's port
forwarding. But Rancher Desktop's Lima SSH process already holds port bindings
for `*:80` and `*:443` (for its own k3s Traefik). Any traffic to
`127.0.0.1:443` goes to Rancher Desktop's Traefik, not Minikube's.
Symptom: curl to https://<prod-host> returns 404 - Rancher Desktop’s
Traefik receives the request and finds no matching route for the prod hostname.
Fix: Use kubectl port-forward to bypass the LoadBalancer entirely:
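A sketch, assuming Traefik's Service is named `traefik` in the `traefik` namespace:

```shell
kubectl --context eu-prod -n traefik port-forward svc/traefik 8443:443 8080:80
```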
Then access services on port 8443 (HTTPS) or 8080 (HTTP):
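For example (the hostname must resolve to `127.0.0.1` via `/etc/hosts`; `-k` skips verification of the locally issued certificate):

```shell
curl -k https://svc1.eu-prod-minikube.ravikrs.local:8443/
```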
Why this works: kubectl port-forward creates a direct tunnel to the Traefik
pod through the Kubernetes API, completely bypassing the LoadBalancer and
Docker’s network stack. The request goes: Mac → kube-apiserver → Traefik pod.
Permanent fix options (if you want port 443 to work cleanly):
- Switch Minikube to a non-Docker driver (e.g. `qemu2`): `minikube start --driver=qemu2`
- Stop Rancher Desktop's port 443 binding (disrupts the dev environment)
- Use different ports in Traefik (non-standard 8443 exposed as 443 is already what we have)
For learning purposes, port-forward is the pragmatic choice.
## Gotcha: ArgoCD Redirect Loop (ERR_TOO_MANY_REDIRECTS)
Symptom: Opening https://argocd.eu-prod-minikube.ravikrs.local:8443 in a
browser shows ERR_TOO_MANY_REDIRECTS.
Cause: ArgoCD reads server.insecure: "true" from argocd-cmd-params-cm
as an environment variable at pod startup. The ConfigMap is applied by
ArgoCD at wave 3 - but the argocd-server pod was already running from the
initial install. The env var ARGOCD_SERVER_INSECURE was never injected into
the running pod.
Without insecure mode active, argocd-server runs in HTTPS mode and redirects any plain HTTP request back to HTTPS. Traefik terminates TLS and forwards HTTP to argocd-server, argocd-server redirects back to HTTPS, Traefik serves it as HTTP again - infinite loop.
Verify:
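One way to check is to inspect the running pod's environment for the flag described above:

```shell
kubectl --context eu-prod -n argocd exec deploy/argocd-server -- env | grep -i INSECURE \
  || echo "insecure flag not injected yet"
```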
Fix: restart argocd-server so it picks up the ConfigMap:
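The restart itself:

```shell
kubectl --context eu-prod -n argocd rollout restart deployment/argocd-server
```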
Permanent fix (already in Git): The argocd-server Deployment carries the annotation
reloader.stakater.com/auto: "true". Reloader (wave 1) watches all namespaces matching
kubernetes.io/metadata.name in (argocd). When wave 3 updates argocd-cmd-params-cm,
Reloader detects the change (argocd-server references this ConfigMap via envFrom),
and automatically restarts argocd-server - no manual intervention needed.
How the annotation is applied without conflicting with ArgoCD’s upstream install manifest:
argocd-config uses ServerSideApply=true, so ArgoCD only claims ownership of the
reloader.stakater.com/auto annotation field. The upstream field manager retains ownership
of all other Deployment fields - no conflict.
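The shape of that patch (a sketch; only the annotation field is claimed by the server-side apply) looks roughly like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    reloader.stakater.com/auto: "true"
```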
Simpler alternative: If you want to remove Stakater Reloader entirely, delete the
`reloader.yaml` Application and `argocd-server-deployment-patch.yaml` from Git, and document a one-time manual step after bootstrap:

1. `kubectl rollout restart deployment/argocd-server -n argocd`

The redirect loop only happens once per fresh bootstrap, so the manual step is a reasonable trade-off for a learning environment.
## Accessing prod ArgoCD
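Assuming the `kubectl port-forward` workaround from the port-conflict gotcha is running:

```shell
open https://argocd.eu-prod-minikube.ravikrs.local:8443
```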
Login: admin / admin
## Cluster Lifecycle
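Typical per-profile lifecycle commands:

```shell
minikube stop -p eu-prod     # free VM memory when not in use
minikube start -p eu-prod    # resume; cluster state persists
minikube delete -p eu-prod   # tear the profile down completely
```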
Memory constraint: Each Minikube profile uses ~1.5–2GB of the Rancher Desktop VM’s memory. With Rancher Desktop’s default 6GB VM, running eu-staging + eu-prod simultaneously alongside Rancher Desktop’s own k3s causes API server crashes. Run one Minikube cluster at a time - stop the other before starting a new one.
## Cluster-to-Cluster Comparison
To see both environments at a glance:
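A minimal sketch using the kubectl contexts named in the comparison table:

```shell
kubectl --context rancher-desktop get pods -A
kubectl --context eu-prod get pods -A
```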
The same service names (`svc1`, `svc2`) run in all three environments, with the same image tags
(`svc1:1.27`, `svc2:1.26`) as the current baseline. Promotion is modelled by bumping
a tag in a specific environment's values file in Git.