
eu-prod Environment: Minikube with its own ArgoCD

Ravi Singh
Software engineer with 15+ years building backend systems and cloud platforms across fintech, automotive, and academia. I write about the things I build, debug, and learn — so I don’t forget them.
Learning ArgoCD - This article is part of a series.
Part 1: This Article

eu-prod Environment: Minikube with its own ArgoCD

Architecture Decision: Isolated ArgoCD per Cluster

The Two Models

When you have multiple Kubernetes clusters, there are two ways to run ArgoCD:

Model A - Single ArgoCD, multiple cluster targets

Rancher Desktop
  └── ArgoCD (one instance)
        ├── deploys to → rancher-desktop  (dev)
        └── deploys to → minikube         (prod)

One pane of glass. You register external clusters with argocd cluster add. ArgoCD on Rancher Desktop reaches into Minikube’s API server to apply resources.

Model B - One ArgoCD per cluster (what we’re doing)

Rancher Desktop          Minikube
  └── ArgoCD (dev)         └── ArgoCD (prod)
        └── manages dev          └── manages prod

Each cluster is fully self-contained. Dev and prod are isolated - a broken dev ArgoCD cannot affect prod deployments, and vice versa.

Why Model B for This Repo

  • Blast-radius isolation: prod failures don’t bleed into dev control plane
  • Closer to real org patterns: many companies run isolated ArgoCD per tier (dev/staging/prod) or per team, especially when prod needs stricter access
  • Simpler cluster registration: each ArgoCD only manages its own cluster using https://kubernetes.default.svc - no external cluster credentials needed
  • Prepares for GitOps promotion: to promote from dev → prod, you update Git and wait for prod’s ArgoCD to pick it up - the promotion is a pull, not a push

The trade-off: no single pane of glass. You need to open two ArgoCD UIs to see both environments.


What Changed from eu-dev-rancher

|                 | eu-dev-rancher                 | eu-prod-minikube                 |
|-----------------|--------------------------------|----------------------------------|
| Cluster         | Rancher Desktop                | Minikube (-p eu-prod)            |
| kubectl context | rancher-desktop                | eu-prod                          |
| ArgoCD          | Shared with host               | Own instance                     |
| Replicas        | 1                              | 2                                |
| svc1 image      | nginx:1.27                     | nginx:1.27                       |
| svc2 image      | nginx:1.26                     | nginx:1.26                       |
| Hostnames       | *.eu-dev-rancher.ravikrs.local | *.eu-prod-minikube.ravikrs.local |

Note: All three environments (dev/staging/prod) were normalized to the same image tags (svc1 → nginx:1.27, svc2 → nginx:1.26) as a baseline. Promotion is modelled by bumping a tag in one environment’s values file and pushing to Git.
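In practice the promotion commit is tiny. A hedged sketch of what the bump looks like in a values file (the path and key names are illustrative of the pattern, not copied from the repo):

```yaml
# environments/eu-prod-minikube/services/svc1/values.yaml (illustrative path)
image:
  repository: nginx
  tag: "1.28"   # was "1.27" - committing and pushing this bump is the promotion
```

Prod’s ArgoCD notices the Git change on its next poll and syncs the new tag - nothing is pushed into the cluster from outside.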


LoadBalancer + minikube tunnel Explained

What LoadBalancer means in Kubernetes

When you create a Service with type: LoadBalancer, Kubernetes asks the underlying cloud provider to provision an external load balancer (an AWS ELB/NLB, a GCP network load balancer, etc.) and assign a public IP. That IP (or hostname) is written back into Service.status.loadBalancer.ingress[0].ip.

On Rancher Desktop, this works automatically because Rancher Desktop ships with a built-in network layer that handles LoadBalancer services.

On Minikube, there is no cloud provider - so the Service gets created, but EXTERNAL-IP stays <pending> forever:

NAME      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)
traefik   LoadBalancer   10.101.9.188   <pending>     80:30368/TCP,443:30596/TCP

What minikube tunnel does

minikube tunnel runs a process on your Mac that:

  1. Watches for LoadBalancer services in the Minikube cluster
  2. Assigns each one a local IP (typically 127.0.0.1)
  3. Sets up routing so traffic to that IP reaches the Minikube VM

Once the tunnel is running, the Service gets a real external IP:

NAME      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)
traefik   LoadBalancer   10.101.9.188   127.0.0.1     80:30368/TCP,443:30596/TCP

The tunnel needs sudo (it modifies routing tables) and must stay running - if you kill it, the external IP disappears and Traefik becomes unreachable.

Running the tunnel

In a dedicated terminal (leave it open):

minikube tunnel

It will ask for your sudo password. Keep this terminal open as long as you want to reach the prod cluster.
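If you script the bootstrap, you can block until the tunnel has actually assigned an IP before moving on (this sketch assumes Traefik runs in the ingress namespace, as in the port-forward commands later in this post):

```shell
# Poll until the LoadBalancer service reports an external IP
# (requires `minikube tunnel` to be running in another terminal)
until kubectl --context eu-prod get svc traefik -n ingress \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}' | grep -q .; do
  echo "waiting for minikube tunnel to assign an external IP..."
  sleep 2
done
```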

Add /etc/hosts entries

Once the tunnel assigns 127.0.0.1:

sudo sh -c 'cat >> /etc/hosts <<EOF
127.0.0.1  argocd.eu-prod-minikube.ravikrs.local
127.0.0.1  svc1.eu-prod-minikube.ravikrs.local
127.0.0.1  svc2.eu-prod-minikube.ravikrs.local
EOF'
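The heredoc above appends unconditionally, so re-running it duplicates entries. A hedged, idempotent variant (the function name is mine; it takes the file as an argument so you can dry-run it on a copy before pointing it at /etc/hosts with sudo):

```shell
# Append each hostname only if it is not already present in the given file
add_hosts_entries() {
  hosts_file=$1
  for host in \
    argocd.eu-prod-minikube.ravikrs.local \
    svc1.eu-prod-minikube.ravikrs.local \
    svc2.eu-prod-minikube.ravikrs.local; do
    grep -qF "$host" "$hosts_file" || echo "127.0.0.1  $host" >> "$hosts_file"
  done
}

# Dry-run on a copy first, then run against the real file under sudo:
#   cp /etc/hosts /tmp/hosts.copy && add_hosts_entries /tmp/hosts.copy
```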

Bootstrap Process

1. Start the Minikube profile

minikube start -p eu-prod
kubectl config use-context eu-prod
kubectl get nodes   # wait for Ready

If Minikube fails with docker: No such file or directory, the Docker socket path has shifted after a Rancher Desktop restart. Use the explicit socket path:

DOCKER_HOST=unix:///Users/ravisingh/.rd/docker.sock minikube start -p eu-prod

2. Install ArgoCD

kubectl --context eu-prod create namespace argocd
kubectl --context eu-prod apply -n argocd --server-side --force-conflicts \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl --context eu-prod wait --for=condition=Ready pods --all -n argocd --timeout=180s

3. Register the GitHub repo

The K8s Secret alone is not reliably picked up by ArgoCD. Use the CLI:

# Port-forward ArgoCD server
kubectl --context eu-prod port-forward svc/argocd-server -n argocd 18080:443 &

# Get the auto-generated initial password
INITIAL_PW=$(kubectl --context eu-prod get secret argocd-initial-admin-secret \
  -n argocd -o jsonpath='{.data.password}' | base64 -d)

# Login and add repo
argocd login localhost:18080 --username admin --password "$INITIAL_PW" --insecure
argocd repo add https://github.com/ravikrs/learning-argocd \
  --username ravikrs \
  --password <github-pat>

4. Bootstrap

kubectl --context eu-prod apply -f environments/eu-prod-minikube/bootstrap.yaml

ArgoCD then self-manages from Git:

  • Wave 0: cert-manager
  • Wave 1: cert-manager-config (ClusterIssuers), reloader
  • Wave 2: traefik
  • Wave 3: argocd-config (argocd-server insecure mode, admin password, ingress)
  • Wave 4: appset (discovers services from environments/eu-prod-minikube/services/)
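The waves themselves are plain argocd.argoproj.io/sync-wave annotations on the Application manifests; ArgoCD syncs lower waves first and waits for them to become healthy. A sketch of what the wave-2 Traefik Application could look like (the source path is illustrative, not copied from the repo):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: traefik
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "2"   # synced after waves 0 and 1 are healthy
spec:
  project: default
  source:
    repoURL: https://github.com/ravikrs/learning-argocd
    path: environments/eu-prod-minikube/traefik   # illustrative path
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc        # this cluster, no external creds
    namespace: ingress
```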

Gotcha: Port Conflict with Rancher Desktop (Docker Driver)

Minikube with the Docker driver runs inside Rancher Desktop’s Docker daemon. When minikube tunnel assigns 127.0.0.1 as the LoadBalancer external IP, it does NOT create a kernel network route - instead it relies on Docker’s port forwarding. But Rancher Desktop’s Lima SSH process already holds port bindings for *:80 and *:443 (for its own k3s Traefik). Any traffic to 127.0.0.1:443 goes to Rancher Desktop’s Traefik, not Minikube’s.

Symptom: curl to https://<prod-host> returns 404 - Rancher Desktop’s Traefik receives the request and finds no matching route for the prod hostname.

Fix: Use kubectl port-forward to bypass the LoadBalancer entirely:

kubectl --context eu-prod port-forward -n ingress svc/traefik 8443:443 8080:80

Then access services on port 8443 (HTTPS) or 8080 (HTTP):

https://svc1.eu-prod-minikube.ravikrs.local:8443
https://svc2.eu-prod-minikube.ravikrs.local:8443
https://argocd.eu-prod-minikube.ravikrs.local:8443

Why this works: kubectl port-forward creates a direct tunnel to the Traefik pod through the Kubernetes API, completely bypassing the LoadBalancer and Docker’s network stack. The request goes: Mac → kube-apiserver → Traefik pod.
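You can also sanity-check routing without any /etc/hosts entries by letting curl pin the hostname to the forwarded port for a single request (-k skips verification of the locally issued certificate):

```shell
# Send the prod hostname to 127.0.0.1:8443 for this request only
# (requires the kubectl port-forward from above to be running)
curl -k --resolve svc1.eu-prod-minikube.ravikrs.local:8443:127.0.0.1 \
  https://svc1.eu-prod-minikube.ravikrs.local:8443/
```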

Permanent fix options (if you want port 443 to work cleanly):

  • Switch Minikube to a non-Docker driver (e.g. qemu2): minikube start --driver=qemu2
  • Stop Rancher Desktop’s port 443 binding (disrupts dev environment)
  • Use non-standard ports in Traefik itself (exposing 8443 instead of 443 - effectively what the port-forward already gives us)

For learning purposes, port-forward is the pragmatic choice.


Gotcha: ArgoCD Redirect Loop (ERR_TOO_MANY_REDIRECTS)

Symptom: Opening https://argocd.eu-prod-minikube.ravikrs.local:8443 in a browser shows ERR_TOO_MANY_REDIRECTS.

Cause: ArgoCD reads server.insecure: "true" from argocd-cmd-params-cm as an environment variable at pod startup. The ConfigMap is applied by ArgoCD at wave 3 - but the argocd-server pod was already running from the initial install. The env var ARGOCD_SERVER_INSECURE was never injected into the running pod.

Without insecure mode active, argocd-server runs in HTTPS mode and redirects any plain HTTP request back to HTTPS. Traefik terminates TLS and forwards HTTP to argocd-server, argocd-server redirects back to HTTPS, Traefik serves it as HTTP again - infinite loop.

Verify:

kubectl --context eu-prod exec -n argocd deploy/argocd-server -- \
  printenv ARGOCD_SERVER_INSECURE
# (empty = not set = insecure mode is NOT active)

Fix: restart argocd-server so it picks up the ConfigMap:

kubectl --context eu-prod rollout restart deployment/argocd-server -n argocd
kubectl --context eu-prod rollout status deployment/argocd-server -n argocd

# Verify it's now active
kubectl --context eu-prod exec -n argocd deploy/argocd-server -- \
  printenv ARGOCD_SERVER_INSECURE
# true

Permanent fix (already in Git): The argocd-server Deployment carries the annotation reloader.stakater.com/auto: "true". Reloader (installed at wave 1) watches namespaces selected by kubernetes.io/metadata.name in (argocd). When wave 3 updates argocd-cmd-params-cm, Reloader detects the change (argocd-server references this ConfigMap via envFrom) and automatically restarts argocd-server - no manual intervention needed.

How the annotation is applied without conflicting with ArgoCD’s upstream install manifest: argocd-config uses ServerSideApply=true, so ArgoCD only claims ownership of the reloader.stakater.com/auto annotation field. The upstream field manager retains ownership of all other Deployment fields - no conflict.
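A sketch of what such a patch manifest can look like - with ServerSideApply=true only the fields listed here change ownership, so the manifest needs nothing beyond the identifying metadata and the annotation:

```yaml
# argocd-server-deployment-patch.yaml (sketch)
# Applied with ServerSideApply=true, so ArgoCD claims ownership of
# only these fields; the upstream install manifest keeps the rest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    reloader.stakater.com/auto: "true"
```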

Simpler alternative: If you want to remove Stakater Reloader entirely, delete the reloader.yaml Application and argocd-server-deployment-patch.yaml from Git, and document a one-time manual step after bootstrap:

kubectl rollout restart deployment/argocd-server -n argocd

The redirect loop only happens once per fresh bootstrap, so the manual step is a reasonable trade-off for a learning environment.


Accessing prod ArgoCD

# Port-forward Traefik (works reliably with Docker driver + Rancher Desktop)
kubectl --context eu-prod port-forward -n ingress svc/traefik 8443:443 8080:80

# Then open
open https://argocd.eu-prod-minikube.ravikrs.local:8443

# Or direct port-forward to argocd-server (bypasses Traefik entirely)
kubectl --context eu-prod port-forward svc/argocd-server -n argocd 8081:80
open http://localhost:8081

Login: admin / admin


Cluster Lifecycle

# Stop (preserves state)
minikube stop -p eu-prod

# Start again
DOCKER_HOST=unix:///Users/ravisingh/.rd/docker.sock minikube start -p eu-prod

# Delete completely
minikube delete -p eu-prod

# List all profiles
minikube profile list

Memory constraint: Each Minikube profile uses ~1.5–2GB of the Rancher Desktop VM’s memory. With Rancher Desktop’s default 6GB VM, running eu-staging + eu-prod simultaneously alongside Rancher Desktop’s own k3s causes API server crashes. Run one Minikube cluster at a time - stop the other before starting a new one.


Cluster-to-Cluster Comparison

To see both environments at a glance:

# dev
kubectl --context rancher-desktop get pods -n alpha-dev

# staging
kubectl --context eu-staging get pods -n alpha-staging

# prod
kubectl --context eu-prod get pods -n alpha-prod

The same service names (svc1, svc2) run in all three, with the same image tags (svc1 → nginx:1.27, svc2 → nginx:1.26) as the current baseline. Promotion is modelled by bumping a tag in a specific environment’s values file in Git.
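The three commands above can be collapsed into one loop (context:namespace pairs taken from those commands; contexts whose cluster is stopped will simply error for that iteration):

```shell
# Show pods across all three environments in one pass
for pair in rancher-desktop:alpha-dev eu-staging:alpha-staging eu-prod:alpha-prod; do
  ctx=${pair%%:*} ns=${pair##*:}
  echo "== $ctx / $ns =="
  kubectl --context "$ctx" get pods -n "$ns"
done
```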
