# Installing ArgoCD on Rancher Desktop (Local)

## Prerequisites

```shell
# Confirm you're on the right cluster
kubectl config current-context   # should print: rancher-desktop
kubectl get nodes                # should show lima-rancher-desktop Ready
```

## 1. Install ArgoCD

```shell
kubectl create namespace argocd
kubectl apply -n argocd --server-side --force-conflicts \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```

`--server-side --force-conflicts` is required. The ArgoCD install manifest is large enough that a plain `kubectl apply` (client-side) hits the annotation size limit and silently skips some CRDs - including ApplicationSet. Without it, the `argocd-applicationset-controller` will CrashLoopBackOff immediately after install. `--force-conflicts` handles any field manager conflicts on re-apply.
# Deploying a Sample App via ArgoCD

## What We Built

A minimal nginx app managed entirely by ArgoCD using Kustomize overlays.
```
config/sample-app/
  base/
    deployment.yaml        # nginx:1.27, replicas: 2
    service.yaml           # ClusterIP on port 80
    kustomization.yaml
  overlays/
    local/
      kustomization.yaml   # references ../../base
apps/sample-app/
  application.yaml         # ArgoCD Application CRD
bootstrap/
  repo-secret.yaml         # git-ignored - applied manually once
```

## Key Concepts

### The GitOps Loop

```
Push to Git → ArgoCD detects change → Syncs to cluster → Cluster matches Git
```

ArgoCD continuously compares desired state (Git) against actual state (cluster). Any drift is automatically corrected because `selfHeal: true` is set in the sync policy.
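The loop above is driven by the Application's sync policy. As a sketch, `apps/sample-app/application.yaml` might look like the following (the repo URL and destination namespace are illustrative placeholders, not taken from the actual repo):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git   # placeholder URL
    targetRevision: main
    path: config/sample-app/overlays/local                # the Kustomize overlay
  destination:
    server: https://kubernetes.default.svc
    namespace: sample-app                                  # assumed namespace
  syncPolicy:
    automated:
      prune: true      # delete cluster resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
    syncOptions:
      - CreateNamespace=true
```

With `selfHeal: true`, even a manual `kubectl scale` on the Deployment is reverted on the next reconciliation.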
# Local DNS via Traefik Ingress (ravikrs.local)

## What We Set Up

A local DNS convention using `/etc/hosts` so all cluster services are reachable via hostname - no `kubectl port-forward` needed. Traefik (bundled with Rancher Desktop k3s) is the single ingress entrypoint.
| URL | Service |
|-----|---------|
| http://argocd.ravikrs.local | ArgoCD UI |
| http://sample-app.ravikrs.local | nginx sample app |

## How It Works

```
Browser → /etc/hosts resolves ravikrs.local → 127.0.0.1
        → Traefik (port 80) → routes by hostname → backend Service
```

Rancher Desktop binds the k3s Traefik LoadBalancer to 127.0.0.1:80 on your Mac. Any hostname you point at 127.0.0.1 in `/etc/hosts` will reach Traefik, which then routes to the correct Service based on the `host:` rule in the Ingress manifest.
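The host-based routing described above comes from a standard Ingress rule. A minimal sketch for the sample app (the hostname is from this doc; the Service name, namespace, and port are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-app
  namespace: sample-app            # assumed namespace
spec:
  ingressClassName: traefik
  rules:
    - host: sample-app.ravikrs.local   # must also map to 127.0.0.1 in /etc/hosts
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sample-app   # assumed Service name
                port:
                  number: 80
```

Traefik matches the incoming `Host:` header against this rule; any hostname without a matching Ingress returns a 404 from Traefik itself.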
# Helm Chart Managed by ArgoCD

## What We Built

A single reusable Helm chart at `charts/backend-service/` shared by three services (svc1, svc2, svc3), all deployed into the `dev` namespace on Rancher Desktop. Environment-specific configuration lives outside the chart in `environments/<env>/`, one folder per environment. Each environment folder is self-contained - when a second cluster is added (e.g. minikube for staging), its ArgoCD instance points only at its own environment folder.
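One way to wire the shared chart to per-environment values is a Helm-type Application whose `valueFiles` point into the environment folder (relative paths within the same repo). A sketch for svc1 in dev - the repo URL and values-file layout are assumptions based on the structure described above:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: svc1-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git    # placeholder URL
    targetRevision: main
    path: charts/backend-service                            # the shared chart
    helm:
      valueFiles:
        - ../../environments/dev/svc1/values.yaml           # assumed path layout
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Because the chart path and the values path are independent, adding svc4 is just a new values file plus one more Application manifest - the chart itself is untouched.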
# ApplicationSet - Auto-Discover Services from Git

## What We Built

An `eu-dev` environment that runs on the same Rancher Desktop cluster as `dev`, deploying into namespace `alpha-dev`. The difference is in how ArgoCD Applications are created:
| Environment | How Applications are created | Namespace |
|-------------|------------------------------|-----------|
| dev | One Application YAML per service, applied manually with kubectl | dev |
| eu-dev | One ApplicationSet, applied once; services auto-discovered from values files | alpha-dev |

Both use the same `charts/backend-service/` Helm chart. `dev` is kept as a reference for the manual pattern.
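The "applied once; services auto-discovered" behavior is typically built on ArgoCD's Git file generator: every matching values file in the repo stamps out one Application from the template. A sketch of what the eu-dev ApplicationSet could look like (the repo URL and values-file layout are assumptions):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: eu-dev
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/example/gitops-repo.git   # placeholder URL
        revision: main
        files:
          - path: environments/eu-dev/*/values.yaml           # one file per service
  template:
    metadata:
      name: '{{path.basename}}-eu-dev'                        # e.g. svc1-eu-dev
    spec:
      project: default
      source:
        repoURL: https://github.com/example/gitops-repo.git
        targetRevision: main
        path: charts/backend-service
        helm:
          valueFiles:
            - '../../{{path}}/values.yaml'                    # dir of the matched file
      destination:
        server: https://kubernetes.default.svc
        namespace: alpha-dev
```

Adding `environments/eu-dev/svc4/values.yaml` to Git is enough for a new `svc4-eu-dev` Application to appear; no kubectl involved.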
# App-of-Apps Bootstrap Pattern

## What We Built

A root ArgoCD Application that watches `environments/dev/apps/` and manages every Application manifest it finds there. A single `kubectl apply` bootstraps all services in the environment; after that, adding or removing a service is a git operation only.
## How It Works

```
kubectl apply -f environments/dev/bootstrap.yaml
        │
        ▼
Application: root-dev          (watches environments/dev/apps/)
        │
        ├── svc1.yaml → Application: svc1-dev → Deployment, Service, Ingress in ns: dev
        ├── svc2.yaml → Application: svc2-dev → Deployment, Service, Ingress in ns: dev
        └── svc3.yaml → Application: svc3-dev → Deployment, Service, Ingress in ns: dev
```

ArgoCD manages Application objects the same way it manages any other Kubernetes resource. The child Applications are reconciled from git - if you delete `environments/dev/apps/svc2.yaml`, ArgoCD deletes `svc2-dev` (and its workloads, because `prune: true`).
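The root Application is itself just an ordinary Application pointed at the directory of child manifests. A sketch of what `environments/dev/bootstrap.yaml` might contain (the repo URL is a placeholder):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git   # placeholder URL
    targetRevision: main
    path: environments/dev/apps      # directory of child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd                # child Applications live in the argocd namespace
  syncPolicy:
    automated:
      prune: true                    # deleting svcN.yaml in Git removes svcN-dev
      selfHeal: true
```

Note the destination namespace is `argocd`, not `dev`: the root app creates Application objects, and those child Applications in turn target `dev`.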
# Cert Manager - TLS via ArgoCD

## What We Built

This doc covers the learning exercise: deploying cert-manager as a manually applied ArgoCD Application, creating a self-signed CA issuer, and enabling TLS for eu-dev services via values file changes.
Current approach: in `eu-dev-rancher`, cert-manager is managed as wave 0 in the sync wave sequence - no manual `kubectl apply` needed. The `config/cert-manager/` ClusterIssuer manifests used here are reused as-is by `eu-dev-rancher`. See `docs/09-sync-waves-cluster-complete.md`.
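A self-signed CA issuer in cert-manager is a two-step chain: a `selfSigned` issuer mints a CA certificate, and a `ca` issuer then signs leaf certificates from it. A sketch of what the `config/cert-manager/` manifests could contain (all resource names here are assumptions):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-bootstrap     # assumed name: only used to mint the CA
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: local-ca                 # assumed name
  namespace: cert-manager        # CA secret must live where cert-manager runs
spec:
  isCA: true
  commonName: local-ca
  secretName: local-ca-secret
  issuerRef:
    name: selfsigned-bootstrap
    kind: ClusterIssuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: local-ca                 # the issuer services reference for TLS
spec:
  ca:
    secretName: local-ca-secret
```

Services then only need to reference the `local-ca` ClusterIssuer (e.g. via an Ingress annotation or a `Certificate` resource) to get a cert signed by the local CA.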
# App-of-Apps vs ApplicationSet - When to Use Which

## The Core Difference

Both patterns answer the same question: how do I manage many ArgoCD Applications without manually applying each one? But they solve it at different levels.
**App-of-Apps** - a manually maintained parent Application that watches a directory of hand-written child Application manifests. You own every YAML.
# Sync Waves: Cluster-Complete Bootstrap

## What This Covers

How to bootstrap a fully self-contained cluster environment - cert-manager, Traefik, ArgoCD ingress, and services - using ArgoCD sync waves, with a single `kubectl apply` as the only manual step after ArgoCD itself is installed.
## What Are Sync Waves?

ArgoCD processes resources in a sync operation in wave order. Each wave must reach Healthy before the next wave starts.
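Waves are assigned with the `argocd.argoproj.io/sync-wave` annotation on each resource (including child Applications); lower numbers sync first, and the default is wave 0. As an illustrative fragment - the wave numbers shown are an example ordering, not necessarily the repo's exact sequence:

```yaml
# Infrastructure first, e.g. cert-manager
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "0"
---
# Then issuers that depend on the cert-manager CRDs
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "1"
---
# Services last, once TLS infrastructure is Healthy
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "2"
```

The value must be a string (hence the quotes); negative waves are also allowed for resources that must exist before everything else.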