## Problem

Adding a custom favicon to a Hugo site is straightforward: drop a `favicon.ico` into `static/`. But making it appear circular in the browser tab is trickier than it sounds.
ImageMagick's alpha channel handling for ICO files is unreliable. Either the corners don't go transparent, or the 1-bit alpha makes the edges look jagged. Either way, the browser ends up showing a square icon.
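One way around the ICO limitation is to skip ICO entirely: build a circular-masked PNG (PNG keeps 8-bit alpha, so the edges stay smooth) and serve that as the favicon. A sketch, assuming ImageMagick 7's `magick` command (use `convert` on IM6); the filenames and 512px size are arbitrary choices:

```shell
# Stand-in square source image (replace with your real icon)
magick -size 512x512 xc:steelblue icon.png

# White circle on a transparent canvas, used as an 8-bit alpha mask
magick -size 512x512 xc:none -fill white -draw "circle 256,256 256,0" mask.png

# Copy the mask into the icon's alpha channel; output stays PNG so the
# smooth 8-bit alpha survives (ICO's 1-bit alpha is what causes the
# jagged/square corners)
magick icon.png mask.png -alpha off -compose CopyOpacity -composite favicon.png
```

Drop `favicon.png` into `static/` and reference it from the site's head partial with `<link rel="icon" type="image/png" href="/favicon.png">` instead of the `.ico`.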
# Installing ArgoCD on Rancher Desktop (Local)

## Prerequisites

```shell
# Confirm you're on the right cluster
kubectl config current-context   # should print: rancher-desktop
kubectl get nodes                # should show lima-rancher-desktop Ready
```

## 1. Install ArgoCD

```shell
kubectl create namespace argocd
kubectl apply -n argocd --server-side --force-conflicts \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```

`--server-side --force-conflicts` is required. A plain client-side `kubectl apply` stores each full manifest in the `last-applied-configuration` annotation, and the ArgoCD install manifest contains CRDs large enough to hit the annotation size limit, so some CRDs are silently skipped, including `ApplicationSet`. Without server-side apply, the `argocd-applicationset-controller` will CrashLoopBackOff immediately after install. `--force-conflicts` handles any field-manager conflicts on re-apply.
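After the apply, it is worth verifying the install and grabbing the UI credentials. These are standard ArgoCD objects (the `argocd-initial-admin-secret` secret is created by the install itself); run against your own cluster:

```shell
# All pods, including argocd-applicationset-controller, should reach Running
kubectl -n argocd get pods

# The initial admin password lives in a well-known secret
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d; echo
```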
# Deploying a Sample App via ArgoCD

## What We Built

A minimal nginx app managed entirely by ArgoCD using Kustomize overlays.
```
config/sample-app/
  base/
    deployment.yaml        # nginx:1.27, replicas: 2
    service.yaml           # ClusterIP on port 80
    kustomization.yaml
  overlays/
    local/
      kustomization.yaml   # references ../../base
apps/sample-app/
  application.yaml         # ArgoCD Application CRD
bootstrap/
  repo-secret.yaml         # git-ignored; applied manually once
```

## Key Concepts

### The GitOps Loop

```
Push to Git → ArgoCD detects change → Syncs to cluster → Cluster matches Git
```

ArgoCD continuously compares desired state (Git) against actual state (cluster). Any drift is automatically corrected because `selfHeal: true` is set in the sync policy.
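The sync policy mentioned above lives in the Application CRD. A minimal sketch of what `apps/sample-app/application.yaml` could look like, given the layout above; the repo URL and destination namespace are assumptions, not the repo's actual manifest:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/repo.git   # hypothetical repo URL
    targetRevision: main
    path: config/sample-app/overlays/local          # Kustomize overlay from the tree above
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      selfHeal: true   # corrects drift back toward Git
      prune: true      # deletes resources removed from Git
```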
# Local DNS via Traefik Ingress (ravikrs.local)

## What We Set Up

A local DNS convention using `/etc/hosts` so all cluster services are reachable via hostname, with no `kubectl port-forward` needed. Traefik (bundled with Rancher Desktop's k3s) is the single ingress entrypoint.
| URL | Service |
|-----|---------|
| http://argocd.ravikrs.local | ArgoCD UI |
| http://sample-app.ravikrs.local | nginx sample app |

## How It Works

```
Browser → /etc/hosts resolves ravikrs.local → 127.0.0.1
        → Traefik (port 80) → routes by hostname → backend Service
```

Rancher Desktop binds the k3s Traefik LoadBalancer to 127.0.0.1:80 on your Mac. Any hostname you point at 127.0.0.1 in `/etc/hosts` will reach Traefik, which then routes to the correct Service based on the `host:` rule in the Ingress manifest.
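Concretely, each service needs a hosts entry (`127.0.0.1  sample-app.ravikrs.local` in `/etc/hosts`) plus an Ingress carrying the `host:` rule Traefik routes on. A sketch for the sample app, assuming the Service is named `sample-app` on port 80 as described earlier:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-app
  namespace: default          # assumed namespace
spec:
  ingressClassName: traefik   # the k3s-bundled ingress controller
  rules:
    - host: sample-app.ravikrs.local   # Traefik matches on this hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sample-app
                port:
                  number: 80
```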
# Helm Chart Managed by ArgoCD

## What We Built

A single reusable Helm chart at `charts/backend-service/` shared by three services (svc1, svc2, svc3), all deployed into the `dev` namespace on Rancher Desktop.

Environment-specific configuration lives outside the chart in `environments/<env>/`, one folder per environment. Each environment folder is self-contained: when a second cluster is added (e.g. minikube for staging), its ArgoCD instance points only at its own environment folder.
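With one shared chart, the per-service differences reduce to a small values file per service per environment. A hypothetical `environments/dev/svc1.values.yaml` might look like this; the keys are illustrative and the actual chart's values schema may differ:

```yaml
# Hypothetical per-service values for charts/backend-service/
image:
  repository: nginx
  tag: "1.27"
replicaCount: 1
service:
  port: 80
ingress:
  enabled: true
  host: svc1.ravikrs.local   # follows the local DNS convention
```

The design choice here is that the chart encodes *how* a backend service is deployed, while each environment folder encodes only *what differs*, which keeps a new environment to a handful of small values files.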
# ApplicationSet - Auto-Discover Services from Git

## What We Built

An eu-dev environment that runs on the same Rancher Desktop cluster as dev, deploying into the namespace `alpha-dev`. The difference is in how ArgoCD Applications are created:
| Environment | How Applications are created | Namespace |
|-------------|------------------------------|-----------|
| dev | One Application YAML per service, applied manually with kubectl | dev |
| eu-dev | One ApplicationSet, applied once; services auto-discovered from values files | alpha-dev |

Both use the same `charts/backend-service/` Helm chart. dev is kept as a reference for the manual pattern.
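The auto-discovery in the eu-dev row is typically done with ArgoCD's Git file generator: every values file matching a glob becomes one Application. A sketch of what such an ApplicationSet could look like; the repo URL, paths, and the convention that each discovered file carries a `name:` key are all assumptions, not the repo's actual manifest:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: eu-dev
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/example/repo.git   # hypothetical
        revision: main
        files:
          # one Application per discovered values file
          - path: environments/eu-dev/services/*.yaml
  template:
    metadata:
      # {{name}} comes from a `name:` key inside each discovered file
      name: '{{name}}-eu-dev'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/repo.git
        targetRevision: main
        path: charts/backend-service
        helm:
          valueFiles:
            # relative to the chart path, back up into the env folder
            - '../../environments/eu-dev/services/{{name}}.yaml'
      destination:
        server: https://kubernetes.default.svc
        namespace: alpha-dev
```

Adding a fourth service is then just committing a new values file; the generator picks it up on the next reconcile, with no extra `kubectl apply`.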
# App-of-Apps Bootstrap Pattern

## What We Built

A root ArgoCD Application that watches `environments/dev/apps/` and manages every Application manifest it finds there. A single `kubectl apply` bootstraps all services in the environment; after that, adding or removing a service is a Git operation only.
## How It Works

```
kubectl apply -f environments/dev/bootstrap.yaml
        │
        │  watches environments/dev/apps/
        ▼
Application: root-dev
  ├── svc1.yaml → Application: svc1-dev → Deployment, Service, Ingress in ns: dev
  ├── svc2.yaml → Application: svc2-dev → Deployment, Service, Ingress in ns: dev
  └── svc3.yaml → Application: svc3-dev → Deployment, Service, Ingress in ns: dev
```

ArgoCD manages Application objects the same way it manages any other Kubernetes resource. The child Applications are reconciled from Git: if you delete `environments/dev/apps/svc2.yaml`, ArgoCD deletes `svc2-dev` (and its workloads, because `prune: true` is set).
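The root Application is itself an ordinary Application whose source path is the directory of child Application manifests. A sketch of what `environments/dev/bootstrap.yaml` could contain; the repo URL and exact field values are assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/repo.git   # hypothetical
    targetRevision: main
    path: environments/dev/apps    # directory of child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd              # child Applications live in the argocd namespace
  syncPolicy:
    automated:
      selfHeal: true
      prune: true   # deleting svc2.yaml in Git deletes Application svc2-dev
```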
# Cert Manager - TLS via ArgoCD

## What We Built

This doc covers the learning exercise: deploying cert-manager as a manually applied ArgoCD Application, creating a self-signed CA issuer, and enabling TLS for eu-dev services via values file changes.
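The self-signed CA issuer follows cert-manager's standard bootstrap pattern: a self-signed ClusterIssuer signs a CA Certificate, and a second ClusterIssuer issues leaf certificates from that CA. A sketch of what the `config/cert-manager/` manifests could look like; the resource names are assumptions:

```yaml
# 1. Bootstrap issuer that signs its own certificates
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-root
spec:
  selfSigned: {}
---
# 2. A CA certificate issued by the bootstrap issuer
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: local-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: local-ca
  secretName: local-ca        # CA keypair stored here
  issuerRef:
    name: selfsigned-root
    kind: ClusterIssuer
---
# 3. The issuer the eu-dev services actually reference for TLS
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: local-ca
spec:
  ca:
    secretName: local-ca
```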
Current approach: in eu-dev-rancher, cert-manager is managed as wave 0 in the sync wave sequence, so no manual `kubectl apply` is needed. The `config/cert-manager/` ClusterIssuer manifests used here are reused as-is by eu-dev-rancher. See docs/09-sync-waves-cluster-complete.md.
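Sync wave ordering uses ArgoCD's standard `sync-wave` annotation. A fragment (metadata only, not a complete manifest) showing how cert-manager could be pinned to wave 0 so it is healthy before the Applications that depend on its certificates:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
  annotations:
    # Lower waves sync first; wave 0 runs before the service Applications
    argocd.argoproj.io/sync-wave: "0"
```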