# Helm Chart Managed by ArgoCD

## What We Built
A single reusable Helm chart at charts/backend-service/ shared by three services
(svc1, svc2, svc3), all deployed into the dev namespace on Rancher Desktop.
Environment-specific configuration lives outside the chart in environments/<env>/,
one folder per environment. Each environment folder is self-contained - when a
second cluster is added (e.g. minikube for staging), its ArgoCD instance points
only at its own environment folder.
## Cluster Plan
| Environment | Cluster | Namespace | Deploy pattern | Status |
|---|---|---|---|---|
| dev | Rancher Desktop | dev | manual Applications (App-of-Apps) | learning exercise |
| eu-dev | Rancher Desktop | alpha-dev | ApplicationSet (manual apply) | learning exercise |
| eu-dev-rancher | Rancher Desktop | alpha-dev | ApplicationSet via sync waves (GitOps) | current |
| eu-staging | Minikube | alpha-staging | ApplicationSet via sync waves (GitOps) | future |
| eu-prod | TBD | alpha-prod | ApplicationSet via sync waves (GitOps) | future |
Naming convention:

- Folder = `<region>-<env>-<cluster>` - identifies the cluster (one folder per ArgoCD instance)
- Namespace = `<team>-<env>` - team owns the namespace, region lives at cluster level
- Application name = `<team>-<svc>-<region>-<env>-<cluster>` - unique across a shared ArgoCD instance
- ApplicationSet name = `<team>-<region>-<env>-<cluster>-services`
dev and eu-dev are preserved as learning references. eu-dev-rancher is the
current environment - it uses sync waves to bootstrap the full cluster from a single
kubectl apply. See docs/09-sync-waves-cluster-complete.md. All environments share
the same charts/backend-service/ chart.
## Final Structure
| |
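The tree below is a sketch inferred from the paths referenced throughout this doc, not a verbatim listing:

```text
charts/
  backend-service/          # shared chart (Chart.yaml, values.yaml, templates/)
environments/
  dev/
    apps/                   # svc1.yaml, svc2.yaml, svc3.yaml (Application manifests)
    values/                 # svc1.yaml, svc2.yaml, svc3.yaml (env values)
  eu-dev/                   # learning reference (ApplicationSet, manual apply)
  eu-dev-rancher/           # current (ApplicationSet via sync waves)
docs/
```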
## Values layering
Helm merges values left to right - later files override earlier ones:
| |
Each environment values file is fully self-contained: it sets the service name, image repository, and all env-specific overrides. Anything not set falls through to the chart default.
## Step 1 - Scaffold the chart
| |
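Presumably a single scaffold command; `helm create` takes a path and uses the last segment as the chart name:

```shell
helm create charts/backend-service
```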
## Step 2 - Remove boilerplate files
| |
templates/ should now contain only: _helpers.tpl, deployment.yaml,
service.yaml, ingress.yaml.
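A sketch of the cleanup, assuming the default files `helm create` generates:

```shell
cd charts/backend-service/templates
rm hpa.yaml serviceaccount.yaml NOTES.txt
rm -r tests/
```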
## Step 3 - Simplify deployment.yaml
The deployment template generated by `helm create` still references the ServiceAccount we just removed.
Replace `charts/backend-service/templates/deployment.yaml` with:
| |
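A minimal sketch of a deployment template with the ServiceAccount reference dropped; value names follow `helm create` defaults, so treat the exact shape as an assumption:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "backend-service.fullname" . }}
  labels:
    {{- include "backend-service.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "backend-service.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "backend-service.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
```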
## Step 4 - Create environment values files
Each file is self-contained: service identity (nameOverride, image.repository)
and env-specific overrides (replicaCount, image.tag, ingress host) all in one place.
### dev
environments/dev/values/svc1.yaml:
| |
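A sketch of the self-contained shape described above; the registry path and host are hypothetical placeholders:

```yaml
nameOverride: svc1
replicaCount: 1
image:
  repository: ghcr.io/example/svc1   # hypothetical registry path
  tag: "1.0.0"
ingress:
  enabled: true
  hosts:
    - host: svc1.dev.local           # hypothetical; must match the /etc/hosts entry
      paths:
        - path: /
          pathType: Prefix
```

svc2 and svc3 follow the same shape with their own name, repository, tag, and host.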
environments/dev/values/svc2.yaml:
| |
environments/dev/values/svc3.yaml:
| |
### staging (create when minikube cluster is ready)
environments/staging/values/svc1.yaml:
| |
## Step 5 - Validate locally with helm template
| |
Check: correct name, replicas, image tag, and ingress host per service.
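One plausible form of that check, rendering each service and grepping the fields that differ (adjust the patterns to your values files):

```shell
for svc in svc1 svc2 svc3; do
  echo "--- $svc ---"
  helm template "$svc" charts/backend-service \
    -f "environments/dev/values/$svc.yaml" \
    | grep -E 'name:|replicas:|image:|host:'
done
```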
## Step 6 - Create ArgoCD Application manifests
The env values file now lives outside the chart directory, so ArgoCD needs the
multiple sources feature (ArgoCD ≥ 2.6) to reach it. The second source
(ref: values) acts as a pointer to the repo root, enabling $values/ prefixed
paths in valueFiles.
environments/dev/apps/svc1.yaml:
| |
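A sketch matching the multi-source description above; the `repoURL` is a hypothetical placeholder:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: svc1-dev
  namespace: argocd
spec:
  project: default
  sources:
    # source 1: the shared chart
    - repoURL: https://github.com/example/gitops-repo.git   # hypothetical
      targetRevision: HEAD
      path: charts/backend-service
      helm:
        valueFiles:
          - $values/environments/dev/values/svc1.yaml
    # source 2: same repo, referenced as $values so valueFiles
    # can reach outside the chart directory
    - repoURL: https://github.com/example/gitops-repo.git
      targetRevision: HEAD
      ref: values
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```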
Repeat for svc2.yaml and svc3.yaml - only name and valueFiles path change.
All three services deploy into the same dev namespace.
When the staging cluster is ready, environments/staging/apps/svc1.yaml uses
the same structure but points at environments/staging/values/svc1.yaml.
All environments use targetRevision: HEAD - see the Chart Strategy section
for why.
## Step 7 - Add /etc/hosts entries
| |
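Assuming ingress hosts of the `svc1.dev.local` form, the entries might be added with:

```shell
echo "127.0.0.1 svc1.dev.local svc2.dev.local svc3.dev.local" \
  | sudo tee -a /etc/hosts
```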
Staging/prod entries get added when those clusters are active.
## Step 8 - Commit and push
| |
## Step 9 - Apply and verify (start with svc1-dev)
| |
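A plausible sequence, using the manifest and names from Step 6:

```shell
kubectl apply -f environments/dev/apps/svc1.yaml
kubectl -n argocd get application svc1-dev   # wait for Synced / Healthy
kubectl -n dev get pods,svc,ingress
```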
Open in browser:
## ApplicationSet Pattern (Done)
The 3 manual Application manifests in environments/dev/apps/ were superseded by
a single ApplicationSet using a git file generator - implemented first in eu-dev
(see docs/05-applicationset.md), then promoted to the fully GitOps-managed
eu-dev-rancher setup where the ApplicationSet itself is managed by ArgoCD as wave 4
(see docs/09-sync-waves-cluster-complete.md).
### App-of-Apps vs ApplicationSet
Both patterns solve the same access problem: dev teams don’t have kubectl access
to the argocd namespace, so they can’t create Application CRDs directly. Both
require the platform team to apply something once. The difference is what dev teams
do after that bootstrap step.
App-of-Apps - devs write Application manifests
Platform applies one root Application pointing at environments/dev/apps/. ArgoCD
then manages that directory from git - any Application YAML committed there gets
picked up and applied automatically.
| |
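A minimal sketch of that root Application; the repo URL is a hypothetical placeholder:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git   # hypothetical
    targetRevision: HEAD
    path: environments/dev/apps    # ArgoCD applies every manifest found here
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd              # child Applications are themselves CRDs in argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```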
Dev teams still write Application YAMLs. They control repoURL, path,
syncPolicy, namespace. Platform removed the need for kubectl access, not the
need to understand Application manifests.
ApplicationSet - devs drop a values file
Platform applies one ApplicationSet. The git file generator watches for values files and auto-creates Applications from a fixed template.
| |
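A sketch of such an ApplicationSet for the eu-dev-rancher environment, following the naming convention above; the repo URL is hypothetical, and the `nameOverride` trick assumes the git file generator's behavior of parsing each matched YAML file and exposing its top-level keys as template parameters:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: alpha-eu-dev-rancher-services   # <team>-<region>-<env>-<cluster>-services
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/example/gitops-repo.git   # hypothetical
        revision: HEAD
        files:
          - path: environments/eu-dev-rancher/values/*.yaml
  template:
    metadata:
      # nameOverride comes from the matched values file itself
      name: 'alpha-{{nameOverride}}-eu-dev-rancher'
    spec:
      project: default
      sources:
        - repoURL: https://github.com/example/gitops-repo.git
          targetRevision: HEAD
          path: charts/backend-service
          helm:
            valueFiles:
              - '$values/{{path}}/{{path.filename}}'
        - repoURL: https://github.com/example/gitops-repo.git
          targetRevision: HEAD
          ref: values
      destination:
        server: https://kubernetes.default.svc
        namespace: alpha-dev
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
```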
Dev teams never write an Application manifest. The template is owned by platform.
Why platform teams prefer ApplicationSet
The key difference is who owns the Application template:
| | App-of-Apps | ApplicationSet |
|---|---|---|
| Application template written by | Dev team | Platform team |
| Dev can change `repoURL`? | Yes | No - template is fixed |
| Dev can change `syncPolicy`? | Yes | No |
| Dev can target wrong namespace? | Possible (AppProject limits it) | Impossible - template controls it |
| Onboarding a new service | Write Application YAML | Drop a values file |
With ApplicationSet, platform controls the pattern centrally. All services across all teams are guaranteed to follow the same sync policy, use the same chart, and deploy only to allowed namespaces. Dev teams can’t misconfigure it - there is nothing to misconfigure.
Can an ApplicationSet be created via the ArgoCD UI?
No. The ArgoCD UI’s “New App” button creates Application CRDs only. ApplicationSets
require kubectl or a CI/CD pipeline. In practice platform teams either:
- Apply it once via `kubectl` (common for small setups, good enough for this repo)
- Manage it via App-of-Apps: the root Application watches a `platform/` directory in git that contains the ApplicationSet YAML - so even the AppSet is git-managed
The full bootstrap pattern combines both:
| |
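With that layout, the one-time step could be as small as (file path hypothetical):

```shell
# platform/ contains the ApplicationSet; the root Application watches it
kubectl apply -f platform/root-app.yaml
```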
After that single kubectl apply, everything else flows from git.
## Chart Strategy - HEAD for all environments
All environments in this repo use targetRevision: HEAD. This section explains
the options considered and why this was the right choice here.
### Options considered
Option 1 - Pinned chart versions per environment (chart-releaser or OCI registry)
Each environment’s Application pins a specific chart version:
| |
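For example, a pinned source block might look like this (registry URL and version are hypothetical):

```yaml
spec:
  source:
    repoURL: https://example.github.io/helm-charts   # published chart repo, not the git repo
    chart: backend-service
    targetRevision: 1.4.2                            # pinned chart version, bumped via PR
```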
A chart change is packaged and published (via chart-releaser to GitHub Releases,
or via helm push to ACR/GHCR). Staging/prod are only updated when someone
explicitly bumps targetRevision in the Application manifest via PR.
When this makes sense: a platform team maintains a chart that multiple app teams consume. Version pinning protects app teams from unexpected template changes they didn’t ask for. Also common when compliance requires an explicit artifact version trail.
The cost: real operational overhead - coordinate chart bumps, maintain a publishing pipeline, update version references per environment. Two things to promote instead of one (image tag + chart version).
Option 2 - HEAD everywhere + PR review as the gate
All environments point at HEAD. A chart template change is just a PR. Branch
protection on main requires approval before merge. Git history is the audit
trail. No separate versioning machinery.
Option 3 - HEAD + feature flags for gradual rollout
Same as Option 2, but risky template changes are gated behind a values boolean so they can be enabled per environment independently without branching the chart.
### Why we chose Option 2 + 3 (HEAD + feature flags)
- This repo’s chart is owned and consumed by one team - no coordination problem to solve
- PR review on `main` is a sufficient gate for chart changes
- Chart templates in a typical microservices setup change rarely; images change constantly - the version pinning overhead is not worth it
- Feature flags cover the cases where a template change needs to roll out gradually across environments
The chart versioning pattern (Option 1) is worth knowing - you will encounter it when working with platform-team-owned charts or shared internal chart libraries. For this repo it’s unnecessary complexity.
### Feature flags for gradual chart rollouts
When a new template feature is ready for dev but not staging/prod yet, gate it
behind a values boolean. The flag defaults to false in values.yaml, so
existing environments are unaffected until they opt in.
Example - adding Prometheus scrape annotations:
| |
| |
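A sketch of how such a flag could be wired; the flag name is illustrative, not from the original. First the default in the chart:

```yaml
# charts/backend-service/values.yaml - default off, so existing envs are untouched
metrics:
  enabled: false
```

Then the template renders the annotations only when the flag is on:

```yaml
# charts/backend-service/templates/deployment.yaml (pod template metadata)
{{- if .Values.metrics.enabled }}
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "{{ .Values.service.port }}"
{{- end }}
```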
Enable it in dev first:
| |
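For instance, assuming a flag named `metrics.enabled`:

```yaml
# environments/dev/values/svc1.yaml
metrics:
  enabled: true
```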
Staging and prod values don’t set it → falls back to false → feature is
invisible there. When confident, flip it on in staging values via PR, then prod.
Same chart HEAD throughout, zero risk to environments that haven’t opted in.
## Image tags - ArgoCD Image Updater (reference)
Image tags and chart versions are independent concerns:
| | Controls | Updated by |
|---|---|---|
| `image.tag` in values file | Which Docker image the pod runs | CI or ArgoCD Image Updater |
| `targetRevision` in Application | Which chart commit ArgoCD renders | Always HEAD in this repo |
ArgoCD Image Updater watches a container registry for new image tags and writes the updated tag back to git:
| |
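Configuration is done via annotations on the Application; a sketch, with a hypothetical image path:

```yaml
metadata:
  annotations:
    # watch this image and give it the alias "svc1"
    argocd-image-updater.argoproj.io/image-list: svc1=ghcr.io/example/svc1
    # pick the highest tag matching a semver constraint
    argocd-image-updater.argoproj.io/svc1.update-strategy: semver
    # commit the new tag back to git instead of patching the live Application
    argocd-image-updater.argoproj.io/write-back-method: git
```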
Policy options:
- `semver` - latest tag matching a constraint (e.g. `~1.27`)
- `latest` - most recently pushed tag
- `digest` - tracks a mutable tag (e.g. `main`) by digest
For staging/prod, use a conservative policy (explicit semver tags only) or disable Image Updater and require manual PRs for promotion.
## Third-party charts - always pin versions
The HEAD strategy applies only to charts you own. For any third-party chart (cert-manager, ingress-nginx, external-secrets, etc.) always pin a specific version:
| |
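For example, pinning cert-manager might look like this (the version shown is arbitrary):

```yaml
spec:
  source:
    repoURL: https://charts.jetstack.io   # cert-manager's chart repository
    chart: cert-manager
    targetRevision: v1.14.4               # exact version; bump only via a reviewed PR
```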
A third-party chart bump can introduce breaking changes silently. Treat upgrades as deliberate, reviewed PRs.
## Key Concepts
### Why folder-per-environment over file-per-environment

| | File-per-env (`values-dev.yaml`) | Folder-per-env (`environments/dev/`) |
|---|---|---|
| Single cluster | Simple, works well | Slight overkill |
| Multi-cluster | Awkward - all env configs in one place | Natural - each cluster’s ArgoCD watches its own folder |
| Access control | Hard to restrict who can edit prod values | Can add branch protections or CODEOWNERS per folder |
| Bootstrap | Point one ArgoCD at everything | Point each ArgoCD only at its env folder |
### Promoting an image tag dev → staging
1. Verify dev is stable
2. Update `image.tag` in `environments/staging/values/svc1.yaml`
3. Open PR → review → merge
4. Staging ArgoCD detects the HEAD change → syncs → new tag deployed
Git history is the full audit trail of every promotion.
### ServiceAccount: when you need it
Skip the ServiceAccount template (as we did here) when the pod just serves HTTP traffic and never talks to the Kubernetes API.
Add it back when:
- The pod needs to call the k8s API (operators, controllers, tools)
- Using IRSA on EKS (pod needs AWS permissions via a linked IAM role)
- Using Workload Identity on GKE (pod needs GCP permissions)
- You want explicit RBAC restrictions rather than relying on the default SA
## Gotchas

- ComparisonError after push - after committing the chart and pushing, ArgoCD showed `app path does not exist` even though the files were on GitHub. The repo-server had a stale cache. Fix: `argocd app get svc1-dev --hard-refresh`.
- ArgoCD server insecure mode not applied - `server.insecure: true` was set in `argocd-cmd-params-cm` but the deployment was never restarted, so the pod was still serving TLS. Fix: `kubectl rollout restart deployment/argocd-server -n argocd`. Verify with `kubectl exec <pod> -n argocd -- env | grep INSECURE`.
- ArgoCD CLI gRPC-web broken through Traefik - CLI v3.x with `--grpc-web` through a plain HTTP Traefik ingress returns EOF. The REST API (`/api/v1/session`) works fine, but the CLI uses gRPC. Workaround: fetch a token via `curl` and write it directly to `~/.config/argocd/config` with `plain-text: true` and `grpc-web: true`. Token refresh script:

  ```shell
  TOKEN=$(curl -s -X POST http://argocd.ravikrs.local/api/v1/session \
    -H "Content-Type: application/json" \
    -d '{"username":"admin","password":"<password>"}' \
    | python3 -c "import sys,json; print(json.load(sys.stdin)['token'])")
  ```

- Port-forward drops h2c connections - `kubectl port-forward` with `--plaintext` causes `socat: Connection reset by peer` because the CLI sends HTTP/2 cleartext (h2c), which the ArgoCD server rejects at the transport level. Use the ingress + token workaround above instead of port-forward for the CLI.