Helm Chart Managed by ArgoCD

·2871 words·14 mins
Ravi Singh
Learning ArgoCD - This article is part of a series.
Part 1: This Article


What We Built

A single reusable Helm chart at charts/backend-service/ shared by three services (svc1, svc2, svc3), all deployed into the dev namespace on Rancher Desktop. Environment-specific configuration lives outside the chart in environments/<env>/, one folder per environment. Each environment folder is self-contained - when a second cluster is added (e.g. minikube for staging), its ArgoCD instance points only at its own environment folder.


Cluster Plan

| Environment | Cluster | Namespace | Deploy pattern | Status |
|---|---|---|---|---|
| dev | Rancher Desktop | dev | manual Applications (App-of-Apps) | learning exercise |
| eu-dev | Rancher Desktop | alpha-dev | ApplicationSet (manual apply) | learning exercise |
| eu-dev-rancher | Rancher Desktop | alpha-dev | ApplicationSet via sync waves (GitOps) | current |
| eu-staging | Minikube | alpha-staging | ApplicationSet via sync waves (GitOps) | future |
| eu-prod | TBD | alpha-prod | ApplicationSet via sync waves (GitOps) | future |

Naming convention:

  • Folder = <region>-<env>-<cluster> - identifies the cluster (one folder per ArgoCD instance)
  • Namespace = <team>-<env> - team owns the namespace, region lives at cluster level
  • Application name = <team>-<svc>-<region>-<env>-<cluster> - unique across a shared ArgoCD instance
  • ApplicationSet name = <team>-<region>-<env>-<cluster>-services

dev and eu-dev are preserved as learning references. eu-dev-rancher is the current environment - it uses sync waves to bootstrap the full cluster from a single kubectl apply. See docs/09-sync-waves-cluster-complete.md. All environments share the same charts/backend-service/ chart.


Final Structure

charts/
  backend-service/
    Chart.yaml
    values.yaml                        ← helm create defaults, kept as-is
    templates/
      _helpers.tpl
      deployment.yaml                  ← simplified (serviceAccount removed)
      service.yaml
      ingress.yaml

environments/
  dev/                                 ← learning exercise: manual Application pattern (App-of-Apps)
    bootstrap.yaml                     ← root Application (kubectl apply once)
    apps/
      svc1.yaml                        ← ArgoCD Application manifest
      svc2.yaml
      svc3.yaml
    values/
      svc1.yaml                        ← dev-specific Helm values
      svc2.yaml
      svc3.yaml
  eu-dev/                              ← learning exercise: ApplicationSet pattern (applied manually)
    appset.yaml                        ← single ApplicationSet, applied once via kubectl
    values/
      svc1.yaml                        ← namespace: alpha-dev, app: alpha-svc1-eu-dev
      svc2.yaml
      svc3.yaml
  eu-dev-rancher/                      ← current: cluster-complete bootstrap via sync waves
    bootstrap.yaml                     ← root Application - single kubectl apply entry point
    platform/                          ← orchestration layer (Application/ApplicationSet CRDs)
      cert-manager.yaml                ← wave 0
      cert-manager-config.yaml         ← wave 1
      traefik.yaml                     ← wave 2
      argocd-config.yaml               ← wave 3
      appset.yaml                      ← wave 4 (git-managed by ArgoCD)
    argocd/                            ← cluster-specific ArgoCD manifests
    services/                          ← service values files, auto-discovered by AppSet
      svc1.yaml
      svc2.yaml
  eu-staging/                          ← future: EU staging cluster
    bootstrap.yaml
    platform/
    services/
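
The wave ordering in platform/ is driven by a standard ArgoCD annotation on each Application manifest. A minimal sketch (the filename and wave number follow the tree above; the spec body is elided):

```yaml
# environments/eu-dev-rancher/platform/traefik.yaml (sketch, wave 2)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: traefik
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "2"   # lower waves sync (and go healthy) first
spec:
  # ...source/destination as usual...
```

ArgoCD finishes each wave before starting the next, which is what lets cert-manager (wave 0) land before anything that needs certificates.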

Values layering

Helm merges values left to right - later files override earlier ones:

charts/backend-service/values.yaml     ← chart defaults (helm create)
  + environments/dev/values/svc1.yaml  ← service identity + env overrides

Each environment values file is fully self-contained: it sets the service name, image repository, and all env-specific overrides. Anything not set falls through to the chart default.
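
Note the merge is per-key (deep), not per-file. A hypothetical illustration (the repository name is an example, not from the repo):

```yaml
# charts/backend-service/values.yaml (chart default)
replicaCount: 1
image:
  repository: nginx
  pullPolicy: IfNotPresent

# environments/dev/values/svc1.yaml (override)
replicaCount: 2
image:
  repository: my-registry/svc1

# effective values after the merge:
#   replicaCount: 2                      (overridden)
#   image.repository: my-registry/svc1   (overridden)
#   image.pullPolicy: IfNotPresent       (chart default, untouched)
```

Overriding image.repository does not wipe out image.pullPolicy; only the keys you set are replaced.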


Step 1 - Scaffold the chart

helm create charts/backend-service

Step 2 - Remove boilerplate files

rm charts/backend-service/templates/hpa.yaml
rm charts/backend-service/templates/serviceaccount.yaml
rm charts/backend-service/templates/NOTES.txt
rm -rf charts/backend-service/templates/tests
rm charts/backend-service/.helmignore

templates/ should now contain only: _helpers.tpl, deployment.yaml, service.yaml, ingress.yaml.


Step 3 - Simplify deployment.yaml

The deployment template generated by helm create still references the ServiceAccount we just removed. Replace charts/backend-service/templates/deployment.yaml with:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "backend-service.fullname" . }}
  labels:
    {{- include "backend-service.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "backend-service.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "backend-service.selectorLabels" . | nindent 8 }}
        {{- with .Values.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          {{- with .Values.resources }}
          resources:
            {{- toYaml . | nindent 12 }}
          {{- end }}

Step 4 - Create environment values files

Each file is self-contained: service identity (nameOverride, image.repository) and env-specific overrides (replicaCount, image.tag, ingress host) all in one place.

dev

environments/dev/values/svc1.yaml:

nameOverride: svc1
fullnameOverride: svc1

replicaCount: 1

image:
  repository: nginx
  tag: "1.27"

ingress:
  enabled: true
  className: traefik
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
  hosts:
    - host: svc1.dev.ravikrs.local
      paths:
        - path: /
          pathType: Prefix

environments/dev/values/svc2.yaml:

nameOverride: svc2
fullnameOverride: svc2

replicaCount: 1

image:
  repository: nginx
  tag: "1.26"

ingress:
  enabled: true
  className: traefik
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
  hosts:
    - host: svc2.dev.ravikrs.local
      paths:
        - path: /
          pathType: Prefix

environments/dev/values/svc3.yaml:

nameOverride: svc3
fullnameOverride: svc3

replicaCount: 1

image:
  repository: nginx
  tag: "1.25"

ingress:
  enabled: true
  className: traefik
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
  hosts:
    - host: svc3.dev.ravikrs.local
      paths:
        - path: /
          pathType: Prefix

staging (create when minikube cluster is ready)

environments/staging/values/svc1.yaml:

nameOverride: svc1
fullnameOverride: svc1

replicaCount: 2

image:
  repository: nginx
  tag: "1.27"        # pinned - updated manually after dev is verified

ingress:
  enabled: true
  className: traefik
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
  hosts:
    - host: svc1.staging.ravikrs.local
      paths:
        - path: /
          pathType: Prefix

Step 5 - Validate locally with helm template

# svc1 dev
helm template svc1 charts/backend-service \
  -f environments/dev/values/svc1.yaml

# svc2 dev
helm template svc2 charts/backend-service \
  -f environments/dev/values/svc2.yaml

Check: correct name, replicas, image tag, and ingress host per service.


Step 6 - Create ArgoCD Application manifests

The env values file now lives outside the chart directory, so ArgoCD needs the multiple sources feature (ArgoCD ≥ 2.6) to reach it. The second source (ref: values) acts as a pointer to the repo root, enabling $values/ prefixed paths in valueFiles.

environments/dev/apps/svc1.yaml:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: svc1-dev
  namespace: argocd
spec:
  project: default
  sources:
    - repoURL: https://github.com/ravikrs/learning-argocd
      targetRevision: HEAD
      path: charts/backend-service
      helm:
        valueFiles:
          - $values/environments/dev/values/svc1.yaml
    - repoURL: https://github.com/ravikrs/learning-argocd
      targetRevision: HEAD
      ref: values
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

Repeat for svc2.yaml and svc3.yaml - only name and valueFiles path change. All three services deploy into the same dev namespace.
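Since only those two fields vary, the three manifests can be stamped out mechanically. A toy generator sketch (writes to /tmp; the manifest body is trimmed to the fields that change, so don't apply these as-is):

```shell
# toy generator: stamp out per-service Application manifests from one pattern
mkdir -p /tmp/argocd-demo
for svc in svc1 svc2 svc3; do
  cat > "/tmp/argocd-demo/$svc.yaml" <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: $svc-dev
  namespace: argocd
spec:
  sources:
    - path: charts/backend-service
      helm:
        valueFiles:
          - \$values/environments/dev/values/$svc.yaml
EOF
done

grep 'name:' /tmp/argocd-demo/svc3.yaml
```

This is exactly the boilerplate the ApplicationSet pattern later in this article eliminates.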

When the staging cluster is ready, environments/staging/apps/svc1.yaml uses the same structure but points at environments/staging/values/svc1.yaml. All environments use targetRevision: HEAD - see the Chart Strategy section for why.


Step 7 - Add /etc/hosts entries

sudo tee -a /etc/hosts <<EOF
127.0.0.1 svc1.dev.ravikrs.local
127.0.0.1 svc2.dev.ravikrs.local
127.0.0.1 svc3.dev.ravikrs.local
EOF

Staging/prod entries get added when those clusters are active.


Step 8 - Commit and push

git add charts/backend-service environments/
git commit -m "add shared helm chart and dev environment for svc1/svc2/svc3"
git push

Step 9 - Apply and verify (start with svc1-dev)

# Apply svc1-dev first to validate the pattern
kubectl apply -f environments/dev/apps/svc1.yaml

# Watch sync status
argocd app get svc1-dev

# Once healthy, apply svc2 and svc3
kubectl apply -f environments/dev/apps/svc2.yaml
kubectl apply -f environments/dev/apps/svc3.yaml

# Check all apps
argocd app list

# Verify pods - all three services land in the same namespace
kubectl get pods -n dev

Open in browser:

  • http://svc1.dev.ravikrs.local
  • http://svc2.dev.ravikrs.local
  • http://svc3.dev.ravikrs.local


ApplicationSet Pattern (Done)

The 3 manual Application manifests in environments/dev/apps/ were superseded by a single ApplicationSet using a git file generator - implemented first in eu-dev (see docs/05-applicationset.md), then promoted to the fully GitOps-managed eu-dev-rancher setup where the ApplicationSet itself is managed by ArgoCD as wave 4 (see docs/09-sync-waves-cluster-complete.md).
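
A sketch of what that git files generator looks like. The names follow this repo's conventions, but the template details are illustrative rather than copied from the repo (goTemplate mode is assumed so sprig's trimSuffix is available):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: alpha-eu-dev-rancher-services
  namespace: argocd
spec:
  goTemplate: true                     # enables sprig functions like trimSuffix
  generators:
    - git:
        repoURL: https://github.com/ravikrs/learning-argocd
        revision: HEAD
        files:
          - path: environments/eu-dev-rancher/services/*.yaml
  template:
    metadata:
      # svc1.yaml -> alpha-svc1-eu-dev-rancher
      name: 'alpha-{{ .path.filename | trimSuffix ".yaml" }}-eu-dev-rancher'
    spec:
      project: default
      sources:
        - repoURL: https://github.com/ravikrs/learning-argocd
          targetRevision: HEAD
          path: charts/backend-service
          helm:
            valueFiles:
              - '$values/{{ .path.path }}/{{ .path.filename }}'
        - repoURL: https://github.com/ravikrs/learning-argocd
          targetRevision: HEAD
          ref: values
      destination:
        server: https://kubernetes.default.svc
        namespace: alpha-dev
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
```

Dropping a new values file into environments/eu-dev-rancher/services/ is now the entire onboarding step.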


App-of-Apps vs ApplicationSet

Both patterns solve the same access problem: dev teams don’t have kubectl access to the argocd namespace, so they can’t create Application CRDs directly. Both require the platform team to apply something once. The difference is what dev teams do after that bootstrap step.

App-of-Apps - devs write Application manifests

Platform applies one root Application pointing at environments/dev/apps/. ArgoCD then manages that directory from git - any Application YAML committed there gets picked up and applied automatically.

Platform applies: root Application (once, via kubectl)
Dev team adds:    environments/dev/apps/svc4.yaml  ← full Application manifest
ArgoCD:           detects new file → creates Application CRD in argocd namespace

Dev teams still write Application YAMLs. They control repoURL, path, syncPolicy, namespace. Platform removed the need for kubectl access, not the need to understand Application manifests.

ApplicationSet - devs drop a values file

Platform applies one ApplicationSet. The git file generator watches for values files and auto-creates Applications from a fixed template.

Platform applies: ApplicationSet (once, via kubectl)
Dev team adds:    environments/dev/values/svc4.yaml  ← just values, no YAML boilerplate
ArgoCD:           generates Application from template automatically

Dev teams never write an Application manifest. The template is owned by platform.

Why platform teams prefer ApplicationSet

The key difference is who owns the Application template:

| | App-of-Apps | ApplicationSet |
|---|---|---|
| Application template written by | Dev team | Platform team |
| Dev can change repoURL? | Yes | No - template is fixed |
| Dev can change syncPolicy? | Yes | No |
| Dev can target wrong namespace? | Possible (AppProject limits it) | Impossible - template controls it |
| Onboarding a new service | Write Application YAML | Drop a values file |

With ApplicationSet, platform controls the pattern centrally. All services across all teams are guaranteed to follow the same sync policy, use the same chart, and deploy only to allowed namespaces. Dev teams can’t misconfigure it - there is nothing to misconfigure.

Can an ApplicationSet be created via the ArgoCD UI?

No. The ArgoCD UI’s “New App” button creates Application CRDs only. ApplicationSets require kubectl or a CI/CD pipeline. In practice platform teams either:

  • Apply it once via kubectl (common for small setups, good enough for this repo)
  • Manage it via App-of-Apps: the root Application watches a platform/ directory in git that contains the ApplicationSet YAML - so even the AppSet is git-managed

The full bootstrap pattern combines both:

kubectl apply -f root-app.yaml          ← one command ever, by platform
  → root app manages platform/appset.yaml from git
    → ApplicationSet generates service Applications from values files
      → dev teams onboard by adding environments/<env>/values/<svc>.yaml

After that single kubectl apply, everything else flows from git.
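
For reference, the root Application at the top of that chain is itself tiny. A sketch, assuming the eu-dev-rancher layout from this article (the name eu-dev-rancher-root is illustrative):

```yaml
# environments/eu-dev-rancher/bootstrap.yaml (sketch)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: eu-dev-rancher-root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/ravikrs/learning-argocd
    targetRevision: HEAD
    path: environments/eu-dev-rancher/platform   # directory of Application/AppSet CRDs
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd                            # children are CRDs in the argocd namespace
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Everything under platform/ (including the ApplicationSet) is then reconciled from git like any other resource.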


Chart Strategy - HEAD for all environments

All environments in this repo use targetRevision: HEAD. This section explains the options considered and why this was the right choice here.


Options considered

Option 1 - Pinned chart versions per environment (chart-releaser or OCI registry)

Each environment’s Application pins a specific chart version:

dev:      targetRevision: HEAD
staging:  targetRevision: v0.2.0
prod:     targetRevision: v0.1.0

A chart change is packaged and published (via chart-releaser to GitHub Releases, or via helm push to ACR/GHCR). Staging/prod are only updated when someone explicitly bumps targetRevision in the Application manifest via PR.

When this makes sense: a platform team maintains a chart that multiple app teams consume. Version pinning protects app teams from unexpected template changes they didn’t ask for. Also common when compliance requires an explicit artifact version trail.

The cost: real operational overhead - coordinate chart bumps, maintain a publishing pipeline, update version references per environment. Two things to promote instead of one (image tag + chart version).

Option 2 - HEAD everywhere + PR review as the gate

All environments point at HEAD. A chart template change is just a PR. Branch protection on main requires approval before merge. Git history is the audit trail. No separate versioning machinery.

Option 3 - HEAD + feature flags for gradual rollout

Same as Option 2, but risky template changes are gated behind a values boolean so they can be enabled per environment independently without branching the chart.


Why we chose Option 2 + 3 (HEAD + feature flags)
#

  • This repo’s chart is owned and consumed by one team - no coordination problem to solve
  • PR review on main is a sufficient gate for chart changes
  • Chart templates in a typical microservices setup change rarely; images change constantly - the version pinning overhead is not worth it
  • Feature flags cover the cases where a template change needs to roll out gradually across environments

The chart versioning pattern (Option 1) is worth knowing - you will encounter it when working with platform-team-owned charts or shared internal chart libraries. For this repo it’s unnecessary complexity.


Feature flags for gradual chart rollouts

When a new template feature is ready for dev but not staging/prod yet, gate it behind a values boolean. The flag defaults to false in values.yaml, so existing environments are unaffected until they opt in.

Example - adding Prometheus scrape annotations:

# charts/backend-service/values.yaml (helm create default, add this block)
metrics:
  enabled: false
# charts/backend-service/templates/deployment.yaml (inside template metadata)
{{- if .Values.metrics.enabled }}
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "{{ .Values.service.port }}"
{{- end }}

Enable it in dev first:

# environments/dev/values/svc1.yaml
metrics:
  enabled: true

Staging and prod values don’t set it → falls back to false → feature is invisible there. When confident, flip it on in staging values via PR, then prod. Same chart HEAD throughout, zero risk to environments that haven’t opted in.


Image tags - ArgoCD Image Updater (reference)

Image tags and chart versions are independent concerns:

| Setting | Controls | Updated by |
|---|---|---|
| image.tag in values file | Which Docker image the pod runs | CI or ArgoCD Image Updater |
| targetRevision in Application | Which chart commit ArgoCD renders | Always HEAD in this repo |

ArgoCD Image Updater watches a container registry for new image tags and writes the updated tag back to git:

New image pushed to registry (e.g. nginx:1.28)
  → Image Updater detects it matches policy (e.g. semver ~1.x)
  → Writes image.tag: "1.28" to environments/dev/values/svc1.yaml
  → ArgoCD detects the git change → syncs → pod updated

Policy options:

  • semver - latest tag matching a constraint (e.g. ~1.27)
  • latest - most recently pushed tag
  • digest - tracks a mutable tag (e.g. main) by digest

For staging/prod, use a conservative policy (explicit semver tags only) or disable Image Updater and require manual PRs for promotion.
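
Image Updater is configured per Application through annotations. A hedged sketch using its documented annotation keys (the alias web and the tag constraint are examples, not repo config):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: svc1-dev
  namespace: argocd
  annotations:
    # watch this image under the alias "web"
    argocd-image-updater.argoproj.io/image-list: web=nginx
    # only pick up tags matching the strategy/constraint
    argocd-image-updater.argoproj.io/web.update-strategy: semver
    argocd-image-updater.argoproj.io/web.allow-tags: regexp:^1\.[0-9]+$
    # commit the new tag to git (so ArgoCD stays the source of truth)
    argocd-image-updater.argoproj.io/write-back-method: git
    # which Helm value to write the tag into
    argocd-image-updater.argoproj.io/web.helm.image-tag: image.tag
spec:
  # ...
```

With write-back-method: git, the flow above stays fully GitOps: the updater commits, ArgoCD syncs.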


Third-party charts - always pin versions

The HEAD strategy applies only to charts you own. For any third-party chart (cert-manager, ingress-nginx, external-secrets, etc.) always pin a specific version:

source:
  chart: cert-manager
  repoURL: https://charts.jetstack.io
  targetRevision: v1.14.0    ← never use HEAD or latest for charts you don't control

A third-party chart bump can introduce breaking changes silently. Treat upgrades as deliberate, reviewed PRs.


Key Concepts

Why folder-per-environment over file-per-environment

| | File-per-env (values-dev.yaml) | Folder-per-env (environments/dev/) |
|---|---|---|
| Single cluster | Simple, works well | Slight overkill |
| Multi-cluster | Awkward - all env configs in one place | Natural - each cluster's ArgoCD watches its own folder |
| Access control | Hard to restrict who can edit prod values | Can add branch protections or CODEOWNERS per folder |
| Bootstrap | Point one ArgoCD at everything | Point each ArgoCD only at its env folder |

Promoting an image tag dev → staging

  1. Verify dev is stable
  2. Update image.tag in environments/staging/values/svc1.yaml
  3. Open PR → review → merge
  4. Staging ArgoCD detects HEAD change → syncs → new tag deployed

Git history is the full audit trail of every promotion.
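
The mechanical part of step 2 is a one-line edit. A toy, self-contained demo of the tag bump (uses a throwaway file in /tmp; in the repo you would edit environments/staging/values/svc1.yaml and open a PR):

```shell
# simulate the staging values file
mkdir -p /tmp/promote-demo
cat > /tmp/promote-demo/svc1.yaml <<'EOF'
image:
  repository: nginx
  tag: "1.26"
EOF

# bump to the tag verified in dev (-i.bak works on both GNU and BSD sed)
sed -i.bak 's/tag: "1.26"/tag: "1.27"/' /tmp/promote-demo/svc1.yaml

grep 'tag:' /tmp/promote-demo/svc1.yaml
```

Committing that diff is the promotion; staging's ArgoCD does the rest.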

ServiceAccount: when you need it

Skip the ServiceAccount template (as we did here) when the pod just serves HTTP traffic and never talks to the Kubernetes API.

Add it back when:

  • The pod needs to call the k8s API (operators, controllers, tools)
  • Using IRSA on EKS (pod needs AWS permissions via a linked IAM role)
  • Using Workload Identity on GKE (pod needs GCP permissions)
  • You want explicit RBAC restrictions rather than relying on the default SA
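
If you do add it back, the template helm create generates looks roughly like this (reproduced from memory, so regenerate with helm create rather than copying; the deployment also needs its serviceAccountName line restored):

```yaml
# charts/backend-service/templates/serviceaccount.yaml (approximate helm create output)
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "backend-service.serviceAccountName" . }}
  labels:
    {{- include "backend-service.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
```

The IRSA and Workload Identity cases above work by putting the cloud role reference into serviceAccount.annotations (e.g. eks.amazonaws.com/role-arn on EKS).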

Gotchas

  • ComparisonError after push - after committing the chart and pushing, ArgoCD showed app path does not exist even though the files were on GitHub. The repo-server had a stale cache. Fix: argocd app get svc1-dev --hard-refresh.

  • ArgoCD server insecure mode not applied - server.insecure: true was set in argocd-cmd-params-cm but the deployment was never restarted, so the pod was still serving TLS. Fix: kubectl rollout restart deployment/argocd-server -n argocd. Verify with kubectl exec <pod> -n argocd -- env | grep INSECURE.

  • ArgoCD CLI gRPC-web broken through Traefik - CLI v3.x with --grpc-web through a plain HTTP Traefik ingress returns EOF. The REST API (/api/v1/session) works fine but the CLI uses gRPC. Workaround: fetch a token via curl and write it directly to ~/.config/argocd/config with plain-text: true and grpc-web: true. Token refresh script:

    TOKEN=$(curl -s -X POST http://argocd.ravikrs.local/api/v1/session \
      -H "Content-Type: application/json" \
      -d '{"username":"admin","password":"<password>"}' | python3 -c "import sys,json; print(json.load(sys.stdin)['token'])")
  • Port-forward drops h2c connections - kubectl port-forward with --plaintext causes socat: Connection reset by peer because the CLI sends HTTP/2 cleartext (h2c) which the ArgoCD server rejects at the transport level. Use the ingress + token workaround above instead of port-forward for the CLI.
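
For reference, the shape of the config file that token workaround writes, sketched from the CLI's local-config format (verify against a config the argocd CLI wrote itself before relying on it):

```yaml
# ~/.config/argocd/config (sketch)
contexts:
- name: argocd.ravikrs.local
  server: argocd.ravikrs.local
  user: argocd.ravikrs.local
current-context: argocd.ravikrs.local
servers:
- server: argocd.ravikrs.local
  plain-text: true
  grpc-web: true
users:
- name: argocd.ravikrs.local
  auth-token: <token from the curl call above>
```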
