
Stakater Reloader: Why We Added It, and Why We Removed It

·771 words·4 mins
Ravi Singh
Software engineer with 15+ years building backend systems and cloud platforms across fintech, automotive, and academia. I write about the things I build, debug, and learn — so I don’t forget them.
Learning ArgoCD - This article is part of a series.
Part 2: This Article


The Problem It Was Trying to Solve

Kubernetes does not restart pods when a ConfigMap or Secret they reference is updated. This is by design - but it creates a gap in GitOps workflows: you change argocd-cmd-params-cm, ArgoCD syncs the ConfigMap, and nothing happens. The pod continues running with the old in-memory values.

The specific trigger here was enabling server.insecure: "true" in argocd-cmd-params-cm. ArgoCD syncs the ConfigMap but argocd-server does not restart, so the flag has no effect until you manually force a rollout.
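The triggering change, sketched as a minimal manifest (the ConfigMap name and key are ArgoCD's documented ones; the rest of the file's contents are omitted):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # Read only at process start: syncing this change does nothing
  # until argocd-server is restarted.
  server.insecure: "true"
```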

Stakater Reloader is a small controller that watches ConfigMaps and Secrets and automatically triggers a rolling restart of annotated Deployments when they change. It looked like the right tool.


Why We Removed It

Pitfall 1: You cannot GitOps-patch an upstream-managed Deployment

argocd-server is installed by the ArgoCD upstream install manifest, not by this repo. To add the Reloader annotation you need to modify the Deployment. The natural GitOps instinct is to commit a partial Deployment manifest to the repo and let ArgoCD apply it.

This is a trap. Without a kustomization.yaml, ArgoCD treats every .yaml in the target path as a standalone raw manifest. A Deployment with only metadata.annotations and no spec fails Kubernetes validation on creation:

Deployment.apps "argocd-server" is invalid:
  spec.selector: Required value
  spec.template.spec.containers: Required value

The error only surfaces at CREATE time - when the Deployment doesn’t exist yet. As long as the Deployment was already running, server-side apply (SSA) would merge the annotation in. The manifest was “working” but silently broken: if the Deployment was ever lost (cluster restart, re-apply of the ArgoCD install manifest), ArgoCD would fail to recreate it and the pod would disappear entirely.
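The broken pattern looks roughly like this (a reconstruction for illustration, not the repo's exact file; the annotation key is Reloader's documented one):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    reloader.stakater.com/auto: "true"
# No spec at all: server-side apply merges this into an existing
# Deployment, but a CREATE of this object alone fails validation
# (spec.selector and spec.template.spec.containers are required).
```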

Pitfall 2: prune + selfHeal turns a latent bug into an outage

argocd-config runs with prune: true and selfHeal: true. When the argocd-server Deployment was lost, ArgoCD:

  1. Detected the resource was Missing.
  2. Tried to CREATE it from the partial patch manifest - failed validation.
  3. Retried every reconcile cycle (5 times, then gave up).
  4. Left argocd-server permanently Missing, taking down the entire ArgoCD UI.

The health status Missing on argocd-config with the error spec.selector: Required value is the signature of this failure.
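The sync policy in question, as a hedged sketch of the argocd-config Application (field names follow the ArgoCD Application spec; source and destination are omitted):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd-config
  namespace: argocd
spec:
  # source/destination omitted for brevity
  syncPolicy:
    automated:
      prune: true     # delete resources that disappear from Git
      selfHeal: true  # continuously reconcile; keeps retrying CREATE for Missing resources
```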

Pitfall 3: Iterating to fix it compounds the problem

The commit history records six attempts to get the patch working (add containers merge key, revert SSA patch attempt, use auto annotation, etc.). Each iteration introduced a new variant of the same fundamental problem. This cost significant debugging time and caused multiple ArgoCD outages.

Pitfall 4: It adds an extra Helm release with its own sync-wave complexity

Reloader installs as a separate Helm chart in its own namespace, requiring a sync-wave dependency. Getting watchGlobally and namespaceSelector right took multiple fix commits and is not obvious to reason about.
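For reference, the values being tuned look roughly like this (a sketch based on the value names mentioned above; the exact schema depends on the Reloader chart version, and the selector is a placeholder):

```yaml
reloader:
  # false = watch only namespaces matching the selector; true = watch cluster-wide
  watchGlobally: false
  # Placeholder label selector for the namespaces Reloader should watch
  namespaceSelector: "team=platform"
```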


What to Do Instead

The actual rule: manually rollout after ConfigMap changes

Kubernetes does not hot-reload pods when ConfigMaps change. For environment-variable or flag-based config (like argocd-cmd-params-cm), a rollout is always required. Just do it explicitly:

# After ArgoCD syncs argocd-cmd-params-cm (e.g. adding server.insecure: "true"):
kubectl rollout restart deployment/argocd-server -n argocd
kubectl rollout status deployment/argocd-server -n argocd

This is a single explicit step (plus a status check). It is less infrastructure than running a separate controller, and it makes the restart dependency explicit rather than magic.

For GitOps-owned services

If a service is managed entirely by this repo (e.g. via a Helm chart with values files), the Reloader podAnnotations pattern does work cleanly - ArgoCD renders the annotation into the Deployment via Helm, and the Deployment spec is complete. The problem is exclusively with upstream-installed components like argocd-server.
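In a chart-managed values file, the working pattern is just this (a sketch assuming the chart renders podAnnotations into the Deployment's pod template, as most charts do):

```yaml
podAnnotations:
  # Reloader restarts the workload when any ConfigMap/Secret it references changes
  reloader.stakater.com/auto: "true"
```

Because Helm renders a complete Deployment spec, the annotation rides along on a manifest that passes validation on both CREATE and UPDATE.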

We removed the podAnnotations.reloader.stakater.com/auto: "true" from the svc1 values files as well, since there is no active need for auto-restart there.


Summary

Question: Does Kubernetes restart pods when a ConfigMap changes?
Answer: No - never, by design.

Question: Can you patch an upstream Deployment via a partial manifest in ArgoCD?
Answer: No - Kubernetes validates the full spec on CREATE; SSA only helps on UPDATE.

Question: What happens when prune: true meets a broken patch manifest?
Answer: The pod disappears and ArgoCD can't recreate it - a full outage.

Question: What is the right operational pattern?
Answer: kubectl rollout restart deployment/<name> -n <ns> after ConfigMap changes.

Recovery Reference

If argocd-server goes Missing due to this failure pattern:

# 1. Restore the Deployment from the upstream install manifest
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# 2. Wait for argocd-server to be Running
kubectl rollout status deployment/argocd-server -n argocd

# 3. ArgoCD will re-sync argocd-config and the Missing status will clear