By IT Defined Team | April 15, 2026
How to set up ArgoCD on AWS EKS for real production workloads — app-of-apps pattern, secrets, RBAC, SSO, progressive delivery, and the gotchas nobody warns you about.
Why GitOps, in one paragraph
Push-based deploys put cluster credentials in your CI system, create audit-trail mess, and make rollbacks awkward. GitOps inverts the model — your Git repo is the source of truth, ArgoCD watches it, and the cluster pulls changes. Better security boundary, better audit trail, easier rollbacks (git revert), and drift detection comes free. If you're at a company that does more than 10 deploys a week, GitOps pays for itself.
What we'll set up
By the end of this post you'll have:
- ArgoCD installed on EKS via Helm
- App-of-apps pattern for managing multiple applications
- External Secrets Operator pulling from AWS Secrets Manager
- RBAC properly configured for a real team
- SSO with AWS IAM Identity Center
- Progressive delivery with Argo Rollouts
- Drift detection and self-healing
I'll skip the absolute basics — assuming you have an EKS cluster, kubectl access, and Helm installed. If not, get those first.
Step 1: Install ArgoCD
Use the Helm chart, not the raw manifests. Easier to manage going forward.
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
kubectl create namespace argocd
helm install argocd argo/argo-cd \
  --namespace argocd \
  --set server.service.type=ClusterIP \
  --set server.ingress.enabled=true \
  --set server.ingress.ingressClassName=alb \
  --version 7.7.0

Get the initial admin password:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Log in at the URL exposed by your ALB. Change the admin password immediately. Then plan to disable the local admin user once SSO is set up — we'll get there.
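If you prefer the CLI, you can log in and rotate the password right away. A minimal sketch, where argocd.example.com stands in for your ALB hostname:

# Log in with the initial admin credentials (hostname is a placeholder)
argocd login argocd.example.com --username admin --password "$INITIAL_PW"

# Rotate the admin password right away
argocd account update-password --current-password "$INITIAL_PW" --new-password "$NEW_PW"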
Step 2: The app-of-apps pattern
This is the part that most tutorials skip and most teams struggle with.
If you have 50 applications, you don't want to create 50 ArgoCD Application resources by hand. The app-of-apps pattern says: create one Application that points to a directory in Git containing all your other Application manifests. ArgoCD reads them, applies them, and now you've got declarative management of all your apps.
Repo structure:
infra-config/
├── argocd-apps/
│   ├── root-app.yaml
│   ├── prod/
│   │   ├── api.yaml
│   │   ├── worker.yaml
│   │   └── admin.yaml
│   └── staging/
│       ├── api.yaml
│       ├── worker.yaml
│       └── admin.yaml
└── helm-charts/
    └── ...your charts here...
The root-app.yaml looks like this:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/yourorg/infra-config
    targetRevision: main
    path: argocd-apps/prod
    directory:
      recurse: false
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Apply this once. ArgoCD now manages every application in the prod directory automatically. Add a new app? Add a YAML file to the directory, commit, push, done.
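For reference, a child Application in argocd-apps/prod/ might look like the sketch below. The chart path, values file, and project name are illustrative, not prescriptive:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-prod
  namespace: argocd
spec:
  project: prod                    # assumes an AppProject named prod exists
  source:
    repoURL: https://github.com/yourorg/infra-config
    targetRevision: main
    path: helm-charts/api          # illustrative chart location
    helm:
      valueFiles:
        - values-prod.yaml         # illustrative values file
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true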
Caveat: be very careful with prune: true on the root app. If someone accidentally deletes the directory, ArgoCD will delete all your applications. We add a finalizer to the root app and use sync windows for an extra safety net.
Step 3: Secrets — External Secrets Operator with Secrets Manager
Don't store secrets in Git. Even encrypted with sealed-secrets, you're rotating keys and the workflow is annoying. In 2026, the better pattern is External Secrets Operator (ESO) syncing from AWS Secrets Manager into Kubernetes secrets.
Install ESO:
helm repo add external-secrets https://charts.external-secrets.io
helm repo update
helm install external-secrets \
  external-secrets/external-secrets \
  -n external-secrets \
  --create-namespace

Create an IAM role for ESO with permissions to read your specific secrets. Use IRSA — annotate the ESO service account with eks.amazonaws.com/role-arn.
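The IRSA side is just an annotation on the service account. A sketch of the external-secrets-sa that the SecretStore below references, with a placeholder account ID and role name:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-secrets-sa
  namespace: production
  annotations:
    # Placeholder account ID and role name; use the role you created for ESO
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/eso-secrets-reader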
Create a SecretStore pointing to AWS Secrets Manager:
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets
  namespace: production
spec:
  provider:
    aws:
      service: SecretsManager
      region: ap-south-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa

Then ExternalSecret resources pull specific secrets:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets
    kind: SecretStore
  target:
    name: db-credentials
  data:
    - secretKey: password
      remoteRef:
        key: prod/db/main
        property: password

Pods reference the resulting Kubernetes Secret normally. Rotation in Secrets Manager propagates to Kubernetes within the refresh interval. Beautiful.
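Consuming the synced secret looks like any other Kubernetes Secret. A fragment of a pod template, with illustrative names:

# Fragment of a Deployment pod template; image and env var names are illustrative
containers:
  - name: api
    image: yourorg/api:1.2.3
    env:
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-credentials   # the Secret ESO materialized
            key: password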
Step 4: RBAC for a real team
Default ArgoCD RBAC has two roles: admin (everything) and readonly (look but don't touch). For a real team you need more nuance.
Edit the argocd-rbac-cm ConfigMap:
policy.csv: |
  p, role:developer, applications, get, */*, allow
  p, role:developer, applications, sync, dev/*, allow
  p, role:developer, applications, sync, staging/*, allow
  p, role:developer, applications, action/*, dev/*, allow
  p, role:devops, applications, *, */*, allow
  p, role:devops, clusters, *, *, allow
  p, role:devops, repositories, *, *, allow
  g, your-org:developers, role:developer
  g, your-org:devops, role:devops
Developers can sync to dev and staging. They can read everything. They can't touch prod or change cluster config. DevOps team has full access.
This kind of granularity is essential. The number of times I've seen "everyone has admin" cause accidents is depressing.
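You can sanity-check a policy from the CLI while logged in as an affected user. The app names here are placeholders:

# Should be allowed for role:developer (staging is syncable)
argocd account can-i sync applications 'staging/api'

# Should be denied for role:developer (prod is off-limits)
argocd account can-i sync applications 'prod/api'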
Step 5: SSO with AWS IAM Identity Center
Stop using the local admin user. Set up SSO with IAM Identity Center (formerly AWS SSO).
In Identity Center, create an application for ArgoCD using SAML 2.0. Configure the assertion to include user groups.
In ArgoCD, configure dex (the OIDC provider built into ArgoCD) to trust Identity Center's SAML. There's a documented path that takes about 30 minutes if you've never done it before.
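For orientation, the dex config in argocd-cm has roughly this shape. Every value below (SSO URL, certificate, attribute names) is a placeholder you'd pull from your Identity Center application:

data:
  url: https://argocd.example.com   # placeholder external URL
  dex.config: |
    connectors:
      - type: saml
        id: aws-identity-center
        name: AWS IAM Identity Center
        config:
          # All values below are placeholders from your Identity Center app
          ssoURL: https://portal.sso.ap-south-1.amazonaws.com/saml/assertion/EXAMPLE
          caData: <base64-encoded IdP signing certificate>
          entityIssuer: https://argocd.example.com/api/dex/callback
          redirectURI: https://argocd.example.com/api/dex/callback
          usernameAttr: email
          emailAttr: email
          groupsAttr: groups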
Once SSO works, edit argocd-cm to disable the local admin:
data:
  admin.enabled: "false"
Now access is exclusively through your SSO. Centralized auth, easier offboarding, satisfies most compliance teams.
Step 6: Progressive delivery with Argo Rollouts
ArgoCD on its own does standard rolling deploys. Argo Rollouts gives you canary, blue-green, and metric-based progressive delivery.
Install:
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml

Replace your Deployment with a Rollout. Same fields, plus a strategy block:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: api
spec:
  replicas: 10
  strategy:
    canary:
      steps:
        - setWeight: 10
        - pause: { duration: 5m }
        - setWeight: 25
        - pause: { duration: 10m }
        - setWeight: 50
        - pause: { duration: 10m }
        - setWeight: 100
      analysis:
        templates:
          - templateName: success-rate

Deploy goes 10% → wait → 25% → wait → ... If your analysis template (which can query Prometheus) detects elevated error rates, the rollout pauses or aborts automatically.
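The success-rate template referenced above lives alongside the Rollout. A minimal sketch, assuming a Prometheus instance at a placeholder address and an illustrative request metric; adjust the query to whatever your mesh or ingress actually exposes:

apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  metrics:
    - name: success-rate
      interval: 1m
      # Fail if the success ratio drops below 95%; abort after 3 failures
      successCondition: result[0] >= 0.95
      failureLimit: 3
      provider:
        prometheus:
          address: http://prometheus.monitoring.svc:9090   # placeholder address
          query: |
            sum(rate(http_requests_total{app="api", status!~"5.."}[5m]))
            /
            sum(rate(http_requests_total{app="api"}[5m]))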
This is the progressive delivery pattern that mature teams run. Worth the setup time.
Step 7: Drift detection and self-healing
ArgoCD with selfHeal: true automatically reverts manual changes to the cluster. Someone kubectl edit'd a deployment? ArgoCD reverts it on the next sync. Sounds great. Sometimes it's annoying: if you're debugging in prod and need a temporary change, ArgoCD will fight you.
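One escape hatch is to drop the app to manual sync while you debug, then restore automation. A sketch with the argocd CLI, where the app name is a placeholder:

# Temporarily disable auto-sync (and with it, self-heal)
argocd app set api --sync-policy none

# ...debug, then restore automation
argocd app set api --sync-policy automated --self-heal --auto-prune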
Best practice: selfHeal on for non-prod, more cautious in prod. We use sync windows that disable auto-sync during scheduled maintenance windows.
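Sync windows live on the AppProject. A hedged sketch that denies automated syncs during a nightly window; the project name and schedule are illustrative:

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: prod
  namespace: argocd
spec:
  syncWindows:
    # Deny automated syncs daily from 01:00 for two hours
    - kind: deny
      schedule: "0 1 * * *"
      duration: 2h
      applications:
        - "*"
      manualSync: true   # still allow deliberate manual syncs in the window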
The gotchas nobody warns you about
ArgoCD doesn't manage CRDs gracefully. Installing a Helm chart with CRDs and then upgrading? You'll hit issues. Use the ServerSideApply=true sync option.
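That option is set per Application. A sketch of the syncPolicy fragment:

# Fragment of an Application spec
syncPolicy:
  automated:
    prune: true
    selfHeal: true
  syncOptions:
    - ServerSideApply=true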
ArgoCD's built-in Helm support is opinionated. It runs helm template, not helm install. If your charts use Helm hooks for installation logic, they won't run. Workaround: use sync hooks instead.

Notifications via Slack work but require setting up the argocd-notifications-controller. Don't forget — silent ArgoCD is a worse outage signal than no ArgoCD.
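Once the notifications controller has a Slack token configured, subscribing an app is a single annotation. A sketch, with a placeholder channel name:

# On the Application's metadata; the channel name is a placeholder
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-failed.slack: deploys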
Multi-cluster setups need careful planning. Don't manage 10 production clusters from one ArgoCD instance unless you really know what you're doing. We usually run one ArgoCD per cluster.
What we teach in our AWS DevOps program
ArgoCD shows up in week 10 of our 20-week program at IT Defined. Students set up exactly this stack — ArgoCD, ESO, Argo Rollouts — on EKS. By the end they're comfortable with GitOps as a workflow, which is increasingly what hiring managers want.

If you want guided practice on this with feedback on your setup, our program covers it hands-on. Or work through this post on your own — the docs are good once you have the architecture in your head.
Frequently asked questions
FluxCD or ArgoCD?
Both work. ArgoCD has a better UI and broader adoption in 2026. FluxCD integrates more tightly with Kubernetes primitives and is a good fit if you don't need a UI. Most companies pick ArgoCD.
Can ArgoCD manage Helm charts that aren't in Git?
Yes — ArgoCD can pull from Helm chart repos directly. Useful for off-the-shelf charts (Prometheus, ingress-nginx). For your own apps, keep them in Git.
How do I handle database migrations with GitOps?
Run them as Kubernetes Jobs in a pre-sync hook. ArgoCD waits for the job to complete before applying the rest.
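A sketch of such a Job, with a hypothetical migration image and command:

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    # Run before the rest of the sync is applied
    argocd.argoproj.io/hook: PreSync
    # Replace the old Job before each new sync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: yourorg/api:1.2.3          # hypothetical image
          command: ["./migrate", "up"]      # hypothetical command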
Is GitOps overkill for small teams?
If you have under 5 services and deploy weekly, probably yes. Plain Helm + CI is simpler. GitOps shines once complexity grows.
About IT Defined
IT Defined is a software training institute in Whitefield, Bangalore, offering hands-on programs in AWS DevOps, Full-Stack MERN, Python, and Cybersecurity. We've trained over 2,000 students with live projects, mock interviews, and placement support.
Visit: itdefined.org | Phone: +91 6363730986 | Email: info@itdefined.org