Securely log in to Argo CD via the web UI with Azure Active Directory SSO

Introduction
In modern DevOps environments, secure and centralized access control is critical — especially when managing production-grade Kubernetes clusters with GitOps tools like Argo CD. Rather than relying on local user accounts or static passwords, integrating Single Sign-On (SSO) using enterprise identity providers like Azure Active Directory (Azure AD) ensures that authentication is secure, scalable, and aligned with your organization’s identity governance policies.
In this article, we’ll walk through how to configure Argo CD to authenticate users via Azure AD using OpenID Connect (OIDC), enabling seamless and secure access through the web UI. By the end of this guide, you'll have a setup that supports workload identity, group-based role access, and a flexible RBAC model that maps Azure AD groups to Argo CD roles — allowing you to manage access across multiple teams efficiently and securely.
Whether you're a platform engineer, SRE, or DevOps lead looking to enforce identity-based access control across your GitOps workflow, this guide will give you a production-ready foundation.
🔧 Prerequisites
Before configuring Argo CD with Azure Active Directory (AAD) SSO via the web UI, ensure the following are in place:
✅ Azure Subscription: Active subscription with Contributor (or higher) access to manage AKS, AAD, and networking. Sign up.
🏗️ Private AKS Cluster: Deployed via Terraform Cloud with a user-assigned managed identity, running behind a firewall with private endpoints. Accessible via kubectl from a jumpbox inside the network.
🛡️ Istio Internal Ingress: Istio installed with an internal ingress gateway to restrict public access and allow access within the network or via a jumpbox/VPN.
🛠️ Azure CLI: Version 2.30+, authenticated with az login. Used for managing Azure AD apps and AKS.
🖥️ kubectl: Configured to access the AKS cluster. Test with kubectl get nodes.
📦 Helm 3.x: Required to deploy Argo CD, Istio, and cert-manager.
🔐 Workload Identity: Enabled for secretless authentication to Azure resources from Kubernetes workloads.
🔒 Cert-Manager (optional): For automated TLS via Let’s Encrypt or an internal CA (Cert-Manager Docs). I will use OpenSSL instead, because Let’s Encrypt cannot validate an ACME challenge in an internal private DNS zone.
🌐 Domain & Private DNS zone: Point the private DNS name at the internal AKS Istio ingress IP (e.g., https://argocd.terranetesprivate.com).
🔑 App registration: Used to configure AAD SSO/OIDC securely in Argo CD.
📚 Knowledge Required:
Basic understanding of Istio Gateway, VirtualService, and AAD SSO/OIDC
Familiarity with Kubernetes RBAC, TLS, and DNS setup
⚠️ Ensure all tools are installed and your Terraform state reflects the expected AKS and networking configuration.




Before you begin, verify that the Istio, cert-manager, and workload identity webhook components are running:
kubectl get pod -A | grep -E 'istio|cert|wi-webhook'
kubectl get svc -A | grep -E 'istio|cert|wi-webhook'

Configure a new Entra ID App registration
Add a new Entra ID App registration
1. From the Microsoft Entra ID > App registrations menu, choose + New registration.
2. Enter a Name for the application (e.g. ArgoCD-Terranetes-SSO).
3. Specify who can use the application (e.g. Accounts in this organizational directory only).
4. Enter the Redirect URI (optional), replacing the example below with your own Argo CD URL, then choose Register:
- Platform: Web
- Redirect URI: https://argocd.terranetesprivate.com/auth/callback
When registration finishes, the Azure portal displays the app registration's Overview pane, where you can see the Application (client) ID.
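If you prefer the CLI to the portal, the same registration can be sketched with the Azure CLI. This is only an illustration: the app name and redirect URI below are the example values from this guide, and it assumes you have already run az login.

```shell
# Create the app registration from the CLI (requires 'az login' first).
# Display name and redirect URI are the example values used in this guide.
az ad app create \
  --display-name "ArgoCD-Terranetes-SSO" \
  --sign-in-audience AzureADMyOrg \
  --web-redirect-uris "https://argocd.terranetesprivate.com/auth/callback"

# Look up the Application (client) ID shown on the Overview pane
az ad app list --display-name "ArgoCD-Terranetes-SSO" --query "[0].appId" -o tsv
```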
Configure additional platform settings for the Argo CD CLI
1. In the Azure portal, in App registrations, select your application.
2. Under Manage, select Authentication.
3. Under Platform configurations, select Add a platform.
4. Under Configure platforms, select the Mobile and desktop applications tile, and use the value below as-is (you shouldn't change it):
- Redirect URI: http://localhost:8085/auth/callback

Add credentials to the new Entra ID App registration
Using Workload Identity Federation (recommended):
1. Label the pods: add the azure.workload.identity/use: "true" label to the argocd-server pods.
2. Annotate the service account: add the azure.workload.identity/client-id: "$CLIENT_ID" annotation to the argocd-server service account, using the client ID of the application created in the previous step.
3. From the Certificates & secrets menu, navigate to Federated credentials, then choose + Add credential.
4. Choose Kubernetes accessing Azure resources as the Federated credential scenario.
5. Enter the Cluster Issuer URL (refer to the "retrieve the OIDC issuer URL" documentation).
6. Enter the namespace where Argo CD is deployed.
7. Enter argocd-server as the service account name.
8. Enter a unique name, then click Add.
In my case, I updated the Workload Identity Federation for the App registration with Terraform.
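The same federated credential can also be sketched from the CLI. The file name federated-cred.json is my own choice, the issuer value is a placeholder you must replace with your cluster's OIDC issuer URL, and the az command at the end is shown commented out because it needs your app registration's object ID:

```shell
# Parameters for the federated credential. The issuer below is a placeholder;
# get yours with: az aks show -g <rg> -n <cluster> --query "oidcIssuerProfile.issuerUrl" -o tsv
cat > federated-cred.json << 'EOF'
{
  "name": "argocd-server-federated",
  "issuer": "https://<your-cluster-oidc-issuer-url>",
  "subject": "system:serviceaccount:argocd:argocd-server",
  "audiences": ["api://AzureADTokenExchange"]
}
EOF

# Attach it to the app registration (requires 'az login'; replace <app-object-id>):
# az ad app federated-credential create --id <app-object-id> --parameters federated-cred.json
```

The subject must match the service account exactly, in the form system:serviceaccount:&lt;namespace&gt;:&lt;serviceaccount&gt;, or token exchange will fail silently.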

Set up permissions for the Entra ID application
1. From the API permissions menu, choose + Add a permission.
2. Find the User.Read permission (under Microsoft Graph) and grant it to the created application.
3. From the Token configuration menu, choose + Add groups claim.

Associate an Entra ID group with your Entra ID App registration
1. From the Microsoft Entra ID > Enterprise applications menu, search for the app that you created (e.g. ArgoCD-Terranetes-SSO). An Enterprise application with the same name as the Entra ID App registration is created automatically when you add a new App registration.
2. From the Users and groups menu of the app, add any users or groups requiring access to the service. Mine is terranetes-group.

Configure and deploy Argo CD to use the new Entra ID App registration with Helm
First, let's create the variables:
## create variables for the tenant_id and client_id and object_id for terranetes-group
export TENANT_ID="TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT"
export APP_REGISTRATION_CLIENT_ID="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
export TERRANETES_GROUP_OBJECT_ID="GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG"
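Because the heredoc that generates the values file expands these variables, an unset or empty export silently produces broken YAML. A small defensive check (my own sketch, not part of the chart) fails loudly instead:

```shell
# Verify that every required variable is set and non-empty before templating
check_vars() {
  local missing=0 v
  for v in "$@"; do
    if [ -z "${!v:-}" ]; then
      echo "missing: $v" >&2
      missing=1
    fi
  done
  return "$missing"
}

check_vars TENANT_ID APP_REGISTRATION_CLIENT_ID TERRANETES_GROUP_OBJECT_ID \
  && echo "all variables set" \
  || echo "set the missing variables before generating argo-cd-values.yaml" >&2
```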
### create argo-cd-values.yaml file with cat command
cat > argo-cd-values.yaml << EOF
## Custom resource configuration
crds:
  install: true
  keep: true
  annotations: {}
  additionalLabels: {}

global:
  podAnnotations:
    azure.workload.identity/use: "true"
  podLabels:
    azure.workload.identity/use: "true"

controller:
  serviceAccount:
    create: true
    name: argocd-application-controller
    annotations:
      azure.workload.identity/client-id: ${APP_REGISTRATION_CLIENT_ID} # Required for workload identity
    labels:
      azure.workload.identity/use: "true" # Required if using the webhook

server:
  serviceAccount:
    create: true
    name: argocd-server
    annotations:
      azure.workload.identity/client-id: ${APP_REGISTRATION_CLIENT_ID} # Required for workload identity
    labels:
      azure.workload.identity/use: "true" # Required if using the webhook

## Argo CD Configs
configs:
  cm:
    create: true
    # Enable local admin user
    admin.enabled: true
    ## Argo CD URL
    url: https://argocd.terranetesprivate.com
    # OIDC configuration
    oidc.config: |
      name: AzureAD
      issuer: https://login.microsoftonline.com/${TENANT_ID}/v2.0
      clientID: ${APP_REGISTRATION_CLIENT_ID} # Required for workload identity
      redirectURIs:
        - https://argocd.terranetesprivate.com:8085/auth/callback
        - https://argocd.terranetesprivate.com/auth/callback
      azure:
        useWorkloadIdentity: true
      requestedIDTokenClaims:
        groups:
          essential: true
          value: "SecurityGroup"
      requestedScopes:
        - openid
        - profile
        - email
  params:
    server.insecure: true
  # Add RBAC configuration to map Azure AD groups to Argo CD roles
  rbac:
    # Ensure this is set to true if you are defining RBAC policies here
    create: true
    policy.default: role:readonly
    policy.csv: |
      # Platform Admin Policies
      p, role:org-admin, applications, *, */*, allow
      p, role:org-admin, clusters, get, *, allow
      p, role:org-admin, projects, *, *, allow
      p, role:org-admin, repositories, *, *, allow
      p, role:org-admin, certificates, *, *, allow
      p, role:org-admin, accounts, *, *, allow
      p, role:org-admin, gpgkeys, *, *, allow # Allow GPG key management for admins
      # Azure AD Group Mapping for terranetes-group (Object ID: GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG)
      # The OIDC connector name 'AzureAD' must match 'name' in oidc.config.
      g, AzureAD:${TERRANETES_GROUP_OBJECT_ID}, role:org-admin
    scopes: "[groups, email]" # These scopes are required for RBAC to function correctly
    userInfoGroupsField: memberOf
EOF
Deploy Argo CD with Helm
## install the argo-cd
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm upgrade --install argocd argo/argo-cd \
--namespace argocd \
--create-namespace \
-f argo-cd-values.yaml \
--debug
sleep 180 # wait for 3 mins
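The fixed sleep 180 works, but you can wait on the actual rollouts instead. This is a sketch that assumes kubectl points at the cluster and the chart's default workload names (the application controller runs as a StatefulSet):

```shell
# Wait for each Argo CD Deployment, then the application controller StatefulSet,
# instead of sleeping for a fixed interval
for d in $(kubectl -n argocd get deploy -o name); do
  kubectl -n argocd rollout status "$d" --timeout=300s
done
kubectl -n argocd rollout status statefulset/argocd-application-controller --timeout=300s
```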
When you check the pods in the argocd namespace, only one container is running in each component.
kubectl get pods -n argocd
After labelling the argocd namespace for AKS Istio (Envoy) sidecar injection and restarting the pods, each pod runs two containers: the sidecar and the main container.
kubectl label namespace argocd istio.io/rev=asm-1-24 --overwrite
## Verify the argocd namespace labels
kubectl get namespace argocd --show-labels
## restart the pods
kubectl rollout restart deployment -n argocd
kubectl delete pod argocd-application-controller-0 -n argocd

The Azure AD callback URL must be HTTPS, so I need a TLS certificate.
I wanted to use cert-manager with Let’s Encrypt to automate certificate issuing and renewal. But Let’s Encrypt cannot validate an ACME challenge in a private DNS zone, so I decided to use an OpenSSL self-signed certificate for this POC. In production, I would advise using a private certificate provider such as DigiCert CA.
# Create directory in your home folder instead of root
mkdir -p ~/argocd-tls
# Generate self-signed certificate
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout ~/argocd-tls/argocd.key \
  -out ~/argocd-tls/argocd.crt \
  -subj "/CN=argocd.terranetesprivate.com/O=argocd.terranetesprivate.com"
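Before loading the certificate into the cluster, it's worth confirming the subject and expiry. The snippet below regenerates a throwaway copy into a scratch directory (so it stands alone) and then inspects it:

```shell
# Generate a throwaway key/cert pair with the same flags, then inspect it
TLS_DIR="$(mktemp -d)"
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout "$TLS_DIR/argocd.key" \
  -out "$TLS_DIR/argocd.crt" \
  -subj "/CN=argocd.terranetesprivate.com/O=argocd.terranetesprivate.com" 2>/dev/null

# Should print the CN above and an expiry roughly one year out
openssl x509 -in "$TLS_DIR/argocd.crt" -noout -subject -enddate
```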
Create a Kubernetes Secret for the Self-Signed Certificate in the aks-istio-ingress namespace
# Create the secret using the correct path to your files
kubectl create secret tls argocd-tls-secret \
  --cert="$HOME/argocd-tls/argocd.crt" \
  --key="$HOME/argocd-tls/argocd.key" \
  -n aks-istio-ingress
## Verify the secret
kubectl get secret argocd-tls-secret -n aks-istio-ingress
Configure the AKS Istio Gateway and VirtualService in the aks-istio-ingress namespace, where the Istio ingress gateway is installed and the TLS secret was created.
echo "INSTALLING ISTIO INGRESS GATEWAY FOR INTERNAL TRAFFIC (private IP)"
cat <<EOF | kubectl apply -f -
# argocd-gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: argocd-internal-gateway-tls
  namespace: aks-istio-ingress # Namespace for Istio Ingress Gateway
spec:
  selector:
    istio: aks-istio-ingressgateway-internal # Selects the internal gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "argocd.terranetesprivate.com"
      tls:
        httpsRedirect: true
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: argocd-tls-secret # Name of the TLS secret created by cert-manager, or by kubectl and openssl
      hosts:
        - "argocd.terranetesprivate.com"
---
# argocd-virtualservice.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: argocd-virtualservice
  namespace: argocd
spec:
  hosts:
    - "argocd.terranetesprivate.com"
  gateways:
    - aks-istio-ingress/argocd-internal-gateway-tls
  http:
    - route:
        - destination:
            host: argocd-server.argocd.svc.cluster.local
            port:
              number: 80
EOF
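Once the Gateway and VirtualService are applied, a quick smoke test from the jumpbox confirms end-to-end routing. This sketch assumes the internal gateway Service carries the istio=aks-istio-ingressgateway-internal label used by the Gateway selector above; -k is needed because the certificate is self-signed:

```shell
# Resolve the private DNS name to the internal gateway IP and probe Argo CD
INGRESS_IP=$(kubectl -n aks-istio-ingress get svc \
  -l istio=aks-istio-ingressgateway-internal \
  -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
curl -sk --resolve "argocd.terranetesprivate.com:443:${INGRESS_IP}" \
  https://argocd.terranetesprivate.com/healthz
```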
BINGO! 🚀


✅ Conclusion: Elevating Security and Efficiency with Azure AD SSO for Argo CD
Integrating Azure Active Directory (AAD) Single Sign-On (SSO) with Argo CD is not just a technical enhancement—it's a strategic upgrade for any organization embracing GitOps at scale. This integration bridges the gap between secure identity management and seamless developer experience, ensuring that access to your deployment pipelines is both tightly controlled and frictionless.
By aligning Argo CD authentication with your enterprise identity provider, you gain the ability to enforce consistent access policies, reduce operational overhead, and meet compliance requirements—all while empowering your teams to move faster and more securely.
**🔍 Why Azure AD SSO Is a Necessity, Not Optional**
🔐 Strengthened Security Posture
SSO centralizes authentication through Azure AD, enabling advanced security features like Multi-Factor Authentication (MFA), Conditional Access, and Identity Protection. This drastically reduces the risk of credential sprawl and unauthorized access, especially in environments where Argo CD manages critical deployment workflows.
🚀 Seamless Developer Experience
With SSO, developers and operators can log in using their existing corporate credentials—no need to manage or remember separate usernames and passwords. This reduces login friction, minimizes password fatigue, and allows teams to focus on delivering value rather than managing access.
🧩 Simplified Identity Lifecycle Management
As teams grow and change, managing user access manually becomes error-prone and inefficient. Azure AD integration ensures that onboarding, offboarding, and role transitions are handled centrally. When a user leaves the organization or changes roles, their access to Argo CD is automatically updated or revoked—no manual cleanup required.
📊 Centralized Auditing and Compliance
All authentication events are logged in Azure AD, providing a single source of truth for access control. This is essential for meeting regulatory requirements, conducting security audits, and maintaining visibility into who accessed what, when, and from where.
🔧 Foundation for Granular Access Control
This SSO setup lays the groundwork for implementing fine-grained access control in Argo CD using AppProjects. By mapping Azure AD groups to specific roles and projects, you can enforce least-privilege access across teams and environments—ensuring that users only interact with the resources they’re authorized to manage.
Follow me on LinkedIn (George Ezejiofor) to stay updated on cloud-native DevSecOps insights! 😊
Happy deploying! 🚀🎉




