If you have ever migrated an ASP.NET application from Azure App Service or IIS to Kubernetes, there is a good chance that ASP.NET Data Protection was not something you had to think about — until things started breaking in subtle and confusing ways. Users getting logged out seemingly at random, cookie validation failures appearing under load, or errors that only materialise after a deployment. These are the classic symptoms of a Data Protection configuration that has not been adapted for a distributed, containerised environment.
This post explains what ASP.NET Data Protection is, why it matters in Kubernetes, and how to configure it correctly using Azure Blob Storage and Azure Key Vault — with passwordless authentication via Workload Identity on AKS.
## What Is ASP.NET Data Protection?
ASP.NET Data Protection is the cryptographic subsystem built into ASP.NET Core. It is responsible for protecting (encrypting) and unprotecting (decrypting) sensitive data that the application hands out and later receives back, such as cookies and tokens that round-trip through the browser. Internally, it manages a set of cryptographic keys and uses them to protect payloads.
Common consumers of the Data Protection system include:
- Authentication cookies — the encrypted cookie payload written by `CookieAuthenticationHandler`
- Anti-forgery tokens — the hidden form field used to prevent CSRF attacks
- TempData — when stored in a cookie
- `IDataProtector` — available for direct use in application code to protect arbitrary data
You rarely see it explicitly in application code because the framework uses it transparently under the hood. That invisibility is exactly why it catches teams off guard.
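For the rare cases where you do call it directly, an `IDataProtector` is created from the injected `IDataProtectionProvider`. A minimal sketch — the class name and purpose string here are illustrative, not from any real application:

```csharp
using Microsoft.AspNetCore.DataProtection;

// Hypothetical service that protects an email address before it is
// round-tripped through a query string or hidden form field.
public class EmailProtector
{
    private readonly IDataProtector _protector;

    public EmailProtector(IDataProtectionProvider provider)
    {
        // The purpose string isolates these payloads: a protector created
        // with a different purpose string cannot unprotect them.
        _protector = provider.CreateProtector("ContactForm.Email");
    }

    public string Protect(string email) => _protector.Protect(email);

    public string Unprotect(string payload) => _protector.Unprotect(payload);
}
```

Crucially, `Unprotect` only succeeds if the key that produced the payload is still available, which is exactly the guarantee that breaks when each pod has its own key ring.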
## Why You Probably Never Thought About It Before
### On Azure App Service

Azure App Service automatically handles Data Protection key storage and sharing. When you run multiple instances of your application within a single App Service Plan, the platform ensures all instances share the same key ring via a shared file system path (`%HOME%\ASP.NET\DataProtection-Keys`). This happens with zero configuration on your part, which means most developers have never had to think about it.
### On IIS
In a traditional IIS deployment, you typically run one application per server, or you use a shared network path (UNC share) for a web farm setup. Either way, the key ring is usually on a shared disk or the problem is avoided entirely by the fact that a single-server deployment means only one instance exists.
### On Kubernetes
Kubernetes changes the rules entirely. Your application runs across multiple pods, each with its own ephemeral container filesystem. There is no shared disk between pods by default. Every time a pod starts, it has no knowledge of the keys generated by its siblings — or by its previous incarnations. The Data Protection system defaults to in-memory key storage, which means keys live only for the lifetime of the process.
This is the source of the problems you will encounter.
## What Goes Wrong Without Correct Configuration
Consider a typical ASP.NET web application deployed with a Kubernetes Deployment configured to run three replicas. Each pod generates its own in-memory key ring on startup.
```mermaid
sequenceDiagram
    actor Browser
    participant LB as K8s Service
    participant A as Pod A
    participant B as Pod B
    participant C as Pod C

    Note over A: In-memory Key Ring A
    Note over B: In-memory Key Ring B
    Note over C: In-memory Key Ring C

    Browser->>LB: POST /login
    LB->>A: Route request
    A-->>Browser: 200 OK — Set-Cookie (encrypted with Key Ring A)
    Note over Browser: Authenticated. Cookie payload protected by Key Ring A.

    Browser->>LB: GET /dashboard (cookie attached)
    LB->>B: Load-balanced to Pod B
    Note over B: Cannot decrypt — Key Ring B ≠ Key Ring A
    B-->>Browser: ❌ Authentication failure

    Browser->>LB: GET /dashboard (retry)
    LB->>C: Load-balanced to Pod C
    Note over C: Cannot decrypt — Key Ring C ≠ Key Ring A
    C-->>Browser: ❌ Authentication failure
```
### Scenario: Authentication Cookie Failures Across Pods
A user authenticates against Pod A. The authentication middleware on Pod A encrypts the identity into a cookie using Pod A’s key ring and sends it to the browser.
On the next request, the Kubernetes Service load-balances the traffic to Pod B. Pod B attempts to decrypt the cookie using its own key ring. Because Pod B’s keys are entirely different from Pod A’s, decryption fails. The user is either signed out or receives a 500 error.
With three pods, statistically only one in three requests for an authenticated user would succeed — and the failures would appear random and intermittent, making them extremely difficult to diagnose.
### Scenario: Application Restarts
A pod restarts due to a crash, a node eviction, or a rolling deployment. Its in-memory keys are gone. Any cookies or tokens encrypted by that pod before the restart are now permanently invalid. Every user who was authenticated against that pod is silently logged out.
### Scenario: Scale-Up Events
Your application scales from two replicas to five under load. The three new pods each generate independent key rings. You now have five incompatible key rings in service simultaneously. Error rates spike proportionally to the number of pods that cannot decrypt tokens issued by others.
### Scenario: Rolling Deployments
A rolling deployment replaces old pods with new ones. As old pods are terminated, the keys they held in memory are lost. Tokens issued during the previous deployment become unreadable by the new pods. Users experience a forced logout on every deployment.
## The Distinction: Persisting Keys vs. Protecting Keys
Before looking at the solution, it is important to understand that there are two separate concerns:
**Persisting keys** means storing the key ring in a durable, shared location that all application instances can access. Without this, each pod has its own ephemeral keys and they cannot read each other’s output. Persistence ensures availability and consistency across the cluster.

**Protecting keys** means encrypting the key ring itself at rest. The key ring contains the cryptographic material used to encrypt your users’ session cookies. If an attacker obtains the raw key ring, they can forge authentication tokens and impersonate any user. Protecting keys ensures that even if the storage location is compromised, the key material remains unreadable without an additional secret — in this case, a key held in Azure Key Vault.
You should always do both. Persisting without protecting stores sensitive key material in plaintext on a storage account that may be accessible to many identities. Protecting without persisting is no better than in-memory keys — they disappear on restart. Together, they give you keys that survive across all pods and deployments, and that cannot be read even if the storage is breached.
## The Solution: Azure Blob Storage + Azure Key Vault
The recommended approach for ASP.NET applications running on AKS (or any Azure-hosted Kubernetes) is to:
- Persist the key ring to an Azure Blob Storage container
- Protect the key ring using a key stored in Azure Key Vault
**Infrastructure as Code:** All of the Azure resources referenced in this post — the Storage Account, blob container, Key Vault, key, Managed Identity, and RBAC assignments — should ideally be provisioned and managed via Infrastructure as Code using tools such as Terraform or Crossplane. This ensures resources are version-controlled, repeatable across environments, and not subject to configuration drift. For brevity, the examples below use Azure CLI commands to illustrate the required setup, but these should be translated into your IaC tooling of choice before being used in a real environment.
```mermaid
flowchart TB
    Browser["Client Browser"]

    subgraph AKS["Azure Kubernetes Service"]
        direction TB
        WLID["Service Account\n+ Workload Identity"]
        subgraph Deploy["Deployment"]
            PodA["Pod A"]
            PodB["Pod B"]
            PodC["Pod C"]
        end
    end

    subgraph AzureCloud["Azure"]
        MI["User-Assigned\nManaged Identity"]
        subgraph StorageAccount["Storage Account"]
            Blob["Blob Container\nkeys.xml — Persisted Key Ring"]
        end
        subgraph KeyVault["Azure Key Vault"]
            Key["RSA Key\nProtection Key"]
        end
    end

    Browser --> Deploy
    PodA & PodB & PodC --> WLID
    WLID -->|"Federated Identity Credential"| MI
    MI -->|"Storage Blob Data Contributor"| Blob
    MI -->|"Key Vault Crypto User"| Key
    Blob -.->|"Key ring XML encrypted using"| Key
```
### Required NuGet Packages

```shell
dotnet add package Azure.Extensions.AspNetCore.DataProtection.Blobs
dotnet add package Azure.Extensions.AspNetCore.DataProtection.Keys
dotnet add package Azure.Identity
```
### Configuration

```csharp
// Program.cs
using Azure.Core;
using Azure.Identity;
using Azure.Storage.Blobs;
using Microsoft.AspNetCore.DataProtection;

var builder = WebApplication.CreateBuilder(args);

// Use WorkloadIdentityCredential in production for direct, low-latency token
// acquisition. Fall back to DefaultAzureCredential in development so that
// local az login / Visual Studio credentials are picked up automatically.
TokenCredential credential = builder.Environment.IsProduction()
    ? new WorkloadIdentityCredential()
    : new DefaultAzureCredential();

// Reference the blob where keys will be persisted
var blobClient = new BlobServiceClient(
        new Uri("https://<storage-account-name>.blob.core.windows.net"),
        credential)
    .GetBlobContainerClient("dataprotection-keys")
    .GetBlobClient("keys.xml");

builder.Services.AddDataProtection()
    // Give this application a unique name so keys are not shared across
    // different applications that happen to use the same storage account
    .SetApplicationName("my-application-name")
    // Persist the key ring to Azure Blob Storage
    .PersistKeysToAzureBlobStorage(blobClient)
    // Protect the key ring using a key in Azure Key Vault
    .ProtectKeysWithAzureKeyVault(
        new Uri("https://<key-vault-name>.vault.azure.net/keys/<key-name>"),
        credential);

var app = builder.Build();

// ...
```
### Pre-Creating the Blob Container

The blob container must exist before the application starts. Create it as part of your IaC provisioning, or with the Azure CLI:

```shell
az storage container create \
  --name dataprotection-keys \
  --account-name <storage-account-name> \
  --auth-mode login
```

The `keys.xml` blob itself will be created by the Data Protection system on first startup if it does not exist.
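If you would rather have the application itself guarantee the container exists (for example in environments without full IaC coverage), the Blob SDK can create it idempotently at startup. A sketch, assuming the container name and endpoint used throughout this post, and an identity holding Storage Blob Data Contributor at the storage account scope (that role includes the container write permission; scoped to a single container, it cannot create one):

```csharp
using Azure.Identity;
using Azure.Storage.Blobs;

var containerClient = new BlobServiceClient(
        new Uri("https://<storage-account-name>.blob.core.windows.net"),
        new DefaultAzureCredential())
    .GetBlobContainerClient("dataprotection-keys");

// No-op if the container already exists; creates it otherwise.
containerClient.CreateIfNotExists();

var blobClient = containerClient.GetBlobClient("keys.xml");
```

The trade-off is a broader role scope for the application identity, which is why pre-creating the container in IaC remains the cleaner option.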
### Pre-Creating the Key Vault Key

The Key Vault key used to protect the key ring must be created in advance. The application does not create this key — it only references an existing key to perform `wrapKey` and `unwrapKey` operations. If the key does not exist at startup, the application will throw an exception when it first attempts to load or write the key ring.

Create the key with the Azure CLI:

```shell
az keyvault key create \
  --vault-name <key-vault-name> \
  --name dataprotection \
  --kty RSA \
  --size 2048
```

Once created, the full key identifier URI (including the version) will be shown in the output. You can also use a versionless URI (omitting the version segment) if you want key rotation to be picked up automatically. The URI format used in `ProtectKeysWithAzureKeyVault` should be:

```
https://<key-vault-name>.vault.azure.net/keys/<key-name>
```
Unlike the Key Vault key, the `keys.xml` blob does not need to be created in advance: only the container must exist, and the blob itself is managed entirely by the Data Protection system.
## Passwordless Authentication on AKS: Workload Identity
If you are running on Azure Kubernetes Service, you should take advantage of Workload Identity rather than using connection strings or client secrets. Workload Identity allows a Kubernetes pod to assume an Azure Managed Identity without any credentials being stored in the application or its configuration.
At a high level, the setup involves:
- An Azure User-Assigned Managed Identity (or System-Assigned on the AKS node pool)
- An OIDC Issuer on the AKS cluster (enabled with `--enable-oidc-issuer`)
- A Federated Identity Credential linking the Kubernetes Service Account to the Managed Identity
- The pod annotated with the Service Account and the `azure.workload.identity/use: "true"` label
Once this is configured, your application authenticates as the Managed Identity — no connection strings, no secrets, no rotation required.
Refer to the AKS Workload Identity documentation for the full setup steps, as the specifics depend on your cluster configuration and identity strategy.
## Choosing the Right Credential: Avoid DefaultAzureCredential in Production

`DefaultAzureCredential` is a convenience credential that works by walking through a chain of credential providers in a fixed order until one succeeds. The chain, in order, is: Environment Variables → Workload Identity → Managed Identity → Visual Studio → Azure CLI → Azure PowerShell → Azure Developer CLI.

In local development this is useful — it means a developer can run the application with `az login` and have it just work. However, in production this chain has a real cost. If the first several providers in the chain fail before reaching the correct one (for example, attempting Environment Variables, then timing out trying to reach the Managed Identity endpoint before falling through to Workload Identity), each failed attempt adds latency to the token acquisition. In a pod that is frequently acquiring or refreshing tokens, this compounds quickly.

In production, always use the specific credential type that matches your environment. On AKS with Workload Identity configured, use `WorkloadIdentityCredential` directly. Reserve `DefaultAzureCredential` for local development and tooling contexts where the flexibility is genuinely needed.
## Required RBAC Roles

### Azure Blob Storage
The identity running your application (Managed Identity or Workload Identity) needs permission to read and write blobs in the storage account. Grant the following role scoped to the storage account or the specific container:
| Role | Scope | Purpose |
|---|---|---|
| Storage Blob Data Contributor | Storage Account or Container | Read and write the `keys.xml` blob |
```shell
az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee-object-id <managed-identity-principal-id> \
  --assignee-principal-type ServicePrincipal \
  --scope /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>/blobServices/default/containers/dataprotection-keys
```
Use the Principal ID (Object ID) of the Managed Identity, not the Client ID. You can find it in the Azure Portal under the identity resource, or via `az identity show --name <name> --resource-group <rg> --query principalId`. Using `--assignee-object-id` with `--assignee-principal-type ServicePrincipal` is the recommended approach as it avoids AAD graph lookups and eliminates race conditions that can cause newly created identities to fail role assignment.
### Azure Key Vault
The identity needs permission to use the key for cryptographic operations. The exact role depends on whether your Key Vault uses the Vault access policy model or Azure RBAC. The RBAC model is preferred for new deployments.
| Role | Scope | Purpose |
|---|---|---|
| Key Vault Crypto User | Key Vault or specific Key | Wrap and unwrap the data protection keys using the Key Vault key |
```shell
az role assignment create \
  --role "Key Vault Crypto User" \
  --assignee-object-id <managed-identity-principal-id> \
  --assignee-principal-type ServicePrincipal \
  --scope /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<key-vault-name>
```
The `Key Vault Crypto User` role grants `wrapKey` and `unwrapKey` operations, which is exactly what the Data Protection system needs. It does not grant the ability to export or delete the key, which keeps the blast radius of a compromised identity small.
**Note:** If your Key Vault still uses the legacy access policy model, you will need to create a policy granting the `Get`, `WrapKey`, and `UnwrapKey` key permissions to the identity instead.
## Putting It All Together: A Complete Example
Here is a realistic `Program.cs` for an ASP.NET application using cookie authentication, with Data Protection correctly configured for a Kubernetes deployment:

```csharp
// Program.cs
using Azure.Core;
using Azure.Identity;
using Azure.Storage.Blobs;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.DataProtection;

var builder = WebApplication.CreateBuilder(args);

// In production (AKS with Workload Identity), use WorkloadIdentityCredential
// directly — it goes straight to the correct token endpoint with no chain
// traversal overhead. In local development, DefaultAzureCredential provides
// the flexibility to authenticate via az login or Visual Studio.
TokenCredential credential = builder.Environment.IsProduction()
    ? new WorkloadIdentityCredential()
    : new DefaultAzureCredential();

var storageAccountUri = new Uri("https://<storage-account-name>.blob.core.windows.net");
var keyVaultKeyUri = new Uri("https://<key-vault-name>.vault.azure.net/keys/<key-name>");

var blobClient = new BlobServiceClient(storageAccountUri, credential)
    .GetBlobContainerClient("dataprotection-keys")
    .GetBlobClient("keys.xml");

builder.Services.AddDataProtection()
    .SetApplicationName("my-application-name")
    .PersistKeysToAzureBlobStorage(blobClient)
    .ProtectKeysWithAzureKeyVault(keyVaultKeyUri, credential);

builder.Services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =>
    {
        options.Cookie.Name = "my-app-auth";
        options.Cookie.HttpOnly = true;
        options.Cookie.SecurePolicy = CookieSecurePolicy.Always;
        options.Cookie.SameSite = SameSiteMode.Lax;
        options.ExpireTimeSpan = TimeSpan.FromHours(8);
        options.SlidingExpiration = true;
        options.LoginPath = "/account/login";
    });

builder.Services.AddControllersWithViews();

var app = builder.Build();

app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();

app.MapDefaultControllerRoute();

app.Run();
```
With this configuration:
- All pods in the deployment read and write the same `keys.xml` in Blob Storage
- The key ring is encrypted at rest using the Key Vault key
- `WorkloadIdentityCredential` is used in production for direct, low-latency token acquisition; `DefaultAzureCredential` is used locally for developer convenience
- Authentication cookies written by any pod can be decrypted by any other pod
- Cookies remain valid across pod restarts, scale events, and rolling deployments
## Configuration via Environment Variables or App Settings
If you prefer to keep the URIs out of your code and load them from configuration:
```csharp
var storageAccountUri = new Uri(builder.Configuration["DataProtection:StorageAccountUri"]
    ?? throw new InvalidOperationException("DataProtection:StorageAccountUri is not configured"));

var keyVaultKeyUri = new Uri(builder.Configuration["DataProtection:KeyVaultKeyUri"]
    ?? throw new InvalidOperationException("DataProtection:KeyVaultKeyUri is not configured"));
```
These can then be supplied via Kubernetes ConfigMap or environment variables in your pod spec, keeping the values environment-specific without embedding them in the image.
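The same two values can also be bound to a small settings class rather than read key by key. A sketch, using a hypothetical `DataProtectionSettings` class for the same `DataProtection` section:

```csharp
// Hypothetical settings class mirroring the "DataProtection" section
public sealed class DataProtectionSettings
{
    public string StorageAccountUri { get; set; } = string.Empty;
    public string KeyVaultKeyUri { get; set; } = string.Empty;
}

// In Program.cs: bind and validate once at startup
var settings = builder.Configuration
    .GetSection("DataProtection")
    .Get<DataProtectionSettings>()
    ?? throw new InvalidOperationException("DataProtection section is not configured");

var storageAccountUri = new Uri(settings.StorageAccountUri);
var keyVaultKeyUri = new Uri(settings.KeyVaultKeyUri);
```

When supplying these values as environment variables in a pod spec, the configuration section separator becomes a double underscore, e.g. `DataProtection__StorageAccountUri`.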