1 The Modern Edge: Why the “CDN” Label is Outdated
Many .NET teams still think of Azure Front Door, Cloudflare, and AWS CloudFront as CDNs whose main job is serving static files. That view made sense years ago, but it no longer reflects reality in 2026. Today, the edge is where requests are routed, secured, filtered, authenticated, and sometimes even partially executed before they ever reach Kestrel.
This shift changes how you should design global .NET systems. The core question is no longer “which CDN is fastest,” but rather: Where does application responsibility stop, and where does the edge take over?
If you push everything into your .NET application, latency grows, deployments slow down, and global logic gets duplicated across services. If you push too much logic to the edge without clear boundaries, you risk scattering business rules across multiple systems. The balance most teams land on is a split-stack model: the edge enforces global, cross-cutting concerns, while the backend focuses on domain logic and data.
1.1 Beyond Static Caching: Redefining the Edge as a Compute, Security, and Routing Layer
The modern edge sits directly in front of your Azure-hosted .NET APIs and performs work that used to live inside middleware, filters, or even controllers. This work is not business-specific; it’s global, repeatable, and latency-sensitive.
1.1.1 Edge Compute
All three platforms now offer some form of edge compute designed for request shaping rather than full application logic:
- Azure Front Door Rules Engine handles redirects, rewrites, header manipulation, and route selection without code deployments.
- Cloudflare Workers run JavaScript, WASM, or Rust at every point of presence, with very fast startup times and strong isolation.
- AWS CloudFront Functions provide lightweight JavaScript execution for request/response handling, while Lambda@Edge supports heavier logic such as JWT inspection or experiment routing.
The architectural rule of thumb is straightforward: If logic does not depend on application state or databases, it probably belongs at the edge.
Common examples include:
- URL canonicalization (/Home → /)
- Redirecting mobile users to a mobile experience
- Blocking or challenging automated traffic
- Geo-based access control
- Cookie normalization before caching
- Static API key checks
- Tenant resolution based on hostname
Moving these concerns out of your .NET app reduces request processing time and keeps application code easier to reason about.
1.1.2 Security Layer
The edge has become the first and most important security boundary. It is where traffic is cheapest to inspect and easiest to discard.
Typical edge-enforced protections include:
- Managed and custom WAF rules
- Distributed DDoS mitigation
- IP reputation and threat scoring
- TLS version and cipher enforcement
- Mutual TLS validation
- Bot detection and fingerprinting
By stopping malicious or malformed requests at the edge, your App Service, Container Apps, or AKS workloads avoid wasting CPU cycles on traffic that should never have reached them. Under load or attack, this difference shows up clearly in more stable P95 and P99 latency.
1.1.3 Traffic Engineering Layer
Beyond security, the edge now controls how traffic flows globally:
- Anycast-based routing
- Global load balancing
- Session affinity where required
- Health-probe–driven failover
- Latency-, weight-, or geography-based routing
At this point, calling these platforms “CDNs” undersells what they do. They function as a global application traffic control plane, not just a cache.
1.2 The “Split-Stack” Architecture: Moving Routing Logic Out of Program.cs and Into the Edge
Many .NET applications still contain middleware that performs global routing decisions:
app.Use(async (context, next) =>
{
    // Host is a HostString; compare its host component, not the struct itself.
    if (context.Request.Host.Host == "old.example.com")
    {
        context.Response.Redirect("https://new.example.com", permanent: true);
        return;
    }
    await next();
});
This works fine in a single region. At global scale, it becomes inefficient. Every request must traverse the public internet, reach the Azure region, be accepted by Kestrel, and then be redirected. That adds latency and consumes compute for work that could have been handled closer to the user.
In a split-stack model, these responsibilities move outward:
- Redirects → Edge rules (AFD, Cloudflare, or CloudFront)
- Host-based routing → Edge configuration
- Header normalization → Edge
- Regional failover → Edge
- Static API key checks → Edge
- IP filtering → Edge WAF
The .NET backend is then responsible for what actually requires application context:
- Authentication and authorization using Microsoft.Identity.Web
- Domain and business logic
- Tenant-aware authorization
- Data access and consistency
This separation has practical benefits. Performance improves because fewer requests reach the backend unnecessarily. Deployments become safer because changing a redirect or header rule no longer requires rebuilding and redeploying the application. And the codebase becomes easier to maintain because global networking logic is no longer mixed into application flow.
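One piece of backend configuration does become more important in this model: because TLS now terminates at the edge, Kestrel sees plain HTTP, and the original scheme and host arrive only in forwarded headers. A minimal Program.cs sketch using ASP.NET Core's forwarded-headers middleware follows; which headers your edge actually sends depends on its configuration.

```csharp
using Microsoft.AspNetCore.HttpOverrides;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Trust the scheme and host the edge forwarded so redirects, cookies,
// and generated links use the public https address, not the internal origin.
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedProto | ForwardedHeaders.XForwardedHost
    // In production, also set KnownProxies / KnownNetworks so that only
    // the edge layer is allowed to supply these headers.
});

app.Run();
```

Without this, the application may emit http:// redirects or set cookies for the wrong host even though users connected over HTTPS.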
1.3 The 2026 Landscape
Over the last few years, edge platforms have changed quickly. For teams running .NET backends on Azure, three developments matter most.
1.3.1 Retirement of Azure Front Door Classic
Azure Front Door Classic is being phased out in favor of Standard and Premium tiers. This is not just a pricing change; it reflects a different architectural direction.
Key differences include:
- Premium adds first-class WAF, Private Link, and bot protection
- Traffic flows over the Microsoft global backbone rather than the public internet
- Routing configuration is simpler and more predictable
- Rule sets support ordered evaluation and clearer conditions
- Full support for ARM and Bicep deployments
Classic relied heavily on public IP exposure and had limited extensibility. Standard and Premium behave more like a modern application edge than a traditional CDN.
1.3.2 Cloudflare’s Evolution into a “Connectivity Cloud”
Cloudflare has moved well beyond acceleration and caching. Its focus from 2025 onward has been building a unified connectivity and security layer that spans users, devices, and applications.
Key areas include:
- Zero Trust access control
- Device posture and identity checks
- mTLS-based service identity
- Workers, WASM, and AI inference at the edge
- Layer 7 firewalls with ML-based decisions
- Network replacement capabilities (Magic WAN and Gateway)
- Private, optimized global routing
For organizations running workloads across multiple clouds, Cloudflare often becomes the single global entry point, even when all compute lives on Azure.
1.3.3 AWS CloudFront’s Expansion into Edge Compute
CloudFront remains dominant in media delivery, but it has grown into a capable application edge as well.
Notable capabilities include:
- CloudFront Functions for fast, simple request manipulation
- Lambda@Edge for more complex logic such as token validation or experiment routing
- Origin Shield to reduce load on backend systems
- Tight integration with AWS security tooling
For teams with a strong AWS footprint, CloudFront is a natural fit. With careful configuration, it can also front Azure-hosted .NET applications effectively.
1.4 The Latency Imperative: TCP Termination and TLS Handshake Performance
One of the biggest performance gains from using an edge network comes from where connections are established.
1.4.1 Anycast vs. Unicast
- Anycast (Cloudflare, Azure Front Door, CloudFront): DNS resolves to the closest edge location. TCP and TLS handshakes complete near the user, not in the backend region.
- Unicast: all traffic resolves to a fixed location, and the handshake happens wherever the server lives.
Anycast consistently reduces connection setup time. For global users, savings of 40–80 ms per request are common, even before any caching or routing logic is applied.
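The arithmetic behind that range is simple: TCP needs one round trip and TLS 1.3 one more before the first byte of the request moves, so connection setup costs roughly two RTTs. A quick sketch with illustrative RTT values (assumptions, not measurements):

```csharp
using System;

// Connection setup ≈ TCP handshake (1 RTT) + TLS 1.3 handshake (1 RTT).
static double SetupMs(double rttMs, int handshakeRtts = 2) => rttMs * handshakeRtts;

double toFarRegion = 90; // assumed RTT from a user to a distant origin region
double toNearEdge = 15;  // assumed RTT from the same user to the nearest anycast PoP

// Milliseconds saved before a single byte of the request is sent.
Console.WriteLine(SetupMs(toFarRegion) - SetupMs(toNearEdge));
```

With these assumed numbers the saving is in the same order of magnitude as the 40–80 ms figure above, and it applies to every new connection.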
1.4.2 Why This Matters for .NET Applications
For REST APIs built on modern .NET (9 or 10), latency improvements compound across the request lifecycle:
- Faster connection setup
- Quicker TLS negotiation
- Earlier WAF and bot evaluation
- Reduced header parsing overhead
- Faster routing decisions
When these steps happen closer to the user, the application feels faster and more stable, especially under load. Over millions of requests, small per-request improvements add up to meaningful gains in both performance and cost.
2 Architecture & Connectivity: Connecting to Azure Backends
Once you decide to place an edge in front of your application, the next real design decision is how traffic reaches your Azure workloads. This is where theory turns into architecture. The choices you make here affect security posture, latency, operational complexity, and long-term cost.
This section looks at how Azure Front Door, Cloudflare, and AWS CloudFront connect to Azure-hosted .NET backends running on App Service, Azure Container Apps (ACA), or AKS. The focus is not on feature checklists, but on the actual network paths your requests take and what trade-offs those paths introduce.
2.1 Azure Front Door (The Home Field Advantage)
When your backend lives entirely in Azure, Azure Front Door has one major advantage that the others simply cannot replicate: native integration with Azure networking. That integration shows up most clearly in how traffic reaches your origin.
2.1.1 Private Link Integration
Azure Front Door Premium supports private origins, which means your backend does not need to be publicly reachable at all.
In practice, this gives you three important properties:
- Your App Service, Container App, or AKS service does not expose a public IP
- Traffic flows from the AFD edge through Private Link into your virtual network
- Public inbound traffic can be fully disabled at the origin
The request path looks like this:
Client → AFD Edge (Anycast) → Microsoft Backbone → Private Endpoint → Azure Backend
From a security perspective, this is clean and easy to reason about. You are not managing IP allow-lists, rotating secrets in headers, or maintaining firewall rules that change as edge IP ranges evolve. Only Azure Front Door can reach the backend, and Azure enforces that at the network layer.
For regulated environments or internal APIs, this is often the deciding factor in choosing AFD. It gives you strong isolation without adding extra infrastructure.
A minimal Bicep example for defining a private origin looks like this:
resource origin 'Microsoft.Cdn/profiles/originGroups/origins@2023-05-01' = {
  parent: originGroup // the AFD origin group this origin belongs to
  name: 'apiOrigin'
  properties: {
    hostName: 'myapp.azurewebsites.net'
    sharedPrivateLinkResource: {
      privateLink: {
        id: appService.id
      }
      groupId: 'sites' // Private Link sub-resource for App Service
      privateLinkLocation: 'westeurope'
      requestMessage: 'AFD access to backend'
    }
  }
}
Once this is in place, you can disable public access on the App Service or Container App entirely. From the application’s point of view, nothing changes. From a security standpoint, the attack surface shrinks dramatically.
2.1.2 Microsoft Global Network Advantage
Another difference that matters at scale is how traffic travels between the edge and the backend. With Azure Front Door, traffic does not traverse the public internet after it reaches the edge. It stays on the Microsoft global backbone.
This has a few concrete effects:
- Lower and more consistent latency
- Less jitter during peak internet congestion
- Fewer transient packet loss events
- Faster and more predictable autoscaling behavior
For .NET APIs that handle authentication, billing, or other latency-sensitive operations, this consistency shows up in better P99 performance. You may not notice it during light traffic, but under steady global load, backbone routing is noticeably more stable than public internet paths.
2.2 Cloudflare (The Agnostic Giant)
Cloudflare is often chosen when organizations want a single edge layer across multiple clouds or when Zero Trust is a first-class requirement. Its connectivity model is different from Azure Front Door’s, but it is still well-suited to Azure backends when configured correctly.
2.2.1 Cloudflare Tunnel (cloudflared)
Cloudflare Tunnel allows you to connect Azure workloads to Cloudflare without exposing them publicly. Instead of accepting inbound connections, your backend initiates an outbound, encrypted connection to Cloudflare.
The flow looks like this:
Azure Backend (App Service / ACA / VM)
→ cloudflared (outbound)
→ Encrypted Tunnel
→ Cloudflare Edge
This approach removes several operational concerns at once:
- No public IPs on the backend
- No inbound firewall rules
- No IP allow-lists to maintain
- No reverse proxy VMs or load balancers
For .NET teams using Azure Container Apps, Tunnel is typically deployed as a sidecar container:
containers:
  - name: api
    image: myregistry/api:latest
  - name: cloudflared
    image: cloudflare/cloudflared:latest
    args: ['tunnel', 'run'] # cloudflared picks up the token from the environment
    env:
      - name: TUNNEL_TOKEN
        value: "<your token>"
From the application’s perspective, requests appear as normal HTTP traffic. From the network’s perspective, the origin is never directly reachable from the internet. This model scales well and aligns closely with Zero Trust principles, especially for internal APIs or admin endpoints.
2.2.2 Argo Smart Routing
By default, Cloudflare routes traffic from its edge to your Azure region over the public internet. Argo Smart Routing improves this by dynamically choosing less congested paths based on real-time network telemetry.
In practice, Argo can reduce tail latency by 20–40% for globally distributed users. The trade-off is cost: Argo is billed per GB, so it adds to your monthly spend.
Argo is usually worth enabling when:
- Your users are spread across multiple continents
- Your APIs are latency-sensitive
- Request volume is high enough that small per-request gains matter
It is less compelling when:
- Most users are near a single Azure region
- Responses are large enough that egress dominates cost
- The workload is already limited by backend processing time
Like many edge features, Argo is not universally necessary. It is a targeted optimization, not a default setting.
2.3 AWS CloudFront (The Competitor Integration)
CloudFront is not Azure-native, but it is still common in organizations with a strong AWS footprint or shared global edge strategy. When CloudFront fronts Azure workloads, the main challenge is securing the origin.
2.3.1 Securing Azure Origins from AWS Edges
Because Azure App Service and Container Apps typically expose public endpoints, you must explicitly restrict who can reach them when using CloudFront.
One common approach is custom headers. CloudFront injects a shared secret into every request, and the backend rejects requests that do not include it.
CloudFront configuration adds a header such as:
X-Origin-Auth: abc123-secret
The .NET application enforces it early in the pipeline:
var expectedSecret = app.Configuration["Edge:OriginSecret"]; // loaded from Key Vault or app settings, never hard-coded

app.Use(async (ctx, next) =>
{
    if (!ctx.Request.Headers.TryGetValue("X-Origin-Auth", out var value) ||
        !string.Equals(value, expectedSecret, StringComparison.Ordinal))
    {
        ctx.Response.StatusCode = StatusCodes.Status403Forbidden;
        return;
    }
    await next();
});
This is simple and effective, but it requires secure secret management and careful rotation.
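Rotation is easier if the backend accepts two secrets during the overlap window, so the edge configuration and the application never have to change in lock-step. A small sketch of that check (the key values and parameter names are illustrative):

```csharp
using System;

// Accept either the current secret or, during a rotation window, the previous one.
static bool IsAuthorized(string? presented, string current, string? previous) =>
    presented is not null &&
    (presented == current || (previous is not null && presented == previous));

Console.WriteLine(IsAuthorized("next", "next", "old"));  // True
Console.WriteLine(IsAuthorized("old", "next", "old"));   // True (rotation window)
Console.WriteLine(IsAuthorized("stale", "next", "old")); // False
```

Once the edge is confirmed to send only the new secret, the previous value is removed from configuration.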
Another option is IP allow-listing using Azure Firewall or Application Gateway. AWS publishes CloudFront IP ranges in JSON, but they change frequently. Teams that choose this route usually automate updates as part of their infrastructure pipeline.
Both approaches work, but neither is as clean as Private Link or Tunnel-based connectivity.
2.3.2 Origin Shield
CloudFront Origin Shield introduces an additional aggregation layer between the edge and your Azure backend. Instead of every CloudFront PoP reaching your origin directly, requests are funneled through a single regional shield.
Origin Shield is useful when:
- The application serves mostly dynamic content
- Cache hit rates are low
- Azure SQL or compute is under sustained pressure
It reduces duplicate origin requests and smooths load spikes. However, it adds cost and an extra hop, so it is less helpful when the backend is already highly scalable or when most content is static and well-cached.
3 Caching Strategies & Rules Engines for .NET Apps
Caching at the edge sounds straightforward until you apply it to real .NET APIs. Most applications return dynamic data, vary responses by language or tenant, and rely on headers that unintentionally explode cache keys. When caching is misconfigured, the result is either low cache hit rates or, worse, users seeing the wrong content.
The goal of edge caching in 2026 is not “cache everything.” It is to cache deliberately, normalize inputs, and make routing and rewrite decisions before requests reach your application. Done well, caching reduces backend load and improves global latency without compromising correctness.
3.1 Vary-By-Header & Normalization
One of the most common causes of poor cache performance in .NET APIs is unbounded header variance. Headers such as Accept-Language, User-Agent, and client hints often differ per request, even when the response content does not meaningfully change.
Consider a simple request:
GET /products
Accept-Language: en-US
User-Agent: Chrome
From a CDN’s point of view, this is a unique cache key. Change the language slightly or the browser version, and you now have multiple cache entries for the same logical response. Over time, this leads to cache fragmentation and low hit ratios.
3.1.1 Practical Normalization Rules
Most production systems adopt a few simple rules:
- Reduce Accept-Language to a small, supported set (en, fr, de)
- Remove headers that should not affect caching (User-Agent, Sec-CH-UA)
- Prefer explicit query parameters for variants (?lang=en)
- Use Vary only when a header genuinely affects the response
When a response truly varies by language, declare it explicitly:
Vary: Accept-Language
Avoid adding Vary by default. Each additional vary dimension multiplies the number of cache entries the edge must maintain.
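A quick count shows why this matters: cache entries scale multiplicatively with every vary dimension. The numbers below are illustrative, not measured:

```csharp
using System;

int urls = 200;              // distinct cacheable URLs (assumed)
int rawLanguages = 40;       // Accept-Language values seen in the wild (assumed)
int normalizedLanguages = 3; // en, fr, de after normalization

Console.WriteLine(urls * rawLanguages);        // entries the edge must maintain without normalization
Console.WriteLine(urls * normalizedLanguages); // entries after normalization
```

The same traffic fills the smaller key space far faster, which is exactly what drives the hit ratio up.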
3.1.2 ASP.NET Core Normalization Example
Normalization usually happens before the request reaches controllers. A lightweight middleware is sufficient:
app.Use(async (ctx, next) =>
{
    // Collapse the raw header to a small supported set so the edge
    // caches at most a few variants.
    var raw = ctx.Request.Headers["Accept-Language"].ToString();
    var normalized = raw.StartsWith("fr", StringComparison.OrdinalIgnoreCase) ? "fr" : "en";
    ctx.Items["lang"] = normalized;
    ctx.Request.Headers["Accept-Language"] = normalized;
    await next();
});
The edge now sees fewer variants, and your application logic still receives a clean, predictable language value.
3.2 The Rules Engine Landscape
Every edge platform provides a rules engine, but they differ in how expressive and maintainable they are. The common theme is that rules should handle global behavior—rewrites, redirects, and header shaping—without involving application code.
3.2.1 Azure Front Door Rule Sets
AFD Premium rule sets are declarative and execute in a defined order. Each rule combines match conditions with one or more actions.
Common uses include:
- Redirecting HTTP to HTTPS
- Rewriting legacy paths
- Adding or removing headers
- Overriding caching behavior per route
A typical HTTPS redirect rule in Bicep:
resource rule 'Microsoft.Cdn/profiles/ruleSets/rules@2023-05-01' = {
  parent: ruleSet
  name: 'forceHttps'
  properties: {
    order: 1
    conditions: [
      {
        name: 'RequestScheme'
        parameters: {
          typeName: 'DeliveryRuleRequestSchemeConditionParameters'
          operator: 'Equal'
          matchValues: ['HTTP']
        }
      }
    ]
    actions: [
      {
        name: 'UrlRedirect'
        parameters: {
          typeName: 'DeliveryRuleUrlRedirectActionParameters'
          redirectType: 'PermanentRedirect'
          destinationProtocol: 'Https'
        }
      }
    ]
  }
}
This replaces the need for redirect middleware in your .NET app and ensures the redirect happens as close to the user as possible.
3.2.2 Cloudflare Rules and Workers
Cloudflare splits functionality across multiple rule types:
- Redirect Rules for canonical URLs
- Transform Rules for header manipulation
- Workers when logic becomes conditional or programmatic
Simple cases stay declarative. More complex scenarios move into Workers.
A minimal Worker that enforces HTTPS:
export default {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.protocol === "http:") {
      url.protocol = "https:";
      return Response.redirect(url.toString(), 301);
    }
    return fetch(request);
  }
};
This runs at every edge location and avoids round trips to the origin for simple routing logic.
3.2.3 CloudFront Functions
CloudFront Functions are designed for fast, simple transformations at viewer request time. They are ideal for adding headers or normalizing requests before caching decisions are made.
Example: injecting a tenant identifier based on hostname:
function handler(event) {
  var request = event.request;
  if (request.headers.host.value.endsWith(".example.com")) {
    request.headers["x-tenant"] = { value: "default" };
  }
  return request;
}
If the logic requires external calls, state, or token validation, it moves to Lambda@Edge, but most caching-related rules remain small and deterministic.
3.3 Cache Invalidation Patterns in .NET
No cache strategy is complete without a reliable invalidation plan. The edge is fast at serving cached content but intentionally conservative about eviction. Your application must explicitly tell it when content is no longer valid.
3.3.1 Tag-Based Invalidation (Surrogate Keys)
Cloudflare supports tag-based purging using surrogate keys. Azure Front Door relies primarily on path-based purging, but you can still design your API responses to align with logical groupings.
A response might include:
Cache-Tag: product-123, category-9
In ASP.NET Core:
context.Response.Headers["Cache-Tag"] = "product-123,category-9";
This allows you to invalidate all cached representations of a product without knowing every URL where it appears.
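A small helper keeps tag naming consistent across responses. The product-{id} / category-{id} scheme below is this sketch's own convention, not a platform requirement:

```csharp
using System;

// Emit one Cache-Tag value per entity the response depends on.
static string BuildCacheTags(int productId, int categoryId) =>
    $"product-{productId},category-{categoryId}";

Console.WriteLine(BuildCacheTags(123, 9)); // product-123,category-9
```

Whatever scheme you pick, the purge side must use exactly the same tag strings, so centralizing the formatting in one helper pays off.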
3.3.2 Programmatic Purging
Most teams trigger cache purges from background jobs or deployment pipelines rather than inline application code.
Azure Front Door (via ARM SDK):
await cdnClient.Endpoints.PurgeContentAsync(
    resourceGroupName,
    profileName,
    endpointName,
    new[] { "/products/*" });
Cloudflare (community SDK):
await api.ZoneCache.PurgeTags(
    zoneId,
    new[] { "product-123" });
CloudFront:
var request = new CreateInvalidationRequest
{
    DistributionId = distributionId,
    InvalidationBatch = new InvalidationBatch(
        new Paths { Quantity = 1, Items = new List<string> { "/products/*" } },
        Guid.NewGuid().ToString())
};
await cloudFrontClient.CreateInvalidationAsync(request);
Each provider favors a slightly different model, but the principle is the same: invalidate narrowly and deliberately.
3.3.3 When Invalidation Is Necessary
Common triggers include:
- Publishing or updating catalog data
- CMS content changes
- Inventory or pricing updates
- Feature flag changes that affect public output
Avoid invalidating entire distributions unless absolutely necessary. Broad purges are expensive and erase the benefits of caching.
3.4 Handling Dynamic Content
Most .NET APIs serve a mix of public and personalized data. Treating all responses the same is a common mistake.
3.4.1 Cache-Control: private
Use this when responses must never be shared across users:
Cache-Control: private, no-store
These responses bypass edge caching entirely.
3.4.2 s-maxage for Shared Edge Caching
When content can be shared at the edge but not cached by browsers:
Cache-Control: public, s-maxage=60, max-age=0
This gives you global acceleration while ensuring clients always revalidate.
3.4.3 A Practical Pattern for Personalization
A common pattern is splitting responses into two layers:
- A public, cacheable envelope (product lists, metadata)
- A private enrichment call (user-specific pricing, entitlements)
This keeps the expensive parts of the response cacheable while preserving correctness for personalized data. It also reduces load on your Azure backend and improves perceived performance for users worldwide.
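As a sketch, the two layers can map to separate endpoints with different cache directives. The routes and payloads below are illustrative, not a prescribed API shape:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Public envelope: shareable at the edge for 60 s, always revalidated by browsers.
app.MapGet("/products", (HttpContext ctx) =>
{
    ctx.Response.Headers.CacheControl = "public, s-maxage=60, max-age=0";
    return Results.Json(new[] { new { Id = 123, Name = "Widget" } });
});

// Private enrichment: user-specific pricing, never cached anywhere.
app.MapGet("/me/pricing", (HttpContext ctx) =>
{
    ctx.Response.Headers.CacheControl = "private, no-store";
    return Results.Json(new { ProductId = 123, Price = 9.99m });
});

app.Run();
```

The client composes the two responses, so the heavily requested envelope stays on the edge while only the small personalized call reaches Azure.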
4 Security at the Edge: WAF, Bot Protection, and Zero Trust
Security is one of the strongest arguments for moving logic to the edge. Every request you block before it reaches Kestrel is CPU you never spend, connections you never open, and logs you never have to analyze. For globally exposed .NET applications, this matters more than most teams realize.
Modern edge platforms combine traditional WAF capabilities with bot detection, identity-aware routing, and behavioral analysis. The result is a security layer that runs before routing, caching, or application logic. Your App Service, Container Apps, or AKS workloads stay focused on business behavior rather than traffic hygiene.
4.1 WAF Architecture
Web Application Firewalls no longer sit at the end of the request pipeline. At the edge, WAF evaluation happens before routing and often before cache lookup. That ordering is important: malicious requests are dropped early, and legitimate requests move through faster.
Each provider ships managed rule sets for common threats, plus custom rules that let you adapt protection to how your .NET APIs are structured. The objective is simple: stop bad traffic deterministically and cheaply.
4.1.1 AFD Premium WAF
Azure Front Door Premium includes a managed WAF based on OWASP rules, maintained by Microsoft and regularly updated. These rules detect common attack patterns such as SQL injection, cross-site scripting, and command injection. For most teams, managed rules handle the majority of threats without additional tuning.
Where AFD WAF becomes more powerful is in custom rules. Many .NET APIs follow predictable routing patterns, for example:
/api/{tenant}/v1/{resource}
That structure makes it easy to write targeted rules that apply only to specific routes or environments. A common pattern is to require a header for internal-only APIs or admin endpoints.
A simple custom rule defined in Bicep might look like this:
resource wafPolicy 'Microsoft.Network/frontdoorwebapplicationfirewallpolicies@2022-05-01' = {
  name: 'afdwafapi' // WAF policy names must be alphanumeric
  location: 'Global'
  sku: {
    name: 'Premium_AzureFrontDoor'
  }
  properties: {
    customRules: {
      rules: [
        {
          name: 'RequireInternalHeader'
          priority: 10
          ruleType: 'MatchRule'
          action: 'Block'
          matchConditions: [
            {
              matchVariable: 'RequestHeader'
              selector: 'X-Internal-Auth'
              operator: 'Equal'
              negateCondition: true // block when the header is missing or has the wrong value
              matchValue: ['required-token']
            }
          ]
        }
      ]
    }
    managedRules: {
      managedRuleSets: [
        {
          ruleSetType: 'Microsoft_DefaultRuleSet' // AFD's OWASP-based managed rules
          ruleSetVersion: '2.1'
        }
      ]
    }
  }
}
This rule prevents requests without the expected header from ever reaching your backend. The application no longer needs to reject these requests itself, and your logs stay cleaner as a result.
A common workflow is to run managed rules in “Log” mode first, review false positives, and then switch to “Block” once confidence is high.
4.1.2 Cloudflare WAF
Cloudflare’s WAF benefits from the sheer volume of traffic it sees globally. Instead of relying only on static signatures, it incorporates bot fingerprints, request behavior, and machine learning–based risk scoring.
One practical advantage is the ability to respond quickly to active attacks. Cloudflare’s “Under Attack” mode applies aggressive challenges automatically, buying time without requiring application changes or redeployments.
Custom rules are defined using a rule expression language that evaluates headers, paths, methods, and reputation signals. Rate limiting can also be tied to ML-derived scores rather than raw request counts, which is useful when attackers rotate IPs.
A simplified example of a rule that blocks high-risk traffic on API routes:
(cf.threat_score gt 30) and starts_with(http.request.uri.path, "/api/")
The expression above is written in Cloudflare's rules language; the rule's action is configured separately as Block.
This approach is especially effective for protecting login endpoints or token-issuing APIs, where abusive automation is more common than classic injection attacks.
4.2 DDoS Mitigation: Layer 3/4 vs. Layer 7
Not all DDoS attacks are the same, and understanding the difference matters when evaluating edge protection.
Layer 3 and 4 attacks target the network itself. These floods are handled almost entirely by the edge provider’s infrastructure. Azure Front Door, Cloudflare, and CloudFront all absorb or discard this traffic before a TCP connection is established. For most Azure-hosted .NET applications, L3/L4 attacks are effectively a solved problem.
Layer 7 attacks are more subtle. They look like valid HTTP traffic and are designed to exhaust application resources rather than network capacity. Examples include credential stuffing, aggressive scraping, and slow-read attacks.
These attacks happen after the TCP handshake, which means the edge must inspect request behavior before deciding whether to forward traffic to Azure.
Typical edge-level mitigations include:
- JavaScript or managed challenges
- Adaptive rate limiting
- Reputation- and behavior-based blocklists
- Path-specific thresholds (for example, /api/auth/login)
- Device and fingerprint analysis
Trying to handle this inside ASP.NET middleware is rarely effective at scale. By the time your code executes, the request has already consumed connection slots and CPU. Edge-based mitigation stops the request before it becomes your problem.
4.3 Zero Trust & Auth Offloading
As systems grow, internal dashboards and operational endpoints become just as sensitive as public APIs. Treating them as “internal but public” often leads to brittle security controls. Zero Trust flips that model by making identity verification a prerequisite for access, regardless of network location.
Modern edge platforms allow you to offload much of this logic.
4.3.1 Azure: Validating JWTs at the Edge
Azure Front Door Premium can validate JWTs before forwarding requests to your backend. This shifts token parsing and signature verification away from your .NET application.
When enabled, AFD validates:
- Token signature
- Expiration time
- Issuer
- Audience
- Optional claims
If validation succeeds, selected claims are forwarded as headers. Your application trusts the edge and skips redundant validation.
A minimal example of consuming forwarded claims:
app.Use(async (ctx, next) =>
{
    if (ctx.Request.Headers.TryGetValue("X-Afd-User-Id", out var userId))
    {
        ctx.Items["UserId"] = userId.ToString();
    }
    await next();
});
This does not replace application-level authorization, but it significantly reduces per-request overhead for high-volume APIs.
4.3.2 Cloudflare Access
Cloudflare Access provides a Zero Trust layer for tools that were never designed to be internet-facing, such as Hangfire dashboards, health endpoints, or internal admin UIs.
Instead of modifying the application, you define identity providers and access rules at the edge. Once authenticated, Cloudflare injects identity information into request headers.
A typical setup looks like this:
- Define an Access application for /hangfire
- Restrict access to users with company email addresses
- Inject a header such as CF-Access-User
In the .NET app, enforcement is minimal:
app.Map("/hangfire", subApp =>
{
    subApp.Use(async (ctx, next) =>
    {
        if (!ctx.Request.Headers.ContainsKey("CF-Access-User"))
        {
            ctx.Response.StatusCode = StatusCodes.Status401Unauthorized;
            return;
        }
        await next();
    });
});
No OAuth handlers, no cookie management, and no custom login UI are required. The edge handles identity, and the application simply trusts the result.
5 Resiliency & Failover: Designing for High Availability
One of the biggest advantages of modern edge platforms is that they allow you to design for regional failure without pushing complexity into your .NET codebase. When configured correctly, failover decisions happen before requests ever reach your application. Your backend simply runs in more than one region, and the edge decides where traffic should go.
The challenge is not enabling failover—it is making sure it behaves the way you expect during real incidents. That depends almost entirely on how health probes are configured and how routing policies react when something goes wrong.
5.1 Origin Groups & Health Probes
Health probes are the foundation of edge-based failover. If probes are too shallow, traffic is sent to unhealthy systems. If they are too aggressive, healthy regions get removed from rotation. Both scenarios cause outages that are hard to diagnose.
Although Azure Front Door, Cloudflare, and CloudFront implement probing differently, the underlying principle is the same: the edge continuously tests a specific endpoint and uses the result to decide whether a region should receive traffic.
5.1.1 Azure Front Door Health Probes
Azure Front Door evaluates origin health using a probe path that you define in the origin group. For .NET applications, it is important to distinguish between liveness and readiness.
- Liveness answers: “Is the process running?”
- Readiness answers: “Can this instance safely handle traffic right now?”
AFD should always probe readiness. Probing liveness alone can send traffic to instances that are still starting up, overloaded, or unable to reach dependencies such as databases.
A typical ASP.NET Core readiness endpoint:
```csharp
app.MapHealthChecks("/health/ready", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("ready")
});
```
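For the readiness predicate to select anything, checks must be registered with matching tags. A minimal registration sketch, assuming the `AspNetCore.HealthChecks.SqlServer` package and an illustrative connection string name:

```csharp
// Registration side: tag checks so liveness and readiness stay distinct.
builder.Services.AddHealthChecks()
    // Liveness: the process is up.
    .AddCheck("self", () => HealthCheckResult.Healthy(), tags: new[] { "live" })
    // Readiness: dependencies are reachable.
    .AddSqlServer(
        builder.Configuration.GetConnectionString("Db")!,
        name: "sql",
        tags: new[] { "ready" });
```

With this split, `/health/ready` fails as soon as the database is unreachable, while a separate liveness endpoint keeps reporting that the process itself is alive.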
When an App Service or Container App starts failing readiness checks, AFD removes it from rotation quickly—usually within seconds. Because this decision happens at the edge, users rarely see connection errors. Requests are simply routed to the next healthy region.
5.1.2 Cloudflare Load Balancing Health Checks
Cloudflare supports two health-check models:
- Active monitoring, where Cloudflare probes your origin from multiple locations
- Passive monitoring, where Cloudflare infers health from real traffic
For most .NET backends, active monitors are the safer choice. They behave consistently even when traffic volume is low and do not depend on user requests to detect failures.
A simple active monitor configuration might look like this:
```yaml
monitor:
  type: http
  path: "/health/ready"
  interval: 30
  timeout: 5
  retries: 2
  expected_codes: "200"
```
Because Cloudflare probes originate outside Azure’s backbone, you may see slightly higher probe latency than with AFD. In practice, this rarely affects failover accuracy for global SaaS workloads, but it is worth accounting for when tuning probe intervals and thresholds.
5.2 Multi-Region Failover Scenarios
Once health probes are reliable, the next decision is how traffic should be distributed across regions. Modern edge platforms support routing strategies that used to require DNS-based tools such as Azure Traffic Manager. The difference is speed: edge-based routing reacts in seconds, not minutes.
5.2.1 Active-Active: Latency-Based Routing
In an active-active configuration, multiple Azure regions serve traffic at the same time. The edge routes each request to the closest healthy region based on real-time latency measurements.
This pattern works well for .NET applications that are:
- Stateless
- Read-heavy
- Backed by distributed caches (for example, Redis Enterprise)
The benefits are clear:
- Lower latency for global users
- Natural load distribution
- Near-instant regional failover
The trade-off is complexity. Active-active systems require careful handling of writes, shared state, and data consistency. If your application allows writes in multiple regions, you must design for conflicts and eventual consistency.
Because of this, active-active is most common for APIs where reads dominate and writes are either rare or carefully controlled.
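Active-active instances must be stateless, which usually means externalizing shared state. A minimal registration sketch using `Microsoft.Extensions.Caching.StackExchangeRedis` (the connection string name is illustrative):

```csharp
// Shared cache so any region can serve any user's session or hot data.
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = builder.Configuration.GetConnectionString("Redis");
    options.InstanceName = "app:"; // key prefix to avoid collisions with other apps
});
```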
5.2.2 Active-Passive: Warm Standby
Active-passive routing sends all traffic to a primary region under normal conditions. A secondary region runs in standby mode with minimal capacity and takes over only when the primary becomes unhealthy.
Azure Front Door supports this model directly through origin priorities. When the primary origin fails health checks, traffic is shifted to the secondary automatically. Clients are unaware of the change because AFD uses Anycast IPs.
A simple two-region origin group sketched in Bicep (host names and the parent profile reference are illustrative):

```bicep
resource originGroup 'Microsoft.Cdn/profiles/originGroups@2023-05-01' = {
  parent: profile // existing Front Door profile resource
  name: 'primarySecondaryGroup'
  properties: {
    loadBalancingSettings: {
      sampleSize: 4
      successfulSamplesRequired: 3
    }
    healthProbeSettings: {
      probePath: '/health/ready'
      probeRequestType: 'GET'
      probeProtocol: 'Https'
      probeIntervalInSeconds: 30
    }
  }
}

// Origins are child resources; the lower priority value wins while healthy.
resource primaryOrigin 'Microsoft.Cdn/profiles/originGroups/origins@2023-05-01' = {
  parent: originGroup
  name: 'primary'
  properties: {
    hostName: 'app-westeurope.azurewebsites.net'
    priority: 1
  }
}

resource secondaryOrigin 'Microsoft.Cdn/profiles/originGroups/origins@2023-05-01' = {
  parent: originGroup
  name: 'secondary'
  properties: {
    hostName: 'app-eastus.azurewebsites.net'
    priority: 2
  }
}
```
This approach fits most enterprise .NET workloads. It keeps data flows simple, reduces the risk of conflicts, and still provides fast regional recovery.
5.3 The “Split-Brain” Problem
The hardest part of multi-region design is not routing—it is state. Edge platforms can move traffic almost instantly, but your data layer cannot always keep up.
In active-active setups, clients may send write requests to different regions within seconds of each other. This creates classic split-brain scenarios where:
- Concurrent writes conflict
- Time-based logic behaves unpredictably
- Distributed caches diverge
- Duplicate requests are processed more than once
The edge cannot solve this for you. These problems must be addressed in application and data design.
One practical mitigation for .NET APIs is enforcing idempotency on write operations. This reduces the impact of duplicate requests during failover events.
A simplified example using a distributed cache:
```csharp
app.Use(async (ctx, next) =>
{
    if (HttpMethods.IsPost(ctx.Request.Method) &&
        ctx.Request.Headers.TryGetValue("Idempotency-Key", out var key))
    {
        // IDistributedCache is registered at startup (e.g. Redis).
        var cache = ctx.RequestServices.GetRequiredService<IDistributedCache>();
        var cacheKey = $"idem:{key}";

        var exists = await cache.GetStringAsync(cacheKey);
        if (exists != null)
        {
            // Same key seen recently; reject the duplicate write.
            ctx.Response.StatusCode = StatusCodes.Status409Conflict;
            return;
        }

        // Note: get-then-set is not atomic; a production implementation
        // should use an atomic "set if absent" operation in the cache.
        await cache.SetStringAsync(cacheKey, "processed",
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
            });
    }

    await next();
});
```
Another option is using globally distributed databases such as Azure Cosmos DB with multi-region writes enabled. Even then, conflict resolution policies must be chosen carefully, and not all domains tolerate eventual consistency.
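If Cosmos DB is the data layer, the client can be made region-aware so each .NET instance talks to the nearest replica. A sketch assuming multi-region writes are already enabled on the account (the endpoint, credential variable, and region list are illustrative):

```csharp
// Prefers the local replica first; the SDK fails over down the list.
var cosmos = new CosmosClient(
    "https://myaccount.documents.azure.com:443/",
    tokenCredential, // e.g. a DefaultAzureCredential instance
    new CosmosClientOptions
    {
        ApplicationPreferredRegions = new List<string>
        {
            "West Europe",
            "East US"
        }
    });
```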
For many systems, the simplest and safest approach is a hybrid model: active-passive for write-heavy workloads and active-active only for read-optimized services. This aligns well with how edge platforms behave and keeps failure scenarios predictable.
6 Practical Implementation: The .NET Developer’s View
All the architectural decisions discussed so far only pay off if the application behaves correctly behind the edge. In practice, most production issues are not caused by missing features, but by small mismatches between how the edge forwards requests and how the .NET application interprets them.
This section focuses on the mechanics that matter day to day: trusting forwarded headers, handling security headers cleanly, managing edge infrastructure through code, and avoiding “works locally but breaks in production” scenarios. The examples assume your backend runs on App Service, Azure Container Apps, or AKS.
6.1 Middleware Configuration (The Code)
When traffic flows through an edge, the request that reaches Kestrel is no longer a direct client request. The edge terminates TLS, rewrites headers, and forwards metadata about the original connection. If your middleware is not configured to interpret that metadata correctly, logging, rate limiting, and security logic will all behave incorrectly.
6.1.1 ForwardedHeadersMiddleware
ForwardedHeadersMiddleware tells ASP.NET Core how to reconstruct the original request context. Without it, HttpContext.Connection.RemoteIpAddress reflects the edge POP instead of the client, and Request.Scheme may always appear as HTTP even when the client used HTTPS.
A typical configuration for .NET 9/10 looks like this:
```csharp
using Microsoft.AspNetCore.HttpOverrides;

var builder = WebApplication.CreateBuilder(args);

builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders =
        ForwardedHeaders.XForwardedFor |
        ForwardedHeaders.XForwardedProto |
        ForwardedHeaders.XForwardedHost;

    // Clearing both lists disables the built-in known-proxy check;
    // trust is enforced explicitly via proxy validation instead.
    options.KnownNetworks.Clear();
    options.KnownProxies.Clear();
});

var app = builder.Build();

app.UseForwardedHeaders();

app.MapGet("/debug/request", (HttpContext ctx) => new
{
    ClientIp = ctx.Connection.RemoteIpAddress?.ToString(),
    Scheme = ctx.Request.Scheme,
    Host = ctx.Request.Host.Value
});

app.Run();
```
This configuration ensures your application sees the same request details the client experienced. Clearing KnownNetworks and KnownProxies is intentional; it avoids hardcoding proxy IPs and lets you control trust explicitly.
6.1.2 Trusting Proxy Ranges
Forwarded headers should only be trusted if they come from a known edge provider. Otherwise, a client could spoof X-Forwarded-For and bypass IP-based controls.
Most teams solve this by validating the immediate proxy IP against the provider’s published ranges. Instead of relying on framework internals, many prefer explicit middleware so the behavior is easy to audit.
A simple example:
```csharp
// IPAddressRange comes from the "IPAddressRange" NuGet package.
public sealed class TrustedProxyMiddleware
{
    private readonly RequestDelegate _next;
    private readonly IReadOnlyCollection<IPAddressRange> _trustedRanges;

    public TrustedProxyMiddleware(
        RequestDelegate next,
        IEnumerable<string> cidrRanges)
    {
        _next = next;
        _trustedRanges = cidrRanges
            .Select(IPAddressRange.Parse)
            .ToArray();
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // RemoteIpAddress is the immediate peer, i.e. the edge POP.
        var proxyIp = context.Connection.RemoteIpAddress;

        if (proxyIp == null ||
            !_trustedRanges.Any(r => r.Contains(proxyIp)))
        {
            context.Response.StatusCode = StatusCodes.Status403Forbidden;
            return;
        }

        await _next(context);
    }
}
```
Teams usually automate retrieval of CIDR ranges from:
- Cloudflare IP lists (`ips-v4`, `ips-v6`)
- Azure IP range JSON for Front Door
- AWS IP range feed for CloudFront
This automation keeps trust configuration current without manual intervention.
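A background refresher is one way to keep those lists current. A sketch that pulls Cloudflare's published IPv4 list from its well-known URL (`ITrustedRangeStore` is a hypothetical abstraction over the middleware's range set):

```csharp
// Periodically refreshes trusted ranges from the provider's published list.
public sealed class ProxyRangeRefresher(
    IHttpClientFactory httpFactory,
    ITrustedRangeStore store) : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        var client = httpFactory.CreateClient();
        while (!ct.IsCancellationRequested)
        {
            // One CIDR per line, e.g. "173.245.48.0/20".
            var body = await client.GetStringAsync(
                "https://www.cloudflare.com/ips-v4", ct);

            store.Update(body.Split('\n',
                StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries));

            await Task.Delay(TimeSpan.FromHours(12), ct);
        }
    }
}
```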
6.2 Security Headers
Security headers are another area where edge and application responsibilities often overlap. If both layers inject the same headers, browsers may receive duplicates or conflicting values, leading to unpredictable behavior.
As a rule, decide where each header is owned:
- Edge: HSTS, global CSP, cross-site protections
- Backend: API-specific headers or dynamic CSP additions
A minimal backend approach:
```csharp
app.Use(async (ctx, next) =>
{
    ctx.Response.Headers["X-Frame-Options"] = "DENY";
    ctx.Response.Headers["X-Content-Type-Options"] = "nosniff";
    ctx.Response.Headers["Referrer-Policy"] =
        "strict-origin-when-cross-origin";

    // Only add a CSP if the edge has not already set one.
    if (!ctx.Response.Headers.ContainsKey("Content-Security-Policy"))
    {
        ctx.Response.Headers["Content-Security-Policy"] =
            "default-src 'self'";
    }

    await next();
});
```
If your team prefers a library-based approach, NWebSec integrates cleanly:
```csharp
app.UseCsp(o => o.DefaultSources(s => s.Self()));
app.UseXfo(o => o.Deny());
app.UseXContentTypeOptions();
```
Many organizations ultimately centralize security headers at the edge so behavior is consistent across environments. In that case, the backend should add only what cannot be expressed statically.
6.3 Infrastructure as Code (IaC) Examples
Edge configuration changes just as frequently as application code. Managing those changes manually does not scale. Infrastructure as Code ensures routing rules, origins, and security settings evolve predictably alongside the application.
6.3.1 Bicep: Azure Front Door Premium
A typical Azure Front Door Premium deployment includes:
- A global profile
- One or more endpoints
- Origin groups with health probes
- Private Link–backed origins
- Routes with protocol and redirect rules
A minimal but realistic Bicep sketch (resource names and the App Service reference are illustrative):

```bicep
param profileName string = 'afd-profile'
param endpointName string = 'global-endpoint'

resource profile 'Microsoft.Cdn/profiles@2023-05-01' = {
  name: profileName
  location: 'global'
  sku: { name: 'Premium_AzureFrontDoor' }
}

resource endpoint 'Microsoft.Cdn/profiles/afdEndpoints@2023-05-01' = {
  parent: profile
  name: endpointName
  location: 'global'
}

resource originGroup 'Microsoft.Cdn/profiles/originGroups@2023-05-01' = {
  parent: profile
  name: 'primary'
  properties: {
    loadBalancingSettings: {
      sampleSize: 4
      successfulSamplesRequired: 3
    }
    healthProbeSettings: {
      probePath: '/health/ready'
      probeRequestType: 'GET'
      probeProtocol: 'Https'
      probeIntervalInSeconds: 30
    }
  }
}

resource origin 'Microsoft.Cdn/profiles/originGroups/origins@2023-05-01' = {
  parent: originGroup
  name: 'app'
  properties: {
    hostName: 'myapp.azurewebsites.net'
    // Private Link keeps the origin off the public internet.
    sharedPrivateLinkResource: {
      privateLink: {
        id: resourceId('Microsoft.Web/sites', 'myapp')
      }
      groupId: 'sites'
      privateLinkLocation: 'westeurope'
      requestMessage: 'AFD access'
    }
  }
}

resource route 'Microsoft.Cdn/profiles/afdEndpoints/routes@2023-05-01' = {
  parent: endpoint
  name: 'default'
  properties: {
    originGroup: { id: originGroup.id }
    supportedProtocols: ['Https']
    httpsRedirect: 'Enabled'
    patternsToMatch: ['/*']
    linkToDefaultDomain: 'Enabled'
  }
}
```
This pattern mirrors how most production AFD deployments are structured and works equally well for App Service, ACA, or AKS.
6.3.2 Terraform: Cloudflare Configuration
Terraform is the dominant choice for Cloudflare because it cleanly models DNS, rules, and Zero Trust configuration in one place.
A practical example:
```hcl
# Cloudflare provider v4 syntax; the account ID variable is illustrative.
resource "cloudflare_zone" "app" {
  account_id = var.cloudflare_account_id
  zone       = "example.com"
}

resource "cloudflare_record" "api" {
  zone_id = cloudflare_zone.app.id
  name    = "api"
  type    = "CNAME"
  value   = "app.azurewebsites.net"
  proxied = true
}

# Transform rules are modeled as a ruleset in the
# http_request_late_transform phase.
resource "cloudflare_ruleset" "strip_headers" {
  zone_id = cloudflare_zone.app.id
  name    = "strip-legacy-headers"
  kind    = "zone"
  phase   = "http_request_late_transform"

  rules {
    action = "rewrite"
    action_parameters {
      headers {
        name      = "X-Legacy-Token"
        operation = "remove"
      }
    }
    expression  = "starts_with(http.request.uri.path, \"/api\")"
    description = "Strip legacy header before it reaches the origin"
    enabled     = true
  }
}
```
This keeps edge behavior declarative and version-controlled, which becomes critical as rules grow more complex.
6.4 Local Development Workflows
Many edge-related bugs only appear after deployment because developers test against Kestrel directly. Headers, redirects, and routing rules are skipped entirely in local runs.
A simple improvement is introducing a local reverse proxy that behaves like the edge. YARP works well for this.
Example `appsettings.json`:

```json
{
  "ReverseProxy": {
    "Routes": {
      "all": {
        "ClusterId": "backend",
        "Match": { "Path": "{**catch-all}" }
      }
    },
    "Clusters": {
      "backend": {
        "Destinations": {
          "local": { "Address": "https://localhost:5001/" }
        }
      }
    }
  }
}
```
Startup configuration:

```csharp
builder.Services
    .AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));

var app = builder.Build();
app.MapReverseProxy();
```
Developers can then add transforms to simulate:
- `X-Forwarded-*` headers
- HTTP-to-HTTPS redirects
- Header removal or rewriting
- Path normalization
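Such simulations are only a few lines of YARP transform code. A sketch (the removed header name is illustrative):

```csharp
builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"))
    .AddTransforms(ctx =>
    {
        // Mimic what the edge does before requests reach the origin.
        ctx.AddRequestTransform(t =>
        {
            t.ProxyRequest.Headers.Remove("X-Legacy-Token");
            t.ProxyRequest.Headers.TryAddWithoutValidation(
                "X-Forwarded-Proto", "https");
            return ValueTask.CompletedTask;
        });
    });
```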
This catches edge-related issues early and shortens the feedback loop significantly.
7 Pricing Models & Cost Optimization (TCO)
Pricing is often the final decision point when choosing an edge provider, but it should not be the first. Cost only makes sense once you understand how traffic flows, where it exits Azure, and which features you actually need. Most surprises come from assumptions that worked for static websites but break down for high-volume .NET APIs.
This section focuses on total cost of ownership, not just list prices. The goal is to understand which costs scale with traffic, which are fixed, and where edge choices amplify or reduce Azure spend.
7.1 The Hidden Costs of Egress
For most global SaaS platforms, data egress is the single largest variable cost. It is also the most commonly underestimated.
7.1.1 Azure to Azure Front Door
When traffic flows from Azure resources to Azure Front Door, it stays entirely on the Microsoft network.
In practice, this means:
- No Azure egress charges
- Traffic never leaves the Microsoft backbone
- You are not billed per GB for origin-to-edge transfer
For .NET applications running on App Service, ACA, or AKS, this is a major advantage. It allows you to scale read-heavy APIs or serve large JSON payloads without worrying that backend traffic will silently drive up costs.
This single factor is often enough to justify Azure Front Door when the backend is fully Azure-native.
7.1.2 Azure to Cloudflare or Azure to AWS
When you front Azure workloads with Cloudflare or CloudFront, traffic leaves Azure before it reaches the edge.
That introduces several cost layers:
- Standard Azure egress fees
- Possible CDN ingestion or routing fees
- TLS termination and request-based charges at the edge
For high-throughput APIs, this adds up quickly.
A simple illustration:
- 50 TB per month of outbound traffic
- Azure egress (≈ $0.087 per GB): ~$4,350
- Edge provider costs: additional, plan-dependent
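The arithmetic above generalizes into a simple back-of-envelope model (the rate is illustrative, not list pricing):

```csharp
// Rough monthly egress estimate: TB -> GB -> USD.
static decimal EgressCostUsd(decimal terabytes, decimal usdPerGb = 0.087m)
    => terabytes * 1000m * usdPerGb;

// 50 TB/month at ~$0.087/GB comes out to roughly $4,350.
```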
Many teams accept this trade-off because Cloudflare offers strong Zero Trust capabilities or because the architecture spans multiple clouds. The key point is that these costs are structural, not accidental. They should be modeled early, not discovered after launch.
7.2 Base Fees vs. Usage-Based Pricing
The three platforms approach pricing differently, and those differences affect predictability as much as total cost.
7.2.1 Azure Front Door Premium
Azure Front Door Premium combines a fixed base cost with usage-based components.
Typical cost drivers include:
- Monthly base fee for the profile
- Per-request charges
- WAF policy and inspection costs
For organizations already operating under an Enterprise Agreement, this model is easy to budget and aligns well with other Azure services. Costs scale with usage, but there are fewer moving parts than with third-party edges.
7.2.2 Cloudflare Enterprise
Cloudflare Enterprise pricing is contract-based rather than purely metered.
Common characteristics:
- Custom bandwidth and request allowances
- Bundled pricing for WAF, bot management, Access, and Workers
- Discounts at high volume, with minimum commitments
At scale, Cloudflare can be very cost-efficient per GB. The trade-off is flexibility: you commit upfront and optimize within the contract rather than scaling purely on demand. This works well for mature platforms with predictable growth.
7.2.3 AWS CloudFront
CloudFront follows a strict pay-as-you-go model.
Key traits:
- Charges per GB transferred
- Charges per request
- Regional price differences
This model fits teams that want fine-grained control and minimal contractual commitment. It also maps well to workloads with steady, predictable traffic patterns. For spiky or rapidly growing APIs, costs can be harder to forecast without careful monitoring.
7.3 WAF and Advanced Feature Pricing
For many production systems, security features cost more than bandwidth.
Typical examples include:
- Azure Front Door managed rules charged per request
- Cloudflare Bot Management as a premium add-on
- Cloudflare Access or Turnstile billed per request or per user
- AWS WAF charged per rule and per million requests
A .NET API that handles tens or hundreds of millions of requests per month can easily spend more on inspection and bot mitigation than on raw data transfer. This is not a reason to avoid these features—but it is a reason to enable them selectively and measure their impact.
The most cost-effective approach is usually layered:
- Broad protection with managed rules
- Targeted advanced features only on sensitive endpoints
- Aggressive logging early, blocking once confident
7.4 Cost Modeling Example
Consider a typical global .NET SaaS application:
- 50 TB of outbound traffic per month
- 100 million HTTP requests
- Two Azure regions
- Mostly JSON APIs with some static assets
A simplified relative comparison looks like this (illustrative, not list pricing):
| Provider | Data Transfer | Requests | Security Features | Overall TCO |
|---|---|---|---|---|
| Azure Front Door Premium | $0 (internal) | Moderate | Medium | Lowest |
| Cloudflare | Azure egress applies | Low | High (Bot, Zero Trust) | Medium–High |
| CloudFront | Azure egress applies | Low–Medium | Medium | Medium |
In practice:
- Azure Front Door is usually the most cost-efficient choice when all origins are in Azure.
- Cloudflare becomes attractive when you need multi-cloud support, strong bot mitigation, or Zero Trust access.
- CloudFront fits teams with an existing AWS footprint or workloads dominated by media delivery.
The important takeaway is that edge pricing is architectural. Once traffic patterns are established, costs follow naturally. The goal is to choose the edge whose pricing model aligns with how your .NET application actually behaves in production.
8 Final Verdict: The Decision Matrix
By this point, the trade-offs should be clear. Azure Front Door, Cloudflare, and AWS CloudFront are all capable global edge platforms, but they solve different problems particularly well. The “best” choice is the one that aligns with your backend architecture, traffic patterns, security model, and cost constraints—not the one with the longest feature list.
This section distills everything covered so far into practical guidance you can use when making or justifying a decision.
8.1 When to Choose Azure Front Door
Azure Front Door is usually the right default when your .NET platform is fully Azure-native.
It fits best when:
- Your backends run primarily on App Service, Azure Container Apps, or AKS
- You need private, non-public origins using Private Link
- You want to avoid Azure egress costs entirely
- You prefer ARM or Bicep for infrastructure definitions
- You operate under an Enterprise Agreement and want consolidated billing
AFD works especially well for teams that value simplicity and predictability. Networking, security, and routing stay inside Azure’s control plane, and traffic remains on the Microsoft backbone. For many enterprise .NET workloads, this reduces both operational overhead and long-term cost.
If you do not have strong multi-cloud requirements, Azure Front Door is often the most straightforward and economical choice.
8.2 When to Choose Cloudflare
Cloudflare stands out when your architecture extends beyond Azure or when security and Zero Trust are primary drivers.
It is a strong fit when:
- You run workloads across multiple clouds or on-premises
- Bot mitigation and behavioral protection are critical
- You want to protect internal tools without modifying application code
- You need programmable edge logic through Workers
- Your team standardizes on Terraform rather than ARM
Cloudflare is often chosen not just as a CDN, but as a global security and connectivity layer. For .NET teams, this is particularly compelling when internal dashboards, admin APIs, or partner-facing endpoints need strong identity enforcement without custom authentication flows.
The trade-off is cost structure. Azure egress applies, and advanced features are typically premium, but for many organizations the security and flexibility justify the expense.
8.3 When to Choose AWS CloudFront
CloudFront is the natural choice when AWS is already a major part of your platform.
It makes sense when:
- Your organization has a heavy AWS footprint
- You rely on Lambda@Edge for request-time logic
- You deliver large volumes of video or media content
- You prefer strict pay-as-you-go pricing
- You need very granular control over caching behavior
For Azure-hosted .NET backends, CloudFront is rarely the first choice, but it can still work well in hybrid environments. Teams already operating CloudFront at scale often extend it to Azure origins to maintain a single global edge strategy.
The main considerations are origin security and Azure egress costs, both of which require deliberate design.
8.4 Summary Checklist for Architects
When deciding between edge providers, walk through the following questions in order. The answers usually point to a clear outcome.
- Do most backends run exclusively on Azure?
- Do you require private origins with no public exposure?
- Will Azure egress costs materially affect your budget?
- Are redirects, rewrites, or JWT validation better handled at the edge?
- Is Zero Trust access for internal tools a requirement?
- Do you need programmable compute at the edge?
- Is Terraform the dominant IaC tool in your organization?
- Does your workload include large media or streaming traffic?
- Do you require latency-based global routing by default?
- Are you prepared to automate IP trust and edge configuration updates?
If most answers point toward Azure-native concerns, Azure Front Door is usually the best fit. If identity, security, and multi-cloud flexibility dominate, Cloudflare often wins. If AWS is already central to your platform or media delivery is the core use case, CloudFront remains a strong option.
The key takeaway for 2026 is this: the edge is no longer an afterthought. It is a first-class part of your .NET architecture, and choosing the right one early makes everything else—performance, security, cost, and operations—easier to get right.