1 The Concurrency Landscape for .NET Architects
Every multi-user application faces the same fundamental question: what happens when two users modify the same data at the same time? If you don’t answer that question deliberately, your system answers it for you — and the answer is usually “last write wins,” which means data loss.
This article walks through the full spectrum of concurrency control in EF Core, from optimistic row versioning to pessimistic database locks, distributed locking patterns, and real-world scenarios like inventory management and booking systems. By the end, you will have a working mental model, production-ready code, and a decision framework for choosing the right strategy in your .NET applications.
The target audience here is senior developers, tech leads, and solution architects working with the .NET and SQL Server stack. We are looking at EF Core 9 and EF Core 10 capabilities, including improved interceptors and better support for complex raw SQL queries — both critical for pessimistic locking strategies.
1.1 The Cost of Inconsistency: Why “Last-In-Wins” is a Business Risk
Consider a straightforward scenario. Two customer service agents open the same order record. Agent A changes the shipping address. Agent B changes the order quantity. Both click Save. Without concurrency control, the second save overwrites the first, and the shipping address change is silently lost. Nobody gets an error. Nobody knows data was destroyed.
This isn’t a theoretical concern. In financial applications, a lost update on an account balance means money appears or disappears. In healthcare, overwritten patient records create liability. In e-commerce, inventory counts that don’t reflect reality lead to overselling and cancelled orders. The “last-in-wins” default behavior in most ORMs, including EF Core without explicit concurrency configuration, treats every save as authoritative. That works fine for single-user applications. It fails the moment you have concurrent access to shared mutable state — which is every production web application.
The business cost is not just incorrect data. It’s the erosion of trust in the system. When users notice that their changes vanish intermittently, they start building workarounds — refreshing obsessively, copying data to spreadsheets, or avoiding the system entirely. Fixing concurrency bugs after they reach production is significantly harder than designing concurrency control from the start.
Here is what the “last-in-wins” problem looks like at the database level. Two transactions read the same row, both modify it, and the second commit overwrites the first:
-- Transaction A (starts first)
BEGIN TRANSACTION;
SELECT Price FROM Products WHERE Id = 1; -- Returns 10.00
-- User A decides to set price to 15.00
-- Transaction B (starts second, before A commits)
BEGIN TRANSACTION;
SELECT Price FROM Products WHERE Id = 1; -- Also returns 10.00
-- User B decides to set price to 20.00
-- Transaction A commits
UPDATE Products SET Price = 15.00 WHERE Id = 1;
COMMIT;
-- Transaction B commits — overwrites A's change
UPDATE Products SET Price = 20.00 WHERE Id = 1;
COMMIT;
-- Price is 20.00. User A's change to 15.00 is lost silently.
Without a concurrency token in the WHERE clause, both updates succeed regardless of what happened in between. This is the fundamental problem that both optimistic and pessimistic concurrency control solve, through different mechanisms.
1.2 The CAP Theorem in the Context of Local and Distributed Transactions
The CAP theorem states that a distributed system can provide at most two of three guarantees: Consistency, Availability, and Partition Tolerance. For architects building on SQL Server and EF Core, this matters at two levels.
At the single-database level, you get strong consistency guarantees through ACID transactions. SQL Server’s lock manager, transaction isolation levels, and row versioning engine give you the tools to enforce serializable access when needed. The trade-off is throughput — stricter isolation means more blocking, fewer concurrent operations per second.
At the distributed level, things change fundamentally. When data spans multiple databases, microservices, or cloud regions, you can no longer rely on a single transaction coordinator. You must choose between strong consistency (reduced availability during partitions) and eventual consistency (temporary disagreements between services). For most .NET applications using a single SQL Server instance, the practical question is about choosing the right isolation level and concurrency strategy for each use case within a system that already provides strong consistency guarantees.
1.3 Performance Trade-offs: Throughput vs. Data Integrity
Every concurrency control mechanism has a performance cost. The question is where you want to pay it.
Optimistic concurrency control (OCC) assumes conflicts are rare. It lets all transactions proceed without blocking, checks for conflicts at commit time, and forces a retry when conflict is detected. When conflicts are rare (less than 5% of operations), this is efficient. When frequent, the retry overhead can exceed the cost of locking.
Pessimistic concurrency control (PCC) assumes conflicts are likely. It acquires locks before modifying data, preventing other transactions from reading or writing the locked rows. When contention is high, this prevents wasted work. When contention is low, you’re paying for locks you didn’t need.
| Factor | Optimistic (OCC) | Pessimistic (PCC) |
|---|---|---|
| Lock acquisition | None | Every operation |
| Conflict detection | At save time | Prevented by locks |
| Retry cost | High (re-read + merge) | None (blocked, not failed) |
| Throughput (low contention) | High | Moderate |
| Throughput (high contention) | Degrades (retries) | Stable (blocking) |
| Deadlock risk | None | Significant |
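The retry-cost row can be made concrete with a simple model. If each OCC attempt conflicts independently with probability p, the number of attempts per successful save follows a geometric distribution with mean 1/(1 − p). A minimal sketch (the independence assumption is ours; real contention tends to be burstier):

```csharp
using System;

static class OccCostModel
{
    // Expected attempts per successful save when each attempt conflicts
    // independently with probability p (mean of a geometric distribution).
    public static double ExpectedAttempts(double conflictProbability)
    {
        if (conflictProbability is < 0 or >= 1)
            throw new ArgumentOutOfRangeException(nameof(conflictProbability));
        return 1.0 / (1.0 - conflictProbability);
    }
}
```

At a 5% conflict rate this is about 1.05 attempts per save, barely noticeable; at 50% every save costs two attempts on average, and locking starts to win.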
1.4 Identifying High-Contention vs. Low-Contention Domains
Contention is the frequency at which multiple transactions target the same data simultaneously. Understanding your contention profile is the single most important input to your concurrency strategy.
Low-contention domains have large data sets where individual records are rarely accessed concurrently: user profile management, content management systems, configuration settings. Optimistic concurrency is almost always the right default.
High-contention domains have “hot” records targeted by many transactions simultaneously: inventory counters during flash sales, account balances, seat availability in booking systems. Optimistic concurrency leads to excessive retries here, and pessimistic strategies or atomic operations become necessary.
Medium-contention domains fall in between. A project management tool where team members occasionally edit the same task might see 1-10% conflict rates. These often benefit from a hybrid approach: optimistic by default, pessimistic for specific hot-path operations.
To identify your contention profile, measure it. Add logging around DbUpdateConcurrencyException handlers. Track the ratio of conflicts to successful saves per entity type. Conflict rates above 5-10% signal a candidate for pessimistic locking or architectural changes.
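As a starting point, the measurement can be as simple as a thread-safe counter pair surfaced to your metrics pipeline, one meter per entity type. A minimal sketch (the type and member names are ours):

```csharp
using System.Threading;

// Tracks successful saves vs. concurrency conflicts so the conflict
// ratio per entity type can be exported to telemetry.
public sealed class ConflictMeter
{
    private long _saves;
    private long _conflicts;

    public void RecordSave() => Interlocked.Increment(ref _saves);
    public void RecordConflict() => Interlocked.Increment(ref _conflicts);

    // Conflicts as a fraction of all save attempts (saves + conflicts).
    public double ConflictRate
    {
        get
        {
            long s = Interlocked.Read(ref _saves);
            long c = Interlocked.Read(ref _conflicts);
            long total = s + c;
            return total == 0 ? 0 : (double)c / total;
        }
    }
}
```

Call RecordConflict from your DbUpdateConcurrencyException handler and RecordSave after each successful SaveChanges.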
2 Mastering Optimistic Concurrency Control (OCC)
Optimistic concurrency is the default strategy for most EF Core applications, and for good reason. It requires no database locks during reads, scales well under low contention, and integrates naturally with the disconnected model of web applications.
2.1 The Mechanics of Change Tracking and Database Win-Detection
EF Core’s optimistic concurrency works through a concurrency token — a value stored with each row that changes every time the row is updated. When EF Core generates an UPDATE statement, it includes the original token value in the WHERE clause. If another transaction modified the row since it was read, the token won’t match, the UPDATE affects zero rows, and EF Core throws DbUpdateConcurrencyException.
The SQL EF Core generates for a concurrency-protected update (the exact shape varies by EF Core version; recent SQL Server providers use an OUTPUT clause, but the token check in the WHERE clause is the same):
UPDATE [Products]
SET [Price] = @p0
WHERE [Id] = @p1 AND [RowVersion] = @p2;
SELECT [RowVersion]
FROM [Products]
WHERE @@ROWCOUNT = 1 AND [Id] = @p1;
The SELECT reads back the new RowVersion so the entity’s tracked state stays current. If @@ROWCOUNT is 0, the update affected no rows, and a concurrency conflict is reported. No locks are held between the initial read and the update — both transactions proceed independently, and the conflict is detected only at commit time.
2.2 Implementation Patterns in EF Core
2.2.1 Using Shadow Properties for Concurrency Tokens
Shadow properties let you add a concurrency token without modifying your domain entity class:
public class Product
{
public int Id { get; set; }
public string Name { get; set; }
public decimal Price { get; set; }
public int StockQuantity { get; set; }
}
public class ProductConfiguration : IEntityTypeConfiguration<Product>
{
public void Configure(EntityTypeBuilder<Product> builder)
{
builder.Property<byte[]>("RowVersion")
.IsRowVersion()
.HasColumnName("RowVersion");
}
}
The RowVersion column exists in the database and participates in concurrency checks, but the Product class doesn’t expose it. Access the shadow property through the change tracker when needed:
var rowVersion = context.Entry(product).Property<byte[]>("RowVersion").CurrentValue;
2.2.2 IsConcurrencyToken vs. IsRowVersion (Timestamp)
IsConcurrencyToken() marks a property as a concurrency token but does not automatically manage its value — your application code or a database trigger must handle that:
builder.Property(p => p.LastModifiedDate).IsConcurrencyToken();
IsRowVersion() configures the property as both a concurrency token and a database-generated value. On SQL Server, this maps to the rowversion type, automatically incremented on every row update:
public class Product
{
public int Id { get; set; }
public string Name { get; set; }
public decimal Price { get; set; }
[Timestamp]
public byte[] RowVersion { get; set; }
}
The [Timestamp] data annotation is equivalent to .IsRowVersion() in the Fluent API. For SQL Server projects, IsRowVersion() with byte[] is the recommended approach — zero maintenance and protects the entire row automatically.
The migration generated by EF Core for a RowVersion property on SQL Server looks like this:
migrationBuilder.AddColumn<byte[]>(
name: "RowVersion",
table: "Products",
type: "rowversion",
rowVersion: true,
nullable: false);
SQL Server handles the rowversion column entirely — you never set it in application code, and it increments on every UPDATE to any column in the row.
2.2.3 Native SQL Server rowversion vs. Cross-Platform Guid Tokens
SQL Server’s native rowversion is a monotonically increasing 8-byte binary value with negligible overhead. But it’s SQL Server-specific. For cross-platform support, use a Guid concurrency stamp:
public class Product
{
public int Id { get; set; }
public string Name { get; set; }
public decimal Price { get; set; }
public Guid ConcurrencyStamp { get; set; } = Guid.NewGuid();
}
The property must also be configured as a concurrency token (builder.Property(p => p.ConcurrencyStamp).IsConcurrencyToken()), and you must regenerate the stamp before every save. A SaveChanges interceptor can automate the regeneration:
public class ConcurrencyStampInterceptor : SaveChangesInterceptor
{
public override ValueTask<InterceptionResult<int>> SavingChangesAsync(
DbContextEventData eventData,
InterceptionResult<int> result,
CancellationToken cancellationToken = default)
{
var entries = eventData.Context.ChangeTracker.Entries()
.Where(e => e.State == EntityState.Modified);
foreach (var entry in entries)
{
var prop = entry.Properties
.FirstOrDefault(p => p.Metadata.Name == "ConcurrencyStamp"
&& p.Metadata.ClrType == typeof(Guid));
if (prop != null)
prop.CurrentValue = Guid.NewGuid();
}
return ValueTask.FromResult(result);
}
}
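For the interceptor to run, it has to be registered with the context. Mirroring the registration pattern used later in this article:

```csharp
options.UseSqlServer(connectionString)
    .AddInterceptors(new ConcurrencyStampInterceptor());
```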
2.3 Handling DbUpdateConcurrencyException
2.3.1 Client-Wins vs. Database-Wins vs. Merge Logic
Database-Wins: Discard the user’s changes and reload current database values.
catch (DbUpdateConcurrencyException ex)
{
foreach (var entry in ex.Entries)
await entry.ReloadAsync();
// Retry with fresh data
}
Client-Wins: Force the user’s changes through by updating the original values to match the database.
catch (DbUpdateConcurrencyException ex)
{
foreach (var entry in ex.Entries)
{
var dbValues = await entry.GetDatabaseValuesAsync();
if (dbValues == null)
throw new InvalidOperationException("Entity was deleted.");
entry.OriginalValues.SetValues(dbValues);
}
await context.SaveChangesAsync(); // Now succeeds
}
Merge Logic: Compare original, current, and database values for each property. Auto-merge non-conflicting changes, surface true conflicts to the user.
catch (DbUpdateConcurrencyException ex)
{
var entry = ex.Entries.Single();
var original = entry.OriginalValues;
var current = entry.CurrentValues;
var database = await entry.GetDatabaseValuesAsync();
if (database == null)
    throw new InvalidOperationException("Entity was deleted.");
var conflicts = new List<string>();
foreach (var property in current.Properties)
{
if (property.Name == "RowVersion") continue;
bool userChanged = !Equals(original[property], current[property]);
bool dbChanged = !Equals(original[property], database[property]);
if (userChanged && dbChanged)
conflicts.Add(property.Name);
else if (dbChanged)
current[property] = database[property];
}
if (!conflicts.Any())
{
entry.OriginalValues.SetValues(database);
await context.SaveChangesAsync(); // Auto-merged
}
else
{
    // Surface the conflicting property names to the user for manual resolution.
}
}
2.3.2 Automated Retry Policies with Polly
Polly provides resilience policies that integrate well with EF Core concurrency handling:
public class ProductService
{
private readonly AppDbContext _context;
private readonly ResiliencePipeline _retryPipeline;
public ProductService(AppDbContext context)
{
_context = context;
_retryPipeline = new ResiliencePipelineBuilder()
.AddRetry(new RetryStrategyOptions
{
ShouldHandle = new PredicateBuilder()
.Handle<DbUpdateConcurrencyException>(),
MaxRetryAttempts = 3,
Delay = TimeSpan.FromMilliseconds(50),
BackoffType = DelayBackoffType.Exponential
})
.Build();
}
public async Task<Product> UpdatePriceWithRetry(int productId, decimal newPrice)
{
return await _retryPipeline.ExecuteAsync(async token =>
{
// Clear stale tracked state: identity resolution would otherwise return
// the cached instance with its old RowVersion, and the retry would fail again.
_context.ChangeTracker.Clear();
var product = await _context.Products
    .FirstAsync(p => p.Id == productId, token);
product.Price = newPrice;
await _context.SaveChangesAsync(token);
return product;
}, CancellationToken.None);
}
}
The entire read-modify-write cycle must be inside the retry delegate. Each retry must re-read the entity to get the current concurrency token. Retrying only SaveChangesAsync is a common mistake — it will fail again because the original token hasn’t been updated.
When using a scoped DbContext with dependency injection, make sure your retry logic handles change tracker state. After a DbUpdateConcurrencyException, the tracker still holds the stale original values, and because of identity resolution, simply re-querying returns the already-tracked instance with its old token. Either clear the change tracker (or call ReloadAsync on the entry) before re-reading within the same context, or use a fresh DbContext per attempt.
Here is the incorrect pattern that developers frequently use:
// INCORRECT — retries SaveChanges without refreshing the entity
var product = await context.Products.FirstAsync(p => p.Id == productId);
product.Price = newPrice;
for (int i = 0; i < 3; i++)
{
try
{
await context.SaveChangesAsync();
break;
}
catch (DbUpdateConcurrencyException)
{
// This will fail again — the original RowVersion hasn't changed
// in the change tracker. Must call entry.ReloadAsync() first.
}
}
The correct approach either reloads the entity on each retry or wraps the entire read-modify-write cycle in the retry block.
2.4 Scaling OCC: When Row Versioning Becomes a Bottleneck
Optimistic concurrency breaks down in three scenarios:
- Hot rows receiving dozens of concurrent updates per second cause unacceptable retry rates.
- Wide entities with frequent partial updates create false conflicts: when different users change different fields, a single RowVersion covering the entire row flags their non-overlapping changes as conflicting.
- Long-lived operations, where users load a form and submit hours later, make simple retries insufficient because the entire data context has changed.
When you see these patterns in telemetry, consider pessimistic concurrency for the affected operations — not a wholesale switch across the application.
A practical diagnostic query for identifying hot rows in SQL Server:
-- Identify the most contended rows by looking at wait statistics
SELECT
r.session_id,
r.wait_type,
r.wait_time,
r.blocking_session_id,
t.text AS query_text,
r.resource_description
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.wait_type LIKE 'LCK%'
ORDER BY r.wait_time DESC;
If specific rows consistently appear in lock waits, those are your candidates for either pessimistic locking with shorter transactions or a move to atomic SQL operations.
3 Implementing Pessimistic Concurrency Control (PCC)
Pessimistic concurrency prevents conflicts by acquiring locks on data before modifying it. This eliminates the retry-and-merge cycle of optimistic concurrency but introduces deadlocks, reduced throughput, and increased complexity.
3.1 The “Missing” Feature: Why EF Core Doesn’t Have a Native .Lock()
If you come from a JPA/Hibernate background, you might expect EF Core to have something like context.Products.WithLock(LockMode.Pessimistic). It doesn’t. As of EF Core 10, there is no built-in LINQ support for pessimistic locking or table hints. The EF Core team’s position (GitHub issue #26042) is that locking semantics are highly database-specific. SQL Server uses WITH (UPDLOCK, ROWLOCK). PostgreSQL uses SELECT ... FOR UPDATE. MySQL uses SELECT ... FOR UPDATE with different semantics than PostgreSQL. Oracle has SELECT ... FOR UPDATE NOWAIT. A single abstraction would be either too leaky (exposing provider-specific behavior) or too limited (supporting only the lowest common denominator).
The practical consequence: pessimistic locking in EF Core requires raw SQL. This is not as limiting as it sounds — the raw SQL integrates with EF Core’s change tracking, transactions, and connection management. But it does mean you need to understand the locking syntax for your specific database provider.
3.2 Bypassing the Abstraction: Raw SQL and SELECT … FOR UPDATE
For SQL Server, use SELECT ... WITH (UPDLOCK, ROWLOCK):
public async Task UpdateStockWithPessimisticLock(int productId, int quantityToDeduct)
{
using var transaction = await _context.Database
.BeginTransactionAsync(IsolationLevel.ReadCommitted);
try
{
var product = await _context.Products
.FromSqlInterpolated(
$@"SELECT * FROM Products WITH (UPDLOCK, ROWLOCK)
WHERE Id = {productId}")
.FirstAsync();
if (product.StockQuantity < quantityToDeduct)
throw new InvalidOperationException("Insufficient stock.");
product.StockQuantity -= quantityToDeduct;
await _context.SaveChangesAsync();
await transaction.CommitAsync();
}
catch
{
await transaction.RollbackAsync();
throw;
}
}
For PostgreSQL, use SELECT ... FOR UPDATE:
var product = await _context.Products
.FromSqlInterpolated(
$"SELECT * FROM \"Products\" WHERE \"Id\" = {productId} FOR UPDATE")
.FirstOrDefaultAsync();
PostgreSQL also supports FOR UPDATE NOWAIT (append NOWAIT to the query above), which fails immediately instead of blocking if the row is already locked, useful for APIs that need fast failure. Npgsql surfaces the failed lock acquisition as a PostgresException with SQLSTATE 55P03 (lock_not_available):
catch (PostgresException ex) when (ex.SqlState == "55P03")
{
throw new ResourceLockedException("Product is currently being modified.");
}
3.3 Transaction Isolation Levels: Read Committed vs. Repeatable Read vs. Serializable
| Isolation Level | Dirty Reads | Non-Repeatable Reads | Phantom Reads | Lock Duration |
|---|---|---|---|---|
| Read Committed | No | Yes | Yes | Short (per statement) |
| Repeatable Read | No | No | Yes | Transaction duration |
| Serializable | No | No | No | Transaction duration + range |
For most EF Core applications, Read Committed with explicit row-level locks (UPDLOCK, ROWLOCK) provides the best balance. You lock exactly the rows you need, for exactly as long as you need them. Use Serializable only for specific operations that require range-based protection, like booking overlap checks.
3.4 Managing Database Deadlocks
3.4.1 Lock Escalation: From Row to Page to Table
When a transaction accumulates more than approximately 5,000 row locks on a single table, SQL Server may escalate to a table lock. This blocks every other transaction accessing the table. Monitor lock escalation:
SELECT object_name(resource_associated_entity_id) AS TableName,
request_mode, request_type, request_status, resource_type
FROM sys.dm_tran_locks
WHERE resource_type IN ('OBJECT', 'PAGE', 'KEY', 'RID')
ORDER BY resource_type;
Mitigate by keeping transactions focused on small batches. If necessary, disable escalation on specific tables:
ALTER TABLE Products SET (LOCK_ESCALATION = DISABLE);
3.4.2 Strategy: Keeping Transactions Short and Focused
- Do all non-database work outside the transaction. Don’t make HTTP calls or send emails inside a locked transaction.
- Lock rows in consistent order. Always lock ascending by primary key to prevent circular waits.
- Set a lock timeout. Note that SET LOCK_TIMEOUT is session-scoped and connection pooling resets session options, so issue it at the start of each transaction rather than once at startup:
await _context.Database.ExecuteSqlRawAsync("SET LOCK_TIMEOUT 5000");
- Handle deadlocks with retry and randomized backoff (DoWorkAsync stands in for the transactional operation):
for (var attempt = 0; attempt <= maxRetries; attempt++)
{
    try { await DoWorkAsync(); break; }
    catch (SqlException ex) when (ex.Number == 1205) // Chosen as deadlock victim
    {
        if (attempt == maxRetries) throw;
        var delay = TimeSpan.FromMilliseconds(Random.Shared.Next(50, 200) * (attempt + 1));
        await Task.Delay(delay);
    }
}
3.5 Using EF Core Interceptors to Automate Locking Hints
Writing raw SQL for every pessimistic read is tedious. EF Core interceptors let you inject locking hints automatically — a pattern that benefits from EF Core 9/10’s improved interceptor pipeline:
public class PessimisticLockInterceptor : DbCommandInterceptor
{
public override ValueTask<InterceptionResult<DbDataReader>> ReaderExecutingAsync(
DbCommand command, CommandEventData eventData,
InterceptionResult<DbDataReader> result,
CancellationToken cancellationToken = default)
{
if (command.CommandText.Contains("-- PESSIMISTIC_LOCK"))
{
    // Simplified: rewrites only the Products table. A production version
    // would rewrite whatever FROM clause the tagged query targets.
    command.CommandText = command.CommandText
        .Replace("-- PESSIMISTIC_LOCK", string.Empty)
        .Replace("FROM [Products]", "FROM [Products] WITH (UPDLOCK, ROWLOCK)");
}
return ValueTask.FromResult(result);
}
}
Use it with EF Core’s TagWith method:
var product = await _context.Products
.TagWith("-- PESSIMISTIC_LOCK")
.FirstAsync(p => p.Id == productId);
Register the interceptor during configuration:
options.UseSqlServer(connectionString)
.AddInterceptors(new PessimisticLockInterceptor());
4 Practical Scenario 1: Inventory Management and High-Frequency Updates
4.1 The “Thundering Herd” Problem in Flash Sales
When a flash sale begins, hundreds of requests hit the same product simultaneously. With optimistic concurrency, all transactions read the same StockQuantity, attempt to decrement, and most fail. Retry storms generate enormous database load. With naive pessimistic locking, all requests serialize on the same row lock — correct but slow. Neither approach is ideal. The right strategy is to avoid row-level contention entirely using atomic SQL operations.
4.2 Strategy: Atomic Decrement via Raw SQL within EF Core
Instead of read-modify-write, use a single SQL statement that atomically checks and decrements:
public async Task<StockReservationResult> ReserveStock(int productId, int quantity)
{
var rowsAffected = await _context.Database.ExecuteSqlInterpolatedAsync(
$@"UPDATE Products
SET StockQuantity = StockQuantity - {quantity}
WHERE Id = {productId} AND StockQuantity >= {quantity}");
if (rowsAffected == 0)
{
var currentStock = await _context.Products
.Where(p => p.Id == productId)
.Select(p => p.StockQuantity)
.FirstOrDefaultAsync();
return new StockReservationResult
{
Success = false,
AvailableStock = currentStock,
Message = $"Insufficient stock. Available: {currentStock}"
};
}
return new StockReservationResult { Success = true };
}
A single SQL UPDATE is atomic — SQL Server acquires and releases the row lock within the statement. The WHERE StockQuantity >= {quantity} clause guards against overselling. No concurrency tokens, no locks held across round trips, no retries.
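To see why the single guarded statement cannot oversell, here is an in-process analogue (purely illustrative; the names are ours, and in production SQL Server does this work in one statement): a compare-and-swap loop that only decrements when the guard holds, just as the WHERE clause only matches when stock suffices:

```csharp
using System.Threading;

// In-process illustration of the guarded-decrement guarantee:
// the decrement succeeds only if the stock check and the write
// happen atomically, so concurrent callers can never oversell.
public sealed class StockCounter
{
    private int _stock;

    public StockCounter(int initial) => _stock = initial;

    public int Stock => Volatile.Read(ref _stock);

    public bool TryReserve(int quantity)
    {
        while (true)
        {
            int current = Volatile.Read(ref _stock);
            if (current < quantity)
                return false; // analogous to WHERE StockQuantity >= @qty matching no row
            if (Interlocked.CompareExchange(ref _stock, current - quantity, current) == current)
                return true;  // analogous to the atomic UPDATE succeeding
        }
    }
}
```

Running thousands of concurrent TryReserve calls against 100 units yields exactly 100 successes, mirroring the benchmark result in Section 4.5.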
4.3 Implementing a “Soft Lock” with Expiration in the Domain Schema
Sometimes you need to “hold” inventory while a user completes checkout without permanently decrementing stock:
public class StockReservation
{
public int Id { get; set; }
public int ProductId { get; set; }
public int Quantity { get; set; }
public string SessionId { get; set; }
public DateTime ExpiresAt { get; set; }
public ReservationStatus Status { get; set; }
}
public async Task<StockReservation> TryReserve(int productId, int quantity, string sessionId)
{
using var transaction = await _context.Database
.BeginTransactionAsync(IsolationLevel.ReadCommitted);
try
{
var product = await _context.Products
.FromSqlInterpolated(
$"SELECT * FROM Products WITH (UPDLOCK, ROWLOCK) WHERE Id = {productId}")
.FirstAsync();
var reservedQty = await _context.StockReservations
.Where(r => r.ProductId == productId
&& r.Status == ReservationStatus.Active
&& r.ExpiresAt > DateTime.UtcNow)
.SumAsync(r => r.Quantity);
if (product.StockQuantity - reservedQty < quantity)
{
await transaction.RollbackAsync();
return null;
}
var reservation = new StockReservation
{
ProductId = productId, Quantity = quantity,
SessionId = sessionId,
ExpiresAt = DateTime.UtcNow.AddMinutes(15),
Status = ReservationStatus.Active
};
_context.StockReservations.Add(reservation);
await _context.SaveChangesAsync();
await transaction.CommitAsync();
return reservation;
}
catch { await transaction.RollbackAsync(); throw; }
}
A background job cleans up expired reservations periodically:
public class ReservationCleanupJob : BackgroundService
{
private readonly IServiceScopeFactory _scopeFactory;
public ReservationCleanupJob(IServiceScopeFactory scopeFactory)
{
_scopeFactory = scopeFactory;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
using var scope = _scopeFactory.CreateScope();
var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();
var expired = await context.StockReservations
.Where(r => r.Status == ReservationStatus.Active
&& r.ExpiresAt <= DateTime.UtcNow)
.ToListAsync(stoppingToken);
foreach (var reservation in expired)
reservation.Status = ReservationStatus.Expired;
await context.SaveChangesAsync(stoppingToken);
await Task.Delay(TimeSpan.FromMinutes(1), stoppingToken);
}
}
}
This pattern separates the “hold” (reservation) from the “commit” (stock deduction), giving users time to complete checkout while still protecting against overselling.
4.4 Using MediatR Pipelines to Synchronize Concurrent Commands
MediatR pipeline behaviors can serialize access to specific resources at the application level:
public class ConcurrencyBehavior<TRequest, TResponse>
: IPipelineBehavior<TRequest, TResponse>
where TRequest : IRequest<TResponse>
{
// Note: semaphores are never evicted from this dictionary; in a long-running
// process with many distinct resource keys, bound or periodically prune it.
private static readonly ConcurrentDictionary<string, SemaphoreSlim> Locks = new();
public async Task<TResponse> Handle(
TRequest request, RequestHandlerDelegate<TResponse> next,
CancellationToken cancellationToken)
{
if (request is not IResourceLocked resourceLocked)
return await next();
var semaphore = Locks.GetOrAdd(resourceLocked.ResourceKey, _ => new SemaphoreSlim(1, 1));
var acquired = await semaphore.WaitAsync(TimeSpan.FromSeconds(10), cancellationToken);
if (!acquired)
throw new TimeoutException($"Could not acquire lock: {resourceLocked.ResourceKey}");
try { return await next(); }
finally { semaphore.Release(); }
}
}
public interface IResourceLocked { string ResourceKey { get; } }
public class DeductStockCommand : IRequest<StockReservationResult>, IResourceLocked
{
public int ProductId { get; set; }
public int Quantity { get; set; }
public string ResourceKey => $"product-stock-{ProductId}";
}
This works for single-instance deployments. For load-balanced environments, you need distributed locks (Section 7.2).
4.5 Benchmarking PCC vs. OCC in a 10,000 Transactions-Per-Second Environment
Simulated flash sale: 100 units of stock, 10,000 concurrent purchase attempts, SQL Server on 8 cores.
| Strategy | Successful Orders | Avg Latency (ms) | p99 Latency (ms) | DB CPU% |
|---|---|---|---|---|
| OCC with retries (max 5) | 100 | 340 | 2,100 | 85% |
| PCC with row lock | 100 | 180 | 950 | 60% |
| Atomic SQL UPDATE | 100 | 12 | 45 | 25% |
| OCC without retries | 1-3 | 8 | 25 | 15% |
Key observations from the benchmark:
- OCC with retries produces correct results but at high cost. Retry storms create significant database load, and tail latency is poor because some transactions retry 4-5 times.
- PCC with row lock is better because transactions queue up orderly rather than retrying blindly. But serial execution limits throughput.
- Atomic SQL UPDATE is the clear winner for this specific pattern. No concurrency tokens, no locks held across round trips, no retries. The database handles everything in a single statement.
- OCC without retries is fast but incorrect — only the first few transactions succeed, and the rest fail permanently.
The takeaway: for hot-row update patterns, bypass the ORM’s change-tracking cycle and use atomic SQL operations. Reserve OCC for low-contention entities and PCC for read-then-write consistency requirements.
5 Practical Scenario 2: Booking Systems and Collaborative Editing
5.1 Booking Systems: Preventing Double-Booking without Locking the Whole Table
5.1.1 Range-Based Locking Strategies
The core challenge: guarantee a time slot is allocated to exactly one customer. A database-level solution uses Serializable isolation with range locks:
public async Task<BookingResult> CreateBooking(
int roomId, DateTime startTime, DateTime endTime, string userId)
{
using var transaction = await _context.Database
.BeginTransactionAsync(IsolationLevel.Serializable);
try
{
var conflicting = await _context.RoomBookings
.AnyAsync(b => b.RoomId == roomId
&& b.StartTime < endTime && b.EndTime > startTime);
if (conflicting)
{
await transaction.RollbackAsync();
return BookingResult.Conflict("Time slot overlaps an existing booking.");
}
_context.RoomBookings.Add(new RoomBooking
{
RoomId = roomId, StartTime = startTime,
EndTime = endTime, BookedBy = userId
});
await _context.SaveChangesAsync();
await transaction.CommitAsync();
return BookingResult.Success();
}
catch { await transaction.RollbackAsync(); throw; }
}
Serializable isolation ensures that the overlap check and the insert are atomic: no concurrent transaction can insert a conflicting booking in between. For a more targeted approach, take an update lock on the specific room row inside a Read Committed transaction, so that bookings serialize per room instead of relying on range locks:
await _context.Database.ExecuteSqlInterpolatedAsync(
$"SELECT Id FROM Rooms WITH (UPDLOCK) WHERE Id = {roomId}");
5.1.2 Validating Availability: The “Check-Then-Act” Anti-Pattern
The most common source of booking race conditions is checking availability and inserting in separate, unprotected steps. Without appropriate transaction isolation, another transaction can insert between your check and your insert. The fix is always one of: Serializable isolation, explicit locks, or a unique constraint enforced by the database.
For systems with fixed, discrete time slots, a unique index prevents duplicate bookings at the database level (it cannot catch overlapping arbitrary ranges, though; those still need the overlap check above):
builder.HasIndex(b => new { b.RoomId, b.StartTime }).IsUnique();
5.2 Collaborative Editing: Beyond Simple Versioning
5.2.1 Introduction to Operational Transformation (OT) and CRDTs
OT transforms operations against concurrent changes — if user A inserts at position 5 and user B inserts at position 3, B’s operation shifts A’s position. Google Docs uses OT. CRDTs are data structures that guarantee convergence regardless of operation order. Figma uses CRDTs. For most .NET applications, full OT or CRDT is overkill. If your needs are limited to “detect conflicts and let users resolve them,” EF Core’s optimistic concurrency with field-level merge logic is sufficient.
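The position-shifting rule described above is the heart of OT and fits in a few lines. A minimal sketch covering concurrent inserts only (real OT must also handle deletes, position ties, and intent preservation):

```csharp
public static class Ot
{
    // Adjusts a local insert position against a concurrent remote insert:
    // if the remote insert landed at or before our position, our target
    // text shifted right by the remote insert's length.
    public static int TransformInsert(int localPos, int remotePos, int remoteLength) =>
        remotePos <= localPos ? localPos + remoteLength : localPos;
}
```

With user A inserting one character at position 5 and user B at position 3 (the article's example), A's insert transforms to position 6 while B's stays at 3, so both replicas converge.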
5.2.2 Handling JSON-Based Document Conflicts in EF Core/PostgreSQL
For semi-structured JSON data, implement a path-based merge that compares original, user, and database versions:
public class JsonMergeService
{
public JsonMergeResult Merge(
JsonDocument original, JsonDocument userVersion, JsonDocument databaseVersion)
{
var origDict = Flatten(original);
var userDict = Flatten(userVersion);
var dbDict = Flatten(databaseVersion);
var conflicts = new Dictionary<string, JsonConflict>();
var merged = new Dictionary<string, JsonElement>(dbDict);
foreach (var (path, userValue) in userDict)
{
origDict.TryGetValue(path, out var origValue);
dbDict.TryGetValue(path, out var dbValue);
bool userChanged = origValue.ToString() != userValue.ToString();
bool dbChanged = dbValue.ToString() != origValue.ToString();
if (userChanged && dbChanged)
conflicts[path] = new JsonConflict { Path = path };
else if (userChanged)
merged[path] = userValue;
}
return conflicts.Any()
? JsonMergeResult.WithConflicts(conflicts)
: JsonMergeResult.Merged(Unflatten(merged));
}
}
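The Flatten helper above is assumed rather than shown. A minimal sketch that walks a JsonDocument and produces dot-separated path-to-leaf-value pairs might look like this (arrays are treated as single leaf values here, a deliberate simplification):

```csharp
private static Dictionary<string, JsonElement> Flatten(JsonDocument doc)
{
    var result = new Dictionary<string, JsonElement>();

    void Walk(JsonElement element, string path)
    {
        if (element.ValueKind == JsonValueKind.Object)
        {
            foreach (var prop in element.EnumerateObject())
                Walk(prop.Value, path.Length == 0 ? prop.Name : $"{path}.{prop.Name}");
        }
        else
        {
            // Leaf: string, number, bool, null, or an array treated as a unit.
            // Clone() detaches the element from the parent document's lifetime.
            result[path] = element.Clone();
        }
    }

    Walk(doc.RootElement, "");
    return result;
}
```

Unflatten is the inverse: rebuild nested objects by splitting each path on dots.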
6 Advanced Conflict Resolution and UI/UX Patterns
6.1 Designing a Conflict-Aware API (HTTP 409 Conflict)
When a concurrency conflict occurs, respond with 409 Conflict including both the user’s intended changes and the current database state:
[HttpPut("{id}")]
public async Task<IActionResult> UpdateProduct(int id, [FromBody] ProductUpdateRequest request)
{
var product = await _context.Products.FindAsync(id);
if (product == null) return NotFound();
if (!product.RowVersion.SequenceEqual(request.ExpectedVersion))
{
return Conflict(new
{
Message = "Modified by another user.",
YourChanges = new { request.Name, request.Price,
Version = Convert.ToBase64String(request.ExpectedVersion) },
CurrentValues = new { product.Name, product.Price,
Version = Convert.ToBase64String(product.RowVersion) },
ConflictingFields = GetConflictingFields(product, request)
});
}
if (request.Name != null) product.Name = request.Name;
if (request.Price.HasValue) product.Price = request.Price.Value;
try
{
await _context.SaveChangesAsync();
return Ok(product);
}
catch (DbUpdateConcurrencyException)
{
return Conflict(new { Message = "Concurrent modification detected." });
}
}
6.2 The “Side-by-Side” Diff UI: Merging Partial Entity Updates
The API endpoint for conflict resolution accepts the user’s per-field decisions:
[HttpPut("{id}/resolve")]
public async Task<IActionResult> ResolveConflict(int id, [FromBody] ConflictResolutionRequest request)
{
var product = await _context.Products.FindAsync(id);
if (product == null) return NotFound();
foreach (var resolution in request.Resolutions)
{
switch (resolution.Field)
{
case "Name": product.Name = resolution.ChosenValue; break;
case "Price": product.Price = decimal.Parse(resolution.ChosenValue, CultureInfo.InvariantCulture); break;
}
}
var databaseValues = await _context.Entry(product).GetDatabaseValuesAsync();
if (databaseValues == null) return NotFound(); // entity was deleted in the meantime
// Align original values with the database so this save passes the concurrency check.
_context.Entry(product).OriginalValues.SetValues(databaseValues);
await _context.SaveChangesAsync();
return Ok(product);
}
6.3 Server-Side Auto-Merge: Resolving Non-Overlapping Field Changes
For entities where conflicts rarely involve the same fields, the server automatically applies non-conflicting changes and only surfaces true overlapping conflicts:
public class AutoMergeService
{
public async Task<AutoMergeResult> TrySaveWithAutoMerge(
DbContext context, EntityEntry entry, PropertyValues originalSnapshot)
{
try
{
await context.SaveChangesAsync();
return AutoMergeResult.Saved();
}
catch (DbUpdateConcurrencyException)
{
var current = entry.CurrentValues;
var database = await entry.GetDatabaseValuesAsync();
if (database == null) return AutoMergeResult.EntityDeleted();
var trueConflicts = new List<string>();
foreach (var prop in current.Properties)
{
if (prop.IsConcurrencyToken) continue;
var orig = originalSnapshot[prop];
var mine = current[prop];
var theirs = database[prop];
bool iChanged = !Equals(orig, mine);
bool theyChanged = !Equals(orig, theirs);
if (iChanged && theyChanged && !Equals(mine, theirs))
trueConflicts.Add(prop.Name);
else if (theyChanged && !iChanged)
current[prop] = theirs; // Accept their non-conflicting change
}
if (trueConflicts.Any())
return AutoMergeResult.Conflicted(trueConflicts);
entry.OriginalValues.SetValues(database);
await context.SaveChangesAsync();
return AutoMergeResult.AutoMerged();
}
}
}
This approach reduces user friction significantly. In practice, most concurrency conflicts on wide entities involve different fields — User A changes the description while User B changes the price. Auto-merge handles these transparently, and only true field-level conflicts reach the user.
6.4 User Notification Patterns: Signaling Stale Data via WebSockets (SignalR)
Rather than waiting for a save to fail, proactively notify users when data they’re viewing has been modified. A SignalR hub with per-entity groups lets clients subscribe to changes:
public class EntityChangeHub : Hub
{
public async Task WatchEntity(string entityType, string entityId)
{
await Groups.AddToGroupAsync(Context.ConnectionId, $"{entityType}-{entityId}");
}
}
public class ChangeNotificationInterceptor : SaveChangesInterceptor
{
    private readonly IHubContext<EntityChangeHub> _hubContext;
    // The interceptor carries per-save state, so register it per DbContext instance (scoped).
    private readonly List<(string EntityType, string? EntityId, string[] ChangedFields)> _pending = new();

    public ChangeNotificationInterceptor(IHubContext<EntityChangeHub> hubContext)
    {
        _hubContext = hubContext;
    }

    // Capture the change set BEFORE the save: once SaveChanges completes,
    // entries are marked Unchanged and IsModified flags are reset.
    public override ValueTask<InterceptionResult<int>> SavingChangesAsync(
        DbContextEventData eventData, InterceptionResult<int> result,
        CancellationToken cancellationToken = default)
    {
        _pending.Clear();
        if (eventData.Context is not null)
        {
            foreach (var entry in eventData.Context.ChangeTracker.Entries()
                .Where(e => e.State == EntityState.Modified))
            {
                _pending.Add((
                    entry.Entity.GetType().Name,
                    entry.Property("Id").CurrentValue?.ToString(),
                    entry.Properties.Where(p => p.IsModified)
                        .Select(p => p.Metadata.Name).ToArray()));
            }
        }
        return ValueTask.FromResult(result);
    }

    // Notify only after the save actually succeeded.
    public override async ValueTask<int> SavedChangesAsync(
        SaveChangesCompletedEventData eventData, int result,
        CancellationToken cancellationToken = default)
    {
        foreach (var (entityType, entityId, changedFields) in _pending)
        {
            await _hubContext.Clients.Group($"{entityType}-{entityId}")
                .SendAsync("EntityChanged", new { entityType, entityId, changedFields });
        }
        _pending.Clear();
        return result;
    }
}
Client-side (JavaScript):
connection.on("EntityChanged", (change) => {
showStaleDataWarning({
message: `Record updated. Changed: ${change.changedFields.join(", ")}`,
actions: [
{ label: "Refresh", onClick: () => reloadEntity(change.entityId) },
{ label: "Keep editing", onClick: () => dismissWarning() }
]
});
});
7 Distributed Concurrency: Moving Beyond a Single Database
7.1 The Limitations of Database-Level Locking in Microservices
In a monolith with one database, transactions provide strong guarantees. When you split into microservices with separate databases, database-level locks don’t span databases. SELECT ... FOR UPDATE on Service A’s database doesn’t prevent Service B from reading stale data. The solutions fall into three categories: distributed locks, the outbox pattern, and sagas.
7.2 Distributed Locking with RedLock.net (Redis)
RedLock.net provides distributed mutual exclusion using Redis:
public class DistributedLockService
{
private readonly IDistributedLockFactory _lockFactory;
public DistributedLockService(IDistributedLockFactory lockFactory)
{
_lockFactory = lockFactory;
}
public async Task<T> ExecuteWithLock<T>(
string resourceKey, TimeSpan lockDuration, Func<Task<T>> action)
{
await using var redLock = await _lockFactory.CreateLockAsync(
$"lock:{resourceKey}", lockDuration,
waitTime: TimeSpan.FromSeconds(10),
retryTime: TimeSpan.FromMilliseconds(200));
if (!redLock.IsAcquired)
throw new InvalidOperationException($"Could not acquire lock: {resourceKey}");
return await action();
}
}
Usage in cross-service coordination:
return await _lockService.ExecuteWithLock(
$"product-order-{productId}",
TimeSpan.FromSeconds(30),
async () =>
{
var available = await _inventoryClient.GetAvailableStock(productId);
if (available < quantity) throw new InvalidOperationException("Insufficient stock.");
await _inventoryClient.DeductStock(productId, quantity);
var order = new Order { ProductId = productId, Quantity = quantity, UserId = userId };
_context.Orders.Add(order);
await _context.SaveChangesAsync();
return order;
});
Important considerations for distributed locks:
- Lock duration must exceed the maximum expected operation time. If the lock expires before the operation completes, another process can acquire it and create an inconsistency. Use a lock renewal mechanism for variable-duration operations.
- RedLock requires at least 3 independent Redis instances (or a Redis cluster) for fault tolerance. With a single Redis instance, the lock is not truly distributed — a Redis restart loses all locks.
- Distributed locks are not a replacement for idempotency. If the process crashes after acquiring the lock but before completing the operation, the lock expires and a retry may duplicate work. Always combine distributed locks with idempotent operations.
7.3 The Outbox Pattern and Idempotency: Ensuring Consistency across Service Boundaries
The outbox pattern solves the “dual write” problem — updating a database and publishing a message atomically. Write the message to an outbox table in the same transaction as the business data. A background processor publishes messages from the outbox to the message broker:
public async Task<Order> PlaceOrder(CreateOrderCommand command)
{
using var transaction = await _context.Database.BeginTransactionAsync();
try
{
var order = new Order
{
ProductId = command.ProductId, Quantity = command.Quantity,
UserId = command.UserId, Status = OrderStatus.Placed
};
_context.Orders.Add(order);
_context.OutboxMessages.Add(new OutboxMessage
{
Id = Guid.NewGuid(), EventType = "OrderPlaced",
Payload = JsonSerializer.Serialize(new { order.Id, order.ProductId, order.Quantity }),
CreatedAt = DateTime.UtcNow
});
await _context.SaveChangesAsync();
await transaction.CommitAsync();
return order;
}
catch { await transaction.RollbackAsync(); throw; }
}
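The background processor mentioned above can be a hosted service that drains the outbox and marks rows as published. A simplified sketch, with some assumed names: IMessagePublisher is a hypothetical broker abstraction, AppDbContext stands in for your context type, and PublishedAt is an extra column on OutboxMessage not shown earlier. Note the delivery semantics: a crash between publish and SaveChanges means at-least-once delivery, which is exactly why the consuming side below needs idempotency keys.

```csharp
public class OutboxProcessor : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;
    private readonly IMessagePublisher _publisher; // hypothetical broker abstraction

    public OutboxProcessor(IServiceScopeFactory scopeFactory, IMessagePublisher publisher)
    {
        _scopeFactory = scopeFactory;
        _publisher = publisher;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            using var scope = _scopeFactory.CreateScope();
            var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();

            var pending = await context.OutboxMessages
                .Where(m => m.PublishedAt == null)
                .OrderBy(m => m.CreatedAt)
                .Take(50)
                .ToListAsync(stoppingToken);

            foreach (var message in pending)
            {
                // A crash after PublishAsync but before SaveChangesAsync republishes
                // this message on restart, so consumers must deduplicate.
                await _publisher.PublishAsync(message.EventType, message.Payload, stoppingToken);
                message.PublishedAt = DateTime.UtcNow;
                await context.SaveChangesAsync(stoppingToken);
            }

            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}
```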
On the consuming side, idempotency keys prevent duplicate processing:
public async Task HandleOrderPlaced(OrderPlacedEvent evt)
{
    if (await _context.ProcessedEvents.AnyAsync(e => e.IdempotencyKey == evt.IdempotencyKey))
        return;
    // Wrap the stock update and the idempotency record in one transaction so a crash
    // between the two cannot cause the event to be applied twice on redelivery.
    // A unique index on IdempotencyKey catches two consumers racing past the check above.
    using var transaction = await _context.Database.BeginTransactionAsync();
    await _context.Database.ExecuteSqlInterpolatedAsync(
        $@"UPDATE Products SET StockQuantity = StockQuantity - {evt.Quantity}
        WHERE Id = {evt.ProductId} AND StockQuantity >= {evt.Quantity}");
    _context.ProcessedEvents.Add(new ProcessedEvent
    {
        IdempotencyKey = evt.IdempotencyKey, ProcessedAt = DateTime.UtcNow
    });
    await _context.SaveChangesAsync();
    await transaction.CommitAsync();
}
7.4 Sagas and Compensation Logic: Handling “Long-Running” Concurrency
Sagas manage long-running business transactions across services. Each step has a compensating action that undoes it if a later step fails:
public class SagaOrchestrator
{
private readonly List<ISagaStep> _steps;
public async Task<SagaResult> ExecuteAsync(SagaContext context)
{
var completedSteps = new Stack<ISagaStep>();
foreach (var step in _steps)
{
try
{
await step.ExecuteAsync(context);
completedSteps.Push(step);
}
catch (Exception ex)
{
while (completedSteps.Count > 0)
{
var completed = completedSteps.Pop();
await completed.CompensateAsync(context);
}
return SagaResult.Failed(ex.Message);
}
}
return SagaResult.Completed();
}
}
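The orchestrator assumes a step contract along these lines (not shown in the snippet above); each step pairs a forward action with its compensation. The IInventoryClient service client and the SagaContext properties are illustrative assumptions:

```csharp
public interface ISagaStep
{
    Task ExecuteAsync(SagaContext context);
    Task CompensateAsync(SagaContext context);
}

// Example step: reserve stock, and release the reservation if a later step fails.
public class ReserveInventoryStep : ISagaStep
{
    private readonly IInventoryClient _inventory; // hypothetical service client

    public ReserveInventoryStep(IInventoryClient inventory) => _inventory = inventory;

    public Task ExecuteAsync(SagaContext context) =>
        _inventory.ReserveAsync(context.ProductId, context.Quantity);

    public Task CompensateAsync(SagaContext context) =>
        _inventory.CancelReservationAsync(context.ProductId, context.Quantity);
}
```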
Example saga for order placement:
- Step 1: ReserveInventoryStep — Reserve stock in the inventory service. Compensation: cancel the reservation.
- Step 2: ProcessPaymentStep — Charge the customer’s payment method. Compensation: issue a refund.
- Step 3: CreateOrderStep — Create the order record. Compensation: mark the order as cancelled.
If payment fails at Step 2, the orchestrator compensates Step 1 (cancels the inventory reservation). No partial order exists. The key difference from a database transaction: sagas don’t provide isolation — intermediate states are visible to other operations. A query at the wrong moment might see reserved inventory with no corresponding order. Design your system to tolerate partial saga execution by using status fields and idempotent operations at every step.
7.5 Optimizing for Cloud-Native Databases: CosmosDB and CockroachDB Concurrency Models
Azure Cosmos DB uses ETags for optimistic concurrency. EF Core has built-in support:
public class CosmosProductConfiguration : IEntityTypeConfiguration<CosmosProduct>
{
public void Configure(EntityTypeBuilder<CosmosProduct> builder)
{
builder.ToContainer("Products");
builder.HasPartitionKey(p => p.PartitionKey);
builder.UseETagConcurrency();
}
}
CockroachDB supports serializable isolation by default using MVCC. Configure retry on serialization failures:
options.UseNpgsql(connectionString, npgsqlOptions =>
npgsqlOptions.EnableRetryOnFailure(
maxRetryCount: 5,
maxRetryDelay: TimeSpan.FromSeconds(10),
errorCodesToAdd: new[] { "40001" }));
CockroachDB simplifies application code since the database ensures serializable execution — no explicit concurrency tokens needed for most use cases. The trade-off is higher per-transaction latency compared to SQL Server’s Read Committed default.
8 Architect’s Decision Matrix and Best Practices
8.1 Choosing the Strategy: A Decision Tree for New Projects
- Read-only or append-only data? No concurrency control needed.
- Concurrent modification rare (less than 1%)? Optimistic with RowVersion. Handle DbUpdateConcurrencyException with retry or user notification.
- Frequent modification but simple increment/decrement? Atomic SQL operations. Skip the ORM change tracker.
- Frequent modification requiring read-then-write? Pessimistic locking with short transactions.
- Spans multiple services? Distributed locks + outbox + sagas.
- Real-time collaborative editing? OT/CRDT — EF Core is designed for record-level conflicts, not character-level merging.
Quick-reference matrix:
| Scenario | Strategy | EF Core Tool |
|---|---|---|
| User profile updates | Optimistic | [Timestamp] / IsRowVersion() |
| Content editing (single author) | Optimistic | IsConcurrencyToken on LastModified |
| Inventory decrement | Atomic SQL | ExecuteSqlInterpolatedAsync |
| Bank account transfer | Pessimistic | FromSqlInterpolated with UPDLOCK |
| Booking system | Pessimistic + Constraints | Serializable TX + unique index |
| Multi-service order flow | Distributed | RedLock + Outbox + Saga |
8.2 Monitoring and Observability: Detecting Contention with OpenTelemetry
Instrument concurrency operations with custom metrics:
public static class ConcurrencyMetrics
{
private static readonly Meter Meter = new("App.Concurrency", "1.0");
private static readonly Counter<long> ConflictCounter =
Meter.CreateCounter<long>("concurrency.conflicts.total");
private static readonly Counter<long> DeadlockCounter =
Meter.CreateCounter<long>("concurrency.deadlocks.total");
private static readonly Histogram<double> LockWaitTime =
Meter.CreateHistogram<double>("concurrency.lock.wait_ms", unit: "ms");
public static void RecordConflict(string entityType) =>
ConflictCounter.Add(1, new KeyValuePair<string, object>("entity_type", entityType));
public static void RecordDeadlock(string entityType) =>
DeadlockCounter.Add(1, new KeyValuePair<string, object>("entity_type", entityType));
public static void RecordLockWait(string entityType, double ms) =>
LockWaitTime.Record(ms, new KeyValuePair<string, object>("entity_type", entityType));
}
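Recording happens at the point where conflicts surface. For example, wrapping the save on the Product entity from the earlier sections:

```csharp
try
{
    await _context.SaveChangesAsync();
}
catch (DbUpdateConcurrencyException)
{
    ConcurrencyMetrics.RecordConflict(nameof(Product));
    throw; // let the normal conflict-handling path run
}
```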
Set up alerts on these metrics:
- Conflict rate above 5% for any entity type: Review whether optimistic concurrency is appropriate for that entity. Consider switching to pessimistic locking or atomic SQL.
- Lock wait time p99 above 500ms: Transactions are being held too long. Investigate what’s happening inside the critical section — external HTTP calls, complex computations, or unnecessary I/O.
- Any deadlocks: Even occasional deadlocks indicate a lock ordering problem that should be investigated and fixed.
- Retry count consistently above 2 per operation: The contention is too high for optimistic concurrency on this entity. The retry cost is adding up.
Combine these application-level metrics with SQL Server’s built-in monitoring:
-- Monitor lock waits in real-time
SELECT
wait_type, waiting_tasks_count,
wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK%'
AND waiting_tasks_count > 0
ORDER BY wait_time_ms DESC;
8.3 Anti-Patterns to Avoid: Long-Lived Transactions and Global Locks
Long-lived transactions: Opening a transaction when a user loads a form and committing when they save holds locks for the entire editing session. Use optimistic concurrency for user-facing forms instead.
Global locks: A single application-wide mutex serializes all operations and eliminates parallelism. Lock at the narrowest scope — per entity or per resource.
Silently swallowing conflicts: Catching DbUpdateConcurrencyException, logging it, and dropping the user’s changes is data loss disguised as error handling. Always resolve explicitly.
Serializable everywhere: Setting the default isolation to Serializable causes massive lock contention and frequent deadlocks. Use it only for specific operations that require range-based protection.
8.4 Summary: The Hybrid Approach — OCC by Default, PCC by Exception
The recommended approach for most .NET applications:
- Default to optimistic concurrency. Add [Timestamp] to every concurrently-modified entity.
- Use atomic SQL for counters. Inventory, balances, and aggregates bypass the change tracker.
- Reserve pessimistic locking for hot paths. Add UPDLOCK only where telemetry shows high conflict rates.
- Distributed coordination at service boundaries. Outbox for messaging, Redis locks for coordination, sagas for multi-step transactions.
- Invest in conflict resolution UX. When conflicts reach the user, provide clear information and actionable options.
A base entity class as a starting point:
public abstract class BaseEntity
{
public int Id { get; set; }
[Timestamp]
public byte[] RowVersion { get; set; }
public DateTime CreatedAt { get; set; }
public DateTime ModifiedAt { get; set; }
}
public class BaseEntityConfiguration<T> : IEntityTypeConfiguration<T>
where T : BaseEntity
{
public virtual void Configure(EntityTypeBuilder<T> builder)
{
builder.HasKey(e => e.Id);
builder.Property(e => e.RowVersion).IsRowVersion();
builder.Property(e => e.CreatedAt).HasDefaultValueSql("GETUTCDATE()");
builder.Property(e => e.ModifiedAt).HasDefaultValueSql("GETUTCDATE()");
}
}
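Nothing in the configuration above keeps ModifiedAt current on updates; the default only applies at insert. One way to maintain it is a SaveChangesAsync override inside your DbContext (a sketch; a SaveChangesInterceptor works equally well):

```csharp
public override Task<int> SaveChangesAsync(CancellationToken cancellationToken = default)
{
    foreach (var entry in ChangeTracker.Entries<BaseEntity>()
        .Where(e => e.State == EntityState.Modified))
    {
        entry.Entity.ModifiedAt = DateTime.UtcNow;
    }
    return base.SaveChangesAsync(cancellationToken);
}
```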
8.5 Future Outlook: What’s Next for Concurrency in .NET 10 and Beyond
EF Core 10 ships with refined interceptor APIs and improved FromSql composability, simplifying pessimistic locking patterns. Native SELECT ... FOR UPDATE LINQ support remains an open request (GitHub issue #26042).
Native AOT compatibility continues improving. Interceptors using reflection may need adaptation for AOT-compiled services.
.NET Aspire provides built-in support for service discovery, telemetry, and resilience patterns, reducing the boilerplate for distributed locking and outbox implementations.
SQL Server improvements like Accelerated Database Recovery (ADR) reduce the impact of long-running transactions. Azure SQL Hyperscale tiers absorb concurrency spikes more gracefully than fixed-capacity deployments.
Event sourcing handles concurrency differently — appending events to immutable streams and checking stream versions at append time. Libraries like Marten (PostgreSQL) and EventStoreDB integrate with .NET and provide built-in concurrency controls aligned with event-sourcing patterns.
The fundamental principles remain stable: understand your contention profile, choose the narrowest effective locking strategy, measure conflict rates in production, and design resolution paths for when conflicts occur. The tools evolve, but the architectural decisions stay the same.