Building Event Sourcing with EventStore and .NET: From Theory to Production

1 Introduction: Beyond the Limits of State

Software engineering has always been about managing change—capturing how things evolve over time. Yet, for decades, we’ve largely defaulted to CRUD (Create, Read, Update, Delete) operations against relational databases. CRUD is simple, ubiquitous, and well-understood, but it comes with severe limitations when the system’s evolution, history, and reasoning behind changes matter. Event Sourcing offers a different approach: instead of persisting only the latest state, we capture what happened and derive state from those facts.

Before diving deeper, let’s set the context:

TL;DR

  • Use Event Sourcing when… Auditability, traceability, temporal queries, or replayable business history are first-class requirements.
  • Avoid Event Sourcing when… Your domain is simple, CRUD suffices, or operational complexity outweighs the benefits.

Also, a quick clarifier: Event Sourcing is not the same as event-driven architecture (EDA).

  • Event Sourcing is a persistence strategy: it stores facts as a stream of immutable events.
  • Event-driven architecture is about integration and communication: systems reacting to published events. The two often complement each other but solve different problems.

This guide is a deep dive into building event-sourced systems with EventStoreDB and .NET. We’ll move from foundational principles to advanced, production-ready techniques. Along the way, we’ll see how theory translates into code, what trade-offs to expect, and how to build systems that remain adaptable under changing business demands.

1.1 The CRUD Conundrum

CRUD-based systems dominate enterprise applications. The pattern is simple: write the latest version of an entity into a table, and read it back later. But as business complexity grows, cracks begin to show:

  • Loss of business intent: If a row changes from status='Pending' to status='Shipped', we lose the why. Was the order auto-fulfilled? Did a warehouse manager intervene? Did we oversell inventory? The context disappears.
  • Destructive updates: Each update overwrites prior state. Unless you bolt on an audit log (which is usually partial and inconsistent), history is gone.
  • Audit trails and compliance headaches: Industries like finance, healthcare, or logistics often require a complete history of transactions and state changes. Retrofitting this onto CRUD databases is clunky and error-prone.
  • Debugging and incident response: When a bug causes invalid data, CRUD systems often cannot reconstruct the sequence of steps that led there.

These shortcomings aren’t just theoretical. Many teams have experienced the frustration of being asked, “Why did this record change?”—and realizing the database can’t answer.

1.2 A Paradigm Shift

Event Sourcing flips the perspective: we don’t model what the system is; we model what happened. Every change in the system is captured as an immutable event, appended to a stream. For example:

  • Instead of setting Order.Status = "Shipped", we record an event OrderShipped { OrderId = 123, ShippedAt = 2025-09-15 }.
  • Instead of updating a product’s stock level directly, we record StockCheckedIn or StockReserved.

This shift has profound implications:

  • State becomes derived: Current state is the result of applying (replaying) all past events.
  • History is preserved: We never lose information; we only append.
  • Business language becomes central: Events are expressed in ubiquitous domain terms, often directly mirroring how stakeholders talk.

Importantly, Event Sourcing isn’t just a technology choice—it’s a modeling philosophy. It requires thinking in terms of facts and flows of events rather than tables and updates.

1.3 The Promise of ES

Why go through the extra complexity of modeling events? Because the benefits are significant:

  • Complete auditability: Every action is preserved, with context and metadata (who, when, why). This is gold for compliance, debugging, and accountability.
  • Temporal queries: You can ask, “What was the stock level for SKU X last Tuesday at 2 PM?”—a question CRUD systems struggle to answer without expensive logging.
  • Business insight: Events reflect real business processes, enabling advanced analytics and projections. You can replay events to generate new reports or insights.
  • Architectural flexibility: Event streams can feed multiple consumers—read models, analytics pipelines, or external integrations—without modifying the core system.
  • Safe evolution: As requirements shift, you can rebuild new projections by replaying the event history. No risky schema migrations that discard data.

In short, Event Sourcing gives you a time machine for your business data—something state-based systems simply can’t match.

1.4 Our Toolkit

For this guide, we’ll use two main technologies:

  • .NET (8 LTS or later): Modern .NET provides a high-performance runtime, mature tooling, and strong support for asynchronous I/O. C# record types are particularly elegant for modeling immutable events and commands.

  • EventStoreDB: A purpose-built event database, EventStoreDB was designed from the ground up to handle event streams at scale. Key features:

    • Immutable append-only storage: guarantees that events are never lost or overwritten.
    • gRPC client API: modern, performant client with excellent .NET support.
    • Subscriptions: push-based mechanisms for projections and consumers.
    • Projections engine: server-side stream transformations and filtering.

With this stack, we can build a robust, production-ready system that doesn’t just support Event Sourcing as a pattern—it embraces it.

1.5 Who This Article Is For

This guide is written for:

  • Senior Developers and Tech Leads who need to move beyond CRUD and design systems that scale in complexity.
  • Solution Architects evaluating Event Sourcing as a strategic approach to meet audit, analytics, or integration requirements.

Prerequisites:

  • Strong proficiency in C# and .NET, including asynchronous programming and idiomatic use of record types.
  • Familiarity with distributed systems concepts: eventual consistency, idempotency, and message-driven architectures.
  • Comfort with Docker and basic infrastructure setup.

If you’re coming from a CRUD background, prepare for a mindset shift. If you’re already versed in DDD or CQRS, you’ll find Event Sourcing a natural next step.


2 Event Sourcing Fundamentals: The Core Building Blocks

Before we dive into code, we need to ground ourselves in the core concepts. Event Sourcing introduces a vocabulary—events, streams, aggregates, commands—that frames how we model and implement systems.

2.1 What is an Event?

An event is an immutable fact that something happened in the past. Once it occurs, it cannot be changed or deleted; we can only record it and derive consequences.

Key properties of events

  • Immutable: Events are never updated or deleted. They are written once and remain forever.
  • Descriptive: Events describe what happened, not what will or should happen.
  • Domain-oriented: Events use the language of the business: OrderPlaced, PaymentReceived, StockReserved.

Best practices for naming events

  • Use past tense to emphasize that the event is a fact, not a command.

    • Correct: OrderShipped
    • Incorrect: ShipOrder (that’s a command, not an event)
  • Keep names concise but unambiguous.

  • Include necessary context in the event payload.

Example in C#

public record OrderPlaced(
    Guid OrderId,
    Guid CustomerId,
    DateTimeOffset PlacedAt,
    IReadOnlyList<OrderItem> Items
);

public record OrderItem(string Sku, int Quantity);

Notes:

  • It’s an immutable record.
  • It captures all relevant data to understand the event historically.
  • DateTimeOffset is used instead of DateTime to preserve the original timezone and offset.

Events are the atoms of an event-sourced system. Everything else—state, projections, read models—is derived from them.

2.2 Event Envelopes

In practice, events rarely live in isolation. We often wrap them in an envelope that provides consistent metadata for correlation, causation, and auditing. A common pattern is:

public record EventEnvelope<T>(
    string Type,
    T Data,
    EventMetadata Metadata
);

public record EventMetadata(
    Guid EventId,
    Guid CorrelationId,
    Guid CausationId,
    Guid UserId,
    DateTimeOffset Timestamp
);

This allows us to capture not just what happened but also who triggered it, why it happened, and how it relates to other events. This metadata becomes critical for tracing workflows, debugging distributed systems, and enforcing accountability.
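To make the envelope concrete, here is a small sketch of wrapping a domain event. The `EnvelopeFactory` helper and the simplified `StockReserved` record are illustrative, not part of any library; the envelope records are repeated so the sketch is self-contained:

```csharp
using System;

public record EventEnvelope<T>(string Type, T Data, EventMetadata Metadata);

public record EventMetadata(
    Guid EventId, Guid CorrelationId, Guid CausationId, Guid UserId, DateTimeOffset Timestamp);

public record StockReserved(Guid ItemId, int Quantity, DateTimeOffset ReservedAt);

public static class EnvelopeFactory
{
    // CorrelationId ties an entire workflow together; CausationId points at the
    // specific message that directly triggered this event
    public static EventEnvelope<T> Wrap<T>(T @event, Guid correlationId, Guid causationId, Guid userId) =>
        new(typeof(T).Name, @event,
            new EventMetadata(Guid.NewGuid(), correlationId, causationId, userId, DateTimeOffset.UtcNow));
}
```

A common convention: the first event of a workflow uses the triggering command's id for both CorrelationId and CausationId; downstream events keep the correlation id and set causation to their immediate predecessor.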

2.3 Streams

A stream is a sequence of events, ordered by time, associated with a particular entity or aggregate instance. Each event is appended to the end of its stream.

Characteristics

  • Append-only: Events are only added, never removed.
  • Ordered: Events maintain a strict sequence. For an aggregate, this sequence is critical because business rules depend on the exact order of events.
  • Entity-scoped: Typically, a stream corresponds to one aggregate instance.

Example:

order-123 stream:

  1. OrderPlaced { OrderId=123, Items=[...] }
  2. PaymentReceived { OrderId=123, Amount=100.00 }
  3. OrderShipped { OrderId=123, ShippedAt=2025-09-15 }

To reconstruct the order’s state, we replay the events in order.

2.4 Aggregates

An aggregate is a domain-driven design concept: it’s the consistency boundary for business rules and invariants. In Event Sourcing:

  • The aggregate processes commands.
  • It validates rules (e.g., cannot ship more stock than available).
  • It produces events when rules are satisfied.
  • Its state is derived by replaying its own event stream.

Example

public class InventoryItem
{
    public Guid Id { get; private set; }
    public int CurrentStock { get; private set; }

    public IEnumerable<object> Handle(ReserveStock cmd)
    {
        // Note: iterator methods defer execution, so this guard runs when the result is enumerated
        if (cmd.Quantity > CurrentStock)
            throw new InvalidOperationException("Not enough stock");

        yield return new StockReserved(cmd.ItemId, cmd.Quantity, DateTimeOffset.UtcNow);
    }

    // Explicit Apply overloads for compiler safety
    public void Apply(StockCheckedIn e) => CurrentStock += e.Quantity;
    public void Apply(StockShipped e)   => CurrentStock -= e.Quantity;
    public void Apply(StockReserved e)  => CurrentStock -= e.Quantity;
}

Key points:

  • Each event has a strongly-typed Apply method rather than a generic Apply(object). This gives the compiler full type safety and avoids boxing.
  • The aggregate’s state is only updated through these Apply methods.

2.5 Commands

A command is a request to perform an action. Unlike events, commands are imperative: they express intent, not facts. A command may be accepted or rejected depending on business rules.

Example

public record ReserveStock(Guid ItemId, int Quantity);

public record StockReserved(
    Guid ItemId,
    int Quantity,
    DateTimeOffset ReservedAt
);

Here:

  • The command ReserveStock expresses an intent.
  • The event StockReserved captures the fact if the command succeeds.

Commands are not persisted. Only the resulting events are.
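The command/event split can be expressed as a small "decide" function: current state and a command go in, events come out (or the command is rejected). A minimal sketch, where the `StockDecider` name and the int-as-state simplification are illustrative:

```csharp
using System;
using System.Collections.Generic;

public record ReserveStock(Guid ItemId, int Quantity);
public record StockReserved(Guid ItemId, int Quantity, DateTimeOffset ReservedAt);

public static class StockDecider
{
    // Accept or reject the intent; only on success is a fact (event) produced
    public static IReadOnlyList<object> Decide(ReserveStock cmd, int currentStock)
    {
        if (cmd.Quantity <= 0)
            throw new InvalidOperationException("Quantity must be positive.");
        if (cmd.Quantity > currentStock)
            throw new InvalidOperationException("Insufficient stock.");

        return new object[] { new StockReserved(cmd.ItemId, cmd.Quantity, DateTimeOffset.UtcNow) };
    }
}
```

Rejection leaves no trace in the store by default; some teams additionally record a rejection event when the refusal itself is business-relevant.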

2.6 The Golden Rule: State = f(State, Event)

At the heart of Event Sourcing lies a simple functional idea:

State is the left-fold of events.

CurrentState = f( f( f(InitialState, Event0), Event1 ), Event2 ) ...

Example

var events = new object[]
{
    new StockCheckedIn(itemId, 50),
    new StockShipped(itemId, 10)
};

var inventory = new InventoryItem();
foreach (var e in events)
{
    // dynamic dispatch resolves the correct Apply overload at runtime
    // (fine for a demo; production code typically uses a switch or a generated dispatcher)
    inventory.Apply((dynamic)e);
}

Console.WriteLine(inventory.CurrentStock); // Output: 40

The state (CurrentStock=40) is derived solely from the sequence of events. There are no destructive overwrites—just a replay of facts.

This determinism ensures reproducibility, debuggability, and confidence that business state is always explainable from its event history.
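The left-fold can also be written out literally with LINQ's Aggregate. A pure sketch that uses a bare int as the state, simplified from the aggregate above (the `StockFold` helper is illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record StockCheckedIn(Guid ItemId, int Quantity);
public record StockShipped(Guid ItemId, int Quantity);

public static class StockFold
{
    // state' = f(state, event): one pure transition function
    public static int Apply(int stock, object @event) => @event switch
    {
        StockCheckedIn e => stock + e.Quantity,
        StockShipped e => stock - e.Quantity,
        _ => stock // unknown events leave the state untouched
    };

    // Current state is the left-fold of the transition over history
    public static int Replay(IEnumerable<object> history) => history.Aggregate(0, Apply);
}
```

Because Apply is pure, replaying the same history always yields the same state, which is exactly the determinism property described above.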


3 A Practical Start: Building an Inventory Management System

Theory is useful, but developers ultimately need to see ideas in code before they become second nature. Event Sourcing shines brightest when implemented in domains full of real-world constraints—stock levels, reservations, cancellations, and concurrent modifications. An inventory management system provides a concrete, relatable playground for exploring these concepts in .NET with EventStoreDB.

3.1 Setting the Stage

3.1.1 Domain Choice

Inventory management is deceptively simple at first glance: track what items are in stock, update counts when stock arrives, and decrement counts when items ship. But real-world needs quickly add complexity:

  • Concurrency challenges: Multiple processes may try to reserve or ship stock simultaneously. Correctness requires precise sequencing.
  • Business rules: You cannot ship stock that was never checked in. Reservations may expire. Cancelling a reservation should restore availability.
  • Auditability: Operations must be traceable. Warehouse teams, auditors, and even customers might need to know exactly when stock was checked in, reserved, or shipped.
  • Analytics: Events like StockReserved or StockShipped can feed demand forecasting, restock planning, and anomaly detection.

This makes the inventory domain an excellent candidate for illustrating the mechanics of Event Sourcing.

3.1.2 Environment Setup

  1. Run EventStoreDB with Docker The simplest way to run EventStoreDB locally is with Docker. Use a named volume to persist data, expose only the gRPC/HTTP port, and explicitly enable projections for development:

    docker run --name eventstore \
      -it -p 2113:2113 \
      -v esdb-data:/var/lib/eventstore \
      eventstore/eventstore:latest \
      --insecure --run-projections=All
    • --insecure disables TLS for local dev.
    • --run-projections=All enables system projections (handy for exploring streams).

    ⚠️ Important: Don’t use --insecure in production. Always configure proper certificates and authentication.

    Admin UI: http://localhost:2113

  2. Create a new .NET solution

    dotnet new sln -n InventorySystem
    dotnet new console -n InventoryApp
    dotnet sln add InventoryApp
  3. Install EventStoreDB client

    Use the current official gRPC client:

    dotnet add InventoryApp package EventStore.Client

At this point, we’re ready to model our domain and begin interacting with EventStoreDB.
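One small piece worth showing before the repository code later in this guide: constructing the client itself. A minimal sketch for the insecure local setup above; the connection string is an assumption for local dev, and tls=false pairs with the --insecure flag:

```csharp
using EventStore.Client;

// Local dev only: plaintext connection to the Docker container started above
const string connectionString = "esdb://localhost:2113?tls=false";

var settings = EventStoreClientSettings.Create(connectionString);
var client = new EventStoreClient(settings);
```

The client is designed to be long-lived; register it once (for example, as a DI singleton) rather than creating one per request.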

3.2 Defining Our Domain’s Language

Event Sourcing forces us to articulate the business language precisely: commands (intent) and events (facts).

3.2.1 Commands

public record CheckInStock(Guid ItemId, int Quantity, string Warehouse);
public record ReserveStock(Guid ItemId, int Quantity, Guid ReservationId);
public record ShipStock(Guid ItemId, int Quantity, Guid ReservationId);
public record CancelReservation(Guid ItemId, Guid ReservationId);

Observations:

  • CheckInStock specifies where the stock was added.
  • ReserveStock includes a ReservationId so it can later be cancelled.
  • Commands are not persisted—only the resulting events are.

3.2.2 Events

public record StockCheckedIn(
    Guid ItemId, int Quantity, string Warehouse, DateTimeOffset CheckedInAt);

public record StockReserved(
    Guid ItemId, int Quantity, Guid ReservationId, DateTimeOffset ReservedAt);

public record StockShipped(
    Guid ItemId, int Quantity, Guid ReservationId, DateTimeOffset ShippedAt);

public record StockReservationCancelled(
    Guid ItemId, Guid ReservationId, DateTimeOffset CancelledAt);

Notes:

  • Use DateTimeOffset instead of DateTime to preserve timezone context.
  • Events always include identifiers (ItemId, ReservationId) for correlation and replay.

3.3 The InventoryItem Aggregate

3.3.1 Structure

public class InventoryItem
{
    private readonly List<object> _changes = new();
    private readonly Dictionary<Guid, int> _reservations = new();

    public Guid Id { get; private set; }
    private int _currentStock;

    private InventoryItem() { }

    public static InventoryItem LoadFromHistory(IEnumerable<object> history)
    {
        var item = new InventoryItem();
        foreach (var e in history)
            item.Apply(e);
        return item;
    }

    public IReadOnlyList<object> GetUncommittedChanges() => _changes.AsReadOnly();
    public void ClearUncommittedChanges() => _changes.Clear();
}

3.3.2 Business Logic

public void CheckIn(Guid itemId, int quantity, string warehouse)
{
    if (itemId == Guid.Empty) throw new ArgumentException("ItemId must be set.");
    if (quantity <= 0) throw new InvalidOperationException("Quantity must be positive.");

    var @event = new StockCheckedIn(itemId, quantity, warehouse, DateTimeOffset.UtcNow);
    ApplyChange(@event);
}

public void Reserve(Guid reservationId, int quantity)
{
    if (quantity <= 0) throw new InvalidOperationException("Quantity must be positive.");
    if (_currentStock < quantity) throw new InvalidOperationException("Insufficient stock available.");

    var @event = new StockReserved(Id, quantity, reservationId, DateTimeOffset.UtcNow);
    ApplyChange(@event);
}

public void Ship(Guid reservationId, int quantity)
{
    if (!_reservations.TryGetValue(reservationId, out var reserved) || reserved < quantity)
        throw new InvalidOperationException("Cannot ship more than reserved.");

    var @event = new StockShipped(Id, quantity, reservationId, DateTimeOffset.UtcNow);
    ApplyChange(@event);
}

public void CancelReservation(Guid reservationId)
{
    if (!_reservations.ContainsKey(reservationId)) return;

    var @event = new StockReservationCancelled(Id, reservationId, DateTimeOffset.UtcNow);
    ApplyChange(@event);
}

3.3.3 State Management

private void ApplyChange(object @event)
{
    Apply(@event);
    _changes.Add(@event);
}

private void Apply(object @event)
{
    switch (@event)
    {
        case StockCheckedIn e:
            Id = e.ItemId;
            _currentStock += e.Quantity;
            break;
        case StockReserved e:
            _currentStock -= e.Quantity;
            _reservations[e.ReservationId] = e.Quantity;
            break;
        case StockShipped e:
            _reservations[e.ReservationId] -= e.Quantity;
            if (_reservations[e.ReservationId] == 0)
                _reservations.Remove(e.ReservationId);
            break;
        case StockReservationCancelled e:
            if (_reservations.TryGetValue(e.ReservationId, out var qty))
            {
                _currentStock += qty;
                _reservations.Remove(e.ReservationId);
            }
            break;
    }
}
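To see the aggregate in motion, here is a condensed, self-contained restatement of the class above (check-in and reserve only; the private constructor and LoadFromHistory are omitted for brevity) followed by a short run:

```csharp
using System;
using System.Collections.Generic;

public record StockCheckedIn(Guid ItemId, int Quantity, string Warehouse, DateTimeOffset CheckedInAt);
public record StockReserved(Guid ItemId, int Quantity, Guid ReservationId, DateTimeOffset ReservedAt);

public class InventoryItem
{
    private readonly List<object> _changes = new();
    private readonly Dictionary<Guid, int> _reservations = new();

    public Guid Id { get; private set; }
    public int CurrentStock { get; private set; }

    public IReadOnlyList<object> GetUncommittedChanges() => _changes.AsReadOnly();

    public void CheckIn(Guid itemId, int quantity, string warehouse)
    {
        if (quantity <= 0) throw new InvalidOperationException("Quantity must be positive.");
        ApplyChange(new StockCheckedIn(itemId, quantity, warehouse, DateTimeOffset.UtcNow));
    }

    public void Reserve(Guid reservationId, int quantity)
    {
        if (CurrentStock < quantity) throw new InvalidOperationException("Insufficient stock available.");
        ApplyChange(new StockReserved(Id, quantity, reservationId, DateTimeOffset.UtcNow));
    }

    private void ApplyChange(object @event) { Apply(@event); _changes.Add(@event); }

    private void Apply(object @event)
    {
        switch (@event)
        {
            case StockCheckedIn e: Id = e.ItemId; CurrentStock += e.Quantity; break;
            case StockReserved e: CurrentStock -= e.Quantity; _reservations[e.ReservationId] = e.Quantity; break;
        }
    }
}
```

Checking in 100 units and reserving 30 leaves CurrentStock at 70 with two uncommitted events, which is exactly what the repository in the next section will persist.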

3.4 Communicating with EventStoreDB

3.4.1 Event Type Mapping

Avoid coupling to CLR type names. Instead, maintain a registry:

public static class EventTypeRegistry
{
    private static readonly Dictionary<string, Type> _map = new()
    {
        ["stock-checked-in"] = typeof(StockCheckedIn),
        ["stock-reserved"] = typeof(StockReserved),
        ["stock-shipped"] = typeof(StockShipped),
        ["stock-reservation-cancelled"] = typeof(StockReservationCancelled)
    };

    public static string GetName(Type type) =>
        _map.FirstOrDefault(x => x.Value == type).Key
            ?? throw new InvalidOperationException($"Unregistered event type: {type.Name}");

    public static Type GetType(string name) => _map[name];
}
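A round trip shows the payoff: renaming a CLR type later only requires updating the map, not migrating stored events. A sketch with a trimmed one-entry registry (the `MiniRegistry` name and its `Resolve` method, mirroring GetType above, are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;

public record StockReserved(Guid ItemId, int Quantity, DateTimeOffset ReservedAt);

public static class MiniRegistry
{
    private static readonly Dictionary<string, Type> _map = new()
    {
        ["stock-reserved"] = typeof(StockReserved)
    };

    // Stored name -> CLR type, and back; storage never sees CLR type names
    public static string GetName(Type type) => _map.First(x => x.Value == type).Key;
    public static Type Resolve(string name) => _map[name];
}
```

Serialization stores the registry name alongside the JSON payload; deserialization resolves the CLR type from that stored name.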

3.4.2 Repository

public class EventStoreRepository<T> where T : class
{
    protected readonly EventStoreClient _client;
    private readonly string _prefix;
    private readonly JsonSerializerOptions _jsonOptions = new()
    {
        PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
        Converters = { new JsonStringEnumConverter() }
    };

    public EventStoreRepository(EventStoreClient client, string prefix)
    {
        _client = client;
        _prefix = prefix;
    }

    protected string GetStream(Guid id) => $"{_prefix}-{id}";

    public async Task<T> GetByIdAsync(Guid id, Func<IEnumerable<object>, T> factory)
    {
        var stream = GetStream(id);
        var events = new List<object>();

        var result = _client.ReadStreamAsync(Direction.Forwards, stream, StreamPosition.Start);

        if (await result.ReadState == ReadState.StreamNotFound)
            return factory(Array.Empty<object>());

        await foreach (var resolved in result)
        {
            if (resolved.Event.EventType.StartsWith("$")) continue;

            var type = EventTypeRegistry.GetType(resolved.Event.EventType);
            var evt = JsonSerializer.Deserialize(resolved.Event.Data.Span, type, _jsonOptions)!;
            events.Add(evt);
        }

        return factory(events);
    }

    public async Task SaveAsync(Guid id, IEnumerable<object> changes, ulong? expectedRevision = null)
    {
        var stream = GetStream(id);
        var eventData = changes.Select(evt =>
            new EventData(
                Uuid.NewUuid(),
                EventTypeRegistry.GetName(evt.GetType()),
                JsonSerializer.SerializeToUtf8Bytes(evt, evt.GetType(), _jsonOptions),
                contentType: "application/json"
            ));

        try
        {
            if (expectedRevision == null)
                await _client.AppendToStreamAsync(stream, StreamState.NoStream, eventData); // null revision means a brand-new aggregate: the stream must not exist yet
            else
                await _client.AppendToStreamAsync(stream, StreamRevision.FromUInt64(expectedRevision.Value), eventData);
        }
        catch (WrongExpectedVersionException)
        {
            // handle optimistic concurrency: reload & retry
            throw;
        }
    }
}

4 From Write-Side to Read-Side: CQRS and Projections

The write model we’ve built so far ensures correctness and business rules. But users rarely want to replay streams to answer queries like “Show me all products with less than 10 units in stock.” That’s where CQRS and projections come in.

4.1 The Querying Problem

Querying directly against an event store is inefficient for complex reads:

  • To answer “What’s the current stock for item X?”, you must replay all events in inventory-X.
  • To answer “Show low-stock items across all warehouses”, you’d need to read thousands of streams and aggregate them on the fly.

This doesn’t scale, nor does it provide the responsiveness users expect. We need a read-optimized model.

4.2 CQRS (Command Query Responsibility Segregation): A Natural Fit

CQRS separates the write side (aggregates and commands) from the read side (projections and queries). Benefits include:

  • Performance: Reads hit a denormalized, query-optimized store.
  • Scalability: Writes remain focused and isolated; reads scale independently.
  • Flexibility: Multiple read models can serve different query needs without changing the write model.

In Event Sourcing, CQRS is a natural consequence—events flow into projections that shape the read model.

4.3 Projections: The Heart of the Read Model

4.3.1 Concept

A projection subscribes to events and updates a denormalized store. For example:

  • From StockCheckedIn and StockShipped, build a table of current stock levels.
  • From StockReserved, maintain a list of active reservations.

Projections make queries efficient without replaying entire streams.
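In miniature, a projection is just a fold into a query-friendly shape. A pure in-memory sketch (the `StockLevelProjection` name and simplified records are illustrative; durable projections write to a database instead):

```csharp
using System;
using System.Collections.Generic;

public record StockCheckedIn(Guid ItemId, int Quantity);
public record StockShipped(Guid ItemId, int Quantity);

public class StockLevelProjection
{
    // item id -> current stock: denormalized and ready for low-stock queries
    public Dictionary<Guid, int> Levels { get; } = new();

    public void When(object @event)
    {
        switch (@event)
        {
            case StockCheckedIn e:
                Levels[e.ItemId] = Levels.GetValueOrDefault(e.ItemId) + e.Quantity;
                break;
            case StockShipped e:
                Levels[e.ItemId] = Levels.GetValueOrDefault(e.ItemId) - e.Quantity;
                break;
        }
    }
}
```

Answering "show low-stock items" is then a dictionary scan rather than a replay of every stream.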

4.3.2 Choosing Your Read Model Store

The store depends on query needs:

  • PostgreSQL: Relational queries, joins, reports.
  • Elasticsearch: Full-text search, dashboards.
  • Redis: Low-latency lookups.

Many systems mix and match, with different projections writing to different stores.

4.4 Implementation Strategies

4.4.1 Application-Side Projections (Catch-up Subscriptions)

In .NET, you can run a background service that subscribes to EventStoreDB and maintains a read model database. Crucially, you must manage checkpointing so projections resume after restarts, and ensure idempotency so events are never applied twice.

Example with PostgreSQL:

public class StockProjectionService : BackgroundService
{
    private readonly EventStoreClient _client;
    private readonly NpgsqlConnection _connection;

    public StockProjectionService(EventStoreClient client, NpgsqlConnection connection)
    {
        _client = client;
        _connection = connection;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Load last checkpoint (null means start from the beginning)
        var checkpoint = await LoadCheckpoint();

        await _client.SubscribeToAllAsync(
            checkpoint is { } position ? FromAll.After(position) : FromAll.Start,
            async (subscription, resolved, ct) =>
            {
                var evt = Deserialize(resolved.Event);
                switch (evt)
                {
                    case StockCheckedIn e:
                        await UpsertStock(e.ItemId, e.Quantity);
                        break;
                    case StockShipped e:
                        await DecreaseStock(e.ItemId, e.Quantity);
                        break;
                }

                // Persist the checkpoint (in production, batch every N events)
                await SaveCheckpoint(resolved.Event.Position);
            },
            cancellationToken: stoppingToken
        );

        // The callback-based subscription runs in the background; keep the service alive until shutdown
        await Task.Delay(Timeout.Infinite, stoppingToken);
    }

    private async Task UpsertStock(Guid itemId, int qty)
    {
        var sql = @"INSERT INTO stock_levels (item_id, quantity) 
                    VALUES (@id, @qty) 
                    ON CONFLICT (item_id) DO UPDATE 
                    SET quantity = stock_levels.quantity + @qty";
        await _connection.ExecuteAsync(sql, new { id = itemId, qty });
    }

    private async Task DecreaseStock(Guid itemId, int qty)
    {
        var sql = @"UPDATE stock_levels 
                    SET quantity = quantity - @qty 
                    WHERE item_id = @id";
        await _connection.ExecuteAsync(sql, new { id = itemId, qty });
    }

    // Stubs: persist the Position in a checkpoints table alongside the read model
    private Task<Position?> LoadCheckpoint() => Task.FromResult<Position?>(null); // read from DB
    private Task SaveCheckpoint(Position position) => Task.CompletedTask;         // write to DB
}

Notes:

  • SubscribeToAllAsync is supplied with a checkpoint so the projection can resume after a restart.
  • Persist checkpoints periodically to avoid replaying from the beginning.
  • ExecuteAsync is a Dapper extension method; add the Dapper package alongside Npgsql.
  • ON CONFLICT makes the upsert safe whether or not the row exists, but replaying the same event would still double-count. Idempotency comes from the checkpoint (only events past the stored position are processed) or from tracking processed event ids in the read model.

4.4.2 Server-Side Projections with EventStoreDB

EventStoreDB also supports server-side projections in JavaScript. For example, to group all inventory-* streams into a category:

fromCategory('inventory')
  .when({
    StockCheckedIn: function(state, event) {
      linkTo('stock_levels', event);
    },
    StockShipped: function(state, event) {
      linkTo('stock_levels', event);
    }
  });

Important:

  • This relies on streams being named inventory-<id>.
  • $by_category must be enabled (--run-projections=All in dev).

Server-side projections are ideal for lightweight stream transformations and linking. For durable read models with external databases and checkpoints, prefer application-side projections.


5 Production-Grade Patterns and Concerns

Once you have a functional event-sourced system, the next challenge is making it production-ready. At this stage, concerns shift from “does it work?” to “does it scale, evolve, and stay reliable under real-world conditions?”. Let’s explore the most critical production patterns for Event Sourcing in .NET with EventStoreDB.

5.1 Performance Tuning: Taming Long-Lived Streams with Snapshots

5.1.1 The Problem

Imagine an aggregate that has lived for years and accumulated thousands—or even millions—of events. Every time you load the aggregate, you must replay all those events to rebuild its state. While EventStoreDB streams are efficient, this replay can still add noticeable latency, especially when aggregates are loaded frequently in transactional workflows.

A classic example is an inventory item that has been checked in, reserved, and shipped thousands of times. Rebuilding its state from scratch on every command would become prohibitively expensive as event counts grow.

5.1.2 The Solution

The solution is snapshots. A snapshot is a serialized copy of an aggregate’s state at a particular version. Instead of replaying all events from the beginning, you:

  1. Load the latest snapshot.
  2. Read and apply only the events that occurred after that snapshot’s version.

This reduces rehydration time from thousands of events to a handful.

Snapshots are not a replacement for the event stream—they are a cache. Events remain the source of truth, and snapshots can always be rebuilt if lost.
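Conceptually, rehydration becomes a fold seeded by the snapshot instead of the empty state. A pure sketch with int-as-state (the `SnapshotRehydration` helper and simplified records are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record StockCheckedIn(Guid ItemId, int Quantity);
public record StockShipped(Guid ItemId, int Quantity);

public static class SnapshotRehydration
{
    // No snapshot: fold every event starting from zero.
    // Snapshot at version V: seed with its state and fold only events after V.
    public static int Rehydrate(int snapshotState, IEnumerable<object> eventsAfterSnapshot) =>
        eventsAfterSnapshot.Aggregate(snapshotState, Apply);

    private static int Apply(int stock, object @event) => @event switch
    {
        StockCheckedIn e => stock + e.Quantity,
        StockShipped e => stock - e.Quantity,
        _ => stock
    };
}
```

Because events are the source of truth, deleting every snapshot only costs replay time; the fold from zero always reproduces the same state.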

5.1.3 Implementation

A snapshot is best persisted in its own stream, separate from the domain event stream:

public record InventoryItemSnapshot(
    Guid ItemId,
    int CurrentStock,
    IReadOnlyDictionary<Guid, int> Reservations,
    ulong Version,
    DateTimeOffset TakenAt
);

Saving snapshots can be triggered periodically, either every N events or after a time interval. Always persist the aggregate’s current version in the snapshot so that rehydration can resume precisely.

public class SnapshottingEventStoreRepository<T> : EventStoreRepository<T> where T : class
{
    private readonly int _snapshotFrequency;

    public SnapshottingEventStoreRepository(EventStoreClient client, string streamPrefix, int snapshotFrequency = 100)
        : base(client, streamPrefix)
    {
        _snapshotFrequency = snapshotFrequency;
    }

    public async Task SaveWithSnapshotAsync(Guid id, T aggregate, IEnumerable<object> changes, ulong currentVersion)
    {
        var changeList = changes.ToList(); // materialize once; SaveAsync and Count would otherwise enumerate twice
        await SaveAsync(id, changeList, currentVersion);

        var totalEvents = currentVersion + (ulong)changeList.Count;
        if (totalEvents % (ulong)_snapshotFrequency == 0)
        {
            if (aggregate is InventoryItem item) // assumes InventoryItem exposes CurrentStock and Reservations for snapshotting
            {
                var snapshot = new InventoryItemSnapshot(
                    item.Id,
                    item.CurrentStock,
                    new Dictionary<Guid, int>(item.Reservations),
                    totalEvents,
                    DateTime.UtcNow
                );

                var stream = $"snapshot-inventory-{id}";
                var eventData = Serialize(snapshot); // assumed helper producing EventData, mirroring SaveAsync's serialization
                await _client.AppendToStreamAsync(stream, StreamState.Any, new[] { eventData });
            }
        }
    }

    public async Task<T> GetByIdWithSnapshotAsync(Guid id, Func<T> factory, Action<T, object> applyEvent)
    {
        var snapshotStream = $"snapshot-inventory-{id}";
        ulong nextEventNumber = 0; // no snapshot: replay from the start of the stream
        T aggregate = factory();

        var snapshotResult = _client.ReadStreamAsync(Direction.Backwards, snapshotStream, StreamPosition.End, maxCount: 1);
        if (await snapshotResult.ReadState != ReadState.StreamNotFound)
        {
            await foreach (var resolved in snapshotResult)
            {
                // Deserialize is an assumed helper mirroring the base repository's JSON handling
                if (Deserialize(resolved.Event.Data.Span, resolved.Event.EventType) is InventoryItemSnapshot snapshot)
                {
                    nextEventNumber = snapshot.Version + 1; // resume just after the snapshotted version
                    aggregate = RehydrateFromSnapshot(snapshot);
                    break;
                }
            }
        }

        var eventStream = GetStream(id);
        var result = _client.ReadStreamAsync(Direction.Forwards, eventStream, new StreamPosition(nextEventNumber));
        await foreach (var evt in result)
        {
            var domainEvent = Deserialize(evt.Event.Data.Span, evt.Event.EventType);
            applyEvent(aggregate, domainEvent);
        }

        return aggregate;
    }

    private T RehydrateFromSnapshot(InventoryItemSnapshot snapshot)
    {
        // Assumes InventoryItem offers a constructor for restoring snapshot state directly
        return (T)(object)new InventoryItem(snapshot.ItemId, snapshot.CurrentStock, snapshot.Reservations);
    }
}

Notice how the snapshot is rehydrated directly, and subsequent events are applied starting from snapshot.Version + 1. The snapshot is not routed through the aggregate’s domain Apply method, avoiding mismatches between snapshot state and domain event handlers.

5.2 System Evolution: Painless Event Versioning

5.2.1 The Inevitable

No matter how carefully you design, business requirements change. Event schemas that once fit perfectly may become inadequate. For instance:

  • StockReserved initially has { ItemId, Quantity }.
  • Later, the business requires tracking WarehouseId as well.

How do we evolve without breaking history?

5.2.2 Strategies

Upcasting

Upcasting transforms old events into the new shape during rehydration. Think of it as a migration layer:

public static object Upcast(object oldEvent)
{
    return oldEvent switch
    {
        StockReserved_V1 e => new StockReserved_V2(
            e.ItemId,
            e.Quantity,
            WarehouseId: Guid.Empty,
            e.ReservedAt
        ),
        _ => oldEvent
    };
}

Best practices:

  • Keep upcasters covered with tests.
  • Record metrics whenever an upcaster is invoked—this reveals when legacy shapes are still in circulation.
  • Consider embedding a discriminated type field in your JSON (e.g., "eventType": "StockReserved.V1") for safer routing and deserialization.
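A minimal sketch of that discriminator idea, using System.Text.Json (the "eventType" property name and the V1/V2 record types are assumptions for illustration):

```csharp
using System.Text.Json;

public static class EventDeserializer
{
    // Routes raw JSON to the right CLR type based on an embedded discriminator,
    // instead of trusting the stream's stored type name alone.
    public static object Deserialize(string json)
    {
        using var doc = JsonDocument.Parse(json);
        var type = doc.RootElement.GetProperty("eventType").GetString();

        return type switch
        {
            "StockReserved.V1" => JsonSerializer.Deserialize<StockReserved_V1>(json)!,
            "StockReserved.V2" => JsonSerializer.Deserialize<StockReserved_V2>(json)!,
            _ => throw new NotSupportedException($"Unknown event type '{type}'")
        };
    }
}
```

A V1 payload deserialized this way can then be passed through an upcaster before it reaches the aggregate, so domain code only ever sees the latest shape.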

Introducing New Events

An alternative is to introduce new events while leaving old ones untouched:

  • StockReserved remains unchanged.
  • Add StockReservedAtWarehouse for the new requirement.

Aggregates simply handle both:

private void Apply(object @event)
{
    switch (@event)
    {
        case StockReserved e:
            _currentStock -= e.Quantity;
            break;
        case StockReservedAtWarehouse e:
            _currentStock -= e.Quantity;
            _reservations[e.ReservationId] = e.Quantity;
            _warehouse = e.WarehouseId;
            break;
    }
}

This preserves historical integrity and avoids mutating legacy streams, at the cost of handling multiple versions in your aggregate.

Strongly-Typed Event Schemas

Schema-first approaches (e.g., JSON Schema, Avro, Protobuf) enforce compatibility rules and make versioning more predictable:

syntax = "proto3";

message StockReserved {
  string item_id = 1;
  int32 quantity = 2;
  string warehouse_id = 3; // added in v2; empty for events written before the field existed
  string reserved_at = 4;
}

5.3 Embracing Eventual Consistency

5.3.1 The Reality

In an event-sourced system, the read model is always slightly behind the write model. The delay is usually milliseconds but can stretch to seconds under load. This is the nature of asynchronous projection updates.

Users must understand: the write model is the source of truth, while the read model is a cached view.

5.3.2 UI/UX Strategies

Good UI can make consistency gaps almost invisible:

  • Optimistic UI updates: Apply changes in the interface immediately, reconcile later if projections disagree.
  • Polling: Refresh until projections catch up.
  • Push updates: Use SignalR or WebSockets to notify clients when projections update.

public class StockHub : Hub
{
    public async Task NotifyStockChanged(Guid itemId, int newQuantity)
    {
        await Clients.All.SendAsync("StockUpdated", itemId, newQuantity);
    }
}

5.3.3 Handling Command Failures

Sometimes commands are issued against stale data. For example, the UI shows 5 units available, but another user reserves them first.

The aggregate enforces business rules and rejects the command. Handle this by:

  • Returning a meaningful error (InsufficientStock).
  • Allowing the client to refresh its view.
  • Retrying with updated state if appropriate.

This ensures aggregates remain the ultimate authority, while user experience stays resilient in the face of eventual consistency.
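These three steps can be sketched at the API boundary in the same minimal-API style used later in this guide (InsufficientStockException is an assumed domain exception thrown by the aggregate):

```csharp
// Map a domain rejection to an HTTP response the client can act on.
app.MapPost("/api/inventory/reserve", async (ReserveStock cmd, InventoryHandler handler) =>
{
    try
    {
        await handler.Handle(cmd);
        return Results.Accepted();
    }
    catch (InsufficientStockException ex)
    {
        // 409 signals that the client's view was stale: refresh, then retry if appropriate.
        return Results.Conflict(new { error = "InsufficientStock", detail = ex.Message });
    }
});
```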


6 The Superpowers of Event Sourcing in Action

So far, we’ve focused on mechanics: commands, events, projections. But the real magic of Event Sourcing comes from the capabilities it unlocks. Features like temporal queries, complete audit trails, and controlled replays provide leverage far beyond CRUD systems.

6.1 Temporal Queries: The “Time Machine”

One of the most powerful features of Event Sourcing is the ability to ask: What did the system look like at any point in time?

For example: “What was the stock level for product X at 5 PM last Friday?” In a CRUD world, this is impossible unless changes were logged separately. With Event Sourcing:

  1. Read the stream up to events with a timestamp ≤ 2025-09-12T17:00:00Z.
  2. Replay them to reconstruct state as of that moment.

Crucially, don’t depend on the database server’s internal Created timestamp. For consistency across clusters, replays, or backfills, always use the timestamp you attach in event metadata (typically a DateTimeOffset captured at command-handling time).

Example in C#:

public async Task<int> GetStockAt(Guid itemId, DateTimeOffset asOf)
{
    var events = new List<object>();
    var result = _client.ReadStreamAsync(Direction.Forwards, $"inventory-{itemId}", StreamPosition.Start);

    await foreach (var resolved in result)
    {
        var meta = DeserializeMetadata(resolved.Event.Metadata.Span);
        if (meta.Timestamp > asOf) break;

        var evt = Deserialize(resolved.Event.Data.Span, resolved.Event.EventType);
        events.Add(evt);
    }

    var item = InventoryItem.LoadFromHistory(events);
    return item.CurrentStock;
}

This effectively turns your database into a time machine, supporting forensic analysis, backdated reports, and compliance audits.

6.2 The Ultimate Audit Log

Every event can carry rich metadata:

  • Who: the actor (UserId).
  • What: the domain event payload.
  • When: the timestamp captured in metadata.
  • Why: correlation to the triggering command (CorrelationId, CausationId).

Example metadata:

{
  "userId": "warehouse-operator-42",
  "correlationId": "cmd-12345",
  "causationId": "evt-67890",
  "timestamp": "2025-09-12T17:02:00Z"
}

Reading the stream yields a tamper-proof audit trail:

var result = _client.ReadStreamAsync(Direction.Forwards, "inventory-123", StreamPosition.Start);

await foreach (var resolved in result)
{
    var meta = DeserializeMetadata(resolved.Event.Metadata.Span);
    Console.WriteLine($"{resolved.Event.EventType} by {meta.UserId} at {meta.Timestamp}");
}

This kind of transparency is invaluable in regulated industries like healthcare, finance, or logistics.

6.3 Replaying for Business Intelligence

Perhaps the most underappreciated feature of Event Sourcing is the ability to replay events into new projections.

Fixing Bugs

Suppose a bug caused your projection to double-count reserved stock. In a CRUD system, the historical data may be permanently corrupted. With Event Sourcing:

  1. Fix the projection code.
  2. Drop the read model database.
  3. Replay all events to rebuild the projection correctly.

The true history remains intact.

Creating New Projections

Need a new report—say, “Which items are frequently reserved but rarely shipped?”—without changing your core system?

  • Write a new projection that listens to StockReserved and StockShipped.
  • Replay historical events to populate it.
  • Begin querying it immediately.

Example: Replay Service

public async Task ReplayAllEvents(Func<object, Task> handler, bool replayMode = true)
{
    var result = _client.ReadAllAsync(Direction.Forwards, Position.Start);

    await foreach (var resolved in result)
    {
        // $all includes system/metadata events (types prefixed with "$"); skip them.
        if (resolved.Event.EventType.StartsWith("$"))
            continue;

        var evt = Deserialize(resolved.Event.Data.Span, resolved.Event.EventType);

        if (replayMode && evt is IntegrationEvent)
            continue; // skip side-effects like emails/integrations

        await handler(evt);
    }
}

Important: disable or route around side effects during replays. For example, you don’t want to resend thousands of emails or webhooks when rebuilding a projection. A common pattern is to mark a “replay mode” flag, or to publish integration events only from live subscriptions, not from replay code paths.


7 Integration and Coexistence

It’s rare that a system gets to adopt Event Sourcing from day one. More often, ES is introduced into an environment where large CRUD-based systems already exist. The question then becomes: how do you bring ES in without rewriting everything? The answer is coexistence—where ES grows organically, integrates cleanly with legacy systems, and avoids leaking domain details.

7.1 The Strangler Fig Pattern

The Strangler Fig strategy is a proven way to introduce new patterns into existing systems. Rather than a “big bang” rewrite:

  1. Identify a bounded context where ES offers clear value (for example, inventory or reservations in an e-commerce monolith).
  2. Route new functionality to ES so fresh commands and aggregates are event-sourced.
  3. Keep legacy CRUD for the rest until the ES footprint grows naturally.

In .NET this often looks like introducing an API layer that routes different endpoints to different backends:

app.MapPost("/api/inventory/reserve", async (ReserveStock cmd, InventoryHandler handler) =>
{
    await handler.Handle(cmd);
    return Results.Accepted();
});

app.MapPost("/api/orders/create", async (CreateOrderDto dto, LegacyOrderService service) =>
{
    var orderId = await service.CreateOrder(dto);
    return Results.Ok(orderId);
});

Both styles coexist peacefully—one backed by ES, the other still using the legacy service.

7.2 Bridging Worlds

Events must often flow between ES and CRUD. The key is to separate internal domain events (rich, detailed, and tied to business rules) from external integration events (flattened, anonymized, and stable). Only integration events should be published outside the ES boundary, so you don’t leak internals.

7.2.1 From ES to Legacy Systems

Internal domain events like StockReserved might carry detailed invariants. Before publishing externally, map them into stable integration contracts:

public record StockReservedIntegrationEvent(
    Guid ItemId,
    Guid ReservationId,
    int Quantity,
    DateTimeOffset ReservedAt
);

Publish these to a message bus so downstream consumers see only the simplified, stable shape:

public class IntegrationPublisher
{
    private readonly ITopicClient _bus;

    public IntegrationPublisher(ITopicClient bus) => _bus = bus;

    public async Task Publish(StockReserved e)
    {
        var integration = new StockReservedIntegrationEvent(e.ItemId, e.ReservationId, e.Quantity, e.ReservedAt);
        var message = new ServiceBusMessage(JsonSerializer.Serialize(integration))
        {
            Subject = nameof(StockReservedIntegrationEvent)
        };
        await _bus.SendMessageAsync(message);
    }
}

This shields consumers from future domain refactors while still exposing meaningful facts.

7.2.2 From Legacy CRUD to ES

Sometimes the legacy system remains the system of record, and ES needs to observe it. In these cases, Change Data Capture (CDC) tools like Debezium can stream table changes, which are then mapped into meaningful events.

For example, a Customers table update can be translated:

CRUD operation   | ES event
-----------------|----------------------
INSERT           | CustomerRegistered
UPDATE Email     | CustomerEmailChanged
UPDATE Name      | CustomerNameChanged
DELETE           | CustomerDeleted

Mapping in code:

public async Task HandleCdc(CustomerCdcRecord record)
{
    object evt = record.Operation switch
    {
        "c" => new CustomerRegistered(record.Id, record.Name, record.Email),
        "u" when record.ChangedColumn == "Email" 
            => new CustomerEmailChanged(record.Id, record.Email),
        "u" when record.ChangedColumn == "Name" 
            => new CustomerNameChanged(record.Id, record.Name),
        "d" => new CustomerDeleted(record.Id),
        _   => throw new InvalidOperationException($"Unknown CDC operation '{record.Operation}'")
    };

    await _eventStore.AppendToStreamAsync(
        $"customer-{record.Id}",
        StreamState.Any,
        new[] { Serialize(evt) });
}

This ensures that even though the CRUD system is still in control, ES accumulates a full semantic trail of business changes.

7.3 Protecting the Domain with an Anti-Corruption Layer

When integrating with legacy systems, you don’t want their awkward representations leaking into your ES core. An Anti-Corruption Layer (ACL) acts as a translator between the two.

For example, if the legacy system stores loyalty tiers as integers, your domain can keep expressive events:

public class LoyaltyAcl
{
    public object TranslateFromLegacy(CustomerLegacyRecord record) =>
        record.LoyaltyTier switch
        {
            0 => new CustomerRegistered(record.Id, record.Name),
            1 => new CustomerPromotedToSilver(record.Id),
            2 => new CustomerPromotedToGold(record.Id),
            _ => throw new InvalidOperationException("Unknown tier")
        };

    public CustomerLegacyRecord TranslateToLegacy(CustomerAggregate aggregate) =>
        new CustomerLegacyRecord
        {
            Id = aggregate.Id,
            Name = aggregate.Name,
            LoyaltyTier = aggregate.CurrentTier switch
            {
                LoyaltyTier.Bronze => 0,
                LoyaltyTier.Silver => 1,
                LoyaltyTier.Gold => 2,
                _ => 0
            }
        };
}

The ACL keeps your event-sourced model clean and expressive, even when legacy systems work with flat or awkward data structures.


8 Operational Readiness and Best Practices

Building a working system is one thing. Running it in production under load, with resilience, compliance, and monitoring, is another. This section covers the hard-earned lessons for operating event-sourced systems at scale.

8.1 Idempotency is Key

In distributed systems, duplicate deliveries are inevitable. Projections or handlers may see the same event multiple times. If your handlers aren’t idempotent, you risk corrupting read models.

Naive projection (incorrect):

public async Task Handle(StockReserved e)
{
    var sql = "UPDATE StockLevels SET Reserved = Reserved + @qty WHERE ItemId = @id";
    await _connection.ExecuteAsync(sql, new { qty = e.Quantity, id = e.ItemId });
}

If delivered twice, this doubles the reserved quantity. A correct version uses uniqueness constraints:

public async Task Handle(StockReserved e)
{
    var sql = @"
        INSERT INTO Reservations (ReservationId, ItemId, Quantity)
        VALUES (@resId, @itemId, @qty)
        ON CONFLICT (ReservationId) DO NOTHING;
    ";
    await _connection.ExecuteAsync(sql, new { resId = e.ReservationId, itemId = e.ItemId, qty = e.Quantity });
}

By guarding on ReservationId, duplicates have no effect.
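Where there is no natural unique key to guard on, another common sketch is position-based deduplication: persist the projection's last processed position in the same transaction as the read-model update (the ProjectionCheckpoints table and Dapper usage here are assumptions, matching the style of the examples above):

```csharp
public async Task Handle(StockReserved e, Position position)
{
    using var tx = _connection.BeginTransaction();

    // Last $all commit position this projection has applied.
    var last = await _connection.ExecuteScalarAsync<long>(
        "SELECT CommitPosition FROM ProjectionCheckpoints WHERE Name = 'stock'",
        transaction: tx);

    if ((ulong)last >= position.CommitPosition)
        return; // duplicate delivery: already applied, do nothing

    await _connection.ExecuteAsync(
        "UPDATE StockLevels SET Reserved = Reserved + @qty WHERE ItemId = @id",
        new { qty = e.Quantity, id = e.ItemId }, tx);

    // Advance the checkpoint atomically with the update.
    await _connection.ExecuteAsync(
        "UPDATE ProjectionCheckpoints SET CommitPosition = @pos WHERE Name = 'stock'",
        new { pos = (long)position.CommitPosition }, tx);

    tx.Commit();
}
```

Because the checkpoint and the update commit together, a redelivered event can never be applied twice.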

8.2 Persistent Subscriptions

For consumers that need reliability, retries, and balanced delivery across workers, persistent subscriptions are recommended. Compared to volatile SubscribeToAll, they provide:

  • Server-side checkpointing and retry logic.
  • Load balancing across multiple subscribers.
  • Dead-letter queue (parking) for poisoned messages.
  • Easier monitoring of lag and processing status.

For long-running projections or integration handlers, persistent subscriptions simplify operations considerably.
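A sketch of the flow with recent versions of the official EventStore.Client package (the stream, group name, and ProjectAsync handler are assumptions): create the group once, then have workers join it and Ack/Nack each event.

```csharp
var subscriptions = new EventStorePersistentSubscriptionsClient(
    EventStoreClientSettings.Create("esdb://localhost:2113?tls=false"));

try
{
    // One-time setup; throws if the group already exists.
    await subscriptions.CreateToStreamAsync(
        "inventory-123", "stock-projection", new PersistentSubscriptionSettings());
}
catch (Exception)
{
    // Group already exists: safe to ignore in this sketch.
}

await subscriptions.SubscribeToStreamAsync(
    "inventory-123", "stock-projection",
    async (subscription, resolved, retryCount, ct) =>
    {
        try
        {
            await ProjectAsync(resolved); // hypothetical projection handler
            await subscription.Ack(resolved);
        }
        catch (Exception ex)
        {
            // Park poisoned messages instead of retrying forever.
            await subscription.Nack(PersistentSubscriptionNakEventAction.Park, ex.Message, resolved);
        }
    });
```

The server tracks the checkpoint and redistributes unacknowledged events, so adding a second worker process to the same group load-balances delivery with no extra code.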

8.3 Distributed Tracing

As systems grow, a single user action may touch multiple aggregates, projections, and integrations. Without tracing, debugging becomes guesswork.

Best practice: include CorrelationId and CausationId in metadata for every event:

var metadata = new Dictionary<string, string>
{
    ["CorrelationId"] = correlationId,
    ["CausationId"] = causationEventId,
    ["UserId"] = userId,
    ["Timestamp"] = DateTimeOffset.UtcNow.ToString("O")
};

var eventData = new EventData(
    Uuid.NewUuid(),
    nameof(StockReserved),
    JsonSerializer.SerializeToUtf8Bytes(evt),
    JsonSerializer.SerializeToUtf8Bytes(metadata)
);

With these identifiers, observability tools (e.g., OpenTelemetry) can reconstruct causal chains from “user clicked Reserve” through to “stock shipped.”

8.4 Monitoring and Health

Operational readiness depends on visibility. Key practices:

  • EventStoreDB metrics: Expose /stats endpoint or use Prometheus exporters for metrics like stream throughput, subscriptions, and system health.
  • Projection lag: Monitor how far behind your read models are from the head of the log. Alert when lag exceeds thresholds.
  • .NET health checks: Expose liveness/readiness endpoints for subscription services and read-model database connectivity. This allows orchestration systems (Kubernetes, etc.) to restart unhealthy services automatically.
  • Dead-letter handling: Ensure problematic events are quarantined, not endlessly retried.

Example health check in .NET:

services.AddHealthChecks()
    .AddCheck("eventstore-subscription", new SubscriptionHealthCheck(_client))
    .AddNpgSql(_connectionString, name: "read-model-db");

8.5 Archiving and Data Retention

Event sourcing naturally accumulates large volumes of data. Regulations like GDPR add the challenge of data erasure. Strategies include:

  • Crypto-shredding: Encrypt sensitive data with per-entity keys. Discarding a key makes the data unreadable.
  • Data minimization by reference: Keep PII out of events altogether. Instead, store an opaque reference (CustomerRefId) in events, with actual PII in a secure vault.
  • Archiving: Move older streams to cold storage (e.g., S3), keeping hot data in EventStoreDB.
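The crypto-shredding idea can be sketched with AES and a per-entity key store (the KeyStore here is an in-memory stand-in; production would use a secure vault such as a KMS):

```csharp
using System.Linq;
using System.Security.Cryptography;

// Hypothetical per-entity key store.
public class KeyStore
{
    private readonly Dictionary<Guid, byte[]> _keys = new();

    public byte[] GetOrCreate(Guid entityId) =>
        _keys.TryGetValue(entityId, out var key)
            ? key
            : _keys[entityId] = RandomNumberGenerator.GetBytes(32);

    // "Shredding": once the key is gone, ciphertext in old events is unreadable forever.
    public void Shred(Guid entityId) => _keys.Remove(entityId);
}

public static class PayloadCrypto
{
    public static byte[] Encrypt(byte[] key, byte[] plaintext)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        aes.GenerateIV();
        using var enc = aes.CreateEncryptor();
        var cipher = enc.TransformFinalBlock(plaintext, 0, plaintext.Length);
        return aes.IV.Concat(cipher).ToArray(); // prepend IV so decryption is self-contained
    }

    public static byte[] Decrypt(byte[] key, byte[] payload)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        aes.IV = payload[..16];
        using var dec = aes.CreateDecryptor();
        return dec.TransformFinalBlock(payload, 16, payload.Length - 16);
    }
}
```

Events stay immutable and in place; only the sensitive fields encrypted under the discarded key become unrecoverable, which is typically sufficient for erasure requests.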

Example of minimization by reference:

public record CustomerRegistered(Guid CustomerId, string CustomerRefId);

Here, CustomerRefId points to a secure vault entry, not to raw PII inside the event stream.

8.6 Leaning on the Ecosystem

Several .NET projects encode good practices for Event Sourcing:

  • EventSourcing.NetCore (Oskar Dudycz) – samples and practical patterns for modeling aggregates and events ergonomically.
  • EventStoreDB Client – the official gRPC client, with first-class support for persistent subscriptions.
  • Marten – PostgreSQL-based event sourcing library, useful as a reference for CQRS/ES patterns even outside PostgreSQL.

These libraries reduce boilerplate, improve reliability, and let you focus on business logic rather than infrastructure details.


9 Conclusion: A Deliberate Choice

Event Sourcing is not a silver bullet, but it is a deliberate architectural choice that can transform how systems evolve, scale, and deliver business value.

9.1 Summary of Benefits

Throughout this guide, we’ve seen that Event Sourcing provides:

  • Auditability: A perfect record of who did what and when.
  • Temporal power: The ability to query history, reconstruct state, and perform time-travel debugging.
  • Architectural flexibility: Multiple read models, projections, and integrations without touching core business logic.
  • Business alignment: Events speak the language of the domain, bridging developers and stakeholders.

9.2 The Trade-Offs

These benefits come with trade-offs:

  • Complexity: Event stores, projections, and snapshots add moving parts.
  • Paradigm shift: Teams must shift from “what is” to “what happened.”
  • Eventual consistency: Read models lag behind writes, requiring UX strategies and error handling.

Acknowledging these costs is essential before adopting ES.

9.3 A Decision Framework

When should you use Event Sourcing?

  • Yes: In core domains with rich business rules, compliance needs, or analytical value (orders, payments, healthcare records).
  • No: In simple domains where state can be safely overwritten (feature flags, simple settings).

The key is whether ES’s long-term benefits outweigh the added operational complexity.

9.4 Getting Started Checklist

If you’re ready to experiment, here’s a minimal path to hands-on learning:

  1. Run EventStoreDB locally: spin up with Docker (docker run … --insecure --run-projections=All).
  2. Seed some sample data: append a handful of StockCheckedIn and StockReserved events.
  3. Build a first projection: project stock levels into a simple read model (e.g., a PostgreSQL stock_levels table).
  4. Test concurrency: simulate two writers appending to the same stream with the same expected revision and handle a WrongExpectedVersionException.
  5. Replay into a new read model: drop the projection and rebuild it to see replays in action.

Working through these steps gives you a feel for the mechanics before scaling into real domains.
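Step 4 of the checklist can be sketched like this (the stream name, revision, and eventData are assumptions for illustration):

```csharp
// Two writers race with the same expected revision; the first append wins and
// bumps the stream, so the second one fails the optimistic-concurrency check.
var expected = new StreamRevision(41); // hypothetical current revision of the stream

try
{
    await client.AppendToStreamAsync("inventory-123", expected, new[] { eventData });
}
catch (WrongExpectedVersionException)
{
    // Another writer got there first: reload the aggregate, re-check the
    // invariant (is there still enough stock?), then reissue the command.
}
```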

9.5 Final Thoughts

With modern tools like .NET 8/10 and EventStoreDB, Event Sourcing is more accessible than ever. It’s no longer a niche technique reserved for banking giants—it’s a pragmatic option for teams who value auditability, adaptability, and business insight.

Approach ES deliberately. Start small, apply it where it fits best, and let its strengths compound over time. Done right, Event Sourcing doesn’t just capture history—it empowers organizations to shape the future with confidence.
