Reactive + Functional UI Patterns in TypeScript and F#: RxJS, Signals, and Elmish for Real-Time Apps

1 Why reactive + functional UIs now (and what this article delivers)

Real-time apps no longer feel “advanced.” They’re expected. Users assume dashboards refresh instantly, collaborative editors sync seamlessly, and data-rich UIs react fluidly under load. But building such systems—where latency, concurrency, and consistency all collide—requires more than frameworks and components. It requires composable reactivity and disciplined functional architecture.

This guide takes you from first principles to practical integration: RxJS observables, signals, and Elmish MVU—three paradigms that, when combined, bring real-time UI resilience without chaos. We’ll bridge TypeScript (frontend) and F# (module-level orchestration) through a predictable, backpressure-aware .NET backend. By the end, you’ll have an end-to-end playbook for building real-time applications that remain understandable under stress.

1.1 Problem framing: real-time apps under load—chat, trading dashboards, collaborative docs, IoT telemetry

Modern users expect live systems. Whether it’s a trading dashboard reacting to thousands of market updates per second, a chat app managing concurrent edits, or an IoT telemetry screen streaming sensor readings, latency defines experience. A 500 ms lag feels broken; stale data can mislead.

The problem: these apps aren’t just fast—they’re alive. They ingest streams of events, update partial UI state, and coordinate multiple concurrent sources (user input, network updates, timers, caches). Each of these streams must coexist predictably.

Let’s take a few representative examples:

  • Trading dashboard: Market ticks arrive faster than the render loop. You must aggregate, sample, or coalesce events to prevent UI jank.
  • Collaborative editor: Local and remote edits interleave; conflicts must resolve deterministically.
  • IoT monitoring: Thousands of devices push telemetry continuously. You need dynamic subscription management and rate limiting.
  • Live chat: Messages, typing indicators, presence updates—all asynchronous, all interdependent.

Each case stresses the same architectural boundaries: event throughput, state synchronization, cancellation, and correctness under concurrency.

1.2 Pain points to eliminate: shared mutable state, race conditions, leaky subscriptions, duplicated side-effects, janky re-renders, and unbounded queues

Every real-time front-end eventually breaks for the same few reasons.

  1. Shared mutable state – Multiple sources mutate the same object, causing subtle temporal bugs. Immutable updates and pure transformations fix this, but only if your architecture enforces them.
  2. Race conditions – Async fetches or event streams complete in unpredictable order. Without cancellation semantics, stale data overwrites fresh state.
  3. Leaky subscriptions – Observables or event listeners live longer than their owners, keeping components or closures in memory indefinitely.
  4. Duplicated side effects – Imperative handlers re-run unnecessarily on re-renders, doubling network calls or mutations.
  5. Janky re-renders – Coarse-grained change detection forces the entire component tree to repaint for small updates.
  6. Unbounded queues – Event streams outpace consumption; memory balloons or UI freezes.

Reactive and functional patterns aim to solve these systematically:

  • RxJS Observables model async data with built-in teardown and cancellation.
  • Signals create dependency-tracked synchronous updates, ensuring minimal re-renders.
  • Elmish MVU (Model–View–Update) enforces immutability and isolates side effects through explicit commands.

The trick is combining them without crossing their conceptual wires. This article shows exactly how.

1.3 The three pillars you’ll combine

The functional-reactive stack for 2025’s real-time apps stands on three orthogonal but complementary abstractions.

1.3.1 Observables (RxJS / Rx.NET)

Observables model time-varying asynchronous sequences. They push values to observers as they occur—perfect for modeling events, network streams, or user input. Key benefits:

  • Built-in cancellation and backpressure awareness.
  • Composition through declarative operators (map, merge, switchMap, etc.).
  • Deterministic teardown via unsubscribe or takeUntil.

Observables handle asynchronous change: WebSocket frames, debounced keystrokes, API retries, or telemetry bursts.

1.3.2 Signals (Angular / Preact / Solid)

Signals capture synchronous state dependencies in the UI. Each signal knows who depends on it; when it changes, only those dependents recompute.

Unlike observables, signals are pull-based—they store the current value and trigger updates synchronously. This makes them ideal for rendering pipelines or derived UI state like computed totals, filtered lists, or status indicators.

Signals handle synchronous propagation of already-known values, keeping your DOM updates predictable and efficient.

1.3.3 Elmish MVU (F#)

Elmish implements the Model–View–Update pattern from Elm in F#. It treats state updates as pure functions of messages:

type Model = { Count : int }
type Msg = Increment | Decrement

let update msg model =
    match msg with
    | Increment -> { model with Count = model.Count + 1 }, Cmd.none
    | Decrement -> { model with Count = model.Count - 1 }, Cmd.none

Elmish shines in orchestrating complex async flows while preserving determinism. Effects (Cmd) are explicit, testable, and decoupled from state transitions. It’s how you ensure business logic behaves identically in simulation and production.

Together, these three layers separate reactivity (when and how things change) from behavior (what to do when they change).

1.4 Target architecture (high-level diagram described)

Let’s visualize the overall topology (described textually):

[Browser: TypeScript]
    ├─ RxJS: handles streams (WebSocket/SSE/user events)
    ├─ Signals: synchronous derived UI state
    └─ Bridge: Observable ⇄ Signal interop

[Elmish Module: F# (Fable or SAFE Stack)]
    ├─ Model: immutable domain state
    ├─ Update: pure transitions
    └─ Cmd: async effects (network, timers, subscriptions)

[ASP.NET Core Backend]
    ├─ SignalR (WebSocket) / SSE endpoints
    ├─ Bounded Channels / TPL Dataflow for backpressure
    ├─ Idempotent command handling (MediatR + Polly)
    └─ Observability: metrics, retries, logs

1.4.1 Flow summary

  1. Frontend: RxJS wraps real-time streams; Signals render the latest known state; Elmish modules manage predictable updates.
  2. Interop: Observables feed signals (toSignal); signals emit observables (toObservable) for async flows.
  3. Backend: ASP.NET Core uses Channels or Dataflow to buffer and fan-out updates efficiently.
  4. Consistency: Each layer is pure within its scope—side effects isolated, cancellations explicit, feedback loops measurable.

This separation guarantees clarity even under high throughput.

1.5 Reading guide & code packs you’ll provide

The rest of this guide unfolds progressively:

  • Section 2 – Core mental models (Observables, Signals, MVU).
  • Section 3 – Real-time modeling with RxJS: WebSockets, SSE, backpressure.
  • Section 4 – Signals in Angular, Preact, Solid—fine-grained reactivity and interop.
  • Section 5 – Elmish in F#: deterministic async orchestration.
  • Section 6 – .NET real-time backend: channels, flow control, idempotency.
  • Section 7 – End-to-end case study: Live Order Book & Trades.
  • Section 8 – Production checklist: performance, leak prevention, observability.

Each stage includes runnable TypeScript/F# snippets and architectural guidance so you can reproduce the patterns in your own stack.


2 Core mental models: Observables, Signals, and MVU (Elmish)

Before writing code, you need the right mental model. Each of these tools—Observables, Signals, Elmish—represents a way of reasoning about change. Understanding their differences and boundaries prevents misuse.

2.1 Observables: push-based streams, cancellation, multicasting, backpressure strategies

Observables express values over time. Think of them as asynchronous iterators that push values to subscribers.

import { Observable } from 'rxjs';

const clicks$ = new Observable<MouseEvent>(observer => {
  const handler = (event: MouseEvent) => observer.next(event);
  document.addEventListener('click', handler);
  return () => document.removeEventListener('click', handler); // teardown
});

Subscribers receive data as it arrives, and the teardown ensures cancellation when unsubscribed.

2.1.1 Cold vs hot

  • Cold observable: Starts producing when subscribed (e.g., fetch() result).
  • Hot observable: Shared source already emitting (e.g., mouse moves, WebSocket).

To avoid duplicating work, use share() or shareReplay() to multicast:

const feed$ = connectToFeed().pipe(shareReplay(1));

2.1.2 Backpressure strategies

When producers emit faster than consumers can handle, you need to choose between lossless and lossy strategies.

  • Lossy – drop or sample events:

    feed$.pipe(sampleTime(200)); // take latest every 200ms
  • Lossless – buffer or window events:

    feed$.pipe(bufferTime(1000)); // batch per second

Choosing depends on what’s more expensive: skipping updates or risking memory growth.

2.1.3 Cancellation and teardown

Cancellation is first-class in RxJS. For instance:

const controller = new AbortController();
http$(controller.signal)
  .pipe(retry({ delay: 1000 }))
  .subscribe();
controller.abort(); // aborts the in-flight request, ending the observable

This enables deterministic resource cleanup—crucial in real-time apps with long-lived connections.

2.1.4 Marble thinking

Imagine a marble diagram:

source:  --a---b---c---|
map(x=>x*2): --A---B---C---|

This mental model—values flowing through time—helps reason about concurrency without shared mutable state.

2.2 Signals: dependency-tracked synchronous state, computed/effect lifecycles, and why signals reduce unnecessary re-renders

Signals solve a different problem: how to update the UI efficiently when state changes.

Unlike observables, signals are synchronous and dependency-tracked. When a signal changes, only computations that depend on it re-run.

import { signal, computed, effect } from '@angular/core';

const count = signal(0);
const doubled = computed(() => count() * 2);

effect(() => console.log(`Count is ${count()}, doubled is ${doubled()}`));

count.set(1); // logs automatically

2.2.1 Lifecycle

  1. Signal stores a value and notifies dependents on change.
  2. Computed automatically re-evaluates when dependencies change.
  3. Effect runs for side effects—imperative reactions to state changes.

This graph of dependencies forms a reactive DAG (Directed Acyclic Graph), eliminating the need for dirty-checking.
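To make that dependency graph concrete, here is a deliberately minimal signal/computed/effect sketch in plain TypeScript. It is illustrative only: real frameworks add scheduling, equality hooks, and cycle detection, and `.get()/.set()` stand in here for the call syntax used by Angular and Solid.

```typescript
// Minimal dependency-tracked reactivity (illustrative sketch)
type Effect = { run: () => void; deps: Set<Set<Effect>> };

let active: Effect | null = null; // the effect currently being tracked

function signal<T>(initial: T) {
  let value = initial;
  const subs = new Set<Effect>();
  return {
    get(): T {
      if (active) { subs.add(active); active.deps.add(subs); } // record dependency
      return value;
    },
    set(next: T) {
      if (Object.is(next, value)) return;  // skip no-op writes
      value = next;
      for (const e of [...subs]) e.run();  // notify only dependents
    },
  };
}

function effect(fn: () => void): void {
  const e: Effect = {
    deps: new Set(),
    run() {
      for (const d of e.deps) d.delete(e); // re-track dependencies on each run
      e.deps.clear();
      const prev = active;
      active = e;
      try { fn(); } finally { active = prev; }
    },
  };
  e.run();
}

function computed<T>(fn: () => T) {
  const out = signal<T>(undefined as unknown as T);
  effect(() => out.set(fn()));
  return { get: () => out.get() };
}

const count = signal(1);
const doubled = computed(() => count.get() * 2);
const seen: number[] = [];
effect(() => seen.push(doubled.get()));
count.set(3);
// seen → [2, 6]: only computations depending on count re-ran
```

Because reads register dependencies and writes notify only registered dependents, no dirty-checking pass over the whole tree is ever needed.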

2.2.2 Why signals outperform observer-based reactivity

Observables require subscriptions; signals track dependencies automatically. This reduces subscription churn and unnecessary re-renders.

In Angular or Solid, signals can replace @Input and @Output bindings for local state, while observables remain for external async streams.

2.2.3 Incorrect vs correct use

Incorrect – Using signals for async events:

const messages = signal([]);
ws.onmessage = e => messages.set([...messages(), e.data]); // race-prone

Correct – Stream through an observable, derive last value via toSignal:

const messages$ = webSocket<Message>('wss://feed').pipe(scan((acc, x) => [...acc, x], []));
const latestMessages = toSignal(messages$, { initialValue: [] });

Signals should mirror current state, not perform async orchestration.

2.3 Interop: bridging Observable ⇄ Signal in Angular (toSignal, toObservable) and gotchas

Angular’s new reactive primitives make interop seamless—but you must use them carefully.

2.3.1 Observable → Signal

const price$ = fromEventSource('/feed');
const priceSignal = toSignal(price$, { initialValue: 0 });

This converts a stream into a snapshot signal of its latest value. Angular automatically unsubscribes when the component is destroyed if you use the DestroyRef context.

2.3.2 Signal → Observable

const price = signal(0);
const price$ = toObservable(price);
price$.pipe(debounceTime(300)).subscribe(saveToServer);

Use this when you want to perform async side effects (like API calls) based on synchronous state changes.

2.3.3 Gotchas

  • Avoid creating observables inside computed()—it breaks determinism.
  • Remember: toSignal samples latest value only. If your stream emits bursts, intermediate values may be dropped.
  • Always specify initialValue for cold observables; otherwise, signals may render undefined.

Interop bridges should be edges, not the default glue.

2.4 MVU/Elmish in F#: immutable model + pure update + explicit Cmd/subscriptions for effects

Elmish’s power lies in its simplicity. Everything revolves around Model, Msg, and update.

type Model = { Connected : bool; Messages : string list }
type Msg = Connect | Disconnect | Receive of string

let update msg model =
    match msg with
    | Connect -> { model with Connected = true }, Cmd.none
    | Disconnect -> { model with Connected = false }, Cmd.none
    | Receive text -> { model with Messages = text :: model.Messages }, Cmd.none

Every transition is pure and predictable. Side effects—network calls, timers, subscriptions—are explicit via Cmd.

2.4.1 Commands

let sendMessage text =
    Cmd.OfAsync.perform (fun () -> api.SendMessage text) () (fun _ -> Ack)

Elmish’s Cmd type ensures you can test state transitions independently from effects. The update function remains referentially transparent.

2.4.2 Subscriptions

Subscriptions attach long-lived external sources (WebSocket messages, timers) to the message loop.

let subscriptions model =
    if model.Connected then
        Cmd.ofSub (fun dispatch ->
            let ws = new WebSocket("wss://feed")
            ws.onmessage <- fun m -> dispatch (Receive m.data)
        )
    else Cmd.none

The runtime manages lifecycle automatically—no orphaned event handlers. (Cmd.ofSub is the Elmish v3 style; Elmish v4 replaces it with keyed subscriptions registered via Program.withSubscription.)

2.4.3 Why MVU excels in real-time orchestration

MVU brings structure to chaos:

  • Immutable model ensures consistent rendering.
  • Pure updates make async retries reproducible.
  • Explicit commands clarify effect ownership.
  • Time-travel debugging becomes trivial—just replay messages.

In a complex trading dashboard or telemetry console, Elmish’s determinism prevents subtle cross-talk between async operations.
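Because update is pure, the replay claim is easy to demonstrate in any language. A minimal TypeScript reducer in the same MVU shape (hypothetical Model/Msg types, effects omitted):

```typescript
type Model = { count: number; error: string | null };
type Msg =
  | { kind: 'increment' }
  | { kind: 'decrement' }
  | { kind: 'failed'; message: string };

const init: Model = { count: 0, error: null };

// Pure update: same messages in, same model out, no hidden state
function update(msg: Msg, model: Model): Model {
  switch (msg.kind) {
    case 'increment': return { ...model, count: model.count + 1 };
    case 'decrement': return { ...model, count: model.count - 1 };
    case 'failed':    return { ...model, error: msg.message };
  }
}

// Time-travel debugging is just a fold over the message log
const replay = (log: Msg[]): Model =>
  log.reduce((m, msg) => update(msg, m), init);

const log: Msg[] = [
  { kind: 'increment' },
  { kind: 'increment' },
  { kind: 'decrement' },
];
const final = replay(log);
// Replaying the same log always reconstructs the same state
```

Any intermediate state is recoverable the same way: fold over a prefix of the log.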

2.5 Choosing the tool per problem

Each abstraction shines in a specific scope. The art is combining them wisely.

| Problem | Recommended tool | Why |
| --- | --- | --- |
| UI value derivation (totals, filters, computed fields) | Signal | Synchronous propagation; minimal re-renders |
| Continuous async updates (WebSocket, SSE, DOM events) | Observable | Push-based, cancellable streams |
| Module-level state transitions (user actions, effects) | Elmish MVU | Pure updates, explicit effects |
| Cross-layer synchronization (UI ↔ backend) | RxJS + Elmish | Predictable messaging and retry orchestration |
| Rendering snapshot of async data | Observable → Signal bridge | Latest value for stable view binding |

2.5.1 Real-world composition

Example: A live trade feed.

  • Backend pushes trades via SSE.
  • RxJS wraps the SSE into an observable with retry/backoff.
  • toSignal() projects the latest snapshot for UI rendering.
  • Elmish coordinates trade submissions, error handling, and local caching.

// TypeScript: observable stream
const trades$ = eventSource$('https://api/trades').pipe(
  retry({ delay: 2000 }),
  shareReplay(1)
);
const latestTrades = toSignal(trades$, { initialValue: [] });

// F#: Elmish orchestrator
type Msg = PlaceTrade of Trade | TradeConfirmed of Trade | Error of exn

let update msg model =
    match msg with
    | PlaceTrade t -> model, Cmd.OfAsync.either api.SubmitTrade t TradeConfirmed Error
    | TradeConfirmed t -> { model with Orders = t :: model.Orders }, Cmd.none
    | Error e -> { model with Error = Some e.Message }, Cmd.none

Each piece stays within its lane—RxJS handles flow, Signals render, Elmish governs logic.


3 Modeling real-time streams in TypeScript with RxJS (WebSockets & SSE)

Reactive systems depend on disciplined handling of event streams. In a TypeScript front end, RxJS becomes the backbone of all asynchronous flows—network data, user inputs, timers, or telemetry updates. The key is not just listening to events but modeling them as controlled, composable streams. This section shows how to build reliable real-time transports—SSE and WebSocket—then layer backpressure, retry, and teardown patterns for production-grade resilience.

3.1 SSE primer: EventSource, event framing, reconnect semantics, one-way push; when it beats WebSockets

Server-Sent Events (SSE) offer a lightweight, one-way push channel from server to browser. They use plain HTTP, automatically reconnect, and work seamlessly through proxies—making them ideal for dashboards, logs, or telemetry feeds.

3.1.1 Anatomy of an SSE connection

The browser API is simple:

const source = new EventSource('/api/events');
source.onmessage = e => console.log('New event:', e.data);
source.onerror = e => console.error('SSE error:', e);

Each event is text-framed:

id: 42
event: tick
data: {"price":102.4}

and separated by blank lines. The browser automatically reconnects if the connection drops, resuming from the last id if sent.
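Given that framing, a simplified parser sketch shows how the fields combine. This is illustrative only: the browser's EventSource already does this, and the full spec also covers comments, the retry field, and partial chunks.

```typescript
interface SseEvent { id?: string; event: string; data: string }

// Parse one complete chunk of SSE text: events are separated by blank lines,
// each line is "field: value", and multiple data lines join with "\n".
function parseSse(chunk: string): SseEvent[] {
  const events: SseEvent[] = [];
  for (const block of chunk.split(/\n\n+/)) {
    const dataLines: string[] = [];
    let id: string | undefined;
    let event = 'message'; // spec default when no event field is present
    for (const line of block.split('\n')) {
      const sep = line.indexOf(':');
      if (sep < 0) continue; // ignore malformed lines in this sketch
      const field = line.slice(0, sep);
      const value = line.slice(sep + 1).replace(/^ /, ''); // strip one leading space
      if (field === 'id') id = value;
      else if (field === 'event') event = value;
      else if (field === 'data') dataLines.push(value);
    }
    if (dataLines.length > 0) events.push({ id, event, data: dataLines.join('\n') });
  }
  return events;
}

const [tick] = parseSse('id: 42\nevent: tick\ndata: {"price":102.4}\n\n');
// tick → { id: '42', event: 'tick', data: '{"price":102.4}' }
```

The id field is what the browser echoes back as the Last-Event-ID header on reconnect, enabling resume-from-last-seen on the server.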

3.1.2 RxJS wrapping for composability

To integrate with reactive flows, wrap EventSource into a cold observable:

import { Observable } from 'rxjs';

function eventSource$(url: string): Observable<MessageEvent> {
  return new Observable(observer => {
    const source = new EventSource(url);
    source.onmessage = e => observer.next(e);
    source.onerror = e => observer.error(e);
    return () => source.close(); // teardown
  });
}

This ensures deterministic teardown—closing the stream when unsubscribed.

3.1.3 When SSE beats WebSockets

SSE wins when:

  • The stream is server → client only (telemetry, feed updates).
  • You need HTTP-friendly transport (caching, reverse proxy, firewall-friendly).
  • You care about auto-reconnect without custom logic.

For simple dashboards or IoT telemetry, SSE’s simplicity trumps the flexibility of WebSockets.

3.2 WebSocket primer: bidirectional semantics; vendor services (Azure Web PubSub) as managed options

WebSockets shine when bidirectional, low-latency communication is essential—think chat apps or trading platforms. Unlike SSE, they enable client-to-server messaging over a single TCP connection.

3.2.1 Basic WebSocket lifecycle

const socket = new WebSocket('wss://feed.example.com');
socket.onopen = () => socket.send(JSON.stringify({ type: 'subscribe', topic: 'prices' }));
socket.onmessage = e => console.log('Message', JSON.parse(e.data));
socket.onclose = () => console.log('Closed');

While native APIs work, you quickly need RxJS to tame asynchronous complexity—reconnections, multiplexed topics, and teardown logic.

3.2.2 Managed real-time infrastructure

Cloud providers now offer WebSocket-as-a-Service:

  • Azure Web PubSub – integrates with SignalR and Function Apps; handles scaling and authentication.
  • AWS API Gateway WebSocket API – offers message routing via Lambda.
  • Ably and Pusher – provide global low-latency edge delivery.

Even with managed services, RxJS remains the client-side orchestrator. You’ll use it to sequence reconnections, control backpressure, and unify multiple event sources into one coherent stream.

3.3 RxJS from first principles for transport layers

3.3.1 Wrapping transports as cold observables

Cold observables don’t start until subscribed and clean up automatically when unsubscribed—ideal for event sources.

Example: wrapping WebSocket

import { Observable } from 'rxjs';

function webSocket$(url: string): Observable<MessageEvent> {
  return new Observable(observer => {
    const ws = new WebSocket(url);

    ws.onmessage = e => observer.next(e);
    ws.onerror = e => observer.error(e);
    ws.onclose = () => observer.complete();

    return () => {
      ws.close();
    };
  });
}

Now you can compose this stream declaratively:

webSocket$('wss://feed')
  .pipe(
    map(e => JSON.parse(e.data)),
    filter(msg => msg.type === 'price')
  )
  .subscribe(console.log);

3.3.2 Teardown and cancellation

When the subscriber unsubscribes, RxJS automatically executes the teardown logic. This prevents lingering network connections and event listeners.

3.3.3 Choosing higher-order mapping operators

Real-world apps often need to restart streams (e.g., reconnecting after network loss). Higher-order mapping operators control how new inner streams replace old ones:

| Operator | Behavior | Use case |
| --- | --- | --- |
| switchMap | Cancels previous inner observable on new emission | Reconnect or search-as-you-type |
| concatMap | Queues emissions, executes sequentially | Ordered commands |
| mergeMap | Processes all concurrently | Parallel network requests |
| exhaustMap | Ignores new emissions while one is active | Prevent double-submit |

Example: reconnect logic using switchMap, driven by an explicit trigger so each reconnect tears down the previous socket:

import { Subject, startWith, switchMap, retry } from 'rxjs';

const reconnect$ = new Subject<void>();

const socket$ = reconnect$.pipe(
  startWith(undefined),
  switchMap(() => webSocket$('wss://feed').pipe(retry({ delay: 2000 })))
);

// reconnect$.next() forces a fresh connection; switchMap closes the old one.

This ensures only one active connection at a time: each trigger cancels the previous socket before opening the next. (Piping interval(5000) into switchMap would instead tear down and recreate the connection every five seconds, even when healthy.)

3.4 Backpressure & rate control in the browser

The browser is both producer and consumer—events arrive from servers and users simultaneously. Without rate control, you risk dropping frames or freezing UIs.

3.4.1 Time-based throttling

Use time operators to shape data flow:

feed$.pipe(throttleTime(200)); // emit first, then silence
feed$.pipe(auditTime(200));    // emit last after quiet period
feed$.pipe(sampleTime(200));   // periodic sampling
feed$.pipe(debounceTime(300)); // wait for idle period

Each serves a distinct UI pattern:

  • throttleTime: drag/move events
  • debounceTime: text input
  • auditTime: latest value before repaint
  • sampleTime: monitoring dashboards
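Under the hood these operators are time-based gates. A leading-edge gate, the first half of throttleTime's behavior, can be sketched with a manual clock (illustrative and dependency-free; RxJS's operators also schedule trailing emissions and teardown):

```typescript
// Leading-edge throttle gate: passes the first event, then suppresses
// everything until `intervalMs` has elapsed since the last passed event.
function makeThrottleGate(intervalMs: number, now: () => number) {
  let lastPassed = -Infinity;
  return (): boolean => {
    const t = now();
    if (t - lastPassed >= intervalMs) {
      lastPassed = t;
      return true;  // emit this event
    }
    return false;   // drop: still inside the quiet window
  };
}

// Drive it with a fake clock to see which events survive
let clock = 0;
const gate = makeThrottleGate(200, () => clock);
const passed: number[] = [];
for (const t of [0, 50, 100, 210, 250, 420]) {
  clock = t;
  if (gate()) passed.push(t);
}
// passed → [0, 210, 420]
```

debounceTime inverts the decision (emit only after a quiet period), and sampleTime replaces the gate with a periodic timer that reads the latest value.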

3.4.2 Count-based buffering

When you must preserve every event (e.g., analytics), use buffering:

feed$.pipe(bufferCount(100)).subscribe(batch => sendToServer(batch));

Or batch by time window:

feed$.pipe(bufferTime(1000)).subscribe(batch => updateUI(batch.length));

3.4.3 Lossless vs lossy strategy

  • Lossless – Retain all data; use buffering or windows.
  • Lossy – Drop intermediate values; use throttling or sampling.

Pick based on information density. For a graph of CPU usage, lossy sampling is fine; for financial transactions, go lossless.
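When you implement the lossy choice by hand, a bounded buffer makes the trade-off explicit. A sketch with both drop policies (hypothetical helper, not an RxJS API):

```typescript
type DropPolicy = 'drop-oldest' | 'drop-newest';

// Bounded buffer: when full, either evict the oldest item (keep fresh data)
// or reject the incoming one (keep history) — but never grow without limit.
class BoundedQueue<T> {
  private items: T[] = [];
  constructor(private capacity: number, private policy: DropPolicy) {}

  push(item: T): boolean {
    if (this.items.length < this.capacity) {
      this.items.push(item);
      return true;
    }
    if (this.policy === 'drop-oldest') {
      this.items.shift();       // evict oldest to make room
      this.items.push(item);
      return true;
    }
    return false;               // drop-newest: reject the incoming item
  }

  drain(): T[] {
    const batch = this.items;
    this.items = [];
    return batch;
  }
}

const fresh = new BoundedQueue<number>(3, 'drop-oldest');
[1, 2, 3, 4, 5].forEach(n => fresh.push(n));
// fresh.drain() → [3, 4, 5]: the latest values win
```

A live CPU graph wants drop-oldest; an analytics batcher wants drop-newest plus a flush signal when it fills.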

3.5 Error handling & retries with cancellation: retry, retryWhen + AbortController handoff; ensuring idempotent UI intents

Real-time streams inevitably break—network drops, server restarts, or serialization errors. RxJS makes recovery declarative.

3.5.1 Simple retry with delay

feed$.pipe(retry({ delay: 2000 })).subscribe();

This restarts after two seconds on error. But in production, you’ll need exponential backoff and cancellation support.

3.5.2 Controlled retry using retryWhen

import { retryWhen, delay, takeUntil } from 'rxjs';

feed$.pipe(
  retryWhen(errors => errors.pipe(delay(1000))),
  takeUntil(stop$)
).subscribe();

The stream retries indefinitely until a stop signal arrives. (Note that retryWhen is deprecated as of RxJS 7.8; the delay option of retry accepts a function and expresses the same policy.)

3.5.3 Integrating AbortController

When dealing with fetch or long-lived observables, use AbortController to cancel in-flight work cleanly:

import { defer, switchMap } from 'rxjs';

function fetch$(url: string, signal: AbortSignal) {
  // defer keeps the observable cold, so retry re-issues the request on resubscribe
  return defer(() => fetch(url, { signal })).pipe(switchMap(r => r.json()));
}

const controller = new AbortController();
fetch$('https://api/data', controller.signal)
  .pipe(retry({ delay: 1000 }))
  .subscribe(console.log);

// Later
controller.abort();

3.5.4 Idempotent UI intents

When retrying actions (like submitting orders), ensure idempotency. Tag each intent with a unique key:

interface Command {
  id: string;
  payload: any;
}

function sendCommand$(cmd: Command) {
  return ajax.post('/api/command', cmd);
}

The backend de-duplicates by id, so retries are safe.

3.6 Memory-safety patterns

Long-lived real-time UIs risk leaks if streams outlive components. RxJS offers deterministic teardown patterns.

3.6.1 takeUntil for lifecycle binding

Create a destroy$ subject and tie all streams to it:

const destroy$ = new Subject<void>();

feed$.pipe(takeUntil(destroy$)).subscribe(updateUI);

// On component teardown
destroy$.next();
destroy$.complete();

3.6.2 Angular’s takeUntilDestroyed

Angular 16 introduced automatic teardown integration via takeUntilDestroyed:

import { takeUntilDestroyed } from '@angular/core/rxjs-interop';

@Component({...})
export class Dashboard {
  constructor(destroyRef: DestroyRef) {
    feed$.pipe(takeUntilDestroyed(destroyRef)).subscribe(updateUI);
  }
}

The stream completes when the component unmounts—no manual subjects needed.

3.6.3 Avoiding nested subscriptions

Never subscribe inside a subscription; use operators like switchMap or mergeMap instead. Incorrect nesting leaks memory and creates uncontrolled concurrency.

Incorrect

feed$.subscribe(f => another$.subscribe(...));

Correct

feed$.pipe(switchMap(f => another$)).subscribe(...);

3.7 Production-grade examples (concise code sketches)

Let’s consolidate these concepts into two real-world examples.

3.7.1 Resilient SSE consumption with backoff and toSignal for latest value rendering

import { defer, map, retry, shareReplay } from 'rxjs';
import { toSignal } from '@angular/core/rxjs-interop';

const feed$ = defer(() => eventSource$('/api/feed')).pipe(
  retry({ delay: 2000 }),       // reconnect with backoff
  map(e => JSON.parse(e.data)),
  shareReplay(1)
);

const latestData = toSignal(feed$, { initialValue: { price: 0 } });

// Angular template automatically re-renders when latestData() changes

Here, toSignal bridges the async observable to synchronous UI state, and shareReplay(1) caches only the latest frame, so new subscribers start from the freshest value instead of replaying history.

3.7.2 WebSocket command stream with concatMap for ordered, idempotent sends

import { Subject, concatMap, catchError, of, shareReplay } from 'rxjs';
import { ajax } from 'rxjs/ajax';

interface TradeCmd { id: string; symbol: string; qty: number }

const trade$ = new Subject<TradeCmd>();

const send$ = trade$.pipe(
  concatMap(cmd =>
    ajax.post('/api/trade', cmd).pipe(
      catchError(err => of({ ...cmd, error: true }))
    )
  ),
  shareReplay(1)
);

// When user submits trades
trade$.next({ id: crypto.randomUUID(), symbol: 'AAPL', qty: 50 });

concatMap ensures each command completes before the next begins—preserving ordering under retries. shareReplay(1) lets new subscribers immediately receive the latest response.


4 Signals in modern UI frameworks, and how they complement streams

Signals emerged independently across frameworks—Angular, Preact, and Solid—but converge on one idea: fine-grained reactivity without virtual DOM churn. They complement RxJS by handling local, synchronous updates while observables manage asynchronous flows. This section compares APIs, interop, and pitfalls across ecosystems.

4.1 Signals in Angular, Preact, Solid—converging ideas, different APIs

4.1.1 Angular signals

Angular’s signal API integrates tightly with its change detection system:

const count = signal(0);
const doubled = computed(() => count() * 2);
effect(() => console.log('Doubled:', doubled()));

Angular tracks dependencies automatically—only components reading doubled() re-render when it changes.

4.1.2 SolidJS signals

Solid uses nearly identical primitives but built from scratch for reactive granularity:

import { createSignal, createEffect } from 'solid-js';

const [count, setCount] = createSignal(0);
createEffect(() => console.log(count()));

Updates propagate synchronously without a virtual DOM diff, ideal for dense UIs like telemetry dashboards.

4.1.3 Preact signals

Preact’s @preact/signals and @preact/signals-react bring the same semantics to React-compatible codebases:

import { signal, effect } from '@preact/signals-react';

const count = signal(0);
effect(() => console.log('Count', count.value));

React components subscribed to signals re-render automatically, bypassing useState and useEffect.

4.1.4 Convergence

Across all ecosystems:

  • Signals represent synchronous, dependency-tracked values.
  • Computeds derive new values from base signals.
  • Effects handle side effects upon change.

This model yields predictable, minimal updates—a perfect counterpoint to RxJS’s event-oriented asynchrony.

4.2 Practical interop patterns in Angular

Real UIs need both paradigms: RxJS for async data and signals for rendering. Angular’s toSignal and toObservable make this bridge safe and ergonomic.

4.2.1 toSignal for last-value render

Convert an observable into a signal to render the latest known value:

@Component({
  template: `<div>Latest price: {{ price() }}</div>`
})
export class PriceTicker {
  price = toSignal(price$, { initialValue: 0 });
}

This approach eliminates manual subscription management and ensures UI reflects the most recent stream value.

4.2.2 effect() for side effects triggered by state changes

Use effect() when you need imperative reactions to signal changes, like analytics logging or conditional fetches:

const filter = signal('AAPL');

effect(() => {
  console.log('Filter changed:', filter());
  socket.send(JSON.stringify({ type: 'subscribe', symbol: filter() }));
});

4.2.3 toObservable when downstream control flow is needed

If you need to pipe through operators (debounceTime, switchMap), convert back to an observable:

const query = signal('');

toObservable(query)
  .pipe(
    debounceTime(300),
    switchMap(q => http.get(`/api/search?q=${q}`))
  )
  .subscribe(renderResults);

This hybrid approach ensures signals stay synchronous while async orchestration remains declarative.

4.3 When signals can introduce subtle races (e.g., async fetching inside effect) and how RxJS switchMap restores determinism

Signals’ synchronous nature can cause race conditions if effects trigger asynchronous actions without proper cancellation.

4.3.1 The race

Incorrect

effect(() => {
  fetch(`/api/search?q=${query()}`)
    .then(r => r.json())
    .then(renderResults);
});

If the user types fast, multiple fetches overlap, and slower responses may overwrite newer ones.

4.3.2 Corrected with switchMap

Bridge through RxJS to gain cancellation semantics. fromFetch (from rxjs/fetch) aborts the underlying HTTP request when unsubscribed:

import { debounceTime, switchMap } from 'rxjs';
import { fromFetch } from 'rxjs/fetch';

toObservable(query)
  .pipe(
    debounceTime(300),
    switchMap(q => fromFetch(`/api/search?q=${q}`).pipe(switchMap(r => r.json())))
  )
  .subscribe(renderResults);

switchMap unsubscribes from the stale inner stream when a new query arrives, and fromFetch turns that unsubscription into an actual abort, guaranteeing latest-wins determinism.

4.3.3 Key insight

Signals guarantee consistency within a render frame, not across async boundaries. For asynchronous effects, always delegate to RxJS, which can cancel, retry, and sequence correctly.
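The latest-wins rule that switchMap enforces can be stated without RxJS at all, as a generation counter (illustrative sketch; switchMap additionally unsubscribes the stale inner stream rather than merely ignoring its result):

```typescript
// Latest-wins guard: each new request bumps the generation; a response is
// applied only if no newer request has started in the meantime.
function latestWins<T>(apply: (value: T) => void) {
  let generation = 0;
  return function begin() {
    const mine = ++generation;
    return (value: T) => {
      if (mine === generation) apply(value); // stale responses are discarded
    };
  };
}

let rendered = '';
const begin = latestWins<string>(v => { rendered = v; });

const resolveFirst = begin();   // user types "a": request 1 starts
const resolveSecond = begin();  // user types "ab": request 2 starts

resolveFirst('results for a');   // slow response arrives late: ignored
resolveSecond('results for ab'); // newest request wins
// rendered === 'results for ab'
```

This is exactly the invariant a naive effect-plus-fetch violates when responses arrive out of order.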

4.4 Preact Signals in React apps (@preact/signals-react)—where it fits versus React Query/Zustand, and how to keep streams at the edges

Preact’s signal system has quietly become a favorite in React teams looking for fine-grained reactivity without context churn.

4.4.1 Replacing useState with signals

import { signal } from '@preact/signals-react';

const counter = signal(0);

function Counter() {
  return <button onClick={() => counter.value++}>{counter.value}</button>;
}

Unlike useState, signal.value++ doesn’t trigger a component-level re-render; only consumers of that signal update.

4.4.2 Signals vs React Query/Zustand

  • React Query manages remote async state (data fetching, caching).
  • Zustand manages shared state via a central store.
  • Signals manage fine-grained local reactivity—perfect for UI computations or inter-component derived state.

Best practice: use React Query for server data, wrap its results into signals for efficient rendering.

const userSignal = signal<User | null>(null);

useEffect(() => {
  fetchUser().then(u => (userSignal.value = u));
}, []);

4.4.3 Keeping streams at the edges

Streams belong at the I/O edges—network, events, timers. Signals belong at the rendering core.

Pattern

  1. Stream async data via RxJS (WebSocket, SSE).
  2. Bridge to signals for current snapshot.
  3. Use computed signals for derived UI state.
  4. Convert back to observables only when orchestrating new async flows.

This separation avoids tangled reactivity and preserves clarity across boundaries.
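The four steps above can be sketched end to end in plain TypeScript (the `MiniStream` and `Snapshot` classes are illustrative stand-ins, not real RxJS or signal APIs):

```typescript
// Dependency-free miniature of the edge pattern: a push stream at the I/O edge,
// a snapshot holder ("signal") at the core, and a derived computation.
type Listener<T> = (value: T) => void;

class MiniStream<T> {            // stands in for an RxJS observable at the edge
  private listeners: Listener<T>[] = [];
  subscribe(fn: Listener<T>) { this.listeners.push(fn); }
  emit(value: T) { this.listeners.forEach(fn => fn(value)); }
}

class Snapshot<T> {              // stands in for a signal: current value only
  constructor(public value: T) {}
}

const ticks = new MiniStream<number>();        // 1. stream at the edge
const latest = new Snapshot(0);                // 2. bridge to a snapshot
ticks.subscribe(v => (latest.value = v));
const doubled = () => latest.value * 2;        // 3. derived state

ticks.emit(21);
// latest.value === 21, doubled() === 42
```

The key property: rendering code only ever reads `latest` and `doubled`; time lives entirely on the stream side of the bridge.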


5 Elmish (F#) for predictable async orchestration

Elmish brings mathematical clarity to real-time state management. It eliminates hidden side effects and unpredictable async mutations by enforcing a pure, functional pattern — the Model–View–Update (MVU) loop. Each UI or module state is a deterministic projection of messages processed through pure functions. When combined with RxJS or Signals at the edge, Elmish becomes your predictable command-and-control center for asynchronous operations across the entire real-time stack.

5.1 MVU refresher with Elmish: model, messages, update, commands; testing updates as pure functions

At its core, Elmish defines a triad:

  1. Model – The immutable state.
  2. Msg – A discriminated union representing events.
  3. update – A pure function transforming (Msg × Model) → (Model × Cmd<Msg>).

5.1.1 A minimal example

type Model = { Count: int; Loading: bool }

type Msg =
    | Increment
    | Decrement
    | Load
    | Loaded of int

let update msg model =
    match msg with
    | Increment -> { model with Count = model.Count + 1 }, Cmd.none
    | Decrement -> { model with Count = model.Count - 1 }, Cmd.none
    | Load -> { model with Loading = true }, Cmd.OfAsync.perform loadData () Loaded
    | Loaded value -> { model with Loading = false; Count = value }, Cmd.none

5.1.2 Testing the update function

Because update is pure, it can be unit-tested directly:

let initial = { Count = 0; Loading = false }
let model', _ = update Increment initial
assert (model'.Count = 1)

No mocks, no DI — the Elmish model enforces a transparent flow from intent to state.

5.1.3 Declarative effects

Cmd values carry side effects explicitly. They’re separate from business logic, enabling composability, retries, and parallelism while keeping state updates deterministic.

This separation is the antidote to “callback hell” or ad-hoc Promises littering imperative codebases. Elmish turns all async behavior into declarative, testable units.
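Because update is pure, the same shape translates directly to other languages. A TypeScript transliteration of the counter example above, shown only to highlight the language-agnostic structure (the `Cmd` type here is a simplified stand-in for Elmish's):

```typescript
// The MVU triad from the F# example, transliterated to TypeScript.
// Cmd is just a description of an effect — a runtime would interpret it later.
type Model = { count: number; loading: boolean };

type Msg =
  | { kind: 'Increment' }
  | { kind: 'Decrement' }
  | { kind: 'Load' }
  | { kind: 'Loaded'; value: number };

type Cmd = { kind: 'none' } | { kind: 'loadData' };

function update(msg: Msg, model: Model): [Model, Cmd] {
  switch (msg.kind) {
    case 'Increment': return [{ ...model, count: model.count + 1 }, { kind: 'none' }];
    case 'Decrement': return [{ ...model, count: model.count - 1 }, { kind: 'none' }];
    case 'Load':      return [{ ...model, loading: true }, { kind: 'loadData' }];
    case 'Loaded':    return [{ count: msg.value, loading: false }, { kind: 'none' }];
  }
}

// Pure, so trivially testable:
const [m1] = update({ kind: 'Increment' }, { count: 0, loading: false });
// m1.count === 1
```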

5.2 Async flows with Cmd.OfAsync / Cmd.OfPromise: retries, error routing, and transactional UI updates

Elmish provides built-in combinators for asynchronous commands:

  • Cmd.OfAsync.perform – Runs an async workflow and dispatches a message built from its result; errors are left unhandled.
  • Cmd.OfAsync.either – Dispatches one message on success, another on error.
  • Cmd.OfPromise.perform – The same, but for JavaScript promises in Fable contexts.

5.2.1 Example: safe data fetch with retry and error handling

type Msg =
    | Load
    | Loaded of Data
    | Error of exn

let update msg model =
    match msg with
    | Load ->
        let cmd =
            Cmd.OfAsync.either
                (fun () -> api.FetchData()) () Loaded Error
        { model with Loading = true }, cmd

    | Loaded data ->
        { model with Loading = false; Data = Some data }, Cmd.none

    | Error e ->
        { model with Loading = false; Error = Some e.Message }, Cmd.none

This pattern keeps async intent explicit and allows central orchestration for retries.

5.2.2 Transactional UI updates

Elmish encourages optimistic updates: update UI first, rollback on failure.

| Submit order ->
    let newModel = { model with Pending = true; Orders = order :: model.Orders }
    newModel, Cmd.OfAsync.either api.Submit order Submitted SubmitFailed

If the API fails, the rollback path simply removes the optimistic item — all without mutable side effects.
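The optimistic-insert/rollback pair is just two pure state transitions. A TypeScript sketch of the same idea (hypothetical `submitOrder`/`submitFailed` helpers illustrating the pattern, not any Elmish API):

```typescript
// Optimistic update with rollback as pure transitions: the order is added
// immediately; on failure, the same id removes it again.
type Order = { id: string; qty: number };
type OrderModel = { pending: boolean; orders: Order[] };

function submitOrder(model: OrderModel, order: Order): OrderModel {
  return { pending: true, orders: [order, ...model.orders] };   // optimistic insert
}

function submitFailed(model: OrderModel, orderId: string): OrderModel {
  // Rollback: drop the optimistic item, nothing else changes.
  return { pending: false, orders: model.orders.filter(o => o.id !== orderId) };
}

const before: OrderModel = { pending: false, orders: [] };
const optimistic = submitOrder(before, { id: 'o1', qty: 3 });
const rolledBack = submitFailed(optimistic, 'o1');
// optimistic.orders.length === 1, rolledBack.orders.length === 0
```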

5.2.3 Retry policy integration

Combine Elmish commands with libraries like Polly on the .NET side or use recursive command scheduling:

// Dispatches Retry (n-1) after a delay; update handles Retry by re-issuing the command.
let retry n cmd =
    if n = 0 then Cmd.none
    else
        Cmd.batch [
            cmd
            Cmd.OfAsync.perform (fun () -> Async.Sleep 2000) () (fun _ -> Retry (n - 1))
        ]

Elmish loops can model exponential backoff purely, without timers or side effects leaking outside.
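The backoff schedule itself is pure data, which is what makes it testable. A TypeScript sketch of the schedule computation (base delay and cap are illustrative choices, not values from the article):

```typescript
// Exponential backoff as a pure schedule: attempt i waits base * 2^i, capped.
function backoffDelaysMs(attempts: number, baseMs = 2000, capMs = 30000): number[] {
  return Array.from({ length: attempts }, (_, i) =>
    Math.min(baseMs * 2 ** i, capMs));
}

// backoffDelaysMs(4) → [2000, 4000, 8000, 16000]
```

Computing delays separately from scheduling them keeps the policy unit-testable in any MVU loop.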

5.3 Subscriptions in Elmish: wiring timers, sockets, or DOM events into the message loop; lifecycle attach/detach

Subscriptions allow long-lived event sources to feed messages into Elmish. They differ from commands in that they’re continuous, not one-shot.

5.3.1 Timer subscription

let timerSub dispatch =
    let rec loop () = async {
        do! Async.Sleep 1000
        dispatch Tick
        return! loop ()
    }
    Async.StartImmediate(loop())

let subscriptions model =
    if model.Active then Cmd.ofSub timerSub else Cmd.none

Each tick dispatches Tick, updating the model deterministically.

5.3.2 WebSocket subscription

let socketSub dispatch =
    let ws = new WebSocket("wss://feed")
    ws.onmessage <- fun e -> dispatch (Received e.data)
    ws.onclose <- fun _ -> dispatch Disconnected
    // Hand the runtime a disposer so teardown is explicit.
    { new System.IDisposable with
        member _.Dispose() = ws.close () }

Because the subscription returns an IDisposable, the Elmish runtime disposes it when the subscription is no longer active (for example, on unmount), closing the socket. This keeps sockets clean and memory stable.

5.3.3 Lifecycle control

The Elmish runtime manages attach and detach for you: a subscription starts when the model declares it active and is torn down when it no longer is. Subscriptions are particularly powerful when paired with backpressure-aware data sources like Channels or Rx streams via Fable bindings.

5.4 WebSockets in Elmish via Elmish.Bridge or SAFE Stack guidance; when Bridge is appropriate (closed client ecosystem)

When Elmish runs in a browser (via Fable), WebSocket integration often uses Elmish.Bridge — a small framework that synchronizes messages between F# clients and a .NET backend.

5.4.1 Basic Bridge setup

type ServerMsg = PriceUpdate of float | TradeAck of string
type ClientMsg = PlaceTrade of Trade | Ping

Program.mkProgram init update view
|> Program.withBridge
    (Bridge.endpoint "/socket"
        |> Bridge.withServerHub<ServerMsg, ClientMsg>())
|> Program.run

Bridge automatically serializes messages, maintains connection state, and routes messages to update.

5.4.2 When to use Bridge

Use Elmish.Bridge when:

  • All clients are Elmish/Fable-based (closed ecosystem).
  • You want a unified message type and shared contract.
  • The backend is an F#-friendly .NET server (e.g., Giraffe, Saturn, or Suave) hosting the Bridge socket endpoint.

Avoid it for open or mixed frontends — plain WebSocket/SSE streams are simpler for integration with TypeScript or non-F# clients.

5.4.3 SAFE Stack variant

SAFE Stack (Saturn + Azure + Fable + Elmish; originally built on Suave) provides guidance for wiring Elmish subscriptions directly to WebSocket endpoints. Here, Elmish manages connection retries through commands, while the backend simply emits JSON frames.

This yields flexibility to interop with other client stacks using the same event schema.

5.5 Interop with Fable and TypeScript front-ends: shared message contracts, JSON codecs, and backpressure handoff strategies

Modern teams often pair TypeScript UI (RxJS + Signals) with Elmish modules compiled via Fable. To avoid drift, define shared domain contracts in F#.

5.5.1 Shared message schema

type Trade =
    { Id: Guid; Symbol: string; Qty: int; Price: decimal }

type ClientToServer =
    | PlaceTrade of Trade
    | Cancel of Guid

type ServerToClient =
    | TradeUpdate of Trade
    | Acknowledged of Guid

Serialized with a codec such as Thoth.Json, these unions become tagged JSON objects that TypeScript clients can consume.

5.5.2 JSON codec interop

Use Thoth.Json to encode/decode safely:

let encodeTrade t = Encode.object [ "Id", Encode.guid t.Id; "Symbol", Encode.string t.Symbol ]

In TypeScript:

// Field casing must match the F# encoder above ("Id", "Symbol", …).
interface Trade { Id: string; Symbol: string; Qty: number; Price: number }

This shared schema keeps front-end and back-end evolution in sync.
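On the TypeScript side, it pays to validate the tagged shape at the boundary before it touches app state. A sketch assuming a hypothetical `{ tag: ... }` encoding — match the guard to whatever shape your Thoth.Json configuration actually emits:

```typescript
// Boundary guard for a tagged server-to-client message.
// The { tag: ... } wire shape is a hypothetical example, not Thoth's default.
type ServerToClient =
  | { tag: 'TradeUpdate'; symbol: string; price: number }
  | { tag: 'Acknowledged'; id: string };

function parseServerMessage(raw: string): ServerToClient | null {
  const obj = JSON.parse(raw);
  switch (obj?.tag) {
    case 'TradeUpdate':
      return typeof obj.symbol === 'string' && typeof obj.price === 'number' ? obj : null;
    case 'Acknowledged':
      return typeof obj.id === 'string' ? obj : null;
    default:
      return null;   // unknown or malformed: never let it reach app state
  }
}

// parseServerMessage('{"tag":"Garbage"}') → null
```

Rejecting unknown tags at the edge means a schema drift shows up as an observable metric, not a corrupted model.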

5.5.3 Backpressure coordination

When Elmish modules emit high-frequency updates, use Channels or throttled SignalR groups on the .NET side. The Elmish client can then use Cmd.OfAsync.either with controlled Async.Sleep intervals to regulate input frequency.

This cross-layer contract ensures smooth updates under load — every layer respecting throughput constraints from UI to backend.


6 .NET backend patterns for real-time + correctness

On the backend, correctness and scalability come down to how events flow. ASP.NET Core 8 and .NET 9 have matured real-time primitives — SignalR, Channels, and Dataflow — that integrate with both F# Elmish clients and TypeScript observables.

6.1 Transport choices: SignalR hubs, raw WebSockets, and SSE endpoints in ASP.NET Core; choosing per scenario and infra constraints

Each transport fits different constraints:

Transport | Direction       | Ideal For                | Notes
SignalR   | Duplex          | Chat, collaborative apps | Protocol abstraction, automatic reconnect
WebSocket | Duplex          | Low-latency trading      | Full control, binary frames possible
SSE       | Server → Client | Market data, telemetry   | Simpler, HTTP-friendly, auto-reconnect

6.1.1 SignalR example

public class TradeHub : Hub
{
    public async Task SendTrade(Trade trade)
        => await Clients.All.SendAsync("TradeUpdate", trade);
}

SignalR manages connection lifetimes, scaling via Redis or Azure Web PubSub.

6.1.2 SSE endpoint example

app.MapGet("/feed", async (HttpResponse res) =>
{
    res.Headers.Add("Content-Type", "text/event-stream");
    await foreach (var evt in FeedService.Stream())
    {
        await res.WriteAsync($"data: {JsonSerializer.Serialize(evt)}\n\n");
        await res.Body.FlushAsync();
    }
});

SSE is stateless and works seamlessly with caching/CDNs.

6.2 Backpressure & flow control on the server

6.2.1 System.Threading.Channels

Channels provide bounded asynchronous queues for producer/consumer coordination.

var channel = Channel.CreateBounded<Trade>(new BoundedChannelOptions(1000)
{
    FullMode = BoundedChannelFullMode.Wait
});

Producers await space when the channel is full — natural backpressure.

6.2.2 Consumption pattern

await foreach (var trade in channel.Reader.ReadAllAsync())
{
    await ProcessTrade(trade);
}

6.2.3 TPL Dataflow alternative

For richer pipelines or parallelism:

var block = new TransformBlock<Trade, Order>(
    t => Transform(t),
    new ExecutionDataflowBlockOptions { BoundedCapacity = 500, MaxDegreeOfParallelism = 4 });

This suits CPU-bound event transformations.

6.3 Idempotent commands in distributed systems

Real-time systems must prevent duplicate processing during retries.

6.3.1 Idempotency key pattern

public class CommandHandler
{
    private readonly ISet<Guid> _processed = new HashSet<Guid>();

    public async Task HandleAsync(Command cmd)
    {
        if (_processed.Contains(cmd.Id)) return;
        _processed.Add(cmd.Id);
        await Process(cmd);
    }
}

In production, use a dedupe table in a transactional database or Redis.
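The same dedupe-by-id discipline applies on any tier. A TypeScript sketch (names are illustrative):

```typescript
// Idempotent command handling: a processed-id set rejects replays.
// In production this set lives in Redis or a dedupe table, not process memory.
type Command = { id: string; payload: string };

class IdempotentHandler {
  private processed = new Set<string>();
  handledCount = 0;

  handle(cmd: Command): boolean {
    if (this.processed.has(cmd.id)) return false; // duplicate: ignored
    this.processed.add(cmd.id);
    this.handledCount++;                          // real work would happen here
    return true;
  }
}

const handler = new IdempotentHandler();
handler.handle({ id: 'c1', payload: 'buy' });  // processed
handler.handle({ id: 'c1', payload: 'buy' });  // retry: rejected
// handler.handledCount === 1
```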

6.3.2 Outbox pattern

Persist events before publishing:

await db.SaveChangesAsync();
await outbox.WriteAsync(new OutboxEvent(evt));

A background worker drains the outbox to prevent lost messages.

6.3.3 Middleware integration

MediatR + Polly can enforce retries and deduplication:

builder.Services.AddTransient(typeof(IPipelineBehavior<,>), typeof(IdempotencyBehavior<,>));
builder.Services.AddResiliencePipeline("default", pipeline =>
    pipeline
        .AddRetry(new RetryStrategyOptions { MaxRetryAttempts = 3 })
        .AddCircuitBreaker(new CircuitBreakerStrategyOptions()));

6.4 Rx.NET in services for event composition—where it still shines and current project status

While Channels dominate structured flow, Rx.NET remains excellent for in-memory event composition: aggregations, joins, and time-based analysis.

var feed = market.ObservePrices()
    .Buffer(TimeSpan.FromSeconds(1))
    .Select(batch => batch.Average(x => x.Price))
    .Subscribe(avg => Console.WriteLine($"Avg: {avg}"));

Rx.NET’s functional operators simplify telemetry or analytics within bounded scopes. For cross-service flows, Channels provide better observability and control.

6.5 Managed real-time services vs DIY: Azure Web PubSub vs self-hosted SignalR/YARP—integration notes

6.5.1 Managed

Azure Web PubSub scales transparently, offering built-in auth and geo-distribution. Ideal for global dashboards or chat.

// Via the Azure SignalR Service SDK (Microsoft.Azure.SignalR); Azure Web PubSub
// ships its own server SDK with a similarly small setup.
services.AddSignalR().AddAzureSignalR("Endpoint=...");

6.5.2 Self-hosted

Self-hosted SignalR (or YARP) offers full control — you can co-locate with app services and tune channel capacity manually.

Use this when latency is critical or you need custom backpressure logic.

6.5.3 Hybrid strategy

Use managed PubSub for external clients and self-hosted SignalR for internal orchestration. The pattern scales linearly while keeping private data local.

6.6 Observability: measuring queue depth (channels), dropped vs processed events, per-tenant rate-limits

Observability closes the feedback loop. Metrics must reflect event flow health, not just uptime.

6.6.1 Channel metrics

var gauge = meter.CreateObservableGauge("channel_depth", () => channel.Reader.Count);

Track average queue depth and dropped messages per minute.

6.6.2 Rate limiting

.NET’s RateLimiter middleware throttles producers before overload:

app.UseRateLimiter(new RateLimiterOptions
{
    GlobalLimiter = PartitionedRateLimiter.Create<HttpContext, string>(_ =>
        RateLimitPartition.GetFixedWindowLimiter("default", _ => new FixedWindowRateLimiterOptions
        {
            PermitLimit = 100,
            Window = TimeSpan.FromSeconds(1)
        }))
});

6.6.3 Tracing and correlation

Use ActivitySource to trace events through pipelines, correlating SSE frames to upstream commands for end-to-end latency measurement.


7 End-to-end implementation: “Live Order Book & Trades” (browser + F# module + .NET)

Let’s integrate everything. This case study unites RxJS + Signals (UI), Elmish (orchestration), and .NET Channels (backend) into one coherent system.

7.1 Domain and requirements

A real-time trading dashboard should:

  • Stream live market data (prices, order book).
  • Accept user trades via command channel.
  • Ensure idempotency and backpressure safety.
  • Recover gracefully after network interruptions.

7.2 Backend (ASP.NET Core)

7.2.1 Data pipeline

var feed = Channel.CreateBounded<Trade>(5000);
_ = Task.Run(async () =>
{
    await foreach (var trade in source.StreamAsync())
        await feed.Writer.WriteAsync(trade);
});

Fan-out via SignalR and SSE:

app.MapHub<TradeHub>("/ws");
app.MapGet("/sse", async (HttpResponse res) =>
{
    res.Headers.ContentType = "text/event-stream";
    await foreach (var t in feed.Reader.ReadAllAsync())
        await res.WriteAsync($"data:{JsonSerializer.Serialize(t)}\n\n");
});

7.2.2 Command endpoint

app.MapPost("/trade", async (TradeCommand cmd, ITradeService svc) =>
{
    await svc.HandleAsync(cmd); // includes idempotency check
    return Results.Ok();
});

Outbox ensures persistence before broadcast.

7.3 UI—TypeScript

7.3.1 Network layer

const trades$ = eventSource$('/sse').pipe(
  retry({ delay: 2000 }),
  map(e => JSON.parse(e.data)),
  // Each SSE frame is one trade; accumulate into a capped list for rendering.
  scan((acc, t) => [t, ...acc].slice(0, 100), [] as Trade[]),
  shareReplay(1)
);

const trades = toSignal(trades$, { initialValue: [] });

7.3.2 Trade submission

const tradeCmd$ = new Subject<Trade>();

tradeCmd$
  .pipe(concatMap(cmd => ajax.post('/trade', cmd)))
  .subscribe();

7.3.3 Rate limiting

trades$
  .pipe(auditTime(300))
  .subscribe(renderOrderBook);

The UI paints at most roughly 3 Hz even if the backend emits faster, preventing layout thrash.

7.4 Elmish (F#) module

Mirrored messages keep Elmish and TS in sync.

type Msg =
    | NewPrice of float
    | SubmitTrade of Trade
    | Confirmed of Trade
    | Error of exn

let update msg model =
    match msg with
    | NewPrice p -> { model with Price = p }, Cmd.none
    | SubmitTrade t -> { model with Pending = true }, Cmd.OfAsync.either api.Submit t Confirmed Error
    | Confirmed t -> { model with Pending = false; Trades = t :: model.Trades }, Cmd.none
    | Error e -> { model with Error = Some e.Message }, Cmd.none

This guarantees predictable transitions even under concurrent updates.

7.5 Cross-stack backpressure strategy

  • Browser: auditTime smooths DOM updates.
  • Backend: Channels with BoundedCapacity prevent overload.
  • Elmish: Command throttling limits concurrent submissions.

Together, they form an end-to-end flow-control chain.

7.6 Failure drills

  • Server restarts – the SSE Last-Event-ID header enables resumable feeds.
  • Duplicate commands – server dedupe tables reject repeats.
  • Client reconnection – RxJS retry logic reinitializes streams automatically.

All recovery logic remains declarative and measurable.

7.7 Packaging & deploy notes (Docker, CI checks, infra toggles for SSE/WebSocket selection)

Containerize each component:

FROM mcr.microsoft.com/dotnet/aspnet:8.0
COPY ./out /app
ENTRYPOINT ["dotnet", "TradeFeed.dll"]

Use environment toggles to select transport:

{ "Transport": "SSE" }

In CI: run integration tests verifying backpressure and reconnection. Deploy SignalR + Channels under Kestrel, or pair with Azure Web PubSub for elastic scaling.

Result: a composable, reactive system with predictable, observable behavior under load — every message traceable, every failure recoverable, every update deterministic.


8 Production checklist: performance, leaks, and operability

Shipping a real-time system that feels fast is only half the challenge; keeping it stable under sustained load is the other. Once the reactive patterns and functional structures are in place, success depends on operational rigor—how well you manage memory, concurrency, observability, and security at scale. This final section condenses field-tested practices to ensure that RxJS, Signals, Elmish, and .NET backends cooperate safely in long-lived production environments.

8.1 RxJS ergonomics & performance

Reactive streams are efficient when operators are chosen deliberately. Misusing mapping or throttling can create hidden latency or wasted cycles.

8.1.1 Operator selection matrix

Scenario                            | Preferred Operator | Reason
Navigation / search suggestions     | switchMap          | Cancels stale requests, latest wins
Sequential network commands         | concatMap          | Preserves order, ensures no overlap
Fire-and-forget background tasks    | mergeMap           | Max concurrency, non-blocking
Tap-to-pay or single-flight actions | exhaustMap         | Ignores duplicates while active

A quick decision matrix:

// Latest result wins (search bar)
query$.pipe(
  debounceTime(300),
  switchMap(q => http.get(`/search?q=${q}`))
)

// Ordered sequence (trade queue)
commands$.pipe(concatMap(cmd => send$(cmd)))

// Prevent duplicate tap (payment)
payClicks$.pipe(exhaustMap(() => processPayment$()))

8.1.2 Rate limiting

Choosing the correct rate limiter depends on what “timeliness” means in your UI:

Operator     | Emits                  | Ideal Use
throttleTime | first value per window | Drag or scroll events
debounceTime | last value after quiet | Autocomplete input
auditTime    | last after interval    | UI paint smoothing
sampleTime   | periodic snapshot      | Metrics dashboards

Example for dashboard rendering:

feed$.pipe(auditTime(300)).subscribe(updateChart);

auditTime lets intermediate bursts collapse into one update cycle, preserving the final state with minimal DOM churn.

8.1.3 Avoiding redundant computation

Use distinctUntilChanged for derived streams that frequently emit identical data, e.g., when rate-limited SSE updates repeat the same price tick.

price$.pipe(distinctUntilChanged()).subscribe(renderPrice);

This ensures efficient change propagation to signals or components.
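The operator's contract is simple enough to state as a plain function over an event sequence, which makes its effect on repeated ticks concrete (illustrative only — use the RxJS operator in real pipelines):

```typescript
// What distinctUntilChanged does: drop any value strictly equal to the one
// emitted immediately before it (only consecutive duplicates are removed).
function distinctUntilChanged<T>(values: T[]): T[] {
  return values.filter((v, i) => i === 0 || v !== values[i - 1]);
}

// distinctUntilChanged([101.2, 101.2, 101.3, 101.3, 101.2]) → [101.2, 101.3, 101.2]
```

Note the last 101.2 survives: the operator compares against the previous emission, not all history.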

8.2 Memory-leak prevention

Long-lived observables and components are natural leak vectors if teardown isn’t explicit. The following discipline prevents that.

8.2.1 Centralized teardown

Always create a single destroy$ subject per component or service, then unify cleanup via takeUntil(destroy$):

feed$
  .pipe(takeUntil(destroy$))
  .subscribe(render);

When the component unloads:

destroy$.next();
destroy$.complete();

8.2.2 Angular-specific

Angular’s takeUntilDestroyed() automates lifecycle teardown:

constructor(destroyRef: DestroyRef) {
  this.feed$.pipe(takeUntilDestroyed(destroyRef)).subscribe(render);
}

This prevents dangling subscriptions and ensures deterministic cleanup.

8.2.3 Avoiding manual subscribe

In templates, use the async pipe or signals instead of manual subscription:

<div *ngIf="price$ | async as price">Price: {{ price }}</div>

Angular unsubscribes automatically, reducing human error.

8.2.4 Signals as memory guards

Because signals automatically track dependencies, they reduce subscription churn. But never create new signals inside effects dynamically—it can lead to exponential dependency graphs. Keep signal creation static and predictable.

8.3 Signals best practices

Signals provide precision and determinism only if they remain synchronous state holders, not async conduits.

8.3.1 Separation of roles

  • Use signals for local state (form fields, computed totals).
  • Use observables for async I/O (WebSocket, SSE, timers).
  • Bridge them only once per edge via toSignal or toObservable.

Correct bridging:

// From SSE to signal for rendering
const feed$ = eventSource$('/feed').pipe(map(JSON.parse));
const latest = toSignal(feed$, { initialValue: {} });

Avoid mixing the two mid-pipeline—signals don’t handle time; observables do.

8.3.2 Derived signals and memoization

Computed signals automatically memoize. When deriving heavy calculations, wrap them with computed() so updates propagate only when inputs change.

const trades = signal<Trade[]>([]);
const avgPrice = computed(() =>
  trades().length
    ? trades().reduce((s, t) => s + t.price, 0) / trades().length
    : 0
);

8.3.3 Controlled effects

Effects should never fetch data directly—always delegate async to RxJS. Keep them idempotent and free of branching logic.

effect(() => console.log('New trade count:', trades().length));

This ensures the reactivity graph remains acyclic and deterministic.

8.4 Elmish hardening

Elmish is robust by design, but under production pressure, additional discipline ensures longevity and traceability.

8.4.1 Purity enforcement

Never mutate the model directly in update. Treat every field as immutable:

Incorrect

model.Count <- model.Count + 1
model, Cmd.none

Correct

{ model with Count = model.Count + 1 }, Cmd.none

8.4.2 Exhaustive pattern matching

Handle every case of your Msg unions explicitly — F#'s exhaustiveness warnings then flag any message you add later. A wildcard suppresses that safety net, so treat it as a strictly temporary measure:

match msg with
| Increment -> ...
| Decrement -> ...
| _ -> model, Cmd.none // temporary catch-all during refactor; remove once all cases are explicit

8.4.3 Side-effect containment

Only Cmd and Sub blocks should perform side effects. If logic requires persistence, encapsulate it in a helper module, not directly inside update.

8.4.4 Testing updates

Use snapshot testing for message transitions:

let model, cmd = update (Receive "Hello") initialModel
Assert.Equal(["Hello"], model.Messages)

Testing pure updates is faster than UI integration tests and catches most regressions early.

8.5 .NET backpressure/throughput

On the server, throughput and latency hinge on bounded queues and async coordination.

8.5.1 Prefer bounded Channels

Bounded channels prevent overload:

var channel = Channel.CreateBounded<Event>(new BoundedChannelOptions(1000)
{
    FullMode = BoundedChannelFullMode.Wait
});

This guarantees that producers block when capacity is full—natural backpressure without resource explosion.

8.5.2 Measuring saturation

Expose channel metrics:

metrics.Gauge("queue_depth", () => channel.Reader.Count);

Sustained depths near capacity indicate bottlenecks upstream.

8.5.3 When to use TPL Dataflow

Use Dataflow when:

  • You require parallelism control (e.g., multiple workers).
  • You need fan-in/fan-out pipelines.
  • You want batching and throttling built-in.

Example:

var transform = new TransformBlock<Order, Quote>(
    o => Process(o),
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });

Dataflow naturally composes pipeline blocks, making complex event topologies easier to reason about.

8.6 Idempotency & retries

Distributed real-time systems must tolerate duplicates without breaking business logic.

8.6.1 Command idempotency

Include unique keys on every command:

public record PlaceOrder(Guid Id, string Symbol, decimal Qty);

Check for prior completion before reprocessing:

if (await db.Orders.AnyAsync(o => o.Id == cmd.Id)) return;

8.6.2 Deterministic handlers

Design command handlers as pure transformations from request → event. Avoid system time or random IDs inside handlers; pass all inputs explicitly for replayability.
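The replay-safety this buys is easy to demonstrate: with time injected as an argument, two runs of the same command produce identical events. A TypeScript sketch (names are hypothetical):

```typescript
// Deterministic handler: time and ids enter as explicit inputs, so replaying
// the same command yields an identical event.
type PlaceOrder = { id: string; symbol: string; qty: number };
type OrderPlaced = { orderId: string; symbol: string; qty: number; placedAt: number };

function handlePlaceOrder(cmd: PlaceOrder, now: number): OrderPlaced {
  // No Date.now(), no randomness: everything comes from the arguments.
  return { orderId: cmd.id, symbol: cmd.symbol, qty: cmd.qty, placedAt: now };
}

const cmd = { id: 'o-1', symbol: 'ACME', qty: 5 };
const a = handlePlaceOrder(cmd, 1700000000000);
const b = handlePlaceOrder(cmd, 1700000000000);
// JSON.stringify(a) === JSON.stringify(b) — replay-safe
```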

8.6.3 Retry and circuit breaker

Integrate Polly for safe retries and fallback:

// CircuitBreakerAsync is built from a PolicyBuilder, then composed via WrapAsync (Polly v7-style API).
var breaker = Policy
    .Handle<Exception>()
    .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30));

var policy = Policy
    .Handle<Exception>()
    .WaitAndRetryAsync(3, i => TimeSpan.FromSeconds(Math.Pow(2, i)))
    .WrapAsync(breaker);

Wrap outbound HTTP or message operations in these pipelines to avoid cascade failures.

8.7 Observability & SLOs

Real-time correctness depends on continuous visibility into flow health. Observability metrics should measure lag, loss, and load.

8.7.1 Core metrics

  • Event lag – time between enqueue and emit.
  • Queue depth – channel length or pending observable items.
  • Dropped vs processed – distinguish intentional sampling from overload loss.
  • Resubscription counts – signal churn due to reconnections.

Example with .NET metrics API:

meter.CreateObservableGauge("feed_lag_ms", () => MeasureLag());

8.7.2 Traces

Use ActivitySource to correlate user actions through RxJS streams to backend commands. Each request carries a correlation ID propagated via headers:

ajax.post('/trade', cmd, { headers: { 'X-Correlation-Id': uuid() } });

Backends log and trace using ILogger.BeginScope with the same ID, enabling end-to-end timing diagnostics.

8.7.3 SLO measurement

Define measurable service-level objectives:

  • 99th percentile latency < 250 ms for feed updates.
  • 99.9% successful reconnections within 5 s.
  • 0 unbounded queues.

These metrics should feed into dashboards, triggering alerts before users notice degradation.

8.8 Security & resilience

Reactivity increases attack surfaces—persistent connections, serialized commands, and user streams. Secure each edge explicitly.

8.8.1 Auth for WebSockets/SSE

Use JWT-based authentication for SignalR and managed PubSub:

services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Events = new JwtBearerEvents
        {
            OnMessageReceived = ctx =>
            {
                var accessToken = ctx.Request.Query["access_token"];
                if (!string.IsNullOrEmpty(accessToken))
                    ctx.Token = accessToken;
                return Task.CompletedTask;
            }
        };
    });

This ensures every connection is traceable and revocable.

8.8.2 Per-client quotas

Implement per-tenant rate limits or channel capacities to avoid a single client overwhelming streams:

if (client.EventCount > 1000) await client.DisconnectAsync("Rate limit exceeded");

8.8.3 Server-side filtering

Always perform filtering and transformation server-side before fan-out. Never push raw multi-tenant data to clients for local filtering—this prevents N× message amplification.

8.8.4 Recovery drills

Simulate network drops, database throttling, and reconnections in staging. Verify that backpressure logic and retries behave deterministically. Periodic drills turn theoretical resilience into verified operational capability.

8.8.5 Infrastructure hygiene

Use rolling restarts, connection draining, and idle connection timeouts. Containerize each component and tag versions consistently across frontend and backend builds to maintain message schema compatibility.
