360-Degree Performance Reviews: Configurable Workflows, Anonymous Feedback, and Goal Tracking with .NET

1 Why 360-Degree Reviews Need Configurable Workflows in 2025

1.1 Problem Statement and Scope

In 2025, performance management is no longer a once-a-year ritual. The rise of hybrid work, flatter organizations, and outcome-based cultures has exposed how rigid review systems fail to capture the real contributions of individuals and teams. Traditional HR systems—built around static appraisal forms and single-line manager ratings—struggle to reflect today’s distributed, cross-functional environments. What’s missing is configurability: the ability to tailor performance review workflows, participants, and timing to the needs of different teams and business rhythms.

A 360-degree performance review system aims to solve this gap by collecting input from multiple perspectives—peers, direct reports, and managers—to create a holistic view of performance. However, without configurable workflows, even 360 programs quickly become brittle. For example, a company may want to run quarterly feedback cycles for engineers tied to OKRs, while maintaining annual reviews for corporate roles. Another may require anonymous peer input in EMEA but not in North America due to differing labor laws.

The scope of this article is to show how to design and implement a 360-degree performance review platform on .NET 8 LTS, with configurable workflows, anonymous feedback, and goal tracking. The focus is architectural and technical—covering workflow engines, domain modeling, data privacy, and HRIS integration—aimed at architects and senior developers designing enterprise-grade systems.

1.2 What “360” Really Solves and What It Doesn’t

A 360-degree review broadens the input signal for evaluating performance. In traditional models, a single manager’s perception dominates the result, often colored by recency bias or limited visibility into the employee’s collaborations. By gathering input from multiple directions—manager, peers, and reports—you can surface strengths and blind spots otherwise invisible.

1.2.1 What 360 Solves

  • Bias dilution: Multiple perspectives reduce the impact of individual bias. Peer ratings often surface collaboration and reliability traits that managers can’t observe directly.
  • Trust and transparency: When implemented well, employees see feedback as balanced and data-backed, not arbitrary.
  • Actionability: Combining quantitative scores with qualitative comments gives managers richer coaching material.
  • Cultural reinforcement: Structured feedback loops can strengthen learning and accountability norms.

1.2.2 What 360 Doesn’t Solve

  • It doesn’t replace performance judgment. Aggregated feedback helps, but calibration and context still matter.
  • It doesn’t guarantee fairness. Anonymity can protect raters but may also encourage unconstructive comments if not moderated.
  • It doesn’t automate growth. Insights still require follow-up—coaching, goal updates, and career planning.

An effective 360 system is not about collecting more feedback; it is about ensuring the right feedback flows to the right people at the right time under controlled, auditable processes.

1.3 From Annual Cycles to Continuous/OKR-Based Reviews: Why One Size Doesn’t Fit All

The annual review model emerged in an era of stable roles and long planning horizons. In fast-moving, cross-functional teams, that rhythm feels archaic. Today’s performance programs must accommodate several modes:

  • Annual cycles: still useful for formal compensation and promotion decisions.
  • Quarterly or continuous feedback: short loops aligned to agile delivery cadences.
  • OKR-linked reviews: directly tied to measurable objectives and key results.

A configurable workflow must support all three without code changes. For instance:

reviewCycle:
  type: "okr-linked"
  stages:
    - name: "Objective Setup"
      due: "2025-01-15"
    - name: "Midpoint Feedback"
      due: "2025-03-30"
      anonymous: true
    - name: "Final Review"
      due: "2025-06-30"

Each cycle type brings different requirements—approval routing, anonymity rules, and SLA timers. Rather than baking these in code, they should be driven by configuration (YAML/JSON or database-backed templates) interpreted by a workflow engine such as Elsa Workflows or Durable Functions.

This separation of configuration from execution is crucial for scalability. HR teams can evolve review programs—change stages, thresholds, or participants—without redeploying the application.
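As a sketch of that separation, a template like the one above can bind to a small typed model that the engine then interprets. The JSON variant is shown here because System.Text.Json is built in; a YAML loader would bind the same records. The record and loader names are illustrative, not part of any specific engine.

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Illustrative typed model for the cycle template above (JSON variant).
public sealed record StageConfig(string Name, DateOnly Due, bool Anonymous = false);
public sealed record ReviewCycleConfig(string Type, List<StageConfig> Stages);

public static class CycleConfigLoader
{
    private static readonly JsonSerializerOptions Options = new(JsonSerializerDefaults.Web);

    // Deserialize an uploaded template into the model the workflow engine consumes.
    public static ReviewCycleConfig Load(string json) =>
        JsonSerializer.Deserialize<ReviewCycleConfig>(json, Options)
            ?? throw new InvalidOperationException("Empty cycle configuration.");
}
```

With templates stored this way, HR can edit stages or anonymity flags in the admin UI and the next cycle simply loads the new version.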

1.4 Objectives for the System: Configurability, Anonymity, Calibration, SCIM-Based HRIS Integration, Auditability

Designing a modern 360-degree review platform means balancing HR policy flexibility with engineering rigor. Let’s define the key objectives.

1.4.1 Configurability

Every organization has its own structure and performance philosophy. Workflows must be model-driven, not hardcoded. That means:

  • Defining review templates as JSON/YAML or in a database.
  • Allowing conditional routing (e.g., skip-level approvals for certain grades).
  • Supporting multiple workflow engines (embedded or external).

1.4.2 Anonymity and Privacy

Trust hinges on data handling. The system must enforce:

  • Minimum rater thresholds before revealing results.
  • Redaction of personally identifiable terms from text.
  • Configurable anonymity levels (fully anonymous, confidential, or named).

This requires both data model support (aggregated buckets) and policy-aware presentation logic.

1.4.3 Calibration

Managers and HRBPs must align ratings across teams to avoid grade inflation or compression. The system should facilitate panel-based calibration with configurable distribution guidance—soft bell curves or percentile bands—without enforcing punitive forced rankings.

1.4.4 SCIM-Based HRIS Integration

HR data (employees, managers, reporting lines) is the backbone of the system. Integrating via SCIM 2.0 ensures consistent identity and group provisioning from HRIS or IdP sources (Workday, SuccessFactors, Entra ID). It avoids shadow directories and supports rehires, contingent workers, and org moves.
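To illustrate the provisioning side, here is a minimal sketch of mapping an inbound SCIM 2.0 user resource (RFC 7643 core attributes plus the enterprise extension) onto internal fields. The `ScimProvisionResult` type and the specific attributes extracted are assumptions for this example, not a full SCIM implementation.

```csharp
using System;
using System.Text.Json;

// Hypothetical mapper from a SCIM 2.0 user resource to internal HR fields.
public sealed record ScimProvisionResult(string ExternalId, string DisplayName, string Department);

public static class ScimUserMapper
{
    // Enterprise extension schema URN per RFC 7643.
    private const string EnterpriseExt =
        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User";

    public static ScimProvisionResult Map(string scimJson)
    {
        using var doc = JsonDocument.Parse(scimJson);
        var root = doc.RootElement;
        // Department lives under the enterprise extension, if present.
        var dept = root.TryGetProperty(EnterpriseExt, out var ext)
                   && ext.TryGetProperty("department", out var d)
            ? d.GetString() ?? "" : "";
        return new ScimProvisionResult(
            root.GetProperty("externalId").GetString()!,
            root.GetProperty("displayName").GetString()!,
            dept);
    }
}
```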

1.4.5 Auditability

Performance data influences pay and promotions—making audit trails essential. Each workflow action (creation, approval, feedback submission) must be logged with timestamps, user context, and before/after state snapshots. Audit data can feed into compliance exports or analytics pipelines.
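A minimal audit-trail sketch under these requirements might look like the following; the `AuditEntry` shape and in-memory list are illustrative (a real system would append to a durable, tamper-evident store).

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Each workflow action is appended with actor, timestamp, and state snapshots.
public sealed record AuditEntry(
    DateTimeOffset At, string Actor, string Action,
    string? BeforeState, string? AfterState);

public sealed class AuditLog
{
    private readonly List<AuditEntry> _entries = new(); // swap for a durable store

    public void Record<T>(string actor, string action, T? before, T? after) =>
        _entries.Add(new AuditEntry(
            DateTimeOffset.UtcNow, actor, action,
            before is null ? null : JsonSerializer.Serialize(before),
            after is null ? null : JsonSerializer.Serialize(after)));

    public IReadOnlyList<AuditEntry> Entries => _entries;
}
```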

Together, these pillars define an architecture that’s flexible for HR, trustworthy for employees, and maintainable for engineering.

1.5 Target Runtime: .NET 8 LTS and the Path to .NET 10

.NET 8 (LTS, released November 2023) is the current production baseline for enterprise systems in 2025. It brings native AOT improvements, performance boosts in ASP.NET Core, and modern cloud-native support via minimal APIs and container-first tooling. Building a 360 platform on .NET 8 ensures long-term stability and security updates through late 2026.

1.5.1 .NET Support Policy Overview

Microsoft’s release cadence alternates between STS (Standard Term Support) and LTS (Long Term Support) each November:

  • LTS versions (e.g., .NET 6, 8, 10) receive 3 years of patches.
  • STS versions (e.g., .NET 7, 9) receive 18 months.

For HR-critical systems with long operational lifetimes, LTS is the default choice.

1.5.2 Preparing for .NET 10

.NET 10 (projected November 2025 LTS) will extend cloud-native improvements, unified container builds, and stronger telemetry (OpenTelemetry baked in). By targeting .NET 8 now and adhering to clean architectural principles—isolating domain, infrastructure, and UI—you can upgrade to .NET 10 with minimal friction once it reaches general availability.


2 Domain Modeling the Performance Program

2.1 Core Entities and Relationships

A clear domain model keeps workflow configuration and data semantics consistent. Let’s walk through the key building blocks.

2.1.1 Person, Employment, Identity, and App User

These four concepts often get conflated. Separate them early.

  • Person: the immutable human entity. Holds name, email, and metadata.
  • Employment: the HR record (employeeId, department, cost center, managerId). Source-of-truth from HRIS via SCIM.
  • Identity: the authentication principal (from Entra ID, Okta, etc.).
  • App User: the application-level projection combining identity and HR attributes.

A simplified C# model:

public record Person(Guid Id, string FullName, string Email);
public record Employment(Guid Id, string EmployeeId, Guid PersonId, string Department, string ManagerId);
public record Identity(Guid Id, string ExternalId, string Provider, string UserPrincipalName);
public record AppUser(Guid Id, Guid PersonId, string Role, string Region);

This separation enables clean integration with IdPs while preserving HR linkage.

2.1.2 ReviewCycle, ReviewTemplate, Questionnaire

  • ReviewCycle defines a single run (e.g., “FY2025 Annual Review”).
  • ReviewTemplate provides reusable stage definitions and forms.
  • Questionnaire holds questions, rating scales, and weights.

Example schema:

public class ReviewCycle {
  public Guid Id { get; set; }
  public string Name { get; set; }
  public string Type { get; set; } // annual, okr, continuous
  public DateTime StartDate { get; set; }
  public DateTime EndDate { get; set; }
  public List<StageDefinition> Stages { get; set; }
}

2.1.3 Participants: Subject, Manager, Peers, Reports, Skip-Level, Self

A robust model treats participation as dynamic, not hardcoded roles:

public class ReviewParticipant {
  public Guid ReviewId { get; set; }
  public Guid PersonId { get; set; }
  public ParticipantRole Role { get; set; }
}
public enum ParticipantRole { Self, Manager, Peer, DirectReport, SkipLevel }

This enables rule-based generation (e.g., auto-include direct reports, allow self-nomination of peers).
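A sketch of that rule-based generation follows. The `OrgDirectory` lookup and `ParticipantRules` helper are hypothetical; the participant types restate the model above so the example is self-contained.

```csharp
using System;
using System.Collections.Generic;

// Types from the model above, restated for self-containment.
public enum ParticipantRole { Self, Manager, Peer, DirectReport, SkipLevel }
public class ReviewParticipant
{
    public Guid ReviewId { get; set; }
    public Guid PersonId { get; set; }
    public ParticipantRole Role { get; set; }
}

// Hypothetical org lookup: manager and direct reports per person.
public sealed record OrgDirectory(
    Func<Guid, Guid?> ManagerOf, Func<Guid, IReadOnlyList<Guid>> ReportsOf);

public static class ParticipantRules
{
    public static List<ReviewParticipant> Generate(Guid reviewId, Guid subjectId, OrgDirectory org)
    {
        var result = new List<ReviewParticipant>
        {
            new() { ReviewId = reviewId, PersonId = subjectId, Role = ParticipantRole.Self }
        };
        if (org.ManagerOf(subjectId) is Guid mgr)
            result.Add(new() { ReviewId = reviewId, PersonId = mgr, Role = ParticipantRole.Manager });
        foreach (var report in org.ReportsOf(subjectId))
            result.Add(new() { ReviewId = reviewId, PersonId = report, Role = ParticipantRole.DirectReport });
        // Peers are typically self-nominated afterwards and approved by the manager.
        return result;
    }
}
```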

2.1.4 Goal/OKR Model

Goals anchor feedback to measurable outcomes. The common structure:

  • Objective: qualitative aim (“Improve team delivery predictability”)
  • Key Results: quantifiable metrics (“Reduce missed sprint commitments to <5%”)
  • Initiatives: actions driving progress.

public class Objective {
  public Guid Id { get; set; }
  public string Title { get; set; }
  public List<KeyResult> KeyResults { get; set; }
}
public class KeyResult {
  public Guid Id { get; set; }
  public string Metric { get; set; }
  public double Target { get; set; }
  public double Current { get; set; }
}

Tie Key Results to performance cycles for traceability.

2.1.5 Feedback Artifacts

Each feedback submission produces artifacts:

  • Ratings: numerical (1–5 scale, Likert)
  • Comments: free text
  • Attachments: documents or screenshots
  • Evidence links: URLs to commits, PRs, or OKRs

These feed into aggregated summaries per participant group.
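One possible shape for such an artifact is sketched below; property names are illustrative, though the persistence examples later in this article assume a `FeedbackItem` entity roughly like this.

```csharp
using System;
using System.Collections.Generic;

// A single feedback submission: ratings, free text, and evidence in one artifact.
public class FeedbackItem
{
    public Guid Id { get; set; }
    public Guid ReviewId { get; set; }
    public Guid RaterId { get; set; }          // cleared or bucketed when anonymous
    public string Role { get; set; } = "";     // Peer, Manager, DirectReport, ...
    public Dictionary<string, int> Ratings { get; set; } = new();  // questionId -> 1..5
    public string? Comment { get; set; }
    public List<Uri> EvidenceLinks { get; set; } = new();  // commits, PRs, OKR links
}
```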

2.2 Approval and Routing Concepts

2.2.1 ApprovalMatrix and Overrides

In real organizations, approval routing rarely follows a clean hierarchy. You’ll encounter dotted-line managers, matrix structures, and temporary delegations. Define an ApprovalMatrix with flexible conditions:

{
  "rules": [
    { "role": "Manager", "approver": "managerId" },
    { "role": "SecondaryManager", "condition": "region == 'EMEA'", "approver": "skipLevelId" },
    { "role": "HRBP", "override": true }
  ]
}

Store this policy centrally and interpret it in the workflow engine. For enforcement, use Casbin.NET for ABAC (Attribute-Based Access Control) policies—powerful enough to express “Managers in cost center X can approve reviews for grade ≤ Y.”

2.2.2 Reassignment, Delegation, and Escalation

Reviews often need reassignment when managers change or employees go on leave. Embed rules:

  • Delegation: assign temporarily to another approver.
  • Escalation: auto-route to next approver after SLA expiry.
  • Reassignment: maintain audit trail for historical continuity.

With Quartz.NET, schedule escalations as background jobs to check pending approvals and trigger notifications.

2.3 Privacy and Anonymity Requirements

2.3.1 Anonymity Thresholds, Aggregation, Redaction

Anonymity must be technically enforced, not just promised. Common patterns:

  • Thresholds: don’t reveal peer averages unless ≥3 raters.
  • Aggregation windows: delay visibility until all submissions close.
  • Redaction: detect and remove personally identifying terms in comments.

Example logic:

if (peerCount < 3)
    return FeedbackSummary.Hidden("Insufficient responses to display peer feedback.");
return AggregateAndAnonymize(feedbackItems);

Integrate NLP or rule-based filters to redact names and pronouns.

2.3.2 Trade-offs and Policy Toggles

Organizations vary widely:

  • Anonymous: full anonymity (common in large orgs for peer input).
  • Confidential: HR can trace but not share rater identities.
  • Named: open feedback, typical in startups promoting transparency.

A configuration-driven toggle allows per-cycle selection:

anonymityMode: "confidential"
thresholds:
  peerMinimum: 3
  reportMinimum: 2

UX should clearly communicate the mode to avoid mistrust.

2.4 Calibration and Distribution Outcomes

2.4.1 What Calibration Is—and Isn’t

Calibration aligns ratings across teams to ensure fairness. It’s not about enforcing quotas—it’s about normalizing expectations. Panels review employee ratings, evidence, and peer feedback to reach consensus.

A typical workflow:

  1. HR compiles distribution data.
  2. Managers discuss outliers in calibration panels.
  3. Adjustments logged and justified.
  4. Final decisions frozen before publishing.

Use workflow stages to manage this sequence (CollectFeedback → ManagerReview → Calibration → PublishResults).

2.4.2 The Bell-Curve Debate

Forced bell curves once dominated performance management, aiming for statistical fairness. In practice, they often damage morale and collaboration. Modern calibration favors soft guidance:

  • Suggest target distribution bands (e.g., 10% top, 70% middle, 20% developing).
  • Allow variance with justification.
  • Track calibration variance across orgs for HR analytics.

Your platform should model these distributions as advisory, not mandatory, and log all adjustments for compliance. Include analytics to monitor for rating compression or bias drift over cycles.
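A minimal sketch of such an advisory check follows, assuming a 1–5 rating scale bucketed into top/middle/developing; the bucket boundaries and helper names are assumptions for this example, and variances are reported, never enforced.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DistributionGuidance
{
    // Bucket a non-empty list of 1..5 ratings into fractions summing to 1.0.
    public static Dictionary<string, double> Actual(IReadOnlyList<int> ratings) => new()
    {
        ["top"] = ratings.Count(r => r >= 5) / (double)ratings.Count,
        ["middle"] = ratings.Count(r => r is 3 or 4) / (double)ratings.Count,
        ["developing"] = ratings.Count(r => r <= 2) / (double)ratings.Count,
    };

    // Flag buckets that drift beyond tolerance from the advisory bands.
    public static IEnumerable<string> Variances(
        Dictionary<string, double> actual, Dictionary<string, double> target, double tolerance = 0.05) =>
        target.Where(t => Math.Abs(actual[t.Key] - t.Value) > tolerance)
              .Select(t => $"{t.Key}: actual {actual[t.Key]:P0} vs guidance {t.Value:P0}");
}
```

Flagged variances feed the HR analytics dashboard rather than blocking calibration, keeping the bands advisory.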


3 Reference Architecture (Clean/Hexagonal) for a 360 Platform on .NET

3.1 High-Level Topology

A 360-degree review platform benefits from a clean or hexagonal architecture because it separates volatile external concerns—UI, workflow engines, and HR integrations—from stable domain logic. Each core capability lives as an autonomous service with clear boundaries and contracts. The hexagonal approach also simplifies evolution: you can replace the workflow engine or persistence layer without breaking the domain.

3.1.1 Core Services

At the heart of the platform are seven core services, each with a defined purpose and its own persistence boundary:

  1. Reviews Service – Manages review cycles, templates, participants, and feedback artifacts. It owns the canonical domain model.
  2. Workflow Service – Executes the configured review process (e.g., rater nominations, submissions, calibration). This can use Elsa Workflows, Durable Functions, or Dapr Workflows.
  3. Goals/OKRs Service – Stores objectives, key results, and progress updates. Links to Reviews through shared identifiers.
  4. Anonymization Service – Handles feedback aggregation, threshold enforcement, and redaction before feedback is revealed to managers.
  5. Calibration & Analytics Service – Runs panels, computes distribution statistics, and feeds metrics to dashboards or BI pipelines.
  6. Directory/SCIM Service – Synchronizes identities and employment records with HRIS or IdP systems using SCIM 2.0 endpoints.
  7. Notifications Service – Sends email, chat, or push notifications for reminders, approvals, and escalations.

Each service exposes its API through ASP.NET Core minimal APIs with OpenAPI documentation. Communication between services happens via asynchronous messages (RabbitMQ or Azure Service Bus) and REST for synchronous reads.

3.1.2 Edge/API Composition

The edge layer consists of a public API gateway and BFF (Backend for Frontend) components.

  • API Gateway: Handles routing, rate limiting, and authentication. Common tools include YARP (Yet Another Reverse Proxy) for self-hosted environments or Azure API Management in cloud-native setups.
  • BFF Layer: Provides client-optimized endpoints for web and mobile apps. For example, the BFF might aggregate review data and goal progress in one response for dashboard pages.

Example of a minimal API definition with OpenAPI in .NET 8:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

app.MapGet("/reviews/{id}", async (Guid id, IReviewRepository repo) =>
{
    var review = await repo.GetAsync(id);
    return review is not null ? Results.Ok(review) : Results.NotFound();
})
.WithOpenApi(op => new(op) { Summary = "Get a review by ID" });

app.UseSwagger();
app.UseSwaggerUI();
app.Run();

The BFF can then consume multiple such services via internal HTTP or gRPC calls, caching lightweight projections in Redis to minimize latency.
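The caching side of that composition can be sketched as below. In production the store would be `IDistributedCache` backed by Redis; a TTL'd dictionary keeps this example self-contained, and the `DashboardView` DTO and loader delegate are assumptions.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// The client-optimized projection the BFF returns for dashboard pages.
public sealed record DashboardView(Guid PersonId, int OpenReviews, double AvgGoalProgress);

public sealed class ProjectionCache
{
    private readonly ConcurrentDictionary<Guid, (DashboardView View, DateTimeOffset Expires)> _cache = new();
    private readonly TimeSpan _ttl = TimeSpan.FromSeconds(30);

    public async Task<DashboardView> GetAsync(Guid personId, Func<Guid, Task<DashboardView>> load)
    {
        if (_cache.TryGetValue(personId, out var hit) && hit.Expires > DateTimeOffset.UtcNow)
            return hit.View;
        var view = await load(personId);  // fan out to Reviews + Goals services here
        _cache[personId] = (view, DateTimeOffset.UtcNow.Add(_ttl));
        return view;
    }
}
```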

3.2 Persistence and Data Flow

A 360 platform deals with both transactional consistency and large semi-structured data (feedback forms). Splitting persistence into relational OLTP and document stores ensures scalability without compromising referential integrity.

3.2.1 OLTP Stores

The relational layer—PostgreSQL or SQL Server—is ideal for structured entities: users, review cycles, goals, and approvals. EF Core 8 provides excellent support for both providers.

public class ReviewDbContext : DbContext
{
    public DbSet<ReviewCycle> ReviewCycles => Set<ReviewCycle>();
    public DbSet<FeedbackItem> FeedbackItems => Set<FeedbackItem>();
    public DbSet<OutboxMessage> OutboxMessages => Set<OutboxMessage>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<ReviewCycle>()
            .HasMany(r => r.Stages)
            .WithOne()
            .OnDelete(DeleteBehavior.Cascade);
    }
}

Feedback payloads—multi-question surveys with arbitrary structures—fit better in a document store such as MongoDB or Cosmos DB. You can store survey responses as JSON while linking them to relational review metadata.
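One way to shape the document-side record is sketched below: the flexible survey payload stays as raw JSON while `ReviewId`/`RaterId` link back to relational metadata. The record and codec names are illustrative; with EF Core 8, a similar shape could alternatively map to a JSON column via owned-entity mapping.

```csharp
using System;
using System.Text.Json;

// Document-store record: flexible payload plus relational link keys.
public sealed record SurveyResponseDocument(Guid Id, Guid ReviewId, Guid RaterId, JsonElement Answers);

public static class SurveyResponseCodec
{
    public static SurveyResponseDocument Create(Guid reviewId, Guid raterId, string answersJson)
    {
        using var doc = JsonDocument.Parse(answersJson);
        // Clone so the element outlives the parsed document's buffer.
        return new SurveyResponseDocument(Guid.NewGuid(), reviewId, raterId, doc.RootElement.Clone());
    }
}
```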

3.2.2 Eventing and Outbox Pattern

Workflow transitions (e.g., “FeedbackSubmitted”, “ReviewLocked”) should emit events. To avoid data loss and duplication, use the Outbox pattern—persist events in the same transaction as domain state, then publish asynchronously.

public class ReviewService
{
    private readonly ReviewDbContext _db;
    private readonly IOutboxPublisher _publisher;

    public async Task SubmitFeedback(FeedbackItem item)
    {
        _db.FeedbackItems.Add(item);
        _db.OutboxMessages.Add(new OutboxMessage
        {
            EventType = "FeedbackSubmitted",
            Payload = JsonSerializer.Serialize(item)
        });
        await _db.SaveChangesAsync();
    }
}

A background worker processes pending messages:

public async Task ProcessOutboxAsync()
{
    var pending = await _db.OutboxMessages
        .Where(m => !m.Published)
        .ToListAsync();

    foreach (var message in pending)
    {
        await _bus.PublishAsync(message.EventType, message.Payload);
        message.Published = true;
    }
    await _db.SaveChangesAsync();
}

This pattern maintains transactional guarantees while integrating with message brokers such as RabbitMQ or Azure Service Bus. Events flow into downstream services like Notifications or Analytics for processing.

3.3 Cross-Cutting Concerns

3.3.1 Identity and Authentication

Identity is handled through OpenID Connect/OAuth2. Use OpenIddict if you need a self-hosted provider, or integrate with Entra ID or Okta if your organization already manages SSO.

Example of configuring OpenIddict in ASP.NET Core:

builder.Services.AddOpenIddict()
    .AddServer(options =>
    {
        options.SetAuthorizationEndpointUris("/connect/authorize")
               .SetTokenEndpointUris("/connect/token");

        options.AllowAuthorizationCodeFlow();
        options.RegisterScopes("reviews.read", "reviews.write");
        options.AddEphemeralEncryptionKey();  // dev only: use persisted
        options.AddEphemeralSigningKey();     // certificates in production
    });

Tokens carry claims like department or role that downstream services use for ABAC (Attribute-Based Access Control).

3.3.2 Authorization with Casbin.NET

Casbin provides a policy-driven approach to authorization. A simple policy might look like this:

p, manager, reviews, approve, region == "EMEA"
p, hrbp, reviews, override, costCenter == "CORP"
g, alice, manager

In C#:

var e = new Enforcer("model.conf", "policy.csv");
var sub = "alice"; // user
var obj = "reviews";
var act = "approve";

if (e.Enforce(sub, obj, act, new { region = "EMEA" }))
    Console.WriteLine("Access granted");

This approach scales elegantly across multi-tenant systems with regional or grade-based policies.

3.3.3 Background Processing with Quartz.NET

Quartz.NET is used for reminders, deadline checks, and SLA escalations. Jobs are scheduled through cron-like expressions.

public class ReminderJob : IJob
{
    private readonly INotificationService _notify;

    public ReminderJob(INotificationService notify) => _notify = notify;

    public Task Execute(IJobExecutionContext context)
    {
        return _notify.SendRemindersAsync();
    }
}

Registering the job:

builder.Services.AddQuartz(q =>
{
    q.ScheduleJob<ReminderJob>(trigger => trigger
        .WithIdentity("reminder-job")
        .WithCronSchedule("0 0 8 * * ?")); // daily at 8 AM (Quartz cron includes a seconds field)
});
builder.Services.AddQuartzHostedService();

3.3.4 Observability with OpenTelemetry

With multiple services and async workflows, observability must be built-in. OpenTelemetry .NET provides standardized traces, metrics, and logs. Each service exports telemetry to a collector, which forwards it to Grafana, Azure Monitor, or Datadog.

builder.Services.AddOpenTelemetry()
    .WithTracing(b => b
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddSource("ReviewsService")
        .AddOtlpExporter())
    .WithMetrics(b => b
        .AddRuntimeInstrumentation()
        .AddOtlpExporter());

Ensure correlation IDs and trace context are propagated through message headers or workflow instances for full end-to-end visibility.
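One way to do that propagation is sketched below using the BCL's `DistributedContextPropagator`, which injects W3C `traceparent` headers; the `TraceHeaders` helper is an assumption for this example, and a consumer would extract the same headers to continue the trace.

```csharp
using System.Collections.Generic;
using System.Diagnostics;

public static class TraceHeaders
{
    // Copy the current trace context into outgoing message headers.
    public static Dictionary<string, string> Inject(Activity? activity)
    {
        var headers = new Dictionary<string, string>();
        DistributedContextPropagator.Current.Inject(
            activity, headers,
            static (carrier, key, value) => ((Dictionary<string, string>)carrier!)[key] = value);
        return headers;
    }
}
```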

3.4 Workflow Options in .NET

3.4.1 Embedded Engine: Elsa Workflows

Elsa Workflows is ideal for model-driven, human-centric flows. It ships with a visual designer that HR admins can use to configure process logic. You can embed the designer directly in the admin portal, store workflow definitions in a database, and trigger them via REST.

A review flow can look like:

  1. Start → RaterNominationActivity → WaitForSubmission → ManagerApproval → CalibrationPanel → Publish

Example C# registration:

builder.Services.AddElsa(elsa => elsa
    .UseEntityFrameworkPersistence()
    .AddConsoleActivities()
    .AddHttpActivities());

Admins can adjust due dates or insert new steps without redeploying the app.

3.4.2 Durable Functions vs Dapr Workflows

When scalability or cloud-native operation matters, Azure Durable Functions or Dapr Workflows are alternatives.

| Feature       | Durable Functions                        | Dapr Workflows                            |
| ------------- | ---------------------------------------- | ----------------------------------------- |
| Model         | Code-first (C# orchestrators)            | Code + stateful API                       |
| Hosting       | Azure Functions                          | Any container platform                    |
| State storage | Azure Tables/Blob                        | Dapr state store                          |
| Cost          | Consumption/serverless                   | Container runtime cost                    |
| Ideal for     | Serverless reviews, simple orchestration | Multi-service sagas, long-lived workflows |

Durable Functions orchestrators:

[FunctionName("ReviewCycleOrchestrator")]
public async Task Run([OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var review = context.GetInput<ReviewCycle>();
    await context.CallActivityAsync("SendNominationEmails", review);
    await context.CreateTimer(review.EndDate, CancellationToken.None);
    await context.CallActivityAsync("LockReviews", review.Id);
}

3.4.3 Messaging and Sagas with MassTransit

For distributed review workflows that span services (e.g., Reviews, Notifications, Analytics), MassTransit coordinates long-running transactions through sagas.

Example saga definition:

public class ReviewSaga : MassTransitStateMachine<ReviewState>
{
    public State WaitingForFeedback { get; private set; }
    public State Completed { get; private set; }
    public Event<FeedbackSubmitted> FeedbackSubmitted { get; private set; }

    public ReviewSaga()
    {
        InstanceState(x => x.CurrentState);
        Event(() => FeedbackSubmitted, x => x.CorrelateById(ctx => ctx.Message.ReviewId));

        Initially(
            When(FeedbackSubmitted)
                .Then(ctx => ctx.Saga.FeedbackCount++)
                .TransitionTo(WaitingForFeedback));

        During(WaitingForFeedback,
            When(FeedbackSubmitted)
                .Then(ctx => ctx.Saga.FeedbackCount++)
                .If(ctx => ctx.Saga.FeedbackCount >= ctx.Saga.RequiredCount,
                    x => x.TransitionTo(Completed)));
    }
}

MassTransit 8 remains OSS and reliable for production. If using version 9 or later, review the licensing terms carefully before upgrading.


4 Designing Configurable Review Workflows (From YAML to Running Instances)

4.1 Workflow DSL and Configuration Strategy

A configurable workflow starts with a declarative schema describing stages, SLAs, and transitions. YAML or JSON is practical because HR teams can edit them through admin UIs or version control.

4.1.1 ReviewCycle Schema

id: "FY2025-Annual"
type: "annual"
stages:
  - name: "Nomination"
    dueDays: 14
    activity: "NominateRaters"
  - name: "Peer Feedback"
    dueDays: 21
    activity: "CollectFeedback"
    anonymous: true
  - name: "Manager Summary"
    dueDays: 7
    activity: "CompileManagerSummary"
  - name: "Calibration"
    dueDays: 10
    activity: "RunCalibration"
  - name: "Finalize"
    dueDays: 5
    activity: "PublishResults"

When uploaded, this configuration is validated and translated into workflow instances. You can store templates in a database and version them per cycle.
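The validation step can be sketched as follows; the `StageDef` record mirrors the schema above, and the rules shown (unique names, positive due-day windows, activities resolvable against the registered set) are illustrative rather than exhaustive.

```csharp
using System.Collections.Generic;
using System.Linq;

// Mirrors one stage entry of the YAML schema above.
public sealed record StageDef(string Name, int DueDays, string Activity);

public static class TemplateValidator
{
    public static List<string> Validate(IReadOnlyList<StageDef> stages, ISet<string> knownActivities)
    {
        var errors = new List<string>();
        if (stages.Select(s => s.Name).Distinct().Count() != stages.Count)
            errors.Add("Stage names must be unique.");
        errors.AddRange(stages.Where(s => s.DueDays <= 0)
            .Select(s => $"Stage '{s.Name}' must have a positive dueDays."));
        errors.AddRange(stages.Where(s => !knownActivities.Contains(s.Activity))
            .Select(s => $"Stage '{s.Name}' references unknown activity '{s.Activity}'."));
        return errors;
    }
}
```

Rejecting a template at upload time, with per-stage errors, is far cheaper than failing mid-cycle.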

4.1.2 Pluggable Steps

Each stage maps to a reusable component (activity):

  • ScopeSelectionActivity
  • RaterNominationActivity
  • SurveyDispatchActivity
  • ReminderActivity
  • LockCycleActivity
  • CalibrationActivity

A simple registry pattern allows plug-in discovery:

public interface IWorkflowActivity
{
    Task ExecuteAsync(WorkflowContext context);
}

public class ActivityRegistry
{
    private readonly Dictionary<string, IWorkflowActivity> _activities = new();

    public void Register(string name, IWorkflowActivity activity) =>
        _activities[name] = activity;

    public IWorkflowActivity Get(string name) => _activities[name];
}

This design decouples the workflow definition from execution logic, enabling extensibility.

4.2 Implementing with Elsa Workflows (Option A)

4.2.1 Modeling Stages and Branching

With Elsa, you define activities declaratively. Each stage becomes a node, connected by transitions or signals.

public class ReviewWorkflow : IWorkflow
{
    public void Build(IWorkflowBuilder builder)
    {
        builder
            .StartWith<NominationActivity>()
            .Then<CollectFeedbackActivity>()
            .Then<ManagerSummaryActivity>()
            .Then<CalibrationActivity>()
            .Then<PublishResultsActivity>();
    }
}

You can introduce conditions:

.Then<DecisionActivity>(x => x.WithCondition(ctx => ctx.Data.NeedsCalibration))

4.2.2 Human Tasks, Timers, and Multi-Tenancy

Elsa supports human tasks for approvals and timers for SLAs. Each tenant can run isolated workflow instances with separate persistence schemas. Persistence configuration:

builder.Services.AddElsa(elsa => elsa
    .UseEntityFrameworkPersistence(options =>
        options.UseSqlServer("Server=.;Database=ElsaTenant1")));

Versioning allows workflows to evolve safely—running instances finish under their original definition while new cycles start under the updated one.

4.3 Implementing with Durable Functions (Option B)

4.3.1 Orchestrator Patterns

Durable Functions enable simple orchestration with strong reliability. Use fan-out/fan-in to dispatch surveys and wait for all responses.

[FunctionName("RunReviewCycle")]
public async Task Run([OrchestrationTrigger] IDurableOrchestrationContext ctx)
{
    var review = ctx.GetInput<ReviewCycle>();
    var tasks = new List<Task>();

    foreach (var rater in review.Raters)
        tasks.Add(ctx.CallActivityAsync("SendSurvey", rater));

    await Task.WhenAll(tasks);

    await ctx.CallActivityAsync("AggregateFeedback", review.Id);
}

4.3.2 Compensation Patterns

Handle edge cases like late raters or manager changes via compensation. For high throughput, consider Netherite, an alternative Durable Task backend with significantly higher state-storage throughput.

try
{
    await ctx.CallActivityAsync("LockStage", review.Id);
}
catch (TimeoutException)
{
    await ctx.CallActivityAsync("ReopenStage", review.Id);
}

4.4 Implementing with Dapr Workflows (Option C)

4.4.1 Using Dapr State and Pub/Sub

Dapr Workflows are ideal when your system already uses Dapr sidecars. State, pub/sub, and bindings are all native Dapr primitives.

builder.Services.AddDaprWorkflow(options =>
{
    options.RegisterWorkflow<ReviewWorkflow>();
});

Example workflow:

class ReviewWorkflow : Workflow<ReviewCycle, bool>
{
    public override async Task<bool> RunAsync(WorkflowContext ctx, ReviewCycle review)
    {
        await ctx.CallActivityAsync("SendNominationInvites", review);
        await ctx.WaitForExternalEventAsync<bool>("FeedbackCompleted");
        await ctx.CallActivityAsync("PublishResults", review.Id);
        return true;
    }
}

Admin operations like pause, resume, or purge can be invoked via Dapr’s HTTP APIs for lifecycle control:

curl -X POST http://localhost:3500/v1.0-beta1/workflows/dapr/1234/pause

4.5 Policy-Driven Approvals

4.5.1 ApprovalMatrix in Casbin

Store your approval matrix as Casbin policies. Example CSV:

p, manager, approve, review, costCenter == "ENG"
p, hrbp, override, review, region == "APAC"
g, bob, manager

Check authorization dynamically:

if (await _enforcer.EnforceAsync(user, "review", "approve", new { costCenter = "ENG" }))
{
    await ApproveReviewAsync(reviewId);
}

This lets HR define approval rules declaratively and adapt them per region or department.

4.6 Scheduling and Nudging

4.6.1 Reminders and Escalations

Quartz.NET can automate nudges and lock enforcement. Example job structure:

public class EscalationJob : IJob
{
    private readonly IWorkflowService _workflow;

    public async Task Execute(IJobExecutionContext context)
    {
        var overdue = await _workflow.GetOverdueTasksAsync();
        foreach (var task in overdue)
            await _workflow.EscalateAsync(task);
    }
}

Add blackout calendars to skip weekends or holidays:

q.AddCalendar<HolidayCalendar>("HolidayCalendar", replace: true, updateTriggers: true,
    calendar => calendar.AddExcludedDate(new DateTime(2025, 12, 25)));

Cron expressions can schedule reminders dynamically based on review stage metadata, ensuring no cycle slips through unnoticed.


5 Engineering for Anonymous & Trustworthy Feedback

5.1 Anonymity Models: Anonymous, Confidential, Named; When to Use Each

Anonymity shapes how honest and useful feedback becomes. The model you choose affects candor, legal exposure, and organizational trust. In 360-degree systems, three models dominate:

  • Anonymous: Rater identities are hidden from both the feedback recipient and their manager. Aggregates are only shown when a minimum number of raters respond. This model suits large teams where anonymity thresholds can be met without losing resolution.
  • Confidential: HR or People Ops can view the mapping between raters and responses, but managers and subjects cannot. This balances accountability with privacy protection. It’s common in organizations that need to audit submissions for compliance or legal reasons.
  • Named: Rater identity is always visible. This is typical in open-feedback cultures or in continuous feedback features embedded in collaboration tools.

A flexible review platform should allow per-stage anonymity configuration. For instance, peer feedback may be anonymous, while manager reviews are named.

stages:
  - name: "Peer Feedback"
    anonymityMode: "anonymous"
    minRaters: 3
  - name: "Manager Summary"
    anonymityMode: "named"

The system must enforce these configurations both at collection (e.g., UI masking) and at aggregation (e.g., anonymization pipelines).

5.2 Practical Anonymity Patterns

5.2.1 Rater-Count Thresholds

To prevent accidental disclosure, feedback from small groups must be suppressed or delayed until enough responses exist.

public async Task<FeedbackSummary> AggregateAsync(Guid reviewId)
{
    var peerFeedback = await _repo.GetFeedbackByRoleAsync(reviewId, "Peer");
    if (peerFeedback.Count < 3)
        return FeedbackSummary.Hidden("Insufficient responses");
    return FeedbackSummary.Aggregate(peerFeedback);
}

Thresholds can differ by role type—peers may need at least three raters, while direct reports may need five due to team size sensitivity. Threshold logic should execute server-side to ensure policy enforcement even if clients attempt to query prematurely.
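
As an illustrative sketch of that server-side rule (Python; the role names and score shape are assumptions, not the platform's actual schema), per-role thresholds might look like this:

```python
# Hypothetical per-role minimum rater counts, enforced on the server before
# any aggregate leaves the API.
ROLE_THRESHOLDS = {"Peer": 3, "DirectReport": 5, "Manager": 1}

def aggregate_scores(feedback, role):
    """Return an average only when the role's minimum rater count is met."""
    minimum = ROLE_THRESHOLDS.get(role, 3)  # conservative default for unknown roles
    scores = [f["score"] for f in feedback if f["role"] == role]
    if len(scores) < minimum:
        return {"visible": False, "reason": "insufficient responses"}
    return {"visible": True, "average": sum(scores) / len(scores)}
```

Because the check runs where the data lives, a client that queries before the threshold is met receives only the suppression marker, never partial aggregates.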

5.2.2 Bucket-Level Aggregation and k-Anonymity-Like Grouping

Borrowing from k-anonymity, feedback should only be shown when at least k raters share the same group or attribute combination. Suppose your organization spans many small teams. Individual scores might identify raters by deduction. A safe approach is to group feedback dynamically by department, role level, or geography until anonymity is guaranteed.

var groups = feedback
    .GroupBy(f => new { f.Department, f.RoleLevel })
    .Where(g => g.Count() >= 3);
return AggregateBy(groups);

This ensures that if only one engineer in a region submits feedback, the result merges into the next safe group (e.g., region → division → global).
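
The widening fallback can be sketched as follows (Python; field names and the k value are illustrative, not taken from the article's codebase). Each level drops one grouping attribute until every bucket clears the floor; if even the global bucket is too small, everything stays suppressed:

```python
K = 3  # assumed anonymity floor
LEVELS = [("region", "division"), ("division",), ()]  # coarser from left to right

def safe_buckets(rows):
    """Average scores at the finest grouping where every bucket has >= K raters."""
    for keys in LEVELS:
        buckets = {}
        for r in rows:
            buckets.setdefault(tuple(r[k] for k in keys), []).append(r["score"])
        if all(len(v) >= K for v in buckets.values()):
            return {k: sum(v) / len(v) for k, v in buckets.items()}
    return {}  # even the global bucket is below K: suppress entirely
```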

5.2.3 Text Redaction Heuristics and Tone Checks

Even when numeric data is anonymized, free-text comments can leak identity clues. Redaction heuristics can detect and mask names, pronouns, or terms specific to a small team.

import re

def redact_names(comment, names):
    # word boundaries avoid redacting substrings of longer words (e.g. "Alexander")
    pattern = r'\b(' + '|'.join(map(re.escape, names)) + r')\b'
    return re.sub(pattern, '[REDACTED]', comment, flags=re.IGNORECASE)

comment = "Alex consistently reviews my pull requests quickly."
print(redact_names(comment, ["Alex", "Jordan"]))

You can extend this by running toxicity or sentiment checks before publishing comments to ensure feedback remains constructive. For example, using Azure Cognitive Services or HuggingFace sentiment models can help filter inappropriate text automatically.

5.3 Data-at-Rest & In-Transit Security

Security underpins anonymity. Even if raters are hidden in the UI, weak encryption can expose identities in the database or backups.

Encryption at Rest

Use envelope encryption with per-tenant keys stored in Azure Key Vault or AWS KMS. Each tenant has a unique key encryption key (KEK), which encrypts per-cycle data encryption keys (DEKs).

byte[] plain = Encoding.UTF8.GetBytes(feedbackJson);
byte[] dek = _crypto.GenerateDataKey();                          // per-cycle DEK
byte[] cipher = _crypto.Encrypt(plain, dek);
byte[] wrappedDek = await _keyVault.WrapKeyAsync(tenantId, dek); // tenant KEK wraps the DEK
await _store.SaveAsync(reviewId, cipher, wrappedDek);

Encryption in Transit

All API endpoints must enforce TLS 1.2+ and HSTS. Service-to-service traffic between microservices should use mutual TLS (mTLS), especially when messages contain employee identifiers.

PII Minimization

Avoid storing unnecessary PII. Instead of persisting names or emails in feedback tables, store immutable person IDs and resolve names through the directory service when rendering. This keeps the feedback dataset de-identified by default.

5.4 Threat Modeling and Abuse Prevention

Even with encryption and thresholds, systems are vulnerable to behavioral threats like deanonymization attempts or collusion.

Deanonymization Scenarios

Attackers might attempt to correlate submission timestamps or text styles to identify raters. Mitigations include:

  • Randomizing submission order in analytics views.
  • Delaying feedback availability until the stage closes.
  • Normalizing timestamps (e.g., storing submission day only).
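
The timestamp-normalization mitigation is a one-liner worth making explicit (Python sketch; the function name is ours, not the platform's):

```python
from datetime import datetime

def normalize_submission_time(ts: datetime) -> str:
    # Persist only the submission day, so analytics cannot correlate
    # exact times with individual raters.
    return ts.date().isoformat()
```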

Collusion and Duplicate Submissions

A manager might ask team members to reveal or share screenshots of their responses. While you can’t eliminate human collusion, you can detect anomalies such as identical comments or unusual response timing.

var duplicates = responses
    .GroupBy(r => r.Comment.Trim().ToLowerInvariant())
    .Where(g => g.Count() > 1);
if (duplicates.Any())
    LogSuspiciousPattern(reviewId, duplicates.Count());

Query Throttling

APIs exposing aggregate feedback should implement rate limiting and query window controls. Prevent users from iteratively filtering data to infer individual responses.

builder.Services.AddRateLimiter(_ => _
    .AddFixedWindowLimiter(policyName: "feedbackView", options =>
    {
        options.PermitLimit = 10;
        options.Window = TimeSpan.FromMinutes(5);
    }));

5.5 Policy Controls in UI

The front-end must enforce visibility rules consistently. For example:

  • Managers cannot see peer identities.
  • HR can view rater lists only after the cycle closes.
  • Feedback release occurs only when all raters’ statuses are “submitted.”

Implementing delayed reveal:

if (!cycle.IsClosed)
    return Results.Forbid(); // Hide results until cycle closure

The UI should also visually reinforce privacy settings—labels like “Anonymous Feedback – Minimum 3 Responses Required” help users understand protection levels. Transparency about how anonymity is handled increases participation and trust.

5.6 Evidence-Based Rationale for Anonymity

Empirical research supports anonymity’s impact on feedback honesty. Studies in organizational psychology (e.g., London & Smither, Personnel Psychology, 1995) show that employees provide more critical and balanced feedback when anonymity is assured. However, anonymity can reduce accountability if poorly managed.

Key takeaways:

  • Anonymous systems improve participation rates by 20–40% in early cycles.
  • Confidential feedback retains auditability while maintaining psychological safety.
  • Named feedback works best in coaching-oriented cultures with high interpersonal trust.

The optimal model blends modes across stages—anonymous for peers, named for managers, confidential for calibration.


6 Calibration, Bell Curves, and Fairness-First Analytics

6.1 Calibration Workflow Design: Intake → Panel → Consensus → Publish

Calibration ensures fairness and consistency in ratings. The workflow typically moves through four phases:

  1. Intake: Collect completed reviews and normalize scores.
  2. Panel: Managers and HR review distributions, identify outliers, and discuss adjustments.
  3. Consensus: Finalize ratings after panel decisions and log rationales.
  4. Publish: Push finalized outcomes to HRIS or compensation systems.

A workflow engine like Elsa or Dapr can orchestrate this.

builder.AddWorkflow("CalibrationFlow", flow => flow
    .StartWith<CollectReviews>()
    .Then<OpenPanel>()
    .Then<ConsensusMeeting>()
    .Then<PublishResults>());

Calibration data should be versioned to track adjustments over time.

6.2 Distribution Guidance Models

6.2.1 Soft Guidance vs Hard Constraints

Instead of forcing bell curves, apply soft guidance—show expected distribution ranges but allow flexibility.

var target = new { Top = 0.1, Solid = 0.7, Develop = 0.2 };
var actual = await _analytics.GetDistributionAsync(managerId);
var variance = Math.Abs(target.Top - actual.Top);

if (variance > 0.05)
    _notifications.Warn(managerId, "Distribution deviates from target range");

This encourages calibration without penalizing natural variations. Hard constraints (auto-enforced quotas) should be avoided—they damage morale and increase administrative overhead.

6.2.2 What-If Simulation

A “what-if” sandbox helps HR visualize the impact of adjustments before locking results. Using temporary data slices, you can simulate how moving one rating affects the overall distribution.

public DistributionSimResult Simulate(Guid cycleId, Guid employeeId, string newRating)
{
    var current = _repo.GetCurrentDistribution(cycleId);
    var simulated = current.ApplyChange(employeeId, newRating);
    return new DistributionSimResult(current, simulated);
}

Visualizing this in an admin UI promotes data-driven calibration discussions rather than arbitrary curve fitting.

6.3 Alternatives to Forced Bell Curves

Modern performance systems emphasize evidence-based evaluation over percentile ranking. Alternatives include:

  • Percentile Bands: Compute relative position across teams but avoid forcing quotas.
  • Goal/Outcome Evidence: Tie final ratings to OKR achievement and observable metrics.
  • Weighted Peer Signals: Blend peer input (normalized for bias) with manager evaluations.

Example weighting:

double finalScore = (managerScore * 0.6) + (peerAverage * 0.3) + (okrScore * 0.1);

This method makes calibration quantitative yet human-centered.
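
The "normalized for bias" step for peer signals deserves a concrete form. One common approach, sketched here in Python (the data shape is an assumption), is to convert each rater's scores to z-scores so that habitually lenient or harsh raters carry equal weight before blending:

```python
import statistics

def normalized_peer_average(scores_by_rater, subject):
    """Average of each rater's z-score for the subject, neutralizing rater leniency."""
    zs = []
    for scores in scores_by_rater.values():   # one {subject: score} dict per rater
        mean = statistics.mean(scores.values())
        stdev = statistics.pstdev(scores.values()) or 1.0  # guard raters who give flat scores
        if subject in scores:
            zs.append((scores[subject] - mean) / stdev)
    return statistics.mean(zs) if zs else 0.0
```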

6.4 Bias Detection and Explainability

6.4.1 Statistical Checks

Bias often hides in rating variance across demographics or teams. Use privacy-preserving aggregates to detect anomalies.

var groupMeans = ratings
    .GroupBy(r => r.Gender)
    .Select(g => new { g.Key, Mean = g.Average(x => x.Score) })
    .ToList();

You can compute disparity ratios or perform non-parametric tests to flag statistically significant differences. Store only anonymized aggregates to avoid reidentification.
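
A disparity-ratio check over those anonymized aggregates can be sketched as follows (Python; the 0.8 floor borrows the "four-fifths" rule of thumb from adverse-impact screening and is an assumption, not a legal threshold):

```python
def disparity_ratios(group_means):
    """Ratio of each group's mean rating to the highest group mean."""
    top = max(group_means.values())
    return {g: round(m / top, 3) for g, m in group_means.items()}

def flagged(group_means, floor=0.8):
    """Groups whose ratio falls below the floor and warrant review."""
    return [g for g, r in disparity_ratios(group_means).items() if r < floor]
```

Flagged groups are a signal for human review, not an automatic verdict; follow up with non-parametric significance tests before acting.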

6.4.2 Drift Monitors

Track each manager’s historical scoring patterns to detect leniency or severity drift.

var history = await _analytics.GetManagerScores(managerId);
var drift = history.StandardDeviation();
if (drift > 1.2)
    _notifications.FlagCalibration(managerId, "Significant drift detected");

Automated checks like this maintain fairness without manual audits.

Forced ranking models have triggered legal challenges under discrimination laws when distributions disproportionately affect protected groups. Many organizations now reject bell curves to reduce compliance risk and cultural backlash.

Modern alternatives—like percentile bands and evidence-backed ratings—offer transparency and inclusivity. They encourage developmental conversations rather than punitive comparisons, aligning with evolving labor regulations and DEI standards worldwide.


7 HRIS & Identity Integration via SCIM 2.0 (with Mappings)

7.1 SCIM Refresher

The System for Cross-domain Identity Management (SCIM) standard defines schemas (RFC 7643) and protocols (RFC 7644) for synchronizing users and groups between systems. Key resources include /Users, /Groups, and custom extensions like urn:ietf:params:scim:schemas:extension:enterprise:2.0:User.

SCIM APIs enable provisioning workflows: create, update, deactivate users, and manage group memberships automatically from HR or IdP systems.

7.2 Provisioning Patterns

7.2.1 Inbound Identities

HRIS (e.g., Workday, SuccessFactors) exports employee records to the corporate IdP (Entra ID, Okta). Your app then receives SCIM calls from the IdP to create or update user records.

{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "userName": "sara.lee@contoso.com",
  "externalId": "E12345",
  "active": true,
  "name": { "givenName": "Sara", "familyName": "Lee" },
  "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
    "manager": { "value": "E10001" },
    "department": "Engineering"
  }
}

7.2.2 Outbound Group/Role Provisioning

Roles such as “HRBP,” “Manager,” or “Reviewer” can be provisioned from IdP to your app using SCIM /Groups endpoints.

{
  "displayName": "HRBP",
  "members": [{ "value": "E12345" }]
}

Your platform then maps these to internal roles or Casbin groups automatically.

7.3 Building a SCIM Endpoint in .NET

7.3.1 Endpoints

Implement REST endpoints /Users and /Groups following RFC 7644 semantics. ASP.NET minimal APIs work well:

app.MapPost("/scim/Users", async (UserDto user, IUserService svc) =>
{
    var result = await svc.CreateAsync(user);
    return Results.Created($"/scim/Users/{result.Id}", result);
});

Support pagination (startIndex, count), filtering (filter=userName eq "alex"), and patch operations for partial updates.
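
RFC 7644 PATCH semantics are easy to get subtly wrong, so a minimal model helps (Python sketch; real SCIM paths can be filtered and nested, this handles only flat top-level attributes):

```python
def apply_scim_patch(resource: dict, patch: dict) -> dict:
    """Apply a SCIM PatchOp body (add/replace/remove) to a flat resource dict."""
    result = dict(resource)
    for op in patch.get("Operations", []):
        name, path = op["op"].lower(), op.get("path")  # op names are case-insensitive
        if name in ("add", "replace"):
            result[path] = op["value"]
        elif name == "remove":
            result.pop(path, None)
    return result
```

Note the case-insensitive `op` handling: some providers send `Replace` rather than `replace`, and strict matching breaks provisioning jobs.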

7.3.2 Attribute Mapping

Define consistent mappings between HR fields and internal models:

  • employeeId → personId
  • manager.value → managerId
  • department → costCenter

public AppUser MapFromScim(UserDto user)
{
    return new AppUser
    {
        PersonId = user.ExternalId,
        Department = user.Extensions.Enterprise.Department,
        ManagerId = user.Extensions.Enterprise.Manager?.Value
    };
}

7.3.3 Handling Soft Deletes and Rehires

When an employee leaves, mark their account inactive rather than deleting it to preserve historical reviews.

if (!scimUser.Active)
    await _userRepo.MarkInactiveAsync(scimUser.ExternalId);

Handle rehires idempotently by reactivating existing records instead of creating duplicates.
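
The rehire rule reduces to an upsert keyed on the immutable external ID, sketched here in Python (the in-memory dict stands in for the user store):

```python
def upsert_user(store: dict, scim_user: dict) -> str:
    """Create on first sight; on rehire, flip the existing record's active flag."""
    ext_id = scim_user["externalId"]
    if ext_id in store:
        store[ext_id]["active"] = scim_user.get("active", True)
        return "reactivated" if store[ext_id]["active"] else "deactivated"
    store[ext_id] = dict(scim_user)
    return "created"
```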

7.4 Connecting Microsoft Entra ID

In Entra ID, configure Enterprise Application → Provisioning → Custom SCIM endpoint. Provide your app’s base URL and OAuth bearer token for authentication. The provisioning agent periodically syncs updates.

For on-prem AD, the Azure AD Connect Cloud Sync Agent bridges HR directories to SCIM endpoints. Non-gallery apps use the same configuration but require manual attribute mappings.

7.5 Real Vendor Examples and Nuances

  • Workday SCIM: Requires provisioning via the Workday REST integration layer with batched updates.
  • Salesforce SCIM: Only supports /Users, not /Groups, requiring manual role sync.
  • Okta SCIM: Strong validation—expect strict schema enforcement and PATCH semantics.

Testing against each vendor’s sandbox ensures compatibility with their SCIM quirks before production rollout.

7.6 Open-Source SCIM SDK Options and Build-vs-Buy Guidance

Open-source .NET SDKs like SimpleIdServer.Scim or Microsoft.SCIM provide controllers and schema classes out-of-the-box.

Example using SimpleIdServer.Scim:

builder.Services.AddScimServer(opts => 
{
    opts.AddUserStore<MyUserStore>();
    opts.AddGroupStore<MyGroupStore>();
});

If your organization manages thousands of users with complex HR attributes, a commercial SCIM gateway (e.g., SailPoint, OneLogin) may offer better reliability. Smaller teams can safely adopt an open-source SDK with added telemetry and retry logic.

The decision hinges on two factors:

  • Control vs Compliance: Custom builds offer flexibility; vendors offer certification and SLAs.
  • Scale: Above 100k identities, managed solutions reduce operational complexity.

8 Implementation Blueprint: A Walkthrough on .NET

8.1 Tech Stack Recap

A production-ready 360-degree review platform demands both flexibility and durability. The .NET 8 ecosystem now provides everything needed—from lightweight APIs to observability hooks—to build systems that scale gracefully.

8.1.1 ASP.NET Core Minimal APIs; EF Core; MediatR; OpenAPI/Swagger; OpenTelemetry; Test Stack

The runtime backbone uses ASP.NET Core minimal APIs to reduce boilerplate while maintaining first-class middleware and DI capabilities. Each service uses EF Core 8 for persistence, MediatR for clean command/query separation, and OpenAPI for discoverability.

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDbContext<ReviewDbContext>(opt =>
    opt.UseNpgsql(builder.Configuration.GetConnectionString("Reviews")));
builder.Services.AddMediatR(cfg => cfg.RegisterServicesFromAssembly(typeof(Program).Assembly));
builder.Services.AddOpenTelemetry()
    .WithTracing(t => t.AddAspNetCoreInstrumentation().AddEntityFrameworkCoreInstrumentation())
    .WithMetrics(m => m.AddRuntimeInstrumentation());
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();
app.UseSwagger();
app.UseSwaggerUI();
app.MapGet("/", () => "360 Review API ready");
app.Run();

The test stack uses xUnit with FluentAssertions for API-level verification and Testcontainers for integration tests with ephemeral PostgreSQL databases.

8.1.2 Workflow and Messaging Selection

Among workflow options discussed earlier—Elsa Workflows, Durable Functions, and Dapr Workflows—the reference build uses Elsa for on-prem and SaaS parity. It provides an embeddable engine with human-centric features.

For messaging, MassTransit v8 remains the stable open-source choice, connecting seamlessly to RabbitMQ or Azure Service Bus. Durable Functions or Dapr remain valid alternatives for cloud-native, serverless environments but introduce higher operational abstraction and less direct debugging control.

  Use Case                         Recommended Stack                  Trade-Off
  SaaS with multiple tenants       Elsa + MassTransit                 Simplifies audit and configurability
  Serverless continuous feedback   Durable Functions + Service Bus    Low cost, fewer custom ops
  High-scale microservice          Dapr Workflows + Pub/Sub           Strong resilience, added operational overhead

8.2 Service Boundaries & API Contracts

Microservice boundaries must mirror domain seams. The 360 platform exposes four main APIs—Reviews, Goals, Workflow, and SCIM—each independently deployable.

8.2.1 Reviews API

The Reviews API manages review cycles, stages, and feedback submissions. Key endpoints:

app.MapPost("/reviews", async (CreateReviewCycle cmd, IMediator mediator) =>
    await mediator.Send(cmd));
app.MapPost("/reviews/{id}/nominate", async (Guid id, NominateRaters cmd, IMediator mediator) =>
    await mediator.Send(cmd with { ReviewId = id }));
app.MapPost("/reviews/{id}/submit", async (Guid id, SubmitFeedback cmd, IMediator mediator) =>
    await mediator.Send(cmd with { ReviewId = id }));
app.MapPost("/reviews/{id}/lock", async (Guid id, LockReviewCycle cmd, IMediator mediator) =>
    await mediator.Send(cmd));

Each command triggers domain events published via the outbox pattern for downstream notifications and analytics processing.
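
The outbox pattern mentioned above can be reduced to a small model (Python sketch; in the real service the rows live in the same database transaction as the domain change, and the relay hands events to MassTransit):

```python
class Outbox:
    """Events are saved with the state change, then published by a relay."""

    def __init__(self):
        self.rows = []

    def save_with_event(self, state: dict, change: dict, event: dict):
        # Stand-in for writing both rows inside one database transaction.
        state.update(change)
        self.rows.append({"event": event, "dispatched": False})

    def relay(self, publish):
        """Publish undispatched events; returns how many were sent."""
        sent = 0
        for row in self.rows:
            if not row["dispatched"]:
                publish(row["event"])      # e.g. hand off to the message bus
                row["dispatched"] = True
                sent += 1
        return sent
```

Because the event row commits atomically with the review state, a broker outage delays delivery but never loses the event; the relay simply retries on its next pass.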

8.2.2 Goals API

The Goals API provides OKR management tied to review cycles.

app.MapPost("/goals", async (CreateObjective cmd, IMediator mediator) =>
    await mediator.Send(cmd));
app.MapPost("/goals/{id}/keyresults", async (Guid id, AddKeyResult cmd, IMediator mediator) =>
    await mediator.Send(cmd with { ObjectiveId = id }));
app.MapPost("/goals/{id}/progress", async (Guid id, UpdateProgress cmd, IMediator mediator) =>
    await mediator.Send(cmd with { ObjectiveId = id }));

To link feedback to performance outcomes, reviewers reference Objective IDs when submitting qualitative feedback, ensuring traceability between results and input.

8.2.3 Workflow API

This service executes workflow operations and interacts with Elsa’s persistence store.

app.MapPost("/workflow/start", async (StartWorkflowRequest req, IWorkflowStarter starter) =>
{
    var id = await starter.StartAsync(req.TemplateId, req.Context);
    return Results.Ok(new { InstanceId = id });
});
app.MapPost("/workflow/{id}/advance", async (Guid id, IWorkflowManager mgr) =>
{
    await mgr.TriggerSignalAsync("Advance", id);
    return Results.NoContent();
});
app.MapGet("/workflow/tasks", async (IWorkflowQuery query) =>
    Results.Ok(await query.ListPendingTasksAsync()));

Admin overrides use policy-guarded endpoints that require HR or system roles validated through Casbin.

8.2.4 SCIM API

The SCIM service supports provisioning and sync operations compliant with RFC 7644.

app.MapPatch("/scim/Users/{id}", async (string id, ScimPatchRequest patch, IUserSync svc) =>
{
    await svc.ApplyPatchAsync(id, patch);
    return Results.Ok();
});

Automated test scripts using Postman or curl validate compatibility with Microsoft Entra provisioning jobs, simulating user create, update, and deactivate flows.

8.3 Data Models and Schema Slices

8.3.1 ReviewCycle, StageInstance, TaskInstance, FeedbackItem, AggregatedBucket, CalibrationDecision

The relational schema captures immutable entities for auditability.

public class ReviewCycle
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public DateTime Start { get; set; }
    public DateTime End { get; set; }
    public ICollection<StageInstance> Stages { get; set; } = new List<StageInstance>();
}

public class StageInstance
{
    public Guid Id { get; set; }
    public string StageName { get; set; }
    public ReviewStatus Status { get; set; }
    public DateTime DueDate { get; set; }
    public ICollection<TaskInstance> Tasks { get; set; } = new List<TaskInstance>();
}

Each TaskInstance maps to one reviewer’s action (nominate, review, approve).

public class TaskInstance
{
    public Guid Id { get; set; }
    public Guid StageId { get; set; }
    public Guid AssigneeId { get; set; }
    public TaskType Type { get; set; }
    public DateTime? CompletedOn { get; set; }
}

Feedback data is semi-structured and often stored in a JSON column:

public class FeedbackItem
{
    public Guid Id { get; set; }
    public Guid ReviewId { get; set; }
    public string Role { get; set; }
    public string Content { get; set; } // JSON blob of question-response pairs
}

Aggregated buckets represent anonymized summaries.

public class AggregatedBucket
{
    public Guid ReviewId { get; set; }
    public string RoleGroup { get; set; }
    public double AverageScore { get; set; }
    public string SummaryText { get; set; }
}

Calibration decisions store final consensus outcomes:

public class CalibrationDecision
{
    public Guid ReviewId { get; set; }
    public Guid ManagerId { get; set; }
    public string Rating { get; set; }
    public string Justification { get; set; }
    public DateTime ApprovedOn { get; set; }
}

8.4 Example Artifacts

8.4.1 YAML ReviewCycle Template

Annual review with calibration and OKR linkage:

id: "FY2025"
type: "annual"
stages:
  - name: "Nomination"
    activity: "NominateRaters"
    dueDays: 10
  - name: "Peer Feedback"
    activity: "CollectFeedback"
    anonymityMode: "anonymous"
    dueDays: 21
  - name: "Manager Summary"
    activity: "CompileSummary"
    dueDays: 7
  - name: "Calibration"
    activity: "RunPanel"
    dueDays: 10
  - name: "Finalize"
    activity: "PublishResults"
    dueDays: 3

Continuous feedback cycles replace rigid deadlines with rolling windows.

8.4.2 Casbin Policies for ApprovalMatrix

Approval policies allow fine-grained control:

p, hrbp, reviews, override, department == "Engineering"
p, manager, reviews, approve, region == "EMEA"
g, alice, manager
g, steve, hrbp

Applied dynamically:

if (await _enforcer.EnforceAsync(user, "reviews", "approve", new { department = "Engineering" }))
    await ApproveAsync(cycleId);

8.4.3 Quartz Job Configs

Reminders and escalations trigger automatically using Quartz.NET.

q.ScheduleJob<ReminderJob>(trigger => trigger
    .WithIdentity("peer-reminder")
    .WithCronSchedule("0 0 9 ? * MON")); // every Monday 9 AM (Quartz cron includes a seconds field)

Escalations run on a separate queue to avoid blocking workflow processing.

8.5 Observability & SLOs

8.5.1 Tracing Critical Paths

OpenTelemetry spans should wrap every workflow transition. The trace context flows through MassTransit message headers.

using var activity = _activitySource.StartActivity("FeedbackSubmission");
await _repo.SaveFeedbackAsync(feedback);
await _bus.PublishAsync(new FeedbackSubmitted(feedback.Id));

Key SLOs:

  • 99.5% of feedback submissions complete under 2 seconds.
  • 95% of reminders trigger within 5 minutes of schedule.

Error budgets define alert thresholds and drive capacity planning.
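
An error-budget burn check for the 99.5% submission SLO can be sketched as (Python; the alerting threshold you attach to the burn value is a policy choice, not fixed here):

```python
def error_budget_burn(slo: float, total: int, failed: int) -> float:
    """Fraction of the window's error budget consumed by observed failures."""
    budget = (1.0 - slo) * total  # failures the SLO permits in this window
    return failed / budget if budget else float("inf")
```

A burn of 0.5 means half the budget is gone; sustained burn above 1.0 means the SLO will be missed and should page someone.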

8.5.2 Metrics

Critical metrics monitored via Prometheus or Azure Monitor:

  • review_cycle_progress{stage="Peer Feedback"} – percentage complete per cycle
  • feedback_anonymity_violations_total – number of threshold breaches
  • calibration_variance_score – difference between target and actual distribution

These metrics feed dashboards for HR and SRE teams to identify lagging stages or privacy risks early.

8.6 Security & Compliance Hardening

8.6.1 Multi-Tenant Isolation

Each tenant gets its own database schema or partition key. Encryption keys are scoped per tenant, rotated quarterly.

string connection = $"Server=db;Database=reviews_{tenantId};User Id=app;";

PII inventory scans (using custom analyzers or Azure Purview) ensure compliance with GDPR and internal data policies.

8.6.2 RBAC/ABAC Checks and Audit Trails

API requests validate roles through Casbin or JWT claims. Workflow steps log immutable audit events.

_audit.Log(new AuditEvent
{
    ActorId = user.Id,
    Action = "SubmitFeedback",
    Target = reviewId,
    Timestamp = DateTime.UtcNow
});

Audit trails are exportable for compliance audits or employee data requests.

8.7 Deployment Shapes

8.7.1 Single Tenant vs Multi-Tenant SaaS

For enterprises running one internal instance, a single-tenant deployment simplifies compliance. Multi-tenant SaaS uses a shared app layer with tenant-scoped databases.

Feature flags managed via Azure App Configuration allow phased rollout:

if (await _featureManager.IsEnabledAsync("EnableContinuousFeedback"))
    await StartContinuousCycleAsync();

Blue/green deployments—two identical environments behind a load balancer—ensure zero downtime upgrades.

8.7.2 Cloud-Native Picks

Option A: AKS (Azure Kubernetes Service) with Dapr sidecars for state management and service invocation—ideal for predictable workloads and strong control over cost.

Option B: Serverless model using Durable Functions and Service Bus reduces idle costs but limits custom scheduling control.

  Option              Pros                               Cons
  AKS + Dapr          Full observability, state control  Higher ops overhead
  Durable Functions   Pay-per-execution, easy scaling    Less control, slower cold starts

8.8 Migration/Phase-In Plan

8.8.1 Pilot First

Start with one narrow program—e.g., peer feedback only—to validate anonymity thresholds and workflow rules. Once stable, expand to manager and calibration stages.

var config = _cycleTemplates["PeerFeedbackOnly"];
await _workflow.StartAsync(config.Id);

Monitor participation and feedback quality before expanding to full 360 cycles.

8.8.2 Legacy Import via SCIM

Import employee data from legacy HR tools using SCIM /Users and /Groups.

foreach (var user in legacyUsers)
{
    var scimUser = _mapper.MapToScim(user);
    await _scimClient.CreateUserAsync(scimUser);
}

Historical feedback can be migrated as read-only documents, preserving past performance data for longitudinal analytics.

8.9 What to Measure Post-Launch

The system’s impact is judged by both leading indicators (participation, timeliness) and lagging indicators (quality, fairness, retention).

  • Completion Rate: % of submitted vs expected reviews per cycle.
  • Manager Load: average number of direct reports per manager per cycle.
  • Feedback Quality Score: sentiment analysis of comment richness or tone diversity.
  • Calibration Dispersion: standard deviation across managers’ average ratings.
  • Retention Signals: correlation between feedback participation and voluntary turnover.

var dispersion = ratings.GroupBy(r => r.ManagerId)
    .Select(g => g.Average(x => x.Score))
    .StandardDeviation();

Tracking these metrics over multiple review cycles reveals whether cultural and process goals—fairness, trust, growth—are being achieved.
