ORbit’s flag engine automatically detects operational issues in surgical cases based on configurable rules. It supports timing, financial, and quality metric thresholds.
The flag engine lives in `lib/flagEngine.ts`. It runs after case validation, during demo data generation, and on manual trigger from the data quality page.
Core tables
| Table | Description |
|---|---|
| `flag_rules` | Rule definitions per facility (metric, threshold, operator, severity) |
| `case_flags` | Individual flag instances detected on cases |
Threshold types
| Type | How it works |
|---|---|
| `absolute` | Direct threshold value (e.g., turnover > 45 min) |
| `median_plus_sd` | Median ± (N × standard deviation) |
| `median_plus_offset` | Median ± fixed offset |
| `percentage_of_median` | Median × (1 ± percent) |
| `percentile` | Nth percentile of the cohort |
| `between` | Value falls within range [min, max] |
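As a minimal sketch, threshold resolution for these types might look like the following. The `Rule` and `Baseline` shapes and the `resolveThreshold` name are illustrative assumptions, not the actual `flagEngine.ts` API:

```typescript
// Hypothetical sketch: resolve a numeric threshold (or range) from a rule
// and a baseline. Field names are assumptions for illustration.
type ThresholdType =
  | "absolute"
  | "median_plus_sd"
  | "median_plus_offset"
  | "percentage_of_median"
  | "percentile"
  | "between";

interface Baseline {
  median: number;
  sd: number;
  values: number[]; // sorted metric values, used for percentile lookups
}

interface Rule {
  thresholdType: ThresholdType;
  value: number;     // N for sd, offset, percent (0–1), percentile rank, or range min
  valueMax?: number; // upper bound when thresholdType is "between"
}

function resolveThreshold(
  rule: Rule,
  baseline: Baseline,
): number | [number, number] {
  switch (rule.thresholdType) {
    case "absolute":
      return rule.value;
    case "median_plus_sd":
      return baseline.median + rule.value * baseline.sd;
    case "median_plus_offset":
      return baseline.median + rule.value;
    case "percentage_of_median":
      return baseline.median * (1 + rule.value);
    case "percentile": {
      const idx = Math.min(
        baseline.values.length - 1,
        Math.floor((rule.value / 100) * baseline.values.length),
      );
      return baseline.values[idx];
    }
    case "between":
      return [rule.value, rule.valueMax ?? rule.value];
  }
}
```

For example, a `median_plus_sd` rule with `value: 2` against a baseline of median 30 and SD 5 resolves to 40.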
Comparison scopes
| Scope | Description |
|---|---|
| `facility` | Compare against facility-wide baselines |
| `personal` | Compare against the surgeon's own historical performance |
Metric categories
Timing
`total_case_time`, `surgical_time`, `pre_op_time`, `anesthesia_time`, `closing_time`, `emergence_time`, `prep_to_incision`, `surgeon_readiness_gap`
Efficiency
`turnover_time` (cross-case), `fcots_delay` (first-case only), `room_idle_gap`
Financial
`case_profit`, `case_margin`, `profit_per_minute`, `total_case_cost`, `reimbursement_variance`, `or_time_cost`, `excess_time_cost`
Quality
`missing_milestones`, `milestone_out_of_order`
Evaluation pipeline
`evaluateCasesBatch(cases, rules)`:
1. Build baselines from historical cases
2. Build turnover baseline (if turnover rules exist)
3. Identify first cases of the day (for FCOTS)
4. Pre-compute turnovers per case
5. For each case → evaluate all enabled rules → CaseFlag[]
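Step 3 above (identifying first cases for FCOTS) might look roughly like the following; the `ORCase` shape and `firstCasesOfDay` name are hypothetical, not the real `flagEngine.ts` exports:

```typescript
// Hypothetical sketch of step 3: find the first case of the day in each room,
// since fcots_delay rules apply only to first cases. Names are illustrative.
interface ORCase {
  id: string;
  roomId: string;
  scheduledStart: Date;
}

function firstCasesOfDay(cases: ORCase[]): Set<string> {
  // Key: roomId + calendar date → earliest-starting case
  const earliest = new Map<string, ORCase>();
  for (const c of cases) {
    const key = `${c.roomId}|${c.scheduledStart.toISOString().slice(0, 10)}`;
    const current = earliest.get(key);
    if (!current || c.scheduledStart < current.scheduledStart) {
      earliest.set(key, c);
    }
  }
  return new Set([...earliest.values()].map((c) => c.id));
}
```

Pre-computing this set once per batch avoids re-scanning the schedule for every FCOTS rule evaluation.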
Per-case evaluation
For each case and each enabled rule:
- Extract the metric value from case data
- Look up the baseline (personal or facility scope)
- Resolve the threshold based on type + baseline
- Compare value against threshold using the operator
- If triggered, create a `CaseFlag` with severity, rule name, and metric details
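The comparison step can be sketched as follows. The operator spellings and the `isTriggered` name are assumptions for illustration, not the actual rule schema:

```typescript
// Hypothetical sketch: apply a rule's operator to the metric value and the
// resolved threshold (a single number, or a [min, max] range for "between").
type Operator = ">" | ">=" | "<" | "<=" | "between";

function isTriggered(
  value: number,
  operator: Operator,
  threshold: number | [number, number],
): boolean {
  if (operator === "between") {
    const [min, max] = threshold as [number, number];
    return value >= min && value <= max;
  }
  const t = threshold as number;
  switch (operator) {
    case ">":
      return value > t;
    case ">=":
      return value >= t;
    case "<":
      return value < t;
    case "<=":
      return value <= t;
  }
}
```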
Standard vs. custom rules
| Type | Description |
|---|---|
| Standard | Seeded from global templates on facility creation. Cannot be deleted, only toggled. |
| Custom | Created by facility admins. Can use any metric including dynamic cost-category metrics. |
When flags run
- When a case is validated (Completed status)
- During demo data generation (batch mode)
- Manually triggered from the data quality page
Key file
`lib/flagEngine.ts`
Adding a new flag metric
Define the metric
Add the metric name and extraction logic to the metric registry in `flagEngine.ts`.
Create a flag rule
Add a rule to the `flag_rules` table (or create a migration to seed it as a built-in rule) with the threshold type, operator, and severity.
Test with demo data
Run the demo generator on a test facility to verify the flag triggers correctly under the expected conditions.
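As a sketch of step 1, a registry entry for a timing metric might look like this. The `SurgicalCase` shape, the milestone key names, and the `metricRegistry` name are assumptions about the internals of `flagEngine.ts`, not its actual structure:

```typescript
// Hypothetical metric registry entry: map a metric name to an extractor that
// pulls a numeric value from case data (or null when it can't be computed).
interface SurgicalCase {
  milestones: Record<string, Date | undefined>;
}

type MetricExtractor = (c: SurgicalCase) => number | null;

const metricRegistry: Record<string, MetricExtractor> = {
  // Example: minutes from patient-in-room to incision. The milestone key
  // names here ("patient_in_room", "incision") are illustrative.
  prep_to_incision: (c) => {
    const start = c.milestones["patient_in_room"];
    const incision = c.milestones["incision"];
    if (!start || !incision) return null; // metric unavailable → no flag
    return (incision.getTime() - start.getTime()) / 60_000;
  },
};
```

Returning `null` for missing milestones lets the engine skip the rule cleanly rather than flag on incomplete data.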
Custom rules created through the UI are stored in `flag_rules` with `is_custom = true`. Built-in rules seeded from global templates have `is_custom = false` and cannot be deleted by facility admins.
Next steps