The ORbit Score (v2.2) measures facility operational efficiency using median absolute deviation (MAD) statistics rather than mean-based approaches, making scores robust to outliers.
The scoring engine lives in lib/orbitScoreEngine.ts and runs client-side. Results are cached to surgeon_scorecards and refreshed nightly via pg_cron. Both web and iOS platforms read from the cached table.

Four pillars

| Pillar | Weight | What it measures |
|---|---|---|
| Profitability | 30% | Profit per case vs. peers in the same procedure cohort |
| Consistency | 25% | Coefficient of variation of case duration per procedure type |
| Schedule Adherence | 25% | Actual vs. booked case duration — did the day go as planned? |
| Availability | 20% | Prep-to-incision gap and surgeon delay rate |

MAD-based scoring (Pillars 1 & 2)

Profitability and Consistency use Median Absolute Deviation instead of standard deviation:
score = 50 + (value - cohort_median) / effectiveMAD × 16.67

The 16.67 multiplier is 50 / MAD_BAND, so a value 3 MADs from the median spans the full distance from 50 to the floor or cap.

Key parameters

| Parameter | Value | Purpose |
|---|---|---|
| MAD_BAND | 3 | MADs from median to reach floor/ceiling |
| MIN_MAD_PERCENT | 5% | Percentage floor prevents noise amplification |
| MIN_ABSOLUTE_MAD_CV | 0.01 | Absolute floor for CV scoring |
| MIN_PILLAR_SCORE | 10 | No pillar scores below 10 |
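The formula and floors above can be sketched as a pure function. This is an illustrative reconstruction, not the actual `orbitScoreEngine.ts` API — `madScore`, `median`, and `mad` are hypothetical names — and it shows the higher-is-better direction; for Consistency, where a lower CV is better, the sign flips and the MIN_ABSOLUTE_MAD_CV floor of 0.01 applies.

```typescript
const MAD_BAND = 3;            // MADs from median to reach floor/ceiling
const MIN_MAD_PERCENT = 0.05;  // effective-MAD floor: 5% of |median|
const MIN_PILLAR_SCORE = 10;   // no pillar scores below 10

function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Median absolute deviation: median of |x - median(xs)|
function mad(xs: number[]): number {
  const m = median(xs);
  return median(xs.map((x) => Math.abs(x - m)));
}

// score = 50 + (value - cohort_median) / effectiveMAD × (50 / MAD_BAND)
function madScore(value: number, cohort: number[]): number {
  const m = median(cohort);
  const effectiveMAD = Math.max(mad(cohort), MIN_MAD_PERCENT * Math.abs(m));
  const raw = 50 + ((value - m) / effectiveMAD) * (50 / MAD_BAND);
  return Math.min(100, Math.max(MIN_PILLAR_SCORE, raw));
}
```

For a cohort like `[10, 12, 14, 16, 18]` (MAD = 2), a surgeon at the median scores 50, one MAD above scores about 67, and three MADs out hits the 10 floor or 100 cap.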

Why 3 MAD bands?

With 5–15 surgeons (typical ASC size), being 2 MADs from median just means you’re the best or worst — not necessarily an outlier. 3 MAD provides useful differentiation:
| Distance | Score |
|---|---|
| At median | 50 |
| 1 MAD | ~33 or ~67 |
| 2 MAD | ~17 or ~83 |
| 3 MAD | 10 (floor) or 100 (cap) |

Graduated decay (Pillars 3 & 4)

Schedule Adherence and Availability use direct graduated scoring with no peer comparison:
case_score = max(0, 1.0 - minutes_over_grace / floor_minutes)
pillar_score = mean(all case_scores) × 100

This produces a meaningful absolute score — 77% means “77% on-time effectiveness.”
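A minimal sketch of that decay, using illustrative names (`minutesLate` is minutes past the booked time; the grace and floor values correspond to the grace_minutes / floor_minutes settings):

```typescript
// Each case earns 1.0 if on time (within grace), decaying linearly to 0
// as lateness approaches graceMinutes + floorMinutes.
function caseScore(minutesLate: number, graceMinutes: number, floorMinutes: number): number {
  const minutesOverGrace = Math.max(0, minutesLate - graceMinutes);
  return Math.max(0, 1.0 - minutesOverGrace / floorMinutes);
}

// Pillar score is the mean case score, scaled to 0–100.
function decayPillarScore(minutesLateByCase: number[], graceMinutes: number, floorMinutes: number): number {
  const scores = minutesLateByCase.map((m) => caseScore(m, graceMinutes, floorMinutes));
  const mean = scores.reduce((a, b) => a + b, 0) / scores.length;
  return mean * 100;
}
```

With a 5-minute grace and a 30-minute floor, a case 20 minutes late scores 0.5 and a case 50 minutes late scores 0; three cases at 0, 20, and 50 minutes late yield a pillar score of 50.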

Volume weighting

Scoring is done within procedure-type cohorts (THA vs. THA, not THA vs. TKA), then volume-weighted across the surgeon’s case mix.
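That aggregation can be sketched as follows — the types and names here are hypothetical, but the idea matches the text: score within each procedure cohort, then weight each cohort's score by the surgeon's case volume in it.

```typescript
interface CohortScore {
  procedureTypeId: string;
  score: number;     // pillar score computed within this cohort
  caseCount: number; // surgeon's case volume in this cohort
}

// Volume-weighted average of per-cohort scores across the case mix.
function volumeWeightedScore(cohorts: CohortScore[]): number {
  const totalCases = cohorts.reduce((n, c) => n + c.caseCount, 0);
  return cohorts.reduce((acc, c) => acc + c.score * (c.caseCount / totalCases), 0);
}
```

A surgeon scoring 80 on 30 THA cases and 60 on 10 TKA cases lands at 75 overall, reflecting where the volume actually is.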

Composite calculation

Composite = (Profitability × 0.30) + (Consistency × 0.25)
          + (Adherence × 0.25) + (Availability × 0.20)
All pillar scores are floored at 10 and capped at 100.
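As code, with the weights and the 10–100 pillar clamp from above (function and interface names are illustrative, not the engine's actual exports):

```typescript
interface PillarScores {
  profitability: number;
  consistency: number;
  adherence: number;
  availability: number;
}

// Pillar scores are floored at 10 and capped at 100 before weighting.
function clampPillar(s: number): number {
  return Math.min(100, Math.max(10, s));
}

function compositeScore(p: PillarScores): number {
  return clampPillar(p.profitability) * 0.30
       + clampPillar(p.consistency) * 0.25
       + clampPillar(p.adherence) * 0.25
       + clampPillar(p.availability) * 0.20;
}
```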

Grade thresholds

| Grade | Score | Description |
|---|---|---|
| A | ≥ 80 | Elite — top performer |
| B | ≥ 65 | Strong — above average |
| C | ≥ 50 | Developing — meeting expectations |
| D | < 50 | Needs improvement |
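The thresholds map directly to a small function (illustrative name, not necessarily what the engine exports):

```typescript
// Composite score → letter grade, using the thresholds above.
function gradeFor(score: number): "A" | "B" | "C" | "D" {
  if (score >= 80) return "A";
  if (score >= 65) return "B";
  if (score >= 50) return "C";
  return "D";
}
```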

Data requirements

Per case

patient_in_at, patient_out_at, incision_at, prep_drape_complete_at, closing_at, start_time, scheduled_date, procedure_type_id, surgeon_id, or_room_id

Financial

profit — null means skip, 0 means break-even

Settings

start_time_milestone, grace_minutes, floor_minutes, min_procedure_cases, min_case_threshold

Data flow

  1. case_completion_stats → raw per-case data (41 columns)
  2. orbitScoreEngine.ts → scoring calculation (client-side)
  3. surgeon_scorecards → cached results (refreshed nightly via Edge Function + pg_cron)
  4. Both web and iOS read from surgeon_scorecards

Improvement plans

The engine generates per-surgeon improvement plans via generateImprovementPlan(), identifying weakest pillars, projecting composite improvement, and estimating annual time and dollar savings.

Implementation notes

When all surgeons perform identically (MAD approaches zero), the score formula would amplify tiny differences into huge score swings. The 5% floor prevents noise amplification while still allowing differentiation.
A floor of 10 prevents zero or negative pillar scores, which would make weighted composite calculations unintuitive. Even the worst performer gets at least 10 points per pillar.
The scoring engine is a pure function — given case_completion_stats rows and settings, it produces deterministic scores. Write unit tests with known input data and verify expected pillar scores and composite output.
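The MAD-floor note can be demonstrated numerically. This sketch assumes the 5% MIN_MAD_PERCENT floor and the 16.67 (= 50/3) multiplier from the scoring section; the helper names are illustrative:

```typescript
function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function mad(xs: number[]): number {
  const m = median(xs);
  return median(xs.map((x) => Math.abs(x - m)));
}

// Five surgeons with nearly identical profit per case:
const cohort = [100.0, 100.1, 100.0, 99.9, 100.0];
const m = median(cohort);   // 100.0
const rawMAD = mad(cohort); // 0 — most deviations are exactly zero

// Dividing by a near-zero MAD would turn a 0.1 difference into a huge
// score swing (or a division by zero). The 5% floor keeps it sane:
const effectiveMAD = Math.max(rawMAD, 0.05 * Math.abs(m)); // ≈ 5.0
const score = 50 + ((100.1 - m) / effectiveMAD) * (50 / 3); // ≈ 50.33
```

With the floor in place, a 0.1 difference in an all-but-identical cohort moves the score by only about a third of a point instead of blowing up.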

Next steps