The ORbit iOS app is a native SwiftUI application (iOS 18+) that connects to the same Supabase backend as the web platform. It uses MVVM architecture with a repository pattern, role-based navigation, and a sophisticated two-stage voice command system for hands-free OR operation.
The iOS app shares the same database, same RLS policies, and same data model as the web app. Any database migrations created for the web automatically apply to iOS since both platforms use the same Supabase project.

Stack

| Layer | Technology |
| --- | --- |
| UI | SwiftUI, iOS 18+ |
| Architecture | MVVM + Repository pattern |
| Backend | Supabase (shared project with web) |
| Auth | Supabase Auth + Keychain token storage |
| Wake word | Picovoice Porcupine (on-device) |
| Speech recognition | Apple SFSpeechRecognizer |
| Text-to-speech | AVSpeechSynthesizer |
| LLM classification | Claude Haiku via Supabase Edge Function |
| Dependencies | Supabase Swift SDK (SPM), Porcupine iOS SDK |

Architecture pattern

View (SwiftUI) → ViewModel (@Observable/@MainActor) → Repository → SupabaseClient → Supabase
Each layer has strict responsibilities:
| Layer | Responsibility | Rules |
| --- | --- | --- |
| View | UI rendering and user interaction | No business logic. No direct Supabase calls. |
| ViewModel | State management and business logic | @MainActor always. No PostgREST imports. |
| Repository | Data access and Supabase queries | Only layer that imports PostgREST. Receives accessToken and facilityId as init params. |
| Model | Data structures | Plain Codable structs. No business logic. |
These rules prevent common bugs in the iOS codebase; violating them causes runtime issues:
  • Only Repositories import PostgREST — Views and ViewModels must never import it directly
  • All ViewModels are @MainActor — never use DispatchQueue.main.async
  • Use @EnvironmentObject injection, not .shared singletons
  • Pass accessToken and facilityId as init params to repositories
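A minimal sketch of how the layers connect, using hypothetical names and a simplified query (the app's actual CaseRepository and CasesViewModel differ in detail):

import SwiftUI
import Supabase

// Repository: the only layer that talks to Supabase/PostgREST.
final class ExampleCaseRepository {
    private let client: SupabaseClient
    private let facilityId: UUID

    init(client: SupabaseClient, accessToken: String, facilityId: UUID) {
        self.client = client
        self.facilityId = facilityId
        // Real repositories apply accessToken to the client's auth headers
        // so RLS policies evaluate against the signed-in user.
    }

    func fetchCases() async throws -> [Case] {
        try await client
            .from("cases")
            .select()
            .eq("facility_id", value: facilityId.uuidString)
            .execute()
            .value
    }
}

// ViewModel: @MainActor state and business logic; no PostgREST imports.
@MainActor
@Observable
final class ExampleCasesViewModel {
    private(set) var cases: [Case] = []
    private let repository: ExampleCaseRepository

    init(repository: ExampleCaseRepository) {
        self.repository = repository
    }

    func load() async {
        do {
            cases = try await repository.fetchCases()
        } catch {
            cases = []  // Real code maps errors to ORbitError and shows a toast.
        }
    }
}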

Project structure

ORbit/
├── ORbitApp.swift                      → App entry point, environment setup
├── ContentView.swift                   → Root TabView with role-based routing
├── Core/
│   ├── Auth/AuthManager.swift          → @Observable auth state, token refresh, Keychain
│   ├── Network/SupabaseClient.swift    → Supabase client configuration
│   ├── Theme/Theme.swift               → Design tokens (colors, fonts, spacing, scaling)
│   ├── NotificationManager.swift       → Push notifications, Realtime subscriptions
│   ├── ActiveCaseManager.swift         → Tracks current in-progress case
│   ├── AppearanceManager.swift         → Light/dark/system mode preferences
│   ├── HapticManager.swift             → Haptic feedback (light, medium, selection)
│   ├── BiometricAuthManager.swift      → Face ID lock/unlock
│   └── Error/ORbitError.swift          → Centralized error types
├── Models/
│   ├── Case.swift                      → Surgical case data
│   ├── Milestone.swift                 → Milestone + VoiceCommandAlias model
│   ├── Room.swift                      → OR room data
│   ├── SurgeonDay.swift                → Surgeon schedule aggregation
│   ├── RepCase.swift                   → Device rep case view
│   ├── SurgeonScorecard.swift          → Performance metrics
│   ├── PaceStatus.swift                → Pace calculation enum
│   └── CaseStatus.swift                → Case status enum
├── Repositories/
│   ├── CaseRepository.swift            → Cases CRUD, milestone recording
│   ├── MilestoneRepository.swift       → Milestone queries, validation, ordering
│   ├── RoomRepository.swift            → Room queries, case-room relationships
│   ├── StaffRepository.swift           → Staff assignments, role lookups
│   ├── DelayRepository.swift           → Delay tracking
│   ├── ImplantRepository.swift         → Implant data
│   └── ScoreRepository.swift           → Surgeon scorecard queries
├── ViewModels/
│   ├── SurgeonHomeViewModel.swift      → Surgeon dashboard logic
│   ├── CasesViewModel.swift            → Cases list + filters
│   ├── RoomsViewModel.swift            → Rooms board + filters
│   ├── CaseDetailViewModel.swift       → Single case management
│   └── RoomModeViewModel.swift         → Room Mode orchestrator (1,000+ lines)
├── Features/
│   ├── Login/                          → Login, forgot password, biometric lock
│   ├── SurgeonHome/                    → Surgeon dashboard (surgeon role only)
│   ├── Cases/                          → Case list + detail views
│   ├── Rooms/                          → Room status board
│   ├── RoomMode/                       → iPad Room Mode (full-screen OR dashboard)
│   │   ├── RoomModeView.swift          → Main Room Mode layout
│   │   ├── Components/                 → Timer cards, milestone timeline, modals
│   │   │   └── CementTimerOverlay.swift → Floating draggable cement timer
│   │   └── Voice/                      → Voice command pipeline (see below)
│   ├── Notifications/                  → Notification center
│   ├── Profile/                        → Profile, settings, voice picker
│   ├── DeviceRep/                      → Device rep multi-facility experience
│   └── MainTabView.swift               → Role-based tab routing + Actions sheet
├── Components/                         → Shared UI (ActiveCaseBar, Toast, StatusBadge, etc.)
└── ORbitTests/                         → XCTest unit tests

Core managers

The app uses several long-lived managers, injected via @EnvironmentObject rather than accessed as .shared singletons:

AuthManager

Manages the full authentication lifecycle:
  • Login/logout with Supabase Auth
  • Token refresh — automatic refresh 5 minutes before expiry
  • Foreground refresh — handles iOS background suspension gracefully
  • Keychain storage — tokens encrypted via iOS Keychain Services
  • User profile — role, facility_id, and display information
  • Auto-logout — if token refresh fails, returns to login screen
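A sketch of the proactive-refresh idea (illustrative names; the real AuthManager also persists tokens to the Keychain and handles foreground transitions):

import Foundation
import Observation

@MainActor
@Observable
final class ExampleAuthManager {
    private var refreshTask: Task<Void, Never>?

    // Schedule a session refresh 5 minutes before the access token expires.
    func scheduleRefresh(expiresAt: Date) {
        refreshTask?.cancel()
        let delay = max(0, expiresAt.timeIntervalSinceNow - 5 * 60)
        refreshTask = Task {
            try? await Task.sleep(for: .seconds(delay))
            guard !Task.isCancelled else { return }
            await refresh()
        }
    }

    private func refresh() async {
        // Real code calls Supabase Auth to refresh the session, writes the new
        // tokens to the Keychain, and signs the user out if the refresh fails.
    }
}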

NotificationManager

  • Supabase Realtime subscriptions for live case updates
  • APNs integration — device token registration and push notification handling
  • Device token table — stores tokens in device_tokens for server-side push
  • Foreground handling — banner + sound + badge for in-app notifications
  • Tap routing — opens relevant case or screen from notification

ActiveCaseManager

Tracks the current in-progress case for the floating ActiveCaseBar that appears above the tab bar. Polls on a timer and updates via Realtime events.

Design system

The design system is centralized in Theme.swift:

Colors

| Token | Usage |
| --- | --- |
| orbitPrimary (#2563EB) | Primary brand blue |
| orbitGreen | Success states, ahead-of-pace |
| orbitRed | Error states, behind-pace |
| orbitOrange | Warning states, on-pace |
| orbitSlate | Neutral text and borders |
| roomBackground, roomCard, etc. | Room Mode dark palette |

Device scaling

The app uses a scaling factor for iPad:
static let deviceScale: CGFloat = UIDevice.current.userInterfaceIdiom == .pad ? 1.5 : 1.0
Applied via .deviceScaled and .scaledSystem() modifiers to fonts and dimensions. This ensures Room Mode text is readable from across the operating room.
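One plausible shape for these helpers, as a sketch only (the actual Theme.swift implementation may differ):

import SwiftUI
import UIKit

enum Theme {
    static let deviceScale: CGFloat = UIDevice.current.userInterfaceIdiom == .pad ? 1.5 : 1.0
}

extension CGFloat {
    // 1.5x on iPad, 1x on iPhone.
    var deviceScaled: CGFloat { self * Theme.deviceScale }
}

extension Font {
    // System font pre-multiplied by the device scale factor.
    static func scaledSystem(size: CGFloat, weight: Font.Weight = .regular) -> Font {
        .system(size: size * Theme.deviceScale, weight: weight, design: .rounded)
    }
}

// Usage:
// Text(milestone.name)
//     .font(.scaledSystem(size: 17, weight: .semibold))
//     .padding(CGFloat(12).deviceScaled)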

Typography

  • Dynamic Type support for accessibility
  • Rounded design for titles (Font.Design.rounded)
  • iPad fonts scaled at 1.5x via device scale factor

Data layer

Repositories

All database access flows through repositories. Each repository:
  • Receives accessToken and facilityId in its initializer
  • Sets the Supabase client’s auth header for RLS
  • Returns typed Swift models (never raw JSON)
  • Handles error mapping to ORbitError types
| Repository | Key operations |
| --- | --- |
| CaseRepository | Fetch cases by date/room/status, record milestones, update case fields |
| MilestoneRepository | Get facility milestones (ordered), validate sequence, check recorded status |
| RoomRepository | Room list with case counts, room-case relationships |
| StaffRepository | Staff assignments CRUD, role lookups, staff-case links |
| DelayRepository | Add/remove delays with reason codes |
| ImplantRepository | Implant data and sizing |
| ScoreRepository | Surgeon scorecards from pre-computed table |

Key queries

All queries follow these platform-wide rules:
  • Every query filters by facility_id
  • Milestone joins use facility_milestone_id (never milestone_type_id)
  • Soft-delete tables filter is_active = true
  • Analytics calculations use median, not mean
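As a sketch, a repository query that applies the first three rules might look like this (the table and join names here are assumptions for illustration; the median rule applies to analytics aggregation rather than to the query itself):

import Foundation
import Supabase

func fetchActiveMilestones(
    client: SupabaseClient,
    caseId: UUID,
    facilityId: UUID
) async throws -> [Milestone] {
    try await client
        .from("case_milestones")                           // assumed table name
        .select("*, facility_milestones(*)")               // join via facility_milestone_id
        .eq("case_id", value: caseId.uuidString)
        .eq("facility_id", value: facilityId.uuidString)   // facility scoping
        .eq("is_active", value: true)                      // soft-delete filter
        .execute()
        .value
}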

Voice command pipeline

The voice system is the most complex subsystem in the iOS app. It lives entirely in Features/RoomMode/Voice/ and consists of 8 coordinated services.

Architecture overview

Everything below runs inside VoiceCommandService, the two-stage orchestrator:

┌──────────────┐      ┌────────────────────┐
│ Stage 1      │      │ Stage 2            │
│ WakeWord     │─────▶│ SFSpeechRecognizer │
│ Detector     │      │ (1.5s silence      │
│ (Porcupine)  │      │  or 60s timeout)   │
│ ~1-2% CPU    │      │                    │
└──────────────┘      └─────────┬──────────┘
                                │ transcription
                                ▼
┌──────────────────────────────────────────┐
│           VoiceCommandParser             │
│     ┌──────────────────────────┐         │
│     │ MilestoneAliasDictionary │         │
│     │ (in-memory hash map)     │         │
│     └──────────────────────────┘         │
└──────────┬───────────────┬───────────────┘
     match │               │ no match
           ▼               ▼
┌──────────────┐  ┌─────────────────────┐
│ Fast Path    │  │ Slow Path           │
│ (instant,    │  │ VoiceLLMClassifier  │
│  offline,    │  │ (Claude Haiku       │
│  free)       │  │  Edge Function)     │
└──────┬───────┘  └──────────┬──────────┘
       │                     │
       └──────────┬──────────┘
                  ▼
┌──────────────────────────────────────┐
│        MilestoneValidation           │
│  (sequence check, already-recorded)  │
└──────────────────┬───────────────────┘
                   ▼
┌──────────────────────────────────────┐
│  RoomModeViewModel (command router)  │
│  → record / cancel / undo / query    │
└──────────────────┬───────────────────┘
                   ▼
┌──────────────────────────────────────┐
│        VoiceFeedbackService          │
│   (TTS + chime, pauses Stage 2)      │
└──────────────────────────────────────┘

Service breakdown

VoiceCommandService (orchestrator)

The central coordinator (~764 lines). Manages the two-stage flow:
  1. Configures audio session for simultaneous playback + recording
  2. Starts Porcupine wake word detection (Stage 1)
  3. On wake word → starts SFSpeechRecognizer (Stage 2)
  4. On transcription → routes to parser
  5. On result → triggers feedback and returns to Stage 1
  6. Handles microphone permissions, audio route changes, and error recovery
State machine:
idle → waitingForWakeWord → recognizingCommand → processing → (classifying) → back to waitingForWakeWord
Falls back to always-on speech recognition if Porcupine is unavailable (missing model file or access key).

WakeWordDetector

Picovoice Porcupine wrapper for the “Orbit” wake word:
  • Processes audio frames from the shared AVAudioEngine
  • Converts 48kHz float samples → 16kHz Int16 (Porcupine’s required format)
  • Runs at ~1-2% CPU (vs. 15-25% for always-on speech recognition)
  • Bundled model file: Orbit_en_ios_v4_0_0.ppn
  • No network required — fully on-device
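A simplified sketch of that sample conversion, decimation only (production code should low-pass filter first to avoid aliasing):

// Convert 48 kHz Float32 samples to 16 kHz Int16 frames for Porcupine.
func downsampleTo16kHzInt16(_ samples: [Float]) -> [Int16] {
    var output: [Int16] = []
    output.reserveCapacity(samples.count / 3)
    // 48,000 / 16,000 = 3 → keep every third sample.
    for i in stride(from: 0, to: samples.count, by: 3) {
        let clamped = max(-1.0, min(1.0, samples[i]))
        output.append(Int16(clamped * Float(Int16.max)))
    }
    return output
}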

VoiceCommandParser

Routes transcriptions through the alias dictionary:
  1. Normalizes input (lowercase, trim whitespace, collapse multiple spaces)
  2. Checks MilestoneAliasDictionary for a match
  3. Returns ParsedCommand with action type, milestone ID, and confidence
  4. Returns .noMatch if no alias found → triggers LLM slow path
Also handles pending state: when a command is waiting for confirmation, routes “yes”/“confirm”/“cancel”/“no” to the appropriate confirm/cancel action.

MilestoneAliasDictionary

An in-memory hash map of voice command aliases loaded from the voice_command_aliases database table:
  • Exact match — direct hash lookup (O(1))
  • Contains match — checks if the transcription contains a known alias phrase
  • Fuzzy match — Levenshtein distance ≤ 2 for close misses
  • Auto-learning — new aliases can be added at runtime when the LLM auto-caches
Normalization pipeline: lowercased → trimmed → collapsed whitespace → stripped punctuation
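A sketch of the three-tier lookup (illustrative; the real dictionary also carries action types and confidence levels):

import Foundation

struct ExampleAliasDictionary {
    // Normalized phrase → facility_milestone_id
    private var aliases: [String: UUID] = [:]

    func match(_ transcription: String) -> UUID? {
        let input = normalize(transcription)
        // 1. Exact match: O(1) hash lookup.
        if let id = aliases[input] { return id }
        // 2. Contains match: transcription contains a known alias phrase.
        if let hit = aliases.first(where: { input.contains($0.key) }) { return hit.value }
        // 3. Fuzzy match: Levenshtein distance ≤ 2 for close misses.
        return aliases.first { levenshtein(input, $0.key) <= 2 }?.value
    }

    // lowercased → punctuation stripped → whitespace trimmed and collapsed
    private func normalize(_ s: String) -> String {
        s.lowercased()
            .filter { !$0.isPunctuation }
            .split(whereSeparator: \.isWhitespace)
            .joined(separator: " ")
    }

    private func levenshtein(_ a: String, _ b: String) -> Int {
        let a = Array(a), b = Array(b)
        guard !a.isEmpty else { return b.count }
        guard !b.isEmpty else { return a.count }
        var row = Array(0...b.count)
        for i in 1...a.count {
            var previousDiagonal = row[0]
            row[0] = i
            for j in 1...b.count {
                let current = row[j]
                row[j] = min(row[j] + 1,       // deletion
                             row[j - 1] + 1,   // insertion
                             previousDiagonal + (a[i - 1] == b[j - 1] ? 0 : 1)) // substitution
                previousDiagonal = current
            }
        }
        return row[b.count]
    }
}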

VoiceLLMClassifier

Calls the classify-voice-command Supabase Edge Function when no local alias matches.

Request payload:
{
  "transcription": "the patient is in the room",
  "facilityMilestones": [...],
  "caseContext": { "currentPhase": "pre-op", "recordedMilestones": [...] }
}
Response:
{
  "intent": "record_milestone",
  "facilityMilestoneId": "uuid",
  "confidence": 0.92,
  "spokenResponse": "Patient In recorded",
  "shouldCache": true
}
The Edge Function uses Claude Haiku for fast, cost-effective classification. When shouldCache = true and confidence ≥ 85%, the phrase is automatically saved as a new alias in the voice_command_aliases table (with auto_learned = true).
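A sketch of the round trip from Swift, assuming the Supabase Functions client and Codable payloads mirroring the JSON above (type names are illustrative):

import Foundation
import Supabase

struct FacilityMilestoneSummary: Encodable {
    let id: UUID
    let name: String
}

struct CaseContext: Encodable {
    let currentPhase: String
    let recordedMilestones: [String]
}

struct ClassifyRequest: Encodable {
    let transcription: String
    let facilityMilestones: [FacilityMilestoneSummary]
    let caseContext: CaseContext
}

struct ClassifyResponse: Decodable {
    let intent: String
    let facilityMilestoneId: UUID?
    let confidence: Double
    let spokenResponse: String
    let shouldCache: Bool
}

func classify(_ request: ClassifyRequest, client: SupabaseClient) async throws -> ClassifyResponse {
    try await client.functions.invoke(
        "classify-voice-command",
        options: FunctionInvokeOptions(body: request)
    )
}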

MilestoneValidation

Validates milestone sequence before recording:
| Result | Condition | Behavior |
| --- | --- | --- |
| Immediate | All prior milestones recorded | Record instantly |
| SkippedWarning | 1 milestone skipped | Record with warning toast |
| OutOfOrder | 2+ milestones skipped | Hold pending — require verbal confirmation (15s timeout) |
| AlreadyRecorded | Milestone has existing timestamp | Hold pending — announce existing time, ask to confirm update |
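A sketch of the sequencing logic behind this table, as a pure function for clarity (the real service also handles the 15-second confirmation timeout):

import Foundation

enum ValidationResult {
    case immediate
    case skippedWarning
    case outOfOrder                // hold pending, require verbal confirmation
    case alreadyRecorded(Date)     // hold pending, announce existing timestamp
}

// `ordered`: the facility's milestones in sequence; `recorded`: milestone → timestamp.
func validate(_ target: UUID, ordered: [UUID], recorded: [UUID: Date]) -> ValidationResult {
    if let existing = recorded[target] {
        return .alreadyRecorded(existing)
    }
    guard let index = ordered.firstIndex(of: target) else { return .immediate }
    let skipped = ordered[..<index].filter { recorded[$0] == nil }.count
    switch skipped {
    case 0:  return .immediate
    case 1:  return .skippedWarning
    default: return .outOfOrder
    }
}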

VoiceFeedbackService

Text-to-speech engine using AVSpeechSynthesizer:
  • Supports configurable voice selection (system voices + Siri voices)
  • Three feedback levels: Full Verbal, Sounds Only, Silent
  • Pauses Stage 2 recognition during TTS playback to prevent self-hearing
  • Stage 1 keeps running during TTS — wake word detection is never interrupted
  • Plays confirmation chimes via AVAudioPlayer
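A sketch of the pause-during-playback coordination (the delegate wiring is standard AVFoundation; the resume hook is illustrative):

import AVFoundation

final class ExampleFeedbackService: NSObject, AVSpeechSynthesizerDelegate {
    private let synthesizer = AVSpeechSynthesizer()
    var onSpeechFinished: (() -> Void)?   // e.g. resume Stage 2 recognition

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    func speak(_ text: String, voiceIdentifier: String?) {
        let utterance = AVSpeechUtterance(string: text)
        if let id = voiceIdentifier, let voice = AVSpeechSynthesisVoice(identifier: id) {
            utterance.voice = voice   // user-selected voice from the picker
        }
        synthesizer.speak(utterance)
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        onSpeechFinished?()   // safe to re-enable Stage 2 listening
    }
}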

VoiceQueryResponder

Handles non-action intents (query_time, query_duration, query_case_info):
  • Formats response text based on case state and recorded milestones
  • Returns overlay data for UI display (auto-dismiss after 5 seconds)
  • Responds verbally via VoiceFeedbackService

VoiceCommandModels

Type definitions for the voice pipeline:
enum VoiceState {
    case idle, waitingForWakeWord, recognizingCommand, processing, classifying
}

struct ParsedCommand {
    let actionType: String        // "record", "cancel", "undo_last", etc.
    let facilityMilestoneId: UUID?
    let milestoneName: String?
    let confidence: CommandConfidence
    let classificationPath: String // "exact_match" or "llm"
}

struct VoiceLogEntry {
    let commandText: String
    let matchedMilestone: String?
    let confidence: CommandConfidence
    let outcome: String           // "recorded", "pending", "rejected", etc.
    let classificationPath: String
    let timestamp: Date
}

Command routing

The RoomModeViewModel routes parsed commands to their handlers (abbreviated; handler calls elided):

switch command.actionType {
case "record":             // Record milestone timestamp
case "cancel":             // Clear milestone timestamp
case "undo_last":          // Undo last voice-recorded milestone
case "next_patient":       // Call for next patient
case "surgeon_left":       // Mark surgeon departure
case "confirm_pending":    // Confirm a pending command
case "cancel_pending":     // Cancel a pending command
case "start_cement_timer": // Start floating cement timer
case "stop_cement_timer":  // Stop/dismiss cement timer
case "query_time":         // Answer time question
case "query_duration":     // Answer duration question
case "query_case_info":    // Answer case info question
}

Database tables

The voice system uses two database tables:

voice_command_aliases

| Column | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| facility_id | UUID (nullable) | NULL = global template, UUID = facility-specific |
| milestone_type_id | UUID (nullable) | Links to milestone type |
| facility_milestone_id | UUID (nullable) | Links to facility milestone |
| alias_phrase | text | The spoken phrase to match |
| action_type | text | Routing key (record, cancel, undo_last, etc.) |
| source_alias_id | UUID (nullable) | Template propagation tracking |
| is_active | boolean | Soft delete flag |
| auto_learned | boolean | Whether this was auto-learned from LLM |
| deleted_at | timestamptz (nullable) | Soft delete timestamp |
Scoping: Global templates (facility_id = NULL) are managed by global admins. Facility-specific aliases override or extend the global set. Each facility loads both global and local aliases.

voice_command_logs

| Column | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| case_id | UUID | The case this command was spoken during |
| command_text | text | Raw transcription |
| matched_milestone_id | UUID (nullable) | Which milestone was matched |
| confidence_level | text | high, medium, low, none |
| outcome | text | recorded, pending, rejected, cancelled, timeout, unrecognized |
| source_text | text (nullable) | Original speech before normalization |

Feature parity matrix

| Feature | Web | iOS |
| --- | --- | --- |
| Case management | Full CRUD | View + milestone recording |
| Room status board | Full | Full |
| Surgeon home dashboard | Full | Full |
| Device rep tray tracking | Full | Full (differentiator) |
| ORbit Score / Scorecards | Client-side calculation | Via surgeon_scorecards table |
| Voice commands | N/A | Full (iPad differentiator) |
| Room Mode | N/A | Full (iPad only) |
| Face ID auth | N/A | Full |
| Push notifications | N/A | Full (APNs) |
| Analytics dashboards | 6 views | Not started |
| Block scheduling | Full | Not started |
| Admin features | Full | Not planned for mobile |

Auth flow

User → Login → Supabase Auth → Session token → Keychain → RLS enforces access
  • Tokens stored in iOS Keychain (encrypted, hardware-backed)
  • Auto-refresh 5 minutes before expiry
  • Foreground refresh handles iOS background suspension
  • Face ID gates app access when enabled

Shared principles

The iOS app follows the same platform-wide rules as the web app:
  • Milestone v2.0 — facility_milestone_id is the FK, never milestone_type_id
  • Median over average — all analytics use median, not mean
  • Soft deletes — filter is_active = true on soft-delete tables
  • Facility scoping — every query filters by facility_id

Testing

Tests live in ORbitTests/ using XCTest:
| Coverage area | Status |
| --- | --- |
| Repositories | Covered |
| ViewModels | Covered |
| Voice pipeline (VoiceCommandService, VoiceLLMClassifier, etc.) | Gap — highest priority |
| UI / integration | Manual testing via Simulator/device |

Build configuration

| Setting | Value |
| --- | --- |
| Xcode project | ORbit.xcodeproj |
| Target | ORbit |
| Minimum iOS | 18.0 |
| Swift version | 5.9+ |
| Package manager | Swift Package Manager |
| Dependencies | Supabase Swift SDK, Porcupine iOS SDK |
| Entitlements | Keychain access, push notifications, microphone, speech recognition |

FAQ

Can the iOS app share the web app’s backend?
Yes. Both apps use the same Supabase project. Point the iOS app’s configuration to your Supabase URL and anon key. RLS ensures data isolation per facility.

How do I add a new voice command action type?
  1. Add the action type string to the voice_command_aliases migration (or insert rows directly)
  2. Add a case handler in RoomModeViewModel’s command routing switch
  3. Seed default aliases for the new action type
  4. Add the action type to the web settings UI filter (in VoiceCommandsPageClient.tsx)

Why is RoomModeViewModel so large?
It orchestrates milestone recording, voice command routing, staff/delay/implant modals, pace calculation, timer management, and cement timer state. Consider extracting voice routing into a dedicated VoiceCommandRouter if it grows further.

How does voice alias auto-learning work?
When the classify-voice-command Edge Function returns shouldCache = true with confidence ≥ 85%, the iOS app inserts a new row into voice_command_aliases with auto_learned = true. The alias is immediately added to the in-memory dictionary for instant matching on subsequent uses.
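A sketch of that insert, reusing the ClassifyResponse shape from the VoiceLLMClassifier section (the row struct and action type are illustrative):

import Foundation
import Supabase

// Illustrative row shape for voice_command_aliases (columns from the schema above).
struct NewAlias: Encodable {
    let facility_id: UUID
    let facility_milestone_id: UUID
    let alias_phrase: String
    let action_type: String
    let auto_learned: Bool
}

func cacheLearnedAlias(
    client: SupabaseClient,
    response: ClassifyResponse,
    phrase: String,
    facilityId: UUID
) async throws {
    guard response.shouldCache, response.confidence >= 0.85,
          let milestoneId = response.facilityMilestoneId else { return }
    let alias = NewAlias(
        facility_id: facilityId,
        facility_milestone_id: milestoneId,
        alias_phrase: phrase,
        action_type: "record",   // assumed: learned aliases record milestones
        auto_learned: true
    )
    try await client.from("voice_command_aliases").insert(alias).execute()
    // The new alias is also appended to the in-memory dictionary immediately.
}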

Next steps

  • iOS app guide — User-facing guide to the iOS app’s features and pages.
  • Room Mode — Full-screen OR dashboard with voice commands.
  • Architecture — Full technical overview of the ORbit platform.
  • Data model — Database schema and relationships.