
I Managed 588 People From a Single Dashboard (Here's How)

How a well-designed system replaced chaos with clarity - managing hundreds of field staff through software instead of spreadsheets.

systems · operations · scale · case-study


I didn’t “automate 588 people.”

I managed 588 clinicians – nurses, evaluators, and field staff – from a single dashboard so the humans could finally stop drowning in busywork and actually spend time with patients and families.

The whole point of the system wasn’t headcount reduction.

It was this:

Take the rote, repetitive coordination work away from humans
so they can spend more time on connection work – the part that actually matters.

That distinction is everything.


1. The “588” Number – What I Was Actually Managing

At peak, the system was coordinating roughly:

  • 300–500 traveling nurses/evaluators in the field
  • Plus additional clinical staff and support roles
  • Across multiple states, time zones, and regulatory environments

All of that:

  • Assignments
  • Scheduling
  • Capacity management
  • Compliance flags
  • Performance signals

…was visible and actionable from one operational dashboard.

This wasn’t a “nice internal tool.”

This was the control panel for an entire national operation.


2. What It Means to “Manage” 588 People From a Dashboard

“Manage” here didn’t mean “click a button and the humans obey.”

It meant:

  • Scheduling & capacity

    • Who’s available where and when?
    • Who can take one more case today?
    • Who is at risk of being overloaded?
  • Assignments

    • Match the right clinician to the right case:
      • geography
      • skill set
      • licensing
      • client requirements
  • Compliance

    • License expirations
    • Required training
    • Documentation completion
    • State-specific rules
  • Workflow tracking

    • Where is each case in the pipeline?
    • What’s stuck? Who’s waiting on what?
    • What’s about to breach a promised SLA?
  • Performance signals

    • Throughput, turnaround times, error rates
    • Not to punish people, but to spot where the process is failing them

All that used to require:

  • Phone calls
  • Emails
  • Spreadsheets
  • Sticky notes
  • “Who remembers what?”

The dashboard pulled the reality into one screen where:

  • You could see the operation
  • You could act on it
  • And the system did the drudge work in the background

3. The Dashboard: What It Actually Looked Like

This was not a sexy “BI chart for executives.”

It was a working operator’s console.

Key pieces:

a) At-a-glance load view

  • Current cases in progress
  • Broken down by:
    • region
    • client
    • clinician type
  • Color-coded:
    • Normal
    • At-risk
    • Breach imminent
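The color bands above can be sketched as a small mapping from time-to-SLA into dashboard statuses. The thresholds here are hypothetical; the real bands were client- and case-specific:

```python
from datetime import timedelta

# Hypothetical thresholds -- the real bands varied by client and case type.
AT_RISK = timedelta(hours=24)
BREACH_IMMINENT = timedelta(hours=4)

def sla_status(time_remaining: timedelta) -> str:
    """Map the time left before a promised SLA into the dashboard's color bands."""
    if time_remaining <= BREACH_IMMINENT:
        return "breach-imminent"   # red
    if time_remaining <= AT_RISK:
        return "at-risk"           # yellow
    return "normal"                # green
```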

b) Capacity & availability panel

  • For each clinician:
    • Current assignments
    • Remaining capacity (based on configurable rules)
    • Time-off and blackout periods
    • Travel constraints / territory

You could:

  • Filter by region, license, client, etc.
  • See who can realistically take the next case without burning out.
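The "remaining capacity" computation above can be sketched like this. The field names and the simple weekly-limit rule are illustrative, standing in for the configurable rules the system actually applied:

```python
from dataclasses import dataclass

@dataclass
class Clinician:
    name: str
    weekly_limit: int          # a configurable rule, not hard-coded policy
    assigned_this_week: int = 0
    on_blackout: bool = False  # time off / blackout period

def remaining_capacity(c: Clinician) -> int:
    """How many more cases this clinician can realistically take this week."""
    if c.on_blackout:
        return 0
    return max(0, c.weekly_limit - c.assigned_this_week)
```

The dashboard's filters (region, license, client) then operated over exactly this kind of per-clinician record.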

c) Assignment & routing tools

From the dashboard you could:

  • Auto-assign based on rules and priorities
  • Manually override when human judgment was better
  • Bulk assign in special situations (surges, seasonal spikes)

The automation did the first pass.
Humans corrected the edge cases.
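That first pass can be sketched as a rules-based filter plus a default pick, with `None` as the signal to escalate to a human. The dict shapes are illustrative, not the real schema:

```python
def first_pass_assign(case, clinicians):
    """Automation's first pass: suggest the least-loaded eligible clinician.

    Returns None when no one qualifies, which hands the case to a human
    coordinator. A coordinator can also override any suggestion before
    it is committed.
    """
    eligible = [c for c in clinicians
                if case["region"] in c["regions"]
                and case["skill"] in c["skills"]
                and c["load"] < c["limit"]]
    if not eligible:
        return None  # edge case: a human takes over
    return min(eligible, key=lambda c: c["load"])
```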

d) Compliance & risk strip

  • Expiring licenses
  • Missing documentation
  • Incomplete evals
  • Training deadlines

The system:

  • Flagged risk before it became crisis
  • Reduced the “oh shit, we missed that” moments
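A minimal sketch of the compliance strip's per-clinician check, assuming a hypothetical 30-day warning window:

```python
from datetime import date

def compliance_flags(license_expiry: date, docs_complete: bool,
                     today: date, warn_days: int = 30) -> list[str]:
    """Return the flags the compliance strip would raise for one clinician.

    warn_days is an illustrative threshold -- the point is flagging risk
    *before* it becomes a crisis.
    """
    flags = []
    if license_expiry < today:
        flags.append("license-expired")
    elif (license_expiry - today).days <= warn_days:
        flags.append("license-expiring")
    if not docs_complete:
        flags.append("missing-documentation")
    return flags
```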

e) Actions and automation

From one place, operational staff could:

  • Trigger notifications to clinicians
  • Re-route work
  • Pause new assignment flow to overstretched areas
  • Generate reports for clients or auditors

The litmus test was simple:

“Can one person sitting here understand and influence what 588 people are doing without picking up the phone?”



4. Before the System: Chaos by Spreadsheet and Heroics

Before Conductor – the system behind this dashboard – operations looked like:

  • Massive shared spreadsheets
  • Color coding that only three people fully understood
  • Outlook calendars patched together into a pseudo-scheduling system
  • Phone calls, texts, and emails to:
    • check availability
    • chase paperwork
    • rescue last-minute schedule failures

Problems:

  • No single source of truth
  • No real-time picture of:
    • load
    • risk
    • capacity
  • Everything relied on:
    • “Who remembers what?”
    • “Who’s the most organized person in the office?”

It “worked” as long as:

  • Volume stayed modest
  • The same heroic coordinators never took a real vacation

Past a certain scale, it breaks.
Not because people are bad at their jobs,
but because humans are not designed to juggle that much state in their heads.


5. Key Technical Decisions That Made This Possible

This didn’t magically come from “a dashboard.”

It came from architectural decisions:

  1. Single operational source of truth

    • The system owned:
      • case status
      • assignments
      • availability
    • No “side spreadsheets” were allowed to become reality.
  2. A data model built around people + work

    • Entities modeled:
      • clinicians
      • qualifications
      • time windows
      • assignments
      • constraints
    • Not just “rows in a generic tasks table.”
  3. Near real-time updates

    • Events flowed as:
      • assignments changed
      • clinicians accepted/declined
      • cases moved stages
    • The dashboard wasn’t a stale snapshot.
  4. Rules engine for scheduling & capacity

    • Configurable rules for:
      • max daily/weekly load
      • service areas
      • client-specific constraints
    • So changes in policy didn’t require massive code changes.
  5. Role-based access & views

    • Field staff saw what they needed.
    • Coordinators saw cross-cutting views.
    • Leadership saw aggregates and trend lines.
  6. Auditability

    • Every change:
      • who did it
      • when
      • why (where relevant)
    • Non-negotiable in healthcare and critical workflows.

This combination made it possible to scale without scaling the chaos.
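The rules-engine idea (decision 4) can be sketched as policy living in config rather than code. The rule names and values here are hypothetical; the real engine supported more constraint types:

```python
# Hypothetical policy config -- changing policy means editing this,
# not rewriting the scheduling code.
RULES = {
    "max_daily_load": 4,
    "max_weekly_load": 18,
    "service_radius_miles": 75,
}

def check_assignment(daily_load, weekly_load, distance_miles, rules=RULES):
    """Return the names of any rules a proposed assignment would violate."""
    violations = []
    if daily_load + 1 > rules["max_daily_load"]:
        violations.append("max_daily_load")
    if weekly_load + 1 > rules["max_weekly_load"]:
        violations.append("max_weekly_load")
    if distance_miles > rules["service_radius_miles"]:
        violations.append("service_radius_miles")
    return violations
```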



6. Scheduling 300–500 Traveling Nurses: The Hard Part

Scheduling traveling clinicians is not:

“Put names in a calendar.”

You’re juggling:

  • Geographies
  • State lines and licensing rules
  • Travel time
  • Visit durations
  • Time zones
  • “Hard” constraints (no license = no work)
  • “Soft” constraints (don’t burn out your top performers)

The system handled things like:

  • Who is within a reasonable radius for this assignment?
  • Who has the right credentials for this type of case?
  • Who has capacity based on:
    • weekly limits
    • time already committed
  • How do we minimize:
    • travel dead time
    • unproductive gaps
    • last-minute scrambles?

We automated:

  • First-pass matching
  • Automatic filtering of ineligible clinicians
  • Scheduling suggestions ranked by:
    • fit
    • cost
    • current load

Humans:

  • Reviewed
  • Overrode when they had external knowledge (family issue, local context)
  • Focused on edge cases instead of starting from zero every time

That’s the theme:

Automation handled coordination.
Humans focused on connection and judgment.
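The ranked suggestions described above can be sketched as a weighted score over already-eligible candidates. The weights and the 0..1 normalization are illustrative assumptions, not the real tuning:

```python
def rank_suggestions(candidates, w_fit=0.5, w_cost=0.3, w_load=0.2):
    """Rank eligible clinicians for a case, best suggestion first.

    Higher fit is better; lower travel cost and lower current load are
    better. Weighting load into the score is one way to avoid greedily
    piling work onto the single "best" clinician.
    """
    def score(c):
        return w_fit * c["fit"] - w_cost * c["cost"] - w_load * c["load"]
    return sorted(candidates, key=score, reverse=True)
```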


7. The Myth of “Replacing 25 Back-Office Staff”

There’s a version of this story that sounds like:

“We replaced 25 people with software.”

That’s not the story I want to tell.

The deeper reality:

  • In a traditional model, running this scale of operation would have required:
    • ~25 more full-time coordinators, schedulers, and ops staff
    • just to keep up with the volume

Because the system existed, we could:

  • Run the same or higher volume with:
    • a much smaller ops team
    • who were able to spend their time:
      • solving real problems
      • supporting clinicians
      • handling exceptions
      • improving processes

What changed:

  • Before

    • Humans spent most of their day:
      • copying between systems
      • updating spreadsheets
      • playing phone/email tag
      • fighting fires they couldn’t see coming
  • After

    • The system:
      • tracked status
      • routed work
      • watched key thresholds
    • Humans:
      • had time to talk to clinicians
      • explain changes
      • help with tough cases
      • build relationships with clients

Yes, we avoided hiring a bunch of additional coordinators.
But the story isn’t “we cut humans.”

It’s:

“We stopped asking humans to act like bad, error-prone databases
and let them act like humans again.”


8. Scale Challenges: What Broke Going From 100 to 588

Scaling from managing ~100 people to ~588 is not a linear change.

Things that broke or nearly broke:

  1. Naive assumptions in the scheduling logic

    • At small scale, simple rules work.
    • At higher scale:
      • edge cases become common
      • performance tuning matters
      • greedily assigning “the best” clinician can starve others
  2. Reporting & visibility

    • Early on, a few rough reports are enough.
    • Later:
      • finance wants one view
      • ops wants another
      • clients want SLAs
    • We had to:
      • restructure reporting
      • add specific operational metrics
      • optimize queries that were fine at 50 clinicians and miserable at 500+
  3. Change management

    • When you change a workflow for 50 people, you can hold their hands.
    • At 500+, any UI or process tweak has:
      • downstream impact
      • real training cost
    • We had to:
      • slow down changes
      • communicate better
      • use the dashboard to detect who was struggling with the new flow
  4. Mistakes hurt more

    • A small bug at 50 people:
      • annoying
    • The same bug at 500+:
      • real money
      • real stress
    • That pushed even more focus on:
      • testing critical paths
      • monitoring
      • feature-flagging changes

Lesson:

Scale doesn’t just amplify throughput.
It amplifies flaws and blind spots.


9. How the 588 People Actually Used the System

For field staff (nurses, evaluators, etc.):

  • Interface: primarily web, with mobile-friendly access for on-the-go use
  • Core things they did:
    • See their upcoming assignments
    • Acknowledge / accept work
    • Document visits, outcomes, notes
    • Upload required documentation

For coordinators and ops:

  • Lived in the dashboard for:
    • daily triage
    • scheduling
    • exception handling
    • client-specific requests

Adoption wasn’t magic. It required:

  • Training

    • Short, focused sessions on:
      • how to see your work
      • how to complete tasks
      • how to correct mistakes without opening a ticket for everything
  • Feedback loops

    • Listening to:
      • “This screen makes no sense”
      • “This takes too many clicks”
    • Using that to:
      • simplify flows
      • hide irrelevant detail
      • add shortcuts where they mattered
  • Trust-building

    • If the system assigns you nonsense work, you stop trusting it.
    • So we iterated:
      • tuning rules
      • making assignments more sane
      • giving users an easy way to say:

        “This doesn’t work, here’s why.”

The end state:

  • People used the system because it helped them,
    not because someone forced them to click boxes.

10. Operator Insights Most Developers Never See

Managing 588 people from a single dashboard teaches you things that aren’t in most dev books:

  1. Humans are not lazy – they’re overloaded.

    • When people “fail,” it’s often because:
      • the system gives them bad information
      • the process fights them
      • they’re doing too much manual work
  2. The cost of coordination is invisible until you remove it.

    • You don’t feel the drag of:
      • constant checking
      • constant correction
    • until the system lifts it and everyone suddenly has time to breathe.
  3. Good architecture is about protecting humans from chaos.

    • It’s not an academic exercise.
    • It’s about:
      • surfacing the right information
      • at the right time
      • in a form they can act on
      • without frying their brain.
  4. Dashboards can be weapons or instruments.

    • Weapons:
      • used to beat people up with metrics
    • Instruments:
      • used to see reality
      • and improve the system to support the people in it
    • We deliberately aimed for the second.
  5. You can’t design real systems from a whiteboard alone.

    • A lot of the best behavior came from:
      • watching how ops REALLY worked
      • seeing where stress piled up
      • and coding the real world, not the imagined one

If there’s one thing I’d want technical leaders to take from this:

Don’t build systems that try to replace people.
Build systems that remove the stupid, brittle, repetitive work
so your people can do the part only humans can do well.

That’s what managing 588 people from a single dashboard was really about.


Context → Decision → Outcome → Metric

  • Context: Multi-state healthcare ops, 588 people at peak, high-stakes scheduling with licensing and travel constraints.
  • Decision: Centralize coordination in a single dashboard with rules-driven matching, idempotent tasks, and human override paths; force every integration through adapters so downstream changes didn’t break scheduling.
  • Outcome: Avoided hiring ~25 additional coordinators, reduced scramble calls, and increased on-time assignments while keeping humans in the loop for judgment calls.
  • Metric: Assignment SLA hit rates improved from ~82% to ~96%; manual escalations per week dropped by ~40% after automating first-pass matching.

Anecdote: The Week of the Ice Storm

An ice storm froze half the region and wiped out travel. Instead of whiteboards and panic, the dashboard surfaced at-risk appointments, auto-paused them, and re-ranked eligible clinicians who could still move. Coordinators used the “human notes” field to mark local conditions and overrides. Result: we salvaged 60% of the schedule that would have been canceled, with documented reasoning for every exception. No one guessed; the system gave them a sane starting point.

Mini Checklist: Scaling People Ops Without Chaos

  • Encode hard constraints (licenses, eligibility) and soft ones (fatigue, travel sanity) separately; enforce the first, surface the second.
  • Keep automation reversible; every automated assignment should be explainable and overridable with an audit trail.
  • Build “pause and re-rank” workflows for regional disruptions; don’t make humans rebuild schedules from scratch.
  • Measure coordination load: escalations, overrides, and assignment SLA hits. Optimize for fewer escalations, not just more assignments.
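The "pause and re-rank" item can be sketched as a single workflow: pause every appointment in an affected region and hand coordinators a pre-ranked candidate list instead of a blank whiteboard. The dict shapes are illustrative:

```python
def pause_and_rerank(appointments, affected_regions, clinicians):
    """Regional-disruption workflow.

    Marks at-risk appointments paused and returns, per appointment, a list
    of still-eligible clinicians ranked by current load -- a sane starting
    point for humans, not a finished decision.
    """
    plan = []
    for appt in appointments:
        if appt["region"] in affected_regions:
            appt["status"] = "paused"
            movable = sorted(
                (c for c in clinicians
                 if appt["region"] in c["regions"] and c["can_travel"]),
                key=lambda c: c["load"])
            plan.append((appt["id"], [c["name"] for c in movable]))
    return plan
```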