The Architecture Decisions That Prevented $20M in Rebuilds
The seven key architecture decisions that kept Conductor running for 20 years without a ground-up rewrite.

When an independent group looks at your system and says:
“It would cost on the order of $20M to rebuild this from scratch,”
you can react in one of two ways:
- Panic
- Or say:
“Good. That means the architecture held up long enough that a rewrite is that expensive.”
This post is about why Conductor never needed a ground-up rewrite, despite 20 years of change.
1. The $20M Replacement Estimate
Directionally, the replacement cost came from:
- Developer-years:
- multiple teams, over multiple years
- rebuilding:
- core platform
- integrations
- reporting
- migration tooling
- Rates:
- mid to high six-figure annual cost per senior engineer + overhead
- or contractor equivalents
Ballpark:
- 15–25 serious engineers/architects
- 3–4 years
- plus:
- QA
- PM
- ops
- domain experts
- migration and cutover
You’re easily in eight-figure territory.
The point isn’t the exact number.
It’s that a full rewrite is a massive, risky investment — and we never had to make it.
Because of some deliberate architecture calls.
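The ballpark above is simple arithmetic. As a sketch (the loaded-cost figures are illustrative assumptions, not audited numbers):

```python
# Back-of-envelope rewrite cost using the ranges from this post.
# The loaded cost per engineer (salary + overhead) is an assumption,
# not a figure from the actual audit.

def rewrite_cost(engineers: int, years: float, loaded_cost: float) -> float:
    """Engineering cost only; QA, PM, ops, and migration come on top."""
    return engineers * years * loaded_cost

low = rewrite_cost(engineers=15, years=3, loaded_cost=250_000)   # $11.25M
high = rewrite_cost(engineers=25, years=4, loaded_cost=400_000)  # $40M
```

Even the conservative end is eight figures before you add QA, PM, ops, domain experts, and cutover.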
2. Key Decision #1: Relational, Well-Modeled Core (SQL, Not Chaos)
Decision:
- Use a relational database (SQL Server)
- With a carefully designed schema:
- normalized where it mattered
- denormalized where justified
- clear primary keys and relationships
Reporting stayed sane for 20 years because the schema reflected reality. Audits were possible because relationships were explicit. When scale grew, we could add indexes and tune performance without rewriting the data layer.
The alternative—a junk drawer of JSON blobs and NoSQL experiments—would have forced a rewrite by year five. I've seen that movie. Every query becomes archaeology. Every report requires a data warehouse to reconstruct obvious questions. That's the trap.
3. Key Decision #2: Clear Domain Boundaries in the Schema
Decision:
- Model real-world concepts explicitly:
- participants
- assessments
- cases
- providers
- programs
- Avoid:
- vague “blob” tables
- generic “config” for everything
When new features arrived—and they always do—they had a home. We could anchor to existing concepts or add new, well-defined entities without tearing up core tables or breaking every report.
Without this? Schema entropy. Every new requirement becomes a new column glommed onto one massive table. Eventually someone says, "We have to redo the data model." That's a rewrite in disguise.
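The contrast is easy to demonstrate. A hedged sketch, with hypothetical names (SQLite standing in for the real SQL Server schema): the blob version works, but every read requires knowing the blob's private layout, while the explicit version documents the domain in the schema itself.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")

# Anti-pattern: one generic table, meaning buried in a blob.
conn.execute("CREATE TABLE thing (id INTEGER PRIMARY KEY, kind TEXT, data_blob TEXT)")
conn.execute("INSERT INTO thing VALUES (1, 'case', ?)",
             (json.dumps({"status": "open", "provider": "Acme"}),))

# Every query becomes archaeology: the database can't help you,
# so you parse the blob in application code and hope the layout held.
raw = conn.execute("SELECT data_blob FROM thing WHERE kind = 'case'").fetchone()[0]
blob_status = json.loads(raw)["status"]

# Explicit modeling: constraints and columns make the rules visible.
conn.execute("""CREATE TABLE case_record (
    case_id       INTEGER PRIMARY KEY,
    status        TEXT NOT NULL CHECK (status IN ('open', 'closed')),
    provider_name TEXT NOT NULL)""")
conn.execute("INSERT INTO case_record VALUES (1, 'open', 'Acme')")
explicit_status = conn.execute(
    "SELECT status FROM case_record WHERE case_id = 1").fetchone()[0]
```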
4. Key Decision #3: Integration via Explicit Interfaces, Not DB Side-Doors
Decision:
- Integrate with external systems through:
- APIs
- file interfaces
- well-defined contracts
- No:
- direct writes into our core DB by external parties
- “just hook into our tables” shortcuts
This meant we could evolve schema internals and change implementation details without breaking partner systems or triggering cascading failures.
The alternative is grim. You get permanently handcuffed to old schemas because some external system assumes a column exists. You can't change anything without coordinating with five partners who don't care about your roadmap. I've seen companies spend $2M just to untangle unsafe DB-level integrations. We never had to.
5. Key Decision #4: Separate Operational Workflows from Reporting
Decision:
- Distinguish between:
- OLTP (live operations)
- reporting / analytics views
- Avoid:
- loading heavy, ad-hoc reporting directly on transactional tables
- “just run that monster query live” habits
Operational performance stayed stable for 20 years. We could tune indexes separately, add reporting-friendly views, build analytics—without wrecking day-to-day usage.
I've seen the alternative. Someone runs a "quick report" directly against production tables. It locks rows. Operations grind to a halt. Now you need a "re-architecture to fix reporting." Not great.
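One lightweight way to draw that line is to give reporting its own surface. A sketch with hypothetical names (in production this would usually be a read replica or warehouse, not just a view, but the boundary is the same idea):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE claim (
    claim_id INTEGER PRIMARY KEY,
    provider TEXT NOT NULL,
    amount   REAL NOT NULL,
    paid_on  TEXT NOT NULL
);
INSERT INTO claim VALUES (1, 'Acme', 100.0, '2020-01-05');
INSERT INTO claim VALUES (2, 'Acme', 250.0, '2020-01-20');
INSERT INTO claim VALUES (3, 'Beta', 75.0,  '2020-02-02');

-- Reporting reads go through a dedicated, reporting-friendly view,
-- never ad-hoc "monster queries" against the live tables.
CREATE VIEW monthly_provider_totals AS
SELECT provider, substr(paid_on, 1, 7) AS month, SUM(amount) AS total
FROM claim
GROUP BY provider, month;
""")
totals = conn.execute(
    "SELECT provider, month, total FROM monthly_provider_totals"
    " ORDER BY provider, month").fetchall()
```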
6. Key Decision #5: Centralized Business Rules, Not Scattered Logic
Decision:
- Keep core business logic:
- in well-defined services/modules
- with clear entry points
- Avoid:
- copy-pasted rule logic spread across UI, DB, and random scripts
Policy changes happened constantly—healthcare credentialing rules shift every year. When they did, we could implement the change in one place, test it, review it, and deploy it without wondering what else would break.
Rule spaghetti kills systems. Copy-paste logic across UI, database, and random scripts means every change is unpredictable. Eventually someone decides a full rewrite is "easier" than understanding existing behavior. That's the moment the death spiral starts.
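In code, "one place" looks something like this. A hypothetical sketch (the actual rules lived in .NET services; the names here are invented): one module owns the rule, and UI, API, and batch jobs all call the same entry point.

```python
from dataclasses import dataclass

# Hypothetical credentialing rule, centralized in one module.
# When the policy changes, only this file changes.

@dataclass
class Provider:
    license_expires_year: int
    completed_training_hours: int

REQUIRED_TRAINING_HOURS = 20  # policy value, updated in exactly one place

def is_credentialed(provider: Provider, current_year: int) -> bool:
    """Single authoritative check, called from UI, API, and batch jobs alike."""
    return (provider.license_expires_year >= current_year
            and provider.completed_training_hours >= REQUIRED_TRAINING_HOURS)

ok = is_credentialed(Provider(2026, 24), current_year=2025)      # True
lapsed = is_credentialed(Provider(2024, 24), current_year=2025)  # False
```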
7. Key Decision #6: Gradual Modernization, Not Big Bang
Decision:
- Incrementally:
- refactor ugly areas
- improve modules
- introduce better patterns
- While:
- the system stayed live
- customers kept using it
We got modernization benefits without stopping the world. No betting the business on a multi-year rewrite. The system stayed live. Customers kept using it. We improved it piece by piece.
Here's what nobody tells you about "big bang" rewrites: three years in, you still haven't launched. The old system is still running. You're maintaining two codebases. Morale is shot. That's the real cost.
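The post doesn't name a specific mechanism, but one common way to modernize piece by piece while staying live is routing individual features from the legacy implementation to the rewritten one (often called the strangler-fig approach). A minimal, hypothetical sketch:

```python
# Strangler-style routing sketch (illustrative, not Conductor's actual code):
# each feature is migrated one at a time, behind a routing table, while
# both implementations coexist and the system stays live.

def legacy_tax(amount: float) -> float:
    return round(amount * 0.0725, 2)   # old implementation, still trusted

def modern_tax(amount: float) -> float:
    TAX_RATE = 0.0725                  # refactored version, same behavior
    return round(amount * TAX_RATE, 2)

MIGRATED_FEATURES = {"tax"}            # grows one entry at a time

def compute_tax(amount: float) -> float:
    impl = modern_tax if "tax" in MIGRATED_FEATURES else legacy_tax
    return impl(amount)
```

The routing table is the safety valve: if the new module misbehaves, you remove one entry and the legacy path takes over again.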

8. Key Decision #7: Conservative, Proven Technologies
As covered in the “boring tech” post:
Decision:
- Use:
- .NET
- SQL Server
- Windows/IIS
- Avoid chasing every new framework fad.
Lower framework churn meant fewer forced rewrites just because "the tech went obsolete." Ops and devs could focus on business complexity instead of framework drama.
The honest truth is that nobody remembers what JavaScript framework was hot in 2008. But SQL Server from 2005 still runs. The boring choice was the right choice.
9. Technical Deep-Dive: Integration Contracts as a Rewrite Shield
Let’s zoom into one decision:
Treat integrations as contracts, not as afterthoughts.
Pattern:
- Each integration:
  - had a defined interface:
    - schema
    - field semantics
    - error behavior
  - went through:
    - mapping
    - validation
    - logging
- Internally, we:
  - transformed data into our domain model
  - never let external quirks leak deep inside
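As a sketch of that pattern (all names here are hypothetical; the real adapters were .NET code): external quirks are mapped and validated at the boundary, and the core only ever sees clean domain objects.

```python
from dataclasses import dataclass

# Hypothetical adapter: one partner's wire format in, domain objects out.
# If the partner changes formats, only this module changes.

@dataclass
class Referral:                # internal domain model, partner-agnostic
    participant_id: int
    program_code: str

class ContractError(ValueError):
    """Raised (and logged) when a payload violates the agreed contract."""

def state_feed_adapter(raw: dict) -> Referral:
    """Map, validate, and normalize one partner payload at the boundary."""
    try:
        participant_id = int(raw["PART_ID"])      # partner's field names
        program_code = raw["PRG"].strip().upper()
    except (KeyError, ValueError) as exc:
        raise ContractError(f"bad partner payload: {raw!r}") from exc
    if not program_code:
        raise ContractError("program code is required")
    return Referral(participant_id, program_code)

referral = state_feed_adapter({"PART_ID": "042", "PRG": " wic "})
```

The core never sees `PART_ID` or stray whitespace; it receives a `Referral` or the payload is rejected at the edge, with a traceable record of what arrived.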
Why it’s a big deal:
- Change isolation
  - If a partner changed:
    - field formats
    - sending behavior
  - we only had to adjust:
    - the adapter layer
    - not the whole core system
- Migration flexibility
  - When replacing an external system, we:
    - reused internal core logic
    - only swapped integration modules
- Auditability
  - We could trace:
    - “what did we receive?”
    - “what did we send?”
  - without spelunking half a dozen random places
Counterfactual (what if we hadn’t done this):
- External systems would:
- write directly into core tables
- rely on weird, undocumented conventions
- Any change would risk:
- silent data corruption
- cascading bugs
- huge fear of change
- Eventually someone says:
“This is unfixable junk; we have to rewrite the whole thing.”
This one choice — treating integration boundaries as real contracts — is a huge part of why Conductor could evolve instead of being restarted from scratch.

10. Lessons for Architects
What to take away:
- Design for evolution, not perfection.
  - Ask: “How will this survive policy/requirement changes?”
  - Not: “How do I make this extremely clever today?”
- Protect your core with clear boundaries.
  - Data model
  - Integrations
  - Business rules
- Prefer boring tech in long-lived systems.
  - Let your innovation live in:
    - domain modeling
    - UX
    - workflows
  - Not in unnecessary tech novelty
- Make rewrites the least attractive option.
  - If a full rewrite is cheap, your architecture probably didn’t encode much real value.
  - If a full rewrite is obviously painful, that means:
    - your system carries significant, accumulated knowledge
    - your job is to evolve it, not burn it.
If you do this right, no one applauds you for "the big rewrite."
Because they never need one.
Context → Decision → Outcome → Metric
- Context: 20-year healthcare platform processing $100M+ annually, with pressure points that could have forced rewrites: changing integrations, growing scale, policy changes, staff turnover.
- Decision: Made seven architecture calls early: relational core with good schema, clear domain boundaries, integration via explicit interfaces, separated OLTP from reporting, centralized business rules, gradual modernization, and boring tech.
- Outcome: Platform never needed a ground-up rewrite. Evolved continuously while staying live. Third-party audit estimated $20-35M replacement cost—a measure of accumulated value, not debt.
- Metric: Zero full rewrites in 20 years. 99.9% uptime. Zero contract losses over 12-year stretch. Modernization happened incrementally without stopping the business.
Anecdote: The Integration That Could Have Killed Us
In 2011, a state partner announced they were changing their data exchange format. Six months' notice. Completely different schema, different auth mechanism, different delivery method.
If we'd built integrations the wrong way—direct DB writes, tight coupling, no adapter layer—that change would have required touching code across the entire platform. Months of work. High regression risk. Probably the kind of project that triggers "let's just rewrite the whole thing."
Instead, we had the adapter pattern in place. The integration existed behind a clear contract. The core system had no idea where data came from; it just received validated domain objects.
The migration took three weeks. Two developers. Zero changes to core business logic. The state administrator later told us it was "the smoothest vendor transition they'd seen."
That three-week migration is what $20M of avoided rewrite looks like in practice.
Anecdote: The Schema Decision That Paid Off for 15 Years
In 2005, we had a debate about how to model "cases" in the database. One option was a generic table: case_id, case_type, data_blob. Flexible. Easy to add new case types. Very tempting.
The other option was explicit tables for each case type with proper relationships and foreign keys. More upfront work. More migrations when requirements changed. Less "flexible."
We chose explicit modeling. It felt like more work at the time.
Fifteen years later, that decision was still paying dividends. Reporting was sane because the schema reflected the domain. Audits were possible because relationships were explicit. New features could anchor to existing entities without archaeology expeditions.
The "flexible blob" approach would have been faster in year one. It would have been a nightmare by year five. By year fifteen, it would have been a rewrite trigger.
The boring choice was the right choice.
Mini Checklist: Architecture for Longevity
- [ ] Schema models real-world concepts explicitly (not generic blobs)
- [ ] Integrations go through adapters with clear contracts, not DB side-doors
- [ ] OLTP and reporting are separated; analytics don't kill production
- [ ] Business rules live in defined modules, not scattered across UI/DB/scripts
- [ ] Modernization happens gradually while system stays live
- [ ] Tech stack is boring enough to have a 10-year support horizon
- [ ] Every major decision has a documented "why" that future developers can reference