
2025 Tech Decisions: What Actually Matters and What's Just Noise

A filter for technology decisions in 2025 - where to stay boring, where to experiment, and what to ignore.

Tags: technology, decisions, 2025, strategy



Every week in 2025 there's some new thing you're "already behind on":

  • New JS framework that's "finally the one"
  • New "serverless" twist that will "change everything"
  • New vector DB that makes all previous databases obsolete
  • New AI infra stack that you "need to adopt now"
  • New "you're an idiot if you're not using X" hot take

If you chase all of it, you'll drown in migrations, ramp-up costs, and half-finished experiments. If you ignore all of it, you'll calcify into irrelevance.

The trick is knowing what's real change versus what's noise dressed up as innovation.

Here's how I filter tech decisions after 20 years of watching hype cycles come and go.


What Has Actually Changed Since the 2000s

Some things really are different now. Denying these would be stubborn, not principled.

Cloud is real and mature:

You don't need to run your own hardware anymore. Managed databases, queues, and infrastructure are legitimate options with proven track records at scale. The operational burden of self-hosting is rarely worth it unless you have specific compliance, latency, or cost constraints.

In 2008, we ran our own servers in a colocation facility. We managed hardware failures, network configs, and capacity planning ourselves. It was necessary then. In 2025, it's usually a choice—and often the wrong one.

AI/LLMs are now a primitive:

You can realistically classify, summarize, and assist without a PhD and a cluster. The APIs are stable enough for production. The costs have dropped enough to be viable at scale.

This doesn't mean "put AI everywhere." It means AI is now a tool in the toolbox, like databases or message queues. You evaluate it for specific use cases, not as a religion.

Observability is normal:

Logging, metrics, and tracing are expected, cheap, and mostly solved problems. You don't need to build your own monitoring stack. You don't need to argue about whether visibility into your systems is worth the investment.

Twenty years ago, we debugged production issues with grep and intuition. Now there's no excuse for flying blind.


What Hasn't Changed At All

The fundamentals didn't move. Anyone telling you otherwise is selling something.

Data modeling still matters:

If your schema is garbage, no stack will save you. A poorly modeled domain creates compounding problems that show up in every query, every report, and every feature request.

I've inherited systems where "we'll fix the data model later" turned into "we can't add this feature without a six-month migration." The schema is the foundation. Everything else is decoration.

Rewrites are still expensive:

Burning it down is still a multi-year, multi-million-dollar decision. The fantasy of "we'll just rewrite it in [new framework] and everything will be better" kills companies.

I've watched three rewrites fail in my career. Each one took 18–36 months, consumed the entire engineering team, and delivered roughly the same functionality as the original—minus the edge cases that had been quietly handled for years.

Humans are still the bottleneck:

Coordination, communication, and bad requirements cause more failures than technology choices. The limiting factor on most projects isn't the stack. It's alignment.

No framework fixes "we don't know what we're building" or "the team doesn't talk to each other."

Reliability still wins long-term:

People eventually get tired of shiny things that break. The system that stays up, processes correctly, and doesn't lose data wins in the end—even if it's boring, even if it's "old."


Where You Should Be Boring in 2025

Places to stay aggressively boring:

1. Primary database:

Use Postgres, SQL Server, or another mature relational DB unless you have a very specific reason not to.

"We might need to scale like Netflix one day" is not a reason. Netflix's scale problems are not your problems. Your problem is shipping features without losing data or corrupting transactions.

Conductor ran on SQL Server for 20 years, processing $100M+ annually. We never needed horizontal sharding. We never needed a distributed database. We needed good indexes, proper schema design, and sensible queries.
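The "good indexes" point can be sketched with nothing but the standard library. This toy uses Python's built-in `sqlite3` (any mature relational database behaves the same way); the table and column names are illustrative. The fix for a slow lookup is usually an index, not a new database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE payments (id INTEGER PRIMARY KEY, account_id INTEGER, amount_cents INTEGER)"
)
conn.executemany(
    "INSERT INTO payments (account_id, amount_cents) VALUES (?, ?)",
    [(i % 1000, i * 7) for i in range(10_000)],
)

# Without an index, this lookup walks the whole table (plan shows a full scan).
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM payments WHERE account_id = 42"
).fetchall()

# One boring index turns the scan into a seek.
conn.execute("CREATE INDEX idx_payments_account ON payments (account_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM payments WHERE account_id = 42"
).fetchall()
```

The same discipline (measure the plan, add the index, re-measure) covers a surprising share of "we need to migrate for scale" conversations.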

2. Core transactional flows:

The stuff that creates money, moves money, or affects contracts—be allergic to cleverness here.

Use well-understood frameworks and patterns. When something goes wrong at 3 a.m., you want to debug code that works like every tutorial says it should, not some novel architecture that seemed clever six months ago.

3. Auth, security, compliance:

Don't roll your own auth. Don't YOLO PII (Personally Identifiable Information) or PHI (Protected Health Information) into random tools because it's convenient. Use boring, documented, well-audited components.

The downside of getting security wrong is existential. The upside of rolling your own is... pride? Use Auth0, Okta, or whatever your enterprise stack provides.

4. Integration patterns:

Use patterns with decades of track record: queues, idempotent consumers, clear contracts.

Do not glue your business to some "beta" integration platform with no escape hatch. Integrations are where you'll spend 40% of your debugging time. Make them boring.
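The idempotent-consumer pattern mentioned above fits in a few lines. This is a minimal sketch (the table and message shape are illustrative, not any specific queue's API): record each message id in the same transaction as its side effect, so a redelivered message becomes a no-op instead of a double charge.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE processed (message_id TEXT PRIMARY KEY)")
db.execute("CREATE TABLE ledger (message_id TEXT, amount_cents INTEGER)")

def handle(message: dict) -> bool:
    """Apply the message exactly once; return False on duplicate delivery."""
    try:
        with db:  # one transaction: dedupe record and business write commit together
            db.execute(
                "INSERT INTO processed (message_id) VALUES (?)", (message["id"],)
            )
            db.execute(
                "INSERT INTO ledger (message_id, amount_cents) VALUES (?, ?)",
                (message["id"], message["amount_cents"]),
            )
        return True
    except sqlite3.IntegrityError:
        return False  # already processed: redelivery is safely ignored

msg = {"id": "evt-001", "amount_cents": 500}
first = handle(msg)    # processed
second = handle(msg)   # redelivery: no second ledger row
total = db.execute("SELECT SUM(amount_cents) FROM ledger").fetchone()[0]
```

The key design choice is that the dedupe record and the business write share one transaction; checking a cache first and writing afterwards reopens the duplicate window.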

The payoff of boring:

Fewer surprises when regulations change. Fewer panics when staff leaves. Fewer late nights when you need to scale. Boring in these areas buys you freedom everywhere else.


Where You Can Afford to Experiment

If you never experiment, you end up stuck in 2008 forever. Here's where it's safer to play:

1. Internal tools:

Dashboards, internal support tools, scripted helpers. If they break, you get annoyed, not sued. Use that new framework. Try that AI library. Learn what works.

We built internal reporting tools with whatever was newest at the time. Some worked great. Some got replaced in six months. The production system never noticed.

2. Non-core AI helpers:

Summarizing tickets, drafting responses, assisting internal workflows—with a human in the loop and a fallback.

Start with use cases where "wrong" means "inefficient" not "lawsuit." A bad AI suggestion that a human catches and fixes is a learning opportunity. A bad AI decision that hits production automatically is a liability.
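That "human in the loop with a fallback" shape can be sketched directly. Everything here is hypothetical (`draft_with_llm` stands in for any model call): nothing auto-sends, and any failure degrades to a plain template instead of blocking the workflow.

```python
FALLBACK = "Thanks for reaching out; a support agent will reply shortly."

def draft_with_llm(ticket_text: str) -> str:
    # Placeholder for a real API call; assume it can time out or return junk.
    raise TimeoutError("model unavailable")

def suggest_reply(ticket_text: str) -> dict:
    try:
        draft = draft_with_llm(ticket_text)
        source = "llm"
    except Exception:
        draft, source = FALLBACK, "fallback"
    # The draft is queued for a human to review; it is never sent automatically.
    return {"draft": draft, "source": source, "requires_human_review": True}

suggestion = suggest_reply("My invoice total looks wrong.")
```

Because `requires_human_review` is unconditional, a bad draft costs an agent a few seconds, never a customer-facing incident.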

3. Prototypes:

New product ideas, exploratory features, things that may never see scale. This is where you learn whether something is viable before betting the company.

Rule of thumb:

If this thing fails, do customers feel pain or just annoyance?

If it's "mild annoyance for a few people internally," that's where you test new tech.

[Figure: boring vs. experimental zones]


A Sanity Filter for Any New Technology

Before you adopt something new in 2025, run it through these questions:

1. Will this still exist in 5–10 years?

Is it backed by real companies with business models? Is there real adoption outside Twitter and Hacker News? Can you find production case studies from companies that aren't the vendor?

If the only evidence is blog posts from the creators and enthusiasm from early adopters, that's not validation. That's marketing.

2. Can I hire for this without begging?

Are there enough people who know it—or can pick it up fast because it's adjacent to something mature? Can you staff a team in six months?

Exotic technology choices constrain your talent pool. Every constraint has a cost.

3. What's the blast radius if it sucks?

If you decide in 18 months this was a bad idea, how hard is it to unwind? Is it sitting in the core or at the edges?

Technology at the edges is replaceable. Technology in the core is a commitment. Know which one you're making.

4. Does it solve a real problem we actually have?

Or is it a solution in search of a justification? "It would be cool to use X" is not a business case. "X solves Y problem that costs us Z dollars" is a business case.

5. What does this do to ops?

Is deployment simpler or more "bespoke ritual"? Does monitoring become easier or harder? Are there good tools around it?

The fanciest technology in the world is worthless if you can't deploy it reliably and debug it at 3 a.m.

If you can't answer those questions cleanly, you're gambling. Sometimes gambling is fine—but call it what it is: a bet, not a strategy.
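The five questions above reduce to a tiny checklist (illustrative only; the labels are mine). Any unanswered or "no" item means you are making a bet, not a plan:

```python
QUESTIONS = [
    "survives_5_to_10_years",
    "can_hire_for_it",
    "blast_radius_understood",
    "solves_real_problem",
    "ops_burden_acceptable",
]

def adoption_verdict(answers: dict) -> str:
    """'adopt' only if every question gets an honest yes; otherwise name the gaps."""
    misses = [q for q in QUESTIONS if not answers.get(q, False)]
    return "adopt" if not misses else f"gamble ({', '.join(misses)})"

verdict = adoption_verdict({
    "survives_5_to_10_years": True,
    "can_hire_for_it": True,
    "blast_radius_understood": True,
    "solves_real_problem": False,  # "it would be cool to use X" is not a business case
    "ops_burden_acceptable": True,
})
```

Note that a missing answer counts the same as a "no": if nobody on the team can answer the question, that itself is the signal.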


A 2025 Tech Stack You Should Not Be Embarrassed By

Something like this is still totally defensible:

  • Backend: .NET / Java / Node / Rails / Django (pick one you have talent for)
  • Frontend: React / Vue / Svelte (again: pick one, stop chasing every meta-framework)
  • Database: Postgres or SQL Server
  • Infrastructure: Managed cloud (AWS/Azure/GCP), containers if you need them, plain PaaS if you don't
  • Messaging: SQS/RabbitMQ/Kafka used sanely with idempotent consumers
  • AI: External LLM APIs or a managed hosting partner, wrapped behind your own service boundary so you can swap later
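One way to read "wrapped behind your own service boundary" is an interface the rest of the codebase depends on, with each vendor as an adapter. This is a sketch (the vendor classes and method names are invented for illustration); swapping providers becomes one adapter, not a rewrite.

```python
from typing import Protocol

class Summarizer(Protocol):
    """The boundary: application code depends on this, never on a vendor SDK."""
    def summarize(self, text: str) -> str: ...

class VendorASummarizer:
    def summarize(self, text: str) -> str:
        # A real implementation would call vendor A's API here.
        return f"[vendor-a summary] {text[:40]}"

class VendorBSummarizer:
    def summarize(self, text: str) -> str:
        # Drop-in replacement: same interface, different provider behind it.
        return f"[vendor-b summary] {text[:40]}"

def summarize_ticket(ticket: str, backend: Summarizer) -> str:
    return backend.summarize(ticket)

ticket = "Customer reports duplicate charge on invoice 1042"
out_a = summarize_ticket(ticket, VendorASummarizer())
out_b = summarize_ticket(ticket, VendorBSummarizer())
```

In practice the adapter choice lives in configuration, so "swap later" is a deploy, not a migration.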

Is it the coolest? No.

Will it work, be supportable, and not destroy you? Yes.

That's the point.


What Actually Matters in 2025

You don't win because you picked the exact right framework.

You win because:

  • Your data model is sane and your schema reflects your actual business domain
  • Your architecture matches your real use cases, not hypothetical future scale
  • You ship reliably without each release being a prayer
  • You avoid catastrophic decisions like pointless rewrites, gluing your core to immature tools, or letting tech churn drive your roadmap

The rest—framework debates, AI vendor flame wars, infrastructure fashion—is noise.

If you're clear on what your system does, who depends on it, and how it fails, then picking tech in 2025 becomes way less mystical.

It's not "bet on the hottest thing."

It's "pick boring where the stakes are high, experiment where they're not, and ignore 90% of the hype."


Context → Decision → Outcome → Metric

  • Context: 20+ years of technology decisions in healthcare, watching hype cycles come and go, seeing teams chase novelty and teams stay stuck on ancient stacks.
  • Decision: Developed a filter: boring for core systems (database, auth, transactions, integrations), experimental for edges (internal tools, prototypes, non-critical AI helpers).
  • Outcome: Core systems ran for decades without rewrites. Experiments at the edges provided learning without risk. Teams could try new things without betting the company.
  • Metric: Zero forced technology migrations in 20 years. Three rewrites avoided by choosing evolution over revolution. Experimental tools provided 2–3 genuine improvements that made it to production each year.

Anecdote: The Framework That Ate Six Months

In 2016, we evaluated a new frontend framework for an internal dashboard. The pitch was compelling: faster rendering, better developer experience, modern architecture. Everyone was using it.

We decided to try it on a non-critical internal tool first—following the "experiment at the edges" rule.

Six months later, the tool was still half-built. The framework had released two major versions with breaking changes. The documentation lagged behind. The patterns that worked in tutorials didn't work at our scale.

We killed the project and rebuilt it in our boring, established stack. Took three weeks.

The lesson wasn't "never try new things." We'd tried it in a safe place, learned it wasn't ready, and moved on without damaging anything critical.

The lesson was: the evaluation filter works. We'd have lost a year and destabilized production if we'd tried that framework on something that mattered.

Anecdote: The Database That Didn't Need to Change

In 2019, a consultant recommended we migrate our primary database to a distributed system. "You'll hit scaling limits," they said. "Better to migrate now than during a crisis."

We ran the numbers:

  • Our transaction volume had grown 40% in five years
  • Current database was at 30% capacity
  • At current growth, we'd hit limits in... 15 years
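The back-of-the-envelope version of that check fits in a few lines (numbers from the anecdote; the exact horizon shifts a few years depending on whether you assume growth compounds, but either way it lands well past any sane planning window):

```python
import math

utilization = 0.30        # current share of database capacity in use
five_year_growth = 0.40   # 40% volume growth over the last five years

# Equivalent compounded annual growth rate (~7%/yr).
annual_rate = (1 + five_year_growth) ** (1 / 5) - 1

# Years until utilization reaches 100% if that rate continues.
years_to_full = math.log(1 / utilization) / math.log(1 + annual_rate)
```

When the projected crisis is more than a decade out, a $500K migration is solving a problem you may never have.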

The migration would cost $500K+ in engineering time and carry significant risk. The problem it solved wouldn't exist for over a decade.

We declined. Four years later, the database is still at 40% capacity. The distributed system the consultant recommended has been deprecated by its vendor.

Sometimes the boring choice is boring because it works.

Mini Checklist: 2025 Tech Decisions

  • [ ] Core systems (database, auth, transactions) use mature, boring technology
  • [ ] Experiments happen at the edges (internal tools, prototypes, non-critical features)
  • [ ] New technology passes the 5-10 year survival test
  • [ ] Hiring pool exists for any technology you adopt
  • [ ] Blast radius is understood before committing to new tech
  • [ ] Each technology choice solves a real problem you actually have
  • [ ] Ops burden is considered (deployment, monitoring, debugging)
  • [ ] AI is wrapped behind service boundaries for future swapability
  • [ ] Framework churn doesn't drive your roadmap
  • [ ] 90% of hype is consciously ignored