
Modernization Anti-Patterns

Modernization projects fail in predictable ways. The same mistakes repeat across organizations, technology stacks, and team sizes. Recognizing these anti-patterns before you fall into them is more valuable than any technique for getting out of them afterward.

Each anti-pattern below describes the trap, explains why teams fall into it, and provides the alternative approach.

1. Wrapping, Not Extracting

Putting a new API layer over the legacy system without moving business logic out. The legacy database remains the source of truth, and the new layer becomes a translation facade that must change whenever the legacy system changes.

2. UI-First, API-Later

Building the new frontend before the new backend exists. The frontend couples to legacy APIs, which freezes the legacy API surface and prevents backend modernization.

3. Big Bang Rewrite

Attempting to rewrite the entire system before shipping anything. No production feedback for months or years. Scope creeps. Teams lose morale. The project gets cancelled.

4. Shared Development Database

Old and new systems reading from and writing to the same database during migration. Creates tight coupling, prevents schema evolution, and makes independent deployment impossible.

5. Verbal-Only Architecture

Making migration decisions in meetings without writing them down. Decisions get lost, repeated, or contradicted. New team members have no context.

6. Accuracy Without Baselines

Claiming “95% parity” without defining what 100% means. Self-assessed progress creates false confidence. The remaining “5%” turns out to be 40% of the effort.

7. Migrating Everything

Treating every entity, feature, and module as equally important. Spending the same effort on a rarely-used report as on the core transaction engine.

8. Ignoring the Framework

Treating migration as a language translation problem. The real challenge is extracting business logic from framework coupling, not converting Python syntax to Go syntax.


1. Wrapping, Not Extracting

A team builds a new API that calls the legacy system underneath. The new API has clean endpoints and modern documentation, but every request ultimately hits the legacy database through legacy code.

┌────────────────┐     ┌────────────────┐     ┌────────────────┐
│    New API     │────▶│  Thin Adapter  │────▶│ Legacy System  │
│ (looks modern) │     │ (pass-through) │     │  (still runs)  │
└────────────────┘     └────────────────┘     └────────────────┘
Why teams fall into it:

  • It ships fast — “we have a modern API in 2 weeks”
  • It avoids the hard work of understanding legacy business logic
  • It satisfies management’s desire for visible progress

Why it fails:

  • The legacy system is still the source of truth for all behavior
  • Every legacy change requires a corresponding adapter change
  • Performance is worse (extra network hop, no optimization opportunity)
  • You have not migrated — you have added a layer

Extract business logic into the new system with its own data store. The new system owns the behavior and the data. Use the strangler fig pattern with an anti-corruption layer that is designed to be temporary.
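The extraction can be sketched in a few lines of Python. Everything here is illustrative: the `Invoice` model, the legacy column names, and the `InvoiceAcl` translator are hypothetical stand-ins for the real domain. The point is that legacy naming and encoding conventions stop at the ACL boundary, and the new system's model owns the behavior.

```python
from dataclasses import dataclass
from decimal import Decimal

# Hypothetical legacy row: flat strings straight out of the old database.
legacy_row = {"INV_NO": "A-1001", "AMT_CENTS": "12050", "STAT": "P"}

@dataclass
class Invoice:
    """The new system's domain model; it owns behavior and data."""
    number: str
    amount: Decimal
    paid: bool

class InvoiceAcl:
    """Temporary anti-corruption layer: legacy conventions never
    leak past this class into the new system."""
    _STATUS_PAID = "P"

    def to_domain(self, row: dict) -> Invoice:
        return Invoice(
            number=row["INV_NO"],
            amount=Decimal(row["AMT_CENTS"]) / 100,
            paid=row["STAT"] == self._STATUS_PAID,
        )

invoice = InvoiceAcl().to_domain(legacy_row)
```

Because the ACL is the only code that knows about `INV_NO` or the `"P"` status flag, deleting it at the end of the migration is a one-file change.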


2. UI-First, API-Later

The team builds a new React/Vue/Angular frontend against the legacy API. The frontend looks modern, but it is tightly coupled to legacy API response shapes, error codes, and implicit conventions.

Why teams fall into it:

  • Visible progress: stakeholders can see and click the new UI
  • Frontend developers are often more available than backend specialists
  • “We’ll build the API later” feels like a reasonable plan

Why it fails:

  • The new frontend becomes a constraint on backend migration: you cannot change the API without breaking the UI
  • Legacy API quirks become permanent — the frontend depends on them
  • When the backend migration starts, the frontend needs rewriting anyway
  • Two large migrations running in parallel (backend + frontend adaptation) instead of one

Design the new API first. Build it against the new domain model. The new frontend targets the new API from day one. Use parity testing to verify the new API matches legacy behavior. The old frontend continues working against the old API until the new one is ready.
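A sketch of what that parity testing can look like. The stub endpoints and field names are invented for illustration; the idea is a field-level diff between the two APIs, so mismatches are explicit rather than anecdotal.

```python
def legacy_get_invoice(invoice_id: str) -> dict:
    # Stand-in for a call against the legacy API.
    return {"id": invoice_id, "total": 120.50, "status": "paid"}

def new_get_invoice(invoice_id: str) -> dict:
    # Stand-in for the new API, built against the new domain model.
    return {"id": invoice_id, "total": 120.50, "status": "paid"}

def check_parity(invoice_id: str) -> list:
    """Return field-level mismatches as (field, legacy value, new value)."""
    old, new = legacy_get_invoice(invoice_id), new_get_invoice(invoice_id)
    return [
        (field, old.get(field), new.get(field))
        for field in sorted(set(old) | set(new))
        if old.get(field) != new.get(field)
    ]

mismatches = check_parity("A-1001")  # an empty list means full agreement
```

Run this over a large sweep of real invoice IDs and the mismatch list becomes a concrete work queue instead of a vague parity percentage.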


3. Big Bang Rewrite

“Let’s just rewrite it from scratch. We understand the domain now. It’ll be faster than migrating incrementally.”

The team spends 12-18 months building the new system in isolation. No production traffic. No user feedback. The legacy system continues diverging as business requirements change.

Why teams fall into it:

  • The legacy code is genuinely painful to work with
  • Rewriting feels more satisfying than incremental migration
  • Underestimation of legacy system complexity (it took 20 years to build; it will not take 6 months to replace)
  • The “second system effect” — the belief that the rewrite will avoid all past mistakes

Why it fails:

  • No feedback loop: bugs and misunderstandings are discovered only at launch
  • Business requirements change during the rewrite, so the new system targets a moving goal
  • Team morale degrades: months of work with no production validation
  • The project is cancelled before completion (statistically, most big bang rewrites are)
  • Joel Spolsky documented this pattern in 2000, with Netscape as the canonical example. It keeps happening

Use the strangler fig pattern. Migrate one bounded context at a time. Ship each extraction to production. Get feedback. Adjust. The legacy system shrinks incrementally until nothing remains.
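A minimal routing sketch of the strangler fig idea. The path prefixes are hypothetical, and in practice this decision usually lives in a reverse proxy or API gateway rather than application code; the point is that the set of migrated contexts grows one entry at a time, with everything else still served by legacy.

```python
# Bounded contexts already extracted to the new system.
# Everything else falls through to the legacy system.
MIGRATED_PREFIXES = ("/invoices", "/payments")

def route(path: str) -> str:
    """Decide which system serves a request during the transition."""
    if path.startswith(MIGRATED_PREFIXES):
        return "new-system"
    return "legacy-system"
```

Each extraction ships by adding one prefix to the tuple, which also gives you an instant rollback: remove the prefix and traffic returns to legacy.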


4. Shared Development Database

The old system and new system share the same database during migration. The new system reads legacy tables. The old system reads new tables. Both write to both.

┌────────────────┐           ┌────────────────┐
│ Legacy System  │──────┬───▶│   Shared DB    │
└────────────────┘      │    │                │
                        │    │  old_tables    │
┌────────────────┐      │    │  new_tables    │
│  New System    │──────┘    │  hybrid_views  │
└────────────────┘           └────────────────┘
Why teams fall into it:

  • It avoids the complexity of data synchronization
  • “One database is simpler than two” — in theory
  • Schema migration feels harder than it is

Why it fails:

  • Schema changes in either system break the other
  • You cannot deploy the old and new systems independently
  • Transaction boundaries become confused (old system’s transaction scope may differ from new system’s)
  • Performance tuning for one system may degrade the other
  • Rollback becomes impossible: you cannot separate the data

Each system owns its own data store. Use a synchronization mechanism (CDC, events, sync jobs) to keep them consistent during the transition. Accept eventual consistency for the migration period. This is the dual-write infrastructure described in Strangler Fig.
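A toy sketch of the synchronization idea, with plain dicts standing in for the two data stores and a list standing in for a CDC feed or event stream. Real systems would use a CDC tool or message broker; the shape of the replay loop is the part that carries over.

```python
# Each system owns its own store; a sync job replays legacy change
# events into the new system's store, accepting eventual consistency.
legacy_store = {"A-1001": {"status": "paid"}}
new_store = {}

# Stand-in for a CDC feed or event stream emitted by the legacy side.
change_log = [
    {"op": "upsert", "key": "A-1001", "value": {"status": "paid"}},
    {"op": "upsert", "key": "A-1002", "value": {"status": "draft"}},
    {"op": "delete", "key": "A-1002"},
]

def apply_changes(log, target):
    """Replay change events into a target store, in order."""
    for event in log:
        if event["op"] == "upsert":
            target[event["key"]] = dict(event["value"])  # copy, no shared refs
        elif event["op"] == "delete":
            target.pop(event["key"], None)

apply_changes(change_log, new_store)
```

Because the stores are separate objects, either system can change its schema or be rolled back without touching the other; the sync job is the only coupling point, and it is disposable once cutover completes.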


5. Verbal-Only Architecture

Migration decisions happen in meetings, Slack threads, and hallway conversations. There is no written record of:

  • Why the team chose to extract module X before module Y
  • What the agreed-upon domain boundaries are
  • What the target architecture looks like
  • Which trade-offs were accepted

Why teams fall into it:

  • Writing things down takes time
  • “Everyone was in the meeting”
  • Decisions feel obvious in the moment
  • Documentation is often treated as overhead

Why it fails:

  • New team members have no context
  • The same decisions get revisited (and sometimes reversed) weeks later
  • Contradictory decisions accumulate without anyone noticing
  • When the project lead leaves, institutional knowledge evaporates
  • AI agents cannot act on decisions that exist only in human memory

Document decisions in the spec. ModernizeSpec’s extraction-plan.json records sequencing decisions. migration-state.json tracks current status. domains.json captures boundary decisions. These are machine-readable, version-controlled, and survive team changes.

Architecture Decision Records (ADRs) capture the why behind decisions: what options were considered, what was chosen, and what the trade-offs were.
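A minimal ADR skeleton. The exact template is a team choice, and the module names and reasoning below are purely illustrative:

```
ADR-007: Extract the invoicing context before payments

Status: Accepted
Context: Invoicing has the highest traffic and the fewest inbound
  dependencies; payments depends on invoicing's data model.
Decision: Migrate invoicing in phase 1, payments in phase 2.
Alternatives considered: payments-first (rejected: depends on legacy
  invoice tables); both in parallel (rejected: team capacity).
Consequences: Payments stays on the legacy API one extra quarter.
```

Even a half-page record like this answers the questions that otherwise get re-litigated in every planning meeting.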


6. Accuracy Without Baselines

“We’ve achieved 95% parity on the invoicing module.” But there is no definition of what 100% means. The 95% is self-assessed based on the tests the team chose to write, not measured against a comprehensive baseline.

Why teams fall into it:

  • Defining 100% is hard work (it requires full characterization of legacy behavior)
  • Self-assessment feels productive
  • Stakeholders want progress numbers and the team wants to deliver them
  • “We tested the main scenarios” feels like enough

Why it fails:

  • The 5% gap may contain the most critical edge cases
  • Self-selected tests miss scenarios the team did not think of
  • Confidence is inflated: the team believes they are closer to done than they are
  • Production surprises occur in the unmeasured territory
  • The “last 5%” takes as long as the first 95%

Capture baselines first using characterization tests. Record legacy system outputs for comprehensive inputs before building the new implementation. Define 100% as “all baseline outputs match.” Measure parity against this baseline, not against team intuition.

Use confidence scoring to quantify how much of the baseline is covered.
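A compressed Python sketch of the idea. Here the “legacy system” is a toy tax function with a deliberate quirk, and the baseline is an in-memory dict rather than a recorded file; in a real migration the baseline would be captured once and version-controlled.

```python
# Phase 1: characterize legacy behavior over a broad input sweep.
def legacy_tax(amount_cents: int) -> int:
    # Toy legacy rule with a deliberate quirk: a +1 rounding fudge.
    return 0 if amount_cents == 0 else amount_cents * 8 // 100 + 1

# The baseline defines what "100%" means: every recorded output.
baseline = {n: legacy_tax(n) for n in range(0, 10_000, 7)}

# Phase 2: the new implementation must preserve the quirk to match.
def new_tax(amount_cents: int) -> int:
    return 0 if amount_cents == 0 else amount_cents * 8 // 100 + 1

matches = sum(1 for n, expected in baseline.items() if new_tax(n) == expected)
parity = matches / len(baseline)  # 1.0 means "all baseline outputs match"
```

If `new_tax` "fixed" the off-by-one quirk, the parity score would drop immediately, surfacing a behavioral change that self-selected tests would likely miss.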


7. Migrating Everything

“We have 521 entity types. We need to migrate all 521.” Every module, report, and feature gets the same priority. The team spends equal effort on a rarely-used asset depreciation calculator and the core invoicing engine.

Why teams fall into it:

  • Completeness feels safe
  • Nobody wants to make the hard decision about what to cut
  • “Users might need it” for every feature
  • The legacy system’s feature list becomes the migration requirements document

Why it fails:

  • The 80/20 rule applies: 20% of features handle 80% of business value
  • Migrating everything takes 5-10x longer than migrating what matters
  • Rarely-used features may have no active users
  • Time spent on low-value features delays high-value ones

Prioritize by business value and usage. Use runtime evidence to identify which modules are actually used in production. Use hot path identification to find the 20% that matters.

Priority  Criteria                              Action
P0        High traffic, core business           Migrate in early phases
P1        Medium traffic, standard business     Migrate in mid phases
P2        Low traffic, used monthly/quarterly   Migrate in late phases
P3        Zero traffic in 90+ days              Do not migrate. Challenge the requirement

Record priority assignments in extraction-plan.json phases.
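A sketch of turning runtime evidence into priority assignments. The module names, traffic numbers, and thresholds are all invented; real classification should come from production telemetry, with cutoffs tuned to the system's own traffic distribution.

```python
# Hypothetical 90-day request counts from production telemetry.
traffic_90d = {
    "invoicing": 1_200_000,
    "payments": 300_000,
    "quarterly_report": 40,
    "asset_depreciation": 0,
}

def priority(requests: int) -> str:
    """Map raw 90-day traffic to a migration priority tier."""
    if requests == 0:
        return "P3"  # zero traffic: challenge the requirement
    if requests < 1_000:
        return "P2"  # used monthly/quarterly
    if requests < 1_000_000:
        return "P1"
    return "P0"      # hot path: migrate in early phases

plan = {module: priority(n) for module, n in traffic_90d.items()}
```

The output is exactly what belongs in the extraction plan: a per-module tier backed by evidence, not by the feature list.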


8. Ignoring the Framework

“We’re converting Python to Go. How hard can it be?” The team treats migration as a syntax translation problem. They convert Python functions to Go functions line by line.

Why teams fall into it:

  • Language translation is a concrete, tractable problem
  • AI tools (transpilers, LLMs) make line-by-line translation easy
  • The framework coupling is invisible until you run the translated code and it fails

Why it fails:

Legacy code does not live in isolation. It lives inside a framework that provides:

Framework Service             Impact on Migration
ORM and database abstraction  Every database call uses framework patterns
Permission system             Authorization checks are implicit, not explicit
Hook/event system             Business logic fires implicitly through lifecycle events
Multi-tenancy                 Tenant isolation is handled by the framework, not the code
Caching                       Framework manages cache invalidation transparently
File storage                  File handling uses framework abstraction layers
Background jobs               Task scheduling is framework-managed
Translating a function from Python to Go produces a function with none of these framework services behind it. It compiles, but it does not work.

Map framework dependencies explicitly using codebase analysis. For each function or module:

  1. Identify every framework call (ORM queries, permission checks, hook registrations)
  2. Classify each as: replaceable (can build equivalent), abstractable (introduce an interface), or fundamental (requires framework alternative)
  3. Build the anti-corruption layer before translating business logic
  4. Extract business logic through the ACL into framework-independent code
  5. Then translate to the target language if needed

The extraction is the hard part. The translation is the easy part.
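Steps 2 through 4 can be sketched in Python. The `PermissionGate` interface and the invoice rule are hypothetical, but they show the shape of business logic once a framework's implicit permission check has been pushed behind an explicit boundary.

```python
from typing import Protocol

class PermissionGate(Protocol):
    """The framework's implicit permission check, made explicit (step 2:
    classified as abstractable; step 3: put behind an interface)."""
    def can_submit(self, user_id: str) -> bool: ...

def submit_invoice(user_id: str, amount_cents: int, gate: PermissionGate) -> str:
    """Step 4: business logic with no framework import, ready to translate."""
    if not gate.can_submit(user_id):
        return "denied"
    if amount_cents <= 0:
        return "rejected"
    return "submitted"

class AllowListGate:
    """Test double; in production an adapter here would call the framework."""
    def can_submit(self, user_id: str) -> bool:
        return user_id == "alice"

result = submit_invoice("alice", 500, AllowListGate())
```

Because `submit_invoice` depends only on the interface, step 5 becomes mechanical: the same function and a Go interface port over directly, with the framework adapter rewritten once on the legacy side.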


The Root Cause

All eight anti-patterns share a root cause: premature action without sufficient understanding. The team starts building, converting, or wrapping before deeply understanding the legacy system’s behavior, boundaries, dependencies, and runtime characteristics.

ModernizeSpec exists to ensure that understanding comes first. The specification files — domains.json, complexity.json, extraction-plan.json, parity-tests.json — are the artifacts of understanding. Populating them forces the team to answer hard questions before writing migration code.

The techniques in this section (Codebase Analysis, Domain Decomposition, Runtime Evidence, Parity Testing) are the tools for building that understanding. The anti-patterns are what happens when you skip them.