
Your old system is
costing more than
building a new one.

We have seen what happens when teams patch a system that should have been rebuilt years ago. More patches, fewer features, slower hiring, and a growing list of things nobody wants to touch. We help you move out without breaking what is running.

bithost-legacy-audit (illustration): a legacy-api flagged for PHP 5.6, MySQL 5.5, a monolithic architecture, and no tests; a strangler-fig-plan.md covering four phases (Auth extracted with an OAuth2 + JWT zero-downtime cutover, Billing in a 30/70 parallel run, Inventory planned with event sourcing, Reports planned with a CQRS read model before the legacy system is decommissioned); and a live traffic-routing view for billing-service showing a 30/70 canary split, a 0.02% error rate, 142ms p99 latency, and automated rollback if the error rate passes 1%. Overall risk score: medium-high. Migration: 61% complete.
Migration Progress
61%
3 of 5 services moved — zero downtime
System Health
Auth — live on new stack
Billing — parallel run
Inventory — scheduled
Strangler Fig Pattern · Zero-Downtime Migration · PHP 5.6 → PHP 8.3 · Node 10 → Node 25 · Python 2.7 → Python 3.14 · Odoo 9 → Odoo 19 · React x → React 19 · Monolith to Microservices · On-Prem to Cloud · Database Modernization · CI/CD Implementation · API Gateway Migration · Containerization · Tech Debt Remediation · Cloud-Native Rewrite · Security Hardening
"Modernizing a legacy system is not a demolition project. It is more like renovating a house while people are still living in it. You do not knock down walls. You replace them one at a time."

Every service you migrate is another wall replaced without anyone noticing the work. The trick is sequencing. Start with the parts that are causing the most pain, build the new thing alongside the old, switch traffic over gradually, and only then tear out what you replaced. No big bang. No weekend downtime. No crossed fingers.
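The mechanics behind that sequencing can be sketched in a few lines. The snippet below is a minimal illustration rather than a production router: a hypothetical edge proxy keeps a per-route weight and sends that share of requests to the new service, letting everything else fall through to the legacy system. The route names and URLs are made up.

import random

# Hypothetical routing table for a strangler-fig migration: each route carries
# a weight that says what share of traffic goes to the new service.
ROUTES = {
    "/auth":    {"new": "https://auth.new.internal",    "legacy": "https://legacy.internal/auth",    "weight": 1.0},  # fully migrated
    "/billing": {"new": "https://billing.new.internal", "legacy": "https://legacy.internal/billing", "weight": 0.3},  # parallel run, 30/70 split
    "/reports": {"new": None,                           "legacy": "https://legacy.internal/reports", "weight": 0.0},  # not started
}

def pick_backend(path: str) -> str:
    """Return the backend that should handle this request."""
    route = ROUTES.get(path)
    if route is None or route["new"] is None:
        return ROUTES.get(path, {}).get("legacy", "https://legacy.internal")
    # Shift traffic gradually: raise the weight as the new service proves itself,
    # drop it back to 0.0 to roll back instantly.
    return route["new"] if random.random() < route["weight"] else route["legacy"]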

What we hear from teams

The system is not broken.
It just keeps getting harder.

These are the patterns we see when we first talk to an engineering team carrying a legacy system they have outgrown.

01
Cannot hire for the stack anymore

Nobody wants to maintain PHP 5.6 or a system with no tests and no documentation. The engineers who built it have moved on. The new ones spend weeks just understanding what they inherited.

02
Security vulnerabilities that cannot be patched

The framework is past end-of-life. The CVEs keep piling up. You cannot upgrade the language version without breaking something. Every month the exposure window gets wider and the fix gets harder.

03
Scaling hits a wall every time

Traffic spikes and the whole system struggles. You cannot scale just the parts that are under load because everything is coupled together. More hardware buys time but does not fix anything.

04
Fear of touching anything in production

The codebase has sections nobody understands. A change in one place breaks something else entirely. Deployments happen at 2am. Everyone holds their breath. Features take twice as long as they should.

What changes

Life before and
after the migration.

This is not a pitch for the ideal architecture. It is what we have seen change in real teams after a structured modernization programme.

Before
Deployments need a senior engineer present and take 30 to 60 minutes
Every feature estimate has a hidden multiplier for untangling dependencies
CVEs pile up because upgrading the runtime breaks the framework
Scaling means vertical upgrades and expensive dedicated hardware
New engineers take two to three months to become productive
Test coverage is below 20% because the codebase resists testing
After
Deployments are automated, happen multiple times a day, and roll back in under 60 seconds
Services are independently deployable — changes are smaller and safer
Runtime and framework upgrades are routine maintenance, not projects
Services scale horizontally on demand, only the busy parts consume resources
New engineers can ship to a service in their first or second week
Test coverage climbs as services are rewritten with testing as a default
How Bithost can help

We do the work.
You stay in production.

We work alongside your team, not instead of it. Everything below is something Bithost actively runs or implements as part of the engagement.

Legacy System Audit

Before anything moves, we spend time understanding what you actually have. We map dependencies, identify the riskiest parts, surface the CVEs, and produce a plain-language report on where things stand. No assumptions, no generic output.
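As a rough illustration of one thing the audit automates, the sketch below checks a hypothetical service inventory against published runtime end-of-life dates. The inventory and the EOL table are stand-ins; the real audit also covers dependencies, CVEs, and coupling.

from datetime import date

# Hypothetical inventory collected during the audit: service -> (runtime, version).
INVENTORY = {
    "legacy-api":   ("php", "5.6"),
    "auth-service": ("php", "8.3"),
    "reports":      ("python", "2.7"),
}

# Assumed subset of published end-of-life dates for the runtimes above.
END_OF_LIFE = {
    ("php", "5.6"):    date(2018, 12, 31),
    ("python", "2.7"): date(2020, 1, 1),
    ("php", "8.3"):    date(2027, 12, 31),
}

def eol_report(today: date = date.today()) -> list[str]:
    """List services running a runtime that is already past end-of-life."""
    findings = []
    for service, runtime in INVENTORY.items():
        eol = END_OF_LIFE.get(runtime)
        if eol and eol < today:
            findings.append(f"{service}: {runtime[0]} {runtime[1]} reached EOL on {eol.isoformat()}")
    return findings

if __name__ == "__main__":
    for line in eol_report():
        print(line)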

Strangler Fig Strategy

We plan the migration using the strangler fig pattern — new services grow alongside the old system, traffic shifts gradually, and the legacy parts are decommissioned one by one. Nothing gets rewritten all at once.

Phased Migration Plan

We build a sequenced roadmap that prioritises the highest-pain services first, accounts for team capacity, and sets realistic timelines. Each phase has clear exit criteria so everyone knows when it is actually done.

Risk Assessment and Mitigation

We identify what could go wrong at each stage and build rollback procedures before they are needed. Every service migration includes a parallel-run period where both versions are live and we can switch back in under a minute.
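The "switch back in under a minute" part usually comes down to an automated check along these lines. The thresholds and metric names below are illustrative, not the ones we would pick for your system.

from dataclasses import dataclass

@dataclass
class CanaryMetrics:
    error_rate: float      # fraction of requests failing, e.g. 0.0002 for 0.02%
    latency_p99_ms: float

# Illustrative rollback thresholds, similar to the canary config shown earlier.
MAX_ERROR_RATE = 0.01      # roll back if more than 1% of requests fail
MAX_LATENCY_P99_MS = 500.0

def should_roll_back(new: CanaryMetrics, legacy: CanaryMetrics) -> bool:
    """Decide whether traffic should be shifted back to the legacy service."""
    if new.error_rate > MAX_ERROR_RATE:
        return True
    if new.latency_p99_ms > MAX_LATENCY_P99_MS:
        return True
    # Also roll back if the new service is clearly worse than the legacy baseline.
    return new.error_rate > 2 * legacy.error_rate and new.latency_p99_ms > 2 * legacy.latency_p99_ms

# Example: figures like the ones on the audit dashboard keep the canary live.
assert not should_roll_back(CanaryMetrics(0.0002, 142.0), CanaryMetrics(0.0018, 380.0))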

Zero-Downtime Deployment Strategy

We implement blue-green or canary deployment pipelines before the first service migrates. By the time production traffic moves, the deployment process has already been tested dozens of times in staging.
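For blue-green specifically, the cutover logic is roughly what this sketch shows, assuming a hypothetical pair of environments behind a load balancer: deploy to the idle environment, health-check it, then flip the active pointer while keeping the old environment warm for instant rollback.

import urllib.request

# Hypothetical blue-green setup: two identical environments behind one load balancer.
ENVIRONMENTS = {
    "blue":  "https://blue.internal",
    "green": "https://green.internal",
}

def healthy(base_url: str) -> bool:
    """Minimal health check: the environment must answer /healthz with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/healthz", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def cut_over(active: str) -> str:
    """Flip traffic to the idle environment if it passes the health check."""
    idle = "green" if active == "blue" else "blue"
    if not healthy(ENVIRONMENTS[idle]):
        raise RuntimeError(f"{idle} failed its health check; staying on {active}")
    # In a real pipeline this step updates the load balancer or DNS record.
    # The old environment stays untouched, so rollback is just flipping back.
    return idle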

Cloud Migration and Infra Setup

If the move to cloud is part of the plan, we handle the infrastructure side — Kubernetes, Terraform, CI/CD pipelines, environment parity. We set it up right the first time rather than retrofitting later.

Tech Debt Triage and Remediation

Not every piece of legacy code needs a full rewrite. We distinguish between the technical debt that is actively slowing you down and the debt that is stable and can wait. We tackle the first kind and document the second.

Knowledge Transfer Throughout

We document as we go, hold working sessions with your team, and make sure the engineers who will own the new system understand every decision we made. The goal is that we eventually become unnecessary.

How the engagement runs

From audit to
decommissioned legacy.

1
System audit and risk map

We spend one to two weeks reading your codebase, interviewing your engineers, and mapping every dependency and vulnerability. The output is a risk-ranked inventory of what needs to move, in what order, and why.

2
Migration roadmap and tooling setup

We deliver the sequenced migration plan, set up the deployment pipeline, establish the parallel-run infrastructure, and instrument both the old and new systems with the monitoring needed to make safe traffic shifts.

3
Service-by-service migration

We rebuild and migrate one service at a time, running old and new in parallel until the new version proves itself. Traffic moves in increments — 10%, 30%, 70%, then 100%. If something looks wrong we roll back before users notice.
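Those increments are just a loop around the same health checks. A simplified version, with made-up step sizes and soak times, looks like this:

import time

# Illustrative ramp schedule: canary share of traffic and how long to hold each step.
RAMP_STEPS = [(0.10, 3600), (0.30, 3600 * 6), (0.70, 3600 * 24), (1.00, 0)]

def ramp_traffic(set_weight, canary_is_healthy) -> bool:
    """Walk the canary through each traffic step, backing out on the first failure.

    set_weight(weight) applies the split at the edge; canary_is_healthy() reads
    whatever monitoring is in place. Both are passed in so the schedule stays
    independent of the routing and metrics stack.
    """
    for weight, soak_seconds in RAMP_STEPS:
        set_weight(weight)
        time.sleep(soak_seconds)
        if not canary_is_healthy():
            set_weight(0.0)   # instant rollback: all traffic returns to legacy
            return False
    return True               # 100% on the new service, legacy can be retired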

4
Legacy decommission and handover

Once traffic is fully on the new stack and the stability window closes, we decommission the old services, archive what needs to be kept, and hand the system to your team with documentation and a runbook for everything we built.

System Audit — Risk Overview (example client view)
Risk by Category
Security exposure: High
Scalability ceiling: Medium
Hiring difficulty: High
Deployment risk: Medium
Test coverage: 18%

14 CVEs identified. PHP 5.6 reached end-of-life in December 2018. No security patches available. Framework version pinned due to tight coupling with deprecated APIs.

Migration Roadmap — Service Priority
Auth Service: Phase 1 — Next
Billing Service: Phase 2 — Q2
User Profiles: Phase 2 — Q2
Inventory API: Phase 3 — Q3
Reporting: Phase 4 — Q4
Legacy DB: Decommission Q4

Auth migrates first. It has the most CVEs, is the most self-contained service, and unblocking it frees up the framework upgrade for everything downstream. Estimated 6 weeks including parallel run.

Active Migration — Auth Service
New auth service built and tested: Done
Canary deployment pipeline configured: Done
10% traffic routed to new service: Done
Increase canary to 50%: In progress
Full traffic cutover (100%): Week 5
Legacy auth service decommissioned: Week 6

Canary performing well. Error rate 0.02% vs 0.18% on legacy. P99 latency 142ms vs 380ms. Safe to increase traffic weight to 50% this week.

Handover Package
Architecture decision records (ADRs): Ready
Deployment and rollback runbook: Ready
Infrastructure-as-code (Terraform): Ready
Monitoring and alerting guide: In review

Legacy decommissioned. All traffic running on the new stack for 30 days with no incidents. The old system has been archived and the infrastructure costs dropped by 40%.

Business Impact

What the numbers
tend to look like.

These are representative outcomes from modernization engagements in the 50 to 500 person range. Every situation is different and we will give you honest estimates based on your actual system.

Reduction in time spent on deployments and incident response
Increase in feature delivery speed after the first service migrates
CVEs resolved within 90 days of beginning the programme
Average infrastructure cost reduction after cloud migration
Deployment Frequency Before and After
Number of production deployments per month. Modernization typically unlocks continuous delivery within two to three months of starting.
Where Time Goes After Migration
How engineering time shifts once the legacy burden is removed.
Incident Rate Over Migration Timeline
Production incidents per month across a typical 12-month modernization engagement. The parallel-run period occasionally surfaces latent issues early — which is the point.
FAQ

Things people
ask us right away.

Do we have to rewrite everything from scratch?
Not usually. The strangler fig approach means we migrate the system incrementally rather than rewriting it all at once. Some services genuinely need a rewrite — particularly ones with security issues or hard architectural constraints. Others can be refactored, re-platformed, or left alone if they are stable and low-risk. We make that call based on what we find in the audit, not based on a preference for any particular approach.

What happens if something breaks during the migration?
Before any traffic moves to a new service, we set up automated rollback triggers. If the error rate or latency on the new version crosses a threshold, traffic switches back to the legacy version automatically. We also run the two versions in parallel for a period before moving traffic, which surfaces most issues before they affect users. We have never had to do a full rollback at the 100% traffic point — by that stage the new service has already handled real production load.

How long does a migration like this take?
It depends entirely on how many services you have, how coupled they are, and how much of the work your team can absorb alongside product development. A focused engagement on a monolith with three to five bounded contexts typically takes six to twelve months. Larger systems with more external integrations take longer. We scope this precisely after the audit and we do not give estimates before we have seen the codebase.

Do we have to pause feature development while this happens?
No. The migration runs alongside your normal development work. We design each phase so that it does not block the product team. There are sometimes short periods where a specific integration point needs to be frozen while a service migrates, and we plan those windows in advance so product can work around them. The goal is that your users notice nothing and your product team notices very little.

Can you take over a migration that has already stalled?
Yes. Stalled migrations are one of the more common situations we step into. Usually the issue is either that the original plan did not account for the coupling between services, or the team ran out of bandwidth while keeping the product moving. We do an assessment of where things stand, what was promised versus what was delivered, and what a realistic path forward looks like. We have never encountered a migration that was too far gone to continue sensibly.

Do you also handle the move to the cloud?
Yes. A lot of modernization engagements involve moving from on-premise infrastructure to AWS, GCP, or Azure at the same time as the code migration. We handle the infrastructure side — Kubernetes cluster setup, Terraform for infrastructure-as-code, CI/CD pipeline configuration, and environment parity between staging and production. We treat cloud migration as part of the overall programme rather than a separate project that happens in parallel.

How do you decide between refactoring and rewriting?
This is one of the key outputs of the audit. The short version is: refactor when the business logic is sound and the problems are in the infrastructure and framework layer. Rewrite when the business logic itself is tangled, undocumented, and resistant to testing. In practice most systems need a combination — some services rewritten cleanly, others refactored in place. We make the recommendation with a clear rationale for each service, and we are happy to be challenged on it.

Not sure where to start?
Start with the audit.

A free 30-minute call is enough to understand your current system, identify the biggest risks, and tell you whether a structured modernization programme makes sense for your situation.

Book a free system audit call