If a new product depends on an old platform, the safest default is usually a legacy system integration layer. It gives the product a stable contract and stops legacy latency, brittle workflows, and strange failure modes from bleeding straight into the user experience. Direct integration only wins when the dependency is narrow, low-risk, and cheap to break.
The real issue is not whether the legacy system exposes an API, a database, or a file export. It is whether the new product can afford to inherit the old system's response times, maintenance windows, data quirks, and release constraints. In most customer-facing flows, it cannot.
Choose the integration pattern by use case
Teams get into trouble when they start with tools. An API gateway, iPaaS platform, or event bus does not tell you how the business flow should behave. A customer profile lookup, an invoice sync, and an order submission carry different failure costs, latency budgets, and recovery paths.
Most practical decisions fall into three use cases when connecting a legacy system to a new product.
Use case 1: The new product needs legacy data
This is the cleanest scenario. The new product needs account status, invoice history, pricing, inventory, or contract details, but it does not need to run the legacy workflow itself.
The key choice is direct read-through versus a replicated read store. Direct reads can work when the source is consistently responsive, traffic is modest, and the business can tolerate occasional failures. If the product needs a response in under one second and the legacy source regularly takes two to five seconds, the decision is already made. Do not put that dependency in the live path.
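When a direct read does stay in the live path, it needs a hard latency budget so the legacy response time cannot leak into the page. A minimal sketch of that guard, assuming Python and a hypothetical `read_with_budget` helper (the names and budget value are illustrative, not a prescribed API):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout


def read_with_budget(fetch, budget_s: float):
    """Run a direct legacy read with a hard latency budget.

    Returns the result if it arrives in time, otherwise None so the
    caller can fall back to cached data or show an honest error,
    instead of letting legacy latency bleed into the user experience.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fetch)
    try:
        return future.result(timeout=budget_s)
    except FutureTimeout:
        return None  # budget exceeded: fail fast, do not block the page
    finally:
        # Don't wait for a slow legacy call to finish before returning.
        pool.shutdown(wait=False)


# Usage: a fast source answers, a slow one is cut off at the budget.
fast = read_with_budget(lambda: "invoice-data", budget_s=1.0)
slow = read_with_budget(lambda: time.sleep(0.3) or "late", budget_s=0.05)
```

This is deliberately blunt: the timed-out call still completes in the background, so in a real system the budget guard belongs alongside caching and connection limits, not instead of them.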
A replicated read store is usually the better fit when the source slows down under load, has maintenance windows, or makes every query operationally expensive. It also works well when the business can tolerate slight staleness. In many self-service flows, data that is 1 to 15 minutes old is acceptable if the interface shows freshness clearly.
That freshness marker matters. A field such as source_updated_at helps support teams separate stale data from a live processing issue. Without it, every mismatch turns into a vague incident.
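A minimal version of such a replicated read store, carrying `source_updated_at` and flagging staleness against a freshness budget, could look like the following. The class and method names and the 15-minute default are illustrative assumptions; a real read model would sit on a database, fed by the scheduled extraction job:

```python
import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class ReadResult:
    data: dict
    source_updated_at: float  # epoch seconds: when the legacy source last changed this record
    stale: bool               # True when the record is older than the freshness budget


class ReplicatedReadStore:
    """Serves replicated rows so user traffic never hits the legacy source directly."""

    def __init__(self, freshness_budget_s: float = 900.0):  # 15 minutes
        self.freshness_budget_s = freshness_budget_s
        self._rows: dict = {}

    def apply_extract(self, key: str, row: dict, source_updated_at: float) -> None:
        # Called by the scheduled extraction job, not by live requests.
        self._rows[key] = (row, source_updated_at)

    def read(self, key: str, now: Optional[float] = None) -> Optional[ReadResult]:
        entry = self._rows.get(key)
        if entry is None:
            return None
        row, updated_at = entry
        now = time.time() if now is None else now
        return ReadResult(
            data=row,
            source_updated_at=updated_at,
            stale=(now - updated_at) > self.freshness_budget_s,
        )
```

The point of returning `source_updated_at` and `stale` together is that the interface can show freshness, and support can tell "old data" apart from "broken pipeline" without an incident.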
A useful threshold question is simple: what happens if the legacy source is unavailable for 10 minutes during peak usage? If the answer is blocked self-service, lost revenue, or a support spike, build the read model.
One pattern shows up often in modernization work: teams ask for real-time data because it sounds safer, then discover the business would have accepted near-real-time data with a visible timestamp. They overestimate freshness needs and underestimate how much instability direct reads introduce.
In one retail billing rollout serving roughly 200,000 customer accounts, invoice status was exposed directly from an older finance platform because an API already existed. During end-of-period processing, response times degraded enough to break the portal when usage peaked. The real friction was not code volume. It was rollout delay caused by support escalation and release freezes around finance close. The fix was not more retries. It was a narrow read adapter with scheduled extraction, selective replication, and explicit freshness indicators.
Use case 2: The new product must trigger legacy transactions
This is where integration work becomes operationally serious. Order creation, account activation, shipment release, refund approval, and policy issuance are not just API calls. They involve retries, duplicate prevention, partial failure, and often manual exception handling on the legacy side.
The main decision is synchronous orchestration versus queue-based processing. Keep the flow synchronous only when the business truly needs an immediate answer and the legacy side can return a reliable result inside the product's latency budget. For most customer-facing journeys, that budget is a few seconds at most. If the old platform sometimes waits on a batch job, enters a lock window, or relies on manual review, promising an instant result is a product mistake, not a technical one.
Queue-based processing is usually the better fit when the transaction can take longer than a user should wait, duplicate submissions are expensive, or the old system has throughput bottlenecks. In those cases, the product should acknowledge receipt, issue a tracking identifier, and expose status honestly: submitted, pending, confirmed, failed.
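The shape of that queue front can be sketched in a few lines: acknowledge immediately with a tracking identifier, process later, and report one of the four statuses above honestly. The names here are hypothetical, and a real deployment would use a durable broker rather than an in-memory deque:

```python
import uuid
from collections import deque
from enum import Enum


class Status(str, Enum):
    SUBMITTED = "submitted"
    PENDING = "pending"
    CONFIRMED = "confirmed"
    FAILED = "failed"


class TransactionQueue:
    """Queue front for a legacy transaction: acknowledge now, process later."""

    def __init__(self):
        self._queue = deque()
        self._status = {}

    def submit(self, payload: dict) -> str:
        # The user gets this tracking id back immediately, not a fake "done".
        tracking_id = str(uuid.uuid4())
        self._queue.append((tracking_id, payload))
        self._status[tracking_id] = Status.SUBMITTED
        return tracking_id

    def status(self, tracking_id: str) -> Status:
        return self._status[tracking_id]

    def process_next(self, legacy_call) -> None:
        # One worker-loop step: push a queued item through the legacy system.
        tracking_id, payload = self._queue.popleft()
        self._status[tracking_id] = Status.PENDING
        try:
            ok = legacy_call(payload)
            self._status[tracking_id] = Status.CONFIRMED if ok else Status.FAILED
        except Exception:
            self._status[tracking_id] = Status.FAILED
```

The design choice worth noting is that failure is a first-class status the product can show, not an exception the user never hears about.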
Short version: if the legacy side cannot behave like an online system, stop designing the product as if it can.
The integration layer then owns retries, timeout policy, dead-letter handling, and idempotency. That last part is not optional when a duplicate action can create a second payment or second shipment. Stripe's public documentation on idempotent requests is still one of the clearest references for designing safe retries in systems where downstream failures are normal.
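The core of that idempotency guarantee is small. This is a simplified in-memory sketch in the spirit of Stripe's scheme, not their implementation; a production version would persist keys with an expiry and handle concurrent retries of the same key:

```python
class IdempotentExecutor:
    """Replays the stored result for a repeated key instead of re-running
    the side effect, so a retried request cannot create a second payment
    or a second shipment."""

    def __init__(self):
        self._results = {}

    def execute(self, idempotency_key: str, action):
        if idempotency_key in self._results:
            # Safe replay: return the original outcome, run nothing.
            return self._results[idempotency_key]
        result = action()
        self._results[idempotency_key] = result
        return result
```

The client supplies the key once per logical operation, so a network timeout followed by a retry hits the same key and gets the original result.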
Some transactional flows also need compensating actions rather than rollback. If the new product creates a customer record and reserves stock but fails to complete the final order in the legacy system, the integration layer may need explicit reversal steps. That is middleware behavior. A thin gateway will not save you.
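Those explicit reversal steps are essentially a small saga. A minimal sketch, assuming each forward step registers its own undo action, with completed steps reversed in LIFO order when a later step fails (names are illustrative, not a specific framework's API):

```python
class CompensatingWorkflow:
    """Runs forward steps in order; on failure, undoes completed steps in
    reverse, because the legacy system offers no transactional rollback."""

    def __init__(self):
        self._undo_stack = []

    def run(self, steps):
        # steps: list of (do, undo) callables
        for do, undo in steps:
            try:
                do()
                self._undo_stack.append(undo)
            except Exception:
                # Compensate in reverse order, then surface the failure.
                while self._undo_stack:
                    self._undo_stack.pop()()
                raise
```

In the scenario above, failing at order submission would unwind the stock reservation and then the customer record, leaving the legacy system in a consistent state the support team can reason about.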