Ransomware is now the most financially damaging category of cyber incident for manufacturing, healthcare and critical infrastructure organisations. The average recovery time from a ransomware attack without an adequate DR plan is 23 days. With one that actually works — built around the specific requirements of ransomware recovery, not generic disaster recovery — it is three to five days. This article sets out the specific DR plan capabilities that separate the organisations that recover quickly from those that face weeks or months of operational disruption.
Why Ransomware DR Is Different From Traditional DR
Most disaster recovery plans are built for infrastructure failures: server crashes, network outages, natural disasters, power failures. These events have clear boundaries. You know what has failed, you know what has not, and your recovery process involves restoring the failed component from known-good backups or failover systems.
Ransomware attacks are fundamentally different. The adversary has been inside your environment — often for weeks or months before encryption occurs. Your backups may be compromised. Your failover systems may be infected. Your trusted recovery tools may have been modified. The boundaries of the incident are unknown, and establishing them is itself a complex technical exercise that takes time you do not have while your operations are offline.
A traditional DR plan that does not account for these specific characteristics of ransomware will fail in a ransomware incident. The organisations that recover in three to five days have plans — and tested capabilities — that specifically address the ransomware scenario. The organisations that take 23 days or more are running traditional DR plans against a threat model they were not designed for.
The Eight DR Plan Gaps That Ransomware Exploits
Across ransomware incident response engagements, the same DR plan gaps appear repeatedly as the primary causes of extended recovery timelines:
- Backups are not immutable. Network-accessible backups are encrypted alongside production data in most sophisticated ransomware attacks. If your backups are on network shares accessible from the same credential set as your production environment, they are not protected. Immutable backups — stored in architectures that cannot be modified or deleted even by administrative credentials — are the foundation of ransomware recovery. If you do not have them, everything else is secondary.
- Recovery time has never been tested at scale. A backup that takes 72 hours to restore is not a usable DR capability in a ransomware scenario where every hour of downtime costs tens of thousands of pounds in lost production. Backup restoration speed — from immutable storage, at the volume required to restore critical systems — must be tested and confirmed before an incident, not discovered during one.
- Clean room recovery environments do not exist. Recovering into your existing environment during a ransomware incident risks reinfection. Organisations that recover fastest have pre-planned, pre-provisioned clean room environments — isolated networks, fresh builds, clean credential sets — that can receive recovered data and run critical operations while the compromised environment is investigated and rebuilt.
- The compromise boundary is assumed, not established. Most DR plans assume you know what has been affected. In ransomware incidents, you do not. Your DR plan must include a triage procedure for establishing the compromise boundary before recovery begins — which systems are clean, which are suspected, which are confirmed compromised. Without this, you risk recovering into, or from, compromised systems.
- Credential sets for recovery are not pre-established or protected. If your recovery credentials are stored in systems that were compromised in the attack, you cannot trust them. Break-glass credentials — pre-established, separately stored, independently accessible — for all critical recovery actions must exist before the incident. They are routinely absent from DR plans.
- OT and industrial systems are not in scope. Manufacturing and energy organisations frequently have IT DR plans that do not extend to operational technology — PLC systems, SCADA networks, industrial control systems. Ransomware is increasingly targeting OT environments specifically because recovery timelines are longer and operational pressure to pay is higher. Your DR plan must cover OT recovery, not just IT.
- Communication plans assume infrastructure is available. If your communication plan depends on email, internal messaging platforms or phone systems that run on the compromised infrastructure, it will fail exactly when you need it most. Out-of-band communication — mobile phones, pre-established external communication platforms — must be specified and tested.
- Third-party dependencies are not mapped. Most organisations cannot operate for more than 48 hours without critical third-party systems — cloud services, ERP platforms, supply chain portals. If those systems or your connectivity to them is affected, your DR plan must account for it. Third-party dependency mapping and alternative operating procedures for key dependencies are consistently absent from DR plans tested against the ransomware scenario.
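The restore-speed gap above can be made concrete with simple arithmetic: restore time is data volume divided by sustained restore throughput, and either it fits inside your recovery time objective or it does not. A minimal sketch, in Python, assuming illustrative figures (the 40 TB volume, 200 MB/s throughput and 24-hour RTO are hypothetical, not benchmarks):

```python
def restore_time_hours(data_tb: float, throughput_mb_s: float) -> float:
    """Estimated wall-clock hours to restore data_tb terabytes of data
    at a sustained restore throughput of throughput_mb_s MB/s."""
    total_mb = data_tb * 1_000_000  # decimal TB -> MB
    return total_mb / throughput_mb_s / 3600

def meets_rto(data_tb: float, throughput_mb_s: float, rto_hours: float) -> bool:
    """True if a full restore would complete within the recovery time objective."""
    return restore_time_hours(data_tb, throughput_mb_s) <= rto_hours

# Illustrative scenario: 40 TB of critical systems, 200 MB/s sustained
# restore from immutable storage, against a 24-hour RTO.
hours = restore_time_hours(40, 200)   # roughly 55.6 hours
feasible = meets_rto(40, 200, 24)     # False: the RTO is missed by more than a day
```

The point of running this calculation before an incident is that the throughput figure must come from an actual at-scale restore test, not from the backup vendor's datasheet.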
Paying the ransom is not a recovery strategy. The average organisation that paid in 2024–2025 still experienced 19 days of operational disruption. Even when a decryption key is provided, decrypting at scale is slow and often incomplete. Restoration from immutable storage is faster, but only if you have it.
Building a Ransomware-Ready DR Plan — Six Requirements
- Immutable backup architecture. Implement a 3-2-1-1 backup strategy: three copies of your data, on two different media types, with one copy offsite and one copy immutable and air-gapped or offline. The immutable copy must be genuinely inaccessible from your production environment — not just a different directory on the same network. Test restoration from the immutable copy quarterly, at scale, measuring actual recovery time against your recovery time objectives.
- Clean room recovery environment specification. Document and pre-provision your clean room recovery environment: isolated network segment, fresh operating system builds, clean credential set, pre-installed recovery tools. The clean room should be buildable within four hours of incident declaration — not designed from scratch after the attack has occurred. For critical manufacturers, a cloud-based clean room that can be activated on demand is now a standard capability.
- Compromise boundary triage procedure. Include in your DR plan a specific triage procedure for establishing the ransomware compromise boundary. This should specify: which systems to isolate immediately on incident declaration, how to assess systems for compromise indicators before bringing them into recovery scope, and who has authority to declare a system clean and suitable for inclusion in the recovery environment.
- Pre-established break-glass credential management. Maintain a separate, physically secured credential store — not accessible from your network — containing break-glass credentials for all critical recovery actions: domain administration, backup system access, clean room environment provisioning, and key third-party system access. Review and update these credentials quarterly. Test access annually.
- OT-specific recovery procedures. For manufacturing and industrial organisations, develop OT-specific recovery procedures that account for the differences between IT and OT recovery: longer validation requirements before returning control systems to service, safety interlocks that must be verified before equipment restarts, and vendor support requirements for specialised industrial systems. OT recovery timelines are typically three to five times longer than IT recovery timelines for equivalent scope.
- Ransomware-specific tabletop exercises. Conduct at minimum one ransomware-specific tabletop exercise annually that walks through the full scenario: initial detection, isolation, triage, clean room activation, recovery sequencing, third-party notification, regulatory notification, and communication management. Generic DR exercises do not test the ransomware-specific elements of your plan. The tabletop should identify gaps — that is its purpose.
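The 3-2-1-1 rule in the first requirement above can be expressed as an auditable checklist. A minimal sketch, assuming a simplified backup-copy model (the `BackupCopy` fields and the `network_accessible` flag are illustrative, not a standard schema):

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str
    media: str                 # e.g. "disk", "tape", "object-storage"
    offsite: bool
    immutable: bool
    network_accessible: bool   # reachable with production credentials

def audit_3_2_1_1(copies: list[BackupCopy]) -> list[str]:
    """Return the 3-2-1-1 rules this backup set fails; empty means compliant."""
    failures = []
    if len(copies) < 3:
        failures.append("fewer than 3 copies")
    if len({c.media for c in copies}) < 2:
        failures.append("fewer than 2 media types")
    if not any(c.offsite for c in copies):
        failures.append("no offsite copy")
    # The immutable copy only counts if production credentials cannot reach it.
    if not any(c.immutable and not c.network_accessible for c in copies):
        failures.append("no immutable copy isolated from production credentials")
    return failures
```

Note the last check: an "immutable" copy that sits on a network share reachable with production credentials fails the audit, which is exactly the gap described in the first section.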
What Separates Fast Recovery From Extended Disruption
In early 2025, two mid-size European manufacturers were hit by ransomware attacks from the same threat actor group within three weeks of each other. Both had ISO 22301 business continuity certifications. Both had tested DR plans. One was back to full operational capacity in four days. The other was partially operational after six weeks and did not achieve full recovery for eleven weeks.
The difference was three specific capabilities. The first organisation had immutable, air-gapped backups that restored critical systems in under eight hours. It had a pre-provisioned clean room environment in Azure that was activated within two hours of incident declaration. And it had a tested out-of-band communication plan that kept management, customers and suppliers informed throughout. The second organisation had network-accessible backups that were encrypted in the attack. Its clean room environment existed on paper but had never been provisioned or tested. Its communication plan relied on internal email, which was unavailable for the first four days of the incident.
ISO 22301 certification confirmed that both organisations had business continuity management systems. It did not confirm that either had specifically tested ransomware recovery capabilities. The distinction between the two outcomes was not certification — it was capability.
Three Mistakes That Extend Recovery Timelines
- Isolating too slowly. Every minute of delay between ransomware detection and network isolation allows lateral movement to additional systems. The first action in a ransomware incident is isolation — not investigation, not notification, not decision-making about whether the event is real. Your DR plan should specify automatic isolation triggers and authority to isolate without escalation approval for the initial response phase.
- Recovering before the threat is contained. Organisations that begin recovery before the initial access vector has been identified and closed face reinfection during recovery — sometimes within hours of completing the restoration. Your DR plan must sequence threat containment before recovery begins, not in parallel with it.
- Not notifying regulators and insurers at the correct time. Ransomware attacks frequently involve data exfiltration before encryption — making them notifiable data breaches under GDPR (72-hour notification to ICO/DPA), NIS2 (24-hour early warning for essential entities), and sector-specific requirements. Delayed notification creates additional regulatory exposure on top of the incident itself. Your DR plan must include notification timing triggers and responsibilities.
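The notification timing triggers in the last point above are simple enough to encode directly: each regime defines a fixed window counted from detection. A minimal sketch, covering only the two windows named in this article (GDPR and NIS2; real plans will have additional sector-specific entries):

```python
from datetime import datetime, timedelta

# Notification windows from the regimes named above, counted from detection.
DEADLINES = {
    "GDPR breach notification (ICO/DPA)": timedelta(hours=72),
    "NIS2 early warning (essential entities)": timedelta(hours=24),
}

def notification_deadlines(detected_at: datetime) -> dict[str, datetime]:
    """Absolute notification deadlines for an incident detected at detected_at."""
    return {name: detected_at + window for name, window in DEADLINES.items()}
```

Wiring this into the incident declaration step means the DR plan produces concrete timestamps, and named owners for each, in the first minutes of the response rather than during a later scramble.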
How AjaCertX Helps
AjaCertX delivers ransomware readiness assessments, DR plan design and business continuity programme development for manufacturing, technology, healthcare and critical infrastructure organisations. Our Resilience and Continuity practice combines technical DR capability assessment with ISO 22301 and NIS2 compliance expertise.
- Ransomware readiness assessment against the eight DR plan gaps identified above
- DR plan design and documentation — ransomware scenario specifically included
- Tabletop exercise design and facilitation — ransomware and other threat scenarios
- ISO 22301 Business Continuity Management System implementation and certification support
- OT/ICS-specific DR planning for manufacturing and industrial organisations
- NIS2 compliance assessment for essential and important entities
- Cyber insurance policy alignment — ensuring DR capabilities meet insurer requirements
Resilience & Continuity specialists. Assessment and proposal within 48 hours.
Conclusion
Ransomware readiness is not a cyber security question alone — it is a business continuity question. The organisations that recover in days rather than weeks are not the ones with the most sophisticated detection technology. They are the ones with immutable backups they have actually tested, clean room environments they have actually built, and DR plans that specifically account for the characteristics of ransomware rather than treating it as a generic disaster.
Every week without these capabilities is a week of exposure that a single incident can crystallise into weeks or months of operational disruption. The cost of building the capability is a fraction of the cost of the recovery. The organisations that understand this build it before the incident. The rest discover the gap during it.