While disaster recovery is potentially one of the lowest-hanging fruits among cloud use cases, it is not quite as straightforward as that suggests.
Now, cloud DR sounds simple enough. You back up or otherwise copy data off to the cloud, and when disaster strikes you spin up servers and run things there until your datacentres are operational again.
But not all scenarios are that simple, and cloud DR is not a universally deployed solution. There seem to be some reasonable concerns over how easy it actually is, centred on networking and security.
Here we look at some of the complexities of cloud disaster recovery and some general guidelines to address them.
The reality is that failing over to the cloud and failing back may not be quite as simple as it sounds.
Servers may, for example, go down on site and fail over to the cloud. But it could be the case that not all servers are affected and fail over. Some may remain working at on-premise locations while others now run in the cloud.
So, connectivity requirements might be split between datacentres and cloud(s), with access needed to some existing and some re-hosted servers, including from remote workers. And that’s just failover. Failing back to a production configuration can be fraught with similar issues.
Cloud DR: Popular but potentially complex
A recent Veeam survey found that 39% of 1,277 IT department respondents’ organisations were configured to use cloud infrastructure at a secondary site.
A further 40% use cloud storage to store data but would restore to an on-site location in case of disaster.
Just under one-fifth (19%) do not use any cloud service as part of their DR strategy.
When Veeam asked how those questioned used the cloud for disaster recovery, the answers showed a divergence between locations and methods of restoring data and recovering servers, with permutations spanning the original location, a cloud, or alternate datacentre locations.
Of those asked (1,007), 40% can mount data in the cloud and run compute from on-site locations. For a quarter (25%), data may reside in the cloud but needs to be brought back on-site to be of use to applications.
Slightly fewer (22%) can recover servers in the cloud, but networking is a manual process. Only 12% can fully recover in the cloud.
When the survey asked how failover and failback were executed, half of those questioned (50%) said they used written scripts to connect resources running remotely. But 34% said they would have to reconfigure users manually during failover/failback.
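As a loose illustration of what such scripting can involve, here is a minimal sketch in Python, assuming AWS is the recovery cloud and the boto3 SDK is available; the health check URL and instance IDs are hypothetical placeholders, not anything taken from the survey.

```python
import urllib.request

import boto3

# Hypothetical identifiers -- replace with your own environment's values
RECOVERY_INSTANCE_IDS = ["i-0abc123def456"]  # pre-staged DR instances in the cloud
PRIMARY_HEALTH_URL = "https://app.example.com/health"  # on-site production endpoint


def primary_is_up(timeout: int = 5) -> bool:
    """Rudimentary health check against the on-site production service."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def fail_over() -> None:
    """Start the pre-staged recovery instances and wait until they are running."""
    ec2 = boto3.client("ec2", region_name="eu-west-1")
    ec2.start_instances(InstanceIds=RECOVERY_INSTANCE_IDS)
    # Wait until the instances report as running before repointing users at them
    waiter = ec2.get_waiter("instance_running")
    waiter.wait(InstanceIds=RECOVERY_INSTANCE_IDS)


if __name__ == "__main__":
    if not primary_is_up():
        fail_over()
```

In practice, scripts like this sit inside whatever orchestration tooling the organisation runs, but the core steps – detect the outage, start pre-staged capacity, wait for it to come up – stay the same.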
Connectivity a huge challenge
The Veeam survey found that more than half (54%) of the 1,007 questioned put network configuration at the top of the challenges in cloud disaster recovery. Also, 47% cited connectivity for users in on-site locations, and 42% for those working remotely, as key challenges.
The types of things that can be a challenge here include repointing external and internal DNS settings, and reconnecting email and remote access. Simply having to reconfigure can be a fairly major task. What’s also important is making sure you have the documentation and access rights – including passwords, for example – to do so, and that these are not locked away in systems inaccessible during an outage. Ensuring all that is prepared for and tested is a key task in disaster recovery planning.
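To make the DNS repointing concrete, the following is a minimal sketch, assuming the external zone happens to be hosted in AWS Route 53; the hosted zone ID, record name and IP address are hypothetical.

```python
import boto3

# Hypothetical values -- substitute your own zone, record and DR endpoint
HOSTED_ZONE_ID = "Z0123456789ABCDEF"
RECORD_NAME = "app.example.com."
DR_SITE_IP = "203.0.113.10"  # public address of the recovered cloud service


def repoint_dns_to_dr() -> None:
    """UPSERT the A record so clients resolve to the cloud DR site."""
    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Comment": "Failover: point production name at cloud DR site",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    # A short TTL keeps the eventual failback fast; TTLs need
                    # planning well before the disaster, not during it
                    "TTL": 60,
                    "ResourceRecords": [{"Value": DR_SITE_IP}],
                },
            }],
        },
    )
```

Note that the credentials needed to make this API call are exactly the sort of access rights that must remain reachable when the primary site is down.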
Cloud DR security and compliance were also big concerns. One-fifth of the respondents to the Veeam survey (20%) said they thought the cloud was not secure enough or that compliance would be an issue. Often, cloud and as-a-service providers are responsible only for system availability and the underlying infrastructure, while protecting and managing business data remains the customer’s responsibility.
The aforementioned DNS setup can also create security weaknesses if, for example, such network details are left unsecured against intruders during the potentially vulnerable phases of failover and restore.
Planning is the key
Disaster recovery failover and restore scenarios can vary significantly. In the mix you can get partial failures and failovers, and partial restores, with a mix of on-site, cloud and remote sites to re-connect.
So, planning for the unknown is a case of brainstorming all the likely results of an outage and its effects on existing and secondary resources, and envisaging the likely topology of the resulting scenarios.
That takes care of the most general level of things. Within that, you will need to map out all the likely issues of access, connectivity and security that will need to be addressed.
Hopefully, much of the preparation required can be automated. It should also be tested in various permutations, and updated to match any changes in the infrastructure.
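As a sketch of what automating part of that testing might look like, the drill below simply confirms that key DR prerequisites are reachable; the hostnames are hypothetical, and the real list of checks would be drawn from your own DR plan.

```python
import socket

# Hypothetical prerequisites drawn from a DR plan -- adjust to your own
CHECKS = {
    "cloud DR endpoint reachable": ("dr.example.com", 443),
    "VPN concentrator reachable": ("vpn.example.com", 443),
}


def tcp_reachable(host: str, port: int, timeout: int = 5) -> bool:
    """Confirm a TCP connection can be opened to host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def run_drill() -> bool:
    """Run every check, report each result, and return True only if all pass."""
    all_passed = True
    for name, (host, port) in CHECKS.items():
        passed = tcp_reachable(host, port)
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
        all_passed = all_passed and passed
    return all_passed


if __name__ == "__main__":
    raise SystemExit(0 if run_drill() else 1)
```

A drill like this can be scheduled to run regularly, so that drift between the plan and the infrastructure shows up long before a real disaster does.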