Grab a cuppa, and join us on a trip down memory lane to look at how technology has affected business continuity.
From the good old days of tape to the rise and adoption of cloud (be it public, private or a mix of both), an understanding of the relationship between technology and business continuity is useful. For those new to the industry, it provides good insight into the current landscape and how it has developed. And for those who have been on all or part of the journey, it's a useful opportunity to reflect and to consider how future technology will further shape business continuity and resilience…
While it would be easy to devote this blog purely to public cloud, that would somewhat miss the point. Very few businesses are 100% public cloud (those of you that are can skip to the end), so as with all good stories, let’s start at the beginning.
Surprisingly, we still support a lot of rehearsals with good old tape. Why? Because even though IT grows over time, many organisations still have systems that are protected in this way. These usually require a physical recovery as they don’t easily lend themselves to virtualisation and the benefits it brings.
In fact, up until the year 2000, virtually all business continuity recoveries were of the physical kind (ooh err) from tape. And in many ways, this was the pivotal year in how recoveries and the technology landscape have changed.
If we rewind to 1981, 19 years before the millennium, we see the introduction of the IBM PC, which opened the world of IT to everyone in the workplace and, from there, to homes and remote working.
In this period, we also see the introduction of many key technologies that paved the way for the modern Internet and workplace.
The Domain Name System (DNS) was introduced so that the Internet became useful to humans, who like names, rather than computers who like numbers…
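At its heart, DNS is just a lookup from a human-friendly name to a machine-friendly number. A toy sketch of that idea (a flat table with illustrative addresses – real DNS is a distributed, hierarchical system, and these names are purely hypothetical):

```python
# Toy illustration of the name-to-number lookup that DNS performs.
# A real resolver queries a hierarchy of name servers; this flat
# table just shows the core idea of names mapping to addresses.
HOSTS = {
    "www.example.com": "93.184.216.34",
    "mail.example.com": "203.0.113.25",
}

def resolve(name: str) -> str:
    """Return the IP address for a hostname, as a resolver would."""
    try:
        return HOSTS[name]
    except KeyError:
        # DNS reports an unknown name as NXDOMAIN ("no such domain")
        raise LookupError(f"NXDOMAIN: {name}")

print(resolve("www.example.com"))  # → 93.184.216.34
```

Humans remember the name on the left; the network routes on the number on the right – DNS simply keeps the two in sync.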
We saw the introduction of the first UK mobile phone, which also promoted keeping fit, being (as it was) the size and weight of a house brick. Dial-up Internet soon followed as did texting, and Mosaic – the first modern web browser.
This naturally led onto the likes of Amazon and Google, VMware, the Exchange mail platform and from humble beginnings, Microsoft Office built on the early work of VisiCalc and WordStar.
And those tapes? Since their introduction in 1951 with a capacity of 224KB, by 1980 they'd reached 400MB, and by 2000 they'd grown to 100GB with the introduction of LTO-1.
Business continuity was all about technology recovery, and things broke with alarming frequency. Unfortunately, many tape recoveries experienced issues and backup failures were common. Tapes failed, hardly anyone verified their backups and it was common for tapes to remain on-site.
Being physically-based, recoveries were also very technical. Trained experts were required who were skilled in both hardware rebuilds and fixes, and soon by necessity, in software rebuilds and fixes too. This was the golden age for the “proper” engineer, you know, the ones who knew their way around an oscilloscope and a soldering iron. They probably even restored steam trains as a hobby…
From the year 2000, the widespread adoption of VMware and virtualisation brought a sea change to the industry, and these changes are still being felt today. Amazon launched its cloud in 2006, followed two years later by both Microsoft and Google.
Backups were also evolving. Tapes were being replaced by disk-based backups, eliminating the major issues with tape – capacity constraints were a thing of the past and verification came as standard. Utopia was in sight! Service tiering then became the norm – servers and applications could now be recovered in related groups. Not only did this streamline recoveries, but data could now be recovered by priority rather than by its position on a tape.
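That shift from tape-order to priority-order recovery can be sketched in a few lines – a minimal illustration with a made-up service catalogue, not any particular vendor's tooling:

```python
# Hypothetical service catalogue: tier 1 is the highest priority
# and gets recovered first, regardless of where its data sat on tape.
services = [
    {"name": "file-share", "tier": 3},
    {"name": "payments-db", "tier": 1},
    {"name": "intranet", "tier": 2},
    {"name": "auth-server", "tier": 1},
]

def recovery_order(services):
    """Return service names in recovery order: lowest tier number first.

    Python's sort is stable, so services within the same tier keep
    their catalogue order and are recovered as a related group.
    """
    return [s["name"] for s in sorted(services, key=lambda s: s["tier"])]

print(recovery_order(services))
# → ['payments-db', 'auth-server', 'intranet', 'file-share']
```

With tape, the payments database might have been the last thing written and therefore the last thing restored; with tiering, the business decides the order.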
Now that backups had been sorted, replication was next. Splitting disks from the servers saw the rise of dedicated mainstream storage systems and these supported native disk mirroring and snapshots.
With virtualisation also going mainstream, it was easier than ever to perform server replication too, either through dedicated products, natively within the hypervisor environment, or by integration with the storage platforms.
Finally, with virtualisation becoming the common server platform, standardised server recoveries were possible, as were automation and orchestration.
The stage was now set for the true adoption of the cloud.
So, job done – business continuity is dead, and everyone lived happily ever after in the cloud…
…or so it may have been.
Unfortunately, those pesky bad actors (not the ones from your favourite soap), had been busy working on cybercrime.
Every blog should have a “fun fact” and here’s mine: cybercrime arguably started in the early 1900s when a magician hijacked a wireless telegraph demonstration to send some rude words… But the first recognised modern cyberattack was in 1988, when the Morris worm infected an estimated 10% of the computers on the ARPAnet, the forerunner of today’s Internet. Until around 2010, cybercrime was mainly focused on hacking government secrets – from espionage and civil rights to meeting ET.
This all changed around 2010 when Stuxnet, a malicious computer worm, was revealed. Suddenly the cyber landscape had changed, and business became a prime focus. While the press still reported on government-focused stories, cybercrime had been industrialised. An earlier successful email attack on NASA in 2006 had already proved that anyone was vulnerable.
Zero-day attacks and botnets suddenly became a reality, hitting the mainstream in 2017 with the worldwide WannaCry ransomware attack, which affected over 150 countries and 200,000 computer systems and cost billions to fix. In the UK, more than 70,000 devices were affected.
The NHS was surprised by the impact on non-traditional IT systems, affecting MRI, theatre and blood-storage systems, to name a few. Outside the UK, it’s estimated to have cost Merck between $600M and $800M, and FedEx around $300M.
Direct monetary costs are easy to quantify, but indirect people, reputation and productivity costs are incalculable. Cybercrime is now big business for both the good guys and the bad guys. In less than ten years, it has evolved to become a normal, daily business threat.
In response to the cybercrime threat, Daisy introduced Safe Haven, allowing for clean recoveries and the use of services in a safe, secure environment.
We’ve also seen an uptake in self-recovery, rather than Daisy-assisted recoveries. This has meant that rehearsals can be used by IT to enhance business value and reduce service risk, allowing patching, training, rehearsing changes, and so on, to be performed in a sandbox environment.
The rise of public cloud and “as-a-Service” has meant that business IT is now a web of on-premise, hosted and public cloud options.
Rehearsals are now more scenario-based than simply testing the recovery of systems, with end-to-end testing now the norm – from the work area, through applications, to the data centre and cloud. They’re also more complex, involving multiple services, organisations, teams and locations.
There are now more points of failure than ever before, and IT is no longer a single chain, but a complex web of integrated services.
While technology may be all-pervasive, few business continuity providers are multidisciplinary, covering workplace, networking, services, systems, applications, cloud, InfoSec, and everything-as-a-service.
So, what does the future hold?
The reality is that many of the threats will still exist as they always have done.
We’re just starting to see the impact of reported cybercrime, although many incidents are currently misreported. Cyber breaches are creating a resurgence in the need for more traditional business continuity services like ship-to-site (relocatable) recovery, and calling for robust, tried-and-tested backup and recovery methods. It’s also created a new requirement for work area recovery (WAR) services to deliver a clean recovery environment for users and technology, separate from the live estate in every way.
Unfortunately, the risk of terrorism is also now ever-present, with the official UK threat level still ‘Substantial’ (an attack is likely). Sadly, we’ve seen this rise to ‘Critical’ on two occasions – both in 2017, following the Manchester Arena and Parsons Green attacks. When such occurrences happen, many businesses fall into what become “exclusion zones”, meaning access to office sites is strictly prohibited for unspecified amounts of time. This has created a greater need for recovery centres in order to minimise disruption to business.
Communications failures for businesses are increasing, especially with the reliance on access to the Internet, the cloud and the rise of the Internet of Things (IoT). Now that virtually everything can be connected to the Internet – vehicles, toasters, fridges and the more traditional CCTV – the spread of cybercrime outside the IT arena is inevitable.
The ongoing power and environmental risks will be ever-present. More variable weather patterns also seem to be putting an increasing load on power distribution systems, which are themselves increasingly joining the connected world.
Quick off the mark
The reality now, though, is that a proven business continuity plan can be activated and be effective within hours (not days, as it was in the early years), and the reluctance to displace personnel to a recovery site is significantly reduced in this era of 24x7x365 everything.
The need for secure, isolated recovery is going to rise. Human beings are sociable creatures; we still see a strong demand and need for work area recovery. As phones continue to evolve into yet another generation of smart devices, I think the future of business continuity for staff will evolve into a recovery in your pocket, with smart devices powering the equivalent of today’s desktop phones and screens.
And the cloud? Well, that is already fragmenting into traditional platform services for your servers, applications such as Microsoft 365 and business services such as Salesforce. It will continue to evolve, as will the associated threats and the protection against them. Sadly, the misconception that the cloud is “safe” is still widespread, and to organisations downgrading their business continuity response plans because of it, I can only say, “there may be trouble ahead!”
About George Wignall
George has worked in IT for more than 30 years, both as a customer using recovery services and in the industry providing them. Heading up a team of recovery specialists, George has overall responsibility for the platforms and infrastructure underpinning Daisy’s multiple-award-winning availability services, including backup, replication, recovery, archiving, storage, cloud and our Shadow-Planner software.