Every year, as the Atlantic hurricane season approaches, many businesses have a nagging realization that they are at risk from a catastrophic "Black Swan" event. Black Swan events are a constant source of risk in states like Florida, where many communities are subject to disruption from coastal storms. The risk is particularly acute for businesses that depend on the storage of on-line data, since there is a chance their critical data could become lost or corrupted. But the threat from Black Swan events isn't limited to Florida, nor is it limited to large-scale disruptive events like hurricanes.

The black swan theory, or theory of Black Swan events, describes a disruptive event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight. The term is based on an ancient saying that presumed black swans did not exist; the saying was rewritten after black swans were discovered in the wild. Consider the following scenario…
"We tend to think of disasters in terms of the attacks on the World Trade Center, Hurricane Katrina, or other mega events. Sometimes, however, less notable events occur that can have a catastrophic effect on a business. In February 1981, an electrical fire in the basement of the State Office Building in Binghamton, New York, spread throughout the basement of the building, setting fire to a transformer containing over a thousand gallons of toxin-laden oil. Originally thought to be PCBs, the toxins were soon determined to contain dioxin and dibenzofuran, two of the most dangerous chemicals ever created. The fire was smoky and quickly filled the 18-story building with smoke. As the transformer burned, the soot entered the building's ventilation shafts, which quickly spread the toxic soot throughout the building. The building was so badly contaminated that it took 13 years and over $47 million to clean before the building could be reentered or used. Because of the nature of the fire, the building and its contents, including all paper records, computers, and personal effects of the people who worked there, were not recoverable. This type of event would be irrecoverable for many businesses." – Operations Due Diligence, Published by McGraw Hill
What effect would a catastrophic hurricane that affected an entire region, or a localized disruptive event like a fire, have on the operation of your business? Could you survive that kind of interruption or loss? As the dependence on on-line data has grown in virtually every type of business, so has the risk that the loss of that data could disrupt the operation of the business and even result in its complete failure. In response to these threats, the approaches used to mitigate these risks have evolved as the volume of on-line data has continued to grow. Originally, the concept of Disaster Recovery (DR) emerged as a mitigation strategy focused on recovering critical data after a disruptive event, giving the business the ability to restore disrupted IT operations.
Disaster Recovery (DR) involves a set of policies and procedures that enable the restoration of critical business data and allow the IT infrastructure to be restored to a prior state. DR was originally seen as the domain of the IT department, which was given responsibility for mitigating the risk. To minimize that risk, system backups were scheduled frequently, and aggressive DR plans that included server cold-start procedures and data backups were implemented.
The goal was to restore the infrastructure to the last point at which the data had been backed up (at the time, typically to tape). Accepted DR practice at the time allowed the IT system to be rebooted when facility power was finally restored, unless the facility was in a flood zone or the off-site backup storage facility had also been impacted. In either case, the operation of the facility could be disrupted for some period of time, and the data restoration itself was at risk depending on where the backups were stored.
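The core limitation of this backup-centric model can be made concrete with a small sketch. The helper below is purely illustrative (the function name and the nightly 02:00 backup schedule are assumptions, not anything from the source): it finds the most recent backup that predates a disruptive event, and the gap between that backup and the event is the data that would simply be lost.

```python
from datetime import datetime, timedelta

# Hypothetical helper: given the timestamps of past backups and the
# moment of the disruptive event, pick the most recent usable backup
# (the point the infrastructure can be restored to) and compute the
# window of data that would be lost outright.
def last_recovery_point(backup_times, event_time):
    usable = [t for t in backup_times if t <= event_time]
    if not usable:
        return None, None  # no backup predates the event at all
    restore_point = max(usable)
    data_loss_window = event_time - restore_point
    return restore_point, data_loss_window

# Assumed scenario: nightly tape backups at 02:00; a fire strikes
# mid-afternoon on June 3rd.
backups = [datetime(2024, 6, day, 2, 0) for day in (1, 2, 3)]
event = datetime(2024, 6, 3, 14, 30)

point, lost = last_recovery_point(backups, event)
# Everything written in the 12.5 hours since the last backup is gone,
# and that assumes the tapes themselves survived the event.
```

The same arithmetic also shows why off-site storage mattered: if the tapes were co-located with the servers, `usable` is effectively empty and there is no recovery point at all.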
Now let's roll the calendar ahead… As technology evolved, so did Disaster Recovery strategies, which led to new concepts that evolved into the requirements for a Business Continuity solution as a means of mitigating risk. Still seen as the domain of IT, the approach shifted as technology moved toward solutions like shadow servers, distributed data locations, and high-speed bulk data transmission with hyper-connectivity. Data no longer had to be "recovered"; it just had to be connected in distributed locations where it could be remotely accessed. Business Continuity mitigated the risk of data loss and allowed a business to recover much more quickly and efficiently from a Black Swan event because its servers never went completely down.
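The shadow-server idea described above can be sketched in a few lines. This is not a real replication protocol, only a toy model under assumed names (`replicated_write`, `read_after_failure`, the three in-memory "sites"): each record is written to several independent locations, so the total loss of any one site leaves the data intact and immediately readable elsewhere.

```python
# Toy model of distributed data locations: every write is mirrored to
# all sites, so no single-site Black Swan event causes data loss.
def replicated_write(record, sites):
    for store in sites:
        store.append(record)

def read_after_failure(sites, failed_index):
    # Any surviving replica can serve the data without a "recovery" step.
    for i, store in enumerate(sites):
        if i != failed_index:
            return list(store)
    return None  # every site lost; nothing survives

site_a, site_b, site_c = [], [], []
sites = [site_a, site_b, site_c]

replicated_write({"order": 1001}, sites)

# Site A is destroyed in a disruptive event; the record is still there.
survivors = read_after_failure(sites, failed_index=0)
```

The contrast with the backup model is the point: there is no restore window to reason about, because the data was never in only one place.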
Business Continuity originally encompassed the planning and preparation needed to ensure that an organization's IT infrastructure remained intact, enabling the business to recover efficiently to an operational state within a reasonably short period following a Black Swan event. Technology today has evolved toward cloud solutions that put both the data and the applications into remote "cloud" locations, so it would seem the IT responsibility for mitigating the risk of on-line data loss or corruption has been solved. With highly connected, fully distributed solutions, some people feel the need for Business Continuity may be fading in criticality. Nothing could be further from the truth…