Four Mega-trends Disrupting Data Protection (Part 1)

As data grows and becomes more critical to organizations, ensuring its protection has become a business imperative. However, significant transformations across the information technology landscape are exerting tremendous pressure on how traditional data protection services are delivered. In this blog series, I will describe four mega-trends that will disrupt data protection as we know it, and what organizations need to do to prepare for the new realities of the 2020s and beyond.

Trend 1: Data Value

The first trend ushering in disruptive changes to data protection is exponential data growth. IDC predicts that by 2025 global data will soar to 175 zettabytes. Organizations will need ways to protect increasing data volumes consistently, reliably and affordably without impacting application performance or compromising data governance, compliance mandates and security.

But it’s not just the quantity of data that is growing; the value of that data is growing too. Organizations are finding new ways to monetize their data to enhance the customer experience, enter new markets and increase revenue. In short, as organizations undergo digital transformation, their data no longer merely supports the business; it in effect becomes the business itself, making data loss unacceptable.

In fact, data loss events are becoming increasingly costly for organizations of all sizes. According to the 2018 Global Data Protection Index (GDPI) survey, organizations that experienced data loss lost, on average, almost US$1 million in revenue over the preceding 12 months.

Respondents to the survey cited complexity, ballooning costs, and the lack of data protection solutions for newer technologies as their most pressing issues. The mismatch between the growing need to protect data and the challenges organizations face in doing so is a major gap that calls for significant innovation in this space.

Trend 2: Application Transformation

The applications used by organizations have evolved, alongside the infrastructure on which they run. We started with the vertically integrated mainframe, where hardware, software, networking and applications were all provided by a single (blue) vendor. From there we moved to the “open systems” era, in which software, compute, networking and storage were separated into distinct entities connected through standard interfaces.

We have now entered the cloud-native era. Modern applications increasingly follow cloud-native (e.g. “12-factor”) design principles, in which a monolithic application is broken into stateless microservices that interact with each other and with persistent data storage. The code runs in containers, or on demand using Function-as-a-Service (FaaS) platforms. This lets developers focus on “what they want to be done” instead of “how it should be done.” In other words, software design is moving from an imperative to a declarative model.
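To make this concrete, here is a minimal sketch of a stateless FaaS-style handler, written in the style of an AWS Lambda function behind an API gateway. The bucket name, key layout and order fields are hypothetical illustrations, not taken from any particular product. The point is that the function itself holds no state: everything worth protecting lives in the external data store.

```python
# Minimal sketch of a stateless FaaS-style handler (hypothetical example).
# All persistent state lives outside the function in an object store, so
# protecting this service means protecting the code artifact and the data,
# not the ephemeral compute that happens to run it.
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "orders-data"  # hypothetical bucket: this is the durable state to protect


def handler(event, context):
    """Record an incoming order as a JSON object in durable storage."""
    order = json.loads(event["body"])
    key = f"orders/{order['customer_id']}/{order['order_id']}.json"

    # The only state that matters is written to the external store.
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(order))

    return {"statusCode": 201, "body": json.dumps({"stored": key})}
```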

This evolution of enterprise applications changes the way we view the environment. Instead of thinking in terms of “compute, networking, storage,” we can now think in terms of “code/function, data, infrastructure.” This also changes how data protection should be designed: it needs to protect the code and the data, not the underlying storage.
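As a purely illustrative sketch (the class and field names are hypothetical, not drawn from any real product), an application-level protection policy in this view would describe the code artifacts and data sources that make up the application, rather than the disks or volumes beneath them:

```python
from dataclasses import dataclass, field


@dataclass
class ProtectionPolicy:
    """Hypothetical application-level protection policy.

    Describes what to protect -- code artifacts and data sources --
    rather than the storage devices they happen to live on.
    """
    app_name: str
    code_artifacts: list[str] = field(default_factory=list)  # e.g. container image tags
    data_sources: list[str] = field(default_factory=list)    # e.g. object buckets, databases
    schedule: str = "daily"
    retention_days: int = 30


# Example: protect an "orders" service by its images and its data, not its volumes.
orders_policy = ProtectionPolicy(
    app_name="orders",
    code_artifacts=["registry.example.com/orders:1.4.2"],
    data_sources=["s3://orders-data", "postgres://orders-db/orders"],
)
```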

The increasing volume and value of data, combined with the deployment of critical business services on physical, virtual and cloud-native application platforms, are introducing more complexity, risk and uncertainty into the data protection process for organizations of all sizes.

In the second part of this blog series, I will discuss how the other two mega-trends — Distributed Data and Artificial Intelligence/Machine Learning — will introduce still more complexity into the data protection process across edge, core and multi-cloud computing landscapes.