Looking back on 2017: A turbulent year – but also a year of IT pragmatism

Inevitably, as a calendar year wraps up, one reflects back on the year past and dreams of the year to come. I’ve got a couple of posts I want to get out before I curl up by the fireplace with a cup of eggnog, a book, and my family.

This first one is a look back, and then the next few will be a look forward.

Year in, year out, one thing remains constant: hyperbole and irrational exuberance. I think this is a natural part of human nature. We tend to like black and white and struggle with nuance and transition. We often pay too much attention to people who make statements that lack nuance.

There is no more current example of this than the rise of the public cloud.

When public clouds first started to gain traction years ago, prognosticators declared that each and every application workload would be moving into the cloud. They were wrong.

Don’t misunderstand me. Anyone who attended or watched AWS re:Invent 2017, or who looks at the growth rates of Azure, SFDC, and other SaaS options, knows that the public cloud is massive and is experiencing massive growth.

Public clouds (IaaS, PaaS, CaaS, SaaS, and serverless models alike) are here to stay and are a critical part of every customer’s ecosystem.

But in 2017, I know from thousands of customer conversations that we’ve evolved our thinking. We’ve moved to a more pragmatic point of view.

FACT: There are whole classes of workloads that, for economic, governance, and data gravity reasons, are not ideally suited to run in a public cloud. (Data gravity: compute tends to co-locate with data, because moving data is generally hard – as if it had “mass.”)
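
To make the data gravity point concrete with some back-of-the-envelope arithmetic (the dataset size, link speed, and utilization factor below are purely illustrative assumptions, not measurements):

```python
# Rough "data gravity" math: how long does it take to move a large dataset
# over a WAN link? All numbers are illustrative assumptions.

def transfer_days(dataset_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Days needed to move dataset_tb terabytes over a link_gbps link,
    assuming the link sustains the given fraction of its rated speed."""
    bits = dataset_tb * 8e12                      # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86_400                       # seconds -> days

print(f"{transfer_days(500, 1):.0f} days")        # 500 TB over 1 Gbps: ~66 days
print(f"{transfer_days(500, 10):.1f} days")       # 500 TB over 10 Gbps: ~6.6 days
```

At petabyte scale those numbers stretch into months – which is why the compute so often moves to the data rather than the other way around.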

What public clouds did show us is that there is a better way to build IT infrastructure – and they reinforced that standardization, simplification, and software-defined approaches are the foundation on which clouds are built. The public clouds shine a bright light on the fact that complexity and variation are the opposite of agility – which flows from simplification, standardization, and automation (which in turn flow from software-defined everything).

From where I sit, I think a lot about 2017 through the lens of our part of that movement. We’ve taken those lessons to heart by developing a wide range of converged and hyper-converged infrastructure (HCI) solutions that effectively turn the local data center into a private cloud with hybrid cloud capabilities. Instead of spending months configuring and integrating IT infrastructure, the pre-integrated systems from Dell EMC enable the internal IT organization to function as an internal cloud service provider.

Note: this is one of many examples of the long-running shift from “IT is something you construct” to “IT is something you consume.”

If I think about the dialogs I’ve had with hundreds of customers through 2017, many clearly discovered that for certain workloads, there is an economic imperative to make a real private cloud part of a multi-cloud and hybrid cloud strategy.

FACT: It is much less expensive to deploy long-running workloads – particularly those that have large amounts of data and are not naturally cloud-native app stacks – in a private cloud in a datacenter or a colo than it is to host them in a public cloud… but only if you can have a stack that is simple enough and standard enough that it doesn’t look/feel like the way IT has always been done, and instead follows “cloud-like” tech stacks and operational models. This economic fact is rooted in simple “rent, lease, own” economics in action.
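
Here’s that rent-vs-own arithmetic in sketch form. Every number below (instance rate, hardware cost, lifespan, operating overhead) is a hypothetical placeholder, not real pricing from any vendor – the point is the shape of the math, not the figures:

```python
# "Rent, lease, own" economics for a steady-state, always-on workload.
# All dollar figures and lifespans are hypothetical placeholders.

HOURS_PER_MONTH = 730

def public_cloud_monthly(instances: int, rate_per_hour: float) -> float:
    """Rent: pay the hourly rate for every hour the workload runs."""
    return instances * rate_per_hour * HOURS_PER_MONTH

def private_cloud_monthly(capex: float, lifespan_months: int,
                          opex_per_month: float) -> float:
    """Own: amortize the hardware over its lifespan, then add
    power/space/people overhead."""
    return capex / lifespan_months + opex_per_month

rent = public_cloud_monthly(instances=20, rate_per_hour=0.40)
own = private_cloud_monthly(capex=150_000, lifespan_months=48,
                            opex_per_month=1_500)
print(f"rent: ${rent:,.0f}/month vs. own: ${own:,.0f}/month")
# rent: $5,840/month vs. own: $4,625/month
```

Flip the assumptions – a workload that runs for two weeks and disappears, or one whose scale is unknowable – and renting wins just as decisively. That’s the pragmatism.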

The result is an era of newfound pragmatism in the IT circles I find myself in. Rather than assume everything is going to be automatically deployed on a public cloud, the customers I talk to now simply view the local data center as one choice among many clouds. Furthermore, just because an application workload began life in one cloud does not mean it will spend its entire life there. This is MOST true for cloud-native applications, but through containerization it is more true than ever for traditional applications as well.

  • There are many cases of application workloads created in public clouds being migrated to an on-premises environment to contain costs.
  • At the same time, there are still plenty of workloads, such as legacy applications or databases not used often, that can be lifted and shifted into a public cloud as part of an effort to reduce cost or make room for additional applications deployed in the local data center.
  • And of course – the public IaaS/PaaS/CaaS cloud platforms play a critical role when you need something for hours or days (not months/years), or for workloads that have unknown scaling needs.

The need to not only be a public cloud consumer, but also to function as a cloud service provider is what’s driving so many organizations to modernize their data centers.

And if they’re making this IT transformation, they’re certainly looking at HCI as the “foundation” layer for their cloud models. International Data Corp. (IDC) recently reported that hyper-converged system sales grew 48.5% year over year during the second quarter of 2017, generating $763.4 million in sales – 24.2% of the total converged systems market. The combined integrated infrastructure and certified reference systems segment was valued at $1.56 billion in the second quarter alone, and Dell EMC is the largest supplier in that combined segment.

The good news is that all these fierce debates about enabling technologies seem to be taking less time to play out. The next “wave of pragmatism” is around container orchestration/cluster management.

Earlier in the year, container manager/cluster manager debates raged.

Now it’s clear that Kubernetes is the standard around which everyone is rallying. Kubernetes can be, will be, and IS being deployed on top of virtual machines, public clouds, and bare-metal servers.
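
That substrate-independence is the whole point. Here’s a minimal sketch using the official Kubernetes Python client (the deployment name, image, and replica count are illustrative): the code is identical whether the kubeconfig it loads points at a cluster running on VMs, a managed public-cloud service, or bare metal.

```python
# Minimal sketch: declare a Deployment via the official Kubernetes Python
# client. Nothing here knows (or cares) what the cluster runs on.
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),  # illustrative name
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.13")]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```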

Another debate which seems to have burnt furiously – but has now burnt out and entered its own “fierce pragmatism” phase – is “kernel-mode VMs” vs. “containers.” This was always a silly debate. Who cares? Yes, in some cases, container/cluster managers are deployed on Linux OSes on bare metal. In most cases, however, Kubernetes (and more generally, containers/cluster managers) is deployed on top of kernel-mode hypervisors to isolate applications in a way that not only provides better security, but also prevents “noisy neighbor” applications from consuming all the available resources. Oh, and that darn intersection of hardware and software (which is always there). Again, pragmatism has won the day as IT organizations come to realize that containers and virtual machines complement each other.
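
The two layers bring complementary knobs: containers get cgroup-enforced resource ceilings of their own, layered on top of the hypervisor’s harder isolation boundary underneath. A minimal sketch using the Docker SDK for Python (the image and limit values are illustrative):

```python
# Cap a container's memory and CPU so a "noisy neighbor" can't starve
# everything else sharing the host (or the VM it runs inside).
import docker

d = docker.from_env()
container = d.containers.run(
    "redis:5",                 # illustrative image
    detach=True,
    mem_limit="512m",          # hard memory ceiling
    nano_cpus=1_000_000_000,   # at most 1 CPU's worth of cycles
)
print(container.name, container.status)
```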

IT leaders are starting to realize that the best way to approach IT is to start at the highest level of abstraction possible. If a simple process can be accomplished using a software-as-a-service (SaaS) application, chances are that’s going to be the simplest way of achieving the goal.

At the other extreme, if the process involves a high amount of differentiated business value, then chances are high that the organization should build a custom application. What IT organizations should not waste their time and resources on in this day and age is stitching together disparate pieces of infrastructure to support those applications. It makes little sense for IT organizations to reinvent the wheel when vendors like us spend tens of thousands of work hours validating and optimizing complete IT systems for repeatable use.

That’s why infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) environments have become so widely employed.

I’ll say it again: IT is increasingly something consumed, not something constructed. The effort of invention, innovation and unique differentiation at each customer is accelerating its move into the application domain.

Every minute and dollar spent on infrastructure is time and money diverted away from applications and new services. Organizations today differentiate themselves on the quality of the digital experiences they enable for customers. The amount of differentiated business value that can be generated by manually optimizing IT infrastructure is minimal at best. As the business becomes more dependent on IT, there’s a growing appreciation for accelerating outcomes. No business leader particularly cares whether an IT administrator can wring an extra 10 percent of utilization out of a server or storage system. They want confidence in an IT organization that can respond adroitly to changing business conditions whenever – and, increasingly, wherever – necessary.

I aspire – we aspire – to make the infrastructure invisible, for the benefit of our customers. We want infrastructure, to some degree, to be boring. It’s when people get pragmatic that things get kind of boring – but that’s also when the ball moves furthest forward 🙂

The biggest decision any IT leader needs to make today comes down to philosophy.

Inflexible IT infrastructure has contributed greatly to a negative perception of IT departments everywhere. IT leaders now have an opportunity to greatly enhance that perception within their organizations by focusing more on the art of the possible than on the constraints that have in many ways held IT organizations back from reaching their full potential.

That’s a burden. It’s a leadership challenge for IT: be pragmatic, and channel the creative juices of your most precious assets (hint: humans, creativity/innovation, time) toward the things that are truly valuable.

None of that means that every IT organization should deploy every application workload on premises, BTW.  In the same way that I believe IT is increasingly consumed vs. constructed on premises (using HCI and turnkey cloud stacks), when a workload should be off-premises on IaaS/PaaS or just consumed as SaaS – go for it.

Pragmatism and transformation are all about deciding what you are going to STOP wasting time, blood, sweat, and tears on – and, conversely, deciding what matters most.