Flexing AI Workloads Using Kubeflow and OpenShift Container Platform

Want to create a flexible environment for machine learning and deep learning workloads? Deploy Kubeflow on the OpenShift Container Platform with Dell EMC PowerEdge servers.

Many enterprises invest in custom infrastructure to support artificial intelligence (AI) and their data science teams. While the goal is sound, this approach can create problems. Often, these ad hoc hardware implementations live outside the mainstream data center, which can limit adoption.

To facilitate wider adoption of AI-driven applications in the enterprise, organizations can integrate production-grade, experimental AI technologies into already well-defined platforms. That’s the idea behind Kubeflow: The Machine Learning Toolkit for Kubernetes.

Kubeflow is an open-source Kubernetes-native platform to accelerate machine learning (ML) projects. It’s a composable, scalable, portable stack that includes the components and automation features you need to integrate ML tools. These tools work together to create a cohesive machine learning pipeline to deploy and operationalize ML applications at scale.
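To make the "composable pipeline" idea concrete, here is a minimal, framework-free sketch of chaining ML lifecycle steps into one pipeline. The function names and logic are purely illustrative assumptions, not Kubeflow APIs; in Kubeflow, each step would instead run as a container that the platform orchestrates on Kubernetes.

```python
# Conceptual sketch only: illustrates composing ML steps into a pipeline.
# In Kubeflow, each step runs as a containerized component; the names and
# toy logic below are placeholders, not Kubeflow SDK calls.

def ingest(source):
    # Stand-in for pulling raw records from a data source.
    return [{"feature": x, "label": x % 2} for x in range(int(source))]

def train(records):
    # Stand-in "training": derive a single parameter from the data.
    positives = sum(r["label"] for r in records)
    return {"weight": positives / len(records)}

def serve(model, feature):
    # Stand-in "prediction" using the trained parameter.
    return 1 if feature * model["weight"] >= 1 else 0

def pipeline(source, feature):
    # The pipeline wires the steps together, much as Kubeflow chains
    # containerized steps into a directed workflow.
    model = train(ingest(source))
    return serve(model, feature)

print(pipeline("10", 4))  # prints 1
```

The value of this structure is that each step is independently replaceable and testable, which is what lets Kubeflow swap tools in and out of a single cohesive workflow.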

A proven platform for Kubeflow

Kubeflow requires a Kubernetes environment, such as Google Kubernetes Engine or Red Hat OpenShift. To help your organization meet this need, Dell EMC and Red Hat offer a proven platform design that provides accelerated delivery of stateless and stateful cloud-native applications using enterprise-grade container orchestration.
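Kubeflow deployments on a Kubernetes platform such as OpenShift have typically been described declaratively by a KfDef custom resource, which the kfctl tool processes. The sketch below is a hedged illustration only: the application list, repository path, and manifest version are assumptions, so consult the Kubeflow and OpenShift documentation for a configuration matching your versions.

```yaml
# Illustrative KfDef sketch; field values are assumptions, not a
# tested configuration for any specific Kubeflow release.
apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
metadata:
  name: kubeflow
  namespace: kubeflow
spec:
  applications:
    - name: notebook-controller
      kustomizeConfig:
        repoRef:
          name: manifests
          path: jupyter/notebook-controller
  repos:
    - name: manifests
      uri: https://github.com/kubeflow/manifests/archive/v1.0.tar.gz
```

The declarative approach matters here: the same KfDef can be versioned, reviewed, and reapplied, keeping the ML platform managed with the same practices as the rest of the cluster.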

This enterprise-ready architecture serves as the foundation for building a robust, high-performance environment that supports the various lifecycle stages of an AI project: model development using Jupyter Notebooks, rapid iteration and testing using TensorFlow, training deep learning (DL) models using graphics processing units (GPUs), and enabling prediction using the developed models.
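For the GPU training stage, Kubeflow's TFJob custom resource lets a TensorFlow training job request GPUs through standard Kubernetes resource limits. The sketch below is illustrative: the job name, image, command, and replica/GPU counts are placeholders you would replace with your own training job.

```yaml
# Hedged sketch of a TFJob requesting one GPU per worker.
# Image, command, and counts are placeholder assumptions.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-train
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: tensorflow
              image: tensorflow/tensorflow:latest-gpu
              command: ["python", "/opt/train.py"]
              resources:
                limits:
                  nvidia.com/gpu: 1
```

Because the GPU is requested as a schedulable resource, the same cluster can serve notebook, training, and inference workloads, with Kubernetes placing each on appropriately equipped PowerEdge nodes.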

Among other advantages:

  • Running ML workloads in the same environment as the rest of your company’s applications reduces IT complexity.
  • Using Kubernetes as the underlying platform makes it easier for your ML engineers to develop a model locally using readily accessible development systems, such as laptop computers, before deploying the application to a production cloud-native environment.

To learn more

Ready for a deeper dive? Check out these resources.