Get from Here to Anywhere with Your Hybrid Cloud Infrastructure

May 18th, 2020 7:20am by

NetApp sponsored this post.

Alan Cowles
Alan is a Solutions Architect at NetApp.

“From here, anywhere” was the theme of the Red Hat Summit 2020 Virtual Experience. It’s quite a catchy tagline, but it does leave you wondering what it means. My NetApp colleagues and I were eager to find out, so we registered for the free virtual event that took place on April 28 and 29. After attending several of the sessions, I came away with a better idea of what it means.

We are all on a journey. We all have a starting point: the state of our infrastructure now. And we all have a destination, the place where we want our infrastructure to be — an infrastructure transformation, if you will. Many such transformative journeys are taking place in IT right now. One journey that many in our industry have undertaken in the past few years has a starting point of traditional on-premises infrastructure and a destination of the public cloud.

Challenges on the Journey to Simplicity

The on-site infrastructure often comprises compute, networking, storage, and virtualization, all managed by an on-site administration team. Conversely, the public cloud offers infrastructure that’s delivered as a service – which simplifies provisioning, deployment, and management. This hands-off approach has become key to consuming infrastructure for many companies. Although such companies might still host several of their services on-premises, they ultimately aim for simplicity.

With simplicity in mind, on-site data centers have seen a rise in deployments of converged and hyperconverged infrastructure. Hyperconverged solutions allow deployments of any number of similar servers, with the traditional roles of networking and storage virtualized by the hypervisor. And for ease of use, management is fronted by a straightforward UI, similar to the interfaces available in the public cloud.

These solutions still have their challenges, however. Simplicity often comes with trade-offs. In many cases, you're limited to the specific hypervisor that the vendor has chosen for its platform. Often, the virtualization of critical services, such as storage, adds overhead and consumes system resources. And because similar or identical nodes are required for the entire deployment, you can find yourself scaling your entire infrastructure just to meet a single resource requirement. For example, if you need additional storage, you also have to expand your compute footprint and add more hypervisors to the infrastructure, because the two services are conjoined.

Taking a Better Road: Disaggregation and Automation

To help you overcome these concerns, NetApp® HCI was designed as a disaggregated infrastructure: you can scale compute and storage nodes independently. You can also simplify deployment and management through the NetApp Deployment Engine (NDE), which lets you deploy a default configuration with a default hypervisor.

What makes NetApp HCI really innovative, however, is the fact that the infrastructure consists of separate storage and compute resources. You can, therefore, deploy your hypervisor of choice without any specific dependencies. Although this process isn’t automated, it is still simplified greatly through detailed design and deployment documentation, in the form of NetApp Verified Architecture (NVA) documents. These documents describe the setup and configuration process in minute detail, as performed by engineers at NetApp. Our team took this approach with NetApp HCI for Private Cloud with Red Hat. We engaged subject-matter experts at both NetApp and Red Hat to deploy a private cloud environment on NetApp HCI for virtualized and containerized applications, based on Red Hat OpenStack Platform 13 and OpenShift Container Platform 3.11.

And now, embracing even greater demands for simplicity and flexibility, NetApp and Red Hat are announcing a new joint solution: NetApp HCI for Red Hat OpenShift on Red Hat Virtualization. NetApp wanted to offer even greater simplicity, many more options, and greater freedom of choice with its hybrid cloud infrastructure. So we chose to collaborate again with Red Hat. This new solution supports OpenShift Container Platform 4.4, which runs as a virtualized infrastructure that’s hosted on Red Hat Virtualization 4.4. It’s installed on NetApp HCI compute nodes and has storage nodes running NetApp Element® 12 software.

Red Hat Virtualization is an enterprise-class virtualized data center technology that’s based on Red Hat Enterprise Linux and built on the foundation of the Kernel-Based Virtual Machine (KVM) hypervisor. You get all the features that you expect in a modern virtual data center, such as shared storage, live migration, high availability, and centralized management through the Red Hat Virtualization Manager.

With OpenShift 4.4, Red Hat introduced a fully automated installation experience on Red Hat Virtualization. You get simplified deployment through the installer-provisioned infrastructure method, plus a resource management model for containerized applications. Simply run the openshift-install binary and provide a few pieces of information about your Red Hat Virtualization environment that's deployed on NetApp HCI; a fully functional OpenShift cluster is then deployed for you with no further administrator action. Full-stack automated deployments are trouble-free, so your DevOps workflows can get up to speed with ease.
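In practice, the installer-provisioned flow comes down to two commands. The sketch below is illustrative, not a verified procedure for this specific solution: the directory name is made up, and the exact prompts depend on your OpenShift and RHV versions.

```shell
# Sketch of an installer-provisioned (IPI) OpenShift deployment on
# Red Hat Virtualization. Directory and environment details are illustrative.

# Generate install-config.yaml interactively. For the RHV (ovirt) platform,
# the installer prompts for details such as the RHV Manager URL, credentials,
# target cluster, and storage domain.
openshift-install create install-config --dir=ocp-on-rhv

# Deploy the full cluster from that configuration. The installer provisions
# the bootstrap and node VMs on RHV and waits for the cluster to come up.
openshift-install create cluster --dir=ocp-on-rhv --log-level=info

# On success, the installer prints the console URL and credentials, and the
# kubeconfig lands under the asset directory.
export KUBECONFIG=$PWD/ocp-on-rhv/auth/kubeconfig
oc get nodes
```

The same asset directory is reused to destroy the cluster later with `openshift-install destroy cluster --dir=ocp-on-rhv`.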

And the entire NetApp and Red Hat solution uses a NetApp Element-based storage system to provide iSCSI storage volumes that you can use as shared storage domains in Red Hat Virtualization. You also get persistent volumes for containerized applications in OpenShift, dynamically provisioned by the NetApp Trident storage orchestrator.
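To make the dynamic provisioning concrete, here is a minimal sketch of a Trident-backed storage class and claim against an Element (SolidFire) backend. It assumes Trident is already installed with a `solidfire-san` backend configured; all names are illustrative rather than taken from the solution's reference architecture.

```shell
# Sketch: dynamic provisioning with NetApp Trident on an Element backend.
# Assumes Trident (CSI) is installed and a solidfire-san backend exists.

cat <<'EOF' | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: element-iscsi          # illustrative name
provisioner: csi.trident.netapp.io
parameters:
  backendType: "solidfire-san"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data               # illustrative name
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
  storageClassName: element-iscsi
EOF

# Trident provisions an iSCSI volume on the Element storage nodes and
# binds it to the claim.
oc get pvc app-data
```

Applications then reference the claim as an ordinary volume; the underlying iSCSI plumbing is handled entirely by Trident.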

View Sessions and Attend the Master Class

If you’re interested in how NetApp and Red Hat are working together to create the best infrastructure solutions for hybrid cloud and containers, you can view several of the sessions from the Red Hat virtual summit on demand. Also, be sure to join us for a special Master Class webinar that’s dedicated to this solution, which will be broadcast live on May 19. This webinar is an opportunity to explore the joint solution in more depth as engineers from both NetApp and Red Hat present on NetApp HCI, Red Hat Virtualization, and OpenShift Container Platform. During the webinar, you will also have a chance to observe a short demonstration of the solution in action. The webinar will conclude with a live Q&A session. I will be there as a subject-matter expert from NetApp, and Andrew Sullivan will be the subject-matter expert from Red Hat.

We sincerely hope that you can join us for this live webinar on May 19. It’s just the next step as we all continue our IT infrastructure journey from here to anywhere.

Register for DevOps Master Class Special Edition: Red Hat on NetApp HCI, May 19, 2020, at 9:30 a.m. Pacific Time.

Following the Master Class we have an opportunity for you to engage 1-on-1 with NetApp technical experts to discuss your unique challenges. 

Red Hat OpenShift is a sponsor of The New Stack.

Feature image from Pixabay.
