Cloud Services / Data / Kubernetes

Tutorial: Deploy a Kubernetes-Driven PostgreSQL-Hyperscale on Azure Arc

26 Nov 2020 6:00am

This article is the final part of the Azure Arc series, where we explore Azure Arc enabled data services. See Part 1, Part 2, and Part 3.

In this final part of the Azure Arc series, we will deploy the data controller followed by PostgreSQL-Hyperscale.

Though there are multiple techniques available for deploying Azure Arc enabled data services, we are using the native Kubernetes deployment model.

This article assumes that you have a Kubernetes cluster running version 1.17 or above, with a storage class called local-storage configured. I am using PX-Essentials, the free storage option from Portworx by Pure Storage, as the storage layer. You are free to use any Kubernetes-compatible storage engine.

Azure Arc enabled data services rely on a data controller for lifecycle management. All the objects of this service are deployed as Custom Resource Definitions (CRDs). You need Kubernetes cluster administration permissions to perform this deployment.

Installing the Data Controller

Let’s start by deploying the required CRDs:
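A minimal sketch of the step, assuming the CRD manifest published in the microsoft/azure_arc GitHub repository (the exact path may differ for your release):

```shell
# Install the Azure Arc data services custom resource definitions.
# The manifest URL below is an assumption; verify it against the
# microsoft/azure_arc repository for your release.
kubectl create -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/custom-resource-definitions.yaml
```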

Azure Arc enabled data services are typically installed within a namespace called arc. Let’s create that:
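Creating the namespace is a single command:

```shell
# Create the namespace that will hold all Arc data services objects.
kubectl create namespace arc
```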

The next step is to deploy a bootstrapper that handles incoming requests for creating, editing, and deleting custom resources:
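A sketch of the bootstrapper deployment, assuming the manifest lives alongside the CRDs in the microsoft/azure_arc repository:

```shell
# Deploy the bootstrapper into the arc namespace. The manifest path is
# an assumption; check the repository for the current location.
kubectl create -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper.yaml -n arc
```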

You should now have the bootstrapper up and running in the arc namespace.
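You can verify this with a quick pod listing; the bootstrapper pod should show a Running status:

```shell
# List the pods in the arc namespace and confirm the bootstrapper is Running.
kubectl get pods -n arc
```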

We have to create a secret that holds the username and password of the data controller. On macOS, you can run the below commands to generate a base64 encoded string for username and password:
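For example, with a hypothetical username of arcadmin and a password of Password@123 (substitute your own values):

```shell
# Generate base64-encoded strings for the controller credentials.
# The -n flag is important: it suppresses the trailing newline, which
# would otherwise be encoded into the value.
echo -n 'arcadmin' | base64       # prints YXJjYWRtaW4=
echo -n 'Password@123' | base64   # prints UGFzc3dvcmRAMTIz
```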

Take the values from the above commands to create a secret:
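A sketch of the secret manifest, assuming the controller looks for a secret named controller-login-secret (verify the expected name against the data controller template):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: controller-login-secret
  namespace: arc
type: Opaque
data:
  username: YXJjYWRtaW4=        # base64 of the example username arcadmin
  password: UGFzc3dvcmRAMTIz    # base64 of the example password Password@123
```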

Download the data controller YAML file and modify it to reflect your connectivity and storage options:

Update the template with an appropriate resource group, subscription ID, and storage class name. Apply the data controller specification:
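The download-and-apply flow looks roughly like this, assuming the template path in the microsoft/azure_arc repository:

```shell
# Download the data controller template (path is an assumption; adjust
# for your release of the microsoft/azure_arc repository).
curl -o data-controller.yaml https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/data-controller.yaml

# Edit data-controller.yaml to set your subscription ID, resource group,
# location, and storage class, then apply it:
kubectl apply -f data-controller.yaml -n arc
```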

The controller is exposed through a LoadBalancer service. Find the IP address and port of the service:
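A listing of the services in the namespace shows the external endpoint; the service name below is an assumption, so check the output of a plain listing first:

```shell
# List services in the arc namespace; the controller's LoadBalancer
# service (e.g. controller-external-svc) exposes the external IP and port.
kubectl get svc -n arc
```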

We can now log in to the controller with the azdata tool. Run the below commands to install the latest version of the Azure Arc enabled data services CLI:
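A sketch of the installation, assuming the pip-based route Microsoft documented at the time (platform-specific installers are also available):

```shell
# Install the azdata CLI via pip. The aka.ms shortcut resolves to the
# requirements file for the current release; this is an assumption, so
# consult the official install docs if it fails.
pip3 install -r https://aka.ms/azdata
```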

Running azdata login will prompt us for the details:
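For example (the --namespace flag is an assumption; azdata prompts interactively for the username, password, and controller endpoint gathered above):

```shell
# Log in to the data controller running in the arc namespace.
azdata login --namespace arc
```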

Now that the controller is in place, we are ready to deploy PostgreSQL Hyperscale.

Installing PostgreSQL Hyperscale Instance

Start by downloading the YAML template file from the official Microsoft GitHub repository. Modify it based on the values of your storage class, and set the password value to a base64-encoded string.

The following specification uses Password@123 as the password, stored base64-encoded in a secret, with the storage class pointed to local-storage:
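A sketch of the shape of that specification; the field names and the postgresql-12 kind follow the v1alpha1 CRD as I recall it, so verify every field against the downloaded template before applying:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pg-login-secret        # name is an assumption; match the template
type: Opaque
data:
  password: UGFzc3dvcmRAMTIz   # base64 of Password@123
---
apiVersion: arcdata.microsoft.com/v1alpha1
kind: postgresql-12
metadata:
  name: postgres01             # hypothetical instance name
spec:
  service:
    type: LoadBalancer
  storage:
    data:
      className: local-storage
      size: 5Gi
    logs:
      className: local-storage
      size: 5Gi
```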

Apply the specification with the below kubectl command:
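Assuming the edited template was saved as postgres.yaml:

```shell
# Create the PostgreSQL Hyperscale instance in the arc namespace.
kubectl apply -f postgres.yaml -n arc
```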

In a few minutes, you will see four new pods belonging to PostgreSQL Hyperscale added to the arc namespace:
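You can watch the rollout until all pods reach the Running state:

```shell
# Watch the pods in the arc namespace as the instance comes up.
kubectl get pods -n arc --watch
```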

The deployment is exposed through a service that can be used to access the database:
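Listing the services again reveals the external IP and port of the PostgreSQL LoadBalancer service:

```shell
# Find the LoadBalancer service fronting the PostgreSQL instance.
kubectl get svc -n arc
```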

We can also use azdata to get the PostgreSQL endpoint:
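A sketch of the azdata route, assuming the hypothetical instance name postgres01 (check azdata's help output for the exact subcommand in your version):

```shell
# List the endpoints of the PostgreSQL Hyperscale instance.
azdata arc postgres endpoint list --name postgres01
```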

We can now log in to PostgreSQL using any client tool. The below screenshot shows the psql CLI accessing the database instance:
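For example, a psql connection using the endpoint found above (the postgres admin user is an assumption; substitute the external IP and port from your service listing):

```shell
# Connect to the instance; psql will prompt for the password.
psql -h <EXTERNAL-IP> -p <PORT> -U postgres
```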

This tutorial walked you through the steps of deploying Azure Arc enabled data services on Kubernetes.

Janakiram MSV’s webinar series, “Machine Intelligence and Modern Infrastructure (MI2),” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar.

Portworx is a sponsor of The New Stack.

Feature image by Shot by Cerqueira on Unsplash.