We hear a lot about how well-executed container orchestration can streamline IT and business processes. At the Google Cloud Platform conference in March, we saw success in action through a testimonial from e-payment service provider WePay, which broke its monolithic application into a set of services coordinated through Google's open source Kubernetes container orchestration engine.
“It really changed our whole business,” said Richard Steenburg, principal engineer for WePay, in a follow-up interview. “Kubernetes doesn’t put you in a box. You can build on top of it.”
This week, Google released the latest version of Kubernetes, version 1.3, which adds enterprise-friendly capabilities such as support for stateful applications.
WePay created a PCI-certified system for processing credit card payments, serving as the middleman between businesses and consumers. Originally, the company built out the service as a monolithic app, with PHP handling the main parts. The company experienced the thorny performance issues that can hamper monolithic systems. For example, an API logging service, which was only used for internal debugging, was querying the database so heavily it was slowing performance.
The company is now in the process of breaking the monolith into a smaller set of services. It has put a freeze on new features while it carves pieces of the app off into smaller services.
Going forward, Kubernetes appears to offer WePay the flexibility the company will need to expand in new directions. "We look on the roadmap and can see features that will help us," Steenburg said.
WePay’s move to a service-based approach to its architecture unfolded in a couple of steps. At first, the development team peeled off bits of functionality, adding them to Docker containers. The deployment process still involved firing up the containers individually, so WePay moved on to using Ansible, installing one container per virtual machine.
Using an SCM (software configuration management) product like Ansible had its challenges, however. Namely, WePay couldn’t autoscale its services.
“Every deployment, you need to go back and change the deployment scripts if you want to add more, so the scaling is very manual and every time that happens, there’s a risk of downtime, because you’re actually changing the thing that deploys it in order to handle more nodes,” Steenburg said.
Ansible did offer the ability to do rolling updates, which Steenburg admitted was a time-saver. But rolling updates are a feature Kubernetes offers out of the box, so the next step for the company was migrating to Kubernetes.
At first, Kubernetes presented a challenge: rethinking the architecture. No longer would specific jobs be tied to a specific set of machines. The company had to think of its operations as a pure state machine, with a whole set of interrelated processes totally independent of the hardware, and without any special hardware needs. "I think it's good because you want to be pure in the model. You don't want to go outside of that," Steenburg said.
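This declarative, hardware-independent model can be sketched with a Kubernetes manifest. The example below is a hypothetical Deployment (the service name, image, and replica count are illustrative, not WePay's actual configuration): the desired state, including replica count and the rolling-update behavior Kubernetes provides out of the box, is declared in one place, and the orchestrator reconciles the cluster to match it.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api            # hypothetical service name
spec:
  replicas: 3                   # desired state; scale by changing this number,
                                # not by editing deployment scripts
  selector:
    matchLabels:
      app: payments-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1         # keep most replicas serving during an update
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
      - name: payments-api
        image: registry.example.com/payments-api:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

With a manifest like this, scaling no longer means rewriting deploy scripts: a single `kubectl scale deployment payments-api --replicas=5` (or editing `replicas` and re-applying) changes the desired state, and Kubernetes handles placement on whatever machines are available.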
Another initial challenge of using Kubernetes was that the open source orchestration engine did not offer a way to encrypt messaging traffic, which the company needed in order to maintain Payment Card Industry Data Security Standard (PCI DSS) compliance. When Kubernetes instantiates pods, it gives them cluster-local host names, so they can't be verified through third-party certificate authorities outside the network.
So the company built a Kubernetes sidecar offering an in-house certificate authority (CA) service, built on Nginx and KubeDNS, which could issue the certificates used to encrypt traffic between services.
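The underlying pattern can be sketched with plain OpenSSL, independent of WePay's actual implementation: an internal root CA signs a certificate for a cluster-local hostname that no public CA would ever issue, and any peer that trusts the internal root can then verify it. The hostnames and filenames below are illustrative.

```shell
#!/bin/sh
set -e

# 1. Create a private root CA (the in-house certificate authority).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.crt -days 365 \
  -subj "/CN=internal-ca.example"

# 2. Generate a key and signing request for a cluster-local service
#    name -- a hostname a public CA outside the network cannot verify.
openssl req -newkey rsa:2048 -nodes \
  -keyout svc.key -out svc.csr \
  -subj "/CN=payments.default.svc.cluster.local"

# 3. Sign the service certificate with the internal CA.
openssl x509 -req -in svc.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out svc.crt -days 90

# 4. Any peer that trusts ca.crt can now verify the service cert.
openssl verify -CAfile ca.crt svc.crt   # prints "svc.crt: OK"
```

In a cluster, the service key and certificate would then back the TLS termination in front of each service, so pod-to-pod traffic stays encrypted even though the names only resolve inside the cluster's DNS.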
“Payments are hard. Working with banks is hard, working with regulators is hard, and the biggest problem is fraud and identifying fraud and that’s a liability that we’re always going to live with and combat it basically on a day-to-day basis because that’s just the business that we’re in,” Steenburg said.
It looks like WePay will soon have an architecture flexible enough to conquer these challenges.
Feature image: Rich Steenburg at GCP Next 2016.