
Using Cloud Foundry or Kubernetes? Get to Know the Open Service Broker

Jul 23rd, 2018 6:00am by Matt McNeeney

Matt McNeeney
Matt is the project lead for the Cloud Foundry Services API project, which aims to enhance the developer experience of provisioning and managing services. Matt is also a co-chair for the Open Service Broker API, a project by Pivotal, Google, IBM, Red Hat and others that allows developers to deliver services to applications running on multiple platforms, including Cloud Foundry and Kubernetes. Over the past year, Matt has presented talks on services and service brokers at various conferences, including CF Summit EU, VMworld and SpringOne Platform.

Pivotal sponsored this post.

To improve their digital fortunes, enterprises are embracing Cloud Foundry and Kubernetes. These projects offer rock-solid abstractions for apps and containers, respectively. Recently, a third abstraction has also become an industry standard: the Open Service Broker API (OSBAPI).

The OSBAPI was born from a simple truth: custom apps require backing services to do anything interesting. As Cloud Foundry rose to prominence, the community considered how platform authors and service providers should interact. The Cloud Foundry Service Broker API project was started in 2014 to provide a simple and stable contract these parties could use. A few years later when Kubernetes arrived, the K8s community instantly saw the value this contract provided to the ecosystem, which led to the adoption of the same model. To better align with the OSBAPI project’s goal of connecting developers to a global ecosystem of services, the project was renamed the Open Service Broker API and was given a new governance model to better reflect its intentions.

This shared tooling across projects is a big deal. It makes it easier for independent software vendors (ISVs) to offer up their tech to both communities with minimal overhead. IT practitioners benefit as well since there is just a single workflow and API they need to master.

The IT industry has adopted the OSBAPI standard for three key reasons:

  1. OSBAPI is multicloud. Applications and their associated services must be able to move easily across data centers, including from on-premises to the public cloud. The OSBAPI specification is a solid abstraction that takes care of the underlying resources. It provides a consistent development experience — this is an incredibly important attribute in this multicloud world.
  2. OSBAPI connects anything. Your chosen platforms will run hundreds — and eventually thousands — of applications. That means there’s an incredible diversity of add-on services that need to be supported. Everything from enterprise systems such as DB2 and Oracle to popular microservices management tools such as Spring Cloud Services (SCS). The OSBAPI specification is a consistent interface that allows ISVs to focus on building powerful, scalable and secure services, and provides developers a consistent, self-service experience to obtain any service their applications require. This gives ISVs a new addressable market consisting of thousands of developers using cloud-native platforms.
  3. OSBAPI can meet common InfoSec requirements. When you bind (and unbind) to a service instance using the OSBAPI specification, your application or container receives unique credentials that can be used to access the service. That means you can revoke access at any time for just one application. The OSBAPI project has unique credential management hooks into Cloud Foundry and Kubernetes, as well. Open-source projects like CredHub make platform-to-service integrations even more secure.
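To make that last point concrete, here is a minimal sketch (in Go, using only the standard library) of what a broker's bind endpoint might look like. The createDatabaseUser helper, the port and the connection string are all hypothetical; the point is simply that every binding receives its own credentials, so access can later be revoked for a single application without touching anything else.

```go
package main

// Minimal sketch of an OSBAPI bind endpoint. The helper names and values
// below are hypothetical; the response shape follows the spec's binding
// response, which returns credentials to the requesting application.

import (
	"encoding/json"
	"net/http"
	"strings"
)

// bindingResponse mirrors the shape of an OSBAPI binding response body.
type bindingResponse struct {
	Credentials map[string]string `json:"credentials"`
}

// createDatabaseUser is a stand-in for provisioning a per-binding account
// in the backing service (database, message queue, legacy system, etc.).
func createDatabaseUser(bindingID string) (user, password string) {
	return "app-" + bindingID, "generated-secret" // placeholder values
}

func handleBind(w http.ResponseWriter, r *http.Request) {
	// Path: /v2/service_instances/{instance_id}/service_bindings/{binding_id}
	// For brevity this sketch treats every request on the prefix as a bind;
	// a real broker would dispatch on method and path.
	parts := strings.Split(r.URL.Path, "/")
	bindingID := parts[len(parts)-1]

	user, password := createDatabaseUser(bindingID)

	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusCreated)
	json.NewEncoder(w).Encode(bindingResponse{
		Credentials: map[string]string{
			"username": user,
			"password": password,
			"uri":      "postgres://" + user + ":" + password + "@legacy-db.internal:5432/app",
		},
	})
}

func main() {
	http.HandleFunc("/v2/service_instances/", handleBind)
	http.ListenAndServe(":8080", nil)
}
```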

Given all of this, how are enterprises using the OSBAPI in conjunction with modern platforms? Let’s take a look at common use cases.

OSBAPI Use Cases and Best Practices

Interacting with Legacy Systems for Modern Apps

Many companies have a large number of legacy systems. These systems likely run on-premises on custom hardware. Here’s the conundrum: modern, cloud-native applications and services need to interact with the valuable data stored in these systems. Yet, granting developers access to these systems is often slow, requiring manual intervention via support tickets.

Thankfully, the OSBAPI provides a solution. When you build a service broker to handle the interactions between your modern cloud platform and legacy services, the administrative burden plummets. Your development teams are now empowered to innovate faster.
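As a rough illustration of what such a broker might advertise, here is a hedged sketch of the OSBAPI catalog endpoint for a hypothetical legacy system. The service name, plan and IDs are made up; the JSON shape follows the spec's GET /v2/catalog response.

```go
package main

// Sketch of the catalog endpoint a legacy-system broker might expose.
// Service and plan names below are hypothetical examples.

import (
	"encoding/json"
	"net/http"
)

type plan struct {
	ID          string `json:"id"`
	Name        string `json:"name"`
	Description string `json:"description"`
}

type service struct {
	ID          string `json:"id"`
	Name        string `json:"name"`
	Description string `json:"description"`
	Bindable    bool   `json:"bindable"`
	Plans       []plan `json:"plans"`
}

type catalog struct {
	Services []service `json:"services"`
}

func handleCatalog(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(catalog{
		Services: []service{{
			ID:          "a1b2c3d4-legacy-orders", // hypothetical ID
			Name:        "mainframe-orders",       // hypothetical legacy system
			Description: "Access to the legacy order database via the broker",
			Bindable:    true,
			Plans: []plan{{
				ID:          "e5f6-read-only",
				Name:        "read-only",
				Description: "Shared, read-only connection pool",
			}},
		}},
	})
}

func main() {
	http.HandleFunc("/v2/catalog", handleCatalog)
	http.ListenAndServe(":8080", nil)
}
```

Once an endpoint like this is in place, developers can discover and provision the legacy system through the platform's normal self-service workflow instead of filing support tickets.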

This approach has a long-term benefit, too. Once the broker is in place, this consistent interface allows other development teams to start planning the move of legacy systems to the cloud. The impact on existing applications or teams is minimal.

Easy Access to Public Cloud Services

The hyperscale cloud providers have extensive APIs for interacting with their services. These firms have used the Open Service Broker API to create a simple interface between their services portfolio and platforms. (For more information, see Amazon Web Services, Google Cloud Platform and Microsoft Azure.) As a result, developers can easily harness the power of any particular public cloud service, no matter where they host their apps.

Enterprise Security and Compliance

When platforms and service brokers communicate with each other, basic access authentication is required. This works well for many service providers. They can use the credentials from the authentication header to understand which of their customers are talking to their broker. From there, they can implement auditing and billing workflows.
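A minimal sketch of that check, assuming placeholder credentials, might look like the following. A real broker would look the caller up in its customer records and feed the authenticated identity into its auditing and billing workflows.

```go
package main

// Sketch of the basic access authentication check a broker performs on every
// platform request. The credentials here are placeholders; in practice they
// identify a specific customer or platform installation.

import (
	"crypto/subtle"
	"net/http"
)

func requireBasicAuth(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		user, pass, ok := r.BasicAuth()
		validUser := subtle.ConstantTimeCompare([]byte(user), []byte("platform")) == 1
		validPass := subtle.ConstantTimeCompare([]byte(pass), []byte("s3cret")) == 1
		if !ok || !validUser || !validPass {
			w.Header().Set("WWW-Authenticate", `Basic realm="broker"`)
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		// The authenticated identity could be recorded here for auditing or billing.
		next(w, r)
	}
}

func main() {
	http.HandleFunc("/v2/catalog", requireBasicAuth(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"services": []}`))
	}))
	http.ListenAndServe(":8080", nil)
}
```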

Recently, the Open Service Broker API specification was updated to allow for more secure authentication flows such as OAuth. Sure, this can involve some out-of-band communication between a platform and service broker. But in exchange, it can provide higher levels of security and auditability. We have seen many “hosted” or “managed” service brokers choose this model (see Flexible Deployment Options), especially the cloud-based providers that already offer suites of managed services.

As mentioned above, service brokers should generate unique credentials whenever an application or container attempts to get access to a service instance through a service binding. This provides two key benefits. First, you gain a tightly controlled security/access model. Second, this allows for access to be revoked immediately and without redeploying any code.
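The flip side of the bind sketch shown earlier is unbinding. A rough sketch, assuming a hypothetical dropDatabaseUser helper, shows how deleting a binding revokes only that application's credentials:

```go
package main

// Sketch of the unbind flow: deleting a binding revokes only that binding's
// credentials. dropDatabaseUser is a hypothetical stand-in for the backing
// service's own revocation mechanism.

import (
	"net/http"
	"strings"
)

func dropDatabaseUser(bindingID string) error {
	// Placeholder: remove the per-binding account created at bind time.
	return nil
}

func handleUnbind(w http.ResponseWriter, r *http.Request) {
	// For brevity this sketch assumes every DELETE on the prefix is an unbind.
	if r.Method != http.MethodDelete {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	parts := strings.Split(r.URL.Path, "/")
	bindingID := parts[len(parts)-1]

	if err := dropDatabaseUser(bindingID); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// Only this application's access is revoked; other bindings keep working,
	// and no code needs to be redeployed.
	w.Header().Set("Content-Type", "application/json")
	w.Write([]byte("{}"))
}

func main() {
	http.HandleFunc("/v2/service_instances/", handleUnbind)
	http.ListenAndServe(":8080", nil)
}
```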

This fast and controlled security model is critical for many industries. Many service brokers provide this out of the box today.

For large enterprises with dedicated teams focused on providing services to application teams, the service broker model can be a great asset. The model allows teams to configure the services and plans that a particular broker offers. From there, they can tailor the services to the needs of different applications for performance, compliance or cost reasons. Your application teams can build with confidence, knowing they are using secure, compliant and maintainable services.

Flexible Deployment Options

There are many different deployment methodologies that can be used to build a service broker. The right choice depends on where your service broker will be deployed and how it will be accessed.

Many service brokers are wrapped up as downloadable packages, such as tiles for Pivotal Network or Helm Charts for Kubernetes, allowing platform operators to easily install services in their platforms. From there, they grant their development teams permission to access said services. This model often works well for security-conscious enterprises. It also allows them to use service brokers in Internet-less environments where ingress and egress networking policies are tightly locked down.

Other service providers, such as those that provide hosted APIs or cloud-based services, often prefer to choose the “hosted” or “managed” service broker model. In this deployment methodology, the service provider is responsible for deploying, maintaining and upgrading its own service broker. The provider must also hand out access details to customers. This model allows service providers to continually innovate on their services and update them at any time. (Note that internet connectivity is usually required.)

There is also a class of service brokers that may want to deviate from the Open Service Broker API specification in some way. For example, the Google Cloud Platform Service Broker requires a special OAuth security flow that Cloud Foundry does not support “out of the box.” To mitigate this, a service broker proxy can be used (e.g. the Google Cloud Platform Proxy Service Broker).

The proxy application is deployed to the platform with a simple cf push, and the platform interacts with it as it would with any other service broker. However, behind the scenes, this application is proxying requests to the hosted service broker while taking care of any custom authentication flows. This is a similar model to the one we discussed earlier, where a small shim (hosting the API endpoints) can be put in front of a legacy system. That way, you can still enable self-service use of existing services.
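As a rough sketch of that pattern, the following proxy forwards every broker request to a hosted broker while swapping in the custom authentication the hosted broker expects. The upstream URL and the token-fetching helper are hypothetical.

```go
package main

// Sketch of a proxy service broker: the platform talks to this app as if it
// were an ordinary broker, while requests are forwarded upstream with a
// custom authentication flow applied.

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

// fetchOAuthToken stands in for whatever out-of-band flow the hosted broker
// requires (for example, exchanging a service account key for a bearer token).
func fetchOAuthToken() string {
	return "example-bearer-token"
}

func main() {
	upstream, err := url.Parse("https://hosted-broker.example.com") // hypothetical hosted broker
	if err != nil {
		panic(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(upstream)
	originalDirector := proxy.Director
	proxy.Director = func(r *http.Request) {
		originalDirector(r)
		// Replace the platform's credentials with the auth the hosted broker expects.
		r.Header.Set("Authorization", "Bearer "+fetchOAuthToken())
	}

	http.ListenAndServe(":8080", proxy)
}
```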

The OSBAPI: The Consistent, Secure Way to Connect Any Service to Your Platforms of Choice

Distributed systems change frequently, and the underlying technologies are constantly evolving. Against this backdrop, Cloud Foundry and Kubernetes are foundational technologies for enterprise IT. It’s time to think of the OSBAPI as a similar foundational building block.

For more information about the OSBAPI, download the recently published whitepaper from pivotal.io.

Cloud Foundry Foundation, Microsoft and Google are sponsors of The New Stack.

Feature image via Pixabay.
