Driving Digital Experiences via Cloud Native Applications
This is part of a series of contributed articles leading up to KubeCon + CloudNativeCon on Oct. 24-28.
There may be dozens of off-the-shelf and Software-as-a-Service (SaaS) solutions available, but organizations still need to write their own applications if they expect to compete.
We live in an experience economy, meaning every IT effort and spend inside modern enterprises must focus on providing better digital experiences internally and externally. These unique digital experiences, delivered using application software, are the key to standing out from the crowd and unlocking top-line growth.
And according to Jeff Lawson, Twilio co-founder and CEO, organizations that fail to build this software will die.
What has also become apparent is that the applications delivering digital experiences need to be cloud native to support rapid innovation, and scale dynamically to meet demand.
It is not enough to simply build applications using traditional software development approaches and then deploy them in the cloud. Organizations that have attempted to do so quickly figured out that this approach limits their ability to leverage cloud infrastructure and increases costs. Instead, developers need to adopt an architecture for cloud native applications.
Adopting a Cloud Native Architecture
The Cloud Native Computing Foundation (CNCF) defines cloud native this way:
“Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”
In addition, the CNCF has defined a cloud native reference architecture with five stacks:
- Application definition/development
- Orchestration and management
- Runtime
- Provisioning
- Observability and analysis
This post outlines a reference architecture for the top layer: application definition/development.
At first sight, the diagram looks like yet another horizontally drawn, layered architecture with shadows of the classic three tiers. The intent, however, is to group components by specific capability and by the sequence in which they communicate. Note that the component is the atomic unit of this architecture.
Let’s look at the rightmost set of components.
Systems of Record (SoR)
Enterprises often rely on prebuilt systems for standard business capabilities. Examples of SoRs include accounting, human resources, customer relationship management and document management systems. In most cases, key business entities, such as employees, customers, orders and revenue, reside inside SoRs. These systems are rapidly moving to SaaS offerings that organizations can subscribe to and consume. However, the default capabilities of SoRs are insufficient to fulfill the requirements of a modern digital business. Therefore, enterprises must connect their SoRs with applications that enhance those default capabilities.
In addition to the data stored in SoRs, applications must store master, transaction and reference data in their own private storage. While some data sets reside solely in SoRs, aggregated data is often kept in data storage solutions optimized for cloud performance. While SQL, NoSQL and cloud storage provide the foundation, concepts such as data lakes and data meshes play a vital role in creating storage systems and a domain data model. The domain data model represents business entities (as types) in the way the business wants to see them.
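As a minimal sketch of that last idea, a domain data model can represent business entities as types; the entity and field names below are illustrative assumptions, not part of the reference architecture itself:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical domain data model: business entities represented as types,
# shaped the way the business wants to see them, not the way an SoR
# happens to store them.
@dataclass(frozen=True)
class Customer:
    customer_id: str
    name: str

@dataclass(frozen=True)
class Order:
    order_id: str
    customer: Customer
    total: float
    placed_on: date

# An order is expressed in business terms, independent of storage schema.
order = Order("o-1", Customer("c-1", "Acme Corp"), 99.5, date(2022, 10, 24))
```

Frozen dataclasses keep the model immutable, which suits entities that are aggregated from SoRs rather than mutated in place.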
Domain and Business Logic Services
Domain and business logic services are the cornerstone of unique digital experiences. They expose business capabilities as application programming interfaces (APIs), increasing an organization's composability and re-composability. These APIs are usually for internal consumption, but securely exposing them for external consumption is not uncommon.
Domain services are designed and modeled using domain-driven design (DDD) concepts and are built as collections of microservices. These microservices implement business logic by consuming and processing data from data stores and SoRs. Why not simply call this set of components "microservices"? Because a single microservice is too granular to expose as a business capability, to map to an agile, autonomous team, or to provide the required enterprise governance and security. As a result, enterprises have started using domain-driven architecture styles, such as:
- Domain-oriented microservice architecture (DOMA)
- Mesh architecture of apps, APIs and services (MASA)
- Cell-based architecture (CBA)
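To make the granularity point concrete, here is a hedged sketch of a domain service: one coarse-grained business capability composed from two finer-grained microservices. All names and data are hypothetical, invented for illustration:

```python
def customer_profile_service(customer_id):
    # Stand-in for a microservice that would read from a CRM SoR.
    return {"id": customer_id, "name": "Acme Corp"}

def order_history_service(customer_id):
    # Stand-in for a microservice that would read from an order data store.
    return [{"order_id": "o-1", "total": 120.0},
            {"order_id": "o-2", "total": 80.0}]

def customer_summary_api(customer_id):
    """Domain API: the business capability exposed at the domain boundary.

    Neither microservice alone is meaningful as a business capability;
    the domain service composes them into one.
    """
    profile = customer_profile_service(customer_id)
    orders = order_history_service(customer_id)
    return {
        "customer": profile,
        "order_count": len(orders),
        "lifetime_value": sum(o["total"] for o in orders),
    }
```

Consumers see only `customer_summary_api`; the microservices behind it can be split, merged or reimplemented without changing the domain API.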
Microservices inside a domain service's boundary connect with storage and SoRs using drivers, connectors and APIs. A mesh architecture is a viable style for intradomain communication between components, and mutual transport layer security (mTLS) provides an efficient way of securing intradomain message flows. Managed APIs built on protocols such as HTTPS and gRPC can be used for interdomain communication. The APIs exposed as business capabilities from each domain service are labeled domain APIs in this reference architecture and are used to access the service externally.
An enterprise service bus (ESB) was the primary integration technology for service-oriented architecture (SOA)-based application development. However, as a centralized middleware server, the ESB pattern does not fit the distributed nature of a cloud native architecture.
However, cloud-to-cloud integration and some enterprise integration patterns (EIPs) are required in current application development. The best way to handle these integrations is by wrapping them inside integration services designed under microservice architecture (MSA) principles. Integration services then expose APIs, which can be invoked from microservices, other integration services, apps, workflows and triggers.
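As one possible illustration, an integration service might wrap the classic "message translator" enterprise integration pattern behind a callable API. The SaaS payload fields below are assumptions made up for this sketch:

```python
def translate_saas_invoice(saas_payload):
    """Message translator EIP: convert a hypothetical cloud accounting
    system's invoice format into the canonical internal representation
    used by domain services."""
    return {
        "invoice_id": saas_payload["InvoiceNumber"],
        # Store money as integer cents to avoid float rounding downstream.
        "amount_cents": int(round(saas_payload["Total"] * 100)),
        "currency": saas_payload.get("CurrencyCode", "USD"),
    }

def invoice_sync_api(saas_payload):
    # The integration service exposes an API that microservices, other
    # integration services, apps, workflows and triggers can invoke.
    return translate_saas_invoice(saas_payload)
```

The translation logic stays inside the integration service, so domain services never need to know the external SaaS format.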
Experience APIs are a set of APIs optimized for consumption by end-user apps and targeted at app developers. In the current API economy, experience APIs are packaged as API products and put in API marketplaces for discovery and consumption by app developers, while providing the opportunity for API providers to introduce monetization plans.
There are many forms of experience APIs. For example, a domain API can be proxied as an experience API, although this approach mainly adds enhanced security, observability and monetization on top of the original domain API. API mashups, chaining and composition are other methods of creating experience APIs. Because domain APIs are neither content-aware nor optimized for each app type, backend-for-frontend (BFF) services take on a role here, optimizing content and exposing it as a new experience API. BFFs follow MSA principles and are developed and deployed as individual components.
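A small sketch of the BFF idea: the same domain API response is reshaped per app channel. The product fields and the per-channel field selection are assumptions, chosen only to show the pattern:

```python
# What a domain API might return: complete, channel-agnostic data.
FULL_PRODUCT = {
    "sku": "p-42",
    "name": "Widget",
    "description": "A very long marketing description ...",
    "price": 9.99,
    "warehouse_locations": ["us-east", "eu-west"],  # internal-only field
}

def product_experience_api(product, channel):
    """Experience API exposed by a BFF, optimized per app type."""
    if channel == "mobile":
        # Mobile apps get a slim payload to save bandwidth.
        return {k: product[k] for k in ("sku", "name", "price")}
    # Web apps get richer content, but internal fields are still omitted.
    return {k: product[k] for k in ("sku", "name", "description", "price")}
```

Each channel's BFF can evolve its payload shape independently, without forcing changes on the domain API or on other channels.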
Applications vs. Apps
The terms application and app are often confused. An application has a broader scope than an app, touching many of the capabilities that deliver digital experiences and containing many component types; it is essentially a horizontal slice of this reference architecture.
By contrast, an app is a component that acts as a channel connecting humans with these experiences. Web, mobile and the Internet of Things (IoT) are the main channels that deliver digital experiences, but the app landscape is dynamically changing with the metaverse.
Consumers of apps are looking for personalized, real-time, geo-sensitive and predictive digital experiences. At the same time, they want to interact with companies through multiple channels. Therefore, app developers have to pay more attention to the user experience (UX) and information exposed in the apps.
In this reference architecture, experience APIs are typically used to exchange interactions between the user, app and backend systems. However, there is no hard-and-fast rule to only rely on experience APIs, and apps have the flexibility to call external-facing, secure domain APIs for that purpose.
Network data blocks represent the messages and events that flow between components. APIs are the glue between components, enabling communication in this application architecture. Request/response, publish/subscribe and streaming-style APIs can be used as needed.
Meanwhile, business data types can be carried over different transport styles, such as REST/HTTP, gRPC, GraphQL and AsyncAPI, using message formats such as JSON, XML and Protobuf. In addition, an API-led network architecture allows security policies to be injected and observability to be added at API gateways, enabling enhanced security and monitoring of applications.
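To illustrate the format point, the same business data type can be serialized into more than one message format. The sketch below (using only the Python standard library, with a made-up order record) shows JSON alongside XML; Protobuf would require a schema and generated code, so it is omitted here:

```python
import json
import xml.etree.ElementTree as ET

# One business data type, two wire formats.
order = {"order_id": "o-1", "total": 120.0}

# JSON, as commonly used with REST/HTTP experience and domain APIs.
json_message = json.dumps(order)

# XML, still common when integrating with older SoRs.
root = ET.Element("order")
for key, value in order.items():
    ET.SubElement(root, key).text = str(value)
xml_message = ET.tostring(root, encoding="unicode")
```

Keeping serialization at the edges of a component means the domain data model itself stays format-agnostic.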
Today, enterprises recognize that they must pursue digital initiatives to achieve sustained growth and enhanced innovation. The reference architecture described in this post offers a pragmatic approach to addressing the needs and challenges faced by application development teams who want to deliver cloud native applications.
Teams should also use other cloud native technology stacks and standards, such as Kubernetes, Docker, GitOps, service mesh and the 12-factor app methodology for SaaS applications. By adopting the recommendations in this reference architecture, development teams can apply cloud native best practices to rapidly and reliably deliver applications that drive profitability and growth.
To hear more about cloud native topics, join the Cloud Native Computing Foundation and the cloud native community at KubeCon + CloudNativeCon North America 2022 in Detroit (and virtual) from Oct. 24-28.