The financial services industry has traditionally been heavily dependent on technology, yet it is often slow to adopt new technologies. The payments sector is somewhat of an exception. M-Pesa, for example, the mobile payments solution from Vodafone and its subsidiaries that enables unbanked individuals in Africa, India and elsewhere to make and receive payments, has been around for over a decade. Closer to home, teens and tweens are now splitting the cost of a pizza or an Uber using apps like Venmo, which has seen 80 percent growth this year.
Many of these financial technology firms (“fintechs”) have taken advantage of the modern application architectures and DevOps practices associated with “cloud native” technologies. Monzo, the “mobile” U.K. bank, discussed this in its presentation “Building a Bank With Kubernetes.” Its annual report, released in July, cites growth from zero to 750,000 customers in three years. And Monzo is not alone. A recent U.S. government report highlighted the growth of financial services offered by non-bank firms, chiefly fintechs. Some of the more striking data points:
- 3,300 fintech firms were created between 2010 and 2017
- Financing of fintech firms reached $22 Billion in 2017
- Personal loans by these firms went from 1 percent to 36 percent of loans in that period
So what is cloud native, how does it impact application development and IT Operations, and how can traditional financial services firms leverage it to compete with newer fintechs?
The Cloud Native Computing Foundation (CNCF) charter describes cloud native applications as having the following characteristics:
- Container packaged
- Dynamically managed
- Microservices oriented
Containerization enables rapid deployment and updating of applications, particularly when microservices are used. Dynamic management is typically achieved through Kubernetes, which handles deployments, maximizes resource utilization, provides “desired state management” capabilities, and enables application auto-scaling.
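At the heart of “desired state management” is a reconciliation loop: a controller continually compares the replica count you declared with what is actually running and takes whatever actions converge the two. The sketch below illustrates that idea in Python; the `reconcile` function and its action tuples are hypothetical, not Kubernetes API objects.

```python
# Illustrative sketch of Kubernetes-style desired-state reconciliation.
# The names here (reconcile, "start"/"stop" actions) are invented for
# illustration; real controllers work against the Kubernetes API.

def reconcile(desired_replicas, running_replicas):
    """Return the actions a controller would take to converge the
    running state toward the declared desired state."""
    diff = desired_replicas - len(running_replicas)
    if diff > 0:
        # Too few replicas running: start the missing ones.
        return [("start", i) for i in range(diff)]
    if diff < 0:
        # Too many replicas running: stop the surplus.
        return [("stop", r) for r in running_replicas[:-diff]]
    return []  # already converged, nothing to do

# Example: 3 replicas declared, only 1 running -> start 2 more.
actions = reconcile(3, ["pod-a"])
```

A real controller runs this comparison continuously, so the system self-heals: if a node dies and takes replicas with it, the next loop iteration simply starts replacements.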
Most development teams find that using containers in their application development processes is not too difficult, but IT operations teams usually have far less experience with Kubernetes. Complicating matters, most established financial services firms can’t, or won’t, retire monolithic core applications overnight. Unlike Monzo, which built its back end as microservices from the start, established financial services firms will need to architect hybrid applications with cloud native front ends running in the cloud, in their data centers, or both, connecting to back-end services running in the data center.
The Kublr platform enables IT Operations teams to deploy, run, and manage Kubernetes wherever they wish, but to architect a total solution there are several factors to consider. We provide our recommendations below:
Some Considerations Before Going Cloud Native with Kubernetes
Being able to develop, run, and manage cloud native applications in multiple environments means financial services firms must consider how they will address some key issues:
Leveraging the Scalability of the Cloud: Horizontal Pod Autoscaling vs. Node Autoscaling
Containers, container orchestration, and microservice technologies like Kubernetes and Istio promise scalability and rapid response to changing resource demands. However, running containers still requires a real infrastructure — whether physical or virtual machines. To help leverage the cloud’s scalability, Kubernetes supports scaling on two levels: 1) horizontal pod (auto)scaling, and 2) node (auto)scaling.
While horizontal pod scaling scales applications horizontally, increasing and reducing the number of running container replicas, it doesn’t take the infrastructure into account; it merely assumes there will be enough resources to start new container replicas when necessary.
Node scaling, on the other hand, is concerned with (automatically) adding new nodes to the cluster when more resources are needed, and stopping or removing underutilized nodes when not needed anymore.
Horizontal pod scaling is usually much faster; its reaction time is measured in seconds versus minutes for node scaling. Yet both mechanisms are needed to realize the benefits of automatic scaling in the cloud.
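To make the pod-scaling side concrete, the Kubernetes Horizontal Pod Autoscaler documents its core calculation as desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue), clamped to configured bounds. A minimal sketch of that formula (the min/max bounds here are illustrative defaults):

```python
import math

# Sketch of the Horizontal Pod Autoscaler's documented scaling formula:
# desired = ceil(current * currentMetric / targetMetric), clamped to
# the configured minReplicas/maxReplicas range.

def desired_replicas(current_replicas, current_utilization,
                     target_utilization, min_replicas=1, max_replicas=10):
    desired = math.ceil(
        current_replicas * current_utilization / target_utilization
    )
    # Clamp to the configured bounds, as the HPA does.
    return max(min_replicas, min(max_replicas, desired))

# Example: 4 pods averaging 90% CPU against a 50% target -> 8 pods.
desired_replicas(4, 90, 50)  # 8
```

Node autoscaling then has to notice that those 8 pods no longer fit on the existing nodes and provision more, which is why its reaction time is minutes rather than seconds.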
Cloud Native Front-End Applications that Talk to Monolithic Backend Apps (e.g., Core Banking Systems)
Sometimes migration to a cloud native architecture requires additional considerations, such as availability requirements related to compliance and pre-existing technology. For example, a mainframe database that isn’t easily scalable may require special precautions to ensure that cloud native applications scaling up and down in the presentation tier do not affect the availability of back-end databases.
Aligning Current Dev, QA, and Release Processes with a Faster Release Schedule
The new cloud native technology stack doesn’t only affect application development and delivery tools; it also requires QA and release process changes. Responsibilities shift and require adjustments to align with the faster release schedule. By its very nature, the infrastructure-as-code approach shifts certain infrastructure management concerns to Dev and introduces new DevOps practices. Some organizations adopt SRE (Site Reliability Engineer) roles to consolidate responsibilities for application quality and availability, and close the gaps between operations, QA, and development teams. In any case, processes and business are affected and need to be adjusted to get the best value out of the technology modernization.
Scaling Cluster and Application Monitoring and Providing the Right Visibility and Alerts to Dev and Ops Teams
A cloud native approach usually implies changes in application monitoring, visibility practices, and technologies. The most notable change is probably that cloud native application identity and location become much more fluid. Application components comprise multiple, dynamically changing replicas that move freely between nodes within a cluster, or even across clusters, and scale up and down; cluster nodes lack fixed identity and can be stopped and restarted in response to changing demand. Application components may even consist of replicas from different versions and variants, with traffic re-routed as needed, e.g., when rolling out a new feature or running an A/B test.
These dynamic environments call for new tools, such as Prometheus, Grafana, InfluxDB, M3, ELK stack, FluentD, and Jaeger, to name a few. Integration of these monitoring tools requires serious consideration and planning.
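Prometheus, for instance, works by scraping a plain-text `/metrics` endpoint from each service. The sketch below hand-rolls that text exposition format to show what is actually scraped; a real service would use an official client library such as `prometheus_client` rather than building the strings itself.

```python
# Illustrative sketch of the Prometheus text exposition format that a
# scraper collects from each service. The metric name and endpoint
# labels are hypothetical; real code should use a client library.

REQUESTS_TOTAL = {"payments": 0, "transfers": 0}

def observe(endpoint):
    # Increment a counter each time a request is handled.
    REQUESTS_TOTAL[endpoint] += 1

def render_metrics():
    # Produce the text format Prometheus scrapes from /metrics.
    lines = [
        "# HELP http_requests_total Total HTTP requests handled.",
        "# TYPE http_requests_total counter",
    ]
    for endpoint, count in sorted(REQUESTS_TOTAL.items()):
        lines.append(f'http_requests_total{{endpoint="{endpoint}"}} {count}')
    return "\n".join(lines) + "\n"
```

Because every replica exposes the same endpoint, the monitoring system can aggregate across replicas by label instead of tracking individual, short-lived pods by name, which is exactly what the fluid identity described above requires.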
Troubleshooting Microservices with Jaeger, Zipkin and Other Solutions
Traceability is one aspect of monitoring and visibility that becomes particularly important as cloud native migration efforts move further along and switch focus from an infrastructure and platform layer to application refactoring. Replacing monoliths with a microservices architecture brings a number of advantages, but also comes with its own challenges, and traceability is one of them. Jaeger, Zipkin and other frameworks emerged to close this gap. They normally integrate well with cloud native microservices frameworks like Istio and container orchestration tools like Kubernetes.
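The core idea these tracing frameworks share is context propagation: every request gets a trace ID that is carried across service boundaries, and each hop records a span linked to its parent. The toy `Span` class below illustrates that model; it is not the Jaeger or Zipkin API.

```python
import time
import uuid

# Toy illustration of Jaeger/Zipkin-style distributed tracing: one
# trace ID shared across services, one span per hop, each span linked
# to its parent. The Span class is invented for illustration only.

class Span:
    def __init__(self, operation, trace_id=None, parent_id=None):
        self.operation = operation
        self.trace_id = trace_id or uuid.uuid4().hex  # shared per request
        self.span_id = uuid.uuid4().hex               # unique per hop
        self.parent_id = parent_id
        self.start = time.monotonic()

    def child(self, operation):
        # Propagate the trace ID so one request can be followed
        # across many microservices; record this span as the parent.
        return Span(operation, trace_id=self.trace_id,
                    parent_id=self.span_id)

root = Span("checkout")            # request enters the front end
payment = root.child("charge-card")  # downstream microservice call
```

In a real system, the trace and span IDs travel in request headers (e.g., W3C Trace Context), and the collector reassembles the spans into a per-request call tree for troubleshooting.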
Securing Container Deployments: Container Scanning, Trusted Registries, Admin IAM, and Communication Between Nodes
Security is another facet of the new stack that requires careful consideration and planning. Container security was a legitimate source of concern, much like virtualization security in the early days of virtualization adoption. And just as with virtualization, demand for reliable container security has resulted in solutions being developed and adopted at all levels of the stack:
- Container isolation and security technologies — Kubernetes and Docker integration with SELinux and AppArmor, Linux cgroups and namespaces;
- Infrastructure security — integration of container orchestration frameworks with infrastructure management layer (such as AWS, Azure and other providers for Kubernetes), security policy management and governance across infrastructure and container orchestration layers;
- Network security across levels — infrastructure (VPC, subnets, routing, network policies, security groups, etc.), containers (overlay network providers, e.g., Weave’s transparent encryption), container orchestration (Kubernetes network policies, TLS, etc.), and application (e.g., Istio with transparent encryption);
- Application security — transparent authentication and authorization on the level of application framework, such as Istio;
- Container image security — the image repository and the processes supporting image validation, scanning, signing, and manual and automatic approval, plus Kubernetes admission controllers to enforce image deployment policies;
- Support for updates and security patches for all components and layers.
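To illustrate the image-security layer, the sketch below shows the kind of check a Kubernetes admission controller enforces: reject any image that comes from an untrusted registry or has not passed vulnerability scanning. The registry names and the scan-result set are hypothetical; a real controller would consult a trusted registry and scanner via webhooks.

```python
# Illustrative admission-controller-style image policy: admit a pod's
# image only if it comes from a trusted registry AND has passed
# vulnerability scanning. All registry names here are hypothetical.

TRUSTED_REGISTRIES = {"registry.internal.bank", "quay.io/myorg"}
SCAN_PASSED = {"registry.internal.bank/payments-api:1.4.2"}

def admit(image):
    """Return (allowed, reason) for a requested container image."""
    if not any(image.startswith(reg + "/") for reg in TRUSTED_REGISTRIES):
        return False, "untrusted registry"
    if image not in SCAN_PASSED:
        return False, "image has not passed vulnerability scanning"
    return True, "admitted"

# Example: an unscanned or external image is rejected before it runs.
admit("docker.io/library/nginx:latest")  # (False, "untrusted registry")
```

In Kubernetes this logic would live behind a validating admission webhook, so the policy is enforced at deployment time rather than relying on developers to remember it.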
The Cloud Native Future with Kubernetes
Across the industry, we are already seeing innovative financial services firms start to address these issues. Cloud native architectures are driving innovation in data science, IoT, and other areas, presenting established firms with both the threat of disruption and the opportunity to innovate.
The Cloud Native Computing Foundation is a sponsor of The New Stack.
Feature image via Pixabay.