TriggerMesh sponsored this post.
Virtually every technology publication these days is full of cloud stories, often about webscale companies doing amazing things exclusively in the cloud. Unlike Netflix, Twitter and Facebook, however, most companies have a heritage that predates the availability of cloud computing. While those relatively young companies had the benefit of growing to maturity in the cloud native era, myriad companies may feel held hostage by legacy infrastructure that can’t be migrated to the cloud for reasons of risk, compliance or compatibility.
Just because you have a legacy investment that would be disruptive to move doesn’t mean you can’t adopt cloud or cloud native systems that enable new digital initiatives and still capitalize on those legacy investments. However, it does mean that you need to find ways to integrate in a nondisruptive way.
There are a few practices you can put in place to get to a cloud native environment while still using your existing legacy investment. I advocate adopting cloud native practices and architecture patterns incrementally, which includes applying those same patterns on-premises.
Deconstruct Monoliths with Composable Infrastructure
In the early days of the internet, the idea of stacks for delivering web-based services was prevalent: Microsoft had WISA (Windows, IIS, SQL Server and ASP), and open source users had LAMP (Linux, Apache, MySQL, PHP). The LAMP stack was the most democratic, allowing you to choose the components of your stack, whereas a single-vendor stack provided “a single throat to choke” should something go awry. The ability to choose the layers of the stack is a benefit many users of legacy technology may not realize today.
When you look at today’s applications, Java is often held up as the gold standard for reliability, though you need to manage the JVMs, tune the stack and rely on garbage collection to manage memory. You also need an app server to serve the instances. By taking a container-based approach to running individual services, you can leverage Kubernetes and Knative (both housed in the CNCF), which simplify things by automatically scaling containers up and down as needed.
Kubernetes and containers make application environments portable from on premises to the cloud and back again. An example of how you could get the best of both worlds is to consider Spring Boot, an open source framework for Java developers aimed at cloud native deployments that can be deployed in containers that can run on premises with Kubernetes or in the cloud.
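As a minimal sketch of that portability, a containerized Spring Boot service could be described to Kubernetes with a Deployment manifest like the one below; the same manifest works on-premises or in a managed cloud cluster. The service name, image and registry are hypothetical placeholders, not from any real deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service            # hypothetical service name
spec:
  replicas: 2                     # Kubernetes keeps two instances running
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.0.0  # placeholder image
          ports:
            - containerPort: 8080   # Spring Boot's default HTTP port
```

Because the manifest describes desired state rather than machines, the same definition can be applied to a data center cluster today and a cloud cluster tomorrow.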
Using composable infrastructure is a best practice: taking the best technologies and solutions to build systems that are decoupled but well integrated. Gartner describes the Composable Enterprise as a business made from interchangeable building blocks that follows four principles: modularity, autonomy, orchestration and discovery. The idea that any system or application can benefit from composability is often overlooked; anything can be part of composable infrastructure, not just cloud services.
Deliver Data More Quickly
We experience batch processing every day. Banks typically process deposits overnight, so we don’t see a deposit in our banking app until after the batch runs. The same applies to utilities that process usage monthly, so we see our consumption only once a month.
Batch processing was adopted because the load placed on the data warehouse could interrupt or slow down business operations. The goal, then, is to move to an architecture that speeds up the delivery of data without interrupting current business operations. That’s where extract, load and transform (ELT) and event-driven architecture (EDA) can help.
Replicating and Syncing Data by Moving from ETL to ELT
Many times, we use the terms replicating data and syncing data interchangeably. Technically, there’s an important difference. Replication implies that a copy of the data (or some subset thereof) is maintained to keep the data closer to the user, often for performance or latency reasons. Synchronization implies that two or more copies of data are being kept up to date, not necessarily that each copy contains all the data, though some consistency is maintained between the data sources.
Using an event-streaming technology like Apache Kafka, you can replicate data from read-only data producers (databases, ERP systems), keeping your attack surface smaller since you aren’t granting writes to the database. You can also choose to replicate only what’s needed by other systems, such as mobile apps, web portals and other customer-facing systems, without having them place load on the canonical database.
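The pattern can be sketched in a few lines of Python. Here, in-memory queues stand in for Kafka topics, and the schema (including the sensitive field) is hypothetical; the point is that only the fields a customer-facing read model needs ever leave the canonical database:

```python
# Sketch of selective replication: a change stream from a read-only
# producer is filtered down to just the fields a customer-facing
# read model needs. Queues stand in for Kafka topics.
from queue import Queue

change_stream = Queue()   # stand-in for a Kafka topic fed by the source database
read_model = {}           # stand-in for the replica serving a web portal

# Events as they might arrive from the canonical database (hypothetical schema)
change_stream.put({"id": 1, "name": "Ada", "balance": 120.0, "ssn": "xxx-xx-1234"})
change_stream.put({"id": 2, "name": "Lin", "balance": 80.0, "ssn": "xxx-xx-5678"})

PUBLIC_FIELDS = ("id", "name", "balance")  # replicate only what downstream needs

while not change_stream.empty():
    event = change_stream.get()
    # Project away sensitive fields before they reach the replica
    read_model[event["id"]] = {k: event[k] for k in PUBLIC_FIELDS}

print(read_model[1])  # → {'id': 1, 'name': 'Ada', 'balance': 120.0}
```

A real pipeline would use a Kafka consumer in place of the loop, but the projection step is the same idea.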
Event-Driven Architecture (EDA)
When you look at any major cloud provider, the pattern of event-driven architecture is prevalent. In AWS, for example, services are decoupled and run in response to events. They are made up of three types of infrastructure: event producers, event consumers and an event router.
While AWS deals exclusively in services, your enterprise likely has things like message buses and server software that logs activity. These systems can be event producers. Their output can be streamed via Kafka or consumed directly from your log server by an event router. For this, I suggest the project I work on, the open source TriggerMesh Cloud Native Integration platform, to connect, split, enrich and transform these event sources.
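The producer, router and consumer roles can be illustrated with a small Python sketch. This is the pattern only, not the TriggerMesh API: a real deployment would use an integration platform or message broker rather than in-process callbacks, and the event shapes here are invented:

```python
# Sketch of the event-router pattern: producers emit events, a router
# matches them against rules, and decoupled consumers react.

routes = []  # (predicate, consumer) pairs registered with the router

def subscribe(predicate, consumer):
    """Register a consumer for events matching the predicate."""
    routes.append((predicate, consumer))

def route(event):
    """Deliver an event to every consumer whose rule matches it."""
    for predicate, consumer in routes:
        if predicate(event):
            consumer(event)

alerts = []   # consumer 1: collects security alerts
archive = []  # consumer 2: archives everything

subscribe(lambda e: e["type"] == "login.failed", alerts.append)
subscribe(lambda e: True, archive.append)

# A producer (say, a server log tailer) emits events into the router
route({"type": "login.failed", "host": "web-01"})
route({"type": "login.ok", "host": "web-02"})

print(len(alerts), len(archive))  # → 1 2
```

Because producers never address consumers directly, swapping a consumer out, or adding another, is just a change to the routing rules.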
For example, you can forward messages from your mainframe over the IBM MQ message bus to integrate your legacy systems with cloud services like Snowflake. Using the event payloads, you can replicate data without placing additional load on the producer, transforming or enriching each event on the fly into a format the event consumer can ingest.
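A transformation step of that kind might look like the following sketch, which rewraps a message as it could arrive from IBM MQ into a CloudEvents-style envelope. The field names (`ACCT_NO`, `AMT_CENTS`), event type and source URI are hypothetical, chosen for illustration:

```python
# Sketch of on-the-fly transformation and enrichment: a mainframe
# message (hypothetical field names) becomes a CloudEvents-style
# JSON event a cloud consumer can ingest.
import json
import uuid
from datetime import datetime, timezone

def transform(mq_message: dict) -> str:
    return json.dumps({
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "type": "com.example.account.updated",   # hypothetical event type
        "source": "/mainframe/ibm-mq/accounts",  # hypothetical source URI
        "time": datetime.now(timezone.utc).isoformat(),
        "data": {
            # Enrichment: normalize the mainframe's integer cents to a decimal amount
            "account": mq_message["ACCT_NO"],
            "amount": int(mq_message["AMT_CENTS"]) / 100,
        },
    })

event = transform({"ACCT_NO": "0004711", "AMT_CENTS": "1250"})
print(json.loads(event)["data"])  # → {'account': '0004711', 'amount': 12.5}
```

The consumer only ever sees the normalized envelope, so the mainframe's internal record format never leaks downstream.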
By decoupling the event consumer and producer, you can change destinations if you change vendors (say, moving from AWS to Google Cloud) or add sources where you want to replicate data. You also get the benefit of synchronization in real time, in contrast to waiting on batched data to arrive.
EDA isn’t a silver bullet. There are times when you may need to make synchronous API calls. Using APIs, you can make queries based on some set of conditions that can’t be anticipated. In that case, I am a fan of using open source, cloud native technologies like Kong’s API Gateway.
WET or DRY Integration
When you talk about code, you might have heard the term WET (Write Everything Twice) as opposed to DRY (Don’t Repeat Yourself). In the world of development, WET refers to poor coding that needs to be rewritten, and DRY to more efficient code that doesn’t. In integration, the correlation isn’t exact, but I believe synchronous API integration is often WET: you write to the API and then write the handling of the response that the API returns.
There are many good reasons to do this when you need to complete a complex integration that requires look-ups and a complex answer. However, it can be overkill.
Event-driven architecture (EDA) enables DRY integration by offering an event stream that can be consumed passively, which has many advantages. If you are forwarding changes via event streams, you can even do what’s called change data capture (CDC).
Change data capture is a software process that identifies and tracks changes to data in a database. CDC provides real-time or near-real-time movement of data by moving and processing data continuously as new database events occur. Event-driven architectures can accomplish this by using events that are already being written but then can be streamed to multiple sources.
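To make the idea concrete, here is a deliberately simplified CDC sketch that diffs two snapshots of a table and emits insert, update and delete events. Production CDC tools read the database’s own change log rather than diffing snapshots, and the table contents here are invented:

```python
# Sketch of change data capture: compare two snapshots of a table
# keyed by primary key and emit change events downstream systems
# can consume.

def capture_changes(before: dict, after: dict) -> list:
    events = []
    for key, row in after.items():
        if key not in before:
            events.append({"op": "insert", "key": key, "row": row})
        elif before[key] != row:
            events.append({"op": "update", "key": key, "row": row})
    for key in before:
        if key not in after:
            events.append({"op": "delete", "key": key})
    return events

before = {1: {"name": "Ada"}, 2: {"name": "Lin"}}
after = {1: {"name": "Ada L."}, 3: {"name": "Mae"}}

for e in capture_changes(before, after):
    print(e)
# emits: update for key 1, insert for key 3, delete for key 2
```

Each emitted event can then be streamed to any number of consumers, which is exactly the reuse that makes the integration DRY.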
Legacy Modernization: Bringing Mainframes to the Cloud
Many corporations moving to the cloud face one of the most entrenched pieces of legacy technology there is: the mainframe. Until I went digging, I didn’t realize the full extent of this. Mainframes still run a large amount of COBOL; in fact, our whole financial system relies on technology that is unlikely to move to the cloud in the near future.
- 96 of the world’s largest 100 banks, nine out of 10 of the world’s largest insurance companies, 23 of the 25 largest retailers in the United States, and 71% of the Fortune 500 use IBM System z mainframes.
- Mainframes handle 90% of all credit card transactions.
- There are still between 200 and 250 billion lines of COBOL code in production. Roughly 43% of banking systems use COBOL, and it runs every time you swipe an ATM card. There are 1.5 billion new lines of COBOL programmed each year.
One of the most interesting and unforeseen integrations I have run into is the integration of mainframes with the cloud. While Amazon doesn’t have an AWS Mainframe-as-a-Service, there is a benefit in integrating workflows between mainframes and the cloud. One global rental car company I work with has an extensive workflow that takes data stored in IBM mainframe copybooks and transforms it into events delivered to AWS SQS to automate workflows.
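A transformation of that shape can be sketched in Python: a fixed-width record is sliced into named fields and emitted as a JSON event that a queue like SQS could carry. The field layout below is hypothetical, loosely in the spirit of a COBOL copybook, not the rental company's actual format:

```python
# Sketch of turning a fixed-width mainframe record into a JSON event.
import json

# Hypothetical copybook-style layout: field name -> (offset, length)
LAYOUT = {
    "rental_id": (0, 8),
    "branch":    (8, 4),
    "car_class": (12, 2),
}

def record_to_event(record: str) -> str:
    """Slice a fixed-width record into fields and wrap it as a JSON event."""
    fields = {name: record[off:off + ln].strip()
              for name, (off, ln) in LAYOUT.items()}
    return json.dumps({"type": "rental.created", "data": fields})

raw = "00012345ATL X1"  # 8-char rental id, 4-char branch, 2-char class
print(record_to_event(raw))
```

Once the record is an ordinary JSON event, everything downstream (SQS, Lambda, dashboards) can consume it without knowing anything about the mainframe.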
There are many reasons you might want to forward mainframe traffic, not just for workflows, but for data replication, real-time dashboards or to take advantage of cloud services that have no data center equivalent. Also, because you aren’t logging in to the event-producing system, there can be a security benefit: a smaller attack surface that exposes only the event stream and not the host system.
Case Study — Composable Infrastructure: Security Notification Framework
I believe strongly that going forward there will be two main types of infrastructure: services from cloud providers and open source software. Open source has eaten the world. Linux is the dominant operating system in the cloud and the data center. Kubernetes is becoming the open source cloud native fabric of the cloud. Then there is an abundance of free and open source data center software from multibillion-dollar corporations, consortia and innovative start-ups alike.
One incredibly interesting example of composable infrastructure is the ONUG Cloud Security Notification Framework. CSNF is an open source initiative led by FedEx, Raytheon and Cigna that tackles the difficulty of providing security assurance for multiple clouds at scale caused by the large volume of events and security state messaging. The problem is compounded when using multiple cloud service providers (CSPs) due to the lack of standardized events and alerts among CSPs.
This gap translates into increased toil and decreased efficiency for the enterprise cloud consumer. Cloud Security Notification Framework (CSNF), developed by the ONUG Collaborative’s Automated Cloud Governance (ACG) Working Group, is working to create a standardization process without sacrificing innovation.
The interesting thing about CSNF is that it’s a loosely coupled set of technologies that can incorporate both cloud services and on-premises technologies. While the initial goal is to normalize security events from cloud providers into a single format, it can also incorporate any number of other tools and data sources as appropriate.
While your existing infrastructure may not be completely modern, there’s no reason you can’t benefit from modern technologies and cloud services through integration; integration is arguably the key to modernization without the dreaded lift and shift. As you look at your integration layer today, consider a number of tactics:
- Decouple systems — Find opportunities to decouple systems so you can choose the best technologies for each individual need, rather than a monolithic “all-inclusive” stack.
- Integrate, automate, then replace technologies — By decoupling systems, you can introduce technologies that orchestrate the infrastructure and automate tasks. Given the scarcity of qualified cloud talent, it’s a better tactic to automate and make the employees you have much more productive.
- Remove blocking technologies — Remove technologies that block the flow of information and slow the ability of systems to respond, including looking at event-driven ELT solutions over batched technologies.
For IT operations to thrive, they need to adopt agile practices like DevOps and technologies that are open source, event-driven and cloud native. Even if you have an IT heritage to consider, it doesn’t mean you are stuck in the past. In the modern world of open source cloud native technologies, you can still reap the benefits without a wholesale move to the cloud.