How Data Helps Modernize Monolithic VMware Applications
LogDNA sponsored this post.
As cloud native applications become more pervasive, enterprises are looking to bring similar architectures and services to their legacy monolithic applications. Containerizing a monolithic application benefits IT organizations in many ways, including easier integration with microservices, the ability to scale, and ultimately a better end-user experience. DevOps tools like centralized log management are essential for managing the influx of data that comes with modernizing applications.
What Is a Monolithic Application?
A monolithic application is a system that combines the core business logic, the user interface, data layers, APIs, and every other aspect of the application in a single, tightly coupled stack. This type of application often runs its entire stack on virtual machines or bare-metal servers, sometimes as the sole tenant on a box. While a monolith can be deployed on a cloud-based platform, it typically performs poorly in comparison to a cloud native architecture.
Cloud Native: Microservices and Beyond
A microservices architecture is an ecosystem in which each component of the application — a service — addresses a business need or function rather than being defined by its use within the system. As a result, we typically refer to business-oriented APIs, or APIs defined in business terms, when discussing microservices. A containerized architecture like VMware on OpenShift is essentially one type of microservices architecture, one that focuses on providing a lightweight environment for each service. Serverless architectures go a step further and reduce your application down to the functions that define your core business logic. All of these architectures are collectively called cloud native architectures.
Why Go Cloud Native?
Cloud native systems have many benefits. First, you can reuse components consistently across the entire ecosystem, rather than only reusing functions or classes within a codebase. As a result, the codebase shrinks and the entire system becomes more flexible (though not necessarily less complex). Second, you can restart services with minimal impact on the rest of the ecosystem — especially on the flow of data in and around the system — because redundancy can be added at the service level. This capability also enables faster scaling. Third, you have a faster path to release and deployment, because you already have a scaffold of the data flow mapped out; all you have to do within a container is ensure that the incoming and outgoing transmissions match. Finally, cloud native architectures make it easier to choose the best stack for each use case across the system, from compute optimized for speedy read-write on databases to cold storage optimized for size and cost for archival.
The key to moving off a monolithic architecture onto a cloud native framework is understanding how data flows through your system. You can think of that as your business needs, but really it's about understanding where data comes from, how it needs to transform from one logical step to the next, when it needs to be held in one state and for how long, and what it needs to look like when it leaves the application. This fundamental shift in thinking is probably what makes the move from monolith to cloud native so intimidating — aside from the sheer amount of work.
Starting from the Monolith
Understanding the flow of data across your monolith starts with examining your logs. For a monolithic application on VMware, you probably have logs from your application, your platform, and your systems, from the operating system down through your hardware stack. Each level of the stack keeps its data fairly isolated from the other levels, passing only key bits and pieces through. Traditionally, you'd have to install vRealize Log Insight on VMware and manage it yourself to see these application logs.
Moving to Containerization
Moving from a monolith to a containerized architecture requires making a plan, optimizing your system by removing old components and identifying new ones, and then deploying and maintaining your stack. Now that you understand the flow of data across your monolith, you can start with a plan.
Make a Plan
You probably already have a good sense of which elements of your monolith could become services, but what if you don't? When you open the logs from your monolithic application, you should start to see patterns of data flowing to, and collecting at, certain points over time. When you spot these collection points, identify whether they map to a business value, whether they do the same kinds of things again and again, and whether they form natural breaks in the data flow. If they do, you have likely identified, or validated your assumption about, which elements should become services.
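As a rough illustration of mining logs for collection points, here is a small sketch in Python. The log format, component names, and operations are invented for the example, not drawn from any particular VMware log; the idea is simply to count how often data flows into each component.

```python
import re
from collections import Counter

# Hypothetical log line format (an illustrative assumption):
#   "2023-04-01T12:00:00Z web -> billing: create_invoice"
LOG_PATTERN = re.compile(r"^\S+\s+(?P<src>\w+)\s*->\s*(?P<dst>\w+):\s*(?P<op>\w+)$")

def collection_points(log_lines):
    """Count how often data flows into each component.

    Components that receive a disproportionate share of inbound
    traffic are candidates for natural service boundaries.
    """
    inbound = Counter()
    for line in log_lines:
        match = LOG_PATTERN.match(line)
        if match:
            inbound[match.group("dst")] += 1
    return inbound.most_common()

sample = [
    "2023-04-01T12:00:00Z web -> billing: create_invoice",
    "2023-04-01T12:00:01Z web -> billing: charge_card",
    "2023-04-01T12:00:02Z billing -> inventory: reserve_stock",
]
print(collection_points(sample))  # → [('billing', 2), ('inventory', 1)]
```

In a real migration you would run this kind of analysis over weeks of logs, but even a toy version makes the pattern-hunting concrete: the components where data keeps collecting are the ones to evaluate as services.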
Let’s say that you’re taking a monolithic VMware application and moving it to a microservices architecture on IBM Cloud. You use vSphere in your monolith to ensure that applications run smoothly when developers release new updates. From this, you determine that the business need is system monitoring and you find a cloud service that meets that need — such as IBM Cloud Availability Monitoring.
Develop New Features
Once you've created a map of your data patterns, consider which components the new architecture might trim out and how many new APIs you'll need in order to expose your services. The data you've gathered from your logs should identify the entry and exit points for your new services and how they will need to access data, just as your monolith may have exposed data to the outside world.
Let's take our example application. You will need to set up an API to connect to your database container and its persistent storage volume. OpenShift makes inbound and outbound connections like this easy by letting you expose internal Services, which differ from external Routes in that they are intended to be used only within the application's ecosystem.
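A minimal sketch of what such an internal Service manifest might look like; the name `orders-db`, the label, and the port are hypothetical placeholders, and a real manifest would be tailored to your application.

```yaml
# Internal-only Service for a hypothetical database container.
# Unlike a Route, this gets only a cluster-internal address;
# nothing outside the cluster can reach it.
apiVersion: v1
kind: Service
metadata:
  name: orders-db          # hypothetical service name
spec:
  selector:
    app: orders-db         # matches the pods running the database
  ports:
    - port: 5432           # port other services in the cluster connect to
      targetPort: 5432     # port the database container listens on
```

Other services in the same cluster would then reach the database through the Service's DNS name, while external traffic stays limited to whatever you deliberately publish through a Route.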
Deploy and Maintain Your New System
Now you're ready to start using the new system. Deploying and maintaining your newly containerized application will be a bit different from your monolithic application. You will no longer need to provision hypervisors and spin up VMs just for your application, and you'll no longer have long-running systems outside of your persistent storage. Rather than going into your hypervisors to update the operating systems, for example, you'll often just restart your containers so they hop to the next available server that's up to date.
As a result, your logs are going to look a lot different, too. You'll need to lean more heavily on your application logs. You may have access to your OpenShift logs if you have access to that part of your cluster, but you're unlikely to see your operating system logs as much, simply because you won't be maintaining the operating system yourself; that sort of maintenance lives with your cloud provider. The one thing you will need, though, is log aggregation and management. You suddenly have a lot more information, since you now have multiple services running on pods full of containers, all passing data among themselves constantly. Services like IBM Log Analysis with LogDNA help you maintain your clusters by making your logs easier to search when you're in a hurry and by providing visibility into how your system's data flow is restructured when a new DeploymentConfig rolls out.
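To see why aggregation matters once logs are scattered across pods, here is a small Python sketch of grouping centralized log records by the service that emitted them. The record fields and values are illustrative assumptions, not the actual schema of LogDNA or any other log manager.

```python
from collections import defaultdict

# Hypothetical records as a centralized log manager might ingest them
# from a cluster; field names here are illustrative assumptions.
records = [
    {"service": "billing", "pod": "billing-1", "msg": "charge_card ok"},
    {"service": "billing", "pod": "billing-2", "msg": "create_invoice ok"},
    {"service": "inventory", "pod": "inventory-1", "msg": "reserve_stock ok"},
]

def group_by_service(records):
    """Group log records by the service that emitted them.

    With many short-lived pods per service, searching by service
    (rather than by host) is what makes the logs navigable.
    """
    grouped = defaultdict(list)
    for rec in records:
        grouped[rec["service"]].append(rec["msg"])
    return dict(grouped)

print(group_by_service(records))
```

A real log management service does this indexing (and much more) for you; the point of the sketch is that without some aggregation layer, the same query would mean hunting through every pod's log stream by hand.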
Modernizing your VMware applications allows you to easily integrate with many cloud native tools that give you better visibility into your environments. DevOps tools like IBM Log Analysis with LogDNA can help you understand how data flows through your systems, so that you can develop and debug your newly containerized applications with confidence.
VMware is a sponsor of The New Stack.
Feature image by 8926 from Pixabay.