A Pandemic Plan for Application Architecture
Keeping your business going during unforeseen events is crucial, and key to this is a solid business continuity plan. The current COVID-19 pandemic is putting such plans to the test and stretching application scalability to the max, forcing companies to rethink the way they serve up and manage the applications they need to keep their operations running.
The coronavirus has forced the masses to work from home. And if you’re like many, you’re scrambling to ensure your employees can access the applications and data they need to do their jobs, while still delivering a superior application user experience that keeps them engaged and productive. App experience, productivity and business sustainability are intrinsically linked, after all.
To achieve this, you need the right application architecture and delivery strategy in place: a strategy that delivers application performance, because if an application isn’t performing, it doesn’t matter how good it is. The strategy must also deliver scale, because if you can’t autoscale your IT, you can’t scale your business to meet the demands placed on it.
Microservices Are Built for Business Continuity
If you are considering modernizing your applications for the cloud era, this current pandemic might be the final push you need. Microservices-based applications are designed for business agility — almost by definition — and provide you with a solid foundation to ensure that your business continues when unforeseen events occur.
Right now, an unprecedented number of people are working from home and putting applications under extreme stress. Nowhere is this more pronounced than in the workforce productivity and collaboration application arena. Webex meeting volumes for March 2020 were up 250% on February and the application is delivering over 4.2 million meetings a day. Similarly, Zoom is seeing 200 million meeting participants a day, up from 10 million just three months before.
This is having such an effect that GoToMeeting is suggesting its customers stagger their meeting start times away from the top of the hour, to ease strain on the infrastructure and enable a better experience. The most intense strain on any application is usually during its initial start-up. Users need to be authenticated, resources allocated and services loaded into memory on the servers. Naturally, if lots of people do this at the same time — say, at the top of the hour — then the application struggles. It’s like queuing for the supermarket to open. The first few minutes are hectic, but then things calm down. Arriving ten minutes after it opens is usually a much better experience.
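The staggering advice is essentially jitter: spreading client start times across a window so the authentication and start-up spike is smoothed out rather than landing all at once. A minimal sketch of the idea in Python (the helper name and five-minute window are illustrative, not from any vendor’s SDK):

```python
import random
from datetime import datetime, timedelta

def jittered_start(scheduled: datetime, max_jitter_s: int = 300) -> datetime:
    """Offset a top-of-the-hour start time by a random delay.

    Spreading connections over a window (here up to 5 minutes) smooths
    the spike of authentication, resource allocation and service loading
    that hits when everyone joins at exactly the same moment.
    """
    return scheduled + timedelta(seconds=random.uniform(0, max_jitter_s))

meeting = datetime(2020, 4, 1, 10, 0)  # a meeting scheduled for 10:00 sharp
actual = jittered_start(meeting)
assert meeting <= actual <= meeting + timedelta(seconds=300)
```

The same trick applies server-side: clients that retry or reconnect with randomized delays avoid re-creating the thundering herd they are recovering from.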
These examples typify the need to scale applications on demand, and this is something that microservices-based applications do especially well. A microservices architecture lets you spin instances up or down to cope with fluctuating demand, and this modularity also makes it easier to update, fix or add new functionality to an application. Similarly, if one service is causing a bottleneck, you can scale out that specific microservice individually rather than the whole application, which makes far better use of resources. Orchestrators such as Kubernetes automate that scaling.
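The scale-out decision itself is simple arithmetic. Kubernetes’ Horizontal Pod Autoscaler, per its documentation, derives the desired replica count from the ratio of an observed metric to its target. A minimal sketch of that rule (the function wrapper is illustrative; the formula is the documented one):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Kubernetes HPA scaling rule:
    desired = ceil(current_replicas * current_metric / target_metric)
    """
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 180% CPU against a 60% target scale out to 12 pods;
# the same rule scales back in when the metric drops below target.
print(desired_replicas(4, 180, 60))  # → 12
print(desired_replicas(12, 30, 60))  # → 6
```

Because the rule operates per workload, a bottlenecked service can grow to twelve replicas while its quieter neighbours stay at one or two.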
When unforeseen events happen and you need to expand to new environments for resilience or scale, containerized microservices offer the ultimate in portability. Because the runtime code of the microservice and all the dependent binary libraries are held in the container, it can make moving your applications to the cloud — or among clouds — incredibly simple and fast.
Comprehensive business continuity means not only planning that in-house developed applications can scale, but checking with your SaaS vendors about how they will cope with unforeseen events. Key questions to ask your SaaS application vendor:
1) Are their services designed to expand with increased load?
2) Are they microservices-based?
It is also vital to maintain the operational consistency of your application delivery infrastructure, so that you can shift your workloads quickly when you need to scale or overflow. Consistency across platforms cuts down the learning curve and lets you be confident that neither your security posture nor your application delivery will be impacted.
When you do scale workloads, you will have to scale the associated application delivery controllers (ADCs) to meet the increased traffic loads. Choosing ADCs with burst licensing ensures you can cover transient spikes without worrying. For more substantial changes, you need ADCs with flexible licensing. One effective approach is a pool of capacity that lets you move your ADC capacity where you need it, fast: on premises to cloud, or among clouds.
ADCs should be the mainstay of your application delivery strategy. Your business depends on them to provide the best experience for your users. You can’t afford for them to be down. Open source products are good for development, but when it comes to production environments, you need to keep things up and running all the time. That’s when it’s good to know that you have the backing of SLA-governed support contracts, rather than relying on forums and the best efforts of the community to get your business up and running during unforeseen events.
When you have to step outside “normal,” it’s easy to get lost. You need to know how bad things are, what’s broken and how to fix it. Fast. This is why you have site reliability engineers (SREs); they help you quantify performance and can map out what you need to do to put it right.
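One common way SREs quantify “how bad things are” is the error budget implied by a service-level objective (SLO): the SLO fixes how many failures are tolerable, and the budget tracks how much of that allowance an incident has burned. A minimal sketch with hypothetical numbers:

```python
def error_budget_remaining(slo: float, total_requests: int,
                           failed_requests: int) -> float:
    """Fraction of the error budget still unspent.

    A 99.9% availability SLO over 1,000,000 requests allows
    (1 - 0.999) * 1,000,000 = 1,000 failed requests; every failure
    beyond zero spends part of that budget.
    """
    allowed = (1.0 - slo) * total_requests
    if allowed == 0:
        return 0.0
    return round(1.0 - (failed_requests / allowed), 4)

# 250 failures against a 1,000-failure budget leaves 75% of it.
print(error_budget_remaining(0.999, 1_000_000, 250))  # → 0.75
```

When the remaining budget nears zero, that is the quantified signal to stop shipping features and spend effort on reliability instead.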
In our next article, we will look at the role of SREs in the context of business continuity.
Feature image from Pixabay.