NGINX sponsored this post.
With the rise of microservices and the move away from monolithic architectures, IT and DevOps teams have focused on consolidating their infrastructure onto a smaller and smaller set of platforms and technologies. The bold vision was that everyone would be on the same single “God platform” with one management plane, one data plane and one easy point of reference. Kubernetes, cloud native and microservices were to be the vehicle for this vision. Everyone would be on the same page, using the same metrics and speaking the same language. Networking and development teams would suddenly mind-meld and joy would ensue.
The trouble is, reality and KPIs have gotten in the way. NetOps, SecOps and DevOps teams need very different capabilities and metrics, to the point that jamming all teams onto a single platform and forcing them to share everything causes real pain to all parties.
The reality for most large enterprises today is that they must live in two worlds: cloud native and the older, but still relevant, monolithic world.
Reality Bites: Teams Need Different Things
For example, NetOps and DevOps have incompatible expectations of traffic management. NetOps wants to maintain security and uptime, with network stability and consistency the primary goal. NetOps teams have to keep the entire company up and running, including applications across finance, HR, marketing and more. So, no, some enthusiastic dev who wants to try out a new Clojure-based microservice cannot do a blue-green rollout this week. File a ticket and get in line, yo. This conservative view makes NetOps unwilling and frankly unable to make changes to firewalls, load balancers and other key networking platforms. NetOps’ caution subverts the entire premise of rapid iteration, continuous testing and snap deployments that are table stakes for agile DevOps.
On the other hand, DevOps, especially when building cloud native and microservices-based apps, needs to be able to change security rules and routing tables quickly to test and iterate at the speed of modern CI/CD pipelines. Services and cloud native applications are, by design, loosely coupled and only succeed when developers have a greater degree of self-determination and control over deployment. At the same time, modern applications are decomposed into so many different services that a lack of granular control can dramatically degrade DevOps’ ability to tune and deliver proper application performance, to the point that customers might notice. For DevOps teams, the idea of someone else controlling their deployments feels like a return to the 1990s, when they had to wait a month for a server.
For SecOps, microservices and Kubernetes present security challenges that can’t be addressed solely by traditional methods such as ring-fenced security. While SecOps is well-equipped to enforce policies outside the Kubernetes cluster, its tools aren’t suited to lightweight, containerized apps. With inadequate security, configuration errors by Kubernetes teams can lead to data breaches and exposures. SecOps becomes the villain when it shields the organization from risk by delaying or halting containerized app deployments in production. DevOps teams view SecOps as a major constraint on their ability to deliver apps quickly, shadow IT becomes the norm and security is routinely sacrificed in the name of speed.
Cloud Native Has Forced the Issue of Consolidation
As organizations of all sizes have pushed to go cloud native and adopt microservices, contrasting worldviews have collided. The result is not great: Either DevOps is stuck waiting impatiently through NetOps’ two-week change cycles or NetOps is hair-on-fire trying to work at the frantic pace of DevOps and constantly fearing that the firewall or load-balancing rule change requested by one team will bring down the entire application infrastructure, taking hundreds of apps offline in one fell swoop. For their part, SecOps is often caught between two worlds: trying to figure out how a global WAF can handle rules for Kubernetes clusters behind a perimeter while also enforcing microservices-level security.
The cascading effect of consolidation has also forced DevOps, SecOps and NetOps to choose among beloved sets of tools for observation and management, differing principles about security and differing expectations for application behavior. For example, average packets dropped is a critical KPI for NetOps, while for DevOps it’s borderline irrelevant to understanding user experience.
The Solution: Duplication, Not Consolidation
The solution is both counterintuitive and obvious. Stop fighting to make consolidation work. Instead, embrace the duplication of infrastructure and tools. Allow NetOps to control the front door with basic security badges and fairly static rules and management. Allow each DevOps team to load balance and configure its own applications as a secondary set of network activities behind the high-capacity appliance or virtual machine sitting sentry at the front, managed by NetOps.
To be clear, we are not advocating a free-for-all where every DevOps team gets to pick its own load balancer and tooling, with NetOps left to pick up the mess. Rather, we are talking about controlled consolidation based on user groups that maintain duplicate capabilities — as in two tracks — so that each side has something workable. This is the Goldilocks recipe for productivity and happiness.
We have seen time and again that once companies embrace this new reality, they are actually able to accelerate code velocity and application iteration without degrading security or stability. Each team can happily focus on its own KPIs. Equally important, dual infrastructure empowers NetOps, SecOps and DevOps to work together rather than struggle over picking and controlling the One Solution. Your developers don’t need to shift all the way left to learn Nagios, something they really don’t care about, and your NetOps team doesn’t need to learn how to configure and tune Prometheus, to name two examples.
Here’s how we have seen smart organizations handle this move towards duplication, how to make it easier and where they gain the most leverage, broken down in specific areas.
Traffic Management: Think Big and Small
The sentry at the front door is still there and will likely remain for quite a while. Also, frankly, it is necessary. For companies that want both performance and control and to protect themselves against DDoS and other threats, a tested, stable, enterprisewide load balancer controlled by the NetOps teams makes a lot of sense. NetOps can function like the front-door security, ensuring that everyone has an entry badge to get in the door. With tightly crafted policies, NetOps can grant DevOps teams the leeway they need, enabling them to deploy their own lightweight networking infrastructure, such as Kubernetes, behind the perimeter load balancers.
DevOps uses a specialized load balancer, an ingress controller, to then play in their own sandbox without bothering NetOps with a stream of requests. DevOps can iterate. NetOps can stabilize. SecOps can maintain the global security infrastructure of firewalls while working with DevOps — or, more likely, DevSecOps — to create a zero-trust framework that distributes security across the application at the service level and enables DevOps teams to cater security to their specific application or service needs.
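As a concrete sketch of this split, a DevOps team might own an Ingress resource like the following inside its own namespace, while NetOps manages only the perimeter load balancer that forwards traffic into the cluster. All names, hostnames and the choice of an NGINX-based ingress controller here are illustrative assumptions, not prescriptions from the article:

```yaml
# Hypothetical DevOps-owned Ingress. The NetOps perimeter load balancer
# forwards traffic for this hostname to the cluster; everything below
# that point is routed by the team's own ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-app
  namespace: orders
spec:
  ingressClassName: nginx   # ingress class name depends on your controller install
  rules:
    - host: orders.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: orders-v2   # the team can repoint this for blue-green cutovers
                port:
                  number: 8080
```

The point is ownership: a change to this manifest is a `kubectl apply` in the team’s own namespace, not a ticket against the enterprise load balancer.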
Monitoring and Performance: Two Sets of Priorities and Metrics
Because the metrics that matter differ so much between the two teams, NetOps and DevOps should each pick their own analytics stack. This works best when each team decides on its own tooling but limits sprawl by standardizing on the dual stacks, with no further alternatives. NetOps can then monitor packet loss, overall throughput and flows with Nagios, Zabbix, ThousandEyes or any other tool commonly used for network analysis. DevOps can choose something like Prometheus for service monitoring, plus New Relic or AppDynamics for application performance monitoring with sufficient granularity.
In more recent years, running parallel analytics stacks has actually gotten easier because integrated dashboards like Grafana and Kibana can accept analytics from both NetOps and DevOps monitoring stacks while maintaining different personas for the different teams.
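For instance, Grafana can be provisioned with data sources from both stacks, so each team keeps its own dashboards and personas over shared dashboard infrastructure. A minimal sketch using Grafana’s data source provisioning format; the names and URLs are hypothetical:

```yaml
# Grafana data source provisioning sketch (conf/provisioning/datasources/).
# Data source names and URLs are illustrative assumptions.
apiVersion: 1
datasources:
  # DevOps stack: service-level metrics scraped by Prometheus
  - name: DevOps-Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus.monitoring.svc:9090
  # NetOps stack: network metrics from the existing analysis tooling
  - name: NetOps-Graphite
    type: graphite
    access: proxy
    url: http://graphite.netops.internal:8080
```

Each team’s dashboards query only its own data source, while a unified view for leadership can mix panels from both.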
SecOps is mostly monitoring for anomalous behavior, which is not totally incompatible with either NetOps or DevOps. With both types of monitoring displayed on a unified dashboard that provides alerting and integrations with security platforms like SIEM, security actually gets a better picture of what’s on the ground. For the CTO, CIO and CISO, having a single place to see what’s happening is a major bonus.
Security Infrastructure: Keep the Fence and Shift Left
A dual-tier security structure is equally key to successful traffic management. NetOps and SecOps will rest easy when they can continue to rely on stable perimeter security such as their web application firewalls, data loss prevention and all the endpoint protection requirements mandated at the global policy level.
DevOps teams using Kubernetes with ingress controllers, Kubernetes-native WAF and service mesh can manage application and service-specific security rules that take into account the unique risks created during rapid application iteration — risks that only the developers understand well. Faster revs on security rules enable DevOps to maintain code velocity and rapid-fire feature introduction without putting the enterprise at risk. Compartmentalizing the risk also means that enterprise crown jewels — ERP, finance — can enjoy a stricter security standard without disrupting rapid development cycles.
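At the service level, this kind of compartmentalization can be expressed with ordinary Kubernetes primitives that a DevOps team can rev as quickly as the application changes. A minimal sketch using a NetworkPolicy, assuming hypothetical `payments` and `orders` namespaces:

```yaml
# Illustrative service-level, zero-trust rule owned by a DevOps team,
# sitting behind the SecOps-managed perimeter. All names are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-allow-orders-only
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments        # applies to the payments service pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: orders   # only the orders namespace may call in
      ports:
        - protocol: TCP
          port: 8443
```

A rule like this can be tightened or loosened per service in the same pull request as the code change it protects, without touching the global firewall.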
Deployment Tools: Consolidation With Different Pipeline Setups
While infrastructure-as-code sounds great, in reality infrastructure moves at different speeds depending on what you care about. This is one reason why some cloud native application companies can ship new builds several times a day, while larger enterprises with a broad portfolio of applications roll out new deployments far more slowly. As with monitoring and load balancing, mature deployment tools can easily support duplicate CI/CD platforms if required. In this case, you probably want to set up dual pipelines: one for config changes, patching and rule changes on global networking systems and firewalls, and a separate pipeline for individual code teams. You can apply the same CI/CD principles, checkpoints and procedures to both, or customize as needed. For example, you can add static code analysis or unit testing to DevOps pipelines while skipping those steps in the NetOps pipelines. SecOps teams can establish their own requirements on CI/CD environments regarding code review, testing and staging or sandboxing new applications to test behaviors at runtime before deploying live.
Conclusion: A Necessary Compromise
The two worlds speak different languages, work at different speeds and like different types of tools. You cannot order the inhabitants of one world to suddenly change everything they know and like without massively disrupting organizational productivity or, in the case of NetOps, risking global stability of your entire application portfolio. For SecOps, you want to keep front-door security rock solid while allowing DevOps teams to properly secure their own applications without sacrificing agility. DevOps teams simply need to move faster and many of the platforms they use do not fit nicely into NetOps paradigms.
Through this lens, duplication is both inevitable and desirable, a positive outcome to the tooling and consolidation wars that have consumed countless hours and vast amounts of energy striving for a golden mean that doesn’t exist.
Featured image via Pixabay.