Create a Monitoring Subnet in Microsoft Azure to Feed a Security Stack
The 2021 Verizon Data Breach Investigations Report found that 73% of cybersecurity incidents involved external cloud assets; it was the first year in which more attacks targeted the cloud than on-premises infrastructure. As enterprise and government cloud migrations accelerate, and attacks along with them, IT teams in those organizations often come to the unpleasant realization that their security tools no longer have access to traffic in the public cloud, a blind spot no one is comfortable with.
Without access to the deepest and purest form of network data, i.e. packet data, security tools like an Intrusion Detection System (IDS), Intrusion Prevention System (IPS), or Network Detection and Response (NDR) platform are far less effective, and protecting cloud-hosted applications and sensitive data from malicious actors becomes much more difficult.
Until mid-2019, every major public cloud platform was a “black box” in terms of network visibility. The Network Operations (NetOps) teams within IT could not apply their well-established workflows in the cloud to provide adequate visibility for themselves and their application and security counterparts.
Thankfully for the Security Operations (SecOps) team, this has changed as public cloud providers began offering features like VPC traffic/packet mirroring to provide access to raw network traffic. Virtual TAP (vTAP) and virtual packet broker (vPB) solutions that work alongside these features have hit the market as well. Together, they offer a way to get visibility back into the cloud.
But designing a cloud native network visibility or monitoring setup can be complicated. This post will cover how to create a monitoring subnet within Microsoft Azure that captures packet data and feeds it to downstream cloud native security tools. This data can also be used for troubleshooting or performance monitoring or saved in cloud storage for later forensic analysis, but we’ll focus on the security use case here. It’s possible to create similar monitoring setups in other clouds, but each one has its own idiosyncrasies.
Above is a diagram of a basic monitoring subnet (the blue circle). The arrows represent network traffic, and the red circles represent application subnets. The monitoring subnet consists of (a) an internal load balancer, (b) three virtual packet brokers, (c) virtual security tools and (d) packet storage/analysis tools. Let’s walk through how to set this up and where it fits into the overall network visibility architecture.
Tapping the Firewall Traffic
The goal of this subnet is to monitor traffic between virtual appliances in the cloud, as well as external traffic from the public internet. First, the subnet needs to receive packets from firewalls and Intrusion Prevention Systems (IPS).
To do this, the monitoring subnet needs to be set as the “next hop” by firewalls and IPS solutions, either at Layer 3 with IP forwarding or at Layer 4 with direct UDP or TCP connections. Azure’s support for user-defined routes (UDRs) allows IT to mandate that all traffic leaving the “DMZ” firewall subnet must pass through the load balancer in the monitoring subnet.
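As a rough sketch, the route-table payload for such a UDR might look like the following. All resource names, the region, and the load balancer frontend IP are illustrative placeholders, and the dict mirrors the shape that Azure’s Python SDK (azure-mgmt-network) accepts; adapt it to your own deployment.

```python
# Sketch of a user-defined route (UDR) that forces all traffic leaving the
# DMZ firewall subnet through the monitoring subnet's internal load balancer.
# Names, region, and IPs below are hypothetical placeholders.

def monitoring_udr(next_hop_ip: str) -> dict:
    """Build a route-table payload of the kind azure-mgmt-network's
    route_tables.begin_create_or_update() takes as its parameters."""
    return {
        "location": "eastus",  # assumed region
        "routes": [
            {
                "name": "to-monitoring-subnet",
                # Match everything leaving the DMZ subnet...
                "address_prefix": "0.0.0.0/0",
                # ...and hand it to the monitoring load balancer's
                # frontend IP, which Azure treats as a virtual appliance.
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": next_hop_ip,
            }
        ],
    }

route_table = monitoring_udr("10.1.0.4")  # hypothetical LB frontend IP
```

In a live deployment, this payload would be sent through the SDK (or expressed as the equivalent `az network route-table route create` command) and the resulting route table associated with the DMZ subnet.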
Route Everything Through the Center
Traffic between application subnets is routed through the internal load balancer — it becomes the hub that all network traffic must pass through before it reaches its destination. It’s critical that the load balancer has enough ports enabled to allow balancing of all protocol flows on all ports simultaneously. The load balancer then distributes traffic across three virtual packet brokers, a configuration required to maintain high performance and elasticity.
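On Azure, balancing every flow on every port is what the “HA Ports” feature of a Standard internal load balancer provides: a rule with protocol “All” and port 0 on both sides. The sketch below builds such a rule as a plain payload dict; the rule name and the resource IDs passed in are hypothetical.

```python
# Sketch of an Azure "HA Ports" load-balancing rule: protocol "All" with
# frontend and backend port 0 balances every TCP/UDP flow on every port,
# which is exactly what the monitoring hub requires.

def ha_ports_rule(frontend_ip_config_id: str, backend_pool_id: str) -> dict:
    """Build a load-balancing rule payload; the IDs point at the internal
    load balancer's frontend IP configuration and the backend pool that
    contains the virtual packet brokers."""
    return {
        "name": "all-flows-to-brokers",  # hypothetical rule name
        "protocol": "All",     # all protocols...
        "frontend_port": 0,    # ...on all ports (0 = HA Ports in Azure)
        "backend_port": 0,
        "frontend_ip_configuration": {"id": frontend_ip_config_id},
        "backend_address_pool": {"id": backend_pool_id},
    }

rule = ha_ports_rule("/subscriptions/.../frontendIPConfigurations/mon-fe",
                     "/subscriptions/.../backendAddressPools/vpb-pool")
```

Note that HA Ports requires a Standard-SKU internal load balancer; the Basic SKU cannot balance all ports with a single rule.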
Next, the virtual packet brokers forward the packets on to their destination, while sending a copy to downstream security tools. Load balancers can also be used between the brokers and the downstream tools if needed. Note that virtual packet brokers can be deployed in two modes: data link or endpoint.
In data link mode, the broker will forward packets on to their destination using routing or load balancer rules and make a copy to forward to downstream security, packet analysis, or storage tools as described above. This is the mode used in this example subnet. In other situations, such as AWS deployments, endpoint mode can be used in conjunction with Amazon VPC traffic mirroring to access cloud traffic.
Remember to always deploy packet brokers in clusters of three to prevent a single point of failure and maintain high availability. In fact, based on the scenario and the expected network load, using more than three brokers might be necessary.
Also consider that a packet broker’s total throughput is divided by the number of connections it has — so if a unit is copying packets to two downstream security tools, each connection (the main ethernet connection carrying packets to their destination and the two VXLAN connections carrying copies of those packets to the security tools) will have a throughput of one-third of the box’s theoretical maximum. IT must consider throughput requirements when selecting and sizing virtual packet brokers.
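The arithmetic above is simple but worth making explicit when sizing brokers. A minimal sketch, assuming one forwarding connection plus one mirror connection per downstream tool:

```python
def per_connection_gbps(max_gbps: float, tool_copies: int) -> float:
    """A broker's total throughput is split evenly across its connections:
    one forwarding link carrying packets to their destination, plus one
    mirror link (e.g. VXLAN) per downstream tool receiving copies."""
    connections = 1 + tool_copies
    return max_gbps / connections

# A broker rated at 9 Gbps copying packets to two security tools has
# three connections, so each can carry at most 3 Gbps.
throughput = per_connection_gbps(9.0, 2)
```

If the resulting per-connection rate falls below the expected traffic load, add brokers to the cluster rather than tools to a single broker.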
Connect to Security Tools
The point of all of this is, of course, to get packets out of the cloud to a stack of security tools. The virtual packet brokers can feed packets directly to tools that operate in real-time, like an IDS or NDR. Those sensors can then communicate with other more detailed security analysis tools.
It’s also possible to send a separate stream of packets to a packet capture and storage solution and then use that for security forensics and incident response on a bad day.
If a threat is detected, this setup would allow SecOps to look back over the last few days or hours to see how the threat got on the network and what endpoints it may have affected.
Copy/Paste Wherever Needed
It’s possible to copy this entire monitoring subnet and place it in different points around the network (and the deployment can often be automated), as shown in the diagram below.
This allows IT to conduct forensic analysis of specific packets, such as the traffic between two specific appliances, and to compare latency at specific points around the network. For instance, in the example above, latency can be measured before and after the firewall DMZ. This is useful for pinpointing trouble spots, especially in large or complex environments — many enterprises have thousands of individual subnets, after all.
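The latency comparison itself is straightforward once the same packet is captured at two tap points. A minimal sketch, assuming capture timestamps in seconds (as packet capture tools typically record them):

```python
def segment_latency_ms(t_before: float, t_after: float) -> float:
    """Latency contributed by the network segment between two tap points,
    given capture timestamps (in seconds) for the same packet observed at
    each point, e.g. entering and leaving the firewall DMZ."""
    return (t_after - t_before) * 1000.0

# Hypothetical example: a packet captured entering the DMZ at t=0.101200 s
# and leaving at t=0.103450 s spent roughly 2.25 ms in the firewall segment.
delta = segment_latency_ms(0.101200, 0.103450)
```

In practice, matching “the same packet” across two captures is done by correlating invariant header fields (e.g. the IP ID or a transport-layer sequence number), and clock synchronization between tap points bounds the accuracy of the measurement.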
Overall, this cloud native network visibility setup provides NetOps, SecOps, and CloudOps with adequate visibility into cloud traffic and applications and reduces the risk of attacks slipping through the cracks.
Not only does this monitoring subnet feed packet data to essential security tools, but it also allows NetOps teams to measure Key Performance Indicators (KPIs) like network latency and, depending on the capabilities of the virtual packet brokers, possibly track other useful security metrics such as DNS hosts and request times.
I urge NetOps and SecOps teams not to overlook their new cloud infrastructure — because attackers certainly aren’t.