Several days ago, security researchers from Akamai, Cloudflare and Incapsula reported that they had observed a massive distributed denial of service (DDoS) attack that used a new amplification vector: Memcached servers. Among the targets of the attack was the developer platform GitHub, which reported peak traffic of 1.35 terabits per second, the most powerful DDoS attack recorded to date.
Unlike the more common botnet attacks used in previous large DDoS attacks, such as the attacks against DNS provider Dyn and French telecom OVH in 2016, Memcached DDoS attacks do not require a large malware-driven botnet to initiate the disruptive traffic. Instead, attackers take advantage of Memcached, a legitimate and popular cache component used to speed up high-scale web applications and databases.
In a normal usage scenario, the application stores small chunks of data on the Memcached server and retrieves them on request. The DDoS attack took advantage of a little-known Memcached feature, its ability to communicate over UDP, combined with the unsafe default configuration of the Memcached server and the huge number of deployments on the internet (estimated at around 93,000) that lack adequate protection for incoming and outgoing UDP connectivity.
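To make the normal usage concrete, here is a minimal sketch of how a client frames Memcached's plain-text `set` and `get` commands. The key and value are illustrative assumptions; a real application would use a client library (such as pymemcache) that handles this framing and the socket I/O for you:

```python
# Sketch of the memcached text protocol used to store and fetch small
# chunks of data. Key and value below are illustrative examples.

def build_set(key: str, value: bytes, ttl: int = 60) -> bytes:
    # Frames: set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
    return f"set {key} 0 {ttl} {len(value)}\r\n".encode() + value + b"\r\n"

def build_get(key: str) -> bytes:
    # Frames: get <key>\r\n
    return f"get {key}\r\n".encode()

# A client writes these bytes to the server's socket (TCP port 11211 by
# default) and reads back "STORED" / "VALUE ... END" responses.
print(build_set("user:42:profile", b"hello"))
print(build_get("user:42:profile"))
```

The asymmetry matters for what follows: a request is a handful of bytes, while the stored value returned in the response can be large.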
The attacker first stored information on the Memcached server, just as in normal usage. The attacker then periodically sent data requests to the server, specially crafted so that they appeared to originate from the attack’s intended victim (a technique known as “IP spoofing”). The responses from the server, sent via UDP from port 11211, can be up to roughly 51,000 times larger than the requests that trigger them. The result was that with little effort and few resources, the attack caused the exploited Memcached servers to send huge amounts of traffic to the victims, with the intent of overwhelming their network capacity and shutting them down, at least temporarily. In technical terms, this type of attack is known as a “DDoS reflection attack.”
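The economics of the amplification can be sketched with back-of-the-envelope arithmetic, using the roughly 51,000x factor reported for this attack. The 15-byte spoofed request size is an illustrative assumption, not a figure from the reports:

```python
# Rough amplification math for a memcached reflection attack,
# using the ~51,000x factor reported for this attack.
# The 15-byte spoofed request size is an illustrative assumption.

request_bytes = 15
amplification = 51_000
response_bytes = request_bytes * amplification  # what the victim receives

# Bandwidth the attacker must generate to hit the victim at 1.35 Tb/s:
victim_tbps = 1.35
attacker_gbps = victim_tbps * 1_000 / amplification

print(f"~{response_bytes:,} bytes reflected per {request_bytes}-byte request")
print(f"~{attacker_gbps * 1_000:.0f} Mb/s of spoofed requests yields 1.35 Tb/s at the victim")
```

Under these assumptions, a few tens of megabits per second of spoofed requests suffice to reflect terabit-scale traffic at the victim, which is why the attack needs no botnet.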
The attack depends on three things: the attacker’s ability to send malicious data requests to the Memcached server; a server configuration that allows it to listen and respond to such requests; and the victim’s difficulty in identifying and discarding the disruptive traffic amid regular incoming internet traffic. Each of these dependencies can be addressed to prevent the attack. Most obviously, organizations using Memcached should never leave these servers exposed to the internet, where they could be abused by an external attacker. Rather, they should deploy them behind a firewall that blocks access to UDP port 11211, or disable Memcached’s UDP listener entirely where it is not needed.
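As a quick self-audit, an operator can check whether one of their own hosts answers Memcached’s `version` command over UDP. This is a hedged sketch, not a hardening tool: the 8-byte frame header follows Memcached’s UDP framing, and it should only ever be pointed at hosts you own:

```python
import socket

# Memcached's UDP frame header: request id (2 bytes), sequence number (2),
# total datagrams (2), reserved (2), followed by the ASCII command.
PROBE = b"\x00\x01\x00\x00\x00\x01\x00\x00" + b"version\r\n"

def answers_udp(host: str, port: int = 11211, timeout: float = 0.5) -> bool:
    # True if the host responds to a memcached 'version' probe over UDP.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        try:
            s.sendto(PROBE, (host, port))
            s.recvfrom(1400)
            return True
        except (socket.timeout, OSError):
            # No reply (port closed or filtered): UDP listener not reachable.
            return False

# Audit your own host: a properly firewalled deployment should print False.
print(answers_udp("127.0.0.1"))
```

A `True` result means the host can be used as a reflector and its UDP port should be firewalled or the listener disabled.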
The Modern Data Center Angle: You May Still Be a Victim
There is a crucial twist to the plot that has not yet been discussed. In the modern data center, protecting internal services like Memcached servers from external abuse (i.e., an attack coming from the internet) with a firewall at the data center’s perimeter is not enough.
In recent years, the data center has evolved. It now comprises many complex distributed applications and hundreds of microservices, deployed using a mix of virtualization technologies: Docker containers, virtual machines and bare-metal servers. If even one of those applications or microservices is compromised, whether by an external attacker gaining a foothold on a web-facing service, by a malicious insider, or by an infected application component inadvertently deployed under loose security controls, it could be used to initiate attacks either within the data center or from the data center against external targets.
Consider a data center in which Memcached servers are deployed in an apparently secure way, behind a traditional firewall. If malware gains access to the data center’s network, these servers can still be abused as a source of DDoS traffic, targeting either an external victim or the data center’s own infrastructure.
How likely is an attack initiated from within the data center? Consider how prevalent Memcached servers are within modern data centers. While there are reportedly some 93,000 Memcached servers exposed to the internet, we estimate that many more are running within organizations’ data centers: not exposed to the internet, but also not protected from abuse from within. This estimate is based, for example, on the fact that the official Memcached Docker image has been pulled more than 10 million times, and that Memcached is packaged by default in Amazon’s Linux distribution, which is used by millions of virtual machines on AWS.
If you’re on a need to know basis … you’d want to know!
Is the organization’s security team aware of every Memcached server instance running in the data center? Given the complexity and dynamically changing nature of modern infrastructure, visibility into what is running there at any given time is challenging. How many internal Memcached servers are protected from maliciously crafted UDP packets sent by malware running inside the data center with the aim of initiating a DDoS reflection attack? Are there sound security policies and enforcement mechanisms that prevent those servers from sending out the DDoS packets?
Let us assume the organization has full visibility into the sea of application components inhabiting the data center, along with strict security procedures. Furthermore, these procedures mandate that a service like Memcached is deployed only when it is crucial to business functionality, after it has been vetted and its configuration approved by the development team. Even in this optimistic case, security personnel must still be able to ensure the health of the infrastructure by applying overall runtime security policies that define what constitutes legitimate use of such a service and what traffic should be blocked. And these policies and limitations must be enforceable within the data center, in the context of virtualized workloads, high scale and dynamic change.
Alcide is a sponsor of The New Stack.
Feature image via Pixabay.