vLLM is an inference and serving engine for large language models (LLMs). Before version 0.11.0rc2, vLLM's built-in API key support validated keys with a non-constant-time string comparison: the comparison takes longer the more leading characters of the provided key are correct. By statistically analyzing response times across many attempts, an attacker can detect when the next character in the key sequence has been guessed correctly and recover the key character by character. Deployments relying on vLLM's built-in API key validation are therefore vulnerable to authentication bypass via this technique. Version 0.11.0rc2 fixes the issue.
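The flaw class (CWE-385, Covert Timing Channel) can be illustrated with a minimal sketch. This is not vLLM's actual code; it is a hypothetical example of the vulnerable short-circuiting comparison pattern and the standard constant-time mitigation using Python's `hmac.compare_digest`:

```python
import hmac

def check_api_key_naive(provided: str, expected: str) -> bool:
    # Vulnerable pattern: Python's == short-circuits at the first
    # mismatching character, so the comparison time leaks how many
    # leading characters of the provided key are correct.
    return provided == expected

def check_api_key_constant_time(provided: str, expected: str) -> bool:
    # hmac.compare_digest takes time independent of where the inputs
    # differ (for equal-length inputs), defeating the timing channel.
    return hmac.compare_digest(provided.encode(), expected.encode())
```

An attacker exploiting the naive version measures the server's response time for each candidate prefix; a correct next character yields a slightly longer comparison, which averages out to a measurable signal over many requests.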
References
Configurations
Configuration 1
History
16 Oct 2025, 18:02
Type | Values Removed | Values Added
---|---|---
CPE | | cpe:2.3:a:vllm:vllm:0.11.0:rc1:\*:\*:\*:\*:\*:\* cpe:2.3:a:vllm:vllm:\*:\*:\*:\*:\*:\*:\*:\*
First Time | | Vllm vllm, Vllm
References | | https://github.com/vllm-project/vllm/blob/4b946d693e0af15740e9ca9c0e059d5f333b1083/vllm/entrypoints/openai/api_server.py#L1270-L1274 - Product
References | | https://github.com/vllm-project/vllm/commit/ee10d7e6ff5875386c7f136ce8b5f525c8fcef48 - Patch
References | | https://github.com/vllm-project/vllm/releases/tag/v0.11.0 - Release Notes
References | | https://github.com/vllm-project/vllm/security/advisories/GHSA-wr9h-g72x-mwhm - Exploit, Vendor Advisory
07 Oct 2025, 14:15
Type | Values Removed | Values Added
---|---|---
New CVE | |
Information
Published : 2025-10-07 14:15
Updated : 2025-10-16 18:02
NVD link : CVE-2025-59425
Mitre link : CVE-2025-59425
CVE.ORG link : CVE-2025-59425
Products Affected
vllm
- vllm
CWE
CWE-385
Covert Timing Channel