vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, the vLLM backend behind the OpenAI-compatible /v1/chat/completions endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. Because these inputs are compiled or parsed without validation, a single malicious request can crash the inference worker, and the worker stays down until it is restarted. Version 0.9.0 fixes the issue.
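The root cause is that a user-supplied "pattern" value (a regular expression in the tool's JSON schema) and "type" value flow into the guided-decoding machinery and are compiled or parsed directly, so an invalid regex raises an unhandled exception inside the worker. Below is a minimal sketch of the kind of pre-validation that prevents this class of crash; `validate_tool_schema_field` is a hypothetical helper for illustration, not vLLM's actual code or the exact fix in the linked pull request.

```python
import re

# JSON Schema primitive type names; anything else is rejected up front.
VALID_JSON_SCHEMA_TYPES = {
    "string", "number", "integer", "boolean", "object", "array", "null",
}


def validate_tool_schema_field(field: dict) -> None:
    """Reject malformed 'pattern'/'type' values before they reach any
    regex compiler or schema parser. Hypothetical helper for illustration."""
    pattern = field.get("pattern")
    if pattern is not None:
        if not isinstance(pattern, str):
            raise ValueError("'pattern' must be a string")
        try:
            re.compile(pattern)  # fail fast on an invalid regex
        except re.error as exc:
            raise ValueError(f"invalid 'pattern': {exc}") from exc

    type_ = field.get("type")
    if type_ is not None and type_ not in VALID_JSON_SCHEMA_TYPES:
        raise ValueError(f"unsupported 'type': {type_!r}")


if __name__ == "__main__":
    # An unterminated character class is one example of a malformed pattern.
    try:
        validate_tool_schema_field({"type": "string", "pattern": "["})
    except ValueError as exc:
        print(f"rejected: {exc}")  # the server would return HTTP 400 here
```

With a guard like this in the request path, a payload carrying e.g. `"pattern": "["` produces a client-facing validation error instead of an unhandled exception that takes the inference worker down.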
References
Link | Resource
---|---
https://github.com/vllm-project/vllm/pull/17623 | Issue Tracking, Vendor Advisory
https://github.com/vllm-project/vllm/security/advisories/GHSA-vrq3-r879-7m65 | Exploit, Vendor Advisory
History
01 Jul 2025, 20:42
Type | Values Removed | Values Added
---|---|---
New CVE | |
Information
Published: 2025-05-30 19:15
Updated: 2025-07-01 20:42
NVD link: CVE-2025-48944
Mitre link: CVE-2025-48944
CVE.ORG link: CVE-2025-48944
Products Affected
vllm
- vllm
CWE
CWE-20
Improper Input Validation