CVE-2025-52566

llama.cpp is a C/C++ inference engine for several LLM models. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) caused the size comparison guarding the token-copy step to behave incorrectly, allowing a heap overflow in the llama.cpp inference engine via carefully crafted text input during tokenization. This issue has been patched in version b5721.
Configurations

No configuration.

History

26 Jun 2025, 18:58

Type Values Removed Values Added
New CVE

Information

Published : 2025-06-24 04:15

Updated : 2025-06-26 18:58


NVD link : CVE-2025-52566

Mitre link : CVE-2025-52566

CVE.ORG link : CVE-2025-52566


Products Affected

No product.

CWE
CWE-119

Improper Restriction of Operations within the Bounds of a Memory Buffer

CWE-195

Signed to Unsigned Conversion Error