CVE-2025-53630

llama.cpp is a C/C++ inference engine for several LLM models. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
CVSS

No CVSS.

Configurations

No configuration.

History

15 Jul 2025, 13:14

Type : New CVE (no values removed or added)

Information

Published : 2025-07-10 20:15

Updated : 2025-07-15 13:14


NVD link : CVE-2025-53630

Mitre link : CVE-2025-53630

CVE.ORG link : CVE-2025-53630



Products Affected

No products listed.

CWE

CWE-122 : Heap-based Buffer Overflow

CWE-680 : Integer Overflow to Buffer Overflow