CVE-2026-34159
🔴 Patch now
Missing validation in llama.cpp allows remote code execution by an unauthenticated attacker.
| CVSS | EPSS | Exploit | Vendor |
|---|---|---|---|
| 9.8 | 0.5% | PoC | ggml |
Source description (NVD)
llama.cpp is an inference of several LLM models in C/C++. Prior to version b8492, the RPC backend's deserialize_tensor() skips all bounds validation when a tensor's buffer field is 0. An unauthenticated attacker can read and write arbitrary process memory via crafted GRAPH_COMPUTE messages. Combined with pointer leaks from ALLOC_BUFFER/BUFFER_GET_BASE, this gives full ASLR bypass and remote code execution. No authentication required, just TCP access to the RPC server port. This issue has been patched in version b8492.
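The flaw class described above, a deserializer that trusts attacker-supplied offsets whenever a sentinel `buffer` field is 0, can be sketched as follows. This is an illustrative reconstruction, not the actual llama.cpp code: the struct layout, field names, and `deserialize_tensor_checked` are assumptions, showing the kind of bounds check the b8492 patch adds.

```cpp
#include <cstdint>

// Hypothetical wire format for an RPC tensor (illustrative only,
// not the real llama.cpp rpc_tensor layout).
struct rpc_tensor {
    uint64_t buffer;  // handle to a server-side buffer, or 0
    uint64_t offset;  // offset into that buffer
    uint64_t size;    // tensor payload size
};

// Patched-style deserializer: rejects the buffer == 0 path and
// bounds-checks offset/size. Vulnerable versions skipped all
// validation when buffer was 0, letting the attacker-controlled
// offset be used as a raw process address (arbitrary read/write).
static bool deserialize_tensor_checked(const rpc_tensor & t,
                                       uint64_t buf_capacity) {
    if (t.buffer == 0) {
        return false;  // no registered backing buffer: reject
    }
    // Overflow-safe bounds check: offset + size must fit in the buffer.
    if (t.size > buf_capacity || t.offset > buf_capacity - t.size) {
        return false;
    }
    return true;
}
```

Note the subtraction-based check (`t.offset > buf_capacity - t.size`) rather than `t.offset + t.size > buf_capacity`, which could wrap around on a crafted 64-bit offset and pass validation.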
Tags: exploit, rce
Patch available (b8492)
Sources and dates
| Source | Value |
|---|---|
| NVD – CVSS | 9.8 |
| CISA KEV (actively exploited) | No |
| FIRST EPSS (exploit probability) | 0.5% |
| Published (NVD) | 2026-04-01 18:16:29 UTC |
| Last modified (NVD) | 2026-04-30 19:18:32 UTC |
References
- https://github.com/ggml-org/llama.cpp/commit/39bf0d3c6a95803e0f41aaba069ffbee26721042 (security-advisories@github.com) [Patch]
- https://github.com/ggml-org/llama.cpp/pull/20908 (security-advisories@github.com) [Issue Tracking, Patch]
- https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-j8rj-fmpv-wcxw (security-advisories@github.com) [Exploit, Vendor Advisory]