EngineX-Ascend/enginex-ascend-910-llama.cpp
4512055792cff7ea107f2d2231f79aa1af073c62
enginex-ascend-910-llama.cpp/ggml
History

Latest commit by petterreinholdtsen (4512055792, 2025-03-03 18:18:11 +02:00):

    Told cmake to install ggml-cpp.h as a public header file. (ggml/1126)

    It is used by Whisper talk-llama example.

    Co-authored-by: Petter Reinholdtsen <pere@debian.org>
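The commit above makes the build system install ggml-cpp.h together with the other public headers. As a rough illustration of how a header is typically declared public in CMake so that `cmake --install` copies it, here is a minimal sketch; the target name and paths are assumptions for illustration, not ggml's actual build layout:

```cmake
# Hypothetical sketch: mark headers as PUBLIC_HEADER on the library target
# so `cmake --install` copies them into the include directory.
# The target name "ggml" and the header paths are illustrative assumptions.
include(GNUInstallDirs)

set_target_properties(ggml PROPERTIES
    PUBLIC_HEADER "include/ggml.h;include/ggml-cpp.h")

install(TARGETS ggml
    LIBRARY       DESTINATION ${CMAKE_INSTALL_LIBDIR}
    PUBLIC_HEADER DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})
```

With this in place, consumers such as the Whisper talk-llama example can include the header from the installed include directory rather than from the source tree.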
cmake            cmake: Fix ggml backend dependencies and installation (#11818)                              2025-02-27 09:42:48 +02:00
include          ggml : upgrade init_tensor API to return a ggml_status (#11854)                             2025-02-28 14:41:47 +01:00
src              Support pure float16 add/sub/mul/div operations in the CUDA (and CPU) backend (ggml/1121)   2025-03-03 18:18:11 +02:00
.gitignore       vulkan : cmake integration (#8119)                                                          2024-07-13 18:12:39 +02:00
CMakeLists.txt   Told cmake to install ggml-cpp.h as a public header file. (ggml/1126)                       2025-03-03 18:18:11 +02:00