EngineX-Ascend/enginex-ascend-910-llama.cpp
enginex-ascend-910-llama.cpp/ggml/src/ggml-vulkan at commit 7ecd780b1a1d5214b8d04c25ebfc194d310816ed

Latest commit: Jeff Bolz, 7ecd780b1a, vulkan: Use fp16 for the flash attention P*V multiplication (#12783)
This is consistent with the ggml-cuda behavior and the mul_mat fallback.
2025-04-09 07:12:57 +02:00
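The commit changes the flash-attention path so the softmax probabilities P are multiplied with V using fp16 operands instead of fp32, matching the ggml-cuda backend and the mul_mat fallback. Below is a minimal host-side sketch of that idea, not the actual shader code: the function name attn_pv_fp16 and the row-major layouts are illustrative assumptions, and it assumes ggml.h's public fp16 conversion helpers.

```cpp
// Sketch of the P*V step in flash attention, O = softmax(Q*K^T) * V.
// The probabilities P are converted to fp16 before the multiply, matching
// the ggml-cuda behavior and the mul_mat fallback; accumulation stays fp32.
// attn_pv_fp16 and the matrix layouts are illustrative, not repo code.
#include <vector>
#include "ggml.h"   // ggml_fp16_t, ggml_fp32_to_fp16, ggml_fp16_to_fp32

// P: [rows x cols] softmax probabilities (fp32)
// V: [cols x dv]   value matrix, already stored as fp16
// O: [rows x dv]   output, fp32
static void attn_pv_fp16(const std::vector<float> & P,
                         const std::vector<ggml_fp16_t> & V,
                         std::vector<float> & O,
                         int rows, int cols, int dv) {
    for (int r = 0; r < rows; ++r) {
        for (int d = 0; d < dv; ++d) {
            float acc = 0.0f;
            for (int c = 0; c < cols; ++c) {
                // convert the probability to fp16 first, as the updated shader does
                const ggml_fp16_t p_h = ggml_fp32_to_fp16(P[r*cols + c]);
                acc += ggml_fp16_to_fp32(p_h) * ggml_fp16_to_fp32(V[c*dv + d]);
            }
            O[r*dv + d] = acc;
        }
    }
}
```

In the actual Vulkan shaders the same precision change is expressed with half-precision operands in the vulkan-shaders sources; the sketch only illustrates where the fp16 conversion happens relative to the accumulation.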
cmake            cmake: fix ggml-shaders-gen compiler paths containing spaces (#12747)  2025-04-04 10:12:40 -03:00
vulkan-shaders   vulkan: Use fp16 for the flash attention P*V multiplication (#12783)   2025-04-09 07:12:57 +02:00
CMakeLists.txt   vulkan: Fix missing cmake logic for dot product extension (#12721)      2025-04-03 10:08:26 -05:00
ggml-vulkan.cpp  vulkan: Use unclamped loads for flash attention mask (#12720)           2025-04-06 10:47:13 +02:00