EngineX-Ascend / enginex-ascend-910-llama.cpp
Files
Commit: 1d49ca37594fb49db6aa9518ba7c512e5ccd0108
Path: enginex-ascend-910-llama.cpp / ggml

Latest commit: 35266573b9 by Reese Levine, 2025-10-04 20:59:31 -07:00
  ggml webgpu: actually add softmax, fix rms_norm offset (#16400)
    * implement soft_max
    * Fix soft_max data race
    * Temporary fix, wait on each submit
..
cmake            ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)          2025-08-07 13:45:41 +02:00
include          rpc : add support for multiple devices (#16276)                                   2025-10-04 12:49:16 +03:00
src              ggml webgpu: actually add softmax, fix rms_norm offset (#16400)                   2025-10-04 20:59:31 -07:00
.gitignore       vulkan : cmake integration (#8119)                                                2024-07-13 18:12:39 +02:00
CMakeLists.txt   HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0 (#16221)   2025-10-01 23:09:25 +02:00