R0CKSTAR
716301d1b0
musa: enable fp16 mma (all) and cublas on qy2 (#13842)
...
* musa: enable fp16 mma (all) and cublas on qy2
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* Update ggml/src/ggml-cuda/ggml-cuda.cu
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* Address review comments
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* Address review comments
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* musa: disable MUL_MAT_ID (q2_k × f32) due to precision issues
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
---------
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-06-26 12:11:59 +08:00
uvos
0142961a2e
CUDA/HIP: optimize mmv paths taken for HIP devices (#14324)
...
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-06-24 01:12:56 +02:00
Johannes Gäßler
defe2158dd
CUDA: mul_mat_v support for batch sizes > 1 (#14262)
...
* CUDA: mul_mat_v support for batch sizes > 1
* use 64 bit math for initial offset calculation
2025-06-23 13:11:31 +02:00
uvos
af3373f1ad
HIP: enable vec fattn on RDNA4 (#14323)
2025-06-22 16:51:23 +02:00
Aman Gupta
aa064b2eb7
CUDA: add mean operation (#14313)
...
* CUDA: add mean operation
* add back sum_rows_f32_cuda
* Review: early exit if col!=0
2025-06-22 12:39:54 +08:00
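The "early exit if col != 0" review point above is a common reduction-kernel pattern: after a warp reduction, only lane 0 holds the final value, so the remaining lanes can return immediately instead of wrapping the store in a conditional. A hypothetical CUDA sketch of the pattern (not the actual mean kernel):

```cuda
#define WARP_SIZE 32

// Hypothetical mean kernel: one warp per row. After the shuffle
// reduction, lanes other than 0 exit early; lane 0 writes the mean.
__global__ void mean_f32(const float * x, float * dst, const int ncols) {
    const int row = blockIdx.x;

    float sum = 0.0f;
    for (int col = threadIdx.x; col < ncols; col += WARP_SIZE) {
        sum += x[row*ncols + col];
    }
    // butterfly reduction across the warp
    for (int offset = WARP_SIZE/2; offset > 0; offset >>= 1) {
        sum += __shfl_xor_sync(0xFFFFFFFF, sum, offset);
    }

    if (threadIdx.x != 0) {
        return; // early exit: only lane 0 needs to write
    }
    dst[row] = sum / ncols;
}
```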
Aman Gupta
c959f462a0
CUDA: add conv_2d_transpose (#14287)
...
* CUDA: add conv_2d_transpose
* remove direct include of cuda_fp16
* Review: add brackets for readability, remove ggml_set_param and add asserts
2025-06-20 22:48:24 +08:00
Diego Devesa
e28c1b93fd
cuda : synchronize graph capture and cublas handle destruction (#14288)
...
Works around an issue that may cause CUDA graph capture to fail when a cuBLAS handle is destroyed in a different thread
2025-06-20 13:57:36 +02:00
Aman Gupta
9eaa51e7f0
CUDA: add conv_2d_dw (#14265)
...
* CUDA: add conv_2d_dw
* better naming
* simplify using template
* Review: fix operation ordering in ggml-cuda, use __forceinline__, use more const
2025-06-20 09:50:24 +08:00
R0CKSTAR
fe9d60e74a
musa: fix build warning (unused variable) (#14231)
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-06-17 17:48:08 +08:00
uvos
7d6d91babf
HIP: disable rocwmma on gfx12 by default until rocm 7.0 (#14202)
2025-06-16 13:47:38 +02:00
uvos
e54b394082
CUDA/HIP: fix ssm_scan on devices where warp size is not 32 (#14196)
2025-06-15 17:30:13 +02:00
uvos
2c2caa4443
HIP: Replace usage of deprecated preprocessor macro __AMDGCN_WAVEFRONT_SIZE__ (#14183)
2025-06-15 15:45:27 +02:00
xctan
f470bc36be
ggml-cpu : split arch-specific implementations (#13892)
...
* move ggml-cpu-aarch64 to repack
* split quantize_row_q8_0/1
* split helper functions
* split ggml_vec_dot_q4_0_q8_0
* split ggml_vec_dot_q4_1_q8_1
* split ggml_vec_dot_q5_0_q8_0
* split ggml_vec_dot_q5_1_q8_1
* split ggml_vec_dot_q8_0_q8_0
* split ggml_vec_dot_tq1_0_q8_K
* split ggml_vec_dot_tq2_0_q8_K
* split ggml_vec_dot_q2_K_q8_K
* split ggml_vec_dot_q3_K_q8_K
* split ggml_vec_dot_q4_K_q8_K
* split ggml_vec_dot_q5_K_q8_K
* split ggml_vec_dot_q6_K_q8_K
* split ggml_vec_dot_iq2_xxs_q8_K
* split ggml_vec_dot_iq2_xs_q8_K
* split ggml_vec_dot_iq2_s_q8_K
* split ggml_vec_dot_iq3_xxs_q8_K
* split ggml_vec_dot_iq3_s_q8_K
* split ggml_vec_dot_iq1_s_q8_K
* split ggml_vec_dot_iq1_m_q8_K
* split ggml_vec_dot_iq4_nl_q8_0
* split ggml_vec_dot_iq4_xs_q8_K
* fix typos
* fix missing prototypes
* rename ggml-cpu-quants.c
* rename ggml-cpu-traits
* rename arm folder
* move cpu-feats-x86.cpp
* rename ggml-cpu-hbm
* update arm detection macro in quants.c
* move iq quant tables
* split ggml_quantize_mat_q8_0/K
* split ggml_gemv_*
* split ggml_gemm_*
* rename namespace aarch64 to repack
* use weak aliases to replace test macros
* rename GGML_CPU_AARCH64 to GGML_CPU_REPACK
* rename more aarch64 to repack
* clean up rebase leftover
* fix compilation errors
* remove trailing spaces
* try to fix clang compilation errors
* try to fix clang compilation errors again
* try to fix clang compilation errors, 3rd attempt
* try to fix clang compilation errors, 4th attempt
* try to fix clang compilation errors, 5th attempt
* try to fix clang compilation errors, 6th attempt
* try to fix clang compilation errors, 7th attempt
* try to fix clang compilation errors, 8th attempt
* try to fix clang compilation errors, 9th attempt
* more cleanup
* fix compilation errors
* fix apple targets
* fix a typo in arm version of ggml_vec_dot_q4_K_q8_K
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-06-09 16:47:13 +02:00
Diego Devesa
8f47e25f56
cuda : fix device sync on buffer clear (#14033)
2025-06-09 16:36:26 +02:00
Diego Devesa
247e5c6e44
cuda : fix buffer type check with integrated GPUs (#14069)
2025-06-08 11:39:56 -07:00
Johannes Gäßler
0b4be4c435
CUDA: fix FTZ in FA for Gemma 3 (#13991)
2025-06-04 08:57:05 +02:00
Shawn yang
eb3949938e
CUDA: add a prop in ggml_cuda_device_info to distinguish iGPU from dGPU (#13856) (#13895)
...
* 1. add "integrated" in ggml_cuda_device_info to distinguish whether the device is an integrated GPU or a discrete GPU
2. Adjust the func "ggml_backend_cuda_device_supports_buft" for this new feature
* Update ggml/src/ggml-cuda/ggml-cuda.cu
Adjusted code indentation
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* Update ggml/src/ggml-cuda/ggml-cuda.cu
Fixed incorrect setting of variable types
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* Update ggml/src/ggml-cuda/ggml-cuda.cu
Adjusted the judgment logic
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* add a host_buft assert for the integrated CUDA device case in evaluate_and_capture_cuda_graph()
* Update ggml/src/ggml-cuda/ggml-cuda.cu
Add a defensive security assert
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* Update ggml/src/ggml-cuda/ggml-cuda.cu
Adjusted the support judgment logic.
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* revert the suggested changes since they are not applicable on Jetson devices
* Update ggml/src/ggml-cuda/ggml-cuda.cu
Add parentheses to enforce operator precedence
Co-authored-by: Diego Devesa <slarengh@gmail.com>
* Update ggml/src/ggml-cuda/ggml-cuda.cu
Fix CI bug: add a space
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
---------
Co-authored-by: yangxiao <yang_xl@tju.edu.cn>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Co-authored-by: yangxiao <yangxl_zz@qq.com>
Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-05-31 08:48:04 +02:00
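The "integrated" property added above maps onto a field the CUDA runtime already exposes: cudaDeviceProp::integrated is nonzero for iGPUs that share physical memory with the host. A minimal host-side sketch of querying it (not the actual ggml code, which stores the flag in its own device-info struct):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess) {
        return 1;
    }
    for (int id = 0; id < n; ++id) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, id);
        // prop.integrated != 0 -> iGPU sharing memory with the host;
        // host buffers may then be usable by the device directly.
        printf("device %d: %s (%s)\n", id, prop.name,
               prop.integrated ? "integrated" : "discrete");
    }
    return 0;
}
```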
Johannes Gäßler
e562eece7c
CUDA: fix typo in FlashAttention code (#13926)
2025-05-30 21:22:03 +02:00
Diego Devesa
df0c0c7d02
cuda : prevent using split buffers with 3d/4d matrices (#13919)
2025-05-30 16:37:18 +02:00
Johannes Gäßler
a68247439b
CUDA: fix FA tg at long context for CC >= 8.9 (#13852)
2025-05-28 13:33:37 +02:00
Georgi Gerganov
4265a87b59
cuda : avoid cuGetErrorString (#13791)
...
ggml-ci
2025-05-26 22:14:52 +03:00
Xuan-Son Nguyen
4c32832c59
ggml : add ggml_gelu_erf() CUDA kernel (#13719)
...
* ggml : add ggml_gelu_erf() CUDA kernel
* missing semicolon
2025-05-24 13:06:47 +02:00
Johannes Gäßler
ffd0eae60b
CUDA: fix race condition in FA vector kernels (#13742)
2025-05-24 11:46:19 +02:00
R0CKSTAR
33983057d0
musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (#13647)
...
* musa: fix build warning (unused parameter)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* musa: upgrade MUSA SDK version to rc4.0.1
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* musa: use mudnn::Unary::IDENTITY op to accelerate D2D memory copy
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* Update ggml/src/ggml-cuda/cpy.cu
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* musa: remove MUDNN_CHECK_GEN and use CUDA_CHECK_GEN instead in MUDNN_CHECK
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
---------
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-05-21 09:58:49 +08:00
Johannes Gäßler
b69f1647f9
CUDA: skip fully masked-out KV in FA vec kernel (#13584)
...
* CUDA: skip fully masked-out KV in FA vec kernel
2025-05-20 14:45:07 +02:00
Johannes Gäßler
4696d56749
CUDA: fix crash on large batch size for quant. MoE (#13537)
2025-05-14 16:41:02 +02:00
Johannes Gäßler
6da34fa276
CUDA: faster Deepseek FA, add Turing support (#13435)
2025-05-14 16:08:20 +02:00
Johannes Gäßler
10d2af0eaa
llama/ggml: add LLM training support (#10544)
...
* llama/ggml: add LLM training support
more compact progress bar
llama_save_model_to_file
llama_opt_param_filter
ggml_graph_dup force_grads
refactor ggml_opt, fix test-opt
* remove logits_all
* refactor CUDA implementation for ACC
* reset graph at beginning of opt period
2025-05-12 14:44:49 +02:00
Johannes Gäßler
95e18884fc
CUDA: fix misaligned synchronization in FA (#13469)
2025-05-12 10:51:21 +02:00
Johannes Gäßler
7474e00b34
CUDA: fix crash with partial offloading of MoE (#13439)
2025-05-11 16:09:33 +02:00
Johannes Gäßler
0208355f42
CUDA: fix race conditions in FlashAttention kernels (#13438)
2025-05-10 22:22:48 +02:00
Johannes Gäßler
d8919424f1
CUDA: fix FlashAttention on Turing (#13415)
2025-05-10 09:16:52 +02:00
Johannes Gäßler
0cf6725e9f
CUDA: FA support for Deepseek (Ampere or newer) (#13306)
...
* CUDA: FA support for Deepseek (Ampere or newer)
* do loop unrolling via C++ template
2025-05-09 13:34:58 +02:00
Johannes Gäßler
5c86c9ed3e
CUDA: fix crash on large batch size for MoE models (#13384)
2025-05-09 12:14:04 +02:00
Daniel Bevenius
13b0a04597
whisper: remove MSVC warnings pragmas (whisper/3090)
...
* ggml : remove MSVC warnings pragmas
This commit removes the MSVC-specific pragmas as these are now handled
in ggml/CMakeLists.txt.
* whisper : remove MSVC warning pragmas
This commit removes the MSVC-specific pragmas. These are now handled in
the ggml/CMakeLists.txt file.
2025-05-07 17:28:36 +03:00
R0CKSTAR
1f73301b63
cuda : remove nrows_x in mul_mat_q_process_tile (#13325)
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-05-07 09:48:23 +02:00
Johannes Gäßler
141a908a59
CUDA: mix virt/real CUDA archs for GGML_NATIVE=OFF (#13135)
2025-05-06 23:35:51 +02:00
Johannes Gäßler
2356fb1d53
CUDA: fix bad asserts for partial offload (#13337)
2025-05-06 13:58:51 +02:00
Johannes Gäßler
15a28ec8c7
CUDA: fix --split-mode row for MMQ (#13323)
2025-05-06 08:36:46 +02:00
Johannes Gäßler
9070365020
CUDA: fix logic for clearing padding with -ngl 0 (#13320)
2025-05-05 22:32:13 +02:00
Johannes Gäßler
93c4e23905
CUDA: fix race condition in MMQ stream-k fixup (#13299)
2025-05-04 14:16:39 +02:00
Johannes Gäßler
8afbd96818
CUDA: fix race condition in MMQ ids_dst (#13294)
2025-05-04 13:58:38 +02:00
Diego Devesa
d7a14c42a1
build : fix build info on windows (#13239)
...
* build : fix build info on windows
* fix cuda host compiler msg
2025-05-01 21:48:08 +02:00
Georgi Gerganov
9998540149
cuda : fix unused variable compile warning (whisper/0)
...
ggml-ci
2025-05-01 09:58:44 +03:00
Johannes Gäßler
e1e8e0991f
CUDA: batched+noncont MMQ, refactor bs>1 MoE code (#13199)
2025-04-30 23:12:59 +02:00
Johannes Gäßler
cdf76586b2
CUDA: fix non-cont. inputs for batched mat mul (#13155)
2025-04-29 16:00:27 +02:00
R0CKSTAR
f0dd6a1926
musa: fix typo in cc control (#13144)
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-04-28 09:33:28 +02:00
Johannes Gäßler
69699be48a
CUDA: fix q_nope_absorbed prec for DS 2 Lite f16 (#13137)
2025-04-28 09:29:26 +02:00
R0CKSTAR
e291450b76
musa: fix build warning (#13129)
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-04-27 13:22:49 +02:00
Johannes Gäßler
b10d8bfdb1
CUDA: use switch statements in constexpr functions (#13095)
2025-04-24 15:57:10 +02:00