Georgi Gerganov
fd1234cb46
llama : add gpt-oss (#15091)
* oai moe
* compat with new checkpoint
* add attn sink impl
* add rope scaling yarn
* logits match with latest transformers code
* wip chat template
* rm trailing space
* use ggml_scale_bias
* rm redundant is_swa_all
* convert interleaved gate_up
* graph : fix activation function to match reference (#7)
* vocab : handle o200k_harmony special tokens
* ggml : add attention sinks support (#1)
* llama : add attn sinks
* ggml : add attn sinks
* cuda : add attn sinks
* vulkan : add support for sinks in softmax
remove unnecessary return
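The attention-sink commits above add an extra learned per-head logit that takes part in the softmax normalization but contributes no value vector, so it only drains probability mass. A minimal numeric sketch of that softmax variant (illustrative only, not the ggml/CUDA/Vulkan kernels; `softmax_with_sink` is a hypothetical name):

```python
import math

def softmax_with_sink(scores, sink):
    # The sink logit joins the max-subtraction and the denominator,
    # but no probability is emitted for it: it only absorbs mass.
    m = max(max(scores), sink)
    exps = [math.exp(s - m) for s in scores]
    denom = sum(exps) + math.exp(sink - m)
    return [e / denom for e in exps]
```

With a very negative sink logit this reduces to a plain softmax; with a finite sink the returned weights sum to less than 1.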
* ggml : add fused swiglu_oai op (#11)
* ggml : add fused swiglu_oai op
* Update ggml/src/ggml-cpu/ops.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* update CUDA impl
* cont : metal impl
* add vulkan impl
* test-backend-ops : more test cases, clean up
* llama : remove unfused impl
* remove extra lines
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
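The fused `swiglu_oai` op corresponds to the gpt-oss reference activation: both halves of the gate/up projection are clamped, the gate half goes through a scaled sigmoid, and the linear half gets a +1 bias. A scalar sketch, assuming the reference constants alpha = 1.702 and limit = 7.0 (check the ggml op for the exact kernel):

```python
import math

def swiglu_oai(x_glu, x_lin, alpha=1.702, limit=7.0):
    # Clamp the gate half from above and the linear half symmetrically,
    # then apply the SiLU-like gate with alpha and the (x_lin + 1) term.
    x_glu = min(x_glu, limit)
    x_lin = max(-limit, min(x_lin, limit))
    return x_glu * (1.0 / (1.0 + math.exp(-alpha * x_glu))) * (x_lin + 1.0)
```

Fusing this into one op avoids materializing the clamped intermediates, which is why the unfused llama.cpp path could be removed.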
---------
Co-authored-by: slaren <slarengh@gmail.com>
* repack mxfp4 upon conversion
* clean up a bit
* enable thinking
* add quick hack to render only some special tokens
* fix bf16 conversion
* remove vocab hack
* webui ok
* support chat parsing for gpt-oss
* fix webui
* direct mapping mxfp4, FINALLY
* force using mxfp4
* properly use lazy tensor
* ggml : add mxfp4
ggml : use e8m0 conversion instead of powf
Co-authored-by: Diego Devesa <slarengh@gmail.com>
change kvalues_mxfp4 table to match e2m1 (#6)
metal : remove quantization for now (not used)
cuda : fix disabled CUDA graphs due to ffn moe bias
vulkan : add support for mxfp4
cont : add cm2 dequant
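For the mxfp4 commits: MXFP4 packs weights as 4-bit e2m1 values (1 sign bit + 3-bit magnitude index) with one shared e8m0 scale, 2^(e - 127), per block of 32 elements, which is what the `kvalues_mxfp4` table change and the "e8m0 conversion instead of powf" line refer to. A hedged dequantization sketch (`dequant_mxfp4` is a hypothetical helper, not the ggml API):

```python
# The 8 e2m1 magnitudes representable by the 3-bit index.
E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def dequant_mxfp4(nibbles, e8m0_scale):
    # Each 4-bit code: high bit = sign, low 3 bits index into E2M1.
    # All values in the block share one power-of-two e8m0 scale.
    scale = 2.0 ** (e8m0_scale - 127)
    out = []
    for q in nibbles:
        sign = -1.0 if q & 0x8 else 1.0
        out.append(sign * E2M1[q & 0x7] * scale)
    return out
```

Because the scale is a pure power of two, converting it via the e8m0 bit pattern is exact and cheaper than calling `powf`.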
* ggml : add ggml_add_id (#13)
* ggml : add ggml_add_id
* add cuda impl
* llama : add weight support check for add_id
* perf opt
* add vulkan impl
* rename cuda files
* add metal impl
* allow in-place ggml_add_id
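`ggml_add_id` adds a bias row selected per input row by an id tensor, which is what lets the per-expert MoE biases (kept on CPU with `--cpu-moe`) be applied after expert routing. A toy sketch of the op's semantics on nested lists, not the tensor implementation:

```python
def add_id(a, b, ids):
    # For each row i of a, add the row of b chosen by ids[i]
    # (e.g. the bias of the expert that row i was routed to).
    return [[x + y for x, y in zip(row, b[ids[i]])]
            for i, row in enumerate(a)]
```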
* llama : keep biases on CPU with --cpu-moe
* llama : fix compile error
ggml-ci
* cuda : add fallback for __nv_cvt_e8m0_to_bf16raw
ggml-ci
* cleanup
ggml-ci
* sycl : fix supports_op for MXFP4
ggml-ci
* fix Unknown reasoning format
* ggml-cpu : fix AVX build
ggml-ci
* fix hip build
ggml-ci
* cuda : add mxfp4 dequantization support for cuBLAS
ggml-ci
* ggml-cpu : fix mxfp4 fallback definitions for some architectures
ggml-ci
* cuda : fix version required for __nv_cvt_e8m0_to_bf16raw
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: slaren <slarengh@gmail.com>
2025-08-05 22:10:36 +03:00