anavp-nvidia
a014310374
cuda : Enable CUDA Graph usage for Nemotron Nano v2 (NemotronH) ( #16328 )
...
* Fix Nemotron Nano v2 9B not executing as CUDA Graph on NVIDIA GPUs
* fix to ensure test-backend-ops check passes
2025-09-30 11:13:22 +03:00
Vinkal
72b24d96c6
model : make minicpm embedding_scale, residual_scale and logit_scale optional with legacy defaults ( #16273 )
...
* minicpm: make GGUF scaling keys optional with legacy defaults
Older MiniCPM GGUFs do not include the scaling metadata keys (minicpm.embedding_scale, minicpm.residual_scale, minicpm.logit_scale). The loader currently treats these as required, so quantization fails with:
key not found in model: minicpm.embedding_scale
This change restores backward compatibility by treating these keys as optional in the loader and using the older MiniCPM scaling values:
embedding_scale = 12.0f
residual_scale = 1.4f / sqrt(n_layer)
logit_scale = 256.0f / n_embd
When the GGUF provides the keys, their values override the defaults; otherwise the legacy defaults are used. Newer GGUFs that already include these keys are unaffected.
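A standalone sketch of the fallback behaviour (illustration only, not the actual llama.cpp loader code; the helper and the dimensions are made up): a key present in the metadata overrides the legacy default, an absent key leaves it in place.
```cpp
#include <cmath>
#include <cstdio>
#include <map>
#include <string>

// Return the value stored under `key`, or `fallback` when the key is absent.
static float get_key_or(const std::map<std::string, float> & kv,
                        const std::string & key, float fallback) {
    const auto it = kv.find(key);
    return it != kv.end() ? it->second : fallback;
}

int main() {
    // Pretend metadata read from an older MiniCPM GGUF that lacks the scaling keys.
    std::map<std::string, float> gguf_kv;

    const int n_layer = 40;   // hypothetical dimensions, for illustration only
    const int n_embd  = 2304;

    const float embedding_scale = get_key_or(gguf_kv, "minicpm.embedding_scale", 12.0f);
    const float residual_scale  = get_key_or(gguf_kv, "minicpm.residual_scale",
                                             1.4f / std::sqrt((float) n_layer));
    const float logit_scale     = get_key_or(gguf_kv, "minicpm.logit_scale",
                                             256.0f / (float) n_embd);

    std::printf("embedding_scale=%g residual_scale=%g logit_scale=%g\n",
                embedding_scale, residual_scale, logit_scale);
    return 0;
}
```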
Fixes: #16192
Signed-off-by: Vinkal Chudgar <vinkal.chudgar@gmail.com >
* Update src/llama-model.cpp
Committed as suggested. Thanks!
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
---------
Signed-off-by: Vinkal Chudgar <vinkal.chudgar@gmail.com >
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
2025-09-26 23:28:29 +02:00
Sigbjørn Skjæret
835b2b915c
model : add GroveMoE support ( #15510 )
...
* add GroveMoE support
* remove constexpr that fails on certain compilers
* revert crude scalar div implementation, use cast
* build_attn_inp_kv_unified -> build_attn_inp_kv
* fix build_attn
* re-apply ffn_exps regex changes
2025-09-25 19:50:28 +02:00
Douglas Hanley
b5bd037832
llama : add support for qwen3 reranker ( #15824 )
2025-09-25 11:53:09 +03:00
Johannes Gäßler
e789095502
llama: print memory breakdown on exit ( #15860 )
...
* llama: print memory breakdown on exit
2025-09-24 16:53:48 +02:00
Tarek Dakhran
3a59971967
model : add label for LiquidAI LFM2-2.6B model ( #16204 )
...
* model : add label for LiquidAI LFM2-2.6B model
HF link: [LiquidAI/LFM2-2.6B](https://huggingface.co/LiquidAI/LFM2-2.6B).
Support for GGUF conversion and inference was added in #14620.
However, due to a similar `n_embd`, the model is identified as a 1.2B model.
Fix the label by using `n_ff` to identify the model instead.
Output of `llama-bench`:
```
| model | size | params | backend | threads | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | --------------: | -------------------: |
| lfm2 1.2B F16 | 2.18 GiB | 1.17 B | CPU | 10 | pp512 | 223.97 ± 5.32 |
| lfm2 2.6B F16 | 4.79 GiB | 2.57 B | CPU | 10 | pp512 | 92.53 ± 4.14 |
| lfm2 350M F16 | 676.25 MiB | 354.48 M | CPU | 10 | pp512 | 725.52 ± 11.70 |
| lfm2 700M F16 | 1.38 GiB | 742.49 M | CPU | 10 | pp512 | 336.22 ± 12.93 |
```
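A minimal sketch of the labeling idea described above (the `n_ff` values below are placeholders, not the real LFM2 configurations): the size label is selected from the feed-forward width rather than the embedding width.
```cpp
#include <cstdio>

// Pick a size label from the feed-forward width; the values used here are
// placeholders for illustration, not the actual LFM2 configurations.
static const char * lfm2_size_label(int n_ff) {
    switch (n_ff) {
        case  4096: return "350M";
        case  6144: return "700M";
        case  8192: return "1.2B";
        case 12288: return "2.6B";
        default:    return "?B";
    }
}

int main() {
    std::printf("lfm2 %s\n", lfm2_size_label(12288)); // prints "lfm2 2.6B"
    return 0;
}
```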
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
2025-09-24 13:42:26 +02:00
Xuan-Son Nguyen
8f8f2274ee
convert : add Llama4ForCausalLM ( #16042 )
...
* convert : add Llama4ForCausalLM
* handle swa
* half working version
* fix use_kq_norm
* fix use_kq_norm
2025-09-17 19:18:21 +02:00
Shane A
85286f3548
model : add OLMo3 support ( #16015 )
...
* Add HF to gguf conversion logic for Olmo3
* Add Olmo3 implementation
* Update rope comment
* Fix indentation
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Apply suggestion from @CISC
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
2025-09-17 09:01:58 +02:00
Aman Gupta
6d758839ff
Add LLaDA-7b-MoE diffusion model ( #16003 )
2025-09-16 10:38:28 +08:00
Sigbjørn Skjæret
b8e09f08b9
model : add grok-2 support ( #15539 )
...
* add grok-2 support
* type fix
* type fix
* type fix
* "fix" vocab for invalid sequences
* fix expert tensor mapping and spaces in vocab
* add chat template
* fix norm tensor mapping
* rename layer_out_norm to ffn_post_norm
* ensure ffn_post_norm is mapped
* fix experts merging
* remove erroneous FFN_GATE entry
* concatenate split tensors and add more metadata
* process all expert layers and try cat instead of hstack
* add support for community BPE vocab
* fix expert feed forward length and ffn_down concat
* commit this too
* add ffn_up/gate/down, unsure if sequence is right
* add ffn_gate/down/up to tensor names
* correct residual moe (still not working)
* mess--
* fix embedding scale being applied twice
* add built in chat template
* change beta fast for grok if default value
* remove spm vocab in favor of community bpe vocab
* change attention temp length metadata type to integer
* update attention temp length metadata
* remove comment
* replace M_SQRT2 with std::sqrt(2)
* add yarn metadata, move defaults to hparams
2025-09-14 23:00:59 +02:00
Jie Fu (傅杰)
4f658855fa
llama : support T5 models with unequal number of encoder-decoder layers ( #15909 )
...
* Extend the support of T5 models with different encoder-decoder layers
Signed-off-by: Jie Fu <jiefu@tencent.com >
* Update convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update gguf-py/gguf/constants.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update gguf-py/gguf/gguf_writer.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-arch.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-arch.h
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-hparams.h
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Rename n_dec_layer --> dec_n_layer
Signed-off-by: Jie Fu <jiefu@tencent.com >
* Adapt to cases when dec_n_layer > n_layer
Signed-off-by: Jie Fu <jiefu@tencent.com >
---------
Signed-off-by: Jie Fu <jiefu@tencent.com >
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
2025-09-10 20:51:51 +02:00
Georgi Gerganov
cf0e3ba150
model : avoid ggml_cont_3d for fused QKV weights ( #15662 )
...
* model : avoid ggml_cont_3d for fused QKV weights
ggml-ci
* kv-cache : make cpy_k and cpy_v implementation more readable
ggml-ci
* cont : add comments
ggml-ci
* cont : minor fix [no ci]
* cont : one more fix
* cont : clarity
ggml-ci
* kv-cache : require contiguous heads of k_cur and v_cur
ggml-ci
2025-09-08 10:25:33 +03:00
Georgi Gerganov
c610b6c11b
kv-cache : fix SWA checks + disable cacheless iSWA ( #15811 )
...
ggml-ci
2025-09-05 10:39:22 +03:00
Daniel Bevenius
fb15d649ed
llama : add support for EmbeddingGemma 300m ( #15798 )
...
This commit adds support for the EmbeddingGemma 300m. This model supports
sliding window attention (SWA), and a new swa_type is introduced to
support symmetric SWA masking.
This commit also extracts the code from the function
llama_is_masked_swa in llama-impl.h, so that the logic can be shared
by both llm_graph_input_attn_no_cache::set_input and
llama_kv_cache::set_input_kq_mask.
With this commit the EmbeddingGemma 300m model can be converted to
GGUF and used with llama.cpp.
Once the model has been uploaded to HuggingFace it can be used like
this:
```console
./build/bin/llama-cli -hf ggml-org/embeddinggemma-300m-GGUF:Q8_0
```
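A minimal sketch of the symmetric-masking idea, assuming a window of n_swa positions on each side of the query (the real llama_is_masked_swa logic and its window conventions differ in detail):
```cpp
#include <cstdio>
#include <cstdlib>

// Standard (causal) SWA masks keys that are too far *behind* the query;
// symmetric SWA also allows looking ahead, masking only keys whose distance
// from the query exceeds the window in either direction.
static bool is_masked_swa_symmetric(int pos_q, int pos_k, int n_swa) {
    return std::abs(pos_q - pos_k) > n_swa;
}

int main() {
    std::printf("%d\n", is_masked_swa_symmetric(10,  8, 4)); // 0: within the window
    std::printf("%d\n", is_masked_swa_symmetric(10, 16, 4)); // 1: too far ahead, masked
    return 0;
}
```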
2025-09-04 18:10:29 +02:00
Daniel Bevenius
2c8dac72eb
llama : fix incorrect model type for Gemma 270M ( #15764 )
...
This commit fixes the model type for the Gemma 270M model in
llama_model.cpp which should be LLM_TYPE_270M. I incorrectly added this
previously as LLM_TYPE_537M which was wrong.
The motivation for this is that the incorrect type causes the model not to be
identified properly when using tools like llama-bench. For example:
```console
$ ./build/bin/llama-bench -m models/gemma-3-270m-Q8_0.gguf
| model | size | ...
| ------------------------------ | ---------: | ...
| gemma3 ?B Q8_0 | 271.81 MiB | ...
| gemma3 ?B Q8_0 | 271.81 MiB | ...
```
With the changes in this commit the output will be:
```console
$ ./build/bin/llama-bench -m models/gemma-3-270m-Q8_0.gguf
| model | size | ...
| ------------------------------ | ---------: | ...
| gemma3 270M Q8_0 | 271.81 MiB | ...
| gemma3 270M Q8_0 | 271.81 MiB | ...
```
2025-09-03 13:35:49 +02:00
Johannes Gäßler
e81b8e4b7f
llama: use FA + max. GPU layers by default ( #15434 )
...
* llama: use max. GPU layers by default, auto -fa
* ggml-backend: abort instead of segfault
2025-08-30 16:32:10 +02:00
Gabe Goodhart
e8d99dd0b6
nvidia nemotron nano v2 (nemotronh) ( #15507 )
...
* feat: Add NEMOTRONH to python arch enum
https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com >
* feat: Add NEMOTRONH to c++ arch enum
https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com >
* feat: Add NEMOTRONH to llama-arch layer map
https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com >
* feat: First pass at conversion for nemotronh
https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com >
* feat: Add a verbose log for each tensor loaded
This is really helpful for diagnosing mismatches between the expected and
received tensors
https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com >
* feat: First (broken) pass at nemotronh model architecture
It generates tokens, just not valid ones!
https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com >
* fix: Explicitly enable add_bos_token during conversion
The `tokenizer.json`/`tokenizer_config.json` in the model are a bit
contradictory. In the config, add_bos_token is set to False, but the
tokenizer model itself has a post_processor that adds the BOS token via
type: TemplateProcessing
https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com >
* fix: Use relu2 (LLM_FFN_RELU_SQR) for activation in FFN layers
https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com >
* fix: Only allocate attention cache for attention layers (the non-recurrent ones)
https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com >
* fix: Move residual add to after every block
https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com >
* fix: Use the correct norm tensor for the MLP blocks
https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com >
* Nemotron-H: MLP gate cleanup (pass NULL for unused gate)
This model does not use a gate in MLP blocks; pass NULLs for gate tensors to make intent clear and avoid unused-pointer noise.
* SSM: respect ssm_dt_rank for dt_dim when provided
Use GGUF-provided time_step_rank (ssm_dt_rank) to set dt_dim when > 0; fallback to max(64, n_embd/16).
* fix: plamo2 - revert dt_dim to default (remove ssm_dt_rank usage)
* Rename nemotronh to nemotron_h for consistency
- Update architecture name from NEMOTRONH to NEMOTRON_H in constants.py
- Change architecture string from 'nemotronh' to 'nemotron_h' in all files
- Update enum LLM_ARCH_NEMOTRONH to LLM_ARCH_NEMOTRON_H
- Update class name llm_build_nemotronh to llm_build_nemotron_h
- Consistent naming with underscore convention (nemotron_h vs nemotronh)
* feat: Support conversion for older NemotronH models
https://github.com/ggml-org/llama.cpp/issues/nemotron-nano-15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com >
---------
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com >
Co-authored-by: Maicon Domingues <dominguesm@outlook.com >
Co-authored-by: weatherman <fxdstudios@gmail.com >
2025-08-28 18:39:31 -06:00
Sigbjørn Skjæret
84ab83cc0b
model : jina-embeddings-v3 support ( #13693 )
...
* initial jina-embeddings-v3 support
* initial jina-embeddings-v3 support
* initial jina-embeddings-v3 support
* fix vocab parsing with only tokenizer.json
* set mask token lstrip attribute
* additional unk_token_id fallback just in case [no ci]
* revert vocab_size() change [no ci]
* merge tensor loading into general bert
* rope
* add lora embedding and loading (non-functional)
* export separate lora ggufs instead
* add adapter metadata api
* use std::string
* convert_hf_to_lora compatibility
* fix assert
* apply suggestions from review
* apply suggestion from review
2025-08-28 15:49:50 +02:00
Georgi Gerganov
b730706a49
kv-cache : support layer reuse ( #15504 )
...
* kv-cache : support layer reuse
ggml-ci
* cont : update comments [no ci]
2025-08-24 13:07:07 +03:00
Piotr Wilkin (ilintar)
b1afcab804
model : add support for Seed-OSS ( #15490 )
...
* First draft
* Fix linter errors
* Added missing sinks nullptr
* Don't forget the llama-arch!
* We're through to the generation stage.
* Fix post-attention norm
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Fix RoPE type
* Fix tensor name and reorder llm_types
* Update gguf-py/gguf/constants.py
Remove nonexistent FFN_POST_NORM tensor
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.h
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Add basic chat template
* Add chat template tests
* Remake chat template test
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-chat.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Reorder llm type descriptions
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
2025-08-23 15:21:52 +02:00
Tarek Dakhran
e288693669
readme : model : mtmd : lfm2 improvements ( #15476 )
...
* Support untied embeddings
* Increase number of image tokens to 1024
* Add LFM2-VL to readme
* Actually use untied embeddings
2025-08-22 09:29:08 +02:00
Georgi Gerganov
3f196be84b
graph : remove build_attn_with_sinks overload ( #15469 )
...
ggml-ci
2025-08-21 18:44:45 +03:00
Georgi Gerganov
715a6db02c
kv-cache : drop the "unified" prefix ( #15467 )
...
* kv-cache : drop the "unified" prefix
ggml-ci
* cont : fix comment [no ci]
2025-08-21 17:00:33 +03:00
Georgi Gerganov
9ef6b0b835
model : add gpt-oss type strings ( #15424 )
2025-08-19 19:58:28 +03:00
Sigbjørn Skjæret
baa9255a45
llama : merge conts and reshapes and remove unnecessary cont ( #15380 )
...
* remove unnecessary conts and merge reshapes
* restore necessary conts
* merge more conts and reshapes
* merge even more conts and reshapes
2025-08-18 19:30:17 +02:00
Daniel Bevenius
7a0de96045
llama : add 18-layer model type for Gemma 3-270m ( #15319 )
...
This commit adds support for the 18-layer model type in the Gemma3
series, which is the size of the Gemma3-270m model.
The motivation for this commit is that it was the only change required for
Gemma3-270m to be converted to GGUF format and used with llama.cpp.
Once the model has been converted and uploaded to Huggingface it can be
used like this:
```console
$ ./build/bin/llama-cli -hf ggml-org/gemma-3-270m-GGUF:Q8_0
```
2025-08-14 17:56:26 +02:00
Georgi Gerganov
fd1234cb46
llama : add gpt-oss ( #15091 )
...
* oai moe
* compat with new checkpoint
* add attn sink impl
* add rope scaling yarn
* logits match with latest transformers code
* wip chat template
* rm trailing space
* use ggml_scale_bias
* rm redundant is_swa_all
* convert interleaved gate_up
* graph : fix activation function to match reference (#7 )
* vocab : handle o200k_harmony special tokens
* ggml : add attention sinks support (#1 )
* llama : add attn sinks
* ggml : add attn sinks
* cuda : add attn sinks
* vulkan : add support for sinks in softmax
remove unnecessary return
* ggml : add fused swiglu_oai op (#11 )
* ggml : add fused swiglu_oai op
* Update ggml/src/ggml-cpu/ops.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com >
* update CUDA impl
* cont : metal impl
* add vulkan impl
* test-backend-ops : more test cases, clean up
* llama : remove unfused impl
* remove extra lines
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com >
---------
Co-authored-by: slaren <slarengh@gmail.com >
* repack mxfp4 upon conversion
* clean up a bit
* enable thinking
* add quick hack to render only some special tokens
* fix bf16 conversion
* remove vocab hack
* webui ok
* support chat parsing for gpt-oss
* fix webui
* direct mapping mxfp4, FINALLY
* force using mxfp4
* properly use lazy tensor
* ggml : add mxfp4
ggml : use e8m0 conversion instead of powf
Co-authored-by: Diego Devesa <slarengh@gmail.com >
change kvalues_mxfp4 table to match e2m1 (#6 )
metal : remove quantization for now (not used)
cuda : fix disabled CUDA graphs due to ffn moe bias
vulkan : add support for mxfp4
cont : add cm2 dequant
* ggml : add ggml_add_id (#13 )
* ggml : add ggml_add_id
* add cuda impl
* llama : add weight support check for add_id
* perf opt
* add vulkan impl
* rename cuda files
* add metal impl
* allow in-place ggml_add_id
* llama : keep biases on CPU with --cpu-moe
* llama : fix compile error
ggml-ci
* cuda : add fallback for __nv_cvt_e8m0_to_bf16raw
ggml-ci
* cleanup
ggml-ci
* sycl : fix supports_op for MXFP4
ggml-ci
* fix Unknown reasoning format
* ggml-cpu : fix AVX build
ggml-ci
* fix hip build
ggml-ci
* cuda : add mxfp4 dequantization support for cuBLAS
ggml-ci
* ggml-cpu : fix mxfp4 fallback definitions for some architectures
ggml-ci
* cuda : fix version required for __nv_cvt_e8m0_to_bf16raw
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co >
Co-authored-by: slaren <slarengh@gmail.com >
2025-08-05 22:10:36 +03:00
Juk Armstrong
c81de6e107
Fix glm4moe bug ( #15088 )
2025-08-05 13:56:44 +01:00
Sam
ef0144c087
model: support GLM 4.5 family of models ( #14939 )
...
* model: Add GLM 4.5 (#14921 )
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Merge in PR suggestions
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* model: Add GLM 4.5 family of models (#14921 )
1. Updated tensor_mapping.py with NextN tensor mappings
- Added proper tensor mappings for all NextN/MTP tensors in /Users/samm/git/llama.cpp/gguf-py/gguf/tensor_mapping.py
- Added mappings for: eh_proj, embed_tokens, enorm, hnorm, shared_head.head, shared_head.norm
2. Added num_nextn_predict_layers configuration
- Added LLM_KV_NUM_NEXTN_PREDICT_LAYERS constant to llama-arch.h and llama-arch.cpp
- Added num_nextn_predict_layers field to llama_hparams struct
- Updated GLM4_MOE parameter loading in llama-model.cpp to read this parameter
- Modified tensor loading logic to conditionally load NextN tensors based on num_nextn_predict_layers
- Added GGUF writer support in gguf_writer.py with add_num_nextn_predict_layers() method
- Updated conversion script to extract and write this parameter from HuggingFace config
3. Added FIM tokens for GLM4_MOE
- Added GLM-4.5's FIM tokens to llama-vocab.cpp:
- <|code_prefix|> for FIM_PRE
- <|code_suffix|> for FIM_SUF
- <|code_middle|> for FIM_MID
4. Removed manual NextN tensor handling
- Removed the special-case handling in convert_hf_to_gguf.py that manually mapped NextN tensors
- NextN tensors are now handled automatically through the proper tensor mapping system
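A rough sketch of point 2 above, i.e. how an optional layer-count parameter might gate loading of the extra tensors (the struct, the placement of the NextN/MTP layers, and the default are assumptions for illustration, not the actual llama.cpp implementation):
```cpp
#include <cstdio>

struct sketch_hparams {
    int n_layer                  = 4; // tiny numbers, illustration only
    int num_nextn_predict_layers = 0; // assumed default when the GGUF key is absent
};

int main() {
    sketch_hparams hp;
    hp.num_nextn_predict_layers = 1;  // pretend the GGUF provided the key

    // NextN/MTP tensors (eh_proj, enorm, hnorm, shared_head.*) are assumed here to
    // belong to extra trailing layers; they are only loaded when the count is non-zero.
    const int n_total = hp.n_layer + hp.num_nextn_predict_layers;
    for (int il = 0; il < n_total; ++il) {
        const bool is_nextn = il >= hp.n_layer;
        std::printf("layer %d: %s\n", il, is_nextn ? "NextN/MTP tensors" : "regular tensors");
    }
    return 0;
}
```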
* glm 4.5 update tensors names
* model: glm 4.5 apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* model: glm 4.5 apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* model: glm 4.5 apply suggestions from code review
* Apply suggestions from code review
* patch broken chat template
* typings fix
* add TENSOR_SKIP flag
Co-authored-by: Diego Devesa <slarengh@gmail.com >
* Update src/llama-model-loader.h
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
Co-authored-by: Diego Devesa <slarengh@gmail.com >
2025-08-04 20:29:25 +02:00
compilade
11a3811164
memory : handle kv_unified for hybrid models ( #15050 )
2025-08-03 21:43:07 +02:00
Douglas Hanley
339bd0268c
model : support Qwen3-Embedding ( #15023 )
2025-08-02 10:44:50 +02:00
stevenkuang
0f5ccd6fd1
model : add hunyuan dense ( #14878 )
...
* support hunyuan_v1_dense
Signed-off-by: stevenkuang <stevenkuang@tencent.com >
* update hunyuan_moe to hunyuan_v1_moe
Signed-off-by: stevenkuang <stevenkuang@tencent.com >
* fix rope alpha assert and bos token
Signed-off-by: stevenkuang <stevenkuang@tencent.com >
* add blank line
Signed-off-by: stevenkuang <stevenkuang@tencent.com >
* Revert "update hunyuan_moe to hunyuan_v1_moe"
This reverts commit aa973ca21913aba77f6e81a935270ef7be222e75.
* use hunyuan_dense instead of hunyuan_v1_dense
Signed-off-by: stevenkuang <stevenkuang@tencent.com >
* fix hunyuan_moe chat template
Signed-off-by: stevenkuang <stevenkuang@tencent.com >
* remove leftover code
Signed-off-by: stevenkuang <stevenkuang@tencent.com >
* update hunyuan dense chat template
Signed-off-by: stevenkuang <stevenkuang@tencent.com >
* fix hunyuan dense vocab and chat template
Signed-off-by: stevenkuang <stevenkuang@tencent.com >
---------
Signed-off-by: stevenkuang <stevenkuang@tencent.com >
2025-08-01 15:31:12 +02:00
Diego Devesa
d6818d06a6
llama : allow other bufts when overriding to CPU, add --no-repack option ( #14990 )
2025-07-31 18:11:34 +02:00
Dongliang Wei
c1dacaa99b
llama : merge build_moe_ffn_from_probs function into build_moe_ffn ( #14968 )
2025-07-31 14:12:20 +02:00
Aman Gupta
8a4a856277
Add LLaDA 8b Diffusion model ( #14771 )
...
* Add support for Llada-8b: diffusion model
* Add README
* Fix README and convert_hf_to_gguf
* convert_hf_to_gguf.py: address review comments
* Make everything in a single example
* Remove model-specific sampling
* Remove unused argmax
* Remove braced initializers, improve README.md a bit
* Add diffusion specific gguf params in set_vocab, remove setting rope_theta and rms_norm_eps
* Remove adding the mask token
* Move add_add_bos_token to set_vocab
* use add_bool in gguf_writer.py
2025-07-31 19:49:09 +08:00
Dongliang Wei
6c6e397aff
model : add support for SmallThinker series ( #14898 )
...
* support smallthinker
* support 20b softmax, 4b no sliding window
* new build_moe_ffn_from_probs, and can run 4b
* fix 4b rope bug
* fix python type check
* remove is_moe judge
* remove set_dense_start_swa_pattern function and modify set_swa_pattern function
* trim trailing whitespace
* remove get_vocab_base of SmallThinkerModel in convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* better whitespace
Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* use GGML_ASSERT for expert count validation
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Improve null pointer check for probs
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* use template parameter for SWA attention logic
* better whitespace
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com >
* move the creation of inp_out_ids before the layer loop
* remove redundant judge for probs
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com >
2025-07-28 13:47:00 +02:00
Gabriel Larson
4762ad7316
model : make rope_yarn_log_mul optional for deepseek2 ( #14896 )
...
* make rope_yarn_log_mul optional for deepseek2
* default rope_yarn_log_mul = 0.0f
2025-07-27 11:18:37 +03:00
Shunta Saito
1dc9614e06
llama : fix kq_scale for the attention layers of PLaMo2 ( #14892 )
...
* Fix dimensions for expand
* Change dimensions to copy states to cache
* Fix the default value for plamo2 conversion
* Fix scale given to build_attn
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
2025-07-27 09:38:44 +02:00
yummy
86f5623d90
llama : fix MiniCPM inference after Granite Four changes ( #14850 )
...
MiniCPM models use the llm_build_granite constructor, which was changed
in the Granite Four PR to use hparams.rope_finetuned instead of a
use_rope parameter. MiniCPM models need RoPE enabled by default.
This fixes inference, producing correct responses instead of gibberish.
2025-07-24 11:50:51 +02:00
Molly Sophia
d4d1522b20
llama : add model type detection for rwkv7 7B&14B ( #14816 )
...
Signed-off-by: Molly Sophia <mollysophia379@gmail.com >
2025-07-22 23:01:29 +08:00
Georgi Gerganov
eacdeb5bfc
model : fix build after merge conflict ( #14754 )
2025-07-18 11:53:55 +03:00
lgai-exaone
e0cb5c5cb8
model : add EXAONE 4.0 support ( #14630 )
2025-07-18 10:45:49 +02:00
Georgi Gerganov
8f974bc1e9
graph : refactor context to not pass gf explicitly ( #14629 )
...
ggml-ci
2025-07-18 08:29:28 +03:00
Piotr Wilkin (ilintar)
cb887f1bc1
model: add Ernie 4.5 MoE support ( #14658 )
...
* Add Ernie4.5 MoE
* Fix Flake errors.
* Properly encode/decode MoE layer step
* Correct tensor mappings (.weight)
* Pass and read n_ff_exp
* n_ff_shexp calculation and further minor changes
* Rope fixes.
* .gitignore fix
* Add unit32 cast for Linux builds
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Further fixes from code review
* Fix trailing whitespace
* Reenable missing experts error
* Code style from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Fix non-MoE regression
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
2025-07-17 23:15:32 +02:00
Georgi Gerganov
01612b7409
llama : reuse compute graphs ( #14482 )
...
* llama : reuse compute graphs
ggml-ci
* llama-bench : add graph reuse parameter
ggml-ci
* cont : remove the parameter and the sched resets
ggml-ci
* graph : rename update() to can_reuse()
ggml-ci
* params : remove is_same()
ggml-ci
* graph : set res->params in llm_graph_context constructor
ggml-ci
* graph : avoid set_max_nodes in llm_graph_result
ggml-ci
* kv-cache : reuse llama_context's graph result instance
ggml-ci
* context : reset the previous graph result upon memory updates
ggml-ci
* batch : llama_ubatch now carries its data instead of pointing to balloc
ggml-ci
* merge : fix build
ggml-ci
* graph : fix can_reuse() checks when flash-attention is disabled
* graph : move llm_graph_result impl in source file + debug env
ggml-ci
2025-07-17 19:08:33 +03:00
Tarek Dakhran
086cf81e88
llama : fix parallel processing for lfm2 ( #14705 )
2025-07-17 09:22:11 +02:00
tempstudio
b0f0ecc3dc
model : support output bias for qwen2 ( #14711 )
...
Co-authored-by: qwaqrm <qwaqrm@126.com >
2025-07-16 18:02:06 +03:00
Georgi Gerganov
225e7a1438
llama : add high-throughput mode ( #14363 )
...
* kv-cache : prepare K/V buffers for separation
ggml-ci
* batched-bench : fix oob write
ggml-ci
* llama : add "virtual sequences"
ggml-ci
* llama : use "stream" vs "virtual sequence"
ggml-ci
* graph : fix stream splitting when KV cache is not used
ggml-ci
* kv-cache : add multi-stream save/load support
ggml-ci
* llama : add "--attn-streams" flag
ggml-ci
* kv-cache : fix handling when find_slot fails
ggml-ci
* kv-cache : restore find_slot impl
ggml-ci
* kv-cache : add comments
* kv-cache : add bounds checks for sequence id
ggml-ci
* cont : add n_seq_max to batch allocr
ggml-ci
* kv-cache : perform stream copies lazily after llama_synchronize
ggml-ci
* kv-cache : avoid throwing exceptions across the C boundary
ggml-ci
* CUDA: 4D FlashAttention support (#14628 )
* CUDA: 4D FlashAttention support
* CUDA: fix WMMA FA kernel
* llama : rename attn_streams -> kv_unified
ggml-ci
* common : rename kv_split -> kv_unified
ggml-ci
---------
Co-authored-by: Johannes Gäßler <johannesg@5d6.de >
2025-07-16 16:35:42 +03:00
Aman Gupta
ab14019821
Support diffusion models: Add Dream 7B ( #14644 )
...
* Support diffusion models: Add Dream 7B
* Move diffusion to examples
* Move stuff to examples. Add patch to not use kv-cache
* Address review comments
* Make sampling fast
* llama: remove diffusion functions
* Add basic timings + cleanup
* More cleanup
* Review comments: better formating, use LOG instead std::cerr, re-use batch, use ubatch instead of max_length
* fixup!
* Review: move everything to diffusion-cli for now
2025-07-16 20:03:51 +08:00
Shunta Saito
e4841d24d3
llama : fix parallel processing for plamo2 ( #14716 )
2025-07-16 12:12:22 +02:00