Initialize the project; model provided by the ModelHub XC community

Model: inclusionAI/Ling-mini-base-2.0-5T
Source: Original Platform
ModelHub XC
2026-04-13 18:30:00 +08:00
commit 0c2dddb263
15 changed files with 17641 additions and 0 deletions

49
.gitattributes vendored Normal file

@@ -0,0 +1,49 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

275
README.md Normal file

@@ -0,0 +1,275 @@
---
license: mit
---
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>&nbsp;&nbsp;|&nbsp;&nbsp;🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>
## Introduction
Today, we are excited to announce the open-sourcing of __Ling 2.0__ — a family of MoE-based large language models that combine __SOTA performance__ with __high efficiency__.
The first released version, Ling-mini-2.0, is compact yet powerful. It has __16B total parameters__, but only __1.4B__ are activated per input token (non-embedding 789M). Trained on more than __20T tokens__ of high-quality data and enhanced through multi-stage supervised fine-tuning and reinforcement learning, Ling-mini-2.0 achieves remarkable improvements in complex reasoning and instruction following. With just 1.4B activated parameters, it still reaches the top-tier level of sub-10B dense LLMs and even matches or surpasses much larger MoE models.
<p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/2NKZS5LVXzcAAAAASBAAAAgADkZ7AQFr/fmt.webp" /></p>
### Strong General and Professional Reasoning
We evaluated Ling-mini-2.0 on challenging general reasoning tasks in coding (LiveCodeBench, CodeForces) and mathematics (AIME 2025, HMMT 2025), as well as knowledge-intensive reasoning tasks across multiple domains (MMLU-Pro, Humanity's Last Exam). Compared with sub-10B dense models (e.g., Qwen3-4B-instruct-2507, Qwen3-8B-nothinking) and larger-scale MoE models (Ernie-4.5-21B-A3B-PT, GPT-OSS-20B/low), Ling-mini-2.0 demonstrated outstanding overall reasoning capabilities.
### 7× Equivalent Dense Performance Leverage
Guided by [Ling Scaling Laws](https://arxiv.org/abs/2507.17702), Ling 2.0 adopts a __1/32 activation ratio__ MoE architecture, with empirically optimized design choices in expert granularity, shared expert ratio, attention ratio, aux-loss free + sigmoid routing strategy, MTP loss, QK-Norm, half RoPE, and more. This enables small-activation MoE models to achieve over __7× equivalent dense performance__. In other words, __Ling-mini-2.0 with only 1.4B activated parameters (non-embedding 789M) can deliver performance equivalent to a 78B dense model__.
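As a quick back-of-the-envelope check, the 1/32 ratio falls directly out of the routing values in this repo's `config.json`; the sketch below assumes the standard gate/up/down SiLU expert layout when counting expert parameters:
```python
# Values taken from config.json in this repository.
hidden_size = 2048
moe_intermediate_size = 512
num_experts = 256
num_experts_per_tok = 8
num_shared_experts = 1

# Routed activation ratio: 8 of 256 experts fire per token.
print(num_experts_per_tok / num_experts)  # 0.03125, i.e. 1/32

# Expert parameters per MoE layer (gate, up, and down projections per expert).
expert_params = 3 * hidden_size * moe_intermediate_size
active = (num_experts_per_tok + num_shared_experts) * expert_params
total = (num_experts + num_shared_experts) * expert_params
print(f"{active / 1e6:.1f}M active of {total / 1e6:.1f}M expert params per layer")
```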
### High-speed Generation at 300+ token/s
<p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/bnxIRaK9tzcAAAAAgSAAAAgADkZ7AQFr/original" /></p>
The highly sparse small-activation MoE architecture also delivers significant training and inference efficiency. In simple QA scenarios (within 2000 tokens), __Ling-mini-2.0 generates at 300+ tokens/s (on H20 deployment)__, more than __2× faster__ than an 8B dense model. With YaRN, Ling-mini-2.0 can handle a __128K context length__; as sequence length increases, the relative speedup can reach __over 7×__.
<p align="center"><img src="https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/figures/needle_in_a_haystack.webp" /></p>
### Open-sourced FP8 Efficient Training Solution
Ling 2.0 employs __FP8 mixed-precision training__ throughout. Compared with BF16, experiments with over 1T training tokens show nearly identical loss curves and downstream benchmark performance. To support the community in efficient continued pretraining and fine-tuning under limited compute, we are also open-sourcing our __FP8 training solution__. Based on tile/blockwise FP8 scaling, it further introduces an FP8 optimizer, FP8 on-demand weight transposition, and an FP8 padding routing map for extreme memory optimization. On 8/16/32 80G GPUs, compared with LLaMA 3.1 8B and Qwen3 8B, __Ling-mini-2.0 achieved 30-60% throughput gains with MTP enabled, and 90-120% throughput gains with MTP disabled__.
### A More Open Open-Source Strategy
We believe Ling-mini-2.0 is an ideal starting point for MoE research. For the first time at this scale, it integrates 1/32 sparsity, MTP layers, and FP8 training — achieving both strong effectiveness and efficient training/inference performance, making it a prime candidate for the small-size LLM segment.
To further foster community research, in addition to releasing the post-trained version, we are also open-sourcing __five pretraining checkpoints__: the pre-finetuning Ling-mini-2.0-base, along with four base models trained on 5T, 10T, 15T, and 20T tokens, enabling deeper research and broader applications.
## Model Downloads
The following table lists the various stages of Ling-mini-2.0 models (1.43B activated of 16.26B total params). If you are located in mainland China, we also provide the models on ModelScope.cn to speed up the download process.
<center>
| **Model** | **Context Length** | **Download** |
|:----------------------:| :----------------: |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| Ling-mini-base-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0) |
| Ling-mini-base-2.0-5T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-5T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-5T) |
| Ling-mini-base-2.0-10T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-10T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-10T) |
| Ling-mini-base-2.0-15T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-15T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-15T) |
| Ling-mini-base-2.0-20T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-20T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-20T) |
| Ling-mini-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-2.0) |
</center>
Note: If you are interested in previous versions, please visit the past model collections on [Hugging Face](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
## Quickstart
### Convert to safetensors
Models with safetensors format can be downloaded from [HuggingFace](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
If you want to train the model and evaluate it, you can convert from the DCP checkpoints produced by training:
```shell
python tools/convert_dcp_to_safe_tensors.py --checkpoint-path ${DCP_PATH} --target-path ${SAFETENSORS_PATH}
```
Currently, BF16 and FP8 formats are supported; select the output format with one of the flags below (an example follows):
- `--force-bf16` for BF16 format.
- `--force-fp8` for FP8 format.
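For example, converting a DCP checkpoint to FP8 might look like this (a sketch; both paths are placeholders for your own checkpoint locations):
```shell
python tools/convert_dcp_to_safe_tensors.py \
    --checkpoint-path /ckpts/ling-mini-2.0-dcp \
    --target-path /ckpts/ling-mini-2.0-fp8 \
    --force-fp8
```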
### 🤗 Hugging Face Transformers
Here is a code snippet to show you how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ling-mini-2.0"

# Load the model and tokenizer; trust_remote_code pulls in the BailingMoeV2 classes.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt},
]

# Render the chat template into a single prompt string.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
)
# Strip the prompt tokens so only the newly generated ones are decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### 🤖 ModelScope
If you're in mainland China, we strongly recommend using our model from 🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>.
## Deployment
### vLLM
vLLM supports offline batched inference as well as launching an OpenAI-compatible API service for online inference.
#### Environment Preparation
Since the Pull Request (PR) has not been submitted to the vLLM community at this stage, please prepare the environment by following the steps below:
```bash
git clone -b v0.10.0 https://github.com/vllm-project/vllm.git
cd vllm
wget https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/inference/vllm/bailing_moe_v2.patch
git apply bailing_moe_v2.patch
pip install -e .
```
#### Offline Inference:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ling-mini-2.0")

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)
llm = LLM(model="inclusionAI/Ling-mini-2.0", dtype='bfloat16')

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

outputs = llm.generate([text], sampling_params)
print(outputs[0].outputs[0].text)
```
#### Online Inference:
```bash
vllm serve inclusionAI/Ling-mini-2.0 \
    --tensor-parallel-size 2 \
    --pipeline-parallel-size 1 \
    --use-v2-block-manager \
    --gpu-memory-utilization 0.90
```
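Once the service is up, you can exercise it through the OpenAI-compatible endpoint; a minimal request sketch (assuming the default port 8000) looks like this:
```bash
curl -s http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "inclusionAI/Ling-mini-2.0", "messages": [{"role": "user", "content": "Give me a short introduction to large language models."}]}'
```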
To handle long context in vLLM using YaRN, we need to follow these two steps:
1. Add a `rope_scaling` field to the model's `config.json` file, for example:
```json
{
  ...,
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```
2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service (see the example below).
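For example, with the YaRN configuration above (factor 4.0 over the 32K base window, i.e. 128K), a long-context launch might look like this sketch:
```bash
vllm serve inclusionAI/Ling-mini-2.0 \
    --tensor-parallel-size 2 \
    --max-model-len 131072
```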
For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).
### SGLang
#### Environment Preparation
We will submit our model to the official SGLang release later; for now, prepare the environment with the following steps:
```shell
pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
```
You can use the Docker image as well:
```shell
docker pull lmsysorg/sglang:v0.5.2rc0-cu126
```
Then apply our patch to the SGLang installation:
```shell
# patch command is needed, run `yum install -y patch` if needed
patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
```
#### Run Inference
SGLang now supports both BF16 and FP8 models; which one runs depends on the dtype of the model in ${MODEL_PATH}. Both share the same command:
- Start server:
```shell
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --host 0.0.0.0 --port $PORT \
    --trust-remote-code \
    --attention-backend fa3
```
MTP is supported for the base model, but not yet for the chat model. To enable it, add the parameter `--speculative-algorithm NEXTN` to the launch command, as in the sketch below.
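A minimal sketch of the same launch with MTP speculative decoding enabled (assuming `$MODEL_PATH` points to a base model):
```shell
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --host 0.0.0.0 --port $PORT \
    --trust-remote-code \
    --attention-backend fa3 \
    --speculative-algorithm NEXTN
```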
- Client:
```shell
curl -s http://localhost:${PORT}/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```
More usage examples can be found [here](https://docs.sglang.ai/basic_usage/send_request.html).
## Training
We also provide a complete and efficient training framework that covers both pre-training and fine-tuning. Based on this framework, continued training can be performed from the Ling-mini-2.0 checkpoint. With our training framework, the training throughput of the Ling-mini-2.0 model is significantly better than that of existing dense 8B models (Qwen3-8B, Llama3-8B).
### Pre-training
See the [pretraining demo](https://github.com/inclusionAI/Ling-V2/blob/main/docs/gpu_based_training.md) for how to continue pretraining Ling models.
#### Performance Benchmark
The table below shows the pre-training performance of several models, measured in **tokens per second** on 8, 16, and 32 80G GPUs. Ling-mini-2.0 achieves significantly higher training efficiency compared to the baseline, making it easier and more cost-effective to continue pre-training with our [demo scripts](https://github.com/inclusionAI/Ling-V2/blob/main/docs/gpu_based_training.md).
<center>
| **Model** | **8 x 80G GPUs (GBS=128)** | **16 x 80G GPUs (GBS=256)** | **32 x 80G GPUs (GBS=512)** |
|:-----------------------:| :--------------------: | :---------------------: | :---------------------: |
| LLaMA 3.1 8B (baseline) | 81222 | 161319 | 321403 |
| Qwen3 8B | 55775 (-31.33%) | 109799 (-31.94%) | 219943 (-31.57%) |
| Ling-mini-2.0 | 109532 (+34.86%) | 221585 (+37.36%) | 448726 (+39.61%) |
| Ling-mini-2.0 w/o MTP | 128298 (+57.96%) | 307264 (+90.47%) | 611466 (+90.25%) |
</center>
### Finetuning
We recommend using [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ling](https://github.com/inclusionAI/Ling-V2/blob/main/docs/llamafactory_finetuning.md). Alternatively, you can use [Megatron for finetuning](https://github.com/inclusionAI/Ling-V2/blob/main/docs/megatron_sft_training.md).
## License
This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ling-V2/blob/master/LICENCE).
## Citation
If you find our work helpful, feel free to cite it.
```
```

51
config.json Normal file

@@ -0,0 +1,51 @@
{
  "architectures": [
    "BailingMoeV2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "auto_map": {
    "AutoConfig": "configuration_bailing_moe_v2.BailingMoeV2Config",
    "AutoModel": "modeling_bailing_moe_v2.BailingMoeV2Model",
    "AutoModelForCausalLM": "modeling_bailing_moe_v2.BailingMoeV2ForCausalLM"
  },
  "num_hidden_layers": 20,
  "hidden_size": 2048,
  "intermediate_size": 5120,
  "eos_token_id": 156892,
  "pad_token_id": 156892,
  "first_k_dense_replace": 1,
  "hidden_act": "silu",
  "max_position_embeddings": 4096,
  "model_type": "bailing_moe",
  "moe_intermediate_size": 512,
  "norm_topk_prob": true,
  "num_experts_per_tok": 8,
  "num_attention_heads": 16,
  "num_experts": 256,
  "num_key_value_heads": 4,
  "rope_theta": 10000,
  "rope_scaling": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.52.3",
  "use_bias": false,
  "use_rmsnorm": true,
  "rms_norm_eps": 1e-06,
  "head_dim": 128,
  "num_shared_experts": 1,
  "use_cache": true,
  "use_qkv_bias": false,
  "embedding_dropout": 0.0,
  "output_dropout": 0.0,
  "vocab_size": 157184,
  "partial_rotary_factor": 0.5,
  "router_dtype": "fp32",
  "moe_router_enable_expert_bias": true,
  "routed_scaling_factor": 2.5,
  "n_group": 8,
  "topk_group": 4,
  "use_qk_norm": true,
  "score_function": "sigmoid",
  "moe_shared_expert_intermediate_size": 512,
  "num_nextn_predict_layers": 1
}

1
configuration.json Normal file

@@ -0,0 +1 @@
{"framework":"Pytorch","task":"text-generation"}

84
configuration_bailing_moe_v2.py Normal file

@@ -0,0 +1,84 @@
"""Bailing MoE V2 model configuration"""
from transformers.configuration_utils import PretrainedConfig
class BailingMoeV2Config(PretrainedConfig):
def __init__(
self,
vocab_size=157184,
hidden_size=2048,
intermediate_size=5120,
num_hidden_layers=20,
num_attention_heads=16,
num_key_value_heads=4,
hidden_act="silu",
use_qkv_bias=False, # bailing only
use_bias=False, # bailing only
rms_norm_eps=1e-06,
tie_word_embeddings=False, # PretrainedConfig key, here change default value.
embedding_dropout=0.0,
attention_dropout=0.0,
output_dropout=0.0,
initializer_range=0.02,
max_position_embeddings=32768,
rope_theta=600000.0,
use_cache=True,
max_window_layers=20,
rope_scaling=None,
pad_token_id=156892,
eos_token_id=156892,
num_experts=256,
num_shared_experts=1,
num_experts_per_tok=8,
n_group=8,
topk_group=4,
moe_intermediate_size=512,
first_k_dense_replace=1,
head_dim=128,
output_router_logits=False,
use_qk_norm=True,
num_nextn_predict_layers=0,
mtp_loss_scaling_factor=0,
moe_router_enable_expert_bias=True,
routed_scaling_factor=1.0,
**kwargs,
):
self.num_hidden_layers = num_hidden_layers
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.intermediate_size = intermediate_size
self.num_attention_heads = num_attention_heads
self.num_key_value_heads = num_key_value_heads
self.hidden_act = hidden_act
self.use_qkv_bias = use_qkv_bias
self.use_bias = use_bias
self.rms_norm_eps = rms_norm_eps
self.embedding_dropout = embedding_dropout
self.attention_dropout = attention_dropout
self.output_dropout = output_dropout
self.num_nextn_predict_layers = num_nextn_predict_layers
self.mtp_loss_scaling_factor = mtp_loss_scaling_factor
self.initializer_range = initializer_range
self.max_position_embeddings = max_position_embeddings
self.rope_theta = rope_theta
self.use_cache = use_cache
self.max_window_layers = max_window_layers
self.head_dim = head_dim or self.hidden_size // self.num_attention_heads
self.rope_scaling = rope_scaling
self.use_qk_norm = use_qk_norm
self.moe_router_enable_expert_bias = moe_router_enable_expert_bias
self.routed_scaling_factor = routed_scaling_factor
# MoE configs
self.num_experts = num_experts
self.num_shared_experts = num_shared_experts
self.num_experts_per_tok = num_experts_per_tok
self.n_group = n_group
self.topk_group = topk_group
self.moe_intermediate_size = moe_intermediate_size
self.first_k_dense_replace = first_k_dense_replace
self.output_router_logits = output_router_logits
super().__init__(pad_token_id=pad_token_id, eos_token_id=eos_token_id, tie_word_embeddings=tie_word_embeddings, **kwargs)

6
generation_config.json Normal file

@@ -0,0 +1,6 @@
{
  "eos_token_id": 156892,
  "pad_token_id": 156892,
  "do_sample": false,
  "transformers_version": "4.52.3"
}

3
model-00001-of-00004.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:001e90781b9605f5536206dbfb7aa6fe7f19781804c6c8c3c66431b1541d4568
size 10202198792

3
model-00002-of-00004.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:624ef5e01ec7abcc379c8c5235fc57fb245e45df589ab00a7591b8789a7853d8
size 10205346968

3
model-00003-of-00004.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d870ab4e5aad6b0702b2e3e7338b8a93ec01a2da5c41c44fe1536d0b71d45ea8
size 10247524944

3
model-00004-of-00004.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a95bcb61adf2e9c37ff2bdff0d8dc43d5e2e38b7559dc294d5c33f2d93741330
size 3513886136

15603
model.safetensors.index.json Normal file

File diff suppressed because it is too large

1533
modeling_bailing_moe_v2.py Normal file

File diff suppressed because it is too large

7
special_tokens_map.json Normal file

@@ -0,0 +1,7 @@
{
  "bos_token": "<|startoftext|>",
  "cls_token": "[CLS]",
  "eos_token": "<|endoftext|>",
  "gmask_token": "[gMASK]",
  "pad_token": "<|endoftext|>"
}

3
tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:23895938c755ebef359350a758831dc230a481428155d0f50a236d572e860b21
size 7663404

17
tokenizer_config.json Normal file

@@ -0,0 +1,17 @@
{
  "add_bos_token": false,
  "add_eos_token": false,
  "bos_token": "<|startoftext|>",
  "chat_template": "{% for message in messages %}{% set role = message['role'] | lower %}{% if role == 'user' %}{% set role = 'HUMAN' %}{% endif %}{% set role = role | upper %}{{ '<role>' + role + '</role>' + message['content'] }}{% endfor %}{% if add_generation_prompt %}{{ '<role>ASSISTANT</role>' }}{% endif %}",
  "clean_up_tokenization_spaces": false,
  "cls_token": "[CLS]",
  "eos_token": "<|endoftext|>",
  "fast_tokenizer": true,
  "gmask_token": "[gMASK]",
  "merges_file": null,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "<|endoftext|>",
  "tokenizer_class": "PreTrainedTokenizerFast",
  "trust_remote_code": true,
  "vocab_file": null
}
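For reference, the `chat_template` above concatenates `<role>ROLE</role>` tags with message contents and maps `user` to `HUMAN`. A minimal hand-rolled sketch of the rendering (the `render` helper is hypothetical; `tokenizer.apply_chat_template` produces the same string):
```python
def render(messages, add_generation_prompt=True):
    # Mirrors the Jinja chat_template in tokenizer_config.json.
    text = ""
    for message in messages:
        role = message["role"].lower()
        if role == "user":
            role = "human"
        text += "<role>" + role.upper() + "</role>" + message["content"]
    if add_generation_prompt:
        text += "<role>ASSISTANT</role>"
    return text

messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": "What is the capital of France?"},
]
print(render(messages))
# <role>SYSTEM</role>You are Ling, an assistant created by inclusionAI<role>HUMAN</role>What is the capital of France?<role>ASSISTANT</role>
```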