初始化项目,由ModelHub XC社区提供模型
Model: muverqqw/Noir-Lightning Source: Original Platform
This commit is contained in:
36
.gitattributes
vendored
Normal file
36
.gitattributes
vendored
Normal file
@@ -0,0 +1,36 @@
|
|||||||
|
*.7z filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.arrow filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.bin filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.bz2 filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.ckpt filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.ftz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.gz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.h5 filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.joblib filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.lfs.* filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.mlmodel filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.model filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.msgpack filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.npy filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.npz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.onnx filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.ot filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.parquet filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pb filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pickle filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pkl filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pt filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pth filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.rar filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.safetensors filter=lfs diff=lfs merge=lfs -text
|
||||||
|
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.tar.* filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.tar filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.tflite filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.tgz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.wasm filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.xz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.zip filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.zst filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*tfevents* filter=lfs diff=lfs merge=lfs -text
|
||||||
|
tokenizer.json filter=lfs diff=lfs merge=lfs -text
|
||||||
174
README.md
Normal file
174
README.md
Normal file
@@ -0,0 +1,174 @@
|
|||||||
|
---
|
||||||
|
base_model: unsloth/Qwen2.5-0.5B-Instruct
|
||||||
|
tags:
|
||||||
|
- text-generation-inference
|
||||||
|
- transformers
|
||||||
|
- unsloth
|
||||||
|
- qwen2
|
||||||
|
license: apache-2.0
|
||||||
|
language:
|
||||||
|
- en
|
||||||
|
model-index:
|
||||||
|
- name: Noir-Lightning
|
||||||
|
results:
|
||||||
|
- task:
|
||||||
|
type: text-generation
|
||||||
|
name: General Intelligence
|
||||||
|
dataset:
|
||||||
|
type: mmlu_pro
|
||||||
|
name: MMLU Pro
|
||||||
|
metrics:
|
||||||
|
- name: accuracy
|
||||||
|
type: exact_match
|
||||||
|
value: 13.57
|
||||||
|
- task:
|
||||||
|
type: text-generation
|
||||||
|
name: Mathematics
|
||||||
|
dataset:
|
||||||
|
type: gsm8k
|
||||||
|
name: GSM8K
|
||||||
|
metrics:
|
||||||
|
- name: accuracy
|
||||||
|
type: exact_match
|
||||||
|
value: 10.0
|
||||||
|
- task:
|
||||||
|
type: text-generation
|
||||||
|
name: Physics
|
||||||
|
dataset:
|
||||||
|
type: mmlu_pro_physics
|
||||||
|
name: MMLU Pro (Physics)
|
||||||
|
metrics:
|
||||||
|
- name: accuracy
|
||||||
|
type: exact_match
|
||||||
|
value: 30.0
|
||||||
|
- task:
|
||||||
|
type: text-generation
|
||||||
|
name: Programming
|
||||||
|
dataset:
|
||||||
|
type: mmlu_pro_computer_science
|
||||||
|
name: MMLU Pro (Computer Science)
|
||||||
|
metrics:
|
||||||
|
- name: accuracy
|
||||||
|
type: exact_match
|
||||||
|
value: 20.0
|
||||||
|
---
|
||||||
|
|
||||||
|
# ⚡ Noir-Lightning
|
||||||
|
|
||||||
|
<div align="center">
|
||||||
|
|
||||||
|
[Noir Family](https://huggingface.co/collections/muverqqw/noir) | [Benchmarks](#-benchmark-results) | [Quickstart](#-quick-start)
|
||||||
|
|
||||||
|
</div>
|
||||||
|
|
||||||
|
**Noir-Lightning** is a "pocket-sized" intelligence. It is the lightest and fastest model in the Noir family, built on the Qwen 2.5 architecture. Despite its tiny size (only 0.5 billion parameters), it performs at the level of models many times its size.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🚀 Why Noir-Lightning?
|
||||||
|
|
||||||
|
Small models often struggle with consistency, identity confusion, or producing nonsensical wordplay. In this version, we have addressed these core issues:
|
||||||
|
|
||||||
|
* ✅ **Identity Clarity:** The model clearly understands it is an AI. No more confusing phrases like "I am the artificial intelligence of artificial intelligence."
|
||||||
|
* 🗣 **Natural Language:** Significant improvements in English and Russian fluency. It captures nuances and context, maintaining a natural conversational flow without "robotic" artifacts.
|
||||||
|
* 🧠 **Outperforming Competitors:** It surpasses newer models (like Qwen3-0.6B) in logic, reasoning, and mathematical tasks.
|
||||||
|
* ⚡ **Extreme Efficiency:** Runs instantly on low-end laptops, smartphones, and even directly in the browser.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📊 Benchmark Results
|
||||||
|
|
||||||
|
We tested the model against rigorous academic benchmarks. Here is what these numbers mean for such a compact model:
|
||||||
|
|
||||||
|
| Task / Category | Metric | Result (%) | Stderr (%) |
|
||||||
|
| :--- | :--- | :---: | :---: |
|
||||||
|
| **GSM8K** (Primary Math) | exact_match | **10.0%** | ± 10.0% |
|
||||||
|
| **MMLU Pro** (Aggregate) | exact_match | **13.57%** | ± 2.96% |
|
||||||
|
| ∟ Physics | exact_match | **30.0%** | ± 15.28% |
|
||||||
|
| ∟ History | exact_match | **20.0%** | ± 13.33% |
|
||||||
|
| ∟ Computer Science | exact_match | **20.0%** | ± 13.33% |
|
||||||
|
| ∟ Engineering | exact_match | **20.0%** | ± 13.33% |
|
||||||
|
| ∟ Psychology | exact_match | **20.0%** | ± 13.33% |
|
||||||
|
| ∟ Business | exact_match | **20.0%** | ± 13.33% |
|
||||||
|
| ∟ Biology | exact_match | **10.0%** | ± 10.0% |
|
||||||
|
| ∟ Chemistry | exact_match | **10.0%** | ± 10.0% |
|
||||||
|
| ∟ Economics | exact_match | **10.0%** | ± 10.0% |
|
||||||
|
| ∟ Law | exact_match | **10.0%** | ± 10.0% |
|
||||||
|
| ∟ Philosophy | exact_match | **10.0%** | ± 10.0% |
|
||||||
|
| ∟ Other | exact_match | **10.0%** | ± 10.0% |
|
||||||
|
| ∟ Health | exact_match | **0.0%** | ± 0.0% |
|
||||||
|
| ∟ Math | exact_match | **0.0%** | ± 0.0% |
|
||||||
|
|
||||||
|
---
|
||||||
|
**Note:** The results are based on a 5-shot evaluation where applicable.
|
||||||
|
|
||||||
|
# Summary
|
||||||
|
|
||||||
|
In its weight class (0.5B), this model is a true champion, especially in hard sciences and logical consistency.
|
||||||
|
|
||||||
|
| Domain | Success Rate | Commentary |
|
||||||
|
| :--- | :--- | :--- |
|
||||||
|
| **Physics** | 30% | 🏆 Exceptional result. Solves problems better than many models 2-3x its size. |
|
||||||
|
| **History & Humanities** | 20% | Strong grasp of dates, events, and cultural context. |
|
||||||
|
| **IT & Programming** | 20% | Understands code structures and basic algorithmic logic. |
|
||||||
|
| **General Intelligence (MMLU Pro)** | 13.6% | An "A-tier" performance for the 0.5B parameter segment. |
|
||||||
|
| **Mathematics (GSM8K)** | 10% | Capable of solving multi-step primary school logic problems. |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
|
||||||
|
| Model | Parameters | Role | Key Strength |
|
||||||
|
| :--- | :--- | :--- | :--- |
|
||||||
|
| **Noir-Lightning** | **0.5B** | **The Pocket Assistant** | **Ultra-fast, runs on anything** |
|
||||||
|
| **Noir-Mini** | 1.5B | The Balanced Thinker | High speed with solid grammar |
|
||||||
|
| **Noir-Standard** | 3B | The Versatile Workhorse | 65% GSM8K, perfect for 8GB VRAM |
|
||||||
|
| **Noir-Ultra** | 7B | The Reasoning Master | 91% SciQ & 84% Math |
|
||||||
|
| **Noir-Starlight** | 14B | The Galactic Intelligence | Deep logic & Expert-level STEM |
|
||||||
|
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🛠 Quick Start
|
||||||
|
|
||||||
|
You will need the `transformers` library. Here is the implementation using the high-level `pipeline` API:
|
||||||
|
|
||||||
|
```python
|
||||||
|
from transformers import pipeline
|
||||||
|
|
||||||
|
# Initialize the pipeline
|
||||||
|
pipe = pipeline("text-generation", model="muverqqw/Noir-Lightning", device_map="auto")
|
||||||
|
|
||||||
|
# Chat-template based request
|
||||||
|
messages = [
|
||||||
|
{"role": "system", "content": "You are Noir, a helpful AI assistant."},
|
||||||
|
{"role": "user", "content": "Explain simply: why is the sky blue?"}
|
||||||
|
]
|
||||||
|
|
||||||
|
# Generation
|
||||||
|
outputs = pipe(messages, max_new_tokens=150, do_sample=True, temperature=0.7)
|
||||||
|
print(outputs[0]['generated_text'][-1]['content'])
|
||||||
|
```
|
||||||
|
---
|
||||||
|
# ⚙️ Technical Specifications
|
||||||
|
* **Architecture:** Transformers-based Causal Decoder (Qwen 2.5)
|
||||||
|
|
||||||
|
* **Training Data:** Fine-tuned on a curated blend of logical reasoning, high-quality dialogue, and mathematical datasets.
|
||||||
|
|
||||||
|
* **Format:** Available in float16. GGUF/EXL2 versions coming soon.
|
||||||
|
|
||||||
|
* **Context Length:** Supports up to 32k tokens (optimal performance within 4k-8k).
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
# 👤 About the Developer
|
||||||
|
* **Creator:** IceL1ghtning
|
||||||
|
|
||||||
|
* **Release Year:** 2025
|
||||||
|
|
||||||
|
* **Base Architecture:** Qwen 2.5
|
||||||
|
|
||||||
|
* **License:** Apache 2.0 (Commercial use permitted)
|
||||||
|
|
||||||
|
<div align="center">
|
||||||
|
<sub>Built with a passion for open-source and efficient computing.</sub>
|
||||||
|
</div>
|
||||||
24
added_tokens.json
Normal file
24
added_tokens.json
Normal file
@@ -0,0 +1,24 @@
|
|||||||
|
{
|
||||||
|
"</tool_call>": 151658,
|
||||||
|
"<tool_call>": 151657,
|
||||||
|
"<|box_end|>": 151649,
|
||||||
|
"<|box_start|>": 151648,
|
||||||
|
"<|endoftext|>": 151643,
|
||||||
|
"<|file_sep|>": 151664,
|
||||||
|
"<|fim_middle|>": 151660,
|
||||||
|
"<|fim_pad|>": 151662,
|
||||||
|
"<|fim_prefix|>": 151659,
|
||||||
|
"<|fim_suffix|>": 151661,
|
||||||
|
"<|im_end|>": 151645,
|
||||||
|
"<|im_start|>": 151644,
|
||||||
|
"<|image_pad|>": 151655,
|
||||||
|
"<|object_ref_end|>": 151647,
|
||||||
|
"<|object_ref_start|>": 151646,
|
||||||
|
"<|quad_end|>": 151651,
|
||||||
|
"<|quad_start|>": 151650,
|
||||||
|
"<|repo_name|>": 151663,
|
||||||
|
"<|video_pad|>": 151656,
|
||||||
|
"<|vision_end|>": 151653,
|
||||||
|
"<|vision_pad|>": 151654,
|
||||||
|
"<|vision_start|>": 151652
|
||||||
|
}
|
||||||
53
chat_template.jinja
Normal file
53
chat_template.jinja
Normal file
@@ -0,0 +1,53 @@
|
|||||||
|
{%- if tools %}
|
||||||
|
{{- '<|im_start|>system\n' }}
|
||||||
|
{%- if messages[0]['role'] == 'system' %}
|
||||||
|
{{- messages[0]['content'] }}
|
||||||
|
{%- else %}
|
||||||
|
{{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
|
||||||
|
{%- endif %}
|
||||||
|
{{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
|
||||||
|
{%- for tool in tools %}
|
||||||
|
{{- "\n" }}
|
||||||
|
{{- tool | tojson }}
|
||||||
|
{%- endfor %}
|
||||||
|
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
|
||||||
|
{%- else %}
|
||||||
|
{%- if messages[0]['role'] == 'system' %}
|
||||||
|
{{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
|
||||||
|
{%- else %}
|
||||||
|
{{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
|
||||||
|
{%- endif %}
|
||||||
|
{%- endif %}
|
||||||
|
{%- for message in messages %}
|
||||||
|
{%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
|
||||||
|
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
|
||||||
|
{%- elif message.role == "assistant" %}
|
||||||
|
{{- '<|im_start|>' + message.role }}
|
||||||
|
{%- if message.content %}
|
||||||
|
{{- '\n' + message.content }}
|
||||||
|
{%- endif %}
|
||||||
|
{%- for tool_call in message.tool_calls %}
|
||||||
|
{%- if tool_call.function is defined %}
|
||||||
|
{%- set tool_call = tool_call.function %}
|
||||||
|
{%- endif %}
|
||||||
|
{{- '\n<tool_call>\n{"name": "' }}
|
||||||
|
{{- tool_call.name }}
|
||||||
|
{{- '", "arguments": ' }}
|
||||||
|
{{- tool_call.arguments | tojson }}
|
||||||
|
{{- '}\n</tool_call>' }}
|
||||||
|
{%- endfor %}
|
||||||
|
{{- '<|im_end|>\n' }}
|
||||||
|
{%- elif message.role == "tool" %}
|
||||||
|
{%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %} {{- '<|im_start|>user' }}
|
||||||
|
{%- endif %}
|
||||||
|
{{- '\n<tool_response>\n' }}
|
||||||
|
{{- message.content }}
|
||||||
|
{{- '\n</tool_response>' }}
|
||||||
|
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
|
||||||
|
{{- '<|im_end|>\n' }}
|
||||||
|
{%- endif %}
|
||||||
|
{%- endif %}
|
||||||
|
{%- endfor %}
|
||||||
|
{%- if add_generation_prompt %}
|
||||||
|
{{- '<|im_start|>assistant\n' }}
|
||||||
|
{%- endif %}
|
||||||
56
config.json
Normal file
56
config.json
Normal file
@@ -0,0 +1,56 @@
|
|||||||
|
{
|
||||||
|
"architectures": [
|
||||||
|
"Qwen2ForCausalLM"
|
||||||
|
],
|
||||||
|
"attention_dropout": 0.0,
|
||||||
|
"torch_dtype": "float16",
|
||||||
|
"eos_token_id": 151645,
|
||||||
|
"hidden_act": "silu",
|
||||||
|
"hidden_size": 896,
|
||||||
|
"initializer_range": 0.02,
|
||||||
|
"intermediate_size": 4864,
|
||||||
|
"layer_types": [
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention"
|
||||||
|
],
|
||||||
|
"max_position_embeddings": 32768,
|
||||||
|
"max_window_layers": 21,
|
||||||
|
"model_type": "qwen2",
|
||||||
|
"num_attention_heads": 14,
|
||||||
|
"num_hidden_layers": 24,
|
||||||
|
"num_key_value_heads": 2,
|
||||||
|
"pad_token_id": 151654,
|
||||||
|
"rms_norm_eps": 1e-06,
|
||||||
|
"rope_scaling": null,
|
||||||
|
"rope_theta": 1000000.0,
|
||||||
|
"sliding_window": null,
|
||||||
|
"tie_word_embeddings": true,
|
||||||
|
"transformers_version": "4.57.3",
|
||||||
|
"unsloth_fixed": true,
|
||||||
|
"unsloth_version": "2025.12.4",
|
||||||
|
"use_cache": true,
|
||||||
|
"use_sliding_window": false,
|
||||||
|
"vocab_size": 151936
|
||||||
|
}
|
||||||
151388
merges.txt
Normal file
151388
merges.txt
Normal file
File diff suppressed because it is too large
Load Diff
3
model.safetensors
Normal file
3
model.safetensors
Normal file
@@ -0,0 +1,3 @@
|
|||||||
|
version https://git-lfs.github.com/spec/v1
|
||||||
|
oid sha256:983067bf33c0b85a9b4316b68b88f0efe38e64434c64b6bc2603d9fe4dafa8c3
|
||||||
|
size 988097824
|
||||||
31
special_tokens_map.json
Normal file
31
special_tokens_map.json
Normal file
@@ -0,0 +1,31 @@
|
|||||||
|
{
|
||||||
|
"additional_special_tokens": [
|
||||||
|
"<|im_start|>",
|
||||||
|
"<|im_end|>",
|
||||||
|
"<|object_ref_start|>",
|
||||||
|
"<|object_ref_end|>",
|
||||||
|
"<|box_start|>",
|
||||||
|
"<|box_end|>",
|
||||||
|
"<|quad_start|>",
|
||||||
|
"<|quad_end|>",
|
||||||
|
"<|vision_start|>",
|
||||||
|
"<|vision_end|>",
|
||||||
|
"<|vision_pad|>",
|
||||||
|
"<|image_pad|>",
|
||||||
|
"<|video_pad|>"
|
||||||
|
],
|
||||||
|
"eos_token": {
|
||||||
|
"content": "<|im_end|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false
|
||||||
|
},
|
||||||
|
"pad_token": {
|
||||||
|
"content": "<|vision_pad|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false
|
||||||
|
}
|
||||||
|
}
|
||||||
3
tokenizer.json
Normal file
3
tokenizer.json
Normal file
@@ -0,0 +1,3 @@
|
|||||||
|
version https://git-lfs.github.com/spec/v1
|
||||||
|
oid sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa
|
||||||
|
size 11421896
|
||||||
209
tokenizer_config.json
Normal file
209
tokenizer_config.json
Normal file
@@ -0,0 +1,209 @@
|
|||||||
|
{
|
||||||
|
"add_bos_token": false,
|
||||||
|
"add_prefix_space": false,
|
||||||
|
"added_tokens_decoder": {
|
||||||
|
"151643": {
|
||||||
|
"content": "<|endoftext|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151644": {
|
||||||
|
"content": "<|im_start|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151645": {
|
||||||
|
"content": "<|im_end|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151646": {
|
||||||
|
"content": "<|object_ref_start|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151647": {
|
||||||
|
"content": "<|object_ref_end|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151648": {
|
||||||
|
"content": "<|box_start|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151649": {
|
||||||
|
"content": "<|box_end|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151650": {
|
||||||
|
"content": "<|quad_start|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151651": {
|
||||||
|
"content": "<|quad_end|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151652": {
|
||||||
|
"content": "<|vision_start|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151653": {
|
||||||
|
"content": "<|vision_end|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151654": {
|
||||||
|
"content": "<|vision_pad|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151655": {
|
||||||
|
"content": "<|image_pad|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151656": {
|
||||||
|
"content": "<|video_pad|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151657": {
|
||||||
|
"content": "<tool_call>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151658": {
|
||||||
|
"content": "</tool_call>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151659": {
|
||||||
|
"content": "<|fim_prefix|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151660": {
|
||||||
|
"content": "<|fim_middle|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151661": {
|
||||||
|
"content": "<|fim_suffix|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151662": {
|
||||||
|
"content": "<|fim_pad|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151663": {
|
||||||
|
"content": "<|repo_name|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151664": {
|
||||||
|
"content": "<|file_sep|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"additional_special_tokens": [
|
||||||
|
"<|im_start|>",
|
||||||
|
"<|im_end|>",
|
||||||
|
"<|object_ref_start|>",
|
||||||
|
"<|object_ref_end|>",
|
||||||
|
"<|box_start|>",
|
||||||
|
"<|box_end|>",
|
||||||
|
"<|quad_start|>",
|
||||||
|
"<|quad_end|>",
|
||||||
|
"<|vision_start|>",
|
||||||
|
"<|vision_end|>",
|
||||||
|
"<|vision_pad|>",
|
||||||
|
"<|image_pad|>",
|
||||||
|
"<|video_pad|>"
|
||||||
|
],
|
||||||
|
"bos_token": null,
|
||||||
|
"clean_up_tokenization_spaces": false,
|
||||||
|
"eos_token": "<|im_end|>",
|
||||||
|
"errors": "replace",
|
||||||
|
"extra_special_tokens": {},
|
||||||
|
"model_max_length": 32768,
|
||||||
|
"pad_token": "<|vision_pad|>",
|
||||||
|
"padding_side": "left",
|
||||||
|
"split_special_tokens": false,
|
||||||
|
"tokenizer_class": "Qwen2Tokenizer",
|
||||||
|
"unk_token": null,
|
||||||
|
"chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n {%- else %}\n {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n {%- endif %}\n {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0]['role'] == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n {%- else %}\n {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role }}\n {%- if message.content %}\n {{- '\\n' + message.content }}\n {%- endif %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '\\n<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %} {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n"
|
||||||
|
}
|
||||||
1
vocab.json
Normal file
1
vocab.json
Normal file
File diff suppressed because one or more lines are too long
Reference in New Issue
Block a user