Initialize project; model provided by the ModelHub XC community
Model: hkust-nlp/WebExplorer-8B Source: Original Platform
This commit is contained in:
36
.gitattributes
vendored
Normal file
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
132
README.md
Normal file
@@ -0,0 +1,132 @@
---
base_model:
- Qwen/Qwen3-8B
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- LLM
- agent
paper: 2509.06501
---

# 🔍 WebExplorer-8B

[Paper](https://arxiv.org/abs/2509.06501)
[License](LICENSE)
[Code](https://github.com/hkust-nlp/WebExplorer)

A state-of-the-art 8B-parameter web agent model designed for complex information-seeking tasks and long-horizon reasoning.

## Paper Abstract

The paradigm of Large Language Models (LLMs) has increasingly shifted toward agentic applications, where web browsing capabilities are fundamental for retrieving information from diverse online sources. However, existing open-source web agents either demonstrate limited information-seeking abilities on complex tasks or lack transparent implementations. In this work, we identify that the key challenge lies in the scarcity of challenging data for information seeking. To address this limitation, we introduce WebExplorer: a systematic data generation approach using model-based exploration and iterative, long-to-short query evolution. This method creates challenging query-answer pairs that require multi-step reasoning and complex web navigation. By leveraging our curated high-quality dataset, we develop the advanced web agent WebExplorer-8B through supervised fine-tuning followed by reinforcement learning. Our model supports 128K context length and up to 100 tool-calling turns, enabling long-horizon problem solving. Across diverse information-seeking benchmarks, WebExplorer-8B achieves state-of-the-art performance at its scale. Notably, as an 8B-sized model, WebExplorer-8B is able to effectively search over an average of 16 turns after RL training, achieving higher accuracy than WebSailor-72B on BrowseComp-en/zh and attaining the best performance among models up to 100B parameters on WebWalkerQA and FRAMES. Beyond these information-seeking tasks, our model also achieves strong generalization on the HLE benchmark even though it is only trained on knowledge-intensive QA data. These results highlight our approach as a practical path toward long-horizon web agents.

## ✨ Key Features

- 🌐 **Long-horizon Reasoning**: Supports up to 128K context length and 100 tool-calling turns
- 🛠️ **Tool Utilization**: Masters search and browse functionalities
- 🏆 **State-of-the-art Performance**: Achieves best-in-class results among models under 10B parameters

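The 128K-context, 100-turn budget above implies a simple agent loop: call the model, execute any requested tool, and stop when the model answers or the turn budget runs out. A minimal sketch of such a loop, assuming hypothetical `model_step` and `run_tool` callables (this is illustrative, not the authors' harness):

```python
# Minimal agent-loop sketch: alternate model turns and tool calls
# until the model produces a final answer or the tool-calling
# budget from the model card (100 turns) is exhausted.

MAX_TURNS = 100  # budget stated on the model card

def run_agent(question, model_step, run_tool, max_turns=MAX_TURNS):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_turns):
        reply = model_step(messages)           # model's next message (dict)
        messages.append(reply)
        tool_call = reply.get("tool_call")
        if tool_call is None:                  # no tool requested: final answer
            return reply["content"], messages
        result = run_tool(tool_call["name"], tool_call["arguments"])
        messages.append({"role": "tool", "content": result})
    return None, messages                      # budget exhausted, no answer
```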
## 🏗️ Model Architecture

Built on the Qwen3-8B base model and trained in two phases:

1. **Supervised Fine-tuning (SFT)**: Cold-start initialization with high-quality trajectories
2. **Reinforcement Learning (RL)**: Further enhanced with the GRPO algorithm and progressive context expansion

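The core of GRPO is that each rollout's reward is normalized against the other rollouts in its own group, so no learned value baseline is needed. A simplified sketch of that advantage computation (an illustration of the named algorithm, not the authors' training code):

```python
import statistics

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages: normalize each rollout's reward
    by the mean and std of its own sampled group, GRPO's
    replacement for a learned value-function baseline."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]
```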
## 📊 Performance

WebExplorer-8B achieves state-of-the-art performance across multiple information-seeking benchmarks at its scale:

| Model | BC-en | BC-zh | GAIA | WebWalkerQA | FRAMES | Xbench-DS | HLE |
|-------|-------|-------|------|-------------|--------|-----------|-----|
| OpenAI-o3† | 50.9 | 58.1 | 70.5† | 71.7 | 84.0 | 66.7 | 20.2 |
| Claude-4-Sonnet† | 12.2 | 29.1 | 68.3† | 61.7 | 80.7 | 64.6 | 20.3 |
| GLM-4.5 | 26.4 | 37.5 | 66.0† | 65.6† | 78.9† | 70.0† | 21.2† |
| DeepSeek-V3.1 | 30.0 | 49.2 | 63.1† | 61.2† | 83.7 | 71.2 | 29.8 |
| Kimi-K2† | 14.1 | 28.8 | 57.7 | 63.0 | 72.0 | 50.0 | 18.1 |
|====|====|====|====|====|====|====|====|
| WebShaper-72B | - | - | **60.0** | 52.2 | - | - | - |
| WebShaper-32B (QwQ) | - | - | 53.3 | 49.7 | - | - | - |
| WebShaper-32B | - | - | 52.4 | 51.4 | - | - | - |
| WebSailor-72B | 12.0 | 30.1 | 55.4 | - | - | **55.0** | - |
| WebSailor-32B | 10.5 | 25.5 | 53.2 | - | - | 53.3 | - |
| WebSailor-7B | 6.7 | 14.2 | 33.0 | - | - | 34.3 | - |
| ASearcher-Web-QwQ | 5.2 | 15.6 | 52.8 | 34.3 | 70.9 | 42.1 | 12.5 |
| WebThinker-32B | 2.8 | - | 48.5 | 46.5 | - | - | 15.8 |
| MiroThinker-32B-DPO-v0.1 | 13.0 | 17.0 | 57.3 | 49.3 | 71.7 | - | 11.8 |
| MiroThinker-8B-DPO-v0.1 | 8.7 | 13.6 | 46.6 | 45.7 | 64.4 | - | - |
| WebExplorer-8B (SFT) | 7.9 | 21.3 | 43.7 | 59.8 | 72.6 | 47.5 | 16.0 |
| WebExplorer-8B (RL) | <u>**15.7**</u> | <u>**32.0**</u> | <u>50.0</u> | <u>**62.7**</u> | <u>**75.7**</u> | <u>53.7</u> | <u>**17.3**</u> |

Accuracy (%) of web agents on information-seeking benchmarks. BC-en and BC-zh denote BrowseComp-en and BrowseComp-zh, and Xbench-DS refers to Xbench-DeepSearch. **Bold** marks the best performance among open-source models under 100B parameters; <u>underlined</u> values mark the best among models under 10B parameters. All WebExplorer-8B scores are computed as Avg@4 using LLM-as-Judge. Entries marked with a dagger (†) were reproduced by us under our scaffold: a dagger on a model name applies to the entire row, while a dagger on a single number applies to that entry only.

## 🛠️ Tool Schema

WebExplorer-8B supports two tools for web interaction:

### 1. Browse Tool

```json
{
  "name": "browse",
  "type": "function",
  "description": "Extract specific information from a webpage",
  "parameters": {
    "type": "object",
    "properties": {
      "url": {
        "type": "string",
        "description": "Target URL to browse. The webpage content will be processed by the LLM for information extraction."
      },
      "query": {
        "type": "string",
        "description": "Specific query about the webpage content. The LLM will analyze the content to answer this query."
      }
    },
    "required": ["url", "query"]
  }
}
```

### 2. Search Tool

```json
{
  "name": "search",
  "type": "function",
  "description": "Perform web search queries",
  "parameters": {
    "type": "object",
    "properties": {
      "queries": {
        "type": "array",
        "items": {
          "type": "string"
        },
        "description": "List of search queries. Returns search results containing title, URL, and snippet for each query."
      }
    },
    "required": ["queries"]
  }
}
```

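On the host side, a tool call emitted by the model against these two schemas has to be parsed, validated against the `required` fields, and routed to a backend. A sketch of that dispatch step, where `do_search` and `do_browse` are hypothetical stand-ins for real search and browsing backends:

```python
import json

# Hypothetical backends; a real harness would hit a search API and a
# webpage-reader here.
def do_search(queries):
    return [{"query": q, "results": []} for q in queries]

def do_browse(url, query):
    return f"extracted answer for {query!r} from {url}"

TOOLS = {"search": do_search, "browse": do_browse}
# "required" arguments, taken from the two schemas above
REQUIRED = {"search": {"queries"}, "browse": {"url", "query"}}

def dispatch(tool_call_json):
    """Parse a JSON tool call, check required arguments, and route it."""
    call = json.loads(tool_call_json)
    name, args = call["name"], call["arguments"]
    missing = REQUIRED[name] - args.keys()
    if missing:
        raise ValueError(f"{name}: missing required arguments {missing}")
    return TOOLS[name](**args)
```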
## 📝 Citation

If you find our work useful, please consider citing:

```bibtex
@misc{liu2025webexplorer,
  title={WebExplorer: Explore and Evolve for Training Long-Horizon Web Agents},
  author={Junteng Liu and Yunji Li and Chi Zhang and Jingyang Li and Aili Chen and Ke Ji and Weiyu Cheng and Zijia Wu and Chengyu Du and Qidi Xu and Jiayuan Song and Zhengmao Zhu and Wenhu Chen and Pengyu Zhao and Junxian He},
  year={2025},
  eprint={2509.06501},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2509.06501}
}
```
28
added_tokens.json
Normal file
@@ -0,0 +1,28 @@
{
  "</think>": 151668,
  "</tool_call>": 151658,
  "</tool_response>": 151666,
  "<think>": 151667,
  "<tool_call>": 151657,
  "<tool_response>": 151665,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
34
config.json
Normal file
@@ -0,0 +1,34 @@
{
  "architectures": [
    "Qwen3ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "eos_token_id": 151645,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 12288,
  "max_position_embeddings": 131072,
  "max_window_layers": 36,
  "model_type": "qwen3",
  "num_attention_heads": 32,
  "num_hidden_layers": 36,
  "num_key_value_heads": 8,
  "pad_token_id": 151643,
  "rms_norm_eps": 1e-06,
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "rope_type": "yarn"
  },
  "rope_theta": 1000000,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.51.1",
  "use_cache": false,
  "use_sliding_window": false,
  "vocab_size": 151936
}
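The `rope_scaling` block in this config is what stretches the native 32K window to the advertised 128K (YaRN with factor 4.0), and the head geometry is grouped-query attention. The arithmetic checks out:

```python
# Consistency checks on values from config.json above.
original = 32768             # original_max_position_embeddings
factor = 4.0                 # rope_scaling.factor (YaRN)
extended = int(original * factor)   # equals max_position_embeddings

num_heads, head_dim, hidden = 32, 128, 4096
attn_width = num_heads * head_dim   # equals hidden_size
kv_groups = num_heads // 8          # 8 KV heads: 4 query heads share each KV head
```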
13
generation_config.json
Normal file
@@ -0,0 +1,13 @@
{
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "temperature": 0.6,
  "top_k": 20,
  "top_p": 0.95,
  "transformers_version": "4.51.1"
}
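The defaults above combine three filters: temperature 0.6 sharpens the distribution, top-k keeps the 20 most probable tokens, and top-p keeps the smallest prefix covering 95% of the remaining mass. A stdlib-only sketch of that filtering pipeline applied to one logit vector (illustrative; real decoders apply this per step on the GPU):

```python
import math

def filter_logits(logits, temperature=0.6, top_k=20, top_p=0.95):
    """Apply the generation_config sampling filters to one logit vector:
    temperature scaling, then top-k truncation, then nucleus (top-p) cut.
    Returns {token_index: renormalized probability} over the survivors."""
    scaled = [l / temperature for l in logits]
    z = max(scaled)                              # stabilize the softmax
    probs = [math.exp(s - z) for s in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]
    # top-k: keep the k most probable tokens
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    # top-p: keep the smallest prefix whose cumulative mass reaches top_p
    kept, cum = [], 0.0
    for i in ranked:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}
```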
151388
merges.txt
Normal file
File diff suppressed because it is too large
3
model-00001-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:14eb5282d64684ced2f86f8dd3ccabb392bf3cf1b018a18617956b79f2179fd7
size 3976495200
3
model-00002-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:61433289212d76317dc5190c2d6fb69c4f4db96646f2679c49a60503198680a2
size 4910603904
3
model-00003-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:33aa794691549d70df71694f6dd3c37f8e7c440b54a404b36f67b3cf2161ff02
size 4994497512
3
model-00004-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c6764d5c8d4594e6776e53af7350c8e0cc752a798dffaab9255aa0cc8f90ae53
size 2499920152
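The four LFS pointers above record the shard sizes on disk. They sum to slightly more than the `total_size` reported in `model.safetensors.index.json` below; the gap is plausibly the per-shard safetensors header bytes, since `total_size` counts tensor data only. The tensor bytes also imply roughly 8.19B bfloat16 parameters:

```python
# Shard sizes in bytes, copied from the four LFS pointers above.
shard_sizes = [3976495200, 4910603904, 4994497512, 2499920152]
total_file_bytes = sum(shard_sizes)

tensor_bytes = 16381470720           # "total_size" from model.safetensors.index.json
header_overhead = total_file_bytes - tensor_bytes  # likely safetensors headers
params = tensor_bytes // 2           # bfloat16: 2 bytes per parameter, ~8.19B
```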
406
model.safetensors.index.json
Normal file
@@ -0,0 +1,406 @@
{
  "metadata": {
    "total_size": 16381470720
  },
  "weight_map": {
    "lm_head.weight": "model-00002-of-00004.safetensors",
    "model.embed_tokens.weight": "model-00003-of-00004.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.0.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.0.self_attn.q_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.k_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.10.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.11.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.12.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.12.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.13.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.13.self_attn.q_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.14.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.14.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.15.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.17.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.19.self_attn.q_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.2.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.2.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.20.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.20.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.21.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.21.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.22.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.23.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.23.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.24.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.24.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.25.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.25.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.26.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.26.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.27.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.27.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.28.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.28.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.28.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.28.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.28.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.28.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.28.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.28.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.28.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.28.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.28.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.29.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.29.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.29.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.29.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.29.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.29.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.29.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.29.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.3.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.3.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.30.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.30.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.30.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.30.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.30.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.30.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.30.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.31.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.31.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.31.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.31.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.31.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.31.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.31.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.31.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.31.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.31.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.31.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.32.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.32.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.32.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.32.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.32.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.32.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.32.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.32.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.32.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.32.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.32.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.33.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.33.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.33.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.33.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.33.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.33.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.33.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.33.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.33.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.33.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.33.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.34.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.34.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.34.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.34.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.34.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.34.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.34.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.34.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.34.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.34.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.34.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.35.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.35.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.35.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.35.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.35.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.35.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.35.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.35.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.35.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.35.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.35.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.4.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.4.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.4.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.4.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.4.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.5.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.5.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.5.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.5.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.input_layernorm.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.6.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.6.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.6.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.6.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.6.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.6.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.7.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.7.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.self_attn.k_norm.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.7.self_attn.q_norm.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.7.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.7.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.8.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.8.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.8.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.8.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.8.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.8.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.8.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.9.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.9.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.9.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.9.self_attn.k_norm.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.9.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.9.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.9.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.9.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.norm.weight": "model-00003-of-00004.safetensors"
|
||||
}
|
||||
}
|
||||
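The mapping above is the `weight_map` of `model.safetensors.index.json`: each tensor name points to the shard file that stores it. A minimal sketch (using a hypothetical three-entry excerpt, not the full map) of how a loader can invert the map to plan one read pass per shard:

```python
from collections import defaultdict

# Hypothetical three-entry excerpt of the "weight_map" shown above.
weight_map = {
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.norm.weight": "model-00003-of-00004.safetensors",
}

def tensors_by_shard(weight_map):
    """Invert tensor -> shard into shard -> sorted tensor names."""
    shards = defaultdict(list)
    for tensor, shard in weight_map.items():
        shards[shard].append(tensor)
    return {shard: sorted(names) for shard, names in shards.items()}

shards = tensors_by_shard(weight_map)
# A loader would open each shard file once and read only its own tensors.
```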
31
special_tokens_map.json
Normal file
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
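Entries in special_tokens_map.json may be plain strings or AddedToken-style dicts like the eos_token/pad_token objects above. A small sketch (inlining a truncated copy of the file, not reading it from disk) of normalizing both forms:

```python
# Truncated inline copy of the special_tokens_map.json shown above.
special_tokens_map = {
    "additional_special_tokens": ["<|im_start|>", "<|im_end|>"],
    "eos_token": {"content": "<|im_end|>", "normalized": False},
    "pad_token": {"content": "<|endoftext|>", "normalized": False},
}

def token_str(entry):
    """Special-token entries are either raw strings or dicts with a 'content' key."""
    return entry["content"] if isinstance(entry, dict) else entry

eos = token_str(special_tokens_map["eos_token"])
pad = token_str(special_tokens_map["pad_token"])
```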
3
tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
size 11422654
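tokenizer.json is stored as a Git LFS pointer like the three lines above, so the repo holds only the oid and size while the real payload lives in LFS storage. A sketch of parsing such a pointer so the digest can be checked against a downloaded file:

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into its space-separated key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
size 11422654"""

info = parse_lfs_pointer(pointer)
algo, _, digest = info["oid"].partition(":")
# A downloaded tokenizer.json could then be verified with
# hashlib.sha256(data).hexdigest() == digest and len(data) == int(info["size"]).
```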
240
tokenizer_config.json
Normal file
@@ -0,0 +1,240 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151646": {
      "content": "<|object_ref_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151647": {
      "content": "<|object_ref_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151648": {
      "content": "<|box_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151649": {
      "content": "<|box_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151650": {
      "content": "<|quad_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151651": {
      "content": "<|quad_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151652": {
      "content": "<|vision_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
"151653": {
|
||||
"content": "<|vision_end|>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": true
|
||||
},
|
||||
"151654": {
|
||||
"content": "<|vision_pad|>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": true
|
||||
},
|
||||
"151655": {
|
||||
"content": "<|image_pad|>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": true
|
||||
},
|
||||
"151656": {
|
||||
"content": "<|video_pad|>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": true
|
||||
},
|
||||
"151657": {
|
||||
"content": "<tool_call>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": false
|
||||
},
|
||||
"151658": {
|
||||
"content": "</tool_call>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": false
|
||||
},
|
||||
"151659": {
|
||||
"content": "<|fim_prefix|>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": false
|
||||
},
|
||||
"151660": {
|
||||
"content": "<|fim_middle|>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": false
|
||||
},
|
||||
"151661": {
|
||||
"content": "<|fim_suffix|>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": false
|
||||
},
|
||||
"151662": {
|
||||
"content": "<|fim_pad|>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": false
|
||||
},
|
||||
"151663": {
|
||||
"content": "<|repo_name|>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": false
|
||||
},
|
||||
"151664": {
|
||||
"content": "<|file_sep|>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": false
|
||||
},
|
||||
"151665": {
|
||||
"content": "<tool_response>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": false
|
||||
},
|
||||
"151666": {
|
||||
"content": "</tool_response>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": false
|
||||
},
|
||||
"151667": {
|
||||
"content": "<think>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": false
|
||||
},
|
||||
"151668": {
|
||||
"content": "</think>",
|
||||
"lstrip": false,
|
||||
"normalized": false,
|
||||
"rstrip": false,
|
||||
"single_word": false,
|
||||
"special": false
|
||||
}
|
||||
},
|
||||
"additional_special_tokens": [
|
||||
"<|im_start|>",
|
||||
"<|im_end|>",
|
||||
"<|object_ref_start|>",
|
||||
"<|object_ref_end|>",
|
||||
"<|box_start|>",
|
||||
"<|box_end|>",
|
||||
"<|quad_start|>",
|
||||
"<|quad_end|>",
|
||||
"<|vision_start|>",
|
||||
"<|vision_end|>",
|
||||
"<|vision_pad|>",
|
||||
"<|image_pad|>",
|
||||
"<|video_pad|>"
|
||||
],
|
||||
"bos_token": null,
|
||||
"chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0].role == 'system' %}\n {{- messages[0].content + '\\n\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0].content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- if ns.multi_step_tool and message.role == \"user\" and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set content = message.content %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is defined and message.reasoning_content is not none %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in message.content %}\n {%- set content = message.content.split('</think>')[-1].lstrip('\\n') %}\n {%- set reasoning_content = 
message.content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- if loop.index0 > ns.last_query_index %}\n {%- if loop.last or (not loop.last and reasoning_content) %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content.strip('\\n') + '\\n</think>\\n\\n' + content.lstrip('\\n') }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls %}\n {%- for tool_call in message.tool_calls %}\n {%- if (loop.first and content) or (not loop.first) %}\n {{- '\\n' }}\n {%- endif %}\n {%- if tool_call.function %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {%- if tool_call.arguments is string %}\n {{- tool_call.arguments }}\n {%- else %}\n {{- tool_call.arguments | tojson }}\n {%- endif %}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n {%- if enable_thinking is defined and enable_thinking is false %}\n {{- '<think>\\n\\n</think>\\n\\n' }}\n {%- endif %}\n{%- endif %}",
|
||||
"clean_up_tokenization_spaces": false,
|
||||
"eos_token": "<|im_end|>",
|
||||
"errors": "replace",
|
||||
"extra_special_tokens": {},
|
||||
"model_max_length": 131072,
|
||||
"pad_token": "<|endoftext|>",
|
||||
"split_special_tokens": false,
|
||||
"tokenizer_class": "Qwen2Tokenizer",
|
||||
"unk_token": null
|
||||
}
|
||||
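The chat_template above recovers reasoning_content from an assistant message by splitting on </think> (and <think>). A standalone Python sketch of just that split logic; rendering the full template should go through the tokenizer's apply_chat_template instead:

```python
def split_think(content):
    """Mirror the template's <think> handling: return (reasoning, final_answer)."""
    if "</think>" not in content:
        return "", content
    answer = content.split("</think>")[-1].lstrip("\n")
    reasoning = (
        content.split("</think>")[0].rstrip("\n").split("<think>")[-1].lstrip("\n")
    )
    return reasoning, answer

reasoning, answer = split_think("<think>\nstep by step\n</think>\n\nThe answer is 4.")
```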
1
vocab.json
Normal file
File diff suppressed because one or more lines are too long