diff --git a/.gitattributes b/.gitattributes
index 15ba2c6..57fe00d 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -45,3 +45,10 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+
+merges.txt filter=lfs diff=lfs merge=lfs -text
+vocab.json filter=lfs diff=lfs merge=lfs -text
+evisrag.png filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
+model-00001-of-00002.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00002-of-00002.safetensors filter=lfs diff=lfs merge=lfs -text
\ No newline at end of file
diff --git a/README.md b/README.md
index 6943427..71de8a7 100644
--- a/README.md
+++ b/README.md
@@ -1,48 +1,180 @@
---
-license: Apache License 2.0
-tags: []
-
-#model-type:
-## e.g. gpt, phi, llama, chatglm, baichuan, etc.
-#- gpt
-
-#domain:
-## e.g. nlp, cv, audio, multi-modal
-#- nlp
-
-#language:
-## List of language codes: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
-#- cn
-
-#metrics:
-## e.g. CIDEr, BLEU, ROUGE, etc.
-#- CIDEr
-
-#tags:
-## custom tags, including training methods such as pretrained, fine-tuned, instruction-tuned, RL-tuned, and others
-#- pretrained
-
-#tools:
-## e.g. vllm, fastchat, llamacpp, AdaSeq, etc.
-#- vllm
+license: apache-2.0
+language:
+- en
+base_model:
+- Qwen/Qwen2.5-VL-3B-Instruct
+datasets:
+- openbmb/EVisRAG-Train
---
-### The contributors of this model have not provided a more detailed model introduction. The model files and weights can be found on the "Model Files" page.
-#### You can download the model with the git clone command below, or via the ModelScope SDK
-SDK download
+# VisRAG 2.0: Evidence-Guided Multi-Image Reasoning in Visual Retrieval-Augmented Generation
+[GitHub](https://github.com/OpenBMB/VisRAG)
+[Paper](https://arxiv.org/abs/2510.09733)
+[Model](https://huggingface.co/openbmb/EVisRAG-7B)
+
+
+📖 Introduction • 🎉 News • ⚙️ Setup • ⚡️ Training • 📃 Evaluation • 🔧 Usage • 📄 License • 📧 Contact
+
+# 📖 Introduction
+**EVisRAG (VisRAG 2.0)** is an evidence-guided visual retrieval-augmented generation framework that equips VLMs for multi-image questions: the model first observes each retrieved image in language to collect per-image evidence, then reasons over those cues to answer. **EVisRAG** is trained with Reward-Scoped GRPO (RS-GRPO), which applies fine-grained token-level rewards to jointly optimize visual perception and reasoning.
+
+
+
+# 🎉 News
+
+* 2025-10-01: Released **EVisRAG (VisRAG 2.0)**, an end-to-end Vision-Language Model, along with our [Paper](https://arxiv.org/abs/2510.09733) on arXiv, our [Model](https://huggingface.co/openbmb/EVisRAG-7B) on Hugging Face, and our [Code](https://github.com/OpenBMB/VisRAG) on GitHub.
+
+# ✨ EVisRAG Pipeline
+
+**EVisRAG** is an end-to-end framework that equips VLMs with precise visual perception during reasoning in multi-image scenarios. We trained and released VLRMs with EVisRAG built on [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) and [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
+
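+As a quick reference, the sketch below loads the released weights with plain `transformers` instead of vLLM. This is a minimal illustration rather than a repo script: the `Qwen2_5_VLForConditionalGeneration` architecture name comes from this repo's `config.json`, while the bfloat16 dtype and `device_map="auto"` are assumptions (the checkpoint itself is stored in float32).
+
+```python
+import torch
+from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
+
+# Local path to this repo's weights, or a released checkpoint id
+# such as openbmb/EVisRAG-7B (see the Usage section below).
+model_path = "openbmb/EVisRAG-7B"
+
+model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
+    model_path,
+    torch_dtype=torch.bfloat16,  # assumption; halves memory vs. the stored float32
+    device_map="auto",
+)
+processor = AutoProcessor.from_pretrained(model_path)
+```
+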
+# ⚙️ Setup
```bash
-#安装ModelScope
-pip install modelscope
+git clone https://github.com/OpenBMB/VisRAG.git
+conda create --name EVisRAG python==3.10
+conda activate EVisRAG
+cd VisRAG  # the clone above checks out into VisRAG/
+pip install -r EVisRAG_requirements.txt
```
-```python
-# Download the model via the SDK
-from modelscope import snapshot_download
-model_dir = snapshot_download('OpenBMB/EVisRAG-3B')
-```
-Git download
-```
-# Download the model via Git
-git clone https://www.modelscope.cn/OpenBMB/EVisRAG-3B.git
+# ⚡️ Training
+
+***Stage 1: SFT*** (based on [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory))
+
+```bash
+git clone https://github.com/hiyouga/LLaMA-Factory.git
+bash evisrag_scripts/full_sft.sh
```
-If you are a contributor to this model, we invite you to complete the model card promptly, following the model contribution documentation.
\ No newline at end of file
+***Stage 2: RS-GRPO*** (based on [Easy-R1](https://github.com/hiyouga/EasyR1))
+
+```bash
+bash evisrag_scripts/run_rsgrpo.sh
+```
+
+Notes:
+
+1. The training data is available on Hugging Face under `EVisRAG-Train`, as referenced at the beginning of this page.
+2. We adopt a two-stage training strategy. In the first stage, clone `LLaMA-Factory` and update the model path in the `full_sft.sh` script. In the second stage, we built our customized algorithm `RS-GRPO` on top of `Easy-R1`, designed specifically for EVisRAG; its implementation can be found in `src/RS-GRPO`. An illustrative sketch of the reward scoping follows below.
+
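+As an illustration of the reward scoping described above, the toy sketch below assigns each per-segment reward only to the token span it assesses, leaving all other positions at zero. This is illustrative only, not the actual `src/RS-GRPO` implementation; the span boundaries and reward values are made-up assumptions.
+
+```python
+import torch
+
+def scoped_token_rewards(spans, rewards, seq_len):
+    """Toy token-level reward scoping.
+
+    spans:   {"evidence": (start, end), "answer": (start, end)} token index ranges
+    rewards: {"evidence": float, "answer": float} per-segment scores
+    Returns a per-token reward vector that is zero outside the scored spans.
+    """
+    out = torch.zeros(seq_len)
+    for name, (start, end) in spans.items():
+        out[start:end] = rewards[name]  # each reward lands only on its own span
+    return out
+
+# E.g., a perception reward on the evidence tokens and an answer reward
+# on the answer tokens (all numbers are placeholders):
+token_rewards = scoped_token_rewards(
+    spans={"evidence": (40, 120), "answer": (180, 200)},
+    rewards={"evidence": 0.5, "answer": 1.0},
+    seq_len=256,
+)
+```
+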
+# 📃 Evaluation
+```bash
+bash evisrag_scripts/predict.sh
+bash evisrag_scripts/eval.sh
+```
+
+Notes:
+
+1. The test data is available on Hugging Face under `EVisRAG-Test-xxx`, as referenced at the beginning of this page.
+2. To run the evaluation, first execute the `predict.sh` script; the model outputs will be saved in the `preds` directory. Then use the `eval.sh` script to evaluate the predictions. The metrics `EM`, `Accuracy`, and `F1` are reported directly (a reference sketch of `EM` and `F1` follows below).
+
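+As a reference, here is a minimal sketch of how `EM` and token-level `F1` are commonly computed for QA outputs; the repo's own evaluation scripts may normalize answers differently, so treat this as an approximation:
+
+```python
+from collections import Counter
+
+def exact_match(pred: str, gold: str) -> float:
+    # 1.0 iff the normalized prediction equals the normalized gold answer
+    return float(pred.strip().lower() == gold.strip().lower())
+
+def token_f1(pred: str, gold: str) -> float:
+    # Harmonic mean of token-overlap precision and recall
+    p, g = pred.lower().split(), gold.lower().split()
+    overlap = sum((Counter(p) & Counter(g)).values())
+    if overlap == 0:
+        return 0.0
+    precision, recall = overlap / len(p), overlap / len(g)
+    return 2 * precision * recall / (precision + recall)
+```
+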
+# 🔧 Usage
+
+Model on Hugging Face: https://huggingface.co/openbmb/EVisRAG-7B
+
+```python
+from transformers import AutoProcessor
+from vllm import LLM, SamplingParams
+from qwen_vl_utils import process_vision_info
+
+def evidence_prompt_grpo(query):
+ return f"""You are an AI Visual QA assistant. I will provide you with a question and several images. Please follow the four steps below:
+
+Step 1: Observe the Images
+First, analyze the question and consider what types of images may contain relevant information. Then, examine each image one by one, paying special attention to aspects related to the question. Identify whether each image contains any potentially relevant information.
+Wrap your observations within tags.
+
+Step 2: Record Evidences from Images
+After reviewing all images, record the evidence you find for each image within tags.
+If you are certain that an image contains no relevant information, record it as: [i]: no relevant information (where i denotes the index of the image).
+If an image contains relevant evidence, record it as: [j]: [the evidence you find for the question] (where j is the index of the image).
+
+Step 3: Reason Based on the Question and Evidences
+Based on the recorded evidences, reason about the answer to the question.
+Include your step-by-step reasoning within tags.
+
+Step 4: Answer the Question
+Provide your final answer based only on the evidences you found in the images.
+Wrap your answer within tags.
+Avoid adding unnecessary content in your final answer; for example, if the question is a yes/no question, simply answer "yes" or "no".
+If none of the images contain sufficient information to answer the question, respond with "insufficient to answer".
+
+Formatting Requirements:
+Use the exact tags , , , and for structured output.
+It is possible that none, one, or several images contain relevant evidence.
+If you find no evidence, or the evidence you find is insufficient to answer the question, follow the instruction above for insufficient information.
+
+Question and images are provided below. Please follow the steps as instructed.
+Question: {query}
+"""
+
+model_path = "xxx"  # path to the EVisRAG checkpoint (local directory or Hugging Face repo id)
+processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True, padding_side='left')
+
+imgs, query = ["imgpath1", "imgpath2", ..., "imgpathX"], "What xxx?"  # placeholder image paths and question
+input_prompt = evidence_prompt_grpo(query)
+
+content = [{"type": "text", "text": input_prompt}]
+for imgP in imgs:
+ content.append({
+ "type": "image",
+ "image": imgP
+ })
+msg = [{
+ "role": "user",
+ "content": content,
+ }]
+
+llm = LLM(
+ model=model_path,
+ tensor_parallel_size=1,
+ dtype="bfloat16",
+    limit_mm_per_prompt={"image": 5, "video": 0},
+)
+
+sampling_params = SamplingParams(
+ temperature=0.1,
+ repetition_penalty=1.05,
+ max_tokens=2048,
+)
+
+prompt = processor.apply_chat_template(
+ msg,
+ tokenize=False,
+ add_generation_prompt=True,
+)
+
+image_inputs, _ = process_vision_info(msg)
+
+msg_input = [{
+ "prompt": prompt,
+ "multi_modal_data": {"image": image_inputs},
+}]
+
+output_texts = llm.generate(msg_input, sampling_params=sampling_params)
+
+print(output_texts[0].outputs[0].text)
+```
+
+
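+The generated text follows the tagged structure requested by the prompt above. As a minimal post-processing sketch, the snippet below extracts the final answer from the output; the `<answer>` tag name is an assumption (the literal tag strings were lost when this page was rendered), so substitute the tags your checkpoint actually emits.
+
+```python
+import re
+
+def extract_answer(text: str, tag: str = "answer") -> str:
+    # Assumed tag name; replace with the tag actually used by the released prompt.
+    match = re.search(rf"<{tag}>(.*?)</{tag}>", text, flags=re.DOTALL)
+    return match.group(1).strip() if match else "insufficient to answer"
+
+final_answer = extract_answer(output_texts[0].outputs[0].text)
+print(final_answer)
+```
+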
+# 📄 License
+
+* The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
+* Use of the **EVisRAG** model weights must strictly follow the [MiniCPM Model License](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md).
+
+# 📧 Contact
+## EVisRAG
+- Yubo Sun: syb2000417@stu.pku.edu.cn
+- Chunyi Peng: hm.cypeng@gmail.com
\ No newline at end of file
diff --git a/added_tokens.json b/added_tokens.json
new file mode 100644
index 0000000..482ced4
--- /dev/null
+++ b/added_tokens.json
@@ -0,0 +1,24 @@
+{
+  "</tool_call>": 151658,
+  "<tool_call>": 151657,
+ "<|box_end|>": 151649,
+ "<|box_start|>": 151648,
+ "<|endoftext|>": 151643,
+ "<|file_sep|>": 151664,
+ "<|fim_middle|>": 151660,
+ "<|fim_pad|>": 151662,
+ "<|fim_prefix|>": 151659,
+ "<|fim_suffix|>": 151661,
+ "<|im_end|>": 151645,
+ "<|im_start|>": 151644,
+ "<|image_pad|>": 151655,
+ "<|object_ref_end|>": 151647,
+ "<|object_ref_start|>": 151646,
+ "<|quad_end|>": 151651,
+ "<|quad_start|>": 151650,
+ "<|repo_name|>": 151663,
+ "<|video_pad|>": 151656,
+ "<|vision_end|>": 151653,
+ "<|vision_pad|>": 151654,
+ "<|vision_start|>": 151652
+}
diff --git a/chat_template.json b/chat_template.json
new file mode 100644
index 0000000..13303be
--- /dev/null
+++ b/chat_template.json
@@ -0,0 +1,3 @@
+{
+ "chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|vision_start|><|image_pad|><|vision_end|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|vision_start|><|video_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
+}
diff --git a/config.json b/config.json
new file mode 100644
index 0000000..48d2545
--- /dev/null
+++ b/config.json
@@ -0,0 +1,65 @@
+{
+ "architectures": [
+ "Qwen2_5_VLForConditionalGeneration"
+ ],
+ "attention_dropout": 0.0,
+ "eos_token_id": 151645,
+ "hidden_act": "silu",
+ "hidden_size": 2048,
+ "image_token_id": 151655,
+ "initializer_range": 0.02,
+ "intermediate_size": 11008,
+ "max_position_embeddings": 128000,
+ "max_window_layers": 70,
+ "model_type": "qwen2_5_vl",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 36,
+ "num_key_value_heads": 2,
+ "pad_token_id": 151643,
+ "rms_norm_eps": 1e-06,
+ "rope_scaling": {
+ "mrope_section": [
+ 16,
+ 24,
+ 24
+ ],
+ "rope_type": "default",
+ "type": "default"
+ },
+ "rope_theta": 1000000.0,
+ "sliding_window": 32768,
+ "tie_word_embeddings": true,
+ "torch_dtype": "float32",
+ "transformers_version": "4.51.3",
+ "use_cache": false,
+ "use_sliding_window": false,
+ "video_token_id": 151656,
+ "vision_config": {
+ "depth": 32,
+ "fullatt_block_indexes": [
+ 7,
+ 15,
+ 23,
+ 31
+ ],
+ "hidden_act": "silu",
+ "hidden_size": 1280,
+ "in_channels": 3,
+ "in_chans": 3,
+ "intermediate_size": 3420,
+ "model_type": "qwen2_5_vl",
+ "num_heads": 16,
+ "out_hidden_size": 2048,
+ "patch_size": 14,
+ "spatial_merge_size": 2,
+ "spatial_patch_size": 14,
+ "temporal_patch_size": 2,
+ "tokens_per_second": 2,
+ "torch_dtype": "float32",
+ "window_size": 112
+ },
+ "vision_end_token_id": 151653,
+ "vision_start_token_id": 151652,
+ "vision_token_id": 151654,
+ "vocab_size": 151936
+}
diff --git a/configuration.json b/configuration.json
new file mode 100644
index 0000000..159097f
--- /dev/null
+++ b/configuration.json
@@ -0,0 +1 @@
+{"framework": "pytorch", "task": "others", "allow_remote": true}
\ No newline at end of file
diff --git a/evisrag.png b/evisrag.png
new file mode 100644
index 0000000..b712b8f
--- /dev/null
+++ b/evisrag.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:235ac213de87b1bfcb2669a9ddf78ddb5c139bbd67d220da546d9045091dea81
+size 1979876
diff --git a/generation_config.json b/generation_config.json
new file mode 100644
index 0000000..889dcf8
--- /dev/null
+++ b/generation_config.json
@@ -0,0 +1,7 @@
+{
+ "_from_model_config": true,
+ "eos_token_id": 151645,
+ "pad_token_id": 151643,
+ "transformers_version": "4.51.3",
+ "use_cache": false
+}
diff --git a/merges.txt b/merges.txt
new file mode 100644
index 0000000..80c1a19
--- /dev/null
+++ b/merges.txt
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8831e4f1a044471340f7c0a83d7bd71306a5b867e95fd870f74d0c5308a904d5
+size 1671853
diff --git a/model-00001-of-00002.safetensors b/model-00001-of-00002.safetensors
new file mode 100644
index 0000000..4ffe33a
--- /dev/null
+++ b/model-00001-of-00002.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c6b927982855c90e8031dbfa52e1d5bcbd3dfbcbe7d9a09dd92b8231ffd314c
+size 4996207200
diff --git a/model-00002-of-00002.safetensors b/model-00002-of-00002.safetensors
new file mode 100644
index 0000000..b2b99ed
--- /dev/null
+++ b/model-00002-of-00002.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aff42bee4b0eccb0481d56fa7378fcb05405d98c7f218c5996675988449ff355
+size 3135460800
diff --git a/model.safetensors.index.json b/model.safetensors.index.json
new file mode 100644
index 0000000..3d76a6c
--- /dev/null
+++ b/model.safetensors.index.json
@@ -0,0 +1,832 @@
+{
+ "metadata": {
+ "total_size": 8131575808
+ },
+ "weight_map": {
+ "lm_head.weight": "model-00001-of-00002.safetensors",
+ "model.embed_tokens.weight": "model-00002-of-00002.safetensors",
+ "model.layers.0.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.1.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.11.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.11.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.12.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.13.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.14.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.16.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.17.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.17.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.18.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.18.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.18.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.19.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.2.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.2.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.2.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.2.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.20.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.21.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.21.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.22.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.22.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.22.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.22.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.22.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.22.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.22.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.22.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.22.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.23.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.23.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.23.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.23.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.23.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.23.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.23.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.23.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.23.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.24.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.24.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.24.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.24.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.24.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.24.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.24.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.24.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.24.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.24.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.25.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.25.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.25.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.25.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.25.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.25.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.25.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.25.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.26.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.26.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.26.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.26.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.26.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.26.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.27.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.27.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.27.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.27.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.27.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.27.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.27.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.28.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.28.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.28.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.28.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.28.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.28.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.28.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.28.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.28.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.28.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.28.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.28.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.29.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.29.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.29.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.29.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.29.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.29.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.3.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.3.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.30.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.30.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.30.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.30.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.30.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.30.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.30.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.30.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.30.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.30.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.30.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.30.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.31.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.31.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.31.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.31.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.31.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.31.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.31.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.31.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.31.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.31.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.31.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.31.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.32.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.32.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.32.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.32.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.32.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.32.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.32.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.32.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.32.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.32.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.32.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.32.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.33.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.33.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.33.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.33.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.33.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.33.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.33.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.33.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.33.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.33.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.33.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.33.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.34.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.34.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.34.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.34.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.34.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.34.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.34.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.34.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.34.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.34.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.34.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.34.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.35.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.35.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.35.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.35.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.35.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.35.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.35.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.35.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.35.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.35.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.35.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.35.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.4.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.4.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.4.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.4.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.4.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.4.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.5.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.6.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.6.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.6.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.7.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.7.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.7.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.7.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.7.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.9.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+ "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+ "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.norm.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.0.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.0.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.0.attn.qkv.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.0.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.0.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.0.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.0.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.0.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.0.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.0.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.1.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.1.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.1.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.1.attn.qkv.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.1.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.1.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.1.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.1.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.1.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.1.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.1.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.10.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.10.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.10.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.10.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.10.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.10.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.10.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.10.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.10.norm2.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.11.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.11.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.11.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.11.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.11.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.11.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.11.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.11.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.11.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.11.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.11.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.12.attn.proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.12.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.12.attn.qkv.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.12.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.12.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.12.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.12.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.12.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.12.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.12.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.13.attn.proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.13.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.13.attn.qkv.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.13.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.13.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.13.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.13.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.13.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.13.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.14.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.14.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.14.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.14.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.14.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.14.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.14.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.14.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.14.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.14.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.15.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.15.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.15.attn.qkv.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.15.attn.qkv.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.15.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.15.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.15.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.15.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.15.norm2.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.16.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.16.attn.proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.16.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.16.attn.qkv.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.16.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.16.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.16.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.16.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.16.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.17.attn.proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.17.attn.proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.17.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.17.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.17.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.17.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.17.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.17.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.17.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.18.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.18.attn.proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.18.attn.qkv.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.18.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.18.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.18.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.18.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.18.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.18.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.18.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.19.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.19.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.19.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.19.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.19.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.19.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.19.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.19.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.19.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.19.norm2.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.2.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.2.attn.proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.2.attn.qkv.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.2.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.2.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.2.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.2.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.2.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.2.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.2.norm2.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.20.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.20.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.20.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.20.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.20.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.20.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.20.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.20.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.20.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.20.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.20.norm2.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.21.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.21.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.21.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.21.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.21.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.21.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.21.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.21.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.21.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.21.norm2.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.22.attn.proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.22.attn.proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.22.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.22.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.22.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.22.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.22.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.22.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.22.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.22.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.22.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.22.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.23.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.23.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.23.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.23.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.23.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.23.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.23.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.23.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.23.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.23.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.23.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.23.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.24.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.24.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.24.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.24.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.24.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.24.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.24.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.24.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.24.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.24.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.24.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.24.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.25.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.25.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.25.attn.qkv.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.25.attn.qkv.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.25.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.25.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.25.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.25.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.25.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.25.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.25.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.25.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.26.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.26.attn.proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.26.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.26.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.26.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.26.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.26.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.26.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.26.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.26.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.26.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.26.norm2.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.27.attn.proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.27.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.27.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.27.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.27.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.27.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.27.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.27.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.27.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.27.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.27.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.27.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.28.attn.proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.28.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.28.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.28.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.28.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.28.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.28.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.28.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.28.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.28.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.28.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.28.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.29.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.29.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.29.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.29.attn.qkv.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.29.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.29.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.29.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.29.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.29.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.29.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.29.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.29.norm2.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.3.attn.proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.3.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.3.attn.qkv.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.3.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.3.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.3.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.3.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.3.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.3.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.3.norm2.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.30.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.30.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.30.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.30.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.30.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.30.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.30.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.30.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.30.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.30.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.30.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.30.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.31.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.31.attn.proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.31.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.31.attn.qkv.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.31.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.31.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.31.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.31.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.31.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.31.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.31.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.31.norm2.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.4.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.4.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.4.attn.qkv.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.4.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.4.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.4.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.4.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.4.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.4.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.5.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.5.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.5.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.5.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.5.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.5.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.5.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.5.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.5.norm2.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.6.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.6.attn.proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.6.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.6.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.6.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.6.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.6.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.6.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.6.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.7.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.7.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.7.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.7.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.7.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.7.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.7.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.7.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.7.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.7.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.7.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.8.attn.proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.8.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.8.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.8.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.8.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.8.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.8.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.8.norm1.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.8.norm2.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.9.attn.proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.9.attn.proj.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.9.attn.qkv.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.9.attn.qkv.weight": "model-00001-of-00002.safetensors",
+ "visual.blocks.9.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.9.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.9.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
+ "visual.blocks.9.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.9.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
+ "visual.blocks.9.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.9.norm1.weight": "model-00002-of-00002.safetensors",
+ "visual.blocks.9.norm2.weight": "model-00002-of-00002.safetensors",
+ "visual.merger.ln_q.weight": "model-00002-of-00002.safetensors",
+ "visual.merger.mlp.0.bias": "model-00001-of-00002.safetensors",
+ "visual.merger.mlp.0.weight": "model-00001-of-00002.safetensors",
+ "visual.merger.mlp.2.bias": "model-00002-of-00002.safetensors",
+ "visual.merger.mlp.2.weight": "model-00002-of-00002.safetensors",
+ "visual.patch_embed.proj.weight": "model-00002-of-00002.safetensors"
+ }
+}
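
The weight map above is the tail of `model.safetensors.index.json`: it assigns every parameter name to one of the two safetensors shards, and loaders resolve it automatically rather than reading it by hand. A minimal loading sketch, assuming a recent `transformers` release with Qwen2.5-VL support and `accelerate` installed (the repo id is hypothetical; substitute a local clone path if needed):

```python
# Minimal sketch: from_pretrained reads model.safetensors.index.json and
# streams each tensor from whichever shard the weight map assigns it to.
from transformers import Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "OpenBMB/EVisRAG-3B",  # hypothetical repo id; a local clone also works
    torch_dtype="auto",    # keep the dtype stored in the shards
    device_map="auto",     # requires accelerate; places shards across devices
)
```
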
diff --git a/preprocessor_config.json b/preprocessor_config.json
new file mode 100644
index 0000000..1c234b7
--- /dev/null
+++ b/preprocessor_config.json
@@ -0,0 +1,36 @@
+{
+ "crop_size": null,
+ "data_format": "channels_first",
+ "default_to_square": true,
+ "device": null,
+ "do_center_crop": null,
+ "do_convert_rgb": true,
+ "do_normalize": true,
+ "do_rescale": true,
+ "do_resize": true,
+ "image_mean": [
+ 0.48145466,
+ 0.4578275,
+ 0.40821073
+ ],
+ "image_processor_type": "Qwen2VLImageProcessorFast",
+ "image_std": [
+ 0.26862954,
+ 0.26130258,
+ 0.27577711
+ ],
+ "input_data_format": null,
+ "max_pixels": 12845056,
+ "merge_size": 2,
+ "min_pixels": 3136,
+ "patch_size": 14,
+ "processor_class": "Qwen2_5_VLProcessor",
+ "resample": 3,
+ "rescale_factor": 0.00392156862745098,
+ "return_tensors": null,
+ "size": {
+ "longest_edge": 12845056,
+ "shortest_edge": 3136
+ },
+ "temporal_patch_size": 2
+}
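
This preprocessor config pins CLIP-style normalization statistics and Qwen2.5-VL's dynamic-resolution bounds: each image is resized so its area stays between `min_pixels` (3136 = 56×56) and `max_pixels` (12845056 = 3584×3584), then cut into 14×14 patches that the merger later fuses 2×2 (`merge_size`). A hedged sketch of how these fields surface through the processor, assuming `transformers` with Qwen2.5-VL support and Pillow; the image path is hypothetical:

```python
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("OpenBMB/EVisRAG-3B")  # hypothetical id
image = Image.open("page.png").convert("RGB")  # hypothetical retrieved page image

# The image processor enforces min_pixels/max_pixels, normalizes with the
# image_mean/image_std above, and reports the resulting patch grid.
inputs = processor.image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # (n_patches, 3 * 2 * 14 * 14) flattened patches
print(inputs["image_grid_thw"])      # tensor([[t, h, w]]): the patch grid per image
```
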
diff --git a/special_tokens_map.json b/special_tokens_map.json
new file mode 100644
index 0000000..ac23c0a
--- /dev/null
+++ b/special_tokens_map.json
@@ -0,0 +1,31 @@
+{
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "eos_token": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+}
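
The map above fixes `<|im_end|>` as the EOS token, so generation stops at the end of an assistant turn, and reuses `<|endoftext|>` for padding. A quick sanity check (repo id hypothetical; the token ids match `added_tokens_decoder` in the tokenizer config below):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("OpenBMB/EVisRAG-3B")
print(tok.eos_token, tok.eos_token_id)  # <|im_end|> 151645
print(tok.pad_token, tok.pad_token_id)  # <|endoftext|> 151643
```
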
diff --git a/tokenizer.json b/tokenizer.json
new file mode 100644
index 0000000..51ebb3b
--- /dev/null
+++ b/tokenizer.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa
+size 11421896
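
Note that `tokenizer.json` is checked in as a Git LFS pointer (the three `version`/`oid`/`size` lines above), not the roughly 11 MB payload itself; `git lfs pull` materializes the real file after cloning. Purely for illustration, a tiny parser of the pointer format as specified at git-lfs.github.com/spec/v1:

```python
# Illustrative only: decode a Git LFS pointer file into its fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "sha256": fields["oid"].split(":", 1)[1],
        "size_bytes": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa
size 11421896"""
print(parse_lfs_pointer(pointer))  # ~11.4 MB tokenizer.json payload
```
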
diff --git a/tokenizer_config.json b/tokenizer_config.json
new file mode 100644
index 0000000..4676535
--- /dev/null
+++ b/tokenizer_config.json
@@ -0,0 +1,211 @@
+{
+ "add_bos_token": false,
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "151643": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151644": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151645": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151646": {
+ "content": "<|object_ref_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151647": {
+ "content": "<|object_ref_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151648": {
+ "content": "<|box_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151649": {
+ "content": "<|box_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151650": {
+ "content": "<|quad_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151651": {
+ "content": "<|quad_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151652": {
+ "content": "<|vision_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151653": {
+ "content": "<|vision_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151654": {
+ "content": "<|vision_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151655": {
+ "content": "<|image_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151656": {
+ "content": "<|video_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151657": {
+ "content": "",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151658": {
+ "content": "",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151659": {
+ "content": "<|fim_prefix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151660": {
+ "content": "<|fim_middle|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151661": {
+ "content": "<|fim_suffix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151662": {
+ "content": "<|fim_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151663": {
+ "content": "<|repo_name|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151664": {
+ "content": "<|file_sep|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ }
+ },
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "bos_token": null,
+ "chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|vision_start|><|image_pad|><|vision_end|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|vision_start|><|video_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|im_end|>",
+ "errors": "replace",
+ "extra_special_tokens": {},
+ "model_max_length": 131072,
+ "pad_token": "<|endoftext|>",
+ "padding_side": "right",
+ "processor_class": "Qwen2_5_VLProcessor",
+ "split_special_tokens": false,
+ "tokenizer_class": "Qwen2Tokenizer",
+ "unk_token": null,
+ "use_fast": true
+}
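
The `chat_template` above is the standard Qwen2.5-VL ChatML template: it injects a default system turn when none is given, wraps each image in `<|vision_start|><|image_pad|><|vision_end|>`, and numbers pictures and videos when `add_vision_id` is set. A sketch of it in action (repo id hypothetical):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("OpenBMB/EVisRAG-3B")
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "What is shown here?"},
    ],
}]
prompt = tok.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(prompt)
# <|im_start|>system
# You are a helpful assistant.<|im_end|>
# <|im_start|>user
# <|vision_start|><|image_pad|><|vision_end|>What is shown here?<|im_end|>
# <|im_start|>assistant
```
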
diff --git a/vocab.json b/vocab.json
new file mode 100644
index 0000000..6c49fc6
--- /dev/null
+++ b/vocab.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca10d7e9fb3ed18575dd1e277a2579c16d108e32f27439684afa0e10b1440910
+size 2776833