Upload folder using ModelScope SDK (batch 1/1)
.gitattributes (vendored, 7 lines changed)
@@ -45,3 +45,10 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

merges.txt filter=lfs diff=lfs merge=lfs -text
vocab.json filter=lfs diff=lfs merge=lfs -text
evisrag.png filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
model-00001-of-00002.safetensors filter=lfs diff=lfs merge=lfs -text
model-00002-of-00002.safetensors filter=lfs diff=lfs merge=lfs -text
README.md (214 lines changed)
@@ -1,48 +1,180 @@
---
license: apache-2.0
tags: []
language:
- en
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
datasets:
- openbmb/EVisRAG-Train
---

#### You can download the model with the git clone command or the ModelScope SDK snippet shown in the Setup section below.

# VisRAG 2.0: Evidence-Guided Multi-Image Reasoning in Visual Retrieval-Augmented Generation

[](https://github.com/OpenBMB/VisRAG)
[](https://arxiv.org/abs/2510.09733)
[](https://huggingface.co/openbmb/EVisRAG-7B)

<p align="center">
<a href="#-introduction"> 📖 Introduction </a> •
<a href="#-news">🎉 News</a> •
<a href="#%EF%B8%8F-setup">⚙️ Setup</a> •
<a href="#%EF%B8%8F-training">⚡️ Training</a>
</p>
<p align="center">
<a href="#-evaluation">📃 Evaluation</a> •
<a href="#-usage">🔧 Usage</a> •
<a href="#-license">📄 License</a> •
<a href="#-contact">📧 Contact</a>
</p>

# 📖 Introduction

**EVisRAG (VisRAG 2.0)** is an evidence-guided Vision Retrieval-Augmented Generation framework that equips VLMs for multi-image question answering: the model first observes each retrieved image and records per-image evidence in natural language, then reasons over those cues to produce the answer. **EVisRAG** is trained with Reward-Scoped GRPO (RS-GRPO), which applies fine-grained, token-level rewards to jointly optimize visual perception and reasoning.

<p align="center"><img width=800 src="evisrag.png"/></p>

# 🎉 News

* 2025-10-01: Released **EVisRAG (VisRAG 2.0)**, an end-to-end Vision-Language Model, together with our [Paper](https://arxiv.org/abs/2510.09733) on arXiv, our [Model](https://huggingface.co/openbmb/EVisRAG-7B) on Hugging Face, and our [Code](https://github.com/OpenBMB/VisRAG) on GitHub.

# ✨ EVisRAG Pipeline

**EVisRAG** is an end-to-end framework that equips VLMs with precise visual perception during reasoning in multi-image scenarios. We trained and released VLRMs with EVisRAG built on [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) and [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).

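For illustration only, a response following this pipeline has the shape below; the four tags are those required by the inference prompt in the Usage section, while the question, images, and values in this trace are invented:

```
<observe> Image 1 shows a bar chart of 2019 revenue by region; Image 2 is an unrelated product photo. </observe>
<evidence> [1]: the Europe revenue bar is labeled 42M. [2]: no relevant information </evidence>
<think> The question asks for Europe's 2019 revenue, and Image 1 gives 42M directly. </think>
<answer> 42M </answer>
```
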
# ⚙️ Setup

```bash
# Install the ModelScope SDK (used below to download the model weights)
pip install modelscope

# Set up the EVisRAG environment
git clone https://github.com/OpenBMB/VisRAG.git
conda create --name EVisRAG python==3.10
conda activate EVisRAG
cd EVisRAG
pip install -r EVisRAG_requirements.txt
```

SDK download

```python
# Download the model with the ModelScope SDK
from modelscope import snapshot_download

model_dir = snapshot_download('OpenBMB/EVisRAG-3B')
```

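`snapshot_download` returns the local directory the files were placed in, so the result can be reused as `model_path` in the Usage example further down; the `cache_dir` argument in this sketch is optional and only shows one way to choose that location:

```python
from modelscope import snapshot_download

# Download into ./models (cache_dir is optional) and reuse the returned path later on
model_dir = snapshot_download('OpenBMB/EVisRAG-3B', cache_dir='./models')
print(model_dir)  # pass this path as model_path in the vLLM usage snippet below
```
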
Git download

```bash
# Download the model with git (from ModelScope)
git clone https://www.modelscope.cn/OpenBMB/EVisRAG-3B.git
```

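Note that the weight shards in this repository are tracked with Git LFS (see the `.gitattributes` changes at the top of this commit), so Git LFS must be installed for the clone to fetch the actual files rather than pointer stubs:

```bash
# One-time Git LFS setup, then clone as above
git lfs install
git clone https://www.modelscope.cn/OpenBMB/EVisRAG-3B.git
```
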
# ⚡️ Training

***Stage1: SFT*** (based on [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory))

```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
bash evisrag_scripts/full_sft.sh
```

***Stage2: RS-GRPO*** (based on [Easy-R1](https://github.com/hiyouga/EasyR1))

```bash
bash evisrag_scripts/run_rsgrpo.sh
```

Notes:

1. The training data is available on Hugging Face under `EVisRAG-Train`, which is referenced at the beginning of this page.
2. We adopt a two-stage training strategy. In the first stage, clone `LLaMA-Factory` and update the model path in the `full_sft.sh` script. In the second stage, we built our customized algorithm `RS-GRPO` on top of `Easy-R1`, designed specifically for EVisRAG; its implementation can be found in `src/RS-GRPO`. The overall flow is sketched below.

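For convenience, the two stages chain together roughly as follows (same commands as above; remember to set the model path inside `full_sft.sh` before launching Stage 1):

```bash
# Stage 1: supervised fine-tuning with LLaMA-Factory
git clone https://github.com/hiyouga/LLaMA-Factory.git
bash evisrag_scripts/full_sft.sh

# Stage 2: RS-GRPO (built on Easy-R1; implementation in src/RS-GRPO)
bash evisrag_scripts/run_rsgrpo.sh
```
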
# 📃 Evaluation

```bash
bash evisrag_scripts/predict.sh
bash evisrag_scripts/eval.sh
```

Notes:

1. The test data is available on Hugging Face under `EVisRAG-Test-xxx`, as referenced at the beginning of this page.
2. To run the evaluation, first execute the `predict.sh` script; the model outputs will be saved in the `preds` directory. Then use the `eval.sh` script to evaluate the predictions. The metrics `EM`, `Accuracy`, and `F1` are reported directly (a rough sketch of how such metrics are typically computed is given below).

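As a reference for what these metrics mean, here is a minimal sketch of exact match and token-level F1 as they are commonly defined for QA; the exact normalization used by `eval.sh` may differ:

```python
from collections import Counter

def exact_match(pred: str, gold: str) -> float:
    # 1.0 if the normalized strings are identical, else 0.0
    return float(pred.strip().lower() == gold.strip().lower())

def token_f1(pred: str, gold: str) -> float:
    # Token-level F1 between the prediction and the gold answer
    pred_toks, gold_toks = pred.lower().split(), gold.lower().split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("42M", "42m"), round(token_f1("42 M euros", "42 M"), 3))
```
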
# 🔧 Usage

Model on Hugging Face: https://huggingface.co/openbmb/EVisRAG-7B

```python
from transformers import AutoProcessor
from vllm import LLM, SamplingParams
from qwen_vl_utils import process_vision_info


def evidence_promot_grpo(query):
    return f"""You are an AI Visual QA assistant. I will provide you with a question and several images. Please follow the four steps below:

Step 1: Observe the Images
First, analyze the question and consider what types of images may contain relevant information. Then, examine each image one by one, paying special attention to aspects related to the question. Identify whether each image contains any potentially relevant information.
Wrap your observations within <observe></observe> tags.

Step 2: Record Evidences from Images
After reviewing all images, record the evidence you find for each image within <evidence></evidence> tags.
If you are certain that an image contains no relevant information, record it as: [i]: no relevant information(where i denotes the index of the image).
If an image contains relevant evidence, record it as: [j]: [the evidence you find for the question](where j is the index of the image).

Step 3: Reason Based on the Question and Evidences
Based on the recorded evidences, reason about the answer to the question.
Include your step-by-step reasoning within <think></think> tags.

Step 4: Answer the Question
Provide your final answer based only on the evidences you found in the images.
Wrap your answer within <answer></answer> tags.
Avoid adding unnecessary contents in your final answer, like if the question is a yes/no question, simply answer "yes" or "no".
If none of the images contain sufficient information to answer the question, respond with <answer>insufficient to answer</answer>.

Formatting Requirements:
Use the exact tags <observe>, <evidence>, <think>, and <answer> for structured output.
It is possible that none, one, or several images contain relevant evidence.
If you find no evidence or few evidences, and insufficient to help you answer the question, follow the instruction above for insufficient information.

Question and images are provided below. Please follow the steps as instructed.
Question: {query}
"""


model_path = "xxx"
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True, padding_side='left')

imgs, query = ["imgpath1", "imgpath2", ..., "imgpathX"], "What xxx?"
input_prompt = evidence_promot_grpo(query)

# Build a multimodal chat message: the prompt text followed by every image
content = [{"type": "text", "text": input_prompt}]
for imgP in imgs:
    content.append({
        "type": "image",
        "image": imgP
    })
msg = [{
    "role": "user",
    "content": content,
}]

llm = LLM(
    model=model_path,
    tensor_parallel_size=1,
    dtype="bfloat16",
    limit_mm_per_prompt={"image": 5, "video": 0},
)

sampling_params = SamplingParams(
    temperature=0.1,
    repetition_penalty=1.05,
    max_tokens=2048,
)

# Render the chat template and extract the image inputs for vLLM
prompt = processor.apply_chat_template(
    msg,
    tokenize=False,
    add_generation_prompt=True,
)

image_inputs, _ = process_vision_info(msg)

msg_input = [{
    "prompt": prompt,
    "multi_modal_data": {"image": image_inputs},
}]

output_texts = llm.generate(msg_input,
    sampling_params=sampling_params,
)

print(output_texts[0].outputs[0].text)
```

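Since the final answer is wrapped in `<answer></answer>` tags (see the prompt above), downstream code usually extracts it from the generated text. A minimal sketch, assuming the model followed the requested format:

```python
import re

def extract_answer(generated: str) -> str:
    # Grab the content of the last <answer>...</answer> block, if any
    matches = re.findall(r"<answer>(.*?)</answer>", generated, flags=re.DOTALL)
    return matches[-1].strip() if matches else generated.strip()

print(extract_answer("<think>...</think><answer> 42M </answer>"))  # -> "42M"
```
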
# 📄 License

* The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
* The usage of **EVisRAG** model weights must strictly follow [MiniCPM Model License.md](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md).

# 📧 Contact

## EVisRAG

- Yubo Sun: syb2000417@stu.pku.edu.cn
- Chunyi Peng: hm.cypeng@gmail.com

added_tokens.json (new file, 24 lines)
@@ -0,0 +1,24 @@
{
  "</tool_call>": 151658,
  "<tool_call>": 151657,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
chat_template.json (new file, 3 lines)
@@ -0,0 +1,3 @@
{
"chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|vision_start|><|image_pad|><|vision_end|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|vision_start|><|video_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
}
config.json (new file, 65 lines)
@@ -0,0 +1,65 @@
{
  "architectures": [
    "Qwen2_5_VLForConditionalGeneration"
  ],
  "attention_dropout": 0.0,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "image_token_id": 151655,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 128000,
  "max_window_layers": 70,
  "model_type": "qwen2_5_vl",
  "num_attention_heads": 16,
  "num_hidden_layers": 36,
  "num_key_value_heads": 2,
  "pad_token_id": 151643,
  "rms_norm_eps": 1e-06,
  "rope_scaling": {
    "mrope_section": [
      16,
      24,
      24
    ],
    "rope_type": "default",
    "type": "default"
  },
  "rope_theta": 1000000.0,
  "sliding_window": 32768,
  "tie_word_embeddings": true,
  "torch_dtype": "float32",
  "transformers_version": "4.51.3",
  "use_cache": false,
  "use_sliding_window": false,
  "video_token_id": 151656,
  "vision_config": {
    "depth": 32,
    "fullatt_block_indexes": [
      7,
      15,
      23,
      31
    ],
    "hidden_act": "silu",
    "hidden_size": 1280,
    "in_channels": 3,
    "in_chans": 3,
    "intermediate_size": 3420,
    "model_type": "qwen2_5_vl",
    "num_heads": 16,
    "out_hidden_size": 2048,
    "patch_size": 14,
    "spatial_merge_size": 2,
    "spatial_patch_size": 14,
    "temporal_patch_size": 2,
    "tokens_per_second": 2,
    "torch_dtype": "float32",
    "window_size": 112
  },
  "vision_end_token_id": 151653,
  "vision_start_token_id": 151652,
  "vision_token_id": 151654,
  "vocab_size": 151936
}
configuration.json (new file, 1 line)
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "others", "allow_remote": true}
evisrag.png (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:235ac213de87b1bfcb2669a9ddf78ddb5c139bbd67d220da546d9045091dea81
size 1979876
generation_config.json (new file, 7 lines)
@@ -0,0 +1,7 @@
{
  "_from_model_config": true,
  "eos_token_id": 151645,
  "pad_token_id": 151643,
  "transformers_version": "4.51.3",
  "use_cache": false
}
merges.txt (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8831e4f1a044471340f7c0a83d7bd71306a5b867e95fd870f74d0c5308a904d5
size 1671853
model-00001-of-00002.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c6b927982855c90e8031dbfa52e1d5bcbd3dfbcbe7d9a09dd92b8231ffd314c
size 4996207200
model-00002-of-00002.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aff42bee4b0eccb0481d56fa7378fcb05405d98c7f218c5996675988449ff355
size 3135460800
model.safetensors.index.json (new file, 832 lines)
@@ -0,0 +1,832 @@
{
  "metadata": {
    "total_size": 8131575808
  },
  "weight_map": {
    "lm_head.weight": "model-00001-of-00002.safetensors",
    "model.embed_tokens.weight": "model-00002-of-00002.safetensors",
    "…": "… (per-tensor shard map for all model.layers.*, model.norm and visual.* weights; truncated in this view)"
  }
}
|
||||
"visual.blocks.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.16.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.16.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.16.norm1.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.16.norm2.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.17.attn.proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.17.attn.proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.17.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.17.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.17.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.17.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.17.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.17.norm1.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.17.norm2.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.18.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.18.attn.proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.18.attn.qkv.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.18.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.18.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.18.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.18.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.18.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.18.norm1.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.18.norm2.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.19.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.19.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.19.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.19.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.19.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.19.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.19.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.19.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.19.norm1.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.19.norm2.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.2.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.2.attn.proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.2.attn.qkv.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.2.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.2.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.2.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.2.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.2.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.2.norm1.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.2.norm2.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.20.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.20.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.20.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.20.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.20.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.20.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.20.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.20.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.20.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.20.norm1.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.20.norm2.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.21.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.21.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.21.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.21.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.21.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.21.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.21.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.21.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.21.norm1.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.21.norm2.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.22.attn.proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.22.attn.proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.22.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.22.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.22.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.22.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.22.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.22.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.22.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.22.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.22.norm1.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.22.norm2.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.23.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.23.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.23.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.23.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.23.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.23.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.23.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.23.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.23.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.23.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.23.norm1.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.23.norm2.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.24.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.24.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.24.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.24.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.24.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.24.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.24.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.24.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.24.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.24.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.24.norm1.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.24.norm2.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.25.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.25.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.25.attn.qkv.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.25.attn.qkv.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.25.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.25.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.25.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.25.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.25.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.25.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.25.norm1.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.25.norm2.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.26.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.26.attn.proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.26.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.26.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.26.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.26.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.26.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.26.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.26.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.26.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.26.norm1.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.26.norm2.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.27.attn.proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.27.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.27.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.27.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.27.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.27.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.27.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.27.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.27.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.27.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.27.norm1.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.27.norm2.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.28.attn.proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.28.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.28.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.28.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.28.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.28.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.28.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.28.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.28.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.28.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.28.norm1.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.28.norm2.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.29.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.29.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.29.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.29.attn.qkv.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.29.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.29.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.29.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.29.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.29.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.29.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.29.norm1.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.29.norm2.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.3.attn.proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.3.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.3.attn.qkv.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.3.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.3.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.3.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.3.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.3.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.3.norm1.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.3.norm2.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.30.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.30.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.30.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.30.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.30.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.30.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.30.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.30.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.30.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.30.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.30.norm1.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.30.norm2.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.31.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.31.attn.proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.31.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.31.attn.qkv.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.31.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.31.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.31.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.31.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.31.mlp.up_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.31.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.31.norm1.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.31.norm2.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.4.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.4.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.4.attn.qkv.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.4.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.4.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.4.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.4.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.4.norm1.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.4.norm2.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.5.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.5.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.5.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.5.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.5.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.5.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.5.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.5.norm1.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.5.norm2.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.6.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.6.attn.proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.6.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.6.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.6.mlp.down_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.6.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.6.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.6.norm1.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.6.norm2.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.7.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.7.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.7.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.7.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.7.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.7.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.7.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.7.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.7.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.7.norm1.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.7.norm2.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.8.attn.proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.8.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.8.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.8.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.8.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.8.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.8.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.8.norm1.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.8.norm2.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.9.attn.proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.9.attn.proj.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.9.attn.qkv.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.9.attn.qkv.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.9.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.9.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.9.mlp.gate_proj.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.9.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.9.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.blocks.9.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.9.norm1.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.blocks.9.norm2.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.merger.ln_q.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.merger.mlp.0.bias": "model-00001-of-00002.safetensors",
|
||||
"visual.merger.mlp.0.weight": "model-00001-of-00002.safetensors",
|
||||
"visual.merger.mlp.2.bias": "model-00002-of-00002.safetensors",
|
||||
"visual.merger.mlp.2.weight": "model-00002-of-00002.safetensors",
|
||||
"visual.patch_embed.proj.weight": "model-00002-of-00002.safetensors"
|
||||
}
|
||||
}
|
||||
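The index above assigns each tensor to one of the two safetensors shards. A minimal sketch of reassembling such a sharded checkpoint by hand (the local path `./EVisRAG-checkpoint` is a placeholder, not part of this commit; in normal use `transformers` resolves the index automatically via `from_pretrained`):

```python
# Minimal sketch: rebuild a full state dict from the shards listed in
# model.safetensors.index.json. `ckpt_dir` is an assumed local path.
import json
import os

from safetensors.torch import load_file

ckpt_dir = "./EVisRAG-checkpoint"  # placeholder path to this upload

with open(os.path.join(ckpt_dir, "model.safetensors.index.json")) as f:
    index = json.load(f)

# weight_map: tensor name -> shard file (model-00001-of-00002.safetensors, ...)
state_dict = {}
for shard_file in sorted(set(index["weight_map"].values())):
    state_dict.update(load_file(os.path.join(ckpt_dir, shard_file)))

print(len(state_dict), "tensors loaded")
```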
36
preprocessor_config.json
Normal file
@@ -0,0 +1,36 @@
{
  "crop_size": null,
  "data_format": "channels_first",
  "default_to_square": true,
  "device": null,
  "do_center_crop": null,
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.48145466,
    0.4578275,
    0.40821073
  ],
  "image_processor_type": "Qwen2VLImageProcessorFast",
  "image_std": [
    0.26862954,
    0.26130258,
    0.27577711
  ],
  "input_data_format": null,
  "max_pixels": 12845056,
  "merge_size": 2,
  "min_pixels": 3136,
  "patch_size": 14,
  "processor_class": "Qwen2_5_VLProcessor",
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "return_tensors": null,
  "size": {
    "longest_edge": 12845056,
    "shortest_edge": 3136
  },
  "temporal_patch_size": 2
}
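A minimal sketch of consuming this image-processor configuration through `AutoProcessor` (the path is a placeholder; the `min_pixels`/`max_pixels` keyword arguments simply mirror the values in `preprocessor_config.json`):

```python
# Minimal sketch, assuming a local copy of this upload at a placeholder path.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "./EVisRAG-checkpoint",   # placeholder local path
    min_pixels=3136,          # same bound as preprocessor_config.json
    max_pixels=12845056,
)
```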
31
special_tokens_map.json
Normal file
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
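A minimal sketch of checking that the special tokens declared above resolve to the ids listed in `added_tokens_decoder` of `tokenizer_config.json` below (the path is a placeholder):

```python
# Minimal sketch, assuming a local copy of this upload at a placeholder path.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./EVisRAG-checkpoint")  # placeholder path
print(tok.eos_token, tok.convert_tokens_to_ids("<|im_end|>"))     # <|im_end|> 151645
print(tok.pad_token, tok.convert_tokens_to_ids("<|endoftext|>"))  # <|endoftext|> 151643
```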
3
tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa
size 11421896
211
tokenizer_config.json
Normal file
@@ -0,0 +1,211 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151646": {
      "content": "<|object_ref_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151647": {
      "content": "<|object_ref_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151648": {
      "content": "<|box_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151649": {
      "content": "<|box_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151650": {
      "content": "<|quad_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151651": {
      "content": "<|quad_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151652": {
      "content": "<|vision_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151653": {
      "content": "<|vision_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151654": {
      "content": "<|vision_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151655": {
      "content": "<|image_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151656": {
      "content": "<|video_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151657": {
      "content": "<tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151658": {
      "content": "</tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151659": {
      "content": "<|fim_prefix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151660": {
      "content": "<|fim_middle|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151661": {
      "content": "<|fim_suffix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151662": {
      "content": "<|fim_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151663": {
      "content": "<|repo_name|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151664": {
      "content": "<|file_sep|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    }
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "bos_token": null,
  "chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|vision_start|><|image_pad|><|vision_end|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|vision_start|><|video_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 131072,
  "pad_token": "<|endoftext|>",
  "padding_side": "right",
  "processor_class": "Qwen2_5_VLProcessor",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null,
  "use_fast": true
}
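A minimal sketch of rendering a single-image turn with the chat template defined above; each image entry expands into a `<|vision_start|><|image_pad|><|vision_end|>` span. Paths and the example image are placeholders, and this assumes a recent transformers release where the processor exposes `apply_chat_template`:

```python
# Minimal sketch, assuming a local copy of this upload at a placeholder path.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("./EVisRAG-checkpoint")  # placeholder path

messages = [
    {"role": "user", "content": [
        {"type": "image", "image": "retrieved_page.png"},          # placeholder image
        {"type": "text", "text": "Which figure on this page supports the answer?"},
    ]},
]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # <|im_start|>system ... <|im_start|>assistant
```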
BIN
vocab.json
(Stored with Git LFS)
Normal file
Binary file not shown.