Initialize project; model provided by the ModelHub XC community
Model: pixas/Miner-8B Source: Original Platform
36
.gitattributes
vendored
Normal file
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
136
README.md
Normal file
@@ -0,0 +1,136 @@
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- transformers
- reasoning
- reinforcement-learning
- rlvr
- math
- miner
- qwen3
- causal-lm
model-index:
- name: Miner-8B
  results: []
datasets:
- agentica-org/DeepScaleR-Preview-Dataset
base_model:
- Qwen/Qwen3-8B-Base
---

# Miner-8B

This repository hosts the Hugging Face Transformers checkpoint for **MINER**: *Mining Intrinsic Mastery for Data-Efficient RL in Large Reasoning Models*.

- Paper: https://arxiv.org/pdf/2601.04731
- Code: https://github.com/pixas/Miner

## Model Description

Miner-8B is a reasoning model trained with **MINER**, a reinforcement learning method designed to improve data efficiency for large reasoning models. MINER targets the inefficiency of critic-free RL methods on positive homogeneous prompts, where all sampled rollouts are correct and standard relative-advantage training provides little or no learning signal. Instead, MINER leverages the policy's intrinsic uncertainty as a self-supervised reward signal, without requiring auxiliary reward models or additional inference-time overhead.
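
The exact intrinsic reward is defined in the paper and code; as a rough, non-authoritative illustration of the underlying idea, a policy's token-level uncertainty can be read off its logits as predictive entropy:

```python
# Illustrative sketch only -- not the MINER implementation. It shows one
# common way to quantify per-token uncertainty from a causal LM's logits.
import torch
import torch.nn.functional as F

def token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Predictive entropy per position.

    logits: (batch, seq_len, vocab_size) from a forward pass.
    Returns: (batch, seq_len) entropies; higher = more uncertain.
    """
    logp = F.log_softmax(logits, dim=-1)
    return -(logp.exp() * logp).sum(dim=-1)
```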

The MINER framework introduces two central ideas:

1. **Token-level focal credit assignment**, which amplifies learning on uncertain and critical tokens while suppressing overconfident ones.
2. **Adaptive advantage calibration**, which integrates intrinsic and verifiable rewards in a stable way.
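
As a loose sketch of how these two pieces could fit together (the focal form and the fixed mixing rule below are assumptions for illustration, not the paper's exact formulation):

```python
# Loose sketch under stated assumptions -- not the authors' code.
# Focal-style weights emphasize uncertain tokens; the calibrated advantage
# blends the verifiable (rule-based) signal with the intrinsic one.
import torch

def focal_weights(token_probs: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    # token_probs: (batch, seq_len) probability the policy assigned to each
    # sampled token; confident tokens get weights near 0, uncertain near 1.
    return (1.0 - token_probs) ** gamma

def calibrated_advantage(verifiable_adv: torch.Tensor,
                         intrinsic_adv: torch.Tensor,
                         alpha: float = 0.1) -> torch.Tensor:
    # Fixed convex mix for illustration; MINER's actual calibration is adaptive.
    return (1.0 - alpha) * verifiable_adv + alpha * intrinsic_adv
```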

According to the paper, MINER is evaluated on six reasoning benchmarks using Qwen3-8B-Base and reports stronger sample efficiency and accuracy than several baseline methods, including GRPO variants.

## Intended Use

This model is intended for **research and experimental use** in:

- reasoning and problem solving
- reinforcement learning for language models
- mathematical and verifiable reasoning tasks
- post-training and evaluation of large reasoning models

Potential use cases include:

- academic research on RL for reasoning models
- evaluation on reasoning benchmarks
- ablation and reproduction studies based on the MINER framework
- further finetuning or post-training from this checkpoint

## How to Use

### Transformers

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "pixas/Miner-8B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Render the chat template to text, then tokenize for generation.
prompt = [{"role": "user", "content": "What is 2+3?"}]
text = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=8192,
    do_sample=True
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
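
The bundled `chat_template.jinja` also recognizes an `enable_thinking` flag: when it is defined and false, the template emits an empty `<think>\n\n</think>` block so the model answers without an explicit reasoning trace. Since extra keyword arguments to `apply_chat_template` are forwarded into the template context, disabling thinking looks like this:

```python
# Skip the <think> reasoning block via the template's enable_thinking flag.
text = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    tokenize=False,
    enable_thinking=False,
)
```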

### vLLM

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

llm = LLM(model="pixas/Miner-8B")
sampling_params = SamplingParams(
    temperature=0.6,
    max_tokens=8192
)

# The tokenizer is loaded here only to render the chat template to text.
tokenizer = AutoTokenizer.from_pretrained("pixas/Miner-8B")
prompt = [{"role": "user", "content": "What is 2+3?"}]
inputs = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, tokenize=False)

outputs = llm.generate(
    inputs,
    sampling_params
)

print(outputs[0].outputs[0].text)
```
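
Recent vLLM releases (version-dependent, so treat this as a sketch) can also accept chat messages directly and apply the template internally:

```python
# Alternative on newer vLLM versions: pass messages straight to llm.chat.
outputs = llm.chat(prompt, sampling_params)
print(outputs[0].outputs[0].text)
```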

## Limitations

This model is a research checkpoint and may have several limitations:

* It may produce incorrect, incomplete, or overconfident reasoning outputs.
* Performance may depend heavily on prompt format and decoding setup.
* Results reported in the paper may not transfer exactly to this released checkpoint unless the same base model, data mixture, and evaluation pipeline are used.
* The model is not intended as a substitute for expert judgment in high-stakes domains.

## Bias, Risks, and Safety

Like other large language models, this model may reflect biases present in its training data and may generate harmful, misleading, or factually incorrect outputs. Additional care is required before deployment in user-facing or safety-critical applications.

## Citation

If you use this model, please cite:

```bibtex
@article{jiang2026miner,
  title={Miner: Mining Intrinsic Mastery for Data-Efficient RL in Large Reasoning Models},
  author={Jiang, Shuyang and Wang, Yuhao and Zhang, Ya and Wang, Yanfeng and Wang, Yu},
  journal={arXiv preprint arXiv:2601.04731},
  year={2026}
}
```

## Acknowledgements

This model card is based on the official MINER paper and code repository:

* Paper: [https://arxiv.org/pdf/2601.04731](https://arxiv.org/pdf/2601.04731)
* Code: [https://github.com/pixas/Miner](https://github.com/pixas/Miner)
28
added_tokens.json
Normal file
@@ -0,0 +1,28 @@
{
  "</think>": 151668,
  "</tool_call>": 151658,
  "</tool_response>": 151666,
  "<think>": 151667,
  "<tool_call>": 151657,
  "<tool_response>": 151665,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
85
chat_template.jinja
Normal file
@@ -0,0 +1,85 @@
{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0].role == 'system' %}
        {{- messages[0].content + '\n\n' }}
    {%- endif %}
    {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0].role == 'system' %}
        {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for message in messages[::-1] %}
    {%- set index = (messages|length - 1) - loop.index0 %}
    {%- if ns.multi_step_tool and message.role == "user" and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
        {%- set ns.multi_step_tool = false %}
        {%- set ns.last_query_index = index %}
    {%- endif %}
{%- endfor %}
{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {%- set content = message.content %}
        {%- set reasoning_content = '' %}
        {%- if message.reasoning_content is defined and message.reasoning_content is not none %}
            {%- set reasoning_content = message.reasoning_content %}
        {%- else %}
            {%- if '</think>' in message.content %}
                {%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
                {%- set reasoning_content = message.content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
            {%- endif %}
        {%- endif %}
        {%- if loop.index0 > ns.last_query_index %}
            {%- if loop.last or (not loop.last and reasoning_content) %}
                {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
            {%- else %}
                {{- '<|im_start|>' + message.role + '\n' + content }}
            {%- endif %}
        {%- else %}
            {{- '<|im_start|>' + message.role + '\n' + content }}
        {%- endif %}
        {%- if message.tool_calls %}
            {%- for tool_call in message.tool_calls %}
                {%- if (loop.first and content) or (not loop.first) %}
                    {{- '\n' }}
                {%- endif %}
                {%- if tool_call.function %}
                    {%- set tool_call = tool_call.function %}
                {%- endif %}
                {{- '<tool_call>\n{"name": "' }}
                {{- tool_call.name }}
                {{- '", "arguments": ' }}
                {%- if tool_call.arguments is string %}
                    {{- tool_call.arguments }}
                {%- else %}
                    {{- tool_call.arguments | tojson }}
                {%- endif %}
                {{- '}\n</tool_call>' }}
            {%- endfor %}
        {%- endif %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- message.content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
    {%- if enable_thinking is defined and enable_thinking is false %}
        {{- '<think>\n\n</think>\n\n' }}
    {%- endif %}
{%- endif %}
68
config.json
Normal file
@@ -0,0 +1,68 @@
{
  "architectures": [
    "Qwen3ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "dtype": "bfloat16",
  "eos_token_id": 151643,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 12288,
  "layer_types": [
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention"
  ],
  "max_position_embeddings": 32768,
  "max_window_layers": 36,
  "model_type": "qwen3",
  "num_attention_heads": 32,
  "num_hidden_layers": 36,
  "num_key_value_heads": 8,
  "pad_token_id": 151643,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "transformers_version": "4.56.2",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151936
}
6
generation_config.json
Normal file
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "eos_token_id": 151643,
  "pad_token_id": 151643,
  "transformers_version": "4.56.2"
}
151388
merges.txt
Normal file
File diff suppressed because it is too large
3
model-00001-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a79df3ab6f882a3098899ba0e8fcc449ea2ff1520aacb988808b92f1429c1984
size 4974632424
3
model-00002-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5cdc2ede9c1cebacb36b1b92e332448a8a0195a111041ec5684023ba91174c35
size 4759707384
3
model-00003-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:21f00607b5373d5ae55fdb502359f728317d94410d56c2dae4fb4b351f1ee33b
size 4977736920
3
model-00004-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:673a9e03e8cc77cf08646e5111314e3bc605bce40b07e6f66c134cec6b2f05ed
size 1669440088
407
model.safetensors.index.json
Normal file
@@ -0,0 +1,407 @@
{
  "metadata": {
    "total_parameters": 8190735360,
    "total_size": 16381470720
  },
  "weight_map": {
    "lm_head.weight": "model-00003-of-00004.safetensors",
    "model.embed_tokens.weight": "model-00002-of-00004.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.0.self_attn.k_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.10.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.10.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.11.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.11.self_attn.q_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.12.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.12.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.13.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.13.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.14.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.14.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.15.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.16.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.18.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.self_attn.q_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.2.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.2.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.20.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.20.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.self_attn.q_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.k_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.23.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.23.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.24.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.24.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.25.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.25.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.26.self_attn.k_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.26.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.27.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.27.self_attn.q_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.28.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.28.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.28.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.28.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.28.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.28.self_attn.k_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.28.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.28.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.28.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.28.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.28.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.29.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.29.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.29.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.29.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.29.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.29.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.29.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.29.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.29.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.29.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.29.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.3.self_attn.k_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.3.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.30.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.30.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.30.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.30.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.30.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.30.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.30.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.30.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.30.self_attn.q_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.30.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.30.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.31.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.31.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.31.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.31.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.31.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.31.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.31.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.31.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.31.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.31.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.31.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.32.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.32.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.32.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.32.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.32.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.32.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.32.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.32.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.32.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.32.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.32.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.33.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.33.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.33.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.33.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.33.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.33.self_attn.k_norm.weight": "model-00004-of-00004.safetensors",
    "model.layers.33.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.33.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.33.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.33.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.33.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.34.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.34.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.34.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.34.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.34.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.34.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.34.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.34.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.34.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.34.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.34.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.35.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.35.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.35.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.35.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.35.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.35.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.35.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.35.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.35.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.35.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.35.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.4.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.4.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.4.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.4.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.4.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
    "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.4.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.5.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.5.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.5.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.5.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.5.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.5.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.5.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
    "model.layers.5.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.5.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.6.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.6.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.6.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.6.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.6.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.6.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.6.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.6.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.7.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.7.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.7.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.7.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.7.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.7.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.8.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.8.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.8.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.8.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.8.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.8.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.8.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.9.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.9.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.9.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.9.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.9.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.9.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.9.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
    "model.layers.9.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.9.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
    "model.norm.weight": "model-00002-of-00004.safetensors"
  }
}
31
special_tokens_map.json
Normal file
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
3
tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
size 11422654
239
tokenizer_config.json
Normal file
@@ -0,0 +1,239 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151646": {
      "content": "<|object_ref_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151647": {
      "content": "<|object_ref_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151648": {
      "content": "<|box_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151649": {
      "content": "<|box_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151650": {
      "content": "<|quad_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151651": {
      "content": "<|quad_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151652": {
      "content": "<|vision_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151653": {
      "content": "<|vision_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151654": {
      "content": "<|vision_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151655": {
      "content": "<|image_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151656": {
      "content": "<|video_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151657": {
      "content": "<tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151658": {
      "content": "</tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151659": {
      "content": "<|fim_prefix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151660": {
      "content": "<|fim_middle|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151661": {
      "content": "<|fim_suffix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151662": {
      "content": "<|fim_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151663": {
      "content": "<|repo_name|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151664": {
      "content": "<|file_sep|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151665": {
      "content": "<tool_response>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151666": {
      "content": "</tool_response>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151667": {
      "content": "<think>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151668": {
      "content": "</think>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    }
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "bos_token": null,
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|endoftext|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 131072,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
1
vocab.json
Normal file
File diff suppressed because one or more lines are too long