Initialize the project; model provided by the ModelHub XC community
Model: zcyzcyzcy/qwen3-1.7b-jf-v2math811-ar10 Source: Original Platform
.gitattributes (vendored, new file, 35 lines)
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md (new file, 99 lines)
@@ -0,0 +1,99 @@
---
license: apache-2.0
base_model: Qwen/Qwen3-1.7B
tags:
- jacobi-forcing
- speculative-decoding
- qwen3
- text-generation
language:
- en
pipeline_tag: text-generation
---

# Qwen3-1.7B Jacobi Forcing (v2math811, AR×10)

Jacobi-Forcing fine-tune of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) trained on a mixed code + math trajectory dataset (`v2math811`). It produces output identical in quality to the base AR model while supporting **Jacobi parallel decoding for a ~1.5–1.7× wall-clock speedup**.

## Highlights

- **Lossless quality**: HumanEval pass@1 and GSM8K accuracy match base AR generation (within noise).
- **Speedup**: 1.65× on HumanEval and 1.53× on GSM8K (vs. greedy AR with the same model).
- **Drop-in compatible** with Hugging Face `AutoModelForCausalLM` for AR generation. Jacobi inference requires the [JacobiForcing repo](https://github.com/) (custom forward kernel).

## Training recipe

Continued from the base Qwen3-1.7B with the consistency + AR loss from the [JacobiForcing](https://arxiv.org/abs/2403.00835) paper:

| Setting | Value |
| --- | --- |
| Base | `Qwen/Qwen3-1.7B` |
| Dataset | code (OpenCodeInstruct buckets 8–11) + math (OpenThought2 buckets 8–11); 26,510 trajectory samples after the traj_count ≤ 3 filter |
| Strategy | progressive noise window, N=32, window=16 |
| Epochs | 1 |
| Optimizer | AdamW |
| LR | 5e-6 (cosine schedule, warmup ratio 0.03) |
| Batch | per-device 1 × grad-accum 4 = effective 4 |
| Precision | bf16 |
| `AR_LOSS_WEIGHT` | **10** (paper default; 20 was also tested and gave slightly worse Jacobi acceptance) |
| GPU | 1× A100-80GB, ~4h47m |

## Benchmarks (1× A100, greedy)

| Bench | AR pass@1 / acc | Jacobi pass@1 / acc | AR tok/s | Jacobi tok/s | Speedup |
| --- | ---: | ---: | ---: | ---: | ---: |
| HumanEval (n=164) | 60.4 % | **61.0 %** | 37.2 | 61.3 | **1.65×** |
| GSM8K (n=653 subset) | 72.4 % | **74.3 %** | 38.0 | 58.3 | **1.53×** |

Jacobi internals (HumanEval): tok/iter = 1.74, average accept window 87 % of N=32.
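
As a back-of-the-envelope consistency check on these numbers (not from the repo; the per-iteration cost model is an assumption), tokens-per-iteration and throughput together imply how expensive one parallel Jacobi forward is relative to a single AR decode step:

```python
# Relates tokens-per-iteration to wall-clock speedup (illustrative arithmetic).
tok_per_iter = 1.74        # accepted tokens per Jacobi iteration (HumanEval)
speedup = 61.3 / 37.2      # Jacobi tok/s over AR tok/s, ~1.65

# Assuming speedup = tok_per_iter * (t_ar_step / t_jacobi_iter):
rel_iter_cost = tok_per_iter / speedup
print(f"one Jacobi iteration costs ~{rel_iter_cost:.2f}x an AR step")  # ~1.06x
```

Under this reading, a full forward over the 32-token window is only marginally more expensive than a single-token AR step, which is what lets 1.74 tok/iter translate into a 1.65× wall-clock gain.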

## Usage — standard AR

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

ckpt = "zcyzcyzcy/qwen3-1.7b-jf-v2math811-ar10"
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(
    ckpt, torch_dtype=torch.bfloat16, device_map="cuda"
)

# Build a chat prompt; enable_thinking=False pre-fills an empty <think> block.
msgs = [{"role": "user", "content": "Write a Python is_prime(n)."}]
inp = tok.apply_chat_template(
    msgs, tokenize=False, add_generation_prompt=True, enable_thinking=False
)
ids = tok(inp, return_tensors="pt").to("cuda")
out = model.generate(**ids, max_new_tokens=200, do_sample=False)
# Decode only the newly generated tokens.
print(tok.decode(out[0][ids["input_ids"].shape[1]:], skip_special_tokens=True))
```

## Usage — Jacobi parallel decoding

Jacobi inference uses a custom `jacobi_forward_greedy` method registered on `Qwen3ForCausalLM`. See the [JacobiForcing repo](https://github.com/) for the full inference script, or use this snippet:

```python
from transformers import Qwen3ForCausalLM
from generate_trajectory.generation.qwen3_modeling_jacobi_forcing_greedy import (
    jacobi_forward_greedy,
)

# Monkey-patch the custom Jacobi forward onto the model class.
Qwen3ForCausalLM.jacobi_forward_greedy = jacobi_forward_greedy
# ... call model.jacobi_forward_greedy(...) for prefill + generation phases.
```

The checkpoint itself is a standard Qwen3 model with no architecture changes, so any speculative-decoding framework that accepts a Qwen3 base model can drive it.
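
For readers who want the algorithm without the repo, here is a minimal sketch of the fixed-point iteration behind greedy Jacobi decoding (plain PyTorch, no KV cache; the initialization token, window size, and iteration cap are illustrative assumptions, and the repo's `jacobi_forward_greedy` fuses this into a custom forward):

```python
import torch

@torch.no_grad()
def jacobi_decode_greedy(model, input_ids, n_new=32, max_iters=64):
    """Drive a window of n_new draft tokens to the greedy AR fixed point."""
    pad_id = 151643  # arbitrary initialization for the draft window
    draft = torch.full((1, n_new), pad_id, dtype=torch.long,
                       device=input_ids.device)
    for _ in range(max_iters):
        seq = torch.cat([input_ids, draft], dim=1)
        logits = model(seq).logits
        # Position i predicts token i+1, so this slice holds the greedy
        # choice for every slot of the draft window in one forward pass.
        preds = logits[:, input_ids.shape[1] - 1 : -1, :].argmax(dim=-1)
        if torch.equal(preds, draft):
            break        # fixed point: draft equals the greedy AR output
        draft = preds    # Jacobi step: refresh all window positions at once
    return torch.cat([input_ids, draft], dim=1)
```

A production decoder would also commit the longest verified prefix each iteration, reuse the KV cache for it, and stop at EOS; the sketch iterates the whole window to convergence for clarity.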

## Citation

```bibtex
@article{kou2024cllm,
  title={CLLMs: Consistency Large Language Models},
  author={Kou, Siqi and Hu, Lanxiang and He, Zhezhi and Deng, Zhijie and Zhang, Hao},
  journal={arXiv preprint arXiv:2403.00835},
  year={2024}
}
```

## License

Apache 2.0, inherited from the base Qwen3-1.7B model.
added_tokens.json (new file, 28 lines)
@@ -0,0 +1,28 @@
{
  "</think>": 151668,
  "</tool_call>": 151658,
  "</tool_response>": 151666,
  "<think>": 151667,
  "<tool_call>": 151657,
  "<tool_response>": 151665,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
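
A quick illustrative check of a few of these ids against the shipped tokenizer (assumes the checkpoint is available locally or via the hub):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("zcyzcyzcy/qwen3-1.7b-jf-v2math811-ar10")
# added_tokens.json pins surface forms to fixed ids at the top of the vocab.
assert tok.convert_tokens_to_ids("<think>") == 151667
assert tok.convert_tokens_to_ids("<|im_end|>") == 151645  # also the eos token
```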
chat_template.jinja (new file, 89 lines)
@@ -0,0 +1,89 @@
{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0].role == 'system' %}
        {{- messages[0].content + '\n\n' }}
    {%- endif %}
    {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0].role == 'system' %}
        {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for message in messages[::-1] %}
    {%- set index = (messages|length - 1) - loop.index0 %}
    {%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
        {%- set ns.multi_step_tool = false %}
        {%- set ns.last_query_index = index %}
    {%- endif %}
{%- endfor %}
{%- for message in messages %}
    {%- if message.content is string %}
        {%- set content = message.content %}
    {%- else %}
        {%- set content = '' %}
    {%- endif %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
        {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {%- set reasoning_content = '' %}
        {%- if message.reasoning_content is string %}
            {%- set reasoning_content = message.reasoning_content %}
        {%- else %}
            {%- if '</think>' in content %}
                {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
                {%- set content = content.split('</think>')[-1].lstrip('\n') %}
            {%- endif %}
        {%- endif %}
        {%- if loop.index0 > ns.last_query_index %}
            {%- if loop.last or (not loop.last and reasoning_content) %}
                {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
            {%- else %}
                {{- '<|im_start|>' + message.role + '\n' + content }}
            {%- endif %}
        {%- else %}
            {{- '<|im_start|>' + message.role + '\n' + content }}
        {%- endif %}
        {%- if message.tool_calls %}
            {%- for tool_call in message.tool_calls %}
                {%- if (loop.first and content) or (not loop.first) %}
                    {{- '\n' }}
                {%- endif %}
                {%- if tool_call.function %}
                    {%- set tool_call = tool_call.function %}
                {%- endif %}
                {{- '<tool_call>\n{"name": "' }}
                {{- tool_call.name }}
                {{- '", "arguments": ' }}
                {%- if tool_call.arguments is string %}
                    {{- tool_call.arguments }}
                {%- else %}
                    {{- tool_call.arguments | tojson }}
                {%- endif %}
                {{- '}\n</tool_call>' }}
            {%- endfor %}
        {%- endif %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
    {%- if enable_thinking is defined and enable_thinking is false %}
        {{- '<think>\n\n</think>\n\n' }}
    {%- endif %}
{%- endif %}
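
Two behaviors of this template are worth calling out: the `tools` branch injects function signatures inside `<tools>` tags into the system turn, and the `enable_thinking=False` path pre-fills an empty `<think></think>` block so the model skips its reasoning trace. A small illustrative render (expected output reconstructed from the template above, not captured from a run):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("zcyzcyzcy/qwen3-1.7b-jf-v2math811-ar10")
text = tok.apply_chat_template(
    [{"role": "user", "content": "hi"}],
    tokenize=False, add_generation_prompt=True, enable_thinking=False,
)
print(text)
# <|im_start|>user
# hi<|im_end|>
# <|im_start|>assistant
# <think>
#
# </think>
#
```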
config.json (new file, 60 lines)
@@ -0,0 +1,60 @@
{
  "architectures": [
    "Qwen3ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 6144,
  "layer_types": [
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention"
  ],
  "max_position_embeddings": 40960,
  "max_window_layers": 28,
  "model_type": "qwen3",
  "num_attention_heads": 16,
  "num_hidden_layers": 28,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000,
  "sliding_window": null,
  "tie_word_embeddings": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.53.1",
  "use_cache": false,
  "use_sliding_window": false,
  "vocab_size": 151936
}
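
The attention geometry declared here is grouped-query attention; a short illustrative sanity check of the arithmetic:

```python
# From config.json: 16 query heads share 8 KV heads, head_dim 128.
num_heads, num_kv_heads, head_dim, hidden = 16, 8, 128, 2048
assert num_heads * head_dim == hidden      # q projection maps 2048 -> 2048
kv_width = num_kv_heads * head_dim         # k/v projections map 2048 -> 1024
print(f"{num_heads // num_kv_heads} query heads per KV head, kv width {kv_width}")
```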
generation_config.json (new file, 13 lines)
@@ -0,0 +1,13 @@
{
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "temperature": 0.6,
  "top_k": 20,
  "top_p": 0.95,
  "transformers_version": "4.53.1"
}
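
Note that these defaults enable sampling (temperature 0.6, top-p 0.95, top-k 20), while the README benchmarks and usage snippet decode greedily; explicit `generate()` kwargs override the file. An illustrative comparison, reusing `model` and `ids` from the AR snippet above:

```python
out_sampled = model.generate(**ids, max_new_tokens=200)  # file defaults: sampling
out_greedy = model.generate(**ids, max_new_tokens=200, do_sample=False)  # as benchmarked
```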
merges.txt (new file, 151388 lines; diff suppressed because it is too large)
pytorch_model-00001-of-00002.bin (new file, 3 lines, Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dc5699890a1717efdd1518822cbbb16915f5db8c43b6096ef911dd4c5c787a80
size 4969247421
pytorch_model-00002-of-00002.bin (new file, 3 lines, Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4b230d43dfdea15118ada46dac6d01d860d52232fbd1958b31cb544626bb1287
size 1913158707
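
Both `.bin` entries are Git LFS pointer files rather than the weights themselves: three `key value` lines giving the spec version, the SHA-256 of the real blob, and its size in bytes. A tiny illustrative parser:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a git-lfs v1 pointer file into its key/value fields."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

ptr = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:4b230d43dfdea15118ada46dac6d01d860d52232fbd1958b31cb544626bb1287\n"
    "size 1913158707\n"
)
print(round(int(ptr["size"]) / 1e9, 2), "GB")  # ~1.91 GB for the second shard
```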
pytorch_model.bin.index.json (new file, 318 lines)
@@ -0,0 +1,318 @@
{
  "metadata": {
    "total_size": 6882299904
  },
  "weight_map": {
    "lm_head.weight": "pytorch_model-00001-of-00002.bin",
    "model.embed_tokens.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.0.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.1.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.10.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.10.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.11.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.11.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.12.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.12.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.13.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.13.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.14.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.14.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.15.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.15.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.16.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.16.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.17.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.17.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.18.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.18.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.19.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.19.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.19.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.19.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.2.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.2.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.20.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.20.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.20.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.20.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.21.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.21.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.21.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.21.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.21.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.3.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.3.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.3.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.3.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.4.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.5.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.6.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.7.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.7.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.8.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.8.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.9.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.9.self_attn.k_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.self_attn.q_norm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.norm.weight": "pytorch_model-00002-of-00002.bin"
  }
}
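
This index is what lets `from_pretrained` open only the shard that holds each tensor. An illustrative tally of how the 311 entries split across the two shards:

```python
import json
from collections import Counter

with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

per_shard = Counter(index["weight_map"].values())  # tensors per shard file
print(per_shard)
print("declared total bytes:", index["metadata"]["total_size"])
```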
special_tokens_map.json (new file, 31 lines)
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer_config.json (new file, 240 lines; added_tokens_decoder entries compacted to one line each below)
@@ -0,0 +1,240 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151644": {"content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151645": {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151646": {"content": "<|object_ref_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151647": {"content": "<|object_ref_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151648": {"content": "<|box_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151649": {"content": "<|box_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151650": {"content": "<|quad_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151651": {"content": "<|quad_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151652": {"content": "<|vision_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151653": {"content": "<|vision_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151654": {"content": "<|vision_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151655": {"content": "<|image_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151656": {"content": "<|video_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151657": {"content": "<tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151658": {"content": "</tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151659": {"content": "<|fim_prefix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151660": {"content": "<|fim_middle|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151661": {"content": "<|fim_suffix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151662": {"content": "<|fim_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151663": {"content": "<|repo_name|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151664": {"content": "<|file_sep|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151665": {"content": "<tool_response>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151666": {"content": "</tool_response>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151667": {"content": "<think>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151668": {"content": "</think>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false}
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "bos_token": null,
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 131072,
  "pad_token": "<|endoftext|>",
  "padding_side": "right",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
vocab.json (new file, 151645 lines; diff suppressed because it is too large)