Initialize project; model provided by the ModelHub XC community

Model: Madras1/Jade-14B
Source: Original Platform
ModelHub XC
2026-04-29 19:14:45 +08:00
commit 869783a5ec
20 changed files with 152427 additions and 0 deletions

36
.gitattributes vendored Normal file

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

156
README.md Normal file

@@ -0,0 +1,156 @@
---
language:
- pt
- en
license: apache-2.0
base_model:
- unsloth/Qwen3-14b
- Qwen/Qwen3-14B
base_model_relation: finetune
library_name: transformers
pipeline_tag: text-generation
tags:
- pt-br
- portuguese
- brazilian-portuguese
- conversational
- chatbot
- persona
- unsloth
- 4-bit
- bitsandbytes
- qwen3
---
# Jade-14b
Jade-14b is a Brazilian Portuguese conversational finetune of Qwen3-14B built to express a strong, persistent persona. It is designed for PT-BR chat, chatbot use cases, and character-style interaction, with colloquial language, abbreviations, slang, and a WhatsApp-like tone.
## Model Summary
Jade-14b is a persona-first model. It was intentionally finetuned so the model speaks like **Jade** even without a strong `system prompt`. Because of that, the model often answers in PT-BR with informal phrasing such as `vc`, slang, and a friendly conversational tone from the very first turn.
## Model Details
- Developed by: `Madras1`
- Base model: `unsloth/Qwen3-14B`
- Model type: conversational text-generation finetune
- Primary language: Brazilian Portuguese (`pt-BR`)
- License: `apache-2.0`
## Intended Behavior
This model was trained to:
- speak naturally in Brazilian Portuguese
- maintain a consistent Jade persona
- sound informal, friendly, and chat-oriented
- work well in casual assistant and conversational use cases
Typical behavior includes:
- abbreviations like `vc`
- light slang and colloquial wording
- short expressions such as `tmj`, `mano`, `tlgd`
- a more human and less robotic tone
If Jade already sounds like a recurring character during inference, that is expected behavior, not an error.
## Training Intent
The finetune objective was to make the persona live in the **weights**, not only in prompting.
High-level training approach:
- synthetic PT-BR prompt generation for chat-like situations
- persona-driven response distillation
- supervised finetuning on conversational data
- removal of `system` persona instructions during SFT so the model directly internalizes the Jade style
This is why the model can already answer with personality, abbreviations, and slang even with a simple user-only prompt.
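To make the data shape concrete, a record in such a dataset plausibly looks like the sketch below. The field names are illustrative assumptions, not the author's actual schema; the key point is the absence of a `system` turn.
```python
# Hypothetical SFT record (illustrative schema, not the actual dataset).
# There is no "system" turn: the Jade persona is carried entirely by the
# assistant responses, so supervised finetuning bakes it into the weights.
example = {
    "messages": [
        {"role": "user", "content": "oi, tudo bem? me ajuda com uma coisa?"},
        {"role": "assistant", "content": "oii, tudo sim e vc? manda aí, tmj"},
    ]
}
```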
## Training Setup
High-level setup used for this finetune (sketched in code after the list):
- around `25,000` examples
- `3` epochs
- Unsloth-based SFT pipeline
- chat-style data in Portuguese
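A minimal sketch of what such a run can look like with Unsloth and TRL follows. Only the base model, epoch count, and dataset scale come from this card; `dataset` is a placeholder, and every other hyperparameter is an illustrative assumption, not the author's actual configuration.
```python
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer

# Load the base model in 4-bit (the card's tags mention unsloth, 4-bit,
# and bitsandbytes), then attach LoRA adapters for SFT.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-14B",
    max_seq_length=4096,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,  # ~25,000 chat-style PT-BR examples, pre-rendered to text
    args=SFTConfig(
        dataset_text_field="text",
        num_train_epochs=3,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        output_dir="jade-14b-sft",
    ),
)
trainer.train()
```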
## Recommended Use
Best fit:
- PT-BR chat assistants
- persona bots
- WhatsApp-style conversational agents
- lightweight entertainment or social AI experiences
Less ideal for:
- formal writing
- highly neutral assistant behavior
- high-stakes legal, medical, or financial contexts
## Prompting Tips
For the strongest Jade behavior:
- use a simple user message
- avoid a formal system prompt that fights the finetune
- keep prompts conversational when possible
Example prompts:
- `oi jade, tudo bem?`
- `jade, me explica isso de um jeito simples`
- `vc acha que vale a pena estudar python hoje?`
## Example Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "Madras1/Jade-14b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "user", "content": "oi jade, tudo bem?"}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=256,
temperature=0.7,
top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
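For reference, the repository's `generation_config.json` ships its own sampling defaults (`temperature=0.6`, `top_p=0.95`, `top_k=20`, `do_sample=true`); the values passed to `generate` above simply override them, and either setting is reasonable for casual chat.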
## Limitations
Because this is a persona-oriented finetune:
- it may sound informal in contexts where a neutral tone would be better
- it may over-index on chat style depending on the prompt
- it is optimized more for persona consistency than strict formality
## Links
https://github.com/MadrasLe/JadeLLMV-1

29
added_tokens.json Normal file

@@ -0,0 +1,29 @@
{
"</think>": 151668,
"</tool_call>": 151658,
"</tool_response>": 151666,
"<think>": 151667,
"<tool_call>": 151657,
"<tool_response>": 151665,
"<|PAD_TOKEN|>": 151669,
"<|box_end|>": 151649,
"<|box_start|>": 151648,
"<|endoftext|>": 151643,
"<|file_sep|>": 151664,
"<|fim_middle|>": 151660,
"<|fim_pad|>": 151662,
"<|fim_prefix|>": 151659,
"<|fim_suffix|>": 151661,
"<|im_end|>": 151645,
"<|im_start|>": 151644,
"<|image_pad|>": 151655,
"<|object_ref_end|>": 151647,
"<|object_ref_start|>": 151646,
"<|quad_end|>": 151651,
"<|quad_start|>": 151650,
"<|repo_name|>": 151663,
"<|video_pad|>": 151656,
"<|vision_end|>": 151653,
"<|vision_pad|>": 151654,
"<|vision_start|>": 151652
}

8
chat_template.jinja Normal file

@@ -0,0 +1,8 @@
{% for message in messages %}{% if message['role'] == 'user' %}{{'<|im_start|>user
' + message['content'] + '<|im_end|>
'}}{% elif message['role'] == 'assistant' %}{{'<|im_start|>assistant
' + message['content'] + '<|im_end|>
' }}{% else %}{{ '<|im_start|>system
' + message['content'] + '<|im_end|>
' }}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
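The template above is plain ChatML with no `<think>` handling. Assuming the published repo id resolves, the snippet below shows what it renders for a single user turn; the expected string follows directly from the template logic.
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Madras1/Jade-14b")
text = tok.apply_chat_template(
    [{"role": "user", "content": "oi jade, tudo bem?"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(repr(text))
# Expected per the template:
# '<|im_start|>user\noi jade, tudo bem?<|im_end|>\n<|im_start|>assistant\n'
```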

73
config.json Normal file

@@ -0,0 +1,73 @@
{
"architectures": [
"Qwen3ForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"dtype": "bfloat16",
"eos_token_id": 151645,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 17408,
"layer_types": [
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention"
],
"max_position_embeddings": 40960,
"max_window_layers": 40,
"model_type": "qwen3",
"num_attention_heads": 40,
"num_hidden_layers": 40,
"num_key_value_heads": 8,
"pad_token_id": 151654,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 1000000,
"sliding_window": null,
"tie_word_embeddings": false,
"transformers_version": "4.57.3",
"unsloth_fixed": true,
"use_cache": true,
"use_sliding_window": false,
"vocab_size": 151936
}

14
generation_config.json Normal file

@@ -0,0 +1,14 @@
{
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"max_length": 40960,
"pad_token_id": 151654,
"temperature": 0.6,
"top_k": 20,
"top_p": 0.95,
"transformers_version": "4.57.3"
}

151388
merges.txt Normal file

File diff suppressed because it is too large

3
model-00001-of-00008.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b0094a2cec5e25ccb34c5096d89fb4a0e8abb0d372324b56fe82868fe01dcdd
size 3841788544

3
model-00002-of-00008.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2107d7a8c26a86e4f1d593aa091e4f41999ef45b01ab4e959f105f1ae334de89
size 3963750816

3
model-00003-of-00008.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a0ed22c84f8702a16eb14add7a4cb98675d1e44379003b1086f15b5fd1d374f9
size 3963750880

3
model-00004-of-00008.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:97f39a8b4945df9dfbab75421f5343a3d041216eebb93b45249167efc1e1eb79
size 3963750880

3
model-00005-of-00008.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bc6375fadfcaab0302ae4eab07a5fca22ed283a19a05682cd7f61576fdb00dc3
size 3963750880

3
model-00006-of-00008.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:548e486f697ff773566f11152dcc5ae2863680130567985b1030809450f71abb
size 3963750880

3
model-00007-of-00008.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f6e805c82eb355cb45b4c7644e3a8a21cbcefb1a04663ddf7cb5e904d0a69131
size 3963750880

3
model-00008-of-00008.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a9578e761cb430e32688cb93b06e9eecae7cf97be7c1215d6563a5c5fe0a717c
size 1912371880

451
model.safetensors.index.json Normal file

@@ -0,0 +1,451 @@
{
"metadata": {
"total_parameters": 14768307200,
"total_size": 29536614400
},
"weight_map": {
"lm_head.weight": "model-00008-of-00008.safetensors",
"model.embed_tokens.weight": "model-00001-of-00008.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.0.self_attn.k_norm.weight": "model-00001-of-00008.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.self_attn.q_norm.weight": "model-00001-of-00008.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.1.self_attn.k_norm.weight": "model-00001-of-00008.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.self_attn.q_norm.weight": "model-00001-of-00008.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.10.input_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.10.self_attn.k_norm.weight": "model-00003-of-00008.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.self_attn.q_norm.weight": "model-00003-of-00008.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.input_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.11.self_attn.k_norm.weight": "model-00003-of-00008.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.self_attn.q_norm.weight": "model-00003-of-00008.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.input_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.12.self_attn.k_norm.weight": "model-00003-of-00008.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.self_attn.q_norm.weight": "model-00003-of-00008.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.13.input_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.13.self_attn.k_norm.weight": "model-00003-of-00008.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.13.self_attn.q_norm.weight": "model-00003-of-00008.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.14.input_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.14.self_attn.k_norm.weight": "model-00003-of-00008.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.14.self_attn.q_norm.weight": "model-00003-of-00008.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.15.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.15.self_attn.k_norm.weight": "model-00003-of-00008.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.15.self_attn.q_norm.weight": "model-00003-of-00008.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.16.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.16.self_attn.k_norm.weight": "model-00004-of-00008.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.self_attn.q_norm.weight": "model-00004-of-00008.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.17.self_attn.k_norm.weight": "model-00004-of-00008.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.self_attn.q_norm.weight": "model-00004-of-00008.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.18.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.18.self_attn.k_norm.weight": "model-00004-of-00008.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.18.self_attn.q_norm.weight": "model-00004-of-00008.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.19.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.19.self_attn.k_norm.weight": "model-00004-of-00008.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.19.self_attn.q_norm.weight": "model-00004-of-00008.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.2.self_attn.k_norm.weight": "model-00001-of-00008.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.self_attn.q_norm.weight": "model-00001-of-00008.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.20.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.20.self_attn.k_norm.weight": "model-00004-of-00008.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.20.self_attn.q_norm.weight": "model-00004-of-00008.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.21.input_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.21.self_attn.k_norm.weight": "model-00004-of-00008.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.21.self_attn.q_norm.weight": "model-00004-of-00008.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.22.input_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.22.self_attn.k_norm.weight": "model-00005-of-00008.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.22.self_attn.q_norm.weight": "model-00005-of-00008.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.23.input_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.23.self_attn.k_norm.weight": "model-00005-of-00008.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.23.self_attn.q_norm.weight": "model-00005-of-00008.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.24.input_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.24.self_attn.k_norm.weight": "model-00005-of-00008.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.24.self_attn.q_norm.weight": "model-00005-of-00008.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.25.input_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.25.self_attn.k_norm.weight": "model-00005-of-00008.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.25.self_attn.q_norm.weight": "model-00005-of-00008.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.26.input_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.26.self_attn.k_norm.weight": "model-00005-of-00008.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.26.self_attn.q_norm.weight": "model-00005-of-00008.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.27.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.27.self_attn.k_norm.weight": "model-00005-of-00008.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.27.self_attn.q_norm.weight": "model-00005-of-00008.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.28.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.28.self_attn.k_norm.weight": "model-00006-of-00008.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.28.self_attn.q_norm.weight": "model-00006-of-00008.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.29.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.29.self_attn.k_norm.weight": "model-00006-of-00008.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.29.self_attn.q_norm.weight": "model-00006-of-00008.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.3.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.3.self_attn.k_norm.weight": "model-00001-of-00008.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.3.self_attn.q_norm.weight": "model-00001-of-00008.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.30.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.30.self_attn.k_norm.weight": "model-00006-of-00008.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.30.self_attn.q_norm.weight": "model-00006-of-00008.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.31.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.31.self_attn.k_norm.weight": "model-00006-of-00008.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.31.self_attn.q_norm.weight": "model-00006-of-00008.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.32.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.32.self_attn.k_norm.weight": "model-00006-of-00008.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.32.self_attn.q_norm.weight": "model-00006-of-00008.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.33.input_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.33.self_attn.k_norm.weight": "model-00006-of-00008.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.33.self_attn.q_norm.weight": "model-00006-of-00008.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.34.input_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.34.self_attn.k_norm.weight": "model-00007-of-00008.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.34.self_attn.q_norm.weight": "model-00007-of-00008.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.35.input_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.35.self_attn.k_norm.weight": "model-00007-of-00008.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.35.self_attn.q_norm.weight": "model-00007-of-00008.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.36.input_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.36.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.36.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.36.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.36.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.36.self_attn.k_norm.weight": "model-00007-of-00008.safetensors",
"model.layers.36.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.36.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.36.self_attn.q_norm.weight": "model-00007-of-00008.safetensors",
"model.layers.36.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.36.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.37.input_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.37.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.37.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.37.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.37.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.37.self_attn.k_norm.weight": "model-00007-of-00008.safetensors",
"model.layers.37.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.37.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.37.self_attn.q_norm.weight": "model-00007-of-00008.safetensors",
"model.layers.37.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.37.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.38.input_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.38.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.38.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.38.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.38.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.38.self_attn.k_norm.weight": "model-00007-of-00008.safetensors",
"model.layers.38.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.38.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.38.self_attn.q_norm.weight": "model-00007-of-00008.safetensors",
"model.layers.38.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.38.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.39.input_layernorm.weight": "model-00008-of-00008.safetensors",
"model.layers.39.mlp.down_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.39.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.39.mlp.up_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.39.post_attention_layernorm.weight": "model-00008-of-00008.safetensors",
"model.layers.39.self_attn.k_norm.weight": "model-00007-of-00008.safetensors",
"model.layers.39.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.39.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.39.self_attn.q_norm.weight": "model-00007-of-00008.safetensors",
"model.layers.39.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.39.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.4.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.4.self_attn.k_norm.weight": "model-00002-of-00008.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.self_attn.q_norm.weight": "model-00002-of-00008.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.5.self_attn.k_norm.weight": "model-00002-of-00008.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.self_attn.q_norm.weight": "model-00002-of-00008.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.6.self_attn.k_norm.weight": "model-00002-of-00008.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.self_attn.q_norm.weight": "model-00002-of-00008.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.7.self_attn.k_norm.weight": "model-00002-of-00008.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.self_attn.q_norm.weight": "model-00002-of-00008.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.8.self_attn.k_norm.weight": "model-00002-of-00008.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.self_attn.q_norm.weight": "model-00002-of-00008.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.9.input_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.9.self_attn.k_norm.weight": "model-00002-of-00008.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.9.self_attn.q_norm.weight": "model-00002-of-00008.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.norm.weight": "model-00008-of-00008.safetensors"
}
}

16
special_tokens_map.json Normal file

@@ -0,0 +1,16 @@
{
"eos_token": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<|PAD_TOKEN|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

3
tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:10ba4ba91270b1a50e5cd8e51023bccc66fc4ac4909dd7ae7ab29433411c9bb9
size 11422844

228
tokenizer_config.json Normal file

@@ -0,0 +1,228 @@
{
"added_tokens_decoder": {
"151643": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151644": {
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151645": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151646": {
"content": "<|object_ref_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151647": {
"content": "<|object_ref_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151648": {
"content": "<|box_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151649": {
"content": "<|box_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151650": {
"content": "<|quad_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151651": {
"content": "<|quad_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151652": {
"content": "<|vision_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151653": {
"content": "<|vision_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151654": {
"content": "<|vision_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151655": {
"content": "<|image_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151656": {
"content": "<|video_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151657": {
"content": "<tool_call>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151658": {
"content": "</tool_call>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151659": {
"content": "<|fim_prefix|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151660": {
"content": "<|fim_middle|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151661": {
"content": "<|fim_suffix|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151662": {
"content": "<|fim_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151663": {
"content": "<|repo_name|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151664": {
"content": "<|file_sep|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151665": {
"content": "<tool_response>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151666": {
"content": "</tool_response>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151667": {
"content": "<think>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151668": {
"content": "</think>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151669": {
"content": "<|PAD_TOKEN|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"bos_token": null,
"clean_up_tokenization_spaces": false,
"eos_token": "<|im_end|>",
"extra_special_tokens": {},
"model_max_length": 1000000000000000019884624838656,
"pad_token": "<|PAD_TOKEN|>",
"tokenizer_class": "Qwen2Tokenizer",
"unk_token": null
}

1
vocab.json Normal file

File diff suppressed because one or more lines are too long