Initialize project; model provided by the ModelHub XC community

Model: stelterlab/EuroLLM-9B-Instruct-AWQ
Source: Original Platform
ModelHub XC
2026-04-10 10:58:14 +08:00
commit 4eb366719f
11 changed files with 3344 additions and 0 deletions

.gitattributes vendored Normal file (36 lines added)

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

README.md Normal file (137 lines added)

@@ -0,0 +1,137 @@
---
license: apache-2.0
language:
- en
- de
- es
- fr
- it
- pt
- pl
- nl
- tr
- sv
- cs
- el
- hu
- ro
- fi
- uk
- sl
- sk
- da
- lt
- lv
- et
- bg
- 'no'
- ca
- hr
- ga
- mt
- gl
- zh
- ru
- ko
- ja
- ar
- hi
library_name: transformers
base_model:
- utter-project/EuroLLM-9B-Instruct
---
AWQ quantization: performed by stelterlab (INT4, GEMM kernel) with AutoAWQ by casper-hansen (https://github.com/casper-hansen/AutoAWQ/).
Original weights by utter-project. The original model card follows:
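For reference, here is a minimal AutoAWQ sketch that reproduces the settings recorded in this repo's config.json (4-bit, group size 128, zero point, GEMM kernel). This illustrates the tooling, not the exact command used for this upload; the output path is a placeholder.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "utter-project/EuroLLM-9B-Instruct"  # original FP16 weights
quant_path = "EuroLLM-9B-Instruct-AWQ"            # placeholder output directory

# Mirrors the quantization_config in config.json below.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Calibrate on AutoAWQ's default dataset and quantize the weights to INT4.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```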
# Model Card for EuroLLM-9B-Instruct
This is the model card for EuroLLM-9B-Instruct. You can also check the pre-trained version: [EuroLLM-9B](https://huggingface.co/utter-project/EuroLLM-9B).
- **Developed by:** Unbabel, Instituto Superior Técnico, Instituto de Telecomunicações, University of Edinburgh, Aveni, University of Paris-Saclay, University of Amsterdam, Naver Labs, Sorbonne Université.
- **Funded by:** European Union.
- **Model type:** A 9B parameter multilingual transformer LLM.
- **Language(s) (NLP):** Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian.
- **License:** Apache License 2.0.
## Model Details
The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages as well as some additional relevant languages.
EuroLLM-9B is a 9B parameter model trained on 4 trillion tokens divided across the considered languages and several data sources: Web data, parallel data (en-xx and xx-en), and high-quality datasets.
EuroLLM-9B-Instruct was further instruction tuned on EuroBlocks, an instruction tuning dataset with a focus on general instruction-following and machine translation.
### Model Description
EuroLLM uses a standard, dense Transformer architecture:
- We use grouped query attention (GQA) with 8 key-value heads, since it has been shown to increase speed at inference time while maintaining downstream performance.
- We perform pre-layer normalization, since it improves training stability, and use RMSNorm, which is faster.
- We use the SwiGLU activation function, since it has been shown to lead to good results on downstream tasks.
- We use rotary positional embeddings (RoPE) in every layer, since these have been shown to lead to good performance while allowing the extension of the context length.
For pre-training, we use 400 Nvidia H100 GPUs of the MareNostrum 5 supercomputer, training the model with a constant batch size of 2,800 sequences, which corresponds to approximately 12 million tokens, using the Adam optimizer and BF16 precision.
Here is a summary of the model hyper-parameters:
| Hyperparameter                       | Value                |
|--------------------------------------|----------------------|
| Sequence Length | 4,096 |
| Number of Layers | 42 |
| Embedding Size | 4,096 |
| FFN Hidden Size | 12,288 |
| Number of Heads | 32 |
| Number of KV Heads (GQA) | 8 |
| Activation Function | SwiGLU |
| Position Encodings | RoPE (Θ = 10,000) |
| Layer Norm | RMSNorm |
| Tied Embeddings | No |
| Embedding Parameters | 0.524B |
| LM Head Parameters | 0.524B |
| Non-embedding Parameters | 8.105B |
| Total Parameters | 9.154B |
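As a quick sanity check (an illustrative calculation, not part of the original card), the embedding and LM-head rows follow directly from the vocabulary and embedding sizes, and the per-step token count from the batch size:

```python
vocab_size, hidden_size, seq_len = 128_000, 4_096, 4_096

embed_params = vocab_size * hidden_size    # 524,288,000 ≈ 0.524B
lm_head_params = vocab_size * hidden_size  # embeddings are untied, so counted again
print(embed_params, lm_head_params)

tokens_per_batch = 2_800 * seq_len         # 11,468,800 ≈ 12 million tokens per step
print(tokens_per_batch)
```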
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "utter-project/EuroLLM-9B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": "You are EuroLLM --- an AI assistant specialized in European languages that provides safe, educational and helpful answers.",
    },
    {
        "role": "user",
        "content": "What is the capital of Portugal? How would you describe it?",
    },
]

# Build the chat prompt with the model's template and generate a reply.
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
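The snippet above loads the original FP16 checkpoint. To run the 4-bit AWQ weights from this repo instead, recent transformers releases can load AWQ checkpoints directly when the autoawq package is installed; a minimal sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stelterlab/EuroLLM-9B-Instruct-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization_config in config.json tells transformers to use the AWQ kernels.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```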
## Results
### EU Languages
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63f33ecc0be81bdc5d903466/ob_1sLM8c7dxuwpv6AAHA.png)
**Table 1:** Comparison of open-weight LLMs on multilingual benchmarks. The Borda count corresponds to the average ranking of the models (see [Colombo et al., 2022](https://arxiv.org/abs/2202.03799)). For ARC-Challenge, HellaSwag, and MMLU we use the Okapi datasets ([Lai et al., 2023](https://aclanthology.org/2023.emnlp-demo.28/)), which include 11 languages. For MMLU-Pro and MUSR we translate the English version with Tower ([Alves et al., 2024](https://arxiv.org/abs/2402.17733)) into 6 EU languages.
\* As there are no public versions of the pre-trained models, we evaluated them using the post-trained versions.
The results in Table 1 highlight EuroLLM-9B's superior performance on multilingual tasks compared to other European-developed models (as shown by the Borda count of 1.0), as well as its strong competitiveness with non-European models, achieving results comparable to Gemma-2-9B and outperforming the rest on most benchmarks.
### English
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63f33ecc0be81bdc5d903466/EfilsW_p-JA13mV2ilPkm.png)
**Table 2:** Comparison of open-weight LLMs on English general benchmarks.
\* As there are no public versions of the pre-trained models, we evaluated them using the post-trained versions.
The results in Table 2 demonstrate EuroLLM's strong performance on English tasks, surpassing most European-developed models and matching the performance of Mistral-7B (obtaining the same Borda count).
## Bias, Risks, and Limitations
EuroLLM-9B has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).

config.json Normal file (38 lines added)

@@ -0,0 +1,38 @@
{
"_name_or_path": "/data/hf/models/models--utter-project--EuroLLM-9B-Instruct/snapshots/880ffbd6f9fb66b82471a21cd43621e0cff780fb",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 4,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 12288,
"max_position_embeddings": 4096,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 42,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"quantization_config": {
"bits": 4,
"group_size": 128,
"modules_to_not_convert": null,
"quant_method": "awq",
"version": "gemm",
"zero_point": true
},
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 10000.0,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.46.1",
"use_cache": false,
"vocab_size": 128000
}
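For reference, the quantization settings above can be inspected programmatically; a small sketch (assuming network access to the repo):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("stelterlab/EuroLLM-9B-Instruct-AWQ")
print(cfg.quantization_config)  # {'bits': 4, 'group_size': 128, 'quant_method': 'awq', ...}
```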

generation_config.json Normal file (7 lines added)

@@ -0,0 +1,7 @@
{
"_from_model_config": true,
"bos_token_id": 3,
"do_sample": true,
"eos_token_id": 4,
"transformers_version": "4.46.1"
}
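These defaults (sampling enabled, EOS token id 4) are applied automatically by generate(); to inspect or override them explicitly, something along these lines works:

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("stelterlab/EuroLLM-9B-Instruct-AWQ")
print(gen_cfg.do_sample, gen_cfg.eos_token_id)  # True 4
```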

model-00001-of-00002.safetensors (Git LFS) Normal file (3 lines added)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a4fcdbeed92b3d9d2650c74324afad5e29c503508b6a4064ab837832c883aa1a
size 4980364296

model-00002-of-00002.safetensors (Git LFS) Normal file (3 lines added)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b0c94364d11df83a11c782a24108a01189ba2422fad61d20768085d96bbd5bc1
size 1327561192

model.safetensors.index.json Normal file (976 lines added)

@@ -0,0 +1,976 @@
{
"metadata": {
"total_size": 6307815424
},
"weight_map": {
"model.embed_tokens.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.21.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.22.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.23.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.24.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.25.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.26.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.26.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.26.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.26.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.26.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.26.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.26.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.26.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.26.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.26.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.27.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.27.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.27.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.27.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.27.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.27.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.27.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.27.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.27.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.27.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.28.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.28.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.28.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.28.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.28.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.28.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.28.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.28.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.28.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.28.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.28.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.28.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.28.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.28.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.28.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.28.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.28.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.28.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.28.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.28.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.28.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.28.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.29.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.29.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.29.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.29.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.29.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.29.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.29.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.29.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.29.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.29.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.29.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.29.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.29.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.29.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.29.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.29.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.29.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.29.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.29.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.29.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.29.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.29.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.30.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.30.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.30.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.30.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.30.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.30.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.30.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.30.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.30.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.30.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.30.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.30.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.30.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.30.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.30.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.30.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.30.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.30.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.30.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.30.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.30.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.30.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.31.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.31.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.31.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.31.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.31.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.31.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.31.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.31.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.31.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.31.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.31.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.31.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.31.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.31.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.31.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.31.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.31.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.31.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.31.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.31.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.31.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.31.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.32.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.32.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.32.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.32.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.32.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.32.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.32.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.32.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.32.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.32.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.32.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.32.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.32.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.32.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.32.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.32.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.32.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.32.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.32.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.32.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.32.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.32.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.33.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.33.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.33.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.33.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.33.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.33.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.33.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.33.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.33.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.33.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.33.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.33.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.33.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.33.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.33.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.33.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.33.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.33.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.33.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.33.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.33.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.33.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.34.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.34.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.34.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.34.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.34.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.34.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.34.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.34.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.34.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.34.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.34.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.34.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.34.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.34.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.34.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.34.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.34.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.34.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.34.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.34.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.34.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.34.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.35.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.35.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.35.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.35.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.35.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.35.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.35.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.35.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.35.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.35.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.35.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.35.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.35.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.35.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.35.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.35.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.35.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.35.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.35.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.35.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.35.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.35.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.36.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.36.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.36.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.36.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.36.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.36.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.36.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.36.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.36.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.36.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.36.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.36.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.36.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.36.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.36.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.36.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.36.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.36.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.36.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.36.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.36.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.36.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.36.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.37.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.37.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.37.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.37.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.37.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.37.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.37.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.37.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.37.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.37.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.37.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.37.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.37.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.37.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.37.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.37.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.37.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.37.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.37.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.37.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.37.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.37.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.37.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.38.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.38.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.38.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.38.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.38.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.38.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.38.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.38.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.38.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.38.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.38.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.38.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.38.mlp.gate_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.38.mlp.gate_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.38.mlp.gate_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.38.mlp.up_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.38.mlp.up_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.38.mlp.up_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.38.mlp.down_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.38.mlp.down_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.38.mlp.down_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.38.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.38.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.39.self_attn.q_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.39.self_attn.q_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.39.self_attn.q_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.39.self_attn.k_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.39.self_attn.k_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.39.self_attn.k_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.39.self_attn.v_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.39.self_attn.v_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.39.self_attn.v_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.39.self_attn.o_proj.qweight": "model-00001-of-00002.safetensors",
"model.layers.39.self_attn.o_proj.qzeros": "model-00001-of-00002.safetensors",
"model.layers.39.self_attn.o_proj.scales": "model-00001-of-00002.safetensors",
"model.layers.39.mlp.gate_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.39.mlp.gate_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.39.mlp.gate_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.39.mlp.up_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.39.mlp.up_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.39.mlp.up_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.39.mlp.down_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.39.mlp.down_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.39.mlp.down_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.39.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.39.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.40.self_attn.q_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.40.self_attn.q_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.40.self_attn.q_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.40.self_attn.k_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.40.self_attn.k_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.40.self_attn.k_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.40.self_attn.v_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.40.self_attn.v_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.40.self_attn.v_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.40.self_attn.o_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.40.self_attn.o_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.40.self_attn.o_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.40.mlp.gate_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.40.mlp.gate_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.40.mlp.gate_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.40.mlp.up_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.40.mlp.up_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.40.mlp.up_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.40.mlp.down_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.40.mlp.down_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.40.mlp.down_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.40.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.40.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.41.self_attn.q_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.41.self_attn.q_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.41.self_attn.q_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.41.self_attn.k_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.41.self_attn.k_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.41.self_attn.k_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.41.self_attn.v_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.41.self_attn.v_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.41.self_attn.v_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.41.self_attn.o_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.41.self_attn.o_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.41.self_attn.o_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.41.mlp.gate_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.41.mlp.gate_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.41.mlp.gate_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.41.mlp.up_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.41.mlp.up_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.41.mlp.up_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.41.mlp.down_proj.qweight": "model-00002-of-00002.safetensors",
"model.layers.41.mlp.down_proj.qzeros": "model-00002-of-00002.safetensors",
"model.layers.41.mlp.down_proj.scales": "model-00002-of-00002.safetensors",
"model.layers.41.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.41.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.norm.weight": "model-00002-of-00002.safetensors",
"lm_head.weight": "model-00002-of-00002.safetensors"
}
}
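The closing braces above end `model.safetensors.index.json`: its `weight_map` assigns every tensor, including each AWQ triplet (`qweight`, `qzeros`, `scales`) per linear projection, to the shard file that stores it. As a minimal sketch, assuming a local checkout with the LFS shards pulled, any single tensor can be resolved through this index without loading the whole model:

```python
import json
from safetensors import safe_open  # pip install safetensors (torch needed for framework="pt")

# Read the index that maps tensor names to shard files, as listed above.
with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.layers.41.mlp.down_proj.qweight"   # an AWQ tensor from the map above
shard = index["weight_map"][name]                # -> "model-00002-of-00002.safetensors"

# Open only the shard that holds this tensor and fetch it lazily.
with safe_open(shard, framework="pt") as f:
    tensor = f.get_tensor(name)
print(name, tuple(tensor.shape), tensor.dtype)
```

In practice `AutoModelForCausalLM.from_pretrained` performs this shard resolution automatically; the sketch only makes the indirection visible.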

30
special_tokens_map.json Normal file

@@ -0,0 +1,30 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
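Per the `special_tokens_map.json` just added, the end-of-sequence token is the chat-style `<|im_end|>` rather than `</s>`, which is instead reused as the padding token, so generation stops at turn boundaries. A minimal check, assuming the repo id from this commit resolves on the hosting hub:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("stelterlab/EuroLLM-9B-Instruct-AWQ")

assert tok.bos_token == "<s>"
assert tok.eos_token == "<|im_end|>"   # chat-style end-of-turn, per the map above
assert tok.pad_token == "</s>"         # the sequence terminator is reused for padding
assert tok.unk_token == "<unk>"
```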

3
tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bddecaa5b362553591b9c989b601f60490b417be3d595698e81fcfb27353c6ae
size 15783334

3
tokenizer.model Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:42957aec88804838805b335da1bbcce9d116024a20c6c4b56757648b9e348254
size 2408875
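The two three-line stubs above are Git LFS pointer files: only the `oid` hash and byte `size` are committed, while the actual `tokenizer.json` (~15.8 MB) and `tokenizer.model` (~2.4 MB) payloads live in LFS storage, retrievable with `git lfs pull`. A hedged sketch of fetching the resolved file through the hub client instead, assuming the repo id is reachable:

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Downloading through the hub resolves the LFS pointer to the real payload.
path = hf_hub_download(
    repo_id="stelterlab/EuroLLM-9B-Instruct-AWQ",
    filename="tokenizer.json",
)
print(path)  # local cache path to the ~15.8 MB tokenizer file
```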

2108
tokenizer_config.json Normal file

File diff suppressed because it is too large
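The suppressed `tokenizer_config.json` (2,108 lines) carries per-token metadata and, on instruct checkpoints of this kind, usually the chat template that produces the `<|im_end|>` turn convention seen in the special-tokens map. A sketch under that assumption (the template itself is not shown in this diff):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("stelterlab/EuroLLM-9B-Instruct-AWQ")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Translate 'good morning' into Portuguese."},
]
# Render the conversation with the checkpoint's own chat template.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # formatted prompt, ending with the opener for the assistant turn
```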