Project initialization; model provided by the ModelHub XC community

Model: norallm/normistral-11b-thinking
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-13 08:38:23 +08:00
commit 7b32e681fc
14 changed files with 103290 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

README.md Normal file

@@ -0,0 +1,267 @@
---
license: apache-2.0
language:
- nb
- nn
- 'no'
base_model:
- norallm/normistral-11b-long
library_name: transformers
pipeline_tag: text-generation
tags:
- norwegian
- bokmaal
- nynorsk
---
![](https://huggingface.co/norallm/normistral-11b-warm/resolve/main/images/puffin_2.png)
This is our instruction-tuned [NorMistral-11B](https://huggingface.co/norallm/normistral-11b-long) language model for Norwegian, trained on [open datasets](https://huggingface.co/datasets/norallm/normistral-11b-thinking-training) and released under the Apache 2.0 license. The model has undergone extensive fluency-preserving reinforcement learning, as described in our paper [Fluent Alignment with Disfluent Judges: Post-training for Lower-resource Languages](https://arxiv.org/abs/2512.08777).
**The model is freely available in our public chat interface: https://chat.llm.sigma2.no/**
_____
## License
We release the model under the Apache 2.0 license to indicate that we do not impose any additional constraints on the model weights.
However, we do not own the data in the training collection.
_____
## Usage
### 1. HuggingFace `transformers`
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# load the NorMistral tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("norallm/normistral-11b-thinking")
model = AutoModelForCausalLM.from_pretrained(
"norallm/normistral-11b-thinking",
device_map='auto',
torch_dtype=torch.bfloat16
)
# create a conversation and convert it to token indices using the NorMistral chat template
messages = [
{"role": "user", "content": "Hva er hovedstaden i Norge?"},
{"role": "assistant", "content": "Hovedstaden i Norge er Oslo. Denne byen ligger i den sørøstlige delen av landet, ved Oslofjorden. Oslo er en av de raskest voksende byene i Europa, og den er kjent for sin rike historie, kultur og moderne arkitektur. Noen populære turistattraksjoner i Oslo inkluderer Vigelandsparken, som viser mer enn 200 skulpturer laget av den berømte norske skulptøren Gustav Vigeland, og det kongelige slott, som er den offisielle residensen til Norges kongefamilie. Oslo er også hjemsted for mange museer, gallerier og teatre, samt mange restauranter og barer som tilbyr et bredt utvalg av kulinariske og kulturelle opplevelser."},
{"role": "user", "content": "Gi meg en liste over de beste stedene å besøke i hovedstaden"}
]
input_tokens = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
# run the generation (customizable via the various parameters)
output_tokens = model.generate(
input_tokens,
max_new_tokens=2048, # limit max number of generated tokens
top_k=64, # top-k sampling
top_p=0.9, # nucleus sampling
temperature=0.3, # a low temperature to make the outputs less chaotic
repetition_penalty=1.0, # turn the repetition penalty off, having it on can lead to very bad outputs
do_sample=True, # randomly sample the outputs
use_cache=True # speed-up generation by using kv cache
)
# decode the generated tokens back to text
output_str = tokenizer.decode(output_tokens[0, input_tokens.size(1):]).strip()
# separate the reasoning trace that's enclosed in the special <think> ... </think> tokens
# it should say something like "Brukeren ber: "Gi meg en liste over de beste stedene å besøke i hovedstaden"\n\nDette er en klar forespørsel om informasjon..."
reasoning_trace = output_str.split("</think>")[0].removeprefix("<think>").strip()  # removeprefix (Python 3.9+): lstrip would strip a *set* of characters, not the prefix
# separate the actual response that follows after the </think> token
# it should say something like "De beste stedene å besøke i hovedstaden Oslo:\n\n**1. Vigelandsparken**\n En av verdens største skulpturparker med over 200 skulpturer av Gustav Vigeland. Populært..."
response = output_str.split("</think>")[-1].removesuffix("</s>").strip()
```
### 2. Faster inference with vLLM
```python
from vllm import LLM, SamplingParams
# load the NorMistral model
llm = LLM(
model="norallm/normistral-11b-thinking",
dtype="bfloat16"
)
# create a conversation
messages = [
{"role": "user", "content": "Hva er hovedstaden i Norge?"},
{"role": "assistant", "content": "Hovedstaden i Norge er Oslo. Denne byen ligger i den sørøstlige delen av landet, ved Oslofjorden. Oslo er en av de raskest voksende byene i Europa, og den er kjent for sin rike historie, kultur og moderne arkitektur. Noen populære turistattraksjoner i Oslo inkluderer Vigelandsparken, som viser mer enn 200 skulpturer laget av den berømte norske skulptøren Gustav Vigeland, og det kongelige slott, som er den offisielle residensen til Norges kongefamilie. Oslo er også hjemsted for mange museer, gallerier og teatre, samt mange restauranter og barer som tilbyr et bredt utvalg av kulinariske og kulturelle opplevelser."},
{"role": "user", "content": "Gi meg en liste over de beste stedene å besøke i hovedstaden"}
]
# set up sampling parameters (equivalent to the generate() parameters)
sampling_params = SamplingParams(
max_tokens=2048, # limit max number of generated tokens
top_k=64, # top-k sampling
top_p=0.9, # nucleus sampling
temperature=0.3, # a low temperature to make the outputs less chaotic
repetition_penalty=1.0, # turn the repetition penalty off
)
# run the generation using the chat interface (applies chat template automatically)
outputs = llm.chat(messages, sampling_params=sampling_params)
# get the generated text
output_str = outputs[0].outputs[0].text.strip()
# separate the reasoning trace that's enclosed in the special <think> ... </think> tokens
reasoning_trace = output_str.split("</think>")[0].removeprefix("<think>").strip()
# separate the actual response that follows after the </think> token
response = output_str.split("</think>")[-1].removesuffix("</s>").strip()
```
### 3. GGUF models for `ollama` / `llama.cpp`
It's often convenient to run models locally with `ollama`. The simplest option is to use the model directly uploaded to [https://ollama.com/LTG/normistral-11b-thinking:latest](https://ollama.com/LTG/normistral-11b-thinking:latest).
That's a GGUF checkpoint with F16 weights, same as the one running at our inference endpoint.
More options are available at [norallm/normistral-11b-thinking-gguf](https://huggingface.co/norallm/normistral-11b-thinking-gguf), specifically checkpoints converted to the following formats:
1. 16-bit BF16 (22.9GB): [normistral-11B-thinking-BF16.gguf](https://huggingface.co/norallm/normistral-11b-thinking-gguf/blob/main/normistral-11B-thinking-BF16.gguf)
2. 16-bit F16 (22.9GB): [normistral-11B-thinking-F16.gguf](https://huggingface.co/norallm/normistral-11b-thinking-gguf/blob/main/normistral-11B-thinking-F16.gguf)
3. 8-bit Q8_0 (12.1GB): [normistral-11B-thinking-Q8_0.gguf](https://huggingface.co/norallm/normistral-11b-thinking-gguf/blob/main/normistral-11B-thinking-Q8_0.gguf)
4. 6-bit Q6_K (9.4GB): [normistral-11B-thinking-Q6_K.gguf](https://huggingface.co/norallm/normistral-11b-thinking-gguf/blob/main/normistral-11B-thinking-Q6_K.gguf)
5. 5-bit Q5_K_M (8.1GB): [normistral-11B-thinking-Q5_K_M.gguf](https://huggingface.co/norallm/normistral-11b-thinking-gguf/blob/main/normistral-11B-thinking-Q5_K_M.gguf)
6. 5-bit Q5_0 (7.9GB): [normistral-11B-thinking-Q5_0.gguf](https://huggingface.co/norallm/normistral-11b-thinking-gguf/blob/main/normistral-11B-thinking-Q5_0.gguf)
7. 4-bit Q4_K_M (6.9GB): [normistral-11B-thinking-Q4_K_M.gguf](https://huggingface.co/norallm/normistral-11b-thinking-gguf/blob/main/normistral-11B-thinking-Q4_K_M.gguf)
We also provide [a working `.modelfile`](https://huggingface.co/norallm/normistral-11b-thinking-gguf/blob/main/normistral-11b-thinking.modelfile), which contains the official chat template converted to Go template syntax (as used by `ollama`).
### 4. API
It's possible to use our free inference service at https://chat.llm.sigma2.no/ and get responses from NorMistral via API.
You will need to register at that site and generate an API key by navigating to `Settings` -> `Account` -> `API keys` -> `API Key`.
```python
import requests
BASE_URL = "https://chat.llm.sigma2.no:443"
API_KEY = "your-api-key-here" # <-- Replace with your actual API key
MODEL = "NorMistral-11b-thinking:latest"
# send a POST request
response = requests.post(
f"{BASE_URL}/api/chat/completions",
headers={
"Authorization": f"Bearer {API_KEY}",
"Content-Type": "application/json",
},
json={
"model": MODEL,
"messages": [
{"role": "user", "content": "Hva er hovedstaden i Norge?"}
],
},
)
# gather the response
response.raise_for_status()
result = response.json()
output_str = result["choices"][0]["message"]["content"].strip()
# separate the reasoning trace that's enclosed in the special <think> ... </think> tokens
# it should say something like "Brukeren spør: "Hva er hovedstaden i Norge?"\n\nDette er et faktaspørsmål om"
reasoning_trace = output_str.split("</think>")[0].removeprefix("<think>").strip()
# separate the actual response that follows after the </think> token
# it should say something like "Oslo er hovedstaden i Norge."
answer = output_str.split("</think>")[-1].removesuffix("</s>").strip()  # a new name, so the HTTP response object above is not shadowed
```
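The same `<think>` parsing appears in every snippet above; it can be kept in one place with a small helper. This is our own sketch (the name `split_reasoning` is not part of any library) and it assumes Python 3.9+ for `str.removeprefix`/`str.removesuffix`:

```python
def split_reasoning(output_str: str) -> tuple[str, str]:
    """Split a raw NorMistral generation into (reasoning_trace, answer).

    The model encloses its reasoning in <think> ... </think> and ends the
    answer with </s>; either part may be absent.
    """
    reasoning, sep, answer = output_str.partition("</think>")
    if not sep:
        # no reasoning block present: the whole string is the answer
        return "", output_str.removesuffix("</s>").strip()
    reasoning = reasoning.removeprefix("<think>").strip()
    answer = answer.removesuffix("</s>").strip()
    return reasoning, answer
```

The helper works identically on outputs from `transformers`, vLLM, or the API endpoint, since all three return the same raw string format.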
## Training and data
Generally speaking, the training follows our fluency-preserving post-training setup from [Fluent Alignment with Disfluent Judges: Post-training for Lower-resource Languages](https://arxiv.org/abs/2512.08777).
The training data is published alongside the model at [norallm/normistral-11b-thinking-training](https://huggingface.co/datasets/norallm/normistral-11b-thinking-training). Training code will be available at [github.com/ltgoslo/normistral-post-training](https://github.com/ltgoslo/normistral-post-training).
**1. Supervised finetuning (SFT)**
We start by "injecting" the instruction-following and reasoning capabilities by SFT training on English responses and reasoning traces from [Kimi-K2-Thinking](https://huggingface.co/moonshotai/Kimi-K2-Thinking). The full SFT collection is published in [train_sft.jsonl](https://huggingface.co/datasets/norallm/normistral-11b-thinking-training/blob/main/train_sft.jsonl).
**2. Reinforcement learning (d-RLAIF)**
The short SFT stage is followed by on-policy training on a large collection of Norwegian (Bokmål and Nynorsk) prompts (also available at [norallm/normistral-11b-thinking-training](https://huggingface.co/datasets/norallm/normistral-11b-thinking-training)). The specific setup of d-RLAIF (direct reinforcement learning from AI feedback) and its motivation is extensively described in [our paper](https://arxiv.org/abs/2512.08777). The "AI" reward model used here is [Mistral-Large-Instruct-2411](https://huggingface.co/mistralai/Mistral-Large-Instruct-2411).
_____
## Evaluation
We compared NorMistral against state-of-the-art instruction-tuned models of similar size. What follows is a preliminary evaluation on a generative version of [NorEval](https://arxiv.org/abs/2504.07749) (still a work in progress). The responses from all evaluated models are fully available for closer inspection at [norallm/normistral-11b-thinking-evaluation](https://huggingface.co/datasets/norallm/normistral-11b-thinking-evaluation).
**Classification tasks**
All classification scores are reported as accuracy. NoReC sentiment analysis is performed at the sentence level. The generative scores (NorRewrite and NorSummarize) are reported as average win rates against `Llama-3.1-8B`, judged by `Llama-3.3-70B` in an LLM-as-a-judge setup (see [NorEval](https://arxiv.org/abs/2504.07749) for details). * denotes "thinking" models.
| Model | NoReC_binary | NoReC_ternary | NorIdiom_NB | NorIdiom_NN | NorCSQA_NB | NorCSQA_NN |
|------------------|--------------|---------------|:-----------:|:-----------:|------------|------------|
| NorMistral-11B* | **86.3** | 65.2 | **55.7** | **27.7** | 70.7 | 64.2 |
| Llama-3.1-8B | 79.8 | 52.9 | 12.7 | 6.7 | 64.0 | 57.9 |
| Mistral-Nemo-12B | 67.9 | 49.1 | 12.9 | 8.5 | 61.6 | 49.5 |
| Qwen3-15B* | 83.5 | **69.6** | 22.1 | 13.2 | **83.8** | 71.6 |
| Gemma3-12B | 85.2 | 67.1 | 43.7 | 23.7 | 81.9 | **80.0** |
| OLMo3-7B* | 72.0 | 63.3 | 5.0 | 2.2 | 50.8 | 17.9 |
| OLMo2-13B | 32.8 | 13.2 | 3.5 | 2.2 | 48.0 | 45.3 |
| Apertus-8B | 78.4 | 58.8 | 34.3 | 15.7 | 69.2 | 63.2 |
| Model            | NorOBQA_NB | NorOBQA_NN | NRK_NB   | NRK_NN   | NorRewrite | NorSummarize |
|------------------|------------|------------|----------|----------|------------|--------------|
| NorMistral-11B*  | 83.0       | 84.4       | 58.8     | **62.3** | 51.9       | 54.3         |
| Llama-3.1-8B     | 78.5       | 71.1       | 49.8     | 46.2     | 50.0       | 50.0         |
| Mistral-Nemo-12B | 75.3       | 67.8       | 47.3     | 45.0     | 42.5       | 39.2         |
| Qwen3-15B*       | **94.4**   | **88.9**   | **63.3** | 55.9     | 77.6       | **83.1**     |
| Gemma3-12B       | 91.5       | **88.9**   | 59.8     | 58.4     | **86.8**   | 77.8         |
| OLMo3-7B*        | 70.5       | 54.4       | 43.3     | 35.9     | 7.8        | 14.2         |
| OLMo2-13B        | 55.3       | 56.7       | 45.3     | 39.4     | 48.3       | 53.7         |
| Apertus-8B       | 76.1       | 74.4       | 50.2     | 48.3     | 39.6       | 42.1         |
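For reference, a win rate against a fixed baseline can be computed from pairwise judge verdicts roughly as follows. This is a sketch assuming the common convention that ties count as half a win; see the NorEval paper for the exact protocol used here:

```python
def win_rate(verdicts: list[str]) -> float:
    """Win rate (in %) of a candidate model vs. a fixed baseline.

    verdicts: one entry per prompt, each "win", "tie", or "loss" from the
    candidate's perspective, as decided by the LLM judge.
    """
    score = {"win": 1.0, "tie": 0.5, "loss": 0.0}
    return 100.0 * sum(score[v] for v in verdicts) / len(verdicts)
```

Under this convention, a model that is indistinguishable from the baseline (all ties) scores exactly 50.0, which is why the `Llama-3.1-8B` row shows 50.0 against itself.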
_____
## Citation
```bibtex
@misc{samuel2025fluentalignmentdisfluentjudges,
title={Fluent Alignment with Disfluent Judges: Post-training for Lower-resource Languages},
author={David Samuel and Lilja Øvrelid and Erik Velldal and Andrey Kutuzov},
year={2025},
eprint={2512.08777},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2512.08777},
}
```
```bibtex
@inproceedings{samuel-etal-2025-small,
title = "Small Languages, Big Models: {A} Study of Continual Training on Languages of {Norway}",
author = "Samuel, David and
Mikhailov, Vladislav and
Velldal, Erik and
{\O}vrelid, Lilja and
Charpentier, Lucas Georges Gabriel and
Kutuzov, Andrey and
Oepen, Stephan",
editor = "Johansson, Richard and
Stymne, Sara",
booktitle = "Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)",
month = mar,
year = "2025",
address = "Tallinn, Estonia",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2025.nodalida-1.61/",
pages = "573--608",
ISBN = "978-9908-53-109-0",
}
```
## Contact
Please write [a community message](https://huggingface.co/norallm/normistral-11b-thinking/discussions) or contact David Samuel (davisamu@ifi.uio.no) if you have any questions about this model.

chat_template.jinja Normal file

@@ -0,0 +1,15 @@
{{- '<s>' }}
{%- for message in messages %}
{%- if message['role'] == 'system' %}
{{- '<system_prompt> ' + message['content'] | trim + '</system_prompt>' }}
{%- elif message['role'] == 'user' %}
{{- '<instruction> ' + message['content'] | trim + '</instruction>' }}
{%- elif message['role'] == 'assistant' %}
{%- if loop.last %}
{{- '<think> ' + message['reasoning'] | trim + '</think>' }}
{%- endif %}
{{- ' ' + message['content'] | trim + '</s>' }}
{%- else %}
{{- raise_exception('Only user, system and assistant roles are supported!') }}
{%- endif %}
{%- endfor %}
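For illustration, the template above can be mirrored in plain Python. This is a sketch of the rendering logic only (`render_chat` is our own name); `tokenizer.apply_chat_template` with the shipped Jinja file remains the authoritative implementation:

```python
def render_chat(messages: list[dict]) -> str:
    """Render a conversation the way the Jinja chat template above does."""
    out = "<s>"
    for i, msg in enumerate(messages):
        content = msg["content"].strip()  # mirrors the `| trim` filter
        if msg["role"] == "system":
            out += f"<system_prompt> {content}</system_prompt>"
        elif msg["role"] == "user":
            out += f"<instruction> {content}</instruction>"
        elif msg["role"] == "assistant":
            # only the final assistant turn keeps its reasoning trace
            if i == len(messages) - 1:
                out += f"<think> {msg['reasoning'].strip()}</think>"
            out += f" {content}</s>"
        else:
            raise ValueError("Only user, system and assistant roles are supported!")
    return out
```

Note the design choice: earlier assistant turns are rendered without their `<think>` block, so past reasoning traces do not consume context during multi-turn conversations.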

config.json Normal file

@@ -0,0 +1,28 @@
{
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 14336,
"mask_token_id": 4,
"max_position_embeddings": 1024000,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 40,
"num_key_value_heads": 8,
"pad_token_id": 3,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.55.3",
"use_cache": true,
"vocab_size": 51200
}
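As a sanity check, the configuration above implies exactly the parameter count reported in the safetensors index below (`"total_parameters": 11429893120`). Recomputing it from the listed hyperparameters:

```python
# Hyperparameters copied from config.json above
# (MistralForCausalLM with untied input/output embeddings).
V, H, I, L = 51200, 5120, 14336, 40          # vocab, hidden, intermediate, layers
n_heads, n_kv, head_dim = 32, 8, 128

# Attention: q/o project to n_heads*head_dim, k/v to n_kv*head_dim (GQA)
attn = H * n_heads * head_dim                # q_proj
attn += 2 * H * n_kv * head_dim              # k_proj + v_proj
attn += n_heads * head_dim * H               # o_proj

mlp = 3 * H * I                              # gate_proj + up_proj + down_proj
norms = 2 * H                                # input + post-attention RMSNorm
per_layer = attn + mlp + norms

# + embed_tokens, lm_head (untied), and the final RMSNorm
total = L * per_layer + 2 * V * H + H
print(total)  # 11429893120
```

The total matches the index exactly, confirming that no parameters beyond the standard Mistral layout are present in the checkpoint.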

generation_config.json Normal file

@@ -0,0 +1,7 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"pad_token_id": 3,
"transformers_version": "4.55.3"
}


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:05ddf0d02981f5ac8536a0923b092e5ba1ff173cfa71267bb4bd5dac93b2b714
size 4991394528


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:97ae41bc2df161198e821d1f8c977decb11b1d3e035a55c31cb50778c47c5b0a
size 4907529440


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:57b00a3eb0ad3807e0c95c665e7bbcbcf4c467ada2fbffc52a7a177e055dedcd
size 4907529456


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:097431ea0eb7bb2ee22cc566e216d838a9136ef2b4dfb6011628b0550fe70b2f
size 4907529456


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:16d62b5603be09fc4360a925bcb6731c2427f354393cbfd8c0b7fb108658236a
size 3145845632


@@ -0,0 +1,371 @@
{
"metadata": {
"total_parameters": 11429893120,
"total_size": 22859786240
},
"weight_map": {
"lm_head.weight": "model-00005-of-00005.safetensors",
"model.embed_tokens.weight": "model-00001-of-00005.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.10.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.11.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.12.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.13.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.14.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.15.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.16.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.17.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.18.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.19.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.20.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.21.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.22.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.23.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.24.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.25.input_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.26.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"model.layers.27.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.28.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.29.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.30.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.31.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.32.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.33.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.34.input_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.35.input_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"model.layers.36.input_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.36.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.36.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.36.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.36.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.36.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.36.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.36.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.36.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.37.input_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.37.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.37.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.37.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.37.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.37.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.37.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.37.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.37.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.38.input_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.38.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.38.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.38.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.38.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.38.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.38.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.38.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.38.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.39.input_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.39.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.39.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.39.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.39.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
"model.layers.39.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.39.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.39.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.39.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.5.input_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.6.input_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.7.input_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.8.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"model.layers.9.input_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"model.norm.weight": "model-00005-of-00005.safetensors"
}
}
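The `weight_map` above can be consumed programmatically: each tensor name points to the shard file that stores it, which is how loaders decide which files to open. A minimal sketch, using a toy three-entry excerpt of the mapping (the real index spans all five shards):

```python
import json
from collections import defaultdict

# Toy excerpt of the "weight_map" from model.safetensors.index.json;
# the real file maps every tensor of the model across five shards.
index_json = """
{
  "weight_map": {
    "model.layers.26.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.norm.weight": "model-00005-of-00005.safetensors"
  }
}
"""

index = json.loads(index_json)

# Group tensor names by the shard that stores them.
shards = defaultdict(list)
for tensor_name, shard_file in index["weight_map"].items():
    shards[shard_file].append(tensor_name)

for shard_file in sorted(shards):
    print(shard_file, len(shards[shard_file]))
```

Note that a single layer's tensors may land in different shards (layer 26 above has its attention projections in shard 3 but its layernorms and MLP in shard 4); this is expected at shard boundaries and not an error in the index.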

37
special_tokens_map.json Normal file


@@ -0,0 +1,37 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"mask_token": {
"content": "<mask>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
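Each entry in `special_tokens_map.json` is an AddedToken-style record; the `lstrip`/`rstrip`/`normalized`/`single_word` flags control whitespace and matching behavior, but for most downstream uses only the `content` string matters. A minimal sketch of reading the map (flags omitted for brevity; the prompt layout is illustrative, not a documented template):

```python
import json

# Inlined copy of the special_tokens_map.json entries shown above,
# reduced to the "content" field.
raw = """
{
  "bos_token": {"content": "<s>"},
  "eos_token": {"content": "</s>"},
  "unk_token": {"content": "<unk>"},
  "pad_token": {"content": "<pad>"},
  "mask_token": {"content": "<mask>"}
}
"""

tokens = {role: spec["content"] for role, spec in json.loads(raw).items()}

# Wrap a piece of text in the BOS/EOS markers (illustrative layout only).
prompt = f"{tokens['bos_token']}Hva er hovedstaden i Norge?{tokens['eos_token']}"
print(prompt)  # <s>Hva er hovedstaden i Norge?</s>
```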

102373
tokenizer.json Normal file

File diff suppressed because it is too large.

142
tokenizer_config.json Normal file

@@ -0,0 +1,142 @@
{
"add_bos_token": true,
"add_eos_token": false,
"add_prefix_space": false,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"3": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"4": {
"content": "<mask>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"5": {
"content": "<instruction>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"6": {
"content": "</instruction>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"7": {
"content": "<system_prompt>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"8": {
"content": "</system_prompt>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"9": {
"content": "<think>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"10": {
"content": "</think>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"11": {
"content": "<special_6>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"12": {
"content": "<special_7>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"13": {
"content": "<special_8>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"14": {
"content": "<special_9>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"15": {
"content": "<special_10>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"bos_token": "<s>",
"pad_token": "<pad>",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"model_max_length": 32768,
"tokenizer_class": "PreTrainedTokenizerFast",
"unk_token": "<unk>"
}
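The `added_tokens_decoder` above pins token IDs 0 through 15 to fixed special strings, including paired `<instruction>`/`</instruction>`, `<system_prompt>`/`</system_prompt>`, and `<think>`/`</think>` markers (IDs 11 to 15 are reserved `<special_*>` placeholders). A minimal sketch of inverting that mapping; the prompt layout shown is hypothetical, since `tokenizer_config.json` declares no chat template, and the authoritative format should be taken from the model card:

```python
# Inline copy of the ID-to-token mapping from added_tokens_decoder
# (IDs 0 to 10; the reserved <special_*> slots are omitted).
added_tokens_decoder = {
    0: "<unk>", 1: "<s>", 2: "</s>", 3: "<pad>", 4: "<mask>",
    5: "<instruction>", 6: "</instruction>",
    7: "<system_prompt>", 8: "</system_prompt>",
    9: "<think>", 10: "</think>",
}

# Invert to look up the ID of a given special token.
token_to_id = {tok: i for i, tok in added_tokens_decoder.items()}

# Hypothetical prompt layout using the instruction/thinking markers.
prompt = (
    "<s><system_prompt>Du er en hjelpsom assistent.</system_prompt>"
    "<instruction>Forklar kort hva safetensors er.</instruction><think>"
)
print(token_to_id["<think>"])  # 9
```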