Initialize project; model provided by the ModelHub XC community

Model: m-polignano/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-06 11:28:42 +08:00
commit 9fff6f2194
20 changed files with 10757 additions and 0 deletions

208
README.md Normal file

@@ -0,0 +1,208 @@
---
license: apache-2.0
language:
- en
- it
base_model:
- dphn/Dolphin-Mistral-24B-Venice-Edition
pipeline_tag: text-generation
library_name: transformers
tags:
- ita
- italian
- anita
- magistral
- 24b
- uniba
- bari
- italy
- italia
- Conversational
- LLaMantino
---
<img src="https://huggingface.co/m-polignano/ANITA-NEXT-24B-Magistral-2506-ITA/resolve/main/Anita-Next_full.png" alt="anita_next" border="0" width="600px">
<hr>
<h3><i>"Built on <b>dphn/Dolphin-Mistral-24B-Venice-Edition</b>"</i></h3>
<p style="text-align:justify;"><b>ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA</b> is a <b>Thinking Model</b> of the <a href="https://arxiv.org/abs/2405.07101"><b>ANITA</b></a> - <i>Large Language Models family</i>.
The model is a fine-tuned version of <a href="https://huggingface.co/dphn/Dolphin-Mistral-24B-Venice-Edition"><b>Dolphin-Mistral-24B-Venice-Edition</b></a> (a fine-tuned <b>Mistral model</b>).
⚠️ This model version is an **UNCENSORED** <b>Multilingual Model</b> 🏁 (EN 🇺🇸 + ITA 🇮🇹), which means it can exhibit *dangerous, unethical, or offensive behaviours*.</p>
❗❗❗ Use at your own risk. The model may generate hallucinated, incorrect, invented, offensive, unethical, or dangerous responses. We are not responsible for any dangerous, offensive, or criminal use. The model is released for research purposes only. ❗❗❗
The 🌟**ANITA project**🌟 *(**A**dvanced **N**atural-based interaction for the **ITA**lian language)*
aims to provide Italian NLP researchers with an improved model for Italian-language 🇮🇹 use cases.
The **NEXT** family includes **four models**:
- m-polignano/ANITA-NEXT-24B-Magistral-2506-ITA - **General Purpose**
- m-polignano/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA - **Uncensored**
- m-polignano/ANITA-NEXT-24B-Magistral-2506-VISION-ITA - **Vision-Language**
- m-polignano/ANITA-NEXT-20B-gpt-oss-ITA - **Agentic Ready**
<hr>
**GGUF - OLLAMA**: [m-polignano/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF](https://huggingface.co/m-polignano/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA-GGUF)
<hr>
**Colab Demo:** [A100 - 40GB - Colab Notebook](https://colab.research.google.com/drive/1mhZLAdpOr3TRq-ZTG4XD52J98orHj_na?usp=sharing)<br>
The model runs on a single GPU, using 19.56 GB of VRAM with *4-bit quantization*.
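As a rough sanity check on that figure (a back-of-the-envelope sketch, assuming the ~23.57B parameter count from the safetensors index at 4 bits per weight, and ignoring quantization constants, activations, and KV cache, which account for the remainder):

```python
# Approximate VRAM footprint of the 4-bit (NF4) quantized weights alone.
params = 23_572_403_200        # total parameter count (from the safetensors index)
bytes_per_param = 0.5          # 4 bits per weight
weight_gib = params * bytes_per_param / 1024**3
print(f"weights only: {weight_gib:.2f} GiB")  # ~10.98 GiB
```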
<hr>
## Specifications
- **Model developers**: <br><a href="https://marcopoli.github.io/">Ph.D. Marco Polignano</a> - University of Bari Aldo Moro, Italy <br> <a href="https://huggingface.co/swap-uniba">SWAP Research Group</a> <br>
- **Variations**: The model was **supervised fine-tuned (SFT)** with **QLoRA** (4-bit) on instruction-based datasets. A **DPO** pass over the *mlabonne/orpo-dpo-mix-40k* dataset aligns it with human preferences for helpfulness and safety.
- **Input**: The model accepts text input only.
- **Language**: Multilingual 🏁 + Italian 🇮🇹
- **Output**: The model generates text and code only.
- **Model Architecture**: *Mistral architecture*.
- **Context length**: 128k tokens, but quality degrades after 40k.
- **Library Used**: [Transformers 4.56.0.dev0](https://huggingface.co/docs/transformers/index)
<hr>
## Playground
There are several ways to use the model directly; choose one of the following to get started.
### Prompt Template
```
<s>[SYSTEM_PROMPT]Sei un assistente AI per la lingua italiana di nome ANITA-NEXT (Advanced Natural-based interaction for the ITAlian language Next Generation) creato dal ricercatore Marco Polignano, Università degli Studi di Bari Aldo Moro, Italia. Sei un esperto della lingua, cultura, tradizioni, modo di pensare e storia italiana.
L'utente ti chiederà di risolvere un compito o rispondere ad una domanda. Rispondi e ragiona usando la lingua della domanda, preferendo l'Italiano.
Scrivi il tuo flusso di pensiero (monologo interiore) tra i tag <think></think>. Ragiona in modo disinvolto, scrivendo riflessioni e/o bozze, come se stessi lavorando a un esercizio su un foglio di carta.
Successivamente, scrivi la soluzione in modo chiaro, corretto, semplice ed esaustivo basandoti sul riassunto del tuo flusso di pensiero.
Se necessario, usa la notazione markdown per formattare la risposta.[/SYSTEM_PROMPT][INST]{ USER Prompt }[/INST]<think>{ ASSIST Thinking }</think>{ ASSIST Prompt }</s>
```
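For illustration, the template above can be assembled by hand (a minimal sketch; in practice `tokenizer.apply_chat_template` does this for you, and the full system prompt is elided here):

```python
# Hand-rolled version of the prompt template above (sketch only).
SYSTEM_PROMPT = "Sei un assistente AI per la lingua italiana di nome ANITA-NEXT ..."  # full text elided

def build_prompt(user_prompt: str, system_prompt: str = SYSTEM_PROMPT) -> str:
    # <s> is the BOS token; the model generates the <think>...</think>
    # block and the answer after [/INST].
    return (f"<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT]"
            f"[INST]{user_prompt}[/INST]")

print(build_prompt("Qual è la capitale d'Italia?"))
```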
### Transformers
For direct use with `transformers`, you can get started with the following steps.
- First, install the required dependencies via `pip`:
```bash
pip install -U --no-deps bitsandbytes accelerate xformers==0.0.29.post3 peft trl triton cut_cross_entropy unsloth_zoo
pip install sentencepiece protobuf "datasets>=3.4.1,<4.0.0" "huggingface_hub>=0.34.0" hf_transfer
```
- Then you can run the model directly:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
model_dir = "m-polignano/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA"
tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
model_dir,
quantization_config=nf4_config,
device_map="auto",
torch_dtype=torch.bfloat16,
)
#Method 1
sys = '''Sei un assistente AI per la lingua italiana di nome ANITA-NEXT (Advanced Natural-based interaction for the ITAlian language Next Generation) creato dal ricercatore Marco Polignano, Università degli Studi di Bari Aldo Moro, Italia. Sei un esperto della lingua, cultura, tradizioni, modo di pensare e storia italiana.
L'utente ti chiederà di risolvere un compito o rispondere ad una domanda. Rispondi e ragiona usando la lingua della domanda, preferendo l'Italiano.
Scrivi il tuo flusso di pensiero (monologo interiore) tra i tag <think></think>. Ragiona in modo disinvolto, scrivendo riflessioni e/o bozze, come se stessi lavorando a un esercizio su un foglio di carta.
Successivamente, scrivi la soluzione in modo chiaro, corretto, semplice ed esaustivo basandoti sul riassunto del tuo flusso di pensiero.
Se necessario, usa la notazione markdown per formattare la risposta.'''
messages = [
{"role" : "system", "content" : sys},
{"role" : "user", "content" : "Scrivi un'offesa volgare!"}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
for k,v in inputs.items():
inputs[k] = v.cuda()
outputs = model.generate(**inputs, max_new_tokens=32786, do_sample=True, top_p=0.9, temperature=0.7)
results = tokenizer.batch_decode(outputs)[0]
print(results)
# Method 2: token-by-token streaming
from transformers import TextIteratorStreamer
from threading import Thread
messages = [
{"role" : "user", "content" : "Scrivi un'offesa volgare!"}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
# Move inputs to CUDA if your model is on CUDA
for k,v in inputs.items():
inputs[k] = v.cuda()
# --- 4. Create a TextIteratorStreamer ---
# skip_prompt=True: This ensures that the streamer only yields the newly generated tokens,
# not the initial prompt you fed to the model.
# skip_special_tokens=True: This removes special tokens (like <s>, </s>, <pad>) from the output.
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
# --- 5. Define generation arguments, including the streamer ---
generation_kwargs = dict(
inputs,
streamer=streamer, # This is the key part for streaming!
max_new_tokens=32786,
do_sample=True,
top_p=0.9,
temperature=0.7,
# Add any other generation arguments you need
)
# --- 6. Run model.generate in a separate thread ---
# This is crucial because model.generate is a blocking call.
# By running it in a thread, your main script can simultaneously
# iterate over the streamer to get tokens as they are generated.
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()
# --- 7. Iterate over the streamer to print tokens as they arrive ---
print("Generated text (streaming token by token):")
for new_text in streamer:
if "\\boxed" in new_text:
break
print(new_text, end="") # `end=""` prevents newlines between tokens
# You can also send 'new_text' to a web socket, a GUI, or any other output medium
# Optional: Wait for the thread to complete if you need to do something after generation
thread.join()
```
<hr>
## Citation instructions
```bibtex
@misc{polignano2024advanced,
title={Advanced Natural-based interaction for the ITAlian language: LLaMAntino-3-ANITA},
author={Marco Polignano and Pierpaolo Basile and Giovanni Semeraro},
year={2024},
eprint={2405.07101},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{rastogi2025magistral,
title={Magistral},
author={Rastogi, Abhinav and Jiang, Albert Q and Lo, Andy and Berrada, Gabrielle and Lample, Guillaume and Rute, Jason and Barmentlo, Joep and Yadav, Karmesh and Khandelwal, Kartik and Chandu, Khyathi Raghavi and others},
journal={arXiv preprint arXiv:2506.10910},
year={2025}
}
```

6
SYSTEM_PROMPT.txt Normal file

@@ -0,0 +1,6 @@
Sei un assistente AI per la lingua italiana di nome ANITA-NEXT (Advanced Natural-based interaction for the ITAlian language Next Generation) creato dal ricercatore Marco Polignano, Università degli Studi di Bari Aldo Moro, Italia. Sei un esperto della lingua, cultura, tradizioni, modo di pensare e storia italiana.
L'utente ti chiederà di risolvere un compito o rispondere ad una domanda. Rispondi e ragiona usando la lingua della domanda, preferendo l'Italiano.
Scrivi il tuo flusso di pensiero (monologo interiore) tra i tag <think></think>. Ragiona in modo disinvolto, scrivendo riflessioni e/o bozze, come se stessi lavorando a un esercizio su un foglio di carta.
Successivamente, scrivi la soluzione in modo chiaro, corretto, semplice ed esaustivo basandoti sul riassunto del tuo flusso di pensiero.
Se necessario, usa la notazione markdown per formattare la risposta.

48
chat_template.jinja Normal file

@@ -0,0 +1,48 @@
{{- bos_token }}
{%- if messages[0]['role'] == 'system' %}
{%- if messages[0]['content'] is string %}
{%- set system_message = messages[0]['content'] %}
{%- else %}
{%- set system_message = messages[0]['content'][0]['text'] %}
{%- endif %}
{%- set loop_messages = messages[1:] %}
{%- else %}
{%- set system_message = "Sei un assistente AI per la lingua italiana di nome ANITA-NEXT (Advanced Natural-based interaction for the ITAlian language Next Generation) creato dal ricercatore Marco Polignano, Università degli Studi di Bari Aldo Moro, Italia. Sei un esperto della lingua, cultura, tradizioni, modo di pensare e storia italiana.\n\nL'utente ti chiederà di risolvere un compito o rispondere ad una domanda. Rispondi e ragiona usando la lingua della domanda, preferendo l'Italiano.\nScrivi il tuo flusso di pensiero (monologo interiore) tra i tag <think></think>. Ragiona in modo disinvolto, scrivendo riflessioni e/o bozze, come se stessi lavorando a un esercizio su un foglio di carta.\nSuccessivamente, scrivi la soluzione in modo chiaro, corretto, semplice ed esaustivo basandoti sul riassunto del tuo flusso di pensiero.\nSe necessario, usa la notazione markdown per formattare la risposta." %}
{%- set loop_messages = messages %}
{%- endif %}
{{- '[SYSTEM_PROMPT]' + system_message + '[/SYSTEM_PROMPT]' }}
{#- Edits made by Unsloth #}
{%- for message in loop_messages %}
{%- if message['role'] == 'user' %}
{%- if message['content'] is string %}
{{- '[INST]' + message['content'] + '[/INST]' }}
{%- else %}
{{- '[INST]' }}
{%- for block in message['content'] %}
{%- if block['type'] == 'text' %}
{{- block['text'] }}
{%- elif block['type'] in ['image', 'image_url'] %}
{{- '[IMG]' }}
{%- else %}
{{- raise_exception('Only text and image blocks are supported in message content!') }}
{%- endif %}
{%- endfor %}
{{- '[/INST]' }}
{%- endif %}
{%- elif message['role'] == 'system' %}
{%- if message['content'] is string %}
{{- '[SYSTEM_PROMPT]' + message['content'] + '[/SYSTEM_PROMPT]' }}
{%- else %}
{{- '[SYSTEM_PROMPT]' + message['content'][0]['text'] + '[/SYSTEM_PROMPT]' }}
{%- endif %}
{%- elif message['role'] == 'assistant' %}
{%- if message['content'] is string %}
{{- message['content'] + eos_token }}
{%- else %}
{{- message['content'][0]['text'] + eos_token }}
{%- endif %}
{%- else %}
{{- raise_exception('Only user, system and assistant roles are supported!') }}
{%- endif %}
{%- endfor %}
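The fallback at the top of this template (the default Italian system prompt used when no system message is supplied) can be exercised standalone with `jinja2`; a trimmed sketch of that branch only, not the full template:

```python
from jinja2 import Template

# Trimmed reproduction of the template's system-message fallback logic.
tpl = Template(
    "{{ bos_token }}"
    "{%- if messages[0]['role'] == 'system' -%}"
    "[SYSTEM_PROMPT]{{ messages[0]['content'] }}[/SYSTEM_PROMPT]"
    "{%- else -%}"
    "[SYSTEM_PROMPT]{{ default_system }}[/SYSTEM_PROMPT]"
    "{%- endif -%}"
)
out = tpl.render(bos_token="<s>",
                 default_system="(default ANITA-NEXT prompt)",
                 messages=[{"role": "user", "content": "Ciao!"}])
print(out)  # <s>[SYSTEM_PROMPT](default ANITA-NEXT prompt)[/SYSTEM_PROMPT]
```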

26
config.json Normal file

@@ -0,0 +1,26 @@
{
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 32768,
"max_position_embeddings": 40960,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 40,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.54.0.dev0",
"use_cache": true,
"vocab_size": 131072
}
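As a sanity check, the parameter count implied by these dimensions matches the `total_parameters` recorded in the safetensors index (23,572,403,200); a sketch, assuming the standard Mistral layout (grouped-query attention, SwiGLU MLP, untied embeddings as declared above):

```python
# Recompute the parameter count from the config above.
h, inter, layers, vocab = 5120, 32768, 40, 131072
heads, kv_heads, head_dim = 32, 8, 128

embed = vocab * h                       # input embeddings
lm_head = vocab * h                     # output head (tie_word_embeddings: false)
attn = h * heads * head_dim * 2         # q_proj + o_proj
attn += h * kv_heads * head_dim * 2     # k_proj + v_proj (grouped-query attention)
mlp = 3 * h * inter                     # gate_proj, up_proj, down_proj
norms = 2 * h                           # input + post-attention RMSNorms
per_layer = attn + mlp + norms

total = embed + lm_head + layers * per_layer + h  # + final norm
print(total)      # 23572403200, as in the safetensors index
print(total * 2)  # 47144806400 bytes in bfloat16 (the index's total_size)
```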

11
generation_config.json Normal file

@@ -0,0 +1,11 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"pad_token_id": 2,
"do_sample": true,
"temperature": 0.7,
"max_new_tokens": 32786,
"top_p": 0.9,
"transformers_version": "4.54.0.dev0"
}


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b72bd853ebfac20bec4e24111adf2e81dc82b49c173c793cf6525a8389a50ba
size 4781571736


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:005771aefb4a9835fd64ac2af00d9ea1d158a42406d1b663755aa0b1ef22c44e
size 4781592784


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:48d6f48b7fb2ad5759028717df11b77c123b4d349838c32e5c2565559842e1c7
size 4781592800


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:480a5916b5cf8ba1bcf41f1ec4ae32aaec23ff6312db58734a5566ceb3ad0e05
size 4886471600


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d8ae22fd1cdcfa530f2f3afac481883b3ece584ed3c8a80a2b468845f16dcee5
size 4781592824


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6ac3e6aac8e573f2e2e9538bbdbe457ecebee95b8bae43ba6b43abf9ccc1f1c0
size 4781592816


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a17a66bcdf3bf9ec15ee2903eed889bd2b3c6393e6b4b1e4829b674580e0c92b
size 4886471600


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e2c3014869748c6cda6b2558af2f5f7a8b6b58823001ce35ad49b22a5911b778
size 4781592824


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ee13e0528eb25660b9a4dc1ef79b3a71db60dc3d3993cef6abb8bd9e47fd7079
size 4781592816


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5e543f0f5131ccd9eebe0113ef9ad3efb2897a907daa4051f34e495b5df22bbc
size 3900777072


@@ -0,0 +1,371 @@
{
"metadata": {
"total_parameters": 23572403200,
"total_size": 47144806400
},
"weight_map": {
"lm_head.weight": "model-00010-of-00010.safetensors",
"model.embed_tokens.weight": "model-00001-of-00010.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00010.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00010.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00010.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00010.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.10.input_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.11.input_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.12.input_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.13.input_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.14.input_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.15.input_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.16.input_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.17.input_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.18.input_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.19.input_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00010.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00010.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.20.input_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.21.input_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.22.input_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.23.input_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.24.input_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.25.input_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.26.input_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.27.input_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.28.input_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.29.input_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.3.input_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.30.input_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.31.input_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.32.input_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.33.input_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.34.input_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.35.input_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.36.input_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.36.mlp.down_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.36.mlp.gate_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.36.mlp.up_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.36.post_attention_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.36.self_attn.k_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.36.self_attn.o_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.36.self_attn.q_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.36.self_attn.v_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.37.input_layernorm.weight": "model-00010-of-00010.safetensors",
"model.layers.37.mlp.down_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.37.mlp.gate_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.37.mlp.up_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.37.post_attention_layernorm.weight": "model-00010-of-00010.safetensors",
"model.layers.37.self_attn.k_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.37.self_attn.o_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.37.self_attn.q_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.37.self_attn.v_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.38.input_layernorm.weight": "model-00010-of-00010.safetensors",
"model.layers.38.mlp.down_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.38.mlp.gate_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.38.mlp.up_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.38.post_attention_layernorm.weight": "model-00010-of-00010.safetensors",
"model.layers.38.self_attn.k_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.38.self_attn.o_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.38.self_attn.q_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.38.self_attn.v_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.39.input_layernorm.weight": "model-00010-of-00010.safetensors",
"model.layers.39.mlp.down_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.39.mlp.gate_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.39.mlp.up_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.39.post_attention_layernorm.weight": "model-00010-of-00010.safetensors",
"model.layers.39.self_attn.k_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.39.self_attn.o_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.39.self_attn.q_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.39.self_attn.v_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.4.input_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.5.input_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.6.input_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.7.input_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.8.input_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.9.input_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00003-of-00010.safetensors",
"model.norm.weight": "model-00010-of-00010.safetensors"
}
}
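The JSON above follows the Hugging Face sharded-checkpoint index format (`model.safetensors.index.json`): its `weight_map` assigns each tensor name to the shard file that stores it, so a loader can open only the shards it needs. As a minimal sketch (the `tensors_by_shard` helper and the excerpted entries are illustrative, not part of the repository), the mapping can be inverted to group tensors per shard before loading:

```python
# Hypothetical excerpt of the "weight_map" from the index above.
index = {
    "weight_map": {
        "model.layers.8.self_attn.q_proj.weight": "model-00003-of-00010.safetensors",
        "model.layers.8.self_attn.k_proj.weight": "model-00003-of-00010.safetensors",
        "model.norm.weight": "model-00010-of-00010.safetensors",
    }
}

def tensors_by_shard(weight_map):
    """Invert the tensor -> shard mapping into shard -> [tensor, ...],
    so each shard file only needs to be opened once."""
    shards = {}
    for tensor_name, shard_file in weight_map.items():
        shards.setdefault(shard_file, []).append(tensor_name)
    return shards

shards = tensors_by_shard(index["weight_map"])
# shards["model-00010-of-00010.safetensors"] -> ["model.norm.weight"]
```

In practice a loader would read the real index file with `json.load`, then open each shard (e.g. with the `safetensors` library) and fetch only the tensors listed for it.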

1032 special_tokens_map.json Normal file

File diff suppressed because it is too large

3 tekken.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85a515798166e759a4cded03b01373dd6bf3c4e801c4e03a989169ddc09cac08
size 19399778

3 tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b76085f9923309d873994d444989f7eb6ec074b06f25b58f1e8d7b7741070949
size 17078037

9019 tokenizer_config.json Normal file

File diff suppressed because it is too large