Initialize project; model provided by the ModelHub XC community

Model: unsloth/Mistral-Small-24B-Instruct-2501
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-12 07:29:56 +08:00
commit 741cfa154b
19 changed files with 10893 additions and 0 deletions

36
.gitattributes vendored Normal file

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

361
README.md Normal file

@@ -0,0 +1,361 @@
---
language:
- en
library_name: transformers
license: apache-2.0
tags:
- unsloth
- transformers
- mistral
- mistral-instruct
- instruct
base_model: mistralai/Mistral-Small-24B-Instruct-2501
---
# Finetune LLMs 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Mistral (7B) here: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model that can be exported to GGUF, served with vLLM, or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_3.5_Mini-Conversational.ipynb) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
- This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
# Model Card for Mistral-Small-24B-Instruct-2501
Mistral Small 3 (2501) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models!
This model is an instruction-fine-tuned version of the base model: [Mistral-Small-24B-Base-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501).
Mistral Small can be deployed locally and is exceptionally "knowledge-dense", fitting in a single RTX 4090 or a 32GB RAM MacBook once quantized.
Perfect for:
- Fast response conversational agents.
- Low latency function calling.
- Subject matter experts via fine-tuning.
- Local inference for hobbyists and organizations handling sensitive data.
For enterprises that need specialized capabilities (increased context, particular modalities, domain specific knowledge, etc.), we will be releasing commercial models beyond what Mistral AI contributes to the community.
This release demonstrates our commitment to open source, serving as a strong base model.
Learn more about Mistral Small in our [blog post](https://mistral.ai/news/mistral-small-3/).
Model developer: Mistral AI Team
## Key Features
- **Multilingual:** Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish.
- **Agent-Centric:** Offers best-in-class agentic capabilities with native function calling and JSON outputting.
- **Advanced Reasoning:** State-of-the-art conversational and reasoning capabilities.
- **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window:** A 32k context window.
- **System Prompt:** Maintains strong adherence and support for system prompts.
- **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size.
## Benchmark results
### Human evaluated benchmarks
| Category | Gemma-2-27B | Qwen-2.5-32B | Llama-3.3-70B | Gpt4o-mini |
|----------|-------------|--------------|---------------|------------|
| Mistral is better | 0.536 | 0.496 | 0.192 | 0.200 |
| Mistral is slightly better | 0.196 | 0.184 | 0.164 | 0.204 |
| Ties | 0.052 | 0.060 | 0.236 | 0.160 |
| Other is slightly better | 0.060 | 0.088 | 0.112 | 0.124 |
| Other is better | 0.156 | 0.172 | 0.296 | 0.312 |
**Note**:
- We conducted side by side evaluations with an external third-party vendor, on a set of over 1k proprietary coding and generalist prompts.
- Evaluators were tasked with selecting their preferred model response from anonymized generations produced by Mistral Small 3 vs another model.
- We are aware that in some cases the benchmarks on human judgement starkly differ from publicly available benchmarks, but have taken extra caution in verifying a fair evaluation. We are confident that the above benchmarks are valid.
### Publicly accessible benchmarks
**Reasoning & Knowledge**
| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|---------------|--------------|---------------|---------------|-------------|
| mmlu_pro_5shot_cot_instruct | 0.663 | 0.536 | 0.666 | 0.683 | 0.617 |
| gpqa_main_cot_5shot_instruct | 0.453 | 0.344 | 0.531 | 0.404 | 0.377 |
**Math & Coding**
| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|---------------|--------------|---------------|---------------|-------------|
| humaneval_instruct_pass@1 | 0.848 | 0.732 | 0.854 | 0.909 | 0.890 |
| math_instruct | 0.706 | 0.535 | 0.743 | 0.819 | 0.761 |
**Instruction following**
| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|---------------|--------------|---------------|---------------|-------------|
| mtbench_dev | 8.35 | 7.86 | 7.96 | 8.26 | 8.33 |
| wildbench | 52.27 | 48.21 | 50.04 | 52.73 | 56.13 |
| arena_hard | 0.873 | 0.788 | 0.840 | 0.860 | 0.897 |
| ifeval | 0.829 | 0.8065 | 0.8835 | 0.8401 | 0.8499 |
**Note**:
- Performance on all benchmarks was obtained through the same internal evaluation pipeline - as such, numbers may vary slightly from previously reported performance
([Qwen2.5-32B-Instruct](https://qwenlm.github.io/blog/qwen2.5/), [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), [Gemma-2-27B-IT](https://huggingface.co/google/gemma-2-27b-it)).
- Judge-based evals such as WildBench, Arena Hard, and MT-Bench were based on gpt-4o-2024-05-13.
### Basic Instruct Template (V7-Tekken)
```
<s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST]
```
*`<system prompt>`, `<user message>`, and `<assistant response>` are placeholders.*
***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth***
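For illustration only, the V7-Tekken layout above can be assembled by hand as a plain string; note that [mistral-common](https://github.com/mistralai/mistral-common) remains the source of truth for real tokenization, and the helper name below is ours. A minimal sketch:

```python
# Illustrative only: manual assembly of the V7-Tekken layout shown above.
# For real tokenization, use mistral-common instead of string concatenation.
def build_v7_tekken_prompt(system_prompt, turns):
    """turns is a list of (user_message, assistant_response) pairs;
    the final pair may use assistant_response=None to prompt generation."""
    prompt = f"<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT]"
    for user_msg, assistant_msg in turns:
        prompt += f"[INST]{user_msg}[/INST]"
        if assistant_msg is not None:
            prompt += f"{assistant_msg}</s>"
    return prompt

print(build_v7_tekken_prompt("Be brief.", [("Hi", "Hello!"), ("Bye", None)]))
# <s>[SYSTEM_PROMPT]Be brief.[/SYSTEM_PROMPT][INST]Hi[/INST]Hello!</s>[INST]Bye[/INST]
```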
## Usage
The model can be used with the following frameworks:
- [`vllm`](https://github.com/vllm-project/vllm): See [here](#vLLM)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#Transformers)
### vLLM
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
**Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`.
**Note 2**: Make sure to add a system prompt to best tailor the model to your needs. If you want to use the model as a general assistant, we recommend the following
system prompt:
```
system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")"""
```
**_Installation_**
Make sure you install [`vLLM >= 0.6.4`](https://github.com/vllm-project/vllm/releases/tag/v0.6.4):
```
pip install --upgrade vllm
```
Also make sure you have [`mistral_common >= 1.5.2`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.2) installed:
```
pip install --upgrade mistral_common
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or the one on [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).
#### Server
We recommend using Mistral-Small-24B-Instruct-2501 in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Mistral-Small-24B-Instruct-2501 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice
```
**Note:** Running Mistral-Small-24B-Instruct-2501 on GPU requires ~55 GB of GPU RAM in bf16 or fp16.
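The ~55 GB figure is roughly consistent with a back-of-the-envelope estimate from this repository's `config.json` (40 layers, 8 KV heads, head_dim 128) and the safetensors index: bf16 weights plus a full-context KV cache, ignoring activations and framework overhead. A rough sketch:

```python
# Back-of-the-envelope memory check using shapes from this repo's config.json.
num_layers, num_kv_heads, head_dim = 40, 8, 128
bytes_per_param = 2  # bf16

weight_bytes = 47_144_806_400  # total_size in model.safetensors.index.json

# KV cache per token: K and V tensors, per layer, per KV head, per head dim.
kv_bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_param
kv_cache_32k = kv_bytes_per_token * 32_768  # one full 32k-context sequence

print(f"weights: {weight_bytes / 1e9:.1f} GB")                  # 47.1 GB
print(f"KV cache per token: {kv_bytes_per_token // 1024} KiB")  # 160 KiB
print(f"KV cache at 32k context: {kv_cache_32k / 1e9:.1f} GB")  # 5.4 GB
```

Together that is roughly 52-53 GB before activations, which is why ~55 GB of GPU RAM is a reasonable working figure.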
2. To query the server, you can use a simple Python snippet:
```py
import requests
import json
url = "http://<your-server>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Mistral-Small-24B-Instruct-2501"
messages = [
{
"role": "system",
"content": "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
},
{
"role": "user",
"content": "Give me 5 non-formal ways to say 'See you later' in French."
},
]
data = {"model": model, "messages": messages}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
# Sure, here are five non-formal ways to say "See you later" in French:
#
# 1. À plus tard
# 2. À plus
# 3. Salut
# 4. À toute
# 5. Bisous
#
# ```
# /\_/\
# ( o.o )
# > ^ <
# ```
```
### Function calling
Mistral-Small-24B-Instruct-2501 is excellent at function / tool calling tasks via vLLM. *E.g.:*
<details>
<summary>Example</summary>
```py
import requests
import json
from huggingface_hub import hf_hub_download
from datetime import datetime, timedelta
url = "http://<your-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Mistral-Small-24B-Instruct-2501"
def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    today = datetime.today().strftime("%Y-%m-%d")
    yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
    model_name = repo_id.split("/")[-1]
    return system_prompt.format(name=model_name, today=today, yesterday=yesterday)
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "The city to find the weather for, e.g. 'San Francisco'",
},
"state": {
"type": "string",
"description": "The state abbreviation, e.g. 'CA' for California",
},
"unit": {
"type": "string",
"description": "The unit for temperature",
"enum": ["celsius", "fahrenheit"],
},
},
"required": ["city", "state", "unit"],
},
},
},
{
"type": "function",
"function": {
"name": "rewrite",
"description": "Rewrite a given text for improved clarity",
"parameters": {
"type": "object",
"properties": {
"text": {
"type": "string",
"description": "The input text to rewrite",
}
},
},
},
},
]
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.",
},
{
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "bbc5b7ede",
"type": "function",
"function": {
"name": "rewrite",
"arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}',
},
}
],
},
{
"role": "tool",
"content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}',
"tool_call_id": "bbc5b7ede",
"name": "rewrite",
},
{
"role": "assistant",
"content": "---\n\nOpenAI is a FOR-profit company.",
},
{
"role": "user",
"content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?",
},
]
data = {"model": model, "messages": messages, "tools": tools}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["tool_calls"])
# [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}]
```
</details>
#### Offline
```py
from vllm import LLM
from vllm.sampling_params import SamplingParams
SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
user_prompt = "Give me 5 non-formal ways to say 'See you later' in French."
messages = [
{
"role": "system",
"content": SYSTEM_PROMPT
},
{
"role": "user",
"content": user_prompt
},
]
# Note: running this model on GPU requires over 60 GB of GPU RAM.
model_name = "mistralai/Mistral-Small-24B-Instruct-2501"
llm = LLM(model=model_name, tokenizer_mode="mistral", tensor_parallel_size=8)
sampling_params = SamplingParams(max_tokens=512, temperature=0.15)
outputs = llm.chat(messages, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
# Sure, here are five non-formal ways to say "See you later" in French:
#
# 1. À plus tard
# 2. À plus
# 3. Salut
# 4. À toute
# 5. Bisous
#
# ```
# /\_/\
# ( o.o )
# > ^ <
# ```
```
### Transformers
If you want to use Hugging Face transformers to generate text, you can do something like this.
```py
from transformers import pipeline
import torch
messages = [
{"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-Small-24B-Instruct-2501", max_new_tokens=256, torch_dtype=torch.bfloat16)
chatbot(messages)
```

29
config.json Normal file

@@ -0,0 +1,29 @@
{
"_name_or_path": "mistralai/Mistral-Small-24B-Instruct-2501",
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 32768,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 40,
"num_key_value_heads": 8,
"pad_token_id": 11,
"rms_norm_eps": 1e-05,
"rope_theta": 100000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"unsloth_fixed": true,
"use_cache": true,
"vocab_size": 131072
}
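As a sanity check, the ~24B headline parameter count can be reconstructed from the shapes in the config above. This is a sketch of the standard Mistral-architecture bookkeeping; since `tie_word_embeddings` is false, the embedding and `lm_head` matrices are counted separately.

```python
# Parameter count reconstructed from the config.json shapes above.
vocab, hidden = 131_072, 5_120
layers, inter = 40, 32_768
heads, kv_heads, head_dim = 32, 8, 128

q = hidden * heads * head_dim          # q_proj
kv = 2 * hidden * kv_heads * head_dim  # k_proj + v_proj (grouped-query attn)
o = heads * head_dim * hidden          # o_proj
mlp = 3 * hidden * inter               # gate_proj + up_proj + down_proj
norms = 2 * hidden                     # two RMSNorms per layer

per_layer = q + kv + o + mlp + norms
total = 2 * vocab * hidden + layers * per_layer + hidden  # embeds + final norm

print(f"{total:,} params (~{total / 1e9:.1f}B), {total * 2:,} bytes in bf16")
# 23,572,403,200 params (~23.6B), 47,144,806,400 bytes in bf16
```

The bf16 byte count matches the `total_size` recorded in this repo's `model.safetensors.index.json` exactly.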

1
configuration.json Normal file

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}

10
generation_config.json Normal file

@@ -0,0 +1,10 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"do_sample": true,
"eos_token_id": 2,
"max_length": 32768,
"pad_token_id": 11,
"temperature": 0.15,
"transformers_version": "4.49.0.dev0"
}


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:75a14c708eea501700a723dc74bc886cf36a1393686a3fb098ee106b160da32f
size 4781571736


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1ff40fbfd9e042b7dab3f3c9442f870a4701f53e394dda769807a160ba40f32a
size 4781592784


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4cc2d059fded71efd2947a414f32053b4ed3fa84383edf97b6d91fd9f04e4235
size 4781592800


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa0e9acacf161c45ae0d71ca3f7e4ec9ee55dae2153398da52f81ee4f9e1b8d2
size 4886471600


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dafb696763d31a1fda58010b73ecc05c19d395da8ec2c24aa9c41da33f2230d3
size 4781592824


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d9a433b19fd4d6986660a616a0d6fc7d02d9e8c0ab3c9b98940217ee6bd4e053
size 4781592816


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5ac5c7b042491f917016c3e9635583177058f736be5fa315019b959fc3c43b63
size 4886471600


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3c460c5b957ab3ac81f03bb20c0348820225dd9b819fa4487ae733b1e696e573
size 4781592824


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:304f66de1aeb55f0b4e1181885e3d15b65b485d5ce5c93b4adcdf7dd2c2d8cc5
size 4781592816


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c4c3bcedc02f4dd7e04c8b0fe1199f4f27de7a37790d1510a8772ffe05093543
size 3900777072


@@ -0,0 +1,370 @@
{
"metadata": {
"total_size": 47144806400
},
"weight_map": {
"lm_head.weight": "model-00010-of-00010.safetensors",
"model.embed_tokens.weight": "model-00001-of-00010.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00010.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00010.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00010.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00010.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.10.input_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.11.input_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.12.input_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.13.input_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.14.input_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.15.input_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00004-of-00010.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.16.input_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00004-of-00010.safetensors",
"model.layers.17.input_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.18.input_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.19.input_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00005-of-00010.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00010.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00010.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.20.input_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00005-of-00010.safetensors",
"model.layers.21.input_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.22.input_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.23.input_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00006-of-00010.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.24.input_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00006-of-00010.safetensors",
"model.layers.25.input_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.26.input_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.27.input_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.28.input_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00007-of-00010.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.29.input_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00007-of-00010.safetensors",
"model.layers.3.input_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00010.safetensors",
"model.layers.30.input_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.31.input_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.32.input_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00008-of-00010.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.33.input_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00008-of-00010.safetensors",
"model.layers.34.input_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.35.input_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.36.input_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.36.mlp.down_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.36.mlp.gate_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.36.mlp.up_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.36.post_attention_layernorm.weight": "model-00009-of-00010.safetensors",
"model.layers.36.self_attn.k_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.36.self_attn.o_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.36.self_attn.q_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.36.self_attn.v_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.37.input_layernorm.weight": "model-00010-of-00010.safetensors",
"model.layers.37.mlp.down_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.37.mlp.gate_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.37.mlp.up_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.37.post_attention_layernorm.weight": "model-00010-of-00010.safetensors",
"model.layers.37.self_attn.k_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.37.self_attn.o_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.37.self_attn.q_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.37.self_attn.v_proj.weight": "model-00009-of-00010.safetensors",
"model.layers.38.input_layernorm.weight": "model-00010-of-00010.safetensors",
"model.layers.38.mlp.down_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.38.mlp.gate_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.38.mlp.up_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.38.post_attention_layernorm.weight": "model-00010-of-00010.safetensors",
"model.layers.38.self_attn.k_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.38.self_attn.o_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.38.self_attn.q_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.38.self_attn.v_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.39.input_layernorm.weight": "model-00010-of-00010.safetensors",
"model.layers.39.mlp.down_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.39.mlp.gate_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.39.mlp.up_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.39.post_attention_layernorm.weight": "model-00010-of-00010.safetensors",
"model.layers.39.self_attn.k_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.39.self_attn.o_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.39.self_attn.q_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.39.self_attn.v_proj.weight": "model-00010-of-00010.safetensors",
"model.layers.4.input_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.5.input_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.6.input_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00002-of-00010.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.7.input_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00002-of-00010.safetensors",
"model.layers.8.input_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.9.input_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00003-of-00010.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00003-of-00010.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00003-of-00010.safetensors",
"model.norm.weight": "model-00010-of-00010.safetensors"
}
}
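
The `weight_map` above assigns each tensor name to one of ten safetensors shards. A minimal sketch of how such an index can be consumed, assuming a standard `model.safetensors.index.json` layout; `plan_loads` is a hypothetical helper (not part of any library) that groups requested tensor names by shard so each shard file only needs to be opened once:

```python
import json
from collections import defaultdict

def plan_loads(index_path: str, wanted: list[str]) -> dict[str, list[str]]:
    """Group requested tensor names by the shard file that stores them.

    Reads the index JSON (assumed to have a top-level "weight_map" dict
    mapping tensor name -> shard filename, as in the file above) and
    returns {shard_filename: [tensor names]} so a loader can open each
    shard exactly once.
    """
    with open(index_path) as f:
        weight_map = json.load(f)["weight_map"]
    by_shard: dict[str, list[str]] = defaultdict(list)
    for name in wanted:
        by_shard[weight_map[name]].append(name)  # KeyError if name is unknown
    return dict(by_shard)
```

In practice a loader would then pass each group to `safetensors`' per-file reader; the grouping step itself is the only part the index format dictates.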

1032
special_tokens_map.json Normal file

File diff suppressed because it is too large

3
tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b76085f9923309d873994d444989f7eb6ec074b06f25b58f1e8d7b7741070949
size 17078037
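
The three lines above are a git-LFS pointer file: one `key value` pair per line giving the spec version, the object's SHA-256 hash, and its size in bytes. A small sketch of parsing such a pointer (a generic illustration of the format, not an official git-lfs API):

```python
def parse_lfs_pointer(text: str) -> dict[str, str]:
    """Parse a git-LFS pointer file into a dict of its key/value fields.

    Each non-empty line has the form "key value"; the value may itself
    contain spaces or colons (e.g. "oid sha256:<hex>"), so only the
    first space splits key from value.
    """
    fields: dict[str, str] = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields
```

Applied to the pointer above, this yields the `version`, `oid`, and `size` fields; the actual 17 MB `tokenizer.json` lives in LFS storage addressed by that hash.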

9021
tokenizer_config.json Normal file

File diff suppressed because it is too large