Initialize project; model provided by the ModelHub XC community

Model: Locutusque/Hercules-3.1-Mistral-7B
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-07 15:59:14 +08:00
commit e28ad5528e
17 changed files with 91795 additions and 0 deletions

35
.gitattributes vendored Normal file

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
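These entries tell Git to route matching files through Git LFS instead of storing them in the regular object database. As a rough illustration, a path can be checked against a subset of the patterns above with Python's `fnmatch` (which only approximates gitattributes glob semantics; the helper below is hypothetical, not part of this repository):

```python
from fnmatch import fnmatch

# Subset of the LFS-tracked patterns from the .gitattributes above.
LFS_PATTERNS = ["*.bin", "*.safetensors", "*.onnx", "*.gz", "*tfevents*"]

def tracked_by_lfs(path: str) -> bool:
    """Return True if the path matches any LFS-tracked pattern."""
    return any(fnmatch(path, pattern) for pattern in LFS_PATTERNS)
```

For example, `tracked_by_lfs("model-00001-of-00008.safetensors")` is `True`, while small JSON configs like `config.json` stay in ordinary Git storage.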

201
README.md Normal file

@@ -0,0 +1,201 @@
---
license: apache-2.0
library_name: transformers
tags:
- chemistry
- biology
- code
- medical
- not-for-all-audiences
datasets:
- Locutusque/Hercules-v3.0
model-index:
- name: Hercules-3.1-Mistral-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-3.1-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-3.1-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.65
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-3.1-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.83
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-3.1-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-3.1-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 42.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-3.1-Mistral-7B
name: Open LLM Leaderboard
---
# Model Card: Hercules-3.1-Mistral-7B
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6437292ecd93f4c9a34b0d47/Ip9wEG2Ne4vihNStHSDvX.png)
## Model Description
Hercules-3.1-Mistral-7B is a fine-tuned language model derived from mistralai/Mistral-7B-v0.1. It is designed to excel at instruction following, function calling, and conversational interaction across scientific and technical domains. The fine-tuning dataset, Hercules-v3.0, expands on the diverse capabilities of OpenHermes-2.5 with contributions from numerous curated datasets, giving the model enhanced abilities in:
- Complex Instruction Following: Understanding and accurately executing multi-step instructions, even those involving specialized terminology.
- Function Calling: Seamlessly interpreting and executing function calls, providing appropriate input and output values.
- Domain-Specific Knowledge: Engaging in informative and educational conversations about Biology, Chemistry, Physics, Mathematics, Medicine, Computer Science, and more.
## Intended Uses & Potential Bias
Hercules-3.1-Mistral-7B is well-suited to the following applications:
- Specialized Chatbots: Creating knowledgeable chatbots and conversational agents in scientific and technical fields.
- Instructional Assistants: Supporting users with educational and step-by-step guidance in various disciplines.
- Code Generation and Execution: Facilitating code execution through function calls, aiding in software development and prototyping.
**Important Note: Although Hercules-v3.0 is carefully constructed, the underlying data sources may contain biases or reflect harmful stereotypes. Use this model with caution and consider additional measures to mitigate potential biases in its responses.**
## Limitations and Risks
- Toxicity: The dataset contains toxic or harmful examples.
- Hallucinations and Factual Errors: Like other language models, Hercules-3.1-Mistral-7B may generate incorrect or misleading information, especially in specialized domains where it lacks sufficient expertise.
- Potential for Misuse: The ability to engage in technical conversations and execute function calls could be misused for malicious purposes.
## Training Data
Hercules-3.1-Mistral-7B was fine-tuned on Hercules-v3.0, which draws on the following sources:
- `cognitivecomputations/dolphin`
- `Evol Instruct 70K & 140K`
- `teknium/GPT4-LLM-Cleaned`
- `jondurbin/airoboros-3.2`
- `AlekseyKorshuk/camel-chatml`
- `CollectiveCognition/chats-data-2023-09-22`
- `Nebulous/lmsys-chat-1m-smortmodelsonly`
- `glaiveai/glaive-code-assistant-v2`
- `glaiveai/glaive-code-assistant`
- `glaiveai/glaive-function-calling-v2`
- `garage-bAInd/Open-Platypus`
- `meta-math/MetaMathQA`
- `teknium/GPTeacher-General-Instruct`
- `GPTeacher roleplay datasets`
- `BI55/MedText`
- `pubmed_qa labeled subset`
- `Unnatural Instructions`
- `M4-ai/LDJnr_combined_inout_format`
- `CollectiveCognition/chats-data-2023-09-27`
- `CollectiveCognition/chats-data-2023-10-16`
- `NobodyExistsOnTheInternet/sharegptPIPPA`
- `yuekai/openchat_sharegpt_v3_vicuna_format`
- `ise-uiuc/Magicoder-Evol-Instruct-110K`
- `sablo/oasst2_curated`
The bluemoon dataset was filtered out of the training data, as it was found to cause performance degradation.
## Training Procedure
- This model was trained on 8 Kaggle TPUs, using torch_xla SPMD for high MXU efficiency. Training incurred no expense on my end (meaning you can reproduce it too!).
- A learning rate of 2e-06 was used with the Adam optimizer, along with a linear scheduler with an end factor of 0.3. The low learning rate was chosen to prevent exploding gradients.
- No mixed precision was used; the default dtype was bfloat16.
- Trained on 700,000 examples of Hercules-v3.0.
- No model parameters were frozen.
- This model was trained on OpenAI's ChatML prompt format. Because this model has function-calling capabilities, the prompt format is slightly different; here is what it looks like: ```<|im_start|>system\n{message}<|im_end|>\n<|im_start|>user\n{user message}<|im_end|>\n<|im_start|>call\n{function call message}<|im_end|>\n<|im_start|>function\n{function response message}<|im_end|>\n<|im_start|>assistant\n{assistant message}</s>```
This model was fine-tuned using the [TPU-Alignment](https://github.com/Locutusque/TPU-Alignment) repository.
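A minimal sketch of assembling this extended ChatML prompt from role/content pairs (a hypothetical helper, not part of the repository; note that per the template above, the trained assistant turn ends with `</s>` rather than `<|im_end|>`):

```python
def build_prompt(messages, add_generation_prompt=True):
    """Assemble the extended ChatML prompt described above.

    Roles: system, user, call (function call emitted by the model),
    function (function response), assistant.
    """
    parts = [f"<|im_start|>{role}\n{content}<|im_end|>\n" for role, content in messages]
    if add_generation_prompt:
        # Open an assistant turn for the model to complete; the model
        # itself terminates its reply with </s>.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)
```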
# Quants
[ExLlamaV2 quants by bartowski](https://huggingface.co/bartowski/Hercules-3.1-Mistral-7B-exl2)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__Hercules-3.1-Mistral-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |62.09|
|AI2 Reasoning Challenge (25-Shot)|61.18|
|HellaSwag (10-Shot) |83.55|
|MMLU (5-Shot) |63.65|
|TruthfulQA (0-shot) |42.83|
|Winogrande (5-shot) |79.01|
|GSM8k (5-shot) |42.30|
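The reported average is the unweighted mean of the six benchmark scores:

```python
# Benchmark scores from the leaderboard table above.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 61.18,
    "HellaSwag (10-Shot)": 83.55,
    "MMLU (5-Shot)": 63.65,
    "TruthfulQA (0-shot)": 42.83,
    "Winogrande (5-shot)": 79.01,
    "GSM8k (5-shot)": 42.30,
}
average = round(sum(scores.values()) / len(scores), 2)  # 62.09, matching the table
```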

26
config.json Normal file

@@ -0,0 +1,26 @@
{
"_name_or_path": "mistralai/Mistral-7B-v0.1",
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 10000.0,
"sliding_window": 4096,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.37.2",
"use_cache": true,
"vocab_size": 32000
}
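Two sanity checks that follow from the attention settings in this config: each head covers `hidden_size / num_attention_heads` dimensions, and grouped-query attention shares each KV head among `num_attention_heads / num_key_value_heads` query heads:

```python
# Values taken from config.json above.
hidden_size = 4096
num_attention_heads = 32
num_key_value_heads = 8

head_dim = hidden_size // num_attention_heads                     # 128 dims per head
queries_per_kv_head = num_attention_heads // num_key_value_heads  # 4 query heads per KV head (GQA)
```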

6
generation_config.json Normal file

@@ -0,0 +1,6 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"transformers_version": "4.37.2"
}


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bd5ffc7c1b9eb66c002e72f3928fda1f161b796e3914be72c1d5daed88c5d638
size 1889587040


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b8118197ced8df274c559c1f9c127157cd5708c121cabf07ba37168c4cfa0479
size 1946243936


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cb6b7b18ab36a46e506366340d3dac9860b829872920002c42f594a8ffea6460
size 1979781432


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:876794c86764adb5f263e62697df7ea36e4d9abe830ca5509b592e3b9b80f3ca
size 1946243984


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ce6dd5537f457b7f8eb9a60f9d94ef3007653ab68dc6472f53288f40bafd8ed1
size 1979781448


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6fc2a8bacc6d3ca1482aa73562db6a50b17881a603d432cec3de24d4708a7836
size 1946243984


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:88fb81b1b299287f93e90e352e16dd51b33836c0009e7b6bdba378739fdabe72
size 1979781448


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e8ed9450c5ff53a7cf15af4299790f2947769230840360307e78bf62b35fe981
size 815834680


@@ -0,0 +1,298 @@
{
"metadata": {
"total_size": 14483464192
},
"weight_map": {
"lm_head.weight": "model-00008-of-00008.safetensors",
"model.embed_tokens.weight": "model-00001-of-00008.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.10.input_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.input_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.13.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.14.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.input_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.18.input_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.19.input_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.20.input_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.21.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.22.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.23.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.24.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.25.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.26.input_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.27.input_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.28.input_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.29.input_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.3.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.30.input_layernorm.weight": "model-00008-of-00008.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00008-of-00008.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.31.input_layernorm.weight": "model-00008-of-00008.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00008-of-00008.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.4.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.input_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.9.input_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
"model.norm.weight": "model-00008-of-00008.safetensors"
}
}
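Loaders use this index to map each tensor name to the shard that stores it. A sketch of that lookup over an excerpt of the index (in practice you would `json.load` the `model.safetensors.index.json` file itself):

```python
# Excerpt of the weight map shown above.
index = {
    "metadata": {"total_size": 14483464192},
    "weight_map": {
        "lm_head.weight": "model-00008-of-00008.safetensors",
        "model.embed_tokens.weight": "model-00001-of-00008.safetensors",
    },
}

def shard_for(tensor_name: str) -> str:
    """Return the shard file that stores the named tensor."""
    return index["weight_map"][tensor_name]
```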

24
special_tokens_map.json Normal file

@@ -0,0 +1,24 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": "</s>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

91136
tokenizer.json Normal file

File diff suppressed because it is too large

BIN
tokenizer.model (Stored with Git LFS) Normal file

Binary file not shown.

42
tokenizer_config.json Normal file

@@ -0,0 +1,42 @@
{
"add_bos_token": true,
"add_eos_token": false,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [],
"bos_token": "<s>",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": true,
"model_max_length": 1000000000000000019884624838656,
"pad_token": "</s>",
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": false
}
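A toy illustration (not the actual SentencePiece tokenizer) of how `add_bos_token`/`add_eos_token` from this config shape an encoded sequence, with `</s>` doubling as the `pad_token`:

```python
def add_special_tokens(tokens, add_bos_token=True, add_eos_token=False):
    """Mimic the special-token defaults from tokenizer_config.json."""
    out = list(tokens)
    if add_bos_token:
        out.insert(0, "<s>")   # <s> is prepended to every encoded sequence
    if add_eos_token:
        out.append("</s>")     # disabled by default; </s> also serves as pad_token
    return out
```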