Initialize the project; model provided by the ModelHub XC community

Model: Arc53/docsgpt-7b-mistral
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-07 20:43:01 +08:00
commit 7a4f61485b
17 changed files with 91689 additions and 0 deletions

35
.gitattributes vendored Normal file

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

91
README.md Normal file

@@ -0,0 +1,91 @@
---
license: apache-2.0
tags:
- rag
- closed-qa
- context
- mistral
---
DocsGPT is optimized for documentation (RAG-optimised): it is specifically fine-tuned to give answers grounded in the provided context, making it particularly useful for developers and technical support teams.
We used the LoRA fine-tuning process.
This model is fine-tuned on top of zephyr-7b-beta.
It is released under the Apache-2.0 license, so you can use it for commercial purposes too.
Benchmarks:
BACON:
The BACON test is an internal assessment designed to evaluate the capabilities of neural networks in handling questions with substantial content. It focuses on testing the model's understanding of context-driven queries, as well as its tendency to hallucinate and its attention span. The questions in both parts are carefully crafted, drawing from diverse sources such as scientific papers, complex code problems, and instructional prompts, providing a comprehensive test of the model's ability to process and generate information across various domains.
| Model | Score |
|------------------------------|-------|
| gpt-4 | 8.74 |
| DocsGPT-7b-Mistral | 8.64 |
| gpt-3.5-turbo | 8.42 |
| zephyr-7b-beta | 8.37 |
| neural-chat-7b-v3-1 | 7.88 |
| Mistral-7B-Instruct-v0.1 | 7.44 |
| openinstruct-mistral-7b | 5.86 |
| llama-2-13b | 2.29 |
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6220f5dfd0351748e114ca53/lWefx5b5uQAt4Uzf_0x-O.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6220f5dfd0351748e114ca53/nAd4icZa2jIer-_JWOpZ0.png)
MT-Bench with LLM judge:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6220f5dfd0351748e114ca53/SOOVW_j908gpB8W804vsG.png)
First turn:
| Model | Turn | Score |
|-----------------------|------|----------|
| gpt-4 | 1 | 8.956250 |
| gpt-3.5-turbo | 1 | 8.075000 |
| DocsGPT-7b-Mistral | 1 | 7.593750 |
| zephyr-7b-beta | 1 | 7.412500 |
| vicuna-13b-v1.3 | 1 | 6.812500 |
| alpaca-13b | 1 | 4.975000 |
| deepseek-coder-6.7b | 1 | 4.506329 |
Second turn:
| Model | Turn | Score |
|-----------------------|------|----------|
| gpt-4 | 2 | 9.025000 |
| gpt-3.5-turbo | 2 | 7.812500 |
| DocsGPT-7b-Mistral | 2 | 6.740000 |
| zephyr-7b-beta | 2 | 6.650000 |
| vicuna-13b-v1.3 | 2 | 5.962500 |
| deepseek-coder-6.7b | 2 | 5.025641 |
| alpaca-13b | 2 | 4.087500 |
Average:
| Model | Score |
|-----------------------|----------|
| gpt-4 | 8.990625 |
| gpt-3.5-turbo | 7.943750 |
| DocsGPT-7b-Mistral | 7.166875 |
| zephyr-7b-beta | 7.031250 |
| vicuna-13b-v1.3 | 6.387500 |
| deepseek-coder-6.7b | 4.764331 |
| alpaca-13b | 4.531250 |
To prepare your prompts, make sure you keep this format:
```
### Instruction
(where the question goes)
### Context
(your document retrieval + system instructions)
### Answer
```
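For instance, the format above can be assembled with a small helper (a sketch; `build_prompt` is a hypothetical name, not part of this repository):

```python
def build_prompt(instruction: str, context: str) -> str:
    """Assemble a DocsGPT-style prompt: instruction, retrieved context, answer cue."""
    return (
        "### Instruction\n"
        f"{instruction}\n"
        "### Context\n"
        f"{context}\n"
        "### Answer\n"
    )

# Example: a question plus a retrieved documentation snippet.
prompt = build_prompt(
    "How do I enable debug logging?",
    "Set LOG_LEVEL=debug in the environment before starting the server.",
)
print(prompt)
```

The resulting string can then be passed to the model through your usual generation pipeline.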

27
config.json Normal file

@@ -0,0 +1,27 @@
{
"_name_or_path": "Arc53/docsgpt-7b-mistral",
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pad_token_id": 2,
"rms_norm_eps": 1e-05,
"rope_theta": 10000.0,
"sliding_window": 4096,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.36.0.dev0",
"use_cache": true,
"vocab_size": 32000
}
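A quick sanity check of the attention geometry this config implies: 32 query heads share 8 key/value heads, i.e. grouped-query attention with 4 query heads per KV head (the arithmetic below is an illustration, not part of the released files):

```python
# Values copied from config.json above.
hidden_size = 4096
num_attention_heads = 32
num_key_value_heads = 8

head_dim = hidden_size // num_attention_heads           # per-head dimension
kv_groups = num_attention_heads // num_key_value_heads  # query heads per KV head

print(head_dim, kv_groups)  # prints: 128 4
```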

6
generation_config.json Normal file

@@ -0,0 +1,6 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"transformers_version": "4.36.0.dev0"
}


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e6f542c827da02f10c2eea71f76af9ade4ed2f16e476116c0603f8a7fb7a2590
size 1889587008


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:62e61c8201a1599d4c9e59596bcec94c534a838f3cf2dc93699f58d31362f76c
size 1946243896


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a34f8db39b10b140eaa0b0196fac0a96b502ff05ef75b5486168de6ca25b465
size 1979781392


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:51a5206a5923b0b44b711ae0c42728030cc6423b157a61d9ed78896c5a706561
size 1946243936


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf2ece8bb37b61383aafe1fdcaa3f013470a6d9df63a367b292c65cf2b908bce
size 1979781416


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d1205f2b366a31da08af273d27db4d52cbc781801beb8339cac16c192bd92cce
size 1946243936


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:851dc446538c56253518ecdae0df16589d1a04ed0b8037b3408b5d5a653b1002
size 1979781416


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e10a6edc44d8bc56066bac480faf3aaa768193f9b5e6ecd9cba5752329457769
size 815834664


@@ -0,0 +1,298 @@
{
"metadata": {
"total_size": 14483464192
},
"weight_map": {
"lm_head.weight": "model-00008-of-00008.safetensors",
"model.embed_tokens.weight": "model-00001-of-00008.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.10.input_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.input_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.13.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.14.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.input_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.input_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
"model.layers.18.input_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.19.input_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.20.input_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.21.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
"model.layers.22.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.23.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.24.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.25.input_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.26.input_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
"model.layers.27.input_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.28.input_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.29.input_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.3.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
"model.layers.30.input_layernorm.weight": "model-00008-of-00008.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00008-of-00008.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
"model.layers.31.input_layernorm.weight": "model-00008-of-00008.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00008-of-00008.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00008-of-00008.safetensors",
"model.layers.4.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.input_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.input_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
"model.layers.9.input_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
"model.norm.weight": "model-00008-of-00008.safetensors"
}
}
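The `weight_map` index above tells a loader which shard file holds each tensor; a self-contained sketch of that lookup over a miniature index in the same shape (the `shards_for` helper is ours, not part of the repository):

```python
# A miniature index shaped like the safetensors index file above (illustrative subset).
index = {
    "metadata": {"total_size": 14483464192},
    "weight_map": {
        "lm_head.weight": "model-00008-of-00008.safetensors",
        "model.embed_tokens.weight": "model-00001-of-00008.safetensors",
    },
}

def shards_for(index: dict, tensor_names: list[str]) -> set[str]:
    """Return the set of shard files needed to load the given tensors."""
    return {index["weight_map"][name] for name in tensor_names}

needed = shards_for(index, ["lm_head.weight", "model.embed_tokens.weight"])
```

In practice, libraries that read sharded checkpoints consult exactly this map to open only the shards a requested tensor lives in.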

35
special_tokens_map.json Normal file

@@ -0,0 +1,35 @@
{
"additional_special_tokens": [
"<unk>",
"<s>",
"</s>"
],
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

91122
tokenizer.json Normal file

File diff suppressed because it is too large

BIN
tokenizer.model (Stored with Git LFS) Normal file

Binary file not shown.

48
tokenizer_config.json Normal file

@@ -0,0 +1,48 @@
{
"add_bos_token": true,
"add_eos_token": false,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [
"<unk>",
"<s>",
"</s>"
],
"bos_token": "<s>",
"chat_template": "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": true,
"model_max_length": 1000000000000000019884624838656,
"pad_token": "</s>",
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"truncation_side": "left",
"unk_token": "<unk>",
"use_default_system_prompt": true
}
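The `chat_template` in this config is the zephyr-style turn format; a plain-Python sketch of the same logic (whitespace may differ slightly from the Jinja rendering, and the function name is ours):

```python
def format_chat(messages, eos_token="</s>", add_generation_prompt=True):
    """Tag each turn with its role marker and terminate it with the EOS token."""
    role_tags = {"user": "<|user|>", "system": "<|system|>", "assistant": "<|assistant|>"}
    parts = []
    for message in messages:
        tag = role_tags.get(message["role"])
        if tag is not None:
            parts.append(f"{tag}\n{message['content']}{eos_token}")
    if add_generation_prompt:
        # Open the assistant turn so the model continues from here.
        parts.append("<|assistant|>")
    return "\n".join(parts)

text = format_chat([
    {"role": "system", "content": "Answer from the given context."},
    {"role": "user", "content": "What license is this model under?"},
])
```

In practice, `tokenizer.apply_chat_template(...)` in transformers renders the bundled Jinja template directly, so this sketch is only for understanding the layout.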