Initialize project; model provided by the ModelHub XC community
Model: medalpaca/medalpaca-13b Source: Original Platform
34
.gitattributes
vendored
Normal file
@@ -0,0 +1,34 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
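These `.gitattributes` entries route matching files through the Git LFS filter, so large binaries are stored as small pointer files in the repository. The patterns behave roughly like shell globs; a minimal sketch of checking which files would be LFS-tracked, using Python's `fnmatch` as an approximation (real gitattributes matching has extra rules, e.g. for `saved_model/**/*`):

```python
from fnmatch import fnmatch

# A few of the patterns from the .gitattributes file above
lfs_patterns = ["*.bin", "*.safetensors", "*.pt", "*tfevents*"]

def tracked(name: str) -> bool:
    """Return True if the filename matches any LFS pattern."""
    return any(fnmatch(name, p) for p in lfs_patterns)

print(tracked("pytorch_model-00001-of-00006.bin"))  # True
print(tracked("config.json"))                       # False
```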
75
README.md
Normal file
@@ -0,0 +1,75 @@
---
license: cc
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
---

# MedAlpaca 13b

## Table of Contents

[Model Description](#model-description)
- [Architecture](#architecture)
- [Training Data](#training-data)

[Model Usage](#model-usage)

[Limitations](#limitations)

## Model Description

### Architecture

`medalpaca-13b` is a large language model fine-tuned specifically for medical domain tasks. It is based on LLaMA (Large Language Model Meta AI) and contains 13 billion parameters. Its primary goal is to improve question answering and medical dialogue.

### Training Data

The training data for this project was sourced from several resources. First, we used Anki flashcards, automatically generating questions from the front of each card and answers from the back. Second, we generated medical question-answer pairs from [Wikidoc](https://www.wikidoc.org/index.php/Main_Page): we extracted paragraphs with relevant headings and used ChatGPT (GPT-3.5) to generate questions from the headings, with the corresponding paragraphs as answers. This dataset is still under development, and we estimate that approximately 70% of these question-answer pairs are factually correct. Third, we extracted question-answer pairs from StackExchange, taking the top-rated questions from five categories: Academia, Bioinformatics, Biology, Fitness, and Health. Additionally, we used a dataset from [ChatDoctor](https://arxiv.org/abs/2303.14070) consisting of 200,000 question-answer pairs, available at https://github.com/Kent0n-Li/ChatDoctor.

| Source                       | n items |
|------------------------------|---------|
| ChatDoc large                | 200000  |
| wikidoc                      | 67704   |
| Stackexchange academia       | 40865   |
| Anki flashcards              | 33955   |
| Stackexchange biology        | 27887   |
| Stackexchange fitness        | 9833    |
| Stackexchange health         | 7721    |
| Wikidoc patient information  | 5942    |
| Stackexchange bioinformatics | 5407    |
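Summing the table gives a feel for the overall training-set size, roughly 400k question-answer pairs:

```python
# Item counts copied from the training-data table (source name -> n items)
counts = {
    "ChatDoc large": 200000,
    "wikidoc": 67704,
    "Stackexchange academia": 40865,
    "Anki flashcards": 33955,
    "Stackexchange biology": 27887,
    "Stackexchange fitness": 9833,
    "Stackexchange health": 7721,
    "Wikidoc patient information": 5942,
    "Stackexchange bioinformatics": 5407,
}

total = sum(counts.values())
print(total)  # 399314
```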
## Model Usage

To evaluate the model's performance on a specific dataset, you can use the Hugging Face Transformers library's built-in evaluation scripts; please refer to the evaluation guide for more information.

### Inference

You can use the model for inference tasks such as question answering and medical dialogue with the Hugging Face Transformers library. Here is an example of using the model for a question-answering task:

```python
from transformers import pipeline

# Build a text-generation pipeline with the medalpaca-13b weights and tokenizer
pl = pipeline("text-generation", model="medalpaca/medalpaca-13b", tokenizer="medalpaca/medalpaca-13b")

question = "What are the symptoms of diabetes?"
context = "Diabetes is a metabolic disease that causes high blood sugar. The symptoms include increased thirst, frequent urination, and unexplained weight loss."

# Prompt format: context, then question, then an "Answer:" cue for the model to complete
answer = pl(f"Context: {context}\n\nQuestion: {question}\n\nAnswer: ")
print(answer)
```

## Limitations

The model may not perform effectively outside the scope of the medical domain. The training data primarily targets the knowledge level of medical students, which may limit its usefulness for board-certified physicians. The model has not been tested in real-world applications, so its efficacy and accuracy are currently unknown. It should never be used as a substitute for a doctor's opinion and must be treated as a research tool only.
3
added_tokens.json
Normal file
@@ -0,0 +1,3 @@
{
  "[PAD]": 32000
}
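The single added token is consistent with the model configuration: assuming the base LLaMA tokenizer covers ids 0 through 31999 (an assumption, not stated in the file), appending `[PAD]` at id 32000 yields the 32,001-entry vocabulary declared in `config.json`:

```python
added_tokens = {"[PAD]": 32000}   # contents of added_tokens.json
base_vocab = 32000                # assumed size of the base LLaMA tokenizer

# New token ids are appended directly after the base vocabulary
vocab_size = base_vocab + len(added_tokens)
print(vocab_size)  # 32001 -- matches "vocab_size" in config.json
```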
23
config.json
Normal file
@@ -0,0 +1,23 @@
{
  "_name_or_path": "decapoda-research/llama-13b-hf",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 0,
  "eos_token_id": 1,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 13824,
  "max_sequence_length": 2048,
  "model_type": "llama",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
  "pad_token_id": -1,
  "rms_norm_eps": 1e-06,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.28.0.dev0",
  "use_cache": true,
  "vocab_size": 32001
}
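These hyperparameters pin down the model size. Assuming the standard LLaMA layer layout (q/k/v/o attention projections, gate/up/down MLP projections, and two RMSNorm weights per layer), a quick sanity check reproduces the ~13B parameter count and the float32 checkpoint size recorded in the shard index:

```python
# Values from config.json
vocab, d, d_ff, n_layers, n_heads = 32001, 5120, 13824, 40, 40

attn  = 4 * d * d      # q_proj, k_proj, v_proj, o_proj
mlp   = 3 * d * d_ff   # gate_proj, up_proj, down_proj
norms = 2 * d          # input_layernorm, post_attention_layernorm
per_layer = attn + mlp + norms

# embeddings + decoder layers + final norm + (untied) lm_head
params = vocab * d + n_layers * per_layer + d + vocab * d
print(params)          # 13015874560, i.e. ~13B

# The checkpoint also stores one rotary inv_freq buffer per layer (head_dim / 2 floats)
buffers = n_layers * (d // n_heads) // 2
print((params + buffers) * 4)  # 52063508480 float32 bytes, matching "total_size" in the index
```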
7
generation_config.json
Normal file
@@ -0,0 +1,7 @@
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.28.0.dev0"
}
3
optimizer.pt
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:457381dc6a99e1f17fd1cdd23f3c211af1043d211a48ebc3b5294575c5922146
size 13015909885
3
pytorch_model-00001-of-00006.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fcf6a31ca1333da6fd4e5d8daa0940960177cbdc743a8a0fe6f1e4586880fe93
size 9956566923
3
pytorch_model-00002-of-00006.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a4261d046acdb2c4a3d4008c0688accf4fd22cf51557deaae37098cde81c1904
size 9940859009
3
pytorch_model-00003-of-00006.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4a21ee30f9b45ed966e22c32f0e09baf6e1414d60d0a4e873a0a07dc3acbbc2b
size 9940859567
3
pytorch_model-00004-of-00006.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7bf930b3c11c614ed478ae50df6ae39afb45fc2c0fb91cb70a3805632fda216f
size 9867417913
3
pytorch_model-00005-of-00006.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ceded3719cfc1bbce8d5538b6db63a964fc08e90d1f6aab4f87899f787893544
size 9867459649
3
pytorch_model-00006-of-00006.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a96e68cc72ac38fa54944bf6415618f899d7925f5ce64f0e65788d89b1a2b817
size 2490497199
410
pytorch_model.bin.index.json
Normal file
@@ -0,0 +1,410 @@
{
  "metadata": {
    "total_size": 52063508480
  },
  "weight_map": {
    "lm_head.weight": "pytorch_model-00006-of-00006.bin",
    "model.embed_tokens.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.10.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.15.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.15.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.15.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.15.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.20.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.23.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.30.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.30.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.30.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.30.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.30.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.30.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.30.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.30.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.30.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.30.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.31.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.34.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.35.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.35.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.35.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.35.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.35.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.35.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.35.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.35.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.35.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.35.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.36.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.36.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.36.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.36.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.36.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.36.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.36.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.36.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.36.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.36.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.37.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.37.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.37.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.37.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.37.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.37.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.37.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.37.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.37.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.37.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.38.input_layernorm.weight": "pytorch_model-00006-of-00006.bin",
|
||||
"model.layers.38.mlp.down_proj.weight": "pytorch_model-00006-of-00006.bin",
|
||||
"model.layers.38.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.38.mlp.up_proj.weight": "pytorch_model-00006-of-00006.bin",
|
||||
"model.layers.38.post_attention_layernorm.weight": "pytorch_model-00006-of-00006.bin",
|
||||
"model.layers.38.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.38.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.38.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.38.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.38.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
|
||||
"model.layers.39.input_layernorm.weight": "pytorch_model-00006-of-00006.bin",
|
||||
"model.layers.39.mlp.down_proj.weight": "pytorch_model-00006-of-00006.bin",
|
||||
"model.layers.39.mlp.gate_proj.weight": "pytorch_model-00006-of-00006.bin",
|
||||
"model.layers.39.mlp.up_proj.weight": "pytorch_model-00006-of-00006.bin",
|
||||
"model.layers.39.post_attention_layernorm.weight": "pytorch_model-00006-of-00006.bin",
|
||||
"model.layers.39.self_attn.k_proj.weight": "pytorch_model-00006-of-00006.bin",
|
||||
"model.layers.39.self_attn.o_proj.weight": "pytorch_model-00006-of-00006.bin",
|
||||
"model.layers.39.self_attn.q_proj.weight": "pytorch_model-00006-of-00006.bin",
|
||||
"model.layers.39.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00006.bin",
|
||||
"model.layers.39.self_attn.v_proj.weight": "pytorch_model-00006-of-00006.bin",
|
||||
"model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.4.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.5.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.6.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.7.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.7.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.7.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.7.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.7.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.7.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
|
||||
"model.layers.8.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.8.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.8.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.8.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.8.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.8.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.8.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.8.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.8.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.8.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.9.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.9.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.9.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.9.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.9.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.9.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.9.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.9.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.9.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
|
||||
"model.layers.9.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
|
||||
"model.norm.weight": "pytorch_model-00006-of-00006.bin"
|
||||
}
|
||||
}
|
||||
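The weight map above is the standard sharded-checkpoint index format: each tensor name points at the one `pytorch_model-0000N-of-00006.bin` shard that stores it, so a loader only opens the shards it actually needs. A minimal sketch of resolving such an index (the two-entry map and helper names below are illustrative, not part of this repository):

```python
import json

# Stand-in for the "weight_map" section of pytorch_model.bin.index.json;
# real indexes (like the one above) list every tensor in the model.
index = json.loads("""
{
  "weight_map": {
    "model.layers.30.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.norm.weight": "pytorch_model-00006-of-00006.bin"
  }
}
""")

def shard_for(tensor_name: str, index: dict) -> str:
    """Return the shard file that stores a given tensor."""
    return index["weight_map"][tensor_name]

def shards_needed(tensor_names, index: dict) -> list:
    """Distinct shard files required to materialize a set of tensors."""
    return sorted({shard_for(name, index) for name in tensor_names})

print(shard_for("model.norm.weight", index))
print(shards_needed(index["weight_map"], index))
```

A full loader would then `torch.load()` only the shards returned by `shards_needed` and merge their state dicts; `transformers` performs the same lookup internally when given a directory containing this index file.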
3
rng_state_0.pth
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d62c2faa2a0c702a05784e06a2dc13acbb592d8156e735778d8c5b41f234842
size 14583
3
rng_state_1.pth
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2398ee347ca289c59a436c0b85e5a64239cbb82090f1ffef87d4ed8f05730229
size 14583
3
rng_state_2.pth
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:adcbd518ba325583f67f4fdeb1ea2c1df84536a5f4c78e26f15c48cfe1a3c120
size 14583
3
rng_state_3.pth
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c7ecae2f5d69ee500eb1409426b96a1f5039020414ef747273e4923e3e57f4f2
size 14583
3
rng_state_4.pth
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3be2dd5399ec85dffba8e211b089258c085d1da1161549b4b0b3e4abf171538f
size 14583
3
rng_state_5.pth
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7f0c2a4df23237c28f78364924f4b3c486e2968e0145adf3dc99aad28c5306e3
size 14583
3
rng_state_6.pth
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:790750b486df0b06c519f909088a1ed3ab8b4618729bfcd16109f5a20e3f1f62
size 14583
3
rng_state_7.pth
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8961a7b9bb1971edf7f4bb594e8bf61efcb70c0686681c9cad59dcab251dd258
size 14583
3
scheduler.pt
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5bf4dd2f6ff53df191c7944cf6c398e7106f097da867db2fc7b2f29cdb5538b3
size 627
6
special_tokens_map.json
Normal file
@@ -0,0 +1,6 @@
{
"bos_token": "</s>",
"eos_token": "</s>",
"pad_token": "[PAD]",
"unk_token": "</s>"
}
3
tokenizer.model
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723
10
tokenizer_config.json
Normal file
@@ -0,0 +1,10 @@
{
"bos_token": "</s>",
"eos_token": "</s>",
"model_max_length": 512,
"padding_side": "right",
"special_tokens_map_file": "/sc-projects/sc-proj-cc06-medbert/hfcache/hub/models--decapoda-research--llama-13b-hf/snapshots/438770a656712a5072229b62256521845d4de5ce/special_tokens_map.json",
"tokenizer_class": "LlamaTokenizer",
"unk_token": "</s>",
"pad_token": "[PAD]"
}
24016
trainer_state.json
Normal file
File diff suppressed because it is too large
3
training_args.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c108afc9e444cf4e054dd0cb24044a89bafc21dfc52acd0f9980f39bf32bcc0a
size 3771