Initialize the repository; model provided by the ModelHub XC community
Model: lmsys/vicuna-13b-delta-v1.1 (Source: Original Platform)
.gitattributes (vendored, new file, 52 lines)
@@ -0,0 +1,52 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text


*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text

*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

pytorch_model-00001-of-00003.bin filter=lfs diff=lfs merge=lfs -text
pytorch_model-00002-of-00003.bin filter=lfs diff=lfs merge=lfs -text
pytorch_model-00003-of-00003.bin filter=lfs diff=lfs merge=lfs -text
tokenizer.model filter=lfs diff=lfs merge=lfs -text
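Every path matching one of these patterns is stored as a small Git LFS pointer (version/oid/size) rather than as the blob itself; the pointer files for the three weight shards and tokenizer.model appear further down in this commit. A rough sketch of the matching, where `fnmatch` only approximates gitattributes glob semantics and the file list is illustrative:

```python
# Rough sketch: which files these rules would send to LFS.
# fnmatch approximates gitattributes glob semantics; the file list is made up.
from fnmatch import fnmatch

lfs_patterns = ["*.safetensors", "*.ckpt", "tokenizer.model",
                "pytorch_model-00001-of-00003.bin",
                "pytorch_model-00002-of-00003.bin",
                "pytorch_model-00003-of-00003.bin"]

for f in ["config.json", "tokenizer.model", "pytorch_model-00002-of-00003.bin"]:
    tracked = any(fnmatch(f, p) for p in lfs_patterns)
    print(f, "->", "LFS pointer" if tracked else "plain git blob")
```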
README.md (new file, 53 lines)
@@ -0,0 +1,53 @@
---
inference: false
---

**NOTE: New version available**

Please check out a newer version of the weights [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).

**NOTE: This "delta model" cannot be used directly.**

Users have to apply it on top of the original LLaMA weights to get actual Vicuna weights. See [instructions](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md#how-to-apply-delta-weights-for-weights-v11-and-v0); the sketch below shows the gist of the merge.
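For orientation, the merge those instructions perform boils down to an elementwise add over matching parameter tensors. A minimal sketch, assuming placeholder local paths and enough RAM for two 13B fp16 models; the official, memory-friendly implementation is FastChat's `fastchat.model.apply_delta`:

```python
# Minimal sketch of the delta merge (illustrative only; paths are placeholders).
# The official, tested implementation is `python3 -m fastchat.model.apply_delta`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "/path/to/llama-13b", torch_dtype=torch.float16)
delta = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-13b-delta-v1.1", torch_dtype=torch.float16)

# Vicuna weight = LLaMA weight + delta weight, tensor by tensor
# (assumes the delta was built as vicuna - llama over the full state dict,
# which is what FastChat's make_delta does).
delta_state = delta.state_dict()
for name, param in base.state_dict().items():
    param.data += delta_state[name]

base.save_pretrained("/path/to/vicuna-13b-v1.1")
AutoTokenizer.from_pretrained("lmsys/vicuna-13b-delta-v1.1").save_pretrained(
    "/path/to/vicuna-13b-v1.1")
```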
<br>
<br>

# Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
A plain `transformers` sketch is shown below.
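Once the delta has been merged, the resulting weights also load as an ordinary Hugging Face causal LM. A minimal sketch, assuming a placeholder path for the merged model and hand-rolling the v1.1 USER/ASSISTANT prompt template that FastChat's CLI would otherwise build for you:

```python
# Minimal sketch: run the *merged* weights (not this delta) with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "/path/to/vicuna-13b-v1.1"  # placeholder: merged weights, not the delta
tok = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype="auto", device_map="auto")  # device_map needs accelerate

# Vicuna v1.1 conversation template.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: What is Vicuna? ASSISTANT:"
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```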
## Training Details

Vicuna v1.1 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 70K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).

## Difference between different versions of Vicuna

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
config.json (new file, 23 lines)
@@ -0,0 +1,23 @@
{
  "_name_or_path": "/home/ubuntu/model_weights/vicuna-13b-v1.1-fp16/",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 13824,
  "max_position_embeddings": 2048,
  "model_type": "llama",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
  "pad_token_id": 0,
  "rms_norm_eps": 1e-06,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.28.0.dev0",
  "use_cache": true,
  "vocab_size": 32000
}
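As a quick sanity check (not part of the repo), these hyperparameters pin down the parameter count, and at two bytes per fp16 weight they also predict the ~26 GB recorded in pytorch_model.bin.index.json below:

```python
# Back-of-the-envelope check of the "13b" in the model name,
# using only values from config.json above.
hidden, inter, layers, vocab = 5120, 13824, 40, 32000

embed   = vocab * hidden        # input embedding table
lm_head = vocab * hidden        # separate output head (tie_word_embeddings: false)
attn    = 4 * hidden * hidden   # q/k/v/o projections per layer
mlp     = 3 * hidden * inter    # gate/up/down projections per layer
norms   = 2 * hidden            # two RMSNorm weight vectors per layer

total = embed + lm_head + layers * (attn + mlp + norms) + hidden  # + final norm
print(f"{total/1e9:.2f}B params, {2*total/1e9:.2f} GB in fp16")
# -> 13.02B params, 26.03 GB: in line with the index's total_size below.
```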
configuration.json (new file, 1 line)
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}
generation_config.json (new file, 7 lines)
@@ -0,0 +1,7 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "pad_token_id": 0,
  "transformers_version": "4.28.0.dev0"
}
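These ids follow the LLaMA convention fixed by the tokenizer files below (`<unk>` = 0, `<s>` = 1, `</s>` = 2). A short sketch, with a placeholder path, to confirm what `generate()` will pick up:

```python
# Sketch: generation_config.json is what transformers reads into GenerationConfig.
from transformers import GenerationConfig

gen = GenerationConfig.from_pretrained("/path/to/vicuna-13b-v1.1")  # placeholder
print(gen.bos_token_id, gen.eos_token_id, gen.pad_token_id)  # -> 1 2 0
```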
pytorch_model-00001-of-00003.bin (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f770d881054b125ab670f820f11c7efd34a13ba9131ee8d5ed819efe9536d5f5
size 9948728430
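This is a Git LFS pointer, not the ~9.9 GB shard itself: `oid` is the SHA-256 of the actual blob and `size` its byte length. A small sketch for verifying a downloaded shard against its pointer:

```python
# Sketch: verify a downloaded shard against the oid in its LFS pointer.
import hashlib

def sha256sum(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

expected = "f770d881054b125ab670f820f11c7efd34a13ba9131ee8d5ed819efe9536d5f5"
print(sha256sum("pytorch_model-00001-of-00003.bin") == expected)
```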
pytorch_model-00002-of-00003.bin (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:92c87a57207a58f49ec7a40f978bafa1a018f625c8564e997ea24007829392dd
size 9904165024
pytorch_model-00003-of-00003.bin (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9596032b8f4e0f35cec2b7dc00956a8e25dd2f955230e082ee3a4ada7569cf79
size 6178983625
pytorch_model.bin.index.json (new file, 410 lines)
@@ -0,0 +1,410 @@
{
  "metadata": {
    "total_size": 26031738880
  },
  "weight_map": {
    "lm_head.weight": "pytorch_model-00003-of-00003.bin",
    "model.embed_tokens.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.15.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.15.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.20.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.24.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.25.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.26.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.27.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.28.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.29.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.30.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.30.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.30.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.30.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.30.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.30.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.30.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.30.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.30.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.30.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.31.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
    "model.layers.31.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
    "model.layers.32.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
    "model.layers.33.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
    "model.layers.34.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
    "model.layers.35.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
    "model.layers.36.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
    "model.layers.37.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
    "model.layers.38.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
    "model.layers.39.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.norm.weight": "pytorch_model-00003-of-00003.bin"
  }
}
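The weight_map above is what makes sharded loading cheap: each tensor name points at the one shard that holds it, so a loader can fetch a single tensor with one shard read instead of touching all ~26 GB. A sketch of a manual lookup, assuming the files sit in the working directory:

```python
# Sketch: resolve a tensor name to its shard via the index, then load it.
import json
import torch

with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

name = "model.layers.0.self_attn.q_proj.weight"
shard = index["weight_map"][name]        # -> "pytorch_model-00001-of-00003.bin"
state = torch.load(shard, map_location="cpu")
print(shard, tuple(state[name].shape))   # (5120, 5120), per config.json
```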
special_tokens_map.json (new file, 23 lines)
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.model (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723
tokenizer_config.json (new file, 33 lines)
@@ -0,0 +1,33 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "bos_token": {
    "__type": "AddedToken",
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "clean_up_tokenization_spaces": false,
  "eos_token": {
    "__type": "AddedToken",
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "sp_model_kwargs": {},
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": {
    "__type": "AddedToken",
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
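Taken together, tokenizer.model, special_tokens_map.json, and this file define a standard LLaMA SentencePiece tokenizer: `add_bos_token: true` means every encoding starts with id 1, and the huge `model_max_length` is the "effectively unlimited" sentinel. A quick sketch, with a placeholder path:

```python
# Sketch: load the tokenizer defined by the three files above.
from transformers import LlamaTokenizer

tok = LlamaTokenizer.from_pretrained("/path/to/vicuna-13b-v1.1")  # placeholder
print(tok.bos_token, tok.eos_token, tok.unk_token)  # <s> </s> <unk>
print(tok.encode("Hello"))  # begins with 1 because add_bos_token is true
```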