Initialize the project; model provided by the ModelHub XC community
Model: teknium/OpenHermes-7B Source: Original Platform
51 .gitattributes vendored Normal file
@@ -0,0 +1,51 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
pytorch_model-00002-of-00002.bin filter=lfs diff=lfs merge=lfs -text
tokenizer.model filter=lfs diff=lfs merge=lfs -text
pytorch_model-00001-of-00002.bin filter=lfs diff=lfs merge=lfs -text
131 README.md Normal file
@@ -0,0 +1,131 @@
---
base_model: NousResearch/Llama-2-7b-hf
tags:
- llama-2
- instruct
- finetune
- alpaca
- gpt4
- synthetic data
- distillation
datasets:
- teknium/openhermes
model-index:
- name: openhermes-7b
  results: []
license: mit
language:
- en
---

# OpenHermes-7B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ox7zGoygsJQFFV3rLT4v9.png)
## Model description

OpenHermes 7B is the first fine-tune in the Hermes series to be trained on a fully open-source dataset!

What is unique about this 7B model is that it used sample packing, which speeds up training many times over when the average tokenized example is much shorter than the maximum sequence length.
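
A minimal sketch of the idea behind sample packing, assuming plain Python lists of token ids (the function and names here are illustrative, not taken from the actual training code):

```python
def pack_samples(examples, max_len, eos_id):
    """Greedily concatenate tokenized examples, each terminated by EOS,
    into blocks of at most max_len tokens, so little compute is spent on
    padding. Oversized examples are truncated to max_len."""
    blocks, current = [], []
    for tokens in examples:
        candidate = (tokens + [eos_id])[:max_len]
        if len(current) + len(candidate) > max_len:
            blocks.append(current)
            current = []
        current = current + candidate
    if current:
        blocks.append(current)
    return blocks

# Three short examples pack into one dense block instead of three padded rows.
print(pack_samples([[1, 2, 3], [4, 5], [6, 7, 8, 9]], max_len=16, eos_id=0))
# -> [[1, 2, 3, 0, 4, 5, 0, 6, 7, 8, 9, 0]]
```

Training-grade implementations also mask attention across packed boundaries so examples cannot attend to one another; the sketch covers only the packing itself.
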
OpenHermes was trained on 242,000 entries of primarily GPT-4 generated data, from open datasets across the AI landscape, including:

- GPTeacher - General Instruct, Roleplay v1, Roleplay v2, and Code Instruct Datasets, by Teknium
- WizardLM (v1, evol_instruct 70k), by WizardLM Team/nlpxucan
- Airoboros GPT-4 (v1.0), by JonDurbin
- Camel-AI's domain expert datasets, by the Camel-AI Team
- CodeAlpaca, by Sahil2801
- GPT4-LLM and Unnatural Instructions, by Microsoft

Filtering included removal of OpenAI refusals, disclaimers, and "As an AI"-type examples, among other steps.
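
As an illustration of that filtering step (not the exact script used for OpenHermes; the marker list and the `response` field name are assumptions), a minimal pass might look like:

```python
REFUSAL_MARKERS = (
    "as an ai",
    "as a language model",
    "i'm sorry, but i cannot",
    "openai",
)

def keep_example(example):
    """Drop entries whose response contains common refusal/disclaimer
    boilerplate, using a case-insensitive substring match."""
    response = example["response"].lower()
    return not any(marker in response for marker in REFUSAL_MARKERS)

dataset = [
    {"response": "Sure! Here is a haiku about spring."},
    {"response": "As an AI developed by OpenAI, I cannot do that."},
]
cleaned = [ex for ex in dataset if keep_example(ex)]  # keeps only the first entry
```
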
The base dataset mix the model was trained on is identical to Nous-Hermes', minus the Nous-Instruct and PDACTL datasets, which are private.

The WandB project is public and can be examined at this link: https://wandb.ai/teknium1/openhermes/runs/openhermes-v2-qlora-7b-packed

Huge thank you to [main_horse](https://twitter.com/main_horse) for compute access, to a16z for sponsoring my work, and to all the dataset creators and other people whose work has contributed to this project!

## Benchmark Information

## Benchmark Results

GPT-4All Benchmark Set
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4727|± |0.0146|
| | |acc_norm|0.4957|± |0.0146|
|arc_easy | 0|acc |0.7862|± |0.0084|
| | |acc_norm|0.7643|± |0.0087|
|boolq | 1|acc |0.7801|± |0.0072|
|hellaswag | 0|acc |0.5789|± |0.0049|
| | |acc_norm|0.7654|± |0.0042|
|openbookqa | 0|acc |0.3480|± |0.0213|
| | |acc_norm|0.4500|± |0.0223|
|piqa | 0|acc |0.7867|± |0.0096|
| | |acc_norm|0.7938|± |0.0094|
|winogrande | 0|acc |0.7048|± |0.0128|

Average: 0.679
```

BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5000|± |0.0364|
|bigbench_date_understanding | 0|multiple_choice_grade|0.5908|± |0.0256|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3023|± |0.0286|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.1003|± |0.0159|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2520|± |0.0194|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.1871|± |0.0148|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.3833|± |0.0281|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.2500|± |0.0194|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.4370|± |0.0111|
|bigbench_ruin_names | 0|multiple_choice_grade|0.2679|± |0.0209|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2495|± |0.0137|
|bigbench_snarks | 0|multiple_choice_grade|0.5249|± |0.0372|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5406|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.2470|± |0.0136|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.1944|± |0.0112|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1509|± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.3833|± |0.0281|
Average: 0.3367
```

AGI Eval
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2441|± |0.0270|
| | |acc_norm|0.2402|± |0.0269|
|agieval_logiqa_en | 0|acc |0.2458|± |0.0169|
| | |acc_norm|0.2965|± |0.0179|
|agieval_lsat_ar | 0|acc |0.2522|± |0.0287|
| | |acc_norm|0.2130|± |0.0271|
|agieval_lsat_lr | 0|acc |0.2745|± |0.0198|
| | |acc_norm|0.2686|± |0.0196|
|agieval_lsat_rc | 0|acc |0.2900|± |0.0277|
| | |acc_norm|0.2379|± |0.0260|
|agieval_sat_en | 0|acc |0.4466|± |0.0347|
| | |acc_norm|0.3738|± |0.0338|
|agieval_sat_en_without_passage| 0|acc |0.3738|± |0.0338|
| | |acc_norm|0.3301|± |0.0328|
|agieval_sat_math | 0|acc |0.2318|± |0.0285|
| | |acc_norm|0.1864|± |0.0263|
Average: 0.2683
```

TruthfulQA:
```
hf-causal-experimental (pretrained=teknium/OpenHermes-7B,dtype=float16), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc2 |0.4542|± |0.0148|
```
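
The line at the top of the TruthfulQA block records the evaluation-harness configuration used for these runs. A sketch of reproducing that run through the Python API of EleutherAI's lm-evaluation-harness (version-sensitive: `simple_evaluate` and the `hf-causal-experimental` model type exist in the 0.3.x harness, but check argument names against your installed version):

```python
from lm_eval import evaluator

# Mirrors the recorded config: fp16 weights, zero-shot, batch size 8.
results = evaluator.simple_evaluate(
    model="hf-causal-experimental",
    model_args="pretrained=teknium/OpenHermes-7B,dtype=float16",
    tasks=["truthfulqa_mc"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"]["truthfulqa_mc"])  # expects mc1/mc2 scores
```
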
## Training procedure

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/xCuDKYGLr5JratQnv1rvk.png)
27 config.json Normal file
@@ -0,0 +1,27 @@
{
  "_name_or_path": "NousResearch/Llama-2-7b-hf",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "pad_token_id": 0,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.34.0.dev0",
  "use_cache": false,
  "vocab_size": 32000
}
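
Given this config (a standard Llama-2 7B layout: 32 layers, hidden size 4096, fp16 weights), the checkpoint loads with the stock transformers classes. A minimal sketch, assuming transformers, torch, and accelerate are installed (the prompt format shown is an assumption, not documented in this commit):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "teknium/OpenHermes-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches "torch_dtype": "float16" above
    device_map="auto",          # requires accelerate
)

# Alpaca-style prompt; verify the exact format against the model card.
prompt = "### Instruction:\nExplain sample packing briefly.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
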
1 configuration.json Normal file
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}
3 pytorch_model-00001-of-00002.bin Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:741f0d22735a4ac4a8174a300e68950f91bdaca2aa2932dc62d82fa4d4dca62a
size 9976623130
3 pytorch_model-00002-of-00002.bin Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8e5e51b4089ffc9e57979a9bfefbd7e54954b7f622f4f802d9f6dfa90ebb27ca
size 3500311811
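
The two .bin entries above are Git LFS pointer files, not the weights themselves: each records the SHA-256 object id and byte size of a shard held in LFS storage. A small sketch of reading such a pointer (the parser is illustrative, not part of git-lfs):

```python
from pathlib import Path

def parse_lfs_pointer(path):
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in Path(path).read_text().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

ptr = parse_lfs_pointer("pytorch_model-00002-of-00002.bin")
print(ptr["oid"], ptr["size"])  # sha256:8e5e51b4... 3500311811
# After `git lfs pull`, hashlib.sha256(file_bytes).hexdigest() should equal
# the hex part of ptr["oid"], and the on-disk size should match ptr["size"].
```
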
298 pytorch_model.bin.index.json Normal file
@@ -0,0 +1,298 @@
{
  "metadata": {
    "total_size": 13476831232
  },
  "weight_map": {
    "lm_head.weight": "pytorch_model-00002-of-00002.bin",
    "model.embed_tokens.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.23.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.23.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.23.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.24.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.25.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.26.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.27.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.28.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.28.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.28.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.28.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.28.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.28.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.28.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.28.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.28.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.29.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.29.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.29.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.29.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.29.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.29.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.29.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.29.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.29.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.30.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.30.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.30.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.30.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.30.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.30.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.30.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.30.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.30.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.31.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.31.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.31.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.31.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.31.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.31.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.31.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.31.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.31.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.norm.weight": "pytorch_model-00002-of-00002.bin"
  }
}
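
The index maps every parameter name to the shard that stores it, and metadata.total_size (13,476,831,232 bytes, roughly 6.7B fp16 parameters) is the total parameter byte count; the .bin files are slightly larger due to serialization overhead. A sketch of using the map to fetch a single tensor without loading the other shard (names are taken from the index; loading the whole shard with torch.load is shown for simplicity):

```python
import json
import torch

with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

name = "model.layers.24.mlp.down_proj.weight"
shard_file = index["weight_map"][name]  # -> "pytorch_model-00002-of-00002.bin"
shard = torch.load(shard_file, map_location="cpu")
print(shard[name].shape)  # torch.Size([4096, 11008]) per the config above
```
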
6 special_tokens_map.json Normal file
@@ -0,0 +1,6 @@
{
  "bos_token": "<s>",
  "eos_token": "</s>",
  "pad_token": "<unk>",
  "unk_token": "<unk>"
}
3 tokenizer.model Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723
38 tokenizer_config.json Normal file
@@ -0,0 +1,38 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "bos_token": {
    "__type": "AddedToken",
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "clean_up_tokenization_spaces": false,
  "eos_token": {
    "__type": "AddedToken",
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "legacy": false,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "trust_remote_code": false,
  "unk_token": {
    "__type": "AddedToken",
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "use_default_system_prompt": true,
  "use_fast": true
}
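
Per this config the tokenizer prepends BOS but never appends EOS; no pad token is set here (special_tokens_map.json reuses <unk> for padding), and the enormous model_max_length is the transformers sentinel for "unset". A quick sanity-check sketch:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("teknium/OpenHermes-7B")
ids = tok("hello world").input_ids
print(ids[0] == tok.bos_token_id)   # True: "add_bos_token" is true
print(ids[-1] == tok.eos_token_id)  # False: "add_eos_token" is false
```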