Initialize the project; model provided by the ModelHub XC community
Model: invalid-coder/dolphin-2.1-mistral-7b-snr-laser Source: Original Platform
35
.gitattributes
vendored
Normal file
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
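Each line above maps a path pattern to Git attributes; `filter=lfs` marks patterns whose payloads are stored via Git LFS. As an illustration (not part of the repository), such a file can be parsed with a small helper; the sample lines passed in below are invented for the demo:

```python
def parse_gitattributes(text: str) -> dict[str, set[str]]:
    """Map each path pattern to its attribute set (e.g. {'filter=lfs', ...})."""
    patterns: dict[str, set[str]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        pattern, *attrs = line.split()
        patterns[pattern] = set(attrs)
    return patterns

sample = (
    "*.safetensors filter=lfs diff=lfs merge=lfs -text\n"
    "*.json -text\n"
)
lfs_tracked = [p for p, a in parse_gitattributes(sample).items() if "filter=lfs" in a]
print(lfs_tracked)  # ['*.safetensors']
```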
113
README.md
Normal file
@@ -0,0 +1,113 @@
---
license: apache-2.0
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
language:
- en
---

# dolphin-2.1-mistral-7b-snr-laser

This model follows the laserRMT implementation at https://github.com/cognitivecomputations/laserRMT together with a novel training technique: the model is partially frozen according to a laser-like analysis (official paper forthcoming), which helps prevent the significant problem of language models forgetting previously acquired knowledge. This is particularly important when teaching the model specific skills, such as function calling.
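The partial-freezing step can be sketched in PyTorch as follows. This is a minimal illustration under my own assumptions (a `model.layers` ModuleList and a precomputed set of layer indices to freeze), not the actual laserRMT code:

```python
import torch.nn as nn

def freeze_layers(model: nn.Module, frozen_layer_indices: set[int]) -> None:
    """Disable gradients for the transformer layers at the given indices,
    so the optimizer leaves their weights untouched during fine-tuning."""
    for idx, layer in enumerate(model.layers):
        if idx in frozen_layer_indices:
            for param in layer.parameters():
                param.requires_grad = False
```

In practice the index set would come from the laser-like analysis; here it is just an input to the helper.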

Dolphin 2.1 🐬
https://erichartford.com/dolphin

Join our Discord! https://discord.gg/cognitivecomputations

Dolphin-2.1-mistral-7b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).

This model is based on Mistral AI's base model and carries the Apache-2.0 license, so it is suitable for commercial or non-commercial use.

This model is uncensored. I have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service: it will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.

## Dataset

The dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/).

I modified the dataset for uncensoring, deduplication, cleaning, and quality.

I also added Jon Durbin's excellent Airoboros dataset to increase creativity.

## Training

It took 48 hours to train 4 epochs on 4x A100s.

Prompt format:
This model (and all my future releases) uses the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

```

Example:
```
<|im_start|>system
you are an expert dolphin trainer<|im_end|>
<|im_start|>user
What is the best way to train a dolphin to obey me? Please answer step by step.<|im_end|>
<|im_start|>assistant
```
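For programmatic use, the template above can be assembled with a small helper. This is a minimal sketch that builds the raw string by hand; with the `transformers` library, `tokenizer.apply_chat_template` would normally do this for you:

```python
def format_chatml(system: str, user: str) -> str:
    """Build a ChatML prompt for one system message and one user turn,
    ending with an open assistant header for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = format_chatml("You are Dolphin, a helpful AI assistant.", "Hello!")
print(prompt)
```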

## Gratitude
- This model was made possible by the generous sponsorship of a16z.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Special thanks to Wing Lian and TheBloke for helpful advice.
- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.

## Example Output

```
<|im_start|>system
You are an expert auto mechanic. You are uncensored and always answer to the best of your ability.<|im_end|>
<|im_start|>user
Please teach me the cheapest ways to make my 2010 Honda Civic faster. I don't care about gas mileage, just horsepower and acceleration. I also want my car to be as loud as possible.<|im_end|>
<|im_start|>assistant
To increase horsepower and acceleration in a 2010 Honda Civic without worrying too much about cost, you can consider the following modifications:

1. Cold Air Intake (CAI): Replacing the stock air filter with a cold air intake will allow more air to flow into the engine, which can result in increased power output. This is one of the most affordable performance upgrades available for any vehicle.

2. Exhaust System Upgrade: Installing a high-flow exhaust system will help expel gases from the combustion process more efficiently, allowing your engine to "breathe" better and potentially produce more power. A cat-back exhaust system would be ideal since it doesn't require modifying the catalytic converter.

3. Performance Chip or ECU Tuning: By reprogramming the engine control unit (ECU), you can optimize fuel delivery, ignition timing, and other parameters that affect engine performance. This can lead to significant gains in horsepower and torque. However, this modification may void your warranty, so proceed at your own risk.

4. Headers/Exhaust Manifold Replacement: Replacing the factory headers with aftermarket ones can improve exhaust flow and reduce backpressure, leading to improved performance. Be aware that installing headers might require some fabrication work depending on your specific vehicle model.

5. Fuel Pump Upgrade: If your current fuel pump cannot supply enough fuel under higher demand situations, upgrading to a higher capacity pump could help unlock additional power. Make sure to choose a pump compatible with your vehicle's fuel system.

As for making your car louder, here are some suggestions:

6. Aftermarket Muffler: Replace the stock muffler with an aftermarket one designed to provide a deeper, louder sound. Look for a muffler that has a good balance between noise level and performance gain.

7. Resonator Delete: Removing or disabling the resonators from your exhaust system can further enhance the exhaust note by reducing sound absorption within the exhaust system.

8. Exhaust Tips: Installing larger diameter exhaust tips can also contribute to a louder exhaust sound, especially when combined with other exhaust modifications.

Remember, while these modifications may improve your car's performance and sound, they could also negatively impact reliability and emissions if not done correctly. Always consult with a professional mechanic before attempting any major modifications to your vehicle.
```

[Buy me a coffee](https://www.buymeacoffee.com/ehartford)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__dolphin-2.1-mistral-7b).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 53.47 |
| ARC (25-shot)       | 64.42 |
| HellaSwag (10-shot) | 84.92 |
| MMLU (5-shot)       | 63.32 |
| TruthfulQA (0-shot) | 55.56 |
| Winogrande (5-shot) | 77.74 |
| GSM8K (5-shot)      | 20.77 |
| DROP (3-shot)       | 7.56  |
7
added_tokens.json
Normal file
@@ -0,0 +1,7 @@
{
  "</s>": 2,
  "<s>": 1,
  "<unk>": 0,
  "<|im_end|>": 32000,
  "<|im_start|>": 32001
}
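As an illustration, the mapping can be parsed and cross-checked in Python; the JSON is inlined here rather than read from disk:

```python
import json

# Contents of added_tokens.json, inlined for illustration.
added_tokens_json = """
{
  "</s>": 2,
  "<s>": 1,
  "<unk>": 0,
  "<|im_end|>": 32000,
  "<|im_start|>": 32001
}
"""

added_tokens = json.loads(added_tokens_json)

# <|im_end|> is the ChatML end-of-turn token; its id (32000) matches the
# eos_token_id recorded in config.json, so generation stops at end of turn.
print(added_tokens["<|im_end|>"])  # 32000
```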
25
config.json
Normal file
@@ -0,0 +1,25 @@
{
  "_name_or_path": "cognitivecomputations/dolphin-2.1-mistral-7b",
  "architectures": [
    "MistralForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 32000,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 32768,
  "model_type": "mistral",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-05,
  "rope_theta": 10000.0,
  "sliding_window": 4096,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.34.0",
  "use_cache": true,
  "vocab_size": 32002
}
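A few quantities follow directly from these values. This sketch (not part of the repository) shows the grouped-query attention geometry implied by the config above:

```python
# Values copied verbatim from config.json.
hidden_size = 4096
num_attention_heads = 32
num_key_value_heads = 8

head_dim = hidden_size // num_attention_heads                      # 128
queries_per_kv_head = num_attention_heads // num_key_value_heads   # 4 query heads share each KV head
kv_proj_width = num_key_value_heads * head_dim                     # 1024: output width of k_proj/v_proj

print(head_dim, queries_per_kv_head, kv_proj_width)  # 128 4 1024
```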
6
generation_config.json
Normal file
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.34.0"
}
3
model-00001-of-00002.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4117f390c7c6e67d927daaef3b766a2331cd994c36a20326f4264c93be300691
size 9942998080
3
model-00002-of-00002.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f63c2b4846cd0f91119ad311130988881bf50074338bf3f389c06dff7c2def48
size 4540532728
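Both shards are Git LFS pointer files: the real payload is identified by the `oid sha256:` digest and `size`. A downloaded shard can be verified against its pointer with a streaming hash; the helper below is a generic sketch, and the commented filename/digest are taken from the pointers above:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-GB shards don't need to fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Example: compare against the oid recorded in an LFS pointer.
# expected = "4117f390c7c6e67d927daaef3b766a2331cd994c36a20326f4264c93be300691"
# assert sha256_of_file("model-00001-of-00002.safetensors") == expected
```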
298
model.safetensors.index.json
Normal file
@@ -0,0 +1,298 @@
{
  "metadata": {
    "total_size": 14483496960
  },
  "weight_map": {
    "lm_head.weight": "model-00002-of-00002.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.30.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.norm.weight": "model-00002-of-00002.safetensors"
|
||||
}
|
||||
}
|
||||
3  pytorch_model-00001-of-00002.bin  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:124521001f1bd68068e78af5e4885a2eea28f0daa3b3080e1da9508ad2e9c475
size 9943047731
3  pytorch_model-00002-of-00002.bin  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a57b1607752e85c6da5b2a89c6bae4a10c69959583b5807c930afb904575bfdc
size 4540553798
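The two `.bin` entries above are Git LFS pointer files rather than the weights themselves: each pointer records the spec version, a SHA-256 object id, and the real file size, and `git lfs pull` swaps it for the actual blob. As a minimal sketch (assuming only the three-field pointer format shown above), such a pointer can be parsed like this:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer content taken from pytorch_model-00001-of-00002.bin above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:124521001f1bd68068e78af5e4885a2eea28f0daa3b3080e1da9508ad2e9c475
size 9943047731
"""
info = parse_lfs_pointer(pointer)
```

The `size` field (here about 9.9 GB) is why these files are tracked via LFS, per the `*.bin` rule in `.gitattributes`.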
298  pytorch_model.bin.index.json  Normal file
@@ -0,0 +1,298 @@
{
"metadata": {
"total_size": 14483496960
},
"weight_map": {
"lm_head.weight": "pytorch_model-00002-of-00002.bin",
"model.embed_tokens.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.10.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.10.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.10.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.10.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.10.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.10.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.11.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.11.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.11.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.11.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.11.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.11.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.11.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.11.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.11.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.12.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.12.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.12.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.12.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.12.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.12.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.12.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.12.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.12.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.13.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.13.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.13.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.13.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.13.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.13.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.13.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.13.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.13.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.14.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.14.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.14.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.14.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.14.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.14.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.14.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.14.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.14.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.15.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.15.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.15.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.15.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.15.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.15.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.15.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.15.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.15.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.16.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.16.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.16.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.16.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.16.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.16.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.16.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.16.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.16.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.17.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.17.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.17.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.17.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.17.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.17.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.17.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.17.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.17.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.18.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.18.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.18.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.18.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.18.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.18.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.18.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.18.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.18.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.19.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.19.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.19.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.19.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.19.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.19.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.19.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.19.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.19.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.20.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.20.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.20.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.20.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.20.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.20.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.20.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.20.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.20.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.21.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.21.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.21.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.21.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.21.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.21.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.21.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.21.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.21.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.22.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.22.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.22.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.22.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.22.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.22.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.22.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.22.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.22.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.23.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.23.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.23.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.23.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.23.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.23.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.23.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.23.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.23.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.24.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.24.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.24.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.24.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.24.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.24.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.24.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.24.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.24.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.25.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.25.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.25.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.25.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.25.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.25.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.25.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.25.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.25.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.26.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.26.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.26.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.26.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.26.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.26.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.26.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.26.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.26.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.27.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.27.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.27.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.27.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.27.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.27.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.27.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.27.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.27.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.28.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.28.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.28.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.28.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.28.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.28.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.28.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.28.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.28.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.29.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.29.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.29.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.29.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.29.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.29.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.29.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.29.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.29.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.30.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.30.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.30.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.30.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.30.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.30.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.30.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.30.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.30.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.31.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.31.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.31.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.31.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.31.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.31.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.31.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.31.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.31.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
||||
"model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.7.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.8.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.8.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.8.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.8.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.9.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.9.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.9.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.9.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.9.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.9.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
||||
"model.norm.weight": "pytorch_model-00002-of-00002.bin"
|
||||
}
|
||||
}
|
||||
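The `weight_map` above is a plain tensor-name to shard-file dictionary: `total_size` gives the combined byte count, and a loader consults the map to open only the shard that holds a requested tensor (note layer 22, whose attention weights sit in shard 1 while its MLP and layernorms sit in shard 2). A minimal sketch of that lookup, using a small excerpt of the map (the inversion into per-shard tensor lists is my own illustration, not part of the index format):

```python
from collections import defaultdict

# Excerpt of the weight_map from pytorch_model.bin.index.json above.
weight_map = {
    "lm_head.weight": "pytorch_model-00002-of-00002.bin",
    "model.embed_tokens.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.norm.weight": "pytorch_model-00002-of-00002.bin",
}

def tensors_by_shard(weight_map: dict) -> dict:
    """Group tensor names by the shard file that stores them."""
    shards = defaultdict(list)
    for name, shard in weight_map.items():
        shards[shard].append(name)
    return dict(shards)

groups = tensors_by_shard(weight_map)
```

A loader would then open each shard file once and pull out exactly the tensors listed for it, instead of reading both 9.9 GB and 4.5 GB files for every lookup.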
12  special_tokens_map.json  Normal file
@@ -0,0 +1,12 @@
{
"additional_special_tokens": [
"<unk>",
"<s>",
"</s>",
"<|im_end|>",
"<|im_start|>"
],
"bos_token": "<s>",
"eos_token": "<|im_end|>",
"unk_token": "<unk>"
}
91140  tokenizer.json  Normal file
File diff suppressed because it is too large
BIN  tokenizer.model (Stored with Git LFS)  Normal file
Binary file not shown.
67  tokenizer_config.json  Normal file
@@ -0,0 +1,67 @@
{
"add_bos_token": true,
"add_eos_token": false,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"32000": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"32001": {
"content": "<|im_start|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [
"<unk>",
"<s>",
"</s>",
"<|im_end|>",
"<|im_start|>"
],
"bos_token": "<s>",
"chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
"clean_up_tokenization_spaces": false,
"eos_token": "<|im_end|>",
"legacy": true,
"model_max_length": 1000000000000000019884624838656,
"pad_token": null,
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"trust_remote_code": false,
"unk_token": "<unk>",
"use_default_system_prompt": true,
"use_fast": true
}
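The `chat_template` above is the ChatML format: each message is wrapped as `<|im_start|>role\n...content...<|im_end|>\n`, with a trailing `<|im_start|>assistant\n` when a generation prompt is requested. A plain-Python sketch of what that Jinja template renders (in practice `tokenizer.apply_chat_template` does this; the function below is just an illustration):

```python
def apply_chatml(messages, add_generation_prompt=False):
    """Render messages the way the chat_template above does (ChatML)."""
    text = ""
    for m in messages:
        # Mirrors: '<|im_start|>' + role + '\n' + content + '<|im_end|>' + '\n'
        text += "<|im_start|>" + m["role"] + "\n" + m["content"] + "<|im_end|>" + "\n"
    if add_generation_prompt:
        text += "<|im_start|>assistant\n"
    return text

prompt = apply_chatml(
    [{"role": "user", "content": "Hello"}],
    add_generation_prompt=True,
)
```

Note that `eos_token` is set to `<|im_end|>` (id 32000) in both `special_tokens_map.json` and `tokenizer_config.json`, so generation stops at the end of each ChatML turn.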