Initialize the project; model provided by the ModelHub XC community
Model: LatitudeGames/Wayfarer-2-12B
Source: Original Platform
.gitattributes (vendored, Normal file, 37 lines added)
@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
Wayfarer-2-12B.jpg filter=lfs diff=lfs merge=lfs -text
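These rules route large binary artifacts (weights, archives, tensors) through Git LFS. As a rough illustration of which filenames the patterns above capture, Python's `fnmatch` can approximate the matching; note that git actually uses wildmatch semantics (e.g. for `saved_model/**/*`), so this is a sketch, not a reimplementation:

```python
from fnmatch import fnmatch

# A subset of the patterns from this .gitattributes file.
LFS_PATTERNS = ["*.safetensors", "*.bin", "*tfevents*", "tokenizer.json"]

def is_lfs_tracked(filename, patterns=LFS_PATTERNS):
    # True if any LFS pattern matches the filename.
    # Approximation only: git's wildmatch differs from fnmatch
    # for directory globs like saved_model/**/*.
    return any(fnmatch(filename, p) for p in patterns)

print(is_lfs_tracked("model-00001-of-00005.safetensors"))  # True
print(is_lfs_tracked("config.json"))                       # False
```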
README.md (Normal file, 70 lines added)
@@ -0,0 +1,70 @@
---
license: apache-2.0
language:
- en
base_model:
- mistralai/Mistral-Nemo-Base-2407
tags:
- text adventure
- roleplay
library_name: transformers
---

![Wayfarer-2-12B](Wayfarer-2-12B.jpg)

# Wayfarer-2-12B

We’ve heard over and over from AI Dungeon players that modern AI models are too nice, never letting them fail or die. While it may be good for a chatbot to be nice and helpful, great stories and games aren’t all rainbows and unicorns. They have conflict, tension, and even death. These create real stakes and consequences for the characters and the journeys they go on. We created Wayfarer in response, and after much testing, feedback, and refinement, we’ve developed a worthy sequel.

Wayfarer 2 further refines the formula that made the original Wayfarer so popular: slower pacing, longer and more detailed responses, and death as a distinct possibility for all characters, not just the user. The stakes have never been higher!

If you want to try this model for free, you can do so at [https://aidungeon.com](https://aidungeon.com/).

We plan to continue improving and open-sourcing similar models, so please share any and all feedback on how we can improve model behavior. Below we share more details on how Wayfarer was created.

[Quantized GGUF weights can be downloaded here.](https://huggingface.co/LatitudeGames/Wayfarer-2-12B-GGUF)

## Model details

Wayfarer 2 12B received SFT training with a simple three-ingredient recipe: the Wayfarer 2 dataset itself, a series of sentiment-balanced roleplay transcripts, and a small instruct core to help it retain instruction-following capability.

## How It Was Made

Wayfarer’s text adventure data was generated by simulating playthroughs of published character-creator scenarios from AI Dungeon. Five distinct user archetypes played through each scenario, with character starts varying in faction, location, and other details, yielding five unique samples per scenario.

One language model played the role of narrator, while another played the user. Each was blind to the other’s underlying logic, so the user could genuinely surprise the narrator with its choices. Each simulation ran for up to 8k tokens or until the main character died.

Wayfarer’s overall emotional sentiment is pessimistic: failure is frequent, and plot armor does not exist for anyone. This counters the positivity bias inherent in today’s language models.

## Inference

The Nemo architecture is known to be sensitive to high temperatures, so the following settings are recommended as a baseline. Nothing stops you from experimenting with these, of course.
```
"temperature": 0.8,
"repetition_penalty": 1.05,
"min_p": 0.025
```
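For intuition, min-p keeps only tokens whose probability is at least `min_p` times the probability of the most likely token, after temperature scaling. A minimal pure-Python sketch of that filtering step (an illustration of the sampler semantics, not the actual inference code):

```python
import math

def min_p_filter(logits, temperature=0.8, min_p=0.025):
    """Return indices of tokens that survive temperature + min-p filtering."""
    scaled = [l / temperature for l in logits]   # temperature scaling
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    cutoff = min_p * max(probs)                  # min-p threshold
    return [i for i, p in enumerate(probs) if p >= cutoff]

# A near-tie with the top token survives; a far-off token is pruned.
print(min_p_filter([10.0, 9.5, 0.0]))  # [0, 1]
```

The `repetition_penalty` is applied separately, by down-weighting logits of tokens already present in the context.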

## Limitations

Wayfarer was trained exclusively on second-person present-tense data (using “you”) in a narrative style. Other perspectives will also work, but may produce suboptimal results.

## Prompt Format

ChatML was used for both finetuning stages.
```
<|im_start|>system
You're a masterful storyteller and gamemaster. Write in second person present tense (You are), crafting vivid, engaging narratives with authority and confidence.<|im_end|>
<|im_start|>user
> You peer into the darkness.<|im_end|>
<|im_start|>assistant
You have been eaten by a grue.

GAME OVER<|im_end|>
```
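The same format can be assembled programmatically. A small sketch mirroring this repo's chat template (in practice you would use `tokenizer.apply_chat_template` from transformers, which renders the bundled chat_template.jinja):

```python
def to_chatml(messages, add_generation_prompt=True):
    # Each message becomes: <|im_start|>{role}\n{content}<|im_end|>\n,
    # matching this repo's chat_template.jinja.
    prompt = "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )
    if add_generation_prompt:
        prompt += "<|im_start|>assistant\n"  # open the assistant turn
    return prompt

messages = [
    {"role": "system", "content": "You're a masterful storyteller and gamemaster."},
    {"role": "user", "content": "> You peer into the darkness."},
]
print(to_chatml(messages))
```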

## Credits

Thanks to [Gryphe Padar](https://huggingface.co/Gryphe) for collaborating on this finetune with us!
Wayfarer-2-12B.jpg (Normal file, 3 lines added)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ceece1c12b874ec17fb16dfe9c66a927cd1c145cde29d6e85de4951956035f0
size 1188702
chat_template.jinja (Normal file, 4 lines added)
@@ -0,0 +1,4 @@
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
config.json (Normal file, 26 lines added)
@@ -0,0 +1,26 @@
{
  "architectures": [
    "MistralForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 131072,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 131072,
  "model_type": "mistral",
  "num_attention_heads": 32,
  "num_hidden_layers": 40,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-05,
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.52.4",
  "use_cache": true,
  "vocab_size": 131074
}
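With 32 attention heads but only 8 key/value heads, this config describes grouped-query attention. A sketch, assuming the standard Mistral projection layout in transformers, of the per-layer attention projection shapes it implies:

```python
# Values taken from the config.json above.
config = {
    "hidden_size": 5120,
    "head_dim": 128,
    "num_attention_heads": 32,
    "num_key_value_heads": 8,
}

def attn_proj_shapes(cfg):
    # q_proj maps hidden_size -> num_attention_heads * head_dim;
    # k/v_proj map hidden_size -> num_key_value_heads * head_dim (GQA),
    # so the KV cache is 4x smaller than with full multi-head attention.
    q_out = cfg["num_attention_heads"] * cfg["head_dim"]
    kv_out = cfg["num_key_value_heads"] * cfg["head_dim"]
    return {
        "q_proj": (q_out, cfg["hidden_size"]),
        "k_proj": (kv_out, cfg["hidden_size"]),
        "v_proj": (kv_out, cfg["hidden_size"]),
        "o_proj": (cfg["hidden_size"], q_out),
    }

print(attn_proj_shapes(config))
```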
generation_config.json (Normal file, 7 lines added)
@@ -0,0 +1,7 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "do_sample": true,
  "eos_token_id": 2,
  "transformers_version": "4.52.4"
}
model-00001-of-00005.safetensors (Normal file, 3 lines added)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fb15e30e70f8f5cab9674f56d38772fa095bfd78feb40ce3d385d04214caf255
size 4865542976
model-00002-of-00005.safetensors (Normal file, 3 lines added)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f6627b3e34d74837a1b111c0765407420e4a72c93150988202070c11fe20f07
size 4907529424
model-00003-of-00005.safetensors (Normal file, 3 lines added)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4007e837011579fc5501ca9eaf7f702406972bf6b484f45d156fe40aa07dc9f1
size 4907529456
model-00004-of-00005.safetensors (Normal file, 3 lines added)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dd29774a5df478c0c3c2b582781ff2240c71a05f8e29717210c7c1409403c68c
size 4907529456
model-00005-of-00005.safetensors (Normal file, 3 lines added)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:083701e8f86480ed9b220591f8f325a0c24a0d4611c2cf474651d87b2b8e84e0
size 4907516752
model.safetensors.index.json (Normal file, 370 lines added)
@@ -0,0 +1,370 @@
{
  "metadata": {
    "total_size": 24495605760
  },
  "weight_map": {
    "lm_head.weight": "model-00005-of-00005.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.28.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.28.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.28.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.28.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.28.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.28.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.28.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.28.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.28.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.29.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
    "model.layers.30.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.30.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.31.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.input_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.32.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.input_layernorm.weight": "model-00005-of-00005.safetensors",
    "model.layers.33.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.33.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.33.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
    "model.layers.33.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.33.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
    "model.layers.34.input_layernorm.weight": "model-00005-of-00005.safetensors",
    "model.layers.34.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.34.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.34.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.34.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
    "model.layers.34.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.34.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.34.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.34.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.35.input_layernorm.weight": "model-00005-of-00005.safetensors",
    "model.layers.35.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.35.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.35.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.35.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
    "model.layers.35.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.35.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.35.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.35.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.36.input_layernorm.weight": "model-00005-of-00005.safetensors",
    "model.layers.36.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.36.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.36.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.36.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
    "model.layers.36.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.36.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.36.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.36.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.37.input_layernorm.weight": "model-00005-of-00005.safetensors",
    "model.layers.37.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.37.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.37.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.37.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
    "model.layers.37.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.37.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.37.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
    "model.layers.37.self_attn.v_proj.weight": "model-00005-of-00005.safetensors"
|
||||
"model.layers.38.input_layernorm.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.38.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.38.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.38.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.38.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.38.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.38.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.38.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.38.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.39.input_layernorm.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.39.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.39.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.39.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.39.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.39.self_attn.k_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.39.self_attn.o_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.39.self_attn.q_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.39.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"model.layers.4.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.5.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.6.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.6.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.6.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.6.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"model.layers.7.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.7.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.7.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.7.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.7.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.7.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.7.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.7.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.7.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.8.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.8.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.8.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.9.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.9.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.9.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.9.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.9.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.9.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.layers.9.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"model.norm.weight": "model-00005-of-00005.safetensors"
|
||||
}
|
||||
}
|
||||
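The entries above are the tail of the `weight_map` in `model.safetensors.index.json`: a flat mapping from each tensor name to the one shard (of five) that stores it. A minimal sketch of resolving a tensor's shard from such an index; the `shard_for` helper and the inlined two-entry excerpt are illustrative, not part of this repository:

```python
import json

def shard_for(index: dict, tensor_name: str) -> str:
    """Return the shard filename that stores the given tensor."""
    try:
        return index["weight_map"][tensor_name]
    except KeyError:
        raise KeyError(f"{tensor_name} is not listed in the index")

# A tiny excerpt of the index shown above, inlined for illustration.
index = json.loads("""
{
  "weight_map": {
    "model.layers.39.self_attn.v_proj.weight": "model-00005-of-00005.safetensors",
    "model.norm.weight": "model-00005-of-00005.safetensors"
  }
}
""")

print(shard_for(index, "model.norm.weight"))  # model-00005-of-00005.safetensors
```

With the real index file, the same lookup is what drives sharded loading: a loader opens only the shard returned for each requested tensor (e.g. via `safetensors.safe_open` on that file) instead of reading all five shards.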
39
special_tokens_map.json
Normal file
@@ -0,0 +1,39 @@
{
"additional_special_tokens": [
{
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
],
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
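The `special_tokens_map.json` above declares `<|im_start|>` as an additional special token and `<|im_end|>` as EOS, i.e. the ChatML turn delimiters. A hedged sketch of assembling a prompt in that format; the `chatml` helper is hypothetical, not an API shipped with this model:

```python
def chatml(messages):
    """Render a list of {"role", "content"} dicts in ChatML form,
    using the special tokens declared in special_tokens_map.json."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Leave the assistant turn open so the model completes it,
    # stopping when it emits the <|im_end|> EOS token.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

print(chatml([{"role": "user", "content": "Hello"}]))
```

In practice the equivalent rendering usually comes from the tokenizer's chat template rather than hand-built strings; the sketch only shows what the declared special tokens are for.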
3
tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a2fa2956478eaa353c6c4b1f47fdd6868cce6075e52e169c35ae8bd28524e7a8
size 17078668
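The three lines above are a Git LFS pointer, not the tokenizer itself: the roughly 17 MB `tokenizer.json` is fetched separately and should hash to the recorded `oid`. A small sketch of parsing such a pointer so a downloaded file can be verified against it; the parser is illustrative:

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:a2fa2956478eaa353c6c4b1f47fdd6868cce6075e52e169c35ae8bd28524e7a8
size 17078668"""

fields = parse_lfs_pointer(pointer)
expected_sha = fields["oid"].removeprefix("sha256:")
expected_size = int(fields["size"])
# After downloading the real tokenizer.json, verify it with:
# hashlib.sha256(open("tokenizer.json", "rb").read()).hexdigest() == expected_sha
```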
8034
tokenizer_config.json
Normal file
File diff suppressed because it is too large