Initialize the project; model provided by the ModelHub XC community
Model: agentlans/granite-3.3-2b-instruct-critical-thinking Source: Original Platform
35
.gitattributes
vendored
Normal file
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
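Each rule above pairs a path pattern with Git LFS handling. As a rough sketch of how such patterns match filenames (this is an illustration, not part of the commit: `is_lfs_tracked` is a hypothetical helper, only a subset of the patterns is listed, and Python's `fnmatch` only approximates git's wildmatch rules, e.g. for `**`):

```python
from fnmatch import fnmatch

# Subset of the LFS patterns declared in .gitattributes above.
LFS_PATTERNS = ["*.safetensors", "*.bin", "*.pt", "*.onnx", "*tfevents*"]

def is_lfs_tracked(path: str, patterns=LFS_PATTERNS) -> bool:
    """Return True if `path` matches any LFS-tracked glob pattern."""
    return any(fnmatch(path, p) for p in patterns)

print(is_lfs_tracked("model-00001-of-00002.safetensors"))  # True
print(is_lfs_tracked("README.md"))                          # False
```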
11
Modelfile
Normal file
@@ -0,0 +1,11 @@
# ollama modelfile auto-generated by llamafactory

FROM .

TEMPLATE """{{ if .System }}System: {{ .System }}<|end_of_text|>
{{ end }}{{ range .Messages }}{{ if eq .Role "user" }}Human: {{ .Content }}<|end_of_text|>
Assistant:{{ else if eq .Role "assistant" }}{{ .Content }}<|end_of_text|>
{{ end }}{{ end }}"""

PARAMETER stop "<|end_of_text|>"
PARAMETER num_ctx 4096
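The Go template in the Modelfile maps an optional system prompt and a message list onto the model's plain-text chat format. A minimal Python sketch of the same rendering logic (`render_prompt` is an illustrative name, not part of the Modelfile):

```python
def render_prompt(system, messages):
    """Mimic the Modelfile template: System/Human/Assistant turns are
    delimited by <|end_of_text|>, and each user turn ends with 'Assistant:'."""
    out = ""
    if system:
        out += f"System: {system}<|end_of_text|>\n"
    for m in messages:
        if m["role"] == "user":
            out += f"Human: {m['content']}<|end_of_text|>\nAssistant:"
        elif m["role"] == "assistant":
            out += f"{m['content']}<|end_of_text|>\n"
    return out

print(render_prompt(None, [{"role": "user", "content": "Hi"}]))
```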
115
README.md
Normal file
@@ -0,0 +1,115 @@
---
license: apache-2.0
datasets:
- agentlans/reddit-logic
language:
- en
base_model:
- ibm-granite/granite-3.3-2b-instruct
pipeline_tag: text-generation
tags:
- critical-thinking
- analysis
- review
- argument
---
## granite-3.3-2b-instruct-critical-thinking

This model is based on [ibm-granite/granite-3.3-2b-instruct](https://huggingface.co/ibm-granite/granite-3.3-2b-instruct) and is designed for analyzing arguments, identifying logical fallacies, and suggesting improvements.
It was pre-trained on the [agentlans/reddit-logic](https://huggingface.co/datasets/agentlans/reddit-logic) dataset and then fine-tuned with supervised learning on the same dataset.

### Input Format

The model expects input in the following format:
```
Critically analyze:
{{YOUR_TEXT_HERE}}
```

For example:
```
Critically analyze:
So I've noticed a trend when it comes to the discourse around tipping and I want to be clear from the get go what my views are. I believe a tipping as a system in the US is to allow busine to owners to not pay a fair wage. I disagree with it being the primary way that servers in full service restaurants make their money. That being said, I also believe that if you go to full service restaurant where the waiter isn't giving horrible service then you should be expected to tip. So back to the discourse, it seems like many people are being disingenuous when it comes to caring about the employees by arguing: "I shouldn't be expected to pay them a fair wage". To me this seems like a cop out, because if they truly cared they would not be supporting business that use that model with any money. It seems to me that a lot of people are cheapskates masquerading as rebels to make themselves feel better about what they're doing. To clarify, I do not agree with tipping fast food or other businesses being an expectation where there are guaranteed hourly wages. I only agree with tipping being expected at sit down full service restaurants where tipped minimum wage is in effect.
```
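Programmatically, wrapping raw text in this prefix is a one-liner (`build_prompt` is a hypothetical helper name, not part of the model's API):

```python
def build_prompt(text: str) -> str:
    # Prefix required by the model's expected input format.
    return f"Critically analyze:\n{text}"

print(build_prompt("Tipping is unfair."))
# Critically analyze:
# Tipping is unfair.
```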

### Output Format

The model outputs a JSON object containing an analysis of the input argument. Here's an example of the expected output format:

```json
{
  "claims": [
    "Tipping is a cop-out for avoiding fair wages.",
    "Tipping is acceptable at full-service restaurants with tipped minimum wage."
  ],
  "ambiguous_terms": [
    "Cop out",
    "fair wage"
  ],
  "assumptions": [
    "Fair wages are a fundamental human right.",
    "Supporting businesses with tipping is hypocritical."
  ],
  "premises": [
    "Tipping is a means to avoid paying fair wages.",
    "Full-service restaurants with tipped minimum wage justify tipping."
  ],
  "evidence": {
    "credibility": "Moderate",
    "relevance": "High",
    "sufficiency": "Adequate for argument's scope"
  },
  "additional_data": "Economic studies on tipping systems, employment statistics on full-service restaurants.",
  "issues": [
    "Overgeneralizes about tipping and fair wages."
  ],
  "competing_explanations": [
    "Tipping can be a way for customers to support quality service.",
    "Tipping may not be feasible in all economic contexts."
  ],
  "validity": "Partially valid",
  "soundness": "Moderate",
  "recommendations": [
    "Clarify the distinction between tipping and fair wages.",
    "Consider the complexities of tipping systems and their impact on workers."
  ]
}
```
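Because the model emits a JSON object, downstream code can parse it directly. A sketch (field names taken from the example above; the try/except guards against the malformed output mentioned under Limitations):

```python
import json

def extract_claims(model_output: str) -> list:
    """Parse the model's JSON analysis; return [] if the output is malformed."""
    try:
        analysis = json.loads(model_output)
    except json.JSONDecodeError:
        return []
    return analysis.get("claims", [])

sample = '{"claims": ["Tipping is a cop-out for avoiding fair wages."], "validity": "Partially valid"}'
print(extract_claims(sample))  # ['Tipping is a cop-out for avoiding fair wages.']
```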

## Limitations

- The model has the same limitations as the [agentlans/reddit-logic](https://huggingface.co/datasets/agentlans/reddit-logic) dataset.
- May not work as well on data outside the training distribution, including other types of communication and fields of discourse.
- Lacks specialized knowledge, but can offer pointers for further research to help critically evaluate arguments.
- May misinterpret the input or produce malformed output, although this has not occurred in testing so far.
- May miss some logical fallacies.
- Doesn't fact-check references.

## Training procedure

The following setup was used for both pretraining and supervised fine-tuning.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1.0
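The listed total_train_batch_size follows from the per-device batch size and gradient accumulation (assuming a single training device, since 2 × 8 already equals 16):

```python
train_batch_size = 2
gradient_accumulation_steps = 8

# Effective batch size per optimizer step.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 16, matching the value listed above
```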

### Framework versions

- PEFT 0.15.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0

## License

Apache 2.0
9
added_tokens.json
Normal file
@@ -0,0 +1,9 @@
{
  "<|end_of_cite|>": 49156,
  "<|end_of_plugin|>": 49158,
  "<|end_of_role|>": 49153,
  "<|start_of_cite|>": 49155,
  "<|start_of_plugin|>": 49157,
  "<|start_of_role|>": 49152,
  "<|tool_call|>": 49154
}
33
config.json
Normal file
@@ -0,0 +1,33 @@
{
  "_name_or_path": "/drive2/granite-3.3-2b-instruct-reddit-logic-pretrain-model",
  "architectures": [
    "GraniteForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "attention_multiplier": 0.015625,
  "bos_token_id": 0,
  "embedding_multiplier": 12.0,
  "eos_token_id": 0,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 8192,
  "logits_scaling": 8.0,
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "granite",
  "num_attention_heads": 32,
  "num_hidden_layers": 40,
  "num_key_value_heads": 8,
  "pad_token_id": 0,
  "residual_multiplier": 0.22,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000000.0,
  "tie_word_embeddings": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.49.0",
  "use_cache": true,
  "vocab_size": 49159
}
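Several of these values are internally consistent and can be sanity-checked: the per-head dimension is hidden_size / num_attention_heads, attention_multiplier happens to equal its reciprocal, and vocab_size matches the highest added-token id in added_tokens.json (49158) plus one:

```python
hidden_size = 2048
num_attention_heads = 32

head_dim = hidden_size // num_attention_heads
print(head_dim)      # 64
print(1 / head_dim)  # 0.015625, equal to attention_multiplier
print(49158 + 1)     # 49159, equal to vocab_size
```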
7
generation_config.json
Normal file
@@ -0,0 +1,7 @@
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "eos_token_id": 0,
  "pad_token_id": 0,
  "transformers_version": "4.49.0"
}
48892
merges.txt
Normal file
File diff suppressed because it is too large
3
model-00001-of-00002.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:908bf50df68fb672c4bc9c304b8fe76d80ce3ba446d732f93bd04d0a58f558c2
size 4999999840
3
model-00002-of-00002.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f4498ac094bee357634e2ab0ec4674b36be5f69ad7f50ec37a701a8af1e605fa
size 67121712
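The two LFS pointers give the shard sizes on disk. Their sum is slightly larger than the total_size of 5067079680 bytes recorded in model.safetensors.index.json; a plausible reading (an assumption, not stated in the commit) is that total_size counts tensor data only, while each .safetensors file also carries a small metadata header:

```python
shard_sizes = [4999999840, 67121712]  # from the two LFS pointers above

total_on_disk = sum(shard_sizes)
print(total_on_disk)               # 5067121552
print(total_on_disk - 5067079680)  # 41872 bytes beyond the index's total_size
```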
369
model.safetensors.index.json
Normal file
@@ -0,0 +1,369 @@
{
  "metadata": {
    "total_size": 5067079680
  },
  "weight_map": {
    "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.29.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.29.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.29.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.29.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.29.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.29.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.29.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.29.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.29.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.30.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.30.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.30.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.30.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.30.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.30.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.30.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.30.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.30.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.31.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.31.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.31.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.31.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.31.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.31.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.31.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.31.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.31.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.32.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.32.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.32.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.32.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.32.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.32.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.32.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.32.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.32.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.33.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.33.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.33.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.33.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.33.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.33.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.33.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.33.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.33.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.34.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.34.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.34.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.34.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.34.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.34.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.34.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.34.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.34.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.35.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.35.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.35.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.35.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.35.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.35.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.35.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.35.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.35.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.36.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.36.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.36.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.36.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.36.post_attention_layernorm.weight": "model-00001-of-00002.safetensors"
|
||||
"model.layers.36.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.36.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.36.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.36.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.37.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.37.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.37.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.37.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.37.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.37.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.37.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.37.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.37.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.38.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.38.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.38.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.38.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.38.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.38.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.38.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.38.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.38.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.39.input_layernorm.weight": "model-00002-of-00002.safetensors",
|
||||
"model.layers.39.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"model.layers.39.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.39.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
||||
"model.layers.39.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
|
||||
"model.layers.39.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.39.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.39.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.39.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
||||
"model.norm.weight": "model-00002-of-00002.safetensors"
|
||||
}
|
||||
}
|
||||
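The `weight_map` above tells a loader which shard file holds each tensor, so only the shards that are actually needed get opened. A minimal sketch of that lookup, in plain Python (this is not the transformers implementation; the small `index` dict below is a hypothetical excerpt mirroring a few of the entries above):

```python
# Hypothetical excerpt of the "weight_map" from model.safetensors.index.json.
index = {
    "weight_map": {
        "model.layers.39.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
        "model.layers.39.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
        "model.norm.weight": "model-00002-of-00002.safetensors",
    }
}

def shard_for(tensor_name: str) -> str:
    """Return the shard filename that stores the given tensor."""
    return index["weight_map"][tensor_name]

# Group tensor names by shard so each shard file is opened only once.
by_shard = {}
for name, shard in index["weight_map"].items():
    by_shard.setdefault(shard, []).append(name)

print(shard_for("model.norm.weight"))  # → model-00002-of-00002.safetensors
```

Note how a single layer's tensors can straddle the shard boundary (layer 39 above), which is why loading must go through the index rather than assuming one file per layer range.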
39
special_tokens_map.json
Normal file
@@ -0,0 +1,39 @@
{
  "additional_special_tokens": [
    "<|start_of_role|>",
    "<|end_of_role|>",
    "<|tool_call|>",
    "<|start_of_cite|>",
    "<|end_of_cite|>",
    "<|start_of_plugin|>",
    "<|end_of_plugin|>"
  ],
  "bos_token": {
    "content": "<|end_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|end_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|end_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<|end_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
244990
tokenizer.json
Normal file
File diff suppressed because it is too large
236
tokenizer_config.json
Normal file
@@ -0,0 +1,236 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<|end_of_text|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<fim_prefix>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "<fim_middle>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<fim_suffix>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "4": {
      "content": "<fim_pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "5": {
      "content": "<filename>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "6": {
      "content": "<gh_stars>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "7": {
      "content": "<issue_start>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "8": {
      "content": "<issue_comment>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "9": {
      "content": "<issue_closed>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "10": {
      "content": "<jupyter_start>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "11": {
      "content": "<jupyter_text>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "12": {
      "content": "<jupyter_code>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "13": {
      "content": "<jupyter_output>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "14": {
      "content": "<empty_output>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "15": {
      "content": "<commit_before>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "16": {
      "content": "<commit_msg>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "17": {
      "content": "<commit_after>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "18": {
      "content": "<reponame>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "49152": {
      "content": "<|start_of_role|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "49153": {
      "content": "<|end_of_role|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "49154": {
      "content": "<|tool_call|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "49155": {
      "content": "<|start_of_cite|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "49156": {
      "content": "<|end_of_cite|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "49157": {
      "content": "<|start_of_plugin|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "49158": {
      "content": "<|end_of_plugin|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<|start_of_role|>",
    "<|end_of_role|>",
    "<|tool_call|>",
    "<|start_of_cite|>",
    "<|end_of_cite|>",
    "<|start_of_plugin|>",
    "<|end_of_plugin|>"
  ],
  "bos_token": "<|end_of_text|>",
  "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% endif %}{% if system_message is defined %}{{ 'System: ' + system_message + '<|end_of_text|>' + '\n' }}{% endif %}{% for message in loop_messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ 'Human: ' + content + '<|end_of_text|>' + '\nAssistant:' }}{% elif message['role'] == 'assistant' %}{{ content + '<|end_of_text|>' + '\n' }}{% endif %}{% endfor %}",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|end_of_text|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 9223372036854775807,
  "pad_token": "<|end_of_text|>",
  "padding_side": "left",
  "split_special_tokens": false,
  "tokenizer_class": "GPT2Tokenizer",
  "unk_token": "<|end_of_text|>",
  "vocab_size": 49152
}
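The `chat_template` field in tokenizer_config.json is a Jinja template that the tokenizer renders via `apply_chat_template` to build the prompt string. As a plain-Python sketch of what this particular template produces (the message contents below are made up; the real rendering goes through the tokenizer's Jinja engine):

```python
# End-of-text marker used by this tokenizer's chat template.
EOT = "<|end_of_text|>"

def render_chat(messages):
    """Mimic the chat_template above: optional leading system message,
    then 'Human:'/'Assistant:' turns, each terminated by <|end_of_text|>."""
    parts = []
    if messages and messages[0]["role"] == "system":
        parts.append("System: " + messages[0]["content"] + EOT + "\n")
        messages = messages[1:]
    for m in messages:
        if m["role"] == "user":
            parts.append("Human: " + m["content"] + EOT + "\nAssistant:")
        elif m["role"] == "assistant":
            parts.append(m["content"] + EOT + "\n")
    return "".join(parts)

prompt = render_chat([
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi"},
])
print(prompt)
# → System: You are helpful.<|end_of_text|>
#   Human: Hi<|end_of_text|>
#   Assistant:
```

The template ends a user turn with `\nAssistant:` and no trailing space, so generation continues directly after the role tag; note also that `bos_token`, `eos_token`, `pad_token`, and `unk_token` all reuse the single `<|end_of_text|>` token (id 0).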
1
vocab.json
Normal file
File diff suppressed because one or more lines are too long