Initialize project; model provided by the ModelHub XC community

Model: FlyPig23/Qwen3-4B_Paper_Impact_media_SFT_1ep
Source: Original Platform
Author: ModelHub XC
Date: 2026-04-29 16:04:06 +08:00
Commit: 65e1a6a5c7
20 changed files with 152946 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
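These patterns route matching files through Git LFS so that large binaries (weights, archives, tokenizer data) are stored as pointers rather than blobs. A minimal sketch using Python's `fnmatch` to check which filenames a few of the patterns above would capture — note this only approximates Git's attribute matching (gitignore-style rules such as `saved_model/**/*` behave differently from plain fnmatch):

```python
from fnmatch import fnmatch

# A few of the glob patterns from the .gitattributes above.
lfs_patterns = ["*.safetensors", "*.bin", "*.tar.*", "*tfevents*", "tokenizer.json"]

def tracked_by_lfs(filename, patterns=lfs_patterns):
    """Return True if any LFS pattern matches the given file name."""
    return any(fnmatch(filename, p) for p in patterns)

print(tracked_by_lfs("model-00001-of-00002.safetensors"))  # True
print(tracked_by_lfs("config.json"))                       # False
print(tracked_by_lfs("data.tar.gz"))                       # True
```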

README.md Normal file

@@ -0,0 +1,63 @@
---
library_name: transformers
license: other
base_model: Qwen/Qwen3-4B-Instruct-2507
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Qwen3-4B_Paper_Impact_media_SFT_1ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen3-4B_Paper_Impact_media_SFT_1ep
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) on the paper_impact_media_train dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
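The effective batch sizes above follow directly from the per-device values: with 4 GPUs, a per-device train batch of 8, and 2 gradient-accumulation steps, each optimizer step sees 4 × 8 × 2 = 64 samples; evaluation uses no accumulation, so 4 × 8 = 32. A quick check:

```python
num_devices = 4
train_batch_size = 8          # per device
eval_batch_size = 8           # per device
gradient_accumulation_steps = 2

total_train_batch_size = num_devices * train_batch_size * gradient_accumulation_steps
total_eval_batch_size = num_devices * eval_batch_size  # no accumulation at eval time

print(total_train_batch_size)  # 64
print(total_eval_batch_size)   # 32
```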
### Training results
### Framework versions
- Transformers 4.57.1
- Pytorch 2.6.0+cu124
- Datasets 4.5.0
- Tokenizers 0.22.1

added_tokens.json Normal file

@@ -0,0 +1,28 @@
{
"</think>": 151668,
"</tool_call>": 151658,
"</tool_response>": 151666,
"<think>": 151667,
"<tool_call>": 151657,
"<tool_response>": 151665,
"<|box_end|>": 151649,
"<|box_start|>": 151648,
"<|endoftext|>": 151643,
"<|file_sep|>": 151664,
"<|fim_middle|>": 151660,
"<|fim_pad|>": 151662,
"<|fim_prefix|>": 151659,
"<|fim_suffix|>": 151661,
"<|im_end|>": 151645,
"<|im_start|>": 151644,
"<|image_pad|>": 151655,
"<|object_ref_end|>": 151647,
"<|object_ref_start|>": 151646,
"<|quad_end|>": 151651,
"<|quad_start|>": 151650,
"<|repo_name|>": 151663,
"<|video_pad|>": 151656,
"<|vision_end|>": 151653,
"<|vision_pad|>": 151654,
"<|vision_start|>": 151652
}
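The added special tokens occupy a contiguous block of IDs (151643–151668) appended after the base BPE vocabulary, and all of them fall below the `vocab_size` of 151936 declared in `config.json`, so no embedding resize is needed. A small consistency check over a subset of the entries above:

```python
added_tokens = {  # subset of added_tokens.json above
    "<|endoftext|>": 151643, "<|im_start|>": 151644, "<|im_end|>": 151645,
    "<tool_call>": 151657, "</tool_call>": 151658,
    "<think>": 151667, "</think>": 151668,
}
vocab_size = 151936  # from config.json

ids = sorted(added_tokens.values())
assert ids[-1] < vocab_size  # every added token has a valid embedding row
print(ids[0], ids[-1])       # 151643 151668
```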

all_results.json Normal file

@@ -0,0 +1,12 @@
{
"epoch": 1.0,
"eval_loss": 0.057367920875549316,
"eval_runtime": 295.2637,
"eval_samples_per_second": 53.467,
"eval_steps_per_second": 1.673,
"total_flos": 3.732036479342346e+17,
"train_loss": 0.07862781884086817,
"train_runtime": 1853.8902,
"train_samples_per_second": 11.415,
"train_steps_per_second": 0.179
}
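The throughput numbers in these results are mutually consistent: samples-per-second times runtime gives the number of training samples seen in the single epoch, and dividing by the effective batch size of 64 (from the README hyperparameters) recovers roughly the reported steps-per-second times runtime. A sanity check, allowing for rounding in the reported rates:

```python
train_runtime = 1853.8902
train_samples_per_second = 11.415
train_steps_per_second = 0.179
total_train_batch_size = 64   # from the README hyperparameters

samples = train_samples_per_second * train_runtime        # ~21162 samples in 1 epoch
steps_from_samples = samples / total_train_batch_size     # ~331 optimizer steps
steps_from_rate = train_steps_per_second * train_runtime  # ~332 optimizer steps

# The two step estimates agree to within rounding of the reported rates.
assert abs(steps_from_samples - steps_from_rate) / steps_from_rate < 0.02
print(round(steps_from_samples), round(steps_from_rate))
```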

chat_template.jinja Normal file

@@ -0,0 +1,61 @@
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0].role == 'system' %}
{{- messages[0].content + '\n\n' }}
{%- endif %}
{{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0].role == 'system' %}
{{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- for message in messages %}
{%- if message.content is string %}
{%- set content = message.content %}
{%- else %}
{%- set content = '' %}
{%- endif %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
{{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- if message.tool_calls %}
{%- for tool_call in message.tool_calls %}
{%- if (loop.first and content) or (not loop.first) %}
{{- '\n' }}
{%- endif %}
{%- if tool_call.function %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{%- if tool_call.arguments is string %}
{{- tool_call.arguments }}
{%- else %}
{{- tool_call.arguments | tojson }}
{%- endif %}
{{- '}\n</tool_call>' }}
{%- endfor %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
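For a plain conversation (no tools, no tool responses), the template above reduces to standard ChatML framing: each message renders as `<|im_start|>{role}\n{content}<|im_end|>\n`, and `add_generation_prompt` appends an open assistant turn. A minimal sketch of that simple path in plain Python — in practice the rendering is done by `tokenizer.apply_chat_template`, which executes this Jinja template:

```python
def render_chatml(messages, add_generation_prompt=True):
    """Mirror the no-tools path of the chat template above."""
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        out.append("<|im_start|>assistant\n")  # open assistant turn for generation
    return "".join(out)

prompt = render_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi"},
])
print(prompt)
```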

config.json Normal file

@@ -0,0 +1,68 @@
{
"architectures": [
"Qwen3ForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"dtype": "bfloat16",
"eos_token_id": 151645,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 2560,
"initializer_range": 0.02,
"intermediate_size": 9728,
"layer_types": [
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention"
],
"max_position_embeddings": 262144,
"max_window_layers": 36,
"model_type": "qwen3",
"num_attention_heads": 32,
"num_hidden_layers": 36,
"num_key_value_heads": 8,
"pad_token_id": 151643,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 5000000,
"sliding_window": null,
"tie_word_embeddings": true,
"transformers_version": "4.57.1",
"use_cache": false,
"use_sliding_window": false,
"vocab_size": 151936
}
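The shapes in this config determine the parameter count exactly. With GQA (32 query heads, 8 KV heads, head_dim 128), tied input/output embeddings, and per-layer q/k RMSNorms, the terms sum to the 4,022,468,096 total parameters reported in the safetensors index below. A sketch of the arithmetic:

```python
hidden, inter, layers, vocab = 2560, 9728, 36, 151936
heads, kv_heads, head_dim = 32, 8, 128

q = hidden * heads * head_dim          # q_proj
kv = hidden * kv_heads * head_dim      # k_proj or v_proj (GQA: fewer KV heads)
o = heads * head_dim * hidden          # o_proj
attn = q + 2 * kv + o + 2 * head_dim   # + q_norm and k_norm (head_dim-sized RMSNorms)
mlp = 3 * hidden * inter               # gate_proj, up_proj, down_proj
norms = 2 * hidden                     # input + post-attention layernorm
per_layer = attn + mlp + norms

embed = vocab * hidden                 # shared with lm_head (tie_word_embeddings)
total = layers * per_layer + embed + hidden  # + final model.norm

print(total)  # 4022468096
```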

eval_results.json Normal file

@@ -0,0 +1,7 @@
{
"epoch": 1.0,
"eval_loss": 0.057367920875549316,
"eval_runtime": 295.2637,
"eval_samples_per_second": 53.467,
"eval_steps_per_second": 1.673
}

generation_config.json Normal file

@@ -0,0 +1,12 @@
{
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"temperature": 0.7,
"top_k": 20,
"top_p": 0.8,
"transformers_version": "4.57.1"
}
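The default decoding here is sampling with temperature 0.7, top-k 20, and top-p 0.8. A minimal pure-Python sketch of how those three filters compose — scale logits by temperature, keep at most top-k tokens, stop once cumulative probability reaches top-p, then renormalize. This approximates what `model.generate` does with these settings, not the exact Hugging Face implementation:

```python
import math

def sample_filter(logits, temperature=0.7, top_k=20, top_p=0.8):
    """Return the renormalized distribution over tokens kept by top-k/top-p."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order[:top_k]:               # top-k cutoff
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:                  # nucleus (top-p) cutoff
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

# One logit dominates: after temperature scaling its probability exceeds
# top_p, so it is the only token kept.
dist = sample_filter([5.0, 0.0, 0.0, 0.0])
print(dist)
```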

merges.txt Normal file

File diff suppressed because it is too large

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:90b2904e4388c1eb0808ac05e55fc64428e01481be767cf2528e5fdaa8f169da
size 4967215360


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:94a4357e79cb3f4bfa7bb22010020404d81247c69360532dcbed348bc57dbb59
size 3855679144


@@ -0,0 +1,407 @@
{
"metadata": {
"total_parameters": 4022468096,
"total_size": 8822848512
},
"weight_map": {
"lm_head.weight": "model-00002-of-00002.safetensors",
"model.embed_tokens.weight": "model-00001-of-00002.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.20.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.21.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.21.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.22.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.22.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.22.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.23.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.23.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.23.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.24.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.24.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.24.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.25.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.25.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.25.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.26.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.30.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.norm.weight": "model-00002-of-00002.safetensors"
}
}
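The `weight_map` above assigns every tensor name to one of the two shard files. A minimal sketch of how a loader might use such a map — grouping tensor names by shard so each `.safetensors` file is opened once (in practice you would `json.load` the real `model.safetensors.index.json` and read its `"weight_map"` field; the excerpt dict below is illustrative):

```python
def group_by_shard(weight_map):
    """Invert a safetensors weight_map: shard file -> list of tensor names."""
    shards = {}
    for tensor_name, shard_file in weight_map.items():
        shards.setdefault(shard_file, []).append(tensor_name)
    return shards

# Tiny excerpt in the same shape as the index above (illustrative only).
weight_map = {
    "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.norm.weight": "model-00002-of-00002.safetensors",
}
shards = group_by_shard(weight_map)
# shards["model-00002-of-00002.safetensors"] == ["model.norm.weight"]
```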

special_tokens_map.json Normal file

@@ -0,0 +1,31 @@
{
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>",
"<|object_ref_start|>",
"<|object_ref_end|>",
"<|box_start|>",
"<|box_end|>",
"<|quad_start|>",
"<|quad_end|>",
"<|vision_start|>",
"<|vision_end|>",
"<|vision_pad|>",
"<|image_pad|>",
"<|video_pad|>"
],
"eos_token": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
size 11422654
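The file above is not the tokenizer itself but a Git LFS pointer (spec v1): three `key value` lines giving the spec URL, the SHA-256 object ID, and the byte size of the real `tokenizer.json`. A small sketch of parsing that format:

```python
# Parse a Git LFS pointer file: each line is "<key> <value>".
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
size 11422654"""

fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
# fields["oid"] carries the hash algorithm and digest; fields["size"] the byte count.
```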

tokenizer_config.json Normal file

@@ -0,0 +1,240 @@
{
"add_bos_token": false,
"add_prefix_space": false,
"added_tokens_decoder": {
"151643": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151644": {
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151645": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151646": {
"content": "<|object_ref_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151647": {
"content": "<|object_ref_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151648": {
"content": "<|box_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151649": {
"content": "<|box_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151650": {
"content": "<|quad_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151651": {
"content": "<|quad_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151652": {
"content": "<|vision_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151653": {
"content": "<|vision_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151654": {
"content": "<|vision_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151655": {
"content": "<|image_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151656": {
"content": "<|video_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151657": {
"content": "<tool_call>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151658": {
"content": "</tool_call>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151659": {
"content": "<|fim_prefix|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151660": {
"content": "<|fim_middle|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151661": {
"content": "<|fim_suffix|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151662": {
"content": "<|fim_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151663": {
"content": "<|repo_name|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151664": {
"content": "<|file_sep|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151665": {
"content": "<tool_response>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151666": {
"content": "</tool_response>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151667": {
"content": "<think>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151668": {
"content": "</think>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
}
},
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>",
"<|object_ref_start|>",
"<|object_ref_end|>",
"<|box_start|>",
"<|box_end|>",
"<|quad_start|>",
"<|quad_end|>",
"<|vision_start|>",
"<|vision_end|>",
"<|vision_pad|>",
"<|image_pad|>",
"<|video_pad|>"
],
"bos_token": null,
"clean_up_tokenization_spaces": false,
"eos_token": "<|im_end|>",
"errors": "replace",
"extra_special_tokens": {},
"model_max_length": 1010000,
"pad_token": "<|endoftext|>",
"padding_side": "right",
"split_special_tokens": false,
"tokenizer_class": "Qwen2Tokenizer",
"unk_token": null
}
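The config above declares ChatML-style markers: `<|im_start|>` / `<|im_end|>` as special tokens, with `<|im_end|>` as EOS and `<|endoftext|>` as padding. A minimal sketch of the conversation frame those markers imply — note the actual chat template shipped with the model (in the tokenizer files) is authoritative; this is only an illustration of the format:

```python
def chatml(messages):
    """Format a list of {'role', 'content'} dicts in the ChatML frame
    implied by the <|im_start|>/<|im_end|> markers above."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # generation prompt
    return "".join(parts)

prompt = chatml([{"role": "user", "content": "Hello"}])
# prompt == "<|im_start|>user\nHello<|im_end|>\n<|im_start|>assistant\n"
```

In practice, `AutoTokenizer.from_pretrained(...).apply_chat_template(messages)` should be preferred over hand-rolled formatting, since it uses the exact template the model was trained with.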

train_results.json Normal file

@@ -0,0 +1,8 @@
{
"epoch": 1.0,
"total_flos": 3.732036479342346e+17,
"train_loss": 0.07862781884086817,
"train_runtime": 1853.8902,
"train_samples_per_second": 11.415,
"train_steps_per_second": 0.179
}
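The throughput figures above are internally consistent: runtime times steps-per-second reproduces the 331 optimizer steps seen in `trainer_log.jsonl`, and runtime times samples-per-second gives roughly 21k training samples for the single epoch. A quick sanity check:

```python
# Values copied from train_results.json above.
train_runtime = 1853.8902          # seconds
steps_per_second = 0.179
samples_per_second = 11.415

approx_steps = train_runtime * steps_per_second      # ~331.8, matches 331 total steps
approx_samples = train_runtime * samples_per_second  # ~21162 samples in 1 epoch
```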

trainer_log.jsonl Normal file

@@ -0,0 +1,67 @@
{"current_steps": 5, "total_steps": 331, "loss": 1.7699, "lr": 2.3529411764705885e-06, "epoch": 0.015105740181268883, "percentage": 1.51, "elapsed_time": "0:00:29", "remaining_time": "0:32:31"}
{"current_steps": 10, "total_steps": 331, "loss": 0.3128, "lr": 5.294117647058824e-06, "epoch": 0.030211480362537766, "percentage": 3.02, "elapsed_time": "0:00:56", "remaining_time": "0:30:01"}
{"current_steps": 15, "total_steps": 331, "loss": 0.0691, "lr": 8.23529411764706e-06, "epoch": 0.045317220543806644, "percentage": 4.53, "elapsed_time": "0:01:24", "remaining_time": "0:29:38"}
{"current_steps": 20, "total_steps": 331, "loss": 0.0675, "lr": 1.1176470588235295e-05, "epoch": 0.06042296072507553, "percentage": 6.04, "elapsed_time": "0:01:51", "remaining_time": "0:28:51"}
{"current_steps": 25, "total_steps": 331, "loss": 0.0669, "lr": 1.4117647058823532e-05, "epoch": 0.0755287009063444, "percentage": 7.55, "elapsed_time": "0:02:19", "remaining_time": "0:28:21"}
{"current_steps": 30, "total_steps": 331, "loss": 0.0658, "lr": 1.7058823529411767e-05, "epoch": 0.09063444108761329, "percentage": 9.06, "elapsed_time": "0:02:46", "remaining_time": "0:27:49"}
{"current_steps": 35, "total_steps": 331, "loss": 0.0648, "lr": 2e-05, "epoch": 0.10574018126888217, "percentage": 10.57, "elapsed_time": "0:03:14", "remaining_time": "0:27:27"}
{"current_steps": 40, "total_steps": 331, "loss": 0.0657, "lr": 1.9986017152454497e-05, "epoch": 0.12084592145015106, "percentage": 12.08, "elapsed_time": "0:03:41", "remaining_time": "0:26:51"}
{"current_steps": 45, "total_steps": 331, "loss": 0.0609, "lr": 1.9944107713823068e-05, "epoch": 0.13595166163141995, "percentage": 13.6, "elapsed_time": "0:04:08", "remaining_time": "0:26:19"}
{"current_steps": 50, "total_steps": 331, "loss": 0.0712, "lr": 1.9874388886763944e-05, "epoch": 0.1510574018126888, "percentage": 15.11, "elapsed_time": "0:04:35", "remaining_time": "0:25:50"}
{"current_steps": 55, "total_steps": 331, "loss": 0.0669, "lr": 1.9777055644823087e-05, "epoch": 0.1661631419939577, "percentage": 16.62, "elapsed_time": "0:05:02", "remaining_time": "0:25:16"}
{"current_steps": 60, "total_steps": 331, "loss": 0.0623, "lr": 1.9652380187177128e-05, "epoch": 0.18126888217522658, "percentage": 18.13, "elapsed_time": "0:05:30", "remaining_time": "0:24:54"}
{"current_steps": 65, "total_steps": 331, "loss": 0.0561, "lr": 1.9500711177409456e-05, "epoch": 0.19637462235649547, "percentage": 19.64, "elapsed_time": "0:05:58", "remaining_time": "0:24:27"}
{"current_steps": 70, "total_steps": 331, "loss": 0.0614, "lr": 1.932247276844826e-05, "epoch": 0.21148036253776434, "percentage": 21.15, "elapsed_time": "0:06:27", "remaining_time": "0:24:03"}
{"current_steps": 75, "total_steps": 331, "loss": 0.0567, "lr": 1.9118163416393392e-05, "epoch": 0.22658610271903323, "percentage": 22.66, "elapsed_time": "0:06:54", "remaining_time": "0:23:35"}
{"current_steps": 80, "total_steps": 331, "loss": 0.0598, "lr": 1.8888354486549238e-05, "epoch": 0.24169184290030213, "percentage": 24.17, "elapsed_time": "0:07:22", "remaining_time": "0:23:08"}
{"current_steps": 85, "total_steps": 331, "loss": 0.0564, "lr": 1.863368865556191e-05, "epoch": 0.256797583081571, "percentage": 25.68, "elapsed_time": "0:07:50", "remaining_time": "0:22:40"}
{"current_steps": 90, "total_steps": 331, "loss": 0.056, "lr": 1.8354878114129368e-05, "epoch": 0.2719033232628399, "percentage": 27.19, "elapsed_time": "0:08:16", "remaining_time": "0:22:10"}
{"current_steps": 95, "total_steps": 331, "loss": 0.0493, "lr": 1.8052702575310588e-05, "epoch": 0.28700906344410876, "percentage": 28.7, "elapsed_time": "0:08:45", "remaining_time": "0:21:44"}
{"current_steps": 100, "total_steps": 331, "loss": 0.048, "lr": 1.772800709400383e-05, "epoch": 0.3021148036253776, "percentage": 30.21, "elapsed_time": "0:09:11", "remaining_time": "0:21:12"}
{"current_steps": 105, "total_steps": 331, "loss": 0.0537, "lr": 1.7381699703691866e-05, "epoch": 0.31722054380664655, "percentage": 31.72, "elapsed_time": "0:09:36", "remaining_time": "0:20:40"}
{"current_steps": 110, "total_steps": 331, "loss": 0.0537, "lr": 1.7014748877063212e-05, "epoch": 0.3323262839879154, "percentage": 33.23, "elapsed_time": "0:10:04", "remaining_time": "0:20:13"}
{"current_steps": 115, "total_steps": 331, "loss": 0.051, "lr": 1.6628180817610963e-05, "epoch": 0.3474320241691843, "percentage": 34.74, "elapsed_time": "0:10:31", "remaining_time": "0:19:46"}
{"current_steps": 120, "total_steps": 331, "loss": 0.0473, "lr": 1.6223076589783368e-05, "epoch": 0.36253776435045315, "percentage": 36.25, "elapsed_time": "0:10:57", "remaining_time": "0:19:15"}
{"current_steps": 125, "total_steps": 331, "loss": 0.0509, "lr": 1.5800569095711983e-05, "epoch": 0.3776435045317221, "percentage": 37.76, "elapsed_time": "0:11:24", "remaining_time": "0:18:48"}
{"current_steps": 130, "total_steps": 331, "loss": 0.0577, "lr": 1.5361839906972095e-05, "epoch": 0.39274924471299094, "percentage": 39.27, "elapsed_time": "0:11:53", "remaining_time": "0:18:22"}
{"current_steps": 135, "total_steps": 331, "loss": 0.0513, "lr": 1.4908115960235683e-05, "epoch": 0.4078549848942598, "percentage": 40.79, "elapsed_time": "0:12:21", "remaining_time": "0:17:56"}
{"current_steps": 140, "total_steps": 331, "loss": 0.0572, "lr": 1.4440666126057743e-05, "epoch": 0.4229607250755287, "percentage": 42.3, "elapsed_time": "0:12:47", "remaining_time": "0:17:27"}
{"current_steps": 145, "total_steps": 331, "loss": 0.0538, "lr": 1.396079766039157e-05, "epoch": 0.4380664652567976, "percentage": 43.81, "elapsed_time": "0:13:15", "remaining_time": "0:17:00"}
{"current_steps": 150, "total_steps": 331, "loss": 0.0512, "lr": 1.3469852548756626e-05, "epoch": 0.45317220543806647, "percentage": 45.32, "elapsed_time": "0:13:43", "remaining_time": "0:16:33"}
{"current_steps": 155, "total_steps": 331, "loss": 0.0494, "lr": 1.296920375328275e-05, "epoch": 0.46827794561933533, "percentage": 46.83, "elapsed_time": "0:14:10", "remaining_time": "0:16:06"}
{"current_steps": 160, "total_steps": 331, "loss": 0.0513, "lr": 1.2460251373126136e-05, "epoch": 0.48338368580060426, "percentage": 48.34, "elapsed_time": "0:14:38", "remaining_time": "0:15:39"}
{"current_steps": 165, "total_steps": 331, "loss": 0.046, "lr": 1.194441872899471e-05, "epoch": 0.4984894259818731, "percentage": 49.85, "elapsed_time": "0:15:06", "remaining_time": "0:15:11"}
{"current_steps": 170, "total_steps": 331, "loss": 0.05, "lr": 1.1423148382732854e-05, "epoch": 0.513595166163142, "percentage": 51.36, "elapsed_time": "0:15:34", "remaining_time": "0:14:45"}
{"current_steps": 175, "total_steps": 331, "loss": 0.0485, "lr": 1.0897898103096917e-05, "epoch": 0.5287009063444109, "percentage": 52.87, "elapsed_time": "0:16:01", "remaining_time": "0:14:17"}
{"current_steps": 180, "total_steps": 331, "loss": 0.0458, "lr": 1.0370136789003582e-05, "epoch": 0.5438066465256798, "percentage": 54.38, "elapsed_time": "0:16:28", "remaining_time": "0:13:49"}
{"current_steps": 185, "total_steps": 331, "loss": 0.0471, "lr": 9.841340361651921e-06, "epoch": 0.5589123867069486, "percentage": 55.89, "elapsed_time": "0:16:56", "remaining_time": "0:13:21"}
{"current_steps": 190, "total_steps": 331, "loss": 0.0397, "lr": 9.312987637007191e-06, "epoch": 0.5740181268882175, "percentage": 57.4, "elapsed_time": "0:17:22", "remaining_time": "0:12:53"}
{"current_steps": 195, "total_steps": 331, "loss": 0.0432, "lr": 8.786556190189183e-06, "epoch": 0.5891238670694864, "percentage": 58.91, "elapsed_time": "0:17:50", "remaining_time": "0:12:26"}
{"current_steps": 200, "total_steps": 331, "loss": 0.0456, "lr": 8.263518223330698e-06, "epoch": 0.6042296072507553, "percentage": 60.42, "elapsed_time": "0:18:17", "remaining_time": "0:11:58"}
{"current_steps": 205, "total_steps": 331, "loss": 0.0464, "lr": 7.745336448461958e-06, "epoch": 0.6193353474320241, "percentage": 61.93, "elapsed_time": "0:18:45", "remaining_time": "0:11:31"}
{"current_steps": 210, "total_steps": 331, "loss": 0.0436, "lr": 7.233459996934731e-06, "epoch": 0.6344410876132931, "percentage": 63.44, "elapsed_time": "0:19:13", "remaining_time": "0:11:04"}
{"current_steps": 215, "total_steps": 331, "loss": 0.0395, "lr": 6.729320366825785e-06, "epoch": 0.649546827794562, "percentage": 64.95, "elapsed_time": "0:19:40", "remaining_time": "0:10:36"}
{"current_steps": 220, "total_steps": 331, "loss": 0.0434, "lr": 6.234327419653013e-06, "epoch": 0.6646525679758308, "percentage": 66.47, "elapsed_time": "0:20:08", "remaining_time": "0:10:09"}
{"current_steps": 225, "total_steps": 331, "loss": 0.0448, "lr": 5.749865437599703e-06, "epoch": 0.6797583081570997, "percentage": 67.98, "elapsed_time": "0:20:35", "remaining_time": "0:09:41"}
{"current_steps": 230, "total_steps": 331, "loss": 0.0443, "lr": 5.277289252273175e-06, "epoch": 0.6948640483383686, "percentage": 69.49, "elapsed_time": "0:21:03", "remaining_time": "0:09:14"}
{"current_steps": 235, "total_steps": 331, "loss": 0.042, "lr": 4.817920455824045e-06, "epoch": 0.7099697885196374, "percentage": 71.0, "elapsed_time": "0:21:30", "remaining_time": "0:08:47"}
{"current_steps": 240, "total_steps": 331, "loss": 0.036, "lr": 4.373043705021899e-06, "epoch": 0.7250755287009063, "percentage": 72.51, "elapsed_time": "0:21:57", "remaining_time": "0:08:19"}
{"current_steps": 245, "total_steps": 331, "loss": 0.0392, "lr": 3.943903128623336e-06, "epoch": 0.7401812688821753, "percentage": 74.02, "elapsed_time": "0:22:25", "remaining_time": "0:07:52"}
{"current_steps": 250, "total_steps": 331, "loss": 0.036, "lr": 3.5316988480794255e-06, "epoch": 0.7552870090634441, "percentage": 75.53, "elapsed_time": "0:22:54", "remaining_time": "0:07:25"}
{"current_steps": 255, "total_steps": 331, "loss": 0.0373, "lr": 3.1375836213126653e-06, "epoch": 0.770392749244713, "percentage": 77.04, "elapsed_time": "0:23:21", "remaining_time": "0:06:57"}
{"current_steps": 260, "total_steps": 331, "loss": 0.0363, "lr": 2.7626596189492983e-06, "epoch": 0.7854984894259819, "percentage": 78.55, "elapsed_time": "0:23:48", "remaining_time": "0:06:29"}
{"current_steps": 265, "total_steps": 331, "loss": 0.0345, "lr": 2.4079753420225694e-06, "epoch": 0.8006042296072508, "percentage": 80.06, "elapsed_time": "0:24:15", "remaining_time": "0:06:02"}
{"current_steps": 270, "total_steps": 331, "loss": 0.0427, "lr": 2.0745226897666858e-06, "epoch": 0.8157099697885196, "percentage": 81.57, "elapsed_time": "0:24:42", "remaining_time": "0:05:34"}
{"current_steps": 275, "total_steps": 331, "loss": 0.0382, "lr": 1.7632341857016733e-06, "epoch": 0.8308157099697885, "percentage": 83.08, "elapsed_time": "0:25:10", "remaining_time": "0:05:07"}
{"current_steps": 280, "total_steps": 331, "loss": 0.0423, "lr": 1.4749803697665366e-06, "epoch": 0.8459214501510574, "percentage": 84.59, "elapsed_time": "0:25:38", "remaining_time": "0:04:40"}
{"current_steps": 285, "total_steps": 331, "loss": 0.0411, "lr": 1.2105673637938054e-06, "epoch": 0.8610271903323263, "percentage": 86.1, "elapsed_time": "0:26:05", "remaining_time": "0:04:12"}
{"current_steps": 290, "total_steps": 331, "loss": 0.0366, "lr": 9.707346171337895e-07, "epoch": 0.8761329305135952, "percentage": 87.61, "elapsed_time": "0:26:33", "remaining_time": "0:03:45"}
{"current_steps": 295, "total_steps": 331, "loss": 0.036, "lr": 7.561528387330797e-07, "epoch": 0.8912386706948641, "percentage": 89.12, "elapsed_time": "0:27:02", "remaining_time": "0:03:18"}
{"current_steps": 300, "total_steps": 331, "loss": 0.0324, "lr": 5.674221214503639e-07, "epoch": 0.9063444108761329, "percentage": 90.63, "elapsed_time": "0:27:31", "remaining_time": "0:02:50"}
{"current_steps": 305, "total_steps": 331, "loss": 0.036, "lr": 4.0507026385502747e-07, "epoch": 0.9214501510574018, "percentage": 92.15, "elapsed_time": "0:27:57", "remaining_time": "0:02:22"}
{"current_steps": 310, "total_steps": 331, "loss": 0.0389, "lr": 2.6955129420176193e-07, "epoch": 0.9365558912386707, "percentage": 93.66, "elapsed_time": "0:28:22", "remaining_time": "0:01:55"}
{"current_steps": 315, "total_steps": 331, "loss": 0.038, "lr": 1.612442007090076e-07, "epoch": 0.9516616314199395, "percentage": 95.17, "elapsed_time": "0:28:50", "remaining_time": "0:01:27"}
{"current_steps": 320, "total_steps": 331, "loss": 0.0294, "lr": 8.04518716920466e-08, "epoch": 0.9667673716012085, "percentage": 96.68, "elapsed_time": "0:29:16", "remaining_time": "0:01:00"}
{"current_steps": 325, "total_steps": 331, "loss": 0.0394, "lr": 2.7400248514776184e-08, "epoch": 0.9818731117824774, "percentage": 98.19, "elapsed_time": "0:29:44", "remaining_time": "0:00:32"}
{"current_steps": 330, "total_steps": 331, "loss": 0.0394, "lr": 2.237693728981416e-09, "epoch": 0.9969788519637462, "percentage": 99.7, "elapsed_time": "0:30:13", "remaining_time": "0:00:05"}
{"current_steps": 331, "total_steps": 331, "epoch": 1.0, "percentage": 100.0, "elapsed_time": "0:30:52", "remaining_time": "0:00:00"}

trainer_state.json Normal file

@@ -0,0 +1,505 @@
{
"best_global_step": null,
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 1.0,
"eval_steps": 500,
"global_step": 331,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 0.015105740181268883,
"grad_norm": 65.32524871826172,
"learning_rate": 2.3529411764705885e-06,
"loss": 1.7699,
"step": 5
},
{
"epoch": 0.030211480362537766,
"grad_norm": 1.1226208209991455,
"learning_rate": 5.294117647058824e-06,
"loss": 0.3128,
"step": 10
},
{
"epoch": 0.045317220543806644,
"grad_norm": 1.0650861263275146,
"learning_rate": 8.23529411764706e-06,
"loss": 0.0691,
"step": 15
},
{
"epoch": 0.06042296072507553,
"grad_norm": 0.5207144021987915,
"learning_rate": 1.1176470588235295e-05,
"loss": 0.0675,
"step": 20
},
{
"epoch": 0.0755287009063444,
"grad_norm": 0.518444299697876,
"learning_rate": 1.4117647058823532e-05,
"loss": 0.0669,
"step": 25
},
{
"epoch": 0.09063444108761329,
"grad_norm": 0.43817996978759766,
"learning_rate": 1.7058823529411767e-05,
"loss": 0.0658,
"step": 30
},
{
"epoch": 0.10574018126888217,
"grad_norm": 1.0586720705032349,
"learning_rate": 2e-05,
"loss": 0.0648,
"step": 35
},
{
"epoch": 0.12084592145015106,
"grad_norm": 1.3109872341156006,
"learning_rate": 1.9986017152454497e-05,
"loss": 0.0657,
"step": 40
},
{
"epoch": 0.13595166163141995,
"grad_norm": 0.48269444704055786,
"learning_rate": 1.9944107713823068e-05,
"loss": 0.0609,
"step": 45
},
{
"epoch": 0.1510574018126888,
"grad_norm": 0.9956803917884827,
"learning_rate": 1.9874388886763944e-05,
"loss": 0.0712,
"step": 50
},
{
"epoch": 0.1661631419939577,
"grad_norm": 0.8316070437431335,
"learning_rate": 1.9777055644823087e-05,
"loss": 0.0669,
"step": 55
},
{
"epoch": 0.18126888217522658,
"grad_norm": 0.09602731466293335,
"learning_rate": 1.9652380187177128e-05,
"loss": 0.0623,
"step": 60
},
{
"epoch": 0.19637462235649547,
"grad_norm": 0.23348145186901093,
"learning_rate": 1.9500711177409456e-05,
"loss": 0.0561,
"step": 65
},
{
"epoch": 0.21148036253776434,
"grad_norm": 0.1733577698469162,
"learning_rate": 1.932247276844826e-05,
"loss": 0.0614,
"step": 70
},
{
"epoch": 0.22658610271903323,
"grad_norm": 0.21881505846977234,
"learning_rate": 1.9118163416393392e-05,
"loss": 0.0567,
"step": 75
},
{
"epoch": 0.24169184290030213,
"grad_norm": 0.3463956117630005,
"learning_rate": 1.8888354486549238e-05,
"loss": 0.0598,
"step": 80
},
{
"epoch": 0.256797583081571,
"grad_norm": 0.11425940692424774,
"learning_rate": 1.863368865556191e-05,
"loss": 0.0564,
"step": 85
},
{
"epoch": 0.2719033232628399,
"grad_norm": 0.15162110328674316,
"learning_rate": 1.8354878114129368e-05,
"loss": 0.056,
"step": 90
},
{
"epoch": 0.28700906344410876,
"grad_norm": 0.1482110172510147,
"learning_rate": 1.8052702575310588e-05,
"loss": 0.0493,
"step": 95
},
{
"epoch": 0.3021148036253776,
"grad_norm": 0.36696767807006836,
"learning_rate": 1.772800709400383e-05,
"loss": 0.048,
"step": 100
},
{
"epoch": 0.31722054380664655,
"grad_norm": 0.25465500354766846,
"learning_rate": 1.7381699703691866e-05,
"loss": 0.0537,
"step": 105
},
{
"epoch": 0.3323262839879154,
"grad_norm": 0.38290655612945557,
"learning_rate": 1.7014748877063212e-05,
"loss": 0.0537,
"step": 110
},
{
"epoch": 0.3474320241691843,
"grad_norm": 0.3702133297920227,
"learning_rate": 1.6628180817610963e-05,
"loss": 0.051,
"step": 115
},
{
"epoch": 0.36253776435045315,
"grad_norm": 0.1437833160161972,
"learning_rate": 1.6223076589783368e-05,
"loss": 0.0473,
"step": 120
},
{
"epoch": 0.3776435045317221,
"grad_norm": 0.15008145570755005,
"learning_rate": 1.5800569095711983e-05,
"loss": 0.0509,
"step": 125
},
{
"epoch": 0.39274924471299094,
"grad_norm": 0.15623906254768372,
"learning_rate": 1.5361839906972095e-05,
"loss": 0.0577,
"step": 130
},
{
"epoch": 0.4078549848942598,
"grad_norm": 0.10994814336299896,
"learning_rate": 1.4908115960235683e-05,
"loss": 0.0513,
"step": 135
},
{
"epoch": 0.4229607250755287,
"grad_norm": 0.15456193685531616,
"learning_rate": 1.4440666126057743e-05,
"loss": 0.0572,
"step": 140
},
{
"epoch": 0.4380664652567976,
"grad_norm": 0.16252553462982178,
"learning_rate": 1.396079766039157e-05,
"loss": 0.0538,
"step": 145
},
{
"epoch": 0.45317220543806647,
"grad_norm": 0.4133623540401459,
"learning_rate": 1.3469852548756626e-05,
"loss": 0.0512,
"step": 150
},
{
"epoch": 0.46827794561933533,
"grad_norm": 0.1537550389766693,
"learning_rate": 1.296920375328275e-05,
"loss": 0.0494,
"step": 155
},
{
"epoch": 0.48338368580060426,
"grad_norm": 0.5091661214828491,
"learning_rate": 1.2460251373126136e-05,
"loss": 0.0513,
"step": 160
},
{
"epoch": 0.4984894259818731,
"grad_norm": 0.18158994615077972,
"learning_rate": 1.194441872899471e-05,
"loss": 0.046,
"step": 165
},
{
"epoch": 0.513595166163142,
"grad_norm": 0.39592429995536804,
"learning_rate": 1.1423148382732854e-05,
"loss": 0.05,
"step": 170
},
{
"epoch": 0.5287009063444109,
"grad_norm": 0.22556884586811066,
"learning_rate": 1.0897898103096917e-05,
"loss": 0.0485,
"step": 175
},
{
"epoch": 0.5438066465256798,
"grad_norm": 0.35634785890579224,
"learning_rate": 1.0370136789003582e-05,
"loss": 0.0458,
"step": 180
},
{
"epoch": 0.5589123867069486,
"grad_norm": 0.5002294778823853,
"learning_rate": 9.841340361651921e-06,
"loss": 0.0471,
"step": 185
},
{
"epoch": 0.5740181268882175,
"grad_norm": 0.1892608106136322,
"learning_rate": 9.312987637007191e-06,
"loss": 0.0397,
"step": 190
},
{
"epoch": 0.5891238670694864,
"grad_norm": 0.2997090816497803,
"learning_rate": 8.786556190189183e-06,
"loss": 0.0432,
"step": 195
},
{
"epoch": 0.6042296072507553,
"grad_norm": 0.10828253626823425,
"learning_rate": 8.263518223330698e-06,
"loss": 0.0456,
"step": 200
},
{
"epoch": 0.6193353474320241,
"grad_norm": 0.11140269786119461,
"learning_rate": 7.745336448461958e-06,
"loss": 0.0464,
"step": 205
},
{
"epoch": 0.6344410876132931,
"grad_norm": 0.21382257342338562,
"learning_rate": 7.233459996934731e-06,
"loss": 0.0436,
"step": 210
},
{
"epoch": 0.649546827794562,
"grad_norm": 0.22236904501914978,
"learning_rate": 6.729320366825785e-06,
"loss": 0.0395,
"step": 215
},
{
"epoch": 0.6646525679758308,
"grad_norm": 0.18689888715744019,
"learning_rate": 6.234327419653013e-06,
"loss": 0.0434,
"step": 220
},
{
"epoch": 0.6797583081570997,
"grad_norm": 0.11527472734451294,
"learning_rate": 5.749865437599703e-06,
"loss": 0.0448,
"step": 225
},
{
"epoch": 0.6948640483383686,
"grad_norm": 0.12977807223796844,
"learning_rate": 5.277289252273175e-06,
"loss": 0.0443,
"step": 230
},
{
"epoch": 0.7099697885196374,
"grad_norm": 0.28918567299842834,
"learning_rate": 4.817920455824045e-06,
"loss": 0.042,
"step": 235
},
{
"epoch": 0.7250755287009063,
"grad_norm": 0.1708402782678604,
"learning_rate": 4.373043705021899e-06,
"loss": 0.036,
"step": 240
},
{
"epoch": 0.7401812688821753,
"grad_norm": 0.2525939345359802,
"learning_rate": 3.943903128623336e-06,
"loss": 0.0392,
"step": 245
},
{
"epoch": 0.7552870090634441,
"grad_norm": 0.3220805525779724,
"learning_rate": 3.5316988480794255e-06,
"loss": 0.036,
"step": 250
},
{
"epoch": 0.770392749244713,
"grad_norm": 0.3768947422504425,
"learning_rate": 3.1375836213126653e-06,
"loss": 0.0373,
"step": 255
},
{
"epoch": 0.7854984894259819,
"grad_norm": 0.15737439692020416,
"learning_rate": 2.7626596189492983e-06,
"loss": 0.0363,
"step": 260
},
{
"epoch": 0.8006042296072508,
"grad_norm": 0.24124516546726227,
"learning_rate": 2.4079753420225694e-06,
"loss": 0.0345,
"step": 265
},
{
"epoch": 0.8157099697885196,
"grad_norm": 0.2828380763530731,
"learning_rate": 2.0745226897666858e-06,
"loss": 0.0427,
"step": 270
},
{
"epoch": 0.8308157099697885,
"grad_norm": 0.4477688670158386,
"learning_rate": 1.7632341857016733e-06,
"loss": 0.0382,
"step": 275
},
{
"epoch": 0.8459214501510574,
"grad_norm": 0.35590699315071106,
"learning_rate": 1.4749803697665366e-06,
"loss": 0.0423,
"step": 280
},
{
"epoch": 0.8610271903323263,
"grad_norm": 0.5138667821884155,
"learning_rate": 1.2105673637938054e-06,
"loss": 0.0411,
"step": 285
},
{
"epoch": 0.8761329305135952,
"grad_norm": 0.2003919929265976,
"learning_rate": 9.707346171337895e-07,
"loss": 0.0366,
"step": 290
},
{
"epoch": 0.8912386706948641,
"grad_norm": 0.4709911346435547,
"learning_rate": 7.561528387330797e-07,
"loss": 0.036,
"step": 295
},
{
"epoch": 0.9063444108761329,
"grad_norm": 0.14394928514957428,
"learning_rate": 5.674221214503639e-07,
"loss": 0.0324,
"step": 300
},
{
"epoch": 0.9214501510574018,
"grad_norm": 0.2846868932247162,
"learning_rate": 4.0507026385502747e-07,
"loss": 0.036,
"step": 305
},
{
"epoch": 0.9365558912386707,
"grad_norm": 0.2303285300731659,
"learning_rate": 2.6955129420176193e-07,
"loss": 0.0389,
"step": 310
},
{
"epoch": 0.9516616314199395,
"grad_norm": 0.27734264731407166,
"learning_rate": 1.612442007090076e-07,
"loss": 0.038,
"step": 315
},
{
"epoch": 0.9667673716012085,
"grad_norm": 0.17142164707183838,
"learning_rate": 8.04518716920466e-08,
"loss": 0.0294,
"step": 320
},
{
"epoch": 0.9818731117824774,
"grad_norm": 0.14783106744289398,
"learning_rate": 2.7400248514776184e-08,
"loss": 0.0394,
"step": 325
},
{
"epoch": 0.9969788519637462,
"grad_norm": 0.2399517148733139,
"learning_rate": 2.237693728981416e-09,
"loss": 0.0394,
"step": 330
},
{
"epoch": 1.0,
"step": 331,
"total_flos": 3.732036479342346e+17,
"train_loss": 0.07862781884086817,
"train_runtime": 1853.8902,
"train_samples_per_second": 11.415,
"train_steps_per_second": 0.179
}
],
"logging_steps": 5,
"max_steps": 331,
"num_input_tokens_seen": 0,
"num_train_epochs": 1,
"save_steps": 500,
"stateful_callbacks": {
"TrainerControl": {
"args": {
"should_epoch_stop": false,
"should_evaluate": false,
"should_log": false,
"should_save": true,
"should_training_stop": true
},
"attributes": {}
}
},
"total_flos": 3.732036479342346e+17,
"train_batch_size": 8,
"trial_name": null,
"trial_params": null
}

3
training_args.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:013314a92b337ec50e3fb59332f8eac9f74bc917688b37616e1055521c102506
size 7416

1
vocab.json Normal file

File diff suppressed because one or more lines are too long