Initialize project; model provided by the ModelHub XC community

Model: FlyPig23/Qwen3-4B_Paper_Impact_patent_SFT_1ep
Source: Original Platform
ModelHub XC
2026-04-29 17:05:37 +08:00
commit f2d96db006
20 changed files with 152874 additions and 0 deletions

36 .gitattributes vendored Normal file

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

63 README.md Normal file

@@ -0,0 +1,63 @@
---
library_name: transformers
license: other
base_model: Qwen/Qwen3-4B-Instruct-2507
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Qwen3-4B_Paper_Impact_patent_SFT_1ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen3-4B_Paper_Impact_patent_SFT_1ep
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) on the paper_impact_patents_train dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` mapping of these values follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
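
The sketch below is illustrative only: it maps the values above onto plain Hugging Face `TrainingArguments`. The output directory, dataset plumbing, and the 4-GPU launch are assumptions and are omitted; the original run was driven through llama-factory, as the tags indicate.

```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments.
# Not the original launch script; output_dir and bf16 are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Qwen3-4B_Paper_Impact_patent_SFT_1ep",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # 8 per device x 4 GPUs x 2 = 64 effective
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1.0,
    optim="adamw_torch",
    bf16=True,                       # assumed from the bfloat16 dtype in config.json
)
```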
### Training results
### Framework versions
- Transformers 4.57.1
- PyTorch 2.6.0+cu124
- Datasets 4.5.0
- Tokenizers 0.22.1
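
As a usage note not present in the original card, a minimal inference sketch follows. It assumes the repository id from this commit and a GPU with enough memory for the bf16 weights (about 8.8 GB on disk per the safetensors index); the prompt text is illustrative only.

```python
# Minimal inference sketch (requires transformers + accelerate for device_map).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "FlyPig23/Qwen3-4B_Paper_Impact_patent_SFT_1ep"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative prompt; the model was tuned on a paper-impact / patents dataset.
messages = [{"role": "user", "content": "Summarize the likely patent impact of this paper: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling defaults (do_sample, temperature 0.7, top_p 0.8, top_k 20) are
# picked up automatically from generation_config.json.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```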

28 added_tokens.json Normal file

@@ -0,0 +1,28 @@
{
"</think>": 151668,
"</tool_call>": 151658,
"</tool_response>": 151666,
"<think>": 151667,
"<tool_call>": 151657,
"<tool_response>": 151665,
"<|box_end|>": 151649,
"<|box_start|>": 151648,
"<|endoftext|>": 151643,
"<|file_sep|>": 151664,
"<|fim_middle|>": 151660,
"<|fim_pad|>": 151662,
"<|fim_prefix|>": 151659,
"<|fim_suffix|>": 151661,
"<|im_end|>": 151645,
"<|im_start|>": 151644,
"<|image_pad|>": 151655,
"<|object_ref_end|>": 151647,
"<|object_ref_start|>": 151646,
"<|quad_end|>": 151651,
"<|quad_start|>": 151650,
"<|repo_name|>": 151663,
"<|video_pad|>": 151656,
"<|vision_end|>": 151653,
"<|vision_pad|>": 151654,
"<|vision_start|>": 151652
}

12 all_results.json Normal file

@@ -0,0 +1,12 @@
{
"epoch": 1.0,
"eval_loss": 0.05890423804521561,
"eval_runtime": 257.2577,
"eval_samples_per_second": 53.522,
"eval_steps_per_second": 1.675,
"total_flos": 3.259472961077248e+17,
"train_loss": 0.08579332347738618,
"train_runtime": 1625.0751,
"train_samples_per_second": 11.273,
"train_steps_per_second": 0.177
}

61 chat_template.jinja Normal file

@@ -0,0 +1,61 @@
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0].role == 'system' %}
{{- messages[0].content + '\n\n' }}
{%- endif %}
{{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0].role == 'system' %}
{{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- for message in messages %}
{%- if message.content is string %}
{%- set content = message.content %}
{%- else %}
{%- set content = '' %}
{%- endif %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
{{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- if message.tool_calls %}
{%- for tool_call in message.tool_calls %}
{%- if (loop.first and content) or (not loop.first) %}
{{- '\n' }}
{%- endif %}
{%- if tool_call.function %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{%- if tool_call.arguments is string %}
{{- tool_call.arguments }}
{%- else %}
{{- tool_call.arguments | tojson }}
{%- endif %}
{{- '}\n</tool_call>' }}
{%- endfor %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
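
The template above serializes system prompts, tool schemas, tool calls, and tool responses into ChatML. A minimal rendering sketch, assuming a made-up tool schema, shows the `<tools>` block and generation prompt it produces:

```python
# Sketch: render the chat template with tools. The tool schema below is
# hypothetical and only serves to exercise the <tools> branch of the template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FlyPig23/Qwen3-4B_Paper_Impact_patent_SFT_1ep")

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_patents",  # hypothetical tool
        "description": "Look up patents citing a paper.",
        "parameters": {
            "type": "object",
            "properties": {"doi": {"type": "string"}},
            "required": ["doi"],
        },
    },
}]

messages = [
    {"role": "system", "content": "You assess the patent impact of papers."},
    {"role": "user", "content": "Which patents cite 10.0000/example.doi?"},
]

prompt = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, tokenize=False
)
print(prompt)  # <|im_start|>system ... <tools>...</tools> ... <|im_start|>assistant
```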

68 config.json Normal file

@@ -0,0 +1,68 @@
{
"architectures": [
"Qwen3ForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"dtype": "bfloat16",
"eos_token_id": 151645,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 2560,
"initializer_range": 0.02,
"intermediate_size": 9728,
"layer_types": [
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention"
],
"max_position_embeddings": 262144,
"max_window_layers": 36,
"model_type": "qwen3",
"num_attention_heads": 32,
"num_hidden_layers": 36,
"num_key_value_heads": 8,
"pad_token_id": 151643,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 5000000,
"sliding_window": null,
"tie_word_embeddings": true,
"transformers_version": "4.57.1",
"use_cache": false,
"use_sliding_window": false,
"vocab_size": 151936
}
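
As a sanity check on the configuration above, the 4,022,468,096 parameters reported later in model.safetensors.index.json can be recovered from these fields. The sketch below assumes the standard dense Qwen3 block layout (grouped-query attention with q/k RMSNorm, SwiGLU MLP, two RMSNorms per layer, tied embeddings).

```python
# Sketch: reproduce the total_parameters value from the config fields above.
hidden, inter, layers = 2560, 9728, 36
heads, kv_heads, head_dim = 32, 8, 128
vocab = 151936

attn = hidden * heads * head_dim            # q_proj
attn += 2 * hidden * kv_heads * head_dim    # k_proj + v_proj
attn += heads * head_dim * hidden           # o_proj
attn += 2 * head_dim                        # q_norm + k_norm
mlp = 3 * hidden * inter                    # gate_proj + up_proj + down_proj
norms = 2 * hidden                          # input / post-attention RMSNorm

per_layer = attn + mlp + norms
total = layers * per_layer + vocab * hidden + hidden  # + embeddings + final norm
print(total)  # 4022468096 (lm_head is tied to the embeddings)
```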

7 eval_results.json Normal file

@@ -0,0 +1,7 @@
{
"epoch": 1.0,
"eval_loss": 0.05890423804521561,
"eval_runtime": 257.2577,
"eval_samples_per_second": 53.522,
"eval_steps_per_second": 1.675
}

12 generation_config.json Normal file

@@ -0,0 +1,12 @@
{
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"temperature": 0.7,
"top_k": 20,
"top_p": 0.8,
"transformers_version": "4.57.1"
}

151388 merges.txt Normal file (diff suppressed because the file is too large)

3 model-00001-of-00002.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89b26355232ac9c48de36929ca3d5cab3705a14826da543c0917f6f1bc872de2
size 4967215360

3 model-00002-of-00002.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2875b9a5509391b15259f1f9d82fce1395c39188170c9e51b37d22ef6bfbacdb
size 3855679144

407 model.safetensors.index.json Normal file

@@ -0,0 +1,407 @@
{
"metadata": {
"total_parameters": 4022468096,
"total_size": 8822848512
},
"weight_map": {
"lm_head.weight": "model-00002-of-00002.safetensors",
"model.embed_tokens.weight": "model-00001-of-00002.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.20.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.21.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.21.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.22.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.22.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.22.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.23.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.23.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.23.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.24.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.24.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.24.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.25.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.25.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.25.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.26.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.30.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.norm.weight": "model-00002-of-00002.safetensors"
}
}
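
The weight map pins every tensor to one of the two shards, so a single tensor can be inspected without loading the whole checkpoint. A short sketch, assuming the files have been pulled locally via git-lfs:

```python
# Sketch: use the index to open only the shard that holds a given tensor.
import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.layers.20.mlp.down_proj.weight"
shard = index["weight_map"][name]          # -> "model-00002-of-00002.safetensors"
with safe_open(shard, framework="pt") as f:
    tensor = f.get_tensor(name)
print(name, tuple(tensor.shape), tensor.dtype)
```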

31 special_tokens_map.json Normal file

@@ -0,0 +1,31 @@
{
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>",
"<|object_ref_start|>",
"<|object_ref_end|>",
"<|box_start|>",
"<|box_end|>",
"<|quad_start|>",
"<|quad_end|>",
"<|vision_start|>",
"<|vision_end|>",
"<|vision_pad|>",
"<|image_pad|>",
"<|video_pad|>"
],
"eos_token": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

3 tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
size 11422654

240 tokenizer_config.json Normal file

@@ -0,0 +1,240 @@
{
"add_bos_token": false,
"add_prefix_space": false,
"added_tokens_decoder": {
"151643": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151644": {
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151645": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151646": {
"content": "<|object_ref_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151647": {
"content": "<|object_ref_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151648": {
"content": "<|box_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151649": {
"content": "<|box_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151650": {
"content": "<|quad_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151651": {
"content": "<|quad_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151652": {
"content": "<|vision_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151653": {
"content": "<|vision_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151654": {
"content": "<|vision_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151655": {
"content": "<|image_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151656": {
"content": "<|video_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151657": {
"content": "<tool_call>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151658": {
"content": "</tool_call>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151659": {
"content": "<|fim_prefix|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151660": {
"content": "<|fim_middle|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151661": {
"content": "<|fim_suffix|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151662": {
"content": "<|fim_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151663": {
"content": "<|repo_name|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151664": {
"content": "<|file_sep|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151665": {
"content": "<tool_response>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151666": {
"content": "</tool_response>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151667": {
"content": "<think>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151668": {
"content": "</think>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
}
},
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>",
"<|object_ref_start|>",
"<|object_ref_end|>",
"<|box_start|>",
"<|box_end|>",
"<|quad_start|>",
"<|quad_end|>",
"<|vision_start|>",
"<|vision_end|>",
"<|vision_pad|>",
"<|image_pad|>",
"<|video_pad|>"
],
"bos_token": null,
"clean_up_tokenization_spaces": false,
"eos_token": "<|im_end|>",
"errors": "replace",
"extra_special_tokens": {},
"model_max_length": 1010000,
"pad_token": "<|endoftext|>",
"padding_side": "right",
"split_special_tokens": false,
"tokenizer_class": "Qwen2Tokenizer",
"unk_token": null
}
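
A quick consistency check, not part of the original files: the ids declared in added_tokens.json and the eos/pad choices above should be reproduced by the loaded tokenizer. The repository id is taken from this commit.

```python
# Sketch: verify a few added-token ids against the loaded tokenizer
# (ids copied from added_tokens.json / tokenizer_config.json above).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("FlyPig23/Qwen3-4B_Paper_Impact_patent_SFT_1ep")

assert tok.convert_tokens_to_ids("<|im_end|>") == 151645     # eos_token
assert tok.convert_tokens_to_ids("<|endoftext|>") == 151643  # pad_token
assert tok.convert_tokens_to_ids("</think>") == 151668
print(tok.eos_token, tok.pad_token, tok.model_max_length)
```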

8 train_results.json Normal file

@@ -0,0 +1,8 @@
{
"epoch": 1.0,
"total_flos": 3.259472961077248e+17,
"train_loss": 0.08579332347738618,
"train_runtime": 1625.0751,
"train_samples_per_second": 11.273,
"train_steps_per_second": 0.177
}

58 trainer_log.jsonl Normal file

@@ -0,0 +1,58 @@
{"current_steps": 5, "total_steps": 287, "loss": 1.2108, "lr": 2.7586206896551725e-06, "epoch": 0.017452006980802792, "percentage": 1.74, "elapsed_time": "0:00:27", "remaining_time": "0:26:10"}
{"current_steps": 10, "total_steps": 287, "loss": 0.2604, "lr": 6.206896551724138e-06, "epoch": 0.034904013961605584, "percentage": 3.48, "elapsed_time": "0:00:54", "remaining_time": "0:25:23"}
{"current_steps": 15, "total_steps": 287, "loss": 0.0645, "lr": 9.655172413793105e-06, "epoch": 0.05235602094240838, "percentage": 5.23, "elapsed_time": "0:01:20", "remaining_time": "0:24:20"}
{"current_steps": 20, "total_steps": 287, "loss": 0.0643, "lr": 1.310344827586207e-05, "epoch": 0.06980802792321117, "percentage": 6.97, "elapsed_time": "0:01:47", "remaining_time": "0:23:50"}
{"current_steps": 25, "total_steps": 287, "loss": 0.0682, "lr": 1.6551724137931037e-05, "epoch": 0.08726003490401396, "percentage": 8.71, "elapsed_time": "0:02:15", "remaining_time": "0:23:37"}
{"current_steps": 30, "total_steps": 287, "loss": 0.0648, "lr": 2e-05, "epoch": 0.10471204188481675, "percentage": 10.45, "elapsed_time": "0:02:43", "remaining_time": "0:23:19"}
{"current_steps": 35, "total_steps": 287, "loss": 0.0638, "lr": 1.998147167378645e-05, "epoch": 0.12216404886561955, "percentage": 12.2, "elapsed_time": "0:03:10", "remaining_time": "0:22:49"}
{"current_steps": 40, "total_steps": 287, "loss": 0.0631, "lr": 1.9925955354920265e-05, "epoch": 0.13961605584642234, "percentage": 13.94, "elapsed_time": "0:03:39", "remaining_time": "0:22:32"}
{"current_steps": 45, "total_steps": 287, "loss": 0.0637, "lr": 1.983365676829466e-05, "epoch": 0.15706806282722513, "percentage": 15.68, "elapsed_time": "0:04:05", "remaining_time": "0:22:01"}
{"current_steps": 50, "total_steps": 287, "loss": 0.0633, "lr": 1.9704917941574053e-05, "epoch": 0.17452006980802792, "percentage": 17.42, "elapsed_time": "0:04:34", "remaining_time": "0:21:42"}
{"current_steps": 55, "total_steps": 287, "loss": 0.0634, "lr": 1.954021593775401e-05, "epoch": 0.19197207678883071, "percentage": 19.16, "elapsed_time": "0:05:03", "remaining_time": "0:21:19"}
{"current_steps": 60, "total_steps": 287, "loss": 0.0637, "lr": 1.9340161087325483e-05, "epoch": 0.2094240837696335, "percentage": 20.91, "elapsed_time": "0:05:30", "remaining_time": "0:20:49"}
{"current_steps": 65, "total_steps": 287, "loss": 0.0634, "lr": 1.9105494726594344e-05, "epoch": 0.2268760907504363, "percentage": 22.65, "elapsed_time": "0:05:57", "remaining_time": "0:20:20"}
{"current_steps": 70, "total_steps": 287, "loss": 0.0643, "lr": 1.8837086450537195e-05, "epoch": 0.2443280977312391, "percentage": 24.39, "elapsed_time": "0:06:25", "remaining_time": "0:19:53"}
{"current_steps": 75, "total_steps": 287, "loss": 0.0633, "lr": 1.8535930890373467e-05, "epoch": 0.2617801047120419, "percentage": 26.13, "elapsed_time": "0:06:53", "remaining_time": "0:19:29"}
{"current_steps": 80, "total_steps": 287, "loss": 0.0633, "lr": 1.820314402779511e-05, "epoch": 0.2792321116928447, "percentage": 27.87, "elapsed_time": "0:07:20", "remaining_time": "0:19:00"}
{"current_steps": 85, "total_steps": 287, "loss": 0.0638, "lr": 1.7839959059512016e-05, "epoch": 0.29668411867364747, "percentage": 29.62, "elapsed_time": "0:07:48", "remaining_time": "0:18:34"}
{"current_steps": 90, "total_steps": 287, "loss": 0.0633, "lr": 1.744772182743782e-05, "epoch": 0.31413612565445026, "percentage": 31.36, "elapsed_time": "0:08:15", "remaining_time": "0:18:03"}
{"current_steps": 95, "total_steps": 287, "loss": 0.0629, "lr": 1.7027885831450318e-05, "epoch": 0.33158813263525305, "percentage": 33.1, "elapsed_time": "0:08:42", "remaining_time": "0:17:36"}
{"current_steps": 100, "total_steps": 287, "loss": 0.0632, "lr": 1.658200684320748e-05, "epoch": 0.34904013961605584, "percentage": 34.84, "elapsed_time": "0:09:09", "remaining_time": "0:17:08"}
{"current_steps": 105, "total_steps": 287, "loss": 0.0633, "lr": 1.6111737140978495e-05, "epoch": 0.36649214659685864, "percentage": 36.59, "elapsed_time": "0:09:35", "remaining_time": "0:16:38"}
{"current_steps": 110, "total_steps": 287, "loss": 0.0638, "lr": 1.5618819386853607e-05, "epoch": 0.38394415357766143, "percentage": 38.33, "elapsed_time": "0:10:04", "remaining_time": "0:16:12"}
{"current_steps": 115, "total_steps": 287, "loss": 0.0637, "lr": 1.5105080169021792e-05, "epoch": 0.4013961605584642, "percentage": 40.07, "elapsed_time": "0:10:33", "remaining_time": "0:15:47"}
{"current_steps": 120, "total_steps": 287, "loss": 0.064, "lr": 1.4572423233046386e-05, "epoch": 0.418848167539267, "percentage": 41.81, "elapsed_time": "0:11:01", "remaining_time": "0:15:20"}
{"current_steps": 125, "total_steps": 287, "loss": 0.0637, "lr": 1.4022822427221325e-05, "epoch": 0.4363001745200698, "percentage": 43.55, "elapsed_time": "0:11:29", "remaining_time": "0:14:54"}
{"current_steps": 130, "total_steps": 287, "loss": 0.0631, "lr": 1.3458314388150115e-05, "epoch": 0.4537521815008726, "percentage": 45.3, "elapsed_time": "0:11:58", "remaining_time": "0:14:27"}
{"current_steps": 135, "total_steps": 287, "loss": 0.0627, "lr": 1.2880990993652379e-05, "epoch": 0.4712041884816754, "percentage": 47.04, "elapsed_time": "0:12:26", "remaining_time": "0:14:00"}
{"current_steps": 140, "total_steps": 287, "loss": 0.0637, "lr": 1.2292991610964902e-05, "epoch": 0.4886561954624782, "percentage": 48.78, "elapsed_time": "0:12:54", "remaining_time": "0:13:33"}
{"current_steps": 145, "total_steps": 287, "loss": 0.0639, "lr": 1.1696495168962848e-05, "epoch": 0.506108202443281, "percentage": 50.52, "elapsed_time": "0:13:21", "remaining_time": "0:13:05"}
{"current_steps": 150, "total_steps": 287, "loss": 0.0644, "lr": 1.1093712083778748e-05, "epoch": 0.5235602094240838, "percentage": 52.26, "elapsed_time": "0:13:50", "remaining_time": "0:12:38"}
{"current_steps": 155, "total_steps": 287, "loss": 0.0655, "lr": 1.0486876067740253e-05, "epoch": 0.5410122164048866, "percentage": 54.01, "elapsed_time": "0:14:17", "remaining_time": "0:12:10"}
{"current_steps": 160, "total_steps": 287, "loss": 0.0639, "lr": 9.878235851980027e-06, "epoch": 0.5584642233856894, "percentage": 55.75, "elapsed_time": "0:14:44", "remaining_time": "0:11:42"}
{"current_steps": 165, "total_steps": 287, "loss": 0.0636, "lr": 9.270046853390924e-06, "epoch": 0.5759162303664922, "percentage": 57.49, "elapsed_time": "0:15:13", "remaining_time": "0:11:15"}
{"current_steps": 170, "total_steps": 287, "loss": 0.0644, "lr": 8.664562816806022e-06, "epoch": 0.5933682373472949, "percentage": 59.23, "elapsed_time": "0:15:40", "remaining_time": "0:10:47"}
{"current_steps": 175, "total_steps": 287, "loss": 0.0629, "lr": 8.064027463374702e-06, "epoch": 0.6108202443280978, "percentage": 60.98, "elapsed_time": "0:16:08", "remaining_time": "0:10:19"}
{"current_steps": 180, "total_steps": 287, "loss": 0.0645, "lr": 7.470666176083193e-06, "epoch": 0.6282722513089005, "percentage": 62.72, "elapsed_time": "0:16:35", "remaining_time": "0:09:51"}
{"current_steps": 185, "total_steps": 287, "loss": 0.0623, "lr": 6.886677753230184e-06, "epoch": 0.6457242582897034, "percentage": 64.46, "elapsed_time": "0:17:05", "remaining_time": "0:09:25"}
{"current_steps": 190, "total_steps": 287, "loss": 0.0624, "lr": 6.314226260416383e-06, "epoch": 0.6631762652705061, "percentage": 66.2, "elapsed_time": "0:17:32", "remaining_time": "0:08:57"}
{"current_steps": 195, "total_steps": 287, "loss": 0.0621, "lr": 5.755433011241851e-06, "epoch": 0.680628272251309, "percentage": 67.94, "elapsed_time": "0:17:59", "remaining_time": "0:08:29"}
{"current_steps": 200, "total_steps": 287, "loss": 0.0638, "lr": 5.212368706427913e-06, "epoch": 0.6980802792321117, "percentage": 69.69, "elapsed_time": "0:18:26", "remaining_time": "0:08:01"}
{"current_steps": 205, "total_steps": 287, "loss": 0.0614, "lr": 4.687045760493468e-06, "epoch": 0.7155322862129145, "percentage": 71.43, "elapsed_time": "0:18:53", "remaining_time": "0:07:33"}
{"current_steps": 210, "total_steps": 287, "loss": 0.0623, "lr": 4.181410844420473e-06, "epoch": 0.7329842931937173, "percentage": 73.17, "elapsed_time": "0:19:21", "remaining_time": "0:07:05"}
{"current_steps": 215, "total_steps": 287, "loss": 0.0604, "lr": 3.6973376719429134e-06, "epoch": 0.7504363001745201, "percentage": 74.91, "elapsed_time": "0:19:48", "remaining_time": "0:06:38"}
{"current_steps": 220, "total_steps": 287, "loss": 0.0601, "lr": 3.236620056190972e-06, "epoch": 0.7678883071553229, "percentage": 76.66, "elapsed_time": "0:20:15", "remaining_time": "0:06:10"}
{"current_steps": 225, "total_steps": 287, "loss": 0.0613, "lr": 2.8009652624200436e-06, "epoch": 0.7853403141361257, "percentage": 78.4, "elapsed_time": "0:20:43", "remaining_time": "0:05:42"}
{"current_steps": 230, "total_steps": 287, "loss": 0.0592, "lr": 2.3919876814572197e-06, "epoch": 0.8027923211169284, "percentage": 80.14, "elapsed_time": "0:21:11", "remaining_time": "0:05:15"}
{"current_steps": 235, "total_steps": 287, "loss": 0.0595, "lr": 2.0112028473093294e-06, "epoch": 0.8202443280977313, "percentage": 81.88, "elapsed_time": "0:21:39", "remaining_time": "0:04:47"}
{"current_steps": 240, "total_steps": 287, "loss": 0.0567, "lr": 1.660021821101222e-06, "epoch": 0.837696335078534, "percentage": 83.62, "elapsed_time": "0:22:08", "remaining_time": "0:04:20"}
{"current_steps": 245, "total_steps": 287, "loss": 0.0571, "lr": 1.339745962155613e-06, "epoch": 0.8551483420593369, "percentage": 85.37, "elapsed_time": "0:22:37", "remaining_time": "0:03:52"}
{"current_steps": 250, "total_steps": 287, "loss": 0.0607, "lr": 1.051562105591082e-06, "epoch": 0.8726003490401396, "percentage": 87.11, "elapsed_time": "0:23:06", "remaining_time": "0:03:25"}
{"current_steps": 255, "total_steps": 287, "loss": 0.0582, "lr": 7.965381643084069e-07, "epoch": 0.8900523560209425, "percentage": 88.85, "elapsed_time": "0:23:34", "remaining_time": "0:02:57"}
{"current_steps": 260, "total_steps": 287, "loss": 0.0621, "lr": 5.756191716628556e-07, "epoch": 0.9075043630017452, "percentage": 90.59, "elapsed_time": "0:24:01", "remaining_time": "0:02:29"}
{"current_steps": 265, "total_steps": 287, "loss": 0.0579, "lr": 3.8962377948693395e-07, "epoch": 0.924956369982548, "percentage": 92.33, "elapsed_time": "0:24:29", "remaining_time": "0:02:01"}
{"current_steps": 270, "total_steps": 287, "loss": 0.058, "lr": 2.392412244407294e-07, "epoch": 0.9424083769633508, "percentage": 94.08, "elapsed_time": "0:24:57", "remaining_time": "0:01:34"}
{"current_steps": 275, "total_steps": 287, "loss": 0.0592, "lr": 1.2502877393158587e-07, "epoch": 0.9598603839441536, "percentage": 95.82, "elapsed_time": "0:25:24", "remaining_time": "0:01:06"}
{"current_steps": 280, "total_steps": 287, "loss": 0.0565, "lr": 4.740966106764222e-08, "epoch": 0.9773123909249564, "percentage": 97.56, "elapsed_time": "0:25:54", "remaining_time": "0:00:38"}
{"current_steps": 285, "total_steps": 287, "loss": 0.0593, "lr": 6.671516297606095e-09, "epoch": 0.9947643979057592, "percentage": 99.3, "elapsed_time": "0:26:22", "remaining_time": "0:00:11"}
{"current_steps": 287, "total_steps": 287, "epoch": 1.0, "percentage": 100.0, "elapsed_time": "0:27:03", "remaining_time": "0:00:00"}

442 trainer_state.json Normal file

@@ -0,0 +1,442 @@
{
"best_global_step": null,
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 1.0,
"eval_steps": 500,
"global_step": 287,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 0.017452006980802792,
"grad_norm": 18.758426666259766,
"learning_rate": 2.7586206896551725e-06,
"loss": 1.2108,
"step": 5
},
{
"epoch": 0.034904013961605584,
"grad_norm": 0.6289834976196289,
"learning_rate": 6.206896551724138e-06,
"loss": 0.2604,
"step": 10
},
{
"epoch": 0.05235602094240838,
"grad_norm": 0.7779368758201599,
"learning_rate": 9.655172413793105e-06,
"loss": 0.0645,
"step": 15
},
{
"epoch": 0.06980802792321117,
"grad_norm": 0.7711329460144043,
"learning_rate": 1.310344827586207e-05,
"loss": 0.0643,
"step": 20
},
{
"epoch": 0.08726003490401396,
"grad_norm": 1.1758290529251099,
"learning_rate": 1.6551724137931037e-05,
"loss": 0.0682,
"step": 25
},
{
"epoch": 0.10471204188481675,
"grad_norm": 0.20803742110729218,
"learning_rate": 2e-05,
"loss": 0.0648,
"step": 30
},
{
"epoch": 0.12216404886561955,
"grad_norm": 0.3231872320175171,
"learning_rate": 1.998147167378645e-05,
"loss": 0.0638,
"step": 35
},
{
"epoch": 0.13961605584642234,
"grad_norm": 0.20903366804122925,
"learning_rate": 1.9925955354920265e-05,
"loss": 0.0631,
"step": 40
},
{
"epoch": 0.15706806282722513,
"grad_norm": 0.1551412045955658,
"learning_rate": 1.983365676829466e-05,
"loss": 0.0637,
"step": 45
},
{
"epoch": 0.17452006980802792,
"grad_norm": 0.10300405323505402,
"learning_rate": 1.9704917941574053e-05,
"loss": 0.0633,
"step": 50
},
{
"epoch": 0.19197207678883071,
"grad_norm": 0.05463937669992447,
"learning_rate": 1.954021593775401e-05,
"loss": 0.0634,
"step": 55
},
{
"epoch": 0.2094240837696335,
"grad_norm": 0.05097668617963791,
"learning_rate": 1.9340161087325483e-05,
"loss": 0.0637,
"step": 60
},
{
"epoch": 0.2268760907504363,
"grad_norm": 0.025731965899467468,
"learning_rate": 1.9105494726594344e-05,
"loss": 0.0634,
"step": 65
},
{
"epoch": 0.2443280977312391,
"grad_norm": 0.5567801594734192,
"learning_rate": 1.8837086450537195e-05,
"loss": 0.0643,
"step": 70
},
{
"epoch": 0.2617801047120419,
"grad_norm": 0.07695559412240982,
"learning_rate": 1.8535930890373467e-05,
"loss": 0.0633,
"step": 75
},
{
"epoch": 0.2792321116928447,
"grad_norm": 0.13339029252529144,
"learning_rate": 1.820314402779511e-05,
"loss": 0.0633,
"step": 80
},
{
"epoch": 0.29668411867364747,
"grad_norm": 0.10536781698465347,
"learning_rate": 1.7839959059512016e-05,
"loss": 0.0638,
"step": 85
},
{
"epoch": 0.31413612565445026,
"grad_norm": 0.12401806563138962,
"learning_rate": 1.744772182743782e-05,
"loss": 0.0633,
"step": 90
},
{
"epoch": 0.33158813263525305,
"grad_norm": 0.1011064425110817,
"learning_rate": 1.7027885831450318e-05,
"loss": 0.0629,
"step": 95
},
{
"epoch": 0.34904013961605584,
"grad_norm": 0.13563387095928192,
"learning_rate": 1.658200684320748e-05,
"loss": 0.0632,
"step": 100
},
{
"epoch": 0.36649214659685864,
"grad_norm": 0.26744481921195984,
"learning_rate": 1.6111737140978495e-05,
"loss": 0.0633,
"step": 105
},
{
"epoch": 0.38394415357766143,
"grad_norm": 0.6496581435203552,
"learning_rate": 1.5618819386853607e-05,
"loss": 0.0638,
"step": 110
},
{
"epoch": 0.4013961605584642,
"grad_norm": 0.2886026203632355,
"learning_rate": 1.5105080169021792e-05,
"loss": 0.0637,
"step": 115
},
{
"epoch": 0.418848167539267,
"grad_norm": 0.07766488194465637,
"learning_rate": 1.4572423233046386e-05,
"loss": 0.064,
"step": 120
},
{
"epoch": 0.4363001745200698,
"grad_norm": 0.152951180934906,
"learning_rate": 1.4022822427221325e-05,
"loss": 0.0637,
"step": 125
},
{
"epoch": 0.4537521815008726,
"grad_norm": 0.4545815587043762,
"learning_rate": 1.3458314388150115e-05,
"loss": 0.0631,
"step": 130
},
{
"epoch": 0.4712041884816754,
"grad_norm": 0.13478335738182068,
"learning_rate": 1.2880990993652379e-05,
"loss": 0.0627,
"step": 135
},
{
"epoch": 0.4886561954624782,
"grad_norm": 0.45286211371421814,
"learning_rate": 1.2292991610964902e-05,
"loss": 0.0637,
"step": 140
},
{
"epoch": 0.506108202443281,
"grad_norm": 0.44334903359413147,
"learning_rate": 1.1696495168962848e-05,
"loss": 0.0639,
"step": 145
},
{
"epoch": 0.5235602094240838,
"grad_norm": 0.6045412421226501,
"learning_rate": 1.1093712083778748e-05,
"loss": 0.0644,
"step": 150
},
{
"epoch": 0.5410122164048866,
"grad_norm": 0.5224294066429138,
"learning_rate": 1.0486876067740253e-05,
"loss": 0.0655,
"step": 155
},
{
"epoch": 0.5584642233856894,
"grad_norm": 0.37020203471183777,
"learning_rate": 9.878235851980027e-06,
"loss": 0.0639,
"step": 160
},
{
"epoch": 0.5759162303664922,
"grad_norm": 0.01445784978568554,
"learning_rate": 9.270046853390924e-06,
"loss": 0.0636,
"step": 165
},
{
"epoch": 0.5933682373472949,
"grad_norm": 0.5739990472793579,
"learning_rate": 8.664562816806022e-06,
"loss": 0.0644,
"step": 170
},
{
"epoch": 0.6108202443280978,
"grad_norm": 0.21191075444221497,
"learning_rate": 8.064027463374702e-06,
"loss": 0.0629,
"step": 175
},
{
"epoch": 0.6282722513089005,
"grad_norm": 0.3500339686870575,
"learning_rate": 7.470666176083193e-06,
"loss": 0.0645,
"step": 180
},
{
"epoch": 0.6457242582897034,
"grad_norm": 0.31313106417655945,
"learning_rate": 6.886677753230184e-06,
"loss": 0.0623,
"step": 185
},
{
"epoch": 0.6631762652705061,
"grad_norm": 0.3150012791156769,
"learning_rate": 6.314226260416383e-06,
"loss": 0.0624,
"step": 190
},
{
"epoch": 0.680628272251309,
"grad_norm": 0.19164550304412842,
"learning_rate": 5.755433011241851e-06,
"loss": 0.0621,
"step": 195
},
{
"epoch": 0.6980802792321117,
"grad_norm": 0.448416143655777,
"learning_rate": 5.212368706427913e-06,
"loss": 0.0638,
"step": 200
},
{
"epoch": 0.7155322862129145,
"grad_norm": 0.0443989560008049,
"learning_rate": 4.687045760493468e-06,
"loss": 0.0614,
"step": 205
},
{
"epoch": 0.7329842931937173,
"grad_norm": 0.32341665029525757,
"learning_rate": 4.181410844420473e-06,
"loss": 0.0623,
"step": 210
},
{
"epoch": 0.7504363001745201,
"grad_norm": 0.2636391222476959,
"learning_rate": 3.6973376719429134e-06,
"loss": 0.0604,
"step": 215
},
{
"epoch": 0.7678883071553229,
"grad_norm": 0.27186042070388794,
"learning_rate": 3.236620056190972e-06,
"loss": 0.0601,
"step": 220
},
{
"epoch": 0.7853403141361257,
"grad_norm": 0.5704047679901123,
"learning_rate": 2.8009652624200436e-06,
"loss": 0.0613,
"step": 225
},
{
"epoch": 0.8027923211169284,
"grad_norm": 0.4298834204673767,
"learning_rate": 2.3919876814572197e-06,
"loss": 0.0592,
"step": 230
},
{
"epoch": 0.8202443280977313,
"grad_norm": 0.08873734623193741,
"learning_rate": 2.0112028473093294e-06,
"loss": 0.0595,
"step": 235
},
{
"epoch": 0.837696335078534,
"grad_norm": 0.39123955368995667,
"learning_rate": 1.660021821101222e-06,
"loss": 0.0567,
"step": 240
},
{
"epoch": 0.8551483420593369,
"grad_norm": 0.16050003468990326,
"learning_rate": 1.339745962155613e-06,
"loss": 0.0571,
"step": 245
},
{
"epoch": 0.8726003490401396,
"grad_norm": 0.12748093903064728,
"learning_rate": 1.051562105591082e-06,
"loss": 0.0607,
"step": 250
},
{
"epoch": 0.8900523560209425,
"grad_norm": 0.1128767654299736,
"learning_rate": 7.965381643084069e-07,
"loss": 0.0582,
"step": 255
},
{
"epoch": 0.9075043630017452,
"grad_norm": 0.5375702381134033,
"learning_rate": 5.756191716628556e-07,
"loss": 0.0621,
"step": 260
},
{
"epoch": 0.924956369982548,
"grad_norm": 0.272128164768219,
"learning_rate": 3.8962377948693395e-07,
"loss": 0.0579,
"step": 265
},
{
"epoch": 0.9424083769633508,
"grad_norm": 0.12358862906694412,
"learning_rate": 2.392412244407294e-07,
"loss": 0.058,
"step": 270
},
{
"epoch": 0.9598603839441536,
"grad_norm": 0.13405446708202362,
"learning_rate": 1.2502877393158587e-07,
"loss": 0.0592,
"step": 275
},
{
"epoch": 0.9773123909249564,
"grad_norm": 0.12268463522195816,
"learning_rate": 4.740966106764222e-08,
"loss": 0.0565,
"step": 280
},
{
"epoch": 0.9947643979057592,
"grad_norm": 0.6271886825561523,
"learning_rate": 6.671516297606095e-09,
"loss": 0.0593,
"step": 285
},
{
"epoch": 1.0,
"step": 287,
"total_flos": 3.259472961077248e+17,
"train_loss": 0.08579332347738618,
"train_runtime": 1625.0751,
"train_samples_per_second": 11.273,
"train_steps_per_second": 0.177
}
],
"logging_steps": 5,
"max_steps": 287,
"num_input_tokens_seen": 0,
"num_train_epochs": 1,
"save_steps": 500,
"stateful_callbacks": {
"TrainerControl": {
"args": {
"should_epoch_stop": false,
"should_evaluate": false,
"should_log": false,
"should_save": true,
"should_training_stop": true
},
"attributes": {}
}
},
"total_flos": 3.259472961077248e+17,
"train_batch_size": 8,
"trial_name": null,
"trial_params": null
}

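`trainer_state.json` stores the same history under `log_history`, plus Trainer bookkeeping (`total_flos`, `stateful_callbacks`, batch size, and so on). A minimal sketch for pulling the loss curve back out of it:

```python
import json

# Minimal sketch: extract the logged loss curve from a Hugging Face Trainer state file.
with open("trainer_state.json", encoding="utf-8") as f:
    state = json.load(f)

# The final summary entry uses "train_loss" instead of "loss", so it is filtered out here.
steps = [e["step"] for e in state["log_history"] if "loss" in e]
losses = [e["loss"] for e in state["log_history"] if "loss" in e]

print(f"{len(steps)} logged points, final loss {losses[-1]:.4f} at step {steps[-1]}")
```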
3
training_args.bin Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e7f3c056008b52a6eee2e813100deffa2eb7b3d1d13a3578e694da0745e90918
size 7416

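`training_args.bin` is committed here as a Git LFS pointer; the underlying blob is the pickled `TrainingArguments` object that the Trainer saves with `torch.save`. A minimal sketch for inspecting it, assuming you have fetched the actual LFS object and have `transformers` installed so the object can be unpickled:

```python
import torch

# Minimal sketch: inspect the saved TrainingArguments. Requires the real LFS blob,
# not the pointer file above. weights_only=False is needed because this is a pickled
# Python object rather than a plain tensor checkpoint (PyTorch >= 2.6 defaults
# weights_only to True), and transformers must be importable for unpickling to work.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)
```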
1
vocab.json Normal file

File diff suppressed because one or more lines are too long