Initialize project; model provided by the ModelHub XC community

Model: laion/allenai-sera-unified-3160__Qwen3-8B
Source: Original Platform
Author: ModelHub XC
Date: 2026-05-06 06:22:47 +08:00
Commit: afe1b83817
23 changed files with 153020 additions and 0 deletions

.gitattributes vendored Normal file (+36 lines)

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

README.md Normal file (+61 lines)

@@ -0,0 +1,61 @@
---
library_name: transformers
license: other
base_model: Qwen/Qwen3-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sera-3160__Qwen3-8B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sera-3160__Qwen3-8B
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the /e/data1/datasets/playground/ot/hf_hub/datasets--laion--allenai-sera-unified-3160/snapshots/099497cdf98a9c3da57ca8873d9d734da4be1361_thinking_preprocessed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: adamw_torch_fused (betas=(0.9, 0.98), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.57.6
- Pytorch 2.9.1+cu130
- Datasets 4.7.0
- Tokenizers 0.22.2
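
Since the usage sections of this card are placeholders, here is a minimal loading sketch; it assumes the repo id from the commit header and the standard transformers API, and is not part of the original card.

```python
# Minimal sketch: load the fine-tuned checkpoint with transformers.
# Repo id assumed from the commit header; adjust if the repo moves.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "laion/allenai-sera-unified-3160__Qwen3-8B"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches "dtype": "bfloat16" in config.json
    device_map="auto",
)
```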

added_tokens.json Normal file (+28 lines)

@@ -0,0 +1,28 @@
{
"</think>": 151668,
"</tool_call>": 151658,
"</tool_response>": 151666,
"<think>": 151667,
"<tool_call>": 151657,
"<tool_response>": 151665,
"<|box_end|>": 151649,
"<|box_start|>": 151648,
"<|endoftext|>": 151643,
"<|file_sep|>": 151664,
"<|fim_middle|>": 151660,
"<|fim_pad|>": 151662,
"<|fim_prefix|>": 151659,
"<|fim_suffix|>": 151661,
"<|im_end|>": 151645,
"<|im_start|>": 151644,
"<|image_pad|>": 151655,
"<|object_ref_end|>": 151647,
"<|object_ref_start|>": 151646,
"<|quad_end|>": 151651,
"<|quad_start|>": 151650,
"<|repo_name|>": 151663,
"<|video_pad|>": 151656,
"<|vision_end|>": 151653,
"<|vision_pad|>": 151654,
"<|vision_start|>": 151652
}
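
These entries should round-trip through the tokenizer; a small sanity-check sketch (repo id assumed from the commit header):

```python
# Sketch: confirm the added tokens resolve to the ids listed above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("laion/allenai-sera-unified-3160__Qwen3-8B")
assert tok.convert_tokens_to_ids("<think>") == 151667
assert tok.convert_tokens_to_ids("</think>") == 151668
assert tok.convert_tokens_to_ids("<|im_end|>") == 151645  # also the eos token
```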

all_results.json Normal file (+16 lines)

@@ -0,0 +1,16 @@
{
"achieved_tflops_per_gpu": 48633.33732429779,
"achieved_tflops_per_gpu_theoretical": 678580.3444106664,
"epoch": 7.0,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.057392802089452744,
"mfu_percent": 3436.984969915038,
"mfu_percent_theoretical": 47956.20808555946,
"total_flos": 1.7948424939556045e+18,
"train_loss": 0.0,
"train_runtime": 1.1533,
"train_samples_per_second": 19179.804,
"train_steps_per_second": 200.295,
"valid_targets_mean": 15044.6,
"valid_targets_min": 4391
}
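
Several of these metrics look inconsistent at face value (an MFU above 100%, a 1.15 s runtime for 7 epochs). The throughput field does check out arithmetically against total_flos and the 32 GPUs listed in the README, which suggests the logged runtime covers only a final resumed step rather than the full run. A worked sketch:

```python
# Sketch: reproduce achieved_tflops_per_gpu from the fields above.
# The match suggests the formula below; with a runtime of only
# 1.1533 s, the resulting TFLOPs/MFU figures are not physically meaningful.
total_flos = 1.7948424939556045e18
train_runtime = 1.1533  # seconds
num_devices = 32        # from the README hyperparameters

tflops_per_gpu = total_flos / train_runtime / num_devices / 1e12
print(tflops_per_gpu)   # ~48633.3, matching achieved_tflops_per_gpu
```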

chat_template.jinja Normal file (+89 lines)

@@ -0,0 +1,89 @@
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0].role == 'system' %}
{{- messages[0].content + '\n\n' }}
{%- endif %}
{{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0].role == 'system' %}
{{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for message in messages[::-1] %}
{%- set index = (messages|length - 1) - loop.index0 %}
{%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
{%- set ns.multi_step_tool = false %}
{%- set ns.last_query_index = index %}
{%- endif %}
{%- endfor %}
{%- for message in messages %}
{%- if message.content is string %}
{%- set content = message.content %}
{%- else %}
{%- set content = '' %}
{%- endif %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
{{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{%- set reasoning_content = '' %}
{%- if message.reasoning_content is string %}
{%- set reasoning_content = message.reasoning_content %}
{%- else %}
{%- if '</think>' in content %}
{%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
{%- set content = content.split('</think>')[-1].lstrip('\n') %}
{%- endif %}
{%- endif %}
{%- if loop.index0 > ns.last_query_index %}
{%- if loop.last or (not loop.last and reasoning_content) %}
{{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- if message.tool_calls %}
{%- for tool_call in message.tool_calls %}
{%- if (loop.first and content) or (not loop.first) %}
{{- '\n' }}
{%- endif %}
{%- if tool_call.function %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{%- if tool_call.arguments is string %}
{{- tool_call.arguments }}
{%- else %}
{{- tool_call.arguments | tojson }}
{%- endif %}
{{- '}\n</tool_call>' }}
{%- endfor %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- if enable_thinking is defined and enable_thinking is false %}
{{- '<think>\n\n</think>\n\n' }}
{%- endif %}
{%- endif %}
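
A short sketch of driving this template through tokenizer.apply_chat_template; extra keyword arguments such as enable_thinking are forwarded into the Jinja context, which is what triggers the template's final branch. The repo id is assumed from the commit header.

```python
# Sketch: render the chat template above. With enable_thinking=False the
# template appends an empty <think>\n\n</think> block after the
# generation prompt, per its final branch.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("laion/allenai-sera-unified-3160__Qwen3-8B")
messages = [{"role": "user", "content": "Hello!"}]
prompt = tok.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # forwarded to the template as a Jinja variable
)
print(prompt)
```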

config.json Normal file (+68 lines)

@@ -0,0 +1,68 @@
{
"architectures": [
"Qwen3ForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"dtype": "bfloat16",
"eos_token_id": 151645,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 12288,
"layer_types": [
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention"
],
"max_position_embeddings": 40960,
"max_window_layers": 36,
"model_type": "qwen3",
"num_attention_heads": 32,
"num_hidden_layers": 36,
"num_key_value_heads": 8,
"pad_token_id": 151643,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 1000000,
"sliding_window": null,
"tie_word_embeddings": false,
"transformers_version": "4.57.6",
"use_cache": false,
"use_sliding_window": false,
"vocab_size": 151936
}
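
A quick arithmetic sketch of the attention geometry these fields imply (32 query heads sharing 8 key/value heads, i.e. grouped-query attention):

```python
# Sketch: projection widths implied by config.json.
hidden_size = 4096
head_dim = 128
num_attention_heads = 32
num_key_value_heads = 8

q_width = num_attention_heads * head_dim    # 4096: q_proj maps 4096 -> 4096
kv_width = num_key_value_heads * head_dim   # 1024: k/v_proj map 4096 -> 1024
groups = num_attention_heads // num_key_value_heads  # 4 query heads per KV head
print(q_width, kv_width, groups)
```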

generation_config.json Normal file (+12 lines)

@@ -0,0 +1,12 @@
{
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"temperature": 0.6,
"top_k": 20,
"top_p": 0.95,
"transformers_version": "4.57.6"
}
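
A sketch of sampling with exactly these defaults (do_sample with temperature 0.6, top_p 0.95, top_k 20); repo id assumed from the commit header.

```python
# Sketch: generate with the sampling parameters from generation_config.json.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "laion/allenai-sera-unified-3160__Qwen3-8B"
tok = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tok("Hello!", return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.6,
    top_k=20,
    top_p=0.95,
)
print(tok.decode(out[0], skip_special_tokens=True))
```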

merges.txt Normal file (+151388 lines)

File diff suppressed because it is too large.

model-00001-of-00004.safetensors Normal file (LFS pointer, +3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f80021213fb3cf692098c587567f7e873ccd1e39432e6940e38e69abdbbc6858
size 4902257696
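
This and the three blocks below are Git LFS pointer files rather than the weights themselves; a sketch of parsing one (field layout per the LFS spec URL in the first line):

```python
# Sketch: parse a Git LFS pointer file into its key/value fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f80021213fb3cf692098c587567f7e873ccd1e39432e6940e38e69abdbbc6858
size 4902257696"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 4902257696 bytes, a ~4.9 GB shard
```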

model-00002-of-00004.safetensors Normal file (LFS pointer, +3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a0232c8db378f60cbc9cbd425c7302980649a8a69349b7d2c3c853ee385fbc35
size 4915960368

model-00003-of-00004.safetensors Normal file (LFS pointer, +3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6f830efc19d3390d66ed7bd24c417867b1ad3ac2475ebb5757dbfaa914107ee2
size 4983068496

model-00004-of-00004.safetensors Normal file (LFS pointer, +3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c54586a99927f89e039f66a42d98affc224bdc25091d5fd86c23e1517802eab0
size 1580230264

model.safetensors.index.json Normal file (+407 lines)

@@ -0,0 +1,407 @@
{
"metadata": {
"total_parameters": 308224,
"total_size": 16381470720
},
"weight_map": {
"lm_head.weight": "model-00004-of-00004.safetensors",
"model.embed_tokens.weight": "model-00001-of-00004.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.0.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.0.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.1.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.1.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.10.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.10.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.11.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.11.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.12.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.12.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.13.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.13.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.14.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.14.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.15.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.15.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.16.input_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.16.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.16.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.17.input_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.17.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.17.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.18.input_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.18.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.18.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.19.input_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.19.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.19.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.2.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.2.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.20.input_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.20.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.20.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.21.input_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.21.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.21.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.22.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.22.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.23.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.23.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.24.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.24.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.25.input_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.25.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.25.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.26.input_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.26.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.26.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.27.input_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.27.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.27.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.28.input_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.28.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.28.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.29.input_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.29.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.29.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.3.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.3.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.30.input_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.30.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.30.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.31.input_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.31.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.31.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.32.input_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.32.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.32.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.33.input_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.33.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.33.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.34.input_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"model.layers.34.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.34.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.35.input_layernorm.weight": "model-00004-of-00004.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
"model.layers.35.self_attn.k_norm.weight": "model-00004-of-00004.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
"model.layers.35.self_attn.q_norm.weight": "model-00004-of-00004.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.4.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.4.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.5.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.5.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.6.input_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.6.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.6.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.7.input_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.7.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.7.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.8.input_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"model.layers.8.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.8.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"model.layers.9.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.9.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"model.norm.weight": "model-00004-of-00004.safetensors"
}
}
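
The weight_map above is what lets loaders open only the shard that holds a given tensor; a sketch using the safetensors library (assumes the four shard files sit next to the index):

```python
# Sketch: look up a tensor's shard in the index, then load just that tensor.
import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.layers.35.self_attn.k_norm.weight"
shard = index["weight_map"][name]  # "model-00004-of-00004.safetensors"
with safe_open(shard, framework="pt") as f:
    tensor = f.get_tensor(name)
print(shard, tuple(tensor.shape))
```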

run_summary.json Normal file (+12 lines)

@@ -0,0 +1,12 @@
{
"agent_name": "099497cdf98a9c3da57ca8873d9d734da4be1361_thinking_preprocessed",
"training_start": null,
"training_end": null,
"created_by": "DCAgent",
"base_model_name": "Qwen/Qwen3-8B",
"dataset_name": "/e/data1/datasets/playground/ot/hf_hub/datasets--laion--allenai-sera-unified-3160/snapshots/099497cdf98a9c3da57ca8873d9d734da4be1361_thinking_preprocessed",
"training_type": "SFT",
"training_parameters": "https://huggingface.co/laion/allenai-sera-unified-3160__Qwen3-8B/blob/main/config.json",
"wandb_link": null,
"traces_location_s3": null
}

special_tokens_map.json Normal file (+31 lines)

@@ -0,0 +1,31 @@
{
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>",
"<|object_ref_start|>",
"<|object_ref_end|>",
"<|box_start|>",
"<|box_end|>",
"<|quad_start|>",
"<|quad_end|>",
"<|vision_start|>",
"<|vision_end|>",
"<|vision_pad|>",
"<|image_pad|>",
"<|video_pad|>"
],
"eos_token": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

tokenizer.json Normal file (LFS pointer, +3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
size 11422654

tokenizer_config.json Normal file (+240 lines)

@@ -0,0 +1,240 @@
{
"add_bos_token": false,
"add_prefix_space": false,
"added_tokens_decoder": {
"151643": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151644": {
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151645": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151646": {
"content": "<|object_ref_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151647": {
"content": "<|object_ref_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151648": {
"content": "<|box_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151649": {
"content": "<|box_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151650": {
"content": "<|quad_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151651": {
"content": "<|quad_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151652": {
"content": "<|vision_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151653": {
"content": "<|vision_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151654": {
"content": "<|vision_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151655": {
"content": "<|image_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151656": {
"content": "<|video_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151657": {
"content": "<tool_call>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151658": {
"content": "</tool_call>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151659": {
"content": "<|fim_prefix|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151660": {
"content": "<|fim_middle|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151661": {
"content": "<|fim_suffix|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151662": {
"content": "<|fim_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151663": {
"content": "<|repo_name|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151664": {
"content": "<|file_sep|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151665": {
"content": "<tool_response>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151666": {
"content": "</tool_response>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151667": {
"content": "<think>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151668": {
"content": "</think>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
}
},
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>",
"<|object_ref_start|>",
"<|object_ref_end|>",
"<|box_start|>",
"<|box_end|>",
"<|quad_start|>",
"<|quad_end|>",
"<|vision_start|>",
"<|vision_end|>",
"<|vision_pad|>",
"<|image_pad|>",
"<|video_pad|>"
],
"bos_token": null,
"clean_up_tokenization_spaces": false,
"eos_token": "<|im_end|>",
"errors": "replace",
"extra_special_tokens": {},
"model_max_length": 32768,
"pad_token": "<|endoftext|>",
"padding_side": "right",
"split_special_tokens": false,
"tokenizer_class": "Qwen2Tokenizer",
"unk_token": null
}

train_results.json Normal file (+12 lines)

@@ -0,0 +1,12 @@
{
"achieved_tflops_per_gpu": 48633.33732429779,
"achieved_tflops_per_gpu_theoretical": 678580.3444106664,
"epoch": 7.0,
"mfu_percent": 3436.984969915038,
"mfu_percent_theoretical": 47956.20808555946,
"total_flos": 1.7948424939556045e+18,
"train_loss": 0.0,
"train_runtime": 1.1533,
"train_samples_per_second": 19179.804,
"train_steps_per_second": 200.295
}

trainer_log.jsonl Normal file (+52 lines)

@@ -0,0 +1,52 @@
{"current_steps": 5, "total_steps": 231, "loss": 0.4313, "lr": 6.666666666666667e-06, "epoch": 0.15151515151515152, "percentage": 2.16, "elapsed_time": "0:02:47", "remaining_time": "2:06:27"}
{"current_steps": 10, "total_steps": 231, "loss": 0.3895, "lr": 1.5000000000000002e-05, "epoch": 0.30303030303030304, "percentage": 4.33, "elapsed_time": "0:05:24", "remaining_time": "1:59:39"}
{"current_steps": 15, "total_steps": 231, "loss": 0.3418, "lr": 2.3333333333333336e-05, "epoch": 0.45454545454545453, "percentage": 6.49, "elapsed_time": "0:08:00", "remaining_time": "1:55:26"}
{"current_steps": 20, "total_steps": 231, "loss": 0.3153, "lr": 3.1666666666666666e-05, "epoch": 0.6060606060606061, "percentage": 8.66, "elapsed_time": "0:10:36", "remaining_time": "1:51:53"}
{"current_steps": 25, "total_steps": 231, "loss": 0.2947, "lr": 4e-05, "epoch": 0.7575757575757576, "percentage": 10.82, "elapsed_time": "0:13:12", "remaining_time": "1:48:49"}
{"current_steps": 30, "total_steps": 231, "loss": 0.271, "lr": 3.994244399375679e-05, "epoch": 0.9090909090909091, "percentage": 12.99, "elapsed_time": "0:15:48", "remaining_time": "1:45:54"}
{"current_steps": 35, "total_steps": 231, "loss": 0.2523, "lr": 3.977010724441261e-05, "epoch": 1.0606060606060606, "percentage": 15.15, "elapsed_time": "0:18:23", "remaining_time": "1:43:01"}
{"current_steps": 40, "total_steps": 231, "loss": 0.2413, "lr": 3.9483981653469586e-05, "epoch": 1.2121212121212122, "percentage": 17.32, "elapsed_time": "0:20:58", "remaining_time": "1:40:11"}
{"current_steps": 45, "total_steps": 231, "loss": 0.2315, "lr": 3.908571404555758e-05, "epoch": 1.3636363636363638, "percentage": 19.48, "elapsed_time": "0:23:34", "remaining_time": "1:37:26"}
{"current_steps": 50, "total_steps": 231, "loss": 0.228, "lr": 3.8577596689969346e-05, "epoch": 1.5151515151515151, "percentage": 21.65, "elapsed_time": "0:26:10", "remaining_time": "1:34:46"}
{"current_steps": 55, "total_steps": 231, "loss": 0.2184, "lr": 3.7962554107273926e-05, "epoch": 1.6666666666666665, "percentage": 23.81, "elapsed_time": "0:28:46", "remaining_time": "1:32:05"}
{"current_steps": 60, "total_steps": 231, "loss": 0.2129, "lr": 3.724412623694427e-05, "epoch": 1.8181818181818183, "percentage": 25.97, "elapsed_time": "0:31:21", "remaining_time": "1:29:21"}
{"current_steps": 65, "total_steps": 231, "loss": 0.2075, "lr": 3.642644806287938e-05, "epoch": 1.9696969696969697, "percentage": 28.14, "elapsed_time": "0:33:56", "remaining_time": "1:26:41"}
{"current_steps": 70, "total_steps": 231, "loss": 0.2062, "lr": 3.55142258140884e-05, "epoch": 2.121212121212121, "percentage": 30.3, "elapsed_time": "0:36:31", "remaining_time": "1:24:00"}
{"current_steps": 75, "total_steps": 231, "loss": 0.2062, "lr": 3.451270987751598e-05, "epoch": 2.2727272727272725, "percentage": 32.47, "elapsed_time": "0:39:06", "remaining_time": "1:21:20"}
{"current_steps": 80, "total_steps": 231, "loss": 0.1972, "lr": 3.342766457891194e-05, "epoch": 2.4242424242424243, "percentage": 34.63, "elapsed_time": "0:41:41", "remaining_time": "1:18:40"}
{"current_steps": 85, "total_steps": 231, "loss": 0.1948, "lr": 3.226533500567433e-05, "epoch": 2.5757575757575757, "percentage": 36.8, "elapsed_time": "0:44:16", "remaining_time": "1:16:02"}
{"current_steps": 90, "total_steps": 231, "loss": 0.1969, "lr": 3.1032411062620544e-05, "epoch": 2.7272727272727275, "percentage": 38.96, "elapsed_time": "0:46:51", "remaining_time": "1:13:24"}
{"current_steps": 95, "total_steps": 231, "loss": 0.1923, "lr": 2.973598896756697e-05, "epoch": 2.878787878787879, "percentage": 41.13, "elapsed_time": "0:49:26", "remaining_time": "1:10:46"}
{"current_steps": 100, "total_steps": 231, "loss": 0.1914, "lr": 2.8383530408333285e-05, "epoch": 3.0303030303030303, "percentage": 43.29, "elapsed_time": "0:52:02", "remaining_time": "1:08:10"}
{"current_steps": 105, "total_steps": 231, "loss": 0.1892, "lr": 2.6982819596247373e-05, "epoch": 3.1818181818181817, "percentage": 45.45, "elapsed_time": "0:54:37", "remaining_time": "1:05:33"}
{"current_steps": 110, "total_steps": 231, "loss": 0.1852, "lr": 2.554191846333378e-05, "epoch": 3.3333333333333335, "percentage": 47.62, "elapsed_time": "0:57:12", "remaining_time": "1:02:55"}
{"current_steps": 115, "total_steps": 231, "loss": 0.1861, "lr": 2.4069120261052682e-05, "epoch": 3.484848484848485, "percentage": 49.78, "elapsed_time": "0:59:47", "remaining_time": "1:00:18"}
{"current_steps": 120, "total_steps": 231, "loss": 0.1872, "lr": 2.2572901827656626e-05, "epoch": 3.6363636363636362, "percentage": 51.95, "elapsed_time": "1:02:21", "remaining_time": "0:57:41"}
{"current_steps": 125, "total_steps": 231, "loss": 0.1849, "lr": 2.1061874798894992e-05, "epoch": 3.787878787878788, "percentage": 54.11, "elapsed_time": "1:04:56", "remaining_time": "0:55:04"}
{"current_steps": 130, "total_steps": 231, "loss": 0.1827, "lr": 1.9544736042877886e-05, "epoch": 3.9393939393939394, "percentage": 56.28, "elapsed_time": "1:07:30", "remaining_time": "0:52:27"}
{"current_steps": 135, "total_steps": 231, "loss": 0.183, "lr": 1.8030217604376628e-05, "epoch": 4.090909090909091, "percentage": 58.44, "elapsed_time": "1:10:05", "remaining_time": "0:49:50"}
{"current_steps": 140, "total_steps": 231, "loss": 0.1838, "lr": 1.6527036446661396e-05, "epoch": 4.242424242424242, "percentage": 60.61, "elapsed_time": "1:12:40", "remaining_time": "0:47:14"}
{"current_steps": 145, "total_steps": 231, "loss": 0.1811, "lr": 1.5043844280142005e-05, "epoch": 4.393939393939394, "percentage": 62.77, "elapsed_time": "1:15:15", "remaining_time": "0:44:38"}
{"current_steps": 150, "total_steps": 231, "loss": 0.1782, "lr": 1.358917776657806e-05, "epoch": 4.545454545454545, "percentage": 64.94, "elapsed_time": "1:17:50", "remaining_time": "0:42:02"}
{"current_steps": 155, "total_steps": 231, "loss": 0.1767, "lr": 1.2171409385463218e-05, "epoch": 4.696969696969697, "percentage": 67.1, "elapsed_time": "1:20:24", "remaining_time": "0:39:25"}
{"current_steps": 160, "total_steps": 231, "loss": 0.1781, "lr": 1.0798699245376959e-05, "epoch": 4.848484848484849, "percentage": 69.26, "elapsed_time": "1:22:59", "remaining_time": "0:36:49"}
{"current_steps": 165, "total_steps": 231, "loss": 0.1786, "lr": 9.478948117658577e-06, "epoch": 5.0, "percentage": 71.43, "elapsed_time": "1:25:33", "remaining_time": "0:34:13"}
{"current_steps": 170, "total_steps": 231, "loss": 0.1803, "lr": 8.219751962722726e-06, "epoch": 5.151515151515151, "percentage": 73.59, "elapsed_time": "1:28:08", "remaining_time": "0:31:37"}
{"current_steps": 175, "total_steps": 231, "loss": 0.1723, "lr": 7.028358210744881e-06, "epoch": 5.303030303030303, "percentage": 75.76, "elapsed_time": "1:30:42", "remaining_time": "0:29:01"}
{"current_steps": 180, "total_steps": 231, "loss": 0.1782, "lr": 5.911624048347757e-06, "epoch": 5.454545454545454, "percentage": 77.92, "elapsed_time": "1:33:16", "remaining_time": "0:26:25"}
{"current_steps": 185, "total_steps": 231, "loss": 0.177, "lr": 4.875976951373633e-06, "epoch": 5.606060606060606, "percentage": 80.09, "elapsed_time": "1:35:50", "remaining_time": "0:23:49"}
{"current_steps": 190, "total_steps": 231, "loss": 0.1755, "lr": 3.927377690900436e-06, "epoch": 5.757575757575758, "percentage": 82.25, "elapsed_time": "1:38:24", "remaining_time": "0:21:14"}
{"current_steps": 195, "total_steps": 231, "loss": 0.1762, "lr": 3.071286025423983e-06, "epoch": 5.909090909090909, "percentage": 84.42, "elapsed_time": "1:40:59", "remaining_time": "0:18:38"}
{"current_steps": 200, "total_steps": 231, "loss": 0.1746, "lr": 2.312629276668554e-06, "epoch": 6.0606060606060606, "percentage": 86.58, "elapsed_time": "1:43:33", "remaining_time": "0:16:03"}
{"current_steps": 205, "total_steps": 231, "loss": 0.1764, "lr": 1.6557739698909436e-06, "epoch": 6.212121212121212, "percentage": 88.74, "elapsed_time": "1:46:06", "remaining_time": "0:13:27"}
{"current_steps": 210, "total_steps": 231, "loss": 0.1753, "lr": 1.1045007019049182e-06, "epoch": 6.363636363636363, "percentage": 90.91, "elapsed_time": "1:48:41", "remaining_time": "0:10:52"}
{"current_steps": 215, "total_steps": 231, "loss": 0.1725, "lr": 6.619823814758786e-07, "epoch": 6.515151515151516, "percentage": 93.07, "elapsed_time": "1:51:16", "remaining_time": "0:08:16"}
{"current_steps": 220, "total_steps": 231, "loss": 0.1759, "lr": 3.307659673251595e-07, "epoch": 6.666666666666667, "percentage": 95.24, "elapsed_time": "1:53:50", "remaining_time": "0:05:41"}
{"current_steps": 225, "total_steps": 231, "loss": 0.1764, "lr": 1.1275780885282806e-07, "epoch": 6.818181818181818, "percentage": 97.4, "elapsed_time": "1:56:24", "remaining_time": "0:03:06"}
{"current_steps": 230, "total_steps": 231, "loss": 0.176, "lr": 9.212673951897177e-09, "epoch": 6.96969696969697, "percentage": 99.57, "elapsed_time": "1:58:57", "remaining_time": "0:00:31"}
{"current_steps": 231, "total_steps": 231, "epoch": 7.0, "percentage": 100.0, "elapsed_time": "1:59:36", "remaining_time": "0:00:00"}
{"current_steps": 231, "total_steps": 231, "epoch": 7.0, "percentage": 100.0, "elapsed_time": "0:00:00", "remaining_time": "0:00:00"}
{"current_steps": 231, "total_steps": 231, "epoch": 7.0, "percentage": 100.0, "elapsed_time": "0:00:00", "remaining_time": "0:00:00"}
{"current_steps": 231, "total_steps": 231, "epoch": 7.0, "percentage": 100.0, "elapsed_time": "0:00:00", "remaining_time": "0:00:00"}
{"current_steps": 231, "total_steps": 231, "epoch": 7.0, "percentage": 100.0, "elapsed_time": "0:00:00", "remaining_time": "0:00:00"}
{"current_steps": 231, "total_steps": 231, "epoch": 7.0, "percentage": 100.0, "elapsed_time": "0:00:00", "remaining_time": "0:00:00"}

549
trainer_state.json Normal file
View File

@@ -0,0 +1,549 @@
{
"best_global_step": null,
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 7.0,
"eval_steps": 500,
"global_step": 231,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 0.15151515151515152,
"grad_norm": 4.114635968171536,
"learning_rate": 6.666666666666667e-06,
"loss": 0.4313,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.13727591931819916,
"step": 5,
"valid_targets_mean": 14623.7,
"valid_targets_min": 8472
},
{
"epoch": 0.30303030303030304,
"grad_norm": 1.4536543569838813,
"learning_rate": 1.5000000000000002e-05,
"loss": 0.3895,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.12720003724098206,
"step": 10,
"valid_targets_mean": 13794.9,
"valid_targets_min": 8012
},
{
"epoch": 0.45454545454545453,
"grad_norm": 0.5050980157229342,
"learning_rate": 2.3333333333333336e-05,
"loss": 0.3418,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.10843911021947861,
"step": 15,
"valid_targets_mean": 13961.8,
"valid_targets_min": 7669
},
{
"epoch": 0.6060606060606061,
"grad_norm": 0.42246792506810416,
"learning_rate": 3.1666666666666666e-05,
"loss": 0.3153,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.10476820170879364,
"step": 20,
"valid_targets_mean": 13416.8,
"valid_targets_min": 8687
},
{
"epoch": 0.7575757575757576,
"grad_norm": 0.32959630386266386,
"learning_rate": 4e-05,
"loss": 0.2947,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.09988001734018326,
"step": 25,
"valid_targets_mean": 14083.4,
"valid_targets_min": 6271
},
{
"epoch": 0.9090909090909091,
"grad_norm": 0.2350625305435842,
"learning_rate": 3.994244399375679e-05,
"loss": 0.271,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.09274592995643616,
"step": 30,
"valid_targets_mean": 14132.5,
"valid_targets_min": 5776
},
{
"epoch": 1.0606060606060606,
"grad_norm": 0.19138295760535673,
"learning_rate": 3.977010724441261e-05,
"loss": 0.2523,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.08168983459472656,
"step": 35,
"valid_targets_mean": 13705.5,
"valid_targets_min": 6252
},
{
"epoch": 1.2121212121212122,
"grad_norm": 0.16875241063378532,
"learning_rate": 3.9483981653469586e-05,
"loss": 0.2413,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.07980272173881531,
"step": 40,
"valid_targets_mean": 13126.1,
"valid_targets_min": 3159
},
{
"epoch": 1.3636363636363638,
"grad_norm": 0.14569427919381664,
"learning_rate": 3.908571404555758e-05,
"loss": 0.2315,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.0792369619011879,
"step": 45,
"valid_targets_mean": 14597.3,
"valid_targets_min": 6419
},
{
"epoch": 1.5151515151515151,
"grad_norm": 0.12740714547686136,
"learning_rate": 3.8577596689969346e-05,
"loss": 0.228,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.07131356000900269,
"step": 50,
"valid_targets_mean": 14610.6,
"valid_targets_min": 9878
},
{
"epoch": 1.6666666666666665,
"grad_norm": 0.13219914885031364,
"learning_rate": 3.7962554107273926e-05,
"loss": 0.2184,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.07061326503753662,
"step": 55,
"valid_targets_mean": 14205.5,
"valid_targets_min": 8994
},
{
"epoch": 1.8181818181818183,
"grad_norm": 0.12537992825560595,
"learning_rate": 3.724412623694427e-05,
"loss": 0.2129,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.0706273764371872,
"step": 60,
"valid_targets_mean": 14081.0,
"valid_targets_min": 7128
},
{
"epoch": 1.9696969696969697,
"grad_norm": 0.1317216509879008,
"learning_rate": 3.642644806287938e-05,
"loss": 0.2075,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.06866224110126495,
"step": 65,
"valid_targets_mean": 13509.2,
"valid_targets_min": 6997
},
{
"epoch": 2.121212121212121,
"grad_norm": 0.13400162499874246,
"learning_rate": 3.55142258140884e-05,
"loss": 0.2062,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.06507541239261627,
"step": 70,
"valid_targets_mean": 13193.5,
"valid_targets_min": 3186
},
{
"epoch": 2.2727272727272725,
"grad_norm": 0.14105797671143652,
"learning_rate": 3.451270987751598e-05,
"loss": 0.2062,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.07145757973194122,
"step": 75,
"valid_targets_mean": 14511.4,
"valid_targets_min": 5311
},
{
"epoch": 2.4242424242424243,
"grad_norm": 0.1342111928769276,
"learning_rate": 3.342766457891194e-05,
"loss": 0.1972,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.07077564299106598,
"step": 80,
"valid_targets_mean": 14961.1,
"valid_targets_min": 9318
},
{
"epoch": 2.5757575757575757,
"grad_norm": 0.12636739349250312,
"learning_rate": 3.226533500567433e-05,
"loss": 0.1948,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.0638246163725853,
"step": 85,
"valid_targets_mean": 14331.9,
"valid_targets_min": 8629
},
{
"epoch": 2.7272727272727275,
"grad_norm": 0.1376354010348529,
"learning_rate": 3.1032411062620544e-05,
"loss": 0.1969,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.06627653539180756,
"step": 90,
"valid_targets_mean": 14391.3,
"valid_targets_min": 9017
},
{
"epoch": 2.878787878787879,
"grad_norm": 0.12491286274381265,
"learning_rate": 2.973598896756697e-05,
"loss": 0.1923,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.05493774637579918,
"step": 95,
"valid_targets_mean": 14151.3,
"valid_targets_min": 6828
},
{
"epoch": 3.0303030303030303,
"grad_norm": 0.13478052499964224,
"learning_rate": 2.8383530408333285e-05,
"loss": 0.1914,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.06786017119884491,
"step": 100,
"valid_targets_mean": 13563.6,
"valid_targets_min": 5230
},
{
"epoch": 3.1818181818181817,
"grad_norm": 0.13423564287671794,
"learning_rate": 2.6982819596247373e-05,
"loss": 0.1892,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.06517742574214935,
"step": 105,
"valid_targets_mean": 13841.5,
"valid_targets_min": 7737
},
{
"epoch": 3.3333333333333335,
"grad_norm": 0.12892135797363977,
"learning_rate": 2.554191846333378e-05,
"loss": 0.1852,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.06293950974941254,
"step": 110,
"valid_targets_mean": 13982.2,
"valid_targets_min": 6906
},
{
"epoch": 3.484848484848485,
"grad_norm": 0.13050995936742432,
"learning_rate": 2.4069120261052682e-05,
"loss": 0.1861,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.060917340219020844,
"step": 115,
"valid_targets_mean": 14595.4,
"valid_targets_min": 7001
},
{
"epoch": 3.6363636363636362,
"grad_norm": 0.12603585687316865,
"learning_rate": 2.2572901827656626e-05,
"loss": 0.1872,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.06335198879241943,
"step": 120,
"valid_targets_mean": 14134.8,
"valid_targets_min": 4539
},
{
"epoch": 3.787878787878788,
"grad_norm": 0.12458026081937879,
"learning_rate": 2.1061874798894992e-05,
"loss": 0.1849,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.0566466748714447,
"step": 125,
"valid_targets_mean": 13645.1,
"valid_targets_min": 6953
},
{
"epoch": 3.9393939393939394,
"grad_norm": 0.1237313936998869,
"learning_rate": 1.9544736042877886e-05,
"loss": 0.1827,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.06262873858213425,
"step": 130,
"valid_targets_mean": 14213.2,
"valid_targets_min": 7309
},
{
"epoch": 4.090909090909091,
"grad_norm": 0.12687157253526618,
"learning_rate": 1.8030217604376628e-05,
"loss": 0.183,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.06545929610729218,
"step": 135,
"valid_targets_mean": 14312.2,
"valid_targets_min": 4618
},
{
"epoch": 4.242424242424242,
"grad_norm": 0.13286557514006167,
"learning_rate": 1.6527036446661396e-05,
"loss": 0.1838,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.0621417798101902,
"step": 140,
"valid_targets_mean": 13995.9,
"valid_targets_min": 3159
},
{
"epoch": 4.393939393939394,
"grad_norm": 0.13911478169619562,
"learning_rate": 1.5043844280142005e-05,
"loss": 0.1811,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.06362389773130417,
"step": 145,
"valid_targets_mean": 14415.2,
"valid_targets_min": 7395
},
{
"epoch": 4.545454545454545,
"grad_norm": 0.12748545316526233,
"learning_rate": 1.358917776657806e-05,
"loss": 0.1782,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.05887322872877121,
"step": 150,
"valid_targets_mean": 13608.7,
"valid_targets_min": 4237
},
{
"epoch": 4.696969696969697,
"grad_norm": 0.12065267038653112,
"learning_rate": 1.2171409385463218e-05,
"loss": 0.1767,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.05912807583808899,
"step": 155,
"valid_targets_mean": 13554.9,
"valid_targets_min": 6402
},
{
"epoch": 4.848484848484849,
"grad_norm": 0.1242188167578349,
"learning_rate": 1.0798699245376959e-05,
"loss": 0.1781,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.05947456508874893,
"step": 160,
"valid_targets_mean": 14594.9,
"valid_targets_min": 7248
},
{
"epoch": 5.0,
"grad_norm": 0.14434343975971592,
"learning_rate": 9.478948117658577e-06,
"loss": 0.1786,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.059081535786390305,
"step": 165,
"valid_targets_mean": 13341.7,
"valid_targets_min": 4728
},
{
"epoch": 5.151515151515151,
"grad_norm": 0.13210237534940392,
"learning_rate": 8.219751962722726e-06,
"loss": 0.1803,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.059808894991874695,
"step": 170,
"valid_targets_mean": 14140.9,
"valid_targets_min": 5241
},
{
"epoch": 5.303030303030303,
"grad_norm": 0.11760754357701442,
"learning_rate": 7.028358210744881e-06,
"loss": 0.1723,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.05813150107860565,
"step": 175,
"valid_targets_mean": 13437.8,
"valid_targets_min": 3456
},
{
"epoch": 5.454545454545454,
"grad_norm": 0.23292822424304802,
"learning_rate": 5.911624048347757e-06,
"loss": 0.1782,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.054474152624607086,
"step": 180,
"valid_targets_mean": 13450.3,
"valid_targets_min": 8527
},
{
"epoch": 5.606060606060606,
"grad_norm": 0.11629211221337708,
"learning_rate": 4.875976951373633e-06,
"loss": 0.177,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.05472658574581146,
"step": 185,
"valid_targets_mean": 14527.5,
"valid_targets_min": 6291
},
{
"epoch": 5.757575757575758,
"grad_norm": 0.12053316478201255,
"learning_rate": 3.927377690900436e-06,
"loss": 0.1755,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.06293582916259766,
"step": 190,
"valid_targets_mean": 15079.5,
"valid_targets_min": 5311
},
{
"epoch": 5.909090909090909,
"grad_norm": 0.14093615075318672,
"learning_rate": 3.071286025423983e-06,
"loss": 0.1762,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.0564623698592186,
"step": 195,
"valid_targets_mean": 13432.3,
"valid_targets_min": 4391
},
{
"epoch": 6.0606060606060606,
"grad_norm": 0.11794615480974098,
"learning_rate": 2.312629276668554e-06,
"loss": 0.1746,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.059501372277736664,
"step": 200,
"valid_targets_mean": 14715.9,
"valid_targets_min": 8912
},
{
"epoch": 6.212121212121212,
"grad_norm": 0.12194911582381218,
"learning_rate": 1.6557739698909436e-06,
"loss": 0.1764,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.0586993545293808,
"step": 205,
"valid_targets_mean": 13863.8,
"valid_targets_min": 5335
},
{
"epoch": 6.363636363636363,
"grad_norm": 0.12137779369589421,
"learning_rate": 1.1045007019049182e-06,
"loss": 0.1753,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.05951303243637085,
"step": 210,
"valid_targets_mean": 13876.2,
"valid_targets_min": 4510
},
{
"epoch": 6.515151515151516,
"grad_norm": 0.11115005771790042,
"learning_rate": 6.619823814758786e-07,
"loss": 0.1725,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.0475098192691803,
"step": 215,
"valid_targets_mean": 13260.7,
"valid_targets_min": 2975
},
{
"epoch": 6.666666666666667,
"grad_norm": 0.11465648239113958,
"learning_rate": 3.307659673251595e-07,
"loss": 0.1759,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.05422068014740944,
"step": 220,
"valid_targets_mean": 13885.9,
"valid_targets_min": 4618
},
{
"epoch": 6.818181818181818,
"grad_norm": 0.12646402426260042,
"learning_rate": 1.1275780885282806e-07,
"loss": 0.1764,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.06054677814245224,
"step": 225,
"valid_targets_mean": 13998.0,
"valid_targets_min": 7669
},
{
"epoch": 6.96969696969697,
"grad_norm": 0.11699617410563377,
"learning_rate": 9.212673951897177e-09,
"loss": 0.176,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.061230286955833435,
"step": 230,
"valid_targets_mean": 14301.2,
"valid_targets_min": 7992
},
{
"epoch": 7.0,
"step": 231,
"total_flos": 1.7948424939556045e+18,
"train_loss": 0.0,
"train_runtime": 1.1533,
"train_samples_per_second": 19179.804,
"train_steps_per_second": 200.295
}
],
"logging_steps": 5,
"max_steps": 231,
"num_input_tokens_seen": 0,
"num_train_epochs": 7,
"save_steps": 300,
"stateful_callbacks": {
"TrainerControl": {
"args": {
"should_epoch_stop": false,
"should_evaluate": false,
"should_log": false,
"should_save": true,
"should_training_stop": true
},
"attributes": {}
}
},
"total_flos": 1.7948424939556045e+18,
"train_batch_size": 1,
"trial_name": null,
"trial_params": null
}
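trainer_state.json carries the same run in structured form, with extra per-log diagnostics (grad_norm, loss_rank_avg, valid_targets_*). A minimal sketch, assuming the file sits in the working directory, for summarizing its log_history:

import json

with open("trainer_state.json", encoding="utf-8") as f:
    state = json.load(f)

# Keep only the periodic training logs; the final summary entry has no grad_norm.
history = [h for h in state["log_history"] if "grad_norm" in h]
last = history[-1]
print(f"logged entries : {len(history)} (every {state['logging_steps']} steps)")
print(f"last loss      : {last['loss']:.4f} at epoch {last['epoch']:.2f}")
print(f"grad_norm range: {min(h['grad_norm'] for h in history):.3f}"
      f" .. {max(h['grad_norm'] for h in history):.3f}")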

3
training_args.bin Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:59bbc9968d73062b5b3e4f2040d595686011d9226306efab12dd5e8124616439
size 8657
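training_args.bin typically stores the pickled TrainingArguments object rather than plain text, so the LFS pointer above is all the diff can show. A minimal sketch for inspecting it, assuming torch and transformers are installed; weights_only=False is required on recent PyTorch because the file holds a full Python object, so only load files whose origin you trust:

import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.lr_scheduler_type, args.num_train_epochs)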

BIN
training_loss.png Normal file

Binary file not shown.

Size: 37 KiB

1
vocab.json Normal file

File diff suppressed because one or more lines are too long