初始化项目,由ModelHub XC社区提供模型
Model: laion/sft__stackexchange-tezos-sandboxes__Kimi-2-5-smaxeps-32k__Qwen3-8B Source: Original Platform
This commit is contained in:
36
.gitattributes
vendored
Normal file
36
.gitattributes
vendored
Normal file
@@ -0,0 +1,36 @@
|
|||||||
|
*.7z filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.arrow filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.bin filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.bz2 filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.ckpt filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.ftz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.gz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.h5 filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.joblib filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.lfs.* filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.mlmodel filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.model filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.msgpack filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.npy filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.npz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.onnx filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.ot filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.parquet filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pb filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pickle filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pkl filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pt filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pth filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.rar filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.safetensors filter=lfs diff=lfs merge=lfs -text
|
||||||
|
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.tar.* filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.tar filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.tflite filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.tgz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.wasm filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.xz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.zip filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.zst filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*tfevents* filter=lfs diff=lfs merge=lfs -text
|
||||||
|
tokenizer.json filter=lfs diff=lfs merge=lfs -text
|
||||||
61
README.md
Normal file
61
README.md
Normal file
@@ -0,0 +1,61 @@
|
|||||||
|
---
|
||||||
|
library_name: transformers
|
||||||
|
license: other
|
||||||
|
base_model: Qwen/Qwen3-8B
|
||||||
|
tags:
|
||||||
|
- llama-factory
|
||||||
|
- full
|
||||||
|
- generated_from_trainer
|
||||||
|
model-index:
|
||||||
|
- name: sft__stackexchange-tezos-sandboxes__Kimi-2-5-smaxeps-32k__40-0__Qwen3-8B
|
||||||
|
results: []
|
||||||
|
---
|
||||||
|
|
||||||
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
|
||||||
|
should probably proofread and complete it, then remove this comment. -->
|
||||||
|
|
||||||
|
# sft__stackexchange-tezos-sandboxes__Kimi-2-5-smaxeps-32k__40-0__Qwen3-8B
|
||||||
|
|
||||||
|
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the /e/data1/datasets/playground/ot/hf_hub/datasets--penfever--stackexchange-tezos-sandboxes__Kimi-2.5-smaxeps-32k/snapshots/33375d18f3a1d98976944789905e380fce397c46_thinking_preprocessed dataset.
|
||||||
|
|
||||||
|
## Model description
|
||||||
|
|
||||||
|
More information needed
|
||||||
|
|
||||||
|
## Intended uses & limitations
|
||||||
|
|
||||||
|
More information needed
|
||||||
|
|
||||||
|
## Training and evaluation data
|
||||||
|
|
||||||
|
More information needed
|
||||||
|
|
||||||
|
## Training procedure
|
||||||
|
|
||||||
|
### Training hyperparameters
|
||||||
|
|
||||||
|
The following hyperparameters were used during training:
|
||||||
|
- learning_rate: 4e-05
|
||||||
|
- train_batch_size: 1
|
||||||
|
- eval_batch_size: 8
|
||||||
|
- seed: 42
|
||||||
|
- distributed_type: multi-GPU
|
||||||
|
- num_devices: 32
|
||||||
|
- gradient_accumulation_steps: 3
|
||||||
|
- total_train_batch_size: 96
|
||||||
|
- total_eval_batch_size: 256
|
||||||
|
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.98) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
|
||||||
|
- lr_scheduler_type: cosine
|
||||||
|
- lr_scheduler_warmup_ratio: 0.1
|
||||||
|
- num_epochs: 7.0
|
||||||
|
|
||||||
|
### Training results
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
### Framework versions
|
||||||
|
|
||||||
|
- Transformers 4.57.6
|
||||||
|
- Pytorch 2.9.1+cu130
|
||||||
|
- Datasets 4.7.0
|
||||||
|
- Tokenizers 0.22.2
|
||||||
28
added_tokens.json
Normal file
28
added_tokens.json
Normal file
@@ -0,0 +1,28 @@
|
|||||||
|
{
|
||||||
|
"</think>": 151668,
|
||||||
|
"</tool_call>": 151658,
|
||||||
|
"</tool_response>": 151666,
|
||||||
|
"<think>": 151667,
|
||||||
|
"<tool_call>": 151657,
|
||||||
|
"<tool_response>": 151665,
|
||||||
|
"<|box_end|>": 151649,
|
||||||
|
"<|box_start|>": 151648,
|
||||||
|
"<|endoftext|>": 151643,
|
||||||
|
"<|file_sep|>": 151664,
|
||||||
|
"<|fim_middle|>": 151660,
|
||||||
|
"<|fim_pad|>": 151662,
|
||||||
|
"<|fim_prefix|>": 151659,
|
||||||
|
"<|fim_suffix|>": 151661,
|
||||||
|
"<|im_end|>": 151645,
|
||||||
|
"<|im_start|>": 151644,
|
||||||
|
"<|image_pad|>": 151655,
|
||||||
|
"<|object_ref_end|>": 151647,
|
||||||
|
"<|object_ref_start|>": 151646,
|
||||||
|
"<|quad_end|>": 151651,
|
||||||
|
"<|quad_start|>": 151650,
|
||||||
|
"<|repo_name|>": 151663,
|
||||||
|
"<|video_pad|>": 151656,
|
||||||
|
"<|vision_end|>": 151653,
|
||||||
|
"<|vision_pad|>": 151654,
|
||||||
|
"<|vision_start|>": 151652
|
||||||
|
}
|
||||||
16
all_results.json
Normal file
16
all_results.json
Normal file
@@ -0,0 +1,16 @@
|
|||||||
|
{
|
||||||
|
"achieved_tflops_per_gpu": 5.062071165920845,
|
||||||
|
"achieved_tflops_per_gpu_theoretical": 237.15331939762177,
|
||||||
|
"epoch": 7.0,
|
||||||
|
"loss_nan_ranks": 0,
|
||||||
|
"loss_rank_avg": 0.13302914798259735,
|
||||||
|
"mfu_percent": 0.3577435452947593,
|
||||||
|
"mfu_percent_theoretical": 16.759951900892,
|
||||||
|
"total_flos": 1.4527566279155384e+18,
|
||||||
|
"train_loss": 0.4386636906199985,
|
||||||
|
"train_runtime": 8968.3932,
|
||||||
|
"train_samples_per_second": 6.727,
|
||||||
|
"train_steps_per_second": 0.07,
|
||||||
|
"valid_targets_mean": 3211.4,
|
||||||
|
"valid_targets_min": 220
|
||||||
|
}
|
||||||
89
chat_template.jinja
Normal file
89
chat_template.jinja
Normal file
@@ -0,0 +1,89 @@
|
|||||||
|
{%- if tools %}
|
||||||
|
{{- '<|im_start|>system\n' }}
|
||||||
|
{%- if messages[0].role == 'system' %}
|
||||||
|
{{- messages[0].content + '\n\n' }}
|
||||||
|
{%- endif %}
|
||||||
|
{{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
|
||||||
|
{%- for tool in tools %}
|
||||||
|
{{- "\n" }}
|
||||||
|
{{- tool | tojson }}
|
||||||
|
{%- endfor %}
|
||||||
|
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
|
||||||
|
{%- else %}
|
||||||
|
{%- if messages[0].role == 'system' %}
|
||||||
|
{{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
|
||||||
|
{%- endif %}
|
||||||
|
{%- endif %}
|
||||||
|
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
|
||||||
|
{%- for message in messages[::-1] %}
|
||||||
|
{%- set index = (messages|length - 1) - loop.index0 %}
|
||||||
|
{%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
|
||||||
|
{%- set ns.multi_step_tool = false %}
|
||||||
|
{%- set ns.last_query_index = index %}
|
||||||
|
{%- endif %}
|
||||||
|
{%- endfor %}
|
||||||
|
{%- for message in messages %}
|
||||||
|
{%- if message.content is string %}
|
||||||
|
{%- set content = message.content %}
|
||||||
|
{%- else %}
|
||||||
|
{%- set content = '' %}
|
||||||
|
{%- endif %}
|
||||||
|
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
|
||||||
|
{{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
|
||||||
|
{%- elif message.role == "assistant" %}
|
||||||
|
{%- set reasoning_content = '' %}
|
||||||
|
{%- if message.reasoning_content is string %}
|
||||||
|
{%- set reasoning_content = message.reasoning_content %}
|
||||||
|
{%- else %}
|
||||||
|
{%- if '</think>' in content %}
|
||||||
|
{%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
|
||||||
|
{%- set content = content.split('</think>')[-1].lstrip('\n') %}
|
||||||
|
{%- endif %}
|
||||||
|
{%- endif %}
|
||||||
|
{%- if loop.index0 > ns.last_query_index %}
|
||||||
|
{%- if loop.last or (not loop.last and reasoning_content) %}
|
||||||
|
{{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
|
||||||
|
{%- else %}
|
||||||
|
{{- '<|im_start|>' + message.role + '\n' + content }}
|
||||||
|
{%- endif %}
|
||||||
|
{%- else %}
|
||||||
|
{{- '<|im_start|>' + message.role + '\n' + content }}
|
||||||
|
{%- endif %}
|
||||||
|
{%- if message.tool_calls %}
|
||||||
|
{%- for tool_call in message.tool_calls %}
|
||||||
|
{%- if (loop.first and content) or (not loop.first) %}
|
||||||
|
{{- '\n' }}
|
||||||
|
{%- endif %}
|
||||||
|
{%- if tool_call.function %}
|
||||||
|
{%- set tool_call = tool_call.function %}
|
||||||
|
{%- endif %}
|
||||||
|
{{- '<tool_call>\n{"name": "' }}
|
||||||
|
{{- tool_call.name }}
|
||||||
|
{{- '", "arguments": ' }}
|
||||||
|
{%- if tool_call.arguments is string %}
|
||||||
|
{{- tool_call.arguments }}
|
||||||
|
{%- else %}
|
||||||
|
{{- tool_call.arguments | tojson }}
|
||||||
|
{%- endif %}
|
||||||
|
{{- '}\n</tool_call>' }}
|
||||||
|
{%- endfor %}
|
||||||
|
{%- endif %}
|
||||||
|
{{- '<|im_end|>\n' }}
|
||||||
|
{%- elif message.role == "tool" %}
|
||||||
|
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
|
||||||
|
{{- '<|im_start|>user' }}
|
||||||
|
{%- endif %}
|
||||||
|
{{- '\n<tool_response>\n' }}
|
||||||
|
{{- content }}
|
||||||
|
{{- '\n</tool_response>' }}
|
||||||
|
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
|
||||||
|
{{- '<|im_end|>\n' }}
|
||||||
|
{%- endif %}
|
||||||
|
{%- endif %}
|
||||||
|
{%- endfor %}
|
||||||
|
{%- if add_generation_prompt %}
|
||||||
|
{{- '<|im_start|>assistant\n' }}
|
||||||
|
{%- if enable_thinking is defined and enable_thinking is false %}
|
||||||
|
{{- '<think>\n\n</think>\n\n' }}
|
||||||
|
{%- endif %}
|
||||||
|
{%- endif %}
|
||||||
68
config.json
Normal file
68
config.json
Normal file
@@ -0,0 +1,68 @@
|
|||||||
|
{
|
||||||
|
"architectures": [
|
||||||
|
"Qwen3ForCausalLM"
|
||||||
|
],
|
||||||
|
"attention_bias": false,
|
||||||
|
"attention_dropout": 0.0,
|
||||||
|
"dtype": "bfloat16",
|
||||||
|
"eos_token_id": 151645,
|
||||||
|
"head_dim": 128,
|
||||||
|
"hidden_act": "silu",
|
||||||
|
"hidden_size": 4096,
|
||||||
|
"initializer_range": 0.02,
|
||||||
|
"intermediate_size": 12288,
|
||||||
|
"layer_types": [
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention",
|
||||||
|
"full_attention"
|
||||||
|
],
|
||||||
|
"max_position_embeddings": 40960,
|
||||||
|
"max_window_layers": 36,
|
||||||
|
"model_type": "qwen3",
|
||||||
|
"num_attention_heads": 32,
|
||||||
|
"num_hidden_layers": 36,
|
||||||
|
"num_key_value_heads": 8,
|
||||||
|
"pad_token_id": 151643,
|
||||||
|
"rms_norm_eps": 1e-06,
|
||||||
|
"rope_scaling": null,
|
||||||
|
"rope_theta": 1000000,
|
||||||
|
"sliding_window": null,
|
||||||
|
"tie_word_embeddings": false,
|
||||||
|
"transformers_version": "4.57.6",
|
||||||
|
"use_cache": false,
|
||||||
|
"use_sliding_window": false,
|
||||||
|
"vocab_size": 151936
|
||||||
|
}
|
||||||
12
generation_config.json
Normal file
12
generation_config.json
Normal file
@@ -0,0 +1,12 @@
|
|||||||
|
{
|
||||||
|
"do_sample": true,
|
||||||
|
"eos_token_id": [
|
||||||
|
151645,
|
||||||
|
151643
|
||||||
|
],
|
||||||
|
"pad_token_id": 151643,
|
||||||
|
"temperature": 0.6,
|
||||||
|
"top_k": 20,
|
||||||
|
"top_p": 0.95,
|
||||||
|
"transformers_version": "4.57.6"
|
||||||
|
}
|
||||||
151388
merges.txt
Normal file
151388
merges.txt
Normal file
File diff suppressed because it is too large
Load Diff
3
model-00001-of-00004.safetensors
Normal file
3
model-00001-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
|
|||||||
|
version https://git-lfs.github.com/spec/v1
|
||||||
|
oid sha256:b96f2014ed82d0bd528ff1980fc1708db4aa7ee7786b7641eda91b86f857cc5a
|
||||||
|
size 4902257696
|
||||||
3
model-00002-of-00004.safetensors
Normal file
3
model-00002-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
|
|||||||
|
version https://git-lfs.github.com/spec/v1
|
||||||
|
oid sha256:1bbf767de12476e053f7550a46cdcb855fd079a43fa1efdcd2bd5a2fc7fac63a
|
||||||
|
size 4915960368
|
||||||
3
model-00003-of-00004.safetensors
Normal file
3
model-00003-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
|
|||||||
|
version https://git-lfs.github.com/spec/v1
|
||||||
|
oid sha256:0530c5de808229b416f88f8a913f90e760b919136db39b5b5ba88c17724c656a
|
||||||
|
size 4983068496
|
||||||
3
model-00004-of-00004.safetensors
Normal file
3
model-00004-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
|
|||||||
|
version https://git-lfs.github.com/spec/v1
|
||||||
|
oid sha256:47cc6266fddf0bcd86ba7e030927dfe999aa7a33369119c3282581815a7a4b85
|
||||||
|
size 1580230264
|
||||||
407
model.safetensors.index.json
Normal file
407
model.safetensors.index.json
Normal file
@@ -0,0 +1,407 @@
|
|||||||
|
{
|
||||||
|
"metadata": {
|
||||||
|
"total_parameters": 308224,
|
||||||
|
"total_size": 16381470720
|
||||||
|
},
|
||||||
|
"weight_map": {
|
||||||
|
"lm_head.weight": "model-00004-of-00004.safetensors",
|
||||||
|
"model.embed_tokens.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.0.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.0.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.1.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.1.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.10.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.10.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.11.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.11.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.12.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.12.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.13.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.13.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.14.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.14.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.15.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.15.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.16.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.16.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.16.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.16.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.16.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.16.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.16.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.17.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.17.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.17.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.17.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.17.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.17.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.17.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.17.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.17.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.17.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.17.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.18.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.18.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.18.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.18.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.18.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.18.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.18.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.18.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.18.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.18.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.18.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.19.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.19.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.19.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.19.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.19.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.19.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.19.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.19.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.19.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.19.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.19.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.2.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.2.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.20.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.20.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.20.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.20.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.20.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.20.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.20.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.20.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.20.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.20.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.20.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.21.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.21.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.21.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.21.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.21.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.21.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.21.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.21.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.21.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.21.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.21.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.22.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.22.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.22.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.22.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.22.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.22.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.23.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.23.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.24.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.24.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.25.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.25.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.25.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.25.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.25.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.26.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.26.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.26.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.26.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.26.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.26.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.27.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.27.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.27.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.27.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.27.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.27.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.27.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.27.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.27.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.27.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.27.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.28.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.28.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.28.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.28.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.28.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.28.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.28.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.28.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.28.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.28.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.28.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.29.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.29.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.29.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.29.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.29.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.29.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.29.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.29.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.29.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.29.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.29.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.3.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.3.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.30.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.30.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.30.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.30.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.30.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.30.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.30.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.30.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.30.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.30.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.30.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.31.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.31.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.31.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.31.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.31.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.31.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.31.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.31.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.31.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.31.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.31.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.32.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.32.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.32.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.32.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.32.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.32.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.32.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.32.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.32.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.32.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.32.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.33.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.33.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.33.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.33.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.33.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.33.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.33.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.33.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.33.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.33.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.33.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.34.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.34.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.34.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.34.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.34.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.34.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.34.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.34.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.34.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.34.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.34.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.35.input_layernorm.weight": "model-00004-of-00004.safetensors",
|
||||||
|
"model.layers.35.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
|
||||||
|
"model.layers.35.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
|
||||||
|
"model.layers.35.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
|
||||||
|
"model.layers.35.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
|
||||||
|
"model.layers.35.self_attn.k_norm.weight": "model-00004-of-00004.safetensors",
|
||||||
|
"model.layers.35.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.35.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
|
||||||
|
"model.layers.35.self_attn.q_norm.weight": "model-00004-of-00004.safetensors",
|
||||||
|
"model.layers.35.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.35.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||||
|
"model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.4.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.4.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.5.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.5.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.6.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.6.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.6.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.7.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.7.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.7.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.7.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.8.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.8.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.8.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.8.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.8.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.8.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.8.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.8.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.8.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.8.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.9.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||||
|
"model.layers.9.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.9.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.9.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.9.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.9.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.layers.9.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||||
|
"model.norm.weight": "model-00004-of-00004.safetensors"
|
||||||
|
}
|
||||||
|
}
|
||||||
12
run_summary.json
Normal file
12
run_summary.json
Normal file
@@ -0,0 +1,12 @@
|
|||||||
|
{
|
||||||
|
"agent_name": "33375d18f3a1d98976944789905e380fce397c46_thinking_preprocessed",
|
||||||
|
"training_start": null,
|
||||||
|
"training_end": null,
|
||||||
|
"created_by": "DCAgent",
|
||||||
|
"base_model_name": "Qwen/Qwen3-8B",
|
||||||
|
"dataset_name": "/e/data1/datasets/playground/ot/hf_hub/datasets--penfever--stackexchange-tezos-sandboxes__Kimi-2.5-smaxeps-32k/snapshots/33375d18f3a1d98976944789905e380fce397c46_thinking_preprocessed",
|
||||||
|
"training_type": "SFT",
|
||||||
|
"training_parameters": "https://huggingface.co/laion/stackexchange-tezos-sandboxes__Kimi-2_5-smaxeps-32k/blob/main/config.json",
|
||||||
|
"wandb_link": null,
|
||||||
|
"traces_location_s3": null
|
||||||
|
}
|
||||||
31
special_tokens_map.json
Normal file
31
special_tokens_map.json
Normal file
@@ -0,0 +1,31 @@
|
|||||||
|
{
|
||||||
|
"additional_special_tokens": [
|
||||||
|
"<|im_start|>",
|
||||||
|
"<|im_end|>",
|
||||||
|
"<|object_ref_start|>",
|
||||||
|
"<|object_ref_end|>",
|
||||||
|
"<|box_start|>",
|
||||||
|
"<|box_end|>",
|
||||||
|
"<|quad_start|>",
|
||||||
|
"<|quad_end|>",
|
||||||
|
"<|vision_start|>",
|
||||||
|
"<|vision_end|>",
|
||||||
|
"<|vision_pad|>",
|
||||||
|
"<|image_pad|>",
|
||||||
|
"<|video_pad|>"
|
||||||
|
],
|
||||||
|
"eos_token": {
|
||||||
|
"content": "<|im_end|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false
|
||||||
|
},
|
||||||
|
"pad_token": {
|
||||||
|
"content": "<|endoftext|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false
|
||||||
|
}
|
||||||
|
}
|
||||||
3
tokenizer.json
Normal file
3
tokenizer.json
Normal file
@@ -0,0 +1,3 @@
|
|||||||
|
version https://git-lfs.github.com/spec/v1
|
||||||
|
oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
|
||||||
|
size 11422654
|
||||||
240
tokenizer_config.json
Normal file
240
tokenizer_config.json
Normal file
@@ -0,0 +1,240 @@
|
|||||||
|
{
|
||||||
|
"add_bos_token": false,
|
||||||
|
"add_prefix_space": false,
|
||||||
|
"added_tokens_decoder": {
|
||||||
|
"151643": {
|
||||||
|
"content": "<|endoftext|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151644": {
|
||||||
|
"content": "<|im_start|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151645": {
|
||||||
|
"content": "<|im_end|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151646": {
|
||||||
|
"content": "<|object_ref_start|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151647": {
|
||||||
|
"content": "<|object_ref_end|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151648": {
|
||||||
|
"content": "<|box_start|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151649": {
|
||||||
|
"content": "<|box_end|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151650": {
|
||||||
|
"content": "<|quad_start|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151651": {
|
||||||
|
"content": "<|quad_end|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151652": {
|
||||||
|
"content": "<|vision_start|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151653": {
|
||||||
|
"content": "<|vision_end|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151654": {
|
||||||
|
"content": "<|vision_pad|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151655": {
|
||||||
|
"content": "<|image_pad|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151656": {
|
||||||
|
"content": "<|video_pad|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"151657": {
|
||||||
|
"content": "<tool_call>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151658": {
|
||||||
|
"content": "</tool_call>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151659": {
|
||||||
|
"content": "<|fim_prefix|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151660": {
|
||||||
|
"content": "<|fim_middle|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151661": {
|
||||||
|
"content": "<|fim_suffix|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151662": {
|
||||||
|
"content": "<|fim_pad|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151663": {
|
||||||
|
"content": "<|repo_name|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151664": {
|
||||||
|
"content": "<|file_sep|>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151665": {
|
||||||
|
"content": "<tool_response>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151666": {
|
||||||
|
"content": "</tool_response>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151667": {
|
||||||
|
"content": "<think>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
},
|
||||||
|
"151668": {
|
||||||
|
"content": "</think>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": false
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"additional_special_tokens": [
|
||||||
|
"<|im_start|>",
|
||||||
|
"<|im_end|>",
|
||||||
|
"<|object_ref_start|>",
|
||||||
|
"<|object_ref_end|>",
|
||||||
|
"<|box_start|>",
|
||||||
|
"<|box_end|>",
|
||||||
|
"<|quad_start|>",
|
||||||
|
"<|quad_end|>",
|
||||||
|
"<|vision_start|>",
|
||||||
|
"<|vision_end|>",
|
||||||
|
"<|vision_pad|>",
|
||||||
|
"<|image_pad|>",
|
||||||
|
"<|video_pad|>"
|
||||||
|
],
|
||||||
|
"bos_token": null,
|
||||||
|
"clean_up_tokenization_spaces": false,
|
||||||
|
"eos_token": "<|im_end|>",
|
||||||
|
"errors": "replace",
|
||||||
|
"extra_special_tokens": {},
|
||||||
|
"model_max_length": 32768,
|
||||||
|
"pad_token": "<|endoftext|>",
|
||||||
|
"padding_side": "right",
|
||||||
|
"split_special_tokens": false,
|
||||||
|
"tokenizer_class": "Qwen2Tokenizer",
|
||||||
|
"unk_token": null
|
||||||
|
}
|
||||||
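The tokenizer_config.json above registers the Qwen3 special tokens (chat markers, vision and FIM placeholders, tool-call and think tags), sets `<|im_end|>` as the EOS token, `<|endoftext|>` as the padding token, and caps `model_max_length` at 32768. A minimal sketch of how the exported tokenizer could be loaded and inspected with transformers; the local path is a placeholder, not a name from this commit:

```python
from transformers import AutoTokenizer

# Load the exported tokenizer from a local checkout of this repository
# (the path below is purely illustrative).
tok = AutoTokenizer.from_pretrained("./sft-qwen3-8b-checkpoint")

print(tok.__class__.__name__)        # typically the fast Qwen2 tokenizer class
print(tok.eos_token, tok.pad_token)  # <|im_end|> <|endoftext|>
print(tok.model_max_length)          # 32768

# <think>/</think> are registered added tokens with "special": false, so they
# encode to single ids and are not stripped by skip_special_tokens.
ids = tok("<think>reasoning</think>", add_special_tokens=False).input_ids
print(ids, tok.decode(ids))
```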
16
train_results.json
Normal file
@@ -0,0 +1,16 @@
{
"achieved_tflops_per_gpu": 5.062071165920845,
"achieved_tflops_per_gpu_theoretical": 237.15331939762177,
"epoch": 7.0,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.13302914798259735,
"mfu_percent": 0.3577435452947593,
"mfu_percent_theoretical": 16.759951900892,
"total_flos": 1.4527566279155384e+18,
"train_loss": 0.4386636906199985,
"train_runtime": 8968.3932,
"train_samples_per_second": 6.727,
"train_steps_per_second": 0.07,
"valid_targets_mean": 3211.4,
"valid_targets_min": 220
}
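train_results.json reports both throughput and efficiency figures for the run. Assuming `mfu_percent` is computed as achieved TFLOPs per GPU divided by the peak TFLOPs assumed for the hardware, and that `total_flos` is summed over all ranks, the implied peak and world size can be recovered from the file itself. A small sketch; the field names come from the JSON above, everything else is derived arithmetic:

```python
import json

with open("train_results.json") as f:
    r = json.load(f)

# Implied per-GPU peak, assuming mfu_percent = achieved / peak * 100.
peak_tflops = r["achieved_tflops_per_gpu"] / (r["mfu_percent"] / 100)
print(f"implied peak per GPU: {peak_tflops:.0f} TFLOPS")   # ~1415

# Implied number of ranks, assuming total_flos is summed across GPUs.
flos_per_gpu = r["achieved_tflops_per_gpu"] * 1e12 * r["train_runtime"]
print(f"implied GPU count: {r['total_flos'] / flos_per_gpu:.1f}")  # ~32

# Samples processed over the whole run (7 epochs).
print(f"samples seen: {r['train_samples_per_second'] * r['train_runtime']:.0f}")
```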
127
trainer_log.jsonl
Normal file
@@ -0,0 +1,127 @@
{"current_steps": 5, "total_steps": 630, "loss": 0.9975, "lr": 2.53968253968254e-06, "epoch": 0.05555555555555555, "percentage": 0.79, "elapsed_time": "0:01:24", "remaining_time": "2:55:20"}
|
||||||
|
{"current_steps": 10, "total_steps": 630, "loss": 0.9275, "lr": 5.7142857142857145e-06, "epoch": 0.1111111111111111, "percentage": 1.59, "elapsed_time": "0:02:38", "remaining_time": "2:44:09"}
|
||||||
|
{"current_steps": 15, "total_steps": 630, "loss": 0.7902, "lr": 8.888888888888888e-06, "epoch": 0.16666666666666666, "percentage": 2.38, "elapsed_time": "0:03:54", "remaining_time": "2:40:09"}
|
||||||
|
{"current_steps": 20, "total_steps": 630, "loss": 0.7268, "lr": 1.2063492063492064e-05, "epoch": 0.2222222222222222, "percentage": 3.17, "elapsed_time": "0:05:08", "remaining_time": "2:37:00"}
|
||||||
|
{"current_steps": 25, "total_steps": 630, "loss": 0.6872, "lr": 1.523809523809524e-05, "epoch": 0.2777777777777778, "percentage": 3.97, "elapsed_time": "0:06:19", "remaining_time": "2:33:06"}
|
||||||
|
{"current_steps": 30, "total_steps": 630, "loss": 0.6531, "lr": 1.8412698412698415e-05, "epoch": 0.3333333333333333, "percentage": 4.76, "elapsed_time": "0:07:34", "remaining_time": "2:31:21"}
|
||||||
|
{"current_steps": 35, "total_steps": 630, "loss": 0.6291, "lr": 2.158730158730159e-05, "epoch": 0.3888888888888889, "percentage": 5.56, "elapsed_time": "0:08:46", "remaining_time": "2:29:17"}
|
||||||
|
{"current_steps": 40, "total_steps": 630, "loss": 0.6125, "lr": 2.4761904761904766e-05, "epoch": 0.4444444444444444, "percentage": 6.35, "elapsed_time": "0:10:00", "remaining_time": "2:27:41"}
|
||||||
|
{"current_steps": 45, "total_steps": 630, "loss": 0.5954, "lr": 2.7936507936507936e-05, "epoch": 0.5, "percentage": 7.14, "elapsed_time": "0:11:13", "remaining_time": "2:25:55"}
|
||||||
|
{"current_steps": 50, "total_steps": 630, "loss": 0.5695, "lr": 3.111111111111112e-05, "epoch": 0.5555555555555556, "percentage": 7.94, "elapsed_time": "0:12:29", "remaining_time": "2:24:53"}
|
||||||
|
{"current_steps": 55, "total_steps": 630, "loss": 0.546, "lr": 3.4285714285714284e-05, "epoch": 0.6111111111111112, "percentage": 8.73, "elapsed_time": "0:13:46", "remaining_time": "2:23:58"}
|
||||||
|
{"current_steps": 60, "total_steps": 630, "loss": 0.5496, "lr": 3.7460317460317464e-05, "epoch": 0.6666666666666666, "percentage": 9.52, "elapsed_time": "0:14:57", "remaining_time": "2:22:05"}
|
||||||
|
{"current_steps": 65, "total_steps": 630, "loss": 0.5259, "lr": 3.9999693004141615e-05, "epoch": 0.7222222222222222, "percentage": 10.32, "elapsed_time": "0:16:13", "remaining_time": "2:20:58"}
|
||||||
|
{"current_steps": 70, "total_steps": 630, "loss": 0.5241, "lr": 3.998894913865352e-05, "epoch": 0.7777777777777778, "percentage": 11.11, "elapsed_time": "0:17:28", "remaining_time": "2:19:50"}
|
||||||
|
{"current_steps": 75, "total_steps": 630, "loss": 0.5204, "lr": 3.9962864903591375e-05, "epoch": 0.8333333333333334, "percentage": 11.9, "elapsed_time": "0:18:43", "remaining_time": "2:18:31"}
|
||||||
|
{"current_steps": 80, "total_steps": 630, "loss": 0.5089, "lr": 3.992146031710637e-05, "epoch": 0.8888888888888888, "percentage": 12.7, "elapsed_time": "0:19:58", "remaining_time": "2:17:20"}
|
||||||
|
{"current_steps": 85, "total_steps": 630, "loss": 0.4974, "lr": 3.9864767154838864e-05, "epoch": 0.9444444444444444, "percentage": 13.49, "elapsed_time": "0:21:13", "remaining_time": "2:16:05"}
|
||||||
|
{"current_steps": 90, "total_steps": 630, "loss": 0.4945, "lr": 3.9792828925532376e-05, "epoch": 1.0, "percentage": 14.29, "elapsed_time": "0:22:24", "remaining_time": "2:14:24"}
|
||||||
|
{"current_steps": 95, "total_steps": 630, "loss": 0.48, "lr": 3.970570083764316e-05, "epoch": 1.0555555555555556, "percentage": 15.08, "elapsed_time": "0:23:39", "remaining_time": "2:13:13"}
|
||||||
|
{"current_steps": 100, "total_steps": 630, "loss": 0.4807, "lr": 3.9603449756970877e-05, "epoch": 1.1111111111111112, "percentage": 15.87, "elapsed_time": "0:24:53", "remaining_time": "2:11:55"}
|
||||||
|
{"current_steps": 105, "total_steps": 630, "loss": 0.479, "lr": 3.948615415534294e-05, "epoch": 1.1666666666666667, "percentage": 16.67, "elapsed_time": "0:26:07", "remaining_time": "2:10:39"}
|
||||||
|
{"current_steps": 110, "total_steps": 630, "loss": 0.4686, "lr": 3.9353904050391874e-05, "epoch": 1.2222222222222223, "percentage": 17.46, "elapsed_time": "0:27:19", "remaining_time": "2:09:08"}
|
||||||
|
{"current_steps": 115, "total_steps": 630, "loss": 0.4699, "lr": 3.9206800936472e-05, "epoch": 1.2777777777777777, "percentage": 18.25, "elapsed_time": "0:28:32", "remaining_time": "2:07:47"}
|
||||||
|
{"current_steps": 120, "total_steps": 630, "loss": 0.4818, "lr": 3.904495770676831e-05, "epoch": 1.3333333333333333, "percentage": 19.05, "elapsed_time": "0:29:43", "remaining_time": "2:06:18"}
|
||||||
|
{"current_steps": 125, "total_steps": 630, "loss": 0.468, "lr": 3.886849856665746e-05, "epoch": 1.3888888888888888, "percentage": 19.84, "elapsed_time": "0:30:54", "remaining_time": "2:04:51"}
|
||||||
|
{"current_steps": 130, "total_steps": 630, "loss": 0.4596, "lr": 3.8677558938387276e-05, "epoch": 1.4444444444444444, "percentage": 20.63, "elapsed_time": "0:32:07", "remaining_time": "2:03:35"}
|
||||||
|
{"current_steps": 135, "total_steps": 630, "loss": 0.465, "lr": 3.8472285357147966e-05, "epoch": 1.5, "percentage": 21.43, "elapsed_time": "0:33:20", "remaining_time": "2:02:13"}
|
||||||
|
{"current_steps": 140, "total_steps": 630, "loss": 0.461, "lr": 3.825283535861476e-05, "epoch": 1.5555555555555556, "percentage": 22.22, "elapsed_time": "0:34:29", "remaining_time": "2:00:43"}
|
||||||
|
{"current_steps": 145, "total_steps": 630, "loss": 0.4573, "lr": 3.801937735804838e-05, "epoch": 1.6111111111111112, "percentage": 23.02, "elapsed_time": "0:35:40", "remaining_time": "1:59:18"}
|
||||||
|
{"current_steps": 150, "total_steps": 630, "loss": 0.4602, "lr": 3.777209052104598e-05, "epoch": 1.6666666666666665, "percentage": 23.81, "elapsed_time": "0:36:49", "remaining_time": "1:57:51"}
|
||||||
|
{"current_steps": 155, "total_steps": 630, "loss": 0.455, "lr": 3.7511164626041823e-05, "epoch": 1.7222222222222223, "percentage": 24.6, "elapsed_time": "0:37:57", "remaining_time": "1:56:19"}
|
||||||
|
{"current_steps": 160, "total_steps": 630, "loss": 0.4544, "lr": 3.7236799918663284e-05, "epoch": 1.7777777777777777, "percentage": 25.4, "elapsed_time": "0:39:06", "remaining_time": "1:54:53"}
|
||||||
|
{"current_steps": 165, "total_steps": 630, "loss": 0.4473, "lr": 3.6949206958053825e-05, "epoch": 1.8333333333333335, "percentage": 26.19, "elapsed_time": "0:40:19", "remaining_time": "1:53:38"}
|
||||||
|
{"current_steps": 170, "total_steps": 630, "loss": 0.4503, "lr": 3.6648606455280944e-05, "epoch": 1.8888888888888888, "percentage": 26.98, "elapsed_time": "0:41:27", "remaining_time": "1:52:11"}
|
||||||
|
{"current_steps": 175, "total_steps": 630, "loss": 0.4467, "lr": 3.633522910395314e-05, "epoch": 1.9444444444444444, "percentage": 27.78, "elapsed_time": "0:42:43", "remaining_time": "1:51:03"}
|
||||||
|
{"current_steps": 180, "total_steps": 630, "loss": 0.4529, "lr": 3.6009315403175786e-05, "epoch": 2.0, "percentage": 28.57, "elapsed_time": "0:43:54", "remaining_time": "1:49:47"}
|
||||||
|
{"current_steps": 185, "total_steps": 630, "loss": 0.4301, "lr": 3.567111547298194e-05, "epoch": 2.0555555555555554, "percentage": 29.37, "elapsed_time": "0:45:06", "remaining_time": "1:48:30"}
|
||||||
|
{"current_steps": 190, "total_steps": 630, "loss": 0.4315, "lr": 3.532088886237956e-05, "epoch": 2.111111111111111, "percentage": 30.16, "elapsed_time": "0:46:14", "remaining_time": "1:47:05"}
|
||||||
|
{"current_steps": 195, "total_steps": 630, "loss": 0.4422, "lr": 3.495890435016258e-05, "epoch": 2.1666666666666665, "percentage": 30.95, "elapsed_time": "0:47:26", "remaining_time": "1:45:50"}
|
||||||
|
{"current_steps": 200, "total_steps": 630, "loss": 0.4329, "lr": 3.458543973863859e-05, "epoch": 2.2222222222222223, "percentage": 31.75, "elapsed_time": "0:48:35", "remaining_time": "1:44:28"}
|
||||||
|
{"current_steps": 205, "total_steps": 630, "loss": 0.4284, "lr": 3.420078164043161e-05, "epoch": 2.2777777777777777, "percentage": 32.54, "elapsed_time": "0:49:45", "remaining_time": "1:43:09"}
|
||||||
|
{"current_steps": 210, "total_steps": 630, "loss": 0.4226, "lr": 3.38052252585233e-05, "epoch": 2.3333333333333335, "percentage": 33.33, "elapsed_time": "0:50:55", "remaining_time": "1:41:51"}
|
||||||
|
{"current_steps": 215, "total_steps": 630, "loss": 0.4235, "lr": 3.339907415970168e-05, "epoch": 2.388888888888889, "percentage": 34.13, "elapsed_time": "0:52:09", "remaining_time": "1:40:39"}
|
||||||
|
{"current_steps": 220, "total_steps": 630, "loss": 0.4206, "lr": 3.298264004159104e-05, "epoch": 2.4444444444444446, "percentage": 34.92, "elapsed_time": "0:53:22", "remaining_time": "1:39:28"}
|
||||||
|
{"current_steps": 225, "total_steps": 630, "loss": 0.4243, "lr": 3.255624249344198e-05, "epoch": 2.5, "percentage": 35.71, "elapsed_time": "0:54:31", "remaining_time": "1:38:09"}
|
||||||
|
{"current_steps": 230, "total_steps": 630, "loss": 0.4272, "lr": 3.212020875086495e-05, "epoch": 2.5555555555555554, "percentage": 36.51, "elapsed_time": "0:55:42", "remaining_time": "1:36:53"}
|
||||||
|
{"current_steps": 235, "total_steps": 630, "loss": 0.4347, "lr": 3.1674873444695804e-05, "epoch": 2.611111111111111, "percentage": 37.3, "elapsed_time": "0:56:56", "remaining_time": "1:35:42"}
|
||||||
|
{"current_steps": 240, "total_steps": 630, "loss": 0.422, "lr": 3.122057834418582e-05, "epoch": 2.6666666666666665, "percentage": 38.1, "elapsed_time": "0:58:07", "remaining_time": "1:34:27"}
|
||||||
|
{"current_steps": 245, "total_steps": 630, "loss": 0.4147, "lr": 3.075767209471345e-05, "epoch": 2.7222222222222223, "percentage": 38.89, "elapsed_time": "0:59:17", "remaining_time": "1:33:09"}
|
||||||
|
{"current_steps": 250, "total_steps": 630, "loss": 0.4207, "lr": 3.0286509950219077e-05, "epoch": 2.7777777777777777, "percentage": 39.68, "elapsed_time": "1:00:26", "remaining_time": "1:31:52"}
|
||||||
|
{"current_steps": 255, "total_steps": 630, "loss": 0.4165, "lr": 2.9807453500567937e-05, "epoch": 2.8333333333333335, "percentage": 40.48, "elapsed_time": "1:01:39", "remaining_time": "1:30:40"}
|
||||||
|
{"current_steps": 260, "total_steps": 630, "loss": 0.426, "lr": 2.9320870394050783e-05, "epoch": 2.888888888888889, "percentage": 41.27, "elapsed_time": "1:02:49", "remaining_time": "1:29:23"}
|
||||||
|
{"current_steps": 265, "total_steps": 630, "loss": 0.4227, "lr": 2.8827134055234883e-05, "epoch": 2.9444444444444446, "percentage": 42.06, "elapsed_time": "1:04:01", "remaining_time": "1:28:10"}
|
||||||
|
{"current_steps": 270, "total_steps": 630, "loss": 0.4186, "lr": 2.8326623398382174e-05, "epoch": 3.0, "percentage": 42.86, "elapsed_time": "1:05:10", "remaining_time": "1:26:53"}
|
||||||
|
{"current_steps": 275, "total_steps": 630, "loss": 0.4132, "lr": 2.781972253665431e-05, "epoch": 3.0555555555555554, "percentage": 43.65, "elapsed_time": "1:06:19", "remaining_time": "1:25:37"}
|
||||||
|
{"current_steps": 280, "total_steps": 630, "loss": 0.4058, "lr": 2.7306820487327906e-05, "epoch": 3.111111111111111, "percentage": 44.44, "elapsed_time": "1:07:29", "remaining_time": "1:24:21"}
|
||||||
|
{"current_steps": 285, "total_steps": 630, "loss": 0.4037, "lr": 2.6788310873246133e-05, "epoch": 3.1666666666666665, "percentage": 45.24, "elapsed_time": "1:08:43", "remaining_time": "1:23:12"}
|
||||||
|
{"current_steps": 290, "total_steps": 630, "loss": 0.4022, "lr": 2.62645916207358e-05, "epoch": 3.2222222222222223, "percentage": 46.03, "elapsed_time": "1:09:51", "remaining_time": "1:21:54"}
|
||||||
|
{"current_steps": 295, "total_steps": 630, "loss": 0.4078, "lr": 2.5736064654221808e-05, "epoch": 3.2777777777777777, "percentage": 46.83, "elapsed_time": "1:11:02", "remaining_time": "1:20:40"}
|
||||||
|
{"current_steps": 300, "total_steps": 630, "loss": 0.4064, "lr": 2.5203135587773196e-05, "epoch": 3.3333333333333335, "percentage": 47.62, "elapsed_time": "1:12:14", "remaining_time": "1:19:27"}
|
||||||
|
{"current_steps": 305, "total_steps": 630, "loss": 0.4022, "lr": 2.4666213413817696e-05, "epoch": 3.388888888888889, "percentage": 48.41, "elapsed_time": "1:13:33", "remaining_time": "1:18:22"}
|
||||||
|
{"current_steps": 310, "total_steps": 630, "loss": 0.4, "lr": 2.4125710189263555e-05, "epoch": 3.4444444444444446, "percentage": 49.21, "elapsed_time": "1:14:43", "remaining_time": "1:17:08"}
|
||||||
|
{"current_steps": 315, "total_steps": 630, "loss": 0.3979, "lr": 2.3582040719269504e-05, "epoch": 3.5, "percentage": 50.0, "elapsed_time": "1:15:53", "remaining_time": "1:15:53"}
|
||||||
|
{"current_steps": 320, "total_steps": 630, "loss": 0.404, "lr": 2.3035622238905694e-05, "epoch": 3.5555555555555554, "percentage": 50.79, "elapsed_time": "1:17:04", "remaining_time": "1:14:39"}
|
||||||
|
{"current_steps": 325, "total_steps": 630, "loss": 0.4026, "lr": 2.2486874092949708e-05, "epoch": 3.611111111111111, "percentage": 51.59, "elapsed_time": "1:18:14", "remaining_time": "1:13:25"}
|
||||||
|
{"current_steps": 330, "total_steps": 630, "loss": 0.4045, "lr": 2.1936217414063584e-05, "epoch": 3.6666666666666665, "percentage": 52.38, "elapsed_time": "1:19:25", "remaining_time": "1:12:12"}
|
||||||
|
{"current_steps": 335, "total_steps": 630, "loss": 0.4008, "lr": 2.138407479959869e-05, "epoch": 3.7222222222222223, "percentage": 53.17, "elapsed_time": "1:20:34", "remaining_time": "1:10:56"}
|
||||||
|
{"current_steps": 340, "total_steps": 630, "loss": 0.3996, "lr": 2.0830869987276537e-05, "epoch": 3.7777777777777777, "percentage": 53.97, "elapsed_time": "1:21:46", "remaining_time": "1:09:45"}
|
||||||
|
{"current_steps": 345, "total_steps": 630, "loss": 0.4018, "lr": 2.027702752999444e-05, "epoch": 3.8333333333333335, "percentage": 54.76, "elapsed_time": "1:22:56", "remaining_time": "1:08:30"}
|
||||||
|
{"current_steps": 350, "total_steps": 630, "loss": 0.4019, "lr": 1.9722972470005573e-05, "epoch": 3.888888888888889, "percentage": 55.56, "elapsed_time": "1:24:10", "remaining_time": "1:07:20"}
|
||||||
|
{"current_steps": 355, "total_steps": 630, "loss": 0.4015, "lr": 1.916913001272347e-05, "epoch": 3.9444444444444446, "percentage": 56.35, "elapsed_time": "1:25:16", "remaining_time": "1:06:03"}
|
||||||
|
{"current_steps": 360, "total_steps": 630, "loss": 0.4046, "lr": 1.8615925200401318e-05, "epoch": 4.0, "percentage": 57.14, "elapsed_time": "1:26:25", "remaining_time": "1:04:49"}
|
||||||
|
{"current_steps": 365, "total_steps": 630, "loss": 0.4018, "lr": 1.806378258593642e-05, "epoch": 4.055555555555555, "percentage": 57.94, "elapsed_time": "1:27:31", "remaining_time": "1:03:33"}
|
||||||
|
{"current_steps": 370, "total_steps": 630, "loss": 0.3874, "lr": 1.7513125907050302e-05, "epoch": 4.111111111111111, "percentage": 58.73, "elapsed_time": "1:28:46", "remaining_time": "1:02:22"}
|
||||||
|
{"current_steps": 375, "total_steps": 630, "loss": 0.3855, "lr": 1.6964377761094313e-05, "epoch": 4.166666666666667, "percentage": 59.52, "elapsed_time": "1:29:57", "remaining_time": "1:01:10"}
|
||||||
|
{"current_steps": 380, "total_steps": 630, "loss": 0.3956, "lr": 1.6417959280730506e-05, "epoch": 4.222222222222222, "percentage": 60.32, "elapsed_time": "1:31:08", "remaining_time": "0:59:57"}
|
||||||
|
{"current_steps": 385, "total_steps": 630, "loss": 0.3854, "lr": 1.5874289810736452e-05, "epoch": 4.277777777777778, "percentage": 61.11, "elapsed_time": "1:32:20", "remaining_time": "0:58:45"}
|
||||||
|
{"current_steps": 390, "total_steps": 630, "loss": 0.3886, "lr": 1.5333786586182308e-05, "epoch": 4.333333333333333, "percentage": 61.9, "elapsed_time": "1:33:31", "remaining_time": "0:57:33"}
|
||||||
|
{"current_steps": 395, "total_steps": 630, "loss": 0.3912, "lr": 1.4796864412226812e-05, "epoch": 4.388888888888889, "percentage": 62.7, "elapsed_time": "1:34:41", "remaining_time": "0:56:19"}
|
||||||
|
{"current_steps": 400, "total_steps": 630, "loss": 0.3885, "lr": 1.4263935345778202e-05, "epoch": 4.444444444444445, "percentage": 63.49, "elapsed_time": "1:35:55", "remaining_time": "0:55:09"}
|
||||||
|
{"current_steps": 405, "total_steps": 630, "loss": 0.3939, "lr": 1.37354083792642e-05, "epoch": 4.5, "percentage": 64.29, "elapsed_time": "1:37:07", "remaining_time": "0:53:57"}
|
||||||
|
{"current_steps": 410, "total_steps": 630, "loss": 0.3893, "lr": 1.3211689126753879e-05, "epoch": 4.555555555555555, "percentage": 65.08, "elapsed_time": "1:38:16", "remaining_time": "0:52:44"}
|
||||||
|
{"current_steps": 415, "total_steps": 630, "loss": 0.3932, "lr": 1.26931795126721e-05, "epoch": 4.611111111111111, "percentage": 65.87, "elapsed_time": "1:39:25", "remaining_time": "0:51:30"}
|
||||||
|
{"current_steps": 420, "total_steps": 630, "loss": 0.3825, "lr": 1.2180277463345697e-05, "epoch": 4.666666666666667, "percentage": 66.67, "elapsed_time": "1:40:32", "remaining_time": "0:50:16"}
|
||||||
|
{"current_steps": 425, "total_steps": 630, "loss": 0.3879, "lr": 1.167337660161783e-05, "epoch": 4.722222222222222, "percentage": 67.46, "elapsed_time": "1:41:42", "remaining_time": "0:49:03"}
|
||||||
|
{"current_steps": 430, "total_steps": 630, "loss": 0.3905, "lr": 1.1172865944765122e-05, "epoch": 4.777777777777778, "percentage": 68.25, "elapsed_time": "1:42:52", "remaining_time": "0:47:51"}
|
||||||
|
{"current_steps": 435, "total_steps": 630, "loss": 0.3922, "lr": 1.067912960594923e-05, "epoch": 4.833333333333333, "percentage": 69.05, "elapsed_time": "1:44:01", "remaining_time": "0:46:38"}
|
||||||
|
{"current_steps": 440, "total_steps": 630, "loss": 0.3845, "lr": 1.0192546499432066e-05, "epoch": 4.888888888888889, "percentage": 69.84, "elapsed_time": "1:45:09", "remaining_time": "0:45:24"}
|
||||||
|
{"current_steps": 445, "total_steps": 630, "loss": 0.3871, "lr": 9.713490049780931e-06, "epoch": 4.944444444444445, "percentage": 70.63, "elapsed_time": "1:46:22", "remaining_time": "0:44:13"}
|
||||||
|
{"current_steps": 450, "total_steps": 630, "loss": 0.3789, "lr": 9.242327905286552e-06, "epoch": 5.0, "percentage": 71.43, "elapsed_time": "1:47:26", "remaining_time": "0:42:58"}
|
||||||
|
{"current_steps": 455, "total_steps": 630, "loss": 0.3779, "lr": 8.779421655814189e-06, "epoch": 5.055555555555555, "percentage": 72.22, "elapsed_time": "1:48:34", "remaining_time": "0:41:45"}
|
||||||
|
{"current_steps": 460, "total_steps": 630, "loss": 0.3854, "lr": 8.325126555304208e-06, "epoch": 5.111111111111111, "percentage": 73.02, "elapsed_time": "1:49:46", "remaining_time": "0:40:34"}
|
||||||
|
{"current_steps": 465, "total_steps": 630, "loss": 0.3782, "lr": 7.879791249135059e-06, "epoch": 5.166666666666667, "percentage": 73.81, "elapsed_time": "1:50:56", "remaining_time": "0:39:21"}
|
||||||
|
{"current_steps": 470, "total_steps": 630, "loss": 0.3803, "lr": 7.443757506558033e-06, "epoch": 5.222222222222222, "percentage": 74.6, "elapsed_time": "1:52:06", "remaining_time": "0:38:09"}
|
||||||
|
{"current_steps": 475, "total_steps": 630, "loss": 0.374, "lr": 7.0173599584089625e-06, "epoch": 5.277777777777778, "percentage": 75.4, "elapsed_time": "1:53:13", "remaining_time": "0:36:56"}
|
||||||
|
{"current_steps": 480, "total_steps": 630, "loss": 0.3786, "lr": 6.600925840298331e-06, "epoch": 5.333333333333333, "percentage": 76.19, "elapsed_time": "1:54:23", "remaining_time": "0:35:44"}
|
||||||
|
{"current_steps": 485, "total_steps": 630, "loss": 0.3803, "lr": 6.1947747414767035e-06, "epoch": 5.388888888888889, "percentage": 76.98, "elapsed_time": "1:55:33", "remaining_time": "0:34:32"}
|
||||||
|
{"current_steps": 490, "total_steps": 630, "loss": 0.3874, "lr": 5.799218359568395e-06, "epoch": 5.444444444444445, "percentage": 77.78, "elapsed_time": "1:56:46", "remaining_time": "0:33:21"}
|
||||||
|
{"current_steps": 495, "total_steps": 630, "loss": 0.3827, "lr": 5.414560261361415e-06, "epoch": 5.5, "percentage": 78.57, "elapsed_time": "1:57:54", "remaining_time": "0:32:09"}
|
||||||
|
{"current_steps": 500, "total_steps": 630, "loss": 0.382, "lr": 5.041095649837429e-06, "epoch": 5.555555555555555, "percentage": 79.37, "elapsed_time": "1:59:01", "remaining_time": "0:30:56"}
|
||||||
|
{"current_steps": 505, "total_steps": 630, "loss": 0.3855, "lr": 4.679111137620442e-06, "epoch": 5.611111111111111, "percentage": 80.16, "elapsed_time": "2:00:13", "remaining_time": "0:29:45"}
|
||||||
|
{"current_steps": 510, "total_steps": 630, "loss": 0.3768, "lr": 4.328884527018067e-06, "epoch": 5.666666666666667, "percentage": 80.95, "elapsed_time": "2:01:22", "remaining_time": "0:28:33"}
|
||||||
|
{"current_steps": 515, "total_steps": 630, "loss": 0.384, "lr": 3.990684596824219e-06, "epoch": 5.722222222222222, "percentage": 81.75, "elapsed_time": "2:02:30", "remaining_time": "0:27:21"}
|
||||||
|
{"current_steps": 520, "total_steps": 630, "loss": 0.3836, "lr": 3.6647708960468696e-06, "epoch": 5.777777777777778, "percentage": 82.54, "elapsed_time": "2:03:40", "remaining_time": "0:26:09"}
|
||||||
|
{"current_steps": 525, "total_steps": 630, "loss": 0.3762, "lr": 3.3513935447190595e-06, "epoch": 5.833333333333333, "percentage": 83.33, "elapsed_time": "2:04:52", "remaining_time": "0:24:58"}
|
||||||
|
{"current_steps": 530, "total_steps": 630, "loss": 0.3797, "lr": 3.050793041946183e-06, "epoch": 5.888888888888889, "percentage": 84.13, "elapsed_time": "2:05:58", "remaining_time": "0:23:46"}
|
||||||
|
{"current_steps": 535, "total_steps": 630, "loss": 0.3765, "lr": 2.763200081336721e-06, "epoch": 5.944444444444445, "percentage": 84.92, "elapsed_time": "2:07:04", "remaining_time": "0:22:33"}
|
||||||
|
{"current_steps": 540, "total_steps": 630, "loss": 0.387, "lr": 2.488835373958185e-06, "epoch": 6.0, "percentage": 85.71, "elapsed_time": "2:08:14", "remaining_time": "0:21:22"}
|
||||||
|
{"current_steps": 545, "total_steps": 630, "loss": 0.3797, "lr": 2.2279094789540244e-06, "epoch": 6.055555555555555, "percentage": 86.51, "elapsed_time": "2:09:23", "remaining_time": "0:20:10"}
|
||||||
|
{"current_steps": 550, "total_steps": 630, "loss": 0.3752, "lr": 1.9806226419516195e-06, "epoch": 6.111111111111111, "percentage": 87.3, "elapsed_time": "2:10:36", "remaining_time": "0:18:59"}
|
||||||
|
{"current_steps": 555, "total_steps": 630, "loss": 0.3751, "lr": 1.7471646413852439e-06, "epoch": 6.166666666666667, "percentage": 88.1, "elapsed_time": "2:11:47", "remaining_time": "0:17:48"}
|
||||||
|
{"current_steps": 560, "total_steps": 630, "loss": 0.3771, "lr": 1.527714642852045e-06, "epoch": 6.222222222222222, "percentage": 88.89, "elapsed_time": "2:12:54", "remaining_time": "0:16:36"}
|
||||||
|
{"current_steps": 565, "total_steps": 630, "loss": 0.3786, "lr": 1.3224410616127292e-06, "epoch": 6.277777777777778, "percentage": 89.68, "elapsed_time": "2:14:06", "remaining_time": "0:15:25"}
|
||||||
|
{"current_steps": 570, "total_steps": 630, "loss": 0.3729, "lr": 1.1315014333425455e-06, "epoch": 6.333333333333333, "percentage": 90.48, "elapsed_time": "2:15:14", "remaining_time": "0:14:14"}
|
||||||
|
{"current_steps": 575, "total_steps": 630, "loss": 0.3707, "lr": 9.550422932316938e-07, "epoch": 6.388888888888889, "percentage": 91.27, "elapsed_time": "2:16:21", "remaining_time": "0:13:02"}
|
||||||
|
{"current_steps": 580, "total_steps": 630, "loss": 0.381, "lr": 7.931990635280052e-07, "epoch": 6.444444444444445, "percentage": 92.06, "elapsed_time": "2:17:29", "remaining_time": "0:11:51"}
|
||||||
|
{"current_steps": 585, "total_steps": 630, "loss": 0.3729, "lr": 6.460959496081276e-07, "epoch": 6.5, "percentage": 92.86, "elapsed_time": "2:18:38", "remaining_time": "0:10:39"}
|
||||||
|
{"current_steps": 590, "total_steps": 630, "loss": 0.3766, "lr": 5.13845844657066e-07, "epoch": 6.555555555555555, "percentage": 93.65, "elapsed_time": "2:19:51", "remaining_time": "0:09:28"}
|
||||||
|
{"current_steps": 595, "total_steps": 630, "loss": 0.38, "lr": 3.965502430291235e-07, "epoch": 6.611111111111111, "percentage": 94.44, "elapsed_time": "2:21:01", "remaining_time": "0:08:17"}
|
||||||
|
{"current_steps": 600, "total_steps": 630, "loss": 0.3816, "lr": 2.942991623568436e-07, "epoch": 6.666666666666667, "percentage": 95.24, "elapsed_time": "2:22:11", "remaining_time": "0:07:06"}
|
||||||
|
{"current_steps": 605, "total_steps": 630, "loss": 0.3803, "lr": 2.0717107446762696e-07, "epoch": 6.722222222222222, "percentage": 96.03, "elapsed_time": "2:23:31", "remaining_time": "0:05:55"}
|
||||||
|
{"current_steps": 610, "total_steps": 630, "loss": 0.3797, "lr": 1.3523284516113955e-07, "epoch": 6.777777777777778, "percentage": 96.83, "elapsed_time": "2:24:39", "remaining_time": "0:04:44"}
|
||||||
|
{"current_steps": 615, "total_steps": 630, "loss": 0.3783, "lr": 7.853968289363245e-08, "epoch": 6.833333333333333, "percentage": 97.62, "elapsed_time": "2:25:46", "remaining_time": "0:03:33"}
|
||||||
|
{"current_steps": 620, "total_steps": 630, "loss": 0.3762, "lr": 3.7135096408631443e-08, "epoch": 6.888888888888889, "percentage": 98.41, "elapsed_time": "2:26:53", "remaining_time": "0:02:22"}
|
||||||
|
{"current_steps": 625, "total_steps": 630, "loss": 0.3821, "lr": 1.1050861346488806e-08, "epoch": 6.944444444444445, "percentage": 99.21, "elapsed_time": "2:28:05", "remaining_time": "0:01:11"}
|
||||||
|
{"current_steps": 630, "total_steps": 630, "loss": 0.3799, "lr": 3.069958583856725e-10, "epoch": 7.0, "percentage": 100.0, "elapsed_time": "2:29:12", "remaining_time": "0:00:00"}
|
||||||
|
{"current_steps": 630, "total_steps": 630, "epoch": 7.0, "percentage": 100.0, "elapsed_time": "2:29:22", "remaining_time": "0:00:00"}
|
||||||
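trainer_log.jsonl records one entry every five optimizer steps (loss, learning rate, epoch, and ETA) plus a final summary line without a loss; the loss falls from roughly 1.0 to 0.38 over 630 steps (7 epochs). A minimal sketch of how such a log could be turned into a loss curve, presumably similar in spirit to training_loss.png below, though the actual plotting code is not part of this commit:

```python
import json
import matplotlib.pyplot as plt

steps, losses = [], []
with open("trainer_log.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        if "loss" in rec:              # the final summary entry has no loss field
            steps.append(rec["current_steps"])
            losses.append(rec["loss"])

plt.plot(steps, losses)
plt.xlabel("step")
plt.ylabel("training loss")
plt.title("SFT loss over 630 steps")
plt.savefig("training_loss_reproduced.png", dpi=150)
```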
1433
trainer_state.json
Normal file
File diff suppressed because it is too large
3
training_args.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9797f294ad3109236a581a3db1f3357bc3a302b0c5e3b83662d09b499ee6d34a
size 8721
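training_args.bin is stored as a Git LFS pointer (8.7 KB object). Assuming it follows the standard Hugging Face Trainer convention, it is a pickled TrainingArguments instance, so after `git lfs pull` it can be deserialized with torch; a hedged sketch (unpickling requires trusting the file, and recent torch versions need weights_only disabled):

```python
import torch

# Requires the real LFS object (run `git lfs pull` first) and transformers
# installed, since the pickle references the TrainingArguments class.
args = torch.load("training_args.bin", weights_only=False)

print(args.num_train_epochs, args.learning_rate, args.per_device_train_batch_size)
```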
BIN
training_loss.png
Normal file
Binary file not shown. (PNG image, 38 KiB)
1
vocab.json
Normal file
File diff suppressed because one or more lines are too long