Initialize project; model provided by the ModelHub XC community
Model: thetmon/c20 Source: Original Platform
202  checkpoint-1000/README.md  Normal file
@@ -0,0 +1,202 @@
---
base_model: unsloth/Qwen3-4B-Instruct-2507
library_name: peft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.13.2
38  checkpoint-1000/adapter_config.json  Normal file
@@ -0,0 +1,38 @@
{
  "alpha_pattern": {},
  "auto_mapping": {
    "base_model_class": "Qwen3ForCausalLM",
    "parent_library": "transformers.models.qwen3.modeling_qwen3",
    "unsloth_fixed": true
  },
  "base_model_name_or_path": "unsloth/Qwen3-4B-Instruct-2507",
  "bias": "none",
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 128,
  "lora_dropout": 0.0,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "r": 64,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "down_proj",
    "k_proj",
    "q_proj",
    "gate_proj",
    "o_proj",
    "v_proj",
    "up_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": false,
  "use_rslora": false
}
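The adapter config above trains rank-64 LoRA factors on seven projection matrices per layer. As a rough sketch of what that implies for trainable-parameter count: LoRA replaces a frozen weight update for a matrix of shape (out, in) with two factors B (out × r) and A (r × in), so each targeted matrix contributes r × (out + in) trainable parameters. The layer shapes below are hypothetical placeholders for illustration, not the actual Qwen3-4B dimensions; only r = 64 comes from `adapter_config.json`.

```python
# LoRA parameter-count sketch for the adapter config above.
# Shapes are ILLUSTRATIVE PLACEHOLDERS, not real Qwen3-4B dimensions;
# only r=64 is taken from adapter_config.json.

R = 64  # "r" from adapter_config.json

# hypothetical (out_features, in_features) for each targeted module
module_shapes = {
    "q_proj": (2048, 2048),
    "k_proj": (512, 2048),
    "v_proj": (512, 2048),
    "o_proj": (2048, 2048),
    "gate_proj": (8192, 2048),
    "up_proj": (8192, 2048),
    "down_proj": (2048, 8192),
}

def lora_params(out_f: int, in_f: int, r: int = R) -> int:
    # LoRA factorizes the update as B @ A with B: (out, r), A: (r, in),
    # so the trainable count per matrix is r * (out + in).
    return r * (out_f + in_f)

per_layer = sum(lora_params(o, i) for o, i in module_shapes.values())
print(f"trainable LoRA params per layer (hypothetical shapes): {per_layer:,}")
```

Note how the count scales linearly with r: doubling r to 128 would double the adapter size while the frozen base weights stay untouched.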
3  checkpoint-1000/adapter_model.safetensors  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:41f724b734ca99c054cb3a05c3d9aee2aac5801ae960d29d868df8765f2461cc
size 528550256
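The safetensors weights (and the other large binaries in this commit) are checked in as Git LFS pointer files in the three-line key/value format shown above; the actual object lives in LFS storage and is addressed by its content hash. A minimal sketch of parsing such a pointer:

```python
# Minimal parser for the Git LFS pointer format shown above
# (spec v1: "key value" lines for version, oid, and size).

POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:41f724b734ca99c054cb3a05c3d9aee2aac5801ae960d29d868df8765f2461cc
size 528550256
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each non-empty line is "key value"; split once on the first space.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "oid_algo": algo,             # hash algorithm, here sha256
        "oid": digest,                # content hash of the real file
        "size": int(fields["size"]),  # real file size in bytes
    }

info = parse_lfs_pointer(POINTER)
print(f"{info['size'] / 1e6:.1f} MB object, {info['oid_algo']}:{info['oid'][:12]}")
```

The ~529 MB size is the adapter weights themselves; cloning this repo without `git lfs` installed yields only these small pointer files.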
28  checkpoint-1000/added_tokens.json  Normal file
@@ -0,0 +1,28 @@
{
  "</think>": 151668,
  "</tool_call>": 151658,
  "</tool_response>": 151666,
  "<think>": 151667,
  "<tool_call>": 151657,
  "<tool_response>": 151665,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
86  checkpoint-1000/chat_template.jinja  Normal file
@@ -0,0 +1,86 @@
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0].role == 'system' %}
{{- messages[0].content + '\n\n' }}
{%- endif %}
{{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0].role == 'system' %}
{{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for message in messages[::-1] %}
{%- set index = (messages|length - 1) - loop.index0 %}
{%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
{%- set ns.multi_step_tool = false %}
{%- set ns.last_query_index = index %}
{%- endif %}
{%- endfor %}
{%- for message in messages %}
{%- if message.content is string %}
{%- set content = message.content %}
{%- else %}
{%- set content = '' %}
{%- endif %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
{{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{%- set reasoning_content = '' %}
{%- if message.reasoning_content is string %}
{%- set reasoning_content = message.reasoning_content %}
{%- else %}
{%- if '</think>' in content %}
{%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
{%- set content = content.split('</think>')[-1].lstrip('\n') %}
{%- endif %}
{%- endif %}
{%- if loop.index0 > ns.last_query_index %}
{%- if loop.last or (not loop.last and reasoning_content) %}
{{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- if message.tool_calls %}
{%- for tool_call in message.tool_calls %}
{%- if (loop.first and content) or (not loop.first) %}
{{- '\n' }}
{%- endif %}
{%- if tool_call.function %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{%- if tool_call.arguments is string %}
{{- tool_call.arguments }}
{%- else %}
{{- tool_call.arguments | tojson }}
{%- endif %}
{{- '}\n</tool_call>' }}
{%- endfor %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
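For the simple case (no tools, no tool responses, no `<think>` content), the template above reduces to ChatML-style framing: each message wrapped in `<|im_start|>role ... <|im_end|>`, with an open assistant turn appended when generation is requested. A stdlib-only sketch of that reduced path (an approximation for illustration, not a substitute for rendering the actual Jinja template through the tokenizer):

```python
# Simplified sketch of the chat template's non-tool path: ChatML framing
# with <|im_start|>role ... <|im_end|> markers. This intentionally skips
# the tool-call and <think>-splitting branches handled by the real template.

def render_chatml(messages, add_generation_prompt=True):
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open an assistant turn for the model to complete.
        out.append("<|im_start|>assistant\n")
    return "".join(out)

prompt = render_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi!"},
])
print(prompt)
```

In practice one would call the tokenizer's `apply_chat_template`, which executes the Jinja file above, including the tool and reasoning branches this sketch omits.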
151388  checkpoint-1000/merges.txt  Normal file
File diff suppressed because it is too large
3  checkpoint-1000/optimizer.pt  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e93bfceb31d7bc7a77b895f7e5004d2726558c1aae54c9b10415627c5337fe24
size 1057397963
3  checkpoint-1000/rng_state.pth  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:791e306163214f06c6b349951ec087c99f0dab0a96081fb6bf886a2d1885fbb2
size 14645
3  checkpoint-1000/scheduler.pt  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:827163a89a932257954b4cfff430b0d817ffc0874b3ededec16d3d23b14a5e93
size 1465
31  checkpoint-1000/special_tokens_map.json  Normal file
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|vision_pad|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
3  checkpoint-1000/tokenizer.json  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
size 11422654
240  checkpoint-1000/tokenizer_config.json  Normal file
@@ -0,0 +1,240 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151646": {
      "content": "<|object_ref_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151647": {
      "content": "<|object_ref_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151648": {
      "content": "<|box_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151649": {
      "content": "<|box_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151650": {
      "content": "<|quad_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151651": {
      "content": "<|quad_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151652": {
      "content": "<|vision_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151653": {
      "content": "<|vision_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151654": {
      "content": "<|vision_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151655": {
      "content": "<|image_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151656": {
      "content": "<|video_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151657": {
      "content": "<tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151658": {
      "content": "</tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151659": {
      "content": "<|fim_prefix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151660": {
      "content": "<|fim_middle|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151661": {
      "content": "<|fim_suffix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151662": {
      "content": "<|fim_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151663": {
      "content": "<|repo_name|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151664": {
      "content": "<|file_sep|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151665": {
      "content": "<tool_response>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151666": {
      "content": "</tool_response>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151667": {
      "content": "<think>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151668": {
      "content": "</think>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    }
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "bos_token": null,
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 262144,
  "pad_token": "<|vision_pad|>",
  "padding_side": "right",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
998  checkpoint-1000/trainer_state.json  Normal file
@@ -0,0 +1,998 @@
{
  "best_global_step": null,
  "best_metric": null,
  "best_model_checkpoint": null,
  "epoch": 1.8328365053886724,
  "eval_steps": 30,
  "global_step": 1000,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    {
      "epoch": 0.018344416418252695,
      "grad_norm": 2.4060168266296387,
      "learning_rate": 3.272727272727273e-05,
      "loss": 2.2041,
      "step": 10
    },
    {
      "epoch": 0.03668883283650539,
      "grad_norm": 0.6585841774940491,
      "learning_rate": 6.90909090909091e-05,
      "loss": 0.6873,
      "step": 20
    },
    {
      "epoch": 0.05503324925475808,
      "grad_norm": 0.5287330150604248,
      "learning_rate": 0.00010545454545454545,
      "loss": 0.4392,
      "step": 30
    },
    {
      "epoch": 0.05503324925475808,
      "eval_loss": 0.40955930948257446,
      "eval_runtime": 55.5305,
      "eval_samples_per_second": 4.142,
      "eval_steps_per_second": 4.142,
      "step": 30
    },
    {
      "epoch": 0.07337766567301078,
      "grad_norm": 0.6942081451416016,
      "learning_rate": 0.00014181818181818184,
      "loss": 0.3259,
      "step": 40
    },
    {
      "epoch": 0.09172208209126347,
      "grad_norm": 0.5697287917137146,
      "learning_rate": 0.0001781818181818182,
      "loss": 0.2795,
      "step": 50
    },
    {
      "epoch": 0.11006649850951616,
      "grad_norm": 0.4920896887779236,
      "learning_rate": 0.0001999926577882564,
      "loss": 0.2213,
      "step": 60
    },
    {
      "epoch": 0.11006649850951616,
      "eval_loss": 0.22321809828281403,
      "eval_runtime": 54.872,
      "eval_samples_per_second": 4.192,
      "eval_steps_per_second": 4.192,
      "step": 60
    },
    {
      "epoch": 0.12841091492776885,
      "grad_norm": 0.3659333884716034,
      "learning_rate": 0.00019991007028765122,
      "loss": 0.2043,
      "step": 70
    },
    {
      "epoch": 0.14675533134602156,
      "grad_norm": 0.2682117223739624,
      "learning_rate": 0.0001997357935664527,
      "loss": 0.1995,
      "step": 80
    },
    {
      "epoch": 0.16509974776427425,
      "grad_norm": 0.265464186668396,
      "learning_rate": 0.0001994699875614589,
      "loss": 0.1663,
      "step": 90
    },
    {
      "epoch": 0.16509974776427425,
      "eval_loss": 0.18330270051956177,
      "eval_runtime": 54.8391,
      "eval_samples_per_second": 4.194,
      "eval_steps_per_second": 4.194,
      "step": 90
    },
    {
      "epoch": 0.18344416418252693,
      "grad_norm": 0.4005824029445648,
      "learning_rate": 0.000199112896207494,
      "loss": 0.1586,
      "step": 100
    },
    {
      "epoch": 0.20178858060077964,
      "grad_norm": 0.2857477366924286,
      "learning_rate": 0.00019866484721354499,
      "loss": 0.1596,
      "step": 110
    },
    {
      "epoch": 0.22013299701903233,
      "grad_norm": 0.18151400983333588,
      "learning_rate": 0.00019812625176201745,
      "loss": 0.1597,
      "step": 120
    },
    {
      "epoch": 0.22013299701903233,
      "eval_loss": 0.15281935036182404,
      "eval_runtime": 55.1288,
      "eval_samples_per_second": 4.172,
      "eval_steps_per_second": 4.172,
      "step": 120
    },
    {
      "epoch": 0.238477413437285,
      "grad_norm": 0.2823415696620941,
      "learning_rate": 0.00019749760413138626,
      "loss": 0.16,
      "step": 130
    },
    {
      "epoch": 0.2568218298555377,
      "grad_norm": 0.1662892997264862,
      "learning_rate": 0.00019677948124258748,
      "loss": 0.1453,
      "step": 140
    },
    {
      "epoch": 0.27516624627379044,
      "grad_norm": 0.18668028712272644,
      "learning_rate": 0.00019597254212956822,
      "loss": 0.144,
      "step": 150
    },
    {
      "epoch": 0.27516624627379044,
      "eval_loss": 0.14610502123832703,
      "eval_runtime": 54.955,
      "eval_samples_per_second": 4.185,
      "eval_steps_per_second": 4.185,
      "step": 150
    },
    {
      "epoch": 0.2935106626920431,
      "grad_norm": 0.19153951108455658,
      "learning_rate": 0.0001950775273344792,
      "loss": 0.1508,
      "step": 160
    },
    {
      "epoch": 0.3118550791102958,
      "grad_norm": 0.5208008885383606,
      "learning_rate": 0.00019409525822806662,
      "loss": 0.1332,
      "step": 170
    },
    {
      "epoch": 0.3301994955285485,
      "grad_norm": 0.20292238891124725,
      "learning_rate": 0.00019302663625588563,
      "loss": 0.1368,
      "step": 180
    },
    {
      "epoch": 0.3301994955285485,
      "eval_loss": 0.1378733068704605,
      "eval_runtime": 55.012,
      "eval_samples_per_second": 4.181,
      "eval_steps_per_second": 4.181,
      "step": 180
    },
    {
      "epoch": 0.3485439119468012,
      "grad_norm": 0.15841814875602722,
      "learning_rate": 0.0001918726421110282,
      "loss": 0.1376,
      "step": 190
    },
    {
      "epoch": 0.36688832836505386,
      "grad_norm": 0.1384090781211853,
      "learning_rate": 0.00019063433483412347,
      "loss": 0.1382,
      "step": 200
    },
    {
      "epoch": 0.3852327447833066,
      "grad_norm": 0.14293262362480164,
      "learning_rate": 0.00018931285084143818,
      "loss": 0.1328,
      "step": 210
    },
    {
      "epoch": 0.3852327447833066,
      "eval_loss": 0.134722039103508,
      "eval_runtime": 55.0139,
      "eval_samples_per_second": 4.181,
      "eval_steps_per_second": 4.181,
      "step": 210
    },
    {
      "epoch": 0.4035771612015593,
      "grad_norm": 0.2324746549129486,
      "learning_rate": 0.00018790940288196715,
      "loss": 0.135,
      "step": 220
    },
    {
      "epoch": 0.42192157761981197,
      "grad_norm": 0.1578933596611023,
      "learning_rate": 0.00018642527892447243,
      "loss": 0.1253,
      "step": 230
    },
    {
      "epoch": 0.44026599403806466,
      "grad_norm": 0.20755188167095184,
      "learning_rate": 0.00018486184097549186,
      "loss": 0.1399,
      "step": 240
    },
    {
      "epoch": 0.44026599403806466,
      "eval_loss": 0.1301199346780777,
      "eval_runtime": 55.1504,
      "eval_samples_per_second": 4.17,
      "eval_steps_per_second": 4.17,
      "step": 240
    },
    {
      "epoch": 0.45861041045631734,
      "grad_norm": 0.1697942614555359,
      "learning_rate": 0.0001832205238294018,
      "loss": 0.1229,
      "step": 250
    },
    {
      "epoch": 0.47695482687457,
      "grad_norm": 0.10918751358985901,
      "learning_rate": 0.00018150283375168114,
      "loss": 0.1243,
      "step": 260
    },
    {
      "epoch": 0.49529924329282277,
      "grad_norm": 0.4525628089904785,
      "learning_rate": 0.0001797103470965852,
      "loss": 0.1351,
      "step": 270
    },
    {
      "epoch": 0.49529924329282277,
      "eval_loss": 0.12848101556301117,
      "eval_runtime": 55.622,
      "eval_samples_per_second": 4.135,
      "eval_steps_per_second": 4.135,
      "step": 270
    },
    {
      "epoch": 0.5136436597110754,
      "grad_norm": 0.17496690154075623,
      "learning_rate": 0.00017784470886049783,
      "loss": 0.1329,
      "step": 280
    },
    {
      "epoch": 0.5319880761293282,
      "grad_norm": 0.14707504212856293,
      "learning_rate": 0.00017590763117228934,
      "loss": 0.1317,
      "step": 290
    },
    {
      "epoch": 0.5503324925475809,
      "grad_norm": 0.15233491361141205,
      "learning_rate": 0.00017390089172206592,
      "loss": 0.1353,
      "step": 300
    },
    {
      "epoch": 0.5503324925475809,
      "eval_loss": 0.12714248895645142,
      "eval_runtime": 55.4821,
      "eval_samples_per_second": 4.145,
      "eval_steps_per_second": 4.145,
      "step": 300
    },
    {
      "epoch": 0.5686769089658336,
      "grad_norm": 0.20287233591079712,
      "learning_rate": 0.0001718263321297523,
      "loss": 0.1273,
      "step": 310
    },
    {
      "epoch": 0.5870213253840862,
      "grad_norm": 0.2743270993232727,
      "learning_rate": 0.00016968585625500498,
      "loss": 0.1373,
      "step": 320
    },
    {
      "epoch": 0.6053657418023389,
      "grad_norm": 0.40428221225738525,
      "learning_rate": 0.0001674814284500068,
      "loss": 0.1292,
      "step": 330
    },
    {
      "epoch": 0.6053657418023389,
      "eval_loss": 0.1255505383014679,
      "eval_runtime": 55.4814,
      "eval_samples_per_second": 4.146,
      "eval_steps_per_second": 4.146,
      "step": 330
    },
    {
      "epoch": 0.6237101582205916,
      "grad_norm": 0.3385097086429596,
      "learning_rate": 0.00016521507175674643,
      "loss": 0.1399,
      "step": 340
    },
    {
      "epoch": 0.6420545746388443,
      "grad_norm": 0.2672514319419861,
      "learning_rate": 0.00016288886605043764,
      "loss": 0.1345,
      "step": 350
    },
    {
      "epoch": 0.660398991057097,
      "grad_norm": 0.16421453654766083,
      "learning_rate": 0.0001605049461307812,
      "loss": 0.1278,
      "step": 360
    },
    {
      "epoch": 0.660398991057097,
      "eval_loss": 0.1262081414461136,
      "eval_runtime": 55.639,
      "eval_samples_per_second": 4.134,
      "eval_steps_per_second": 4.134,
      "step": 360
    },
    {
      "epoch": 0.6787434074753497,
      "grad_norm": 0.18313640356063843,
      "learning_rate": 0.00015806549976282182,
      "loss": 0.1269,
      "step": 370
    },
    {
      "epoch": 0.6970878238936024,
      "grad_norm": 0.2421526312828064,
      "learning_rate": 0.00015557276566919784,
      "loss": 0.1352,
      "step": 380
    },
    {
      "epoch": 0.715432240311855,
      "grad_norm": 0.11791064590215683,
      "learning_rate": 0.0001530290314756265,
      "loss": 0.1206,
      "step": 390
    },
    {
      "epoch": 0.715432240311855,
      "eval_loss": 0.12255965173244476,
      "eval_runtime": 55.4524,
      "eval_samples_per_second": 4.148,
      "eval_steps_per_second": 4.148,
      "step": 390
    },
    {
      "epoch": 0.7337766567301077,
      "grad_norm": 0.10551954060792923,
      "learning_rate": 0.00015043663161150937,
      "loss": 0.117,
      "step": 400
    },
    {
      "epoch": 0.7521210731483605,
      "grad_norm": 0.11994520574808121,
      "learning_rate": 0.0001477979451675861,
      "loss": 0.1266,
      "step": 410
    },
    {
      "epoch": 0.7704654895666132,
      "grad_norm": 0.11859820783138275,
      "learning_rate": 0.00014511539371260074,
      "loss": 0.1313,
      "step": 420
    },
    {
      "epoch": 0.7704654895666132,
      "eval_loss": 0.12076492607593536,
      "eval_runtime": 55.5451,
      "eval_samples_per_second": 4.141,
      "eval_steps_per_second": 4.141,
      "step": 420
    },
    {
      "epoch": 0.7888099059848659,
      "grad_norm": 0.09068579971790314,
      "learning_rate": 0.0001423914390709861,
      "loss": 0.1272,
      "step": 430
    },
    {
      "epoch": 0.8071543224031186,
      "grad_norm": 0.13292035460472107,
      "learning_rate": 0.00013962858106360398,
      "loss": 0.1346,
      "step": 440
    },
    {
      "epoch": 0.8254987388213713,
      "grad_norm": 0.2738932967185974,
      "learning_rate": 0.00013682935521361627,
      "loss": 0.1221,
      "step": 450
    },
    {
      "epoch": 0.8254987388213713,
      "eval_loss": 0.11899405717849731,
      "eval_runtime": 55.8291,
      "eval_samples_per_second": 4.12,
      "eval_steps_per_second": 4.12,
      "step": 450
    },
    {
      "epoch": 0.8438431552396239,
      "grad_norm": 0.09868068993091583,
      "learning_rate": 0.00013399633041959047,
      "loss": 0.1215,
      "step": 460
    },
    {
      "epoch": 0.8621875716578766,
      "grad_norm": 0.08525373786687851,
      "learning_rate": 0.00013113210659797687,
      "loss": 0.123,
      "step": 470
    },
    {
      "epoch": 0.8805319880761293,
      "grad_norm": 0.08514482527971268,
      "learning_rate": 0.00012823931229711944,
      "loss": 0.1301,
      "step": 480
    },
    {
      "epoch": 0.8805319880761293,
      "eval_loss": 0.11885283887386322,
      "eval_runtime": 55.4431,
|
||||
"eval_samples_per_second": 4.148,
|
||||
"eval_steps_per_second": 4.148,
|
||||
"step": 480
|
||||
},
|
||||
{
|
||||
"epoch": 0.898876404494382,
|
||||
"grad_norm": 0.09567002952098846,
|
||||
"learning_rate": 0.00012532060228499136,
|
||||
"loss": 0.1202,
|
||||
"step": 490
|
||||
},
|
||||
{
|
||||
"epoch": 0.9172208209126347,
|
||||
"grad_norm": 0.11083484441041946,
|
||||
"learning_rate": 0.00012237865511286746,
|
||||
"loss": 0.1189,
|
||||
"step": 500
|
||||
},
|
||||
{
|
||||
"epoch": 0.9355652373308874,
|
||||
"grad_norm": 0.10928696393966675,
|
||||
"learning_rate": 0.00011941617065717124,
|
||||
"loss": 0.127,
|
||||
"step": 510
|
||||
},
|
||||
{
|
||||
"epoch": 0.9355652373308874,
|
||||
"eval_loss": 0.11898388713598251,
|
||||
"eval_runtime": 55.5894,
|
||||
"eval_samples_per_second": 4.137,
|
||||
"eval_steps_per_second": 4.137,
|
||||
"step": 510
|
||||
},
|
||||
{
|
||||
"epoch": 0.95390965374914,
|
||||
"grad_norm": 0.10917045921087265,
|
||||
"learning_rate": 0.00011643586764175092,
|
||||
"loss": 0.1203,
|
||||
"step": 520
|
||||
},
|
||||
{
|
||||
"epoch": 0.9722540701673928,
|
||||
"grad_norm": 0.11703202873468399,
|
||||
"learning_rate": 0.00011344048114285882,
|
||||
"loss": 0.1265,
|
||||
"step": 530
|
||||
},
|
||||
{
|
||||
"epoch": 0.9905984865856455,
|
||||
"grad_norm": 0.08545742928981781,
|
||||
"learning_rate": 0.00011043276007912413,
|
||||
"loss": 0.1194,
|
||||
"step": 540
|
||||
},
|
||||
{
|
||||
"epoch": 0.9905984865856455,
|
||||
"eval_loss": 0.11826686561107635,
|
||||
"eval_runtime": 55.4536,
|
||||
"eval_samples_per_second": 4.148,
|
||||
"eval_steps_per_second": 4.148,
|
||||
"step": 540
|
||||
},
|
||||
{
|
||||
"epoch": 1.0073377665673011,
|
||||
"grad_norm": 0.14749501645565033,
|
||||
"learning_rate": 0.00010741546468882223,
|
||||
"loss": 0.1094,
|
||||
"step": 550
|
||||
},
|
||||
{
|
||||
"epoch": 1.0256821829855538,
|
||||
"grad_norm": 0.08938182145357132,
|
||||
"learning_rate": 0.00010439136399675542,
|
||||
"loss": 0.1123,
|
||||
"step": 560
|
||||
},
|
||||
{
|
||||
"epoch": 1.0440265994038065,
|
||||
"grad_norm": 0.16764287650585175,
|
||||
"learning_rate": 0.00010136323327307075,
|
||||
"loss": 0.1301,
|
||||
"step": 570
|
||||
},
|
||||
{
|
||||
"epoch": 1.0440265994038065,
|
||||
"eval_loss": 0.11838380247354507,
|
||||
"eval_runtime": 55.6355,
|
||||
"eval_samples_per_second": 4.134,
|
||||
"eval_steps_per_second": 4.134,
|
||||
"step": 570
|
||||
},
|
||||
{
|
||||
"epoch": 1.0623710158220592,
|
||||
"grad_norm": 0.07769570499658585,
|
||||
"learning_rate": 9.833385148634574e-05,
|
||||
"loss": 0.1194,
|
||||
"step": 580
|
||||
},
|
||||
{
|
||||
"epoch": 1.0807154322403119,
|
||||
"grad_norm": 0.08358050137758255,
|
||||
"learning_rate": 9.53059987532804e-05,
|
||||
"loss": 0.1187,
|
||||
"step": 590
|
||||
},
|
||||
{
|
||||
"epoch": 1.0990598486585645,
|
||||
"grad_norm": 0.10176610946655273,
|
||||
"learning_rate": 9.228245378733537e-05,
|
||||
"loss": 0.1087,
|
||||
"step": 600
|
||||
},
|
||||
{
|
||||
"epoch": 1.0990598486585645,
|
||||
"eval_loss": 0.11805912852287292,
|
||||
"eval_runtime": 55.6508,
|
||||
"eval_samples_per_second": 4.133,
|
||||
"eval_steps_per_second": 4.133,
|
||||
"step": 600
|
||||
},
|
||||
{
|
||||
"epoch": 1.1174042650768172,
|
||||
"grad_norm": 0.09811729192733765,
|
||||
"learning_rate": 8.926599134865808e-05,
|
||||
"loss": 0.1267,
|
||||
"step": 610
|
||||
},
|
||||
{
|
||||
"epoch": 1.13574868149507,
|
||||
"grad_norm": 0.08882371336221695,
|
||||
"learning_rate": 8.625937969763662e-05,
|
||||
"loss": 0.1291,
|
||||
"step": 620
|
||||
},
|
||||
{
|
||||
"epoch": 1.1540930979133226,
|
||||
"grad_norm": 0.07570777833461761,
|
||||
"learning_rate": 8.326537805441884e-05,
|
||||
"loss": 0.1182,
|
||||
"step": 630
|
||||
},
|
||||
{
|
||||
"epoch": 1.1540930979133226,
|
||||
"eval_loss": 0.11645928770303726,
|
||||
"eval_runtime": 55.7654,
|
||||
"eval_samples_per_second": 4.124,
|
||||
"eval_steps_per_second": 4.124,
|
||||
"step": 630
|
||||
},
|
||||
{
|
||||
"epoch": 1.1724375143315753,
|
||||
"grad_norm": 0.06548488140106201,
|
||||
"learning_rate": 8.028673406672763e-05,
|
||||
"loss": 0.1148,
|
||||
"step": 640
|
||||
},
|
||||
{
|
||||
"epoch": 1.190781930749828,
|
||||
"grad_norm": 0.07197605818510056,
|
||||
"learning_rate": 7.732618128829656e-05,
|
||||
"loss": 0.1204,
|
||||
"step": 650
|
||||
},
|
||||
{
|
||||
"epoch": 1.2091263471680807,
|
||||
"grad_norm": 0.0930318832397461,
|
||||
"learning_rate": 7.438643667023979e-05,
|
||||
"loss": 0.1157,
|
||||
"step": 660
|
||||
},
|
||||
{
|
||||
"epoch": 1.2091263471680807,
|
||||
"eval_loss": 0.11639692634344101,
|
||||
"eval_runtime": 55.6937,
|
||||
"eval_samples_per_second": 4.13,
|
||||
"eval_steps_per_second": 4.13,
|
||||
"step": 660
|
||||
},
|
||||
{
|
||||
"epoch": 1.2274707635863333,
|
||||
"grad_norm": 0.07502172142267227,
|
||||
"learning_rate": 7.147019806765836e-05,
|
||||
"loss": 0.1194,
|
||||
"step": 670
|
||||
},
|
||||
{
|
||||
"epoch": 1.245815180004586,
|
||||
"grad_norm": 0.06672611832618713,
|
||||
"learning_rate": 6.858014176377139e-05,
|
||||
"loss": 0.119,
|
||||
"step": 680
|
||||
},
|
||||
{
|
||||
"epoch": 1.264159596422839,
|
||||
"grad_norm": 0.07496988028287888,
|
||||
"learning_rate": 6.57189200138442e-05,
|
||||
"loss": 0.1162,
|
||||
"step": 690
|
||||
},
|
||||
{
|
||||
"epoch": 1.264159596422839,
|
||||
"eval_loss": 0.1162952408194542,
|
||||
"eval_runtime": 55.495,
|
||||
"eval_samples_per_second": 4.145,
|
||||
"eval_steps_per_second": 4.145,
|
||||
"step": 690
|
||||
},
|
||||
{
|
||||
"epoch": 1.2825040128410916,
|
||||
"grad_norm": 0.08055031299591064,
|
||||
"learning_rate": 6.288915861116706e-05,
|
||||
"loss": 0.1193,
|
||||
"step": 700
|
||||
},
|
||||
{
|
||||
"epoch": 1.3008484292593443,
|
||||
"grad_norm": 0.0835222527384758,
|
||||
"learning_rate": 6.009345447731886e-05,
|
||||
"loss": 0.1166,
|
||||
"step": 710
|
||||
},
|
||||
{
|
||||
"epoch": 1.319192845677597,
|
||||
"grad_norm": 0.16637521982192993,
|
||||
"learning_rate": 5.733437327892661e-05,
|
||||
"loss": 0.1205,
|
||||
"step": 720
|
||||
},
|
||||
{
|
||||
"epoch": 1.319192845677597,
|
||||
"eval_loss": 0.11508560180664062,
|
||||
"eval_runtime": 55.8161,
|
||||
"eval_samples_per_second": 4.121,
|
||||
"eval_steps_per_second": 4.121,
|
||||
"step": 720
|
||||
},
|
||||
{
|
||||
"epoch": 1.3375372620958497,
|
||||
"grad_norm": 0.07946062088012695,
|
||||
"learning_rate": 5.4614447073108375e-05,
|
||||
"loss": 0.1143,
|
||||
"step": 730
|
||||
},
|
||||
{
|
||||
"epoch": 1.3558816785141024,
|
||||
"grad_norm": 0.09776254743337631,
|
||||
"learning_rate": 5.193617198376004e-05,
|
||||
"loss": 0.1214,
|
||||
"step": 740
|
||||
},
|
||||
{
|
||||
"epoch": 1.374226094932355,
|
||||
"grad_norm": 0.13098250329494476,
|
||||
"learning_rate": 4.930200591081865e-05,
|
||||
"loss": 0.1159,
|
||||
"step": 750
|
||||
},
|
||||
{
|
||||
"epoch": 1.374226094932355,
|
||||
"eval_loss": 0.1152450293302536,
|
||||
"eval_runtime": 55.5887,
|
||||
"eval_samples_per_second": 4.138,
|
||||
"eval_steps_per_second": 4.138,
|
||||
"step": 750
|
||||
},
|
||||
{
|
||||
"epoch": 1.3925705113506077,
|
||||
"grad_norm": 0.07567308843135834,
|
||||
"learning_rate": 4.671436627460479e-05,
|
||||
"loss": 0.1178,
|
||||
"step": 760
|
||||
},
|
||||
{
|
||||
"epoch": 1.4109149277688604,
|
||||
"grad_norm": 0.09156125038862228,
|
||||
"learning_rate": 4.417562779731355e-05,
|
||||
"loss": 0.1157,
|
||||
"step": 770
|
||||
},
|
||||
{
|
||||
"epoch": 1.429259344187113,
|
||||
"grad_norm": 0.09289383143186569,
|
||||
"learning_rate": 4.168812032369026e-05,
|
||||
"loss": 0.12,
|
||||
"step": 780
|
||||
},
|
||||
{
|
||||
"epoch": 1.429259344187113,
|
||||
"eval_loss": 0.11563212424516678,
|
||||
"eval_runtime": 55.559,
|
||||
"eval_samples_per_second": 4.14,
|
||||
"eval_steps_per_second": 4.14,
|
||||
"step": 780
|
||||
},
|
||||
{
|
||||
"epoch": 1.4476037606053658,
|
||||
"grad_norm": 0.07849477976560593,
|
||||
"learning_rate": 3.9254126682891425e-05,
|
||||
"loss": 0.1205,
|
||||
"step": 790
|
||||
},
|
||||
{
|
||||
"epoch": 1.4659481770236185,
|
||||
"grad_norm": 0.09009351581335068,
|
||||
"learning_rate": 3.68758805934923e-05,
|
||||
"loss": 0.1188,
|
||||
"step": 800
|
||||
},
|
||||
{
|
||||
"epoch": 1.4842925934418711,
|
||||
"grad_norm": 0.0810457393527031,
|
||||
"learning_rate": 3.455556461356413e-05,
|
||||
"loss": 0.1199,
|
||||
"step": 810
|
||||
},
|
||||
{
|
||||
"epoch": 1.4842925934418711,
|
||||
"eval_loss": 0.11508457362651825,
|
||||
"eval_runtime": 55.6047,
|
||||
"eval_samples_per_second": 4.136,
|
||||
"eval_steps_per_second": 4.136,
|
||||
"step": 810
|
||||
},
|
||||
{
|
||||
"epoch": 1.5026370098601238,
|
||||
"grad_norm": 0.07768921554088593,
|
||||
"learning_rate": 3.229530813770281e-05,
|
||||
"loss": 0.1109,
|
||||
"step": 820
|
||||
},
|
||||
{
|
||||
"epoch": 1.5209814262783765,
|
||||
"grad_norm": 0.08873719722032547,
|
||||
"learning_rate": 3.0097185442845653e-05,
|
||||
"loss": 0.1141,
|
||||
"step": 830
|
||||
},
|
||||
{
|
||||
"epoch": 1.5393258426966292,
|
||||
"grad_norm": 0.12609460949897766,
|
||||
"learning_rate": 2.796321378467146e-05,
|
||||
"loss": 0.1244,
|
||||
"step": 840
|
||||
},
|
||||
{
|
||||
"epoch": 1.5393258426966292,
|
||||
"eval_loss": 0.11498970538377762,
|
||||
"eval_runtime": 55.7795,
|
||||
"eval_samples_per_second": 4.123,
|
||||
"eval_steps_per_second": 4.123,
|
||||
"step": 840
|
||||
},
|
||||
{
|
||||
"epoch": 1.5576702591148819,
|
||||
"grad_norm": 0.07744992524385452,
|
||||
"learning_rate": 2.5895351546329717e-05,
|
||||
"loss": 0.1121,
|
||||
"step": 850
|
||||
},
|
||||
{
|
||||
"epoch": 1.5760146755331346,
|
||||
"grad_norm": 0.084147609770298,
|
||||
"learning_rate": 2.3895496441197806e-05,
|
||||
"loss": 0.1177,
|
||||
"step": 860
|
||||
},
|
||||
{
|
||||
"epoch": 1.5943590919513873,
|
||||
"grad_norm": 0.08521833270788193,
|
||||
"learning_rate": 2.1965483771316498e-05,
|
||||
"loss": 0.1223,
|
||||
"step": 870
|
||||
},
|
||||
{
|
||||
"epoch": 1.5943590919513873,
|
||||
"eval_loss": 0.11485826224088669,
|
||||
"eval_runtime": 55.6756,
|
||||
"eval_samples_per_second": 4.131,
|
||||
"eval_steps_per_second": 4.131,
|
||||
"step": 870
|
||||
},
|
||||
{
|
||||
"epoch": 1.61270350836964,
|
||||
"grad_norm": 0.08053518086671829,
|
||||
"learning_rate": 2.0107084743101024e-05,
|
||||
"loss": 0.1114,
|
||||
"step": 880
|
||||
},
|
||||
{
|
||||
"epoch": 1.6310479247878926,
|
||||
"grad_norm": 0.07093213498592377,
|
||||
"learning_rate": 1.8322004841873842e-05,
|
||||
"loss": 0.1213,
|
||||
"step": 890
|
||||
},
|
||||
{
|
||||
"epoch": 1.6493923412061453,
|
||||
"grad_norm": 0.08286295086145401,
|
||||
"learning_rate": 1.661188226671111e-05,
|
||||
"loss": 0.1124,
|
||||
"step": 900
|
||||
},
|
||||
{
|
||||
"epoch": 1.6493923412061453,
|
||||
"eval_loss": 0.1149599477648735,
|
||||
"eval_runtime": 55.8031,
|
||||
"eval_samples_per_second": 4.122,
|
||||
"eval_steps_per_second": 4.122,
|
||||
"step": 900
|
||||
},
|
||||
{
|
||||
"epoch": 1.667736757624398,
|
||||
"grad_norm": 0.07299927622079849,
|
||||
"learning_rate": 1.4978286427038601e-05,
|
||||
"loss": 0.1123,
|
||||
"step": 910
|
||||
},
|
||||
{
|
||||
"epoch": 1.6860811740426507,
|
||||
"grad_norm": 0.07889826595783234,
|
||||
"learning_rate": 1.3422716502357102e-05,
|
||||
"loss": 0.1135,
|
||||
"step": 920
|
||||
},
|
||||
{
|
||||
"epoch": 1.7044255904609034,
|
||||
"grad_norm": 0.08562670648097992,
|
||||
"learning_rate": 1.1946600066419345e-05,
|
||||
"loss": 0.1193,
|
||||
"step": 930
|
||||
},
|
||||
{
|
||||
"epoch": 1.7044255904609034,
|
||||
"eval_loss": 0.11482664942741394,
|
||||
"eval_runtime": 56.1319,
|
||||
"eval_samples_per_second": 4.097,
|
||||
"eval_steps_per_second": 4.097,
|
||||
"step": 930
|
||||
},
|
||||
{
|
||||
"epoch": 1.722770006879156,
|
||||
"grad_norm": 0.09471631050109863,
|
||||
"learning_rate": 1.0551291777120464e-05,
|
||||
"loss": 0.1173,
|
||||
"step": 940
|
||||
},
|
||||
{
|
||||
"epoch": 1.7411144232974087,
|
||||
"grad_norm": 0.07673942297697067,
|
||||
"learning_rate": 9.238072133304653e-06,
|
||||
"loss": 0.1121,
|
||||
"step": 950
|
||||
},
|
||||
{
|
||||
"epoch": 1.7594588397156614,
|
||||
"grad_norm": 0.10446635633707047,
|
||||
"learning_rate": 8.00814629962916e-06,
|
||||
"loss": 0.1212,
|
||||
"step": 960
|
||||
},
|
||||
{
|
||||
"epoch": 1.7594588397156614,
|
||||
"eval_loss": 0.11442519724369049,
|
||||
"eval_runtime": 55.8226,
|
||||
"eval_samples_per_second": 4.12,
|
||||
"eval_steps_per_second": 4.12,
|
||||
"step": 960
|
||||
},
|
||||
{
|
||||
"epoch": 1.777803256133914,
|
||||
"grad_norm": 0.08229784667491913,
|
||||
"learning_rate": 6.862643000563407e-06,
|
||||
"loss": 0.1186,
|
||||
"step": 970
|
||||
},
|
||||
{
|
||||
"epoch": 1.7961476725521668,
|
||||
"grad_norm": 0.08047077804803848,
|
||||
"learning_rate": 5.802613484538888e-06,
|
||||
"loss": 0.112,
|
||||
"step": 980
|
||||
},
|
||||
{
|
||||
"epoch": 1.8144920889704195,
|
||||
"grad_norm": 0.06683830171823502,
|
||||
"learning_rate": 4.829030559200032e-06,
|
||||
"loss": 0.1208,
|
||||
"step": 990
|
||||
},
|
||||
{
|
||||
"epoch": 1.8144920889704195,
|
||||
"eval_loss": 0.11440839618444443,
|
||||
"eval_runtime": 55.775,
|
||||
"eval_samples_per_second": 4.124,
|
||||
"eval_steps_per_second": 4.124,
|
||||
"step": 990
|
||||
},
|
||||
{
|
||||
"epoch": 1.8328365053886724,
|
||||
"grad_norm": 0.07382703572511673,
|
||||
"learning_rate": 3.942787698641548e-06,
|
||||
"loss": 0.1272,
|
||||
"step": 1000
|
||||
}
|
||||
],
|
||||
"logging_steps": 10,
|
||||
"max_steps": 1092,
|
||||
"num_input_tokens_seen": 0,
|
||||
"num_train_epochs": 2,
|
||||
"save_steps": 100,
|
||||
"stateful_callbacks": {
|
||||
"TrainerControl": {
|
||||
"args": {
|
||||
"should_epoch_stop": false,
|
||||
"should_evaluate": false,
|
||||
"should_log": false,
|
||||
"should_save": true,
|
||||
"should_training_stop": false
|
||||
},
|
||||
"attributes": {}
|
||||
}
|
||||
},
|
||||
"total_flos": 1.647524172869591e+17,
|
||||
"train_batch_size": 1,
|
||||
"trial_name": null,
|
||||
"trial_params": null
|
||||
}
|
3
checkpoint-1000/training_args.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:700f411f906bee1d28ab5b7b6098cfafc5c22166b5a1c13239687899b550329d
size 5777
1
checkpoint-1000/vocab.json
Normal file
File diff suppressed because one or more lines are too long