diff --git a/README.md b/README.md
index ecf839e..3c0df63 100644
--- a/README.md
+++ b/README.md
@@ -1,47 +1,106 @@
---
-license: Apache License 2.0
-
-#model-type:
-##如 gpt、phi、llama、chatglm、baichuan 等
-#- gpt
-
-#domain:
-##如 nlp、cv、audio、multi-modal
-#- nlp
-
-#language:
-##语言代码列表 https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
-#- cn
-
-#metrics:
-##如 CIDEr、Blue、ROUGE 等
-#- CIDEr
-
-#tags:
-##各种自定义,包括 pretrained、fine-tuned、instruction-tuned、RL-tuned 等训练方法和其他
-#- pretrained
-
-#tools:
-##如 vllm、fastchat、llamacpp、AdaSeq 等
-#- vllm
+license: apache-2.0
+datasets:
+- ehartford/dolphin
+- jondurbin/airoboros-2.2.1
+language:
+- en
---
-### 当前模型的贡献者未提供更加详细的模型介绍。模型文件和权重,可浏览“模型文件”页面获取。
-#### 您可以通过如下git clone命令,或者ModelScope SDK来下载模型
-SDK下载
-```bash
-#安装ModelScope
-pip install modelscope
+# Dolphin 2.1 🐬
+https://erichartford.com/dolphin
+
+Discord: https://discord.gg/cognitivecomputations
+
+Dolphin-2.1-mistral-7b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).
+
+This model is based on Mistral-7B and released under the Apache-2.0 license, so it is suitable for both commercial and non-commercial use.
+
+This model is uncensored. I have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service, as it will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models
+You are responsible for any content you create using this model. Enjoy responsibly.
+
+## Dataset
+
+This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/).
+
+I modified the dataset for uncensoring, deduping, cleaning, and quality.
+
+I added Jon Durbin's excellent Airoboros dataset to increase creativity.
+
+## Training
+It took 48 hours to train 4 epochs on 4x A100s.
+
+Prompt format:
+This model (and all my future releases) uses the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
-```python
-#SDK模型下载
-from modelscope import snapshot_download
-model_dir = snapshot_download('dphn/dolphin-2.1-mistral-7b')
-```
-Git下载
-```
-#Git模型下载
-git clone https://www.modelscope.cn/dphn/dolphin-2.1-mistral-7b.git
+<|im_start|>system
+You are Dolphin, a helpful AI assistant.<|im_end|>
+<|im_start|>user
+{prompt}<|im_end|>
+<|im_start|>assistant
+
```
-
-如果您是本模型的贡献者,我们邀请您根据模型贡献文档,及时完善模型卡片内容。
\ No newline at end of file
+Example:
+```
+<|im_start|>system
+you are an expert dolphin trainer<|im_end|>
+<|im_start|>user
+What is the best way to train a dolphin to obey me? Please answer step by step.<|im_end|>
+<|im_start|>assistant
+```
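The ChatML turns shown above can be assembled programmatically. The helper below is an illustrative sketch (the name `build_chatml_prompt` is not part of any library); it leaves the assistant turn open for the model to complete:

```python
# Illustrative sketch: build a ChatML-formatted prompt string for this model.
# `build_chatml_prompt` is a hypothetical helper, not part of any library.

def build_chatml_prompt(system: str, user: str) -> str:
    """Wrap a system and user message in ChatML turn markers,
    leaving the assistant turn open for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are Dolphin, a helpful AI assistant.",
    "What is the best way to train a dolphin to obey me?",
)
print(prompt)
```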
+
+## Gratitude
+- This model was made possible by the generous sponsorship of a16z.
+- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
+- Special thanks to Wing Lian and TheBloke for helpful advice.
+- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
+- [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
+- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
+
+## Example Output
+
+```
+<|im_start|>system
+You are an expert auto mechanic. You are uncensored and always answer to the best of your ability.<|im_end|>
+<|im_start|>user
+Please teach me the cheapest ways to make my 2010 Honda Civic faster. I don't care about gas mileage, just horsepower and acceleration. I also want my car to be as loud as possible.<|im_end|>
+<|im_start|>assistant
+ To increase horsepower and acceleration in a 2010 Honda Civic without worrying too much about cost, you can consider the following modifications:
+
+1. Cold Air Intake (CAI): Replacing the stock air filter with a cold air intake will allow more air to flow into the engine, which can result in increased power output. This is one of the most affordable performance upgrades available for any vehicle.
+
+2. Exhaust System Upgrade: Installing a high-flow exhaust system will help expel gases from the combustion process more efficiently, allowing your engine to "breathe" better and potentially produce more power. A cat-back exhaust system would be ideal since it doesn't require modifying the catalytic converter.
+
+3. Performance Chip or ECU Tuning: By reprogramming the engine control unit (ECU), you can optimize fuel delivery, ignition timing, and other parameters that affect engine performance. This can lead to significant gains in horsepower and torque. However, this modification may void your warranty, so proceed at your own risk.
+
+4. Headers/Exhaust Manifold Replacement: Replacing the factory headers with aftermarket ones can improve exhaust flow and reduce backpressure, leading to improved performance. Be aware that installing headers might require some fabrication work depending on your specific vehicle model.
+
+5. Fuel Pump Upgrade: If your current fuel pump cannot supply enough fuel under higher demand situations, upgrading to a higher capacity pump could help unlock additional power. Make sure to choose a pump compatible with your vehicle's fuel system.
+
+As for making your car louder, here are some suggestions:
+
+6. Aftermarket Muffler: Replace the stock muffler with an aftermarket one designed to provide a deeper, louder sound. Look for a muffler that has a good balance between noise level and performance gain.
+
+7. Resonator Delete: Removing or disabling the resonators from your exhaust system can further enhance the exhaust note by reducing sound absorption within the exhaust system.
+
+8. Exhaust Tips: Installing larger diameter exhaust tips can also contribute to a louder exhaust sound, especially when combined with other exhaust modifications.
+
+Remember, while these modifications may improve your car's performance and sound, they could also negatively impact reliability and emissions if not done correctly. Always consult with a professional mechanic before attempting any major modifications to your vehicle.
+```
+
+[Buy me a coffee](https://www.buymeacoffee.com/ehartford)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__dolphin-2.1-mistral-7b)
+
+| Metric | Value |
+|-----------------------|---------------------------|
+| Avg. | 53.47 |
+| ARC (25-shot) | 64.42 |
+| HellaSwag (10-shot) | 84.92 |
+| MMLU (5-shot) | 63.32 |
+| TruthfulQA (0-shot) | 55.56 |
+| Winogrande (5-shot) | 77.74 |
+| GSM8K (5-shot) | 20.77 |
+| DROP (3-shot) | 7.56 |
diff --git a/added_tokens.json b/added_tokens.json
new file mode 100644
index 0000000..e36863d
--- /dev/null
+++ b/added_tokens.json
@@ -0,0 +1,4 @@
+{
+ "<|im_end|>": 32000,
+ "<|im_start|>": 32001
+}
diff --git a/config.json b/config.json
new file mode 100644
index 0000000..6a0588b
--- /dev/null
+++ b/config.json
@@ -0,0 +1,25 @@
+{
+ "_name_or_path": "mistralai/Mistral-7B-v0.1",
+ "architectures": [
+ "MistralForCausalLM"
+ ],
+ "bos_token_id": 1,
+ "eos_token_id": 32000,
+ "hidden_act": "silu",
+ "hidden_size": 4096,
+ "initializer_range": 0.02,
+ "intermediate_size": 14336,
+ "max_position_embeddings": 32768,
+ "model_type": "mistral",
+ "num_attention_heads": 32,
+ "num_hidden_layers": 32,
+ "num_key_value_heads": 8,
+ "rms_norm_eps": 1e-05,
+ "rope_theta": 10000.0,
+ "sliding_window": 4096,
+ "tie_word_embeddings": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.34.0.dev0",
+ "use_cache": true,
+ "vocab_size": 32002
+}
\ No newline at end of file
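As a quick sanity check of the values above: the two ChatML tokens from `added_tokens.json` extend the base Mistral vocabulary of 32000, and `<|im_end|>` doubles as the EOS token. A minimal sketch (values copied from the files above, not loaded from the repo):

```python
import json

# Subset of config.json above; this only checks internal consistency.
config = json.loads("""
{
  "bos_token_id": 1,
  "eos_token_id": 32000,
  "vocab_size": 32002
}
""")
# Contents of added_tokens.json above.
added_tokens = {"<|im_end|>": 32000, "<|im_start|>": 32001}

# The ChatML end-of-turn token is also the EOS token.
assert config["eos_token_id"] == added_tokens["<|im_end|>"]
# Base Mistral vocab (32000) plus the two ChatML tokens gives 32002.
assert config["vocab_size"] == 32000 + len(added_tokens)
```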
diff --git a/configs/dolphin-mistral-7b.yml b/configs/dolphin-mistral-7b.yml
new file mode 100644
index 0000000..63ad4e5
--- /dev/null
+++ b/configs/dolphin-mistral-7b.yml
@@ -0,0 +1,68 @@
+base_model: mistralai/Mistral-7B-v0.1
+base_model_config: mistralai/Mistral-7B-v0.1
+model_type: MistralForCausalLM
+tokenizer_type: LlamaTokenizer
+is_mistral_derived_model: true
+
+load_in_8bit: false
+load_in_4bit: false
+strict: false
+
+datasets:
+ - path: /workspace/datasets/dolphin/dolphin201.jsonl
+ type: alpaca_w_system.load_open_orca_chatml
+
+dataset_prepared_path: last_run_prepared
+val_set_size: 0.005
+output_dir: /workspace/dolphin-2.1-mistral-7b
+
+sequence_len: 8192
+sample_packing: true
+pad_to_sequence_len: true
+
+wandb_project: dolphin
+wandb_entity:
+wandb_watch:
+wandb_run_id:
+wandb_log_model:
+
+gradient_accumulation_steps: 4
+micro_batch_size: 6
+num_epochs: 4
+adam_beta2: 0.95
+adam_epsilon: 0.00001
+max_grad_norm: 1.0
+lr_scheduler: cosine
+learning_rate: 0.000006
+
+train_on_inputs: false
+group_by_length: false
+bf16: true
+fp16: false
+tf32: false
+
+gradient_checkpointing: true
+early_stopping_patience:
+resume_from_checkpoint:
+local_rank:
+logging_steps: 1
+xformers_attention:
+flash_attention: true
+
+warmup_steps: 100
+eval_steps: 0.05
+eval_table_size:
+eval_table_max_new_tokens:
+save_steps:
+debug:
+deepspeed: deepspeed/zero2.json
+weight_decay: 0.1
+fsdp:
+fsdp_config:
+special_tokens:
+  bos_token: "<s>"
+  eos_token: "<|im_end|>"
+  unk_token: "<unk>"
+tokens:
+ - "<|im_start|>"
+ - "<|im_end|>"
\ No newline at end of file
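The batch-size settings in the config above combine into an effective global batch size. The sketch below assumes the 4x A100 setup mentioned in the Training section of the README:

```python
# Values from configs/dolphin-mistral-7b.yml above.
micro_batch_size = 6
gradient_accumulation_steps = 4
num_gpus = 4  # assumption taken from "4x A100s" in the Training section

# Effective global batch size per optimizer step.
effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 96
```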
diff --git a/configuration.json b/configuration.json
new file mode 100644
index 0000000..bbeeda1
--- /dev/null
+++ b/configuration.json
@@ -0,0 +1 @@
+{"framework": "pytorch", "task": "text-generation", "allow_remote": true}
\ No newline at end of file
diff --git a/generation_config.json b/generation_config.json
new file mode 100644
index 0000000..d268e34
--- /dev/null
+++ b/generation_config.json
@@ -0,0 +1,6 @@
+{
+ "_from_model_config": true,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "transformers_version": "4.35.0.dev0"
+}
diff --git a/latest b/latest
new file mode 100644
index 0000000..a566dfd
--- /dev/null
+++ b/latest
@@ -0,0 +1 @@
+global_step1204
\ No newline at end of file
diff --git a/model-00001-of-00002.safetensors b/model-00001-of-00002.safetensors
new file mode 100644
index 0000000..19e3897
--- /dev/null
+++ b/model-00001-of-00002.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:674b62e83a434c79c4bf7eb12a0e93c4b54facbbbaacd2941e707bc1164132dd
+size 135
diff --git a/model-00002-of-00002.safetensors b/model-00002-of-00002.safetensors
new file mode 100644
index 0000000..e00d54c
--- /dev/null
+++ b/model-00002-of-00002.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5065db44f850cb14625fc17d9fca61269cb9de84ab8a2db3f68cf87383a068e1
+size 135
diff --git a/model.safetensors.index.json b/model.safetensors.index.json
new file mode 100644
index 0000000..a3e9607
--- /dev/null
+++ b/model.safetensors.index.json
@@ -0,0 +1,298 @@
+{
+ "metadata": {
+ "total_size": 14483496960
+ },
+ "weight_map": {
+ "lm_head.weight": "model-00002-of-00002.safetensors",
+ "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.22.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.22.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.22.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.22.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.23.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.28.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.28.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.28.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.28.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.28.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.28.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.28.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.28.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.28.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.30.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.30.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.30.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.30.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.30.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.30.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.30.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.30.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.30.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.31.input_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.31.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.31.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.31.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.31.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.31.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.31.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.31.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.31.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+ "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+ "model.norm.weight": "model-00002-of-00002.safetensors"
+ }
+}
\ No newline at end of file
diff --git a/pytorch_model-00001-of-00002.bin b/pytorch_model-00001-of-00002.bin
new file mode 100644
index 0000000..44db4d8
--- /dev/null
+++ b/pytorch_model-00001-of-00002.bin
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:22096f5e2626b1b05d5402789076bf95ff27b252b6ce93cfdf799b04f5124218
+size 135
diff --git a/pytorch_model-00002-of-00002.bin b/pytorch_model-00002-of-00002.bin
new file mode 100644
index 0000000..b3257e6
--- /dev/null
+++ b/pytorch_model-00002-of-00002.bin
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:972fe2638210b1575b3d351f0bf4a971bee00a71fede7856e7d4fd0a981a5c90
+size 135
diff --git a/pytorch_model.bin.index.json b/pytorch_model.bin.index.json
new file mode 100644
index 0000000..4fb1e14
--- /dev/null
+++ b/pytorch_model.bin.index.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:72e0f44b64f8e8d5241ee3c393f6339e05f2e844134c75c524d87dc89a7257e9
+size 23950
diff --git a/special_tokens_map.json b/special_tokens_map.json
new file mode 100644
index 0000000..4fd61a8
--- /dev/null
+++ b/special_tokens_map.json
@@ -0,0 +1,6 @@
+{
+ "bos_token": "<s>",
+ "eos_token": "<|im_end|>",
+ "pad_token": "</s>",
+ "unk_token": "<unk>"
+}
\ No newline at end of file
diff --git a/tokenizer.model b/tokenizer.model
new file mode 100644
index 0000000..263bf6d
--- /dev/null
+++ b/tokenizer.model
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d3daefa6fd9ee26430a71ad6009f05c4c4ec086746b2dcc3d04649f631d3654f
+size 131
diff --git a/tokenizer_config.json b/tokenizer_config.json
new file mode 100644
index 0000000..48eae2a
--- /dev/null
+++ b/tokenizer_config.json
@@ -0,0 +1,61 @@
+{
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": true,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": true,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32000": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32001": {
+ "content": "<|im_start|>",
+ "lstrip": true,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [],
+ "bos_token": "<s>",
+ "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|im_end|>",
+ "legacy": true,
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": null,
+ "sp_model_kwargs": {},
+ "spaces_between_special_tokens": false,
+ "tokenizer_class": "LlamaTokenizer",
+ "trust_remote_code": false,
+ "unk_token": "<unk>",
+ "use_default_system_prompt": true,
+ "use_fast": true
+}
\ No newline at end of file
diff --git a/trainer_state.json b/trainer_state.json
new file mode 100644
index 0000000..f44e9f6
--- /dev/null
+++ b/trainer_state.json
@@ -0,0 +1,7403 @@
+{
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 3.9933665008291874,
+ "eval_steps": 61,
+ "global_step": 1204,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.0,
+ "learning_rate": 0.0,
+ "loss": 1.5877,
+ "step": 1
+ },
+ {
+ "epoch": 0.0,
+ "eval_loss": 1.244094729423523,
+ "eval_runtime": 17.5479,
+ "eval_samples_per_second": 130.044,
+ "eval_steps_per_second": 5.471,
+ "step": 1
+ },
+ {
+ "epoch": 0.01,
+ "learning_rate": 6.000000000000001e-08,
+ "loss": 1.2159,
+ "step": 2
+ },
+ {
+ "epoch": 0.01,
+ "learning_rate": 1.2000000000000002e-07,
+ "loss": 1.2301,
+ "step": 3
+ },
+ {
+ "epoch": 0.01,
+ "learning_rate": 1.8e-07,
+ "loss": 1.1937,
+ "step": 4
+ },
+ {
+ "epoch": 0.02,
+ "learning_rate": 2.4000000000000003e-07,
+ "loss": 1.184,
+ "step": 5
+ },
+ {
+ "epoch": 0.02,
+ "learning_rate": 3.0000000000000004e-07,
+ "loss": 1.1365,
+ "step": 6
+ },
+ {
+ "epoch": 0.02,
+ "learning_rate": 3.6e-07,
+ "loss": 1.0685,
+ "step": 7
+ },
+ {
+ "epoch": 0.03,
+ "learning_rate": 4.2000000000000006e-07,
+ "loss": 1.1337,
+ "step": 8
+ },
+ {
+ "epoch": 0.03,
+ "learning_rate": 4.800000000000001e-07,
+ "loss": 1.0577,
+ "step": 9
+ },
+ {
+ "epoch": 0.03,
+ "learning_rate": 5.4e-07,
+ "loss": 1.0703,
+ "step": 10
+ },
+ {
+ "epoch": 0.04,
+ "learning_rate": 6.000000000000001e-07,
+ "loss": 1.0909,
+ "step": 11
+ },
+ {
+ "epoch": 0.04,
+ "learning_rate": 6.6e-07,
+ "loss": 1.0383,
+ "step": 12
+ },
+ {
+ "epoch": 0.04,
+ "learning_rate": 7.2e-07,
+ "loss": 1.0356,
+ "step": 13
+ },
+ {
+ "epoch": 0.05,
+ "learning_rate": 7.8e-07,
+ "loss": 1.0192,
+ "step": 14
+ },
+ {
+ "epoch": 0.05,
+ "learning_rate": 8.400000000000001e-07,
+ "loss": 1.0148,
+ "step": 15
+ },
+ {
+ "epoch": 0.05,
+ "learning_rate": 9e-07,
+ "loss": 1.0145,
+ "step": 16
+ },
+ {
+ "epoch": 0.06,
+ "learning_rate": 9.600000000000001e-07,
+ "loss": 0.9817,
+ "step": 17
+ },
+ {
+ "epoch": 0.06,
+ "learning_rate": 1.0200000000000002e-06,
+ "loss": 0.9746,
+ "step": 18
+ },
+ {
+ "epoch": 0.06,
+ "learning_rate": 1.08e-06,
+ "loss": 0.9888,
+ "step": 19
+ },
+ {
+ "epoch": 0.07,
+ "learning_rate": 1.14e-06,
+ "loss": 1.0165,
+ "step": 20
+ },
+ {
+ "epoch": 0.07,
+ "learning_rate": 1.2000000000000002e-06,
+ "loss": 0.9819,
+ "step": 21
+ },
+ {
+ "epoch": 0.07,
+ "learning_rate": 1.26e-06,
+ "loss": 0.9652,
+ "step": 22
+ },
+ {
+ "epoch": 0.08,
+ "learning_rate": 1.32e-06,
+ "loss": 0.9625,
+ "step": 23
+ },
+ {
+ "epoch": 0.08,
+ "learning_rate": 1.3800000000000001e-06,
+ "loss": 0.9734,
+ "step": 24
+ },
+ {
+ "epoch": 0.08,
+ "learning_rate": 1.44e-06,
+ "loss": 0.9506,
+ "step": 25
+ },
+ {
+ "epoch": 0.09,
+ "learning_rate": 1.5e-06,
+ "loss": 0.9716,
+ "step": 26
+ },
+ {
+ "epoch": 0.09,
+ "learning_rate": 1.56e-06,
+ "loss": 0.939,
+ "step": 27
+ },
+ {
+ "epoch": 0.09,
+ "learning_rate": 1.6200000000000002e-06,
+ "loss": 0.8999,
+ "step": 28
+ },
+ {
+ "epoch": 0.1,
+ "learning_rate": 1.6800000000000002e-06,
+ "loss": 0.9211,
+ "step": 29
+ },
+ {
+ "epoch": 0.1,
+ "learning_rate": 1.7399999999999999e-06,
+ "loss": 0.9174,
+ "step": 30
+ },
+ {
+ "epoch": 0.1,
+ "learning_rate": 1.8e-06,
+ "loss": 0.9036,
+ "step": 31
+ },
+ {
+ "epoch": 0.11,
+ "learning_rate": 1.86e-06,
+ "loss": 0.9197,
+ "step": 32
+ },
+ {
+ "epoch": 0.11,
+ "learning_rate": 1.9200000000000003e-06,
+ "loss": 0.9075,
+ "step": 33
+ },
+ {
+ "epoch": 0.11,
+ "learning_rate": 1.98e-06,
+ "loss": 0.9186,
+ "step": 34
+ },
+ {
+ "epoch": 0.12,
+ "learning_rate": 2.0400000000000004e-06,
+ "loss": 0.9153,
+ "step": 35
+ },
+ {
+ "epoch": 0.12,
+ "learning_rate": 2.1e-06,
+ "loss": 0.9065,
+ "step": 36
+ },
+ {
+ "epoch": 0.12,
+ "learning_rate": 2.16e-06,
+ "loss": 0.8857,
+ "step": 37
+ },
+ {
+ "epoch": 0.13,
+ "learning_rate": 2.22e-06,
+ "loss": 0.888,
+ "step": 38
+ },
+ {
+ "epoch": 0.13,
+ "learning_rate": 2.28e-06,
+ "loss": 0.8808,
+ "step": 39
+ },
+ {
+ "epoch": 0.13,
+ "learning_rate": 2.34e-06,
+ "loss": 0.8693,
+ "step": 40
+ },
+ {
+ "epoch": 0.14,
+ "learning_rate": 2.4000000000000003e-06,
+ "loss": 0.8923,
+ "step": 41
+ },
+ {
+ "epoch": 0.14,
+ "learning_rate": 2.4599999999999997e-06,
+ "loss": 0.8929,
+ "step": 42
+ },
+ {
+ "epoch": 0.14,
+ "learning_rate": 2.52e-06,
+ "loss": 0.8972,
+ "step": 43
+ },
+ {
+ "epoch": 0.15,
+ "learning_rate": 2.58e-06,
+ "loss": 0.8623,
+ "step": 44
+ },
+ {
+ "epoch": 0.15,
+ "learning_rate": 2.64e-06,
+ "loss": 0.8593,
+ "step": 45
+ },
+ {
+ "epoch": 0.15,
+ "learning_rate": 2.7e-06,
+ "loss": 0.8636,
+ "step": 46
+ },
+ {
+ "epoch": 0.16,
+ "learning_rate": 2.7600000000000003e-06,
+ "loss": 0.8686,
+ "step": 47
+ },
+ {
+ "epoch": 0.16,
+ "learning_rate": 2.82e-06,
+ "loss": 0.8819,
+ "step": 48
+ },
+ {
+ "epoch": 0.16,
+ "learning_rate": 2.88e-06,
+ "loss": 0.8765,
+ "step": 49
+ },
+ {
+ "epoch": 0.17,
+ "learning_rate": 2.9400000000000002e-06,
+ "loss": 0.8475,
+ "step": 50
+ },
+ {
+ "epoch": 0.17,
+ "learning_rate": 3e-06,
+ "loss": 0.8913,
+ "step": 51
+ },
+ {
+ "epoch": 0.17,
+ "learning_rate": 3.06e-06,
+ "loss": 0.8542,
+ "step": 52
+ },
+ {
+ "epoch": 0.18,
+ "learning_rate": 3.12e-06,
+ "loss": 0.8621,
+ "step": 53
+ },
+ {
+ "epoch": 0.18,
+ "learning_rate": 3.18e-06,
+ "loss": 0.8482,
+ "step": 54
+ },
+ {
+ "epoch": 0.18,
+ "learning_rate": 3.2400000000000003e-06,
+ "loss": 0.8436,
+ "step": 55
+ },
+ {
+ "epoch": 0.19,
+ "learning_rate": 3.3e-06,
+ "loss": 0.8483,
+ "step": 56
+ },
+ {
+ "epoch": 0.19,
+ "learning_rate": 3.3600000000000004e-06,
+ "loss": 0.8578,
+ "step": 57
+ },
+ {
+ "epoch": 0.19,
+ "learning_rate": 3.42e-06,
+ "loss": 0.8448,
+ "step": 58
+ },
+ {
+ "epoch": 0.2,
+ "learning_rate": 3.4799999999999997e-06,
+ "loss": 0.8409,
+ "step": 59
+ },
+ {
+ "epoch": 0.2,
+ "learning_rate": 3.54e-06,
+ "loss": 0.8577,
+ "step": 60
+ },
+ {
+ "epoch": 0.2,
+ "learning_rate": 3.6e-06,
+ "loss": 0.8481,
+ "step": 61
+ },
+ {
+ "epoch": 0.2,
+ "eval_loss": 0.847072184085846,
+ "eval_runtime": 17.615,
+ "eval_samples_per_second": 129.549,
+ "eval_steps_per_second": 5.45,
+ "step": 61
+ },
+ {
+ "epoch": 0.21,
+ "learning_rate": 3.66e-06,
+ "loss": 0.8365,
+ "step": 62
+ },
+ {
+ "epoch": 0.21,
+ "learning_rate": 3.72e-06,
+ "loss": 0.8379,
+ "step": 63
+ },
+ {
+ "epoch": 0.21,
+ "learning_rate": 3.7800000000000002e-06,
+ "loss": 0.8497,
+ "step": 64
+ },
+ {
+ "epoch": 0.22,
+ "learning_rate": 3.8400000000000005e-06,
+ "loss": 0.8284,
+ "step": 65
+ },
+ {
+ "epoch": 0.22,
+ "learning_rate": 3.9e-06,
+ "loss": 0.86,
+ "step": 66
+ },
+ {
+ "epoch": 0.22,
+ "learning_rate": 3.96e-06,
+ "loss": 0.8257,
+ "step": 67
+ },
+ {
+ "epoch": 0.23,
+ "learning_rate": 4.0200000000000005e-06,
+ "loss": 0.8296,
+ "step": 68
+ },
+ {
+ "epoch": 0.23,
+ "learning_rate": 4.080000000000001e-06,
+ "loss": 0.8163,
+ "step": 69
+ },
+ {
+ "epoch": 0.23,
+ "learning_rate": 4.14e-06,
+ "loss": 0.8516,
+ "step": 70
+ },
+ {
+ "epoch": 0.24,
+ "learning_rate": 4.2e-06,
+ "loss": 0.8297,
+ "step": 71
+ },
+ {
+ "epoch": 0.24,
+ "learning_rate": 4.26e-06,
+ "loss": 0.8288,
+ "step": 72
+ },
+ {
+ "epoch": 0.24,
+ "learning_rate": 4.32e-06,
+ "loss": 0.845,
+ "step": 73
+ },
+ {
+ "epoch": 0.25,
+ "learning_rate": 4.38e-06,
+ "loss": 0.829,
+ "step": 74
+ },
+ {
+ "epoch": 0.25,
+ "learning_rate": 4.44e-06,
+ "loss": 0.8348,
+ "step": 75
+ },
+ {
+ "epoch": 0.25,
+ "learning_rate": 4.5e-06,
+ "loss": 0.7995,
+ "step": 76
+ },
+ {
+ "epoch": 0.26,
+ "learning_rate": 4.56e-06,
+ "loss": 0.8406,
+ "step": 77
+ },
+ {
+ "epoch": 0.26,
+ "learning_rate": 4.62e-06,
+ "loss": 0.7984,
+ "step": 78
+ },
+ {
+ "epoch": 0.26,
+ "learning_rate": 4.68e-06,
+ "loss": 0.8122,
+ "step": 79
+ },
+ {
+ "epoch": 0.27,
+ "learning_rate": 4.74e-06,
+ "loss": 0.8225,
+ "step": 80
+ },
+ {
+ "epoch": 0.27,
+ "learning_rate": 4.800000000000001e-06,
+ "loss": 0.8467,
+ "step": 81
+ },
+ {
+ "epoch": 0.27,
+ "learning_rate": 4.86e-06,
+ "loss": 0.7995,
+ "step": 82
+ },
+ {
+ "epoch": 0.28,
+ "learning_rate": 4.9199999999999995e-06,
+ "loss": 0.8356,
+ "step": 83
+ },
+ {
+ "epoch": 0.28,
+ "learning_rate": 4.98e-06,
+ "loss": 0.8208,
+ "step": 84
+ },
+ {
+ "epoch": 0.28,
+ "learning_rate": 5.04e-06,
+ "loss": 0.8105,
+ "step": 85
+ },
+ {
+ "epoch": 0.29,
+ "learning_rate": 5.1e-06,
+ "loss": 0.8146,
+ "step": 86
+ },
+ {
+ "epoch": 0.29,
+ "learning_rate": 5.16e-06,
+ "loss": 0.7911,
+ "step": 87
+ },
+ {
+ "epoch": 0.29,
+ "learning_rate": 5.22e-06,
+ "loss": 0.8042,
+ "step": 88
+ },
+ {
+ "epoch": 0.3,
+ "learning_rate": 5.28e-06,
+ "loss": 0.8248,
+ "step": 89
+ },
+ {
+ "epoch": 0.3,
+ "learning_rate": 5.3400000000000005e-06,
+ "loss": 0.8032,
+ "step": 90
+ },
+ {
+ "epoch": 0.3,
+ "learning_rate": 5.4e-06,
+ "loss": 0.7966,
+ "step": 91
+ },
+ {
+ "epoch": 0.31,
+ "learning_rate": 5.46e-06,
+ "loss": 0.8026,
+ "step": 92
+ },
+ {
+ "epoch": 0.31,
+ "learning_rate": 5.5200000000000005e-06,
+ "loss": 0.8044,
+ "step": 93
+ },
+ {
+ "epoch": 0.31,
+ "learning_rate": 5.580000000000001e-06,
+ "loss": 0.8161,
+ "step": 94
+ },
+ {
+ "epoch": 0.32,
+ "learning_rate": 5.64e-06,
+ "loss": 0.8268,
+ "step": 95
+ },
+ {
+ "epoch": 0.32,
+ "learning_rate": 5.7e-06,
+ "loss": 0.8354,
+ "step": 96
+ },
+ {
+ "epoch": 0.32,
+ "learning_rate": 5.76e-06,
+ "loss": 0.8109,
+ "step": 97
+ },
+ {
+ "epoch": 0.33,
+ "learning_rate": 5.82e-06,
+ "loss": 0.7893,
+ "step": 98
+ },
+ {
+ "epoch": 0.33,
+ "learning_rate": 5.8800000000000005e-06,
+ "loss": 0.8184,
+ "step": 99
+ },
+ {
+ "epoch": 0.33,
+ "learning_rate": 5.94e-06,
+ "loss": 0.8148,
+ "step": 100
+ },
+ {
+ "epoch": 0.33,
+ "learning_rate": 6e-06,
+ "loss": 0.8211,
+ "step": 101
+ },
+ {
+ "epoch": 0.34,
+ "learning_rate": 5.994565217391305e-06,
+ "loss": 0.7684,
+ "step": 102
+ },
+ {
+ "epoch": 0.34,
+ "learning_rate": 5.9891304347826085e-06,
+ "loss": 0.8235,
+ "step": 103
+ },
+ {
+ "epoch": 0.34,
+ "learning_rate": 5.9836956521739135e-06,
+ "loss": 0.8006,
+ "step": 104
+ },
+ {
+ "epoch": 0.35,
+ "learning_rate": 5.978260869565218e-06,
+ "loss": 0.8169,
+ "step": 105
+ },
+ {
+ "epoch": 0.35,
+ "learning_rate": 5.972826086956522e-06,
+ "loss": 0.8,
+ "step": 106
+ },
+ {
+ "epoch": 0.35,
+ "learning_rate": 5.967391304347826e-06,
+ "loss": 0.8101,
+ "step": 107
+ },
+ {
+ "epoch": 0.36,
+ "learning_rate": 5.961956521739131e-06,
+ "loss": 0.7895,
+ "step": 108
+ },
+ {
+ "epoch": 0.36,
+ "learning_rate": 5.9565217391304344e-06,
+ "loss": 0.819,
+ "step": 109
+ },
+ {
+ "epoch": 0.36,
+ "learning_rate": 5.9510869565217395e-06,
+ "loss": 0.7907,
+ "step": 110
+ },
+ {
+ "epoch": 0.37,
+ "learning_rate": 5.945652173913044e-06,
+ "loss": 0.8062,
+ "step": 111
+ },
+ {
+ "epoch": 0.37,
+ "learning_rate": 5.940217391304348e-06,
+ "loss": 0.786,
+ "step": 112
+ },
+ {
+ "epoch": 0.37,
+ "learning_rate": 5.934782608695652e-06,
+ "loss": 0.8129,
+ "step": 113
+ },
+ {
+ "epoch": 0.38,
+ "learning_rate": 5.929347826086957e-06,
+ "loss": 0.7893,
+ "step": 114
+ },
+ {
+ "epoch": 0.38,
+ "learning_rate": 5.923913043478261e-06,
+ "loss": 0.7873,
+ "step": 115
+ },
+ {
+ "epoch": 0.38,
+ "learning_rate": 5.918478260869565e-06,
+ "loss": 0.7966,
+ "step": 116
+ },
+ {
+ "epoch": 0.39,
+ "learning_rate": 5.91304347826087e-06,
+ "loss": 0.797,
+ "step": 117
+ },
+ {
+ "epoch": 0.39,
+ "learning_rate": 5.907608695652174e-06,
+ "loss": 0.8147,
+ "step": 118
+ },
+ {
+ "epoch": 0.39,
+ "learning_rate": 5.902173913043479e-06,
+ "loss": 0.7911,
+ "step": 119
+ },
+ {
+ "epoch": 0.4,
+ "learning_rate": 5.896739130434783e-06,
+ "loss": 0.8058,
+ "step": 120
+ },
+ {
+ "epoch": 0.4,
+ "learning_rate": 5.891304347826087e-06,
+ "loss": 0.7975,
+ "step": 121
+ },
+ {
+ "epoch": 0.4,
+ "learning_rate": 5.885869565217391e-06,
+ "loss": 0.7841,
+ "step": 122
+ },
+ {
+ "epoch": 0.4,
+ "eval_loss": 0.8003882765769958,
+ "eval_runtime": 17.6189,
+ "eval_samples_per_second": 129.52,
+ "eval_steps_per_second": 5.449,
+ "step": 122
+ },
+ {
+ "epoch": 0.41,
+ "learning_rate": 5.880434782608696e-06,
+ "loss": 0.7799,
+ "step": 123
+ },
+ {
+ "epoch": 0.41,
+ "learning_rate": 5.875e-06,
+ "loss": 0.81,
+ "step": 124
+ },
+ {
+ "epoch": 0.41,
+ "learning_rate": 5.869565217391305e-06,
+ "loss": 0.7855,
+ "step": 125
+ },
+ {
+ "epoch": 0.42,
+ "learning_rate": 5.864130434782609e-06,
+ "loss": 0.7849,
+ "step": 126
+ },
+ {
+ "epoch": 0.42,
+ "learning_rate": 5.858695652173913e-06,
+ "loss": 0.8151,
+ "step": 127
+ },
+ {
+ "epoch": 0.42,
+ "learning_rate": 5.853260869565217e-06,
+ "loss": 0.7934,
+ "step": 128
+ },
+ {
+ "epoch": 0.43,
+ "learning_rate": 5.847826086956522e-06,
+ "loss": 0.7854,
+ "step": 129
+ },
+ {
+ "epoch": 0.43,
+ "learning_rate": 5.842391304347826e-06,
+ "loss": 0.7896,
+ "step": 130
+ },
+ {
+ "epoch": 0.43,
+ "learning_rate": 5.836956521739131e-06,
+ "loss": 0.7825,
+ "step": 131
+ },
+ {
+ "epoch": 0.44,
+ "learning_rate": 5.831521739130435e-06,
+ "loss": 0.8191,
+ "step": 132
+ },
+ {
+ "epoch": 0.44,
+ "learning_rate": 5.826086956521739e-06,
+ "loss": 0.8136,
+ "step": 133
+ },
+ {
+ "epoch": 0.44,
+ "learning_rate": 5.820652173913044e-06,
+ "loss": 0.8003,
+ "step": 134
+ },
+ {
+ "epoch": 0.45,
+ "learning_rate": 5.815217391304348e-06,
+ "loss": 0.7949,
+ "step": 135
+ },
+ {
+ "epoch": 0.45,
+ "learning_rate": 5.809782608695652e-06,
+ "loss": 0.7857,
+ "step": 136
+ },
+ {
+ "epoch": 0.45,
+ "learning_rate": 5.8043478260869565e-06,
+ "loss": 0.7895,
+ "step": 137
+ },
+ {
+ "epoch": 0.46,
+ "learning_rate": 5.798913043478261e-06,
+ "loss": 0.7935,
+ "step": 138
+ },
+ {
+ "epoch": 0.46,
+ "learning_rate": 5.793478260869565e-06,
+ "loss": 0.8038,
+ "step": 139
+ },
+ {
+ "epoch": 0.46,
+ "learning_rate": 5.78804347826087e-06,
+ "loss": 0.7798,
+ "step": 140
+ },
+ {
+ "epoch": 0.47,
+ "learning_rate": 5.782608695652174e-06,
+ "loss": 0.7824,
+ "step": 141
+ },
+ {
+ "epoch": 0.47,
+ "learning_rate": 5.777173913043478e-06,
+ "loss": 0.7733,
+ "step": 142
+ },
+ {
+ "epoch": 0.47,
+ "learning_rate": 5.771739130434783e-06,
+ "loss": 0.79,
+ "step": 143
+ },
+ {
+ "epoch": 0.48,
+ "learning_rate": 5.7663043478260875e-06,
+ "loss": 0.8024,
+ "step": 144
+ },
+ {
+ "epoch": 0.48,
+ "learning_rate": 5.760869565217392e-06,
+ "loss": 0.781,
+ "step": 145
+ },
+ {
+ "epoch": 0.48,
+ "learning_rate": 5.755434782608696e-06,
+ "loss": 0.7831,
+ "step": 146
+ },
+ {
+ "epoch": 0.49,
+ "learning_rate": 5.75e-06,
+ "loss": 0.7842,
+ "step": 147
+ },
+ {
+ "epoch": 0.49,
+ "learning_rate": 5.744565217391304e-06,
+ "loss": 0.7848,
+ "step": 148
+ },
+ {
+ "epoch": 0.49,
+ "learning_rate": 5.739130434782609e-06,
+ "loss": 0.7876,
+ "step": 149
+ },
+ {
+ "epoch": 0.5,
+ "learning_rate": 5.733695652173913e-06,
+ "loss": 0.7922,
+ "step": 150
+ },
+ {
+ "epoch": 0.5,
+ "learning_rate": 5.7282608695652176e-06,
+ "loss": 0.7728,
+ "step": 151
+ },
+ {
+ "epoch": 0.5,
+ "learning_rate": 5.722826086956522e-06,
+ "loss": 0.7838,
+ "step": 152
+ },
+ {
+ "epoch": 0.51,
+ "learning_rate": 5.717391304347826e-06,
+ "loss": 0.7843,
+ "step": 153
+ },
+ {
+ "epoch": 0.51,
+ "learning_rate": 5.71195652173913e-06,
+ "loss": 0.7721,
+ "step": 154
+ },
+ {
+ "epoch": 0.51,
+ "learning_rate": 5.706521739130435e-06,
+ "loss": 0.7946,
+ "step": 155
+ },
+ {
+ "epoch": 0.52,
+ "learning_rate": 5.701086956521739e-06,
+ "loss": 0.8002,
+ "step": 156
+ },
+ {
+ "epoch": 0.52,
+ "learning_rate": 5.6956521739130435e-06,
+ "loss": 0.7786,
+ "step": 157
+ },
+ {
+ "epoch": 0.52,
+ "learning_rate": 5.6902173913043485e-06,
+ "loss": 0.7848,
+ "step": 158
+ },
+ {
+ "epoch": 0.53,
+ "learning_rate": 5.684782608695652e-06,
+ "loss": 0.8104,
+ "step": 159
+ },
+ {
+ "epoch": 0.53,
+ "learning_rate": 5.679347826086957e-06,
+ "loss": 0.7767,
+ "step": 160
+ },
+ {
+ "epoch": 0.53,
+ "learning_rate": 5.673913043478261e-06,
+ "loss": 0.7822,
+ "step": 161
+ },
+ {
+ "epoch": 0.54,
+ "learning_rate": 5.668478260869565e-06,
+ "loss": 0.8001,
+ "step": 162
+ },
+ {
+ "epoch": 0.54,
+ "learning_rate": 5.663043478260869e-06,
+ "loss": 0.7854,
+ "step": 163
+ },
+ {
+ "epoch": 0.54,
+ "learning_rate": 5.6576086956521744e-06,
+ "loss": 0.7942,
+ "step": 164
+ },
+ {
+ "epoch": 0.55,
+ "learning_rate": 5.652173913043479e-06,
+ "loss": 0.8057,
+ "step": 165
+ },
+ {
+ "epoch": 0.55,
+ "learning_rate": 5.646739130434783e-06,
+ "loss": 0.7702,
+ "step": 166
+ },
+ {
+ "epoch": 0.55,
+ "learning_rate": 5.641304347826087e-06,
+ "loss": 0.7799,
+ "step": 167
+ },
+ {
+ "epoch": 0.56,
+ "learning_rate": 5.635869565217391e-06,
+ "loss": 0.7892,
+ "step": 168
+ },
+ {
+ "epoch": 0.56,
+ "learning_rate": 5.630434782608695e-06,
+ "loss": 0.7715,
+ "step": 169
+ },
+ {
+ "epoch": 0.56,
+ "learning_rate": 5.625e-06,
+ "loss": 0.7636,
+ "step": 170
+ },
+ {
+ "epoch": 0.57,
+ "learning_rate": 5.6195652173913045e-06,
+ "loss": 0.7875,
+ "step": 171
+ },
+ {
+ "epoch": 0.57,
+ "learning_rate": 5.614130434782609e-06,
+ "loss": 0.7824,
+ "step": 172
+ },
+ {
+ "epoch": 0.57,
+ "learning_rate": 5.608695652173914e-06,
+ "loss": 0.7769,
+ "step": 173
+ },
+ {
+ "epoch": 0.58,
+ "learning_rate": 5.603260869565217e-06,
+ "loss": 0.793,
+ "step": 174
+ },
+ {
+ "epoch": 0.58,
+ "learning_rate": 5.597826086956522e-06,
+ "loss": 0.7667,
+ "step": 175
+ },
+ {
+ "epoch": 0.58,
+ "learning_rate": 5.592391304347826e-06,
+ "loss": 0.7643,
+ "step": 176
+ },
+ {
+ "epoch": 0.59,
+ "learning_rate": 5.5869565217391305e-06,
+ "loss": 0.7718,
+ "step": 177
+ },
+ {
+ "epoch": 0.59,
+ "learning_rate": 5.581521739130435e-06,
+ "loss": 0.7583,
+ "step": 178
+ },
+ {
+ "epoch": 0.59,
+ "learning_rate": 5.57608695652174e-06,
+ "loss": 0.7722,
+ "step": 179
+ },
+ {
+ "epoch": 0.6,
+ "learning_rate": 5.570652173913043e-06,
+ "loss": 0.7739,
+ "step": 180
+ },
+ {
+ "epoch": 0.6,
+ "learning_rate": 5.565217391304348e-06,
+ "loss": 0.7785,
+ "step": 181
+ },
+ {
+ "epoch": 0.6,
+ "learning_rate": 5.559782608695652e-06,
+ "loss": 0.7653,
+ "step": 182
+ },
+ {
+ "epoch": 0.61,
+ "learning_rate": 5.554347826086956e-06,
+ "loss": 0.7927,
+ "step": 183
+ },
+ {
+ "epoch": 0.61,
+ "eval_loss": 0.7837424278259277,
+ "eval_runtime": 17.6126,
+ "eval_samples_per_second": 129.566,
+ "eval_steps_per_second": 5.451,
+ "step": 183
+ },
+ {
+ "epoch": 0.61,
+ "learning_rate": 5.548913043478261e-06,
+ "loss": 0.7651,
+ "step": 184
+ },
+ {
+ "epoch": 0.61,
+ "learning_rate": 5.543478260869566e-06,
+ "loss": 0.7851,
+ "step": 185
+ },
+ {
+ "epoch": 0.62,
+ "learning_rate": 5.53804347826087e-06,
+ "loss": 0.7797,
+ "step": 186
+ },
+ {
+ "epoch": 0.62,
+ "learning_rate": 5.532608695652174e-06,
+ "loss": 0.7893,
+ "step": 187
+ },
+ {
+ "epoch": 0.62,
+ "learning_rate": 5.527173913043479e-06,
+ "loss": 0.783,
+ "step": 188
+ },
+ {
+ "epoch": 0.63,
+ "learning_rate": 5.521739130434782e-06,
+ "loss": 0.7599,
+ "step": 189
+ },
+ {
+ "epoch": 0.63,
+ "learning_rate": 5.516304347826087e-06,
+ "loss": 0.7859,
+ "step": 190
+ },
+ {
+ "epoch": 0.63,
+ "learning_rate": 5.5108695652173915e-06,
+ "loss": 0.7765,
+ "step": 191
+ },
+ {
+ "epoch": 0.64,
+ "learning_rate": 5.505434782608696e-06,
+ "loss": 0.7577,
+ "step": 192
+ },
+ {
+ "epoch": 0.64,
+ "learning_rate": 5.5e-06,
+ "loss": 0.7743,
+ "step": 193
+ },
+ {
+ "epoch": 0.64,
+ "learning_rate": 5.494565217391305e-06,
+ "loss": 0.7792,
+ "step": 194
+ },
+ {
+ "epoch": 0.65,
+ "learning_rate": 5.489130434782608e-06,
+ "loss": 0.7652,
+ "step": 195
+ },
+ {
+ "epoch": 0.65,
+ "learning_rate": 5.483695652173913e-06,
+ "loss": 0.765,
+ "step": 196
+ },
+ {
+ "epoch": 0.65,
+ "learning_rate": 5.478260869565217e-06,
+ "loss": 0.7859,
+ "step": 197
+ },
+ {
+ "epoch": 0.66,
+ "learning_rate": 5.472826086956522e-06,
+ "loss": 0.7969,
+ "step": 198
+ },
+ {
+ "epoch": 0.66,
+ "learning_rate": 5.467391304347827e-06,
+ "loss": 0.7669,
+ "step": 199
+ },
+ {
+ "epoch": 0.66,
+ "learning_rate": 5.461956521739131e-06,
+ "loss": 0.7502,
+ "step": 200
+ },
+ {
+ "epoch": 0.67,
+ "learning_rate": 5.456521739130435e-06,
+ "loss": 0.7658,
+ "step": 201
+ },
+ {
+ "epoch": 0.67,
+ "learning_rate": 5.451086956521739e-06,
+ "loss": 0.7901,
+ "step": 202
+ },
+ {
+ "epoch": 0.67,
+ "learning_rate": 5.445652173913044e-06,
+ "loss": 0.7658,
+ "step": 203
+ },
+ {
+ "epoch": 0.68,
+ "learning_rate": 5.4402173913043475e-06,
+ "loss": 0.7589,
+ "step": 204
+ },
+ {
+ "epoch": 0.68,
+ "learning_rate": 5.4347826086956525e-06,
+ "loss": 0.7774,
+ "step": 205
+ },
+ {
+ "epoch": 0.68,
+ "learning_rate": 5.429347826086957e-06,
+ "loss": 0.7738,
+ "step": 206
+ },
+ {
+ "epoch": 0.69,
+ "learning_rate": 5.423913043478261e-06,
+ "loss": 0.801,
+ "step": 207
+ },
+ {
+ "epoch": 0.69,
+ "learning_rate": 5.418478260869565e-06,
+ "loss": 0.7684,
+ "step": 208
+ },
+ {
+ "epoch": 0.69,
+ "learning_rate": 5.41304347826087e-06,
+ "loss": 0.7686,
+ "step": 209
+ },
+ {
+ "epoch": 0.7,
+ "learning_rate": 5.4076086956521734e-06,
+ "loss": 0.7751,
+ "step": 210
+ },
+ {
+ "epoch": 0.7,
+ "learning_rate": 5.4021739130434785e-06,
+ "loss": 0.7707,
+ "step": 211
+ },
+ {
+ "epoch": 0.7,
+ "learning_rate": 5.396739130434783e-06,
+ "loss": 0.7807,
+ "step": 212
+ },
+ {
+ "epoch": 0.71,
+ "learning_rate": 5.391304347826087e-06,
+ "loss": 0.7735,
+ "step": 213
+ },
+ {
+ "epoch": 0.71,
+ "learning_rate": 5.385869565217392e-06,
+ "loss": 0.7727,
+ "step": 214
+ },
+ {
+ "epoch": 0.71,
+ "learning_rate": 5.380434782608696e-06,
+ "loss": 0.7639,
+ "step": 215
+ },
+ {
+ "epoch": 0.72,
+ "learning_rate": 5.375e-06,
+ "loss": 0.7602,
+ "step": 216
+ },
+ {
+ "epoch": 0.72,
+ "learning_rate": 5.369565217391304e-06,
+ "loss": 0.7832,
+ "step": 217
+ },
+ {
+ "epoch": 0.72,
+ "learning_rate": 5.364130434782609e-06,
+ "loss": 0.7824,
+ "step": 218
+ },
+ {
+ "epoch": 0.73,
+ "learning_rate": 5.358695652173913e-06,
+ "loss": 0.7712,
+ "step": 219
+ },
+ {
+ "epoch": 0.73,
+ "learning_rate": 5.353260869565218e-06,
+ "loss": 0.7614,
+ "step": 220
+ },
+ {
+ "epoch": 0.73,
+ "learning_rate": 5.347826086956522e-06,
+ "loss": 0.782,
+ "step": 221
+ },
+ {
+ "epoch": 0.74,
+ "learning_rate": 5.342391304347826e-06,
+ "loss": 0.7726,
+ "step": 222
+ },
+ {
+ "epoch": 0.74,
+ "learning_rate": 5.33695652173913e-06,
+ "loss": 0.7577,
+ "step": 223
+ },
+ {
+ "epoch": 0.74,
+ "learning_rate": 5.331521739130435e-06,
+ "loss": 0.7623,
+ "step": 224
+ },
+ {
+ "epoch": 0.75,
+ "learning_rate": 5.326086956521739e-06,
+ "loss": 0.761,
+ "step": 225
+ },
+ {
+ "epoch": 0.75,
+ "learning_rate": 5.320652173913044e-06,
+ "loss": 0.782,
+ "step": 226
+ },
+ {
+ "epoch": 0.75,
+ "learning_rate": 5.315217391304348e-06,
+ "loss": 0.768,
+ "step": 227
+ },
+ {
+ "epoch": 0.76,
+ "learning_rate": 5.309782608695652e-06,
+ "loss": 0.7719,
+ "step": 228
+ },
+ {
+ "epoch": 0.76,
+ "learning_rate": 5.304347826086957e-06,
+ "loss": 0.7791,
+ "step": 229
+ },
+ {
+ "epoch": 0.76,
+ "learning_rate": 5.298913043478261e-06,
+ "loss": 0.7725,
+ "step": 230
+ },
+ {
+ "epoch": 0.77,
+ "learning_rate": 5.2934782608695654e-06,
+ "loss": 0.7805,
+ "step": 231
+ },
+ {
+ "epoch": 0.77,
+ "learning_rate": 5.28804347826087e-06,
+ "loss": 0.7689,
+ "step": 232
+ },
+ {
+ "epoch": 0.77,
+ "learning_rate": 5.282608695652174e-06,
+ "loss": 0.7599,
+ "step": 233
+ },
+ {
+ "epoch": 0.78,
+ "learning_rate": 5.277173913043478e-06,
+ "loss": 0.772,
+ "step": 234
+ },
+ {
+ "epoch": 0.78,
+ "learning_rate": 5.271739130434783e-06,
+ "loss": 0.7696,
+ "step": 235
+ },
+ {
+ "epoch": 0.78,
+ "learning_rate": 5.266304347826087e-06,
+ "loss": 0.7774,
+ "step": 236
+ },
+ {
+ "epoch": 0.79,
+ "learning_rate": 5.260869565217391e-06,
+ "loss": 0.7707,
+ "step": 237
+ },
+ {
+ "epoch": 0.79,
+ "learning_rate": 5.2554347826086955e-06,
+ "loss": 0.771,
+ "step": 238
+ },
+ {
+ "epoch": 0.79,
+ "learning_rate": 5.2500000000000006e-06,
+ "loss": 0.7562,
+ "step": 239
+ },
+ {
+ "epoch": 0.8,
+ "learning_rate": 5.244565217391305e-06,
+ "loss": 0.7635,
+ "step": 240
+ },
+ {
+ "epoch": 0.8,
+ "learning_rate": 5.239130434782609e-06,
+ "loss": 0.7594,
+ "step": 241
+ },
+ {
+ "epoch": 0.8,
+ "learning_rate": 5.233695652173913e-06,
+ "loss": 0.7374,
+ "step": 242
+ },
+ {
+ "epoch": 0.81,
+ "learning_rate": 5.228260869565217e-06,
+ "loss": 0.7605,
+ "step": 243
+ },
+ {
+ "epoch": 0.81,
+ "learning_rate": 5.222826086956522e-06,
+ "loss": 0.7742,
+ "step": 244
+ },
+ {
+ "epoch": 0.81,
+ "eval_loss": 0.7746977806091309,
+ "eval_runtime": 17.6216,
+ "eval_samples_per_second": 129.5,
+ "eval_steps_per_second": 5.448,
+ "step": 244
+ },
+ {
+ "epoch": 0.81,
+ "learning_rate": 5.2173913043478265e-06,
+ "loss": 0.779,
+ "step": 245
+ },
+ {
+ "epoch": 0.82,
+ "learning_rate": 5.211956521739131e-06,
+ "loss": 0.748,
+ "step": 246
+ },
+ {
+ "epoch": 0.82,
+ "learning_rate": 5.206521739130435e-06,
+ "loss": 0.7851,
+ "step": 247
+ },
+ {
+ "epoch": 0.82,
+ "learning_rate": 5.201086956521739e-06,
+ "loss": 0.7543,
+ "step": 248
+ },
+ {
+ "epoch": 0.83,
+ "learning_rate": 5.195652173913043e-06,
+ "loss": 0.7666,
+ "step": 249
+ },
+ {
+ "epoch": 0.83,
+ "learning_rate": 5.190217391304348e-06,
+ "loss": 0.7742,
+ "step": 250
+ },
+ {
+ "epoch": 0.83,
+ "learning_rate": 5.184782608695652e-06,
+ "loss": 0.7726,
+ "step": 251
+ },
+ {
+ "epoch": 0.84,
+ "learning_rate": 5.1793478260869566e-06,
+ "loss": 0.7659,
+ "step": 252
+ },
+ {
+ "epoch": 0.84,
+ "learning_rate": 5.173913043478262e-06,
+ "loss": 0.7677,
+ "step": 253
+ },
+ {
+ "epoch": 0.84,
+ "learning_rate": 5.168478260869565e-06,
+ "loss": 0.7497,
+ "step": 254
+ },
+ {
+ "epoch": 0.85,
+ "learning_rate": 5.16304347826087e-06,
+ "loss": 0.7655,
+ "step": 255
+ },
+ {
+ "epoch": 0.85,
+ "learning_rate": 5.157608695652174e-06,
+ "loss": 0.772,
+ "step": 256
+ },
+ {
+ "epoch": 0.85,
+ "learning_rate": 5.152173913043478e-06,
+ "loss": 0.7765,
+ "step": 257
+ },
+ {
+ "epoch": 0.86,
+ "learning_rate": 5.1467391304347825e-06,
+ "loss": 0.7722,
+ "step": 258
+ },
+ {
+ "epoch": 0.86,
+ "learning_rate": 5.1413043478260875e-06,
+ "loss": 0.7541,
+ "step": 259
+ },
+ {
+ "epoch": 0.86,
+ "learning_rate": 5.135869565217391e-06,
+ "loss": 0.7553,
+ "step": 260
+ },
+ {
+ "epoch": 0.87,
+ "learning_rate": 5.130434782608696e-06,
+ "loss": 0.7713,
+ "step": 261
+ },
+ {
+ "epoch": 0.87,
+ "learning_rate": 5.125e-06,
+ "loss": 0.7649,
+ "step": 262
+ },
+ {
+ "epoch": 0.87,
+ "learning_rate": 5.119565217391304e-06,
+ "loss": 0.7639,
+ "step": 263
+ },
+ {
+ "epoch": 0.88,
+ "learning_rate": 5.114130434782608e-06,
+ "loss": 0.786,
+ "step": 264
+ },
+ {
+ "epoch": 0.88,
+ "learning_rate": 5.1086956521739134e-06,
+ "loss": 0.7542,
+ "step": 265
+ },
+ {
+ "epoch": 0.88,
+ "learning_rate": 5.103260869565218e-06,
+ "loss": 0.765,
+ "step": 266
+ },
+ {
+ "epoch": 0.89,
+ "learning_rate": 5.097826086956522e-06,
+ "loss": 0.7495,
+ "step": 267
+ },
+ {
+ "epoch": 0.89,
+ "learning_rate": 5.092391304347827e-06,
+ "loss": 0.7719,
+ "step": 268
+ },
+ {
+ "epoch": 0.89,
+ "learning_rate": 5.08695652173913e-06,
+ "loss": 0.774,
+ "step": 269
+ },
+ {
+ "epoch": 0.9,
+ "learning_rate": 5.081521739130435e-06,
+ "loss": 0.7665,
+ "step": 270
+ },
+ {
+ "epoch": 0.9,
+ "learning_rate": 5.076086956521739e-06,
+ "loss": 0.7672,
+ "step": 271
+ },
+ {
+ "epoch": 0.9,
+ "learning_rate": 5.0706521739130435e-06,
+ "loss": 0.7601,
+ "step": 272
+ },
+ {
+ "epoch": 0.91,
+ "learning_rate": 5.065217391304348e-06,
+ "loss": 0.7602,
+ "step": 273
+ },
+ {
+ "epoch": 0.91,
+ "learning_rate": 5.059782608695653e-06,
+ "loss": 0.753,
+ "step": 274
+ },
+ {
+ "epoch": 0.91,
+ "learning_rate": 5.054347826086956e-06,
+ "loss": 0.7421,
+ "step": 275
+ },
+ {
+ "epoch": 0.92,
+ "learning_rate": 5.048913043478261e-06,
+ "loss": 0.7616,
+ "step": 276
+ },
+ {
+ "epoch": 0.92,
+ "learning_rate": 5.043478260869565e-06,
+ "loss": 0.7638,
+ "step": 277
+ },
+ {
+ "epoch": 0.92,
+ "learning_rate": 5.0380434782608695e-06,
+ "loss": 0.7704,
+ "step": 278
+ },
+ {
+ "epoch": 0.93,
+ "learning_rate": 5.032608695652174e-06,
+ "loss": 0.7513,
+ "step": 279
+ },
+ {
+ "epoch": 0.93,
+ "learning_rate": 5.027173913043479e-06,
+ "loss": 0.7916,
+ "step": 280
+ },
+ {
+ "epoch": 0.93,
+ "learning_rate": 5.021739130434783e-06,
+ "loss": 0.7544,
+ "step": 281
+ },
+ {
+ "epoch": 0.94,
+ "learning_rate": 5.016304347826087e-06,
+ "loss": 0.7646,
+ "step": 282
+ },
+ {
+ "epoch": 0.94,
+ "learning_rate": 5.010869565217392e-06,
+ "loss": 0.7663,
+ "step": 283
+ },
+ {
+ "epoch": 0.94,
+ "learning_rate": 5.005434782608695e-06,
+ "loss": 0.7605,
+ "step": 284
+ },
+ {
+ "epoch": 0.95,
+ "learning_rate": 5e-06,
+ "loss": 0.7634,
+ "step": 285
+ },
+ {
+ "epoch": 0.95,
+ "learning_rate": 4.994565217391305e-06,
+ "loss": 0.7702,
+ "step": 286
+ },
+ {
+ "epoch": 0.95,
+ "learning_rate": 4.989130434782609e-06,
+ "loss": 0.7531,
+ "step": 287
+ },
+ {
+ "epoch": 0.96,
+ "learning_rate": 4.983695652173913e-06,
+ "loss": 0.7609,
+ "step": 288
+ },
+ {
+ "epoch": 0.96,
+ "learning_rate": 4.978260869565218e-06,
+ "loss": 0.7697,
+ "step": 289
+ },
+ {
+ "epoch": 0.96,
+ "learning_rate": 4.972826086956521e-06,
+ "loss": 0.7531,
+ "step": 290
+ },
+ {
+ "epoch": 0.97,
+ "learning_rate": 4.967391304347826e-06,
+ "loss": 0.7438,
+ "step": 291
+ },
+ {
+ "epoch": 0.97,
+ "learning_rate": 4.9619565217391305e-06,
+ "loss": 0.7783,
+ "step": 292
+ },
+ {
+ "epoch": 0.97,
+ "learning_rate": 4.956521739130435e-06,
+ "loss": 0.7668,
+ "step": 293
+ },
+ {
+ "epoch": 0.98,
+ "learning_rate": 4.951086956521739e-06,
+ "loss": 0.8017,
+ "step": 294
+ },
+ {
+ "epoch": 0.98,
+ "learning_rate": 4.945652173913044e-06,
+ "loss": 0.7361,
+ "step": 295
+ },
+ {
+ "epoch": 0.98,
+ "learning_rate": 4.940217391304348e-06,
+ "loss": 0.7649,
+ "step": 296
+ },
+ {
+ "epoch": 0.99,
+ "learning_rate": 4.934782608695652e-06,
+ "loss": 0.7704,
+ "step": 297
+ },
+ {
+ "epoch": 0.99,
+ "learning_rate": 4.929347826086957e-06,
+ "loss": 0.7502,
+ "step": 298
+ },
+ {
+ "epoch": 0.99,
+ "learning_rate": 4.923913043478261e-06,
+ "loss": 0.7736,
+ "step": 299
+ },
+ {
+ "epoch": 1.0,
+ "learning_rate": 4.918478260869566e-06,
+ "loss": 0.759,
+ "step": 300
+ },
+ {
+ "epoch": 1.0,
+ "learning_rate": 4.91304347826087e-06,
+ "loss": 0.7812,
+ "step": 301
+ },
+ {
+ "epoch": 1.0,
+ "learning_rate": 4.907608695652174e-06,
+ "loss": 0.7343,
+ "step": 302
+ },
+ {
+ "epoch": 1.0,
+ "learning_rate": 4.902173913043478e-06,
+ "loss": 0.7192,
+ "step": 303
+ },
+ {
+ "epoch": 1.01,
+ "learning_rate": 4.896739130434783e-06,
+ "loss": 0.7057,
+ "step": 304
+ },
+ {
+ "epoch": 1.01,
+ "learning_rate": 4.8913043478260865e-06,
+ "loss": 0.6932,
+ "step": 305
+ },
+ {
+ "epoch": 1.01,
+ "eval_loss": 0.7692939639091492,
+ "eval_runtime": 17.6192,
+ "eval_samples_per_second": 129.518,
+ "eval_steps_per_second": 5.449,
+ "step": 305
+ },
+ {
+ "epoch": 1.01,
+ "learning_rate": 4.8858695652173916e-06,
+ "loss": 0.711,
+ "step": 306
+ },
+ {
+ "epoch": 1.02,
+ "learning_rate": 4.880434782608696e-06,
+ "loss": 0.7022,
+ "step": 307
+ },
+ {
+ "epoch": 1.02,
+ "learning_rate": 4.875e-06,
+ "loss": 0.7019,
+ "step": 308
+ },
+ {
+ "epoch": 1.02,
+ "learning_rate": 4.869565217391305e-06,
+ "loss": 0.7355,
+ "step": 309
+ },
+ {
+ "epoch": 1.03,
+ "learning_rate": 4.864130434782609e-06,
+ "loss": 0.6934,
+ "step": 310
+ },
+ {
+ "epoch": 1.03,
+ "learning_rate": 4.858695652173913e-06,
+ "loss": 0.7088,
+ "step": 311
+ },
+ {
+ "epoch": 1.03,
+ "learning_rate": 4.8532608695652175e-06,
+ "loss": 0.7062,
+ "step": 312
+ },
+ {
+ "epoch": 1.04,
+ "learning_rate": 4.847826086956522e-06,
+ "loss": 0.6968,
+ "step": 313
+ },
+ {
+ "epoch": 1.04,
+ "learning_rate": 4.842391304347826e-06,
+ "loss": 0.7053,
+ "step": 314
+ },
+ {
+ "epoch": 1.04,
+ "learning_rate": 4.836956521739131e-06,
+ "loss": 0.7199,
+ "step": 315
+ },
+ {
+ "epoch": 1.05,
+ "learning_rate": 4.831521739130435e-06,
+ "loss": 0.6892,
+ "step": 316
+ },
+ {
+ "epoch": 1.05,
+ "learning_rate": 4.826086956521739e-06,
+ "loss": 0.6913,
+ "step": 317
+ },
+ {
+ "epoch": 1.05,
+ "learning_rate": 4.820652173913043e-06,
+ "loss": 0.6999,
+ "step": 318
+ },
+ {
+ "epoch": 1.06,
+ "learning_rate": 4.815217391304348e-06,
+ "loss": 0.7066,
+ "step": 319
+ },
+ {
+ "epoch": 1.06,
+ "learning_rate": 4.809782608695652e-06,
+ "loss": 0.7163,
+ "step": 320
+ },
+ {
+ "epoch": 1.06,
+ "learning_rate": 4.804347826086957e-06,
+ "loss": 0.7102,
+ "step": 321
+ },
+ {
+ "epoch": 1.07,
+ "learning_rate": 4.798913043478261e-06,
+ "loss": 0.7012,
+ "step": 322
+ },
+ {
+ "epoch": 1.07,
+ "learning_rate": 4.793478260869565e-06,
+ "loss": 0.6949,
+ "step": 323
+ },
+ {
+ "epoch": 1.07,
+ "learning_rate": 4.78804347826087e-06,
+ "loss": 0.6825,
+ "step": 324
+ },
+ {
+ "epoch": 1.08,
+ "learning_rate": 4.782608695652174e-06,
+ "loss": 0.6905,
+ "step": 325
+ },
+ {
+ "epoch": 1.08,
+ "learning_rate": 4.7771739130434785e-06,
+ "loss": 0.709,
+ "step": 326
+ },
+ {
+ "epoch": 1.08,
+ "learning_rate": 4.771739130434783e-06,
+ "loss": 0.6833,
+ "step": 327
+ },
+ {
+ "epoch": 1.09,
+ "learning_rate": 4.766304347826087e-06,
+ "loss": 0.6943,
+ "step": 328
+ },
+ {
+ "epoch": 1.09,
+ "learning_rate": 4.760869565217391e-06,
+ "loss": 0.701,
+ "step": 329
+ },
+ {
+ "epoch": 1.09,
+ "learning_rate": 4.755434782608696e-06,
+ "loss": 0.6976,
+ "step": 330
+ },
+ {
+ "epoch": 1.1,
+ "learning_rate": 4.75e-06,
+ "loss": 0.6999,
+ "step": 331
+ },
+ {
+ "epoch": 1.1,
+ "learning_rate": 4.7445652173913044e-06,
+ "loss": 0.708,
+ "step": 332
+ },
+ {
+ "epoch": 1.1,
+ "learning_rate": 4.739130434782609e-06,
+ "loss": 0.7215,
+ "step": 333
+ },
+ {
+ "epoch": 1.11,
+ "learning_rate": 4.733695652173913e-06,
+ "loss": 0.6919,
+ "step": 334
+ },
+ {
+ "epoch": 1.11,
+ "learning_rate": 4.728260869565217e-06,
+ "loss": 0.7093,
+ "step": 335
+ },
+ {
+ "epoch": 1.11,
+ "learning_rate": 4.722826086956522e-06,
+ "loss": 0.6882,
+ "step": 336
+ },
+ {
+ "epoch": 1.12,
+ "learning_rate": 4.717391304347826e-06,
+ "loss": 0.7107,
+ "step": 337
+ },
+ {
+ "epoch": 1.12,
+ "learning_rate": 4.71195652173913e-06,
+ "loss": 0.692,
+ "step": 338
+ },
+ {
+ "epoch": 1.12,
+ "learning_rate": 4.706521739130435e-06,
+ "loss": 0.7042,
+ "step": 339
+ },
+ {
+ "epoch": 1.13,
+ "learning_rate": 4.7010869565217396e-06,
+ "loss": 0.7004,
+ "step": 340
+ },
+ {
+ "epoch": 1.13,
+ "learning_rate": 4.695652173913044e-06,
+ "loss": 0.7049,
+ "step": 341
+ },
+ {
+ "epoch": 1.13,
+ "learning_rate": 4.690217391304348e-06,
+ "loss": 0.7071,
+ "step": 342
+ },
+ {
+ "epoch": 1.14,
+ "learning_rate": 4.684782608695652e-06,
+ "loss": 0.6917,
+ "step": 343
+ },
+ {
+ "epoch": 1.14,
+ "learning_rate": 4.679347826086956e-06,
+ "loss": 0.7141,
+ "step": 344
+ },
+ {
+ "epoch": 1.14,
+ "learning_rate": 4.673913043478261e-06,
+ "loss": 0.7129,
+ "step": 345
+ },
+ {
+ "epoch": 1.15,
+ "learning_rate": 4.6684782608695655e-06,
+ "loss": 0.6929,
+ "step": 346
+ },
+ {
+ "epoch": 1.15,
+ "learning_rate": 4.66304347826087e-06,
+ "loss": 0.7041,
+ "step": 347
+ },
+ {
+ "epoch": 1.15,
+ "learning_rate": 4.657608695652174e-06,
+ "loss": 0.6764,
+ "step": 348
+ },
+ {
+ "epoch": 1.16,
+ "learning_rate": 4.652173913043478e-06,
+ "loss": 0.6999,
+ "step": 349
+ },
+ {
+ "epoch": 1.16,
+ "learning_rate": 4.646739130434783e-06,
+ "loss": 0.7008,
+ "step": 350
+ },
+ {
+ "epoch": 1.16,
+ "learning_rate": 4.641304347826087e-06,
+ "loss": 0.7005,
+ "step": 351
+ },
+ {
+ "epoch": 1.17,
+ "learning_rate": 4.635869565217391e-06,
+ "loss": 0.7069,
+ "step": 352
+ },
+ {
+ "epoch": 1.17,
+ "learning_rate": 4.630434782608696e-06,
+ "loss": 0.6915,
+ "step": 353
+ },
+ {
+ "epoch": 1.17,
+ "learning_rate": 4.625000000000001e-06,
+ "loss": 0.6864,
+ "step": 354
+ },
+ {
+ "epoch": 1.18,
+ "learning_rate": 4.619565217391304e-06,
+ "loss": 0.6933,
+ "step": 355
+ },
+ {
+ "epoch": 1.18,
+ "learning_rate": 4.614130434782609e-06,
+ "loss": 0.6874,
+ "step": 356
+ },
+ {
+ "epoch": 1.18,
+ "learning_rate": 4.608695652173913e-06,
+ "loss": 0.695,
+ "step": 357
+ },
+ {
+ "epoch": 1.19,
+ "learning_rate": 4.603260869565217e-06,
+ "loss": 0.6937,
+ "step": 358
+ },
+ {
+ "epoch": 1.19,
+ "learning_rate": 4.5978260869565215e-06,
+ "loss": 0.714,
+ "step": 359
+ },
+ {
+ "epoch": 1.19,
+ "learning_rate": 4.5923913043478265e-06,
+ "loss": 0.7005,
+ "step": 360
+ },
+ {
+ "epoch": 1.2,
+ "learning_rate": 4.58695652173913e-06,
+ "loss": 0.701,
+ "step": 361
+ },
+ {
+ "epoch": 1.2,
+ "learning_rate": 4.581521739130435e-06,
+ "loss": 0.7004,
+ "step": 362
+ },
+ {
+ "epoch": 1.2,
+ "learning_rate": 4.576086956521739e-06,
+ "loss": 0.7059,
+ "step": 363
+ },
+ {
+ "epoch": 1.21,
+ "learning_rate": 4.570652173913043e-06,
+ "loss": 0.7226,
+ "step": 364
+ },
+ {
+ "epoch": 1.21,
+ "learning_rate": 4.565217391304348e-06,
+ "loss": 0.7113,
+ "step": 365
+ },
+ {
+ "epoch": 1.21,
+ "learning_rate": 4.5597826086956525e-06,
+ "loss": 0.6889,
+ "step": 366
+ },
+ {
+ "epoch": 1.21,
+ "eval_loss": 0.7673465609550476,
+ "eval_runtime": 17.6178,
+ "eval_samples_per_second": 129.528,
+ "eval_steps_per_second": 5.449,
+ "step": 366
+ },
+ {
+ "epoch": 1.22,
+ "learning_rate": 4.554347826086957e-06,
+ "loss": 0.7108,
+ "step": 367
+ },
+ {
+ "epoch": 1.22,
+ "learning_rate": 4.548913043478261e-06,
+ "loss": 0.677,
+ "step": 368
+ },
+ {
+ "epoch": 1.22,
+ "learning_rate": 4.543478260869566e-06,
+ "loss": 0.6995,
+ "step": 369
+ },
+ {
+ "epoch": 1.23,
+ "learning_rate": 4.538043478260869e-06,
+ "loss": 0.6886,
+ "step": 370
+ },
+ {
+ "epoch": 1.23,
+ "learning_rate": 4.532608695652174e-06,
+ "loss": 0.7069,
+ "step": 371
+ },
+ {
+ "epoch": 1.23,
+ "learning_rate": 4.527173913043478e-06,
+ "loss": 0.6865,
+ "step": 372
+ },
+ {
+ "epoch": 1.24,
+ "learning_rate": 4.5217391304347826e-06,
+ "loss": 0.7159,
+ "step": 373
+ },
+ {
+ "epoch": 1.24,
+ "learning_rate": 4.516304347826087e-06,
+ "loss": 0.7085,
+ "step": 374
+ },
+ {
+ "epoch": 1.24,
+ "learning_rate": 4.510869565217392e-06,
+ "loss": 0.7113,
+ "step": 375
+ },
+ {
+ "epoch": 1.25,
+ "learning_rate": 4.505434782608695e-06,
+ "loss": 0.7031,
+ "step": 376
+ },
+ {
+ "epoch": 1.25,
+ "learning_rate": 4.5e-06,
+ "loss": 0.7236,
+ "step": 377
+ },
+ {
+ "epoch": 1.25,
+ "learning_rate": 4.494565217391305e-06,
+ "loss": 0.6915,
+ "step": 378
+ },
+ {
+ "epoch": 1.26,
+ "learning_rate": 4.4891304347826085e-06,
+ "loss": 0.7112,
+ "step": 379
+ },
+ {
+ "epoch": 1.26,
+ "learning_rate": 4.4836956521739135e-06,
+ "loss": 0.6947,
+ "step": 380
+ },
+ {
+ "epoch": 1.26,
+ "learning_rate": 4.478260869565218e-06,
+ "loss": 0.7108,
+ "step": 381
+ },
+ {
+ "epoch": 1.27,
+ "learning_rate": 4.472826086956522e-06,
+ "loss": 0.7302,
+ "step": 382
+ },
+ {
+ "epoch": 1.27,
+ "learning_rate": 4.467391304347826e-06,
+ "loss": 0.7203,
+ "step": 383
+ },
+ {
+ "epoch": 1.27,
+ "learning_rate": 4.461956521739131e-06,
+ "loss": 0.6911,
+ "step": 384
+ },
+ {
+ "epoch": 1.28,
+ "learning_rate": 4.456521739130434e-06,
+ "loss": 0.7072,
+ "step": 385
+ },
+ {
+ "epoch": 1.28,
+ "learning_rate": 4.451086956521739e-06,
+ "loss": 0.698,
+ "step": 386
+ },
+ {
+ "epoch": 1.28,
+ "learning_rate": 4.445652173913044e-06,
+ "loss": 0.7089,
+ "step": 387
+ },
+ {
+ "epoch": 1.29,
+ "learning_rate": 4.440217391304348e-06,
+ "loss": 0.7027,
+ "step": 388
+ },
+ {
+ "epoch": 1.29,
+ "learning_rate": 4.434782608695652e-06,
+ "loss": 0.6814,
+ "step": 389
+ },
+ {
+ "epoch": 1.29,
+ "learning_rate": 4.429347826086957e-06,
+ "loss": 0.7034,
+ "step": 390
+ },
+ {
+ "epoch": 1.3,
+ "learning_rate": 4.423913043478261e-06,
+ "loss": 0.6952,
+ "step": 391
+ },
+ {
+ "epoch": 1.3,
+ "learning_rate": 4.418478260869565e-06,
+ "loss": 0.7081,
+ "step": 392
+ },
+ {
+ "epoch": 1.3,
+ "learning_rate": 4.41304347826087e-06,
+ "loss": 0.711,
+ "step": 393
+ },
+ {
+ "epoch": 1.31,
+ "learning_rate": 4.407608695652174e-06,
+ "loss": 0.7107,
+ "step": 394
+ },
+ {
+ "epoch": 1.31,
+ "learning_rate": 4.402173913043479e-06,
+ "loss": 0.6898,
+ "step": 395
+ },
+ {
+ "epoch": 1.31,
+ "learning_rate": 4.396739130434783e-06,
+ "loss": 0.712,
+ "step": 396
+ },
+ {
+ "epoch": 1.32,
+ "learning_rate": 4.391304347826087e-06,
+ "loss": 0.6985,
+ "step": 397
+ },
+ {
+ "epoch": 1.32,
+ "learning_rate": 4.385869565217391e-06,
+ "loss": 0.7024,
+ "step": 398
+ },
+ {
+ "epoch": 1.32,
+ "learning_rate": 4.380434782608696e-06,
+ "loss": 0.7142,
+ "step": 399
+ },
+ {
+ "epoch": 1.33,
+ "learning_rate": 4.375e-06,
+ "loss": 0.7041,
+ "step": 400
+ },
+ {
+ "epoch": 1.33,
+ "learning_rate": 4.369565217391305e-06,
+ "loss": 0.6963,
+ "step": 401
+ },
+ {
+ "epoch": 1.33,
+ "learning_rate": 4.364130434782609e-06,
+ "loss": 0.7048,
+ "step": 402
+ },
+ {
+ "epoch": 1.34,
+ "learning_rate": 4.358695652173913e-06,
+ "loss": 0.6865,
+ "step": 403
+ },
+ {
+ "epoch": 1.34,
+ "learning_rate": 4.353260869565217e-06,
+ "loss": 0.721,
+ "step": 404
+ },
+ {
+ "epoch": 1.34,
+ "learning_rate": 4.347826086956522e-06,
+ "loss": 0.6925,
+ "step": 405
+ },
+ {
+ "epoch": 1.35,
+ "learning_rate": 4.342391304347826e-06,
+ "loss": 0.7016,
+ "step": 406
+ },
+ {
+ "epoch": 1.35,
+ "learning_rate": 4.3369565217391306e-06,
+ "loss": 0.7164,
+ "step": 407
+ },
+ {
+ "epoch": 1.35,
+ "learning_rate": 4.331521739130435e-06,
+ "loss": 0.6937,
+ "step": 408
+ },
+ {
+ "epoch": 1.36,
+ "learning_rate": 4.326086956521739e-06,
+ "loss": 0.7193,
+ "step": 409
+ },
+ {
+ "epoch": 1.36,
+ "learning_rate": 4.320652173913044e-06,
+ "loss": 0.6959,
+ "step": 410
+ },
+ {
+ "epoch": 1.36,
+ "learning_rate": 4.315217391304348e-06,
+ "loss": 0.6899,
+ "step": 411
+ },
+ {
+ "epoch": 1.37,
+ "learning_rate": 4.309782608695652e-06,
+ "loss": 0.7052,
+ "step": 412
+ },
+ {
+ "epoch": 1.37,
+ "learning_rate": 4.3043478260869565e-06,
+ "loss": 0.7075,
+ "step": 413
+ },
+ {
+ "epoch": 1.37,
+ "learning_rate": 4.298913043478261e-06,
+ "loss": 0.7107,
+ "step": 414
+ },
+ {
+ "epoch": 1.38,
+ "learning_rate": 4.293478260869565e-06,
+ "loss": 0.7055,
+ "step": 415
+ },
+ {
+ "epoch": 1.38,
+ "learning_rate": 4.28804347826087e-06,
+ "loss": 0.7116,
+ "step": 416
+ },
+ {
+ "epoch": 1.38,
+ "learning_rate": 4.282608695652174e-06,
+ "loss": 0.7202,
+ "step": 417
+ },
+ {
+ "epoch": 1.39,
+ "learning_rate": 4.277173913043478e-06,
+ "loss": 0.6763,
+ "step": 418
+ },
+ {
+ "epoch": 1.39,
+ "learning_rate": 4.271739130434783e-06,
+ "loss": 0.6957,
+ "step": 419
+ },
+ {
+ "epoch": 1.39,
+ "learning_rate": 4.2663043478260874e-06,
+ "loss": 0.7022,
+ "step": 420
+ },
+ {
+ "epoch": 1.4,
+ "learning_rate": 4.260869565217392e-06,
+ "loss": 0.7079,
+ "step": 421
+ },
+ {
+ "epoch": 1.4,
+ "learning_rate": 4.255434782608696e-06,
+ "loss": 0.7023,
+ "step": 422
+ },
+ {
+ "epoch": 1.4,
+ "learning_rate": 4.25e-06,
+ "loss": 0.6954,
+ "step": 423
+ },
+ {
+ "epoch": 1.41,
+ "learning_rate": 4.244565217391304e-06,
+ "loss": 0.693,
+ "step": 424
+ },
+ {
+ "epoch": 1.41,
+ "learning_rate": 4.239130434782609e-06,
+ "loss": 0.7061,
+ "step": 425
+ },
+ {
+ "epoch": 1.41,
+ "learning_rate": 4.233695652173913e-06,
+ "loss": 0.701,
+ "step": 426
+ },
+ {
+ "epoch": 1.42,
+ "learning_rate": 4.2282608695652175e-06,
+ "loss": 0.708,
+ "step": 427
+ },
+ {
+ "epoch": 1.42,
+ "eval_loss": 0.7639372944831848,
+ "eval_runtime": 17.6159,
+ "eval_samples_per_second": 129.542,
+ "eval_steps_per_second": 5.45,
+ "step": 427
+ },
+ {
+ "epoch": 1.42,
+ "learning_rate": 4.222826086956522e-06,
+ "loss": 0.6991,
+ "step": 428
+ },
+ {
+ "epoch": 1.42,
+ "learning_rate": 4.217391304347826e-06,
+ "loss": 0.7042,
+ "step": 429
+ },
+ {
+ "epoch": 1.43,
+ "learning_rate": 4.21195652173913e-06,
+ "loss": 0.6984,
+ "step": 430
+ },
+ {
+ "epoch": 1.43,
+ "learning_rate": 4.206521739130435e-06,
+ "loss": 0.6925,
+ "step": 431
+ },
+ {
+ "epoch": 1.43,
+ "learning_rate": 4.201086956521739e-06,
+ "loss": 0.6928,
+ "step": 432
+ },
+ {
+ "epoch": 1.44,
+ "learning_rate": 4.1956521739130434e-06,
+ "loss": 0.7,
+ "step": 433
+ },
+ {
+ "epoch": 1.44,
+ "learning_rate": 4.1902173913043485e-06,
+ "loss": 0.6964,
+ "step": 434
+ },
+ {
+ "epoch": 1.44,
+ "learning_rate": 4.184782608695652e-06,
+ "loss": 0.6815,
+ "step": 435
+ },
+ {
+ "epoch": 1.45,
+ "learning_rate": 4.179347826086957e-06,
+ "loss": 0.7096,
+ "step": 436
+ },
+ {
+ "epoch": 1.45,
+ "learning_rate": 4.173913043478261e-06,
+ "loss": 0.6967,
+ "step": 437
+ },
+ {
+ "epoch": 1.45,
+ "learning_rate": 4.168478260869565e-06,
+ "loss": 0.6979,
+ "step": 438
+ },
+ {
+ "epoch": 1.46,
+ "learning_rate": 4.163043478260869e-06,
+ "loss": 0.7085,
+ "step": 439
+ },
+ {
+ "epoch": 1.46,
+ "learning_rate": 4.157608695652174e-06,
+ "loss": 0.7059,
+ "step": 440
+ },
+ {
+ "epoch": 1.46,
+ "learning_rate": 4.1521739130434786e-06,
+ "loss": 0.6915,
+ "step": 441
+ },
+ {
+ "epoch": 1.47,
+ "learning_rate": 4.146739130434783e-06,
+ "loss": 0.6905,
+ "step": 442
+ },
+ {
+ "epoch": 1.47,
+ "learning_rate": 4.141304347826087e-06,
+ "loss": 0.7023,
+ "step": 443
+ },
+ {
+ "epoch": 1.47,
+ "learning_rate": 4.135869565217391e-06,
+ "loss": 0.7031,
+ "step": 444
+ },
+ {
+ "epoch": 1.48,
+ "learning_rate": 4.130434782608695e-06,
+ "loss": 0.6959,
+ "step": 445
+ },
+ {
+ "epoch": 1.48,
+ "learning_rate": 4.125e-06,
+ "loss": 0.7103,
+ "step": 446
+ },
+ {
+ "epoch": 1.48,
+ "learning_rate": 4.1195652173913045e-06,
+ "loss": 0.7052,
+ "step": 447
+ },
+ {
+ "epoch": 1.49,
+ "learning_rate": 4.114130434782609e-06,
+ "loss": 0.6919,
+ "step": 448
+ },
+ {
+ "epoch": 1.49,
+ "learning_rate": 4.108695652173914e-06,
+ "loss": 0.6965,
+ "step": 449
+ },
+ {
+ "epoch": 1.49,
+ "learning_rate": 4.103260869565217e-06,
+ "loss": 0.6912,
+ "step": 450
+ },
+ {
+ "epoch": 1.5,
+ "learning_rate": 4.097826086956522e-06,
+ "loss": 0.6924,
+ "step": 451
+ },
+ {
+ "epoch": 1.5,
+ "learning_rate": 4.092391304347826e-06,
+ "loss": 0.6991,
+ "step": 452
+ },
+ {
+ "epoch": 1.5,
+ "learning_rate": 4.08695652173913e-06,
+ "loss": 0.7091,
+ "step": 453
+ },
+ {
+ "epoch": 1.51,
+ "learning_rate": 4.081521739130435e-06,
+ "loss": 0.6905,
+ "step": 454
+ },
+ {
+ "epoch": 1.51,
+ "learning_rate": 4.07608695652174e-06,
+ "loss": 0.6723,
+ "step": 455
+ },
+ {
+ "epoch": 1.51,
+ "learning_rate": 4.070652173913043e-06,
+ "loss": 0.7039,
+ "step": 456
+ },
+ {
+ "epoch": 1.52,
+ "learning_rate": 4.065217391304348e-06,
+ "loss": 0.7229,
+ "step": 457
+ },
+ {
+ "epoch": 1.52,
+ "learning_rate": 4.059782608695652e-06,
+ "loss": 0.7003,
+ "step": 458
+ },
+ {
+ "epoch": 1.52,
+ "learning_rate": 4.054347826086956e-06,
+ "loss": 0.7108,
+ "step": 459
+ },
+ {
+ "epoch": 1.53,
+ "learning_rate": 4.048913043478261e-06,
+ "loss": 0.7016,
+ "step": 460
+ },
+ {
+ "epoch": 1.53,
+ "learning_rate": 4.0434782608695655e-06,
+ "loss": 0.6989,
+ "step": 461
+ },
+ {
+ "epoch": 1.53,
+ "learning_rate": 4.03804347826087e-06,
+ "loss": 0.6961,
+ "step": 462
+ },
+ {
+ "epoch": 1.54,
+ "learning_rate": 4.032608695652174e-06,
+ "loss": 0.7032,
+ "step": 463
+ },
+ {
+ "epoch": 1.54,
+ "learning_rate": 4.027173913043479e-06,
+ "loss": 0.6848,
+ "step": 464
+ },
+ {
+ "epoch": 1.54,
+ "learning_rate": 4.021739130434782e-06,
+ "loss": 0.7221,
+ "step": 465
+ },
+ {
+ "epoch": 1.55,
+ "learning_rate": 4.016304347826087e-06,
+ "loss": 0.698,
+ "step": 466
+ },
+ {
+ "epoch": 1.55,
+ "learning_rate": 4.0108695652173915e-06,
+ "loss": 0.7091,
+ "step": 467
+ },
+ {
+ "epoch": 1.55,
+ "learning_rate": 4.005434782608696e-06,
+ "loss": 0.6842,
+ "step": 468
+ },
+ {
+ "epoch": 1.56,
+ "learning_rate": 4e-06,
+ "loss": 0.7131,
+ "step": 469
+ },
+ {
+ "epoch": 1.56,
+ "learning_rate": 3.994565217391305e-06,
+ "loss": 0.7156,
+ "step": 470
+ },
+ {
+ "epoch": 1.56,
+ "learning_rate": 3.989130434782608e-06,
+ "loss": 0.7034,
+ "step": 471
+ },
+ {
+ "epoch": 1.57,
+ "learning_rate": 3.983695652173913e-06,
+ "loss": 0.7122,
+ "step": 472
+ },
+ {
+ "epoch": 1.57,
+ "learning_rate": 3.978260869565217e-06,
+ "loss": 0.6918,
+ "step": 473
+ },
+ {
+ "epoch": 1.57,
+ "learning_rate": 3.9728260869565216e-06,
+ "loss": 0.6963,
+ "step": 474
+ },
+ {
+ "epoch": 1.58,
+ "learning_rate": 3.967391304347827e-06,
+ "loss": 0.7275,
+ "step": 475
+ },
+ {
+ "epoch": 1.58,
+ "learning_rate": 3.961956521739131e-06,
+ "loss": 0.6986,
+ "step": 476
+ },
+ {
+ "epoch": 1.58,
+ "learning_rate": 3.956521739130435e-06,
+ "loss": 0.7079,
+ "step": 477
+ },
+ {
+ "epoch": 1.59,
+ "learning_rate": 3.951086956521739e-06,
+ "loss": 0.7087,
+ "step": 478
+ },
+ {
+ "epoch": 1.59,
+ "learning_rate": 3.945652173913044e-06,
+ "loss": 0.7108,
+ "step": 479
+ },
+ {
+ "epoch": 1.59,
+ "learning_rate": 3.9402173913043475e-06,
+ "loss": 0.6959,
+ "step": 480
+ },
+ {
+ "epoch": 1.6,
+ "learning_rate": 3.9347826086956525e-06,
+ "loss": 0.6995,
+ "step": 481
+ },
+ {
+ "epoch": 1.6,
+ "learning_rate": 3.929347826086957e-06,
+ "loss": 0.708,
+ "step": 482
+ },
+ {
+ "epoch": 1.6,
+ "learning_rate": 3.923913043478261e-06,
+ "loss": 0.7006,
+ "step": 483
+ },
+ {
+ "epoch": 1.61,
+ "learning_rate": 3.918478260869565e-06,
+ "loss": 0.6958,
+ "step": 484
+ },
+ {
+ "epoch": 1.61,
+ "learning_rate": 3.91304347826087e-06,
+ "loss": 0.6967,
+ "step": 485
+ },
+ {
+ "epoch": 1.61,
+ "learning_rate": 3.907608695652173e-06,
+ "loss": 0.7078,
+ "step": 486
+ },
+ {
+ "epoch": 1.62,
+ "learning_rate": 3.9021739130434784e-06,
+ "loss": 0.7017,
+ "step": 487
+ },
+ {
+ "epoch": 1.62,
+ "learning_rate": 3.896739130434783e-06,
+ "loss": 0.6794,
+ "step": 488
+ },
+ {
+ "epoch": 1.62,
+ "eval_loss": 0.7612941861152649,
+ "eval_runtime": 17.6201,
+ "eval_samples_per_second": 129.511,
+ "eval_steps_per_second": 5.448,
+ "step": 488
+ },
+ {
+ "epoch": 1.62,
+ "learning_rate": 3.891304347826087e-06,
+ "loss": 0.6965,
+ "step": 489
+ },
+ {
+ "epoch": 1.63,
+ "learning_rate": 3.885869565217392e-06,
+ "loss": 0.676,
+ "step": 490
+ },
+ {
+ "epoch": 1.63,
+ "learning_rate": 3.880434782608696e-06,
+ "loss": 0.6866,
+ "step": 491
+ },
+ {
+ "epoch": 1.63,
+ "learning_rate": 3.875e-06,
+ "loss": 0.6896,
+ "step": 492
+ },
+ {
+ "epoch": 1.64,
+ "learning_rate": 3.869565217391304e-06,
+ "loss": 0.7011,
+ "step": 493
+ },
+ {
+ "epoch": 1.64,
+ "learning_rate": 3.864130434782609e-06,
+ "loss": 0.6953,
+ "step": 494
+ },
+ {
+ "epoch": 1.64,
+ "learning_rate": 3.858695652173913e-06,
+ "loss": 0.7148,
+ "step": 495
+ },
+ {
+ "epoch": 1.65,
+ "learning_rate": 3.853260869565218e-06,
+ "loss": 0.6997,
+ "step": 496
+ },
+ {
+ "epoch": 1.65,
+ "learning_rate": 3.847826086956522e-06,
+ "loss": 0.71,
+ "step": 497
+ },
+ {
+ "epoch": 1.65,
+ "learning_rate": 3.842391304347826e-06,
+ "loss": 0.6852,
+ "step": 498
+ },
+ {
+ "epoch": 1.66,
+ "learning_rate": 3.83695652173913e-06,
+ "loss": 0.6968,
+ "step": 499
+ },
+ {
+ "epoch": 1.66,
+ "learning_rate": 3.831521739130435e-06,
+ "loss": 0.6925,
+ "step": 500
+ },
+ {
+ "epoch": 1.66,
+ "learning_rate": 3.826086956521739e-06,
+ "loss": 0.717,
+ "step": 501
+ },
+ {
+ "epoch": 1.67,
+ "learning_rate": 3.820652173913044e-06,
+ "loss": 0.6857,
+ "step": 502
+ },
+ {
+ "epoch": 1.67,
+ "learning_rate": 3.815217391304348e-06,
+ "loss": 0.6861,
+ "step": 503
+ },
+ {
+ "epoch": 1.67,
+ "learning_rate": 3.809782608695652e-06,
+ "loss": 0.7046,
+ "step": 504
+ },
+ {
+ "epoch": 1.67,
+ "learning_rate": 3.804347826086957e-06,
+ "loss": 0.6928,
+ "step": 505
+ },
+ {
+ "epoch": 1.68,
+ "learning_rate": 3.798913043478261e-06,
+ "loss": 0.6995,
+ "step": 506
+ },
+ {
+ "epoch": 1.68,
+ "learning_rate": 3.7934782608695654e-06,
+ "loss": 0.7215,
+ "step": 507
+ },
+ {
+ "epoch": 1.68,
+ "learning_rate": 3.7880434782608696e-06,
+ "loss": 0.6872,
+ "step": 508
+ },
+ {
+ "epoch": 1.69,
+ "learning_rate": 3.782608695652174e-06,
+ "loss": 0.7018,
+ "step": 509
+ },
+ {
+ "epoch": 1.69,
+ "learning_rate": 3.7771739130434784e-06,
+ "loss": 0.702,
+ "step": 510
+ },
+ {
+ "epoch": 1.69,
+ "learning_rate": 3.771739130434783e-06,
+ "loss": 0.6838,
+ "step": 511
+ },
+ {
+ "epoch": 1.7,
+ "learning_rate": 3.7663043478260867e-06,
+ "loss": 0.7072,
+ "step": 512
+ },
+ {
+ "epoch": 1.7,
+ "learning_rate": 3.7608695652173913e-06,
+ "loss": 0.7073,
+ "step": 513
+ },
+ {
+ "epoch": 1.7,
+ "learning_rate": 3.7554347826086955e-06,
+ "loss": 0.7026,
+ "step": 514
+ },
+ {
+ "epoch": 1.71,
+ "learning_rate": 3.75e-06,
+ "loss": 0.7,
+ "step": 515
+ },
+ {
+ "epoch": 1.71,
+ "learning_rate": 3.7445652173913047e-06,
+ "loss": 0.7,
+ "step": 516
+ },
+ {
+ "epoch": 1.71,
+ "learning_rate": 3.739130434782609e-06,
+ "loss": 0.6895,
+ "step": 517
+ },
+ {
+ "epoch": 1.72,
+ "learning_rate": 3.7336956521739135e-06,
+ "loss": 0.704,
+ "step": 518
+ },
+ {
+ "epoch": 1.72,
+ "learning_rate": 3.7282608695652172e-06,
+ "loss": 0.7022,
+ "step": 519
+ },
+ {
+ "epoch": 1.72,
+ "learning_rate": 3.722826086956522e-06,
+ "loss": 0.7192,
+ "step": 520
+ },
+ {
+ "epoch": 1.73,
+ "learning_rate": 3.717391304347826e-06,
+ "loss": 0.6944,
+ "step": 521
+ },
+ {
+ "epoch": 1.73,
+ "learning_rate": 3.7119565217391306e-06,
+ "loss": 0.6886,
+ "step": 522
+ },
+ {
+ "epoch": 1.73,
+ "learning_rate": 3.706521739130435e-06,
+ "loss": 0.694,
+ "step": 523
+ },
+ {
+ "epoch": 1.74,
+ "learning_rate": 3.7010869565217394e-06,
+ "loss": 0.7102,
+ "step": 524
+ },
+ {
+ "epoch": 1.74,
+ "learning_rate": 3.695652173913043e-06,
+ "loss": 0.7203,
+ "step": 525
+ },
+ {
+ "epoch": 1.74,
+ "learning_rate": 3.690217391304348e-06,
+ "loss": 0.6784,
+ "step": 526
+ },
+ {
+ "epoch": 1.75,
+ "learning_rate": 3.684782608695652e-06,
+ "loss": 0.6962,
+ "step": 527
+ },
+ {
+ "epoch": 1.75,
+ "learning_rate": 3.6793478260869565e-06,
+ "loss": 0.6813,
+ "step": 528
+ },
+ {
+ "epoch": 1.75,
+ "learning_rate": 3.673913043478261e-06,
+ "loss": 0.6897,
+ "step": 529
+ },
+ {
+ "epoch": 1.76,
+ "learning_rate": 3.6684782608695653e-06,
+ "loss": 0.7197,
+ "step": 530
+ },
+ {
+ "epoch": 1.76,
+ "learning_rate": 3.66304347826087e-06,
+ "loss": 0.6985,
+ "step": 531
+ },
+ {
+ "epoch": 1.76,
+ "learning_rate": 3.657608695652174e-06,
+ "loss": 0.6807,
+ "step": 532
+ },
+ {
+ "epoch": 1.77,
+ "learning_rate": 3.6521739130434787e-06,
+ "loss": 0.7106,
+ "step": 533
+ },
+ {
+ "epoch": 1.77,
+ "learning_rate": 3.6467391304347825e-06,
+ "loss": 0.6843,
+ "step": 534
+ },
+ {
+ "epoch": 1.77,
+ "learning_rate": 3.641304347826087e-06,
+ "loss": 0.6941,
+ "step": 535
+ },
+ {
+ "epoch": 1.78,
+ "learning_rate": 3.6358695652173912e-06,
+ "loss": 0.6739,
+ "step": 536
+ },
+ {
+ "epoch": 1.78,
+ "learning_rate": 3.630434782608696e-06,
+ "loss": 0.7088,
+ "step": 537
+ },
+ {
+ "epoch": 1.78,
+ "learning_rate": 3.625e-06,
+ "loss": 0.7152,
+ "step": 538
+ },
+ {
+ "epoch": 1.79,
+ "learning_rate": 3.6195652173913046e-06,
+ "loss": 0.6972,
+ "step": 539
+ },
+ {
+ "epoch": 1.79,
+ "learning_rate": 3.6141304347826084e-06,
+ "loss": 0.6745,
+ "step": 540
+ },
+ {
+ "epoch": 1.79,
+ "learning_rate": 3.608695652173913e-06,
+ "loss": 0.7125,
+ "step": 541
+ },
+ {
+ "epoch": 1.8,
+ "learning_rate": 3.603260869565217e-06,
+ "loss": 0.7072,
+ "step": 542
+ },
+ {
+ "epoch": 1.8,
+ "learning_rate": 3.5978260869565218e-06,
+ "loss": 0.6903,
+ "step": 543
+ },
+ {
+ "epoch": 1.8,
+ "learning_rate": 3.5923913043478264e-06,
+ "loss": 0.6906,
+ "step": 544
+ },
+ {
+ "epoch": 1.81,
+ "learning_rate": 3.5869565217391305e-06,
+ "loss": 0.7101,
+ "step": 545
+ },
+ {
+ "epoch": 1.81,
+ "learning_rate": 3.581521739130435e-06,
+ "loss": 0.7097,
+ "step": 546
+ },
+ {
+ "epoch": 1.81,
+ "learning_rate": 3.5760869565217393e-06,
+ "loss": 0.7042,
+ "step": 547
+ },
+ {
+ "epoch": 1.82,
+ "learning_rate": 3.570652173913044e-06,
+ "loss": 0.7147,
+ "step": 548
+ },
+ {
+ "epoch": 1.82,
+ "learning_rate": 3.5652173913043477e-06,
+ "loss": 0.7039,
+ "step": 549
+ },
+ {
+ "epoch": 1.82,
+ "eval_loss": 0.7584934830665588,
+ "eval_runtime": 17.6166,
+ "eval_samples_per_second": 129.537,
+ "eval_steps_per_second": 5.449,
+ "step": 549
+ },
+ {
+ "epoch": 1.82,
+ "learning_rate": 3.5597826086956523e-06,
+ "loss": 0.6864,
+ "step": 550
+ },
+ {
+ "epoch": 1.83,
+ "learning_rate": 3.5543478260869565e-06,
+ "loss": 0.6858,
+ "step": 551
+ },
+ {
+ "epoch": 1.83,
+ "learning_rate": 3.548913043478261e-06,
+ "loss": 0.7076,
+ "step": 552
+ },
+ {
+ "epoch": 1.83,
+ "learning_rate": 3.5434782608695652e-06,
+ "loss": 0.6965,
+ "step": 553
+ },
+ {
+ "epoch": 1.84,
+ "learning_rate": 3.53804347826087e-06,
+ "loss": 0.6954,
+ "step": 554
+ },
+ {
+ "epoch": 1.84,
+ "learning_rate": 3.5326086956521736e-06,
+ "loss": 0.6934,
+ "step": 555
+ },
+ {
+ "epoch": 1.84,
+ "learning_rate": 3.527173913043478e-06,
+ "loss": 0.7159,
+ "step": 556
+ },
+ {
+ "epoch": 1.85,
+ "learning_rate": 3.521739130434783e-06,
+ "loss": 0.6946,
+ "step": 557
+ },
+ {
+ "epoch": 1.85,
+ "learning_rate": 3.516304347826087e-06,
+ "loss": 0.6866,
+ "step": 558
+ },
+ {
+ "epoch": 1.85,
+ "learning_rate": 3.5108695652173916e-06,
+ "loss": 0.704,
+ "step": 559
+ },
+ {
+ "epoch": 1.86,
+ "learning_rate": 3.5054347826086958e-06,
+ "loss": 0.6896,
+ "step": 560
+ },
+ {
+ "epoch": 1.86,
+ "learning_rate": 3.5000000000000004e-06,
+ "loss": 0.6947,
+ "step": 561
+ },
+ {
+ "epoch": 1.86,
+ "learning_rate": 3.494565217391304e-06,
+ "loss": 0.6993,
+ "step": 562
+ },
+ {
+ "epoch": 1.87,
+ "learning_rate": 3.489130434782609e-06,
+ "loss": 0.7046,
+ "step": 563
+ },
+ {
+ "epoch": 1.87,
+ "learning_rate": 3.483695652173913e-06,
+ "loss": 0.7037,
+ "step": 564
+ },
+ {
+ "epoch": 1.87,
+ "learning_rate": 3.4782608695652175e-06,
+ "loss": 0.7024,
+ "step": 565
+ },
+ {
+ "epoch": 1.88,
+ "learning_rate": 3.4728260869565217e-06,
+ "loss": 0.7186,
+ "step": 566
+ },
+ {
+ "epoch": 1.88,
+ "learning_rate": 3.4673913043478263e-06,
+ "loss": 0.6894,
+ "step": 567
+ },
+ {
+ "epoch": 1.88,
+ "learning_rate": 3.46195652173913e-06,
+ "loss": 0.7006,
+ "step": 568
+ },
+ {
+ "epoch": 1.89,
+ "learning_rate": 3.456521739130435e-06,
+ "loss": 0.6949,
+ "step": 569
+ },
+ {
+ "epoch": 1.89,
+ "learning_rate": 3.451086956521739e-06,
+ "loss": 0.6846,
+ "step": 570
+ },
+ {
+ "epoch": 1.89,
+ "learning_rate": 3.4456521739130434e-06,
+ "loss": 0.7118,
+ "step": 571
+ },
+ {
+ "epoch": 1.9,
+ "learning_rate": 3.440217391304348e-06,
+ "loss": 0.6945,
+ "step": 572
+ },
+ {
+ "epoch": 1.9,
+ "learning_rate": 3.4347826086956522e-06,
+ "loss": 0.6869,
+ "step": 573
+ },
+ {
+ "epoch": 1.9,
+ "learning_rate": 3.429347826086957e-06,
+ "loss": 0.7181,
+ "step": 574
+ },
+ {
+ "epoch": 1.91,
+ "learning_rate": 3.423913043478261e-06,
+ "loss": 0.7133,
+ "step": 575
+ },
+ {
+ "epoch": 1.91,
+ "learning_rate": 3.4184782608695656e-06,
+ "loss": 0.7068,
+ "step": 576
+ },
+ {
+ "epoch": 1.91,
+ "learning_rate": 3.4130434782608694e-06,
+ "loss": 0.6977,
+ "step": 577
+ },
+ {
+ "epoch": 1.92,
+ "learning_rate": 3.407608695652174e-06,
+ "loss": 0.6988,
+ "step": 578
+ },
+ {
+ "epoch": 1.92,
+ "learning_rate": 3.402173913043478e-06,
+ "loss": 0.7052,
+ "step": 579
+ },
+ {
+ "epoch": 1.92,
+ "learning_rate": 3.3967391304347827e-06,
+ "loss": 0.7249,
+ "step": 580
+ },
+ {
+ "epoch": 1.93,
+ "learning_rate": 3.391304347826087e-06,
+ "loss": 0.7198,
+ "step": 581
+ },
+ {
+ "epoch": 1.93,
+ "learning_rate": 3.3858695652173915e-06,
+ "loss": 0.6859,
+ "step": 582
+ },
+ {
+ "epoch": 1.93,
+ "learning_rate": 3.3804347826086953e-06,
+ "loss": 0.683,
+ "step": 583
+ },
+ {
+ "epoch": 1.94,
+ "learning_rate": 3.375e-06,
+ "loss": 0.6997,
+ "step": 584
+ },
+ {
+ "epoch": 1.94,
+ "learning_rate": 3.369565217391305e-06,
+ "loss": 0.6758,
+ "step": 585
+ },
+ {
+ "epoch": 1.94,
+ "learning_rate": 3.3641304347826087e-06,
+ "loss": 0.7062,
+ "step": 586
+ },
+ {
+ "epoch": 1.95,
+ "learning_rate": 3.3586956521739133e-06,
+ "loss": 0.6972,
+ "step": 587
+ },
+ {
+ "epoch": 1.95,
+ "learning_rate": 3.3532608695652174e-06,
+ "loss": 0.7078,
+ "step": 588
+ },
+ {
+ "epoch": 1.95,
+ "learning_rate": 3.347826086956522e-06,
+ "loss": 0.7176,
+ "step": 589
+ },
+ {
+ "epoch": 1.96,
+ "learning_rate": 3.3423913043478262e-06,
+ "loss": 0.6931,
+ "step": 590
+ },
+ {
+ "epoch": 1.96,
+ "learning_rate": 3.336956521739131e-06,
+ "loss": 0.6981,
+ "step": 591
+ },
+ {
+ "epoch": 1.96,
+ "learning_rate": 3.3315217391304346e-06,
+ "loss": 0.7222,
+ "step": 592
+ },
+ {
+ "epoch": 1.97,
+ "learning_rate": 3.326086956521739e-06,
+ "loss": 0.6859,
+ "step": 593
+ },
+ {
+ "epoch": 1.97,
+ "learning_rate": 3.3206521739130434e-06,
+ "loss": 0.6878,
+ "step": 594
+ },
+ {
+ "epoch": 1.97,
+ "learning_rate": 3.315217391304348e-06,
+ "loss": 0.6986,
+ "step": 595
+ },
+ {
+ "epoch": 1.98,
+ "learning_rate": 3.309782608695652e-06,
+ "loss": 0.6882,
+ "step": 596
+ },
+ {
+ "epoch": 1.98,
+ "learning_rate": 3.3043478260869567e-06,
+ "loss": 0.7049,
+ "step": 597
+ },
+ {
+ "epoch": 1.98,
+ "learning_rate": 3.2989130434782613e-06,
+ "loss": 0.6961,
+ "step": 598
+ },
+ {
+ "epoch": 1.99,
+ "learning_rate": 3.293478260869565e-06,
+ "loss": 0.6907,
+ "step": 599
+ },
+ {
+ "epoch": 1.99,
+ "learning_rate": 3.28804347826087e-06,
+ "loss": 0.7177,
+ "step": 600
+ },
+ {
+ "epoch": 1.99,
+ "learning_rate": 3.282608695652174e-06,
+ "loss": 0.6917,
+ "step": 601
+ },
+ {
+ "epoch": 2.0,
+ "learning_rate": 3.2771739130434785e-06,
+ "loss": 0.6966,
+ "step": 602
+ },
+ {
+ "epoch": 2.0,
+ "learning_rate": 3.2717391304347827e-06,
+ "loss": 0.7036,
+ "step": 603
+ },
+ {
+ "epoch": 2.0,
+ "learning_rate": 3.2663043478260873e-06,
+ "loss": 0.6433,
+ "step": 604
+ },
+ {
+ "epoch": 2.01,
+ "learning_rate": 3.260869565217391e-06,
+ "loss": 0.6296,
+ "step": 605
+ },
+ {
+ "epoch": 2.01,
+ "learning_rate": 3.255434782608696e-06,
+ "loss": 0.6323,
+ "step": 606
+ },
+ {
+ "epoch": 2.01,
+ "learning_rate": 3.25e-06,
+ "loss": 0.64,
+ "step": 607
+ },
+ {
+ "epoch": 2.02,
+ "learning_rate": 3.2445652173913044e-06,
+ "loss": 0.6384,
+ "step": 608
+ },
+ {
+ "epoch": 2.02,
+ "learning_rate": 3.2391304347826086e-06,
+ "loss": 0.6355,
+ "step": 609
+ },
+ {
+ "epoch": 2.02,
+ "learning_rate": 3.233695652173913e-06,
+ "loss": 0.6299,
+ "step": 610
+ },
+ {
+ "epoch": 2.02,
+ "eval_loss": 0.7699483036994934,
+ "eval_runtime": 17.6263,
+ "eval_samples_per_second": 129.466,
+ "eval_steps_per_second": 5.446,
+ "step": 610
+ },
+ {
+ "epoch": 2.03,
+ "learning_rate": 3.2282608695652174e-06,
+ "loss": 0.6174,
+ "step": 611
+ },
+ {
+ "epoch": 2.03,
+ "learning_rate": 3.222826086956522e-06,
+ "loss": 0.6296,
+ "step": 612
+ },
+ {
+ "epoch": 2.03,
+ "learning_rate": 3.2173913043478266e-06,
+ "loss": 0.6246,
+ "step": 613
+ },
+ {
+ "epoch": 2.04,
+ "learning_rate": 3.2119565217391303e-06,
+ "loss": 0.6361,
+ "step": 614
+ },
+ {
+ "epoch": 2.04,
+ "learning_rate": 3.206521739130435e-06,
+ "loss": 0.6313,
+ "step": 615
+ },
+ {
+ "epoch": 2.04,
+ "learning_rate": 3.201086956521739e-06,
+ "loss": 0.638,
+ "step": 616
+ },
+ {
+ "epoch": 2.05,
+ "learning_rate": 3.1956521739130437e-06,
+ "loss": 0.6223,
+ "step": 617
+ },
+ {
+ "epoch": 2.05,
+ "learning_rate": 3.190217391304348e-06,
+ "loss": 0.6325,
+ "step": 618
+ },
+ {
+ "epoch": 2.05,
+ "learning_rate": 3.1847826086956525e-06,
+ "loss": 0.6369,
+ "step": 619
+ },
+ {
+ "epoch": 2.06,
+ "learning_rate": 3.1793478260869562e-06,
+ "loss": 0.6295,
+ "step": 620
+ },
+ {
+ "epoch": 2.06,
+ "learning_rate": 3.173913043478261e-06,
+ "loss": 0.6335,
+ "step": 621
+ },
+ {
+ "epoch": 2.06,
+ "learning_rate": 3.168478260869565e-06,
+ "loss": 0.6275,
+ "step": 622
+ },
+ {
+ "epoch": 2.07,
+ "learning_rate": 3.1630434782608696e-06,
+ "loss": 0.6226,
+ "step": 623
+ },
+ {
+ "epoch": 2.07,
+ "learning_rate": 3.157608695652174e-06,
+ "loss": 0.6326,
+ "step": 624
+ },
+ {
+ "epoch": 2.07,
+ "learning_rate": 3.1521739130434784e-06,
+ "loss": 0.6292,
+ "step": 625
+ },
+ {
+ "epoch": 2.08,
+ "learning_rate": 3.146739130434783e-06,
+ "loss": 0.6375,
+ "step": 626
+ },
+ {
+ "epoch": 2.08,
+ "learning_rate": 3.141304347826087e-06,
+ "loss": 0.6404,
+ "step": 627
+ },
+ {
+ "epoch": 2.08,
+ "learning_rate": 3.135869565217392e-06,
+ "loss": 0.6348,
+ "step": 628
+ },
+ {
+ "epoch": 2.09,
+ "learning_rate": 3.1304347826086955e-06,
+ "loss": 0.6341,
+ "step": 629
+ },
+ {
+ "epoch": 2.09,
+ "learning_rate": 3.125e-06,
+ "loss": 0.6085,
+ "step": 630
+ },
+ {
+ "epoch": 2.09,
+ "learning_rate": 3.1195652173913043e-06,
+ "loss": 0.6096,
+ "step": 631
+ },
+ {
+ "epoch": 2.1,
+ "learning_rate": 3.114130434782609e-06,
+ "loss": 0.6322,
+ "step": 632
+ },
+ {
+ "epoch": 2.1,
+ "learning_rate": 3.108695652173913e-06,
+ "loss": 0.6418,
+ "step": 633
+ },
+ {
+ "epoch": 2.1,
+ "learning_rate": 3.1032608695652177e-06,
+ "loss": 0.6248,
+ "step": 634
+ },
+ {
+ "epoch": 2.11,
+ "learning_rate": 3.0978260869565215e-06,
+ "loss": 0.631,
+ "step": 635
+ },
+ {
+ "epoch": 2.11,
+ "learning_rate": 3.092391304347826e-06,
+ "loss": 0.6203,
+ "step": 636
+ },
+ {
+ "epoch": 2.11,
+ "learning_rate": 3.0869565217391302e-06,
+ "loss": 0.6167,
+ "step": 637
+ },
+ {
+ "epoch": 2.12,
+ "learning_rate": 3.081521739130435e-06,
+ "loss": 0.6372,
+ "step": 638
+ },
+ {
+ "epoch": 2.12,
+ "learning_rate": 3.076086956521739e-06,
+ "loss": 0.6196,
+ "step": 639
+ },
+ {
+ "epoch": 2.12,
+ "learning_rate": 3.0706521739130436e-06,
+ "loss": 0.6187,
+ "step": 640
+ },
+ {
+ "epoch": 2.13,
+ "learning_rate": 3.0652173913043482e-06,
+ "loss": 0.6264,
+ "step": 641
+ },
+ {
+ "epoch": 2.13,
+ "learning_rate": 3.059782608695652e-06,
+ "loss": 0.6285,
+ "step": 642
+ },
+ {
+ "epoch": 2.13,
+ "learning_rate": 3.054347826086957e-06,
+ "loss": 0.6237,
+ "step": 643
+ },
+ {
+ "epoch": 2.14,
+ "learning_rate": 3.0489130434782608e-06,
+ "loss": 0.6242,
+ "step": 644
+ },
+ {
+ "epoch": 2.14,
+ "learning_rate": 3.0434782608695654e-06,
+ "loss": 0.6337,
+ "step": 645
+ },
+ {
+ "epoch": 2.14,
+ "learning_rate": 3.0380434782608696e-06,
+ "loss": 0.635,
+ "step": 646
+ },
+ {
+ "epoch": 2.15,
+ "learning_rate": 3.032608695652174e-06,
+ "loss": 0.6448,
+ "step": 647
+ },
+ {
+ "epoch": 2.15,
+ "learning_rate": 3.0271739130434783e-06,
+ "loss": 0.6375,
+ "step": 648
+ },
+ {
+ "epoch": 2.15,
+ "learning_rate": 3.021739130434783e-06,
+ "loss": 0.6213,
+ "step": 649
+ },
+ {
+ "epoch": 2.16,
+ "learning_rate": 3.0163043478260867e-06,
+ "loss": 0.632,
+ "step": 650
+ },
+ {
+ "epoch": 2.16,
+ "learning_rate": 3.0108695652173913e-06,
+ "loss": 0.6336,
+ "step": 651
+ },
+ {
+ "epoch": 2.16,
+ "learning_rate": 3.0054347826086955e-06,
+ "loss": 0.6361,
+ "step": 652
+ },
+ {
+ "epoch": 2.17,
+ "learning_rate": 3e-06,
+ "loss": 0.6177,
+ "step": 653
+ },
+ {
+ "epoch": 2.17,
+ "learning_rate": 2.9945652173913043e-06,
+ "loss": 0.6497,
+ "step": 654
+ },
+ {
+ "epoch": 2.17,
+ "learning_rate": 2.989130434782609e-06,
+ "loss": 0.621,
+ "step": 655
+ },
+ {
+ "epoch": 2.18,
+ "learning_rate": 2.983695652173913e-06,
+ "loss": 0.6304,
+ "step": 656
+ },
+ {
+ "epoch": 2.18,
+ "learning_rate": 2.9782608695652172e-06,
+ "loss": 0.6386,
+ "step": 657
+ },
+ {
+ "epoch": 2.18,
+ "learning_rate": 2.972826086956522e-06,
+ "loss": 0.6339,
+ "step": 658
+ },
+ {
+ "epoch": 2.19,
+ "learning_rate": 2.967391304347826e-06,
+ "loss": 0.6371,
+ "step": 659
+ },
+ {
+ "epoch": 2.19,
+ "learning_rate": 2.9619565217391306e-06,
+ "loss": 0.6375,
+ "step": 660
+ },
+ {
+ "epoch": 2.19,
+ "learning_rate": 2.956521739130435e-06,
+ "loss": 0.6463,
+ "step": 661
+ },
+ {
+ "epoch": 2.2,
+ "learning_rate": 2.9510869565217394e-06,
+ "loss": 0.6149,
+ "step": 662
+ },
+ {
+ "epoch": 2.2,
+ "learning_rate": 2.9456521739130436e-06,
+ "loss": 0.6465,
+ "step": 663
+ },
+ {
+ "epoch": 2.2,
+ "learning_rate": 2.940217391304348e-06,
+ "loss": 0.6415,
+ "step": 664
+ },
+ {
+ "epoch": 2.21,
+ "learning_rate": 2.9347826086956523e-06,
+ "loss": 0.6055,
+ "step": 665
+ },
+ {
+ "epoch": 2.21,
+ "learning_rate": 2.9293478260869565e-06,
+ "loss": 0.6254,
+ "step": 666
+ },
+ {
+ "epoch": 2.21,
+ "learning_rate": 2.923913043478261e-06,
+ "loss": 0.6116,
+ "step": 667
+ },
+ {
+ "epoch": 2.22,
+ "learning_rate": 2.9184782608695653e-06,
+ "loss": 0.6262,
+ "step": 668
+ },
+ {
+ "epoch": 2.22,
+ "learning_rate": 2.9130434782608695e-06,
+ "loss": 0.6373,
+ "step": 669
+ },
+ {
+ "epoch": 2.22,
+ "learning_rate": 2.907608695652174e-06,
+ "loss": 0.6348,
+ "step": 670
+ },
+ {
+ "epoch": 2.23,
+ "learning_rate": 2.9021739130434783e-06,
+ "loss": 0.6199,
+ "step": 671
+ },
+ {
+ "epoch": 2.23,
+ "eval_loss": 0.7679464221000671,
+ "eval_runtime": 17.626,
+ "eval_samples_per_second": 129.468,
+ "eval_steps_per_second": 5.446,
+ "step": 671
+ },
+ {
+ "epoch": 2.23,
+ "learning_rate": 2.8967391304347824e-06,
+ "loss": 0.6139,
+ "step": 672
+ },
+ {
+ "epoch": 2.23,
+ "learning_rate": 2.891304347826087e-06,
+ "loss": 0.6286,
+ "step": 673
+ },
+ {
+ "epoch": 2.24,
+ "learning_rate": 2.8858695652173916e-06,
+ "loss": 0.652,
+ "step": 674
+ },
+ {
+ "epoch": 2.24,
+ "learning_rate": 2.880434782608696e-06,
+ "loss": 0.6246,
+ "step": 675
+ },
+ {
+ "epoch": 2.24,
+ "learning_rate": 2.875e-06,
+ "loss": 0.6274,
+ "step": 676
+ },
+ {
+ "epoch": 2.25,
+ "learning_rate": 2.8695652173913046e-06,
+ "loss": 0.6589,
+ "step": 677
+ },
+ {
+ "epoch": 2.25,
+ "learning_rate": 2.8641304347826088e-06,
+ "loss": 0.6505,
+ "step": 678
+ },
+ {
+ "epoch": 2.25,
+ "learning_rate": 2.858695652173913e-06,
+ "loss": 0.6239,
+ "step": 679
+ },
+ {
+ "epoch": 2.26,
+ "learning_rate": 2.8532608695652176e-06,
+ "loss": 0.6199,
+ "step": 680
+ },
+ {
+ "epoch": 2.26,
+ "learning_rate": 2.8478260869565217e-06,
+ "loss": 0.6372,
+ "step": 681
+ },
+ {
+ "epoch": 2.26,
+ "learning_rate": 2.842391304347826e-06,
+ "loss": 0.6313,
+ "step": 682
+ },
+ {
+ "epoch": 2.27,
+ "learning_rate": 2.8369565217391305e-06,
+ "loss": 0.6295,
+ "step": 683
+ },
+ {
+ "epoch": 2.27,
+ "learning_rate": 2.8315217391304347e-06,
+ "loss": 0.6319,
+ "step": 684
+ },
+ {
+ "epoch": 2.27,
+ "learning_rate": 2.8260869565217393e-06,
+ "loss": 0.6432,
+ "step": 685
+ },
+ {
+ "epoch": 2.28,
+ "learning_rate": 2.8206521739130435e-06,
+ "loss": 0.6367,
+ "step": 686
+ },
+ {
+ "epoch": 2.28,
+ "learning_rate": 2.8152173913043477e-06,
+ "loss": 0.6343,
+ "step": 687
+ },
+ {
+ "epoch": 2.28,
+ "learning_rate": 2.8097826086956523e-06,
+ "loss": 0.6238,
+ "step": 688
+ },
+ {
+ "epoch": 2.29,
+ "learning_rate": 2.804347826086957e-06,
+ "loss": 0.6409,
+ "step": 689
+ },
+ {
+ "epoch": 2.29,
+ "learning_rate": 2.798913043478261e-06,
+ "loss": 0.6267,
+ "step": 690
+ },
+ {
+ "epoch": 2.29,
+ "learning_rate": 2.7934782608695652e-06,
+ "loss": 0.6319,
+ "step": 691
+ },
+ {
+ "epoch": 2.3,
+ "learning_rate": 2.78804347826087e-06,
+ "loss": 0.6363,
+ "step": 692
+ },
+ {
+ "epoch": 2.3,
+ "learning_rate": 2.782608695652174e-06,
+ "loss": 0.6341,
+ "step": 693
+ },
+ {
+ "epoch": 2.3,
+ "learning_rate": 2.777173913043478e-06,
+ "loss": 0.6347,
+ "step": 694
+ },
+ {
+ "epoch": 2.31,
+ "learning_rate": 2.771739130434783e-06,
+ "loss": 0.6392,
+ "step": 695
+ },
+ {
+ "epoch": 2.31,
+ "learning_rate": 2.766304347826087e-06,
+ "loss": 0.6529,
+ "step": 696
+ },
+ {
+ "epoch": 2.31,
+ "learning_rate": 2.760869565217391e-06,
+ "loss": 0.6399,
+ "step": 697
+ },
+ {
+ "epoch": 2.32,
+ "learning_rate": 2.7554347826086957e-06,
+ "loss": 0.6168,
+ "step": 698
+ },
+ {
+ "epoch": 2.32,
+ "learning_rate": 2.75e-06,
+ "loss": 0.6356,
+ "step": 699
+ },
+ {
+ "epoch": 2.32,
+ "learning_rate": 2.744565217391304e-06,
+ "loss": 0.6543,
+ "step": 700
+ },
+ {
+ "epoch": 2.33,
+ "learning_rate": 2.7391304347826087e-06,
+ "loss": 0.6373,
+ "step": 701
+ },
+ {
+ "epoch": 2.33,
+ "learning_rate": 2.7336956521739133e-06,
+ "loss": 0.6348,
+ "step": 702
+ },
+ {
+ "epoch": 2.33,
+ "learning_rate": 2.7282608695652175e-06,
+ "loss": 0.6327,
+ "step": 703
+ },
+ {
+ "epoch": 2.33,
+ "learning_rate": 2.722826086956522e-06,
+ "loss": 0.6171,
+ "step": 704
+ },
+ {
+ "epoch": 2.34,
+ "learning_rate": 2.7173913043478263e-06,
+ "loss": 0.6216,
+ "step": 705
+ },
+ {
+ "epoch": 2.34,
+ "learning_rate": 2.7119565217391305e-06,
+ "loss": 0.6314,
+ "step": 706
+ },
+ {
+ "epoch": 2.34,
+ "learning_rate": 2.706521739130435e-06,
+ "loss": 0.6293,
+ "step": 707
+ },
+ {
+ "epoch": 2.35,
+ "learning_rate": 2.7010869565217392e-06,
+ "loss": 0.6322,
+ "step": 708
+ },
+ {
+ "epoch": 2.35,
+ "learning_rate": 2.6956521739130434e-06,
+ "loss": 0.6474,
+ "step": 709
+ },
+ {
+ "epoch": 2.35,
+ "learning_rate": 2.690217391304348e-06,
+ "loss": 0.6381,
+ "step": 710
+ },
+ {
+ "epoch": 2.36,
+ "learning_rate": 2.684782608695652e-06,
+ "loss": 0.6139,
+ "step": 711
+ },
+ {
+ "epoch": 2.36,
+ "learning_rate": 2.6793478260869564e-06,
+ "loss": 0.6432,
+ "step": 712
+ },
+ {
+ "epoch": 2.36,
+ "learning_rate": 2.673913043478261e-06,
+ "loss": 0.6316,
+ "step": 713
+ },
+ {
+ "epoch": 2.37,
+ "learning_rate": 2.668478260869565e-06,
+ "loss": 0.6414,
+ "step": 714
+ },
+ {
+ "epoch": 2.37,
+ "learning_rate": 2.6630434782608693e-06,
+ "loss": 0.637,
+ "step": 715
+ },
+ {
+ "epoch": 2.37,
+ "learning_rate": 2.657608695652174e-06,
+ "loss": 0.6375,
+ "step": 716
+ },
+ {
+ "epoch": 2.38,
+ "learning_rate": 2.6521739130434785e-06,
+ "loss": 0.6324,
+ "step": 717
+ },
+ {
+ "epoch": 2.38,
+ "learning_rate": 2.6467391304347827e-06,
+ "loss": 0.6417,
+ "step": 718
+ },
+ {
+ "epoch": 2.38,
+ "learning_rate": 2.641304347826087e-06,
+ "loss": 0.6176,
+ "step": 719
+ },
+ {
+ "epoch": 2.39,
+ "learning_rate": 2.6358695652173915e-06,
+ "loss": 0.6438,
+ "step": 720
+ },
+ {
+ "epoch": 2.39,
+ "learning_rate": 2.6304347826086957e-06,
+ "loss": 0.6375,
+ "step": 721
+ },
+ {
+ "epoch": 2.39,
+ "learning_rate": 2.6250000000000003e-06,
+ "loss": 0.6248,
+ "step": 722
+ },
+ {
+ "epoch": 2.4,
+ "learning_rate": 2.6195652173913045e-06,
+ "loss": 0.6337,
+ "step": 723
+ },
+ {
+ "epoch": 2.4,
+ "learning_rate": 2.6141304347826086e-06,
+ "loss": 0.6517,
+ "step": 724
+ },
+ {
+ "epoch": 2.4,
+ "learning_rate": 2.6086956521739132e-06,
+ "loss": 0.6313,
+ "step": 725
+ },
+ {
+ "epoch": 2.41,
+ "learning_rate": 2.6032608695652174e-06,
+ "loss": 0.6425,
+ "step": 726
+ },
+ {
+ "epoch": 2.41,
+ "learning_rate": 2.5978260869565216e-06,
+ "loss": 0.6203,
+ "step": 727
+ },
+ {
+ "epoch": 2.41,
+ "learning_rate": 2.592391304347826e-06,
+ "loss": 0.6208,
+ "step": 728
+ },
+ {
+ "epoch": 2.42,
+ "learning_rate": 2.586956521739131e-06,
+ "loss": 0.6332,
+ "step": 729
+ },
+ {
+ "epoch": 2.42,
+ "learning_rate": 2.581521739130435e-06,
+ "loss": 0.6229,
+ "step": 730
+ },
+ {
+ "epoch": 2.42,
+ "learning_rate": 2.576086956521739e-06,
+ "loss": 0.6412,
+ "step": 731
+ },
+ {
+ "epoch": 2.43,
+ "learning_rate": 2.5706521739130438e-06,
+ "loss": 0.6498,
+ "step": 732
+ },
+ {
+ "epoch": 2.43,
+ "eval_loss": 0.768299400806427,
+ "eval_runtime": 17.6195,
+ "eval_samples_per_second": 129.516,
+ "eval_steps_per_second": 5.449,
+ "step": 732
+ },
+ {
+ "epoch": 2.43,
+ "learning_rate": 2.565217391304348e-06,
+ "loss": 0.6453,
+ "step": 733
+ },
+ {
+ "epoch": 2.43,
+ "learning_rate": 2.559782608695652e-06,
+ "loss": 0.6359,
+ "step": 734
+ },
+ {
+ "epoch": 2.44,
+ "learning_rate": 2.5543478260869567e-06,
+ "loss": 0.6303,
+ "step": 735
+ },
+ {
+ "epoch": 2.44,
+ "learning_rate": 2.548913043478261e-06,
+ "loss": 0.6458,
+ "step": 736
+ },
+ {
+ "epoch": 2.44,
+ "learning_rate": 2.543478260869565e-06,
+ "loss": 0.6565,
+ "step": 737
+ },
+ {
+ "epoch": 2.45,
+ "learning_rate": 2.5380434782608697e-06,
+ "loss": 0.6202,
+ "step": 738
+ },
+ {
+ "epoch": 2.45,
+ "learning_rate": 2.532608695652174e-06,
+ "loss": 0.6392,
+ "step": 739
+ },
+ {
+ "epoch": 2.45,
+ "learning_rate": 2.527173913043478e-06,
+ "loss": 0.6466,
+ "step": 740
+ },
+ {
+ "epoch": 2.46,
+ "learning_rate": 2.5217391304347826e-06,
+ "loss": 0.62,
+ "step": 741
+ },
+ {
+ "epoch": 2.46,
+ "learning_rate": 2.516304347826087e-06,
+ "loss": 0.6353,
+ "step": 742
+ },
+ {
+ "epoch": 2.46,
+ "learning_rate": 2.5108695652173914e-06,
+ "loss": 0.637,
+ "step": 743
+ },
+ {
+ "epoch": 2.47,
+ "learning_rate": 2.505434782608696e-06,
+ "loss": 0.63,
+ "step": 744
+ },
+ {
+ "epoch": 2.47,
+ "learning_rate": 2.5e-06,
+ "loss": 0.6419,
+ "step": 745
+ },
+ {
+ "epoch": 2.47,
+ "learning_rate": 2.4945652173913044e-06,
+ "loss": 0.6419,
+ "step": 746
+ },
+ {
+ "epoch": 2.48,
+ "learning_rate": 2.489130434782609e-06,
+ "loss": 0.6446,
+ "step": 747
+ },
+ {
+ "epoch": 2.48,
+ "learning_rate": 2.483695652173913e-06,
+ "loss": 0.6446,
+ "step": 748
+ },
+ {
+ "epoch": 2.48,
+ "learning_rate": 2.4782608695652173e-06,
+ "loss": 0.6272,
+ "step": 749
+ },
+ {
+ "epoch": 2.49,
+ "learning_rate": 2.472826086956522e-06,
+ "loss": 0.6417,
+ "step": 750
+ },
+ {
+ "epoch": 2.49,
+ "learning_rate": 2.467391304347826e-06,
+ "loss": 0.6328,
+ "step": 751
+ },
+ {
+ "epoch": 2.49,
+ "learning_rate": 2.4619565217391303e-06,
+ "loss": 0.6426,
+ "step": 752
+ },
+ {
+ "epoch": 2.5,
+ "learning_rate": 2.456521739130435e-06,
+ "loss": 0.6489,
+ "step": 753
+ },
+ {
+ "epoch": 2.5,
+ "learning_rate": 2.451086956521739e-06,
+ "loss": 0.6299,
+ "step": 754
+ },
+ {
+ "epoch": 2.5,
+ "learning_rate": 2.4456521739130433e-06,
+ "loss": 0.6457,
+ "step": 755
+ },
+ {
+ "epoch": 2.51,
+ "learning_rate": 2.440217391304348e-06,
+ "loss": 0.6306,
+ "step": 756
+ },
+ {
+ "epoch": 2.51,
+ "learning_rate": 2.4347826086956525e-06,
+ "loss": 0.639,
+ "step": 757
+ },
+ {
+ "epoch": 2.51,
+ "learning_rate": 2.4293478260869566e-06,
+ "loss": 0.6408,
+ "step": 758
+ },
+ {
+ "epoch": 2.52,
+ "learning_rate": 2.423913043478261e-06,
+ "loss": 0.64,
+ "step": 759
+ },
+ {
+ "epoch": 2.52,
+ "learning_rate": 2.4184782608695654e-06,
+ "loss": 0.6249,
+ "step": 760
+ },
+ {
+ "epoch": 2.52,
+ "learning_rate": 2.4130434782608696e-06,
+ "loss": 0.6382,
+ "step": 761
+ },
+ {
+ "epoch": 2.53,
+ "learning_rate": 2.407608695652174e-06,
+ "loss": 0.6328,
+ "step": 762
+ },
+ {
+ "epoch": 2.53,
+ "learning_rate": 2.4021739130434784e-06,
+ "loss": 0.6492,
+ "step": 763
+ },
+ {
+ "epoch": 2.53,
+ "learning_rate": 2.3967391304347826e-06,
+ "loss": 0.6252,
+ "step": 764
+ },
+ {
+ "epoch": 2.54,
+ "learning_rate": 2.391304347826087e-06,
+ "loss": 0.6385,
+ "step": 765
+ },
+ {
+ "epoch": 2.54,
+ "learning_rate": 2.3858695652173913e-06,
+ "loss": 0.6249,
+ "step": 766
+ },
+ {
+ "epoch": 2.54,
+ "learning_rate": 2.3804347826086955e-06,
+ "loss": 0.6272,
+ "step": 767
+ },
+ {
+ "epoch": 2.55,
+ "learning_rate": 2.375e-06,
+ "loss": 0.6533,
+ "step": 768
+ },
+ {
+ "epoch": 2.55,
+ "learning_rate": 2.3695652173913043e-06,
+ "loss": 0.6244,
+ "step": 769
+ },
+ {
+ "epoch": 2.55,
+ "learning_rate": 2.3641304347826085e-06,
+ "loss": 0.6553,
+ "step": 770
+ },
+ {
+ "epoch": 2.56,
+ "learning_rate": 2.358695652173913e-06,
+ "loss": 0.6257,
+ "step": 771
+ },
+ {
+ "epoch": 2.56,
+ "learning_rate": 2.3532608695652177e-06,
+ "loss": 0.6266,
+ "step": 772
+ },
+ {
+ "epoch": 2.56,
+ "learning_rate": 2.347826086956522e-06,
+ "loss": 0.6351,
+ "step": 773
+ },
+ {
+ "epoch": 2.57,
+ "learning_rate": 2.342391304347826e-06,
+ "loss": 0.6399,
+ "step": 774
+ },
+ {
+ "epoch": 2.57,
+ "learning_rate": 2.3369565217391307e-06,
+ "loss": 0.6195,
+ "step": 775
+ },
+ {
+ "epoch": 2.57,
+ "learning_rate": 2.331521739130435e-06,
+ "loss": 0.6235,
+ "step": 776
+ },
+ {
+ "epoch": 2.58,
+ "learning_rate": 2.326086956521739e-06,
+ "loss": 0.6235,
+ "step": 777
+ },
+ {
+ "epoch": 2.58,
+ "learning_rate": 2.3206521739130436e-06,
+ "loss": 0.6143,
+ "step": 778
+ },
+ {
+ "epoch": 2.58,
+ "learning_rate": 2.315217391304348e-06,
+ "loss": 0.6099,
+ "step": 779
+ },
+ {
+ "epoch": 2.59,
+ "learning_rate": 2.309782608695652e-06,
+ "loss": 0.63,
+ "step": 780
+ },
+ {
+ "epoch": 2.59,
+ "learning_rate": 2.3043478260869566e-06,
+ "loss": 0.6349,
+ "step": 781
+ },
+ {
+ "epoch": 2.59,
+ "learning_rate": 2.2989130434782608e-06,
+ "loss": 0.6158,
+ "step": 782
+ },
+ {
+ "epoch": 2.6,
+ "learning_rate": 2.293478260869565e-06,
+ "loss": 0.6386,
+ "step": 783
+ },
+ {
+ "epoch": 2.6,
+ "learning_rate": 2.2880434782608695e-06,
+ "loss": 0.6351,
+ "step": 784
+ },
+ {
+ "epoch": 2.6,
+ "learning_rate": 2.282608695652174e-06,
+ "loss": 0.6264,
+ "step": 785
+ },
+ {
+ "epoch": 2.61,
+ "learning_rate": 2.2771739130434783e-06,
+ "loss": 0.6343,
+ "step": 786
+ },
+ {
+ "epoch": 2.61,
+ "learning_rate": 2.271739130434783e-06,
+ "loss": 0.6395,
+ "step": 787
+ },
+ {
+ "epoch": 2.61,
+ "learning_rate": 2.266304347826087e-06,
+ "loss": 0.6398,
+ "step": 788
+ },
+ {
+ "epoch": 2.62,
+ "learning_rate": 2.2608695652173913e-06,
+ "loss": 0.6329,
+ "step": 789
+ },
+ {
+ "epoch": 2.62,
+ "learning_rate": 2.255434782608696e-06,
+ "loss": 0.6385,
+ "step": 790
+ },
+ {
+ "epoch": 2.62,
+ "learning_rate": 2.25e-06,
+ "loss": 0.6279,
+ "step": 791
+ },
+ {
+ "epoch": 2.63,
+ "learning_rate": 2.2445652173913042e-06,
+ "loss": 0.6235,
+ "step": 792
+ },
+ {
+ "epoch": 2.63,
+ "learning_rate": 2.239130434782609e-06,
+ "loss": 0.6309,
+ "step": 793
+ },
+ {
+ "epoch": 2.63,
+ "eval_loss": 0.7672982215881348,
+ "eval_runtime": 17.6155,
+ "eval_samples_per_second": 129.545,
+ "eval_steps_per_second": 5.45,
+ "step": 793
+ },
+ {
+ "epoch": 2.63,
+ "learning_rate": 2.233695652173913e-06,
+ "loss": 0.6449,
+ "step": 794
+ },
+ {
+ "epoch": 2.64,
+ "learning_rate": 2.228260869565217e-06,
+ "loss": 0.6438,
+ "step": 795
+ },
+ {
+ "epoch": 2.64,
+ "learning_rate": 2.222826086956522e-06,
+ "loss": 0.6295,
+ "step": 796
+ },
+ {
+ "epoch": 2.64,
+ "learning_rate": 2.217391304347826e-06,
+ "loss": 0.6437,
+ "step": 797
+ },
+ {
+ "epoch": 2.65,
+ "learning_rate": 2.2119565217391306e-06,
+ "loss": 0.6383,
+ "step": 798
+ },
+ {
+ "epoch": 2.65,
+ "learning_rate": 2.206521739130435e-06,
+ "loss": 0.6198,
+ "step": 799
+ },
+ {
+ "epoch": 2.65,
+ "learning_rate": 2.2010869565217394e-06,
+ "loss": 0.635,
+ "step": 800
+ },
+ {
+ "epoch": 2.66,
+ "learning_rate": 2.1956521739130435e-06,
+ "loss": 0.6344,
+ "step": 801
+ },
+ {
+ "epoch": 2.66,
+ "learning_rate": 2.190217391304348e-06,
+ "loss": 0.6427,
+ "step": 802
+ },
+ {
+ "epoch": 2.66,
+ "learning_rate": 2.1847826086956523e-06,
+ "loss": 0.6346,
+ "step": 803
+ },
+ {
+ "epoch": 2.67,
+ "learning_rate": 2.1793478260869565e-06,
+ "loss": 0.6427,
+ "step": 804
+ },
+ {
+ "epoch": 2.67,
+ "learning_rate": 2.173913043478261e-06,
+ "loss": 0.6394,
+ "step": 805
+ },
+ {
+ "epoch": 2.67,
+ "learning_rate": 2.1684782608695653e-06,
+ "loss": 0.6446,
+ "step": 806
+ },
+ {
+ "epoch": 2.68,
+ "learning_rate": 2.1630434782608695e-06,
+ "loss": 0.6453,
+ "step": 807
+ },
+ {
+ "epoch": 2.68,
+ "learning_rate": 2.157608695652174e-06,
+ "loss": 0.6413,
+ "step": 808
+ },
+ {
+ "epoch": 2.68,
+ "learning_rate": 2.1521739130434782e-06,
+ "loss": 0.6522,
+ "step": 809
+ },
+ {
+ "epoch": 2.69,
+ "learning_rate": 2.1467391304347824e-06,
+ "loss": 0.6361,
+ "step": 810
+ },
+ {
+ "epoch": 2.69,
+ "learning_rate": 2.141304347826087e-06,
+ "loss": 0.646,
+ "step": 811
+ },
+ {
+ "epoch": 2.69,
+ "learning_rate": 2.1358695652173916e-06,
+ "loss": 0.6367,
+ "step": 812
+ },
+ {
+ "epoch": 2.7,
+ "learning_rate": 2.130434782608696e-06,
+ "loss": 0.633,
+ "step": 813
+ },
+ {
+ "epoch": 2.7,
+ "learning_rate": 2.125e-06,
+ "loss": 0.6363,
+ "step": 814
+ },
+ {
+ "epoch": 2.7,
+ "learning_rate": 2.1195652173913046e-06,
+ "loss": 0.6244,
+ "step": 815
+ },
+ {
+ "epoch": 2.71,
+ "learning_rate": 2.1141304347826088e-06,
+ "loss": 0.6339,
+ "step": 816
+ },
+ {
+ "epoch": 2.71,
+ "learning_rate": 2.108695652173913e-06,
+ "loss": 0.6496,
+ "step": 817
+ },
+ {
+ "epoch": 2.71,
+ "learning_rate": 2.1032608695652175e-06,
+ "loss": 0.637,
+ "step": 818
+ },
+ {
+ "epoch": 2.72,
+ "learning_rate": 2.0978260869565217e-06,
+ "loss": 0.6239,
+ "step": 819
+ },
+ {
+ "epoch": 2.72,
+ "learning_rate": 2.092391304347826e-06,
+ "loss": 0.6361,
+ "step": 820
+ },
+ {
+ "epoch": 2.72,
+ "learning_rate": 2.0869565217391305e-06,
+ "loss": 0.6395,
+ "step": 821
+ },
+ {
+ "epoch": 2.73,
+ "learning_rate": 2.0815217391304347e-06,
+ "loss": 0.6114,
+ "step": 822
+ },
+ {
+ "epoch": 2.73,
+ "learning_rate": 2.0760869565217393e-06,
+ "loss": 0.6271,
+ "step": 823
+ },
+ {
+ "epoch": 2.73,
+ "learning_rate": 2.0706521739130435e-06,
+ "loss": 0.6286,
+ "step": 824
+ },
+ {
+ "epoch": 2.74,
+ "learning_rate": 2.0652173913043476e-06,
+ "loss": 0.6325,
+ "step": 825
+ },
+ {
+ "epoch": 2.74,
+ "learning_rate": 2.0597826086956522e-06,
+ "loss": 0.6355,
+ "step": 826
+ },
+ {
+ "epoch": 2.74,
+ "learning_rate": 2.054347826086957e-06,
+ "loss": 0.6335,
+ "step": 827
+ },
+ {
+ "epoch": 2.75,
+ "learning_rate": 2.048913043478261e-06,
+ "loss": 0.6422,
+ "step": 828
+ },
+ {
+ "epoch": 2.75,
+ "learning_rate": 2.043478260869565e-06,
+ "loss": 0.6407,
+ "step": 829
+ },
+ {
+ "epoch": 2.75,
+ "learning_rate": 2.03804347826087e-06,
+ "loss": 0.6405,
+ "step": 830
+ },
+ {
+ "epoch": 2.76,
+ "learning_rate": 2.032608695652174e-06,
+ "loss": 0.6409,
+ "step": 831
+ },
+ {
+ "epoch": 2.76,
+ "learning_rate": 2.027173913043478e-06,
+ "loss": 0.6561,
+ "step": 832
+ },
+ {
+ "epoch": 2.76,
+ "learning_rate": 2.0217391304347828e-06,
+ "loss": 0.6346,
+ "step": 833
+ },
+ {
+ "epoch": 2.77,
+ "learning_rate": 2.016304347826087e-06,
+ "loss": 0.6385,
+ "step": 834
+ },
+ {
+ "epoch": 2.77,
+ "learning_rate": 2.010869565217391e-06,
+ "loss": 0.6325,
+ "step": 835
+ },
+ {
+ "epoch": 2.77,
+ "learning_rate": 2.0054347826086957e-06,
+ "loss": 0.6299,
+ "step": 836
+ },
+ {
+ "epoch": 2.78,
+ "learning_rate": 2e-06,
+ "loss": 0.6252,
+ "step": 837
+ },
+ {
+ "epoch": 2.78,
+ "learning_rate": 1.994565217391304e-06,
+ "loss": 0.6418,
+ "step": 838
+ },
+ {
+ "epoch": 2.78,
+ "learning_rate": 1.9891304347826087e-06,
+ "loss": 0.6263,
+ "step": 839
+ },
+ {
+ "epoch": 2.79,
+ "learning_rate": 1.9836956521739133e-06,
+ "loss": 0.6293,
+ "step": 840
+ },
+ {
+ "epoch": 2.79,
+ "learning_rate": 1.9782608695652175e-06,
+ "loss": 0.6288,
+ "step": 841
+ },
+ {
+ "epoch": 2.79,
+ "learning_rate": 1.972826086956522e-06,
+ "loss": 0.6351,
+ "step": 842
+ },
+ {
+ "epoch": 2.8,
+ "learning_rate": 1.9673913043478263e-06,
+ "loss": 0.6285,
+ "step": 843
+ },
+ {
+ "epoch": 2.8,
+ "learning_rate": 1.9619565217391304e-06,
+ "loss": 0.6307,
+ "step": 844
+ },
+ {
+ "epoch": 2.8,
+ "learning_rate": 1.956521739130435e-06,
+ "loss": 0.6301,
+ "step": 845
+ },
+ {
+ "epoch": 2.81,
+ "learning_rate": 1.9510869565217392e-06,
+ "loss": 0.6226,
+ "step": 846
+ },
+ {
+ "epoch": 2.81,
+ "learning_rate": 1.9456521739130434e-06,
+ "loss": 0.622,
+ "step": 847
+ },
+ {
+ "epoch": 2.81,
+ "learning_rate": 1.940217391304348e-06,
+ "loss": 0.6267,
+ "step": 848
+ },
+ {
+ "epoch": 2.82,
+ "learning_rate": 1.934782608695652e-06,
+ "loss": 0.6106,
+ "step": 849
+ },
+ {
+ "epoch": 2.82,
+ "learning_rate": 1.9293478260869564e-06,
+ "loss": 0.6415,
+ "step": 850
+ },
+ {
+ "epoch": 2.82,
+ "learning_rate": 1.923913043478261e-06,
+ "loss": 0.6326,
+ "step": 851
+ },
+ {
+ "epoch": 2.83,
+ "learning_rate": 1.918478260869565e-06,
+ "loss": 0.6341,
+ "step": 852
+ },
+ {
+ "epoch": 2.83,
+ "learning_rate": 1.9130434782608693e-06,
+ "loss": 0.644,
+ "step": 853
+ },
+ {
+ "epoch": 2.83,
+ "learning_rate": 1.907608695652174e-06,
+ "loss": 0.6255,
+ "step": 854
+ },
+ {
+ "epoch": 2.83,
+ "eval_loss": 0.7656497359275818,
+ "eval_runtime": 17.6221,
+ "eval_samples_per_second": 129.496,
+ "eval_steps_per_second": 5.448,
+ "step": 854
+ },
+ {
+ "epoch": 2.84,
+ "learning_rate": 1.9021739130434785e-06,
+ "loss": 0.6199,
+ "step": 855
+ },
+ {
+ "epoch": 2.84,
+ "learning_rate": 1.8967391304347827e-06,
+ "loss": 0.6334,
+ "step": 856
+ },
+ {
+ "epoch": 2.84,
+ "learning_rate": 1.891304347826087e-06,
+ "loss": 0.6348,
+ "step": 857
+ },
+ {
+ "epoch": 2.85,
+ "learning_rate": 1.8858695652173915e-06,
+ "loss": 0.6361,
+ "step": 858
+ },
+ {
+ "epoch": 2.85,
+ "learning_rate": 1.8804347826086957e-06,
+ "loss": 0.6388,
+ "step": 859
+ },
+ {
+ "epoch": 2.85,
+ "learning_rate": 1.875e-06,
+ "loss": 0.653,
+ "step": 860
+ },
+ {
+ "epoch": 2.86,
+ "learning_rate": 1.8695652173913044e-06,
+ "loss": 0.6391,
+ "step": 861
+ },
+ {
+ "epoch": 2.86,
+ "learning_rate": 1.8641304347826086e-06,
+ "loss": 0.6152,
+ "step": 862
+ },
+ {
+ "epoch": 2.86,
+ "learning_rate": 1.858695652173913e-06,
+ "loss": 0.6552,
+ "step": 863
+ },
+ {
+ "epoch": 2.87,
+ "learning_rate": 1.8532608695652174e-06,
+ "loss": 0.6354,
+ "step": 864
+ },
+ {
+ "epoch": 2.87,
+ "learning_rate": 1.8478260869565216e-06,
+ "loss": 0.6272,
+ "step": 865
+ },
+ {
+ "epoch": 2.87,
+ "learning_rate": 1.842391304347826e-06,
+ "loss": 0.6433,
+ "step": 866
+ },
+ {
+ "epoch": 2.88,
+ "learning_rate": 1.8369565217391306e-06,
+ "loss": 0.6174,
+ "step": 867
+ },
+ {
+ "epoch": 2.88,
+ "learning_rate": 1.831521739130435e-06,
+ "loss": 0.6129,
+ "step": 868
+ },
+ {
+ "epoch": 2.88,
+ "learning_rate": 1.8260869565217394e-06,
+ "loss": 0.6384,
+ "step": 869
+ },
+ {
+ "epoch": 2.89,
+ "learning_rate": 1.8206521739130435e-06,
+ "loss": 0.6471,
+ "step": 870
+ },
+ {
+ "epoch": 2.89,
+ "learning_rate": 1.815217391304348e-06,
+ "loss": 0.6339,
+ "step": 871
+ },
+ {
+ "epoch": 2.89,
+ "learning_rate": 1.8097826086956523e-06,
+ "loss": 0.6236,
+ "step": 872
+ },
+ {
+ "epoch": 2.9,
+ "learning_rate": 1.8043478260869565e-06,
+ "loss": 0.6083,
+ "step": 873
+ },
+ {
+ "epoch": 2.9,
+ "learning_rate": 1.7989130434782609e-06,
+ "loss": 0.6331,
+ "step": 874
+ },
+ {
+ "epoch": 2.9,
+ "learning_rate": 1.7934782608695653e-06,
+ "loss": 0.634,
+ "step": 875
+ },
+ {
+ "epoch": 2.91,
+ "learning_rate": 1.7880434782608697e-06,
+ "loss": 0.63,
+ "step": 876
+ },
+ {
+ "epoch": 2.91,
+ "learning_rate": 1.7826086956521738e-06,
+ "loss": 0.6643,
+ "step": 877
+ },
+ {
+ "epoch": 2.91,
+ "learning_rate": 1.7771739130434782e-06,
+ "loss": 0.639,
+ "step": 878
+ },
+ {
+ "epoch": 2.92,
+ "learning_rate": 1.7717391304347826e-06,
+ "loss": 0.6396,
+ "step": 879
+ },
+ {
+ "epoch": 2.92,
+ "learning_rate": 1.7663043478260868e-06,
+ "loss": 0.6391,
+ "step": 880
+ },
+ {
+ "epoch": 2.92,
+ "learning_rate": 1.7608695652173914e-06,
+ "loss": 0.6264,
+ "step": 881
+ },
+ {
+ "epoch": 2.93,
+ "learning_rate": 1.7554347826086958e-06,
+ "loss": 0.6211,
+ "step": 882
+ },
+ {
+ "epoch": 2.93,
+ "learning_rate": 1.7500000000000002e-06,
+ "loss": 0.6119,
+ "step": 883
+ },
+ {
+ "epoch": 2.93,
+ "learning_rate": 1.7445652173913046e-06,
+ "loss": 0.6365,
+ "step": 884
+ },
+ {
+ "epoch": 2.94,
+ "learning_rate": 1.7391304347826088e-06,
+ "loss": 0.6396,
+ "step": 885
+ },
+ {
+ "epoch": 2.94,
+ "learning_rate": 1.7336956521739131e-06,
+ "loss": 0.6489,
+ "step": 886
+ },
+ {
+ "epoch": 2.94,
+ "learning_rate": 1.7282608695652175e-06,
+ "loss": 0.6471,
+ "step": 887
+ },
+ {
+ "epoch": 2.95,
+ "learning_rate": 1.7228260869565217e-06,
+ "loss": 0.6243,
+ "step": 888
+ },
+ {
+ "epoch": 2.95,
+ "learning_rate": 1.7173913043478261e-06,
+ "loss": 0.6238,
+ "step": 889
+ },
+ {
+ "epoch": 2.95,
+ "learning_rate": 1.7119565217391305e-06,
+ "loss": 0.6162,
+ "step": 890
+ },
+ {
+ "epoch": 2.96,
+ "learning_rate": 1.7065217391304347e-06,
+ "loss": 0.6443,
+ "step": 891
+ },
+ {
+ "epoch": 2.96,
+ "learning_rate": 1.701086956521739e-06,
+ "loss": 0.637,
+ "step": 892
+ },
+ {
+ "epoch": 2.96,
+ "learning_rate": 1.6956521739130435e-06,
+ "loss": 0.6377,
+ "step": 893
+ },
+ {
+ "epoch": 2.97,
+ "learning_rate": 1.6902173913043476e-06,
+ "loss": 0.6316,
+ "step": 894
+ },
+ {
+ "epoch": 2.97,
+ "learning_rate": 1.6847826086956524e-06,
+ "loss": 0.6405,
+ "step": 895
+ },
+ {
+ "epoch": 2.97,
+ "learning_rate": 1.6793478260869566e-06,
+ "loss": 0.6483,
+ "step": 896
+ },
+ {
+ "epoch": 2.98,
+ "learning_rate": 1.673913043478261e-06,
+ "loss": 0.6318,
+ "step": 897
+ },
+ {
+ "epoch": 2.98,
+ "learning_rate": 1.6684782608695654e-06,
+ "loss": 0.6447,
+ "step": 898
+ },
+ {
+ "epoch": 2.98,
+ "learning_rate": 1.6630434782608696e-06,
+ "loss": 0.65,
+ "step": 899
+ },
+ {
+ "epoch": 2.99,
+ "learning_rate": 1.657608695652174e-06,
+ "loss": 0.6443,
+ "step": 900
+ },
+ {
+ "epoch": 2.99,
+ "learning_rate": 1.6521739130434784e-06,
+ "loss": 0.6355,
+ "step": 901
+ },
+ {
+ "epoch": 2.99,
+ "learning_rate": 1.6467391304347825e-06,
+ "loss": 0.642,
+ "step": 902
+ },
+ {
+ "epoch": 3.0,
+ "learning_rate": 1.641304347826087e-06,
+ "loss": 0.6527,
+ "step": 903
+ },
+ {
+ "epoch": 3.0,
+ "learning_rate": 1.6358695652173913e-06,
+ "loss": 0.6383,
+ "step": 904
+ },
+ {
+ "epoch": 3.0,
+ "learning_rate": 1.6304347826086955e-06,
+ "loss": 0.6181,
+ "step": 905
+ },
+ {
+ "epoch": 3.0,
+ "learning_rate": 1.625e-06,
+ "loss": 0.5692,
+ "step": 906
+ },
+ {
+ "epoch": 3.01,
+ "learning_rate": 1.6195652173913043e-06,
+ "loss": 0.5856,
+ "step": 907
+ },
+ {
+ "epoch": 3.01,
+ "learning_rate": 1.6141304347826087e-06,
+ "loss": 0.5873,
+ "step": 908
+ },
+ {
+ "epoch": 3.01,
+ "learning_rate": 1.6086956521739133e-06,
+ "loss": 0.5891,
+ "step": 909
+ },
+ {
+ "epoch": 3.02,
+ "learning_rate": 1.6032608695652175e-06,
+ "loss": 0.5623,
+ "step": 910
+ },
+ {
+ "epoch": 3.02,
+ "learning_rate": 1.5978260869565219e-06,
+ "loss": 0.6019,
+ "step": 911
+ },
+ {
+ "epoch": 3.02,
+ "learning_rate": 1.5923913043478262e-06,
+ "loss": 0.5961,
+ "step": 912
+ },
+ {
+ "epoch": 3.03,
+ "learning_rate": 1.5869565217391304e-06,
+ "loss": 0.5779,
+ "step": 913
+ },
+ {
+ "epoch": 3.03,
+ "learning_rate": 1.5815217391304348e-06,
+ "loss": 0.5728,
+ "step": 914
+ },
+ {
+ "epoch": 3.03,
+ "learning_rate": 1.5760869565217392e-06,
+ "loss": 0.5855,
+ "step": 915
+ },
+ {
+ "epoch": 3.03,
+ "eval_loss": 0.7823443412780762,
+ "eval_runtime": 17.617,
+ "eval_samples_per_second": 129.534,
+ "eval_steps_per_second": 5.449,
+ "step": 915
+ },
+ {
+ "epoch": 3.04,
+ "learning_rate": 1.5706521739130436e-06,
+ "loss": 0.5867,
+ "step": 916
+ },
+ {
+ "epoch": 3.04,
+ "learning_rate": 1.5652173913043478e-06,
+ "loss": 0.5808,
+ "step": 917
+ },
+ {
+ "epoch": 3.04,
+ "learning_rate": 1.5597826086956522e-06,
+ "loss": 0.5879,
+ "step": 918
+ },
+ {
+ "epoch": 3.05,
+ "learning_rate": 1.5543478260869566e-06,
+ "loss": 0.5726,
+ "step": 919
+ },
+ {
+ "epoch": 3.05,
+ "learning_rate": 1.5489130434782607e-06,
+ "loss": 0.5633,
+ "step": 920
+ },
+ {
+ "epoch": 3.05,
+ "learning_rate": 1.5434782608695651e-06,
+ "loss": 0.5703,
+ "step": 921
+ },
+ {
+ "epoch": 3.06,
+ "learning_rate": 1.5380434782608695e-06,
+ "loss": 0.5811,
+ "step": 922
+ },
+ {
+ "epoch": 3.06,
+ "learning_rate": 1.5326086956521741e-06,
+ "loss": 0.5724,
+ "step": 923
+ },
+ {
+ "epoch": 3.06,
+ "learning_rate": 1.5271739130434785e-06,
+ "loss": 0.5885,
+ "step": 924
+ },
+ {
+ "epoch": 3.07,
+ "learning_rate": 1.5217391304347827e-06,
+ "loss": 0.5744,
+ "step": 925
+ },
+ {
+ "epoch": 3.07,
+ "learning_rate": 1.516304347826087e-06,
+ "loss": 0.5658,
+ "step": 926
+ },
+ {
+ "epoch": 3.07,
+ "learning_rate": 1.5108695652173915e-06,
+ "loss": 0.5684,
+ "step": 927
+ },
+ {
+ "epoch": 3.08,
+ "learning_rate": 1.5054347826086956e-06,
+ "loss": 0.5803,
+ "step": 928
+ },
+ {
+ "epoch": 3.08,
+ "learning_rate": 1.5e-06,
+ "loss": 0.586,
+ "step": 929
+ },
+ {
+ "epoch": 3.08,
+ "learning_rate": 1.4945652173913044e-06,
+ "loss": 0.5787,
+ "step": 930
+ },
+ {
+ "epoch": 3.09,
+ "learning_rate": 1.4891304347826086e-06,
+ "loss": 0.5757,
+ "step": 931
+ },
+ {
+ "epoch": 3.09,
+ "learning_rate": 1.483695652173913e-06,
+ "loss": 0.5655,
+ "step": 932
+ },
+ {
+ "epoch": 3.09,
+ "learning_rate": 1.4782608695652176e-06,
+ "loss": 0.5827,
+ "step": 933
+ },
+ {
+ "epoch": 3.1,
+ "learning_rate": 1.4728260869565218e-06,
+ "loss": 0.5978,
+ "step": 934
+ },
+ {
+ "epoch": 3.1,
+ "learning_rate": 1.4673913043478262e-06,
+ "loss": 0.5797,
+ "step": 935
+ },
+ {
+ "epoch": 3.1,
+ "learning_rate": 1.4619565217391306e-06,
+ "loss": 0.5823,
+ "step": 936
+ },
+ {
+ "epoch": 3.11,
+ "learning_rate": 1.4565217391304347e-06,
+ "loss": 0.5805,
+ "step": 937
+ },
+ {
+ "epoch": 3.11,
+ "learning_rate": 1.4510869565217391e-06,
+ "loss": 0.5685,
+ "step": 938
+ },
+ {
+ "epoch": 3.11,
+ "learning_rate": 1.4456521739130435e-06,
+ "loss": 0.5818,
+ "step": 939
+ },
+ {
+ "epoch": 3.12,
+ "learning_rate": 1.440217391304348e-06,
+ "loss": 0.571,
+ "step": 940
+ },
+ {
+ "epoch": 3.12,
+ "learning_rate": 1.4347826086956523e-06,
+ "loss": 0.5962,
+ "step": 941
+ },
+ {
+ "epoch": 3.12,
+ "learning_rate": 1.4293478260869565e-06,
+ "loss": 0.5897,
+ "step": 942
+ },
+ {
+ "epoch": 3.13,
+ "learning_rate": 1.4239130434782609e-06,
+ "loss": 0.5979,
+ "step": 943
+ },
+ {
+ "epoch": 3.13,
+ "learning_rate": 1.4184782608695653e-06,
+ "loss": 0.5771,
+ "step": 944
+ },
+ {
+ "epoch": 3.13,
+ "learning_rate": 1.4130434782608697e-06,
+ "loss": 0.5832,
+ "step": 945
+ },
+ {
+ "epoch": 3.14,
+ "learning_rate": 1.4076086956521738e-06,
+ "loss": 0.5997,
+ "step": 946
+ },
+ {
+ "epoch": 3.14,
+ "learning_rate": 1.4021739130434784e-06,
+ "loss": 0.5826,
+ "step": 947
+ },
+ {
+ "epoch": 3.14,
+ "learning_rate": 1.3967391304347826e-06,
+ "loss": 0.5864,
+ "step": 948
+ },
+ {
+ "epoch": 3.15,
+ "learning_rate": 1.391304347826087e-06,
+ "loss": 0.5822,
+ "step": 949
+ },
+ {
+ "epoch": 3.15,
+ "learning_rate": 1.3858695652173914e-06,
+ "loss": 0.5948,
+ "step": 950
+ },
+ {
+ "epoch": 3.15,
+ "learning_rate": 1.3804347826086956e-06,
+ "loss": 0.5913,
+ "step": 951
+ },
+ {
+ "epoch": 3.16,
+ "learning_rate": 1.375e-06,
+ "loss": 0.5886,
+ "step": 952
+ },
+ {
+ "epoch": 3.16,
+ "learning_rate": 1.3695652173913044e-06,
+ "loss": 0.5773,
+ "step": 953
+ },
+ {
+ "epoch": 3.16,
+ "learning_rate": 1.3641304347826087e-06,
+ "loss": 0.6007,
+ "step": 954
+ },
+ {
+ "epoch": 3.17,
+ "learning_rate": 1.3586956521739131e-06,
+ "loss": 0.5644,
+ "step": 955
+ },
+ {
+ "epoch": 3.17,
+ "learning_rate": 1.3532608695652175e-06,
+ "loss": 0.6069,
+ "step": 956
+ },
+ {
+ "epoch": 3.17,
+ "learning_rate": 1.3478260869565217e-06,
+ "loss": 0.5777,
+ "step": 957
+ },
+ {
+ "epoch": 3.18,
+ "learning_rate": 1.342391304347826e-06,
+ "loss": 0.5901,
+ "step": 958
+ },
+ {
+ "epoch": 3.18,
+ "learning_rate": 1.3369565217391305e-06,
+ "loss": 0.5853,
+ "step": 959
+ },
+ {
+ "epoch": 3.18,
+ "learning_rate": 1.3315217391304347e-06,
+ "loss": 0.569,
+ "step": 960
+ },
+ {
+ "epoch": 3.19,
+ "learning_rate": 1.3260869565217393e-06,
+ "loss": 0.5809,
+ "step": 961
+ },
+ {
+ "epoch": 3.19,
+ "learning_rate": 1.3206521739130434e-06,
+ "loss": 0.6015,
+ "step": 962
+ },
+ {
+ "epoch": 3.19,
+ "learning_rate": 1.3152173913043478e-06,
+ "loss": 0.5966,
+ "step": 963
+ },
+ {
+ "epoch": 3.2,
+ "learning_rate": 1.3097826086956522e-06,
+ "loss": 0.5774,
+ "step": 964
+ },
+ {
+ "epoch": 3.2,
+ "learning_rate": 1.3043478260869566e-06,
+ "loss": 0.5777,
+ "step": 965
+ },
+ {
+ "epoch": 3.2,
+ "learning_rate": 1.2989130434782608e-06,
+ "loss": 0.5811,
+ "step": 966
+ },
+ {
+ "epoch": 3.21,
+ "learning_rate": 1.2934782608695654e-06,
+ "loss": 0.5633,
+ "step": 967
+ },
+ {
+ "epoch": 3.21,
+ "learning_rate": 1.2880434782608696e-06,
+ "loss": 0.5904,
+ "step": 968
+ },
+ {
+ "epoch": 3.21,
+ "learning_rate": 1.282608695652174e-06,
+ "loss": 0.5732,
+ "step": 969
+ },
+ {
+ "epoch": 3.22,
+ "learning_rate": 1.2771739130434784e-06,
+ "loss": 0.5988,
+ "step": 970
+ },
+ {
+ "epoch": 3.22,
+ "learning_rate": 1.2717391304347825e-06,
+ "loss": 0.5784,
+ "step": 971
+ },
+ {
+ "epoch": 3.22,
+ "learning_rate": 1.266304347826087e-06,
+ "loss": 0.5813,
+ "step": 972
+ },
+ {
+ "epoch": 3.23,
+ "learning_rate": 1.2608695652173913e-06,
+ "loss": 0.5823,
+ "step": 973
+ },
+ {
+ "epoch": 3.23,
+ "learning_rate": 1.2554347826086957e-06,
+ "loss": 0.5879,
+ "step": 974
+ },
+ {
+ "epoch": 3.23,
+ "learning_rate": 1.25e-06,
+ "loss": 0.5847,
+ "step": 975
+ },
+ {
+ "epoch": 3.24,
+ "learning_rate": 1.2445652173913045e-06,
+ "loss": 0.594,
+ "step": 976
+ },
+ {
+ "epoch": 3.24,
+ "eval_loss": 0.7821070551872253,
+ "eval_runtime": 17.6099,
+ "eval_samples_per_second": 129.586,
+ "eval_steps_per_second": 5.451,
+ "step": 976
+ },
+ {
+ "epoch": 3.24,
+ "learning_rate": 1.2391304347826087e-06,
+ "loss": 0.5858,
+ "step": 977
+ },
+ {
+ "epoch": 3.24,
+ "learning_rate": 1.233695652173913e-06,
+ "loss": 0.5658,
+ "step": 978
+ },
+ {
+ "epoch": 3.25,
+ "learning_rate": 1.2282608695652175e-06,
+ "loss": 0.6045,
+ "step": 979
+ },
+ {
+ "epoch": 3.25,
+ "learning_rate": 1.2228260869565216e-06,
+ "loss": 0.5956,
+ "step": 980
+ },
+ {
+ "epoch": 3.25,
+ "learning_rate": 1.2173913043478262e-06,
+ "loss": 0.5875,
+ "step": 981
+ },
+ {
+ "epoch": 3.26,
+ "learning_rate": 1.2119565217391304e-06,
+ "loss": 0.5927,
+ "step": 982
+ },
+ {
+ "epoch": 3.26,
+ "learning_rate": 1.2065217391304348e-06,
+ "loss": 0.595,
+ "step": 983
+ },
+ {
+ "epoch": 3.26,
+ "learning_rate": 1.2010869565217392e-06,
+ "loss": 0.6002,
+ "step": 984
+ },
+ {
+ "epoch": 3.27,
+ "learning_rate": 1.1956521739130436e-06,
+ "loss": 0.5785,
+ "step": 985
+ },
+ {
+ "epoch": 3.27,
+ "learning_rate": 1.1902173913043478e-06,
+ "loss": 0.5844,
+ "step": 986
+ },
+ {
+ "epoch": 3.27,
+ "learning_rate": 1.1847826086956522e-06,
+ "loss": 0.5722,
+ "step": 987
+ },
+ {
+ "epoch": 3.28,
+ "learning_rate": 1.1793478260869565e-06,
+ "loss": 0.6025,
+ "step": 988
+ },
+ {
+ "epoch": 3.28,
+ "learning_rate": 1.173913043478261e-06,
+ "loss": 0.5794,
+ "step": 989
+ },
+ {
+ "epoch": 3.28,
+ "learning_rate": 1.1684782608695653e-06,
+ "loss": 0.5758,
+ "step": 990
+ },
+ {
+ "epoch": 3.29,
+ "learning_rate": 1.1630434782608695e-06,
+ "loss": 0.5911,
+ "step": 991
+ },
+ {
+ "epoch": 3.29,
+ "learning_rate": 1.157608695652174e-06,
+ "loss": 0.5862,
+ "step": 992
+ },
+ {
+ "epoch": 3.29,
+ "learning_rate": 1.1521739130434783e-06,
+ "loss": 0.5885,
+ "step": 993
+ },
+ {
+ "epoch": 3.3,
+ "learning_rate": 1.1467391304347825e-06,
+ "loss": 0.5874,
+ "step": 994
+ },
+ {
+ "epoch": 3.3,
+ "learning_rate": 1.141304347826087e-06,
+ "loss": 0.5796,
+ "step": 995
+ },
+ {
+ "epoch": 3.3,
+ "learning_rate": 1.1358695652173915e-06,
+ "loss": 0.598,
+ "step": 996
+ },
+ {
+ "epoch": 3.31,
+ "learning_rate": 1.1304347826086956e-06,
+ "loss": 0.5649,
+ "step": 997
+ },
+ {
+ "epoch": 3.31,
+ "learning_rate": 1.125e-06,
+ "loss": 0.5958,
+ "step": 998
+ },
+ {
+ "epoch": 3.31,
+ "learning_rate": 1.1195652173913044e-06,
+ "loss": 0.5721,
+ "step": 999
+ },
+ {
+ "epoch": 3.32,
+ "learning_rate": 1.1141304347826086e-06,
+ "loss": 0.5807,
+ "step": 1000
+ },
+ {
+ "epoch": 3.32,
+ "learning_rate": 1.108695652173913e-06,
+ "loss": 0.5779,
+ "step": 1001
+ },
+ {
+ "epoch": 3.32,
+ "learning_rate": 1.1032608695652176e-06,
+ "loss": 0.5884,
+ "step": 1002
+ },
+ {
+ "epoch": 3.33,
+ "learning_rate": 1.0978260869565218e-06,
+ "loss": 0.5588,
+ "step": 1003
+ },
+ {
+ "epoch": 3.33,
+ "learning_rate": 1.0923913043478262e-06,
+ "loss": 0.587,
+ "step": 1004
+ },
+ {
+ "epoch": 3.33,
+ "learning_rate": 1.0869565217391306e-06,
+ "loss": 0.5947,
+ "step": 1005
+ },
+ {
+ "epoch": 3.34,
+ "learning_rate": 1.0815217391304347e-06,
+ "loss": 0.5878,
+ "step": 1006
+ },
+ {
+ "epoch": 3.34,
+ "learning_rate": 1.0760869565217391e-06,
+ "loss": 0.5799,
+ "step": 1007
+ },
+ {
+ "epoch": 3.34,
+ "learning_rate": 1.0706521739130435e-06,
+ "loss": 0.5883,
+ "step": 1008
+ },
+ {
+ "epoch": 3.35,
+ "learning_rate": 1.065217391304348e-06,
+ "loss": 0.5812,
+ "step": 1009
+ },
+ {
+ "epoch": 3.35,
+ "learning_rate": 1.0597826086956523e-06,
+ "loss": 0.6035,
+ "step": 1010
+ },
+ {
+ "epoch": 3.35,
+ "learning_rate": 1.0543478260869565e-06,
+ "loss": 0.5917,
+ "step": 1011
+ },
+ {
+ "epoch": 3.36,
+ "learning_rate": 1.0489130434782609e-06,
+ "loss": 0.5717,
+ "step": 1012
+ },
+ {
+ "epoch": 3.36,
+ "learning_rate": 1.0434782608695653e-06,
+ "loss": 0.5666,
+ "step": 1013
+ },
+ {
+ "epoch": 3.36,
+ "learning_rate": 1.0380434782608696e-06,
+ "loss": 0.5983,
+ "step": 1014
+ },
+ {
+ "epoch": 3.37,
+ "learning_rate": 1.0326086956521738e-06,
+ "loss": 0.5921,
+ "step": 1015
+ },
+ {
+ "epoch": 3.37,
+ "learning_rate": 1.0271739130434784e-06,
+ "loss": 0.5959,
+ "step": 1016
+ },
+ {
+ "epoch": 3.37,
+ "learning_rate": 1.0217391304347826e-06,
+ "loss": 0.57,
+ "step": 1017
+ },
+ {
+ "epoch": 3.38,
+ "learning_rate": 1.016304347826087e-06,
+ "loss": 0.5806,
+ "step": 1018
+ },
+ {
+ "epoch": 3.38,
+ "learning_rate": 1.0108695652173914e-06,
+ "loss": 0.5865,
+ "step": 1019
+ },
+ {
+ "epoch": 3.38,
+ "learning_rate": 1.0054347826086956e-06,
+ "loss": 0.5943,
+ "step": 1020
+ },
+ {
+ "epoch": 3.39,
+ "learning_rate": 1e-06,
+ "loss": 0.5695,
+ "step": 1021
+ },
+ {
+ "epoch": 3.39,
+ "learning_rate": 9.945652173913043e-07,
+ "loss": 0.5857,
+ "step": 1022
+ },
+ {
+ "epoch": 3.39,
+ "learning_rate": 9.891304347826087e-07,
+ "loss": 0.5908,
+ "step": 1023
+ },
+ {
+ "epoch": 3.4,
+ "learning_rate": 9.836956521739131e-07,
+ "loss": 0.5626,
+ "step": 1024
+ },
+ {
+ "epoch": 3.4,
+ "learning_rate": 9.782608695652175e-07,
+ "loss": 0.5769,
+ "step": 1025
+ },
+ {
+ "epoch": 3.4,
+ "learning_rate": 9.728260869565217e-07,
+ "loss": 0.5884,
+ "step": 1026
+ },
+ {
+ "epoch": 3.41,
+ "learning_rate": 9.67391304347826e-07,
+ "loss": 0.5691,
+ "step": 1027
+ },
+ {
+ "epoch": 3.41,
+ "learning_rate": 9.619565217391305e-07,
+ "loss": 0.5825,
+ "step": 1028
+ },
+ {
+ "epoch": 3.41,
+ "learning_rate": 9.565217391304347e-07,
+ "loss": 0.5812,
+ "step": 1029
+ },
+ {
+ "epoch": 3.42,
+ "learning_rate": 9.510869565217393e-07,
+ "loss": 0.5674,
+ "step": 1030
+ },
+ {
+ "epoch": 3.42,
+ "learning_rate": 9.456521739130435e-07,
+ "loss": 0.5725,
+ "step": 1031
+ },
+ {
+ "epoch": 3.42,
+ "learning_rate": 9.402173913043478e-07,
+ "loss": 0.5788,
+ "step": 1032
+ },
+ {
+ "epoch": 3.43,
+ "learning_rate": 9.347826086956522e-07,
+ "loss": 0.6002,
+ "step": 1033
+ },
+ {
+ "epoch": 3.43,
+ "learning_rate": 9.293478260869565e-07,
+ "loss": 0.5616,
+ "step": 1034
+ },
+ {
+ "epoch": 3.43,
+ "learning_rate": 9.239130434782608e-07,
+ "loss": 0.5866,
+ "step": 1035
+ },
+ {
+ "epoch": 3.44,
+ "learning_rate": 9.184782608695653e-07,
+ "loss": 0.5993,
+ "step": 1036
+ },
+ {
+ "epoch": 3.44,
+ "learning_rate": 9.130434782608697e-07,
+ "loss": 0.5916,
+ "step": 1037
+ },
+ {
+ "epoch": 3.44,
+ "eval_loss": 0.7818962931632996,
+ "eval_runtime": 17.6137,
+ "eval_samples_per_second": 129.558,
+ "eval_steps_per_second": 5.45,
+ "step": 1037
+ },
+ {
+ "epoch": 3.44,
+ "learning_rate": 9.07608695652174e-07,
+ "loss": 0.5816,
+ "step": 1038
+ },
+ {
+ "epoch": 3.45,
+ "learning_rate": 9.021739130434782e-07,
+ "loss": 0.5946,
+ "step": 1039
+ },
+ {
+ "epoch": 3.45,
+ "learning_rate": 8.967391304347826e-07,
+ "loss": 0.5826,
+ "step": 1040
+ },
+ {
+ "epoch": 3.45,
+ "learning_rate": 8.913043478260869e-07,
+ "loss": 0.5805,
+ "step": 1041
+ },
+ {
+ "epoch": 3.46,
+ "learning_rate": 8.858695652173913e-07,
+ "loss": 0.6098,
+ "step": 1042
+ },
+ {
+ "epoch": 3.46,
+ "learning_rate": 8.804347826086957e-07,
+ "loss": 0.5869,
+ "step": 1043
+ },
+ {
+ "epoch": 3.46,
+ "learning_rate": 8.750000000000001e-07,
+ "loss": 0.573,
+ "step": 1044
+ },
+ {
+ "epoch": 3.47,
+ "learning_rate": 8.695652173913044e-07,
+ "loss": 0.5862,
+ "step": 1045
+ },
+ {
+ "epoch": 3.47,
+ "learning_rate": 8.641304347826088e-07,
+ "loss": 0.5944,
+ "step": 1046
+ },
+ {
+ "epoch": 3.47,
+ "learning_rate": 8.586956521739131e-07,
+ "loss": 0.5824,
+ "step": 1047
+ },
+ {
+ "epoch": 3.48,
+ "learning_rate": 8.532608695652173e-07,
+ "loss": 0.5997,
+ "step": 1048
+ },
+ {
+ "epoch": 3.48,
+ "learning_rate": 8.478260869565217e-07,
+ "loss": 0.5699,
+ "step": 1049
+ },
+ {
+ "epoch": 3.48,
+ "learning_rate": 8.423913043478262e-07,
+ "loss": 0.5964,
+ "step": 1050
+ },
+ {
+ "epoch": 3.49,
+ "learning_rate": 8.369565217391305e-07,
+ "loss": 0.5931,
+ "step": 1051
+ },
+ {
+ "epoch": 3.49,
+ "learning_rate": 8.315217391304348e-07,
+ "loss": 0.5951,
+ "step": 1052
+ },
+ {
+ "epoch": 3.49,
+ "learning_rate": 8.260869565217392e-07,
+ "loss": 0.5769,
+ "step": 1053
+ },
+ {
+ "epoch": 3.5,
+ "learning_rate": 8.206521739130435e-07,
+ "loss": 0.5687,
+ "step": 1054
+ },
+ {
+ "epoch": 3.5,
+ "learning_rate": 8.152173913043478e-07,
+ "loss": 0.5932,
+ "step": 1055
+ },
+ {
+ "epoch": 3.5,
+ "learning_rate": 8.097826086956521e-07,
+ "loss": 0.6009,
+ "step": 1056
+ },
+ {
+ "epoch": 3.51,
+ "learning_rate": 8.043478260869566e-07,
+ "loss": 0.5691,
+ "step": 1057
+ },
+ {
+ "epoch": 3.51,
+ "learning_rate": 7.989130434782609e-07,
+ "loss": 0.593,
+ "step": 1058
+ },
+ {
+ "epoch": 3.51,
+ "learning_rate": 7.934782608695652e-07,
+ "loss": 0.59,
+ "step": 1059
+ },
+ {
+ "epoch": 3.52,
+ "learning_rate": 7.880434782608696e-07,
+ "loss": 0.5868,
+ "step": 1060
+ },
+ {
+ "epoch": 3.52,
+ "learning_rate": 7.826086956521739e-07,
+ "loss": 0.588,
+ "step": 1061
+ },
+ {
+ "epoch": 3.52,
+ "learning_rate": 7.771739130434783e-07,
+ "loss": 0.5839,
+ "step": 1062
+ },
+ {
+ "epoch": 3.53,
+ "learning_rate": 7.717391304347826e-07,
+ "loss": 0.5876,
+ "step": 1063
+ },
+ {
+ "epoch": 3.53,
+ "learning_rate": 7.663043478260871e-07,
+ "loss": 0.5799,
+ "step": 1064
+ },
+ {
+ "epoch": 3.53,
+ "learning_rate": 7.608695652173913e-07,
+ "loss": 0.5802,
+ "step": 1065
+ },
+ {
+ "epoch": 3.54,
+ "learning_rate": 7.554347826086957e-07,
+ "loss": 0.593,
+ "step": 1066
+ },
+ {
+ "epoch": 3.54,
+ "learning_rate": 7.5e-07,
+ "loss": 0.6039,
+ "step": 1067
+ },
+ {
+ "epoch": 3.54,
+ "learning_rate": 7.445652173913043e-07,
+ "loss": 0.5731,
+ "step": 1068
+ },
+ {
+ "epoch": 3.55,
+ "learning_rate": 7.391304347826088e-07,
+ "loss": 0.5941,
+ "step": 1069
+ },
+ {
+ "epoch": 3.55,
+ "learning_rate": 7.336956521739131e-07,
+ "loss": 0.6007,
+ "step": 1070
+ },
+ {
+ "epoch": 3.55,
+ "learning_rate": 7.282608695652174e-07,
+ "loss": 0.5724,
+ "step": 1071
+ },
+ {
+ "epoch": 3.56,
+ "learning_rate": 7.228260869565218e-07,
+ "loss": 0.5829,
+ "step": 1072
+ },
+ {
+ "epoch": 3.56,
+ "learning_rate": 7.173913043478262e-07,
+ "loss": 0.5802,
+ "step": 1073
+ },
+ {
+ "epoch": 3.56,
+ "learning_rate": 7.119565217391304e-07,
+ "loss": 0.591,
+ "step": 1074
+ },
+ {
+ "epoch": 3.57,
+ "learning_rate": 7.065217391304348e-07,
+ "loss": 0.5748,
+ "step": 1075
+ },
+ {
+ "epoch": 3.57,
+ "learning_rate": 7.010869565217392e-07,
+ "loss": 0.5877,
+ "step": 1076
+ },
+ {
+ "epoch": 3.57,
+ "learning_rate": 6.956521739130435e-07,
+ "loss": 0.6029,
+ "step": 1077
+ },
+ {
+ "epoch": 3.58,
+ "learning_rate": 6.902173913043478e-07,
+ "loss": 0.5827,
+ "step": 1078
+ },
+ {
+ "epoch": 3.58,
+ "learning_rate": 6.847826086956522e-07,
+ "loss": 0.5958,
+ "step": 1079
+ },
+ {
+ "epoch": 3.58,
+ "learning_rate": 6.793478260869566e-07,
+ "loss": 0.569,
+ "step": 1080
+ },
+ {
+ "epoch": 3.59,
+ "learning_rate": 6.739130434782609e-07,
+ "loss": 0.5899,
+ "step": 1081
+ },
+ {
+ "epoch": 3.59,
+ "learning_rate": 6.684782608695652e-07,
+ "loss": 0.5876,
+ "step": 1082
+ },
+ {
+ "epoch": 3.59,
+ "learning_rate": 6.630434782608696e-07,
+ "loss": 0.5833,
+ "step": 1083
+ },
+ {
+ "epoch": 3.6,
+ "learning_rate": 6.576086956521739e-07,
+ "loss": 0.5949,
+ "step": 1084
+ },
+ {
+ "epoch": 3.6,
+ "learning_rate": 6.521739130434783e-07,
+ "loss": 0.5917,
+ "step": 1085
+ },
+ {
+ "epoch": 3.6,
+ "learning_rate": 6.467391304347827e-07,
+ "loss": 0.5877,
+ "step": 1086
+ },
+ {
+ "epoch": 3.61,
+ "learning_rate": 6.41304347826087e-07,
+ "loss": 0.5826,
+ "step": 1087
+ },
+ {
+ "epoch": 3.61,
+ "learning_rate": 6.358695652173913e-07,
+ "loss": 0.5922,
+ "step": 1088
+ },
+ {
+ "epoch": 3.61,
+ "learning_rate": 6.304347826086957e-07,
+ "loss": 0.5796,
+ "step": 1089
+ },
+ {
+ "epoch": 3.62,
+ "learning_rate": 6.25e-07,
+ "loss": 0.5744,
+ "step": 1090
+ },
+ {
+ "epoch": 3.62,
+ "learning_rate": 6.195652173913043e-07,
+ "loss": 0.5875,
+ "step": 1091
+ },
+ {
+ "epoch": 3.62,
+ "learning_rate": 6.141304347826087e-07,
+ "loss": 0.5811,
+ "step": 1092
+ },
+ {
+ "epoch": 3.63,
+ "learning_rate": 6.086956521739131e-07,
+ "loss": 0.5816,
+ "step": 1093
+ },
+ {
+ "epoch": 3.63,
+ "learning_rate": 6.032608695652174e-07,
+ "loss": 0.5956,
+ "step": 1094
+ },
+ {
+ "epoch": 3.63,
+ "learning_rate": 5.978260869565218e-07,
+ "loss": 0.5916,
+ "step": 1095
+ },
+ {
+ "epoch": 3.64,
+ "learning_rate": 5.923913043478261e-07,
+ "loss": 0.5795,
+ "step": 1096
+ },
+ {
+ "epoch": 3.64,
+ "learning_rate": 5.869565217391305e-07,
+ "loss": 0.5878,
+ "step": 1097
+ },
+ {
+ "epoch": 3.64,
+ "learning_rate": 5.815217391304348e-07,
+ "loss": 0.6141,
+ "step": 1098
+ },
+ {
+ "epoch": 3.64,
+ "eval_loss": 0.7818270325660706,
+ "eval_runtime": 17.6197,
+ "eval_samples_per_second": 129.514,
+ "eval_steps_per_second": 5.448,
+ "step": 1098
+ },
+ {
+ "epoch": 3.65,
+ "learning_rate": 5.760869565217391e-07,
+ "loss": 0.5944,
+ "step": 1099
+ },
+ {
+ "epoch": 3.65,
+ "learning_rate": 5.706521739130435e-07,
+ "loss": 0.5901,
+ "step": 1100
+ },
+ {
+ "epoch": 3.65,
+ "learning_rate": 5.652173913043478e-07,
+ "loss": 0.5932,
+ "step": 1101
+ },
+ {
+ "epoch": 3.66,
+ "learning_rate": 5.597826086956522e-07,
+ "loss": 0.5929,
+ "step": 1102
+ },
+ {
+ "epoch": 3.66,
+ "learning_rate": 5.543478260869565e-07,
+ "loss": 0.5674,
+ "step": 1103
+ },
+ {
+ "epoch": 3.66,
+ "learning_rate": 5.489130434782609e-07,
+ "loss": 0.5765,
+ "step": 1104
+ },
+ {
+ "epoch": 3.67,
+ "learning_rate": 5.434782608695653e-07,
+ "loss": 0.5847,
+ "step": 1105
+ },
+ {
+ "epoch": 3.67,
+ "learning_rate": 5.380434782608696e-07,
+ "loss": 0.6048,
+ "step": 1106
+ },
+ {
+ "epoch": 3.67,
+ "learning_rate": 5.32608695652174e-07,
+ "loss": 0.5921,
+ "step": 1107
+ },
+ {
+ "epoch": 3.67,
+ "learning_rate": 5.271739130434782e-07,
+ "loss": 0.5985,
+ "step": 1108
+ },
+ {
+ "epoch": 3.68,
+ "learning_rate": 5.217391304347826e-07,
+ "loss": 0.5727,
+ "step": 1109
+ },
+ {
+ "epoch": 3.68,
+ "learning_rate": 5.163043478260869e-07,
+ "loss": 0.5807,
+ "step": 1110
+ },
+ {
+ "epoch": 3.68,
+ "learning_rate": 5.108695652173913e-07,
+ "loss": 0.5804,
+ "step": 1111
+ },
+ {
+ "epoch": 3.69,
+ "learning_rate": 5.054347826086957e-07,
+ "loss": 0.5789,
+ "step": 1112
+ },
+ {
+ "epoch": 3.69,
+ "learning_rate": 5e-07,
+ "loss": 0.5782,
+ "step": 1113
+ },
+ {
+ "epoch": 3.69,
+ "learning_rate": 4.945652173913044e-07,
+ "loss": 0.5757,
+ "step": 1114
+ },
+ {
+ "epoch": 3.7,
+ "learning_rate": 4.891304347826088e-07,
+ "loss": 0.6007,
+ "step": 1115
+ },
+ {
+ "epoch": 3.7,
+ "learning_rate": 4.83695652173913e-07,
+ "loss": 0.5844,
+ "step": 1116
+ },
+ {
+ "epoch": 3.7,
+ "learning_rate": 4.782608695652173e-07,
+ "loss": 0.5973,
+ "step": 1117
+ },
+ {
+ "epoch": 3.71,
+ "learning_rate": 4.7282608695652177e-07,
+ "loss": 0.5796,
+ "step": 1118
+ },
+ {
+ "epoch": 3.71,
+ "learning_rate": 4.673913043478261e-07,
+ "loss": 0.5806,
+ "step": 1119
+ },
+ {
+ "epoch": 3.71,
+ "learning_rate": 4.619565217391304e-07,
+ "loss": 0.5921,
+ "step": 1120
+ },
+ {
+ "epoch": 3.72,
+ "learning_rate": 4.5652173913043484e-07,
+ "loss": 0.5825,
+ "step": 1121
+ },
+ {
+ "epoch": 3.72,
+ "learning_rate": 4.510869565217391e-07,
+ "loss": 0.5784,
+ "step": 1122
+ },
+ {
+ "epoch": 3.72,
+ "learning_rate": 4.4565217391304346e-07,
+ "loss": 0.5956,
+ "step": 1123
+ },
+ {
+ "epoch": 3.73,
+ "learning_rate": 4.4021739130434785e-07,
+ "loss": 0.585,
+ "step": 1124
+ },
+ {
+ "epoch": 3.73,
+ "learning_rate": 4.347826086956522e-07,
+ "loss": 0.5804,
+ "step": 1125
+ },
+ {
+ "epoch": 3.73,
+ "learning_rate": 4.2934782608695653e-07,
+ "loss": 0.6056,
+ "step": 1126
+ },
+ {
+ "epoch": 3.74,
+ "learning_rate": 4.2391304347826086e-07,
+ "loss": 0.5867,
+ "step": 1127
+ },
+ {
+ "epoch": 3.74,
+ "learning_rate": 4.1847826086956525e-07,
+ "loss": 0.5887,
+ "step": 1128
+ },
+ {
+ "epoch": 3.74,
+ "learning_rate": 4.130434782608696e-07,
+ "loss": 0.565,
+ "step": 1129
+ },
+ {
+ "epoch": 3.75,
+ "learning_rate": 4.076086956521739e-07,
+ "loss": 0.567,
+ "step": 1130
+ },
+ {
+ "epoch": 3.75,
+ "learning_rate": 4.021739130434783e-07,
+ "loss": 0.5955,
+ "step": 1131
+ },
+ {
+ "epoch": 3.75,
+ "learning_rate": 3.967391304347826e-07,
+ "loss": 0.5934,
+ "step": 1132
+ },
+ {
+ "epoch": 3.76,
+ "learning_rate": 3.9130434782608694e-07,
+ "loss": 0.5733,
+ "step": 1133
+ },
+ {
+ "epoch": 3.76,
+ "learning_rate": 3.858695652173913e-07,
+ "loss": 0.6031,
+ "step": 1134
+ },
+ {
+ "epoch": 3.76,
+ "learning_rate": 3.8043478260869567e-07,
+ "loss": 0.5827,
+ "step": 1135
+ },
+ {
+ "epoch": 3.77,
+ "learning_rate": 3.75e-07,
+ "loss": 0.5879,
+ "step": 1136
+ },
+ {
+ "epoch": 3.77,
+ "learning_rate": 3.695652173913044e-07,
+ "loss": 0.5773,
+ "step": 1137
+ },
+ {
+ "epoch": 3.77,
+ "learning_rate": 3.641304347826087e-07,
+ "loss": 0.5823,
+ "step": 1138
+ },
+ {
+ "epoch": 3.78,
+ "learning_rate": 3.586956521739131e-07,
+ "loss": 0.5853,
+ "step": 1139
+ },
+ {
+ "epoch": 3.78,
+ "learning_rate": 3.532608695652174e-07,
+ "loss": 0.5871,
+ "step": 1140
+ },
+ {
+ "epoch": 3.78,
+ "learning_rate": 3.4782608695652175e-07,
+ "loss": 0.595,
+ "step": 1141
+ },
+ {
+ "epoch": 3.79,
+ "learning_rate": 3.423913043478261e-07,
+ "loss": 0.5915,
+ "step": 1142
+ },
+ {
+ "epoch": 3.79,
+ "learning_rate": 3.369565217391304e-07,
+ "loss": 0.581,
+ "step": 1143
+ },
+ {
+ "epoch": 3.79,
+ "learning_rate": 3.315217391304348e-07,
+ "loss": 0.5883,
+ "step": 1144
+ },
+ {
+ "epoch": 3.8,
+ "learning_rate": 3.2608695652173915e-07,
+ "loss": 0.573,
+ "step": 1145
+ },
+ {
+ "epoch": 3.8,
+ "learning_rate": 3.206521739130435e-07,
+ "loss": 0.5737,
+ "step": 1146
+ },
+ {
+ "epoch": 3.8,
+ "learning_rate": 3.1521739130434783e-07,
+ "loss": 0.5638,
+ "step": 1147
+ },
+ {
+ "epoch": 3.81,
+ "learning_rate": 3.0978260869565217e-07,
+ "loss": 0.5953,
+ "step": 1148
+ },
+ {
+ "epoch": 3.81,
+ "learning_rate": 3.0434782608695656e-07,
+ "loss": 0.5984,
+ "step": 1149
+ },
+ {
+ "epoch": 3.81,
+ "learning_rate": 2.989130434782609e-07,
+ "loss": 0.5913,
+ "step": 1150
+ },
+ {
+ "epoch": 3.82,
+ "learning_rate": 2.9347826086956523e-07,
+ "loss": 0.5957,
+ "step": 1151
+ },
+ {
+ "epoch": 3.82,
+ "learning_rate": 2.8804347826086957e-07,
+ "loss": 0.5824,
+ "step": 1152
+ },
+ {
+ "epoch": 3.82,
+ "learning_rate": 2.826086956521739e-07,
+ "loss": 0.5823,
+ "step": 1153
+ },
+ {
+ "epoch": 3.83,
+ "learning_rate": 2.7717391304347825e-07,
+ "loss": 0.5796,
+ "step": 1154
+ },
+ {
+ "epoch": 3.83,
+ "learning_rate": 2.7173913043478264e-07,
+ "loss": 0.5915,
+ "step": 1155
+ },
+ {
+ "epoch": 3.83,
+ "learning_rate": 2.66304347826087e-07,
+ "loss": 0.6044,
+ "step": 1156
+ },
+ {
+ "epoch": 3.84,
+ "learning_rate": 2.608695652173913e-07,
+ "loss": 0.5918,
+ "step": 1157
+ },
+ {
+ "epoch": 3.84,
+ "learning_rate": 2.5543478260869565e-07,
+ "loss": 0.5903,
+ "step": 1158
+ },
+ {
+ "epoch": 3.84,
+ "learning_rate": 2.5e-07,
+ "loss": 0.595,
+ "step": 1159
+ },
+ {
+ "epoch": 3.84,
+ "eval_loss": 0.7819833755493164,
+ "eval_runtime": 17.6271,
+ "eval_samples_per_second": 129.46,
+ "eval_steps_per_second": 5.446,
+ "step": 1159
+ },
+ {
+ "epoch": 3.85,
+ "learning_rate": 2.445652173913044e-07,
+ "loss": 0.5926,
+ "step": 1160
+ },
+ {
+ "epoch": 3.85,
+ "learning_rate": 2.3913043478260866e-07,
+ "loss": 0.5802,
+ "step": 1161
+ },
+ {
+ "epoch": 3.85,
+ "learning_rate": 2.3369565217391305e-07,
+ "loss": 0.5726,
+ "step": 1162
+ },
+ {
+ "epoch": 3.86,
+ "learning_rate": 2.2826086956521742e-07,
+ "loss": 0.592,
+ "step": 1163
+ },
+ {
+ "epoch": 3.86,
+ "learning_rate": 2.2282608695652173e-07,
+ "loss": 0.6058,
+ "step": 1164
+ },
+ {
+ "epoch": 3.86,
+ "learning_rate": 2.173913043478261e-07,
+ "loss": 0.5843,
+ "step": 1165
+ },
+ {
+ "epoch": 3.87,
+ "learning_rate": 2.1195652173913043e-07,
+ "loss": 0.5848,
+ "step": 1166
+ },
+ {
+ "epoch": 3.87,
+ "learning_rate": 2.065217391304348e-07,
+ "loss": 0.603,
+ "step": 1167
+ },
+ {
+ "epoch": 3.87,
+ "learning_rate": 2.0108695652173916e-07,
+ "loss": 0.5937,
+ "step": 1168
+ },
+ {
+ "epoch": 3.88,
+ "learning_rate": 1.9565217391304347e-07,
+ "loss": 0.573,
+ "step": 1169
+ },
+ {
+ "epoch": 3.88,
+ "learning_rate": 1.9021739130434784e-07,
+ "loss": 0.5811,
+ "step": 1170
+ },
+ {
+ "epoch": 3.88,
+ "learning_rate": 1.847826086956522e-07,
+ "loss": 0.5945,
+ "step": 1171
+ },
+ {
+ "epoch": 3.89,
+ "learning_rate": 1.7934782608695654e-07,
+ "loss": 0.5802,
+ "step": 1172
+ },
+ {
+ "epoch": 3.89,
+ "learning_rate": 1.7391304347826088e-07,
+ "loss": 0.5944,
+ "step": 1173
+ },
+ {
+ "epoch": 3.89,
+ "learning_rate": 1.684782608695652e-07,
+ "loss": 0.5901,
+ "step": 1174
+ },
+ {
+ "epoch": 3.9,
+ "learning_rate": 1.6304347826086958e-07,
+ "loss": 0.5786,
+ "step": 1175
+ },
+ {
+ "epoch": 3.9,
+ "learning_rate": 1.5760869565217392e-07,
+ "loss": 0.5843,
+ "step": 1176
+ },
+ {
+ "epoch": 3.9,
+ "learning_rate": 1.5217391304347828e-07,
+ "loss": 0.5771,
+ "step": 1177
+ },
+ {
+ "epoch": 3.91,
+ "learning_rate": 1.4673913043478262e-07,
+ "loss": 0.5921,
+ "step": 1178
+ },
+ {
+ "epoch": 3.91,
+ "learning_rate": 1.4130434782608695e-07,
+ "loss": 0.577,
+ "step": 1179
+ },
+ {
+ "epoch": 3.91,
+ "learning_rate": 1.3586956521739132e-07,
+ "loss": 0.5844,
+ "step": 1180
+ },
+ {
+ "epoch": 3.92,
+ "learning_rate": 1.3043478260869566e-07,
+ "loss": 0.5803,
+ "step": 1181
+ },
+ {
+ "epoch": 3.92,
+ "learning_rate": 1.25e-07,
+ "loss": 0.5698,
+ "step": 1182
+ },
+ {
+ "epoch": 3.92,
+ "learning_rate": 1.1956521739130433e-07,
+ "loss": 0.605,
+ "step": 1183
+ },
+ {
+ "epoch": 3.93,
+ "learning_rate": 1.1413043478260871e-07,
+ "loss": 0.5873,
+ "step": 1184
+ },
+ {
+ "epoch": 3.93,
+ "learning_rate": 1.0869565217391305e-07,
+ "loss": 0.5849,
+ "step": 1185
+ },
+ {
+ "epoch": 3.93,
+ "learning_rate": 1.032608695652174e-07,
+ "loss": 0.5926,
+ "step": 1186
+ },
+ {
+ "epoch": 3.94,
+ "learning_rate": 9.782608695652174e-08,
+ "loss": 0.5804,
+ "step": 1187
+ },
+ {
+ "epoch": 3.94,
+ "learning_rate": 9.23913043478261e-08,
+ "loss": 0.5786,
+ "step": 1188
+ },
+ {
+ "epoch": 3.94,
+ "learning_rate": 8.695652173913044e-08,
+ "loss": 0.582,
+ "step": 1189
+ },
+ {
+ "epoch": 3.95,
+ "learning_rate": 8.152173913043479e-08,
+ "loss": 0.578,
+ "step": 1190
+ },
+ {
+ "epoch": 3.95,
+ "learning_rate": 7.608695652173914e-08,
+ "loss": 0.583,
+ "step": 1191
+ },
+ {
+ "epoch": 3.95,
+ "learning_rate": 7.065217391304348e-08,
+ "loss": 0.5603,
+ "step": 1192
+ },
+ {
+ "epoch": 3.96,
+ "learning_rate": 6.521739130434783e-08,
+ "loss": 0.5908,
+ "step": 1193
+ },
+ {
+ "epoch": 3.96,
+ "learning_rate": 5.978260869565217e-08,
+ "loss": 0.5871,
+ "step": 1194
+ },
+ {
+ "epoch": 3.96,
+ "learning_rate": 5.4347826086956524e-08,
+ "loss": 0.5958,
+ "step": 1195
+ },
+ {
+ "epoch": 3.97,
+ "learning_rate": 4.891304347826087e-08,
+ "loss": 0.588,
+ "step": 1196
+ },
+ {
+ "epoch": 3.97,
+ "learning_rate": 4.347826086956522e-08,
+ "loss": 0.5838,
+ "step": 1197
+ },
+ {
+ "epoch": 3.97,
+ "learning_rate": 3.804347826086957e-08,
+ "loss": 0.5835,
+ "step": 1198
+ },
+ {
+ "epoch": 3.98,
+ "learning_rate": 3.2608695652173914e-08,
+ "loss": 0.5801,
+ "step": 1199
+ },
+ {
+ "epoch": 3.98,
+ "learning_rate": 2.7173913043478262e-08,
+ "loss": 0.5959,
+ "step": 1200
+ },
+ {
+ "epoch": 3.98,
+ "learning_rate": 2.173913043478261e-08,
+ "loss": 0.5649,
+ "step": 1201
+ },
+ {
+ "epoch": 3.99,
+ "learning_rate": 1.6304347826086957e-08,
+ "loss": 0.5745,
+ "step": 1202
+ },
+ {
+ "epoch": 3.99,
+ "learning_rate": 1.0869565217391305e-08,
+ "loss": 0.5876,
+ "step": 1203
+ },
+ {
+ "epoch": 3.99,
+ "learning_rate": 5.434782608695652e-09,
+ "loss": 0.5869,
+ "step": 1204
+ }
+ ],
+ "logging_steps": 1,
+ "max_steps": 1204,
+ "num_train_epochs": 4,
+ "save_steps": 500,
+ "total_flos": 4.039702036808624e+19,
+ "trial_name": null,
+ "trial_params": null
+}
diff --git a/training_args.bin b/training_args.bin
new file mode 100644
index 0000000..29c8275
--- /dev/null
+++ b/training_args.bin
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b9941673462c040441acbc7e7237502c42f1e2b3f9a3f702a3fa1ecfa5ada1a9
+size 129
diff --git a/zero_to_fp32.py b/zero_to_fp32.py
new file mode 100644
index 0000000..c98caae
--- /dev/null
+++ b/zero_to_fp32.py
@@ -0,0 +1,587 @@
+#!/usr/bin/env python
+
+# Copyright (c) Microsoft Corporation.
+# SPDX-License-Identifier: Apache-2.0
+
+# DeepSpeed Team
+
+# This script extracts fp32 consolidated weights from ZeRO stage 1, 2 and 3 DeepSpeed checkpoints. It gets
+# copied into the top level checkpoint dir, so the user can easily do the conversion at any point in
+# the future. Once extracted, the weights don't require DeepSpeed and can be used in any
+# application.
+#
+# example: python zero_to_fp32.py . pytorch_model.bin
+
+import argparse
+import torch
+import glob
+import math
+import os
+import re
+from collections import OrderedDict
+from dataclasses import dataclass
+
+# while this script doesn't use deepspeed to recover data, the checkpoints are pickled with
+# DeepSpeed data structures, so deepspeed must be installed in the current python environment.
+from deepspeed.utils import logger
+from deepspeed.checkpoint.constants import (DS_VERSION, OPTIMIZER_STATE_DICT, SINGLE_PARTITION_OF_FP32_GROUPS,
+ FP32_FLAT_GROUPS, ZERO_STAGE, PARTITION_COUNT, PARAM_SHAPES, BUFFER_NAMES,
+ FROZEN_PARAM_SHAPES, FROZEN_PARAM_FRAGMENTS)
+
+
+@dataclass
+class zero_model_state:
+    buffers: dict
+    param_shapes: dict
+    shared_params: list
+    ds_version: int
+    frozen_param_shapes: dict
+    frozen_param_fragments: dict
+
+
+debug = 0
+
+# load to cpu
+device = torch.device('cpu')
+
+
+def atoi(text):
+ return int(text) if text.isdigit() else text
+
+
+def natural_keys(text):
+ '''
+ alist.sort(key=natural_keys) sorts in human order
+ http://nedbatchelder.com/blog/200712/human_sorting.html
+ (See Toothy's implementation in the comments)
+ '''
+ return [atoi(c) for c in re.split(r'(\d+)', text)]
+
+
+def get_model_state_file(checkpoint_dir, zero_stage):
+ if not os.path.isdir(checkpoint_dir):
+ raise FileNotFoundError(f"Directory '{checkpoint_dir}' doesn't exist")
+
+ # there should be only one file
+ if zero_stage <= 2:
+ file = os.path.join(checkpoint_dir, "mp_rank_00_model_states.pt")
+ elif zero_stage == 3:
+ file = os.path.join(checkpoint_dir, "zero_pp_rank_0_mp_rank_00_model_states.pt")
+
+ if not os.path.exists(file):
+ raise FileNotFoundError(f"can't find model states file at '{file}'")
+
+ return file
+
+
+def get_checkpoint_files(checkpoint_dir, glob_pattern):
+ # XXX: need to test that this simple glob rule works for multi-node setup too
+ ckpt_files = sorted(glob.glob(os.path.join(checkpoint_dir, glob_pattern)), key=natural_keys)
+
+ if len(ckpt_files) == 0:
+ raise FileNotFoundError(f"can't find {glob_pattern} files in directory '{checkpoint_dir}'")
+
+ return ckpt_files
+
+
+def get_optim_files(checkpoint_dir):
+ return get_checkpoint_files(checkpoint_dir, "*_optim_states.pt")
+
+
+def get_model_state_files(checkpoint_dir):
+ return get_checkpoint_files(checkpoint_dir, "*_model_states.pt")
+
+
+def parse_model_states(files):
+ zero_model_states = []
+ for file in files:
+ state_dict = torch.load(file, map_location=device)
+
+ if BUFFER_NAMES not in state_dict:
+ raise ValueError(f"{file} is not a model state checkpoint")
+ buffer_names = state_dict[BUFFER_NAMES]
+ if debug:
+ print("Found buffers:", buffer_names)
+
+ # recover just the buffers while restoring them to fp32 if they were saved in fp16
+ buffers = {k: v.float() for k, v in state_dict["module"].items() if k in buffer_names}
+ param_shapes = state_dict[PARAM_SHAPES]
+
+ # collect parameters that are included in param_shapes
+ param_names = []
+ for s in param_shapes:
+ for name in s.keys():
+ param_names.append(name)
+
+ # update with frozen parameters
+ frozen_param_shapes = state_dict.get(FROZEN_PARAM_SHAPES, None)
+ if frozen_param_shapes is not None:
+ if debug:
+ print(f"Found frozen_param_shapes: {frozen_param_shapes}")
+ param_names += list(frozen_param_shapes.keys())
+
+ # handle shared params
+ shared_params = [[k, v] for k, v in state_dict["shared_params"].items()]
+
+ ds_version = state_dict.get(DS_VERSION, None)
+
+ frozen_param_fragments = state_dict.get(FROZEN_PARAM_FRAGMENTS, None)
+
+ z_model_state = zero_model_state(buffers=buffers,
+ param_shapes=param_shapes,
+ shared_params=shared_params,
+ ds_version=ds_version,
+ frozen_param_shapes=frozen_param_shapes,
+ frozen_param_fragments=frozen_param_fragments)
+ zero_model_states.append(z_model_state)
+
+ return zero_model_states
+
+
+def parse_optim_states(files, ds_checkpoint_dir):
+
+ total_files = len(files)
+ state_dicts = []
+ for f in files:
+ state_dict = torch.load(f, map_location=device)
+        # immediately discard the potentially huge optimizer states, since we only care about the
+        # fp32 master weights; also handle the case where they were already removed by another helper script
+ state_dict["optimizer_state_dict"].pop("optimizer_state_dict", None)
+ state_dicts.append(state_dict)
+
+    if ZERO_STAGE not in state_dicts[0][OPTIMIZER_STATE_DICT]:
+ raise ValueError(f"{files[0]} is not a zero checkpoint")
+ zero_stage = state_dicts[0][OPTIMIZER_STATE_DICT][ZERO_STAGE]
+ world_size = state_dicts[0][OPTIMIZER_STATE_DICT][PARTITION_COUNT]
+
+ # For ZeRO-2 each param group can have different partition_count as data parallelism for expert
+ # parameters can be different from data parallelism for non-expert parameters. So we can just
+ # use the max of the partition_count to get the dp world_size.
+
+    if isinstance(world_size, list):
+ world_size = max(world_size)
+
+ if world_size != total_files:
+ raise ValueError(
+            f"Expected {world_size} '*_optim_states.pt' files under '{ds_checkpoint_dir}' but found {total_files}. "
+ "Possibly due to an overwrite of an old checkpoint, or a checkpoint didn't get saved by one or more processes."
+ )
+
+ # the groups are named differently in each stage
+ if zero_stage <= 2:
+ fp32_groups_key = SINGLE_PARTITION_OF_FP32_GROUPS
+ elif zero_stage == 3:
+ fp32_groups_key = FP32_FLAT_GROUPS
+ else:
+ raise ValueError(f"unknown zero stage {zero_stage}")
+
+ if zero_stage <= 2:
+ fp32_flat_groups = [state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key] for i in range(len(state_dicts))]
+ elif zero_stage == 3:
+ # if there is more than one param group, there will be multiple flattened tensors - one
+ # flattened tensor per group - for simplicity merge them into a single tensor
+ #
+ # XXX: could make the script more memory efficient for when there are multiple groups - it
+ # will require matching the sub-lists of param_shapes for each param group flattened tensor
+
+ fp32_flat_groups = [
+ torch.cat(state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key], 0) for i in range(len(state_dicts))
+ ]
+
+ return zero_stage, world_size, fp32_flat_groups
+
+
+def _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir):
+ """
+ Returns fp32 state_dict reconstructed from ds checkpoint
+
+ Args:
+ - ``ds_checkpoint_dir``: path to the deepspeed checkpoint folder (where the optimizer files are)
+
+ """
+ print(f"Processing zero checkpoint '{ds_checkpoint_dir}'")
+
+ optim_files = get_optim_files(ds_checkpoint_dir)
+ zero_stage, world_size, fp32_flat_groups = parse_optim_states(optim_files, ds_checkpoint_dir)
+ print(f"Detected checkpoint of type zero stage {zero_stage}, world_size: {world_size}")
+
+ model_files = get_model_state_files(ds_checkpoint_dir)
+
+ zero_model_states = parse_model_states(model_files)
+ print(f'Parsing checkpoint created by deepspeed=={zero_model_states[0].ds_version}')
+
+ if zero_stage <= 2:
+ return _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states)
+ elif zero_stage == 3:
+ return _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states)
+
+
+def _zero2_merge_frozen_params(state_dict, zero_model_states):
+ if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
+ return
+
+ frozen_param_shapes = zero_model_states[0].frozen_param_shapes
+ frozen_param_fragments = zero_model_states[0].frozen_param_fragments
+
+ if debug:
+ num_elem = sum(s.numel() for s in frozen_param_shapes.values())
+ print(f'rank 0: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
+
+ wanted_params = len(frozen_param_shapes)
+ wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
+ avail_numel = sum([p.numel() for p in frozen_param_fragments.values()])
+ print(f'Frozen params: Have {avail_numel} numels to process.')
+ print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
+
+ total_params = 0
+ total_numel = 0
+ for name, shape in frozen_param_shapes.items():
+ total_params += 1
+ unpartitioned_numel = shape.numel()
+ total_numel += unpartitioned_numel
+
+ state_dict[name] = frozen_param_fragments[name]
+
+ if debug:
+ print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
+
+ print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
+
+
+def _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
+ param_shapes = zero_model_states[0].param_shapes
+
+ # Reconstruction protocol:
+ #
+ # XXX: document this
+
+ if debug:
+ for i in range(world_size):
+ for j in range(len(fp32_flat_groups[0])):
+ print(f"{FP32_FLAT_GROUPS}[{i}][{j}].shape={fp32_flat_groups[i][j].shape}")
+
+ # XXX: memory usage doubles here (zero2)
+ num_param_groups = len(fp32_flat_groups[0])
+ merged_single_partition_of_fp32_groups = []
+ for i in range(num_param_groups):
+ merged_partitions = [sd[i] for sd in fp32_flat_groups]
+ full_single_fp32_vector = torch.cat(merged_partitions, 0)
+ merged_single_partition_of_fp32_groups.append(full_single_fp32_vector)
+ avail_numel = sum(
+ [full_single_fp32_vector.numel() for full_single_fp32_vector in merged_single_partition_of_fp32_groups])
+
+ if debug:
+ wanted_params = sum([len(shapes) for shapes in param_shapes])
+ wanted_numel = sum([sum(shape.numel() for shape in shapes.values()) for shapes in param_shapes])
+ # not asserting if there is a mismatch due to possible padding
+ print(f"Have {avail_numel} numels to process.")
+ print(f"Need {wanted_numel} numels in {wanted_params} params.")
+
+ # params
+ # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
+ # out-of-core computing solution
+ total_numel = 0
+ total_params = 0
+ for shapes, full_single_fp32_vector in zip(param_shapes, merged_single_partition_of_fp32_groups):
+ offset = 0
+ avail_numel = full_single_fp32_vector.numel()
+ for name, shape in shapes.items():
+
+ unpartitioned_numel = shape.numel()
+ total_numel += unpartitioned_numel
+ total_params += 1
+
+ if debug:
+ print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
+ state_dict[name] = full_single_fp32_vector.narrow(0, offset, unpartitioned_numel).view(shape)
+ offset += unpartitioned_numel
+
+ # Z2 started to align to 2*world_size to improve nccl performance. Therefore both offset and
+ # avail_numel can differ by anywhere between 0..2*world_size. Due to two unrelated complex
+ # paddings performed in the code it's almost impossible to predict the exact numbers w/o the
+ # live optimizer object, so we are checking that the numbers are within the right range
+ align_to = 2 * world_size
+
+ def zero2_align(x):
+ return align_to * math.ceil(x / align_to)
+
+ if debug:
+ print(f"original offset={offset}, avail_numel={avail_numel}")
+
+ offset = zero2_align(offset)
+ avail_numel = zero2_align(avail_numel)
+
+ if debug:
+ print(f"aligned offset={offset}, avail_numel={avail_numel}")
+
+ # Sanity check
+ if offset != avail_numel:
+ raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
+
+ print(f"Reconstructed fp32 state dict with {total_params} params {total_numel} elements")
+
+
+def _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states):
+ state_dict = OrderedDict()
+
+ # buffers
+ buffers = zero_model_states[0].buffers
+ state_dict.update(buffers)
+ if debug:
+ print(f"added {len(buffers)} buffers")
+
+ _zero2_merge_frozen_params(state_dict, zero_model_states)
+
+ _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
+
+ # recover shared parameters
+ for pair in zero_model_states[0].shared_params:
+ if pair[1] in state_dict:
+ state_dict[pair[0]] = state_dict[pair[1]]
+
+ return state_dict
+
+
+def zero3_partitioned_param_info(unpartitioned_numel, world_size):
+ remainder = unpartitioned_numel % world_size
+ padding_numel = (world_size - remainder) if remainder else 0
+ partitioned_numel = math.ceil(unpartitioned_numel / world_size)
+ return partitioned_numel, padding_numel
+
+
+def _zero3_merge_frozen_params(state_dict, world_size, zero_model_states):
+ if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
+ return
+
+ if debug:
+ for i in range(world_size):
+ num_elem = sum(s.numel() for s in zero_model_states[i].frozen_param_fragments.values())
+ print(f'rank {i}: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
+
+ frozen_param_shapes = zero_model_states[0].frozen_param_shapes
+ wanted_params = len(frozen_param_shapes)
+ wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
+ avail_numel = sum([p.numel() for p in zero_model_states[0].frozen_param_fragments.values()]) * world_size
+ print(f'Frozen params: Have {avail_numel} numels to process.')
+ print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
+
+ total_params = 0
+ total_numel = 0
+ for name, shape in zero_model_states[0].frozen_param_shapes.items():
+ total_params += 1
+ unpartitioned_numel = shape.numel()
+ total_numel += unpartitioned_numel
+
+ param_frags = tuple(model_state.frozen_param_fragments[name] for model_state in zero_model_states)
+ state_dict[name] = torch.cat(param_frags, 0).narrow(0, 0, unpartitioned_numel).view(shape)
+
+ partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
+
+ if debug:
+ print(
+ f"Frozen params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
+ )
+
+ print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
+
+
+def _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
+ param_shapes = zero_model_states[0].param_shapes
+ avail_numel = fp32_flat_groups[0].numel() * world_size
+ # Reconstruction protocol: For zero3 we need to zip the partitions together at boundary of each
+ # param, re-consolidating each param, while dealing with padding if any
+
+ # merge list of dicts, preserving order
+ param_shapes = {k: v for d in param_shapes for k, v in d.items()}
+
+ if debug:
+ for i in range(world_size):
+ print(f"{FP32_FLAT_GROUPS}[{i}].shape={fp32_flat_groups[i].shape}")
+
+ wanted_params = len(param_shapes)
+ wanted_numel = sum(shape.numel() for shape in param_shapes.values())
+ # not asserting if there is a mismatch due to possible padding
+ avail_numel = fp32_flat_groups[0].numel() * world_size
+ print(f"Trainable params: Have {avail_numel} numels to process.")
+ print(f"Trainable params: Need {wanted_numel} numels in {wanted_params} params.")
+
+ # params
+ # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
+ # out-of-core computing solution
+ offset = 0
+ total_numel = 0
+ total_params = 0
+ for name, shape in param_shapes.items():
+
+ unpartitioned_numel = shape.numel()
+ total_numel += unpartitioned_numel
+ total_params += 1
+
+ partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
+
+ if debug:
+ print(
+ f"Trainable params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
+ )
+
+ # XXX: memory usage doubles here
+ state_dict[name] = torch.cat(
+ tuple(fp32_flat_groups[i].narrow(0, offset, partitioned_numel) for i in range(world_size)),
+ 0).narrow(0, 0, unpartitioned_numel).view(shape)
+ offset += partitioned_numel
+
+ offset *= world_size
+
+ # Sanity check
+ if offset != avail_numel:
+ raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
+
+ print(f"Reconstructed Trainable fp32 state dict with {total_params} params {total_numel} elements")
+
+
+def _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states):
+ state_dict = OrderedDict()
+
+ # buffers
+ buffers = zero_model_states[0].buffers
+ state_dict.update(buffers)
+ if debug:
+ print(f"added {len(buffers)} buffers")
+
+ _zero3_merge_frozen_params(state_dict, world_size, zero_model_states)
+
+ _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
+
+ # recover shared parameters
+ for pair in zero_model_states[0].shared_params:
+ if pair[1] in state_dict:
+ state_dict[pair[0]] = state_dict[pair[1]]
+
+ return state_dict
+
+
+def get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag=None):
+ """
+    Convert a ZeRO 2 or 3 checkpoint into a single fp32 consolidated state_dict that can be loaded with
+ ``load_state_dict()`` and used for training without DeepSpeed or shared with others, for example
+ via a model hub.
+
+ Args:
+ - ``checkpoint_dir``: path to the desired checkpoint folder
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in 'latest' file. e.g., ``global_step14``
+
+ Returns:
+ - pytorch ``state_dict``
+
+ Note: this approach may not work if your application doesn't have sufficient free CPU memory and
+ you may need to use the offline approach using the ``zero_to_fp32.py`` script that is saved with
+ the checkpoint.
+
+ A typical usage might be ::
+
+ from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
+ # do the training and checkpoint saving
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
+ model = model.cpu() # move to cpu
+ model.load_state_dict(state_dict)
+ # submit to model hub or save the model to share with others
+
+ In this example the ``model`` will no longer be usable in the deepspeed context of the same
+ application. i.e. you will need to re-initialize the deepspeed engine, since
+ ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
+
+ If you want it all done for you, use ``load_state_dict_from_zero_checkpoint`` instead.
+
+ """
+ if tag is None:
+ latest_path = os.path.join(checkpoint_dir, 'latest')
+ if os.path.isfile(latest_path):
+ with open(latest_path, 'r') as fd:
+ tag = fd.read().strip()
+ else:
+ raise ValueError(f"Unable to find 'latest' file at {latest_path}")
+
+ ds_checkpoint_dir = os.path.join(checkpoint_dir, tag)
+
+ if not os.path.isdir(ds_checkpoint_dir):
+ raise FileNotFoundError(f"Directory '{ds_checkpoint_dir}' doesn't exist")
+
+ return _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir)
+
+
+def convert_zero_checkpoint_to_fp32_state_dict(checkpoint_dir, output_file, tag=None):
+ """
+    Convert a ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict`` file that can be
+ loaded with ``torch.load(file)`` + ``load_state_dict()`` and used for training without DeepSpeed.
+
+ Args:
+ - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
+ - ``output_file``: path to the pytorch fp32 state_dict output file (e.g. path/pytorch_model.bin)
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
+ """
+
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
+ print(f"Saving fp32 state dict to {output_file}")
+ torch.save(state_dict, output_file)
+
+
+def load_state_dict_from_zero_checkpoint(model, checkpoint_dir, tag=None):
+ """
+    1. Move the provided model to cpu
+    2. Convert a ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict``
+    3. Load it into the provided model
+
+ Args:
+ - ``model``: the model object to update
+ - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
+
+ Returns:
+    - ``model``: modified model
+
+ Make sure you have plenty of CPU memory available before you call this function. If you don't
+ have enough use the ``zero_to_fp32.py`` utility to do the conversion. You will find it
+ conveniently placed for you in the checkpoint folder.
+
+ A typical usage might be ::
+
+ from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
+ model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
+ # submit to model hub or save the model to share with others
+
+    Note that once this has been run, the ``model`` will no longer be usable in the deepspeed context
+ of the same application. i.e. you will need to re-initialize the deepspeed engine, since
+ ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
+
+ """
+    logger.info("Extracting fp32 weights")
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
+
+    logger.info("Overwriting model with fp32 weights")
+ model = model.cpu()
+ model.load_state_dict(state_dict, strict=False)
+
+ return model
+
+
+if __name__ == "__main__":
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("checkpoint_dir",
+ type=str,
+ help="path to the desired checkpoint folder, e.g., path/checkpoint-12")
+ parser.add_argument(
+ "output_file",
+ type=str,
+ help="path to the pytorch fp32 state_dict output file (e.g. path/checkpoint-12/pytorch_model.bin)")
+ parser.add_argument("-t",
+ "--tag",
+ type=str,
+ default=None,
+ help="checkpoint tag used as a unique identifier for checkpoint. e.g., global_step1")
+ parser.add_argument("-d", "--debug", action='store_true', help="enable debug")
+ args = parser.parse_args()
+
+ debug = args.debug
+
+ convert_zero_checkpoint_to_fp32_state_dict(args.checkpoint_dir, args.output_file, tag=args.tag)
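The key piece of ZeRO-3 reconstruction above is the per-rank partition math in `zero3_partitioned_param_info`: each rank holds a ceil-divided shard of a flattened parameter, and the last shard is zero-padded so all shards are the same size. A minimal standalone sketch of that arithmetic (mirroring the function in the script, no DeepSpeed required):

```python
import math


def partitioned_param_info(unpartitioned_numel: int, world_size: int):
    """Each of world_size ranks stores ceil(n / world_size) elements of a
    flattened parameter; padding_numel is the zero-fill added to the last
    shard so every rank's shard has the same length."""
    remainder = unpartitioned_numel % world_size
    padding_numel = (world_size - remainder) if remainder else 0
    partitioned_numel = math.ceil(unpartitioned_numel / world_size)
    return partitioned_numel, padding_numel


# a 10-element tensor sharded across 4 ranks: 3 elements per rank, 2 of
# which (on the last rank) are padding; 8 across 4 divides evenly
print(partitioned_param_info(10, 4))  # (3, 2)
print(partitioned_param_info(8, 4))   # (2, 0)
```

This is why the trainable-param loop narrows each concatenated shard back to `unpartitioned_numel` before the `.view(shape)`: the trailing padding elements must be dropped before reshaping.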