Initialize project; model provided by the ModelHub XC community

Model: Xunzillm4cc/Xunzi-Qwen1.5-4B
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-10 17:39:13 +08:00
commit 8d00bdf806
20 changed files with 304764 additions and 0 deletions

36
.gitattributes vendored Normal file

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
model-00002-of-00002.safetensors filter=lfs diff=lfs merge=lfs -text
model-00001-of-00002.safetensors filter=lfs diff=lfs merge=lfs -text
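These attribute rules route every matching path through Git LFS rather than storing it in the Git object database. As a rough illustration of how the matching behaves (Python's `fnmatch` approximates, but does not exactly reproduce, gitattributes glob semantics; the helper name is ours):

```python
from fnmatch import fnmatch

# A subset of the suffix patterns from the .gitattributes above, for illustration.
LFS_PATTERNS = ["*.safetensors", "*.bin", "*.ckpt", "*.tfevents*", "*.tar.*"]

def routed_through_lfs(path: str) -> bool:
    """True if the path matches one of the LFS filter rules (approximate semantics)."""
    return any(fnmatch(path, pat) for pat in LFS_PATTERNS)

print(routed_through_lfs("model-00001-of-00002.safetensors"))  # True
print(routed_through_lfs("config.json"))                       # False
```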

34
README.md Normal file

@@ -0,0 +1,34 @@
# The Xunzi Series of Large Language Models
With the rapid development of technology, artificial intelligence has reached into every field. In response to the call to revitalize and utilize ancient texts, and to promote the deep integration of large language models with ancient-text processing, this project takes the intelligent study of ancient texts as its goal and releases a series of large language models for the ancient-text domain: the Xunzi ancient-text large language models. Xunzi was not only a great materialist thinker of China's pre-Qin period but also a master of prose, as well as a pioneer and founder in the exposition of linguistic theory. The Xunzi series is designed specifically for the intelligent processing of ancient texts; its release will drive new progress in the study and preservation of ancient books and raise the efficiency and quality of transmitting traditional Chinese culture.
This open-source release of the Xunzi series has two parts: the base model [**XunziALLM**](https://modelscope.cn/models/Xunzillm4cc/Xunzi-Qwen) and the chat model [**XunziChat**](https://modelscope.cn/models/Xunzillm4cc/Xunzi-Qwen-Chat). The models are invoked in the same way as Alibaba Cloud's Qwen series of large models.
## Highlights of the Xunzi Series
* Intelligent subject indexing: the Xunzi models have strong indexing capabilities for ancient documents and can produce high-quality subject indexing of their contents, helping researchers quickly grasp a text's theme.
* Information extraction: the models automatically extract key information from ancient texts, such as people, events, and places, greatly reducing the time researchers spend organizing material.
* Poetry generation: given a topic or keywords, the models generate classical poems that satisfy grammatical and metrical requirements, offering inspiration to poetry enthusiasts.
* High-quality translation: for ancient documents that are difficult to understand, the models provide high-quality translation, helping researchers better grasp the original meaning.
* Reading comprehension: the models analyze and explain a given classical-Chinese passage, enabling automatic reading of ancient texts.
* Lexical analysis: the models perform automatic word segmentation and part-of-speech tagging on ancient texts, effectively improving the productivity of linguists.
* Automatic punctuation: the models quickly segment and punctuate ancient texts, improving the reading experience for researchers and amateur enthusiasts alike.
Because the base model is also released, users can fine-tune it on their own local training corpora to obtain even better performance on downstream ancient-text processing tasks.
## Disclaimer
The enormous parameter count of large language models also brings extra randomness. Although we did our best to ensure the compliance of the training data, the complexity of the data and the model means some unavoidable issues may remain. We therefore accept no liability for any problems arising from the use of this open-source model, including but not limited to data-security issues, public-opinion risks, or any risks and problems caused by the model being misled, misused, disseminated, or otherwise improperly exploited.
In addition, in accordance with the [Interim Measures for the Management of Generative Artificial Intelligence Services](http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm) jointly issued by seven departments including the Cyberspace Administration of China, please comply with the relevant laws and regulations when training or using this model and other generative models, and help build a harmonious, healthy, and sustainable generative-AI community.
If you encounter any problems while using the model, feel free to contact us at letz999@163.com.
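The README states that invocation matches Alibaba Cloud's Qwen series. A minimal sketch of plain text continuation with `transformers` (the local directory name assumes the `git clone` command in this README; `transformers` 4.37, the version pinned in `config.json`, or newer is assumed; this is our illustration, not an official usage script):

```python
def generate(prompt: str, model_dir: str = "./Xunzi-Qwen1.5-4B", max_new_tokens: int = 256) -> str:
    """Continue `prompt` with the locally cloned checkpoint (base-model style usage)."""
    # Imported lazily so the function can be defined without the weights present.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype="auto", device_map="auto")

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Return only the newly generated continuation, not the echoed prompt.
    return tokenizer.decode(output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True)
```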
Download the model:

```bash
git clone https://www.modelscope.cn/Xunzillm4cc/Xunzi-Qwen1.5-4B.git
```

5
added_tokens.json Normal file

@@ -0,0 +1,5 @@
{
"<|endoftext|>": 151643,
"<|im_end|>": 151645,
"<|im_start|>": 151644
}
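`added_tokens.json` registers the ChatML control tokens used by the Qwen tokenizer family. A small pure-Python illustration of how a ChatML prompt is assembled from them (no model required; the helper name is ours):

```python
# Special-token string -> token id, as listed in added_tokens.json above.
ADDED_TOKENS = {
    "<|endoftext|>": 151643,
    "<|im_end|>": 151645,
    "<|im_start|>": 151644,
}

def chatml_turn(role: str, content: str) -> str:
    """Format one ChatML message block, in the style used by Qwen chat models."""
    return f"<|im_start|>{role}\n{content}<|im_end|>\n"

prompt = (
    chatml_turn("system", "You are a helpful assistant.")
    + chatml_turn("user", "请为《劝学》做主题标引。")
    + "<|im_start|>assistant\n"  # generation continues after this header
)
```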

7
all_results.json Normal file

@@ -0,0 +1,7 @@
{
"epoch": 1.0,
"train_loss": 2.982507556789895,
"train_runtime": 289179.519,
"train_samples_per_second": 5.604,
"train_steps_per_second": 0.005
}
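The throughput figures above imply the overall scale of the training run. A quick back-of-the-envelope check (approximate, since the JSON values are rounded):

```python
# Training statistics from all_results.json above.
stats = {
    "train_runtime": 289179.519,          # seconds, roughly 80.3 hours
    "train_samples_per_second": 5.604,
    "train_steps_per_second": 0.005,
}

approx_samples = stats["train_samples_per_second"] * stats["train_runtime"]  # ~1.62M samples seen
approx_steps = stats["train_steps_per_second"] * stats["train_runtime"]      # ~1446 optimizer steps
approx_batch = stats["train_samples_per_second"] / stats["train_steps_per_second"]  # ~1121 samples/step
hours = stats["train_runtime"] / 3600
```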

28
config.json Normal file

@@ -0,0 +1,28 @@
{
"_name_or_path": "/model_output/lc_lm_data/Qwen1.5-14B/qwen/Qwen1___5-4B/",
"architectures": [
"Qwen2ForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151643,
"hidden_act": "silu",
"hidden_size": 2560,
"initializer_range": 0.02,
"intermediate_size": 6912,
"max_position_embeddings": 32768,
"max_window_layers": 21,
"model_type": "qwen2",
"num_attention_heads": 20,
"num_hidden_layers": 40,
"num_key_value_heads": 20,
"rms_norm_eps": 1e-06,
"rope_theta": 5000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.37.2",
"use_cache": false,
"use_sliding_window": false,
"vocab_size": 151936
}
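The shapes in `config.json` pin down the checkpoint's parameter count. As a sanity check (our own arithmetic, assuming the standard Qwen2 layout: biased Q/K/V projections, no O-projection or MLP biases, untied `lm_head`), the predicted fp16 byte count matches the `total_size` recorded in the safetensors shard index exactly:

```python
# Shapes from config.json above.
hidden, inter, layers, vocab = 2560, 6912, 40, 151936

attn = 4 * hidden * hidden + 3 * hidden   # q/k/v/o weights + q/k/v biases
mlp = 3 * hidden * inter                  # gate, up, down projections
norms = 2 * hidden                        # input + post-attention RMSNorm
per_layer = attn + mlp + norms

embeddings = 2 * vocab * hidden           # embed_tokens + lm_head (tie_word_embeddings: false)
total_params = layers * per_layer + embeddings + hidden  # + final model.norm

print(total_params)      # 3950369280 -> the "4B" in the model name
print(total_params * 2)  # 7900738560 fp16 bytes, matching the index's total_size
```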

1
configuration.json Normal file

@@ -0,0 +1 @@
{"task":"text-generation"}

6
generation_config.json Normal file

@@ -0,0 +1,6 @@
{
"bos_token_id": 151643,
"eos_token_id": 151643,
"max_new_tokens": 2048,
"transformers_version": "4.37.2"
}

151292
merges.txt Normal file

File diff suppressed because it is too large

3
model-00001-of-00002.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4a7261aa96889c3580a7ea262a545ac2fe1a73941133f4cfd2ccfdf4f563355a
size 4989973136

3
model-00002-of-00002.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8368495f762c04861ac2d1762b0a0282f8d6f23edc173fd72cc13d662206bf29
size 2910820360

490
model.safetensors.index.json Normal file

@@ -0,0 +1,490 @@
{
"metadata": {
"total_size": 7900738560
},
"weight_map": {
"lm_head.weight": "model-00002-of-00002.safetensors",
"model.embed_tokens.weight": "model-00001-of-00002.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.22.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.23.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.24.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.25.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.26.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.26.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.27.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.30.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.36.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.36.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.36.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.36.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.36.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.36.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.36.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.36.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.36.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.36.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.36.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.36.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.37.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.37.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.37.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.37.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.37.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.37.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.37.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.37.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.37.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.37.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.37.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.37.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.38.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.38.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.38.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.38.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.38.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.38.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.38.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.38.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.38.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.38.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.38.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.38.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.39.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.39.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.39.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.39.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.39.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.39.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.39.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.39.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.39.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.39.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.39.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
"model.layers.39.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.norm.weight": "model-00002-of-00002.safetensors"
}
}
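The `weight_map` above tells a loader which shard file stores each tensor, so only the needed file has to be opened. A minimal sketch of querying it (the excerpt and the `shard_for` helper are illustrative, not part of any library API):

```python
import json
from collections import defaultdict

# A two-entry excerpt of the model.safetensors.index.json shown above.
INDEX = json.loads("""
{
  "weight_map": {
    "model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.norm.weight": "model-00002-of-00002.safetensors"
  }
}
""")

def shard_for(tensor_name: str) -> str:
    # Look up which shard file stores a given tensor.
    return INDEX["weight_map"][tensor_name]

# Group tensor names by shard so each file is opened only once when loading.
shards = defaultdict(list)
for name, shard in INDEX["weight_map"].items():
    shards[shard].append(name)
```

In practice `transformers.AutoModelForCausalLM.from_pretrained` consumes this index automatically; the sketch only shows what the mapping encodes.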

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f552622583d640fe5e4e13979af01b5c926d58557be9d0787ba9cfaaa15288a4
size 29859
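The three-line stanza above is a Git LFS pointer: the real blob lives in LFS storage and is fetched by its sha256 oid, while the repository only tracks this small text stub. A minimal parser sketch (the `parse_lfs_pointer` name is illustrative):

```python
def parse_lfs_pointer(text: str) -> dict:
    # An LFS pointer is space-separated key/value lines: version, oid (algo:hex), size.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "oid_algo": algo,
            "oid": digest, "size": int(fields["size"])}

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f552622583d640fe5e4e13979af01b5c926d58557be9d0787ba9cfaaa15288a4
size 29859
"""
```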

20
special_tokens_map.json Normal file
View File

@@ -0,0 +1,20 @@
{
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>"
],
"eos_token": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
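Note that `pad_token` and `eos_token` above are both `<|endoftext|>`. When one token serves both roles, padding cannot be distinguished from a genuine end-of-text by token id alone, so attention masks must carry that information. A small check over an excerpt of the config (illustrative, not a library API):

```python
import json

# Excerpt of the special_tokens_map.json shown above.
SPECIAL_TOKENS = json.loads("""
{
  "additional_special_tokens": ["<|im_start|>", "<|im_end|>"],
  "eos_token": {"content": "<|endoftext|>"},
  "pad_token": {"content": "<|endoftext|>"}
}
""")

# pad and eos share one surface form here, so downstream code should rely on
# attention masks (not raw token ids) to identify padded positions.
pad_equals_eos = (SPECIAL_TOKENS["pad_token"]["content"]
                  == SPECIAL_TOKENS["eos_token"]["content"])
```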

44
tokenizer_config.json Normal file
View File

@@ -0,0 +1,44 @@
{
"add_prefix_space": false,
"added_tokens_decoder": {
"151643": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151644": {
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151645": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>"
],
"bos_token": null,
"chat_template": "{% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] %}{% endif %}{% if system_message is defined %}{{ system_message }}{% endif %}{% for message in messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ content }}{% elif message['role'] == 'assistant' %}{{ content + '<|endoftext|>' }}{% endif %}{% endfor %}",
"clean_up_tokenization_spaces": false,
"eos_token": "<|endoftext|>",
"errors": "replace",
"model_max_length": 32768,
"pad_token": "<|endoftext|>",
"padding_side": "right",
"split_special_tokens": false,
"tokenizer_class": "Qwen2Tokenizer",
"unk_token": null
}
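The `chat_template` above is a Jinja template: an optional leading system message is emitted first, user turns are passed through verbatim, and each assistant turn is terminated with the `<|endoftext|>` eos token (the declared `<|im_start|>`/`<|im_end|>` tokens are not used by this particular template). A pure-Python sketch of the same logic, to make the prompt format explicit (`build_prompt` is illustrative, not the tokenizer's own API):

```python
def build_prompt(messages):
    # Mirrors the Jinja chat_template above: the system message (if first)
    # is emitted once up front; in the loop it matches neither branch, so
    # it is not repeated.
    out = []
    if messages and messages[0]["role"] == "system":
        out.append(messages[0]["content"])
    for m in messages:
        if m["role"] == "user":
            out.append(m["content"])
        elif m["role"] == "assistant":
            out.append(m["content"] + "<|endoftext|>")
    return "".join(out)
```

In practice `tokenizer.apply_chat_template(messages, tokenize=False)` from `transformers` renders the template itself; the sketch only documents what the rendered prompt looks like.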

7
train_results.json Normal file
View File

@@ -0,0 +1,7 @@
{
"epoch": 1.0,
"train_loss": 2.982507556789895,
"train_runtime": 289179.519,
"train_samples_per_second": 5.604,
"train_steps_per_second": 0.005
}
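The aggregate figures above imply the scale of the run: multiplying the (rounded) per-second rates by the runtime recovers approximate totals. A back-of-the-envelope sketch, assuming only the values reported in train_results.json:

```python
# Values copied from train_results.json above.
train_runtime = 289179.519      # seconds, roughly 80.3 hours
samples_per_second = 5.604
steps_per_second = 0.005        # heavily rounded in the report

approx_samples = samples_per_second * train_runtime   # about 1.62M samples seen
approx_steps = steps_per_second * train_runtime       # about 1446; the trainer log
                                                      # reports 1582 total steps, which
                                                      # rounds to ~0.00547 steps/s, i.e.
                                                      # 0.005 after truncation
hours = train_runtime / 3600
```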

159
trainer_log.jsonl Normal file
View File

@@ -0,0 +1,159 @@
{"current_steps": 10, "total_steps": 1582, "loss": 3.8169, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.999810713827213e-05, "epoch": 0.01, "percentage": 0.63, "elapsed_time": "0:30:28", "remaining_time": "3 days, 7:51:55"}
{"current_steps": 20, "total_steps": 1582, "loss": 3.4444, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.999041820624981e-05, "epoch": 0.01, "percentage": 1.26, "elapsed_time": "1:00:58", "remaining_time": "3 days, 7:21:58"}
{"current_steps": 30, "total_steps": 1582, "loss": 3.3706, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.9976817929807542e-05, "epoch": 0.02, "percentage": 1.9, "elapsed_time": "1:31:27", "remaining_time": "3 days, 6:51:35"}
{"current_steps": 40, "total_steps": 1582, "loss": 3.3325, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.995731167209911e-05, "epoch": 0.03, "percentage": 2.53, "elapsed_time": "2:01:57", "remaining_time": "3 days, 6:21:15"}
{"current_steps": 50, "total_steps": 1582, "loss": 3.2825, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.9931907125251988e-05, "epoch": 0.03, "percentage": 3.16, "elapsed_time": "2:32:26", "remaining_time": "3 days, 5:50:47"}
{"current_steps": 60, "total_steps": 1582, "loss": 3.273, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.9900614307334e-05, "epoch": 0.04, "percentage": 3.79, "elapsed_time": "3:02:55", "remaining_time": "3 days, 5:20:13"}
{"current_steps": 70, "total_steps": 1582, "loss": 3.2433, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.986344555840277e-05, "epoch": 0.04, "percentage": 4.42, "elapsed_time": "3:33:24", "remaining_time": "3 days, 4:49:41"}
{"current_steps": 80, "total_steps": 1582, "loss": 3.2409, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.982041553563955e-05, "epoch": 0.05, "percentage": 5.06, "elapsed_time": "4:03:53", "remaining_time": "3 days, 4:19:08"}
{"current_steps": 90, "total_steps": 1582, "loss": 3.2218, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.977154120756926e-05, "epoch": 0.06, "percentage": 5.69, "elapsed_time": "4:34:22", "remaining_time": "3 days, 3:48:36"}
{"current_steps": 100, "total_steps": 1582, "loss": 3.2128, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.9716841847369106e-05, "epoch": 0.06, "percentage": 6.32, "elapsed_time": "5:04:51", "remaining_time": "3 days, 3:18:03"}
{"current_steps": 110, "total_steps": 1582, "loss": 3.2057, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.9656339025268374e-05, "epoch": 0.07, "percentage": 6.95, "elapsed_time": "5:35:20", "remaining_time": "3 days, 2:47:29"}
{"current_steps": 120, "total_steps": 1582, "loss": 3.1869, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.959005660004237e-05, "epoch": 0.08, "percentage": 7.59, "elapsed_time": "6:05:49", "remaining_time": "3 days, 2:16:55"}
{"current_steps": 130, "total_steps": 1582, "loss": 3.1853, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.9518020709603938e-05, "epoch": 0.08, "percentage": 8.22, "elapsed_time": "6:36:18", "remaining_time": "3 days, 1:46:25"}
{"current_steps": 140, "total_steps": 1582, "loss": 3.1747, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.9440259760696174e-05, "epoch": 0.09, "percentage": 8.85, "elapsed_time": "7:06:47", "remaining_time": "3 days, 1:15:55"}
{"current_steps": 150, "total_steps": 1582, "loss": 3.1569, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.935680441769049e-05, "epoch": 0.09, "percentage": 9.48, "elapsed_time": "7:37:16", "remaining_time": "3 days, 0:45:24"}
{"current_steps": 160, "total_steps": 1582, "loss": 3.147, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.9267687590494364e-05, "epoch": 0.1, "percentage": 10.11, "elapsed_time": "8:07:44", "remaining_time": "3 days, 0:14:52"}
{"current_steps": 170, "total_steps": 1582, "loss": 3.1393, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.9172944421573587e-05, "epoch": 0.11, "percentage": 10.75, "elapsed_time": "8:38:13", "remaining_time": "2 days, 23:44:20"}
{"current_steps": 180, "total_steps": 1582, "loss": 3.1262, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.9072612272094165e-05, "epoch": 0.11, "percentage": 11.38, "elapsed_time": "9:08:42", "remaining_time": "2 days, 23:13:48"}
{"current_steps": 190, "total_steps": 1582, "loss": 3.1286, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.8966730707189218e-05, "epoch": 0.12, "percentage": 12.01, "elapsed_time": "9:39:11", "remaining_time": "2 days, 22:43:19"}
{"current_steps": 200, "total_steps": 1582, "loss": 3.1145, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.885534148035684e-05, "epoch": 0.13, "percentage": 12.64, "elapsed_time": "10:09:40", "remaining_time": "2 days, 22:12:49"}
{"current_steps": 210, "total_steps": 1582, "loss": 3.1037, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.8738488516994925e-05, "epoch": 0.13, "percentage": 13.27, "elapsed_time": "10:40:08", "remaining_time": "2 days, 21:42:17"}
{"current_steps": 220, "total_steps": 1582, "loss": 3.1064, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.8616217897079593e-05, "epoch": 0.14, "percentage": 13.91, "elapsed_time": "11:10:37", "remaining_time": "2 days, 21:11:46"}
{"current_steps": 230, "total_steps": 1582, "loss": 3.1016, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.8488577836993934e-05, "epoch": 0.15, "percentage": 14.54, "elapsed_time": "11:41:06", "remaining_time": "2 days, 20:41:15"}
{"current_steps": 240, "total_steps": 1582, "loss": 3.0887, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.835561867051426e-05, "epoch": 0.15, "percentage": 15.17, "elapsed_time": "12:11:34", "remaining_time": "2 days, 20:10:44"}
{"current_steps": 250, "total_steps": 1582, "loss": 3.0969, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.821739282896143e-05, "epoch": 0.16, "percentage": 15.8, "elapsed_time": "12:42:03", "remaining_time": "2 days, 19:40:13"}
{"current_steps": 260, "total_steps": 1582, "loss": 3.0809, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.8073954820525014e-05, "epoch": 0.16, "percentage": 16.43, "elapsed_time": "13:12:32", "remaining_time": "2 days, 19:09:44"}
{"current_steps": 270, "total_steps": 1582, "loss": 3.0821, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.792536120876842e-05, "epoch": 0.17, "percentage": 17.07, "elapsed_time": "13:43:00", "remaining_time": "2 days, 18:39:13"}
{"current_steps": 280, "total_steps": 1582, "loss": 3.0799, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.7771670590323548e-05, "epoch": 0.18, "percentage": 17.7, "elapsed_time": "14:13:28", "remaining_time": "2 days, 18:08:41"}
{"current_steps": 290, "total_steps": 1582, "loss": 3.0651, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.7612943571783705e-05, "epoch": 0.18, "percentage": 18.33, "elapsed_time": "14:43:57", "remaining_time": "2 days, 17:38:10"}
{"current_steps": 300, "total_steps": 1582, "loss": 3.0548, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.7449242745803914e-05, "epoch": 0.19, "percentage": 18.96, "elapsed_time": "15:14:25", "remaining_time": "2 days, 17:07:40"}
{"current_steps": 310, "total_steps": 1582, "loss": 3.0592, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.7280632666418013e-05, "epoch": 0.2, "percentage": 19.6, "elapsed_time": "15:44:54", "remaining_time": "2 days, 16:37:09"}
{"current_steps": 320, "total_steps": 1582, "loss": 3.0647, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.710717982358233e-05, "epoch": 0.2, "percentage": 20.23, "elapsed_time": "16:15:22", "remaining_time": "2 days, 16:06:39"}
{"current_steps": 330, "total_steps": 1582, "loss": 3.0397, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.6928952616955944e-05, "epoch": 0.21, "percentage": 20.86, "elapsed_time": "16:45:51", "remaining_time": "2 days, 15:36:09"}
{"current_steps": 340, "total_steps": 1582, "loss": 3.0382, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.674602132892783e-05, "epoch": 0.21, "percentage": 21.49, "elapsed_time": "17:16:19", "remaining_time": "2 days, 15:05:39"}
{"current_steps": 350, "total_steps": 1582, "loss": 3.0304, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.655845809690162e-05, "epoch": 0.22, "percentage": 22.12, "elapsed_time": "17:46:48", "remaining_time": "2 days, 14:35:09"}
{"current_steps": 360, "total_steps": 1582, "loss": 3.0435, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.636633688484882e-05, "epoch": 0.23, "percentage": 22.76, "elapsed_time": "18:17:16", "remaining_time": "2 days, 14:04:39"}
{"current_steps": 370, "total_steps": 1582, "loss": 3.043, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.616973345414173e-05, "epoch": 0.23, "percentage": 23.39, "elapsed_time": "18:47:45", "remaining_time": "2 days, 13:34:10"}
{"current_steps": 380, "total_steps": 1582, "loss": 3.0378, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.598902211115875e-05, "epoch": 0.24, "percentage": 24.02, "elapsed_time": "19:18:13", "remaining_time": "2 days, 13:03:38"}
{"current_steps": 390, "total_steps": 1582, "loss": 3.0303, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.5784117493354177e-05, "epoch": 0.25, "percentage": 24.65, "elapsed_time": "19:48:39", "remaining_time": "2 days, 12:33:02"}
{"current_steps": 400, "total_steps": 1582, "loss": 3.0149, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.5574960250179765e-05, "epoch": 0.25, "percentage": 25.28, "elapsed_time": "20:19:06", "remaining_time": "2 days, 12:02:27"}
{"current_steps": 410, "total_steps": 1582, "loss": 3.0235, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.5361632861022366e-05, "epoch": 0.26, "percentage": 25.92, "elapsed_time": "20:49:32", "remaining_time": "2 days, 11:31:52"}
{"current_steps": 420, "total_steps": 1582, "loss": 3.0245, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.5144219449730562e-05, "epoch": 0.27, "percentage": 26.55, "elapsed_time": "21:19:59", "remaining_time": "2 days, 11:01:18"}
{"current_steps": 430, "total_steps": 1582, "loss": 3.0177, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.4922805751441174e-05, "epoch": 0.27, "percentage": 27.18, "elapsed_time": "21:50:25", "remaining_time": "2 days, 10:30:44"}
{"current_steps": 440, "total_steps": 1582, "loss": 3.0119, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.469747907877028e-05, "epoch": 0.28, "percentage": 27.81, "elapsed_time": "22:20:52", "remaining_time": "2 days, 10:00:09"}
{"current_steps": 450, "total_steps": 1582, "loss": 3.0069, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.4468328287382274e-05, "epoch": 0.28, "percentage": 28.45, "elapsed_time": "22:51:18", "remaining_time": "2 days, 9:29:36"}
{"current_steps": 460, "total_steps": 1582, "loss": 2.9941, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.423544374095036e-05, "epoch": 0.29, "percentage": 29.08, "elapsed_time": "23:21:44", "remaining_time": "2 days, 8:59:02"}
{"current_steps": 470, "total_steps": 1582, "loss": 3.0015, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.3998917275522408e-05, "epoch": 0.3, "percentage": 29.71, "elapsed_time": "23:52:11", "remaining_time": "2 days, 8:28:30"}
{"current_steps": 480, "total_steps": 1582, "loss": 3.0088, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.37588421633062e-05, "epoch": 0.3, "percentage": 30.34, "elapsed_time": "1 day, 0:22:38", "remaining_time": "2 days, 7:57:58"}
{"current_steps": 490, "total_steps": 1582, "loss": 3.0014, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.35153130758883e-05, "epoch": 0.31, "percentage": 30.97, "elapsed_time": "1 day, 0:53:04", "remaining_time": "2 days, 7:27:26"}
{"current_steps": 500, "total_steps": 1582, "loss": 2.9899, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.3268426046901153e-05, "epoch": 0.32, "percentage": 31.61, "elapsed_time": "1 day, 1:23:31", "remaining_time": "2 days, 6:56:53"}
{"current_steps": 510, "total_steps": 1582, "loss": 2.9969, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.3018278434152984e-05, "epoch": 0.32, "percentage": 32.24, "elapsed_time": "1 day, 1:53:57", "remaining_time": "2 days, 6:26:22"}
{"current_steps": 520, "total_steps": 1582, "loss": 3.0184, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.2764968881235625e-05, "epoch": 0.33, "percentage": 32.87, "elapsed_time": "1 day, 2:24:24", "remaining_time": "2 days, 5:55:50"}
{"current_steps": 530, "total_steps": 1582, "loss": 3.003, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.2508597278625205e-05, "epoch": 0.33, "percentage": 33.5, "elapsed_time": "1 day, 2:54:50", "remaining_time": "2 days, 5:25:19"}
{"current_steps": 540, "total_steps": 1582, "loss": 2.9918, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.2249264724291235e-05, "epoch": 0.34, "percentage": 34.13, "elapsed_time": "1 day, 3:25:17", "remaining_time": "2 days, 4:54:47"}
{"current_steps": 550, "total_steps": 1582, "loss": 2.9868, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.1987073483829422e-05, "epoch": 0.35, "percentage": 34.77, "elapsed_time": "1 day, 3:55:43", "remaining_time": "2 days, 4:24:16"}
{"current_steps": 560, "total_steps": 1582, "loss": 2.9815, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.172212695013415e-05, "epoch": 0.35, "percentage": 35.4, "elapsed_time": "1 day, 4:26:10", "remaining_time": "2 days, 3:53:45"}
{"current_steps": 570, "total_steps": 1582, "loss": 2.9837, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.1454529602626336e-05, "epoch": 0.36, "percentage": 36.03, "elapsed_time": "1 day, 4:56:36", "remaining_time": "2 days, 3:23:14"}
{"current_steps": 580, "total_steps": 1582, "loss": 2.9728, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.1184386966052872e-05, "epoch": 0.37, "percentage": 36.66, "elapsed_time": "1 day, 5:27:03", "remaining_time": "2 days, 2:52:43"}
{"current_steps": 590, "total_steps": 1582, "loss": 2.9696, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.0911805568873825e-05, "epoch": 0.37, "percentage": 37.29, "elapsed_time": "1 day, 5:57:29", "remaining_time": "2 days, 2:22:13"}
{"current_steps": 600, "total_steps": 1582, "loss": 2.9709, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.063689290125385e-05, "epoch": 0.38, "percentage": 37.93, "elapsed_time": "1 day, 6:27:55", "remaining_time": "2 days, 1:51:42"}
{"current_steps": 610, "total_steps": 1582, "loss": 2.97, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.0359757372674344e-05, "epoch": 0.39, "percentage": 38.56, "elapsed_time": "1 day, 6:58:22", "remaining_time": "2 days, 1:21:12"}
{"current_steps": 620, "total_steps": 1582, "loss": 2.9621, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.00805082691831e-05, "epoch": 0.39, "percentage": 39.19, "elapsed_time": "1 day, 7:28:48", "remaining_time": "2 days, 0:50:42"}
{"current_steps": 630, "total_steps": 1582, "loss": 2.9608, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9799255710298257e-05, "epoch": 0.4, "percentage": 39.82, "elapsed_time": "1 day, 7:59:15", "remaining_time": "2 days, 0:20:12"}
{"current_steps": 640, "total_steps": 1582, "loss": 2.9673, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.951611060558363e-05, "epoch": 0.4, "percentage": 40.46, "elapsed_time": "1 day, 8:29:41", "remaining_time": "1 day, 23:49:42"}
{"current_steps": 650, "total_steps": 1582, "loss": 2.9609, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9231184610912436e-05, "epoch": 0.41, "percentage": 41.09, "elapsed_time": "1 day, 9:00:08", "remaining_time": "1 day, 23:19:12"}
{"current_steps": 660, "total_steps": 1582, "loss": 2.9596, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.8944590084436768e-05, "epoch": 0.42, "percentage": 41.72, "elapsed_time": "1 day, 9:30:34", "remaining_time": "1 day, 22:48:42"}
{"current_steps": 670, "total_steps": 1582, "loss": 2.9589, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.86564400422801e-05, "epoch": 0.42, "percentage": 42.35, "elapsed_time": "1 day, 10:01:00", "remaining_time": "1 day, 22:18:12"}
{"current_steps": 680, "total_steps": 1582, "loss": 2.9513, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.8366848113970322e-05, "epoch": 0.43, "percentage": 42.98, "elapsed_time": "1 day, 10:31:26", "remaining_time": "1 day, 21:47:42"}
{"current_steps": 690, "total_steps": 1582, "loss": 2.9559, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.8075928497630936e-05, "epoch": 0.44, "percentage": 43.62, "elapsed_time": "1 day, 11:01:52", "remaining_time": "1 day, 21:17:12"}
{"current_steps": 700, "total_steps": 1582, "loss": 2.9377, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.7783795914947944e-05, "epoch": 0.44, "percentage": 44.25, "elapsed_time": "1 day, 11:32:18", "remaining_time": "1 day, 20:46:42"}
{"current_steps": 710, "total_steps": 1582, "loss": 2.9469, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.7490565565930382e-05, "epoch": 0.45, "percentage": 44.88, "elapsed_time": "1 day, 12:02:44", "remaining_time": "1 day, 20:16:13"}
{"current_steps": 720, "total_steps": 1582, "loss": 2.9454, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.7196353083482102e-05, "epoch": 0.45, "percentage": 45.51, "elapsed_time": "1 day, 12:33:10", "remaining_time": "1 day, 19:45:43"}
{"current_steps": 730, "total_steps": 1582, "loss": 2.9426, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6901274487802977e-05, "epoch": 0.46, "percentage": 46.14, "elapsed_time": "1 day, 13:03:37", "remaining_time": "1 day, 19:15:14"}
{"current_steps": 740, "total_steps": 1582, "loss": 2.9504, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.66054461406373e-05, "epoch": 0.47, "percentage": 46.78, "elapsed_time": "1 day, 13:34:03", "remaining_time": "1 day, 18:44:44"}
{"current_steps": 750, "total_steps": 1582, "loss": 2.9388, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6308984699387494e-05, "epoch": 0.47, "percentage": 47.41, "elapsed_time": "1 day, 14:04:29", "remaining_time": "1 day, 18:14:15"}
{"current_steps": 760, "total_steps": 1582, "loss": 2.9356, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6012007071111277e-05, "epoch": 0.48, "percentage": 48.04, "elapsed_time": "1 day, 14:34:55", "remaining_time": "1 day, 17:43:46"}
{"current_steps": 770, "total_steps": 1582, "loss": 2.938, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.5714630366420347e-05, "epoch": 0.49, "percentage": 48.67, "elapsed_time": "1 day, 15:05:21", "remaining_time": "1 day, 17:13:16"}
{"current_steps": 780, "total_steps": 1582, "loss": 2.9352, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.541697185329881e-05, "epoch": 0.49, "percentage": 49.3, "elapsed_time": "1 day, 15:35:47", "remaining_time": "1 day, 16:42:47"}
{"current_steps": 790, "total_steps": 1582, "loss": 2.933, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.5119148910859536e-05, "epoch": 0.5, "percentage": 49.94, "elapsed_time": "1 day, 16:06:13", "remaining_time": "1 day, 16:12:18"}
{"current_steps": 800, "total_steps": 1582, "loss": 2.9314, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.4821278983056699e-05, "epoch": 0.51, "percentage": 50.57, "elapsed_time": "1 day, 16:36:39", "remaining_time": "1 day, 15:41:49"}
{"current_steps": 810, "total_steps": 1582, "loss": 2.9319, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.4523479532372764e-05, "epoch": 0.51, "percentage": 51.2, "elapsed_time": "1 day, 17:07:05", "remaining_time": "1 day, 15:11:21"}
{"current_steps": 820, "total_steps": 1582, "loss": 2.9238, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.4225867993498119e-05, "epoch": 0.52, "percentage": 51.83, "elapsed_time": "1 day, 17:37:31", "remaining_time": "1 day, 14:40:52"}
{"current_steps": 830, "total_steps": 1582, "loss": 2.9338, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.3928561727021752e-05, "epoch": 0.52, "percentage": 52.47, "elapsed_time": "1 day, 18:07:57", "remaining_time": "1 day, 14:10:23"}
{"current_steps": 840, "total_steps": 1582, "loss": 2.9274, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.363167797315105e-05, "epoch": 0.53, "percentage": 53.1, "elapsed_time": "1 day, 18:38:23", "remaining_time": "1 day, 13:39:55"}
{"current_steps": 850, "total_steps": 1582, "loss": 2.9247, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.3335333805479126e-05, "epoch": 0.54, "percentage": 53.73, "elapsed_time": "1 day, 19:08:50", "remaining_time": "1 day, 13:09:26"}
{"current_steps": 860, "total_steps": 1582, "loss": 2.9243, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.3039646084817877e-05, "epoch": 0.54, "percentage": 54.36, "elapsed_time": "1 day, 19:39:16", "remaining_time": "1 day, 12:38:58"}
{"current_steps": 870, "total_steps": 1582, "loss": 2.9208, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.2744731413114838e-05, "epoch": 0.55, "percentage": 54.99, "elapsed_time": "1 day, 20:09:42", "remaining_time": "1 day, 12:08:29"}
{"current_steps": 880, "total_steps": 1582, "loss": 2.9209, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.2450706087472275e-05, "epoch": 0.56, "percentage": 55.63, "elapsed_time": "1 day, 20:40:08", "remaining_time": "1 day, 11:38:01"}
{"current_steps": 890, "total_steps": 1582, "loss": 2.9274, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.215768605428635e-05, "epoch": 0.56, "percentage": 56.26, "elapsed_time": "1 day, 21:10:34", "remaining_time": "1 day, 11:07:33"}
{"current_steps": 900, "total_steps": 1582, "loss": 2.923, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.1865786863524683e-05, "epoch": 0.57, "percentage": 56.89, "elapsed_time": "1 day, 21:41:01", "remaining_time": "1 day, 10:37:05"}
{"current_steps": 910, "total_steps": 1582, "loss": 2.9202, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.157512362316014e-05, "epoch": 0.57, "percentage": 57.52, "elapsed_time": "1 day, 22:11:27", "remaining_time": "1 day, 10:06:36"}
{"current_steps": 920, "total_steps": 1582, "loss": 2.9186, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.1285810953779057e-05, "epoch": 0.58, "percentage": 58.15, "elapsed_time": "1 day, 22:41:53", "remaining_time": "1 day, 9:36:08"}
{"current_steps": 930, "total_steps": 1582, "loss": 2.9166, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.0997962943381529e-05, "epoch": 0.59, "percentage": 58.79, "elapsed_time": "1 day, 23:12:19", "remaining_time": "1 day, 9:05:40"}
{"current_steps": 940, "total_steps": 1582, "loss": 2.9149, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.0711693102391794e-05, "epoch": 0.59, "percentage": 59.42, "elapsed_time": "1 day, 23:42:45", "remaining_time": "1 day, 8:35:12"}
{"current_steps": 950, "total_steps": 1582, "loss": 2.9076, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.0427114318896343e-05, "epoch": 0.6, "percentage": 60.05, "elapsed_time": "2 days, 0:13:11", "remaining_time": "1 day, 8:04:44"}
{"current_steps": 960, "total_steps": 1582, "loss": 2.9179, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.014433881412749e-05, "epoch": 0.61, "percentage": 60.68, "elapsed_time": "2 days, 0:43:38", "remaining_time": "1 day, 7:34:16"}
{"current_steps": 970, "total_steps": 1582, "loss": 2.9127, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.863478098209844e-06, "epoch": 0.61, "percentage": 61.31, "elapsed_time": "2 days, 1:14:04", "remaining_time": "1 day, 7:03:48"}
{"current_steps": 980, "total_steps": 1582, "loss": 2.9162, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.584642926187264e-06, "epoch": 0.62, "percentage": 61.95, "elapsed_time": "2 days, 1:44:30", "remaining_time": "1 day, 6:33:20"}
{"current_steps": 990, "total_steps": 1582, "loss": 2.9087, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.307943254347521e-06, "epoch": 0.63, "percentage": 62.58, "elapsed_time": "2 days, 2:14:56", "remaining_time": "1 day, 6:02:52"}
{"current_steps": 1000, "total_steps": 1582, "loss": 2.9007, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.033488196861998e-06, "epoch": 0.63, "percentage": 63.21, "elapsed_time": "2 days, 2:45:22", "remaining_time": "1 day, 5:32:24"}
{"current_steps": 1010, "total_steps": 1582, "loss": 2.9102, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.76138598275741e-06, "epoch": 0.64, "percentage": 63.84, "elapsed_time": "2 days, 3:18:04", "remaining_time": "1 day, 5:03:13"}
{"current_steps": 1020, "total_steps": 1582, "loss": 2.9006, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.491743913236629e-06, "epoch": 0.64, "percentage": 64.48, "elapsed_time": "2 days, 3:48:30", "remaining_time": "1 day, 4:32:43"}
{"current_steps": 1030, "total_steps": 1582, "loss": 2.9061, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.224668319365347e-06, "epoch": 0.65, "percentage": 65.11, "elapsed_time": "2 days, 4:18:56", "remaining_time": "1 day, 4:02:13"}
{"current_steps": 1040, "total_steps": 1582, "loss": 2.9052, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.960264520141317e-06, "epoch": 0.66, "percentage": 65.74, "elapsed_time": "2 days, 4:49:22", "remaining_time": "1 day, 3:31:43"}
{"current_steps": 1050, "total_steps": 1582, "loss": 2.9017, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.698636780962729e-06, "epoch": 0.66, "percentage": 66.37, "elapsed_time": "2 days, 5:19:48", "remaining_time": "1 day, 3:01:14"}
{"current_steps": 1060, "total_steps": 1582, "loss": 2.8945, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.439888272512004e-06, "epoch": 0.67, "percentage": 67.0, "elapsed_time": "2 days, 5:50:14", "remaining_time": "1 day, 2:30:44"}
{"current_steps": 1070, "total_steps": 1582, "loss": 2.8982, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.184121030071315e-06, "epoch": 0.68, "percentage": 67.64, "elapsed_time": "2 days, 6:20:40", "remaining_time": "1 day, 2:00:15"}
{"current_steps": 1080, "total_steps": 1582, "loss": 2.9084, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.931435913285836e-06, "epoch": 0.68, "percentage": 68.27, "elapsed_time": "2 days, 6:51:07", "remaining_time": "1 day, 1:29:45"}
{"current_steps": 1090, "total_steps": 1582, "loss": 2.8975, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.681932566390615e-06, "epoch": 0.69, "percentage": 68.9, "elapsed_time": "2 days, 7:21:33", "remaining_time": "1 day, 0:59:16"}
{"current_steps": 1100, "total_steps": 1582, "loss": 2.9089, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.4357093789167005e-06, "epoch": 0.7, "percentage": 69.53, "elapsed_time": "2 days, 7:51:59", "remaining_time": "1 day, 0:28:46"}
{"current_steps": 1110, "total_steps": 1582, "loss": 2.9079, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.192863446892048e-06, "epoch": 0.7, "percentage": 70.16, "elapsed_time": "2 days, 8:22:25", "remaining_time": "23:58:17"}
{"current_steps": 1120, "total_steps": 1582, "loss": 2.9077, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.953490534552541e-06, "epoch": 0.71, "percentage": 70.8, "elapsed_time": "2 days, 8:52:51", "remaining_time": "23:27:48"}
{"current_steps": 1130, "total_steps": 1582, "loss": 2.8958, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.71768503657819e-06, "epoch": 0.71, "percentage": 71.43, "elapsed_time": "2 days, 9:23:17", "remaining_time": "22:57:19"}
{"current_steps": 1140, "total_steps": 1582, "loss": 2.8987, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.485539940869361e-06, "epoch": 0.72, "percentage": 72.06, "elapsed_time": "2 days, 9:53:43", "remaining_time": "22:26:49"}
{"current_steps": 1150, "total_steps": 1582, "loss": 2.892, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.2571467918777955e-06, "epoch": 0.73, "percentage": 72.69, "elapsed_time": "2 days, 10:24:10", "remaining_time": "21:56:20"}
{"current_steps": 1160, "total_steps": 1582, "loss": 2.8925, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.032595654506847e-06, "epoch": 0.73, "percentage": 73.32, "elapsed_time": "2 days, 10:54:36", "remaining_time": "21:25:51"}
{"current_steps": 1170, "total_steps": 1582, "loss": 2.8975, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.811975078595121e-06, "epoch": 0.74, "percentage": 73.96, "elapsed_time": "2 days, 11:25:02", "remaining_time": "20:55:23"}
{"current_steps": 1180, "total_steps": 1582, "loss": 2.9005, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.5953720639976185e-06, "epoch": 0.75, "percentage": 74.59, "elapsed_time": "2 days, 11:55:28", "remaining_time": "20:24:54"}
{"current_steps": 1190, "total_steps": 1582, "loss": 2.893, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.382872026278071e-06, "epoch": 0.75, "percentage": 75.22, "elapsed_time": "2 days, 12:25:55", "remaining_time": "19:54:25"}
{"current_steps": 1200, "total_steps": 1582, "loss": 2.8926, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.174558763026048e-06, "epoch": 0.76, "percentage": 75.85, "elapsed_time": "2 days, 12:56:21", "remaining_time": "19:23:56"}
{"current_steps": 1210, "total_steps": 1582, "loss": 2.8865, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.970514420812069e-06, "epoch": 0.76, "percentage": 76.49, "elapsed_time": "2 days, 13:26:47", "remaining_time": "18:53:27"}
{"current_steps": 1220, "total_steps": 1582, "loss": 2.8992, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.770819462793801e-06, "epoch": 0.77, "percentage": 77.12, "elapsed_time": "2 days, 13:57:13", "remaining_time": "18:22:58"}
{"current_steps": 1230, "total_steps": 1582, "loss": 2.8841, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.5755526369861207e-06, "epoch": 0.78, "percentage": 77.75, "elapsed_time": "2 days, 14:27:39", "remaining_time": "17:52:30"}
{"current_steps": 1240, "total_steps": 1582, "loss": 2.8843, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.3847909452074768e-06, "epoch": 0.78, "percentage": 78.38, "elapsed_time": "2 days, 14:58:05", "remaining_time": "17:22:01"}
{"current_steps": 1250, "total_steps": 1582, "loss": 2.8883, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.1986096127148724e-06, "epoch": 0.79, "percentage": 79.01, "elapsed_time": "2 days, 15:28:31", "remaining_time": "16:51:32"}
{"current_steps": 1260, "total_steps": 1582, "loss": 2.8919, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.0170820585394327e-06, "epoch": 0.8, "percentage": 79.65, "elapsed_time": "2 days, 15:58:57", "remaining_time": "16:21:04"}
{"current_steps": 1270, "total_steps": 1582, "loss": 2.8855, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.8402798665342412e-06, "epoch": 0.8, "percentage": 80.28, "elapsed_time": "2 days, 16:29:23", "remaining_time": "15:50:35"}
{"current_steps": 1280, "total_steps": 1582, "loss": 2.8964, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.668272757145839e-06, "epoch": 0.81, "percentage": 80.91, "elapsed_time": "2 days, 16:59:50", "remaining_time": "15:20:07"}
{"current_steps": 1290, "total_steps": 1582, "loss": 2.8939, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.5011285599205554e-06, "epoch": 0.82, "percentage": 81.54, "elapsed_time": "2 days, 17:30:16", "remaining_time": "14:49:38"}
{"current_steps": 1300, "total_steps": 1582, "loss": 2.8852, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.3389131867565084e-06, "epoch": 0.82, "percentage": 82.17, "elapsed_time": "2 days, 18:00:42", "remaining_time": "14:19:10"}
{"current_steps": 1310, "total_steps": 1582, "loss": 2.8905, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.18169060591181e-06, "epoch": 0.83, "percentage": 82.81, "elapsed_time": "2 days, 18:31:08", "remaining_time": "13:48:41"}
{"current_steps": 1320, "total_steps": 1582, "loss": 2.8827, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.0295228167792224e-06, "epoch": 0.83, "percentage": 83.44, "elapsed_time": "2 days, 19:01:34", "remaining_time": "13:18:13"}
{"current_steps": 1330, "total_steps": 1582, "loss": 2.8844, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.8824698254372108e-06, "epoch": 0.84, "percentage": 84.07, "elapsed_time": "2 days, 19:32:00", "remaining_time": "12:47:44"}
{"current_steps": 1340, "total_steps": 1582, "loss": 2.8878, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.7405896209870665e-06, "epoch": 0.85, "percentage": 84.7, "elapsed_time": "2 days, 20:02:26", "remaining_time": "12:17:16"}
{"current_steps": 1350, "total_steps": 1582, "loss": 2.8825, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.6039381526854018e-06, "epoch": 0.85, "percentage": 85.34, "elapsed_time": "2 days, 20:32:52", "remaining_time": "11:46:48"}
{"current_steps": 1360, "total_steps": 1582, "loss": 2.8849, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.4725693078810238e-06, "epoch": 0.86, "percentage": 85.97, "elapsed_time": "2 days, 21:03:18", "remaining_time": "11:16:20"}
{"current_steps": 1370, "total_steps": 1582, "loss": 2.8894, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.3465348907649138e-06, "epoch": 0.87, "percentage": 86.6, "elapsed_time": "2 days, 21:33:45", "remaining_time": "10:45:51"}
{"current_steps": 1380, "total_steps": 1582, "loss": 2.882, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.2258846019416892e-06, "epoch": 0.87, "percentage": 87.23, "elapsed_time": "2 days, 22:04:11", "remaining_time": "10:15:23"}
{"current_steps": 1390, "total_steps": 1582, "loss": 2.8784, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.110666018830594e-06, "epoch": 0.88, "percentage": 87.86, "elapsed_time": "2 days, 22:34:39", "remaining_time": "9:44:55"}
{"current_steps": 1400, "total_steps": 1582, "loss": 2.8918, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.0009245769037412e-06, "epoch": 0.88, "percentage": 88.5, "elapsed_time": "2 days, 23:05:07", "remaining_time": "9:14:27"}
{"current_steps": 1410, "total_steps": 1582, "loss": 2.887, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.967035517690147e-07, "epoch": 0.89, "percentage": 89.13, "elapsed_time": "2 days, 23:35:35", "remaining_time": "8:44:00"}
{"current_steps": 1420, "total_steps": 1582, "loss": 2.8869, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.980440421047119e-07, "epoch": 0.9, "percentage": 89.76, "elapsed_time": "3 days, 0:06:03", "remaining_time": "8:13:32"}
{"current_steps": 1430, "total_steps": 1582, "loss": 2.8779, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.049849534526192e-07, "epoch": 0.9, "percentage": 90.39, "elapsed_time": "3 days, 0:36:31", "remaining_time": "7:43:04"}
{"current_steps": 1440, "total_steps": 1582, "loss": 2.885, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.175629828759482e-07, "epoch": 0.91, "percentage": 91.02, "elapsed_time": "3 days, 1:06:59", "remaining_time": "7:12:36"}
{"current_steps": 1450, "total_steps": 1582, "loss": 2.8866, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.358126044881461e-07, "epoch": 0.92, "percentage": 91.66, "elapsed_time": "3 days, 1:37:27", "remaining_time": "6:42:08"}
{"current_steps": 1460, "total_steps": 1582, "loss": 2.8828, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.5976605585834055e-07, "epoch": 0.92, "percentage": 92.29, "elapsed_time": "3 days, 2:07:55", "remaining_time": "6:11:40"}
{"current_steps": 1470, "total_steps": 1582, "loss": 2.8861, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.894533252987098e-07, "epoch": 0.93, "percentage": 92.92, "elapsed_time": "3 days, 2:38:23", "remaining_time": "5:41:12"}
{"current_steps": 1480, "total_steps": 1582, "loss": 2.8815, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.2490214003885963e-07, "epoch": 0.94, "percentage": 93.55, "elapsed_time": "3 days, 3:08:51", "remaining_time": "5:10:44"}
{"current_steps": 1490, "total_steps": 1582, "loss": 2.8752, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.661379552918142e-07, "epoch": 0.94, "percentage": 94.18, "elapsed_time": "3 days, 3:39:19", "remaining_time": "4:40:16"}
{"current_steps": 1500, "total_steps": 1582, "loss": 2.8789, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.1318394421597553e-07, "epoch": 0.95, "percentage": 94.82, "elapsed_time": "3 days, 4:09:47", "remaining_time": "4:09:48"}
{"current_steps": 1510, "total_steps": 1582, "loss": 2.8841, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.660609887769804e-07, "epoch": 0.95, "percentage": 95.45, "elapsed_time": "3 days, 4:40:15", "remaining_time": "3:39:21"}
{"current_steps": 1520, "total_steps": 1582, "loss": 2.88, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.2478767151308025e-07, "epoch": 0.96, "percentage": 96.08, "elapsed_time": "3 days, 5:10:44", "remaining_time": "3:08:53"}
{"current_steps": 1530, "total_steps": 1582, "loss": 2.8744, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.938026820726641e-08, "epoch": 0.97, "percentage": 96.71, "elapsed_time": "3 days, 5:41:12", "remaining_time": "2:38:25"}
{"current_steps": 1540, "total_steps": 1582, "loss": 2.8835, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.985274146905917e-08, "epoch": 0.97, "percentage": 97.35, "elapsed_time": "3 days, 6:11:40", "remaining_time": "2:07:57"}
{"current_steps": 1550, "total_steps": 1582, "loss": 2.8893, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.621673522847035e-08, "epoch": 0.98, "percentage": 97.98, "elapsed_time": "3 days, 6:42:09", "remaining_time": "1:37:29"}
{"current_steps": 1560, "total_steps": 1582, "loss": 2.8894, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.848157014431473e-08, "epoch": 0.99, "percentage": 98.61, "elapsed_time": "3 days, 7:12:37", "remaining_time": "1:07:01"}
{"current_steps": 1570, "total_steps": 1582, "loss": 2.885, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.65423992868841e-09, "epoch": 0.99, "percentage": 99.24, "elapsed_time": "3 days, 7:43:05", "remaining_time": "0:36:33"}
{"current_steps": 1580, "total_steps": 1582, "loss": 2.8879, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.394085890616298e-10, "epoch": 1.0, "percentage": 99.87, "elapsed_time": "3 days, 8:13:33", "remaining_time": "0:06:05"}
{"current_steps": 1582, "total_steps": 1582, "loss": null, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": null, "epoch": 1.0, "percentage": 100.0, "elapsed_time": "3 days, 8:19:39", "remaining_time": "0:00:00"}
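The log above is newline-delimited JSON, one record per logging interval, with `loss` and `learning_rate` set to `null` on the final bookkeeping record. A minimal sketch of parsing such records with the standard library (the two sample lines are copied from the log; any real filename is up to the caller):

```python
import json

# Two sample records in the same shape as the trainer log above.
lines = [
    '{"current_steps": 1570, "total_steps": 1582, "loss": 2.885, "learning_rate": 6.65423992868841e-09, "epoch": 0.99}',
    '{"current_steps": 1580, "total_steps": 1582, "loss": 2.8879, "learning_rate": 7.394085890616298e-10, "epoch": 1.0}',
]

def parse_log(raw_lines):
    """Parse JSONL trainer-log records, skipping entries whose loss is null."""
    records = []
    for line in raw_lines:
        rec = json.loads(line)
        if rec.get("loss") is not None:
            records.append((rec["current_steps"], rec["loss"], rec["learning_rate"]))
    return records

records = parse_log(lines)
print(records[-1])  # last logged (step, loss, learning_rate) tuple
```

The same loop applied to the full file recovers the loss curve for plotting or for comparison against the `log_history` array in `trainer_state.json` below, which stores the identical data in a single JSON document.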

978
trainer_state.json Normal file

@@ -0,0 +1,978 @@
{
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 0.9995754805461492,
"eval_steps": 500,
"global_step": 1582,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 0.01,
"learning_rate": 2.999810713827213e-05,
"loss": 3.8169,
"step": 10
},
{
"epoch": 0.01,
"learning_rate": 2.999041820624981e-05,
"loss": 3.4444,
"step": 20
},
{
"epoch": 0.02,
"learning_rate": 2.9976817929807542e-05,
"loss": 3.3706,
"step": 30
},
{
"epoch": 0.03,
"learning_rate": 2.995731167209911e-05,
"loss": 3.3325,
"step": 40
},
{
"epoch": 0.03,
"learning_rate": 2.9931907125251988e-05,
"loss": 3.2825,
"step": 50
},
{
"epoch": 0.04,
"learning_rate": 2.9900614307334e-05,
"loss": 3.273,
"step": 60
},
{
"epoch": 0.04,
"learning_rate": 2.986344555840277e-05,
"loss": 3.2433,
"step": 70
},
{
"epoch": 0.05,
"learning_rate": 2.982041553563955e-05,
"loss": 3.2409,
"step": 80
},
{
"epoch": 0.06,
"learning_rate": 2.977154120756926e-05,
"loss": 3.2218,
"step": 90
},
{
"epoch": 0.06,
"learning_rate": 2.9716841847369106e-05,
"loss": 3.2128,
"step": 100
},
{
"epoch": 0.07,
"learning_rate": 2.9656339025268374e-05,
"loss": 3.2057,
"step": 110
},
{
"epoch": 0.08,
"learning_rate": 2.959005660004237e-05,
"loss": 3.1869,
"step": 120
},
{
"epoch": 0.08,
"learning_rate": 2.9518020709603938e-05,
"loss": 3.1853,
"step": 130
},
{
"epoch": 0.09,
"learning_rate": 2.9440259760696174e-05,
"loss": 3.1747,
"step": 140
},
{
"epoch": 0.09,
"learning_rate": 2.935680441769049e-05,
"loss": 3.1569,
"step": 150
},
{
"epoch": 0.1,
"learning_rate": 2.9267687590494364e-05,
"loss": 3.147,
"step": 160
},
{
"epoch": 0.11,
"learning_rate": 2.9172944421573587e-05,
"loss": 3.1393,
"step": 170
},
{
"epoch": 0.11,
"learning_rate": 2.9072612272094165e-05,
"loss": 3.1262,
"step": 180
},
{
"epoch": 0.12,
"learning_rate": 2.8966730707189218e-05,
"loss": 3.1286,
"step": 190
},
{
"epoch": 0.13,
"learning_rate": 2.885534148035684e-05,
"loss": 3.1145,
"step": 200
},
{
"epoch": 0.13,
"learning_rate": 2.8738488516994925e-05,
"loss": 3.1037,
"step": 210
},
{
"epoch": 0.14,
"learning_rate": 2.8616217897079593e-05,
"loss": 3.1064,
"step": 220
},
{
"epoch": 0.15,
"learning_rate": 2.8488577836993934e-05,
"loss": 3.1016,
"step": 230
},
{
"epoch": 0.15,
"learning_rate": 2.835561867051426e-05,
"loss": 3.0887,
"step": 240
},
{
"epoch": 0.16,
"learning_rate": 2.821739282896143e-05,
"loss": 3.0969,
"step": 250
},
{
"epoch": 0.16,
"learning_rate": 2.8073954820525014e-05,
"loss": 3.0809,
"step": 260
},
{
"epoch": 0.17,
"learning_rate": 2.792536120876842e-05,
"loss": 3.0821,
"step": 270
},
{
"epoch": 0.18,
"learning_rate": 2.7771670590323548e-05,
"loss": 3.0799,
"step": 280
},
{
"epoch": 0.18,
"learning_rate": 2.7612943571783705e-05,
"loss": 3.0651,
"step": 290
},
{
"epoch": 0.19,
"learning_rate": 2.7449242745803914e-05,
"loss": 3.0548,
"step": 300
},
{
"epoch": 0.2,
"learning_rate": 2.7280632666418013e-05,
"loss": 3.0592,
"step": 310
},
{
"epoch": 0.2,
"learning_rate": 2.710717982358233e-05,
"loss": 3.0647,
"step": 320
},
{
"epoch": 0.21,
"learning_rate": 2.6928952616955944e-05,
"loss": 3.0397,
"step": 330
},
{
"epoch": 0.21,
"learning_rate": 2.674602132892783e-05,
"loss": 3.0382,
"step": 340
},
{
"epoch": 0.22,
"learning_rate": 2.655845809690162e-05,
"loss": 3.0304,
"step": 350
},
{
"epoch": 0.23,
"learning_rate": 2.636633688484882e-05,
"loss": 3.0435,
"step": 360
},
{
"epoch": 0.23,
"learning_rate": 2.616973345414173e-05,
"loss": 3.043,
"step": 370
},
{
"epoch": 0.24,
"learning_rate": 2.598902211115875e-05,
"loss": 3.0378,
"step": 380
},
{
"epoch": 0.25,
"learning_rate": 2.5784117493354177e-05,
"loss": 3.0303,
"step": 390
},
{
"epoch": 0.25,
"learning_rate": 2.5574960250179765e-05,
"loss": 3.0149,
"step": 400
},
{
"epoch": 0.26,
"learning_rate": 2.5361632861022366e-05,
"loss": 3.0235,
"step": 410
},
{
"epoch": 0.27,
"learning_rate": 2.5144219449730562e-05,
"loss": 3.0245,
"step": 420
},
{
"epoch": 0.27,
"learning_rate": 2.4922805751441174e-05,
"loss": 3.0177,
"step": 430
},
{
"epoch": 0.28,
"learning_rate": 2.469747907877028e-05,
"loss": 3.0119,
"step": 440
},
{
"epoch": 0.28,
"learning_rate": 2.4468328287382274e-05,
"loss": 3.0069,
"step": 450
},
{
"epoch": 0.29,
"learning_rate": 2.423544374095036e-05,
"loss": 2.9941,
"step": 460
},
{
"epoch": 0.3,
"learning_rate": 2.3998917275522408e-05,
"loss": 3.0015,
"step": 470
},
{
"epoch": 0.3,
"learning_rate": 2.37588421633062e-05,
"loss": 3.0088,
"step": 480
},
{
"epoch": 0.31,
"learning_rate": 2.35153130758883e-05,
"loss": 3.0014,
"step": 490
},
{
"epoch": 0.32,
"learning_rate": 2.3268426046901153e-05,
"loss": 2.9899,
"step": 500
},
{
"epoch": 0.32,
"learning_rate": 2.3018278434152984e-05,
"loss": 2.9969,
"step": 510
},
{
"epoch": 0.33,
"learning_rate": 2.2764968881235625e-05,
"loss": 3.0184,
"step": 520
},
{
"epoch": 0.33,
"learning_rate": 2.2508597278625205e-05,
"loss": 3.003,
"step": 530
},
{
"epoch": 0.34,
"learning_rate": 2.2249264724291235e-05,
"loss": 2.9918,
"step": 540
},
{
"epoch": 0.35,
"learning_rate": 2.1987073483829422e-05,
"loss": 2.9868,
"step": 550
},
{
"epoch": 0.35,
"learning_rate": 2.172212695013415e-05,
"loss": 2.9815,
"step": 560
},
{
"epoch": 0.36,
"learning_rate": 2.1454529602626336e-05,
"loss": 2.9837,
"step": 570
},
{
"epoch": 0.37,
"learning_rate": 2.1184386966052872e-05,
"loss": 2.9728,
"step": 580
},
{
"epoch": 0.37,
"learning_rate": 2.0911805568873825e-05,
"loss": 2.9696,
"step": 590
},
{
"epoch": 0.38,
"learning_rate": 2.063689290125385e-05,
"loss": 2.9709,
"step": 600
},
{
"epoch": 0.39,
"learning_rate": 2.0359757372674344e-05,
"loss": 2.97,
"step": 610
},
{
"epoch": 0.39,
"learning_rate": 2.00805082691831e-05,
"loss": 2.9621,
"step": 620
},
{
"epoch": 0.4,
"learning_rate": 1.9799255710298257e-05,
"loss": 2.9608,
"step": 630
},
{
"epoch": 0.4,
"learning_rate": 1.951611060558363e-05,
"loss": 2.9673,
"step": 640
},
{
"epoch": 0.41,
"learning_rate": 1.9231184610912436e-05,
"loss": 2.9609,
"step": 650
},
{
"epoch": 0.42,
"learning_rate": 1.8944590084436768e-05,
"loss": 2.9596,
"step": 660
},
{
"epoch": 0.42,
"learning_rate": 1.86564400422801e-05,
"loss": 2.9589,
"step": 670
},
{
"epoch": 0.43,
"learning_rate": 1.8366848113970322e-05,
"loss": 2.9513,
"step": 680
},
{
"epoch": 0.44,
"learning_rate": 1.8075928497630936e-05,
"loss": 2.9559,
"step": 690
},
{
"epoch": 0.44,
"learning_rate": 1.7783795914947944e-05,
"loss": 2.9377,
"step": 700
},
{
"epoch": 0.45,
"learning_rate": 1.7490565565930382e-05,
"loss": 2.9469,
"step": 710
},
{
"epoch": 0.45,
"learning_rate": 1.7196353083482102e-05,
"loss": 2.9454,
"step": 720
},
{
"epoch": 0.46,
"learning_rate": 1.6901274487802977e-05,
"loss": 2.9426,
"step": 730
},
{
"epoch": 0.47,
"learning_rate": 1.66054461406373e-05,
"loss": 2.9504,
"step": 740
},
{
"epoch": 0.47,
"learning_rate": 1.6308984699387494e-05,
"loss": 2.9388,
"step": 750
},
{
"epoch": 0.48,
"learning_rate": 1.6012007071111277e-05,
"loss": 2.9356,
"step": 760
},
{
"epoch": 0.49,
"learning_rate": 1.5714630366420347e-05,
"loss": 2.938,
"step": 770
},
{
"epoch": 0.49,
"learning_rate": 1.541697185329881e-05,
"loss": 2.9352,
"step": 780
},
{
"epoch": 0.5,
"learning_rate": 1.5119148910859536e-05,
"loss": 2.933,
"step": 790
},
{
"epoch": 0.51,
"learning_rate": 1.4821278983056699e-05,
"loss": 2.9314,
"step": 800
},
{
"epoch": 0.51,
"learning_rate": 1.4523479532372764e-05,
"loss": 2.9319,
"step": 810
},
{
"epoch": 0.52,
"learning_rate": 1.4225867993498119e-05,
"loss": 2.9238,
"step": 820
},
{
"epoch": 0.52,
"learning_rate": 1.3928561727021752e-05,
"loss": 2.9338,
"step": 830
},
{
"epoch": 0.53,
"learning_rate": 1.363167797315105e-05,
"loss": 2.9274,
"step": 840
},
{
"epoch": 0.54,
"learning_rate": 1.3335333805479126e-05,
"loss": 2.9247,
"step": 850
},
{
"epoch": 0.54,
"learning_rate": 1.3039646084817877e-05,
"loss": 2.9243,
"step": 860
},
{
"epoch": 0.55,
"learning_rate": 1.2744731413114838e-05,
"loss": 2.9208,
"step": 870
},
{
"epoch": 0.56,
"learning_rate": 1.2450706087472275e-05,
"loss": 2.9209,
"step": 880
},
{
"epoch": 0.56,
"learning_rate": 1.215768605428635e-05,
"loss": 2.9274,
"step": 890
},
{
"epoch": 0.57,
"learning_rate": 1.1865786863524683e-05,
"loss": 2.923,
"step": 900
},
{
"epoch": 0.57,
"learning_rate": 1.157512362316014e-05,
"loss": 2.9202,
"step": 910
},
{
"epoch": 0.58,
"learning_rate": 1.1285810953779057e-05,
"loss": 2.9186,
"step": 920
},
{
"epoch": 0.59,
"learning_rate": 1.0997962943381529e-05,
"loss": 2.9166,
"step": 930
},
{
"epoch": 0.59,
"learning_rate": 1.0711693102391794e-05,
"loss": 2.9149,
"step": 940
},
{
"epoch": 0.6,
"learning_rate": 1.0427114318896343e-05,
"loss": 2.9076,
"step": 950
},
{
"epoch": 0.61,
"learning_rate": 1.014433881412749e-05,
"loss": 2.9179,
"step": 960
},
{
"epoch": 0.61,
"learning_rate": 9.863478098209844e-06,
"loss": 2.9127,
"step": 970
},
{
"epoch": 0.62,
"learning_rate": 9.584642926187264e-06,
"loss": 2.9162,
"step": 980
},
{
"epoch": 0.63,
"learning_rate": 9.307943254347521e-06,
"loss": 2.9087,
"step": 990
},
{
"epoch": 0.63,
"learning_rate": 9.033488196861998e-06,
"loss": 2.9007,
"step": 1000
},
{
"epoch": 0.64,
"learning_rate": 8.76138598275741e-06,
"loss": 2.9102,
"step": 1010
},
{
"epoch": 0.64,
"learning_rate": 8.491743913236629e-06,
"loss": 2.9006,
"step": 1020
},
{
"epoch": 0.65,
"learning_rate": 8.224668319365347e-06,
"loss": 2.9061,
"step": 1030
},
{
"epoch": 0.66,
"learning_rate": 7.960264520141317e-06,
"loss": 2.9052,
"step": 1040
},
{
"epoch": 0.66,
"learning_rate": 7.698636780962729e-06,
"loss": 2.9017,
"step": 1050
},
{
"epoch": 0.67,
"learning_rate": 7.439888272512004e-06,
"loss": 2.8945,
"step": 1060
},
{
"epoch": 0.68,
"learning_rate": 7.184121030071315e-06,
"loss": 2.8982,
"step": 1070
},
{
"epoch": 0.68,
"learning_rate": 6.931435913285836e-06,
"loss": 2.9084,
"step": 1080
},
{
"epoch": 0.69,
"learning_rate": 6.681932566390615e-06,
"loss": 2.8975,
"step": 1090
},
{
"epoch": 0.7,
"learning_rate": 6.4357093789167005e-06,
"loss": 2.9089,
"step": 1100
},
{
"epoch": 0.7,
"learning_rate": 6.192863446892048e-06,
"loss": 2.9079,
"step": 1110
},
{
"epoch": 0.71,
"learning_rate": 5.953490534552541e-06,
"loss": 2.9077,
"step": 1120
},
{
"epoch": 0.71,
"learning_rate": 5.71768503657819e-06,
"loss": 2.8958,
"step": 1130
},
{
"epoch": 0.72,
"learning_rate": 5.485539940869361e-06,
"loss": 2.8987,
"step": 1140
},
{
"epoch": 0.73,
"learning_rate": 5.2571467918777955e-06,
"loss": 2.892,
"step": 1150
},
{
"epoch": 0.73,
"learning_rate": 5.032595654506847e-06,
"loss": 2.8925,
"step": 1160
},
{
"epoch": 0.74,
"learning_rate": 4.811975078595121e-06,
"loss": 2.8975,
"step": 1170
},
{
"epoch": 0.75,
"learning_rate": 4.5953720639976185e-06,
"loss": 2.9005,
"step": 1180
},
{
"epoch": 0.75,
"learning_rate": 4.382872026278071e-06,
"loss": 2.893,
"step": 1190
},
{
"epoch": 0.76,
"learning_rate": 4.174558763026048e-06,
"loss": 2.8926,
"step": 1200
},
{
"epoch": 0.76,
"learning_rate": 3.970514420812069e-06,
"loss": 2.8865,
"step": 1210
},
{
"epoch": 0.77,
"learning_rate": 3.770819462793801e-06,
"loss": 2.8992,
"step": 1220
},
{
"epoch": 0.78,
"learning_rate": 3.5755526369861207e-06,
"loss": 2.8841,
"step": 1230
},
{
"epoch": 0.78,
"learning_rate": 3.3847909452074768e-06,
"loss": 2.8843,
"step": 1240
},
{
"epoch": 0.79,
"learning_rate": 3.1986096127148724e-06,
"loss": 2.8883,
"step": 1250
},
{
"epoch": 0.8,
"learning_rate": 3.0170820585394327e-06,
"loss": 2.8919,
"step": 1260
},
{
"epoch": 0.8,
"learning_rate": 2.8402798665342412e-06,
"loss": 2.8855,
"step": 1270
},
{
"epoch": 0.81,
"learning_rate": 2.668272757145839e-06,
"loss": 2.8964,
"step": 1280
},
{
"epoch": 0.82,
"learning_rate": 2.5011285599205554e-06,
"loss": 2.8939,
"step": 1290
},
{
"epoch": 0.82,
"learning_rate": 2.3389131867565084e-06,
"loss": 2.8852,
"step": 1300
},
{
"epoch": 0.83,
"learning_rate": 2.18169060591181e-06,
"loss": 2.8905,
"step": 1310
},
{
"epoch": 0.83,
"learning_rate": 2.0295228167792224e-06,
"loss": 2.8827,
"step": 1320
},
{
"epoch": 0.84,
"learning_rate": 1.8824698254372108e-06,
"loss": 2.8844,
"step": 1330
},
{
"epoch": 0.85,
"learning_rate": 1.7405896209870665e-06,
"loss": 2.8878,
"step": 1340
},
{
"epoch": 0.85,
"learning_rate": 1.6039381526854018e-06,
"loss": 2.8825,
"step": 1350
},
{
"epoch": 0.86,
"learning_rate": 1.4725693078810238e-06,
"loss": 2.8849,
"step": 1360
},
{
"epoch": 0.87,
"learning_rate": 1.3465348907649138e-06,
"loss": 2.8894,
"step": 1370
},
{
"epoch": 0.87,
"learning_rate": 1.2258846019416892e-06,
"loss": 2.882,
"step": 1380
},
{
"epoch": 0.88,
"learning_rate": 1.110666018830594e-06,
"loss": 2.8784,
"step": 1390
},
{
"epoch": 0.88,
"learning_rate": 1.0009245769037412e-06,
"loss": 2.8918,
"step": 1400
},
{
"epoch": 0.89,
"learning_rate": 8.967035517690147e-07,
"loss": 2.887,
"step": 1410
},
{
"epoch": 0.9,
"learning_rate": 7.980440421047119e-07,
"loss": 2.8869,
"step": 1420
},
{
"epoch": 0.9,
"learning_rate": 7.049849534526192e-07,
"loss": 2.8779,
"step": 1430
},
{
"epoch": 0.91,
"learning_rate": 6.175629828759482e-07,
"loss": 2.885,
"step": 1440
},
{
"epoch": 0.92,
"learning_rate": 5.358126044881461e-07,
"loss": 2.8866,
"step": 1450
},
{
"epoch": 0.92,
"learning_rate": 4.5976605585834055e-07,
"loss": 2.8828,
"step": 1460
},
{
"epoch": 0.93,
"learning_rate": 3.894533252987098e-07,
"loss": 2.8861,
"step": 1470
},
{
"epoch": 0.94,
"learning_rate": 3.2490214003885963e-07,
"loss": 2.8815,
"step": 1480
},
{
"epoch": 0.94,
"learning_rate": 2.661379552918142e-07,
"loss": 2.8752,
"step": 1490
},
{
"epoch": 0.95,
"learning_rate": 2.1318394421597553e-07,
"loss": 2.8789,
"step": 1500
},
{
"epoch": 0.95,
"learning_rate": 1.660609887769804e-07,
"loss": 2.8841,
"step": 1510
},
{
"epoch": 0.96,
"learning_rate": 1.2478767151308025e-07,
"loss": 2.88,
"step": 1520
},
{
"epoch": 0.97,
"learning_rate": 8.938026820726641e-08,
"loss": 2.8744,
"step": 1530
},
{
"epoch": 0.97,
"learning_rate": 5.985274146905917e-08,
"loss": 2.8835,
"step": 1540
},
{
"epoch": 0.98,
"learning_rate": 3.621673522847035e-08,
"loss": 2.8893,
"step": 1550
},
{
"epoch": 0.99,
"learning_rate": 1.848157014431473e-08,
"loss": 2.8894,
"step": 1560
},
{
"epoch": 0.99,
"learning_rate": 6.65423992868841e-09,
"loss": 2.885,
"step": 1570
},
{
"epoch": 1.0,
"learning_rate": 7.394085890616298e-10,
"loss": 2.8879,
"step": 1580
},
{
"epoch": 1.0,
"step": 1582,
"total_flos": 7.0894081521926275e+19,
"train_loss": 2.982507556789895,
"train_runtime": 289179.519,
"train_samples_per_second": 5.604,
"train_steps_per_second": 0.005
}
],
"logging_steps": 10,
"max_steps": 1582,
"num_input_tokens_seen": 0,
"num_train_epochs": 1,
"save_steps": 1000,
"total_flos": 7.0894081521926275e+19,
"train_batch_size": 4,
"trial_name": null,
"trial_params": null
}
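The log above follows the Hugging Face Trainer `trainer_state.json` layout: a `log_history` array of per-interval entries (`epoch`, `learning_rate`, `loss`, `step`) closed by a summary entry with `train_loss` and runtime stats. A minimal sketch of summarizing such a file (the helper name and the two-entry sample state are illustrative, not part of this repo):

```python
import json

def summarize(state: dict) -> dict:
    """Summarize a Hugging Face trainer_state.json-style dict.

    Keeps only per-step entries that carry a "loss" key, then reports
    the first and last logged loss and the total drop between them.
    """
    history = [e for e in state["log_history"] if "loss" in e]
    first, last = history[0], history[-1]
    return {
        "last_step": last["step"],
        "first_loss": first["loss"],
        "final_loss": last["loss"],
        "loss_drop": round(first["loss"] - last["loss"], 4),
    }

# Two entries copied from the log above, standing in for the full file
# (in practice: state = json.load(open("trainer_state.json"))).
state = {
    "log_history": [
        {"epoch": 0.54, "learning_rate": 1.3039646084817877e-05,
         "loss": 2.9243, "step": 860},
        {"epoch": 1.0, "learning_rate": 7.394085890616298e-10,
         "loss": 2.8879, "step": 1580},
    ]
}
print(summarize(state))
# → {'last_step': 1580, 'first_loss': 2.9243, 'final_loss': 2.8879, 'loss_drop': 0.0364}
```

The same walk over `log_history` is presumably how the `training_loss.png` curve committed alongside this file was produced.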

3
training_args.bin Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ca778962d04dfc8feb90089cf38ece0e1bcd0b502a6cc860c48fbb8fba842844
size 5755

BIN
training_loss.png Normal file

Binary file not shown. Size: 35 KiB

151645
vocab.json Normal file

File diff suppressed because it is too large.