Initialize project; model provided by the ModelHub XC community

Model: rbelanec/train_record_42_1776331412
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-05 01:09:49 +08:00
commit 8cd4e9de79
17 changed files with 143181 additions and 0 deletions

36
.gitattributes vendored Normal file

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

81
README.md Normal file

@@ -0,0 +1,81 @@
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- peft-factory
- full
- llama-factory
- generated_from_trainer
model-index:
- name: train_record_42_1776331412
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_record_42_1776331412
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the record dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4481
- Num Input Tokens Seen: 245808128
## Model description
More information needed
## Intended uses & limitations
More information needed
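
Until the card is completed, here is a minimal loading sketch (assuming the checkpoint is published under the repo id in the card header and a standard transformers install):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from the model card header; adjust if the model is hosted elsewhere.
repo_id = "rbelanec/train_record_42_1776331412"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# The run used the llama3 chat template (see train.yaml below).
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```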
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
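
For reference, a hedged sketch of how these values might map onto transformers.TrainingArguments (standard API names; not the exact invocation used for this run):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; a sketch, not the original launch script.
args = TrainingArguments(
    output_dir="train_record_42_1776331412",  # placeholder path
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",      # betas=(0.9, 0.999) and eps=1e-08 are the adamw_torch defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5,
)
```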
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:-----:|:---------------:|:-----------------:|
| 0.6094 | 0.2500 | 3906 | 0.5014 | 12292032 |
| 0.4689 | 0.5001 | 7812 | 0.5265 | 24620672 |
| 0.5124 | 0.7501 | 11718 | 0.4985 | 36894016 |
| 0.343 | 1.0002 | 15624 | 0.4854 | 49176512 |
| 0.265 | 1.2502 | 19530 | 0.5116 | 61465280 |
| 0.2897 | 1.5003 | 23436 | 0.4806 | 73739776 |
| 0.2995 | 1.7503 | 27342 | 0.4774 | 86015936 |
| 0.2658 | 2.0004 | 31248 | 0.4481 | 98341056 |
| 0.2663 | 2.2504 | 35154 | 0.5257 | 110649216 |
| 0.1792 | 2.5005 | 39060 | 0.5071 | 122910592 |
| 0.2395 | 2.7505 | 42966 | 0.5056 | 135222656 |
| 0.1496 | 3.0006 | 46872 | 0.5023 | 147516736 |
| 0.1005 | 3.2506 | 50778 | 0.5569 | 159826368 |
| 0.159 | 3.5007 | 54684 | 0.5747 | 172084032 |
| 0.1324 | 3.7507 | 58590 | 0.5466 | 184402752 |
| 0.1773 | 4.0008 | 62496 | 0.5555 | 196687936 |
| 0.0922 | 4.2508 | 66402 | 0.6279 | 209017024 |
| 0.1645 | 4.5009 | 70308 | 0.6087 | 221278272 |
| 0.1252 | 4.7509 | 74214 | 0.6058 | 233564288 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.10.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4

13
all_results.json Normal file

@@ -0,0 +1,13 @@
{
"epoch": 5.0,
"eval_loss": 0.4481422007083893,
"eval_runtime": 50.9215,
"eval_samples_per_second": 272.694,
"eval_steps_per_second": 34.092,
"num_input_tokens_seen": 245808128,
"total_flos": 1.43524334436719e+18,
"train_loss": 0.2808395623519733,
"train_runtime": 12345.7888,
"train_samples_per_second": 50.612,
"train_steps_per_second": 6.326
}
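
A quick sanity check on these numbers: the reported token count and train runtime imply roughly 20k tokens per second (back-of-the-envelope; evaluation time is reported separately):

```python
# Throughput estimate from all_results.json (values copied from above).
num_input_tokens_seen = 245_808_128
train_runtime_s = 12_345.7888

print(f"~{num_input_tokens_seen / train_runtime_s:,.0f} tokens/s")  # ~19,910 tokens/s
```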

39
config.json Normal file

@@ -0,0 +1,39 @@
{
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": [
128001,
128008,
128009
],
"head_dim": 64,
"hidden_act": "silu",
"hidden_size": 2048,
"initializer_range": 0.02,
"intermediate_size": 8192,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 16,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 32.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": true,
"torch_dtype": "float32",
"transformers_version": "4.51.3",
"use_cache": false,
"vocab_size": 128256
}
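
The dimensions above are mutually consistent; a small check (values copied from the config):

```python
# Consistency checks on the config values above.
hidden_size, num_attention_heads, num_key_value_heads = 2048, 32, 8

assert hidden_size // num_attention_heads == 64    # matches "head_dim": 64
print(num_attention_heads // num_key_value_heads)  # 4 query heads per KV head (grouped-query attention)
```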

8
eval_results.json Normal file

@@ -0,0 +1,8 @@
{
"epoch": 5.0,
"eval_loss": 0.4481422007083893,
"eval_runtime": 50.9215,
"eval_samples_per_second": 272.694,
"eval_steps_per_second": 34.092,
"num_input_tokens_seen": 245808128
}

12
generation_config.json Normal file

@@ -0,0 +1,12 @@
{
"bos_token_id": 128000,
"do_sample": true,
"eos_token_id": [
128001,
128008,
128009
],
"temperature": 0.6,
"top_p": 0.9,
"transformers_version": "4.51.3"
}
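
These defaults are picked up automatically by model.generate(); passing them explicitly reproduces the same behaviour (a sketch, repo id assumed from the model card above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "rbelanec/train_record_42_1776331412"  # assumed from the model card above
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
# Sampling settings mirroring generation_config.json.
outputs = model.generate(**inputs, do_sample=True, temperature=0.6, top_p=0.9, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```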

3
model.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe9ddaf8972ad4e41c9c3caea2f14ab808002010efd5ad03fcfc31c519586115
size 4943274328
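
The pointer size is consistent with a float32 dump of a ~1.24B-parameter model at 4 bytes per parameter; a rough estimate from config.json above, assuming tied embeddings counted once (the small remainder is the safetensors header):

```python
# Rough parameter count from the config.json values above (tied embeddings counted once).
V, h, inter, L, kv = 128_256, 2_048, 8_192, 16, 8 * 64   # vocab, hidden, MLP width, layers, KV width

embed = V * h
per_layer = 2 * h * h + 2 * h * kv + 3 * h * inter + 2 * h   # attn q/o + k/v, MLP, two norms
total = embed + L * per_layer + h                            # plus the final norm

print(total)       # 1,235,814,400 parameters
print(total * 4)   # 4,943,257,600 bytes in float32 -- close to the 4,943,274,328-byte pointer
```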

26
special_tokens_map.json Normal file

@@ -0,0 +1,26 @@
{
"additional_special_tokens": [
{
"content": "<|eom_id|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
],
"bos_token": {
"content": "<|begin_of_text|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "<|eot_id|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": "<|eot_id|>"
}
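
pad_token is mapped to <|eot_id|> because the Llama 3 tokenizer ships without a dedicated padding token; the loaded tokenizer reflects this (repo id assumed from the model card):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rbelanec/train_record_42_1776331412")
# Llama 3 defines no pad token; this checkpoint reuses the end-of-turn token instead.
print(tokenizer.pad_token)                          # <|eot_id|>
print(tokenizer.pad_token == tokenizer.eos_token)   # True for this checkpoint
```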

3
tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b9e4e7fb171f92fd137b777cc2714bf87d11576700a1dcd7a399e7bbe39537b
size 17209920

2069
tokenizer_config.json Normal file

File diff suppressed because it is too large.

55
train.yaml Normal file

@@ -0,0 +1,55 @@
seed: 42
### model
model_name_or_path: meta-llama/Llama-3.2-1B-Instruct
trust_remote_code: true
flash_attn: auto
use_cache: false
### method
stage: sft
do_train: true
finetuning_type: full
### dataset
dataset: record
template: llama3
cutoff_len: 2048
overwrite_cache: true
preprocessing_num_workers: 4
dataloader_num_workers: 4
packing: false
### output
output_dir: saves_bts_preliminary/base/llama-3.2-1b-instruct/train_record_42_1776331412
logging_steps: 5
save_steps: 0.05
overwrite_output_dir: true
save_only_model: false
plot_loss: true
include_num_input_tokens_seen: true
push_to_hub: true
push_to_hub_organization: rbelanec
load_best_model_at_end: true
save_total_limit: 1
### train
per_device_train_batch_size: 8
learning_rate: 5.0e-6
num_train_epochs: 5
weight_decay: 1.0e-5
lr_scheduler_type: cosine
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
warmup_ratio: 0.1
optim: adamw_torch
report_to:
- wandb
run_name: base_llama-3.2-1b-instruct_train_record_42_1776331412
### eval
per_device_eval_batch_size: 8
eval_strategy: steps
eval_steps: 0.05
val_size: 0.1
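
Note that save_steps: 0.05 and eval_steps: 0.05 are fractions of the total optimizer steps, not absolute step counts; a quick derivation using the step count visible in the training-results table above (an estimate, since the exact dataset size is not listed):

```python
# eval_steps: 0.05 is resolved against total optimizer steps.
steps_per_epoch = 15_624          # from the results table: epoch ~1.0 at step 15624
num_train_epochs = 5

total_steps = steps_per_epoch * num_train_epochs    # 78,120
print(int(total_steps * 0.05))                      # 3,906 -- matches the eval cadence in the table
```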

9
train_results.json Normal file

@@ -0,0 +1,9 @@
{
"epoch": 5.0,
"num_input_tokens_seen": 245808128,
"total_flos": 1.43524334436719e+18,
"train_loss": 0.2808395623519733,
"train_runtime": 12345.7888,
"train_samples_per_second": 50.612,
"train_steps_per_second": 6.326
}

15641
trainer_log.jsonl Normal file

File diff suppressed because it is too large.

125183
trainer_state.json Normal file

File diff suppressed because it is too large.

3
training_args.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:989a294f6f42da4dc5b6e43a3e5204f9ea23ee3ae17ddfc438f23d7b940334e1
size 6289

BIN
training_eval_loss.png Normal file (binary image, 45 KiB; not shown)

BIN
training_loss.png Normal file (binary image, 38 KiB; not shown)