Initialize project; model provided by the ModelHub XC community

Model: open-sci/sft__ot30k_SmolLM2-1.7B-16k-SFT-Tulu3-decontaminated
Source: Original Platform
ModelHub XC
2026-04-28 08:43:03 +08:00
commit 2fa421e8d0
15 changed files with 259376 additions and 0 deletions

35
.gitattributes vendored Normal file

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
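These attribute rules route large binaries (weights, archives, serialized tensors) through Git LFS, so a fresh clone holds small pointer files until `git lfs pull` materializes the blobs. A minimal sketch for checking which files are still pointers (the glob is just an example):

```python
# Detect files that are still Git LFS pointers, i.e. not yet fetched.
from pathlib import Path

LFS_SPEC = b"version https://git-lfs.github.com/spec/v1"

def is_lfs_pointer(path: Path) -> bool:
    """A pointer file is tiny and starts with the LFS spec line."""
    try:
        with path.open("rb") as f:
            return f.read(len(LFS_SPEC)) == LFS_SPEC
    except OSError:
        return False

for p in Path(".").glob("*.safetensors"):
    print(p, "-> LFS pointer" if is_lfs_pointer(p) else "-> real weights")
```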

63
README.md Normal file

@@ -0,0 +1,63 @@
---
library_name: transformers
license: other
base_model: ali-elganzory/SmolLM2-1.7B-16k-SFT-Tulu3-decontaminated
tags:
- llama-factory
- full
- generated_from_trainer
datasets:
- arrow
model-index:
- name: sft__f679a5c592c8dffb__e049b46eacd6b07b194d__smollm2-steps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft__f679a5c592c8dffb__e049b46eacd6b07b194d__smollm2-steps
This model is a fine-tuned version of [ali-elganzory/SmolLM2-1.7B-16k-SFT-Tulu3-decontaminated](https://huggingface.co/ali-elganzory/SmolLM2-1.7B-16k-SFT-Tulu3-decontaminated) on the /gpfs/scratch/ehpc524/ot/hf_hub/datasets/open-thoughts_open_thoughts3-1.2_m_30000_samples/default/0.0.0/f679a5c592c8dffb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 256
- optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 5.5.0
- Pytorch 2.10.0+cu128
- Datasets 4.8.4
- Tokenizers 0.22.2
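For reference, a minimal usage sketch; the local directory name is hypothetical, since the card does not state this repo's hub id:

```python
# Minimal inference sketch, assuming the files from this commit live in
# ./smollm2-sft (hypothetical local path).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "./smollm2-sft"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
# `dtype=` is the spelling under the Transformers v5 pin above
# (older v4 releases use `torch_dtype=`).
model = AutoModelForCausalLM.from_pretrained(model_dir, dtype="bfloat16")

messages = [{"role": "user", "content": "Summarize YaRN scaling in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```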

12
all_results.json Normal file

@@ -0,0 +1,12 @@
{
"epoch": 5.0,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.49867311120033264,
"total_flos": 1367557351931904.0,
"train_loss": 1.0911384893985505,
"train_runtime": 4706.648,
"train_samples_per_second": 31.87,
"train_steps_per_second": 0.25,
"valid_targets_mean": 13716.8,
"valid_targets_min": 3353
}
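These aggregates are internally consistent with the card's 30,000-sample dataset and 5 epochs; a quick arithmetic check (values copied from all_results.json):

```python
# Cross-check the reported throughput against runtime.
runtime_s = 4706.648
samples_per_s = 31.87
steps_per_s = 0.25

samples_seen = runtime_s * samples_per_s   # ~150,001 samples
print(samples_seen / 5)                    # ~30,000 samples per epoch
print(runtime_s * steps_per_s)             # ~1,177 optimizer steps, in line
                                           # with the 1,176-line trainer_log.jsonl below
```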

25
chat_template.jinja Normal file

@@ -0,0 +1,25 @@
{%- for message in messages -%}
{%- if message["role"] == "system" -%}
{{- "<|system|>
" + message["content"] + "
" -}}
{%- elif message["role"] == "user" -%}
{{- "<|user|>
" + message["content"] + "
" -}}
{%- elif message["role"] == "assistant" -%}
{%- if not loop.last -%}
{{- "<|assistant|>
" + message["content"] + eos_token + "
" -}}
{%- else -%}
{{- "<|assistant|>
" + message["content"] + eos_token -}}
{%- endif -%}
{%- endif -%}
{%- if loop.last and add_generation_prompt -%}
{{- "<|assistant|>
" -}}
{%- endif -%}
{%- endfor -%}
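The template wraps each turn in `<|system|>`/`<|user|>`/`<|assistant|>` markers and appends `eos_token` after every completed assistant turn. A sketch of how it renders, using Jinja2 directly (in practice this goes through `tokenizer.apply_chat_template`); the eos value is an assumption taken from tokenizer_config.json below and is unused here since no assistant turn completes:

```python
# Render the chat template above standalone with Jinja2.
from jinja2 import Template

template = Template(open("chat_template.jinja").read())
prompt = template.render(
    messages=[
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "Hi!"},
    ],
    eos_token="<end_of_turn>",  # assumption: per tokenizer_config.json
    add_generation_prompt=True,
)
print(prompt)
# <|system|>
# You are helpful.
# <|user|>
# Hi!
# <|assistant|>
```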

34
config.json Normal file

@@ -0,0 +1,34 @@
{
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 0,
"dtype": "bfloat16",
"eos_token_id": 49152,
"head_dim": 64,
"hidden_act": "silu",
"hidden_size": 2048,
"initializer_range": 0.02,
"intermediate_size": 8192,
"max_position_embeddings": 16384,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 24,
"num_key_value_heads": 32,
"pad_token_id": 49152,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_parameters": {
"factor": 2.0,
"original_max_position_embeddings": 8192,
"rope_theta": 130000,
"rope_type": "yarn"
},
"tie_word_embeddings": true,
"transformers_version": "5.5.0",
"use_cache": false,
"vocab_size": 49216
}
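Note the `rope_parameters` block: YaRN scaling stretches the base model's 8192-token training window by `factor` to the 16384-token `max_position_embeddings`. A one-file sanity check:

```python
# Confirm the YaRN-extended context length implied by config.json.
import json

cfg = json.load(open("config.json"))
rope = cfg["rope_parameters"]
extended = rope["original_max_position_embeddings"] * rope["factor"]
assert extended == cfg["max_position_embeddings"]  # 8192 * 2.0 == 16384
print(f"{rope['rope_type']} scaling: 8192 -> {int(extended)} tokens")
```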

9
generation_config.json Normal file

@@ -0,0 +1,9 @@
{
"_from_model_config": true,
"bos_token_id": 0,
"eos_token_id": [
49152
],
"pad_token_id": 49152,
"transformers_version": "5.5.0"
}

3
model.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85e7a5710ec29dc3a5c03b85084eb0505de9156562c63a3160817632c6461ac2
size 3423040096
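This is a Git LFS pointer, not the weights themselves. A sketch for verifying a fetched blob against the pointer's oid and size, assuming `git lfs pull` has already materialized the file:

```python
# Verify the downloaded safetensors blob against the LFS pointer above.
import hashlib
from pathlib import Path

path = Path("model.safetensors")
expected = "85e7a5710ec29dc3a5c03b85084eb0505de9156562c63a3160817632c6461ac2"

h = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert path.stat().st_size == 3423040096
assert h.hexdigest() == expected
```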

12
run_summary.json Normal file

@@ -0,0 +1,12 @@
{
"agent_name": "f679a5c592c8dffb",
"training_start": null,
"training_end": null,
"created_by": "DCAgent",
"base_model_name": "/gpfs/scratch/ehpc524/ot/hf_hub/models--ali-elganzory--SmolLM2-1.7B-16k-SFT-Tulu3-decontaminated/snapshots/e049b46eacd6b07b194dd10dd55afa64f18e3a7d/",
"dataset_name": "/gpfs/scratch/ehpc524/ot/hf_hub/datasets/open-thoughts_open_thoughts3-1.2_m_30000_samples/default/0.0.0/f679a5c592c8dffb",
"training_type": "SFT",
"training_parameters": "https://huggingface.co/mlfoundations-dev/sft__f679a5c592c8dffb__e049b46eacd6b07b194d__smollm2-steps/blob/main/config.json",
"wandb_link": null,
"traces_location_s3": null
}

245001
tokenizer.json Normal file

File diff suppressed because it is too large.

19
tokenizer_config.json Normal file

@@ -0,0 +1,19 @@
{
"add_prefix_space": false,
"backend": "tokenizers",
"bos_token": "<|endoftext|>",
"clean_up_tokenization_spaces": false,
"eos_token": "<end_of_turn>",
"errors": "replace",
"extra_special_tokens": {
"<|im_end|>": "<|im_end|>"
},
"is_local": true,
"model_max_length": 16384,
"pad_token": "<end_of_turn>",
"padding_side": "right",
"split_special_tokens": false,
"tokenizer_class": "GPT2Tokenizer",
"unk_token": "<|endoftext|>",
"vocab_size": 49152
}
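A small sketch showing how these special tokens surface once the tokenizer is loaded from this directory (the path is illustrative):

```python
# Load the GPT2-style byte-level BPE tokenizer defined by the files above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(".")  # directory holding these files
print(tok.bos_token, tok.eos_token, tok.pad_token)
# <|endoftext|> <end_of_turn> <end_of_turn>
print(tok.model_max_length)               # 16384
```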

12
train_results.json Normal file

@@ -0,0 +1,12 @@
{
"epoch": 5.0,
"loss_nan_ranks": 0,
"loss_rank_avg": 0.49867311120033264,
"total_flos": 1367557351931904.0,
"train_loss": 1.0911384893985505,
"train_runtime": 4706.648,
"train_samples_per_second": 31.87,
"train_steps_per_second": 0.25,
"valid_targets_mean": 13716.8,
"valid_targets_min": 3353
}

1176
trainer_log.jsonl Normal file

File diff suppressed because it is too large.

12972
trainer_state.json Normal file

File diff suppressed because it is too large.

3
training_args.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9c2cb27c66aa56fb17b2d5967602a5616b9123164304222c557caa9a76b777ee
size 7953

BIN
training_loss.png Normal file

Binary file not shown.

Size: 44 KiB