---
base_model: Hyeongwon/Qwen3-4B-Base
library_name: transformers
model_name: P2-split2_bs512_epoch10_2e-5_prob_Qwen3-4B-Base_0320-01
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for P2-split2_bs512_epoch10_2e-5_prob_Qwen3-4B-Base_0320-01

This model is a fine-tuned version of Hyeongwon/Qwen3-4B-Base. It has been trained using TRL.

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Load the fine-tuned checkpoint as a chat-style text-generation pipeline (GPU recommended).
generator = pipeline("text-generation", model="Hyeongwon/P2-split2_bs512_epoch10_2e-5_prob_Qwen3-4B-Base_0320-01", device="cuda")

# Pass the prompt as a chat message and return only the newly generated text.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
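
If you prefer to drive generation manually rather than through the pipeline helper, the sketch below loads the same checkpoint with `AutoModelForCausalLM` and applies the tokenizer's chat template. This is not part of the original card; it assumes the checkpoint ships a chat template and that a CUDA device is available.

```python
# Manual loading sketch (not from the original card); assumes the tokenizer
# provides a chat template and a GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hyeongwon/P2-split2_bs512_epoch10_2e-5_prob_Qwen3-4B-Base_0320-01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Which would you choose: visiting the past or the future?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
generated = model.generate(inputs, max_new_tokens=128)

# Decode only the tokens generated after the prompt.
print(tokenizer.decode(generated[0][inputs.shape[-1]:], skip_special_tokens=True))
```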

## Training procedure

Visualize in Weights & Biases

This model was trained with SFT.
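
The card does not include the training script. The following is a minimal sketch of an SFT run with TRL's `SFTTrainer`; the epoch count and learning rate are inferred from the model name (epoch10, 2e-5), the per-device batch size and accumulation are illustrative stand-ins for the effective batch size of 512, and the dataset is a placeholder rather than the one actually used.

```python
# Minimal SFT sketch with TRL's SFTTrainer. Hyperparameters are inferred from the
# model name; the dataset is a placeholder, not the data used for this checkpoint.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

training_args = SFTConfig(
    output_dir="P2-split2_bs512_epoch10_2e-5_prob_Qwen3-4B-Base_0320-01",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,   # effective batch size also depends on GPU count
    learning_rate=2e-5,
    report_to="wandb",
)

trainer = SFTTrainer(
    model="Hyeongwon/Qwen3-4B-Base",
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```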

### Framework versions

- TRL: 0.25.1
- Transformers: 4.57.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.22.2

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```