---
license: apache-2.0
tags:
- unsloth
- trl
- sft
- code
- reasoning
datasets:
- nvidia/OpenCodeReasoning
language:
- en
base_model: Qwen/Qwen3-0.6B
pipeline_tag: text-generation
library_name: transformers
---

# Qwen3-0.6B-Code-Expert

This project performs full fine-tuning on the Qwen3-0.6B language model to enhance its code reasoning and generation capabilities. Training was conducted exclusively on the nvidia/OpenCodeReasoning dataset, and the model was optimized using the bfloat16 (bf16) data type.

## Training Procedure

1. **Dataset Preparation**
   - The nvidia/OpenCodeReasoning dataset was used.
   - Each example consists of code snippets paired with detailed step-by-step reasoning in Chain-of-Thought (CoT) style.
2. **Model Loading and Configuration**
   - The Qwen3-0.6B base model weights were loaded via the `unsloth` library in bf16 precision.
   - Full fine-tuning (`full_finetuning=True`) was applied to all layers for optimal adaptation to code reasoning.
3. **Supervised Fine-Tuning**
   - The Hugging Face TRL library was used with the Supervised Fine-Tuning (SFT) approach.
   - The model was trained to generate correct code solutions along with the corresponding reasoning chains.
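The three steps above can be sketched as follows. This is a minimal illustration, not the author's actual training script: the dataset field names (`input`, `output`), the chat template, and every hyperparameter marked below are assumptions the card does not state, and the `unsloth`/TRL call signatures vary slightly between releases.

```python
# Sketch of the training pipeline. Field names, the template, and all
# hyperparameters marked "assumption" are NOT stated in the model card.

def format_example(example):
    """Fold a problem and its CoT answer into one training string
    (hypothetical template; adjust to the real dataset schema)."""
    return {
        "text": (
            f"<|im_start|>user\n{example['input']}<|im_end|>\n"
            f"<|im_start|>assistant\n{example['output']}<|im_end|>\n"
        )
    }

def train():
    # Heavy imports live inside the function so the sketch reads without
    # unsloth/trl installed; run this on a CUDA machine.
    import torch
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="Qwen/Qwen3-0.6B",
        max_seq_length=4096,          # assumption: not given in the card
        dtype=torch.bfloat16,         # bf16, as described above
        load_in_4bit=False,           # no quantization for full fine-tuning
        full_finetuning=True,         # update all layers
    )

    dataset = load_dataset("nvidia/OpenCodeReasoning", split="train")
    dataset = dataset.map(format_example)

    trainer = SFTTrainer(
        model=model,
        processing_class=tokenizer,   # `tokenizer=` in older TRL releases
        train_dataset=dataset,
        args=SFTConfig(
            dataset_text_field="text",
            per_device_train_batch_size=2,   # assumption
            gradient_accumulation_steps=8,   # assumption
            learning_rate=2e-5,              # assumption
            num_train_epochs=1,              # assumption
            bf16=True,
            output_dir="outputs",
        ),
    )
    trainer.train()
```

`format_example` is the only part that runs without a GPU; the rest requires `unsloth`, `trl`, `datasets`, and network access to the Hugging Face Hub.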

## Purpose and Outcome

- The model's capacity for understanding, reasoning about, and generating code was significantly improved through specialized, single-dataset training in bf16 precision.
- Outputs include both intermediate reasoning steps and final code solutions, enabling transparent and interpretable code generation.
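Since the card advertises a standard `transformers` text-generation model, inference can be sketched as below. The repository id comes from this card; the generation settings are assumptions, and the helper name is hypothetical.

```python
def generate_solution(prompt: str, max_new_tokens: int = 1024) -> str:
    """Query the fine-tuned checkpoint; the reply should contain both
    the reasoning chain and the final code. Run where torch/transformers
    are installed and the checkpoint can be downloaded."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "suayptalha/Qwen3-0.6B-Code-Expert"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo, torch_dtype=torch.bfloat16  # bf16, matching the training setup
    )

    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens; keep only the newly generated text.
    return tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```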

## License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.

## Support

Buy Me A Coffee