---
base_model: unsloth/Qwen3-4B-Instruct-2507
datasets:
- u-10bei/structured_data_with_cot_dataset_512_v5
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- full-finetune
- structured-output
---

# qwen3-4b-structured-output-lora

This repository provides a **full fine-tuned model** based on **unsloth/Qwen3-4B-Instruct-2507**, trained with **BF16 full fine-tuning + NEFTune**.

This repository contains the **complete model weights**, not LoRA adapters.

## Training Objective

This model is trained to improve **structured output accuracy** across common formats (JSON / YAML / XML / TOML / CSV).

CoT (Chain-of-Thought) content is removed from the training data during preprocessing, so the model learns to produce the structured output directly.

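As a rough illustration of that preprocessing step: if the dataset's CoT were wrapped in `<think>...</think>` tags (an assumption on our part; the card does not describe the dataset format), stripping it could look like this:

```python
import re

# Hypothetical CoT-removal step. The assumption that reasoning is wrapped
# in <think>...</think> tags is ours; the card only says CoT is removed.
def strip_cot(text: str) -> str:
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)

print(strip_cot('<think>Build the object field by field.</think>{"name": "Alice"}'))
# -> {"name": "Alice"}
```
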
## Training Configuration

- Base model: unsloth/Qwen3-4B-Instruct-2507
- Method: Full fine-tuning (BF16)
- Max sequence length: 2048
- Epochs: 1
- Learning rate: 2e-05
- NEFTune noise alpha: 5.0
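
For reference, these hyperparameters map onto `transformers.TrainingArguments` roughly as follows. This is an illustrative sketch, not the actual training script (which is not published); `output_dir` is a placeholder, and the 2048-token limit would be enforced at tokenization time rather than here.

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters onto TrainingArguments.
# The actual training script is not published; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="qwen3-4b-structured-output",  # placeholder
    num_train_epochs=1,
    learning_rate=2e-5,
    bf16=True,                   # BF16 full fine-tuning
    neftune_noise_alpha=5.0,     # NEFTune embedding noise
)
```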

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "84basi/lora-10-1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # load weights in BF16, matching training
    device_map="auto",
)
```
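
Once loaded, a structured-output request can be issued through the tokenizer's chat template. The prompt wording and generation settings below are illustrative assumptions, not documented behavior of this checkpoint:

```python
# Illustrative generation example; the prompt and max_new_tokens value
# are assumptions, not documented settings for this model.
messages = [
    {"role": "user",
     "content": "Return this record as JSON with keys 'name' and 'age': Alice, 30."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Since the model targets machine-readable formats, it can be useful to validate the result with a parser such as `json.loads` before downstream use.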

## Sources & Terms (IMPORTANT)

- Training data: u-10bei/structured_data_with_cot_dataset_512_v5
- Dataset license: MIT License. The dataset is used and distributed under the terms of the MIT License.
- Compliance: Users must comply with the MIT License (including preservation of the copyright notice) and with the base model's original terms of use.