---
library_name: transformers
base_model: Qwen/Qwen3-8B
tags:
- generated_from_trainer
# datasets: (stripped — axolotl embedded invalid local path)
_datasets_:
- laion/Sera-4.6-Lite-T2-v4-316
model-index:
- name: sera-v4-316-axolotl__Qwen3-8B-v3
  results: []
---
[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.16.0.dev0`
```yaml
# Sera v4 retrain v3 — size=316, num_epochs doubled 3->6
# v2 (num_epochs=3, tokenizer_default) recovered structure but model still
# emits malformed JSON ({"name": "view"}}} — 3 closing braces) and collapses
# str_replace_editor -> inner arguments.command='view'. Training data confirmed
# clean (5 rows / 140 tool calls / 0 malformed / tool names str_replace_editor/
# bash/submit). Hypothesis: ~120 grad updates at size=316 is too few to latch
# onto the nested JSON structure + tool-name dictionary; doubling passes to 6 epochs.
base_model: Qwen/Qwen3-8B
deepspeed: /e/scratch/jureap59/feuer1/code/axolotl/deepspeed_configs/zero3_bf16.json
load_in_8bit: false
load_in_4bit: false
chat_template: tokenizer_default
# datasets: (stripped — axolotl embedded invalid local path)
_datasets_:
  - path: laion/Sera-4.6-Lite-T2-v4-316
    type: chat_template
    field_messages: messages
    ds_type: json
    message_field_training: train
dataset_prepared_path: /e/data1/datasets/playground/ot-baf/axolotl_dataset_cache/sera-v4-316-v3
output_dir: /e/data1/datasets/playground/ot-baf/checkpoints/sera-v4-316-axolotl__Qwen3-8B-v3
sequence_len: 32768
wandb_project:
wandb_entity:
wandb_watch:
wandb_name: sera-v4-316-axolotl__Qwen3-8B-v3
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 6
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 1e-5
adam_beta1: 0.9
adam_beta2: 0.95
bf16: auto
tf32: false
gradient_checkpointing: true
activation_offloading: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_ratio: 0.1875
evals_per_epoch: 0
save_strategy: epoch
weight_decay: 0.01
max_grad_norm: 1.0
special_tokens:
```

</details>
# sera-v4-316-axolotl__Qwen3-8B-v3

This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the laion/Sera-4.6-Lite-T2-v4-316 dataset.
## Model description
Retrain (v3) of Qwen3-8B on the Sera v4 tool-calling data (size=316), with `num_epochs` doubled from 3 to 6. The v2 run (3 epochs, `tokenizer_default` chat template) recovered output structure but still emitted malformed tool-call JSON (e.g. `{"name": "view"}}}` with three closing braces) and collapsed `str_replace_editor` calls into an inner `arguments.command='view'`. The working hypothesis, per the config notes above, is that the roughly 120 gradient updates of the earlier runs were too few to latch onto the nested JSON structure and the tool-name dictionary; the sketch below illustrates the arithmetic.
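The epoch-doubling rationale is about raw optimizer-update counts. Below is a rough estimator (a hypothetical helper, not the training code); it ignores sequence packing and sample filtering, which is why the reported `training_steps` need not match the naive figure.

```python
# Back-of-envelope optimizer-update count for a data-parallel run.
# Hypothetical helper: ignores sequence packing and sample filtering,
# so the reported training_steps (34) need not match this estimate.
import math

def optimizer_updates(num_samples: int, micro_batch_size: int,
                      grad_accum: int, num_devices: int,
                      num_epochs: int) -> int:
    """ceil(samples / effective batch size) * epochs."""
    effective_batch = micro_batch_size * grad_accum * num_devices
    return math.ceil(num_samples / effective_batch) * num_epochs

# This run: size=316, micro_batch_size=1, gradient_accumulation_steps=8,
# 4 GPUs, num_epochs=6.
print(optimizer_updates(316, 1, 8, 4, 6))  # -> 60
```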
## Intended uses & limitations
More information needed
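Pending that, here is a hypothetical loading sketch. It assumes the checkpoint at this run's `output_dir` and the tokenizer's default chat template (`chat_template: tokenizer_default` in the config); it is not confirmed inference guidance from the authors.

```python
# Hypothetical usage sketch: load this run's checkpoint from output_dir
# and generate with the tokenizer's default chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "/e/data1/datasets/playground/ot-baf/checkpoints/sera-v4-316-axolotl__Qwen3-8B-v3"
tok = AutoTokenizer.from_pretrained(ckpt)
# dtype= is the newer transformers spelling; older versions use torch_dtype=.
model = AutoModelForCausalLM.from_pretrained(ckpt, dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "List the files in the current directory."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```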
## Training and evaluation data
The model was trained on laion/Sera-4.6-Lite-T2-v4-316 (size=316). Per the config notes, the training data was audited and confirmed clean: 5 rows / 140 tool calls / 0 malformed, with tool names limited to `str_replace_editor`, `bash`, and `submit`. No evaluation split was used (`evals_per_epoch: 0`). A sketch of such an audit follows.
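This is a minimal sketch of that kind of audit, assuming OpenAI-style `tool_calls` entries with JSON-encoded `function.arguments`; the actual row schema is not shown in this card, so the field names are assumptions.

```python
# Audit tool calls in the dataset: count them, collect tool names, and
# flag any whose arguments fail to parse as JSON. Field names below
# assume an OpenAI-style schema and may need adapting.
import json
from datasets import load_dataset

ds = load_dataset("laion/Sera-4.6-Lite-T2-v4-316", split="train")

total = malformed = 0
names = set()
for row in ds:
    for msg in row["messages"]:
        for call in msg.get("tool_calls") or []:
            total += 1
            names.add(call["function"]["name"])
            try:
                json.loads(call["function"]["arguments"])
            except (TypeError, ValueError):
                malformed += 1

print(f"{total} tool calls, {malformed} malformed, tools={sorted(names)}")
```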
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.95), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 6
- training_steps: 34
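
For reference, the resulting schedule (6 linear warmup steps into cosine decay over 34 steps) can be reproduced stand-alone with transformers' scheduler helper; the parameter and optimizer below are stand-ins, not the training setup.

```python
# Reproduce the run's LR curve shape: linear warmup then cosine decay.
# Stand-in parameter/optimizer; numbers mirror the card's hyperparameters.
import torch
from transformers import get_cosine_schedule_with_warmup

param = torch.nn.Parameter(torch.zeros(1))
opt = torch.optim.AdamW([param], lr=1e-5, betas=(0.9, 0.95),
                        eps=1e-8, weight_decay=0.01)
sched = get_cosine_schedule_with_warmup(
    opt, num_warmup_steps=6, num_training_steps=34)

lrs = []
for _ in range(34):
    opt.step()
    sched.step()
    lrs.append(sched.get_last_lr()[0])

print(f"peak lr {max(lrs):.2e} at step {lrs.index(max(lrs)) + 1}, "
      f"final lr {lrs[-1]:.2e}")
```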
### Training results
### Framework versions
- Transformers 5.5.0
- Pytorch 2.9.1+cu130
- Datasets 4.5.0
- Tokenizers 0.22.2