---
library_name: transformers
base_model: Qwen/Qwen3-8B
tags:
- generated_from_trainer
datasets:
- laion/Sera-4.6-Lite-T2-v4-1000
model-index:
- name: e/data1/datasets/playground/ot-baf/checkpoints/sera-v4-1000-axolotl__Qwen3-8B-v7
  results: []
---

Built with Axolotl

See axolotl config

axolotl version: 0.16.0.dev0

# Sera v6 — scale data 316→1000 + num_epochs 3→6.
#
# Background: Sera v3 (316 rows × 6 epochs, SLURM 391242) passed turn-1 cleanly
# but collapsed at turn-3+ (degenerate tokens, 4.4.4.4… or for-the-for-the…)
# once a tool observation >~20 KB entered context. Greedy decoding didn't save
# it, so the root cause is under-training rather than sampling. See
# /Users/benjaminfeuer/Documents/notes/ot-agent/sera_braces_diagnosis.md for
# evidence (per-token probe + turn-3 replay).
#
# v6 = F3 fix: 3× more rows to give the model enough updates to stay stable
# in long multi-turn contexts.

base_model: Qwen/Qwen3-8B
deepspeed: /e/scratch/jureap59/feuer1/code/axolotl/deepspeed_configs/zero3_bf16.json

load_in_8bit: false
load_in_4bit: false

chat_template: tokenizer_default
datasets:
  # (an illustrative row in this format is sketched just after the config)
  - path: laion/Sera-4.6-Lite-T2-v4-1000
    type: chat_template
    field_messages: messages
    ds_type: json
    message_field_training: train

dataset_prepared_path: /e/data1/datasets/playground/ot-baf/axolotl_dataset_cache/sera-v4-1000-v7
output_dir: /e/data1/datasets/playground/ot-baf/checkpoints/sera-v4-1000-axolotl__Qwen3-8B-v7

sequence_len: 32768

wandb_project:
wandb_entity:
wandb_watch:
wandb_name: sera-v4-1000-axolotl__Qwen3-8B-v7
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 12
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 1e-5
adam_beta1: 0.9
adam_beta2: 0.95

bf16: auto
tf32: false

gradient_checkpointing: true
activation_offloading: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

warmup_ratio: 0.1875
evals_per_epoch: 0
save_strategy: epoch

weight_decay: 0.01
max_grad_norm: 1.0
special_tokens:
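
The datasets stanza above uses Axolotl's chat_template loader with a per-message training flag. As a purely illustrative sketch (field names taken from the config; the actual contents of laion/Sera-4.6-Lite-T2-v4-1000 are not reproduced here), one input row would look roughly like this:

```python
# Hypothetical shape of one dataset row, matching field_messages: messages and
# message_field_training: train from the config above. Messages whose "train"
# flag is false serve as context only and are masked out of the loss.
row = {
    "messages": [
        {"role": "system",    "content": "<agent system prompt>",       "train": False},
        {"role": "user",      "content": "<task description>",          "train": False},
        {"role": "assistant", "content": "<tool call>",                 "train": True},
        {"role": "tool",      "content": "<large tool observation>",    "train": False},
        {"role": "assistant", "content": "<final answer for the turn>", "train": True},
    ]
}
```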


# e/data1/datasets/playground/ot-baf/checkpoints/sera-v4-1000-axolotl__Qwen3-8B-v7

This model is a fine-tuned version of Qwen/Qwen3-8B on the /e/data1/datasets/playground/ot-baf/hf_hub/datasets--laion--Sera-4.6-Lite-T2-v4-1000/snapshots/310c2661cea97bd8eb283374416193b64733fffb/sera-4.6-lite-t2_v4_1000.jsonl dataset.
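
A minimal usage sketch, assuming the checkpoint at the output_dir above (substitute the published repo id if the model is on the Hub): it loads the model with transformers, builds a prompt through the tokenizer's default chat template (chat_template: tokenizer_default), and prints the probability of each greedily decoded token, in the spirit of the per-token probe mentioned in the config comments, so degenerate loops such as 4.4.4.4… are easy to spot.

```python
# Sketch only: the checkpoint path and the example conversation are placeholders;
# the original diagnostic lives in sera_braces_diagnosis.md and may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "/e/data1/datasets/playground/ot-baf/checkpoints/sera-v4-1000-axolotl__Qwen3-8B-v7"

tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, device_map="auto")  # requires accelerate

# Replace with a real multi-turn conversation (e.g. a turn-3 replay containing a
# large tool observation) to reproduce the behaviour discussed in the config comments.
messages = [{"role": "user", "content": "Summarize the attached build log."}]
prompt = tok.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tok(prompt, return_tensors="pt").to(model.device)

out = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=False,                 # greedy decoding, as in the original diagnosis
    return_dict_in_generate=True,
    output_scores=True,
)

# Print each generated token with its probability; a degenerate loop shows up as
# the same short token repeating with probability close to 1.
prompt_len = inputs["input_ids"].shape[1]
for step, scores in enumerate(out.scores):
    token_id = int(out.sequences[0, prompt_len + step])
    prob = torch.softmax(scores[0].float(), dim=-1)[token_id].item()
    print(f"{step:3d} {tok.decode([token_id])!r} p={prob:.3f}")
```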

## Model description

A full-parameter fine-tune of Qwen/Qwen3-8B on the Sera v4 multi-turn agent data (1000 rows, 32k sequence length), aimed at keeping generation stable in long multi-turn, tool-using contexts where earlier Sera checkpoints degenerated (see the config comments above).

## Intended uses & limitations

More information needed

## Training and evaluation data

Training used laion/Sera-4.6-Lite-T2-v4-1000 (the ~1000-row scale-up of the earlier 316-row Sera set described in the config comments), loaded from the local snapshot path shown above. No evaluation split was configured (evals_per_epoch: 0).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (the derived values are cross-checked just after the list):

- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.95), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- training_steps: 218
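
As a quick consistency check (plain arithmetic over the values above and the axolotl config, not taken from training logs):

```python
# Effective batch size: micro_batch_size x gradient_accumulation_steps x num_devices.
micro_batch_size = 1
gradient_accumulation_steps = 8
num_devices = 4
print(micro_batch_size * gradient_accumulation_steps * num_devices)  # 32, matching total_train_batch_size

# Warmup: warmup_ratio (0.1875 in the config) applied to the 218 training steps.
print(0.1875 * 218)  # 40.875, consistent with the 40 warmup steps reported (rounding differs by implementation)
```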

### Training results

### Framework versions

- Transformers 5.5.0
- Pytorch 2.9.1+cu130
- Datasets 4.5.0
- Tokenizers 0.22.2