laion/Sera-4.6-Lite-T2-v4-1000-axolotl__Qwen3-8B
Model card metadata: library_name, base_model, tags, datasets, model-index

| library_name | base_model | datasets |
|---|---|---|
| transformers | Qwen/Qwen3-8B | laion/Sera-4.6-Lite-T2-v4-1000 |
See axolotl config
axolotl version: `0.16.0.dev0`

```yaml
# Sera v4 axolotl config template — consumes laion/Sera-4.6-Lite-T2-v4-<SIZE>
# where tool_calls are already pre-rendered into content as <tool_call>...</tool_call>
# (Hermes/Qwen3 wire format) per SERA's transform_traj_hermes. Chatml passes the
# wire tokens through into input_ids + labels so tool calls are in the loss.
#
# Fill 1000 via sed-substitution.
base_model: Qwen/Qwen3-8B
deepspeed: /e/scratch/jureap59/feuer1/code/axolotl/deepspeed_configs/zero3_bf16.json
load_in_8bit: false
load_in_4bit: false
# CCE disabled (aarch64/torch2.9 grad explosion — see baselines/sera/README.md)
# plugins:
# - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
chat_template: chatml
datasets:
- path: laion/Sera-4.6-Lite-T2-v4-1000
data_files:
- sera-4.6-lite-t2_v4_1000.jsonl
type: chat_template
field_messages: messages
ds_type: json
message_field_training: train
dataset_prepared_path: /e/data1/datasets/playground/ot-baf/axolotl_dataset_cache/sera-v4-1000
output_dir: /e/data1/datasets/playground/ot-baf/checkpoints/sera-v4-1000-axolotl__Qwen3-8B
sequence_len: 32768
wandb_project:
wandb_entity:
wandb_watch:
wandb_name: sera-v4-1000-axolotl__Qwen3-8B
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 1e-5
adam_beta1: 0.9
adam_beta2: 0.95
bf16: auto
tf32: false
gradient_checkpointing: true
activation_offloading: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_ratio: 0.1875
evals_per_epoch: 0
save_strategy: epoch
weight_decay: 0.01
max_grad_norm: 1.0
special_tokens:
```
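For context on the dataset format mentioned in the config comments: tool calls arrive already serialized into assistant content in the Hermes/Qwen3 wire format, and the per-message `train` flag corresponds to `message_field_training: train`. A hypothetical record (tool name, arguments, and text are illustrative, not taken from the dataset) might look like:

```python
# Hypothetical training record in the pre-rendered wire format described by the
# config comments above. Field values are illustrative, not from the dataset.
record = {
    "messages": [
        {"role": "user", "content": "What's the weather in Berlin?", "train": False},
        {
            "role": "assistant",
            # The tool call is already rendered into content, so the chatml
            # template passes these tokens straight into input_ids and labels.
            "content": (
                "<tool_call>\n"
                '{"name": "get_weather", "arguments": {"city": "Berlin"}}\n'
                "</tool_call>"
            ),
            "train": True,
        },
    ]
}
```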
/e/data1/datasets/playground/ot-baf/checkpoints/sera-v4-1000-axolotl__Qwen3-8B
This model is a fine-tuned version of Qwen/Qwen3-8B on the laion/Sera-4.6-Lite-T2-v4-1000 dataset.
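A minimal inference sketch with transformers, assuming the fine-tuned weights are published under the repo id in the title above (laion/Sera-4.6-Lite-T2-v4-1000-axolotl__Qwen3-8B) and that the saved tokenizer carries the ChatML chat template used for training:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from the model name above; point this at a local checkpoint
# directory instead if the weights are not on the Hub.
model_id = "laion/Sera-4.6-Lite-T2-v4-1000-axolotl__Qwen3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain in one sentence what a tool call is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```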
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a quick arithmetic check on the derived values follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.95), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 54
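A consistency check on the derived values above (plain arithmetic, not part of any training script):

```python
# Derived batch size: micro batch x gradient accumulation steps x devices.
micro_batch_size = 1
gradient_accumulation_steps = 8
num_devices = 4
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 32

# Warmup implied by warmup_ratio over the full run: 54 * 0.1875 = 10.125,
# consistent with the 10 warmup steps reported above.
training_steps = 54
warmup_ratio = 0.1875
warmup_steps = int(training_steps * warmup_ratio)
assert warmup_steps == 10
```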
Training results
Framework versions
- Transformers 5.5.0
- Pytorch 2.9.1+cu130
- Datasets 4.5.0
- Tokenizers 0.22.2