---
library_name: transformers
base_model: Qwen/Qwen3-8B
tags:
- generated_from_trainer
datasets:
- laion/CoderForge-Preview-v3-1000
model-index:
- name: e/data1/datasets/playground/ot-baf/checkpoints/cf-v3-1000-axolotl__Qwen3-8B
  results: []
---

Built with Axolotl

See axolotl config

axolotl version: 0.16.0.dev0

# CoderForge v3 axolotl config template.
# Consumes the pre-tokenized laion/CoderForge-Preview-v3-<SIZE> datasets.
# Axolotl auto-detects pre-tokenized via input_ids + attention_mask + labels
# (_is_dataset_already_tokenized) and skips chat_template rendering entirely.
# Fill 1000 via sed-substitution.

base_model: Qwen/Qwen3-8B
deepspeed: /e/scratch/jureap59/feuer1/code/axolotl/deepspeed_configs/zero3_bf16.json

load_in_8bit: false
load_in_4bit: false

# plugins disabled 2026-04-22: CCE + bf16 + flash-attn on aarch64/torch2.9 caused
# gradient explosion (grad_norm 9.8e+11) and loss -> 0 within first 3-7 steps.
# plugins:
#   - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin

# chat_template still set so tokenizer can be loaded, even though axolotl
# bypasses template rendering for pre-tokenized data.
chat_template: chatml
datasets:
  - path: laion/CoderForge-Preview-v3-1000
    # No `type:` specified — axolotl's _is_dataset_already_tokenized() fires
    # early and returns the dataset as-is.
    ds_type: parquet

dataset_prepared_path: /e/data1/datasets/playground/ot-baf/axolotl_dataset_cache/cf-v3-1000
output_dir: /e/data1/datasets/playground/ot-baf/checkpoints/cf-v3-1000-axolotl__Qwen3-8B
# hub_model_id: laion/CoderForge-Preview-v3-1000-axolotl__Qwen3-8B
# hub_strategy: end

# Upstream pre-tokenized sequences can exceed 80k tokens; matches SERA v3 truncation.
sequence_len: 32768

wandb_project:
wandb_entity:
wandb_watch:
wandb_name: cf-v3-1000-axolotl__Qwen3-8B
wandb_log_model:

# Matches upstream SERA config's optimization hparams for apples-to-apples.
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 1e-5
adam_beta1: 0.9
adam_beta2: 0.95

bf16: auto
tf32: false

gradient_checkpointing: true
activation_offloading: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

warmup_ratio: 0.1875
evals_per_epoch: 0
save_strategy: epoch

weight_decay: 0.01
max_grad_norm: 1.0
special_tokens:
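
The config leans on Axolotl's pre-tokenized fast path described in the comments above: rows that already carry input_ids, attention_mask, and labels are consumed as-is, with no chat-template rendering. A minimal sketch of that row schema and a matching presence check, using placeholder token IDs rather than real CoderForge data:

```python
# Sketch of the pre-tokenized schema the config comments describe.
# Token IDs are illustrative placeholders, not CoderForge-Preview-v3 values.
from datasets import Dataset

rows = {
    "input_ids":      [[101, 2023, 2003, 102]],
    "attention_mask": [[1, 1, 1, 1]],
    "labels":         [[-100, -100, 2003, 102]],  # -100 excludes a token from the loss
}
ds = Dataset.from_dict(rows)

def looks_pretokenized(dataset) -> bool:
    # Mirrors the check the comments attribute to _is_dataset_already_tokenized():
    # all three columns must be present for the fast path to fire.
    return {"input_ids", "attention_mask", "labels"} <= set(dataset.column_names)

assert looks_pretokenized(ds)
```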

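The loss_watchdog_* keys give the run an early-abort condition on divergent loss. A sketch of the semantics under that reading (illustrative Python, not Axolotl's implementation): kill the run once the loss has exceeded the threshold for `patience` consecutive logged steps.

```python
# Illustrative loss-watchdog semantics: abort after `patience` consecutive
# steps whose loss exceeds `threshold` (defaults mirror the config above).
def loss_watchdog(losses, threshold=5.0, patience=3):
    bad_streak = 0
    for step, loss in enumerate(losses, start=1):
        bad_streak = bad_streak + 1 if loss > threshold else 0
        if bad_streak >= patience:
            raise RuntimeError(f"loss diverged at step {step}: {loss:.2f} > {threshold}")

loss_watchdog([1.2, 0.9, 6.1, 7.3, 0.8])  # streak resets before hitting patience
```

Note this guards against loss spiking upward; it is the opposite failure mode from the loss-to-zero collapse recorded in the plugins comment.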

e/data1/datasets/playground/ot-baf/checkpoints/cf-v3-1000-axolotl__Qwen3-8B

This model is a fine-tuned version of Qwen/Qwen3-8B on the laion/CoderForge-Preview-v3-1000 dataset.
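
A minimal inference sketch, assuming the checkpoint was pushed under the hub id that is commented out in the config (hub_model_id); point model_id at the local output_dir otherwise:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed hub id, taken from the commented-out hub_model_id in the config.
model_id = "laion/CoderForge-Preview-v3-1000-axolotl__Qwen3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a function that reverses a linked list."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```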

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • total_eval_batch_size: 4
  • optimizer: adamw_torch with betas=(0.9, 0.95), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 5
  • training_steps: 31
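
The two totals above are derived from the parallel setup rather than set directly: total_train_batch_size = micro_batch_size × gradient_accumulation_steps × num_devices = 1 × 8 × 4 = 32, and total_eval_batch_size = eval_batch_size × num_devices = 1 × 4 = 4.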

Training results

No evaluation metrics were logged during training (the config sets evals_per_epoch: 0).
Framework versions

  • Transformers 5.5.0
  • Pytorch 2.9.1+cu130
  • Datasets 4.5.0
  • Tokenizers 0.22.2