Initialize the project; model provided by the ModelHub XC community
Model: laion/CoderForge-Preview-v3-316-axolotl__Qwen3-8B Source: Original Platform
---
library_name: transformers
base_model: Qwen/Qwen3-8B
tags:
- generated_from_trainer
datasets:
- laion/CoderForge-Preview-v3-316
model-index:
- name: e/data1/datasets/playground/ot-baf/checkpoints/cf-v3-316-axolotl__Qwen3-8B
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.16.0.dev0`
```yaml
# CoderForge v3 axolotl config template.
# Consumes the pre-tokenized laion/CoderForge-Preview-v3-<SIZE> datasets.
# Axolotl auto-detects pre-tokenized data via input_ids + attention_mask + labels
# (_is_dataset_already_tokenized) and skips chat_template rendering entirely.
# Fill in 316 via sed substitution.

base_model: Qwen/Qwen3-8B
deepspeed: /e/scratch/jureap59/feuer1/code/axolotl/deepspeed_configs/zero3_bf16.json

load_in_8bit: false
load_in_4bit: false

# plugins disabled 2026-04-22: CCE + bf16 + flash-attn on aarch64/torch2.9 caused
# gradient explosion (grad_norm 9.8e+11) and loss -> 0 within the first 3-7 steps.
# plugins:
#   - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin

# chat_template is still set so the tokenizer can be loaded, even though axolotl
# bypasses template rendering for pre-tokenized data.
chat_template: chatml
datasets:
  - path: laion/CoderForge-Preview-v3-316
    # No `type:` specified: axolotl's _is_dataset_already_tokenized() fires
    # early and returns the dataset as-is.
    ds_type: parquet

dataset_prepared_path: /e/data1/datasets/playground/ot-baf/axolotl_dataset_cache/cf-v3-316
output_dir: /e/data1/datasets/playground/ot-baf/checkpoints/cf-v3-316-axolotl__Qwen3-8B
# hub_model_id: laion/CoderForge-Preview-v3-316-axolotl__Qwen3-8B
# hub_strategy: end

# Upstream pre-tokenized sequences can exceed 80k tokens; matches the Sera v3 truncation.
sequence_len: 32768

wandb_project:
wandb_entity:
wandb_watch:
wandb_name: cf-v3-316-axolotl__Qwen3-8B
wandb_log_model:

# Matches the upstream SERA config's optimization hyperparameters for an apples-to-apples comparison.
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 1e-5
adam_beta1: 0.9
adam_beta2: 0.95

bf16: auto
tf32: false

gradient_checkpointing: true
activation_offloading: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

warmup_ratio: 0.1875
evals_per_epoch: 0
save_strategy: epoch

weight_decay: 0.01
max_grad_norm: 1.0
special_tokens:

```

</details><br>
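
The config comments note that axolotl decides whether to skip chat-template rendering purely from the dataset's column names. A minimal sketch of what a qualifying pre-tokenized row looks like, with invented token values (the real rows come from the laion/CoderForge-Preview-v3-316 parquet splits):

```python
# Sketch of the column layout axolotl's pre-tokenized detection keys off.
# Token ids below are invented; only the three column names matter here.
from datasets import Dataset

ds = Dataset.from_dict({
    "input_ids":      [[151644, 872, 198, 9707, 151645]],
    "attention_mask": [[1, 1, 1, 1, 1]],
    "labels":         [[-100, -100, -100, 9707, 151645]],  # -100 = excluded from the loss
})

# With input_ids + attention_mask + labels all present,
# _is_dataset_already_tokenized() returns the dataset as-is and no
# chat_template rendering happens.
assert {"input_ids", "attention_mask", "labels"} <= set(ds.column_names)
```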

# e/data1/datasets/playground/ot-baf/checkpoints/cf-v3-316-axolotl__Qwen3-8B

This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the laion/CoderForge-Preview-v3-316 dataset.
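
The auto-generated card stops short of a usage example, so here is a minimal inference sketch with transformers. It assumes the checkpoint was pushed under the `hub_model_id` that is commented out in the config above; the prompt and generation settings are illustrative only.

```python
# Minimal inference sketch. The repo id comes from the commented-out
# hub_model_id in the config and is an assumption, not a guarantee that
# the checkpoint is published there.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "laion/CoderForge-Preview-v3-316-axolotl__Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

# The config sets chat_template: chatml, so go through the tokenizer's template.
messages = [{"role": "user", "content": "Write a function that reverses a linked list."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```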

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a short sketch after this list shows how the derived totals follow from the base values):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: adamw_torch with betas=(0.9, 0.95) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 9
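
The derived entries above follow mechanically from the config values; a quick sketch of the arithmetic, assuming the ceil rounding that transformers' Trainer applies when computing warmup steps from a ratio:

```python
import math

# Base values from the axolotl config above.
micro_batch_size = 1            # per-device train batch size
gradient_accumulation_steps = 8
num_devices = 4
warmup_ratio = 0.1875
training_steps = 9

# total_train_batch_size = 1 * 8 * 4 = 32
assert micro_batch_size * gradient_accumulation_steps * num_devices == 32

# total_eval_batch_size = 1 * 4 = 4 (no gradient accumulation at eval time)
assert 1 * num_devices == 4

# warmup: ceil(0.1875 * 9) = ceil(1.6875) = 2 reported warmup steps
assert math.ceil(warmup_ratio * training_steps) == 2
```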

### Training results

### Framework versions

- Transformers 5.5.0
- Pytorch 2.9.1+cu130
- Datasets 4.5.0
- Tokenizers 0.22.2