Initialize project; model provided by the ModelHub XC community

Model: laion/CoderForge-Preview-v6-1000-axolotl__Qwen3-8B-v8
Source: Original Platform
ModelHub XC
2026-04-29 18:11:13 +08:00
commit 2cb5694ece
11 changed files with 205755 additions and 0 deletions

README.md (new file, 143 lines)

@@ -0,0 +1,143 @@
---
library_name: transformers
base_model: Qwen/Qwen3-8B
tags:
- generated_from_trainer
datasets:
- laion/CoderForge-Preview-v6-1000
model-index:
- name: e/data1/datasets/playground/ot-baf/checkpoints/cf-v8-1000-axolotl__Qwen3-8B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.16.0.dev0`
```yaml
# CoderForge v6 axolotl config template — consumes laion/CoderForge-Preview-v6-<SIZE>
# where every assistant turn has a <think>REASONING</think> block and tool calls
# are rendered as native OpenHands XML (<function=NAME><parameter=K>V</parameter></function>).
#
# Why v6: v3 (pre-tokenized) and v5 (wrapper-stripped, no <think>) both produced
# garbage output at eval time because stock Qwen3-8B assigns 100% prior to
# <think> as the first token after <|im_start|>assistant. Injecting <think>
# blocks into CF's training data aligns with Qwen3's post-training prior and
# preserves long-context coherence.
#
# Fill <SIZE> via sed-substitution (the placeholder has already been substituted in this copy).
base_model: Qwen/Qwen3-8B
deepspeed: /e/scratch/jureap59/feuer1/code/axolotl/deepspeed_configs/zero3_bf16.json
load_in_8bit: false
load_in_4bit: false
# CCE disabled (aarch64/torch2.9 grad explosion — see baselines/sera/README.md)
# plugins:
# - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
# Use the base model's own chat_template (from Qwen/Qwen3-8B tokenizer_config.json)
# so training-time rendering matches vLLM's inference-time rendering byte-for-byte.
chat_template: tokenizer_default
datasets:
- laion/CoderForge-Preview-v6-1000
data_files:
- coderforge-preview_v6_316.jsonl
type: chat_template
field_messages: messages
ds_type: json
message_field_training: train
dataset_prepared_path: /e/data1/datasets/playground/ot-baf/axolotl_dataset_cache/cf-v8-1000
output_dir: /e/data1/datasets/playground/ot-baf/checkpoints/cf-v8-1000-axolotl__Qwen3-8B
sequence_len: 32768
wandb_project:
wandb_entity:
wandb_watch:
wandb_name: cf-v8-1000-axolotl__Qwen3-8B
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 12
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 1e-5
adam_beta1: 0.9
adam_beta2: 0.95
bf16: auto
tf32: false
gradient_checkpointing: true
activation_offloading: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_ratio: 0.1875
evals_per_epoch: 0
save_strategy: epoch
weight_decay: 0.01
max_grad_norm: 1.0
special_tokens:
```
</details><br>
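
The config comments above make two checkable claims: stock Qwen3-8B puts essentially all of its first-token probability mass on `<think>` after the `<|im_start|>assistant` header, and `chat_template: tokenizer_default` keeps training-time rendering byte-for-byte identical to vLLM's inference-time rendering. A minimal sketch of how one might verify both against the stock checkpoint (the user prompt is invented; nothing here is part of this repo):

```python
# Sketch: check (a) byte-identical chat-template rendering and (b) the
# model's next-token prior after the assistant header. Illustrative only;
# assumes the stock Qwen/Qwen3-8B checkpoint and a local GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
messages = [{"role": "user", "content": "List the files in the repo."}]  # invented prompt

# (a) chat_template: tokenizer_default means axolotl renders with this same
# template, so `prompt` is exactly what vLLM would feed the model.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
assert prompt.endswith("<|im_start|>assistant\n")

# (b) probability mass on "<think>" as the first assistant token.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
probs = torch.softmax(logits.float(), dim=-1)
print(f"P(<think>) = {probs[tok.convert_tokens_to_ids('<think>')].item():.4f}")
```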
# e/data1/datasets/playground/ot-baf/checkpoints/cf-v8-1000-axolotl__Qwen3-8B
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the /e/data1/datasets/playground/ot-baf/hf_hub/datasets--laion--CoderForge-Preview-v6-1000/snapshots/ed4a6b7b87753bc5ee6c858670ad5be422c683a9/coderforge-preview_v6_1000.jsonl dataset.
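
For orientation, a hypothetical record in the v6 format the config comments describe: the assistant turn opens with a `<think>...</think>` block, the tool call is rendered as OpenHands-style XML, and a per-message `train` flag (consumed via `message_field_training: train`) marks which turns are trained on. All field values below are invented, and the tool name is an assumption; the actual dataset schema may differ:

```python
# Hypothetical v6 record; every field value here is invented for illustration.
record = {
    "messages": [
        {"role": "user", "content": "Rename utils.py to helpers.py.", "train": False},
        {
            "role": "assistant",
            "train": True,  # message_field_training: train -> loss is computed on this turn
            "content": (
                "<think>A simple rename; use git mv via the bash tool.</think>\n"
                "<function=execute_bash><parameter=command>"  # tool name is assumed
                "git mv utils.py helpers.py"
                "</parameter></function>"
            ),
        },
    ]
}
```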
## Model description
More information needed
## Intended uses & limitations
More information needed
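
Absent official guidance, a minimal inference sketch, assuming the checkpoint loads like any Qwen3-8B fine-tune (the path is a placeholder, not a published repo id):

```python
# Minimal inference sketch; the checkpoint path is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "/path/to/cf-v8-1000-axolotl__Qwen3-8B"  # placeholder, not a published repo id
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.bfloat16, device_map="auto")

msgs = [{"role": "user", "content": "Count the lines in every *.py file."}]
input_ids = tok.apply_chat_template(msgs, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(input_ids, max_new_tokens=512)
# keep special tokens so the <think> block and any XML tool call stay visible
print(tok.decode(out[0, input_ids.shape[-1]:], skip_special_tokens=False))
```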
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32 (derived; see the consistency check after this list)
- total_eval_batch_size: 4
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.95) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 18
- training_steps: 98
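
A quick consistency check of the derived values above; all numbers are taken from this card, and the rounding rule for warmup steps is an assumption:

```python
# Consistency check of the derived hyperparameters (numbers from this card).
micro_batch_size = 1
gradient_accumulation_steps = 8
num_devices = 4
assert micro_batch_size * gradient_accumulation_steps * num_devices == 32  # total_train_batch_size

training_steps = 98
warmup_ratio = 0.1875  # from the axolotl config
warmup_steps = int(warmup_ratio * training_steps)  # 18.375 truncated to 18
assert warmup_steps == 18  # matches lr_scheduler_warmup_steps above
```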
### Training results
### Framework versions
- Transformers 5.5.0
- Pytorch 2.9.1+cu130
- Datasets 4.5.0
- Tokenizers 0.22.2