Initialize the project; model provided by the ModelHub XC community
Model: allura-forge/remnant-8b-ep2-ckpt Source: Original Platform
This commit is contained in:

README.md (new file, 131 lines)

@@ -0,0 +1,131 @@
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-8B-Base
tags:
- generated_from_trainer
datasets:
- allura-org/inkmix-v3.0
model-index:
- name: ephemeral/ckpts
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.10.0.dev0`
```yaml
# === Model Configuration ===
base_model: Qwen/Qwen3-8B-Base
load_in_8bit: false
load_in_4bit: false

# === Training Setup ===
num_epochs: 2
micro_batch_size: 32
gradient_accumulation_steps: 1
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true

# === Hyperparameter Configuration ===
optimizer: apollo_adamw_layerwise
# Apollo-mini configuration:
optim_args: "proj=random,rank=1,scale=128.0,scale_type=tensor,update_proj_gap=200"
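# (Assumed reading of the APOLLO-mini args, not part of the original config:
# proj=random draws random projections, rank=1 is the memory-minimal
# APOLLO-mini setting, scale=128.0 with scale_type=tensor applies tensor-wise
# gradient scaling, and update_proj_gap=200 resamples the projection every
# 200 steps.)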
# Regular Apollo configuration:
# optim_args:
optim_target_modules: all_linear
learning_rate: 2e-5
lr_scheduler: rex
weight_decay: 0.01
warmup_ratio: 0

# === Data Configuration ===
datasets:
  - path: allura-org/inkmix-v3.0
    type: chat_template
    split: train
    field_messages: conversations
    message_field_role: from
    message_field_content: value

dataset_prepared_path: last_run_prepared
chat_template: chatml

# === Plugins ===
plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin

# === Hardware Optimization ===
gradient_checkpointing: unsloth
gradient_checkpointing_kwargs:
  use_reentrant: false
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
cut_cross_entropy: true

# === Wandb Tracking ===
wandb_project: qwen3-8b-inkmix-v3

# === Checkpointing ===
saves_per_epoch: 2
save_total_limit: 3

# === Advanced Settings ===
output_dir: /ephemeral/ckpts
bf16: auto
flash_attention: true
train_on_inputs: false
group_by_length: false
logging_steps: 1
trust_remote_code: true

```

</details><br>

# ephemeral/ckpts

This model is a fine-tuned version of [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base) on the allura-org/inkmix-v3.0 dataset.
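
A minimal inference sketch, assuming the checkpoint is published under the `allura-forge/remnant-8b-ep2-ckpt` repo id from the commit header and that the tokenizer ships the ChatML template set during training:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allura-forge/remnant-8b-ep2-ckpt"  # assumed repo id, taken from the commit header
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# The training config sets chat_template: chatml, so apply_chat_template
# should render <|im_start|>role ... <|im_end|> turns.
messages = [{"role": "user", "content": "Write a short scene set in a lighthouse."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```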

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
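
That said, the `datasets` block in the config above points at allura-org/inkmix-v3.0 in a ShareGPT-style layout (`conversations` lists with `from`/`value` keys). A rough sketch of the mapping that axolotl's `chat_template` strategy performs before rendering ChatML, with an assumed role map, looks like this:

```python
# Illustrative only; the real conversion lives inside axolotl.
ROLE_MAP = {"human": "user", "gpt": "assistant", "system": "system"}  # assumed mapping

def to_messages(record: dict) -> list[dict]:
    # field_messages: conversations / message_field_role: from / message_field_content: value
    return [
        {"role": ROLE_MAP.get(turn["from"], turn["from"]), "content": turn["value"]}
        for turn in record["conversations"]
    ]

example = {"conversations": [{"from": "human", "value": "Hi!"},
                             {"from": "gpt", "value": "Hello."}]}
print(to_messages(example))
# [{'role': 'user', 'content': 'Hi!'}, {'role': 'assistant', 'content': 'Hello.'}]
```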

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: apollo_adamw_layerwise with betas=(0.9,0.999), epsilon=1e-08, and optimizer_args=proj=random,rank=1,scale=128.0,scale_type=tensor,update_proj_gap=200
- lr_scheduler_type: cosine
- num_epochs: 2.0
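
As a sanity check on the batch geometry implied by these values (per device; multiply by the world size for the global figure):

```python
# Tokens seen per optimizer step, from the config values above.
micro_batch_size = 32
gradient_accumulation_steps = 1
sequence_len = 8192

sequences_per_step = micro_batch_size * gradient_accumulation_steps  # 32
tokens_per_step = sequences_per_step * sequence_len                  # 262,144
print(tokens_per_step)  # 262144 (an upper bound that sample packing approaches)
```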

### Training results



### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1