---
library_name: transformers
base_model: Qwen/Qwen3-8B
tags:
- generated_from_trainer
datasets:
- laion/Sera-4.6-Lite-T2-v4-316
model-index:
- name: e/data1/datasets/playground/ot-baf/checkpoints/sera-v4-316-axolotl__Qwen3-8B
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.16.0.dev0`
```yaml
# Sera v4 axolotl config template — consumes laion/Sera-4.6-Lite-T2-v4-<SIZE>
# where tool_calls are already pre-rendered into content as <tool_call>...</tool_call>
# (Hermes/Qwen3 wire format) per SERA's transform_traj_hermes. Chatml passes the
# wire tokens through into input_ids + labels so tool calls are in the loss.
#
# Fill 316 via sed-substitution.

base_model: Qwen/Qwen3-8B
deepspeed: /e/scratch/jureap59/feuer1/code/axolotl/deepspeed_configs/zero3_bf16.json

load_in_8bit: false
load_in_4bit: false

# CCE disabled (aarch64/torch2.9 grad explosion — see baselines/sera/README.md)
# plugins:
#   - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin

chat_template: chatml
datasets:
  - path: laion/Sera-4.6-Lite-T2-v4-316
    data_files:
      - sera-4.6-lite-t2_v4_316.jsonl
    type: chat_template
    field_messages: messages
    ds_type: json
    message_field_training: train

dataset_prepared_path: /e/data1/datasets/playground/ot-baf/axolotl_dataset_cache/sera-v4-316
output_dir: /e/data1/datasets/playground/ot-baf/checkpoints/sera-v4-316-axolotl__Qwen3-8B

sequence_len: 32768

wandb_project:
wandb_entity:
wandb_watch:
wandb_name: sera-v4-316-axolotl__Qwen3-8B
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 1e-5
adam_beta1: 0.9
adam_beta2: 0.95

bf16: auto
tf32: false

gradient_checkpointing: true
activation_offloading: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

warmup_ratio: 0.1875
evals_per_epoch: 0
save_strategy: epoch

weight_decay: 0.01
max_grad_norm: 1.0
special_tokens:

```

</details><br>
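
The config comments above state that tool calls in the training data are already rendered into the assistant message content as `<tool_call>...</tool_call>` text (Hermes/Qwen3 wire format), and that `chat_template: chatml` together with `message_field_training: train` pushes those tokens into the `input_ids` and labels. Below is a minimal sketch of what one JSONL record could look like under that scheme; the conversation, the tool name, and the exact shape of the per-message `train` flag are illustrative assumptions, not values taken from the dataset:

```python
import json

# Hypothetical record in the layout described by the config: tool calls are plain
# text inside the assistant content, and a boolean "train" field on each message
# marks which messages are kept in the loss (message_field_training: train).
record = {
    "messages": [
        {"role": "user", "content": "What's the weather in Berlin?", "train": False},
        {
            "role": "assistant",
            "content": '<tool_call>\n{"name": "get_weather", "arguments": {"city": "Berlin"}}\n</tool_call>',
            "train": True,  # this message contributes to the labels
        },
    ]
}

with open("example.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

With axolotl installed, a config like this is typically launched via `axolotl train <config>.yaml` or `accelerate launch -m axolotl.cli.train <config>.yaml`; the exact launch command used for this run is not recorded in the card.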

# e/data1/datasets/playground/ot-baf/checkpoints/sera-v4-316-axolotl__Qwen3-8B

This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the laion/Sera-4.6-Lite-T2-v4-316 dataset.
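
A minimal loading sketch using the standard transformers causal-LM API. The repo id `laion/Sera-4.6-Lite-T2-v4-316-axolotl__Qwen3-8B` is an assumption based on the model name (it can be swapped for a local checkpoint path such as the config's `output_dir`), and the chat formatting simply relies on the Qwen3 chat template shipped with the tokenizer:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for this fine-tune; replace with a local checkpoint path if needed.
model_id = "laion/Sera-4.6-Lite-T2-v4-316-axolotl__Qwen3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the run trained in bf16
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain what a tool call is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```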

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32 (see the sketch after this list)
- total_eval_batch_size: 4
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.95), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 3
- training_steps: 17
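
The total train batch size follows directly from the per-device batch size, gradient accumulation, and device count; a tiny sketch restating that arithmetic (nothing here beyond the values listed above):

```python
# Effective (total) train batch size from the hyperparameters listed above.
micro_batch_size = 1               # per-device train_batch_size
gradient_accumulation_steps = 8
num_devices = 4

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 32
```

The 3 warmup steps are likewise consistent with applying the config's `warmup_ratio: 0.1875` to the 17 total training steps (0.1875 × 17 ≈ 3.2, rounded down).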

### Training results


### Framework versions

- Transformers 5.5.0
- Pytorch 2.9.1+cu130
- Datasets 4.5.0
- Tokenizers 0.22.2