Initialize the project; model provided by the ModelHub XC community

Model: xd2010/Qwen1.5-MOE-aux-free-sft-math7k-1e-3-gamma-1epoch
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-12 16:30:09 +08:00
commit 70e484f736
23 changed files with 160808 additions and 0 deletions

36
.gitattributes vendored Normal file

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
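The patterns above are Git LFS tracking rules: any matching file is stored as an LFS pointer rather than in-repo. As a rough illustration (assuming Python's `fnmatch` approximates gitattributes glob matching for these simple patterns, which it does not do exactly for paths containing `/`), you can check which files would be routed through LFS:

```python
from fnmatch import fnmatch

# A subset of the LFS patterns from .gitattributes above.
LFS_PATTERNS = ["*.safetensors", "*.bin", "*.onnx", "*.tar.*", "*tfevents*", "tokenizer.json"]

def tracked_by_lfs(filename: str) -> bool:
    """Return True if the filename matches any LFS-tracked pattern."""
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)

print(tracked_by_lfs("model.safetensors"))  # True  -> stored as an LFS pointer
print(tracked_by_lfs("config.json"))        # False -> stored as a normal git blob
```

This is consistent with the rest of the commit: the `.safetensors` shards and `tokenizer.json` appear below only as short LFS pointer files, while `config.json` is committed in full.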

60
README.md Normal file

@@ -0,0 +1,60 @@
---
base_model: Qwen/Qwen1.5-MoE-A2.7B
datasets: HectorHe/math7k
library_name: transformers
model_name: Qwen1.5-MOE-aux-free-sft-math7k-1e-3-gamma-1epoch
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen1.5-MOE-aux-free-sft-math7k-1e-3-gamma-1epoch
This model is a fine-tuned version of [Qwen/Qwen1.5-MoE-A2.7B](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) on the [HectorHe/math7k](https://huggingface.co/datasets/HectorHe/math7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="xd2010/Qwen1.5-MOE-aux-free-sft-math7k-1e-3-gamma-1epoch", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/n5wlb1vv)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.51.0
- Pytorch: 2.6.0
- Datasets: 4.8.3
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```

5
added_tokens.json Normal file

@@ -0,0 +1,5 @@
{
"<|endoftext|>": 151643,
"<|im_end|>": 151645,
"<|im_start|>": 151644
}

8
all_results.json Normal file

@@ -0,0 +1,8 @@
{
"total_flos": 2.0144468407196058e+17,
"train_loss": 0.3228219868138779,
"train_runtime": 1246.4994,
"train_samples": 6851,
"train_samples_per_second": 5.496,
"train_steps_per_second": 0.172
}
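The throughput figures in `all_results.json` are internally consistent; a quick sanity check in plain Python, using only the numbers reported above:

```python
# Figures copied from all_results.json above.
train_samples = 6851
train_runtime = 1246.4994           # seconds
reported_samples_per_sec = 5.496
reported_steps_per_sec = 0.172

# samples / runtime reproduces the reported samples-per-second.
samples_per_sec = train_samples / train_runtime
print(round(samples_per_sec, 3))    # 5.496

# Implied totals: ~214 optimizer steps at an effective batch of ~32 samples/step.
print(round(reported_steps_per_sec * train_runtime))            # 214
print(round(samples_per_sec / reported_steps_per_sec))          # 32
```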

39
config.json Normal file

@@ -0,0 +1,39 @@
{
"architectures": [
"Qwen2MoeForCausalLM"
],
"attention_dropout": 0.0,
"bias_update_speed": 0.001,
"bos_token_id": 151643,
"decoder_sparse_step": 1,
"eos_token_id": 151643,
"hidden_act": "silu",
"hidden_size": 2048,
"initializer_range": 0.02,
"intermediate_size": 5632,
"max_position_embeddings": 8192,
"max_window_layers": 21,
"mlp_only_layers": [],
"model_type": "qwen2_moe",
"moe_intermediate_size": 1408,
"norm_topk_prob": false,
"num_attention_heads": 16,
"num_experts": 60,
"num_experts_per_tok": 4,
"num_hidden_layers": 24,
"num_key_value_heads": 16,
"output_router_logits": false,
"qkv_bias": true,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 1000000.0,
"router_aux_loss_coef": 0.001,
"shared_expert_intermediate_size": 5632,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.51.0",
"use_cache": true,
"use_sliding_window": false,
"vocab_size": 151936
}
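The config describes a sparse MoE: each token is routed to 4 of 60 small experts per layer, plus one always-on shared expert. A back-of-the-envelope sketch of what that sparsity buys (assuming each expert is a gated SiLU MLP with gate/up/down projections, i.e. roughly `3 * hidden * intermediate` weights, which matches the Qwen2-MoE layout):

```python
# Key routing fields from config.json above.
cfg = {
    "hidden_size": 2048,
    "moe_intermediate_size": 1408,
    "shared_expert_intermediate_size": 5632,
    "num_experts": 60,
    "num_experts_per_tok": 4,
}

# Per-expert MLP weights: gate, up, and down projections.
per_expert = 3 * cfg["hidden_size"] * cfg["moe_intermediate_size"]
shared = 3 * cfg["hidden_size"] * cfg["shared_expert_intermediate_size"]

total_expert_params = cfg["num_experts"] * per_expert + shared
active_expert_params = cfg["num_experts_per_tok"] * per_expert + shared
print(f"{active_expert_params / total_expert_params:.1%} of MLP weights active per token")  # 12.5%
```

This ignores attention and embedding weights, but it shows why an "A2.7B" model can have far more total parameters than it activates per token.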

11
generation_config.json Normal file

@@ -0,0 +1,11 @@
{
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"transformers_version": "4.51.0",
"use_cache": false
}

151388
merges.txt Normal file

File diff suppressed because it is too large.


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:32e7bc22ebe5a83ad7a74da655c21a272e4b587c607dcf1636b6ef622d79d9ff
size 4996579016


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:976e77251e4791db875fca6dd6aecbc05a2ade2491601ace40a001dedacbdee9
size 4996349360


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4272d8be31effc0621d7e99a0453cbeac5114250293fc1c6ccfe184f0f7c830b
size 4997128400


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d4cf87d2c3dd0bb2e967d9c06b190b6415389897117746d4b5c279c728142ad3
size 4985593800


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6681302abb63fb1abfc4f7adca6d3057209a6cc522aae6a952ee943f6dd03aae
size 4996350584


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d6baf590b2baf90e0a72b2b67ae2fa3c27cc5da4c6d4bc7a3d40e266b4f3ded8
size 3660152040

4690
model.safetensors.index.json Normal file

File diff suppressed because it is too large.

1667
moe_bias_states.json Normal file

File diff suppressed because it is too large.

14
special_tokens_map.json Normal file

@@ -0,0 +1,14 @@
{
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>"
],
"eos_token": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": "<|endoftext|>"
}

3
tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f656f572e1c3de8e5d93166967a856400241e138b1e5cb880a41992894eaa645
size 11418364

45
tokenizer_config.json Normal file

@@ -0,0 +1,45 @@
{
"add_prefix_space": false,
"added_tokens_decoder": {
"151643": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151644": {
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151645": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>"
],
"bos_token": null,
"chat_template": "{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system\nYou are a helpful assistant<|im_end|>\n' }}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
"clean_up_tokenization_spaces": false,
"eos_token": "<|endoftext|>",
"errors": "replace",
"extra_special_tokens": {},
"fast_tokenizer": true,
"model_max_length": 32768,
"pad_token": "<|endoftext|>",
"split_special_tokens": false,
"tokenizer_class": "Qwen2Tokenizer",
"unk_token": null
}
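The `chat_template` above is the standard ChatML format: a default system message is prepended when the conversation lacks one, each turn is wrapped in `<|im_start|>`/`<|im_end|>`, and an assistant header is opened when a generation prompt is requested. A plain-Python sketch of the same logic (in practice `tokenizer.apply_chat_template` renders the Jinja template for you):

```python
def render_chatml(messages, add_generation_prompt=True):
    """Plain-Python equivalent of the ChatML chat_template above."""
    out = ""
    # Inject the default system turn when the first message is not a system message.
    if messages and messages[0]["role"] != "system":
        out += "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n"
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Open the assistant turn so the model continues from here.
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"
    return out

print(render_chatml([{"role": "user", "content": "What is 2 + 2?"}]))
```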

8
train_results.json Normal file

@@ -0,0 +1,8 @@
{
"total_flos": 2.0144468407196058e+17,
"train_loss": 0.3228219868138779,
"train_runtime": 1246.4994,
"train_samples": 6851,
"train_samples_per_second": 5.496,
"train_steps_per_second": 0.172
}

1763
trainer_state.json Normal file

File diff suppressed because it is too large.

1049
training.log Normal file

File diff suppressed because it is too large.

3
training_args.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c660143775d04edbf7f09ee2a40bb481f86e627b113b7bcd33c2923c36f34f3e
size 7608

1
vocab.json Normal file

File diff suppressed because one or more lines are too long