---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: original
results: []
language:
- en
datasets:
- bespokelabs/Bespoke-Stratos-17k
---
<p align="center">
<img src="https://huggingface.co/bespokelabs/Bespoke-MiniCheck-7B/resolve/main/Bespoke-Labs-Logo.png" width="550">
</p>
## Model description
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the [Bespoke-Stratos-17k dataset](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k).
The dataset was derived by distilling DeepSeek-R1 using the data pipeline of Berkeley NovaSky's Sky-T1 with some modifications. More details are available in the dataset card at [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k).
It outperforms Qwen2.5-7B-Instruct on math, science, and coding reasoning benchmarks:
|Benchmark|Bespoke-Stratos-7B|Qwen2.5-7B-Instruct|DeepSeek-R1-Distill-Qwen-7B (Ours)|DeepSeek-R1-Distill-Qwen-7B (Reported)|
|---|---|---|---|---|
|AIME2024|20.0|10.0|43.3|55.5|
|MATH500|82.0|74.2|89.4|92.8|
|GPQA-Diamond|37.8|33.3|44.9|49.1|
|LiveCodeBench v2 Easy|71.4|65.9|81.3|-|
|LiveCodeBench v2 Medium|25.5|18.9|42.2|-|
|LiveCodeBench v2 Hard|1.6|3.3|2.4|-|
|LiveCodeBench v2 All|36.1|31.9|46.6|-|
Note that the authors of Sky-T1 had [noted](https://github.com/NovaSky-AI/SkyThought/issues/4#issuecomment-2585860004) that they saw little or no improvement when training 7B or 14B models with their data.
However, we do see an improvement, though not on the scale of DeepSeek's distilled model. The likely reason is that we used 17k examples, while DeepSeek appears to have used around 800k.
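The model is a standard causal LM checkpoint that uses the Qwen2.5 chat template, so it can be loaded directly with `transformers`. The snippet below is a minimal inference sketch; the prompt and sampling parameters are illustrative assumptions, not the settings used for the evaluations above.

```python
# Minimal inference sketch (illustrative prompt and sampling settings, not the eval setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bespokelabs/Bespoke-Stratos-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "What is the sum of the first 10 positive integers?"}
]
# Apply the Qwen2.5 chat template and generate a (long-form) reasoning response.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=2048, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```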
## Intended uses & limitations
The model is released under the Apache 2.0 license.
## Training procedure
We trained the model on 8xH100 GPUs for 7 hours.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 12
- total_train_batch_size: 96
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
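For reference, the effective batch size above is the per-device batch size times the number of GPUs times the gradient accumulation steps. The sketch below shows that arithmetic and an approximate mapping onto Hugging Face `TrainingArguments`; it is an illustrative reconstruction, not the exact LLaMA-Factory configuration we ran.

```python
# Effective batch size = per-device batch x number of GPUs x gradient accumulation steps.
per_device_train_batch_size = 1
num_devices = 8
gradient_accumulation_steps = 12
total_train_batch_size = per_device_train_batch_size * num_devices * gradient_accumulation_steps
assert total_train_batch_size == 96

# Approximate mapping of the hyperparameters above onto Hugging Face TrainingArguments
# (illustrative; the actual run used LLaMA-Factory's own config format).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bespoke-stratos-7b",
    learning_rate=1e-5,
    per_device_train_batch_size=per_device_train_batch_size,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=gradient_accumulation_steps,
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",
    seed=42,
    bf16=True,  # assumption: bf16 mixed precision on H100 GPUs
)
```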
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3