ModelHub XC ef5077c9ff: project initialized; model provided by the ModelHub XC community
Model: bunnycore/Qwen-2.5-7b-S1k
Source: Original Platform
2026-04-26 22:30:09 +08:00

---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- bunnycore/Qwen-2.5-7B-Deep-Stock-v4
- bunnycore/Qwen-2.5-7b-s1k-lora_model
model-index:
- name: Qwen-2.5-7b-S1k
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 71.62
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Qwen-2.5-7b-S1k
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 36.69
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Qwen-2.5-7b-S1k
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 47.81
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Qwen-2.5-7b-S1k
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.59
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Qwen-2.5-7b-S1k
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.26
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Qwen-2.5-7b-S1k
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 37.58
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Qwen-2.5-7b-S1k
      name: Open LLM Leaderboard
---

System Prompt

Think about the reasoning process in the mind first, then provide the answer. The reasoning process should be detailed and should be wrapped within <think> </think> tags, then provide the answer after that, i.e., <think> reasoning process here </think> answer here.
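With this prompt, generations wrap the chain of thought in <think> tags, so downstream code usually strips that span before showing the final answer. A minimal sketch (the helper name is ours, not part of the model or its tooling):

```python
import re

def extract_answer(generation: str) -> str:
    """Strip the <think>...</think> reasoning block and return the final answer."""
    # Non-greedy match; DOTALL lets the reasoning span multiple lines.
    answer = re.sub(r"<think>.*?</think>", "", generation, flags=re.DOTALL)
    return answer.strip()

sample = "<think> 2 + 2 is basic arithmetic. </think> The answer is 4."
print(extract_answer(sample))  # -> The answer is 4.
```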

Configuration

The following YAML configuration was used to produce this model:


```yaml
base_model: bunnycore/Qwen-2.5-7B-Deep-Stock-v4+bunnycore/Qwen-2.5-7b-s1k-lora_model
dtype: bfloat16
merge_method: passthrough
models:
  - model: bunnycore/Qwen-2.5-7B-Deep-Stock-v4+bunnycore/Qwen-2.5-7b-s1k-lora_model
tokenizer_source: bunnycore/Qwen-2.5-7B-Deep-Stock-v4
```
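The `+` in the model path is mergekit's shorthand for "base model plus LoRA adapter": the base checkpoint is loaded and the adapter is applied on top before the passthrough merge. A small illustrative sketch of splitting that spec (the helper name is ours, not mergekit API):

```python
def split_model_spec(spec: str):
    """Split a mergekit 'base+adapter' model spec into (base, lora) parts.

    Returns (base, None) when no adapter is attached.
    """
    base, _, lora = spec.partition("+")
    return base, lora or None

spec = "bunnycore/Qwen-2.5-7B-Deep-Stock-v4+bunnycore/Qwen-2.5-7b-s1k-lora_model"
print(split_model_spec(spec))
```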

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Qwen-2.5-7b-S1k

| Metric              | Value |
|---------------------|------:|
| Avg.                | 34.59 |
| IFEval (0-Shot)     | 71.62 |
| BBH (3-Shot)        | 36.69 |
| MATH Lvl 5 (4-Shot) | 47.81 |
| GPQA (0-shot)       |  4.59 |
| MuSR (0-shot)       |  9.26 |
| MMLU-PRO (5-shot)   | 37.58 |
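The reported average is the plain (unweighted) mean of the six benchmark scores, which can be checked directly:

```python
# Open LLM Leaderboard scores from the table above.
scores = {
    "IFEval (0-Shot)": 71.62,
    "BBH (3-Shot)": 36.69,
    "MATH Lvl 5 (4-Shot)": 47.81,
    "GPQA (0-shot)": 4.59,
    "MuSR (0-shot)": 9.26,
    "MMLU-PRO (5-shot)": 37.58,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # -> 34.59
```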
Description
Model synced from source: bunnycore/Qwen-2.5-7b-S1k