
---
license: apache-2.0
datasets:
- TeichAI/claude-4.5-opus-high-reasoning-250x
language:
- en
library_name: transformers
tags:
- finetune
- unsloth
- claude-4.5-opus
- reasoning
- thinking
- distill-fine-tune
- moe
- 128 experts
- 256k context
- mixture of experts
base_model:
- Qwen/Qwen3-30B-A3B-Thinking-2507
---

Qwen3-30B-A3B-Claude-4.5-Opus-High-Reasoning-2507-V2

The reasoning strength of Claude 4.5 Opus High Reasoning combined with the MoE power (and speed) of Qwen 30B-A3B 2507 Thinking (256k context, 128 experts).

Benchmarks (below) show that this version exceeds the original model on 5 of the 7 metrics shown and very closely matches it on the other 2.

Tuned via Unsloth on local hardware, running Linux on Windows.

Compact, to-the-point, and powerful reasoning takes "Qwen 30B-A3B 2507 Thinking" to the next level.

Reasoning/Thinking blocks will be considerably shorter and, in many cases, different from stock "Qwen" reasoning.

Note: all math, science, and other capabilities are fully intact.

Model Specs:

  • 256k context
  • 128 experts (8 active by default)
  • 3B of 30B parameters active.
  • The model runs on GPU, CPU, or split between the two at reasonable tokens/second (see the loading sketch below).
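
A minimal sketch of loading and running the model with Hugging Face transformers (the library this card lists); device_map="auto" splits layers across GPU and CPU as noted in the last bullet. The prompt and generation settings here are illustrative only, not this card's tuned values:

```python
# Minimal sketch: load and run the model with Hugging Face transformers.
# device_map="auto" lets accelerate place layers on the GPU and spill
# the rest to CPU, matching the GPU/CPU/split note above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DavidAU/Qwen3-30B-A3B-Claude-4.5-Opus-High-Reasoning-2507-V2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native dtype
    device_map="auto",    # split across GPU/CPU automatically
)

messages = [{"role": "user", "content": "Explain mixture-of-experts in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=512, do_sample=True,
                     temperature=0.6, top_p=0.95)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```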

BENCHMARKS:

[ xxx ] - exceeds the original model's score.

| Benchmark     | This model | Original Qwen3 30B-A3B |
|---------------|------------|------------------------|
| ARC-Challenge | 0.405      | 0.410                  |
| ARC-Easy      | [0.476]    | 0.444                  |
| BoolQ         | [0.804]    | 0.691                  |
| Hellaswag     | [0.656]    | 0.635                  |
| OpenBookQA    | 0.374      | 0.390                  |
| PIQA          | [0.781]    | 0.769                  |
| Winogrande    | [0.653]    | 0.650                  |
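
The card does not say which harness produced these numbers. As an illustration, the same task set can be run with EleutherAI's lm-evaluation-harness; the task names below are the harness's standard ones and are assumed, not taken from this card:

```python
# Sketch: run the card's benchmark tasks with lm-evaluation-harness
# (pip install lm-eval). Harness choice and task names are assumptions;
# the card does not state how its scores were produced.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=DavidAU/Qwen3-30B-A3B-Claude-4.5-Opus-High-Reasoning-2507-V2,dtype=auto",
    tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag",
           "openbookqa", "piqa", "winogrande"],
)
for task, metrics in results["results"].items():
    print(task, metrics.get("acc,none"))
```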

Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:

In "KoboldCpp", "oobabooga/text-generation-webui", or "Silly Tavern", set the "Smoothing_factor" to 1.5:

  • in KoboldCpp: Settings -> Samplers -> Advanced -> "Smooth_F"
  • in text-generation-webui: Parameters -> lower right
  • in Silly Tavern this is called "Smoothing"
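
This "smoothing" is quadratic sampling, a transform applied to the logits before sampling. A minimal sketch of the commonly used formulation (the exact implementation varies by front end; this is not taken from the card):

```python
# Sketch of quadratic sampling ("smoothing") in its common formulation:
# logits are bent quadratically around the top logit, so a higher
# smoothing_factor sharpens the head of the distribution and damps the
# tail, which is why it can stand in for a repetition penalty.
import torch

def apply_smoothing(logits: torch.Tensor, smoothing_factor: float = 1.5) -> torch.Tensor:
    max_logit = logits.max()
    return max_logit - smoothing_factor * (logits - max_logit) ** 2
```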

NOTE: For "text-generation-webui":

-> if you are using GGUF quants, you need to use the "llama_HF" loader (which involves downloading some config files from the SOURCE version of this model).

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

  • Increase repetition penalty ("rep pen") to 1.1–1.15 (you don't need this if you use "smoothing_factor"; a config sketch follows this list).

  • If the interface/program you are using to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted above.
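
As a sketch only, the repetition-penalty option maps directly onto transformers' generation settings; the values come from the bullet above, and the config itself is illustrative:

```python
# Sketch: the rep-pen alternative as a transformers GenerationConfig.
# 1.1–1.15 per the note above; omit it when your front end applies
# "smoothing_factor" instead.
from transformers import GenerationConfig

gen_cfg = GenerationConfig(
    do_sample=True,
    max_new_tokens=512,
    repetition_penalty=1.1,
)
# usage: model.generate(inputs, generation_config=gen_cfg)
```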

Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers

This a "Class 1" model:

For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

