Model: bingbangboom/Qwen3006B-transcriber-beta

| base_model | license |
|---|---|
| unsloth/qwen3-0.6b-unsloth-bnb-4bit | apache-2.0 |
# bingbangboom/Qwen3006B-transcriber-beta

A post-processor for local ASR (automatic speech recognition) output.

- Developed by: bingbangboom
- License: apache-2.0
- Finetuned from model: unsloth/qwen3-0.6b-unsloth-bnb-4bit
## Recommended Settings

> temperature = 0.1
> top_k = 10
> top_p = 0.95
> min_p = 0.05
> repeat_penalty = 1.0
> Prompt format (for chat) = {input transcript}
> Prompt format (for use in Handy) = ${output}
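The settings above can be packaged for a llama.cpp-based runtime. This is a minimal sketch assuming llama-cpp-python (an assumption; the card only lists GGUF files, and any OpenAI-compatible runtime accepts similar keys):

```python
# Recommended sampling settings from the model card, as keyword arguments
# for llama-cpp-python's create_chat_completion (assumed runtime).
SAMPLING = {
    "temperature": 0.1,
    "top_k": 10,
    "top_p": 0.95,
    "min_p": 0.05,
    "repeat_penalty": 1.0,
}

def build_messages(transcript: str) -> list[dict]:
    # The card specifies no system prompt: the raw ASR transcript is
    # sent directly as the user turn.
    return [{"role": "user", "content": transcript}]

# Usage sketch (model path is hypothetical):
#   from llama_cpp import Llama
#   llm = Llama(model_path="Qwen3.5-0.8B.Q4_K_M.gguf")
#   out = llm.create_chat_completion(build_messages("raw transcript here"), **SAMPLING)
```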
## Note

No system prompt is required.

You need to disable thinking for the model by adding {%- set enable_thinking = false %} at the top of the Jinja prompt template.

In LM Studio: go to the model gallery, click the model entry, then in the inference settings scroll down to Prompt Template and paste the line at the top.
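The placement looks like this (a sketch; everything after the first line is the model's original chat template, left unchanged):

```jinja
{%- set enable_thinking = false %}
{#- ...the model's original chat template continues here, unchanged... -#}
```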
## Available Model files

- Qwen3.5-0.8B.F16.gguf
- Qwen3.5-0.8B.Q8_0.gguf
- Qwen3.5-0.8B.Q5_K_M.gguf
- Qwen3.5-0.8B.Q4_K_M.gguf
- LoRA merged safetensors
This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.