Initial commit; model provided by the ModelHub XC community

Model: bingbangboom/Qwen3006B-transcriber-beta
Source: Original Platform
Commit 96d069d91f by ModelHub XC, 2026-05-05 04:59:48 +08:00
16 changed files with 152026 additions and 0 deletions

README.md Normal file

@@ -0,0 +1,51 @@
---
base_model: unsloth/qwen3-0.6b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
datasets:
- bingbangboom/cleaned-asr-transcripts
---
# bingbangboom/Qwen3006B-transcriber-beta
A post-processor for local ASR transcripts.
- **Developed by:** bingbangboom
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-0.6b-unsloth-bnb-4bit
# Recommended Settings
```
temperature = 0.1
top_k = 10
top_p = 0.95
min_p = 0.05
repeat_penalty = 1.0
Prompt format (for chat) = {input transcript}
Prompt format (for use in Handy) = ${output}
```
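As a sketch of how the recommended settings above might be applied, the snippet below sends a transcript to an OpenAI-compatible local server (LM Studio's default endpoint is assumed). The endpoint URL and model id are assumptions to adjust for your setup; `top_k`, `min_p`, and `repeat_penalty` are not part of the core OpenAI schema, but most local servers (LM Studio, llama.cpp server) accept them as extra fields.

```python
import json
import urllib.request

def build_payload(transcript: str) -> dict:
    """Assemble a chat request using the card's recommended sampling settings."""
    return {
        "model": "bingbangboom/Qwen3006B-transcriber-beta",  # assumed model id
        "messages": [
            # No system prompt: the raw transcript is the entire user turn.
            {"role": "user", "content": transcript}
        ],
        "temperature": 0.1,
        "top_k": 10,
        "top_p": 0.95,
        "min_p": 0.05,
        "repeat_penalty": 1.0,
    }

def clean_transcript(
    transcript: str,
    url: str = "http://localhost:1234/v1/chat/completions",  # assumed endpoint
) -> str:
    """POST the transcript and return the cleaned text from the first choice."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(transcript)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```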
# Note
No system prompt is required.

You need to disable thinking by adding `{%- set enable_thinking = false %}` at the top of the Jinja prompt template. In LM Studio: go to the model gallery, click the model entry, scroll to the bottom of the inference settings to Prompt Template, and paste the directive at the top.
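If you manage the chat template programmatically rather than through a UI, the step above amounts to prepending one line. This is a minimal sketch: the example template string is hypothetical, and the only fact taken from the card is the exact directive that must appear at the top.

```python
# The directive the model card says must lead the Jinja prompt template.
DIRECTIVE = "{%- set enable_thinking = false %}"

def disable_thinking(template: str) -> str:
    """Prepend the directive unless the template already sets enable_thinking."""
    if "enable_thinking" in template:
        return template  # template already decides thinking behaviour
    return DIRECTIVE + "\n" + template

# Hypothetical one-line template, only to show where the directive lands.
example_template = "{{ messages[0].content }}"
patched = disable_thinking(example_template)
```

The guard makes the operation idempotent, so running it on an already-patched template is a no-op.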
## Available Model files:
- `Qwen3.5-0.8B.F16.gguf`
- `Qwen3.5-0.8B.Q8_0.gguf`
- `Qwen3.5-0.8B.Q5_K_M.gguf`
- `Qwen3.5-0.8B.Q4_K_M.gguf`
- `LoRA-merged safetensors`

This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)