---
license: mit
datasets:
- aldigobbler/stt-correction
language:
- en
base_model: aldigobbler/stt-qwen3-0.6b-merged
pipeline_tag: text-generation
tags:
- speech-to-text
- error-correction
- text-cleaning
- llama-cpp
- gguf-my-repo
model-index:
- name: STT Error Correction Model
  results:
  - task:
      type: text-generation
      name: STT Error Correction
    dataset:
      name: stt-correction
      type: aldigobbler/stt-correction
      split: validation
    metrics:
    - type: loss
      value: 5.0712228
      name: Validation Loss
---

# stt-qwen3-0.6b-merged

**Model creator:** [aldigobbler](https://huggingface.co/aldigobbler)<br/>
**Original model:** [aldigobbler/stt-qwen3-0.6b-merged](https://huggingface.co/aldigobbler/stt-qwen3-0.6b-merged)<br/>
**GGUF quantization:** provided by [aldigobbler](https://huggingface.co/aldigobbler) using `llama.cpp`<br/>

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.

## Use with Ollama

```bash
ollama run "hf.co/aldigobbler/stt-qwen3-0.6b-merged-GGUF:Q8_0"
```
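
Once the model is pulled, it can also be called through Ollama's local REST API (default port 11434). A minimal sketch, assuming the default port and `stream: false`; the noisy-transcript prompt below is an invented example, since the model card does not document the expected prompt format:

```bash
# Send a raw transcript to the locally running Ollama instance.
# "stream": false returns the full response as a single JSON object.
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/aldigobbler/stt-qwen3-0.6b-merged-GGUF:Q8_0",
  "prompt": "i wen to the stor yesterday",
  "stream": false
}'
```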

## Use with LM Studio

```bash
lms load "aldigobbler/stt-qwen3-0.6b-merged-GGUF"
```

## Use with llama.cpp CLI

```bash
llama-cli --hf "aldigobbler/stt-qwen3-0.6b-merged-GGUF:Q8_0" -p "The meaning to life and the universe is"
```

## Use with llama.cpp Server

```bash
llama-server --hf "aldigobbler/stt-qwen3-0.6b-merged-GGUF:Q8_0" -c 4096
```
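
With the server running, requests can be sent to its OpenAI-compatible endpoint. A minimal sketch, assuming `llama-server`'s default port 8080; the transcript text is an invented example, not a documented prompt format for this model:

```bash
# Query the llama.cpp server's OpenAI-compatible chat endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "i wen to the stor yesterday"}
    ]
  }'
```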