Initialize project; model provided by the ModelHub XC community.
Model: gpugobrrr/Qwen3-0.6B-Farsi-GGUF
Source: Original Platform
---
license: apache-2.0
base_model: gpugobrrr/Qwen3-0.6B-Farsi
library_name: gguf
pipeline_tag: text-generation
tags:
- gguf
- llama.cpp
- qwen3
- text-generation
- farsi
- persian
- quantized
---

# Qwen3-0.6B-Farsi GGUF

GGUF quantizations of `gpugobrrr/Qwen3-0.6B-Farsi`, a finetune of `Qwen/Qwen3-0.6B`.

## Files

- `Qwen3-0.6B-Farsi-BF16.gguf`
- `Qwen3-0.6B-Farsi-F16.gguf`
- `Qwen3-0.6B-Farsi-Q3_K_S.gguf`
- `Qwen3-0.6B-Farsi-Q4_0.gguf`
- `Qwen3-0.6B-Farsi-Q4_K_M.gguf`
- `Qwen3-0.6B-Farsi-Q6_K.gguf`
- `Qwen3-0.6B-Farsi-Q8_0.gguf`

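For scripted downloads, the quantization tags above map directly to file names. A minimal Python sketch (the repo id `gpugobrrr/Qwen3-0.6B-Farsi-GGUF` on a Hugging Face-compatible hub is an assumption — this listing is a ModelHub XC mirror, so adjust the hub client to wherever the files actually live):

```python
# Map a quantization tag from the Files list to its .gguf file name.
AVAILABLE = ["BF16", "F16", "Q3_K_S", "Q4_0", "Q4_K_M", "Q6_K", "Q8_0"]

def gguf_filename(quant: str) -> str:
    """Return the .gguf file name for a quantization tag listed above."""
    if quant not in AVAILABLE:
        raise ValueError(f"unknown quant {quant!r}; choose from {AVAILABLE}")
    return f"Qwen3-0.6B-Farsi-{quant}.gguf"

if __name__ == "__main__":
    # To actually download (requires `pip install huggingface_hub` and assumes
    # the files are mirrored on a Hugging Face-compatible hub):
    # from huggingface_hub import hf_hub_download
    # path = hf_hub_download("gpugobrrr/Qwen3-0.6B-Farsi-GGUF",
    #                        gguf_filename("Q4_K_M"))
    print(gguf_filename("Q4_K_M"))
```

As a rule of thumb for a 0.6B model, `Q4_K_M` is a reasonable default trade-off between size and quality; the unquantized `BF16`/`F16` files are small enough that you may prefer them if memory allows.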
## Usage

```bash
llama-cli -m Qwen3-0.6B-Farsi-Q4_K_M.gguf -p "سلام، خودت را معرفی کن."
```