Initialize project; model provided by the ModelHub XC community

Model: mlx-community/Qwen3-0.6B-bf16
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-05 08:49:33 +08:00
commit 00929b10e7
12 changed files with 152128 additions and 0 deletions

README.md Normal file

@@ -0,0 +1,37 @@
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-0.6B
tags:
- mlx
---
# mlx-community/Qwen3-0.6B-bf16
This model [mlx-community/Qwen3-0.6B-bf16](https://huggingface.co/mlx-community/Qwen3-0.6B-bf16) was
converted to MLX format from [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)
using mlx-lm version **0.24.0**.
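For reference, conversions like this are normally produced with the `mlx_lm.convert` command-line tool. The exact invocation used for this upload is not recorded in the commit, so the command below is only a sketch with assumed flags:
```bash
# Sketch of the conversion step (assumed flags; not recorded in this commit).
# --dtype bfloat16 keeps the weights in bf16, matching the -bf16 repo suffix.
mlx_lm.convert --hf-path Qwen/Qwen3-0.6B --mlx-path Qwen3-0.6B-bf16 --dtype bfloat16
```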
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the model weights and tokenizer.
model, tokenizer = load("mlx-community/Qwen3-0.6B-bf16")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
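The model can also be run directly from the terminal via the `mlx_lm.generate` entry point installed with `mlx-lm`; a minimal sketch (flag names as in recent mlx-lm releases):
```bash
# Generate a completion from the command line without writing any Python.
mlx_lm.generate --model mlx-community/Qwen3-0.6B-bf16 --prompt "hello"
```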