---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you're required to review and
  agree to Google's usage license. To do this, please ensure you're logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-1b-it
tags:
- mlx
---
# mlx-community/gemma-3-1b-it-qat-bf16
The model [mlx-community/gemma-3-1b-it-qat-bf16](https://huggingface.co/mlx-community/gemma-3-1b-it-qat-bf16) was
converted to MLX format from [google/gemma-3-1b-it-qat-q4_0](https://huggingface.co/google/gemma-3-1b-it-qat-q4_0)
using mlx-lm version **0.22.5**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/gemma-3-1b-it-qat-bf16")

prompt = "hello"

# Wrap the raw prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
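
For reference, the `apply_chat_template` call above expands the message list into Gemma's turn-based prompt format before generation. A minimal sketch of that expansion is shown below; the `gemma_chat_prompt` helper is hypothetical (the real template ships inside the tokenizer config), but the turn markers are Gemma's documented format:

```python
def gemma_chat_prompt(messages, add_generation_prompt=True):
    """Sketch of Gemma's chat template: each turn is wrapped in
    <start_of_turn>{role}\n...<end_of_turn>\n, and the assistant
    role is named "model"."""
    out = "<bos>"
    for m in messages:
        role = "model" if m["role"] == "assistant" else m["role"]
        out += f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n"
    if add_generation_prompt:
        # Open a "model" turn so the model continues as the assistant.
        out += "<start_of_turn>model\n"
    return out

print(gemma_chat_prompt([{"role": "user", "content": "hello"}]))
# <bos><start_of_turn>user
# hello<end_of_turn>
# <start_of_turn>model
```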