Initialize the project; model provided by the ModelHub XC community

Model: mlx-community/gemma-3-1b-it-qat-bf16
Source: Original Platform
Author: ModelHub XC
Date: 2026-04-27 01:44:56 +08:00
commit 7b2589ad02
11 changed files with 51863 additions and 0 deletions

README.md (new file, 41 lines)

@@ -0,0 +1,41 @@
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you're required to review and
  agree to Google's usage license. To do this, please ensure you're logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-1b-it
tags:
- mlx
---
# mlx-community/gemma-3-1b-it-qat-bf16
The model [mlx-community/gemma-3-1b-it-qat-bf16](https://huggingface.co/mlx-community/gemma-3-1b-it-qat-bf16) was
converted to MLX format from [google/gemma-3-1b-it-qat-q4_0](https://huggingface.co/google/gemma-3-1b-it-qat-q4_0)
using mlx-lm version **0.22.5**.
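
For reference, a repository like this one is produced with the `mlx_lm.convert` tool. The command below is a minimal sketch rather than the recorded invocation; it assumes the `--hf-path`, `--mlx-path`, and `--dtype` flags of mlx-lm 0.22.x:
```bash
# Sketch only: convert the Google QAT checkpoint to an MLX repo in bfloat16.
# The exact command used for this commit is not recorded.
mlx_lm.convert \
    --hf-path google/gemma-3-1b-it-qat-q4_0 \
    --mlx-path gemma-3-1b-it-qat-bf16 \
    --dtype bfloat16
```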
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (on first use) and load the weights and tokenizer from the Hub.
model, tokenizer = load("mlx-community/gemma-3-1b-it-qat-bf16")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
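
The same flow is available from the command line via the `mlx_lm.generate` entry point, which applies the chat template for you (flag names as in mlx-lm 0.22.x):
```bash
# One-off generation without writing any Python.
mlx_lm.generate --model mlx-community/gemma-3-1b-it-qat-bf16 \
    --prompt "hello" --max-tokens 256
```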