| license | library_name | pipeline_tag | extra_gated_heading | extra_gated_prompt | extra_gated_button_content | base_model | tags |
|---|---|---|---|---|---|---|---|
| gemma | transformers | text-generation | Access Gemma on Hugging Face | To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. | Acknowledge license | google/gemma-3-1b-it | |
# mlx-community/gemma-3-1b-it-qat-bf16
The model mlx-community/gemma-3-1b-it-qat-bf16 was converted to MLX format from google/gemma-3-1b-it-qat-q4_0 using mlx-lm version 0.22.5.
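The conversion itself is not reproduced in this card, but a similar export can be sketched with mlx-lm's convert utility. This is a minimal sketch, assuming the `mlx_lm.convert` Python API; argument names follow mlx-lm 0.22.x and may differ in other releases, and the output path is illustrative:

```python
from mlx_lm.convert import convert

# Download the Hugging Face checkpoint and write an MLX-format copy
# with bfloat16 weights to a local directory (path is illustrative).
convert(
    hf_path="google/gemma-3-1b-it-qat-q4_0",
    mlx_path="gemma-3-1b-it-qat-bf16",
    dtype="bfloat16",
)
```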
## Use with mlx

```bash
pip install mlx-lm
```
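The front matter above carries Gemma's gated-access prompt, so downloading the weights assumes a Hugging Face account that has accepted Google's license and an authenticated session. One way to authenticate from Python, assuming the `huggingface_hub` package that mlx-lm depends on (the `huggingface-cli login` CLI is an alternative):

```python
from huggingface_hub import login

# Prompts for an access token from an account that has accepted
# Google's Gemma license; alternatively pass login(token="...").
login()
```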
```python
from mlx_lm import load, generate

# Download the converted weights from the Hub and load model + tokenizer.
model, tokenizer = load("mlx-community/gemma-3-1b-it-qat-bf16")

prompt = "hello"

# Wrap the prompt in Gemma's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
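For interactive use, the one-shot `generate` call above can be replaced with incremental output. This is a minimal sketch, assuming the `stream_generate` helper exported by recent mlx-lm releases, which yields response chunks whose `text` field carries the newly decoded text; the prompt and `max_tokens` value are illustrative:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/gemma-3-1b-it-qat-bf16")

messages = [{"role": "user", "content": "Write a haiku about quantization."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print tokens as they are produced instead of waiting for the full reply.
for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```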