---
license: mit
language:
- zh
- en
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
base_model: THUDM/GLM-4-9B-0414
---

# mlx-community/GLM-4-9B-0414-bf16

This model [mlx-community/GLM-4-9B-0414-bf16](https://huggingface.co/mlx-community/GLM-4-9B-0414-bf16) was
converted to MLX format from [THUDM/GLM-4-9B-0414](https://huggingface.co/THUDM/GLM-4-9B-0414)
using mlx-lm version **0.22.4**.

## Use with mlx

```bash
pip install mlx-lm
```

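For a quick test without writing any Python, recent mlx-lm releases also install a small command-line generator; a minimal sketch, assuming the `mlx_lm.generate` entry point from the package above:

```bash
mlx_lm.generate --model mlx-community/GLM-4-9B-0414-bf16 --prompt "hello"
```

To use the model from Python instead:
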
```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/GLM-4-9B-0414-bf16")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
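
For interactive use you may prefer token-by-token output; a minimal streaming sketch, assuming `stream_generate` from the same package (available in recent mlx-lm releases):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/GLM-4-9B-0414-bf16")

# Apply the chat template as above before streaming.
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk of text as soon as it is generated.
for response in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()
```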