| base_model | library_name | license | tags |
|---|---|---|---|
| meta-llama/Meta-Llama-3.1-8B-Instruct | transformers | llama3.1 | |
# mlx-community/Meta-Llama-3.1-8B-Instruct-abliterated-bfloat16
The model mlx-community/Meta-Llama-3.1-8B-Instruct-abliterated-bfloat16 was converted to MLX format from mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated using mlx-lm version 0.16.1.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the converted weights and tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-abliterated-bfloat16")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
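Note that `generate` takes a plain prompt string, while this is an instruct-tuned model that expects the Llama 3.1 chat layout. In practice you would build the prompt with `tokenizer.apply_chat_template`; purely as an illustration of what that template produces, here is a minimal sketch of a single-turn prompt assembled by hand (the helper name is ours, not part of mlx-lm):

```python
# Sketch of the single-turn prompt layout assumed for Llama 3.1 instruct models.
# Normally tokenizer.apply_chat_template builds this string for you.
def build_llama31_prompt(user_message: str,
                         system_message: str = "You are a helpful assistant.") -> str:
    """Assemble a single-turn Llama 3.1-style chat prompt by hand."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_message}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        # The trailing assistant header cues the model to start its reply here.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama31_prompt("hello")
# prompt can then be passed to generate(model, tokenizer, prompt=prompt, ...)
```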