---
language:
- en
- de
license: apache-2.0
tags:
- chat
- GGUF
- mlx
base_model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-Coder-7B-Instruct-abliterated-v1
pipeline_tag: text-generation
---
# mlx-community/Josiefied-Qwen2.5-Coder-7B-Instruct-abliterated-v1
The model [mlx-community/Josiefied-Qwen2.5-Coder-7B-Instruct-abliterated-v1](https://huggingface.co/mlx-community/Josiefied-Qwen2.5-Coder-7B-Instruct-abliterated-v1) was
converted to MLX format from [Goekdeniz-Guelmez/Josiefied-Qwen2.5-Coder-7B-Instruct-abliterated-v1](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-Coder-7B-Instruct-abliterated-v1)
using mlx-lm version **0.21.3**.
## Use with mlx
```bash
pip install mlx-lm
```
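For a quick smoke test from the terminal, mlx-lm also installs a small command-line generator. A minimal invocation, assuming a recent mlx-lm release that provides the `mlx_lm.generate` entry point:
```bash
mlx_lm.generate --model mlx-community/Josiefied-Qwen2.5-Coder-7B-Instruct-abliterated-v1 \
  --prompt "Write a Python function that checks whether a string is a palindrome."
```
For programmatic use, load the model in Python: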
```python
from mlx_lm import load, generate

# Download (if needed) and load the model and tokenizer.
model, tokenizer = load("mlx-community/Josiefied-Qwen2.5-Coder-7B-Instruct-abliterated-v1")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in the
# expected chat format before generating.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
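For interactive use it can be nicer to print tokens as they are produced rather than waiting for the full completion. A minimal sketch, assuming the `stream_generate` helper shipped in recent mlx-lm releases, which yields response objects whose `.text` field holds the newly decoded chunk:
```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/Josiefied-Qwen2.5-Coder-7B-Instruct-abliterated-v1")

messages = [{"role": "user", "content": "Write a quicksort function in Python."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each decoded chunk as soon as it is available.
for response in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()
```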