| base_model | library_name | tags |
|---|---|---|
| mergekit-community/Deepseek-R1-Distill-NSFW-RPv1 | transformers | |
# hsefz-ChenJunJie/Deepseek-R1-Distill-NSFW-RPv1-mlx-fp16
The model `hsefz-ChenJunJie/Deepseek-R1-Distill-NSFW-RPv1-mlx-fp16` was converted to MLX format from `mergekit-community/Deepseek-R1-Distill-NSFW-RPv1` using mlx-lm version 0.22.3.
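For reference, the conversion described above is typically performed with mlx-lm's `mlx_lm.convert` utility. A minimal sketch, assuming the upstream weights are available on the Hugging Face Hub; the local output directory name is illustrative:

```shell
# Convert the upstream Hugging Face weights to MLX format.
# fp16 (float16) is mlx-lm's default conversion dtype, matching this repo.
# --mlx-path names the local output directory (illustrative here).
python -m mlx_lm.convert \
    --hf-path mergekit-community/Deepseek-R1-Distill-NSFW-RPv1 \
    --mlx-path Deepseek-R1-Distill-NSFW-RPv1-mlx-fp16
```

Note that this downloads the full upstream checkpoint, so it requires network access and sufficient disk space.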
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the converted model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("hsefz-ChenJunJie/Deepseek-R1-Distill-NSFW-RPv1-mlx-fp16")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in a user message
# and render it with the generation prompt appended
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
## Description