---
license: mit
library_name: mlx
datasets:
- PrimeIntellect/verifiable-coding-problems
- likaixin/TACO-verified
- livecodebench/code_generation_lite
language:
- en
base_model: agentica-org/DeepCoder-14B-Preview
pipeline_tag: text-generation
tags:
- mlx
---

# mlx-community/DeepCoder-14B-Preview-bf16
This model [mlx-community/DeepCoder-14B-Preview-bf16](https://huggingface.co/mlx-community/DeepCoder-14B-Preview-bf16) was
converted to MLX format from [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview)
using mlx-lm version **0.22.3**.

## Use with mlx
```bash
pip install mlx-lm
```
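mlx-lm also installs a small command-line generator, so you can smoke-test the model without writing any Python. A minimal sketch, assuming the `mlx_lm.generate` console script that ships with mlx-lm:

```bash
# One-off generation from the command line; the model is downloaded
# from the Hub on first use.
mlx_lm.generate --model mlx-community/DeepCoder-14B-Preview-bf16 \
  --prompt "Write a function that checks whether a string is a palindrome." \
  --max-tokens 512
```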
```python
from mlx_lm import load, generate

# Download (if needed) and load the model and tokenizer from the Hub.
model, tokenizer = load("mlx-community/DeepCoder-14B-Preview-bf16")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
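For long completions you may prefer to stream tokens as they are produced rather than wait for the full response. A minimal sketch, assuming mlx-lm's `stream_generate` helper, which in recent versions yields response chunks carrying a `.text` field:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/DeepCoder-14B-Preview-bf16")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "hello"}], add_generation_prompt=True
)

# Print each chunk as it arrives instead of buffering the whole response.
for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
    print(chunk.text, end="", flush=True)
print()
```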