---
language:
- en
license: apache-2.0
tags:
- mlx
base_model: alpindale/Mistral-7B-v0.2-hf
datasets:
- cognitivecomputations/dolphin
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- jondurbin/airoboros-2.2.1
- teknium/openhermes-2.5
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
model-index:
- name: dolphin-2.8-mistral-7b-v02
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: openai_humaneval
    metrics:
    - type: pass@1
      value: 0.469
      name: pass@1
      verified: false
---
# mlx-community/dolphin-2.8-mistral-7b-v02
This model was converted to MLX format from [`cognitivecomputations/dolphin-2.8-mistral-7b-v02`](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) using mlx-lm version **0.7.0**.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
# Download the model from the Hugging Face Hub (on first use) and load it
model, tokenizer = load("mlx-community/dolphin-2.8-mistral-7b-v02")

# Generate a completion; verbose=True streams tokens and prints timing stats
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
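
For quick one-off generations, mlx-lm also provides a command-line entry point; a minimal sketch (flag names may vary between releases, so check `python -m mlx_lm.generate --help` for your installed version):

```shell
# Generate directly from the command line; the model is fetched on first use
python -m mlx_lm.generate \
  --model mlx-community/dolphin-2.8-mistral-7b-v02 \
  --prompt "hello"
```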