Initialize the project; model provided by the ModelHub XC community
Model: mlx-community/Tessa-T1-14B-mlx-bf16 Source: Original Platform
README.md (new file, 44 lines)
@@ -0,0 +1,44 @@
---
base_model: Tesslate/Tessa-T1-14B
tags:
- text-generation-inference
- transformers
- qwen2
- trl
- mlx
license: apache-2.0
language:
- en
datasets:
- Tesslate/Tessa-T1-Dataset
library_name: mlx
pipeline_tag: text-generation
---

# mlx-community/Tessa-T1-14B-mlx-bf16
This model [mlx-community/Tessa-T1-14B-mlx-bf16](https://huggingface.co/mlx-community/Tessa-T1-14B-mlx-bf16) was
converted to MLX format from [Tesslate/Tessa-T1-14B](https://huggingface.co/Tesslate/Tessa-T1-14B)
using mlx-lm version **0.22.2**.
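For reference, conversions like this are normally produced with the `mlx_lm.convert` tool that ships with mlx-lm. A minimal sketch, assuming the flag names of mlx-lm 0.22.x (check `python -m mlx_lm.convert --help` against your installed version):

```bash
# Convert the original Hugging Face weights to MLX format in bf16.
# Omitting --quantize keeps the full-precision weights.
python -m mlx_lm.convert \
    --hf-path Tesslate/Tessa-T1-14B \
    --mlx-path Tessa-T1-14B-mlx-bf16 \
    --dtype bfloat16
```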
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the model and tokenizer from the Hub.
model, tokenizer = load("mlx-community/Tessa-T1-14B-mlx-bf16")

prompt = "hello"

# Wrap the raw prompt in the model's chat template when one is defined,
# so the model sees the same formatting it was trained with.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
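
The model can also be run without writing any Python, using the generation script bundled with mlx-lm; a quick sketch of the interface as of mlx-lm 0.22.x:

```bash
# One-off generation from the command line
python -m mlx_lm.generate \
    --model mlx-community/Tessa-T1-14B-mlx-bf16 \
    --prompt "hello"
```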