---
language:
license: llama3
tags:
base_model: NousResearch/Hermes-3-Llama-3.2-3B
widget:
library_name: transformers
model-index:
---

# mlx-community/Hermes-3-Llama-3.2-3B-bf16
The model [mlx-community/Hermes-3-Llama-3.2-3B-bf16](https://huggingface.co/mlx-community/Hermes-3-Llama-3.2-3B-bf16) was converted to MLX format from [NousResearch/Hermes-3-Llama-3.2-3B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B) using mlx-lm version **0.20.3**.
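A conversion like this can be reproduced with mlx-lm's `mlx_lm.convert` entry point. The command below is a minimal sketch assuming the `--hf-path` and `--dtype` flags as shipped in mlx-lm 0.20.x, not the exact invocation used to produce this repository; check `mlx_lm.convert --help` for the options of your installed version.

```bash
# Sketch: convert the base model to MLX format in bfloat16
# (flags assumed from mlx-lm 0.20.x; not the uploader's exact command)
mlx_lm.convert \
  --hf-path NousResearch/Hermes-3-Llama-3.2-3B \
  --dtype bfloat16 \
  --mlx-path Hermes-3-Llama-3.2-3B-bf16
```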
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the converted model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/Hermes-3-Llama-3.2-3B-bf16")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
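For quick tests without writing any Python, mlx-lm also installs a command-line generator. This is a sketch assuming the `mlx_lm.generate` entry point and its `--model`, `--prompt`, and `--max-tokens` flags from mlx-lm 0.20.x; verify the exact options with `mlx_lm.generate --help`.

```bash
# Sketch: generate from the converted model directly from the shell
# (flags assumed from mlx-lm 0.20.x)
mlx_lm.generate \
  --model mlx-community/Hermes-3-Llama-3.2-3B-bf16 \
  --prompt "hello" \
  --max-tokens 100
```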
## Description