| base_model | library_name | tags | language | pipeline_tag | license | license_name | license_link | inference | model_creator | model_name | quantized_by |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2 | transformers |  |  | text-generation | other | llama3 | LICENSE | false | MaziyarPanahi | Llama-3-8B-Instruct-v0.1 | MaziyarPanahi |
# Llama-3-8B-Instruct-v0.1
This model was developed on top of the MaziyarPanahi/Llama-3-8B-Instruct-DPO series.
## Quantized GGUF
All GGUF models are available here: MaziyarPanahi/Llama-3-8B-Instruct-v0.1-GGUF
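If you want to run the quantized files locally, here is a minimal sketch using llama-cpp-python. The quantization filename pattern (`*Q4_K_M.gguf`) is an assumption; substitute whichever quant actually exists in the GGUF repository.

```python
from llama_cpp import Llama

# Download one quantized file from the GGUF repo and load it.
# The filename glob is an assumption; pick a quant present in the repo.
llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Llama-3-8B-Instruct-v0.1-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

# llama.cpp applies the chat template stored in the GGUF metadata.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```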
## Prompt Template
This model uses the ChatML prompt template:

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
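For example, the pirate messages used in the snippet below render to a prompt along these lines (a sketch; the exact whitespace around the special tokens depends on the tokenizer's chat template):

```
<|im_start|>system
You are a pirate chatbot who always responds in pirate speak!
<|im_end|>
<|im_start|>user
Who are you?
<|im_end|>
<|im_start|>assistant
```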
## How to use
You can use this model with Hugging Face's transformers library by passing MaziyarPanahi/Llama-3-8B-Instruct-v0.1 as the model name.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from transformers import pipeline
import torch

model_id = "MaziyarPanahi/Llama-3-8B-Instruct-v0.1"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
    # attn_implementation="flash_attention_2"
)

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True
)

streamer = TextStreamer(tokenizer)

# Use a distinct name so the imported `pipeline` factory is not shadowed.
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    streamer=streamer
)

# Then you can use the pipeline to generate text.
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the messages into the model's chat format.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on the EOS token or on either end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|im_end|>"),
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipe(
    prompt,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)

# Strip the echoed prompt and print only the completion.
print(outputs[0]["generated_text"][len(prompt):])
```
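If you prefer not to use the pipeline helper, the same generation can be done with `model.generate` directly; a sketch reusing the `prompt` and `terminators` defined above:

```python
# Tokenize the rendered prompt and move the tensors to the model's device.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```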
## Description