# yodayo-ai/nephra_v1.0
| license | language | base_model |
|---|---|---|
| llama3 | | meta-llama/Meta-Llama-3-8B |
## Overview
Nephra v1 is a model built primarily for roleplaying sessions, trained on roleplay and instruction-style datasets.
## Model Details
- Developed by: Sao10K
- Model type: Text-based Large Language Model
- License: Meta Llama 3 Community License Agreement
- Finetuned from model: Meta-Llama-3-8B
## Inference Guidelines

```python
import transformers
import torch

model_id = "yodayo-ai/nephra_v1.0"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are to play the role of a cheerful assistant."},
    {"role": "user", "content": "Hi there, how's your day?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

outputs = pipeline(
    prompt,
    max_new_tokens=512,
    eos_token_id=[
        pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
        pipeline.tokenizer.eos_token_id,
    ],
    do_sample=True,
    temperature=1.12,
    min_p=0.075,
)

print(outputs[0]["generated_text"][len(prompt):])
```
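For reference, the text that `apply_chat_template` produces for the messages above follows the standard Llama-3-Instruct layout, roughly like this (shown for illustration; the exact template comes from the tokenizer):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are to play the role of a cheerful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

Hi there, how's your day?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```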
## Recommended Settings

To guide the model toward high-quality responses, the following settings are recommended:

- Prompt Format: same as Llama-3-Instruct
- Temperature: 1.12
- min-p: 0.075
- Repetition Penalty: 1.1
- Custom Stopping Strings: "\n{{user}}", "<", "```" (guards against occasional broken generations)
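Inference backends differ in whether they support custom stop strings natively; as a fallback, a minimal post-processing helper (a hypothetical sketch, not part of the model card) can truncate a completion at the earliest of the stopping strings listed above:

```python
# Hypothetical helper: cut generated text at the earliest custom stop string.
def truncate_at_stop_strings(text: str, stop_strings: list[str]) -> str:
    cut = len(text)
    for stop in stop_strings:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)  # keep only the text before the earliest stop
    return text[:cut]

# Example with the recommended stopping strings:
stops = ["\n{{user}}", "<", "```"]
print(truncate_at_stop_strings("Sure, here you go!<|eot_id|>", stops))
# -> Sure, here you go!
```

Applying this after generation mirrors what frontends like SillyTavern do with their stopping-string settings.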
## License

Nephra v1 falls under the Meta Llama 3 Community License Agreement.
## Description