Initialize the project; model provided by the ModelHub XC community
Model: yodayo-ai/nephra_v1.0 Source: Original Platform
---
license: llama3
language:
- en
base_model: meta-llama/Meta-Llama-3-8B
---

## Overview

**nephra v1** is a model built primarily for roleplaying sessions, trained on roleplay and instruction-style datasets.

## Model Details

- **Developed by**: [Sao10K](https://huggingface.co/Sao10K)
- **Model type**: Text-based Large Language Model
- **License**: [Meta Llama 3 Community License Agreement](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
- **Finetuned from model**: [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)

## Inference Guidelines

```python
import transformers
import torch

model_id = "yodayo-ai/nephra_v1.0"

# Load the model as a text-generation pipeline in bfloat16.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are to play the role of a cheerful assistant."},
    {"role": "user", "content": "Hi there, how's your day?"},
]

# Render the chat messages into the model's prompt format.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

outputs = pipeline(
    prompt,
    max_new_tokens=512,
    # Stop on either the Llama-3 end-of-turn token or the tokenizer's EOS.
    eos_token_id=[
        pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
        pipeline.tokenizer.eos_token_id,
    ],
    do_sample=True,
    temperature=1.12,
    min_p=0.075,
)

# Print only the newly generated text, stripping the echoed prompt.
print(outputs[0]["generated_text"][len(prompt):])
```

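The chat template applied above resolves to the Llama-3-Instruct format. As a rough illustration of what the prompt string looks like (a hand-rolled sketch for clarity only; use `apply_chat_template` in practice):

```python
def llama3_prompt(messages):
    # Assemble the Llama-3-Instruct chat format by hand: each turn is a
    # role header followed by the content and an end-of-turn token.
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Generation prompt: open an assistant header for the model to complete.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)
```

This matches the behavior of `add_generation_prompt=True`, which leaves the prompt ending on an open assistant turn.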
### Recommended Settings

To guide the model toward high-quality responses, the recommended settings are:

```
Prompt Format: same prompt format as Llama-3-Instruct
Temperature: 1.12
min-p: 0.075
Repetition Penalty: 1.1
Custom Stopping Strings: "\n{{user}}", "<", "```" -> the model has occasional broken generations
```
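The min-p setting above keeps only tokens whose probability is at least `min_p` times that of the most likely token, then renormalizes. A minimal pure-Python sketch of the idea (illustrative only, not the sampler implementation inside `transformers`):

```python
import math

def min_p_filter(logits, min_p=0.075):
    # Convert logits to probabilities via a numerically stable softmax.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep tokens whose probability is at least min_p times the top probability.
    threshold = min_p * max(probs)
    kept = {i: p for i, p in enumerate(probs) if p >= threshold}
    # Renormalize the surviving probabilities before sampling from them.
    z = sum(kept.values())
    return {i: p / z for i, p in kept.items()}
```

Unlike top-p, the cutoff scales with the model's confidence: when one token dominates, few alternatives survive; when the distribution is flat, more do.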

## License

Nephra v1 falls under the [Meta Llama 3 Community License Agreement](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE).