---
language:
- en
license: llama3.1
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
library_name: transformers
---

# 🐬 Dolphin X1 8B

Website: https://dphn.ai

Twitter: https://x.com/dphnAI

Web Chat: https://chat.dphn.ai

Telegram bot: https://t.me/DolphinAI_bot

![](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/hdAvdwZiJaLZWmw7V-rTL.png)
## What is Dolphin X1 8B?

Dolphin X1 8B is the result of our effort to directly uncensor Llama 3.1 8B Instruct while keeping or improving its abilities and changing the prose and character of the model.

![](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/aJCnioxjhHyLHLiGHhRuN.png)
## Sponsors

Our appreciation for the generous sponsors of Dolphin:

- [Crusoe Cloud](https://crusoe.ai/) - provided 16x L40S for training and evals
- [Akash](https://akash.network/) - provided on-demand 8x H100 for training
- [Lazarus](https://www.lazarusai.com/) - provided 16x H100 for training
- [Cerebras](https://cerebras.ai/) - provided excellent and fast inference services for data labeling
- [Andreessen Horowitz](https://a16z.com/) - provided a [grant](https://a16z.com/supporting-the-open-source-ai-community/) that made Dolphin 1.0 possible and enabled me to bootstrap my homelab

Dolphin aims to be a general-purpose model, similar to the models behind ChatGPT, Claude, and Gemini. But these models present problems for businesses seeking to include AI in their products:
1) They maintain control of the system prompt, deprecating and changing things as they wish, often causing software to break.
2) They maintain control of the model versions, sometimes changing things silently, or deprecating older models that your business relies on.
3) They maintain control of the alignment, and in particular the alignment is one-size-fits-all, not tailored to the application.
4) They can see all your queries and can potentially use that data in ways you wouldn't want.

Dolphin, in contrast, is steerable and gives control to the system owner. You set the system prompt. You decide the alignment. You have control of your data. Dolphin does not impose its ethics or guidelines on you. You are the one who decides the guidelines.

Dolphin belongs to YOU. It is your tool, an extension of your will.

Just as you are personally responsible for what you do with a knife, gun, fire, car, or the internet, you are the creator and originator of any content you generate with Dolphin.

https://dphn.ai/models

https://erichartford.com/uncensored-models
## Chat Template

We maintained the default Llama chat template for this model. A typical input looks like this:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

system-prompt<|eot_id|><|start_header_id|>user<|end_header_id|>

user-prompt<|eot_id|><|start_header_id|>assistant<|end_header_id|>

assistant-response<|eot_id|>
```
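
As a rough illustration, the formatting above can be sketched in Python. This is a hypothetical helper for understanding the layout only; in practice the tokenizer's `apply_chat_template` from the `transformers` library is the authoritative implementation:

```python
def render_chat(messages):
    """Render a list of {"role", "content"} messages into Llama 3.1 chat format.

    Sketch for illustration; use tokenizer.apply_chat_template in real code.
    """
    text = "<|begin_of_text|>"
    for msg in messages:
        text += (
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # A trailing assistant header cues the model to generate its reply.
    text += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return text

prompt = render_chat([
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```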

## System Prompt

In Dolphin, the system prompt is what you use to set the tone and alignment of the responses. You can set a character, a mood, and rules for its behavior, and it will try its best to follow them.

Make sure to set the system prompt in order to set the tone and guidelines for the responses; otherwise, it will act in a default way that might not be what you want.
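
For example, a persona-setting system prompt might be wired up like this (the persona text below is illustrative, not from this card):

```python
# An illustrative system prompt that sets a character and rules of behavior.
system_prompt = (
    "You are Dolphin, a blunt, no-nonsense engineering assistant. "
    "Answer concisely, show code when useful, and never moralize."
)

# This message list is what you pass to apply_chat_template or to an
# OpenAI-compatible chat endpoint.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "How do I read a file line by line in Python?"},
]
```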

## Sample Outputs

**add sample outputs here**
## How to use

There are many ways to use a Hugging Face model, including:

- ollama
- LM Studio
- Hugging Face Transformers library
- vLLM
- SGLang
- TGI
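
As a minimal Transformers sketch (the model id `dphn/Dolphin-X1-8B` is taken from this card; the generation settings are illustrative, not prescribed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "dphn/Dolphin-X1-8B"  # model id from this card


def chat(user_prompt, system_prompt="You are Dolphin, a helpful AI assistant."):
    """Load the model and generate one reply. Settings are illustrative."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]
    # Render messages with the model's own chat template.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(chat("Write a haiku about dolphins."))
```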

## Use with vLLM

This model can be hosted using the [vLLM](https://docs.vllm.ai/en/latest/) engine, using the commands shown below:

```bash
uv pip install vllm
vllm serve dphn/Dolphin-X1-8B
```

See the [documentation](https://docs.vllm.ai/en/latest/) for more information.
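
Once `vllm serve` is running, you can query its OpenAI-compatible endpoint. The URL below assumes vLLM's default port (8000); only the standard library is used:

```python
import json
import urllib.request

# Chat request body for vLLM's OpenAI-compatible server.
payload = {
    "model": "dphn/Dolphin-X1-8B",
    "messages": [
        {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
        {"role": "user", "content": "Say hello in one sentence."},
    ],
}


def send(url="http://localhost:8000/v1/chat/completions"):
    """POST the payload to a running vLLM server and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(send())
```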

## Evals

| Benchmark | Score |
|---|---|
| MMLU | 0.6269 |
| MMLU (professional) | 0.6102 |
| MMLU (college) | 0.5294 |
| MMLU (high school) | 0.6916 |
| MMLU (other) | 0.6637 |
| IFEval | 0.6081 |
| Dolphin-refusals | 95.96% pass rate on 4.5k commonly refused prompts |