---
library_name: transformers
license: other
datasets:
- teknium/OpenHermes-2.5
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- argilla/distilabel-capybara-dpo-7k-binarized
language:
- en
pipeline_tag: text-generation
---

# Quyen
<img src="quyen.webp" width="512" height="512" alt="Quyen">

# Model Description
Quyen is our first flagship LLM series based on the Qwen1.5 family. We introduced 6 different versions:

- **Quyen-SE (0.5B)**
- **Quyen-Mini (1.8B)**
- **Quyen (4B)**
- **Quyen-Plus (7B)**
- **Quyen-Pro (14B)**
- **Quyen-Pro-Max (72B)**

All models were trained with SFT and DPO on the following datasets:

- *OpenHermes-2.5* by **Teknium**
- *Capybara* by **LDJ**
- *argilla/distilabel-capybara-dpo-7k-binarized* by **argilla**
- *orca_dpo_pairs* by **Intel**
- and private data by **Ontocord** & **BEE-spoke-data**

# Prompt Template
- All Quyen models use ChatML as the default template:

```
<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Hello world.<|im_end|>
<|im_start|>assistant
```

- You can also use `apply_chat_template`:

```python
messages = [
    {"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
    {"role": "user", "content": "Hello world."}
]
# add_generation_prompt=True appends the opening <|im_start|>assistant tag so the model continues as the assistant
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
model.generate(gen_input)
```
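
- For reference, a minimal end-to-end sketch with 🤗 Transformers is shown below. The repository id (`vilm/Quyen-Pro-v0.1`) and the generation settings are placeholders; swap in whichever Quyen checkpoint and settings you actually use:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; replace with the Quyen variant you want to run.
model_id = "vilm/Quyen-Pro-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
    {"role": "user", "content": "Hello world."},
]
# Build the ChatML prompt and open the assistant turn for generation.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```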

# Benchmarks

- Coming soon! We will update the benchmarks later.

# Acknowledgement
- We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation.
- Special thanks to the Qwen team for giving us early access to the models for these amazing finetunes.