---
# The following is an example of "tasks" for text generation; you can learn more on this page: https://modelscope.cn/docs/%E4%BB%BB%E5%8A%A1%E7%9A%84%E4%BB%8B%E7%BB%8D
# tasks:
# - text-generation
license: llama2
inference: false
---
###### This model is currently using the default introduction template and is in the "pre-release" stage; the page is visible only to its owner.

###### Please complete the model card promptly according to the [model contribution guide](https://www.modelscope.cn/docs/%E5%A6%82%E4%BD%95%E6%92%B0%E5%86%99%E5%A5%BD%E7%94%A8%E7%9A%84%E6%A8%A1%E5%9E%8B%E5%8D%A1%E7%89%87). The ModelScope platform will display the model once its card is complete. Thank you for your understanding.
#### Clone with HTTP

```bash
git clone https://www.modelscope.cn/Xorbits/vicuna-7b-v1.5.git
```
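The weight files in this repository are large, so they are typically tracked with Git LFS. A minimal sketch, assuming Git LFS is installed on your machine:

```bash
# Enable Git LFS once per machine so the large weight files are fetched during clone.
git lfs install
git clone https://www.modelscope.cn/Xorbits/vicuna-7b-v1.5.git
```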
# Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288)
### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses

The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model

- Command line interface (see the launch sketch below): https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api
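As a quick start, here is a minimal sketch of chatting with this model through FastChat's command-line interface. It assumes FastChat is installed from PyPI and that `--model-path` points at a local or downloadable copy of the vicuna-7b-v1.5 weights:

```bash
# Install FastChat and launch an interactive chat session with Vicuna.
pip3 install fschat
python3 -m fastchat.serve.cli --model-path lmsys/vicuna-7b-v1.5
```

To serve the model behind FastChat's OpenAI-compatible REST API instead, the FastChat docs describe launching a controller, a model worker, and the API server:

```bash
# Run each server in its own terminal; the API then listens on localhost:8000.
python3 -m fastchat.serve.controller
python3 -m fastchat.serve.model_worker --model-path lmsys/vicuna-7b-v1.5
python3 -m fastchat.serve.openai_api_server --host localhost --port 8000
```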
## Training Details

Vicuna v1.5 is fine-tuned from Llama 2 with supervised instruction fine-tuning. The training data is around 125K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
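For readers who want to reproduce this kind of supervised fine-tuning, the sketch below shows the general shape of a FastChat training launch. It is heavily abridged, and the data file, flags, and hyperparameters shown here are assumptions for illustration; consult FastChat's training documentation for the authoritative recipe:

```bash
# Illustrative only: supervised fine-tuning of a Llama 2 base model on
# ShareGPT-style conversation data with FastChat's training script.
# (Paths and flag values are placeholders; see the FastChat docs.)
torchrun --nproc_per_node=4 fastchat/train/train_mem.py \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --data_path data/sharegpt_conversations.json \
    --output_dir ./vicuna-7b-v1.5-sft
```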
## Evaluation

![Evaluation results](https://github.com/lm-sys/lm-sys.github.io/blob/main/public/images/webdata/vicuna_v1.5_eval.png?raw=true)

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).