Initialize project; model provided by the ModelHub XC community
Model: PetroGPT/Breeze-Petro-7B-Instruct-v1 Source: Original Platform
---
library_name: transformers
tags:
- chemistry
- code
- text-generation-inference
license: apache-2.0
language:
- en
- zh
metrics:
- accuracy
- code_eval
---

# Breeze-Petro-7B-Instruct-v1

- Model creator: [MediaTek Research](https://huggingface.co/MediaTek-Research)
- Original model: [MediaTek-Research/Breeze-7B-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0)

### Model Description

This model is fine-tuned from Breeze-7B-Instruct-v1_0. The training set is based mainly on chemical and procedural knowledge, supplemented with knowledge of the petroleum industry.

- **Developed by:** RebeccaChou
- **License:** apache-2.0
- **Finetuned from model:** [MediaTek-Research/Breeze-7B-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0)
- **Language(s) (NLP):** English, Traditional Chinese (繁體中文)

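Since the card lists `library_name: transformers`, the model should load through the standard `AutoModelForCausalLM` API. The sketch below is untested: the repo id is inferred from the card's leaderboard link, and the `[INST]` template is the Mistral-style format documented for the base Breeze-7B-Instruct model, assumed (not confirmed) to carry over to this fine-tune.

```python
MODEL_ID = "Rebecca19990101/Breeze-Petro-7B-Instruct-v1"  # repo id inferred from the card's links


def build_prompt(query: str, sys_prompt: str = "You are a helpful AI assistant.") -> str:
    """Single-turn prompt in the Mistral-style [INST] template documented for
    the base Breeze-7B-Instruct model (assumed to apply to this fine-tune)."""
    return f"<s>{sys_prompt} [INST] {query} [/INST]"


def load_model(model_id: str = MODEL_ID):
    """Load the tokenizer and model; fp16 weights need roughly 16 GB of memory."""
    # Import inside the function so this file imports cleanly without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    return tokenizer, model
```

With the model loaded, generation follows the usual `tokenizer(...)` → `model.generate(...)` → `tokenizer.decode(...)` flow.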
# 📖 Table of Contents

1. [Open LLM Leaderboard](#🏆-open-llm-leaderboard)
   - ARC
   - HellaSwag
   - MMLU
   - TruthfulQA
   - Winogrande
   - GSM8K
2. [EvalPlus Leaderboard](#⚡-evalplus-leaderboard)
   - HumanEval
   - HumanEval_Plus
   - MBPP
   - MBPP_Plus
3. [Prompt Format](#⚗️-prompt-format)
4. [Quantized Models](#🛠️-quantized-models)
5. [Gratitude](#🙏-gratitude)

## 🏆 Open LLM Leaderboard

Results for Breeze-Petro-7B-Instruct-v1 on the Open LLM Leaderboard benchmarks:

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 59.32 |
| AI2 Reasoning Challenge (25-Shot) | 58.87 |
| HellaSwag (10-Shot)               | 79.17 |
| MMLU (5-Shot)                     | 56.62 |
| TruthfulQA (0-shot)               | 46.36 |
| Winogrande (5-shot)               | 73.64 |
| GSM8k (5-shot)                    | 41.24 |

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Rebecca19990101__Breeze-Petro-7B-Instruct-v1).

## ⚡ EvalPlus Leaderboard

| Model                     | HumanEval | HumanEval_Plus | MBPP | MBPP_Plus |
|---------------------------|----------:|---------------:|-----:|----------:|
| phi-2-2.7B                |      48.2 |           43.3 | 61.9 |      51.4 |
| SOLAR-10.7B-Instruct-v1.0 |      42.1 |           34.3 | 42.9 |      34.6 |
| CodeLlama-7B              |      37.8 |           34.1 | 57.6 |      45.4 |

## 🛠️ Quantized Models

* **GGUF:** https://huggingface.co/Rebecca19990101/breeze-petro-7b-instruct-v1-q4_k_m.gguf/tree/main

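The GGUF quant can presumably be run with llama.cpp's CLI. A sketch, assuming the `.gguf` file has already been downloaded from the repo above (the local filename and prompt template are assumptions, not confirmed by the card):

```shell
# Run the Q4_K_M quant with llama.cpp (adjust the path and flags to your setup)
llama-cli -m ./breeze-petro-7b-instruct-v1-q4_k_m.gguf \
  -p "[INST] What products come from fractional distillation of crude oil? [/INST]" \
  -n 256
```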
## Training Details

### Training Data

- **Dataset:** Rebecca19990101/petro-dataset-v2