Initialize the project; model provided by the ModelHub XC community
Model: HWERI/pythia-70m-deduped-cleansharegpt-en Source: Original Platform
README.md (new file, 24 lines added)
---
license: apache-2.0
datasets:
- shibing624/sharegpt_gpt4
language:
- en
---

Pythia-70m-deduped finetuned on a cleaned version of ShareGPT data.

The cleaned dataset was obtained by removing duplicates and paraphrases from the original corpus and keeping only the English instances.
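
The exact cleaning pipeline is not published in this card. The sketch below only illustrates the general idea with exact-match deduplication and an English-language filter (paraphrase detection is omitted); it assumes the ShareGPT-style `conversations`/`value` record layout of the source dataset and uses `langdetect` purely as an example language filter.

```python
# Illustrative sketch only -- not the authors' actual pipeline.
# Assumes ShareGPT-style records: {"conversations": [{"from": ..., "value": ...}, ...]}.
from datasets import load_dataset
from langdetect import detect

raw = load_dataset("shibing624/sharegpt_gpt4", split="train")

seen = set()
cleaned = []
for example in raw:
    # Flatten the conversation into one string for dedup / language detection.
    text = " ".join(turn["value"] for turn in example["conversations"])
    key = " ".join(text.lower().split())  # normalize case and whitespace
    if key in seen:
        continue  # drop exact duplicates (paraphrase filtering not shown)
    seen.add(key)
    try:
        if detect(text) != "en":  # keep English instances only
            continue
    except Exception:
        continue  # skip texts the detector cannot handle
    cleaned.append(example)

print(f"kept {len(cleaned)} instances")
```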

The final training set contains 3,507 instances.
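
A minimal usage sketch with the Hugging Face `transformers` library, assuming the checkpoint loads as a standard causal LM; the prompt format and generation settings below are illustrative, since the exact chat template used during finetuning is not documented in this card.

```python
# Minimal usage sketch, assuming a standard causal-LM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HWERI/pythia-70m-deduped-cleansharegpt-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative ShareGPT-style prompt; the actual finetuning template may differ.
prompt = "### Human: What is the capital of France?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```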

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_HWERI__pythia-70m-deduped-cleansharegpt-en).

| Metric               | Value |
|----------------------|-------|
| Avg.                 | 25.06 |
| ARC (25-shot)        | 21.16 |
| HellaSwag (10-shot)  | 27.16 |
| MMLU (5-shot)        | 25.24 |
| TruthfulQA (0-shot)  | 48.57 |
| Winogrande (5-shot)  | 50.12 |
| GSM8K (5-shot)       | 0.0   |
| DROP (3-shot)        | 3.15  |