# Model Card for llama-2-13b-dolphin_5w
Fine-tuned from llama-2-13b on the first 50,000 examples of the dolphin dataset.
## Fine-Tuning Information
- GPU: RTX 4090 (single card / 24564 MiB)
- model: meta-llama/Llama-2-13b-hf
- dataset: ehartford/dolphin (first 50,000 examples of the training set)
- peft_type: LoRA
- lora_rank: 8
- lora_target: q_proj, v_proj
- per_device_train_batch_size: 8
- gradient_accumulation_steps: 8
- learning_rate: 5e-5
- epoch: 1
- precision: bf16
- quantization: load_in_4bit
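
The settings above correspond to a standard 4-bit QLoRA-style setup. Below is a minimal sketch of an equivalent configuration using transformers, peft, and bitsandbytes; the variable names, the output directory, and the BitsAndBytesConfig details are illustrative assumptions, not the exact training script used for this model.

```python
# Sketch of a LoRA fine-tuning configuration matching the settings listed above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit (quantization: load_in_4bit) with bf16 compute.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    torch_dtype=torch.bfloat16,
)

# Attach LoRA adapters (peft_type: LoRA).
lora_config = LoraConfig(
    r=8,                                  # lora_rank
    target_modules=["q_proj", "v_proj"],  # lora_target
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Training hyperparameters from the list above; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="llama-2-13b-dolphin_5w",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,
    learning_rate=5e-5,
    num_train_epochs=1,
    bf16=True,
)
# These objects would then be passed to transformers.Trainer together with
# the tokenized dolphin subset.
```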
## Fine-Tuning Detail
- train_loss: 0.8799
- train_runtime: 7:11:23 (with DeepSpeed)
## Evaluation
- Evaluation results are from the HuggingFaceH4/open_llm_leaderboard.
- Compared against Llama-2-13b and other dolphin-trained models on four benchmarks.
- The benchmarks are ARC, HellaSwag, MMLU, and TruthfulQA.
- Note: ehartford/dolphin-llama-13b is based on llama-1.
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
|---|---|---|---|---|---|
| meta-llama/Llama-2-13b-hf | 56.9 | 58.11 | 80.97 | 54.34 | 34.17 |
| meta-llama/Llama-2-13b-chat-hf | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 |
| ehartford/dolphin-llama-13b | 59.26 | 55.55 | 77.11 | 52.16 | 52.23 |
| CHIH-HUNG/llama-2-13b-dolphin_20w | 60.17 | 59.56 | 82.55 | 55.89 | 42.67 |
| CHIH-HUNG/llama-2-13b-dolphin_5w | 61 | 60.67 | 82.69 | 56.23 | 44.41 |
## How to convert the dataset to JSON
- Pass the dataset name to load_dataset, and use take to select the first N examples.
- Check the dataset's column names and fill them into the example fields (e.g. instruction, input, output).
- Finally, set the output path for the JSON file (json_filename), as in the sketch below.
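
A minimal sketch of that procedure using the datasets library. The dataset name, the take-based selection, the field names, and json_filename come from this card; the 50,000-example count matches the fine-tuning setup above, and the output path value is a placeholder.

```python
import json
from datasets import load_dataset

json_filename = "dolphin_5w.json"  # hypothetical output path

# Stream the dataset and take the first 50,000 examples of the train split.
dataset = load_dataset("ehartford/dolphin", split="train", streaming=True)
subset = dataset.take(50000)

# Map each example's columns into the expected fields.
records = [
    {
        "instruction": example["instruction"],
        "input": example["input"],
        "output": example["output"],
    }
    for example in subset
]

# Write the records to the JSON file at json_filename.
with open(json_filename, "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```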