# Model Card for Model ID
Fine-tuned from llama-2-13b on the huangyt/FINETUNE1 dataset, which contains roughly 170k training examples.
## Fine-Tuning Information
- GPU: RTX 4090 (single card, 24,564 MiB)
- model: meta-llama/Llama-2-13b-hf
- dataset: huangyt/FINETUNE1 (roughly 170k training examples)
- peft_type: LoRA
- lora_rank: 8
- lora_target: gate_proj, up_proj, down_proj
- per_device_train_batch_size: 8
- gradient_accumulation_steps: 8
- learning_rate: 5e-5
- epoch: 1
- precision: bf16
- quantization: load_in_4bit
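
The settings above map directly onto the standard transformers/peft/bitsandbytes configuration objects. The following is a minimal sketch under that assumption; the script structure, variable names, and `output_dir` are illustrative, not the exact training code used for this model:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model

# quantization: load_in_4bit, with bf16 compute to match the precision above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",      # base model
    quantization_config=bnb_config,
    device_map="auto",
)

# peft_type: LoRA / lora_rank: 8 / lora_target: gate_proj, up_proj, down_proj
lora_config = LoraConfig(
    r=8,
    target_modules=["gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# training hyperparameters from the list above; output_dir is a placeholder
training_args = TrainingArguments(
    output_dir="llama-2-13b-FINETUNE1",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,
    learning_rate=5e-5,
    num_train_epochs=1,
    bf16=True,
)
```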
## Fine-Tuning Detail
- train_loss: 0.66
- train_runtime: 16:24:31 (using DeepSpeed)
## Evaluation
- Evaluation results are taken from HuggingFaceH4/open_llm_leaderboard.
- Compared against Llama-2-13b on 4 benchmarks: ARC, HellaSwag, MMLU, and TruthfulQA.
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
|-------|---------|-----|-----------|------|------------|
| meta-llama/Llama-2-13b-hf | 56.9 | 58.11 | 80.97 | 54.34 | 34.17 |
| meta-llama/Llama-2-13b-chat-hf | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 |
| CHIH-HUNG/llama-2-13b-Fintune_1_17w | 58.24 | 59.47 | 81 | 54.31 | 38.17 |
| CHIH-HUNG/llama-2-13b-huangyt_Fintune_1_17w-q_k_v_o_proj | 58.49 | 59.73 | 81.06 | 54.53 | 38.64 |
| CHIH-HUNG/llama-2-13b-Fintune_1_17w-gate_up_down_proj | 58.81 | 57.17 | 82.26 | 55.89 | 39.93 |
| CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16 | 58.86 | 57.25 | 82.27 | 56.16 | 39.75 |
| CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r4 | 58.71 | 56.74 | 82.27 | 56.18 | 39.65 |
## How to convert the dataset to JSON
- Pass the dataset name to load_dataset, and pass the number of leading examples to fetch to take.
- Inspect the dataset's column names and fill them into the example fields (e.g. system_prompt, question, response).
- Finally, specify where to save the JSON file (json_filename); see the sketch below.
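
A minimal sketch of these three steps using the datasets library. The variable names (dataset_name, num_samples, json_filename) and the exact field mapping are assumptions for illustration, not the original conversion script:

```python
import json
from datasets import load_dataset

dataset_name = "huangyt/FINETUNE1"   # step 1: dataset to convert
num_samples = 100                    # step 1: take the first N examples
json_filename = "FINETUNE1.json"     # step 3: where to save the output

# Stream the dataset so only the examples we take are downloaded.
dataset = load_dataset(dataset_name, split="train", streaming=True)

examples = []
for row in dataset.take(num_samples):
    # Step 2: map the dataset's own column names into the fields to keep
    # (e.g. system_prompt / question / response, as noted above).
    examples.append({
        "system_prompt": row.get("system_prompt", ""),
        "question": row.get("question", ""),
        "response": row.get("response", ""),
    })

with open(json_filename, "w", encoding="utf-8") as f:
    json.dump(examples, f, ensure_ascii=False, indent=2)
```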