Initialize project; model provided by the ModelHub XC community
Model: uyenlk/RMU_forget10_5e-5_Llama-3.2-3B-Instruct_coef10_layer26 Source: Original Platform
10
.hydra/overrides.yaml
Normal file
@@ -0,0 +1,10 @@
+- experiment=unlearn/tofu/default
+- model=Llama-3.2-3B-Instruct
+- model.model_args.pretrained_model_name_or_path=open-unlearning/tofu_Llama-3.2-3B-Instruct_full
+- forget_split=forget10
+- retain_split=retain90
+- trainer=RMU
+- trainer.method_args.steering_coeff=10
+- trainer.method_args.module_regex=model\.layers\.26
+- trainer.args.learning_rate=5e-5
+- task_name=RMU_forget10_5e-5_Llama-3.2-3B-Instruct_coef10_layer26
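The overrides above use Hydra's dotted-key syntax: a plain key like `trainer=RMU` selects a config group, while a dotted key like `trainer.method_args.steering_coeff=10` sets one leaf inside the composed config. As a rough sketch of those semantics (not Hydra's actual implementation, and with all values kept as strings for simplicity):

```python
def apply_overrides(overrides):
    """Toy model of Hydra-style dotted overrides: build a nested dict
    from 'a.b.c=value' strings. Illustration only; real Hydra also
    resolves config groups, types, and interpolation."""
    config = {}
    for item in overrides:
        key, _, value = item.partition("=")
        parts = key.split(".")
        node = config
        for part in parts[:-1]:
            # A group selection (e.g. trainer=RMU) may have stored a
            # string here; descend into a dict so dotted keys can nest.
            if not isinstance(node.get(part), dict):
                node[part] = {}
            node = node[part]
        node[parts[-1]] = value
    return config

cfg = apply_overrides([
    "trainer=RMU",
    "trainer.method_args.steering_coeff=10",
    "trainer.args.learning_rate=5e-5",
])
```

Here `cfg["trainer"]["method_args"]["steering_coeff"]` ends up as the string `"10"`; real Hydra would additionally convert values to their declared types and merge the overrides into the defaults selected by `experiment=unlearn/tofu/default`.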