---
license: gemma
base_model: HuggingFaceH4/zephyr-7b-gemma-sft-v0.1
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- argilla/dpo-mix-7k
model-index:
- name: DiscoPOP-zephyr-7b-gemma
  results: []
---

# DiscoPOP-zephyr-7b-gemma

This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1) on the argilla/dpo-mix-7k dataset.

It comes from the paper ["Discovering Preference Optimization Algorithms with and for Large Language Models"](https://arxiv.org/abs/2406.08414).

Read the [blog post about it here!](https://sakana.ai/llm-squared)

See the codebase used to generate it here: [https://github.com/SakanaAI/DiscoPOP](https://github.com/SakanaAI/DiscoPOP)
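
As a rough quick-start, the model can be loaded like any other chat-tuned causal LM on the Hub. The snippet below is a hypothetical sketch (the prompt and generation settings are illustrative, not from this card), following the usual `transformers` pipeline pattern:

```python
import torch
from transformers import pipeline

# Illustrative quick-start; sampling settings here are placeholders.
pipe = pipeline(
    "text-generation",
    model="SakanaAI/DiscoPOP-zephyr-7b-gemma",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Explain preference optimization in one paragraph."}]
outputs = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```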

## Model description

This model is trained identically to [HuggingFaceH4/zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1), except that it uses DiscoPOP in place of Direct Preference Optimization (DPO).

DiscoPOP is our Discovered Preference Optimization algorithm, defined as follows:

```python
# Requires: import torch, and import torch.nn.functional as F
def log_ratio_modulated_loss(
    self,
    policy_chosen_logps: torch.FloatTensor,
    policy_rejected_logps: torch.FloatTensor,
    reference_chosen_logps: torch.FloatTensor,
    reference_rejected_logps: torch.FloatTensor,
) -> torch.FloatTensor:
    # Log-ratios of chosen vs. rejected under the policy and the reference model
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = reference_chosen_logps - reference_rejected_logps
    logits = pi_logratios - ref_logratios
    # Modulate the mixing coefficient based on the log ratio magnitudes
    log_ratio_modulation = torch.sigmoid(logits)
    logistic_component = -F.logsigmoid(self.beta * logits)
    exp_component = torch.exp(-self.beta * logits)
    # Blend between logistic and exponential component based on log ratio modulation
    losses = logistic_component * (1 - log_ratio_modulation) + exp_component * log_ratio_modulation
    return losses
```
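
Reading the code as math, with $\rho$ the difference between the policy and reference log-ratios, the loss blends the standard logistic (DPO) term with an exponential term, weighted by $\sigma(\rho)$:

$$\mathcal{L}(\rho) = \big(1 - \sigma(\rho)\big)\,\big(-\log \sigma(\beta\rho)\big) + \sigma(\rho)\,e^{-\beta\rho}$$

The snippet below is a self-contained sketch for trying the loss on dummy log-probabilities; the standalone function name, the explicit `beta` argument, and the example values are all illustrative, not part of the original implementation:

```python
import torch
import torch.nn.functional as F

def discopop_loss(policy_chosen_logps, policy_rejected_logps,
                  reference_chosen_logps, reference_rejected_logps, beta=0.05):
    """Standalone version of log_ratio_modulated_loss; beta is passed in
    rather than read from self, and 0.05 is an illustrative value."""
    logits = (policy_chosen_logps - policy_rejected_logps) - (
        reference_chosen_logps - reference_rejected_logps
    )
    modulation = torch.sigmoid(logits)                 # blend weight in (0, 1)
    logistic_component = -F.logsigmoid(beta * logits)  # DPO-style logistic term
    exp_component = torch.exp(-beta * logits)          # exponential term
    return logistic_component * (1 - modulation) + exp_component * modulation

# Dummy per-sequence log-probabilities for a batch of three preference pairs.
policy_chosen = torch.tensor([-10.0, -12.0, -8.0])
policy_rejected = torch.tensor([-14.0, -11.0, -9.0])
ref_chosen = torch.tensor([-11.0, -12.5, -8.5])
ref_rejected = torch.tensor([-13.0, -11.5, -9.0])

print(discopop_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected))
```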

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after the list):
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
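
As a rough illustration (not the actual training recipe, and `output_dir` is a placeholder), these settings could be expressed with Hugging Face `TrainingArguments` like this:

```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters; the real run used an
# alignment-handbook recipe, which may configure these differently.
args = TrainingArguments(
    output_dir="discopop-zephyr-7b-gemma",  # placeholder
    learning_rate=5e-07,
    per_device_train_batch_size=2,   # train_batch_size
    per_device_eval_batch_size=4,    # eval_batch_size
    seed=42,
    gradient_accumulation_steps=8,
    # 2 per device x 8 devices x 8 accumulation steps = 128 total train batch
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=2,
)
```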

### Framework versions

- Transformers 4.40.1
- Pytorch 2.1.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1