---
library_name: transformers
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged-in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
base_model:
- google/gemma-2b
datasets:
- Open-Orca/SlimOrca-Dedup
---

# OrcaGemma-2B

This is [google/gemma-2b](https://huggingface.co/google/gemma-2b), supervised fine-tuned on the [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup) dataset. It does not perform as well as [mlabonne/Gemmalpaca-2B](https://huggingface.co/mlabonne/Gemmalpaca-2B).
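## 💻 Usage

Below is a minimal inference sketch with 🤗 Transformers. It assumes the uploaded tokenizer ships a chat template (Gemma's default unless one was overridden during fine-tuning) and that a GPU with bfloat16 support is available; adjust the dtype and prompt format as needed.

```python
# Minimal sketch, assuming the repo's tokenizer carries a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/OrcaGemma-2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the accelerate package
)

messages = [{"role": "user", "content": "What is a qubit?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate, then decode only the newly produced tokens
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```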
## 🏆 Evaluation
### Nous
OrcaGemma-2B outperforms gemma-2b but underperforms gemma-2b-it on Nous' benchmark suite (evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval)). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [mlabonne/Gemmalpaca-2B](https://huggingface.co/mlabonne/Gemmalpaca-2B) [📄](https://gist.github.com/mlabonne/4b638752fc3227df566f9562064cb864) | 38.39 | 24.48 | 51.22 | 47.02 | 30.85 |
| [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) [📄](https://gist.github.com/mlabonne/db0761e74175573292acf497da9e5d95) | 36.1 | 23.76 | 43.6 | 47.64 | 29.41 |
| [**mlabonne/OrcaGemma-2B**](https://huggingface.co/mlabonne/OrcaGemma-2B) [📄](https://gist.github.com/mlabonne/c8c0914945f9c189cca74120bc834c3e) | **35.63** | **24.44** | **42.49** | **45.84** | **29.76** |
| [google/gemma-2b](https://huggingface.co/google/gemma-2b) [📄](https://gist.github.com/mlabonne/7df1f238c515a5f63a750c8792cef59e) | 34.26 | 22.7 | 43.35 | 39.96 | 31.03 |
## 🧩 Configuration
It was trained using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) with the following configuration.
```yaml
base_model: google/gemma-2b
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: Open-Orca/SlimOrca-Dedup
    type: sharegpt

dataset_prepared_path:
val_set_size: 0.01
output_dir: ./out

sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true

adapter: qlora
lora_model_dir:
lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true

wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention:

warmup_steps: 10
evals_per_epoch: 10
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
```
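For intuition, the QLoRA-specific lines above (4-bit base model; LoRA on all linear layers with r=32, alpha=64, dropout 0.05) correspond roughly to the following plain 🤗 PEFT/bitsandbytes setup. This is a sketch of the technique, not Axolotl's actual code; `target_modules="all-linear"` is a PEFT shorthand standing in for `lora_target_linear: true`.

```python
# Sketch of the QLoRA setup the config describes; not Axolotl's implementation.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# load_in_4bit: true -> quantize the frozen base model to 4-bit (bitsandbytes)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches bf16: auto on recent GPUs
)
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b", quantization_config=bnb_config, device_map="auto"
)

# adapter: qlora, with lora_r / lora_alpha / lora_dropout / lora_target_linear
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules="all-linear",  # stand-in for lora_target_linear: true
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```

With Axolotl installed, a config like this is typically launched with `accelerate launch -m axolotl.cli.train config.yaml`.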
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)