---
base_model: s3nh/phi-2_dolly_instruction_polish
inference: false
library_name: peft
license: other
model-index:
- name: phi-2-sft-out
  results: []
model_creator: s3nh
model_name: phi-2_dolly_instruction_polish
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- generated_from_trainer
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---

# s3nh/phi-2_dolly_instruction_polish-GGUF

Quantized GGUF model files for [phi-2_dolly_instruction_polish](https://huggingface.co/s3nh/phi-2_dolly_instruction_polish) from [s3nh](https://huggingface.co/s3nh).

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| phi-2_dolly_instruction_polish.fp16.gguf | fp16 | 5.56 GB |
| phi-2_dolly_instruction_polish.q2_k.gguf | q2_k | 1.17 GB |
| phi-2_dolly_instruction_polish.q3_k_m.gguf | q3_k_m | 1.48 GB |
| phi-2_dolly_instruction_polish.q4_k_m.gguf | q4_k_m | 1.79 GB |
| phi-2_dolly_instruction_polish.q5_k_m.gguf | q5_k_m | 2.07 GB |
| phi-2_dolly_instruction_polish.q6_k.gguf | q6_k | 2.29 GB |
| phi-2_dolly_instruction_polish.q8_0.gguf | q8_0 | 2.96 GB |
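
These files run on any GGUF-compatible runtime (llama.cpp and its bindings). Below is a minimal sketch using `llama-cpp-python`; this is an assumption rather than the card's own instructions, the repo id is taken from the sync note at the end of this card, and `q4_k_m` is just one reasonable size/quality pick.

```python
# Sketch: download one quant and run a prompt with llama-cpp-python.
# Assumes: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="afrideva/phi-2_dolly_instruction_polish-GGUF",  # from the sync note below
    filename="phi-2_dolly_instruction_polish.q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
# The model is tuned on Polish Dolly-style instructions, so prompt in Polish.
out = llm("Napisz krótki wiersz o jesieni.", max_tokens=128)
print(out["choices"][0]["text"])
```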

## Original Model Card:

Built with Axolotl

# phi-2-sft-out

This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an otherwise unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2813
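
The `library_name: peft` metadata and the PEFT framework version listed below suggest this fine-tune was trained as an adapter on top of microsoft/phi-2. A minimal, non-authoritative sketch of loading it, assuming the adapter is published at s3nh/phi-2_dolly_instruction_polish (the `base_model` field in this card's metadata):

```python
# Sketch: load the PEFT adapter onto microsoft/phi-2 and generate.
# Assumes: pip install transformers peft torch
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "s3nh/phi-2_dolly_instruction_polish",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # phi-2 needed this on transformers < 4.37
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

inputs = tokenizer("Wyjaśnij krótko, czym jest fotosynteza.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```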

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after the list for one way of expressing them in code):

- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
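
As a non-authoritative sketch, the listed values map onto Hugging Face `TrainingArguments` roughly as follows. The original run was driven by Axolotl, so the real config differed in form; the `output_dir` simply reuses the model-index name from this card.

```python
# Hypothetical re-expression of the hyperparameters above; the actual
# training was configured through Axolotl, not written out like this.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="phi-2-sft-out",       # model-index name from this card
    learning_rate=3e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.95)
    adam_beta2=0.95,
    adam_epsilon=1e-5,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=4,
)
```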

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log        | 0.0   | 1     | 1.7973          |
| 1.9767        | 0.25  | 5290  | 1.4832          |
| 1.8474        | 0.5   | 10580 | 1.4356          |
| 1.8121        | 0.75  | 15870 | 1.4022          |
| 1.8333        | 1.0   | 21160 | 1.3678          |
| 1.6601        | 1.25  | 26450 | 1.3508          |
| 1.5452        | 1.5   | 31740 | 1.3357          |
| 1.7381        | 1.75  | 37030 | 1.3191          |
| 1.6256        | 2.0   | 42320 | 1.3090          |
| 1.5521        | 2.25  | 47610 | 1.2961          |
| 1.8318        | 2.5   | 52900 | 1.2910          |
| 1.6761        | 2.75  | 58190 | 1.2901          |
| 1.6312        | 3.0   | 63480 | 1.2879          |
| 1.7003        | 3.25  | 68770 | 1.2820          |
| 1.6915        | 3.5   | 74060 | 1.2814          |
| 1.5757        | 3.75  | 79350 | 1.2813          |

### Framework versions

- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0

## Training procedure

The following bitsandbytes quantization config was used during training (a code sketch follows the list):

- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
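
For reference, a sketch of the same settings expressed as a `transformers` `BitsAndBytesConfig` (the original run set these through Axolotl, not this code):

```python
# Non-authoritative sketch: the bitsandbytes config above, written as a
# BitsAndBytesConfig for 4-bit (QLoRA-style) loading of the base model.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Typically passed as:
# AutoModelForCausalLM.from_pretrained("microsoft/phi-2", quantization_config=bnb_config)
```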

### Framework versions

- PEFT 0.6.0

Model synced from source: afrideva/phi-2_dolly_instruction_polish-GGUF