---
license: other
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
license_name: microsoft-research-license
base_model: Yhyu13/LMCocktail-phi-2-v1
inference: false
model_creator: Yhyu13
model_name: LMCocktail-phi-2-v1
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# Yhyu13/LMCocktail-phi-2-v1-GGUF

Quantized GGUF model files for [LMCocktail-phi-2-v1](https://huggingface.co/Yhyu13/LMCocktail-phi-2-v1) from [Yhyu13](https://huggingface.co/Yhyu13).
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [lmcocktail-phi-2-v1.fp16.gguf](https://huggingface.co/afrideva/LMCocktail-phi-2-v1-GGUF/resolve/main/lmcocktail-phi-2-v1.fp16.gguf) | fp16 | 5.56 GB |
| [lmcocktail-phi-2-v1.q2_k.gguf](https://huggingface.co/afrideva/LMCocktail-phi-2-v1-GGUF/resolve/main/lmcocktail-phi-2-v1.q2_k.gguf) | q2_k | 1.17 GB |
| [lmcocktail-phi-2-v1.q3_k_m.gguf](https://huggingface.co/afrideva/LMCocktail-phi-2-v1-GGUF/resolve/main/lmcocktail-phi-2-v1.q3_k_m.gguf) | q3_k_m | 1.48 GB |
| [lmcocktail-phi-2-v1.q4_k_m.gguf](https://huggingface.co/afrideva/LMCocktail-phi-2-v1-GGUF/resolve/main/lmcocktail-phi-2-v1.q4_k_m.gguf) | q4_k_m | 1.79 GB |
| [lmcocktail-phi-2-v1.q5_k_m.gguf](https://huggingface.co/afrideva/LMCocktail-phi-2-v1-GGUF/resolve/main/lmcocktail-phi-2-v1.q5_k_m.gguf) | q5_k_m | 2.07 GB |
| [lmcocktail-phi-2-v1.q6_k.gguf](https://huggingface.co/afrideva/LMCocktail-phi-2-v1-GGUF/resolve/main/lmcocktail-phi-2-v1.q6_k.gguf) | q6_k | 2.29 GB |
| [lmcocktail-phi-2-v1.q8_0.gguf](https://huggingface.co/afrideva/LMCocktail-phi-2-v1-GGUF/resolve/main/lmcocktail-phi-2-v1.q8_0.gguf) | q8_0 | 2.96 GB |
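These GGUF files can be run locally with llama.cpp or a compatible front end. A minimal sketch, assuming the `huggingface_hub` CLI and a llama.cpp build are installed (the quant choice here is illustrative; the binary is named `llama-cli` in recent llama.cpp builds and `main` in older ones):

```shell
# Fetch one quant from the Hub (q4_k_m is a common quality/size trade-off)
huggingface-cli download afrideva/LMCocktail-phi-2-v1-GGUF \
  lmcocktail-phi-2-v1.q4_k_m.gguf --local-dir .

# Run it with llama.cpp
./llama-cli -m lmcocktail-phi-2-v1.q4_k_m.gguf \
  -p "Instruct: What is GGUF?\nOutput:" -n 128
```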
## Original Model Card:

# LM-cocktail phi-2 v1

This is a 50%-50% merge of the phi-2 Alpaca-GPT4 and phi-2 UltraChat-200k models:

- https://huggingface.co/Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1
- https://huggingface.co/venkycs/phi-2-ultrachat200k
# Code

LM-cocktail is a novel technique for merging multiple models: https://arxiv.org/abs/2311.13534

The code lives in this repo: https://github.com/FlagOpen/FlagEmbedding.git

Merging scripts are available under the [./scripts](./scripts) folder.
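At a 50%-50% ratio, this kind of merge amounts to a linear interpolation of the two models' weights. A minimal sketch of that idea, not the FlagEmbedding implementation: `merge_state_dicts` is a hypothetical helper, and plain floats stand in for the real parameter tensors.

```python
def merge_state_dicts(sd_a, sd_b, weight_a=0.5):
    """Linearly interpolate two state dicts with matching keys.

    Both models must share an architecture, i.e. identical
    parameter names and shapes; here each "parameter" is a float.
    """
    assert sd_a.keys() == sd_b.keys(), "models must share an architecture"
    weight_b = 1.0 - weight_a
    return {k: weight_a * sd_a[k] + weight_b * sd_b[k] for k in sd_a}

# Toy example: two "models" with one weight and one bias each.
merged = merge_state_dicts({"w": 2.0, "b": 0.0}, {"w": 4.0, "b": 1.0})
print(merged)  # {'w': 3.0, 'b': 0.5}
```

With real checkpoints the same loop runs over `model.state_dict()` tensors; a 50%-50% split is simply the `weight_a=0.5` default.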