---
base_model: microsoft/Phi-3-medium-128k-instruct
language:
- multilingual
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
quantized_by: bartowski
inference:
  parameters:
    temperature: 0.7
widget:
- messages:
  - role: user
    content: Can you provide ways to eat combinations of bananas and dragonfruits?
---

## Llamacpp imatrix Quantizations of Phi-3-medium-128k-instruct

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3561">b3561</a> for quantization.

Original model: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct

All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
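
For reference, an imatrix quant is produced in two steps with llama.cpp's tools: first compute the importance matrix over the calibration dataset, then pass it to the quantizer. A rough sketch, not the exact commands used for this repo (file names are placeholders):

```
llama-imatrix -m Phi-3-medium-128k-instruct-f32.gguf -f calibration_data.txt -o phi3.imatrix
llama-quantize --imatrix phi3.imatrix Phi-3-medium-128k-instruct-f32.gguf Phi-3-medium-128k-instruct-Q4_K_M.gguf Q4_K_M
```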

Run them in [LM Studio](https://lmstudio.ai/)

## Prompt format

```
<|user|> {prompt}<|end|><|assistant|><|end|>
```
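
If you're calling llama.cpp directly rather than going through a chat UI, the same template can be passed on the command line; a minimal sketch with `llama-cli` (the model filename is just an example):

```
llama-cli -m Phi-3-medium-128k-instruct-Q4_K_M.gguf -p "<|user|> Why is the sky blue?<|end|><|assistant|>"
```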

## What's new:

Updated to the latest llama.cpp for RoPE fixes (thanks Niluayuk)

## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Phi-3-medium-128k-instruct-f32.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/tree/main/Phi-3-medium-128k-instruct-f32) | f32 | 55.84GB | true | Full F32 weights. |
| [Phi-3-medium-128k-instruct-Q8_0.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q8_0.gguf) | Q8_0 | 14.83GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Phi-3-medium-128k-instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q6_K_L.gguf) | Q6_K_L | 11.53GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Phi-3-medium-128k-instruct-Q6_K.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q6_K.gguf) | Q6_K | 11.45GB | false | Very high quality, near perfect, *recommended*. |
| [Phi-3-medium-128k-instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q5_K_L.gguf) | Q5_K_L | 10.18GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Phi-3-medium-128k-instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q5_K_M.gguf) | Q5_K_M | 10.07GB | false | High quality, *recommended*. |
| [Phi-3-medium-128k-instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q5_K_S.gguf) | Q5_K_S | 9.62GB | false | High quality, *recommended*. |
| [Phi-3-medium-128k-instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q4_K_L.gguf) | Q4_K_L | 8.69GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Phi-3-medium-128k-instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q4_K_M.gguf) | Q4_K_M | 8.57GB | false | Good quality, default size for most use cases, *recommended*. |
| [Phi-3-medium-128k-instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q4_K_S.gguf) | Q4_K_S | 7.95GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Phi-3-medium-128k-instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q3_K_XL.gguf) | Q3_K_XL | 7.63GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Phi-3-medium-128k-instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q3_K_L.gguf) | Q3_K_L | 7.49GB | false | Lower quality but usable, good for low RAM availability. |
| [Phi-3-medium-128k-instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ4_XS.gguf) | IQ4_XS | 7.47GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Phi-3-medium-128k-instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q3_K_M.gguf) | Q3_K_M | 6.92GB | false | Low quality. |
| [Phi-3-medium-128k-instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ3_M.gguf) | IQ3_M | 6.47GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Phi-3-medium-128k-instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q3_K_S.gguf) | Q3_K_S | 6.06GB | false | Low quality, not recommended. |
| [Phi-3-medium-128k-instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ3_XS.gguf) | IQ3_XS | 5.81GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Phi-3-medium-128k-instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q2_K_L.gguf) | Q2_K_L | 5.30GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Phi-3-medium-128k-instruct-Q2_K.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q2_K.gguf) | Q2_K | 5.14GB | false | Very low quality but surprisingly usable. |
| [Phi-3-medium-128k-instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ2_M.gguf) | IQ2_M | 4.72GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Phi-3-medium-128k-instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ2_XXS.gguf) | IQ2_XXS | 3.72GB | false | Very low quality, uses SOTA techniques to be usable. |

## Embed/output weights

Some of these quants (Q3_K_XL, Q4_K_L, etc.) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
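
For context, this corresponds to overriding the per-tensor types at quantization time; a rough sketch with llama.cpp's `llama-quantize` (file names are placeholders, not the exact commands used here):

```
llama-quantize --imatrix phi3.imatrix --token-embedding-type q8_0 --output-tensor-type q8_0 Phi-3-medium-128k-instruct-f32.gguf Phi-3-medium-128k-instruct-Q4_K_L.gguf Q4_K_M
```

Everything else is quantized as usual; only the token embedding and output tensors are held at Q8_0.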

Some say this improves the quality; others don't notice any difference. If you use these models, PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.

Thanks!

## Credits

Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset

Thank you ZeroWw for the inspiration to experiment with embed/output

## Downloading using huggingface-cli
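
First, make sure you have the CLI installed, then target the specific file you want; a typical invocation (the quant file here is just an example):

```
pip install -U "huggingface_hub[cli]"
huggingface-cli download bartowski/Phi-3-medium-128k-instruct-GGUF --include "Phi-3-medium-128k-instruct-Q4_K_M.gguf" --local-dir ./
```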

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/Phi-3-medium-128k-instruct-GGUF --include "Phi-3-medium-128k-instruct-Q8_0/*" --local-dir ./
```

You can either specify a new local-dir (Phi-3-medium-128k-instruct-Q8_0) or download them all in place (./)
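
Once downloaded, recent llama.cpp builds can load a sharded model by pointing at the first shard; the remaining pieces are picked up automatically. A sketch (the shard count and names here are assumptions):

```
llama-cli -m ./Phi-3-medium-128k-instruct-f32-00001-of-00002.gguf -p "<|user|> Hello<|end|><|assistant|>"
```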

The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski