diff --git a/README.md b/README.md
index 10052bd..2f20df2 100644
--- a/README.md
+++ b/README.md
@@ -1,14 +1,11 @@
---
quantized_by: bartowski
pipeline_tag: text-generation
-tags: []
-base_model: mlabonne/Qwen3-14B-abliterated
-base_model_relation: quantized
---
## Llamacpp imatrix Quantizations of Qwen3-14B-abliterated by mlabonne
-Using llama.cpp release b5200 for quantization.
+Using llama.cpp release b5270 for quantization.
Original model: https://huggingface.co/mlabonne/Qwen3-14B-abliterated
@@ -28,6 +25,10 @@ Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or a
<|im_start|>assistant
```
+## What's new:
+
+Updated model files from mlabonne (work-in-progress model).
+
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
@@ -56,6 +57,7 @@ Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or a
| [Qwen3-14B-abliterated-IQ3_XXS.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-IQ3_XXS.gguf) | IQ3_XXS | 5.94GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Qwen3-14B-abliterated-Q2_K.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q2_K.gguf) | Q2_K | 5.75GB | false | Very low quality but surprisingly usable. |
| [Qwen3-14B-abliterated-IQ2_M.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-IQ2_M.gguf) | IQ2_M | 5.32GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
+| [Qwen3-14B-abliterated-IQ2_S.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-IQ2_S.gguf) | IQ2_S | 4.96GB | false | Low quality, uses SOTA techniques to be usable. |
## Embed/output weights