diff --git a/.gitattributes b/.gitattributes
index a6344aa..f1fb961 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+s1.1-7B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+s1.1-7B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+s1.1-7B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+s1.1-7B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+s1.1-7B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+s1.1-7B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+s1.1-7B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+s1.1-7B-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+s1.1-7B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+s1.1-7B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+s1.1-7B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+s1.1-7B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..d6aa8f5
--- /dev/null
+++ b/README.md
@@ -0,0 +1,85 @@
+---
+base_model: simplescaling/s1.1-7B
+library_name: transformers
+model_name: s1.1-7B
+tags:
+- generated_from_trainer
+- trl
+- sft
+- TensorBlock
+- GGUF
+license: license
+---
+
+## simplescaling/s1.1-7B - GGUF
+
+This repo contains GGUF format model files for [simplescaling/s1.1-7B](https://huggingface.co/simplescaling/s1.1-7B).
+
+The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
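+
+To run these files you need a llama.cpp build at or after the commit above. As a minimal sketch (the quant choice, local path, and prompt below are illustrative, not part of this repo):
+
+```shell
+# generate up to 256 tokens from a one-off prompt; adjust the path to your download location
+./llama-cli -m ./s1.1-7B-Q4_K_M.gguf -p "Why is the sky blue?" -n 256
+```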
+
+## Prompt template
+
+```
+<|im_start|>system
+{system_prompt}<|im_end|>
+<|im_start|>user
+{prompt}<|im_end|>
+<|im_start|>assistant
+```
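+
+For example, with illustrative values substituted for `{system_prompt}` and `{prompt}`, the rendered prompt looks like this:
+
+```
+<|im_start|>system
+You are a helpful assistant.<|im_end|>
+<|im_start|>user
+Why is the sky blue?<|im_end|>
+<|im_start|>assistant
+```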
+
+## Model file specifications
+
+| Filename | Quant type | File Size | Description |
+| -------- | ---------- | --------- | ----------- |
+| [s1.1-7B-Q2_K.gguf](https://huggingface.co/tensorblock/s1.1-7B-GGUF/blob/main/s1.1-7B-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
+| [s1.1-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/s1.1-7B-GGUF/blob/main/s1.1-7B-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
+| [s1.1-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/s1.1-7B-GGUF/blob/main/s1.1-7B-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
+| [s1.1-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/s1.1-7B-GGUF/blob/main/s1.1-7B-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
+| [s1.1-7B-Q4_0.gguf](https://huggingface.co/tensorblock/s1.1-7B-GGUF/blob/main/s1.1-7B-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [s1.1-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/s1.1-7B-GGUF/blob/main/s1.1-7B-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
+| [s1.1-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/s1.1-7B-GGUF/blob/main/s1.1-7B-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
+| [s1.1-7B-Q5_0.gguf](https://huggingface.co/tensorblock/s1.1-7B-GGUF/blob/main/s1.1-7B-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [s1.1-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/s1.1-7B-GGUF/blob/main/s1.1-7B-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
+| [s1.1-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/s1.1-7B-GGUF/blob/main/s1.1-7B-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
+| [s1.1-7B-Q6_K.gguf](https://huggingface.co/tensorblock/s1.1-7B-GGUF/blob/main/s1.1-7B-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
+| [s1.1-7B-Q8_0.gguf](https://huggingface.co/tensorblock/s1.1-7B-GGUF/blob/main/s1.1-7B-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
+
+
+## Downloading instructions
+
+### Command line
+
+First, install the Hugging Face CLI:
+
+```shell
+pip install -U "huggingface_hub[cli]"
+```
+
+Then, download an individual model file to a local directory:
+
+```shell
+huggingface-cli download tensorblock/s1.1-7B-GGUF --include "s1.1-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+```
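+
+Optionally, verify the downloaded file against its Git LFS checksum; the expected SHA-256 below is taken from this repo's LFS pointer for `s1.1-7B-Q2_K.gguf` (this assumes a Unix-like system with `sha256sum` available):
+
+```shell
+# expected: 86bcbf3c2f0a5d08cb22e7e4db4076ece4c3ceb933319e0e26bd033c8183a6d5
+sha256sum MY_LOCAL_DIR/s1.1-7B-Q2_K.gguf
+```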
+
+To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), run:
+
+```shell
+huggingface-cli download tensorblock/s1.1-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+```
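+
+To mirror every quantization at once, omit `--include`; note that this fetches all files in the repo, roughly 58 GB in total per the table above:
+
+```shell
+huggingface-cli download tensorblock/s1.1-7B-GGUF --local-dir MY_LOCAL_DIR
+```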
diff --git a/s1.1-7B-Q2_K.gguf b/s1.1-7B-Q2_K.gguf
new file mode 100644
index 0000000..57f3f2e
--- /dev/null
+++ b/s1.1-7B-Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:86bcbf3c2f0a5d08cb22e7e4db4076ece4c3ceb933319e0e26bd033c8183a6d5
+size 3015940416
diff --git a/s1.1-7B-Q3_K_L.gguf b/s1.1-7B-Q3_K_L.gguf
new file mode 100644
index 0000000..d919c2c
--- /dev/null
+++ b/s1.1-7B-Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7f15b59d9293e30d21bf03b6ad35f7c58d483f3d42aa35da6c6be0d40488c7c
+size 4088459584
diff --git a/s1.1-7B-Q3_K_M.gguf b/s1.1-7B-Q3_K_M.gguf
new file mode 100644
index 0000000..2363c41
--- /dev/null
+++ b/s1.1-7B-Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c1cf0b2cf357a84dda65465f234e212058ed6376211ca8cc35beb63f6829b61
+size 3808391488
diff --git a/s1.1-7B-Q3_K_S.gguf b/s1.1-7B-Q3_K_S.gguf
new file mode 100644
index 0000000..44873a2
--- /dev/null
+++ b/s1.1-7B-Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff0627631951ce14dac55fbe414d798bdab5cbbf455206ee1861aa748c817d12
+size 3492368704
diff --git a/s1.1-7B-Q4_0.gguf b/s1.1-7B-Q4_0.gguf
new file mode 100644
index 0000000..768535d
--- /dev/null
+++ b/s1.1-7B-Q4_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df62d572470ad04f167f81e5482e14b53d64d2197b49de944be67a6d9429bdee
+size 4431391040
diff --git a/s1.1-7B-Q4_K_M.gguf b/s1.1-7B-Q4_K_M.gguf
new file mode 100644
index 0000000..ad4ad81
--- /dev/null
+++ b/s1.1-7B-Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:607f394dd4fef9d39a00abe4b68427759de9c80e4cd33a23b73f100a9f5c10a0
+size 4683073856
diff --git a/s1.1-7B-Q4_K_S.gguf b/s1.1-7B-Q4_K_S.gguf
new file mode 100644
index 0000000..a46218b
--- /dev/null
+++ b/s1.1-7B-Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f09015a334a7749a207e336748e596f3e5955a7cc88607981ea12734554d06b
+size 4457769280
diff --git a/s1.1-7B-Q5_0.gguf b/s1.1-7B-Q5_0.gguf
new file mode 100644
index 0000000..050e8f8
--- /dev/null
+++ b/s1.1-7B-Q5_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5495176a9ed66a76147573c68ed4e2fe392ff70ea9ace5e8c0a4f9cd77141bea
+size 5315176768
diff --git a/s1.1-7B-Q5_K_M.gguf b/s1.1-7B-Q5_K_M.gguf
new file mode 100644
index 0000000..a74e298
--- /dev/null
+++ b/s1.1-7B-Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c60e8d8783b57a774eaec718ef2f9ce57019afbb72411f619222b57873e0cca1
+size 5444831552
diff --git a/s1.1-7B-Q5_K_S.gguf b/s1.1-7B-Q5_K_S.gguf
new file mode 100644
index 0000000..27bfc03
--- /dev/null
+++ b/s1.1-7B-Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6340d0a5c2cf8433ffa584f59318e9bac3b2e458b65c7192e3d0b521e5400ca9
+size 5315176768
diff --git a/s1.1-7B-Q6_K.gguf b/s1.1-7B-Q6_K.gguf
new file mode 100644
index 0000000..79a111f
--- /dev/null
+++ b/s1.1-7B-Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71cbc23b3d5159f7c765f4cd0838707b5df9d23c11b4443066a03c623aa7d3f8
+size 6254199104
diff --git a/s1.1-7B-Q8_0.gguf b/s1.1-7B-Q8_0.gguf
new file mode 100644
index 0000000..78ac0e0
--- /dev/null
+++ b/s1.1-7B-Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:723d3d713b0e8a06efa2669ffe009414064675c65b3c5f1252729dd0a59281b0
+size 8098525504