diff --git a/.gitattributes b/.gitattributes
index a6344aa..7eeeb81 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+llama-3-8b-gpt-4o-ru1.0-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-8b-gpt-4o-ru1.0-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-8b-gpt-4o-ru1.0-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-8b-gpt-4o-ru1.0-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-8b-gpt-4o-ru1.0-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-8b-gpt-4o-ru1.0-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-8b-gpt-4o-ru1.0-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-8b-gpt-4o-ru1.0-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-8b-gpt-4o-ru1.0-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-8b-gpt-4o-ru1.0-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-8b-gpt-4o-ru1.0-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-8b-gpt-4o-ru1.0-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..9e237f8
--- /dev/null
+++ b/README.md
@@ -0,0 +1,86 @@
+---
+license: llama3
+base_model: ruslandev/llama-3-8b-gpt-4o-ru1.0
+tags:
+- generated_from_trainer
+- TensorBlock
+- GGUF
+datasets:
+- ruslandev/tagengo-rus-gpt-4o
+model-index:
+- name: home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/output_llama3_8b_gpt_4o_ru
+ results: []
+---
+
+## ruslandev/llama-3-8b-gpt-4o-ru1.0 - GGUF
+
+This repo contains GGUF format model files for [ruslandev/llama-3-8b-gpt-4o-ru1.0](https://huggingface.co/ruslandev/llama-3-8b-gpt-4o-ru1.0).
+
+The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
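+To confirm which build you are running, recent llama.cpp binaries report their build number and commit with a version flag (a sketch; the binary name and flag assume a build near the commit above):
+
+```shell
+# Prints the llama.cpp build number and commit hash.
+./llama-cli --version
+```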
+
+
+## Prompt template
+
+```
+<|begin_of_text|><|start_header_id|>system<|end_header_id|>
+
+{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
+
+{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+```
+
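+To try the template end to end, here is a minimal sketch using llama.cpp's `llama-cli` (the flags and the sample system/user prompts are illustrative assumptions; adjust them for your build):
+
+```shell
+# $'...' makes the shell expand the \n escapes required by the template.
+./llama-cli -m llama-3-8b-gpt-4o-ru1.0-Q4_K_M.gguf -n 256 \
+  -p $'<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nПривет, расскажи о себе.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n'
+```
+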
+## Model file specification
+
+| Filename | Quant type | File Size | Description |
+| -------- | ---------- | --------- | ----------- |
+| [llama-3-8b-gpt-4o-ru1.0-Q2_K.gguf](https://huggingface.co/tensorblock/llama-3-8b-gpt-4o-ru1.0-GGUF/blob/main/llama-3-8b-gpt-4o-ru1.0-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
+| [llama-3-8b-gpt-4o-ru1.0-Q3_K_S.gguf](https://huggingface.co/tensorblock/llama-3-8b-gpt-4o-ru1.0-GGUF/blob/main/llama-3-8b-gpt-4o-ru1.0-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
+| [llama-3-8b-gpt-4o-ru1.0-Q3_K_M.gguf](https://huggingface.co/tensorblock/llama-3-8b-gpt-4o-ru1.0-GGUF/blob/main/llama-3-8b-gpt-4o-ru1.0-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
+| [llama-3-8b-gpt-4o-ru1.0-Q3_K_L.gguf](https://huggingface.co/tensorblock/llama-3-8b-gpt-4o-ru1.0-GGUF/blob/main/llama-3-8b-gpt-4o-ru1.0-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
+| [llama-3-8b-gpt-4o-ru1.0-Q4_0.gguf](https://huggingface.co/tensorblock/llama-3-8b-gpt-4o-ru1.0-GGUF/blob/main/llama-3-8b-gpt-4o-ru1.0-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [llama-3-8b-gpt-4o-ru1.0-Q4_K_S.gguf](https://huggingface.co/tensorblock/llama-3-8b-gpt-4o-ru1.0-GGUF/blob/main/llama-3-8b-gpt-4o-ru1.0-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
+| [llama-3-8b-gpt-4o-ru1.0-Q4_K_M.gguf](https://huggingface.co/tensorblock/llama-3-8b-gpt-4o-ru1.0-GGUF/blob/main/llama-3-8b-gpt-4o-ru1.0-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
+| [llama-3-8b-gpt-4o-ru1.0-Q5_0.gguf](https://huggingface.co/tensorblock/llama-3-8b-gpt-4o-ru1.0-GGUF/blob/main/llama-3-8b-gpt-4o-ru1.0-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [llama-3-8b-gpt-4o-ru1.0-Q5_K_S.gguf](https://huggingface.co/tensorblock/llama-3-8b-gpt-4o-ru1.0-GGUF/blob/main/llama-3-8b-gpt-4o-ru1.0-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
+| [llama-3-8b-gpt-4o-ru1.0-Q5_K_M.gguf](https://huggingface.co/tensorblock/llama-3-8b-gpt-4o-ru1.0-GGUF/blob/main/llama-3-8b-gpt-4o-ru1.0-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
+| [llama-3-8b-gpt-4o-ru1.0-Q6_K.gguf](https://huggingface.co/tensorblock/llama-3-8b-gpt-4o-ru1.0-GGUF/blob/main/llama-3-8b-gpt-4o-ru1.0-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
+| [llama-3-8b-gpt-4o-ru1.0-Q8_0.gguf](https://huggingface.co/tensorblock/llama-3-8b-gpt-4o-ru1.0-GGUF/blob/main/llama-3-8b-gpt-4o-ru1.0-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
+
+
+## Downloading instructions
+
+### Command line
+
+First, install the Hugging Face Hub CLI:
+
+```shell
+pip install -U "huggingface_hub[cli]"
+```
+
+Then, download an individual model file to a local directory:
+
+```shell
+huggingface-cli download tensorblock/llama-3-8b-gpt-4o-ru1.0-GGUF --include "llama-3-8b-gpt-4o-ru1.0-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+```
+
+To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can run:
+
+```shell
+huggingface-cli download tensorblock/llama-3-8b-gpt-4o-ru1.0-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+```
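+
+### Verifying a download (optional)
+
+Every `.gguf` file in this repo is stored via Git LFS, which records the file's SHA-256 in its pointer. As a quick integrity check (a sketch using coreutils `sha256sum`), compare the digest against the `oid sha256:` value for that file:
+
+```shell
+# The digest should match the 'oid sha256:' line in the file's LFS pointer.
+sha256sum MY_LOCAL_DIR/llama-3-8b-gpt-4o-ru1.0-Q2_K.gguf
+```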
diff --git a/llama-3-8b-gpt-4o-ru1.0-Q2_K.gguf b/llama-3-8b-gpt-4o-ru1.0-Q2_K.gguf
new file mode 100644
index 0000000..032ea6e
--- /dev/null
+++ b/llama-3-8b-gpt-4o-ru1.0-Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ef06e59ca3d4f66d3c5e45e4c9dd53fb60e8395843141adef3f70166ce46e8d
+size 3179131904
diff --git a/llama-3-8b-gpt-4o-ru1.0-Q3_K_L.gguf b/llama-3-8b-gpt-4o-ru1.0-Q3_K_L.gguf
new file mode 100644
index 0000000..436a476
--- /dev/null
+++ b/llama-3-8b-gpt-4o-ru1.0-Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:340d1267763cab5f6bf59e7c4cc2125f4ccdad9ca84bdeb94fe303035e82667d
+size 4321956864
diff --git a/llama-3-8b-gpt-4o-ru1.0-Q3_K_M.gguf b/llama-3-8b-gpt-4o-ru1.0-Q3_K_M.gguf
new file mode 100644
index 0000000..80873a0
--- /dev/null
+++ b/llama-3-8b-gpt-4o-ru1.0-Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:be325cd993fe561f130b7c970b4625c6f95290259d1679276d8c8ab895dc93b9
+size 4018918400
diff --git a/llama-3-8b-gpt-4o-ru1.0-Q3_K_S.gguf b/llama-3-8b-gpt-4o-ru1.0-Q3_K_S.gguf
new file mode 100644
index 0000000..6b0bef4
--- /dev/null
+++ b/llama-3-8b-gpt-4o-ru1.0-Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:01fda71b46a1734f655131a741793f9756a9e90744c269321d8d40f6b44ac7a2
+size 3664499712
diff --git a/llama-3-8b-gpt-4o-ru1.0-Q4_0.gguf b/llama-3-8b-gpt-4o-ru1.0-Q4_0.gguf
new file mode 100644
index 0000000..9a46cf5
--- /dev/null
+++ b/llama-3-8b-gpt-4o-ru1.0-Q4_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:675ae3a1898d8c61a0fd5aa006576ead250de72e0832dda060360a1fb018ab49
+size 4661212160
diff --git a/llama-3-8b-gpt-4o-ru1.0-Q4_K_M.gguf b/llama-3-8b-gpt-4o-ru1.0-Q4_K_M.gguf
new file mode 100644
index 0000000..b027c07
--- /dev/null
+++ b/llama-3-8b-gpt-4o-ru1.0-Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07c9c0b8dcb94184774a13d5b71ec1ed3a50827a6be9f1e325167e2afacad1b0
+size 4920734720
diff --git a/llama-3-8b-gpt-4o-ru1.0-Q4_K_S.gguf b/llama-3-8b-gpt-4o-ru1.0-Q4_K_S.gguf
new file mode 100644
index 0000000..df5db33
--- /dev/null
+++ b/llama-3-8b-gpt-4o-ru1.0-Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1dcd4803fd7615bd7c688cb13f59277844116d144dffa9b853e78c1f710914a0
+size 4692669440
diff --git a/llama-3-8b-gpt-4o-ru1.0-Q5_0.gguf b/llama-3-8b-gpt-4o-ru1.0-Q5_0.gguf
new file mode 100644
index 0000000..12e7f20
--- /dev/null
+++ b/llama-3-8b-gpt-4o-ru1.0-Q5_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:727a5a8e7e38c33539215984ce43ca7d8175ef0605b622767c326037d127d7cd
+size 5599294464
diff --git a/llama-3-8b-gpt-4o-ru1.0-Q5_K_M.gguf b/llama-3-8b-gpt-4o-ru1.0-Q5_K_M.gguf
new file mode 100644
index 0000000..5d81947
--- /dev/null
+++ b/llama-3-8b-gpt-4o-ru1.0-Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d5e6e8dedc1873c0e061a0b914e17f23892958e305e29d6c4c216491cb6dc2f7
+size 5732987904
diff --git a/llama-3-8b-gpt-4o-ru1.0-Q5_K_S.gguf b/llama-3-8b-gpt-4o-ru1.0-Q5_K_S.gguf
new file mode 100644
index 0000000..3e23ef2
--- /dev/null
+++ b/llama-3-8b-gpt-4o-ru1.0-Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:357197ce6877199c780661e5b383760b4fbb450751265edec4acb8ea18792af5
+size 5599294464
diff --git a/llama-3-8b-gpt-4o-ru1.0-Q6_K.gguf b/llama-3-8b-gpt-4o-ru1.0-Q6_K.gguf
new file mode 100644
index 0000000..3da761d
--- /dev/null
+++ b/llama-3-8b-gpt-4o-ru1.0-Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d8fb37f6da24f82689ec2a5361ebef5c7eadfffb1582861c7967cde6fa9e06a
+size 6596006912
diff --git a/llama-3-8b-gpt-4o-ru1.0-Q8_0.gguf b/llama-3-8b-gpt-4o-ru1.0-Q8_0.gguf
new file mode 100644
index 0000000..72b5098
--- /dev/null
+++ b/llama-3-8b-gpt-4o-ru1.0-Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4954d6825d8a97c8a32897c0207a79caa93877741395dc672c187f50b05ad17d
+size 8540771328