Initialize the project; model provided by the ModelHub XC community

Model: mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF
Source: Original Platform
Commit 7e109c0b3c by ModelHub XC, 2026-04-10 18:37:55 +08:00
27 changed files with 223 additions and 0 deletions

.gitattributes (vendored, new file, 60 lines)

@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Orion-Qwen3-1.7B-SFT-v2603.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
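
Each line above routes matching paths through Git LFS instead of storing them directly in Git. A minimal Python sketch of how such patterns select files — `fnmatch` only approximates Git's wildmatch (e.g. `saved_model/**/*` behaves differently and is omitted), and `*.gguf` stands in for the individual `.gguf` filenames listed above:

```python
from fnmatch import fnmatch

# Illustrative subset of the LFS patterns above. Note: the commit's
# .gitattributes lists each .gguf file by name; "*.gguf" is shorthand here.
LFS_PATTERNS = ["*.safetensors", "*.bin", "*tfevents*", "*.gguf"]

def tracked_by_lfs(path: str) -> bool:
    """Return True if `path` matches any LFS-tracked pattern."""
    return any(fnmatch(path, pat) for pat in LFS_PATTERNS)

print(tracked_by_lfs("Orion-Qwen3-1.7B-SFT-v2603.i1-Q6_K.gguf"))  # True
print(tracked_by_lfs("README.md"))                                # False
```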

Each of the 25 model payloads added in this commit is a new three-line Git LFS pointer file (`@@ -0,0 +1,3 @@`), all with `version https://git-lfs.github.com/spec/v1`; their `oid` and `size` fields, in commit order:

| # | oid (sha256) | size (bytes) |
|--:|:-------------|-------------:|
| 1 | ffb4da8039e822baf3cbc214679f274b5d2926ac0f59fb3f3235e868e72e0444 | 543794176 |
| 2 | ef6d4f26e5aa564c32c3fd88038cca246087deda346b20afd2402395168c9a7a | 515777536 |
| 3 | 669b0e4b5a896e88719b2892dcc021a7d19841f38cca9343a59575dd26aeaa61 | 695182336 |
| 4 | 2e7aeb2be9f895ff5c3ec08ed970ccd9967c93856f52aca947b4d6b0e138e194 | 657826816 |
| 5 | b00f2920b4aa520b01704244ea6c851ec15529e0588bc6ba81a2c6c5ce8c8acd | 631514112 |
| 6 | 26295f4ccac6d8b2a4097246cb70414fb6b4431f789a14f6e442359349ac3d43 | 590488576 |
| 7 | f131b998b0510622b76043bca3362ed13ec1bd4e9b0166f0f3f588030d9836e9 | 895663104 |
| 8 | bf6bd3f36b3e51c20c4180cd685f371978995a81ffc2b926a175f89b496d5a80 | 867253248 |
| 9 | 39861c107d551ff6500f67cda2cd7ed83f2fee7e7109a3d4d284fc0691ae54c5 | 834223104 |
| 10 | aa1fc001484784856fe850e64fecc16111cc3a4fbc1dd0f72d4c5cccd7de1c42 | 754361344 |
| 11 | 11af67277ca0bd759c8df416d5ca1dfb4db761a3884afe473663cabea8539f55 | 1054424064 |
| 12 | 42af91eef7f71c5594426489398ee4cb53d9d73b0ff4691e40e2bf06914e20e9 | 1010383872 |
| 13 | d9ab8d48a2a063a3e1312ba34e91e14b97739d51f909a45c4cb2b371e39ba0c0 | 777796608 |
| 14 | 0c4543206723789c9a93e129ba399ea963178431b14fee22cbfbb6030e45d5e3 | 732969984 |
| 15 | b410f45362fee4c6c5390f89cfbcde1c8f294d6c4871d5311a1b100d366dd752 | 1003502592 |
| 16 | 243f252f3a0e748d262e1b84dd30fb0cdb4a24a0b43fb1bd3c0f6d9674d58800 | 939539456 |
| 17 | 5ef7bc575b680e75925f28d07e516074b746a6c3194cf63271d915c15ed8e0ca | 867253248 |
| 18 | 3f93e73bd1f4bee13009f0b9ef3962b5bddf8cf799d47b66f74d1b314301c8b4 | 1056783360 |
| 19 | 5bf3e772edfb79f863dd22320afba275177d7ab658a85375d322aa0396299e9e | 1142504448 |
| 20 | 40cce9c29d9b0244697d855400e2ddacda2ad84ccd705a44c1543f83ebbbe3e8 | 1107409920 |
| 21 | 660ecd8252cc35b5bf73eef1a7e9fbbaa423086a848246ceefe56b3210282984 | 1060191232 |
| 22 | 87747d5e5d1bdb4db5d42097fbf8b775a21ed52b5ee2f4627f85f427335ef4e2 | 1257880576 |
| 23 | 04b0ade72acc2ac91de561eb5399194ca15fed702d1f023ae919c740f462310c | 1230584832 |
| 24 | fb581d8c2972e659c9ae512c051541cc9c5e0e3c6dd13d898c45b6a8a015665b | 1417755648 |
| 25 | 5146044ec20c53d2103d4d764a44040183ccdecaa1478f3ddceffa571f873366 | 2094560 |
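
Each added file is stored as a Git LFS pointer: a small text stub whose `version`, `oid`, and `size` fields address the real payload in LFS storage. A minimal sketch of parsing one pointer, using the last pointer's values from above:

```python
def parse_lfs_pointer(text: str) -> dict[str, str]:
    """Split a Git LFS pointer file into its `key value` fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:5146044ec20c53d2103d4d764a44040183ccdecaa1478f3ddceffa571f873366\n"
    "size 2094560\n"
)
info = parse_lfs_pointer(pointer)
print(info["oid"])         # sha256-prefixed content address
print(int(info["size"]))   # 2094560 (payload size in bytes)
```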

README.md (new file, 88 lines)

@@ -0,0 +1,88 @@
---
base_model: 3tic/Orion-Qwen3-1.7B-SFT-v2603
language:
- ja
- zh
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- lightnovel
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/3tic/Orion-Qwen3-1.7B-SFT-v2603
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
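
When a quant is split into multiple parts (none are in this repo, but it is common for larger models), joining them is plain byte concatenation in part order. A hypothetical sketch — the part-naming convention shown in the comment is illustrative:

```python
import shutil
from pathlib import Path

def concat_parts(parts: list[Path], out_path: Path) -> None:
    """Join split GGUF parts into one file by concatenating bytes in order."""
    with open(out_path, "wb") as out:
        # Sorting restores order for names like model.gguf.part1of2, .part2of2
        for part in sorted(parts):
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)
```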
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-IQ1_S.gguf) | i1-IQ1_S | 0.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-Q2_K.gguf) | i1-Q2_K | 0.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-IQ3_S.gguf) | i1-IQ3_S | 1.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-IQ3_M.gguf) | i1-IQ3_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-Q4_0.gguf) | i1-Q4_0 | 1.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-Q4_1.gguf) | i1-Q4_1 | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/Orion-Qwen3-1.7B-SFT-v2603.i1-Q6_K.gguf) | i1-Q6_K | 1.5 | practically like static Q6_K |
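
The download links in the table all follow one naming scheme; a small sketch that reconstructs a quant's URL from its type:

```python
BASE = ("https://huggingface.co/mradermacher/"
        "Orion-Qwen3-1.7B-SFT-v2603-i1-GGUF/resolve/main/")

def quant_url(quant: str) -> str:
    """Build the download URL for a quant type, per the table's naming scheme."""
    return f"{BASE}Orion-Qwen3-1.7B-SFT-v2603.{quant}.gguf"

print(quant_url("i1-Q4_K_M"))  # the recommended quant's link from the table
```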
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->