Initialize the project; model provided by the ModelHub XC community

Model: mradermacher/Viper-Coder-v1.1-GGUF
Source: Original Platform
ModelHub XC
2026-04-23 21:15:10 +08:00
commit af53bf624e
13 changed files with 148 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,46 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Viper-Coder-v1.1.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Viper-Coder-v1.1.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Viper-Coder-v1.1.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Viper-Coder-v1.1.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Viper-Coder-v1.1.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Viper-Coder-v1.1.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Viper-Coder-v1.1.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Viper-Coder-v1.1.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Viper-Coder-v1.1.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Viper-Coder-v1.1.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Viper-Coder-v1.1.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
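The entries above are standard `.gitattributes` patterns that route large files through Git LFS. As a rough illustration (not part of this commit), Python's `fnmatch` can approximate which paths such patterns cover; note that `fnmatch` does not reproduce git's `**` semantics exactly:

```python
from fnmatch import fnmatch

# A few patterns copied from the .gitattributes above
# (exact-name entries simply match themselves).
lfs_patterns = ["*.safetensors", "*.bin", "Viper-Coder-v1.1.Q4_K_M.gguf"]

def tracked_by_lfs(path: str) -> bool:
    """Rough check: does any LFS pattern match this path?

    fnmatch only approximates gitattributes globbing; it does not
    handle patterns like `saved_model/**/*` the way git does.
    """
    return any(fnmatch(path, p) for p in lfs_patterns)

print(tracked_by_lfs("model.safetensors"))             # True
print(tracked_by_lfs("Viper-Coder-v1.1.Q4_K_M.gguf"))  # True
print(tracked_by_lfs("README.md"))                     # False
```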

README.md Normal file

@@ -0,0 +1,69 @@
---
base_model: prithivMLmods/Viper-Coder-v1.1
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- trl
- coder
- v1.1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/prithivMLmods/Viper-Coder-v1.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Viper-Coder-v1.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
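The quants in this repo ship as single files, but some GGUF repos split large files into parts; concatenation is a plain byte-level join. A minimal, hypothetical Python sketch (the filenames are illustrative, and parts must be joined in their numbered order):

```python
import shutil

def concat_parts(part_paths, out_path):
    """Byte-concatenate split GGUF parts, in order, into one file."""
    with open(out_path, "wb") as out:
        for part in part_paths:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)

# Hypothetical usage (order matters):
# concat_parts(["model.gguf.part1of2", "model.gguf.part2of2"], "model.gguf")
```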
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.1-GGUF/resolve/main/Viper-Coder-v1.1.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.1-GGUF/resolve/main/Viper-Coder-v1.1.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.1-GGUF/resolve/main/Viper-Coder-v1.1.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.1-GGUF/resolve/main/Viper-Coder-v1.1.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.1-GGUF/resolve/main/Viper-Coder-v1.1.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.1-GGUF/resolve/main/Viper-Coder-v1.1.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.1-GGUF/resolve/main/Viper-Coder-v1.1.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.1-GGUF/resolve/main/Viper-Coder-v1.1.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.1-GGUF/resolve/main/Viper-Coder-v1.1.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.1-GGUF/resolve/main/Viper-Coder-v1.1.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Viper-Coder-v1.1-GGUF/resolve/main/Viper-Coder-v1.1.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c4fcf42d6406289ef2de08ac7ba438e26d3a95d6bcf78f4fde66eea23bec3273
size 8186196192
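The three-line hunks in this commit are Git LFS pointer files: the repo stores only `version`, `oid`, and `size`, while the actual weights live in LFS storage. A small sketch of parsing one such pointer, using the hunk above:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse the key/value lines of a Git LFS pointer file."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer content taken verbatim from the hunk above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:c4fcf42d6406289ef2de08ac7ba438e26d3a95d6bcf78f4fde66eea23bec3273
size 8186196192
"""
info = parse_lfs_pointer(pointer)
print(info["size"])                        # 8186196192
print(round(int(info["size"]) / 1e9, 2))   # 8.19 (GB)
```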


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:41d874176c22a6a2a05f14efb150af020bd0b04d814bbd4d778054dad5ebb998
size 5770498272


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ad0a300464c865346c5e7b03b7046aae81dd687acf587e74254f4c5965f1b49
size 7924768992


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3da893658bc29adc91043e0fa80a2453e97573092d9219a165287aaaf80af750
size 7339204832


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e2d718d18f8bdefa125067e966e9833f86a9271f058e94a473f9b50d641e849c
size 6659596512


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5b96aa2701003456056a76a8ba0c0a103927d63b0b796afccd16c98435917019
size 8988111072


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:17dddd08102684dd7088b4e750ae3e171b8cd8a96f9ad9d1d924f5310dfe88b7
size 8573432032


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0b390d14753636e0652423702d303b9c0a52813fbde86d6a10ac07f058278548
size 10508873952


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3f8dd7359b175ca941ad3cd7ebf7a1a63bc7a4ca046f1e3b2632456275677ee0
size 10266554592


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0248e5b983c2adbe5d2c9260819e91fc2cb08da0f49a6e4e191eec48c569d1a4
size 12124684512


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9baa7d54522b8923b1c4ff6902af8454d4fb7364bb81ec525bb208025a12d24a
size 15701598432