Initialize project; model provided by the ModelHub XC community

Model: mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF
Source: Original Platform
ModelHub XC
2026-05-08 16:35:04 +08:00
commit 789e631591
14 changed files with 146 additions and 0 deletions

.gitattributes vendored Normal file (47 lines added)

@@ -0,0 +1,47 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-HardLambda0.1-220.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-HardLambda0.1-220.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-HardLambda0.1-220.f16.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-HardLambda0.1-220.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-HardLambda0.1-220.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-HardLambda0.1-220.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-HardLambda0.1-220.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-HardLambda0.1-220.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-HardLambda0.1-220.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-HardLambda0.1-220.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-HardLambda0.1-220.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-HardLambda0.1-220.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
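The rules above route every matching file through Git LFS, so the repository itself stores only small pointer files (shown below) rather than the multi-gigabyte GGUF blobs. As a rough illustration only, not part of this commit, here is how that pattern matching can be approximated in Python (fnmatch is only an approximation of Git's wildmatch semantics, and the pattern list is a subset copied from the file above):

```python
# Rough sketch: approximate which paths the .gitattributes rules above
# would hand to Git LFS. fnmatch differs from Git's wildmatch in edge
# cases (e.g. "saved_model/**/*"), so this is illustrative only.
from fnmatch import fnmatch

LFS_PATTERNS = [
    "*.bin",
    "*.safetensors",
    "*.gz",
    "Qwen2.5-7B-Instruct-HardLambda0.1-220.Q4_K_M.gguf",
]

def tracked_by_lfs(path: str) -> bool:
    """True if any LFS pattern matches the given path."""
    return any(fnmatch(path, pattern) for pattern in LFS_PATTERNS)

print(tracked_by_lfs("Qwen2.5-7B-Instruct-HardLambda0.1-220.Q4_K_M.gguf"))  # True
print(tracked_by_lfs("README.md"))  # False: stored as a regular Git blob
```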


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:04dc2e2c745af6221077bb7cda48f463fd2e45ec4e064ff59ffed801b0f9a281
size 4250298816


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d8da9535bb490383d05d32ba698aa62cda50467d334bab606b93d09df116829
size 3015940544


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:28ebc2c11c5010d1387a0fecc8d20107b307e9edd93063c4c09c14cd96947e3c
size 4088459712


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9f4e335a12965d93f81e40d1829037210bb0bfec1d65c39c2f8f43bba8c7955b
size 3808391616


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf26fec9b8cbd017ff6b37e558c52991234ebb92313499efdd61169e6db2298f
size 3492368832


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a133fbdd590839f081a71dce071473a2c35da1d357e46511c0680134fec0aa00
size 4683073984


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8faf72e30626a00085031bc3d3b507276390160763e0e5d20daedf2cf1f544b8
size 4457769408


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d317b4a456d2277ee1c96afa46ac75e58127585a1e28084d98d968e4e08c722
size 5444831680


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3844e211b21ae920ccc3fec0ae240540257f9a2b76851f8bf0ba68d847542674
size 5315176896


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fd4a0553be2cd4ee49f0d035bd1b509b9768bb889661773e0ecebc70bb8a2a11
size 6254199232


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1f749c457403deea23c70ffd965f592f1c22fa897d4cc3c3e4773363b06f48e8
size 8098525632


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:86da2d21a3211931a526aec6e27ca9623aa605b91f9be4249a2a0be4293e7e35
size 15237853632
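Every blob added in this commit is represented by a three-line LFS pointer like the ones above: the spec version, a sha256 oid, and the byte size. A minimal sketch of parsing such a pointer and verifying a locally downloaded blob against it (the local file name is a placeholder, not a file in this repo):

```python
# Minimal sketch: parse a Git LFS pointer (spec v1) and check a downloaded
# blob against its recorded sha256 oid and byte size.
import hashlib
from pathlib import Path

def parse_lfs_pointer(text: str) -> dict:
    """Split the 'key value' lines of a Git LFS pointer file."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "sha256": fields["oid"].split(":", 1)[1],
        "size": int(fields["size"]),
    }

def verify_blob(blob_path: Path, pointer: dict) -> bool:
    """Stream the file so even the ~15 GB f16 blob fits in constant memory."""
    digest, size = hashlib.sha256(), 0
    with open(blob_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
            size += len(chunk)
    return size == pointer["size"] and digest.hexdigest() == pointer["sha256"]

# Pointer contents copied from the first block above:
pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:04dc2e2c745af6221077bb7cda48f463fd2e45ec4e064ff59ffed801b0f9a281\n"
    "size 4250298816\n"
)
# verify_blob(Path("some-local-download.gguf"), pointer)  # placeholder path
```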

README.md Normal file (63 lines added)

@@ -0,0 +1,63 @@
---
base_model: rd211/Qwen2.5-7B-Instruct-HardLambda0.1-220
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rd211/Qwen2.5-7B-Instruct-HardLambda0.1-220
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
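As a concrete starting point, here is a minimal sketch of pulling a single quant from this repository with `huggingface_hub` (an assumption about your environment, not something this repo requires; the file name is one of the entries in the table below):

```python
# Minimal sketch: download one quant from this repo into the local HF cache.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF",
    filename="Qwen2.5-7B-Instruct-HardLambda0.1-220.Q4_K_M.gguf",
)
print(gguf_path)  # local cache path of the downloaded quant
```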
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-HardLambda0.1-220-GGUF/resolve/main/Qwen2.5-7B-Instruct-HardLambda0.1-220.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
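Once a quant is on disk, it can be loaded with any GGUF-capable runtime. A minimal sketch with the `llama-cpp-python` bindings (again an assumption about your setup, with illustrative rather than tuned parameters):

```python
# Minimal sketch: load a local GGUF quant and run one chat completion.
from llama_cpp import Llama

# Local path, e.g. the Q4_K_M file from the download sketch above.
gguf_path = "Qwen2.5-7B-Instruct-HardLambda0.1-220.Q4_K_M.gguf"

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```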
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->