Initialize project; model provided by the ModelHub XC community

Model: mradermacher/HamzahLMV2-1B-i1-GGUF
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-14 13:36:05 +08:00
commit b50b354cbc
27 changed files with 233 additions and 0 deletions

60
.gitattributes vendored Normal file

@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
imatrix.dat filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
HamzahLMV2-1B.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f86f65c28f4c3ffc4b264cb9cd684d4e02f7f32f8367d1d47c25351f0306d10e
size 499796352
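
The blocks above and below are Git LFS pointer files (spec v1): three `key value` lines naming the spec version, the content hash, and the byte size of the real file stored in LFS. A minimal sketch of parsing one such pointer (the sample text mirrors the first pointer in this commit; the helper name is our own, not part of any LFS tooling):

```python
# Parse a Git LFS pointer file (spec v1) into its fields.
# Sample pointer copied from the first LFS blob in this commit.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f86f65c28f4c3ffc4b264cb9cd684d4e02f7f32f8367d1d47c25351f0306d10e
size 499796352
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line into a dict entry; 'size' becomes an int."""
    fields = {}
    for line in text.strip().splitlines():
        key, value = line.split(" ", 1)
        fields[key] = int(value) if key == "size" else value
    return fields

info = parse_lfs_pointer(pointer)
print(info["size"])                     # 499796352
algo, digest = info["oid"].split(":", 1)
print(algo)                             # sha256
```

The `size` field is what the commit listing shows next to each quant, so summing it across pointers gives the total LFS payload of the repository.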


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:01a24944fc1681f5b2fd57674927876879fcbd2c9de080c57ca827b4104cf99e
size 479742336


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2221a81c0eaec9771f506d7a834d900d5b743d99f343690b2e12ab7135d5ea5a
size 628316544


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:640d623bf8536370c96e8e9810b6fadd86091f45ae9a0e2e87ac56785be1a2d7
size 601577856


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:683c1eb30cddfa850823b7774133528d65c3d3c8754941a04543c783757290a2
size 562055552


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:91267652a4b692f121ff5db0cdd95b67884f06dcf140d1bd5c524974894366a2
size 533219712


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2518215f1e96475d7d618f69f0b6f05a55a0e8f938b181f9816f576ae9e9b8c3
size 770156928


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:38fe12f4b1b86ca9993678938cac866d924816b4c690bed5503047f992133743
size 756787584


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1fb884e8b79c2062fbf7b0a835b8fb16a30da439938989dac7a9bac30d8cce93
size 733981056


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:79ca4b0e8ab1eee04f33972cdab9cad40db2d1a7ea17f7fef5fc172cfd233804
size 674978176


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9627b8b22d2f13f6aa789c286cbc0a8d657efc8cda06a94c584b0536aaf9325a
size 920779136


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eb2cf1f382bb6998fe8cebd900e72252ef7fa2db99199c52de64b3a62bdc6e2d
size 882686336


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:49851bb412c14111783a7a40829e44b3b0a894175a826df0d35f3f949bd3096f
size 667064704


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:318c9949d8dd6b11928a569111bb70e635f500afbdbce0b3da5b62cd3c511654
size 640850304


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:029996e8fafec5e564588d60e5f2992869adb299d4d0eaf7d3c45e0def2e0df4
size 845392256


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cee2c5ab4c9e4601ea35e42cbbcc9f3f6f586f5657cdfb182cb3908c0b7fcb23
size 803711360


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fff43702c93055ae1beac0ee8e1897b0fcf25710507778f9774e4e74d1cdc7ac
size 754559360


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0283132d4f9c930fc78248ba0d0f16913a3a1ce5d235bf15d002f1d84ab1acdd
size 920779136


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8682a0b70f0fccd1272dd2299c271ad224648531c4d7632a2f0cefeb63e39882
size 995916160


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0efb57625d729b5478e56d31b5a41c8a657ba68a0b26a93f597455be715eb5fd
size 955447680


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:be3802beb8b467e27fc4251d6fe21908afab8473c4e04e1ce213cebf524d94da
size 923400576


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:43073dd91805ca182694d20b8cf235cde16ea2e083a567dd011fe389a34a4ed3
size 1092090240


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f2ff2c6a4b84c8a6199029464c833313ca72d2f25e7cf268bdecbe69d1962512
size 1073150336


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ce64195797fc7776a57a9c4135f6423d74792238d1826ad68dab9ef38ccc61d
size 1237272960

98
README.md Normal file

@@ -0,0 +1,98 @@
---
base_model: XeTute/HamzahLMV2-1B
datasets:
- XeTute/Eastern-Alpaca-14k
- XeTute/Medic-Thoughts-16k
- XeTute/iloveuser-1k
- vicgalle/alpaca-gpt4
- stas/openwebtext-10k
- Magpie-Align/Magpie-Reasoning-V2-250K-CoT-QwQ
- Magpie-Align/Magpie-Reasoning-V1-150K-CoT-QwQ
language:
- en
- zh
- pt
- it
- fr
- de
- hi
- th
library_name: transformers
license: llama3.2
quantized_by: mradermacher
tags:
- rp
- roleplay
- thinking
- thoughts
- reasoning
- multilingual
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/XeTute/HamzahLMV2-1B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/HamzahLMV2-1B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
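
Multi-part GGUF files are split at byte boundaries, so restoring the original is plain concatenation in part order. A minimal sketch with dummy part files (the names are hypothetical for illustration; this repository's quants are each a single file):

```shell
# Illustrative only: the .partXofY names below are dummies created here,
# not files from this repository. Concatenating the parts in order
# restores the original file byte-for-byte.
printf 'GGUF-first-half-' > model.gguf.part1of2
printf 'GGUF-second-half' > model.gguf.part2of2
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
cat model.gguf   # -> GGUF-first-half-GGUF-second-half
```

A real reassembled (or single-file) quant can then be loaded with llama.cpp, e.g. `llama-cli -m model.gguf -p "your prompt"`.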
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/HamzahLMV2-1B-i1-GGUF/resolve/main/HamzahLMV2-1B.i1-Q6_K.gguf) | i1-Q6_K | 1.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->

3
imatrix.dat Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:04b6dd5d6301388546ca8d55914a16503ffa9bf261839741f8178781c9aec0e4
size 1314413