Update metadata with huggingface_hub

ai-modelscope
2024-12-18 14:27:02 +08:00
parent 2a8c2ec898
commit 337dc56af0
27 changed files with 272 additions and 63 deletions

.gitattributes vendored

@@ -1,47 +1,59 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-f16.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q6_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q5_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q4_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q3_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q2_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-f32.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct.imatrix filter=lfs diff=lfs merge=lfs -text

Falcon3-10B-Instruct-IQ2_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:338604bfb7d62b2c9d6f5fe1eaca3644697fd779307864fdbda8427c3ff18c32
size 3592343904

Falcon3-10B-Instruct-IQ3_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8cad36eb51e20406871cb0807c57f2223121ccf3a3c0273666845a1ec79a8eaf
size 4704985440

Falcon3-10B-Instruct-IQ3_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:71644650f2cd847f74ec7289baf6e7b8ce1536dd179a13bf6bd8749ed5d5c8f8
size 4368478560

Falcon3-10B-Instruct-IQ4_NL.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6178ad26bc9393a6309da530f849933c0fc989232da33db074ece60a2d9c42f9
size 5906346336

Falcon3-10B-Instruct-IQ4_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b80226e41f7c89da8cdf5f6d5cd72e1796ee707b6e986f15b00226d3b9b55266
size 5596885344

Falcon3-10B-Instruct-Q2_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:70ffc9c01582c3d340bf26f9bece33022c0925418db63e63ba214049ee3377d0
size 3924046176

Falcon3-10B-Instruct-Q2_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cf8351b8e66dde26af67a696eec53fa21d791991dee329c432cf8fcf9942e938
size 4317262176

Falcon3-10B-Instruct-Q3_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ed0020ab55f571aaed11f1cc78c404bec9ca41a815cc69eb6ce805763fd0e93c
size 5450805600

Falcon3-10B-Instruct-Q3_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d2a6502eb596e228722b7198674c23303dba1b75118ff22a385f23551a22284
size 5052477792

Falcon3-10B-Instruct-Q3_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7fb30ebfc8c7c089bed413e01a7eeb0dee10c8d313d580ee7e1affdf051cafcc
size 4591137120

Falcon3-10B-Instruct-Q3_K_XL.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:39d17d242f70dbc6391acf247f90036bee105798ec6d9ec2a8cb445c76e61a0d
size 5803127136

Falcon3-10B-Instruct-Q4_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:38e78c65a0b177930751b9fa952c07cb5ea9dc452115551b115ed73f8ab20773
size 5928464736

Falcon3-10B-Instruct-Q4_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe1121611ab643dadab522d5dded26da60f96b457e71f37e81b63741694f114d
size 6586364256

Falcon3-10B-Instruct-Q4_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6d54a35d740a616061d6c7d7740d64f4339410e58aaba985aa9e1ea79c7e882a
size 6287520096

Falcon3-10B-Instruct-Q4_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:703cfbde06e7b7916fc22eb6d3fe98a5834a9cacf04a88fbabbbe4a7e4821e1a
size 5952156000

Falcon3-10B-Instruct-Q5_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e0574ced7e6705358de7b26de93faf02043af4c0b8d82a2d869e4fb0fbebc519
size 7589065056

Falcon3-10B-Instruct-Q5_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:536489b6ec6159beebc74e245b4b2d9f84b80fed4162fea6f50d604cb728858f
size 7340552544

Falcon3-10B-Instruct-Q5_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7f04a35603029bd1972fa8816405e95c3dc4e6fda9f25e8fc8bd2a5c8fc0d6d0
size 7144190304

Falcon3-10B-Instruct-Q6_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:44496b253716f229de0e9b33cffb748b2509afc53495b0c2e5eef3b32e8274c6
size 8459399520

Falcon3-10B-Instruct-Q6_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:88ff637ab1830664765d7da62f947c6fa4c24a3b7d38d4357cfb12dcc985af55
size 8654434656

Falcon3-10B-Instruct-Q8_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:98892efdf8233741cbaaa5f14a11b441a20fe8bd5962762bb2f3c3fa657b22b0
size 10955239776

Falcon3-10B-Instruct-f16.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0bd98a38cbb5319d42b9d0b1e0880972e95e54c03c251989096931af6c85266e
size 20616556896

Falcon3-10B-Instruct-f32.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:efae50128b34ba1d339a8364401e9977df973bf878fab1afe6f26a14c2407cd7
size 41227366464

Falcon3-10B-Instruct.imatrix Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa5e0f3b3fe2c742dc66d0b1d6139e82dbda6d5ace7274af9979f7e071c984b7
size 6644818

README.md

@@ -1,47 +1,171 @@
Removed (previous ModelScope placeholder card):

---
license: Apache License 2.0
#model-type:
## e.g. gpt, phi, llama, chatglm, baichuan, etc.
#- gpt
#domain:
## e.g. nlp, cv, audio, multi-modal
#- nlp
#language:
## list of language codes: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
#- cn
#metrics:
## e.g. CIDEr, BLEU, ROUGE, etc.
#- CIDEr
#tags:
## custom tags, including training methods such as pretrained, fine-tuned, instruction-tuned, RL-tuned, and others
#- pretrained
#tools:
## e.g. vllm, fastchat, llamacpp, AdaSeq, etc.
#- vllm
---

### The contributors of this model have not provided a more detailed model description. Model files and weights can be found on the "Model Files" page.

#### You can download the model with the git clone command below or via the ModelScope SDK

SDK download
```bash
# Install ModelScope
pip install modelscope
```
```python
# Download the model via the SDK
from modelscope import snapshot_download
model_dir = snapshot_download('bartowski/Falcon3-10B-Instruct-GGUF')
```
Git download
```
# Download the model via git
git clone https://www.modelscope.cn/bartowski/Falcon3-10B-Instruct-GGUF.git
```

<p style="color: lightgrey;">If you are a contributor to this model, we invite you to complete the model card in accordance with the <a href="https://modelscope.cn/docs/ModelScope%E6%A8%A1%E5%9E%8B%E6%8E%A5%E5%85%A5%E6%B5%81%E7%A8%8B%E6%A6%82%E8%A7%88" style="color: lightgrey; text-decoration: underline;">model contribution documentation</a>.</p>

Added (new model card):

---
quantized_by: bartowski
pipeline_tag: text-generation
tags:
- falcon3
license: other
base_model: tiiuae/Falcon3-10B-Instruct
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
---

## Llamacpp imatrix Quantizations of Falcon3-10B-Instruct

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4341">b4341</a> for quantization.

Original model: https://huggingface.co/tiiuae/Falcon3-10B-Instruct

All quants were made using the imatrix option with a dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)

Run them in [LM Studio](https://lmstudio.ai/)

## Prompt format

```
<|system|>
{system_prompt}
<|user|>
{prompt}
<|assistant|>
```
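As a quick illustration, here is a minimal sketch of assembling this template in Python (the `build_prompt` helper is hypothetical, added for illustration; in practice the chat template embedded in the GGUF handles this for you):

```python
# Illustrative only: assemble the Falcon3 chat template shown above.
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        f"<|system|>\n{system_prompt}\n"
        f"<|user|>\n{prompt}\n"
        f"<|assistant|>\n"
    )

print(build_prompt("You are a helpful assistant.", "Hello!"))
```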
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Falcon3-10B-Instruct-f32.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-f32.gguf) | f32 | 41.23GB | false | Full F32 weights. |
| [Falcon3-10B-Instruct-f16.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-f16.gguf) | f16 | 20.62GB | false | Full F16 weights. |
| [Falcon3-10B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q8_0.gguf) | Q8_0 | 10.96GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Falcon3-10B-Instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q6_K_L.gguf) | Q6_K_L | 8.65GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Falcon3-10B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q6_K.gguf) | Q6_K | 8.46GB | false | Very high quality, near perfect, *recommended*. |
| [Falcon3-10B-Instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q5_K_L.gguf) | Q5_K_L | 7.59GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Falcon3-10B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q5_K_M.gguf) | Q5_K_M | 7.34GB | false | High quality, *recommended*. |
| [Falcon3-10B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q5_K_S.gguf) | Q5_K_S | 7.14GB | false | High quality, *recommended*. |
| [Falcon3-10B-Instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q4_K_L.gguf) | Q4_K_L | 6.59GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Falcon3-10B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q4_K_M.gguf) | Q4_K_M | 6.29GB | false | Good quality, default size for most use cases, *recommended*. |
| [Falcon3-10B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q4_K_S.gguf) | Q4_K_S | 5.95GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Falcon3-10B-Instruct-Q4_0.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q4_0.gguf) | Q4_0 | 5.93GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Falcon3-10B-Instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-IQ4_NL.gguf) | IQ4_NL | 5.91GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Falcon3-10B-Instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q3_K_XL.gguf) | Q3_K_XL | 5.80GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Falcon3-10B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-IQ4_XS.gguf) | IQ4_XS | 5.60GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Falcon3-10B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q3_K_L.gguf) | Q3_K_L | 5.45GB | false | Lower quality but usable, good for low RAM availability. |
| [Falcon3-10B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q3_K_M.gguf) | Q3_K_M | 5.05GB | false | Low quality. |
| [Falcon3-10B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-IQ3_M.gguf) | IQ3_M | 4.70GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Falcon3-10B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q3_K_S.gguf) | Q3_K_S | 4.59GB | false | Low quality, not recommended. |
| [Falcon3-10B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-IQ3_XS.gguf) | IQ3_XS | 4.37GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Falcon3-10B-Instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q2_K_L.gguf) | Q2_K_L | 4.32GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Falcon3-10B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-Q2_K.gguf) | Q2_K | 3.92GB | false | Very low quality but surprisingly usable. |
| [Falcon3-10B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Falcon3-10B-Instruct-GGUF/blob/main/Falcon3-10B-Instruct-IQ2_M.gguf) | IQ2_M | 3.59GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method, but with the embedding and output weights quantized to Q8_0 instead of what they would normally default to.
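One way to check this yourself is a short sketch using the `gguf` Python package that ships with the llama.cpp repo (assumes the package is installed and the file has already been downloaded):

```python
# Sketch: list the quantization type of the embedding/output tensors in a
# GGUF file, to confirm the *_L variants keep them at Q8_0.
from gguf import GGUFReader  # pip install gguf

reader = GGUFReader("Falcon3-10B-Instruct-Q4_K_L.gguf")
for tensor in reader.tensors:
    if tensor.name in ("token_embd.weight", "output.weight"):
        print(tensor.name, tensor.tensor_type.name)
```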
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Falcon3-10B-Instruct-GGUF --include "Falcon3-10B-Instruct-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Falcon3-10B-Instruct-GGUF --include "Falcon3-10B-Instruct-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Falcon3-10B-Instruct-Q8_0) or download them all in place (./). A Python equivalent is sketched below.
</details>
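The same download can also be scripted from Python with `huggingface_hub`; here is a minimal sketch mirroring the single-file CLI example above:

```python
# Sketch: download one quant file from the Hub programmatically.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Falcon3-10B-Instruct-GGUF",
    filename="Falcon3-10B-Instruct-Q4_K_M.gguf",
    local_dir="./",
)
print(path)  # local path to the downloaded GGUF
```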
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will be done automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 variant for now. The loading time may be slower, but it will result in an overall speed increase.
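As a loose, purely conceptual illustration of what "repacking" means (this is not llama.cpp's actual memory layout):

```python
# Conceptual sketch: interleave blocks from 4 rows so one SIMD pass can load
# the corresponding block of each row together (as Q4_0_4_4 did on disk, and
# as online repacking now does in memory).
def repack_4_rows(rows: list[list[int]]) -> list[int]:
    assert len(rows) == 4 and len({len(r) for r in rows}) == 1
    return [block for group in zip(*rows) for block in group]

print(repack_4_rows([[1, 2], [3, 4], [5, 6], [7, 8]]))
# -> [1, 3, 5, 7, 2, 4, 6, 8]
```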
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write-up with charts showing various performance comparisons is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total (see the sizing sketch after this section).
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is another backend option for AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
</details>
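To make the sizing rule of thumb above concrete, here is a small illustrative helper (the file sizes come from the download table above; the `pick_quant` function and its 1.5GB default headroom are assumptions for illustration, not official guidance):

```python
# Illustrative sketch: pick the largest quant that still leaves ~1-2 GB of
# headroom below the RAM/VRAM you can dedicate to the model.
QUANT_SIZES_GB = {
    "Q8_0": 10.96, "Q6_K": 8.46, "Q5_K_M": 7.34, "Q4_K_M": 6.29,
    "IQ4_XS": 5.60, "Q3_K_M": 5.05, "IQ3_M": 4.70, "Q2_K": 3.92,
}

def pick_quant(available_gb: float, headroom_gb: float = 1.5) -> str | None:
    budget = available_gb - headroom_gb
    candidates = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget}
    # None means nothing fits fully; consider partial GPU offload instead.
    return max(candidates, key=candidates.get) if candidates else None

print(pick_quant(8.0))  # e.g. an 8 GB GPU -> 'Q4_K_M'
```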
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski

configuration.json Normal file

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}