Initialize project; model provided by the ModelHub XC community
Model: neopolita/h2o-danube-1.8b-sft-gguf · Source: Original Platform
# GGUF quants for [**h2oai/h2o-danube-1.8b-sft**](https://huggingface.co/h2oai/h2o-danube-1.8b-sft) using [llama.cpp](https://github.com/ggerganov/llama.cpp)

**Terms of Use**: Please check the [**original model**](https://huggingface.co/h2oai/h2o-danube-1.8b-sft)

<picture>
  <img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png">
</picture>

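The individual quant files can be fetched with `huggingface_hub`. A minimal sketch, assuming the repo follows the common `<model>.<quant>.gguf` naming (the exact filename is an assumption; check the repo's file list):

```python
# Fetch a single quant file from this repo with the huggingface_hub client.
# The filename is an assumption based on common GGUF naming conventions;
# verify it against the actual file list in the repo.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="neopolita/h2o-danube-1.8b-sft-gguf",
    filename="h2o-danube-1.8b-sft.q4_k_m.gguf",  # hypothetical filename
)
print(model_path)  # local path of the cached download
```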
## Quants

* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_s`: Uses Q3_K for all tensors.
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K.
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K.
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0; however, it offers quicker inference than the q5 models.
* `q4_k_s`: Uses Q4_K for all tensors.
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
* `q5_0`: Higher accuracy, higher resource usage, and slower inference.
* `q5_1`: Even higher accuracy and resource usage, and slower inference still.
* `q5_k_s`: Uses Q5_K for all tensors.
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
* `q6_k`: Uses Q8_K for all tensors.
* `q8_0`: Almost indistinguishable from float16. High resource use and slow; not recommended for most users.
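
To try one of the quants above, the `llama-cpp-python` bindings for llama.cpp are one option (not part of this repo). A minimal sketch, reusing the hypothetical `q4_k_m` filename from the download example:

```python
# Minimal generation sketch with llama-cpp-python (pip install llama-cpp-python),
# one of several llama.cpp front ends that can load these GGUF files.
from llama_cpp import Llama

llm = Llama(
    model_path="h2o-danube-1.8b-sft.q4_k_m.gguf",  # hypothetical local path
    n_ctx=2048,  # context window
)

# Calling the model returns an OpenAI-style completion dict.
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

As a rough rule of thumb, `q4_k_m` is a common balance between file size and quality, while the smaller q2/q3 quants trade accuracy for memory.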