Initialize the project; model provided by the ModelHub XC community

Model: mradermacher/functionary-small-v3.1-GGUF
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-01 09:48:07 +08:00
commit 09ed733c74
15 changed files with 152 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,48 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
functionary-small-v3.1.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
functionary-small-v3.1.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
functionary-small-v3.1.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
functionary-small-v3.1.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
functionary-small-v3.1.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
functionary-small-v3.1.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
functionary-small-v3.1.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
functionary-small-v3.1.f16.gguf filter=lfs diff=lfs merge=lfs -text
functionary-small-v3.1.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
functionary-small-v3.1.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
functionary-small-v3.1.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
functionary-small-v3.1.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
functionary-small-v3.1.Q4_0_4_4.gguf filter=lfs diff=lfs merge=lfs -text
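The attributes above route matching paths through the Git LFS filter, so only small pointer files live in Git history while the blobs go to LFS storage. As a rough sketch, here is how a tool might check a path against a subset of these patterns using Python's `fnmatch` (an approximation only: Git's own wildmatch semantics differ for patterns like `saved_model/**/*`):

```python
from fnmatch import fnmatch

# A subset of the LFS patterns declared in the .gitattributes above
LFS_PATTERNS = [
    "*.bin",
    "*.safetensors",
    "*.gz",
    "functionary-small-v3.1.Q4_K_S.gguf",
]

def is_lfs_tracked(path: str) -> bool:
    """Return True if `path` matches any declared LFS pattern."""
    return any(fnmatch(path, pattern) for pattern in LFS_PATTERNS)
```

Files that match are replaced by pointer blobs in the repository; everything else (like the README) is stored in Git as usual.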

README.md Normal file

@@ -0,0 +1,65 @@
---
base_model: meetkai/functionary-small-v3.1
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/meetkai/functionary-small-v3.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/functionary-small-v3.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
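This repo's GGUF files are single-part, but for split downloads the linked READMEs describe joining the parts byte-for-byte before loading. A minimal sketch (the `.part1of2` naming here is an assumption for illustration, not this repo's layout):

```python
import shutil
import tempfile
from pathlib import Path

def concat_gguf_parts(parts, out_path):
    """Concatenate split GGUF parts byte-for-byte, in the order given."""
    with open(out_path, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)

# Demo with tiny stand-in files; real parts are multi-GB chunks.
tmp = Path(tempfile.mkdtemp())
(tmp / "model.gguf.part1of2").write_bytes(b"GGUF-head")
(tmp / "model.gguf.part2of2").write_bytes(b"-tail")
concat_gguf_parts(sorted(tmp.glob("model.gguf.part*")), tmp / "model.gguf")
merged = (tmp / "model.gguf").read_bytes()
```

Sorting the part names lexicographically keeps them in order before concatenation; the result is a single loadable GGUF file.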
## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v3.1-GGUF/resolve/main/functionary-small-v3.1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v3.1-GGUF/resolve/main/functionary-small-v3.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v3.1-GGUF/resolve/main/functionary-small-v3.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v3.1-GGUF/resolve/main/functionary-small-v3.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v3.1-GGUF/resolve/main/functionary-small-v3.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v3.1-GGUF/resolve/main/functionary-small-v3.1.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v3.1-GGUF/resolve/main/functionary-small-v3.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v3.1-GGUF/resolve/main/functionary-small-v3.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v3.1-GGUF/resolve/main/functionary-small-v3.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v3.1-GGUF/resolve/main/functionary-small-v3.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v3.1-GGUF/resolve/main/functionary-small-v3.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v3.1-GGUF/resolve/main/functionary-small-v3.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/functionary-small-v3.1-GGUF/resolve/main/functionary-small-v3.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
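The table's sizes translate into approximate bits per weight. Assuming the largest LFS pointer in this commit (16,068,894,784 bytes) is the f16 file and the ~8.5 GB blob is Q8_0 (a mapping inferred from the table, since the pointer blobs are unnamed here), the parameter count is roughly half the f16 byte size, and any quant's bits-per-weight follows. A back-of-envelope sketch, not llama.cpp's own accounting:

```python
# Byte sizes copied from LFS pointers in this commit (assumed mapping:
# largest blob = f16, ~8.5 GB blob = Q8_0, consistent with the table above).
F16_BYTES = 16_068_894_784
Q8_0_BYTES = 8_540_774_464

# f16 stores 16 bits (2 bytes) per weight, so params ~= f16 bytes / 2
# (ignores the small GGUF header/metadata overhead).
N_PARAMS = F16_BYTES // 2

def bits_per_weight(file_bytes: int) -> float:
    return file_bytes * 8 / N_PARAMS

q8_bpw = bits_per_weight(Q8_0_BYTES)  # roughly 8.5 bits per weight
```

The same estimate applied to the Q4_K files lands near 4.7 bpw, which is why they sit close to half the f16 size.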
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bcebcb0ad473140e46848bba5148c058e9ff2fd0813a1d2adba9ae8767c6aa11
size 4484366400
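The three lines above are not the model itself: they are the Git LFS pointer that Git stores in place of the ~4.5 GB blob, recording the spec version, the SHA-256 of the real content, and its size in bytes. A minimal parser sketch for this pointer format:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer blob into its version, oid, and size."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),  # hex digest only
        "size": int(fields["size"]),                   # bytes
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:bcebcb0ad473140e46848bba5148c058e9ff2fd0813a1d2adba9ae8767c6aa11
size 4484366400"""
info = parse_lfs_pointer(pointer)
```

After download, hashing the file with SHA-256 and comparing against `oid` (and the byte count against `size`) verifies the transfer.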


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8f862fbf95193f157138043e4c89e8ddf3a31b3ac4a71a39df8b0f39b1dfa517
size 3179135040


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2351b656197567434cbed67a6cac33e973ef0c9a218c60f7286b79a1a6da27ff
size 4321960000


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0bfa04c063710145a6df5d8ae5550de534c787bdaf311e08a23b2174119b3061
size 4018921536


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf523fb0bca088581fc3f7151d6e10c4ad6cdfb49cf795c8a38499b33ed67920
size 3664502848


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9646d19546295c3d790390f6114dc629ab04cdfa4954209329f8fc7e7f9936f9
size 4661215296


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:63c2e4bd726f946cac5b06e17e366e55387c51bad623b0b6de575d579c7c82a6
size 4920737856


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cf99b31e596648f36f342ec0bf329ce2302ab635e2980df2dc0eb46d6fd474f3
size 4692672576


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7535344a60deda5dd759e409b955c6c5824ea5f04685b6b7b342739730646a9f
size 5732991040


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e017544e16ae7816c2aa7ba6d3ab8a362fc92a390ab0e8fd2df567abc55f9fb
size 5599297600


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b28e2618f4197fa2981289c94db84b3f36e7b91bbec34cf0f5687c17dfa651fa
size 6596010048


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1fb7a2a6bd10525693594c0068c6369a1174105d249b6a009f35d3f937c264f9
size 8540774464


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9612eeac5457bbe2a5f21b48014f9556b9115a8c5f2f9007119733551ebfa5f6
size 16068894784