Upload folder using huggingface_hub
.gitattributes (vendored)
@@ -33,3 +33,8 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Bielik-11B-v3.0-Instruct-fp16.gguf filter=lfs diff=lfs merge=lfs -text
Bielik-11B-v3.0-Instruct.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Bielik-11B-v3.0-Instruct.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Bielik-11B-v3.0-Instruct.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Bielik-11B-v3.0-Instruct.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text

Bielik-11B-v3.0-Instruct-fp16.gguf (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:58b34db4988b2fff0a29e49ee10c45d978a289c573d007c3ddca49ae8692e98d
size 22339170944

Bielik-11B-v3.0-Instruct.Q4_K_M.gguf (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c16841621efe93c7c8ebf1b374709a96276f3741e649f83ed90131a7b5ad23a8
size 6724051584

Bielik-11B-v3.0-Instruct.Q5_K_M.gguf (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a6baba952f0ebb12fa64f55feeb98271dc906954aad4cff6be234d90efccbc4
size 7907041920

Bielik-11B-v3.0-Instruct.Q6_K.gguf (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:80f2aa85631d7dba513681d9f7520a37d226c82d1f9b1544a2e57c60fd93349a
size 9163969152

Bielik-11B-v3.0-Instruct.Q8_0.gguf (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b2b1b8cd2056638d83a257c1fc20e334e0129195e8aa41c878fef6f56f1e6724
size 11868811904

README.md (new file)
@@ -0,0 +1,85 @@
---
language:
- pl
license: apache-2.0
library_name: transformers
tags:
- finetuned
- gguf
inference: false
pipeline_tag: text-generation
base_model: speakleash/Bielik-11B-v3.0-Instruct
---

<p align="center">
<img src="https://huggingface.co/speakleash/Bielik-11B-v2/raw/main/speakleash_cyfronet.png">
</p>

# Bielik-11B-v3.0-Instruct-GGUF

This repo contains GGUF format model files for [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v3.0-Instruct](https://huggingface.co/speakleash/Bielik-11B-v3.0-Instruct).

<b><u>DISCLAIMER: Be aware that quantized models show reduced response quality and possible hallucinations!</u></b><br>

### Available quantization formats:
* **q4_k_m:** Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, otherwise Q4_K
* **q5_k_m:** Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, otherwise Q5_K
* **q6_k:** Uses Q8_K for all tensors
* **q8_0:** Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
* **16bit:** The original fp16 weights converted to GGUF format.

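To fetch a single quant rather than every file in the repository, the GGUF can be downloaded with `huggingface-cli` (the repo id below is an assumption based on this model card, shown for illustration):

```
# Download only the Q4_K_M quant into the current directory
huggingface-cli download speakleash/Bielik-11B-v3.0-Instruct-GGUF Bielik-11B-v3.0-Instruct.Q4_K_M.gguf --local-dir .
```
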
### Ollama Modelfile
The GGUF file can be used with [Ollama](https://ollama.com/). To do this, you need to import the model using the configuration defined in a Modelfile. For example, for Bielik-11B-v3.0-Instruct.Q4_K_M.gguf (use the full path to the model file), the Modelfile looks like this:

```
FROM ./Bielik-11B-v3.0-Instruct.Q4_K_M.gguf

TEMPLATE """<s>{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""

PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"

# Remember to set a low temperature for experimental models (1-3 bit)
PARAMETER temperature 0.1
```
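
With the Modelfile saved next to the GGUF file, the model can then be imported and run via the Ollama CLI (the local model name `bielik` is just an example):

```
# Register the model under a local name using the Modelfile above
ollama create bielik -f Modelfile

# Chat with the imported model
ollama run bielik
```
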
### Model description:

* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Quantized from:** [Bielik-11B-v3.0-Instruct](https://huggingface.co/speakleash/Bielik-11B-v3.0-Instruct)
* **Finetuned from:** [speakleash/Bielik-11B-v3-Base-20250730](https://huggingface.co/speakleash/Bielik-11B-v3-Base-20250730)
* **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/)

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023.

Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows, macOS (Silicon) and Linux, with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible AI server. Note that ctransformers has not been updated in a long time and does not support many recent models.

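As a quick sketch of the first option above, a quant can be run directly with the llama.cpp CLI (the `llama-cli` binary name assumes a recent llama.cpp build; older builds ship the same tool as `main`):

```
# Generate a completion from the Q4_K_M quant; -n caps the number of new tokens
./llama-cli -m ./Bielik-11B-v3.0-Instruct.Q4_K_M.gguf \
  -p "Pytanie: Jaka jest stolica Polski? Odpowiedź:" -n 128 --temp 0.1
```
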
### Responsible for model quantization
* [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - team leadership, conceptualization, calibration data preparation, process creation and quantized model delivery.

## Contact Us

If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/CPBxPce4).