From 3fe88846bf06e9881e78b37645956cc4eafcebcf Mon Sep 17 00:00:00 2001
From: ModelHub XC
Date: Sat, 11 Apr 2026 21:49:58 +0800
Subject: [PATCH] Initialize project; model provided by the ModelHub XC
 community
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Model: oki0ki/Bielik-1.5B-v3.0-Instruct-GGUF
Source: Original Platform
---
 .gitattributes                      | 37 +++++++++++++
 Bielik-1.5B-v3.0-Instruct-fp16.gguf |  3 ++
 Bielik-1.5B-v3.0-Instruct.Q8_0.gguf |  3 ++
 README.md                           | 84 +++++++++++++++++++++++++++++
 4 files changed, 127 insertions(+)
 create mode 100644 .gitattributes
 create mode 100644 Bielik-1.5B-v3.0-Instruct-fp16.gguf
 create mode 100644 Bielik-1.5B-v3.0-Instruct.Q8_0.gguf
 create mode 100644 README.md

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..8798598
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,37 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+Bielik-1.5B-v3.0-Instruct-fp16.gguf filter=lfs diff=lfs merge=lfs -text
+Bielik-1.5B-v3.0-Instruct.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/Bielik-1.5B-v3.0-Instruct-fp16.gguf b/Bielik-1.5B-v3.0-Instruct-fp16.gguf
new file mode 100644
index 0000000..b17568f
--- /dev/null
+++ b/Bielik-1.5B-v3.0-Instruct-fp16.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7f9a6b5680cb275a83e9b4c262ae1b0dcf5bab3d42cdafcd503f32f6a74b024a
+size 3195509408
diff --git a/Bielik-1.5B-v3.0-Instruct.Q8_0.gguf b/Bielik-1.5B-v3.0-Instruct.Q8_0.gguf
new file mode 100644
index 0000000..5a09ed9
--- /dev/null
+++ b/Bielik-1.5B-v3.0-Instruct.Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2abdd34a84825a43b49614bb6283be984e43444f7d54eedd77250cfcb1040e95
+size 1699571872
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..1d17ed2
--- /dev/null
+++ b/README.md
@@ -0,0 +1,84 @@
+---
+language:
+- pl
+license: apache-2.0
+library_name: transformers
+tags:
+- finetuned
+- gguf
+inference: false
+pipeline_tag: text-generation
+base_model: speakleash/Bielik-1.5B-v3.0-Instruct
+---
+
+# Bielik-1.5B-v3.0-Instruct-GGUF
+
+This repo contains GGUF format model files for [SpeakLeash](https://speakleash.org/)'s [Bielik-1.5B-v3.0-Instruct](https://huggingface.co/speakleash/Bielik-1.5B-v3.0-Instruct).
+
+📚 Technical report: https://arxiv.org/abs/2505.02550
+
+DISCLAIMER: Be aware that quantized models can show reduced response quality and may hallucinate!
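+
+### Quick check with llama.cpp
+
+As a quick sanity check after downloading, a GGUF file from this repo can be run directly with the llama.cpp CLI. This is a minimal sketch, assuming a recent llama.cpp build (where the binary is named `llama-cli`; older builds call it `main`); the Polish prompt and the sampling flags are only illustrative:
+
+```
+# Run a short completion against the Q8_0 file (CPU by default).
+# Add -ngl 99 to offload all layers to the GPU if llama.cpp was built with GPU support.
+./llama-cli -m ./Bielik-1.5B-v3.0-Instruct.Q8_0.gguf \
+  -p "Napisz jedno zdanie o Krakowie." \
+  -n 128 --temp 0.1
+```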
+
+### Available quantization formats:
+* **q8_0:** Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
+* **fp16:** The original Bielik-1.5B-v3.0-Instruct weights converted to fp16 GGUF (not quantized).
+
+### Ollama Modelfile
+The GGUF file can be used with [Ollama](https://ollama.com/). To do this, you need to import the model using the configuration defined in a Modelfile. For example, for Bielik-1.5B-v3.0-Instruct.Q8_0.gguf (use the full path to the model file), the Modelfile looks like this (see the import-and-run sketch near the end of this README):
+
+```
+FROM ./Bielik-1.5B-v3.0-Instruct.Q8_0.gguf
+
+TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
+
+{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
+
+{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
+
+{{ .Response }}<|eot_id|>"""
+
+PARAMETER stop "<|start_header_id|>"
+PARAMETER stop "<|end_header_id|>"
+PARAMETER stop "<|eot_id|>"
+
+# Remember to set a low temperature for experimental models (1-3 bit quants)
+PARAMETER temperature 0.1
+```
+
+### Model description:
+
+* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
+* **Language:** Polish
+* **Model type:** causal decoder-only
+* **Quantized from:** [Bielik-1.5B-v3.0-Instruct](https://huggingface.co/speakleash/Bielik-1.5B-v3.0-Instruct)
+* **Finetuned from:** [Bielik-1.5B-v2](https://huggingface.co/speakleash/Bielik-1.5B-v2)
+* **License:** Apache 2.0
+
+### About GGUF
+
+GGUF is a model file format introduced by the llama.cpp team on August 21st, 2023. It replaces the older GGML format, which llama.cpp no longer supports.
+
+Here is an incomplete list of clients and libraries that are known to support GGUF:
+* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+* [GPT4All](https://gpt4all.io/index.html), a free and open-source GUI that runs locally, supporting Windows, Linux and macOS, with full GPU acceleration.
+* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows, macOS (Apple Silicon) and Linux, with GPU acceleration.
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note that ctransformers has not been updated in a long time and does not support many recent models.
+
+### Responsible for model quantization
+* [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/), SpeakLeash - team leadership, conceptualizing, calibration data preparation, process creation and quantized model delivery.
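+
+### Importing and running with Ollama
+
+Following on from the Modelfile section above, this is a minimal sketch of importing and running the model with the Ollama CLI. It assumes the configuration above is saved as `Modelfile` in the current directory; the local model name `bielik-1.5b-v3` is arbitrary (any name works):
+
+```
+# Register the GGUF file with Ollama under a local model name
+ollama create bielik-1.5b-v3 -f ./Modelfile
+
+# Start an interactive chat with the imported model
+ollama run bielik-1.5b-v3
+```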
+
+## Contact Us
+
+If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [SpeakLeash Discord](https://discord.gg/CPBxPce4).
\ No newline at end of file