From c4aaa6503a0ed80448a06aed19c73cc525f78dbb Mon Sep 17 00:00:00 2001
From: ModelHub XC
Date: Sun, 19 Apr 2026 16:17:59 +0800
Subject: [PATCH] Initialize project; model provided by the ModelHub XC
 community
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Model: VillanovaAI/Villanova-2B-2603-GGUF
Source: Original Platform
---
 .gitattributes              | 37 +++++++++++++++++++++++++
 README.md                   | 55 +++++++++++++++++++++++++++++++++++++
 Villanova-2B-2603-BF16.gguf |  3 ++
 Villanova-2B-2603-Q8_0.gguf |  3 ++
 4 files changed, 98 insertions(+)
 create mode 100644 .gitattributes
 create mode 100644 README.md
 create mode 100644 Villanova-2B-2603-BF16.gguf
 create mode 100644 Villanova-2B-2603-Q8_0.gguf

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..93842a8
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,37 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+Villanova-2B-2603-BF16.gguf filter=lfs diff=lfs merge=lfs -text
+Villanova-2B-2603-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..f13d135
--- /dev/null
+++ b/README.md
@@ -0,0 +1,55 @@
+---
+license: apache-2.0
+language:
+- en
+- de
+- es
+- fr
+- it
+base_model:
+- VillanovaAI/Villanova-2B-2603
+pipeline_tag: text-generation
+---
+
+
+# Model Card for Villanova-2B-2603-GGUF
+
+Villanova.AI logo
+
+**Villanova-2B-2603** is a fully open, multilingual, instruction-tuned large language model developed by [Villanova.AI](https://huggingface.co/VillanovaAI). Part of the Villanova project, it is designed to advance open European language technology, with native support for five European languages. All model weights, training data sources, and training details are publicly released.
+
+This repo contains GGUF-format model files for the [VillanovaAI/Villanova-2B-2603](https://huggingface.co/VillanovaAI/Villanova-2B-2603) model.
+
+---
+
+## Model Family
+
+**[Villanova-2B-Base-2603](https://huggingface.co/VillanovaAI/Villanova-2B-Base-2603)** — Base model (4.4T)<br>
+ ↳ **[Villanova-2B-2603](https://huggingface.co/VillanovaAI/Villanova-2B-2603)** — SFT / Instruct
+  ↳ [Villanova-2B-2603-GGUF](https://huggingface.co/VillanovaAI/Villanova-2B-2603-GGUF) — Quantized — 📍 *This model*
+ ↳ **[Villanova-2B-VL-2603](https://huggingface.co/VillanovaAI/Villanova-2B-VL-2603)** — Vision-Language Instruct
+  ↳ [Villanova-2B-VL-2603-GGUF](https://huggingface.co/VillanovaAI/Villanova-2B-VL-2603-GGUF) — Quantized
+
+**[Villanova-2B-Base-2512-Preview](https://huggingface.co/VillanovaAI/Villanova-2B-Base-2512-Preview)** — Base model (2.2T) (previous version, not recommended)
+ ↳ **[Villanova-2B-2512-Preview](https://huggingface.co/VillanovaAI/Villanova-2B-2512-Preview)** — SFT / Instruct (previous version, not recommended)
+
+
+
+
+## About GGUF
+
+**GGUF** is a file format introduced by the llama.cpp project for storing and distributing LLMs. It is designed for portability and efficient **inference on the edge**.
+
+## Quick Usage with llama.cpp
+
+You can run this model directly using the `llama-cli` tool (part of [llama.cpp](https://github.com/ggerganov/llama.cpp)).
+
+To run the model with the **Q8_0** quantization directly from Hugging Face:
+
+```bash
+llama-cli -hf VillanovaAI/Villanova-2B-2603-GGUF:Q8_0
+```
+
+
diff --git a/Villanova-2B-2603-BF16.gguf b/Villanova-2B-2603-BF16.gguf
new file mode 100644
index 0000000..8dc0a14
--- /dev/null
+++ b/Villanova-2B-2603-BF16.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:66bb5b6e0fe42a779b10f24c7049f56dbabb9b67d656691ddd3168d73a5084c6
+size 4714929152
diff --git a/Villanova-2B-2603-Q8_0.gguf b/Villanova-2B-2603-Q8_0.gguf
new file mode 100644
index 0000000..26b3fdc
--- /dev/null
+++ b/Villanova-2B-2603-Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb0865b1cd939a2f73114523b099376230c03bed9b1fa6dfbb02283dff98577e
+size 2508004608
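Note for reviewers: beyond the one-shot `llama-cli` invocation in the README added by this patch, the same GGUF file can be served over HTTP. The sketch below is a hedged example, assuming a recent llama.cpp build whose `llama-server` binary supports the `-hf` shorthand and exposes an OpenAI-compatible `/v1/chat/completions` endpoint; the port number is an arbitrary choice, not something this repo prescribes.

```bash
# Start an OpenAI-compatible server, pulling the Q8_0 file from Hugging Face.
llama-server -hf VillanovaAI/Villanova-2B-2603-GGUF:Q8_0 --port 8080

# In another terminal, send a chat request to the server.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```

Serving this way avoids reloading the 2.5 GB Q8_0 weights for every prompt, since the model stays resident in the server process between requests.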