From 2847e1a4640772493f0fb9172e0094bc805cac74 Mon Sep 17 00:00:00 2001
From: ModelHub XC
Date: Mon, 20 Apr 2026 19:01:54 +0800
Subject: [PATCH] Initialize project; model provided by the ModelHub XC
 community

Model: sowilow/LFM2.5-1.2B-Instruct-DGX-Spark-GGUF
Source: Original Platform
---
 .gitattributes                   |  39 +++++++
 LFM2.5-1.2B-Instruct-Q4_K_M.gguf |   3 +
 LFM2.5-1.2B-Instruct-Q8_0.gguf   |   3 +
 README.md                        | 180 +++++++++++++++++++++++++++++++
 lfm2.5-1.2b-instruct-q4_k_m.gguf |   3 +
 lfm2.5-1.2b-instruct-q8_0.gguf   |   3 +
 6 files changed, 231 insertions(+)
 create mode 100644 .gitattributes
 create mode 100644 LFM2.5-1.2B-Instruct-Q4_K_M.gguf
 create mode 100644 LFM2.5-1.2B-Instruct-Q8_0.gguf
 create mode 100644 README.md
 create mode 100644 lfm2.5-1.2b-instruct-q4_k_m.gguf
 create mode 100644 lfm2.5-1.2b-instruct-q8_0.gguf

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..9984a28
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,39 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+lfm2.5-1.2b-instruct-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+lfm2.5-1.2b-instruct-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Instruct-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+LFM2.5-1.2B-Instruct-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/LFM2.5-1.2B-Instruct-Q4_K_M.gguf b/LFM2.5-1.2B-Instruct-Q4_K_M.gguf
new file mode 100644
index 0000000..82cae00
--- /dev/null
+++ b/LFM2.5-1.2B-Instruct-Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07cb5d93a310a3842271f6f402ea038f5d8e5b06c862dec0647c019632037da5
+size 730895424
diff --git a/LFM2.5-1.2B-Instruct-Q8_0.gguf b/LFM2.5-1.2B-Instruct-Q8_0.gguf
new file mode 100644
index 0000000..b39447e
--- /dev/null
+++ b/LFM2.5-1.2B-Instruct-Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c8a680e1c0bf440539dbf713172004eef98713b257d6ebd5d5e61cd52dd6f89
+size 1246254144
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..1007fa8
--- /dev/null
+++ b/README.md
@@ -0,0 +1,180 @@
+---
+license: apache-2.0
+base_model: LiquidAI/LFM2.5-1.2B-Instruct
+language:
+- en
+pipeline_tag: text-generation
+tags:
+- 4-bit
+- 8-bit
+- blackwell-optimized
+- dgx-spark
+- gguf
+- liquid-ai
+- quantized
+- sm121
+---
+
+---
+
+## πŸš€ v0.1.6: Real-time Metrics & Blackwell-Optimized Docker (Recommended)
+
+This model is fully compatible with **[DGX-Spark-llama.cpp-Bench](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench)**, a state-of-the-art inference engine optimized for NVIDIA Blackwell (DGX Spark) hardware.
+
+### 🌟 Key Features (v0.1.6)
+- **Real-time Performance Metrics**: Visualizes `Input TPS` and `Output TPS` during streaming.
+- **Improved Reasoning UI**: Renders the model's Chain-of-Thought (CoT) output smoothly and stably.
+- **Blackwell Optimization**: Native support for ARM64/SM121 and CUDA 13.0 FP4.
+
+### 🐳 Quick Start
+```bash
+# Pull the latest optimized image
+docker pull ghcr.io/sowilow/dgx-spark-llama.cpp-bench:v0.1.6
+```
+For more details, visit the [GitHub Repository](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench).
+
+---
+
+## πŸš€ v0.1.5: Real-time Metrics & Blackwell-Optimized Docker
+
+This model is fully compatible with the **[DGX-Spark-llama.cpp-Bench](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench)**.
+Experience a state-of-the-art inference engine optimized for NVIDIA Blackwell (DGX Spark) hardware.
+
+### 🌟 Key Features (v0.1.5)
+- **Real-time Performance Metrics**: Visualizes `Input TPS` and `Output TPS` during streaming.
+- **Improved Reasoning UI**: Renders the model's Chain-of-Thought (CoT) output smoothly and stably.
+- **Blackwell Optimization**: Native support for ARM64/SM121 and CUDA 13.0 FP4.
+
+### 🐳 Quick Start
+```bash
+# Pull the optimized image
+docker pull ghcr.io/sowilow/dgx-spark-llama.cpp-bench:v0.1.5
+```
+For more details, visit the [GitHub Repository](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench).
+
+---
+
+## πŸš€ v0.1.4: Quick Start with Blackwell-Optimized Docker
+
+This model is fully compatible with the **[DGX-Spark-llama.cpp-Bench](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench)** and delivers its best performance on NVIDIA Blackwell (DGX Spark) hardware with the optimized inference engine.
+
+### 🌟 Key Features (v0.1.4)
+- **Blackwell Optimized**: Native support for ARM64/SM121 and CUDA 13.0 FP4.
+- **Intelligent Reasoning UI**: Automatically extracts and visualizes the model's reasoning process (CoT).
+- **One-Click Deployment**: Standardized environment via a GHCR Docker image.
+
+### 🐳 How to Run
+```bash
+# Pull the optimized image
+docker pull ghcr.io/sowilow/dgx-spark-llama.cpp-bench:v0.1.4
+
+# Follow the instructions in the repo to serve this model
+# GitHub: https://github.com/sowilow/DGX-Spark-llama.cpp-Bench
+```
+
+---
+
+## πŸš€ Quick Start with Docker (Recommended)
+
+You can run this model with the **DGX-Spark-llama.cpp-Bench** inference engine, which is pre-configured for high-performance inference on NVIDIA hardware (especially Blackwell/DGX Spark).
+
+### 1. Pull the Docker Image
+```bash
+docker pull ghcr.io/sowilow/dgx-spark-llama.cpp-bench:latest
+```
+
+### 2. Run the Inference Server
+For detailed configuration and usage, visit the [GitHub Repository](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench).
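The card defers serving details to the repo. As a rough sketch only: the engine is built on llama.cpp, whose upstream `llama-server` binary exposes an OpenAI-compatible HTTP API. Assuming a standard llama.cpp build is available (inside the container or on the host; the image's actual entrypoint is not documented here), serving one of the GGUF files could look like:

```shell
# Hypothetical invocation, not taken from the repo's documented workflow.
# llama-server is upstream llama.cpp's HTTP server; flags shown are its
# standard options: -m model path, -c context size, -ngl GPU layers.
MODEL=LFM2.5-1.2B-Instruct-Q4_K_M.gguf   # or the Q8_0 file for higher fidelity
llama-server -m "$MODEL" -c 4096 -ngl 99 --host 0.0.0.0 --port 8080
```

Once the server is up, clients can talk to it via the OpenAI-style `/v1/chat/completions` endpoint that llama.cpp's server provides.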
+
+---
+
+# LFM2.5-1.2B-Instruct-DGX-Spark-GGUF
+
+This repository contains GGUF-quantized weights for **LFM2.5-1.2B-Instruct**, optimized for **NVIDIA Blackwell (DGX Spark)** hardware.
+
+## πŸš€ Key Features
+- **Hardware Optimized**: Built with CUDA 13.0 and native SM121 (Blackwell) acceleration.
+- **Quantization**:
+  - **Q4_K_M**: Balanced speed, size, and accuracy.
+  - **Q8_0**: Near-full-precision quality at a larger file size.
+- **Base Model Integration**: Linked directly to the original [LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct).
+
+## βš–οΈ License & Attribution
+This model is a quantized version of [LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) and is subject to the original model's license.
+
+## πŸ“‚ Files Included
+- `lfm2.5-1.2b-instruct-q4_k_m.gguf`: 4-bit (Q4_K_M) quantized model.
+- `lfm2.5-1.2b-instruct-q8_0.gguf`: 8-bit (Q8_0) quantized model.
+
+---
+*Created using [DGX-Spark-llama.cpp-Bench](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench)*
diff --git a/lfm2.5-1.2b-instruct-q4_k_m.gguf b/lfm2.5-1.2b-instruct-q4_k_m.gguf
new file mode 100644
index 0000000..5fda68f
--- /dev/null
+++ b/lfm2.5-1.2b-instruct-q4_k_m.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2621a3b06908ba9c94567229d54a369bfa96e93da049473204f812fee6c8baec
+size 730895424
diff --git a/lfm2.5-1.2b-instruct-q8_0.gguf b/lfm2.5-1.2b-instruct-q8_0.gguf
new file mode 100644
index 0000000..b39447e
--- /dev/null
+++ b/lfm2.5-1.2b-instruct-q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c8a680e1c0bf440539dbf713172004eef98713b257d6ebd5d5e61cd52dd6f89
+size 1246254144
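The LFS pointer hunks in this patch record a sha256 oid for each GGUF file, which makes downloads easy to verify. A minimal sketch, assuming standard `sha256sum`; the `verify_gguf` helper is illustrative and not part of the repo:

```shell
# verify_gguf FILE EXPECTED_SHA256
# Succeeds only when the local file's sha256 matches the LFS pointer oid.
verify_gguf() {
  local actual
  actual=$(sha256sum "$1" | cut -d' ' -f1)
  [ "$actual" = "$2" ]
}

# Filename and oid copied from the Q4_K_M pointer file in this patch.
if [ -f LFM2.5-1.2B-Instruct-Q4_K_M.gguf ]; then
  verify_gguf LFM2.5-1.2B-Instruct-Q4_K_M.gguf \
    07cb5d93a310a3842271f6f402ea038f5d8e5b06c862dec0647c019632037da5 \
    && echo "Q4_K_M checksum OK"
fi
```

The same check works for the Q8_0 file with the oid from its pointer hunk.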