Initialize project; model provided by the ModelHub XC community

Model: sowilow/LFM2.5-1.2B-Instruct-DGX-Spark-GGUF
Source: Original Platform
ModelHub XC committed 2026-04-20 19:01:54 +08:00
commit 2847e1a464
6 changed files with 231 additions and 0 deletions

.gitattributes (new file, 39 lines)

@@ -0,0 +1,39 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
lfm2.5-1.2b-instruct-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
lfm2.5-1.2b-instruct-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
LFM2.5-1.2B-Instruct-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
LFM2.5-1.2B-Instruct-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
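For context, the GGUF entries at the end of this attribute list are the kind produced by `git lfs track`; a minimal sketch of how such rules are added (assuming Git LFS is installed locally):

```bash
# Enable Git LFS for the repository and track all GGUF weights;
# this appends "*.gguf filter=lfs diff=lfs merge=lfs -text" to .gitattributes
git lfs install
git lfs track "*.gguf"
git add .gitattributes
```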

LFM2.5-1.2B-Instruct-Q4_K_M.gguf (new file, Git LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:07cb5d93a310a3842271f6f402ea038f5d8e5b06c862dec0647c019632037da5
size 730895424

LFM2.5-1.2B-Instruct-Q8_0.gguf (new file, Git LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c8a680e1c0bf440539dbf713172004eef98713b257d6ebd5d5e61cd52dd6f89
size 1246254144
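The three-line blocks above are Git LFS pointer files (spec v1): only the object hash (`oid`) and byte `size` are committed, while the actual weights live in LFS storage. A minimal sketch of fetching the real files after cloning (assuming Git LFS is installed and the remote serves the LFS objects):

```bash
# Replace the LFS pointer files with the actual GGUF weights
git lfs install
git clone <repository-url> LFM2.5-1.2B-Instruct-DGX-Spark-GGUF
cd LFM2.5-1.2B-Instruct-DGX-Spark-GGUF
git lfs pull   # downloads the blobs referenced by the oid/size pointers
```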

README.md (new file, 180 lines)

@@ -0,0 +1,180 @@
---
license: apache-2.0
base_model: LiquidAI/LFM2.5-1.2B-Instruct
language:
- en
pipeline_tag: text-generation
tags:
- 4-bit
- 8-bit
- blackwell-optimized
- dgx-spark
- gguf
- liquid-ai
- quantized
- sm121
---
---
## 🚀 v0.1.6: Real-time Metrics & Blackwell-Optimized Docker (Recommended)
This model is fully compatible with the **[DGX-Spark-llama.cpp-Bench](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench)**.
Experience the state-of-the-art inference engine optimized for NVIDIA Blackwell (DGX Spark) hardware.
### 🌟 Key Features (v0.1.6)
- **Real-time Performance Metrics**: Now visualizes `Input TPS` and `Output TPS` during streaming.
- **Improved Reasoning UI**: Seamlessly renders and stabilizes the model's Chain-of-Thought (CoT).
- **Blackwell Optimization**: Native support for ARM64/SM121 and CUDA 13.0 FP4.
### 🐳 Quick Start
```bash
# Pull the latest optimized image
docker pull ghcr.io/sowilow/dgx-spark-llama.cpp-bench:v0.1.6
```
For more details, visit our [GitHub Repository](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench).
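Before serving the model, a quick smoke test can confirm the container sees the Blackwell GPU. This is a sketch under the assumption that the image ships `nvidia-smi` (as CUDA runtime bases usually do); adjust to the repository's documented entrypoint if it differs:

```bash
# Override the entrypoint to print GPU info from inside the container
docker run --rm --gpus all --entrypoint nvidia-smi \
  ghcr.io/sowilow/dgx-spark-llama.cpp-bench:v0.1.6
```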
---
## 🚀 v0.1.6: Real-time Metrics & Blackwell-Optimized Docker (Recommended)
This model is optimized for the **[DGX-Spark-llama.cpp-Bench](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench)** system.
Get the most out of NVIDIA Blackwell (DGX Spark) hardware.
### 🌟 Key Features (v0.1.6)
- **Real-time performance metrics**: Displays `Input TPS` and `Output TPS` live during streaming.
- **Improved reasoning UI**: Renders the model's chain of thought (CoT) more reliably.
- **Blackwell optimization**: Supports the ARM64/SM121 architecture and CUDA 13.0 FP4 acceleration.
### 🐳 How to Run
```bash
# Pull the latest optimized image
docker pull ghcr.io/sowilow/dgx-spark-llama.cpp-bench:v0.1.6
```
See the [GitHub repository](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench) for detailed usage.
---
## 🚀 v0.1.5: Real-time Metrics & Blackwell-Optimized Docker (Recommended)
This model is fully compatible with the **[DGX-Spark-llama.cpp-Bench](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench)**.
Experience the state-of-the-art inference engine optimized for NVIDIA Blackwell (DGX Spark) hardware.
### 🌟 Key Features (v0.1.5)
- **Real-time Performance Metrics**: Now visualizes `Input TPS` and `Output TPS` during streaming.
- **Improved Reasoning UI**: Seamlessly renders and stabilizes the model's Chain-of-Thought (CoT).
- **Blackwell Optimization**: Native support for ARM64/SM121 and CUDA 13.0 FP4.
### 🐳 Quick Start
```bash
# Pull the latest optimized image
docker pull ghcr.io/sowilow/dgx-spark-llama.cpp-bench:v0.1.5
```
For more details, visit our [GitHub Repository](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench).
---
## 🚀 v0.1.5: Real-time Metrics & Blackwell-Optimized Docker (Recommended)
This model is optimized for the **[DGX-Spark-llama.cpp-Bench](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench)** system.
Get the most out of NVIDIA Blackwell (DGX Spark) hardware.
### 🌟 Key Features (v0.1.5)
- **Real-time performance metrics**: Displays `Input TPS` and `Output TPS` live during streaming.
- **Improved reasoning UI**: Renders the model's chain of thought (CoT) more reliably.
- **Blackwell optimization**: Supports the ARM64/SM121 architecture and CUDA 13.0 FP4 acceleration.
### 🐳 How to Run
```bash
# Pull the latest optimized image
docker pull ghcr.io/sowilow/dgx-spark-llama.cpp-bench:v0.1.5
```
See the [GitHub repository](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench) for detailed usage.
---
## 🚀 v0.1.4: Quick Start with Blackwell-Optimized Docker (Recommended)
This model is fully compatible with the **[DGX-Spark-llama.cpp-Bench](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench)**.
Experience the best performance on NVIDIA Blackwell (DGX Spark) hardware with our optimized inference engine.
### 🌟 Key Features (v0.1.4)
- **Blackwell Optimized**: Native support for ARM64/SM121 and CUDA 13.0 FP4.
- **Intelligent Reasoning UI**: Automatic extraction and visualization of reasoning processes (CoT).
- **One-Click Deployment**: Standardized environment via GHCR Docker image.
### 🐳 How to Run
```bash
# Pull the latest optimized image
docker pull ghcr.io/sowilow/dgx-spark-llama.cpp-bench:v0.1.4
# Follow the instructions in our repo to serve this model
# GitHub: https://github.com/sowilow/DGX-Spark-llama.cpp-Bench
```
---
## 🚀 v0.1.4: Blackwell-Optimized Docker Quick Start (Recommended)
This model is optimized for the **[DGX-Spark-llama.cpp-Bench](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench)** system.
Experience an inference engine tuned to get the most out of NVIDIA Blackwell (DGX Spark) hardware.
### 🌟 Key Features (v0.1.4)
- **Blackwell optimization**: Supports the ARM64/SM121 architecture and CUDA 13.0 FP4 hardware acceleration.
- **Intelligent reasoning UI**: Automatically detects and visualizes the model's chain of thought (CoT).
- **Simple deployment**: Runs immediately from the GHCR Docker image with no environment setup.
### 🐳 How to Run
```bash
# Pull the latest optimized image
docker pull ghcr.io/sowilow/dgx-spark-llama.cpp-bench:v0.1.4
```
See the [GitHub repository](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench) for detailed usage.
---
## 🚀 Quick Start with Docker (Recommended)
You can easily run this model using the **DGX-Spark-llama.cpp-Bench** inference engine. It's pre-configured for high-performance inference on NVIDIA hardware (especially Blackwell/DGX Spark).
### 1. Pull the Docker Image
```bash
docker pull ghcr.io/sowilow/dgx-spark-llama.cpp-bench:latest
```
### 2. Run the Inference Server
For detailed configuration and usage, visit the [GitHub Repository](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench).
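As a rough sketch only (the container's serving command, internal model path, and port are assumptions here, not taken from the project docs), the usual pattern is to mount a local directory holding the GGUF files and publish the server port:

```bash
# Hypothetical invocation: mount downloaded GGUF weights and map a port.
# The mount path and port below are placeholders; use the values
# documented in the DGX-Spark-llama.cpp-Bench repository.
docker run --rm --gpus all \
  -v "$PWD/models:/models" \
  -p 8080:8080 \
  ghcr.io/sowilow/dgx-spark-llama.cpp-bench:latest
```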
---
# LFM2.5-1.2B-Instruct-DGX-Spark-GGUF
This repository contains GGUF-quantized weights for **LFM2.5-1.2B-Instruct**, specifically optimized for **NVIDIA Blackwell (DGX Spark)** hardware.
## 🚀 Key Features
- **Hardware Optimized**: Built with CUDA 13.0 and SM121 (Blackwell) native acceleration.
- **Quantization**:
- **Q4_K_M**: Balanced performance and accuracy.
- **Q8_0**: High precision preservation.
- **Base Model Integration**: Linked directly to the original [LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct).
## ⚖️ License & Attribution
This model is a quantized version of the original [LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) and is subject to its original license.
## 📂 Files Included
- `lfm2.5-1.2b-instruct-q4_k_m.gguf`: 4-bit quantized model.
- `lfm2.5-1.2b-instruct-q8_0.gguf`: 8-bit quantized model.
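If you would rather skip Docker, a minimal sketch with a recent CUDA-enabled llama.cpp build follows; the file name comes from this repository, while the context size, port, and prompt are illustrative defaults:

```bash
# Serve the 4-bit build with llama.cpp's OpenAI-compatible server,
# offloading all layers to the GPU
llama-server -m lfm2.5-1.2b-instruct-q4_k_m.gguf -ngl 99 -c 4096 --port 8080

# Quick request against the chat completions endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello!"}],"max_tokens":64}'
```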
---
*Created using [DGX-Spark-llama.cpp-Bench](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench)*

lfm2.5-1.2b-instruct-q4_k_m.gguf (new file, Git LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2621a3b06908ba9c94567229d54a369bfa96e93da049473204f812fee6c8baec
size 730895424

lfm2.5-1.2b-instruct-q8_0.gguf (new file, Git LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c8a680e1c0bf440539dbf713172004eef98713b257d6ebd5d5e61cd52dd6f89
size 1246254144