Initialize project; model provided by the ModelHub XC community

Model: pravdin/merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B-gguf
Source: Original Platform
ModelHub XC
2026-05-02 03:47:10 +08:00
commit 90fb26859e
5 changed files with 150 additions and 0 deletions

.gitattributes vendored Normal file (38 lines)

@@ -0,0 +1,38 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B.q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B.q8_0.gguf filter=lfs diff=lfs merge=lfs -text
merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B.q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
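The lines above route matching paths through the Git LFS filter. Note that the three `.gguf` files are tracked by exact filename rather than a `*.gguf` glob. As a quick sanity check, Python's `fnmatch` approximates this matching (an approximation only: unlike gitattributes globs, fnmatch's `*` also crosses `/`):

```python
from fnmatch import fnmatch

# A few of the LFS patterns from the .gitattributes above
LFS_PATTERNS = [
    "*.bin", "*.safetensors", "*.onnx",
    "merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B.q4_k_m.gguf",
]

def is_lfs_tracked(path: str) -> bool:
    """True if the path matches any LFS-tracked pattern."""
    return any(fnmatch(path, pattern) for pattern in LFS_PATTERNS)

print(is_lfs_tracked("model.safetensors"))  # True
print(is_lfs_tracked("README.md"))          # False
```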

README.md Normal file (103 lines)

@@ -0,0 +1,103 @@
---
license: apache-2.0
tags:
- text-generation
- llama.cpp
- gguf
- quantization
- merged-model
language:
- en
library_name: gguf
---
# merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B - GGUF Quantized Model
This is a collection of GGUF quantized versions of [pravdin/merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/pravdin/merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B).
## 🌳 Model Tree
This model was created by merging the following models:
```
pravdin/merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B
├── Merge Method: dare_ties
├── Gensyn/Qwen2.5-1.5B-Instruct
├── deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
├── density: 0.6
└── weight: 0.5
```
**Merge Method**: `dare_ties`, which randomly drops and rescales delta weights (DARE) and resolves parameter sign conflicts (TIES) to reduce interference between the merged models
## 📊 Available Quantization Formats
This repository contains multiple quantization formats optimized for different use cases:
- **q4_k_m**: 4-bit quantization, medium quality, good balance of size and performance
- **q5_k_m**: 5-bit quantization, higher quality, slightly larger size
- **q8_0**: 8-bit quantization, highest quality, larger size but minimal quality loss
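The three file sizes can be read off this commit's LFS pointers. A small helper can pick the highest-quality file that fits a RAM budget; note that mapping sizes to formats assumes the conventional q4 < q5 < q8 size ordering, since the pointer filenames are not shown in this diff:

```python
# Approximate file sizes from this commit's LFS pointers (bytes).
# The size-to-format mapping assumes the usual q4 < q5 < q8 ordering.
SIZES = {
    "q4_k_m": 1_117_321_376,
    "q5_k_m": 1_285_494_944,
    "q8_0":   1_894_532_768,
}

def pick_quantization(ram_budget_bytes):
    """Return the highest-quality format whose file fits the budget."""
    for fmt in ("q8_0", "q5_k_m", "q4_k_m"):
        if SIZES[fmt] <= ram_budget_bytes:
            return fmt
    return None

print(pick_quantization(2 * 1024**3))  # q8_0 (fits in 2 GiB)
```

Actual memory use at runtime is higher than the file size once the context (KV cache) is allocated.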
## 🚀 Usage
### With llama.cpp
```bash
# Download a specific quantization
wget https://huggingface.co/pravdin/merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B/resolve/main/merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B.q4_k_m.gguf
# Run with llama.cpp (newer builds ship this binary as llama-cli rather than main)
./main -m merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B.q4_k_m.gguf -p "Your prompt here"
```
### With Python (llama-cpp-python)
```python
from llama_cpp import Llama
# Load the model
llm = Llama(model_path="merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B.q4_k_m.gguf")
# Generate text
output = llm("Your prompt here", max_tokens=512)
print(output['choices'][0]['text'])
```
### With Ollama
```bash
# Create a Modelfile
echo 'FROM ./merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B.q4_k_m.gguf' > Modelfile
# Create and run the model
ollama create merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B -f Modelfile
ollama run merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B "Your prompt here"
```
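The Modelfile can also pin sampling parameters; a minimal sketch using standard Modelfile directives (the parameter values here are illustrative, not tuned for this model):

```
FROM ./merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B.q4_k_m.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
```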
## 📋 Model Details
- **Original Model**: [pravdin/merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/pravdin/merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B)
- **Quantization Tool**: [llama.cpp](https://github.com/ggerganov/llama.cpp)
- **License**: Same as original model
- **Use Cases**: Optimized for local inference, edge deployment, and resource-constrained environments
## 🎯 Recommended Usage
- **q4_k_m**: Best for most use cases, good quality/size trade-off
- **q5_k_m**: When you need higher quality and have more storage/memory
- **q8_0**: When you want minimal quality loss from the original model
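When sizing storage and memory, remember that inference also needs room for the KV cache on top of the model file. A back-of-the-envelope estimate, assuming the published Qwen2.5-1.5B architecture (28 layers, 2 KV heads with GQA, head dimension 128, f16 cache; these figures are assumptions, not read from this repository):

```python
# KV-cache memory estimate for a 4096-token context.
# Architecture figures below are assumed, not read from this repo.
n_layers, n_kv_heads, head_dim, bytes_per_val = 28, 2, 128, 2
ctx = 4096

# K and V caches, per token, across all layers
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_val
kv_total = kv_bytes_per_token * ctx

print(f"{kv_bytes_per_token} bytes/token, {kv_total / 2**20:.0f} MiB at ctx={ctx}")
```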
## ⚡ Performance Notes
GGUF models are optimized for:
- Faster loading times
- Lower memory usage
- CPU and GPU inference
- Cross-platform compatibility
For best performance, ensure your hardware supports the quantization format you choose.

---
*This model was automatically quantized using the Lemuru LLM toolkit.*


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b79041f3549472f514644dea759c4adf6395575b8d395132b659fcdee61f8f43
size 1117321376
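What the diff stores for each `.gguf` file is not the model itself but a Git LFS pointer like the one above: a `version` line, the object's SHA-256 `oid`, and its `size` in bytes. Such a pointer can be parsed in a few lines (a sketch, not the official git-lfs tooling):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:b79041f3549472f514644dea759c4adf6395575b8d395132b659fcdee61f8f43
size 1117321376"""

info = parse_lfs_pointer(pointer)
print(info["size"])                  # 1117321376
print(info["oid"].split(":", 1)[0])  # sha256
```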


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e7e8d365470337054ae7b246162a41f06e41691a3260c48372188a34cb4bf76
size 1285494944


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6fb3a1e185d855492da0daf73cd73aa76cfdf29f6efe0b3e350f6a0c56a8cec4
size 1894532768