Initialize project; model provided by the ModelHub XC community
Model: ysn-rfd/Ministral-3b-instruct-GGUF (source: original platform)
.gitattributes (vendored): new file, 39 lines
@@ -0,0 +1,39 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
ministral-3b-instruct-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
ministral-3b-instruct-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
ministral-3b-instruct-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
ministral-3b-instruct-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md: new file, 87 lines
@@ -0,0 +1,87 @@
---
base_model: ministral/Ministral-3b-instruct
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- matrixportal
inference:
  parameters:
    temperature: 1
    top_p: 0.95
    top_k: 40
    repetition_penalty: 1.2
---

# ysn-rfd/Ministral-3b-instruct-GGUF

This model was converted to GGUF format from [`ministral/Ministral-3b-instruct`](https://huggingface.co/ministral/Ministral-3b-instruct) using llama.cpp via ggml.ai's [all-gguf-same-where](https://huggingface.co/spaces/matrixportal/all-gguf-same-where) space.

Refer to the [original model card](https://huggingface.co/ministral/Ministral-3b-instruct) for more details on the model.

## ✅ Quantized Models Download List

### 🔍 Recommended Quantizations

- **✨ General CPU Use:** [`Q4_K_M`](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q4_k_m.gguf) (best balance of speed and quality)
- **📱 ARM Devices:** [`Q4_0`](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q4_0.gguf) (optimized for ARM CPUs)
- **🏆 Maximum Quality:** [`Q8_0`](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q8_0.gguf) (near-original quality)

### 📦 Full Quantization Options

| 🚀 Download | 🔢 Type | 📝 Notes |
|:---------|:-----|:------|
| [Download](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q2_k.gguf) | Q2_K | Basic quantization |
| [Download](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q3_k_s.gguf) | Q3_K_S | Small size |
| [Download](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q3_k_m.gguf) | Q3_K_M | Balanced quality |
| [Download](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q3_k_l.gguf) | Q3_K_L | Better quality |
| [Download](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q4_0.gguf) | Q4_0 | Fast on ARM |
| [Download](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q4_k_s.gguf) | Q4_K_S | Fast, recommended |
| [Download](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q4_k_m.gguf) | Q4_K_M ⭐ | Best balance |
| [Download](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q5_0.gguf) | Q5_0 | Good quality |
| [Download](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q5_k_s.gguf) | Q5_K_S | Balanced |
| [Download](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q5_k_m.gguf) | Q5_K_M | High quality |
| [Download](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q6_k.gguf) | Q6_K 🏆 | Very good quality |
| [Download](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-q8_0.gguf) | Q8_0 ⚡ | Fast, best quality |
| [Download](https://huggingface.co/ysn-rfd/Ministral-3b-instruct-GGUF/resolve/main/ministral-3b-instruct-f16.gguf) | F16 | Maximum accuracy |

💡 **Tip:** Use `F16` for maximum precision when quality is critical.
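To make the size/quality trade-off in the table concrete, here is a small sketch that converts the on-disk sizes of the four GGUF files committed here (taken from their Git LFS pointers later in this commit) to GiB, a rough lower bound on the memory needed to load each build:

```python
# Sizes in bytes, copied from the Git LFS pointer files in this commit.
SIZES_BYTES = {
    "q4_0": 1_900_016_608,
    "q4_k_m": 1_997_337_568,
    "q5_0": 2_298_082_272,
    "q8_0": 3_524_023_264,
}

def to_gib(n_bytes: int) -> float:
    """Convert a byte count to GiB (2**30 bytes)."""
    return n_bytes / 2**30

for quant, size in sorted(SIZES_BYTES.items()):
    print(f"{quant}: {to_gib(size):.2f} GiB")
```

As a rule of thumb, actual memory use is somewhat higher than the file size once the KV cache and runtime buffers are added.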
---
# 🚀 Applications and Tools for Locally Quantized LLMs

## 🖥️ Desktop Applications

| Application | Description | Download Link |
|-----------------|---------------------------------------------------------------------------------------------|--------------------------------------------------------------------|
| **Llama.cpp** | A fast and efficient inference engine for GGUF models. | [GitHub Repository](https://github.com/ggml-org/llama.cpp) |
| **Ollama** | A streamlined solution for running LLMs locally. | [Website](https://ollama.com/) |
| **AnythingLLM** | An AI-powered knowledge management tool. | [GitHub Repository](https://github.com/Mintplex-Labs/anything-llm) |
| **Open WebUI** | A user-friendly web interface for running local LLMs. | [GitHub Repository](https://github.com/open-webui/open-webui) |
| **GPT4All** | A user-friendly desktop application supporting various LLMs, compatible with GGUF models. | [GitHub Repository](https://github.com/nomic-ai/gpt4all) |
| **LM Studio** | A desktop application designed to run and manage local LLMs, supporting the GGUF format. | [Website](https://lmstudio.ai/) |
| **GPT4All Chat**| A chat application compatible with GGUF models for local, offline interactions. | [GitHub Repository](https://github.com/nomic-ai/gpt4all) |
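For Ollama (listed above), a local GGUF can be wrapped in a Modelfile. The sketch below is hypothetical: the filename comes from the download table, and the sampling values come from this card's front matter (`repeat_penalty` is Ollama's name for the repetition penalty):

```
FROM ./ministral-3b-instruct-q4_k_m.gguf
PARAMETER temperature 1
PARAMETER top_p 0.95
PARAMETER top_k 40
PARAMETER repeat_penalty 1.2
```

Build and run it with `ollama create ministral-3b -f Modelfile` followed by `ollama run ministral-3b`.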
---

## 📱 Mobile Applications

| Application | Description | Download Link |
|-------------------|--------------------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| **ChatterUI** | A simple and lightweight LLM app for mobile devices. | [GitHub Repository](https://github.com/Vali-98/ChatterUI) |
| **Maid** | Mobile Artificial Intelligence Distribution for running AI models on mobile devices. | [GitHub Repository](https://github.com/Mobile-Artificial-Intelligence/maid) |
| **PocketPal AI** | A mobile AI assistant powered by local models. | [GitHub Repository](https://github.com/a-ghorbani/pocketpal-ai) |
| **Layla** | A flexible platform for running various AI models on mobile devices. | [Website](https://www.layla-network.ai/) |
---

## 🎨 Image Generation Applications

| Application | Description | Download Link |
|-------------------------------------|-----------------------------------------------------------------------------------------|------------------------------------------------------------------------------|
| **Stable Diffusion** | An open-source AI model for generating images from text. | [GitHub Repository](https://github.com/CompVis/stable-diffusion) |
| **Stable Diffusion WebUI** | A web application providing access to Stable Diffusion models via a browser interface. | [GitHub Repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui) |
| **Local Dream** | Android Stable Diffusion with Snapdragon NPU acceleration. Also supports CPU inference. | [GitHub Repository](https://github.com/xororz/local-dream) |
| **Stable-Diffusion-Android (SDAI)** | An open-source AI art application for Android devices, enabling digital art creation. | [GitHub Repository](https://github.com/ShiftHackZ/Stable-Diffusion-Android) |

---
ministral-3b-instruct-q4_0.gguf: new file, 3 lines (LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b41357f41ac5a172c03edda0ba2b5d12a762ff7377b39fbed674d7b3b5cb5d42
size 1900016608
ministral-3b-instruct-q4_k_m.gguf: new file, 3 lines (LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:533618b6cc700ab5675dc35309df8a731904ed45c052fab3a0b970ed865a48ec
size 1997337568
ministral-3b-instruct-q5_0.gguf: new file, 3 lines (LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d678ec95bca310c32c732bf46b7b99ef7819fbb28d9cc5a69dccfe9656857635
size 2298082272
ministral-3b-instruct-q8_0.gguf: new file, 3 lines (LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:af6ea1b56fae0d64dfd6f3d97e07171987284fc25c8e88e35425d05805bb5187
size 3524023264
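Each `.gguf` file in this commit is stored as a Git LFS pointer like the ones above, not as the binary itself. A minimal sketch of parsing that three-line `key value` pointer format, using the q8_0 pointer from this commit as input:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The q8_0 pointer committed above, verbatim.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:af6ea1b56fae0d64dfd6f3d97e07171987284fc25c8e88e35425d05805bb5187
size 3524023264
"""

info = parse_lfs_pointer(pointer)
print(info["oid"], int(info["size"]))
```

The `oid` field carries the SHA-256 of the real file, so it can be used to verify a download after fetching the blob from the LFS server.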