Initialize the project; model provided by the ModelHub XC community.
Model: ThijsL202/CraneAILabs_swahili-gemma-1b-GGUF (Source: Original Platform)
README.md (new file, 56 lines)
---
base_model: CraneAILabs/swahili-gemma-1b
language:
- en
library_name: transformers
pipeline_tag: text-generation
quantized_by: ThijsL202
tags:
- gguf
- quantized
- llama-cpp
---
# CraneAILabs_swahili-gemma-1b - GGUF Standard

## 📊 Quantization Details

- **Base Model**: [CraneAILabs/swahili-gemma-1b](https://huggingface.co/CraneAILabs/swahili-gemma-1b)
- **Quantization**: Standard
- **Total Size**: 4.03 GB (5 files)

### 📦 Standard Quantizations

Classic GGUF quantizations, without an importance-matrix (imatrix) enhancement.

## 📁 Available Files

| Quantization | Size | Download |
|:------------|-----:|:---------:|
| **Q2_K** | 690MB | [⬇️](https://huggingface.co/ThijsL202/CraneAILabs_swahili-gemma-1b-GGUF/resolve/main/CraneAILabs_swahili-gemma-1b.Q2_K.gguf) |
| **Q3_K_L** | 752MB | [⬇️](https://huggingface.co/ThijsL202/CraneAILabs_swahili-gemma-1b-GGUF/resolve/main/CraneAILabs_swahili-gemma-1b.Q3_K_L.gguf) |
| **Q4_K_M** | 806MB | [⬇️](https://huggingface.co/ThijsL202/CraneAILabs_swahili-gemma-1b-GGUF/resolve/main/CraneAILabs_swahili-gemma-1b.Q4_K_M.gguf) |
| **Q6_K** | 1.0GB | [⬇️](https://huggingface.co/ThijsL202/CraneAILabs_swahili-gemma-1b-GGUF/resolve/main/CraneAILabs_swahili-gemma-1b.Q6_K.gguf) |
| **Q8_0** | 1.1GB | [⬇️](https://huggingface.co/ThijsL202/CraneAILabs_swahili-gemma-1b-GGUF/resolve/main/CraneAILabs_swahili-gemma-1b.Q8_0.gguf) |
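All of the download links above follow the standard Hugging Face `resolve/main` URL pattern. As a minimal sketch, the snippet below rebuilds those URLs from a quantization name and picks the largest file that fits a given disk/RAM budget; the helper names `gguf_url` and `largest_quant_under` are illustrative, not part of any library, and the sizes are the approximate values from the table.

```python
# Sketch: rebuild the direct-download URLs from the table above and pick a
# quantization by size budget. Helper names are illustrative only.

REPO_ID = "ThijsL202/CraneAILabs_swahili-gemma-1b-GGUF"

def gguf_url(quant: str) -> str:
    """Return the Hugging Face resolve URL for one quantization, e.g. 'Q4_K_M'."""
    filename = f"CraneAILabs_swahili-gemma-1b.{quant}.gguf"
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{filename}"

# Approximate on-disk sizes from the table, in megabytes.
SIZES_MB = {"Q2_K": 690, "Q3_K_L": 752, "Q4_K_M": 806, "Q6_K": 1000, "Q8_0": 1100}

def largest_quant_under(budget_mb: int) -> str:
    """Pick the highest-quality (largest) listed file that fits the budget."""
    fitting = {q: s for q, s in SIZES_MB.items() if s <= budget_mb}
    return max(fitting, key=fitting.get)
```

For example, with roughly 900 MB free, `largest_quant_under(900)` selects `Q4_K_M`.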

## 🚀 Quick Start

### llama.cpp

```bash
./llama-cli -m CraneAILabs_swahili-gemma-1b.Q6_K.gguf -p "Your prompt here" -n 512
```

### Python

```python
from llama_cpp import Llama

llm = Llama(model_path="./CraneAILabs_swahili-gemma-1b.Q6_K.gguf", n_ctx=2048)
output = llm("Your prompt here", max_tokens=512)
print(output["choices"][0]["text"])
```
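Calling `llm(...)` with a raw string, as above, does not apply any chat template. Since this is a Gemma-family model, instruction-style prompts typically work better wrapped in Gemma's turn markers (alternatively, llama-cpp-python's `create_chat_completion()` can apply the template embedded in the GGUF). A minimal sketch, assuming the standard Gemma template; the helper name `gemma_prompt` is illustrative:

```python
# Sketch: wrap a user message in Gemma's chat-turn format for use as a raw
# prompt. Assumes the standard Gemma template (<start_of_turn>/<end_of_turn>);
# the helper name is illustrative, not part of llama-cpp-python.

def gemma_prompt(user_message: str) -> str:
    """Format one user turn and open the model's turn."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = gemma_prompt("Habari! Unaweza kunisaidia?")
```

With the `llm` object from the snippet above, you would then call something like `llm(prompt, max_tokens=512, stop=["<end_of_turn>"])`.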

## 📊 Model Information

Original model: [CraneAILabs/swahili-gemma-1b](https://huggingface.co/CraneAILabs/swahili-gemma-1b)

---
*Quantized using [llama.cpp](https://github.com/ggml-org/llama.cpp)*