| base_model | pipeline_tag | quantized_by |
|---|---|---|
| google/gemma-1.1-2b-it | text-generation | featherless-ai-quants |
# google/gemma-1.1-2b-it GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
## Available Quantizations 📊
| Quantization Type | File | Size |
|---|---|---|
| IQ4_XS | google-gemma-1.1-2b-it-IQ4_XS.gguf | 1431.67 MB |
| Q2_K | google-gemma-1.1-2b-it-Q2_K.gguf | 1104.28 MB |
| Q3_K_L | google-gemma-1.1-2b-it-Q3_K_L.gguf | 1397.70 MB |
| Q3_K_M | google-gemma-1.1-2b-it-Q3_K_M.gguf | 1319.70 MB |
| Q3_K_S | google-gemma-1.1-2b-it-Q3_K_S.gguf | 1228.31 MB |
| Q4_K_M | google-gemma-1.1-2b-it-Q4_K_M.gguf | 1554.74 MB |
| Q4_K_S | google-gemma-1.1-2b-it-Q4_K_S.gguf | 1487.58 MB |
| Q5_K_M | google-gemma-1.1-2b-it-Q5_K_M.gguf | 1754.43 MB |
| Q5_K_S | google-gemma-1.1-2b-it-Q5_K_S.gguf | 1715.58 MB |
| Q6_K | google-gemma-1.1-2b-it-Q6_K.gguf | 1966.60 MB |
| Q8_0 | google-gemma-1.1-2b-it-Q8_0.gguf | 2545.42 MB |
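As a quick usage sketch (not an official snippet from this card), any of the files above can be fetched with `huggingface_hub` and run locally with `llama-cpp-python`. The repo id below is an assumption inferred from the file naming; substitute the actual repository id. Q4_K_M is picked here only as a common size/quality trade-off.

```python
# Minimal sketch: download one quantization and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="featherless-ai-quants/google-gemma-1.1-2b-it-GGUF",  # assumed repo id
    filename="google-gemma-1.1-2b-it-Q4_K_M.gguf",  # any file from the table above works
)

# n_ctx sets the context window; tune it to your available RAM.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF quantization in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```

Smaller quantizations (Q2_K, Q3_K_S) trade output quality for lower memory use, while Q6_K and Q8_0 stay closest to the original weights at roughly twice the footprint.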
## ⚡ Powered by Featherless AI

### Key Features
- 🔥 Instant Hosting - Deploy any Llama model on HuggingFace instantly
- 🛠️ Zero Infrastructure - No server setup or maintenance required
- 📚 Vast Compatibility - Support for 2400+ models and counting
- 💎 Affordable Pricing - Starting at just $10/month
**Links:** Get Started | Documentation | Models
