---
base_model: xw17/TinyLlama-1.1B-Chat-v1.0_finetuned_s01_3
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---

# xw17/TinyLlama-1.1B-Chat-v1.0_finetuned_s01_3 GGUF Quantizations 🚀

**Featherless AI Quants**

*Optimized GGUF quantization files for enhanced model performance*

Powered by Featherless AI - run any model you'd like for a simple, small fee.


## Available Quantizations 📊

| Quantization Type | File | Size |
|---|---|---|
| IQ4_XS | xw17-TinyLlama-1.1B-Chat-v1.0_finetuned_s01_3-IQ4_XS.gguf | 581.56 MB |
| Q2_K | xw17-TinyLlama-1.1B-Chat-v1.0_finetuned_s01_3-Q2_K.gguf | 412.11 MB |
| Q3_K_L | xw17-TinyLlama-1.1B-Chat-v1.0_finetuned_s01_3-Q3_K_L.gguf | 564.12 MB |
| Q3_K_M | xw17-TinyLlama-1.1B-Chat-v1.0_finetuned_s01_3-Q3_K_M.gguf | 523.00 MB |
| Q3_K_S | xw17-TinyLlama-1.1B-Chat-v1.0_finetuned_s01_3-Q3_K_S.gguf | 476.21 MB |
| Q4_K_M | xw17-TinyLlama-1.1B-Chat-v1.0_finetuned_s01_3-Q4_K_M.gguf | 636.88 MB |
| Q4_K_S | xw17-TinyLlama-1.1B-Chat-v1.0_finetuned_s01_3-Q4_K_S.gguf | 610.23 MB |
| Q5_K_M | xw17-TinyLlama-1.1B-Chat-v1.0_finetuned_s01_3-Q5_K_M.gguf | 745.82 MB |
| Q5_K_S | xw17-TinyLlama-1.1B-Chat-v1.0_finetuned_s01_3-Q5_K_S.gguf | 730.54 MB |
| Q6_K | xw17-TinyLlama-1.1B-Chat-v1.0_finetuned_s01_3-Q6_K.gguf | 861.56 MB |
| Q8_0 | xw17-TinyLlama-1.1B-Chat-v1.0_finetuned_s01_3-Q8_0.gguf | 1115.62 MB |

## Powered by Featherless AI

### Key Features

- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month

**Links:** Get Started | Documentation | Models