ModelHub XC 2ab308511e Initial commit; model provided by the ModelHub XC community
Model: sayhan/Trendyol-LLM-7b-base-v0.1-GGUF
Source: Original Platform
2026-04-23 12:20:58 +08:00


base_model: Trendyol/Trendyol-LLM-7b-base-v0.1
language: tr, en
pipeline_tag: text-generation
license: apache-2.0
model_type: llama
library_name: transformers
inference: false


Trendyol LLM 7b base v0.1

Description

This repo contains GGUF-format model files for Trendyol's Trendyol LLM 7b base v0.1.
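Files in a Hugging Face repo can be fetched directly from the Hub's `resolve` endpoint. As a minimal sketch, the snippet below builds the download URL for one quantized file; the `.gguf` filename is an assumption based on common GGUF naming conventions, so check the repo's file list for the exact names before downloading.

```python
# Build the direct download URL for one GGUF file in this repo.
REPO_ID = "sayhan/Trendyol-LLM-7b-base-v0.1-GGUF"
# Assumed filename; verify against the actual file list in the repo.
FILENAME = "trendyol-llm-7b-base-v0.1.Q4_K_M.gguf"

def hf_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """The Hub's 'resolve' endpoint serves raw files from a repo revision."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = hf_resolve_url(REPO_ID, FILENAME)
print(url)
```

The resulting URL can be passed to `curl -L` or `wget`, and the downloaded `.gguf` file can then be loaded by any GGUF-compatible runtime such as llama.cpp.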

Quantization methods

| Quantization method | Bits | Size | Use case |
| --- | --- | --- | --- |
| Q2_K | 2 | 2.59 GB | smallest, significant quality loss - not recommended for most purposes |
| Q3_K_S | 3 | 3.01 GB | very small, high quality loss |
| Q3_K_M | 3 | 3.36 GB | very small, high quality loss |
| Q3_K_L | 3 | 3.66 GB | small, substantial quality loss |
| Q4_0 | 4 | 3.9 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Q4_K_M | 4 | 4.15 GB | medium, balanced quality - recommended |
| Q5_0 | 5 | 4.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Q5_K_S | 5 | 4.73 GB | large, low quality loss - recommended |
| Q5_K_M | 5 | 4.86 GB | large, very low quality loss - recommended |
| Q6_K | 6 | 5.61 GB | very large, extremely low quality loss |
| Q8_0 | 8 | 13.7 GB | very large, extremely low quality loss - not recommended |