This is the QAT INT4 Flax checkpoint (from Kaggle) converted to the HF+AWQ format for ease of use. Note that AWQ itself was NOT used for quantization; the INT4 weights come from Google's quantization-aware training. The conversion script, `convert_flax.py`, is included in this model repo.
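To give a feel for what a Flax-to-AWQ-format conversion involves, here is a minimal sketch of nibble packing, the step where pairs of 4-bit weight values are packed into single bytes. This is a generic low-nibble-first scheme for illustration only: the real AWQ packed layout is interleaved and more involved, and the exact code in `convert_flax.py` is not reproduced here. The function names `pack_int4`/`unpack_int4` are hypothetical.

```python
import numpy as np

def pack_int4(values: np.ndarray) -> np.ndarray:
    """Pack pairs of unsigned 4-bit values (0..15) into single bytes,
    low nibble first. Illustrative only -- AWQ's real layout differs."""
    assert values.ndim == 1 and values.size % 2 == 0
    lo = values[0::2] & 0x0F   # even-indexed values -> low nibble
    hi = values[1::2] & 0x0F   # odd-indexed values -> high nibble
    return (lo | (hi << 4)).astype(np.uint8)

def unpack_int4(packed: np.ndarray) -> np.ndarray:
    """Inverse of pack_int4: recover the original 4-bit values."""
    out = np.empty(packed.size * 2, dtype=np.uint8)
    out[0::2] = packed & 0x0F
    out[1::2] = packed >> 4
    return out
```

Packing halves the storage of the weight tensors; the quantization scales and zero points are stored alongside them so the loader can dequantize at inference time.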
NOTE: this is NOT the same as the official QAT INT4 GGUFs released here: https://huggingface.co/collections/google/gemma-3-qat-67ee61ccacbf2be4195c265b
Below is the original model card from https://huggingface.co/google/gemma-3-27b-it
# Gemma 3 model card