From 7cf8bdc81343c635390dd1e1bf590ab22dd6f366 Mon Sep 17 00:00:00 2001
From: Thien Tran
Date: Sun, 6 Apr 2025 03:06:57 +0000
Subject: [PATCH] Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index d8f807f..f5e5fae 100644
--- a/README.md
+++ b/README.md
@@ -14,6 +14,8 @@ base_model: google/gemma-3-27b-it
 
 This is the QAT INT4 Flax checkpoint (from Kaggle) converted to HF+AWQ format for ease of use. AWQ was NOT used for quantization. You can find the conversion script `convert_flax.py` in this model repo.
 
+NOTE: this is NOT the same as the official QAT INT4 GGUFs released here https://huggingface.co/collections/google/gemma-3-qat-67ee61ccacbf2be4195c265b
+
 Below is the original Model card from https://huggingface.co/google/gemma-3-27b-it
 
 # Gemma 3 model card