From 3bc7339376f8fc65eebac065859782f24e9a5f27 Mon Sep 17 00:00:00 2001
From: Richard Erkhov
Date: Wed, 30 Oct 2024 00:32:16 +0000
Subject: [PATCH] uploaded readme

---
 README.md | 77 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 77 insertions(+)
 create mode 100644 README.md

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..7cdfde0
--- /dev/null
+++ b/README.md
@@ -0,0 +1,77 @@
+Quantization made by Richard Erkhov.
+
+[Github](https://github.com/RichardErkhov)
+
+[Discord](https://discord.gg/pvy7H8DZMG)
+
+[Request more models](https://github.com/RichardErkhov/quant_request)
+
+gemma-2b-mt-German-to-English - GGUF
+- Model creator: https://huggingface.co/Samvardhan777/
+- Original model: https://huggingface.co/Samvardhan777/gemma-2b-mt-German-to-English/
+
+| Name | Quant method | Size |
+| ---- | ---- | ---- |
+| [gemma-2b-mt-German-to-English.Q2_K.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q2_K.gguf) | Q2_K | 1.08GB |
+| [gemma-2b-mt-German-to-English.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q3_K_S.gguf) | Q3_K_S | 1.2GB |
+| [gemma-2b-mt-German-to-English.Q3_K.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q3_K.gguf) | Q3_K | 1.29GB |
+| [gemma-2b-mt-German-to-English.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q3_K_M.gguf) | Q3_K_M | 1.29GB |
+| [gemma-2b-mt-German-to-English.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q3_K_L.gguf) | Q3_K_L | 1.36GB |
+| [gemma-2b-mt-German-to-English.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.IQ4_XS.gguf) | IQ4_XS | 1.4GB |
+| [gemma-2b-mt-German-to-English.Q4_0.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q4_0.gguf) | Q4_0 | 1.44GB |
+| [gemma-2b-mt-German-to-English.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.IQ4_NL.gguf) | IQ4_NL | 1.45GB |
+| [gemma-2b-mt-German-to-English.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q4_K_S.gguf) | Q4_K_S | 1.45GB |
+| [gemma-2b-mt-German-to-English.Q4_K.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q4_K.gguf) | Q4_K | 1.52GB |
+| [gemma-2b-mt-German-to-English.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q4_K_M.gguf) | Q4_K_M | 1.52GB |
+| [gemma-2b-mt-German-to-English.Q4_1.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q4_1.gguf) | Q4_1 | 1.56GB |
+| [gemma-2b-mt-German-to-English.Q5_0.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q5_0.gguf) | Q5_0 | 1.68GB |
+| [gemma-2b-mt-German-to-English.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q5_K_S.gguf) | Q5_K_S | 1.68GB |
+| [gemma-2b-mt-German-to-English.Q5_K.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q5_K.gguf) | Q5_K | 1.71GB |
+| [gemma-2b-mt-German-to-English.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q5_K_M.gguf) | Q5_K_M | 1.71GB |
+| [gemma-2b-mt-German-to-English.Q5_1.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q5_1.gguf) | Q5_1 | 1.79GB |
+| [gemma-2b-mt-German-to-English.Q6_K.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q6_K.gguf) | Q6_K | 1.92GB |
+| [gemma-2b-mt-German-to-English.Q8_0.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q8_0.gguf) | Q8_0 | 2.49GB |
+
+Original model description:
+---
+license: mit
+language:
+- de
+- en
+pipeline_tag: translation
+tags:
+- text-generation-inference
+---
+
+# Description
+
+## Gemma 2B German to English v0.1 Alpha [Experimental Release]
+This is a German instruction-finetuned version of Google's Gemma 2B model. It is an experiment to see whether Gemma can translate German to English by expanding its vocabulary. While the responses may be rough at times, the model shows a lot of promise for a 2B-parameter model.
+
+---
+# Model description 🗄️:
+- Model type: A 2B-parameter GPT-like model finetuned on 100,000 samples consisting of an equal proportion of English and German samples.
+- Language(s): Bilingual. English and German.
+- License: Google Gemma Terms of Use
+- Finetuned from model: Samvardhan777/gemma-2b-mt-German-to-English
+- Training precision: bfloat16
+- Training hardware: Free Google Colab
+- Dataset: kaitchup/opus-German-to-English
+---
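The GGUF files in the table above can be run locally with llama.cpp bindings. Below is a minimal sketch using `llama-cpp-python`, assuming the Q4_K_M file has been downloaded to the working directory and that the finetune uses Gemma's standard `<start_of_turn>` chat template (the model card does not state the prompt format, so that is an assumption):

```python
import os

# Any quant from the table works; Q4_K_M is a common size/quality trade-off.
MODEL_PATH = "gemma-2b-mt-German-to-English.Q4_K_M.gguf"


def gemma_prompt(user_text: str) -> str:
    # Standard Gemma instruction-tuned chat format; the finetune's exact
    # prompt template is an assumption here.
    return (
        "<start_of_turn>user\n"
        f"{user_text}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def translate(german_text: str, model_path: str = MODEL_PATH) -> str:
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(model_path=model_path, n_ctx=2048, verbose=False)
    out = llm(
        gemma_prompt(f"Translate to English: {german_text}"),
        max_tokens=128,
        stop=["<end_of_turn>"],
    )
    return out["choices"][0]["text"].strip()


if os.path.exists(MODEL_PATH):
    print(translate("Guten Morgen, wie geht es dir?"))
```

A larger quant (Q6_K, Q8_0) should give slightly better translations at the cost of memory; the smaller Q2_K/Q3_K files fit tighter hardware but degrade quality noticeably at this parameter count.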