<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
*This model was released on 2024-04-11 and added to Hugging Face Transformers on 2024-04-10.*

# RecurrentGemma

<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

## Overview

The Recurrent Gemma model was proposed in [RecurrentGemma: Moving Past Transformers for Efficient Open Language Models](https://huggingface.co/papers/2404.07839) by the Griffin, RLHF and Gemma Teams of Google.

The abstract from the paper is the following:

*We introduce RecurrentGemma, an open language model which uses Google’s novel Griffin architecture. Griffin combines linear recurrences with local attention to achieve excellent performance on language. It has a fixed-sized state, which reduces memory use and enables efficient inference on long sequences. We provide a pre-trained model with 2B non-embedding parameters, and an instruction tuned variant. Both models achieve comparable performance to Gemma-2B despite being trained on fewer tokens.*

Tips:

- The original checkpoints can be converted using the conversion script [`src/transformers/models/recurrent_gemma/convert_recurrent_gemma_to_hf.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/recurrent_gemma/convert_recurrent_gemma_to_hf.py).

This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/google-deepmind/recurrentgemma).
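
A minimal generation sketch, assuming a converted checkpoint is available on the Hub as `google/recurrentgemma-2b` (the checkpoint id is an assumption; substitute whichever checkpoint you converted locally):

```python
# Sketch: load a RecurrentGemma checkpoint and generate text.
# The Hub id "google/recurrentgemma-2b" is an assumption here.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/recurrentgemma-2b")

inputs = tokenizer("Recurrent architectures are efficient because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```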
## RecurrentGemmaConfig

[[autodoc]] RecurrentGemmaConfig
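
As a quick sketch of the usual config-to-model flow, a model can be built from a default configuration:

```python
from transformers import RecurrentGemmaConfig, RecurrentGemmaModel

# Initialize a configuration with the library's default values
configuration = RecurrentGemmaConfig()

# Build a model (with random weights) from that configuration
model = RecurrentGemmaModel(configuration)

# The configuration remains accessible on the model afterwards
configuration = model.config
```

Instantiating the model from a config initializes random weights; use `from_pretrained` to load trained ones.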
## RecurrentGemmaModel

[[autodoc]] RecurrentGemmaModel
    - forward
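
A bare forward-pass sketch through the base model to inspect hidden states; the checkpoint id is again an assumption:

```python
import torch
from transformers import AutoTokenizer, RecurrentGemmaModel

tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b")
model = RecurrentGemmaModel.from_pretrained("google/recurrentgemma-2b")

inputs = tokenizer("Hello, RecurrentGemma!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Final hidden states with shape (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```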
## RecurrentGemmaForCausalLM

[[autodoc]] RecurrentGemmaForCausalLM
    - forward
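
And a short sketch of calling `forward` with labels to obtain the causal language-modeling loss (same checkpoint assumption as above):

```python
from transformers import AutoTokenizer, RecurrentGemmaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b")
model = RecurrentGemmaForCausalLM.from_pretrained("google/recurrentgemma-2b")

inputs = tokenizer("Griffin mixes linear recurrences with local attention.", return_tensors="pt")

# Using the inputs as their own labels yields the shifted next-token loss
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)
```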