<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

*This model was released on 2021-03-21 and added to Hugging Face Transformers on 2021-03-30.*

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
        <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
    </div>
</div>
## GPT-Neo

[GPT-Neo](https://zenodo.org/records/5297715) is an open-source alternative to GPT-2 and GPT-3, built with Mesh TensorFlow for TPUs. GPT-Neo uses local attention in every other layer for more efficiency. It is trained on the [Pile](https://huggingface.co/datasets/EleutherAI/pile), a diverse dataset consisting of 22 smaller high-quality datasets. The original GitHub repository can be found [here](https://github.com/EleutherAI/gpt-neo/tree/v1.1).
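
The global/local alternation is encoded directly in [`GPTNeoConfig`] through its `attention_types` parameter. Below is a minimal sketch with made-up toy sizes (the real checkpoints are far larger); only `attention_types` and `window_size` relate to the alternating-attention scheme.

```py
from transformers import GPTNeoConfig, GPTNeoModel

# hypothetical toy config: the ["global", "local"] pattern repeated 2 times -> 4 layers
config = GPTNeoConfig(
    num_layers=4,
    attention_types=[[["global", "local"], 2]],
    hidden_size=64,
    num_heads=4,
    window_size=256,  # span of the local attention layers
)
model = GPTNeoModel(config)
print(config.attention_layers)  # ['global', 'local', 'global', 'local']
```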

You can find all the original GPT-Neo checkpoints under the [EleutherAI](https://huggingface.co/EleutherAI?search_models=gpt-neo) organization.
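
The checkpoints can also be listed programmatically. A small sketch using `huggingface_hub` (an addition for illustration, not part of the original page):

```py
from huggingface_hub import list_models

# list every model under the EleutherAI org whose name matches "gpt-neo"
for checkpoint in list_models(author="EleutherAI", search="gpt-neo"):
    print(checkpoint.id)
```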

> [!TIP]
> Click on the GPT-Neo models in the right sidebar for more examples of how to apply GPT-Neo to different language tasks.

The example below demonstrates how to generate text with [`Pipeline`], [`AutoModel`], and from the command line.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

# fp16 text-generation pipeline on the first GPU
pipeline = pipeline(task="text-generation", model="EleutherAI/gpt-neo-1.3B", dtype=torch.float16, device=0)
pipeline("Hello, I'm a language model")
```

</hfoption>
<hfoption id="AutoModel">

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# load in fp16 with FlashAttention-2 enabled
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B", dtype=torch.float16, device_map="auto", attn_implementation="flash_attention_2")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")

input_ids = tokenizer("Hello, I'm a language model", return_tensors="pt").to(model.device)

output = model.generate(**input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

</hfoption>
<hfoption id="transformers CLI">

```bash
echo -e "Hello, I'm a language model" | transformers run --task text-generation --model EleutherAI/gpt-neo-1.3B --device 0
```

</hfoption>
</hfoptions>

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.

The example below uses [bitsandbytes](../quantization/bitsandbytes) to quantize only the weights to 4-bits.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and fp16 compute
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
    bnb_4bit_use_double_quant=True
)

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neo-2.7B",
    quantization_config=quantization_config,
    device_map="auto"
)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
inputs = tokenizer("Hello, I'm a language model", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Notes

- Pad inputs on the right because GPT-Neo uses absolute position embeddings, as in the sketch below.
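
  A minimal sketch of right padding for a batch of prompts; the prompts are placeholders, and reusing the EOS token as the pad token is an assumption since GPT-Neo does not define a pad token by default.

  ```py
  from transformers import AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
  tokenizer.pad_token = tokenizer.eos_token  # assumption: GPT-Neo has no pad token, reuse EOS
  tokenizer.padding_side = "right"           # pad on the right for absolute position embeddings

  # shorter prompts in the batch are padded on the right
  inputs = tokenizer(["Hello, I'm a language model", "Hi"], padding=True, return_tensors="pt")
  ```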

## GPTNeoConfig

[[autodoc]] GPTNeoConfig

## GPTNeoModel

[[autodoc]] GPTNeoModel
    - forward

## GPTNeoForCausalLM

[[autodoc]] GPTNeoForCausalLM
    - forward

## GPTNeoForQuestionAnswering

[[autodoc]] GPTNeoForQuestionAnswering
    - forward

## GPTNeoForSequenceClassification

[[autodoc]] GPTNeoForSequenceClassification
    - forward

## GPTNeoForTokenClassification

[[autodoc]] GPTNeoForTokenClassification
    - forward