This model was released on 2022-05-02 and added to Hugging Face Transformers on 2022-05-12.
OPT
OPT is a suite of open-source, decoder-only pre-trained transformers ranging from 125M to 175B parameters. OPT models are designed for causal language modeling and aim to enable responsible and reproducible research at scale. OPT-175B is comparable in performance to GPT-3 while requiring only 1/7th the carbon footprint to develop.
You can find all the original OPT checkpoints under the OPT collection.
Tip
This model was contributed by ArthurZ, ybelkada, and patrickvonplaten.
Click on the OPT models in the right sidebar for more examples of how to apply OPT to different language tasks.
The example below demonstrates how to generate text with [Pipeline], [AutoModel], and from the command line.
import torch
from transformers import pipeline
pipeline = pipeline(task="text-generation", model="facebook/opt-125m", dtype=torch.float16, device=0)
pipeline("Once upon a time, in a land far, far away,", max_length=50, num_return_sequences=1)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", dtype=torch.float16, device_map="auto", attn_implementation="sdpa")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
prompt = "Once upon a time, in a land far, far away, "
model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
tokenizer.batch_decode(generated_ids)[0]
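To generate for several prompts at once, a batched variant of the example above works as well. Decoder-only models like OPT generate left to right, so this sketch (which reuses the model loaded above) reloads the tokenizer with left padding.

prompts = ["Once upon a time, in a land far, far away, ", "Plants create energy through"]
# pad on the left so generation continues directly from the prompt tokens
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m", padding_side="left")
model_inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)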
echo -e "Plants create energy through a process known as" | transformers run --task text-generation --model facebook/opt-125m --device 0
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses bitsandbytes to quantize the weights to 8-bits.
import torch
from transformers import BitsAndBytesConfig, AutoTokenizer, AutoModelForCausalLM

bnb_config = BitsAndBytesConfig(load_in_8bit=True)
# quantized weights are placed on devices automatically via device_map;
# calling .to(device) on an 8-bit model raises an error
model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", dtype=torch.float16, attn_implementation="sdpa", device_map="auto", quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b")
prompt = "Once upon a time, in a land far, far away, "
model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
tokenizer.batch_decode(generated_ids)[0]
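bitsandbytes also supports 4-bit quantization for an even smaller memory footprint. The snippet below is a sketch of a 4-bit NF4 variant of the same example, not part of the original.

# 4-bit NF4 quantization; compute still runs in float16
bnb_config_4bit = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", attn_implementation="sdpa", device_map="auto", quantization_config=bnb_config_4bit)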
Notes
- OPT adds an EOS token </s> to the beginning of every prompt.
- The head_mask argument is ignored if the attention implementation isn't "eager". Set attn_implementation="eager" to enable the head_mask, as shown in the sketch after this list.
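A minimal sketch illustrating both notes with the facebook/opt-125m checkpoint: the tokenizer prepends the </s> token (id 2) to the prompt, and head_mask only takes effect under the eager attention implementation.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
inputs = tokenizer("Hello, world!", return_tensors="pt")
# the first input id is 2, the </s> token OPT prepends to every prompt
print(tokenizer.decode(inputs.input_ids[0]))  # '</s>Hello, world!'

# head_mask has shape (num_layers, num_heads) and is only honored with attn_implementation="eager"
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m", attn_implementation="eager")
head_mask = torch.ones(model.config.num_hidden_layers, model.config.num_attention_heads)
outputs = model(**inputs, head_mask=head_mask)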
Resources
- Refer to this notebook for an example of fine-tuning OPT with PEFT, bitsandbytes, and Transformers.
- The How 🤗 Accelerate runs very large models thanks to PyTorch blog post demonstrates how to run OPT for inference.
OPTConfig
autodoc OPTConfig
OPTModel
autodoc OPTModel - forward
OPTForCausalLM
autodoc OPTForCausalLM - forward
OPTForSequenceClassification
autodoc OPTForSequenceClassification - forward
OPTForQuestionAnswering
autodoc OPTForQuestionAnswering - forward