From 70fa5997afc42807f41eebd5d481f040556fdf97 Mon Sep 17 00:00:00 2001
From: Saurav Muralidharan
Date: Tue, 20 Aug 2024 11:03:27 -0700
Subject: [PATCH] Update installation instructions

---
 README.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index f8d433b..2f74b0a 100644
--- a/README.md
+++ b/README.md
@@ -48,11 +48,10 @@ It also uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE)
 
 ## Usage
 
-The [pull request](https://github.com/huggingface/transformers/pull/32495) to support this model in Hugging Face Transformers is under review and is expected to be merged soon. In the meantime, please follow the installation instructions below:
+Support for this model will be added in the upcoming `transformers` release. In the meantime, please install the library from source:
 
 ```
-$ git clone -b aot/head_dim_rope --single-branch https://github.com/suiyoubi/transformers.git && cd transformers
-$ pip install -e .
+pip install git+https://github.com/huggingface/transformers
 ```
 
 The following code provides an example of how to load the Minitron-8B model and use it to perform text generation.