From 1e2c4cd5581225c14607619ddeb0ad80aceaaccd Mon Sep 17 00:00:00 2001
From: huangjintao
Date: Wed, 18 Sep 2024 15:54:32 +0000
Subject: [PATCH] Update README.md

---
 README.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 50c0ec6..9637118 100644
--- a/README.md
+++ b/README.md
@@ -49,8 +49,8 @@ Also check out our [AWQ documentation](https://qwen.readthedocs.io/en/latest/qua
 Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents.
 
 ```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-model_name = "Qwen/Qwen2.5-14B-Instruct-AWQ"
+from modelscope import AutoModelForCausalLM, AutoTokenizer
+model_name = "qwen/Qwen2.5-14B-Instruct-AWQ"
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
     torch_dtype="auto",
@@ -76,6 +76,7 @@ generated_ids = [
     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
 ]
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+print(response)
 ```
 
 ### Processing Long Texts
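For reference, below is a minimal sketch of how the README snippet could read end-to-end once this patch is applied. Only the lines visible in the two hunks above are taken from the patch; the `device_map` argument, tokenizer setup, the example `messages`, the `apply_chat_template` call, and the `generate` arguments (e.g. `max_new_tokens=512`) are assumptions added to make the sketch self-contained, not content confirmed by the patch.

```python
# Sketch of the patched snippet: load via modelscope and run one chat turn.
from modelscope import AutoModelForCausalLM, AutoTokenizer

model_name = "qwen/Qwen2.5-14B-Instruct-AWQ"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",  # assumption: not shown in the hunk context
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Assumed example prompt; any chat-formatted messages would work here.
messages = [
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
# Strip the prompt tokens so only the newly generated continuation remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```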