# Qwen

This document describes how to build and run the Qwen model with Kunlunxin XTRT-LLM on a single XPU and on a single node with multiple XPUs.
## Overview

The XTRT-LLM Qwen example code lives in the folder `examples/qwen`, which contains one main file:

* [`build.py`](./build.py) builds the XTRT-LLM engine(s) needed to run the Qwen model.

In addition, two shared files in the parent folder [`examples`](../) can be used for inference and evaluation:

* [`../run.py`](../run.py) runs inference on a given input text.
* [`../summarize.py`](../summarize.py) uses the model to summarize articles from the [cnn_dailymail](https://huggingface.co/datasets/cnn_dailymail) dataset.
## Support Matrix

* FP16
* INT8 weight-only
* Tensor parallelism
## Usage

The XTRT-LLM Qwen example code is located in [qwen](./). It takes HF weights as input and builds the corresponding XTRT engines. The number of XTRT engines depends on the number of XPUs used to run inference.
### Build XTRT engines

First prepare the HF Qwen checkpoint by following the instructions for [Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) or [Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat).

Create a `downloads` directory to store the weights downloaded from the Hugging Face community.
```bash
mkdir -p ./downloads
```
Store Qwen-7B-Chat and Qwen-14B-Chat separately.

- Store Qwen-7B-Chat

```bash
mv Qwen-7B-Chat ./downloads/qwen-7b/
```

- Store Qwen-14B-Chat

```bash
mv Qwen-14B-Chat ./downloads/qwen-14b/
```
XTRT-LLM builds the XTRT engines from the HF checkpoint.

`build.py` normally needs only a single XPU. However, if you already have all the XPUs required for inference, you can add the `--parallel_build` argument to build the engines in parallel and speed up the build process (see the sketch after the examples below). Note that parallel building currently supports a single node only.

**Note: run `pip install transformers-stream-generator` at the build stage.**

Here are some examples:
```bash
# Build a single-XPU float16 engine from HF weights.
# use_gpt_attention_plugin is necessary for Qwen.
# Try use_gemm_plugin to prevent accuracy issues.
# It is recommended to use --use_gpt_attention_plugin for better performance.

# Build the Qwen 7B model using a single XPU and FP16.
python build.py --hf_model_dir ./downloads/qwen-7b \
                --dtype float16 \
                --use_gpt_attention_plugin float16 \
                --output_dir ./downloads/qwen-7b/trt_engines/fp16/1-XPU/

# Build the Qwen 7B model using a single XPU and apply INT8 weight-only quantization.
python build.py --hf_model_dir ./downloads/qwen-7b/ \
                --dtype float16 \
                --use_gpt_attention_plugin float16 \
                --use_weight_only \
                --weight_only_precision int8 \
                --output_dir ./downloads/qwen-7b/trt_engines/int8_weight_only/1-XPU/

# Build Qwen 7B using 2-way tensor parallelism.
python build.py --hf_model_dir ./downloads/qwen-7b/ \
                --dtype float16 \
                --use_gpt_attention_plugin float16 \
                --output_dir ./downloads/qwen-7b/trt_engines/fp16/2-XPU/ \
                --world_size 2 \
                --tp_size 2

# Build Qwen 14B using 2-way tensor parallelism.
python build.py --hf_model_dir ./downloads/qwen-14b/ \
                --dtype float16 \
                --use_gpt_attention_plugin float16 \
                --output_dir ./downloads/qwen-14b/trt_engines/fp16/2-XPU/ \
                --world_size 2 \
                --tp_size 2
```
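If all the XPUs for a multi-way build are already available, the `--parallel_build` flag mentioned above can shorten the build. A minimal sketch, assuming the flag simply combines with the 2-way tensor-parallelism example unchanged:

```bash
# Sketch: build the 2-way tensor-parallel Qwen 7B engines in parallel.
# Assumes both XPUs are free at build time; parallel build is single-node only.
python build.py --hf_model_dir ./downloads/qwen-7b/ \
                --dtype float16 \
                --use_gpt_attention_plugin float16 \
                --output_dir ./downloads/qwen-7b/trt_engines/fp16/2-XPU/ \
                --world_size 2 \
                --tp_size 2 \
                --parallel_build
```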
#### SmoothQuant

SmoothQuant supports both Qwen v1 and Qwen v2. Unlike FP16 HF weights, which can be processed and loaded into XTRT-LLM directly, SmoothQuant requires loading INT8 weights, and those INT8 weights must be preprocessed before building the engine.
Example:

```bash
python3 hf_qwen_convert.py -i ./downloads/qwen-7b/ -o ./downloads/qwen-7b/sq0.5/ -sq 0.5 --tensor-parallelism 1 --storage-type float16
```
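For a multi-XPU SmoothQuant build, the `--tensor-parallelism` value above should presumably match the target TP degree. A minimal sketch under that assumption; the `sq0.5-tp2` output path is a hypothetical name:

```bash
# Sketch: preprocess INT8 weights for a 2-way tensor-parallel engine.
# The output directory name is an assumption mirroring the 1-XPU example.
python3 hf_qwen_convert.py -i ./downloads/qwen-7b/ -o ./downloads/qwen-7b/sq0.5-tp2/ -sq 0.5 --tensor-parallelism 2 --storage-type float16
```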
Notes: `hf_qwen_convert.py` runs on PyTorch, and

1. 'torch-cpu' is generally more accurate than XPyTorch.
2. XPyTorch typically uses more than 32 GB of GM, so more XPUs are needed to complete the conversion.
3. When running with XPyTorch, add `-p=1` (see the sketch below).
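A minimal sketch of note 3, assuming `-p=1` is simply appended to the conversion command shown above:

```bash
# Sketch: run the weight conversion under XPyTorch with -p=1, per note 3.
python3 hf_qwen_convert.py -i ./downloads/qwen-7b/ -o ./downloads/qwen-7b/sq0.5/ -sq 0.5 --tensor-parallelism 1 --storage-type float16 -p=1
```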
`build.py` adds new options to support INT8 inference of SmoothQuant models.

`--use_smooth_quant` is the entry point for INT8 inference. By default, it runs the model in the _per-tensor_ mode.
`--per-token` and `--per-channel` are not supported yet.
Example build invocation:

```bash
# Build model for SmoothQuant in the _per_tensor_ mode.
python3 build.py --ft_dir_path=./downloads/qwen-7b/sq0.5/1-XPU/ \
                 --use_smooth_quant \
                 --hf_model_dir ./downloads/qwen-7b/ \
                 --output_dir ./downloads/qwen-7b/trt_engines/sq0.5/1-XPU/
```
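For completeness, a 2-way tensor-parallel SmoothQuant build might combine the flags above with the `--world_size`/`--tp_size` options from the FP16 examples. A minimal sketch; the preprocessed-weight layout and output paths are assumptions mirroring the 1-XPU example:

```bash
# Sketch: 2-way tensor-parallel SmoothQuant build.
# Assumes weights were preprocessed with --tensor-parallelism 2 (see the earlier sketch).
python3 build.py --ft_dir_path=./downloads/qwen-7b/sq0.5-tp2/2-XPU/ \
                 --use_smooth_quant \
                 --hf_model_dir ./downloads/qwen-7b/ \
                 --output_dir ./downloads/qwen-7b/trt_engines/sq0.5/2-XPU/ \
                 --world_size 2 \
                 --tp_size 2
```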
- Run

```bash
python3 ../run.py --input_text "你好,请问你叫什么?" \
                  --max_output_len=50 \
                  --tokenizer_dir ./downloads/qwen-7b/ \
                  --engine_dir=./downloads/qwen-7b/trt_engines/sq0.5/1-XPU/
```

- Summarization

```bash
python ../summarize.py --test_trt_llm \
                       --tokenizer_dir ./downloads/qwen-7b/ \
                       --data_type fp16 \
                       --engine_dir=./downloads/qwen-7b/trt_engines/sq0.5/1-XPU/ \
                       --max_input_length 2048 \
                       --output_len 2048
```
### Run

**Note: run `pip install tiktoken` at the run stage.**

To run the XTRT-LLM Qwen model using the engines generated by `build.py`:
```bash
# FP16 inference.
python3 ../run.py --input_text "你好,请问你叫什么?" \
                  --max_output_len=50 \
                  --tokenizer_dir ./downloads/qwen-7b/ \
                  --engine_dir=./downloads/qwen-7b/trt_engines/fp16/1-XPU/

# INT8 weight-only inference.
python3 ../run.py --input_text "你好,请问你叫什么?" \
                  --max_output_len=50 \
                  --tokenizer_dir ./downloads/qwen-7b/ \
                  --engine_dir=./downloads/qwen-7b/trt_engines/int8_weight_only/1-XPU/

# Run the Qwen 7B model in FP16 using two XPUs.
mpirun -n 2 --allow-run-as-root \
    python ../run.py --input_text "你好,请问你叫什么?" \
                     --tokenizer_dir ./downloads/qwen-7b/ \
                     --max_output_len=50 \
                     --engine_dir ./downloads/qwen-7b/trt_engines/fp16/2-XPU/
```
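The Qwen 14B engines built earlier should run the same way. A minimal sketch, assuming only the tokenizer and engine paths change:

```bash
# Sketch: run the Qwen 14B FP16 engines on two XPUs.
mpirun -n 2 --allow-run-as-root \
    python ../run.py --input_text "你好,请问你叫什么?" \
                     --tokenizer_dir ./downloads/qwen-14b/ \
                     --max_output_len=50 \
                     --engine_dir ./downloads/qwen-14b/trt_engines/fp16/2-XPU/
```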
Demo output of `run.py`:
```bash
python3 ../run.py --input_text "你好,请问你叫什么?" \
                  --max_output_len=50 \
                  --tokenizer_dir ./downloads/qwen-7b/ \
                  --engine_dir ./downloads/qwen-7b/trt_engines/fp16/1-XPU/
```
```
Loading engine from ./downloads/qwen-7b/trt_engines/fp16/1-XPU/qwen_float16_tp1_rank0.engine
Input: "<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
你好,请问你叫什么?<|im_end|>
<|im_start|>assistant
"
Output: "我是来自阿里云的大规模语言模型,我叫通义千问。"
```