初始化项目,由ModelHub XC社区提供模型

Model: Qwen/Qwen-7B
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-06 06:23:34 +08:00
commit 4ddcc22227
29 changed files with 156727 additions and 0 deletions

35
.gitattributes vendored Normal file
View File

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

53
LICENSE Normal file
View File

@@ -0,0 +1,53 @@
Tongyi Qianwen LICENSE AGREEMENT
Tongyi Qianwen Release Date: August 3, 2023
By clicking to agree or by using or distributing any portion or element of the Tongyi Qianwen Materials, you will be deemed to have recognized and accepted the content of this Agreement, which is effective immediately.
1. Definitions
a. This Tongyi Qianwen LICENSE AGREEMENT (this "Agreement") shall mean the terms and conditions for use, reproduction, distribution and modification of the Materials as defined by this Agreement.
b. "We"(or "Us") shall mean Alibaba Cloud.
c. "You" (or "Your") shall mean a natural person or legal entity exercising the rights granted by this Agreement and/or using the Materials for any purpose and in any field of use.
d. "Third Parties" shall mean individuals or legal entities that are not under common control with Us or You.
e. "Tongyi Qianwen" shall mean the large language models (including Qwen model and Qwen-Chat model), and software and algorithms, consisting of trained model weights, parameters (including optimizer states), machine-learning model code, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Us.
f. "Materials" shall mean, collectively, Alibaba Cloud's proprietary Tongyi Qianwen and Documentation (and any portion thereof) made available under this Agreement.
g. "Source" form shall mean the preferred form for making modifications, including but not limited to model source code, documentation source, and configuration files.
h. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation,
and conversions to other media types.
2. Grant of Rights
You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Alibaba Cloud's intellectual property or other rights owned by Us embodied in the Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Materials.
3. Redistribution
You may reproduce and distribute copies of the Materials or derivative works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
a. You shall give any other recipients of the Materials or derivative works a copy of this Agreement;
b. You shall cause any modified files to carry prominent notices stating that You changed the files;
c. You shall retain in all copies of the Materials that You distribute the following attribution notices within a "Notice" text file distributed as a part of such copies: "Tongyi Qianwen is licensed under the Tongyi Qianwen LICENSE AGREEMENT, Copyright (c) Alibaba Cloud. All Rights Reserved."; and
d. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such derivative works as a whole, provided Your use, reproduction, and distribution of the work otherwise complies with the terms and conditions of this Agreement.
4. Restrictions
If you are commercially using the Materials, and your product or service has more than 100 million monthly active users, You shall request a license from Us. You cannot exercise your rights under this Agreement without our express authorization.
5. Rules of use
a. The Materials may be subject to export controls or restrictions in China, the United States or other countries or regions. You shall comply with applicable laws and regulations in your use of the Materials.
b. You can not use the Materials or any output therefrom to improve any other large language model (excluding Tongyi Qianwen or derivative works thereof).
6. Intellectual Property
a. We retain ownership of all intellectual property rights in and to the Materials and derivatives made by or for Us. Conditioned upon compliance with the terms and conditions of this Agreement, with respect to any derivative works and modifications of the Materials that are made by you, you are and will be the owner of such derivative works and modifications.
b. No trademark license is granted to use the trade names, trademarks, service marks, or product names of Us, except as required to fulfill notice requirements under this Agreement or as required for reasonable and customary use in describing and redistributing the Materials.
c. If you commence a lawsuit or other proceedings (including a cross-claim or counterclaim in a lawsuit) against Us or any entity alleging that the Materials or any output therefrom, or any part of the foregoing, infringe any intellectual property or other right owned or licensable by you, then all licences granted to you under this Agreement shall terminate as of the date such lawsuit or other proceeding is commenced or brought.
7. Disclaimer of Warranty and Limitation of Liability
a. We are not obligated to support, update, provide training for, or develop any further version of the Tongyi Qianwen Materials or to grant any license thereto.
b. THE MATERIALS ARE PROVIDED "AS IS" WITHOUT ANY EXPRESS OR IMPLIED WARRANTY OF ANY KIND INCLUDING WARRANTIES OF MERCHANTABILITY, NONINFRINGEMENT, OR FITNESS FOR A PARTICULAR PURPOSE. WE MAKE NO WARRANTY AND ASSUME NO RESPONSIBILITY FOR THE SAFETY OR STABILITY OF THE MATERIALS AND ANY OUTPUT THEREFROM.
c. IN NO EVENT SHALL WE BE LIABLE TO YOU FOR ANY DAMAGES, INCLUDING, BUT NOT LIMITED TO ANY DIRECT, OR INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING FROM YOUR USE OR INABILITY TO USE THE MATERIALS OR ANY OUTPUT OF IT, NO MATTER HOW ITS CAUSED.
d. You will defend, indemnify and hold harmless Us from and against any claim by any third party arising out of or related to your use or distribution of the Materials.
8. Survival and Termination.
a. The term of this Agreement shall commence upon your acceptance of this Agreement or access to the Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein.
b. We may terminate this Agreement if you breach any of the terms or conditions of this Agreement. Upon termination of this Agreement, you must delete and cease use of the Materials. Sections 7 and 9 shall survive the termination of this Agreement.
9. Governing Law and Jurisdiction.
a. This Agreement and any dispute arising out of or relating to it will be governed by the laws of China, without regard to conflict of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement.
b. The People's Courts in Hangzhou City shall have exclusive jurisdiction over any dispute arising out of this Agreement.

280
NOTICE Normal file
View File

@@ -0,0 +1,280 @@
------------- LICENSE FOR NVIDIA Megatron-LM code --------------
Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of NVIDIA CORPORATION nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
------------- LICENSE FOR OpenAI tiktoken code --------------
MIT License
Copyright (c) 2022 OpenAI, Shantanu Jain
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
------------- LICENSE FOR stanford_alpaca code --------------
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2023 Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
------------- LICENSE FOR PanQiWei AutoGPTQ code --------------
MIT License
Copyright (c) 2023 潘其威(William)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

276
README.md Normal file
View File

@@ -0,0 +1,276 @@
---
language:
- zh
- en
tags:
- qwen
pipeline_tag: text-generation
inference: false
license: other
license_name: tongyi-qianwen-license-agreement
license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
---
# Qwen-7B
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_qwen.jpg" width="400"/>
<p>
<br>
<p align="center">
🤗 <a href="https://huggingface.co/Qwen">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp🤖 <a href="https://modelscope.cn/organization/qwen">ModelScope</a>&nbsp&nbsp | &nbsp&nbsp 📑 <a href="https://arxiv.org/abs/2309.16609">Paper</a> &nbsp&nbsp &nbsp&nbsp🖥 <a href="https://modelscope.cn/studios/qwen/Qwen-7B-Chat-Demo/summary">Demo</a>
<br>
<a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat (微信)</a>&nbsp&nbsp | &nbsp&nbsp<a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>&nbsp&nbsp &nbsp&nbsp<a href="https://dashscope.aliyun.com">API</a>
</p>
<br>
## 介绍 (Introduction)
**通义千问-7BQwen-7B**是阿里云研发的通义千问大模型系列的70亿参数规模的模型。Qwen-7B是基于Transformer的大语言模型, 在超大规模的预训练数据上进行训练得到。预训练数据类型多样覆盖广泛包括大量网络文本、专业书籍、代码等。同时在Qwen-7B的基础上我们使用对齐机制打造了基于大语言模型的AI助手Qwen-7B-Chat。相较于最初开源的Qwen-7B模型我们现已将预训练模型和Chat模型更新到效果更优的版本。本仓库为Qwen-7B预训练模型的仓库。
通义千问-7BQwen-7B主要有以下特点
1. **大规模高质量训练语料**使用超过2.4万亿tokens的数据进行预训练包含高质量中、英、多语言、代码、数学等数据涵盖通用及专业领域的训练语料。通过大量对比实验对预训练语料分布进行了优化。
2. **强大的性能**Qwen-7B在多个中英文下游评测任务上涵盖常识推理、代码、数学、翻译等效果显著超越现有的相近规模开源模型甚至在部分指标上相比更大尺寸模型也有较强竞争力。具体评测结果请详见下文。
3. **覆盖更全面的词表**相比目前以中英词表为主的开源模型Qwen-7B使用了约15万大小的词表。该词表对多语言更加友好方便用户在不扩展词表的情况下对部分语种进行能力增强和扩展。
如果您想了解更多关于通义千问7B开源模型的细节我们建议您参阅[GitHub代码库](https://github.com/QwenLM/Qwen)。
**Qwen-7B** is the 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, codes, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant, which is trained with alignment techniques. Now we have updated both our pretrained and chat models for better performances. This repository is the one for the Qwen-7B base language model.
The features of Qwen-7B include:
1. **Large-scale high-quality training corpora**: It is pretrained on over 2.4 trillion tokens, including Chinese, English, multilingual texts, code, and mathematics, covering general and professional fields. The distribution of the pre-training corpus has been optimized through a large number of ablation experiments.
2. **Competitive performance**: It significantly surpasses existing open-source models of similar scale on multiple Chinese and English downstream evaluation tasks (including commonsense, reasoning, code, mathematics, etc.), and even surpasses some larger-scale models in several benchmarks. See below for specific evaluation results.
3. **More comprehensive vocabulary coverage**: Compared with other open-source models based on Chinese and English vocabularies, Qwen-7B uses a vocabulary of over 150K tokens. This vocabulary is more friendly to multiple languages, enabling users to directly further enhance the capability for certain languages without expanding the vocabulary.
For more details about Qwen, please refer to the [GitHub](https://github.com/QwenLM/Qwen) code repository.
<br>
## 要求Requirements
* python 3.8及以上版本
* pytorch 1.12及以上版本推荐2.0及以上版本
* 建议使用CUDA 11.4及以上GPU用户、flash-attention用户等需考虑此选项
* python 3.8 and above
* pytorch 1.12 and above, 2.0 and above are recommended
* CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.)
<br>
## 依赖项 (Dependency)
运行Qwen-7B请确保满足上述要求再执行以下pip命令安装依赖库
To run Qwen-7B, please make sure you meet the above requirements, and then execute the following pip commands to install the dependent libraries.
```bash
pip install transformers==4.32.0 accelerate tiktoken einops scipy transformers_stream_generator==0.0.4 peft deepspeed
```
另外,推荐安装`flash-attention`库(**当前已支持flash attention 2**),以实现更高的效率和更低的显存占用。
In addition, it is recommended to install the `flash-attention` library (**we support flash attention 2 now.**) for higher efficiency and lower memory usage.
```bash
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention && pip install .
# 下方安装可选,安装可能比较缓慢。
# pip install csrc/layer_norm
# pip install csrc/rotary
```
<br>
## 快速使用Quickstart
您可以通过以下代码轻松调用:
You can easily call the model with the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
# use bf16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B", device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B", device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B", device_map="cpu", trust_remote_code=True).eval()
# use auto mode, automatically select precision based on the device.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B", device_map="auto", trust_remote_code=True).eval()
# Specify hyperparameters for generation. But if you use transformers>=4.32.0, there is no need to do this.
# model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
inputs = tokenizer('蒙古国的首都是乌兰巴托Ulaanbaatar\n冰岛的首都是雷克雅未克Reykjavik\n埃塞俄比亚的首都是', return_tensors='pt')
inputs = inputs.to(model.device)
pred = model.generate(**inputs)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
# 蒙古国的首都是乌兰巴托Ulaanbaatar\n冰岛的首都是雷克雅未克Reykjavik\n埃塞俄比亚的首都是亚的斯亚贝巴Addis Ababa...
```
关于更多的使用说明,请参考我们的[GitHub repo](https://github.com/QwenLM/Qwen)获取更多信息。
For more information, please refer to our [GitHub repo](https://github.com/QwenLM/Qwen) for more information.
<br>
## Tokenizer
> 作为术语的“tokenization”在中文中尚无共识的概念对应本文档采用英文表达以利说明。
基于tiktoken的分词器有别于其他分词器比如sentencepiece分词器。尤其在微调阶段需要特别注意特殊token的使用。关于tokenizer的更多信息以及微调时涉及的相关使用请参阅[文档](https://github.com/QwenLM/Qwen/blob/main/tokenization_note_zh.md)。
Our tokenizer based on tiktoken is different from other tokenizers, e.g., sentencepiece tokenizer. You need to pay attention to special tokens, especially in finetuning. For more detailed information on the tokenizer and related use in fine-tuning, please refer to the [documentation](https://github.com/QwenLM/Qwen/blob/main/tokenization_note.md).
<br>
## 模型细节 (Model)
Qwen-7B模型规模基本情况如下所示。
The details of the model architecture of Qwen-7B are listed as follows.
| Hyperparameter | Value |
|:----------------|:-------|
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 151851 |
| sequence length | 8192 |
在位置编码、FFN激活函数和normalization的实现方式上我们也采用了目前最流行的做法
即RoPE相对位置编码、SwiGLU激活函数、RMSNorm可选安装flash-attention加速
在分词器方面相比目前主流开源模型以中英词表为主Qwen-7B使用了超过15万token大小的词表。 该词表在GPT-4使用的BPE词表`cl100k_base`基础上,对中文、多语言进行了优化,在对中、英、代码数据的高效编解码的基础上,对部分多语言更加友好,方便用户在不扩展词表的情况下对部分语种进行能力增强。
词表对数字按单个数字位切分。调用较为高效的[tiktoken分词库](https://github.com/openai/tiktoken)进行分词。
我们从部分语种各随机抽取100万个文档语料以对比不同模型的编码压缩率以支持100语种的XLM-R为基准值1越低越好具体性能见图。
可以看到Qwen-7B在保持中英代码高效解码的前提下对部分使用人群较多的语种泰语th、希伯来语he、阿拉伯语ar、韩语ko、越南语vi、日语ja、土耳其语tr、印尼语id、波兰语pl、俄语ru、荷兰语nl、葡萄牙语pt、意大利语it、德语de、西班牙语es、法语fr等上也实现了较高的压缩率使得模型在这些语种上也具备较强的可扩展性和较高的训练和推理效率。
在预训练数据方面去重及过滤后的语料超过2.4T tokens囊括全网文本、百科、书籍、代码、数学及各个领域垂类。
<p align="center">
<img src="assets/tokenizer.png" style="width: 1200px"/>
<p>
For position encoding, FFN activation function, and normalization methods, we adopt the prevalent practices, i.e., RoPE relative position encoding, SwiGLU for activation function, and RMSNorm for normalization (optional installation of flash-attention for acceleration).
For tokenization, compared to the current mainstream open-source models based on Chinese and English vocabularies, Qwen-7B uses a vocabulary of over 150K tokens. It first considers efficient encoding of Chinese, English, and code data, and is also more friendly to multilingual languages, enabling users to directly enhance the capability of some languages without expanding the vocabulary. It segments numbers by single digit, and calls the [tiktoken](https://github.com/openai/tiktoken) tokenizer library for efficient tokenization.
We randomly selected 1 million document corpus of each language to test and compare the encoding compression rates of different models (with XLM-R, which supports 100 languages, as the base value 1). The specific performance is shown in the figure above.
As can be seen, while ensuring the efficient decoding of Chinese, English, and code, Qwen-7B also achieves a high compression rate for many other languages (such as th, he, ar, ko, vi, ja, tr, id, pl, ru, nl, pt, it, de, es, fr etc.), equipping the model with strong scalability as well as high training and inference efficiency in these languages.
The scale of pretraining corpus reaches over 2.4T tokens after deduplication and filtration, encompassing web text, encyclopedia, books, code, mathematics, and various domains.
<br>
## 评测效果Evaluation
我们选取了MMLUC-EvalGSM8K, MATH, HumanEval, MBPP, BBH, CMMLU等目前较流行的benchmark对模型的中英知识能力、翻译、数学推理、代码等能力进行综合评测。从下列结果可以看到Qwen模型在所有benchmark上均取得了同级别开源模型中的最优表现。
We selected MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP, BBH, CMMLU, which are currently popular benchmarks, to test the models Chinese and English knowledge capabilities, translation, mathematical reasoning, coding and other capabilities. From the following comprehensive evaluation results, we can see that the Qwen model outperform the similarly sized open-source models on all tasks.
| Model | MMLU | C-Eval | GSM8K | MATH | HumanEval | MBPP | BBH | CMMLU |
|:-------------------|:--------:|:--------:|:--------:|:--------:|:---------:|:--------:|:--------:|:--------:|
| | 5-shot | 5-shot | 8-shot | 4-shot | 0-shot | 3-shot | 3-shot | 5-shot |
| LLaMA2-7B | 46.8 | 32.5 | 16.7 | 3.3 | 12.8 | 20.8 | 38.2 | 31.8 |
| LLaMA2-13B | 55.0 | 41.4 | 29.6 | 5.0 | 18.9 | 30.3 | 45.6 | 38.4 |
| LLaMA2-34B | 62.6 | - | 42.2 | 6.2 | 22.6 | 33.0 | 44.1 | - |
| ChatGLM2-6B | 47.9 | 51.7 | 32.4 | 6.5 | - | - | 33.7 | - |
| InternLM-7B | 51.0 | 53.4 | 31.2 | 6.3 | 10.4 | 14.0 | 37.0 | 51.8 |
| InternLM-20B | 62.1 | 58.8 | 52.6 | 7.9 | 25.6 | 35.6 | 52.5 | 59.0 |
| Baichuan2-7B | 54.7 | 56.3 | 24.6 | 5.6 | 18.3 | 24.2 | 41.6 | 57.1 |
| Baichuan2-13B | 59.5 | 59.0 | 52.8 | 10.1 | 17.1 | 30.2 | 49.0 | 62.0 |
| Qwen-7B (original) | 56.7 | 59.6 | 51.6 | - | 24.4 | 31.2 | 40.6 | 58.8 |
| **Qwen-7B** | 58.2 | 63.5 | 51.7 | 11.6 | 29.9 | 31.6 | 45.0 | 62.2 |
| **Qwen-14B** | **66.3** | **72.1** | **61.3** | **24.8** | **32.3** | **40.8** | **53.4** | **71.0** |
### 长序列评测Long-Context Evaluation
我们引入NTK插值LogN注意力缩放窗口注意力等技巧将Qwen-7B (original)和14B模型的上下文长度从2K扩展到8K以上将Qwen-7B从8K扩到32K。在arXiv数据上使用PPL指标测试Qwen-7B和Qwen-14B在不同长度下的表现结果如下
**(若要启用NTK和LogN注意力缩放请将config.json里的`use_dynamic_ntk``use_logn_attn`设置为true)**
We introduce NTK-aware interpolation, LogN attention scaling, Window attention, etc. to extend the context length to over 8K tokens. We conduct language modeling experiments on the arXiv dataset with the PPL evaluation. Results are demonstrated below:
**(To use NTK interpolation and LogN scaling, please set `use_dynamic_ntk` and `use_long_attn` to true in config.json.)**
<table>
<tr>
<th rowspan="2">Model</th><th colspan="6" align="center">Sequence Length</th>
</tr>
<tr>
<th align="center">1024</th><th align="center">2048</th><th align="center">4096</th><th align="center">8192</th><th align="center">16384</th><th align="center">32768</th>
</tr>
<tr>
<td>Qwen-7B (original)</td><td align="center">4.23</td><td align="center">3.78</td><td align="center">39.35</td><td align="center">469.81</td><td align="center">2645.09</td><td align="center">-</td>
</tr>
<tr>
<td>+ dynamic_ntk</td><td align="center">4.23</td><td align="center">3.78</td><td align="center">3.59</td><td align="center">3.66</td><td align="center">5.71</td><td align="center">-</td>
</tr>
<tr>
<td>+ dynamic_ntk + logn</td><td align="center">4.23</td><td align="center">3.78</td><td align="center">3.58</td><td align="center">3.56</td><td align="center">4.62</td><td align="center">-</td>
</tr>
<tr>
<td>+ dynamic_ntk + logn + window_attn</td><td align="center">4.23</td><td align="center">3.78</td><td align="center">3.58</td><td align="center">3.49</td><td align="center">4.32</td><td align="center">-</td>
</tr>
<tr>
<tr>
<td>Qwen-7B</td><td align="center"><b>4.23</b></td><td align="center"><b>3.81</b></td><td align="center"><b>3.52</b></td><td align="center"><b>3.31</b></td><td align="center">7.27</td><td align="center">181.49</td>
</tr>
<tr>
<td>+ dynamic_ntk + logn + window_attn</td><td align="center"><b>4.23</b></td><td align="center"><b>3.81</b></td><td align="center"><b>3.52</b></td><td align="center"><b>3.33</b></td><td align="center"><b>3.22</b></td><td align="center"><b>3.17</b></td>
</tr>
<tr>
<td>Qwen-14B</td><td align="center"><b>-</b></td><td align="center"><b>3.46</b></td><td align="center">22.79</td><td align="center">334.65</td><td align="center">3168.35</td><td align="center">-</td>
</tr>
<tr>
<td>+ dynamic_ntk + logn + window_attn</td><td align="center"><b>-</b></td><td align="center"><b>3.46</b></td><td align="center"><b>3.29</b></td><td align="center"><b>3.18</b></td><td align="center">3.42</td><td align="center">-</td>
</tr>
</table>
## 评测复现Reproduction
我们提供了评测脚本,方便大家复现模型效果,详见[链接](https://github.com/QwenLM/Qwen/tree/main/eval)。提示:由于硬件和框架造成的舍入误差,复现结果如有小幅波动属于正常现象。
We have provided evaluation scripts to reproduce the performance of our model, details as [link](https://github.com/QwenLM/Qwen/tree/main/eval).
<br>
## FAQ
如遇到问题,敬请查阅[FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ_zh.md)以及issue区如仍无法解决再提交issue。
If you meet problems, please refer to [FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ.md) and the issues first to search a solution before you launch a new issue.
<br>
## 引用 (Citation)
如果你觉得我们的工作对你有帮助,欢迎引用!
If you find our work helpful, feel free to give us a cite.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
<br>
## 使用协议License Agreement
我们的代码和模型权重对学术研究完全开放,并支持商用。请查看[LICENSE](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT)了解具体的开源协议细节。如需商用,请填写[问卷](https://dashscope.console.aliyun.com/openModelApply/qianwen)申请。
Our code and checkpoints are open to research purpose, and they are allowed for commercial purposes. Check [LICENSE](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) for more details about the license. If you have requirements for commercial use, please fill out the [form](https://dashscope.console.aliyun.com/openModelApply/qianwen) to apply.
<br>
## 联系我们Contact Us
如果你想给我们的研发团队和产品团队留言欢迎加入我们的微信群、钉钉群以及Discord同时也欢迎通过邮件qianwen_opensource@alibabacloud.com联系我们。
If you are interested to leave a message to either our research team or product team, join our Discord or WeChat groups! Also, feel free to send an email to qianwen_opensource@alibabacloud.com.

BIN
assets/logo.jpg Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 81 KiB

BIN
assets/qwen_tokenizer.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 79 KiB

BIN
assets/tokenizer.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 79 KiB

BIN
assets/wechat.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 67 KiB

198
cache_autogptq_cuda_256.cpp Normal file
View File

@@ -0,0 +1,198 @@
#include <torch/all.h>
#include <torch/python.h>
#include <c10/cuda/CUDAGuard.h>
// adapted from https://github.com/PanQiWei/AutoGPTQ/blob/main/autogptq_extension/cuda_256/autogptq_cuda_256.cpp
void vecquant8matmul_cuda(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros,
torch::Tensor g_idx
);
void vecquant8matmul(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros,
torch::Tensor g_idx
) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(vec));
vecquant8matmul_cuda(vec, mat, mul, scales, zeros, g_idx);
}
void vecquant8matmul_batched_cuda(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
);
void vecquant8matmul_batched(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(vec));
vecquant8matmul_batched_cuda(vec, mat, mul, scales, zeros);
}
void vecquant8matmul_batched_column_compression_cuda(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
);
void vecquant8matmul_batched_column_compression(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(vec));
vecquant8matmul_batched_column_compression_cuda(vec, mat, mul, scales, zeros);
}
void vecquant4matmul_batched_cuda(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
);
void vecquant4matmul_batched(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(vec));
vecquant4matmul_batched_cuda(vec, mat, mul, scales, zeros);
}
void vecquant4matmul_batched_column_compression_cuda(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
);
void vecquant4matmul_batched_column_compression(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(vec));
vecquant4matmul_batched_column_compression_cuda(vec, mat, mul, scales, zeros);
}
void vecquant8matmul_batched_old_cuda(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
);
void vecquant8matmul_batched_old(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(vec));
vecquant8matmul_batched_old_cuda(vec, mat, mul, scales, zeros);
}
void vecquant4matmul_batched_old_cuda(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
);
void vecquant4matmul_batched_old(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(vec));
vecquant4matmul_batched_old_cuda(vec, mat, mul, scales, zeros);
}
void vecquant8matmul_batched_column_compression_old_cuda(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
);
void vecquant8matmul_batched_column_compression_old(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(vec));
vecquant8matmul_batched_column_compression_old_cuda(vec, mat, mul, scales, zeros);
}
void vecquant4matmul_batched_column_compression_old_cuda(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
);
void vecquant4matmul_batched_column_compression_old(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(vec));
vecquant4matmul_batched_column_compression_old_cuda(vec, mat, mul, scales, zeros);
}
void vecquant8matmul_batched_faster_cuda(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
);
void vecquant8matmul_batched_faster(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(vec));
vecquant8matmul_batched_faster_cuda(vec, mat, mul, scales, zeros);
}
void vecquant8matmul_batched_faster_old_cuda(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
);
void vecquant8matmul_batched_faster_old(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(vec));
vecquant8matmul_batched_faster_old_cuda(vec, mat, mul, scales, zeros);
}
void vecquant8matmul_batched_column_compression_faster_cuda(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
);
void vecquant8matmul_batched_column_compression_faster(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(vec));
vecquant8matmul_batched_column_compression_faster_cuda(vec, mat, mul, scales, zeros);
}
void vecquant8matmul_batched_column_compression_faster_old_cuda(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
);
void vecquant8matmul_batched_column_compression_faster_old(
torch::Tensor vec, torch::Tensor mat, torch::Tensor mul,
torch::Tensor scales, torch::Tensor zeros
) {
const at::cuda::OptionalCUDAGuard device_guard(device_of(vec));
vecquant8matmul_batched_column_compression_faster_old_cuda(vec, mat, mul, scales, zeros);
}
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("vecquant8matmul", &vecquant8matmul, "Vector 8-bit Quantized Matrix Multiplication (CUDA) (desc_act)");
m.def("vecquant8matmul_batched", &vecquant8matmul_batched, "Vector 8-bit Batched Quantized Matrix Multiplication (CUDA) (desc_act)");
m.def("vecquant8matmul_batched_old", &vecquant8matmul_batched_old, "Vector 8-bit old Batched Quantized Matrix Multiplication (CUDA) (desc_act)");
m.def("vecquant8matmul_batched_faster", &vecquant8matmul_batched_faster, "Vector 8-bit old Batched Quantized Matrix Multiplication (CUDA) (desc_act)");
m.def("vecquant8matmul_batched_faster_old", &vecquant8matmul_batched_faster_old, "Vector 8-bit old Batched Quantized Matrix Multiplication (CUDA) (desc_act)");
m.def("vecquant4matmul_batched_old", &vecquant4matmul_batched_old, "Vector 4-bit old Batched Quantized Matrix Multiplication (CUDA) (desc_act)");
m.def("vecquant8matmul_batched_column_compression", &vecquant8matmul_batched_column_compression, "Vector 8-bit Batched Quantized Matrix Multiplication (CUDA) with weight's column compressed (desc_act)");
m.def("vecquant8matmul_batched_column_compression_old", &vecquant8matmul_batched_column_compression_old, "Vector old 8-bit Batched Quantized Matrix Multiplication (CUDA) with weight's column compressed (desc_act)");
m.def("vecquant8matmul_batched_column_compression_faster", &vecquant8matmul_batched_column_compression_faster, "Vector old 8-bit Batched Quantized Matrix Multiplication (CUDA) with weight's column compressed (desc_act)");
m.def("vecquant8matmul_batched_column_compression_faster_old", &vecquant8matmul_batched_column_compression_faster_old, "Vector old 8-bit Batched Quantized Matrix Multiplication (CUDA) with weight's column compressed (desc_act)");
m.def("vecquant4matmul_batched_column_compression_old", &vecquant4matmul_batched_column_compression_old, "Vector old 4-bit Batched Quantized Matrix Multiplication (CUDA) with weight's column compressed (desc_act)");
m.def("vecquant4matmul_batched", &vecquant4matmul_batched, "Vector 4-bit Batched Quantized Matrix Multiplication (CUDA) (desc_act)");
m.def("vecquant4matmul_batched_column_compression", &vecquant4matmul_batched_column_compression, "Vector 4-bit Batched Quantized Matrix Multiplication (CUDA) with weight's column compressed (desc_act)");
}

File diff suppressed because it is too large Load Diff

37
config.json Normal file
View File

@@ -0,0 +1,37 @@
{
"architectures": [
"QWenLMHeadModel"
],
"auto_map": {
"AutoConfig": "configuration_qwen.QWenConfig",
"AutoModelForCausalLM": "modeling_qwen.QWenLMHeadModel"
},
"attn_dropout_prob": 0.0,
"bf16": false,
"emb_dropout_prob": 0.0,
"fp16": false,
"fp32": false,
"hidden_size": 4096,
"intermediate_size": 22016,
"initializer_range": 0.02,
"kv_channels": 128,
"layer_norm_epsilon": 1e-06,
"max_position_embeddings": 32768,
"model_type": "qwen",
"no_bias": true,
"num_attention_heads": 32,
"num_hidden_layers": 32,
"onnx_safe": null,
"rotary_emb_base": 10000,
"rotary_pct": 1.0,
"scale_attn_weights": true,
"seq_length": 8192,
"tie_word_embeddings": false,
"tokenizer_class": "QWenTokenizer",
"transformers_version": "4.32.0",
"use_cache": true,
"use_dynamic_ntk": true,
"use_flash_attn": "auto",
"use_logn_attn": true,
"vocab_size": 151936
}

5
configuration.json Normal file
View File

@@ -0,0 +1,5 @@
{
"framework": "pytorch",
"task": "text-generation",
"allow_remote": true
}

71
configuration_qwen.py Normal file
View File

@@ -0,0 +1,71 @@
# Copyright (c) Alibaba Cloud.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from transformers import PretrainedConfig
class QWenConfig(PretrainedConfig):
model_type = "qwen"
keys_to_ignore_at_inference = ["past_key_values"]
def __init__(
self,
vocab_size=151936,
hidden_size=4096,
num_hidden_layers=32,
num_attention_heads=32,
emb_dropout_prob=0.0,
attn_dropout_prob=0.0,
layer_norm_epsilon=1e-6,
initializer_range=0.02,
max_position_embeddings=8192,
scale_attn_weights=True,
use_cache=True,
bf16=False,
fp16=False,
fp32=False,
kv_channels=128,
rotary_pct=1.0,
rotary_emb_base=10000,
use_dynamic_ntk=True,
use_logn_attn=True,
use_flash_attn="auto",
intermediate_size=22016,
no_bias=True,
tie_word_embeddings=False,
use_cache_quantization=False,
use_cache_kernel=False,
softmax_in_fp32=False,
**kwargs,
):
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.intermediate_size = intermediate_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
self.emb_dropout_prob = emb_dropout_prob
self.attn_dropout_prob = attn_dropout_prob
self.layer_norm_epsilon = layer_norm_epsilon
self.initializer_range = initializer_range
self.scale_attn_weights = scale_attn_weights
self.use_cache = use_cache
self.max_position_embeddings = max_position_embeddings
self.bf16 = bf16
self.fp16 = fp16
self.fp32 = fp32
self.kv_channels = kv_channels
self.rotary_pct = rotary_pct
self.rotary_emb_base = rotary_emb_base
self.use_dynamic_ntk = use_dynamic_ntk
self.use_logn_attn = use_logn_attn
self.use_flash_attn = use_flash_attn
self.no_bias = no_bias
self.use_cache_quantization = use_cache_quantization
self.use_cache_kernel = use_cache_kernel
self.softmax_in_fp32 = softmax_in_fp32
super().__init__(
tie_word_embeddings=tie_word_embeddings,
**kwargs
)

55
cpp_kernels.py Normal file
View File

@@ -0,0 +1,55 @@
from torch.utils import cpp_extension
import pathlib
import os
import subprocess
def _get_cuda_bare_metal_version(cuda_dir):
raw_output = subprocess.check_output([cuda_dir + "/bin/nvcc", "-V"],
universal_newlines=True)
output = raw_output.split()
release_idx = output.index("release") + 1
release = output[release_idx].split(".")
bare_metal_major = release[0]
bare_metal_minor = release[1][0]
return raw_output, bare_metal_major, bare_metal_minor
def _create_build_dir(buildpath):
try:
os.mkdir(buildpath)
except OSError:
if not os.path.isdir(buildpath):
print(f"Creation of the build directory {buildpath} failed")
# Check if cuda 11 is installed for compute capability 8.0
cc_flag = []
_, bare_metal_major, bare_metal_minor = _get_cuda_bare_metal_version(cpp_extension.CUDA_HOME)
if int(bare_metal_major) >= 11:
cc_flag.append('-gencode')
cc_flag.append('arch=compute_80,code=sm_80')
if int(bare_metal_minor) >= 7:
cc_flag.append('-gencode')
cc_flag.append('arch=compute_90,code=sm_90')
# Build path
srcpath = pathlib.Path(__file__).parent.absolute()
buildpath = srcpath / 'build'
_create_build_dir(buildpath)
def _cpp_extention_load_helper(name, sources, extra_cuda_flags):
return cpp_extension.load(
name=name,
sources=sources,
build_directory=buildpath,
extra_cflags=['-O3', ],
extra_cuda_cflags=['-O3',
'-gencode', 'arch=compute_70,code=sm_70',
'--use_fast_math'] + extra_cuda_flags + cc_flag,
verbose=1
)
extra_flags = []
cache_autogptq_cuda_256_sources = ["./cache_autogptq_cuda_256.cpp",
"./cache_autogptq_cuda_kernel_256.cu"]
cache_autogptq_cuda_256 = _cpp_extention_load_helper("cache_autogptq_cuda_256", cache_autogptq_cuda_256_sources, extra_flags)

11
generation_config.json Normal file
View File

@@ -0,0 +1,11 @@
{
"chat_format": "raw",
"eos_token_id": 151643,
"pad_token_id": 151643,
"stop_words_ids": [[151643]],
"max_new_tokens": 512,
"do_sample": true,
"top_k": 0,
"top_p": 0.8,
"transformers_version": "4.31.0"
}

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9dfd6266bcf80de9c3e5cd4e60300d839d03e459e48975b08d3e3b286044a306
size 1964066488

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3dedac66034371aa3b284a7886e9ce0fde9245ebac60f507f089b33ef82a2912
size 2023960808

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:81b25b14a58b62300d11b16c66933a8b631400bf846b27a2a5c5344629cd26e8
size 2023960816

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:59a22cb822f9e6d0a6a8415a7bab7b8448ce9672fc71f09266db264afc26ce48
size 2023960848

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e61126f7ad8c520112c49808a73d59bee18c6da7693d9a963a4867f389c91e4a
size 2023960848

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0ced56cc3265fe03ee73658f845dc23c713389b5d68087e918d24d0e2dea624f
size 2023960848

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b0999d47ea087bf79a075ed10889aa3497caff2356200204b032fba208109517
size 2023960848

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7bc05473a78ade06d526cc206e4a01722563abe99099367cdcbb9b3bf670a5de
size 1334845784

View File

@@ -0,0 +1,266 @@
{
"metadata": {
"total_size": 15442649088
},
"weight_map": {
"lm_head.weight": "model-00008-of-00008.safetensors",
"transformer.h.0.attn.c_attn.bias": "model-00001-of-00008.safetensors",
"transformer.h.0.attn.c_attn.weight": "model-00001-of-00008.safetensors",
"transformer.h.0.attn.c_proj.weight": "model-00001-of-00008.safetensors",
"transformer.h.0.ln_1.weight": "model-00001-of-00008.safetensors",
"transformer.h.0.ln_2.weight": "model-00001-of-00008.safetensors",
"transformer.h.0.mlp.c_proj.weight": "model-00001-of-00008.safetensors",
"transformer.h.0.mlp.w1.weight": "model-00001-of-00008.safetensors",
"transformer.h.0.mlp.w2.weight": "model-00001-of-00008.safetensors",
"transformer.h.1.attn.c_attn.bias": "model-00001-of-00008.safetensors",
"transformer.h.1.attn.c_attn.weight": "model-00001-of-00008.safetensors",
"transformer.h.1.attn.c_proj.weight": "model-00001-of-00008.safetensors",
"transformer.h.1.ln_1.weight": "model-00001-of-00008.safetensors",
"transformer.h.1.ln_2.weight": "model-00001-of-00008.safetensors",
"transformer.h.1.mlp.c_proj.weight": "model-00002-of-00008.safetensors",
"transformer.h.1.mlp.w1.weight": "model-00001-of-00008.safetensors",
"transformer.h.1.mlp.w2.weight": "model-00001-of-00008.safetensors",
"transformer.h.10.attn.c_attn.bias": "model-00003-of-00008.safetensors",
"transformer.h.10.attn.c_attn.weight": "model-00003-of-00008.safetensors",
"transformer.h.10.attn.c_proj.weight": "model-00003-of-00008.safetensors",
"transformer.h.10.ln_1.weight": "model-00003-of-00008.safetensors",
"transformer.h.10.ln_2.weight": "model-00003-of-00008.safetensors",
"transformer.h.10.mlp.c_proj.weight": "model-00003-of-00008.safetensors",
"transformer.h.10.mlp.w1.weight": "model-00003-of-00008.safetensors",
"transformer.h.10.mlp.w2.weight": "model-00003-of-00008.safetensors",
"transformer.h.11.attn.c_attn.bias": "model-00003-of-00008.safetensors",
"transformer.h.11.attn.c_attn.weight": "model-00003-of-00008.safetensors",
"transformer.h.11.attn.c_proj.weight": "model-00003-of-00008.safetensors",
"transformer.h.11.ln_1.weight": "model-00003-of-00008.safetensors",
"transformer.h.11.ln_2.weight": "model-00003-of-00008.safetensors",
"transformer.h.11.mlp.c_proj.weight": "model-00004-of-00008.safetensors",
"transformer.h.11.mlp.w1.weight": "model-00003-of-00008.safetensors",
"transformer.h.11.mlp.w2.weight": "model-00003-of-00008.safetensors",
"transformer.h.12.attn.c_attn.bias": "model-00004-of-00008.safetensors",
"transformer.h.12.attn.c_attn.weight": "model-00004-of-00008.safetensors",
"transformer.h.12.attn.c_proj.weight": "model-00004-of-00008.safetensors",
"transformer.h.12.ln_1.weight": "model-00004-of-00008.safetensors",
"transformer.h.12.ln_2.weight": "model-00004-of-00008.safetensors",
"transformer.h.12.mlp.c_proj.weight": "model-00004-of-00008.safetensors",
"transformer.h.12.mlp.w1.weight": "model-00004-of-00008.safetensors",
"transformer.h.12.mlp.w2.weight": "model-00004-of-00008.safetensors",
"transformer.h.13.attn.c_attn.bias": "model-00004-of-00008.safetensors",
"transformer.h.13.attn.c_attn.weight": "model-00004-of-00008.safetensors",
"transformer.h.13.attn.c_proj.weight": "model-00004-of-00008.safetensors",
"transformer.h.13.ln_1.weight": "model-00004-of-00008.safetensors",
"transformer.h.13.ln_2.weight": "model-00004-of-00008.safetensors",
"transformer.h.13.mlp.c_proj.weight": "model-00004-of-00008.safetensors",
"transformer.h.13.mlp.w1.weight": "model-00004-of-00008.safetensors",
"transformer.h.13.mlp.w2.weight": "model-00004-of-00008.safetensors",
"transformer.h.14.attn.c_attn.bias": "model-00004-of-00008.safetensors",
"transformer.h.14.attn.c_attn.weight": "model-00004-of-00008.safetensors",
"transformer.h.14.attn.c_proj.weight": "model-00004-of-00008.safetensors",
"transformer.h.14.ln_1.weight": "model-00004-of-00008.safetensors",
"transformer.h.14.ln_2.weight": "model-00004-of-00008.safetensors",
"transformer.h.14.mlp.c_proj.weight": "model-00004-of-00008.safetensors",
"transformer.h.14.mlp.w1.weight": "model-00004-of-00008.safetensors",
"transformer.h.14.mlp.w2.weight": "model-00004-of-00008.safetensors",
"transformer.h.15.attn.c_attn.bias": "model-00004-of-00008.safetensors",
"transformer.h.15.attn.c_attn.weight": "model-00004-of-00008.safetensors",
"transformer.h.15.attn.c_proj.weight": "model-00004-of-00008.safetensors",
"transformer.h.15.ln_1.weight": "model-00004-of-00008.safetensors",
"transformer.h.15.ln_2.weight": "model-00004-of-00008.safetensors",
"transformer.h.15.mlp.c_proj.weight": "model-00004-of-00008.safetensors",
"transformer.h.15.mlp.w1.weight": "model-00004-of-00008.safetensors",
"transformer.h.15.mlp.w2.weight": "model-00004-of-00008.safetensors",
"transformer.h.16.attn.c_attn.bias": "model-00004-of-00008.safetensors",
"transformer.h.16.attn.c_attn.weight": "model-00004-of-00008.safetensors",
"transformer.h.16.attn.c_proj.weight": "model-00004-of-00008.safetensors",
"transformer.h.16.ln_1.weight": "model-00004-of-00008.safetensors",
"transformer.h.16.ln_2.weight": "model-00004-of-00008.safetensors",
"transformer.h.16.mlp.c_proj.weight": "model-00005-of-00008.safetensors",
"transformer.h.16.mlp.w1.weight": "model-00004-of-00008.safetensors",
"transformer.h.16.mlp.w2.weight": "model-00004-of-00008.safetensors",
"transformer.h.17.attn.c_attn.bias": "model-00005-of-00008.safetensors",
"transformer.h.17.attn.c_attn.weight": "model-00005-of-00008.safetensors",
"transformer.h.17.attn.c_proj.weight": "model-00005-of-00008.safetensors",
"transformer.h.17.ln_1.weight": "model-00005-of-00008.safetensors",
"transformer.h.17.ln_2.weight": "model-00005-of-00008.safetensors",
"transformer.h.17.mlp.c_proj.weight": "model-00005-of-00008.safetensors",
"transformer.h.17.mlp.w1.weight": "model-00005-of-00008.safetensors",
"transformer.h.17.mlp.w2.weight": "model-00005-of-00008.safetensors",
"transformer.h.18.attn.c_attn.bias": "model-00005-of-00008.safetensors",
"transformer.h.18.attn.c_attn.weight": "model-00005-of-00008.safetensors",
"transformer.h.18.attn.c_proj.weight": "model-00005-of-00008.safetensors",
"transformer.h.18.ln_1.weight": "model-00005-of-00008.safetensors",
"transformer.h.18.ln_2.weight": "model-00005-of-00008.safetensors",
"transformer.h.18.mlp.c_proj.weight": "model-00005-of-00008.safetensors",
"transformer.h.18.mlp.w1.weight": "model-00005-of-00008.safetensors",
"transformer.h.18.mlp.w2.weight": "model-00005-of-00008.safetensors",
"transformer.h.19.attn.c_attn.bias": "model-00005-of-00008.safetensors",
"transformer.h.19.attn.c_attn.weight": "model-00005-of-00008.safetensors",
"transformer.h.19.attn.c_proj.weight": "model-00005-of-00008.safetensors",
"transformer.h.19.ln_1.weight": "model-00005-of-00008.safetensors",
"transformer.h.19.ln_2.weight": "model-00005-of-00008.safetensors",
"transformer.h.19.mlp.c_proj.weight": "model-00005-of-00008.safetensors",
"transformer.h.19.mlp.w1.weight": "model-00005-of-00008.safetensors",
"transformer.h.19.mlp.w2.weight": "model-00005-of-00008.safetensors",
"transformer.h.2.attn.c_attn.bias": "model-00002-of-00008.safetensors",
"transformer.h.2.attn.c_attn.weight": "model-00002-of-00008.safetensors",
"transformer.h.2.attn.c_proj.weight": "model-00002-of-00008.safetensors",
"transformer.h.2.ln_1.weight": "model-00002-of-00008.safetensors",
"transformer.h.2.ln_2.weight": "model-00002-of-00008.safetensors",
"transformer.h.2.mlp.c_proj.weight": "model-00002-of-00008.safetensors",
"transformer.h.2.mlp.w1.weight": "model-00002-of-00008.safetensors",
"transformer.h.2.mlp.w2.weight": "model-00002-of-00008.safetensors",
"transformer.h.20.attn.c_attn.bias": "model-00005-of-00008.safetensors",
"transformer.h.20.attn.c_attn.weight": "model-00005-of-00008.safetensors",
"transformer.h.20.attn.c_proj.weight": "model-00005-of-00008.safetensors",
"transformer.h.20.ln_1.weight": "model-00005-of-00008.safetensors",
"transformer.h.20.ln_2.weight": "model-00005-of-00008.safetensors",
"transformer.h.20.mlp.c_proj.weight": "model-00005-of-00008.safetensors",
"transformer.h.20.mlp.w1.weight": "model-00005-of-00008.safetensors",
"transformer.h.20.mlp.w2.weight": "model-00005-of-00008.safetensors",
"transformer.h.21.attn.c_attn.bias": "model-00005-of-00008.safetensors",
"transformer.h.21.attn.c_attn.weight": "model-00005-of-00008.safetensors",
"transformer.h.21.attn.c_proj.weight": "model-00005-of-00008.safetensors",
"transformer.h.21.ln_1.weight": "model-00005-of-00008.safetensors",
"transformer.h.21.ln_2.weight": "model-00005-of-00008.safetensors",
"transformer.h.21.mlp.c_proj.weight": "model-00006-of-00008.safetensors",
"transformer.h.21.mlp.w1.weight": "model-00005-of-00008.safetensors",
"transformer.h.21.mlp.w2.weight": "model-00005-of-00008.safetensors",
"transformer.h.22.attn.c_attn.bias": "model-00006-of-00008.safetensors",
"transformer.h.22.attn.c_attn.weight": "model-00006-of-00008.safetensors",
"transformer.h.22.attn.c_proj.weight": "model-00006-of-00008.safetensors",
"transformer.h.22.ln_1.weight": "model-00006-of-00008.safetensors",
"transformer.h.22.ln_2.weight": "model-00006-of-00008.safetensors",
"transformer.h.22.mlp.c_proj.weight": "model-00006-of-00008.safetensors",
"transformer.h.22.mlp.w1.weight": "model-00006-of-00008.safetensors",
"transformer.h.22.mlp.w2.weight": "model-00006-of-00008.safetensors",
"transformer.h.23.attn.c_attn.bias": "model-00006-of-00008.safetensors",
"transformer.h.23.attn.c_attn.weight": "model-00006-of-00008.safetensors",
"transformer.h.23.attn.c_proj.weight": "model-00006-of-00008.safetensors",
"transformer.h.23.ln_1.weight": "model-00006-of-00008.safetensors",
"transformer.h.23.ln_2.weight": "model-00006-of-00008.safetensors",
"transformer.h.23.mlp.c_proj.weight": "model-00006-of-00008.safetensors",
"transformer.h.23.mlp.w1.weight": "model-00006-of-00008.safetensors",
"transformer.h.23.mlp.w2.weight": "model-00006-of-00008.safetensors",
"transformer.h.24.attn.c_attn.bias": "model-00006-of-00008.safetensors",
"transformer.h.24.attn.c_attn.weight": "model-00006-of-00008.safetensors",
"transformer.h.24.attn.c_proj.weight": "model-00006-of-00008.safetensors",
"transformer.h.24.ln_1.weight": "model-00006-of-00008.safetensors",
"transformer.h.24.ln_2.weight": "model-00006-of-00008.safetensors",
"transformer.h.24.mlp.c_proj.weight": "model-00006-of-00008.safetensors",
"transformer.h.24.mlp.w1.weight": "model-00006-of-00008.safetensors",
"transformer.h.24.mlp.w2.weight": "model-00006-of-00008.safetensors",
"transformer.h.25.attn.c_attn.bias": "model-00006-of-00008.safetensors",
"transformer.h.25.attn.c_attn.weight": "model-00006-of-00008.safetensors",
"transformer.h.25.attn.c_proj.weight": "model-00006-of-00008.safetensors",
"transformer.h.25.ln_1.weight": "model-00006-of-00008.safetensors",
"transformer.h.25.ln_2.weight": "model-00006-of-00008.safetensors",
"transformer.h.25.mlp.c_proj.weight": "model-00006-of-00008.safetensors",
"transformer.h.25.mlp.w1.weight": "model-00006-of-00008.safetensors",
"transformer.h.25.mlp.w2.weight": "model-00006-of-00008.safetensors",
"transformer.h.26.attn.c_attn.bias": "model-00006-of-00008.safetensors",
"transformer.h.26.attn.c_attn.weight": "model-00006-of-00008.safetensors",
"transformer.h.26.attn.c_proj.weight": "model-00006-of-00008.safetensors",
"transformer.h.26.ln_1.weight": "model-00006-of-00008.safetensors",
"transformer.h.26.ln_2.weight": "model-00006-of-00008.safetensors",
"transformer.h.26.mlp.c_proj.weight": "model-00007-of-00008.safetensors",
"transformer.h.26.mlp.w1.weight": "model-00006-of-00008.safetensors",
"transformer.h.26.mlp.w2.weight": "model-00006-of-00008.safetensors",
"transformer.h.27.attn.c_attn.bias": "model-00007-of-00008.safetensors",
"transformer.h.27.attn.c_attn.weight": "model-00007-of-00008.safetensors",
"transformer.h.27.attn.c_proj.weight": "model-00007-of-00008.safetensors",
"transformer.h.27.ln_1.weight": "model-00007-of-00008.safetensors",
"transformer.h.27.ln_2.weight": "model-00007-of-00008.safetensors",
"transformer.h.27.mlp.c_proj.weight": "model-00007-of-00008.safetensors",
"transformer.h.27.mlp.w1.weight": "model-00007-of-00008.safetensors",
"transformer.h.27.mlp.w2.weight": "model-00007-of-00008.safetensors",
"transformer.h.28.attn.c_attn.bias": "model-00007-of-00008.safetensors",
"transformer.h.28.attn.c_attn.weight": "model-00007-of-00008.safetensors",
"transformer.h.28.attn.c_proj.weight": "model-00007-of-00008.safetensors",
"transformer.h.28.ln_1.weight": "model-00007-of-00008.safetensors",
"transformer.h.28.ln_2.weight": "model-00007-of-00008.safetensors",
"transformer.h.28.mlp.c_proj.weight": "model-00007-of-00008.safetensors",
"transformer.h.28.mlp.w1.weight": "model-00007-of-00008.safetensors",
"transformer.h.28.mlp.w2.weight": "model-00007-of-00008.safetensors",
"transformer.h.29.attn.c_attn.bias": "model-00007-of-00008.safetensors",
"transformer.h.29.attn.c_attn.weight": "model-00007-of-00008.safetensors",
"transformer.h.29.attn.c_proj.weight": "model-00007-of-00008.safetensors",
"transformer.h.29.ln_1.weight": "model-00007-of-00008.safetensors",
"transformer.h.29.ln_2.weight": "model-00007-of-00008.safetensors",
"transformer.h.29.mlp.c_proj.weight": "model-00007-of-00008.safetensors",
"transformer.h.29.mlp.w1.weight": "model-00007-of-00008.safetensors",
"transformer.h.29.mlp.w2.weight": "model-00007-of-00008.safetensors",
"transformer.h.3.attn.c_attn.bias": "model-00002-of-00008.safetensors",
"transformer.h.3.attn.c_attn.weight": "model-00002-of-00008.safetensors",
"transformer.h.3.attn.c_proj.weight": "model-00002-of-00008.safetensors",
"transformer.h.3.ln_1.weight": "model-00002-of-00008.safetensors",
"transformer.h.3.ln_2.weight": "model-00002-of-00008.safetensors",
"transformer.h.3.mlp.c_proj.weight": "model-00002-of-00008.safetensors",
"transformer.h.3.mlp.w1.weight": "model-00002-of-00008.safetensors",
"transformer.h.3.mlp.w2.weight": "model-00002-of-00008.safetensors",
"transformer.h.30.attn.c_attn.bias": "model-00007-of-00008.safetensors",
"transformer.h.30.attn.c_attn.weight": "model-00007-of-00008.safetensors",
"transformer.h.30.attn.c_proj.weight": "model-00007-of-00008.safetensors",
"transformer.h.30.ln_1.weight": "model-00007-of-00008.safetensors",
"transformer.h.30.ln_2.weight": "model-00007-of-00008.safetensors",
"transformer.h.30.mlp.c_proj.weight": "model-00007-of-00008.safetensors",
"transformer.h.30.mlp.w1.weight": "model-00007-of-00008.safetensors",
"transformer.h.30.mlp.w2.weight": "model-00007-of-00008.safetensors",
"transformer.h.31.attn.c_attn.bias": "model-00007-of-00008.safetensors",
"transformer.h.31.attn.c_attn.weight": "model-00007-of-00008.safetensors",
"transformer.h.31.attn.c_proj.weight": "model-00007-of-00008.safetensors",
"transformer.h.31.ln_1.weight": "model-00007-of-00008.safetensors",
"transformer.h.31.ln_2.weight": "model-00007-of-00008.safetensors",
"transformer.h.31.mlp.c_proj.weight": "model-00008-of-00008.safetensors",
"transformer.h.31.mlp.w1.weight": "model-00007-of-00008.safetensors",
"transformer.h.31.mlp.w2.weight": "model-00007-of-00008.safetensors",
"transformer.h.4.attn.c_attn.bias": "model-00002-of-00008.safetensors",
"transformer.h.4.attn.c_attn.weight": "model-00002-of-00008.safetensors",
"transformer.h.4.attn.c_proj.weight": "model-00002-of-00008.safetensors",
"transformer.h.4.ln_1.weight": "model-00002-of-00008.safetensors",
"transformer.h.4.ln_2.weight": "model-00002-of-00008.safetensors",
"transformer.h.4.mlp.c_proj.weight": "model-00002-of-00008.safetensors",
"transformer.h.4.mlp.w1.weight": "model-00002-of-00008.safetensors",
"transformer.h.4.mlp.w2.weight": "model-00002-of-00008.safetensors",
"transformer.h.5.attn.c_attn.bias": "model-00002-of-00008.safetensors",
"transformer.h.5.attn.c_attn.weight": "model-00002-of-00008.safetensors",
"transformer.h.5.attn.c_proj.weight": "model-00002-of-00008.safetensors",
"transformer.h.5.ln_1.weight": "model-00002-of-00008.safetensors",
"transformer.h.5.ln_2.weight": "model-00002-of-00008.safetensors",
"transformer.h.5.mlp.c_proj.weight": "model-00002-of-00008.safetensors",
"transformer.h.5.mlp.w1.weight": "model-00002-of-00008.safetensors",
"transformer.h.5.mlp.w2.weight": "model-00002-of-00008.safetensors",
"transformer.h.6.attn.c_attn.bias": "model-00002-of-00008.safetensors",
"transformer.h.6.attn.c_attn.weight": "model-00002-of-00008.safetensors",
"transformer.h.6.attn.c_proj.weight": "model-00002-of-00008.safetensors",
"transformer.h.6.ln_1.weight": "model-00002-of-00008.safetensors",
"transformer.h.6.ln_2.weight": "model-00002-of-00008.safetensors",
"transformer.h.6.mlp.c_proj.weight": "model-00003-of-00008.safetensors",
"transformer.h.6.mlp.w1.weight": "model-00002-of-00008.safetensors",
"transformer.h.6.mlp.w2.weight": "model-00002-of-00008.safetensors",
"transformer.h.7.attn.c_attn.bias": "model-00003-of-00008.safetensors",
"transformer.h.7.attn.c_attn.weight": "model-00003-of-00008.safetensors",
"transformer.h.7.attn.c_proj.weight": "model-00003-of-00008.safetensors",
"transformer.h.7.ln_1.weight": "model-00003-of-00008.safetensors",
"transformer.h.7.ln_2.weight": "model-00003-of-00008.safetensors",
"transformer.h.7.mlp.c_proj.weight": "model-00003-of-00008.safetensors",
"transformer.h.7.mlp.w1.weight": "model-00003-of-00008.safetensors",
"transformer.h.7.mlp.w2.weight": "model-00003-of-00008.safetensors",
"transformer.h.8.attn.c_attn.bias": "model-00003-of-00008.safetensors",
"transformer.h.8.attn.c_attn.weight": "model-00003-of-00008.safetensors",
"transformer.h.8.attn.c_proj.weight": "model-00003-of-00008.safetensors",
"transformer.h.8.ln_1.weight": "model-00003-of-00008.safetensors",
"transformer.h.8.ln_2.weight": "model-00003-of-00008.safetensors",
"transformer.h.8.mlp.c_proj.weight": "model-00003-of-00008.safetensors",
"transformer.h.8.mlp.w1.weight": "model-00003-of-00008.safetensors",
"transformer.h.8.mlp.w2.weight": "model-00003-of-00008.safetensors",
"transformer.h.9.attn.c_attn.bias": "model-00003-of-00008.safetensors",
"transformer.h.9.attn.c_attn.weight": "model-00003-of-00008.safetensors",
"transformer.h.9.attn.c_proj.weight": "model-00003-of-00008.safetensors",
"transformer.h.9.ln_1.weight": "model-00003-of-00008.safetensors",
"transformer.h.9.ln_2.weight": "model-00003-of-00008.safetensors",
"transformer.h.9.mlp.c_proj.weight": "model-00003-of-00008.safetensors",
"transformer.h.9.mlp.w1.weight": "model-00003-of-00008.safetensors",
"transformer.h.9.mlp.w2.weight": "model-00003-of-00008.safetensors",
"transformer.ln_f.weight": "model-00008-of-00008.safetensors",
"transformer.wte.weight": "model-00001-of-00008.safetensors"
}
}
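
The index above is what lets a loader open only the shards it needs: metadata.total_size is the summed byte size of all shards, and weight_map sends each tensor name to the shard file that stores it. A minimal sketch, assuming the index and shards sit in the current directory:

import json
from collections import defaultdict
from safetensors.torch import load_file

with open("model.safetensors.index.json") as f:
    index = json.load(f)

# Invert the map: which tensors live in which shard file.
shard_to_tensors = defaultdict(list)
for name, shard in index["weight_map"].items():
    shard_to_tensors[shard].append(name)

# Resolve one tensor and load only the shard that holds it.
name = "transformer.wte.weight"
shard = index["weight_map"][name]      # "model-00001-of-00008.safetensors"
embedding = load_file(shard)[name]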

1363
modeling_qwen.py Normal file

File diff suppressed because it is too large

151643
qwen.tiktoken Normal file

File diff suppressed because it is too large

416
qwen_generation_utils.py Normal file

@@ -0,0 +1,416 @@
# Copyright (c) Alibaba Cloud.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
"""Generation support."""
from typing import Tuple, List, Union, Iterable
import numpy as np
import torch
import torch.nn.functional as F
from transformers import PreTrainedTokenizer
from transformers import logging
from transformers.generation import LogitsProcessor
logger = logging.get_logger(__name__)
# Types.
HistoryType = List[Tuple[str, str]]
TokensType = List[int]
BatchTokensType = List[List[int]]
def pad_batch(batch: BatchTokensType, pad_id: int, seq_length: int) -> BatchTokensType:
for tokens in batch:
context_length = len(tokens)
if context_length < seq_length:
tokens.extend([pad_id] * (seq_length - context_length))
return batch
def get_ltor_masks_and_position_ids(
data,
eod_token,
reset_position_ids,
reset_attention_mask,
eod_mask_loss,
):
"""Build masks and position id for left to right model."""
# Extract batch size and sequence length.
micro_batch_size, seq_length = data.size()
# Attention mask (lower triangular).
if reset_attention_mask:
att_mask_batch = micro_batch_size
else:
att_mask_batch = 1
attention_mask = torch.tril(
torch.ones((att_mask_batch, seq_length, seq_length), device=data.device)
).view(att_mask_batch, 1, seq_length, seq_length)
# Loss mask.
loss_mask = torch.ones(data.size(), dtype=torch.float, device=data.device)
if eod_mask_loss:
loss_mask[data == eod_token] = 0.0
# Position ids.
position_ids = torch.arange(seq_length, dtype=torch.long, device=data.device)
position_ids = position_ids.unsqueeze(0).expand_as(data)
# We need to clone as the ids will be modified based on batch index.
if reset_position_ids:
position_ids = position_ids.clone()
if reset_position_ids or reset_attention_mask:
# Loop through the batches:
for b in range(micro_batch_size):
# Find indices where the EOD token is.
eod_index = position_ids[b, data[b] == eod_token]
# Detach indices from positions if going to modify positions.
if reset_position_ids:
eod_index = eod_index.clone()
# Loop through EOD indices:
prev_index = 0
for j in range(eod_index.size()[0]):
i = eod_index[j]
# Mask attention loss.
if reset_attention_mask:
attention_mask[b, 0, (i + 1) :, : (i + 1)] = 0
# Reset positions.
if reset_position_ids:
position_ids[b, (i + 1) :] -= i + 1 - prev_index
prev_index = i + 1
# Convert attention mask to binary (True marks positions to be masked out):
attention_mask = attention_mask < 0.5
return attention_mask, loss_mask, position_ids
def get_batch(context_tokens: torch.LongTensor, eod_id: int):
"""Generate batch from context tokens."""
# Keep tokens contiguous on their current device.
tokens = context_tokens.contiguous().to(context_tokens.device)
# Get the attention mask and position ids.
attention_mask, _, position_ids = get_ltor_masks_and_position_ids(
tokens,
eod_id,
reset_position_ids=False,
reset_attention_mask=False,
eod_mask_loss=False,
)
return tokens, attention_mask, position_ids
def get_stop_words_ids(chat_format, tokenizer):
if chat_format == "raw":
stop_words_ids = [tokenizer.encode("Human:"), [tokenizer.eod_id]]
elif chat_format == "chatml":
stop_words_ids = [[tokenizer.im_end_id], [tokenizer.im_start_id]]
else:
raise NotImplementedError(f"Unknown chat format {chat_format!r}")
return stop_words_ids
def make_context(
tokenizer: PreTrainedTokenizer,
query: str,
history: List[Tuple[str, str]] = None,
system: str = "",
max_window_size: int = 6144,
chat_format: str = "chatml",
):
if history is None:
history = []
if chat_format == "chatml":
im_start, im_end = "<|im_start|>", "<|im_end|>"
im_start_tokens = [tokenizer.im_start_id]
im_end_tokens = [tokenizer.im_end_id]
nl_tokens = tokenizer.encode("\n")
def _tokenize_str(role, content):
return f"{role}\n{content}", tokenizer.encode(
role, allowed_special=set()
) + nl_tokens + tokenizer.encode(content, allowed_special=set())
system_text, system_tokens_part = _tokenize_str("system", system)
system_tokens = im_start_tokens + system_tokens_part + im_end_tokens
raw_text = ""
context_tokens = []
for turn_query, turn_response in reversed(history):
query_text, query_tokens_part = _tokenize_str("user", turn_query)
query_tokens = im_start_tokens + query_tokens_part + im_end_tokens
response_text, response_tokens_part = _tokenize_str(
"assistant", turn_response
)
response_tokens = im_start_tokens + response_tokens_part + im_end_tokens
next_context_tokens = nl_tokens + query_tokens + nl_tokens + response_tokens
prev_chat = (
f"\n{im_start}{query_text}{im_end}\n{im_start}{response_text}{im_end}"
)
current_context_size = (
len(system_tokens) + len(next_context_tokens) + len(context_tokens)
)
if current_context_size < max_window_size:
context_tokens = next_context_tokens + context_tokens
raw_text = prev_chat + raw_text
else:
break
context_tokens = system_tokens + context_tokens
raw_text = f"{im_start}{system_text}{im_end}" + raw_text
context_tokens += (
nl_tokens
+ im_start_tokens
+ _tokenize_str("user", query)[1]
+ im_end_tokens
+ nl_tokens
+ im_start_tokens
+ tokenizer.encode("assistant")
+ nl_tokens
)
raw_text += f"\n{im_start}user\n{query}{im_end}\n{im_start}assistant\n"
elif chat_format == "raw":
raw_text = query
context_tokens = tokenizer.encode(raw_text)
else:
raise NotImplementedError(f"Unknown chat format {chat_format!r}")
return raw_text, context_tokens
def _decode_default(
tokens: List[int],
*,
stop_words: List[str],
eod_words: List[str],
tokenizer: PreTrainedTokenizer,
raw_text_len: int,
verbose: bool = False,
return_end_reason: bool = False,
errors: str='replace',
):
trim_decode_tokens = tokenizer.decode(tokens, errors=errors)[raw_text_len:]
if verbose:
print("\nRaw Generate: ", trim_decode_tokens)
end_reason = f"Gen length {len(tokens)}"
for stop_word in stop_words:
trim_decode_tokens = trim_decode_tokens.replace(stop_word, "").strip()
for eod_word in eod_words:
if eod_word in trim_decode_tokens:
end_reason = f"Gen {eod_word!r}"
trim_decode_tokens = trim_decode_tokens.split(eod_word)[0]
trim_decode_tokens = trim_decode_tokens.strip()
if verbose:
print("\nEnd Reason:", end_reason)
print("\nGenerate: ", trim_decode_tokens)
if return_end_reason:
return trim_decode_tokens, end_reason
else:
return trim_decode_tokens
def _decode_chatml(
tokens: List[int],
*,
stop_words: List[str],
eod_token_ids: List[int],
tokenizer: PreTrainedTokenizer,
raw_text_len: int,
context_length: int,
verbose: bool = False,
return_end_reason: bool = False,
errors: str='replace'
):
end_reason = f"Gen length {len(tokens)}"
eod_token_idx = context_length
for eod_token_idx in range(context_length, len(tokens)):
if tokens[eod_token_idx] in eod_token_ids:
end_reason = f"Gen {tokenizer.decode([tokens[eod_token_idx]])!r}"
break
trim_decode_tokens = tokenizer.decode(tokens[:eod_token_idx], errors=errors)[raw_text_len:]
if verbose:
print("\nRaw Generate w/o EOD:", tokenizer.decode(tokens, errors=errors)[raw_text_len:])
print("\nRaw Generate:", trim_decode_tokens)
print("\nEnd Reason:", end_reason)
for stop_word in stop_words:
trim_decode_tokens = trim_decode_tokens.replace(stop_word, "").strip()
trim_decode_tokens = trim_decode_tokens.strip()
if verbose:
print("\nGenerate:", trim_decode_tokens)
if return_end_reason:
return trim_decode_tokens, end_reason
else:
return trim_decode_tokens
def decode_tokens(
tokens: Union[torch.LongTensor, TokensType],
tokenizer: PreTrainedTokenizer,
raw_text_len: int,
context_length: int,
chat_format: str,
verbose: bool = False,
return_end_reason: bool = False,
errors: str="replace",
) -> str:
if torch.is_tensor(tokens):
tokens = tokens.cpu().numpy().tolist()
if chat_format == "chatml":
return _decode_chatml(
tokens,
stop_words=[],
eod_token_ids=[tokenizer.im_start_id, tokenizer.im_end_id],
tokenizer=tokenizer,
raw_text_len=raw_text_len,
context_length=context_length,
verbose=verbose,
return_end_reason=return_end_reason,
errors=errors,
)
elif chat_format == "raw":
return _decode_default(
tokens,
stop_words=["<|endoftext|>"],
eod_words=["<|endoftext|>"],
tokenizer=tokenizer,
raw_text_len=raw_text_len,
verbose=verbose,
return_end_reason=return_end_reason,
errors=errors,
)
else:
raise NotImplementedError(f"Unknown chat format {chat_format!r}")
class StopWordsLogitsProcessor(LogitsProcessor):
"""
:class:`transformers.LogitsProcessor` that forces generation to stop when any of the specified stop-word sequences appears.
Args:
stop_words_ids (:obj:`List[List[int]]`):
List of token-id lists for the stop words. In order to get the token ids of the words
that should stop generation, use :obj:`tokenizer(stop_word,
add_prefix_space=True).input_ids`.
eos_token_id (:obj:`int`):
The id of the `end-of-sequence` token.
"""
def __init__(self, stop_words_ids: Iterable[Iterable[int]], eos_token_id: int):
if not isinstance(stop_words_ids, List) or len(stop_words_ids) == 0:
raise ValueError(
f"`stop_words_ids` has to be a non-emtpy list, but is {stop_words_ids}."
)
if any(not isinstance(bad_word_ids, list) for bad_word_ids in stop_words_ids):
raise ValueError(
f"`stop_words_ids` has to be a list of lists, but is {stop_words_ids}."
)
if any(
any(
(not isinstance(token_id, (int, np.integer)) or token_id < 0)
for token_id in stop_word_ids
)
for stop_word_ids in stop_words_ids
):
raise ValueError(
f"Each list in `stop_words_ids` has to be a list of positive integers, but is {stop_words_ids}."
)
self.stop_words_ids = list(
filter(
lambda bad_token_seq: bad_token_seq != [eos_token_id], stop_words_ids
)
)
self.eos_token_id = eos_token_id
for stop_token_seq in self.stop_words_ids:
assert (
len(stop_token_seq) > 0
), "Stop words token sequences {} cannot have an empty list".format(
stop_words_ids
)
def __call__(
self, input_ids: torch.LongTensor, scores: torch.FloatTensor
) -> torch.FloatTensor:
stopped_samples = self._calc_stopped_samples(input_ids)
for i, should_stop in enumerate(stopped_samples):
if should_stop:
scores[i, self.eos_token_id] = float(2**15)
return scores
def _tokens_match(self, prev_tokens: torch.LongTensor, tokens: List[int]) -> bool:
if len(tokens) == 0:
# an empty stop sequence matches anything
return True
elif len(tokens) > len(prev_tokens):
# a stop sequence longer than the generated prefix cannot match
return False
elif prev_tokens[-len(tokens) :].tolist() == tokens:
# if tokens match
return True
else:
return False
def _calc_stopped_samples(self, prev_input_ids: Iterable[int]) -> Iterable[int]:
stopped_samples = []
for prev_input_ids_slice in prev_input_ids:
match = False
for stop_token_seq in self.stop_words_ids:
if self._tokens_match(prev_input_ids_slice, stop_token_seq):
# a stop sequence matched; mark this sample as stopped
match = True
break
stopped_samples.append(match)
return stopped_samples
def top_k_logits(logits, top_k=0, top_p=0.0, filter_value=-float("Inf")):
"""This function has been mostly taken from huggingface conversational
ai code at
https://medium.com/huggingface/how-to-build-a-state-of-the-art-
conversational-ai-with-transfer-learning-2d818ac26313"""
if top_k > 0:
# Remove all tokens with a probability less than the
# last token of the top-k
indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
logits[indices_to_remove] = filter_value
if top_p > 0.0:
# Sort logits in descending order to compute cumulative probabilities
sorted_logits, sorted_indices = torch.sort(logits, descending=True, dim=-1)
cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
# Remove tokens with cumulative probability above the threshold
sorted_indices_to_remove = cumulative_probs > top_p
# Shift the indices to the right to keep also the first token
# above the threshold
sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
sorted_indices_to_remove[..., 0] = 0
for i in range(sorted_indices.size(0)):
indices_to_remove = sorted_indices[i][sorted_indices_to_remove[i]]
logits[i][indices_to_remove] = filter_value
return logits
def switch(val1, val2, boolean):
boolean = boolean.type_as(val1)
return (1 - boolean) * val1 + boolean * val2
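
A self-contained sketch of the sampling helper above: top_k_logits masks everything outside the top-k tokens and the top-p nucleus with -inf, so softmax assigns the surviving tokens all of the probability mass. The batch and vocabulary sizes below are placeholders.

import torch
from qwen_generation_utils import top_k_logits

logits = torch.randn(2, 151936)               # (batch, vocab) dummy scores
filtered = top_k_logits(logits.clone(), top_k=50, top_p=0.9)  # filters in place, hence clone()
probs = torch.softmax(filtered, dim=-1)       # masked entries get probability 0
next_token = torch.multinomial(probs, num_samples=1)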

276
tokenization_qwen.py Normal file

@@ -0,0 +1,276 @@
# Copyright (c) Alibaba Cloud.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
"""Tokenization classes for QWen."""
import base64
import logging
import os
import unicodedata
from typing import Collection, Dict, List, Set, Tuple, Union
import tiktoken
from transformers import PreTrainedTokenizer, AddedToken
logger = logging.getLogger(__name__)
VOCAB_FILES_NAMES = {"vocab_file": "qwen.tiktoken"}
PAT_STR = r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"""
ENDOFTEXT = "<|endoftext|>"
IMSTART = "<|im_start|>"
IMEND = "<|im_end|>"
# as the default behavior is changed to allow special tokens in
# regular texts, the surface forms of special tokens need to be
# as different as possible to minimize the impact
EXTRAS = tuple((f"<|extra_{i}|>" for i in range(205)))
# changed to use actual index to avoid misconfiguration with vocabulary expansion
SPECIAL_START_ID = 151643
SPECIAL_TOKENS = tuple(
enumerate(
(
(
ENDOFTEXT,
IMSTART,
IMEND,
)
+ EXTRAS
),
start=SPECIAL_START_ID,
)
)
SPECIAL_TOKENS_SET = set(t for i, t in SPECIAL_TOKENS)
def _load_tiktoken_bpe(tiktoken_bpe_file: str) -> Dict[bytes, int]:
with open(tiktoken_bpe_file, "rb") as f:
contents = f.read()
return {
base64.b64decode(token): int(rank)
for token, rank in (line.split() for line in contents.splitlines() if line)
}
class QWenTokenizer(PreTrainedTokenizer):
"""QWen tokenizer."""
vocab_files_names = VOCAB_FILES_NAMES
def __init__(
self,
vocab_file,
errors="replace",
extra_vocab_file=None,
**kwargs,
):
super().__init__(**kwargs)
# how to handle errors in decoding UTF-8 byte sequences
# use ignore if you are in streaming inference
self.errors = errors
self.mergeable_ranks = _load_tiktoken_bpe(vocab_file) # type: Dict[bytes, int]
self.special_tokens = {
token: index
for index, token in SPECIAL_TOKENS
}
# try to load an extra vocab from file
if extra_vocab_file is not None:
used_ids = set(self.mergeable_ranks.values()) | set(self.special_tokens.values())
extra_mergeable_ranks = _load_tiktoken_bpe(extra_vocab_file)
for token, index in extra_mergeable_ranks.items():
if token in self.mergeable_ranks:
logger.info(f"extra token {token} exists, skipping")
continue
if index in used_ids:
logger.info(f'the index {index} for extra token {token} exists, skipping')
continue
self.mergeable_ranks[token] = index
# the index may be sparse after this, but tiktoken.Encoding will handle it
enc = tiktoken.Encoding(
"Qwen",
pat_str=PAT_STR,
mergeable_ranks=self.mergeable_ranks,
special_tokens=self.special_tokens,
)
assert (
len(self.mergeable_ranks) + len(self.special_tokens) == enc.n_vocab
), f"{len(self.mergeable_ranks) + len(self.special_tokens)} != {enc.n_vocab} in encoding"
self.decoder = {
v: k for k, v in self.mergeable_ranks.items()
} # type: dict[int, bytes|str]
self.decoder.update({v: k for k, v in self.special_tokens.items()})
self.tokenizer = enc # type: tiktoken.Encoding
self.eod_id = self.tokenizer.eot_token
self.im_start_id = self.special_tokens[IMSTART]
self.im_end_id = self.special_tokens[IMEND]
def __getstate__(self):
# for pickle lovers
state = self.__dict__.copy()
del state["tokenizer"]
return state
def __setstate__(self, state):
# tokenizer is not python native; don't pass it; rebuild it
self.__dict__.update(state)
enc = tiktoken.Encoding(
"Qwen",
pat_str=PAT_STR,
mergeable_ranks=self.mergeable_ranks,
special_tokens=self.special_tokens,
)
self.tokenizer = enc
def __len__(self) -> int:
return self.tokenizer.n_vocab
def get_vocab(self) -> Dict[bytes, int]:
return self.mergeable_ranks
def convert_tokens_to_ids(
self, tokens: Union[bytes, str, List[Union[bytes, str]]]
) -> List[int]:
ids = []
if isinstance(tokens, (str, bytes)):
if tokens in self.special_tokens:
return self.special_tokens[tokens]
else:
return self.mergeable_ranks.get(tokens)
for token in tokens:
if token in self.special_tokens:
ids.append(self.special_tokens[token])
else:
ids.append(self.mergeable_ranks.get(token))
return ids
def _add_tokens(
self,
new_tokens: Union[List[str], List[AddedToken]],
special_tokens: bool = False,
) -> int:
if not special_tokens and new_tokens:
raise ValueError("Adding regular tokens is not supported")
for token in new_tokens:
surface_form = token.content if isinstance(token, AddedToken) else token
if surface_form not in SPECIAL_TOKENS_SET:
raise ValueError("Adding unknown special tokens is not supported")
return 0
def save_vocabulary(self, save_directory: str, **kwargs) -> Tuple[str]:
"""
Save only the vocabulary of the tokenizer.
Returns:
`Tuple(str)`: Paths to the files saved.
"""
file_path = os.path.join(save_directory, "qwen.tiktoken")
with open(file_path, "w", encoding="utf8") as w:
for k, v in self.mergeable_ranks.items():
line = base64.b64encode(k).decode("utf8") + " " + str(v) + "\n"
w.write(line)
return (file_path,)
def tokenize(
self,
text: str,
allowed_special: Union[Set, str] = "all",
disallowed_special: Union[Collection, str] = (),
**kwargs,
) -> List[Union[bytes, str]]:
"""
Converts a string into a sequence of tokens.
Args:
text (`str`):
The sequence to be encoded.
allowed_special (`Literal["all"]` or `set`):
The surface forms of the tokens to be encoded as special tokens in regular texts.
Default to "all".
disallowed_special (`Literal["all"]` or `Collection`):
The surface forms of the tokens that should not be in regular texts and trigger errors.
Defaults to an empty tuple.
kwargs (additional keyword arguments, *optional*):
Will be passed to the underlying model specific encode method.
Returns:
`List[bytes|str]`: The list of tokens.
"""
tokens = []
text = unicodedata.normalize("NFC", text)
# this implementation takes a detour: text -> token id -> token surface forms
for t in self.tokenizer.encode(
text, allowed_special=allowed_special, disallowed_special=disallowed_special
):
tokens.append(self.decoder[t])
return tokens
def convert_tokens_to_string(self, tokens: List[Union[bytes, str]]) -> str:
"""
Converts a sequence of tokens into a single string.
"""
text = ""
temp = b""
for t in tokens:
if isinstance(t, str):
if temp:
text += temp.decode("utf-8", errors=self.errors)
temp = b""
text += t
elif isinstance(t, bytes):
temp += t
else:
raise TypeError("token should only be of type bytes or str")
if temp:
text += temp.decode("utf-8", errors=self.errors)
return text
@property
def vocab_size(self):
return self.tokenizer.n_vocab
def _convert_id_to_token(self, index: int) -> Union[bytes, str]:
"""Converts an id to a token, special tokens included"""
if index in self.decoder:
return self.decoder[index]
raise ValueError("unknown ids")
def _convert_token_to_id(self, token: Union[bytes, str]) -> int:
"""Converts a token to an id using the vocab, special tokens included"""
if token in self.special_tokens:
return self.special_tokens[token]
if token in self.mergeable_ranks:
return self.mergeable_ranks[token]
raise ValueError("unknown token")
def _tokenize(self, text: str, **kwargs):
"""
Converts a string into a sequence of tokens (string), using the tokenizer. Splits into words for word-based
vocabulary or sub-words for sub-word-based vocabularies (BPE/SentencePiece/WordPiece).
Does NOT take care of added tokens.
"""
raise NotImplementedError
def _decode(
self,
token_ids: Union[int, List[int]],
skip_special_tokens: bool = False,
errors: str = None,
**kwargs,
) -> str:
if isinstance(token_ids, int):
token_ids = [token_ids]
if skip_special_tokens:
token_ids = [i for i in token_ids if i < self.eod_id]
return self.tokenizer.decode(token_ids, errors=errors or self.errors)
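
A minimal usage sketch for the class above, assuming qwen.tiktoken sits alongside it: tokenize returns surface forms (bytes for ordinary tokens, str for special ones), and convert_tokens_to_string reassembles them into text.

from tokenization_qwen import QWenTokenizer

tok = QWenTokenizer("qwen.tiktoken")
tokens = tok.tokenize("Hello, 通义千问!")      # NFC-normalized, byte-level surface forms
ids = tok.convert_tokens_to_ids(tokens)        # ranks from qwen.tiktoken
assert tok.convert_tokens_to_string(tokens) == "Hello, 通义千问!"
print(len(tok), tok.eod_id)                    # vocab size and the <|endoftext|> id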

10
tokenizer_config.json Normal file

@@ -0,0 +1,10 @@
{
"model_max_length": 32768,
"tokenizer_class": "QWenTokenizer",
"auto_map": {
"AutoTokenizer": [
"tokenization_qwen.QWenTokenizer",
null
]
}
}
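
The auto_map entry is what allows transformers to resolve the custom tokenizer class from tokenization_qwen.py when remote code is trusted. A hedged sketch, assuming the repository has been cloned into the current directory:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(".", trust_remote_code=True)
print(type(tok).__name__)      # QWenTokenizer, resolved via auto_map
print(tok.model_max_length)    # 32768, from this config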