diff --git a/.gitattributes b/.gitattributes
index 7bc225d..11561fa 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -31,4 +31,25 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
-*.ckpt filter=lfs diff=lfs merge=lfs -text
\ No newline at end of file
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+model-00001-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00003-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00005-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00006-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00014-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00004-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00011-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00012-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00017-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00020-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00002-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00009-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00015-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00021-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00007-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00008-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00010-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00013-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00016-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00018-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00019-of-00021.safetensors filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
index 7569426..dbd9216 100644
--- a/README.md
+++ b/README.md
@@ -1,13 +1,133 @@
---
-frameworks:
-- Pytorch
-license: Apache License 2.0
-tasks:
-- text-generation
+pipeline_tag: text-generation
+license: other
+language:
+- en
+- zh
+tags:
+- math
---
-###### 该模型当前使用的是默认介绍模版,处于“预发布”阶段,页面仅限所有者可见。
-###### 请根据[模型贡献文档说明](https://www.modelscope.cn/docs/%E5%A6%82%E4%BD%95%E6%92%B0%E5%86%99%E5%A5%BD%E7%94%A8%E7%9A%84%E6%A8%A1%E5%9E%8B%E5%8D%A1%E7%89%87),及时完善模型卡片内容。ModelScope平台将在模型卡片完善后展示。谢谢您的理解。
-#### Clone with HTTP
-```bash
- git clone https://www.modelscope.cn/Shanghai_AI_Laboratory/internlm2-math-20b.git
-```
\ No newline at end of file
+
+# InternLM-Math
+
+State-of-the-art bilingual open-source math reasoning LLMs.
+
+
+# Introduction
+- **7B and 20B Chinese and English math LMs with better-than-ChatGPT performance.** The InternLM2-Math models are continue-pretrained from InternLM2-Base on ~100B high-quality math-related tokens and then SFT-ed on ~2M bilingual math supervised examples. We apply MinHash and exact number matching to decontaminate possible test set leakage.
+- **Lean is added as a supported language for math problem solving and theorem proving.** We are exploring how to combine Lean 3 with InternLM-Math for verifiable math reasoning. InternLM-Math can generate Lean 3 code for simple math reasoning tasks like GSM8K, or suggest possible proof tactics based on Lean states.
+- **It can also be used as a reward model, supporting outcome, process, and Lean reward modeling.** We supervise InternLM2-Math with various types of reward modeling data so that it can also verify chain-of-thought processes. We also add the ability to convert a chain-of-thought process into Lean 3 code.
+- **A math problem augmentation helper and code interpreter.** InternLM2-Math can help augment math reasoning problems and solve them with the code interpreter, which lets you generate synthetic data more quickly!
+
+# Models
+| Model | Transformers(HF) |Release Date |
+|---|---|---|
+| **InternLM2-Math-Base-7B** | [🤗internlm/internlm2-math-base-7b](https://huggingface.co/internlm/internlm2-math-base-7b) | 2024-01-23|
+| **InternLM2-Math-Base-20B** | [🤗internlm/internlm2-math-base-20b](https://huggingface.co/internlm/internlm2-math-base-20b) | 2024-01-23|
+| **InternLM2-Math-7B** | [🤗internlm/internlm2-math-7b](https://huggingface.co/internlm/internlm2-math-7b) | 2024-01-23|
+| **InternLM2-Math-20B** | [🤗internlm/internlm2-math-20b](https://huggingface.co/internlm/internlm2-math-20b) | 2024-01-23|
+
+
+# Performance
+
+## Pretrain Performance
+We evaluate pretrained checkpoints with greedy decoding and few-shot CoT. Pretraining details will be introduced in the tech report.
+| Model | GSM8K | MATH |
+|------------------------|---------|--------|
+| Llama2-7B | 11.8 | 3.2 |
+| Llemma-7B | 36.4 | 18.0 |
+| InternLM2-Base-7B | 36.5 | 8.6 |
+| **InternLM2-Math-Base-7B** | **49.2** | **21.5** |
+| Minerva-8B | 16.2 | 14.1 |
+| InternLM2-Base-20B | 54.6 | 13.7 |
+| **InternLM2-Math-Base-20B** | **63.7** | **27.3** |
+| Llemma-34B | 51.5 | 25.0 |
+| Minerva-62B | 52.4 | 27.6 |
+| Minerva-540B | 58.8 | 33.6 |
+
+
+## SFT Performance
+All results are based on greedy decoding with CoT. We notice that performance on the Hungarian national exam varies considerably across our checkpoints, while the other benchmarks are very stable; this is likely due to the small number of Hungarian exam problems.
+| Model | Model Type | GSM8K | MATH | Hungary |
+|------------------------|----------------------|--------|--------|---------|
+| Qwen-7B-Chat | General | 51.7 | 11.6 | - |
+| DeepSeek-7B-Chat | General | 63.0 | 15.8 | 28.5 |
+| InternLM2-Chat-7B | General | 70.7 | 23.0 | - |
+| ChatGLM3-6B | General | 53.8 | 20.4 | 32 |
+| MetaMath-Mistral-7B | Mathematics | 77.7 | 28.2 | 29 |
+| MetaMath-Llemma-7B | Mathematics | 69.2 | 30.0 | - |
+| **InternLM2-Math-7B** | Mathematics | **78.1** | **34.6** | **55** |
+| InternLM2-Chat-20B | General | 79.6 | 31.9 | - |
+| MetaMath-Llemma-34B | Mathematics | 75.8 | 34.8 | - |
+| **InternLM2-Math-20B** | Mathematics | **82.6** | **37.7** | **66** |
+| Qwen-72B | General | 78.9 | 35.2 | 52 |
+| DeepSeek-67B | General | 84.1 | 32.6 | 58 |
+| ChatGPT (GPT-3.5) | General | 80.8 | 34.1 | 41 |
+| GPT4 (First version) | General | 92.0 | 42.5 | 68 |
+
+# Inference
+
+```python
+from modelscope import snapshot_download, AutoTokenizer, AutoModelForCausalLM
+import torch
+
+model_dir = snapshot_download("Shanghai_AI_Laboratory/internlm2-math-20b")
+tokenizer = AutoTokenizer.from_pretrained(model_dir, device_map="auto", trust_remote_code=True)
+# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error.
+model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, torch_dtype=torch.float16)
+model = model.eval()
+response, history = model.chat(tokenizer, "1+1=", history=[], meta_instruction="")
+print(response)
+```
+
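+For token-by-token output, other InternLM2 chat checkpoints expose a `stream_chat` generator through their remote code. A minimal sketch, assuming the bundled `modeling_internlm2.py` provides the same interface (each yield carries the accumulated response so far):
+
+```python
+# Streaming sketch; assumes `stream_chat` follows the usual InternLM2 chat interface.
+printed = 0
+for response, history in model.stream_chat(tokenizer, "1+1=", history=[]):
+    print(response[printed:], end="", flush=True)  # print only the newly generated part
+    printed = len(response)
+print()
+```
+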
+# Special usages
+We list some of the instructions used in our SFT, which you can reuse directly. Other ways of prompting the model also work, but the following are recommended. InternLM2-Math may combine several of these abilities, but this is not guaranteed. A short sketch of how to fill in these query templates is shown after the table.
+
+| Description | Query |
+| --- | --- |
+| Solving question via chain-of-thought | {Question} |
+| Solving question via Lean 3 | {Question}\nSolve this via Lean 3 |
+| Outcome reward model | Given a question and an answer, check is it correct?\nQuestion:{Question}\nAnswer:{COT} |
+| Process reward model | Given a question and an answer, check correctness of each step.\nQuestion:{Question}\nAnswer:{COT} |
+| Reward model | Given a question and two answers, which one is better? \nQuestion:{Question}\nAnswer 1:{COT}\nAnswer 2:{COT} |
+| Convert chain-of-thought to Lean 3 | Convert this answer into Lean3. Question:{Question}\nAnswer:{COT} |
+| Convert Lean 3 to chain-of-thought | Convert this lean 3 code into a natural language problem with answers:\n{LEAN} |
+| Translate question and chain-of-thought answer to a proof statement | Convert this question and answer into a proof format.\nQuestion:{Question}\nAnswer:{COT} |
+| Translate proof problem to Lean 3 | Convert this natural language statement into a Lean 3 theorem statement:{Theorem} |
+| Translate Lean 3 to proof problem | Convert this Lean 3 theorem statement into natural language:{STATEMENT} |
+| Suggest a tactic based on Lean state | Given the Lean 3 tactic state, suggest a next tactic:\n{State} |
+| Rephrase Problem | Describe this problem in another way. {STATEMENT} |
+| Augment Problem | Please augment a new problem based on: {Question} |
+| Augment a harder Problem | Increase the complexity of the problem: {Question} |
+| Change specific numbers | Change specific numbers: {Question}|
+| Introduce fractions or percentages | Introduce fractions or percentages: {Question}|
+| Code Interpreter | [lagent](https://github.com/InternLM/InternLM/blob/main/agent/lagent.md) |
+| In-context Learning | Question:{Question}\nAnswer:{COT}\n...Question:{Question}\nAnswer:{COT}|
+
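+A minimal sketch of how the query templates above can be filled in, reusing the `model` and `tokenizer` from the Inference section. The question and chain-of-thought below are made-up placeholders:
+
+```python
+# Sketch only: plug concrete values into two of the instruction templates above.
+question = "What is 2 + 3 * 4?"
+cot = "3 * 4 = 12, and 2 + 12 = 14. The answer is 14."
+
+# Solving the question via Lean 3
+response, _ = model.chat(tokenizer, f"{question}\nSolve this via Lean 3",
+                         history=[], meta_instruction="")
+print(response)
+
+# Outcome reward model: ask the model to check a chain-of-thought answer
+orm_query = f"Given a question and an answer, check is it correct?\nQuestion:{question}\nAnswer:{cot}"
+response, _ = model.chat(tokenizer, orm_query, history=[], meta_instruction="")
+print(response)
+```
+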
+# Fine-tune and others
+Please refer to [InternLM](https://github.com/InternLM/InternLM/tree/main).
+
+# Known issues
+Our model is still under development and will be upgraded. Some known issues of InternLM-Math:
+- It may skip calculation steps.
+- It performs poorly on Chinese fill-in-the-blank problems and English multiple-choice problems due to the SFT data composition.
+- The reward-model mode could be better leveraged with assigned token probabilities.
+- It may code-switch between Chinese and English due to the SFT data composition.
+- Some Lean abilities only transfer to GSM8K-like problems (e.g., converting chain-of-thought to Lean 3), and Lean-related performance is not guaranteed.
+
+# Citation and Tech Report
+To be appended.
\ No newline at end of file
diff --git a/config.json b/config.json
new file mode 100644
index 0000000..723f5fb
--- /dev/null
+++ b/config.json
@@ -0,0 +1,31 @@
+{
+ "architectures": [
+ "InternLM2ForCausalLM"
+ ],
+ "auto_map": {
+ "AutoConfig": "configuration_internlm2.InternLM2Config",
+ "AutoModelForCausalLM": "modeling_internlm2.InternLM2ForCausalLM",
+ "AutoModel": "modeling_internlm2.InternLM2ForCausalLM"
+ },
+ "bias": false,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "hidden_act": "silu",
+ "hidden_size": 6144,
+ "initializer_range": 0.02,
+ "intermediate_size": 16384,
+ "max_position_embeddings": 8192,
+ "model_type": "internlm2",
+ "num_attention_heads": 48,
+ "num_hidden_layers": 48,
+ "num_key_value_heads": 8,
+ "pad_token_id": 2,
+ "rms_norm_eps": 1e-05,
+ "rope_scaling": null,
+ "rope_theta": 1000000,
+ "tie_word_embeddings": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.35.2",
+ "use_cache": true,
+ "vocab_size": 92544
+}
\ No newline at end of file
diff --git a/configuration_internlm2.py b/configuration_internlm2.py
new file mode 100644
index 0000000..b011dd3
--- /dev/null
+++ b/configuration_internlm2.py
@@ -0,0 +1,151 @@
+# coding=utf-8
+# Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
+#
+# This code is based on transformers/src/transformers/models/llama/configuration_llama.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+""" InternLM2 model configuration"""
+
+from transformers.configuration_utils import PretrainedConfig
+from transformers.utils import logging
+
+logger = logging.get_logger(__name__)
+
+INTERNLM2_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
+
+
+# Modified from transformers.model.llama.configuration_llama.LlamaConfig
+class InternLM2Config(PretrainedConfig):
+ r"""
+ This is the configuration class to store the configuration of a [`InternLM2Model`]. It is used to instantiate
+ an InternLM2 model according to the specified arguments, defining the model architecture. Instantiating a
+ configuration with the defaults will yield a similar configuration to that of the InternLM2-7B.
+
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+ documentation from [`PretrainedConfig`] for more information.
+
+
+ Args:
+ vocab_size (`int`, *optional*, defaults to 32000):
+ Vocabulary size of the InternLM2 model. Defines the number of different tokens that can be represented by the
+ `inputs_ids` passed when calling [`InternLM2Model`]
+ hidden_size (`int`, *optional*, defaults to 4096):
+ Dimension of the hidden representations.
+ intermediate_size (`int`, *optional*, defaults to 11008):
+ Dimension of the MLP representations.
+ num_hidden_layers (`int`, *optional*, defaults to 32):
+ Number of hidden layers in the Transformer encoder.
+ num_attention_heads (`int`, *optional*, defaults to 32):
+ Number of attention heads for each attention layer in the Transformer encoder.
+ num_key_value_heads (`int`, *optional*):
+ This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+ `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
+ `num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When
+ converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
+ by meanpooling all the original heads within that group. For more details checkout [this
+ paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
+ `num_attention_heads`.
+ hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+ The non-linear activation function (function or string) in the decoder.
+ max_position_embeddings (`int`, *optional*, defaults to 2048):
+ The maximum sequence length that this model might ever be used with. Typically set this to something large
+ just in case (e.g., 512 or 1024 or 2048).
+ initializer_range (`float`, *optional*, defaults to 0.02):
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+ rms_norm_eps (`float`, *optional*, defaults to 1e-06):
+ The epsilon used by the rms normalization layers.
+ use_cache (`bool`, *optional*, defaults to `True`):
+ Whether or not the model should return the last key/values attentions (not used by all models). Only
+ relevant if `config.is_decoder=True`.
+ tie_word_embeddings(`bool`, *optional*, defaults to `False`):
+ Whether to tie weight embeddings
+ Example:
+
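+ A minimal, illustrative instantiation with the default arguments (the defaults correspond
+ to an InternLM2-7B-like setup):
+
+ ```python
+ >>> from configuration_internlm2 import InternLM2Config  # assumes this file is importable from the working directory
+ >>> configuration = InternLM2Config()
+ >>> configuration.num_key_value_heads == configuration.num_attention_heads
+ True
+ ```
+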
+ """
+ model_type = "internlm2"
+ _auto_class = "AutoConfig"
+
+ def __init__( # pylint: disable=W0102
+ self,
+ vocab_size=103168,
+ hidden_size=4096,
+ intermediate_size=11008,
+ num_hidden_layers=32,
+ num_attention_heads=32,
+ num_key_value_heads=None,
+ hidden_act="silu",
+ max_position_embeddings=2048,
+ initializer_range=0.02,
+ rms_norm_eps=1e-6,
+ use_cache=True,
+ pad_token_id=0,
+ bos_token_id=1,
+ eos_token_id=2,
+ tie_word_embeddings=False,
+ bias=True,
+ rope_theta=10000,
+ rope_scaling=None,
+ attn_implementation="eager",
+ **kwargs,
+ ):
+ self.vocab_size = vocab_size
+ self.max_position_embeddings = max_position_embeddings
+ self.hidden_size = hidden_size
+ self.intermediate_size = intermediate_size
+ self.num_hidden_layers = num_hidden_layers
+ self.num_attention_heads = num_attention_heads
+ self.bias = bias
+
+ if num_key_value_heads is None:
+ num_key_value_heads = num_attention_heads
+ self.num_key_value_heads = num_key_value_heads
+
+ self.hidden_act = hidden_act
+ self.initializer_range = initializer_range
+ self.rms_norm_eps = rms_norm_eps
+ self.use_cache = use_cache
+ self.rope_theta = rope_theta
+ self.rope_scaling = rope_scaling
+ self._rope_scaling_validation()
+
+ self.attn_implementation = attn_implementation
+ if self.attn_implementation is None:
+ self.attn_implementation = "eager"
+ super().__init__(
+ pad_token_id=pad_token_id,
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs,
+ )
+
+ def _rope_scaling_validation(self):
+ """
+ Validate the `rope_scaling` configuration.
+ """
+ if self.rope_scaling is None:
+ return
+
+ if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 2:
+ raise ValueError(
+ "`rope_scaling` must be a dictionary with two fields, `type` and `factor`, "
+ f"got {self.rope_scaling}"
+ )
+ rope_scaling_type = self.rope_scaling.get("type", None)
+ rope_scaling_factor = self.rope_scaling.get("factor", None)
+ if rope_scaling_type is None or rope_scaling_type not in ["linear", "dynamic"]:
+ raise ValueError(
+ f"`rope_scaling`'s type field must be one of ['linear', 'dynamic'], got {rope_scaling_type}"
+ )
+ if rope_scaling_factor is None or not isinstance(rope_scaling_factor, float) or rope_scaling_factor < 1.0:
+ raise ValueError(f"`rope_scaling`'s factor field must be a float >= 1, got {rope_scaling_factor}")
diff --git a/generation_config.json b/generation_config.json
new file mode 100644
index 0000000..9d0dbeb
--- /dev/null
+++ b/generation_config.json
@@ -0,0 +1,7 @@
+{
+ "_from_model_config": true,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "pad_token_id": 2,
+ "transformers_version": "4.35.2"
+}
diff --git a/model-00001-of-00021.safetensors b/model-00001-of-00021.safetensors
new file mode 100644
index 0000000..affa65f
--- /dev/null
+++ b/model-00001-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2d76ba004a3e3ccfc3b58e3178fe901a8e62511443ece2896209bd8e0f36b6b2
+size 1917346712
diff --git a/model-00002-of-00021.safetensors b/model-00002-of-00021.safetensors
new file mode 100644
index 0000000..10c9d69
--- /dev/null
+++ b/model-00002-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15a67775bc6960cce5c229886f4654fd7c3d7179a6777011c51d81fecf58ee14
+size 1937819544
diff --git a/model-00003-of-00021.safetensors b/model-00003-of-00021.safetensors
new file mode 100644
index 0000000..59c1d0e
--- /dev/null
+++ b/model-00003-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cd742b765847b6cd2df9271aba2be3b20bfb2d49a598b7d0ecb131cb13a1cc41
+size 1963010040
diff --git a/model-00004-of-00021.safetensors b/model-00004-of-00021.safetensors
new file mode 100644
index 0000000..6e8dda8
--- /dev/null
+++ b/model-00004-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:525a26e9e521996f8ed8bd3671def0b3187a3ea64bc71ce160cabec53ba22d70
+size 1937819544
diff --git a/model-00005-of-00021.safetensors b/model-00005-of-00021.safetensors
new file mode 100644
index 0000000..47185e1
--- /dev/null
+++ b/model-00005-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4bedbbf363ac1893331fa49b397b7434bd107a20239f108c3680368455d4195c
+size 1963010056
diff --git a/model-00006-of-00021.safetensors b/model-00006-of-00021.safetensors
new file mode 100644
index 0000000..3e141c8
--- /dev/null
+++ b/model-00006-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0f7409abb93eae0c9610b40b1d0ca1c63c3e268f7b763929069a729a2c89ab2
+size 1937819560
diff --git a/model-00007-of-00021.safetensors b/model-00007-of-00021.safetensors
new file mode 100644
index 0000000..1f5246d
--- /dev/null
+++ b/model-00007-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0fe2222b25fde896ff8f7e49755b0cd53966e9f2a8b69560c8995dbe4fa0e6e
+size 1963010064
diff --git a/model-00008-of-00021.safetensors b/model-00008-of-00021.safetensors
new file mode 100644
index 0000000..5fa5c1f
--- /dev/null
+++ b/model-00008-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:484f476615b282d00cd03765f6db910b202397ae69e3fe76e3af966cb99b165a
+size 1937819560
diff --git a/model-00009-of-00021.safetensors b/model-00009-of-00021.safetensors
new file mode 100644
index 0000000..0da00ad
--- /dev/null
+++ b/model-00009-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:530addf4b3600bc47b4f0c9214a45ca3256610e809fb17b6311521b726a41ec4
+size 1963010064
diff --git a/model-00010-of-00021.safetensors b/model-00010-of-00021.safetensors
new file mode 100644
index 0000000..44b6d6d
--- /dev/null
+++ b/model-00010-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c386a84da17e9e6dc7e64858f6d8678bff6cd9b97bac43704551669e6d821556
+size 1937819560
diff --git a/model-00011-of-00021.safetensors b/model-00011-of-00021.safetensors
new file mode 100644
index 0000000..7bd4ddc
--- /dev/null
+++ b/model-00011-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3f4370c7ad210df0f4775f64e9e0cf1daceaf1462efd77ff2d345b87e685b31
+size 1963010064
diff --git a/model-00012-of-00021.safetensors b/model-00012-of-00021.safetensors
new file mode 100644
index 0000000..e6bb6ab
--- /dev/null
+++ b/model-00012-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9fa0e34c1a8a6cd79f796965fb53a629bddc4f25bc8693cacfe57d245a7ecaf5
+size 1937819560
diff --git a/model-00013-of-00021.safetensors b/model-00013-of-00021.safetensors
new file mode 100644
index 0000000..03863de
--- /dev/null
+++ b/model-00013-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3d759ebd095a2a4d6be4d5970c78902375cdde1c23cc5d10b9646f23b03afde0
+size 1963010064
diff --git a/model-00014-of-00021.safetensors b/model-00014-of-00021.safetensors
new file mode 100644
index 0000000..e99f201
--- /dev/null
+++ b/model-00014-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb3847692a87d01d18f4bc30c3699bfdcc61df762bc91967cffb291eaa671f3c
+size 1937819560
diff --git a/model-00015-of-00021.safetensors b/model-00015-of-00021.safetensors
new file mode 100644
index 0000000..ce3a2a8
--- /dev/null
+++ b/model-00015-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09eca575963b7ae7d26d79a0a81626f7ddb5f9cf9e66f4c423929d75e277c189
+size 1963010064
diff --git a/model-00016-of-00021.safetensors b/model-00016-of-00021.safetensors
new file mode 100644
index 0000000..f4d2ebf
--- /dev/null
+++ b/model-00016-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f013c9cb10fc939a7e6dec0f2b8136f46dedfce3a9cb0faf7bc1e9a404a2d41a
+size 1937819560
diff --git a/model-00017-of-00021.safetensors b/model-00017-of-00021.safetensors
new file mode 100644
index 0000000..8ac1dea
--- /dev/null
+++ b/model-00017-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:580de14cb160de4fb039f17499c8b24a15ace432b69b8991d7788d5d3c481f0a
+size 1963010064
diff --git a/model-00018-of-00021.safetensors b/model-00018-of-00021.safetensors
new file mode 100644
index 0000000..4d299b8
--- /dev/null
+++ b/model-00018-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb25638a8d82192385598ed6efcf386a090d7ef2d5c9e33e84c80210403a65ac
+size 1937819560
diff --git a/model-00019-of-00021.safetensors b/model-00019-of-00021.safetensors
new file mode 100644
index 0000000..4595345
--- /dev/null
+++ b/model-00019-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2fbf8eb25674346cab5ecd31d3d85abe46722c895e629983f2d0c1fcbee1479
+size 1963010064
diff --git a/model-00020-of-00021.safetensors b/model-00020-of-00021.safetensors
new file mode 100644
index 0000000..70d5d9d
--- /dev/null
+++ b/model-00020-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6ec2fe0740539dc2caf905262e965098aa35f0dc1784e3ef90059371a02ca8f
+size 1560344232
diff --git a/model-00021-of-00021.safetensors b/model-00021-of-00021.safetensors
new file mode 100644
index 0000000..4cd83c2
--- /dev/null
+++ b/model-00021-of-00021.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e1b8e046517b8e5be8cd526fd48fc25f7c33add377d5c0735cf29fae116a2ceb
+size 1137180800
diff --git a/model.safetensors.index.json b/model.safetensors.index.json
new file mode 100644
index 0000000..84b3b23
--- /dev/null
+++ b/model.safetensors.index.json
@@ -0,0 +1,346 @@
+{
+ "metadata": {
+ "total_size": 39722299392
+ },
+ "weight_map": {
+ "model.layers.0.attention.wo.weight": "model-00001-of-00021.safetensors",
+ "model.layers.0.attention.wqkv.weight": "model-00001-of-00021.safetensors",
+ "model.layers.0.attention_norm.weight": "model-00001-of-00021.safetensors",
+ "model.layers.0.feed_forward.w1.weight": "model-00001-of-00021.safetensors",
+ "model.layers.0.feed_forward.w2.weight": "model-00001-of-00021.safetensors",
+ "model.layers.0.feed_forward.w3.weight": "model-00001-of-00021.safetensors",
+ "model.layers.0.ffn_norm.weight": "model-00001-of-00021.safetensors",
+ "model.layers.1.attention.wo.weight": "model-00002-of-00021.safetensors",
+ "model.layers.1.attention.wqkv.weight": "model-00002-of-00021.safetensors",
+ "model.layers.1.attention_norm.weight": "model-00002-of-00021.safetensors",
+ "model.layers.1.feed_forward.w1.weight": "model-00002-of-00021.safetensors",
+ "model.layers.1.feed_forward.w2.weight": "model-00002-of-00021.safetensors",
+ "model.layers.1.feed_forward.w3.weight": "model-00002-of-00021.safetensors",
+ "model.layers.1.ffn_norm.weight": "model-00002-of-00021.safetensors",
+ "model.layers.10.attention.wo.weight": "model-00005-of-00021.safetensors",
+ "model.layers.10.attention.wqkv.weight": "model-00005-of-00021.safetensors",
+ "model.layers.10.attention_norm.weight": "model-00005-of-00021.safetensors",
+ "model.layers.10.feed_forward.w1.weight": "model-00005-of-00021.safetensors",
+ "model.layers.10.feed_forward.w2.weight": "model-00005-of-00021.safetensors",
+ "model.layers.10.feed_forward.w3.weight": "model-00005-of-00021.safetensors",
+ "model.layers.10.ffn_norm.weight": "model-00005-of-00021.safetensors",
+ "model.layers.11.attention.wo.weight": "model-00006-of-00021.safetensors",
+ "model.layers.11.attention.wqkv.weight": "model-00006-of-00021.safetensors",
+ "model.layers.11.attention_norm.weight": "model-00006-of-00021.safetensors",
+ "model.layers.11.feed_forward.w1.weight": "model-00006-of-00021.safetensors",
+ "model.layers.11.feed_forward.w2.weight": "model-00006-of-00021.safetensors",
+ "model.layers.11.feed_forward.w3.weight": "model-00006-of-00021.safetensors",
+ "model.layers.11.ffn_norm.weight": "model-00006-of-00021.safetensors",
+ "model.layers.12.attention.wo.weight": "model-00006-of-00021.safetensors",
+ "model.layers.12.attention.wqkv.weight": "model-00006-of-00021.safetensors",
+ "model.layers.12.attention_norm.weight": "model-00006-of-00021.safetensors",
+ "model.layers.12.feed_forward.w1.weight": "model-00006-of-00021.safetensors",
+ "model.layers.12.feed_forward.w2.weight": "model-00006-of-00021.safetensors",
+ "model.layers.12.feed_forward.w3.weight": "model-00006-of-00021.safetensors",
+ "model.layers.12.ffn_norm.weight": "model-00006-of-00021.safetensors",
+ "model.layers.13.attention.wo.weight": "model-00006-of-00021.safetensors",
+ "model.layers.13.attention.wqkv.weight": "model-00006-of-00021.safetensors",
+ "model.layers.13.attention_norm.weight": "model-00007-of-00021.safetensors",
+ "model.layers.13.feed_forward.w1.weight": "model-00006-of-00021.safetensors",
+ "model.layers.13.feed_forward.w2.weight": "model-00007-of-00021.safetensors",
+ "model.layers.13.feed_forward.w3.weight": "model-00007-of-00021.safetensors",
+ "model.layers.13.ffn_norm.weight": "model-00007-of-00021.safetensors",
+ "model.layers.14.attention.wo.weight": "model-00007-of-00021.safetensors",
+ "model.layers.14.attention.wqkv.weight": "model-00007-of-00021.safetensors",
+ "model.layers.14.attention_norm.weight": "model-00007-of-00021.safetensors",
+ "model.layers.14.feed_forward.w1.weight": "model-00007-of-00021.safetensors",
+ "model.layers.14.feed_forward.w2.weight": "model-00007-of-00021.safetensors",
+ "model.layers.14.feed_forward.w3.weight": "model-00007-of-00021.safetensors",
+ "model.layers.14.ffn_norm.weight": "model-00007-of-00021.safetensors",
+ "model.layers.15.attention.wo.weight": "model-00007-of-00021.safetensors",
+ "model.layers.15.attention.wqkv.weight": "model-00007-of-00021.safetensors",
+ "model.layers.15.attention_norm.weight": "model-00007-of-00021.safetensors",
+ "model.layers.15.feed_forward.w1.weight": "model-00007-of-00021.safetensors",
+ "model.layers.15.feed_forward.w2.weight": "model-00007-of-00021.safetensors",
+ "model.layers.15.feed_forward.w3.weight": "model-00007-of-00021.safetensors",
+ "model.layers.15.ffn_norm.weight": "model-00007-of-00021.safetensors",
+ "model.layers.16.attention.wo.weight": "model-00008-of-00021.safetensors",
+ "model.layers.16.attention.wqkv.weight": "model-00008-of-00021.safetensors",
+ "model.layers.16.attention_norm.weight": "model-00008-of-00021.safetensors",
+ "model.layers.16.feed_forward.w1.weight": "model-00008-of-00021.safetensors",
+ "model.layers.16.feed_forward.w2.weight": "model-00008-of-00021.safetensors",
+ "model.layers.16.feed_forward.w3.weight": "model-00008-of-00021.safetensors",
+ "model.layers.16.ffn_norm.weight": "model-00008-of-00021.safetensors",
+ "model.layers.17.attention.wo.weight": "model-00008-of-00021.safetensors",
+ "model.layers.17.attention.wqkv.weight": "model-00008-of-00021.safetensors",
+ "model.layers.17.attention_norm.weight": "model-00008-of-00021.safetensors",
+ "model.layers.17.feed_forward.w1.weight": "model-00008-of-00021.safetensors",
+ "model.layers.17.feed_forward.w2.weight": "model-00008-of-00021.safetensors",
+ "model.layers.17.feed_forward.w3.weight": "model-00008-of-00021.safetensors",
+ "model.layers.17.ffn_norm.weight": "model-00008-of-00021.safetensors",
+ "model.layers.18.attention.wo.weight": "model-00008-of-00021.safetensors",
+ "model.layers.18.attention.wqkv.weight": "model-00008-of-00021.safetensors",
+ "model.layers.18.attention_norm.weight": "model-00009-of-00021.safetensors",
+ "model.layers.18.feed_forward.w1.weight": "model-00008-of-00021.safetensors",
+ "model.layers.18.feed_forward.w2.weight": "model-00009-of-00021.safetensors",
+ "model.layers.18.feed_forward.w3.weight": "model-00009-of-00021.safetensors",
+ "model.layers.18.ffn_norm.weight": "model-00009-of-00021.safetensors",
+ "model.layers.19.attention.wo.weight": "model-00009-of-00021.safetensors",
+ "model.layers.19.attention.wqkv.weight": "model-00009-of-00021.safetensors",
+ "model.layers.19.attention_norm.weight": "model-00009-of-00021.safetensors",
+ "model.layers.19.feed_forward.w1.weight": "model-00009-of-00021.safetensors",
+ "model.layers.19.feed_forward.w2.weight": "model-00009-of-00021.safetensors",
+ "model.layers.19.feed_forward.w3.weight": "model-00009-of-00021.safetensors",
+ "model.layers.19.ffn_norm.weight": "model-00009-of-00021.safetensors",
+ "model.layers.2.attention.wo.weight": "model-00002-of-00021.safetensors",
+ "model.layers.2.attention.wqkv.weight": "model-00002-of-00021.safetensors",
+ "model.layers.2.attention_norm.weight": "model-00002-of-00021.safetensors",
+ "model.layers.2.feed_forward.w1.weight": "model-00002-of-00021.safetensors",
+ "model.layers.2.feed_forward.w2.weight": "model-00002-of-00021.safetensors",
+ "model.layers.2.feed_forward.w3.weight": "model-00002-of-00021.safetensors",
+ "model.layers.2.ffn_norm.weight": "model-00002-of-00021.safetensors",
+ "model.layers.20.attention.wo.weight": "model-00009-of-00021.safetensors",
+ "model.layers.20.attention.wqkv.weight": "model-00009-of-00021.safetensors",
+ "model.layers.20.attention_norm.weight": "model-00009-of-00021.safetensors",
+ "model.layers.20.feed_forward.w1.weight": "model-00009-of-00021.safetensors",
+ "model.layers.20.feed_forward.w2.weight": "model-00009-of-00021.safetensors",
+ "model.layers.20.feed_forward.w3.weight": "model-00009-of-00021.safetensors",
+ "model.layers.20.ffn_norm.weight": "model-00009-of-00021.safetensors",
+ "model.layers.21.attention.wo.weight": "model-00010-of-00021.safetensors",
+ "model.layers.21.attention.wqkv.weight": "model-00010-of-00021.safetensors",
+ "model.layers.21.attention_norm.weight": "model-00010-of-00021.safetensors",
+ "model.layers.21.feed_forward.w1.weight": "model-00010-of-00021.safetensors",
+ "model.layers.21.feed_forward.w2.weight": "model-00010-of-00021.safetensors",
+ "model.layers.21.feed_forward.w3.weight": "model-00010-of-00021.safetensors",
+ "model.layers.21.ffn_norm.weight": "model-00010-of-00021.safetensors",
+ "model.layers.22.attention.wo.weight": "model-00010-of-00021.safetensors",
+ "model.layers.22.attention.wqkv.weight": "model-00010-of-00021.safetensors",
+ "model.layers.22.attention_norm.weight": "model-00010-of-00021.safetensors",
+ "model.layers.22.feed_forward.w1.weight": "model-00010-of-00021.safetensors",
+ "model.layers.22.feed_forward.w2.weight": "model-00010-of-00021.safetensors",
+ "model.layers.22.feed_forward.w3.weight": "model-00010-of-00021.safetensors",
+ "model.layers.22.ffn_norm.weight": "model-00010-of-00021.safetensors",
+ "model.layers.23.attention.wo.weight": "model-00010-of-00021.safetensors",
+ "model.layers.23.attention.wqkv.weight": "model-00010-of-00021.safetensors",
+ "model.layers.23.attention_norm.weight": "model-00011-of-00021.safetensors",
+ "model.layers.23.feed_forward.w1.weight": "model-00010-of-00021.safetensors",
+ "model.layers.23.feed_forward.w2.weight": "model-00011-of-00021.safetensors",
+ "model.layers.23.feed_forward.w3.weight": "model-00011-of-00021.safetensors",
+ "model.layers.23.ffn_norm.weight": "model-00011-of-00021.safetensors",
+ "model.layers.24.attention.wo.weight": "model-00011-of-00021.safetensors",
+ "model.layers.24.attention.wqkv.weight": "model-00011-of-00021.safetensors",
+ "model.layers.24.attention_norm.weight": "model-00011-of-00021.safetensors",
+ "model.layers.24.feed_forward.w1.weight": "model-00011-of-00021.safetensors",
+ "model.layers.24.feed_forward.w2.weight": "model-00011-of-00021.safetensors",
+ "model.layers.24.feed_forward.w3.weight": "model-00011-of-00021.safetensors",
+ "model.layers.24.ffn_norm.weight": "model-00011-of-00021.safetensors",
+ "model.layers.25.attention.wo.weight": "model-00011-of-00021.safetensors",
+ "model.layers.25.attention.wqkv.weight": "model-00011-of-00021.safetensors",
+ "model.layers.25.attention_norm.weight": "model-00011-of-00021.safetensors",
+ "model.layers.25.feed_forward.w1.weight": "model-00011-of-00021.safetensors",
+ "model.layers.25.feed_forward.w2.weight": "model-00011-of-00021.safetensors",
+ "model.layers.25.feed_forward.w3.weight": "model-00011-of-00021.safetensors",
+ "model.layers.25.ffn_norm.weight": "model-00011-of-00021.safetensors",
+ "model.layers.26.attention.wo.weight": "model-00012-of-00021.safetensors",
+ "model.layers.26.attention.wqkv.weight": "model-00012-of-00021.safetensors",
+ "model.layers.26.attention_norm.weight": "model-00012-of-00021.safetensors",
+ "model.layers.26.feed_forward.w1.weight": "model-00012-of-00021.safetensors",
+ "model.layers.26.feed_forward.w2.weight": "model-00012-of-00021.safetensors",
+ "model.layers.26.feed_forward.w3.weight": "model-00012-of-00021.safetensors",
+ "model.layers.26.ffn_norm.weight": "model-00012-of-00021.safetensors",
+ "model.layers.27.attention.wo.weight": "model-00012-of-00021.safetensors",
+ "model.layers.27.attention.wqkv.weight": "model-00012-of-00021.safetensors",
+ "model.layers.27.attention_norm.weight": "model-00012-of-00021.safetensors",
+ "model.layers.27.feed_forward.w1.weight": "model-00012-of-00021.safetensors",
+ "model.layers.27.feed_forward.w2.weight": "model-00012-of-00021.safetensors",
+ "model.layers.27.feed_forward.w3.weight": "model-00012-of-00021.safetensors",
+ "model.layers.27.ffn_norm.weight": "model-00012-of-00021.safetensors",
+ "model.layers.28.attention.wo.weight": "model-00012-of-00021.safetensors",
+ "model.layers.28.attention.wqkv.weight": "model-00012-of-00021.safetensors",
+ "model.layers.28.attention_norm.weight": "model-00013-of-00021.safetensors",
+ "model.layers.28.feed_forward.w1.weight": "model-00012-of-00021.safetensors",
+ "model.layers.28.feed_forward.w2.weight": "model-00013-of-00021.safetensors",
+ "model.layers.28.feed_forward.w3.weight": "model-00013-of-00021.safetensors",
+ "model.layers.28.ffn_norm.weight": "model-00013-of-00021.safetensors",
+ "model.layers.29.attention.wo.weight": "model-00013-of-00021.safetensors",
+ "model.layers.29.attention.wqkv.weight": "model-00013-of-00021.safetensors",
+ "model.layers.29.attention_norm.weight": "model-00013-of-00021.safetensors",
+ "model.layers.29.feed_forward.w1.weight": "model-00013-of-00021.safetensors",
+ "model.layers.29.feed_forward.w2.weight": "model-00013-of-00021.safetensors",
+ "model.layers.29.feed_forward.w3.weight": "model-00013-of-00021.safetensors",
+ "model.layers.29.ffn_norm.weight": "model-00013-of-00021.safetensors",
+ "model.layers.3.attention.wo.weight": "model-00002-of-00021.safetensors",
+ "model.layers.3.attention.wqkv.weight": "model-00002-of-00021.safetensors",
+ "model.layers.3.attention_norm.weight": "model-00003-of-00021.safetensors",
+ "model.layers.3.feed_forward.w1.weight": "model-00002-of-00021.safetensors",
+ "model.layers.3.feed_forward.w2.weight": "model-00003-of-00021.safetensors",
+ "model.layers.3.feed_forward.w3.weight": "model-00003-of-00021.safetensors",
+ "model.layers.3.ffn_norm.weight": "model-00003-of-00021.safetensors",
+ "model.layers.30.attention.wo.weight": "model-00013-of-00021.safetensors",
+ "model.layers.30.attention.wqkv.weight": "model-00013-of-00021.safetensors",
+ "model.layers.30.attention_norm.weight": "model-00013-of-00021.safetensors",
+ "model.layers.30.feed_forward.w1.weight": "model-00013-of-00021.safetensors",
+ "model.layers.30.feed_forward.w2.weight": "model-00013-of-00021.safetensors",
+ "model.layers.30.feed_forward.w3.weight": "model-00013-of-00021.safetensors",
+ "model.layers.30.ffn_norm.weight": "model-00013-of-00021.safetensors",
+ "model.layers.31.attention.wo.weight": "model-00014-of-00021.safetensors",
+ "model.layers.31.attention.wqkv.weight": "model-00014-of-00021.safetensors",
+ "model.layers.31.attention_norm.weight": "model-00014-of-00021.safetensors",
+ "model.layers.31.feed_forward.w1.weight": "model-00014-of-00021.safetensors",
+ "model.layers.31.feed_forward.w2.weight": "model-00014-of-00021.safetensors",
+ "model.layers.31.feed_forward.w3.weight": "model-00014-of-00021.safetensors",
+ "model.layers.31.ffn_norm.weight": "model-00014-of-00021.safetensors",
+ "model.layers.32.attention.wo.weight": "model-00014-of-00021.safetensors",
+ "model.layers.32.attention.wqkv.weight": "model-00014-of-00021.safetensors",
+ "model.layers.32.attention_norm.weight": "model-00014-of-00021.safetensors",
+ "model.layers.32.feed_forward.w1.weight": "model-00014-of-00021.safetensors",
+ "model.layers.32.feed_forward.w2.weight": "model-00014-of-00021.safetensors",
+ "model.layers.32.feed_forward.w3.weight": "model-00014-of-00021.safetensors",
+ "model.layers.32.ffn_norm.weight": "model-00014-of-00021.safetensors",
+ "model.layers.33.attention.wo.weight": "model-00014-of-00021.safetensors",
+ "model.layers.33.attention.wqkv.weight": "model-00014-of-00021.safetensors",
+ "model.layers.33.attention_norm.weight": "model-00015-of-00021.safetensors",
+ "model.layers.33.feed_forward.w1.weight": "model-00014-of-00021.safetensors",
+ "model.layers.33.feed_forward.w2.weight": "model-00015-of-00021.safetensors",
+ "model.layers.33.feed_forward.w3.weight": "model-00015-of-00021.safetensors",
+ "model.layers.33.ffn_norm.weight": "model-00015-of-00021.safetensors",
+ "model.layers.34.attention.wo.weight": "model-00015-of-00021.safetensors",
+ "model.layers.34.attention.wqkv.weight": "model-00015-of-00021.safetensors",
+ "model.layers.34.attention_norm.weight": "model-00015-of-00021.safetensors",
+ "model.layers.34.feed_forward.w1.weight": "model-00015-of-00021.safetensors",
+ "model.layers.34.feed_forward.w2.weight": "model-00015-of-00021.safetensors",
+ "model.layers.34.feed_forward.w3.weight": "model-00015-of-00021.safetensors",
+ "model.layers.34.ffn_norm.weight": "model-00015-of-00021.safetensors",
+ "model.layers.35.attention.wo.weight": "model-00015-of-00021.safetensors",
+ "model.layers.35.attention.wqkv.weight": "model-00015-of-00021.safetensors",
+ "model.layers.35.attention_norm.weight": "model-00015-of-00021.safetensors",
+ "model.layers.35.feed_forward.w1.weight": "model-00015-of-00021.safetensors",
+ "model.layers.35.feed_forward.w2.weight": "model-00015-of-00021.safetensors",
+ "model.layers.35.feed_forward.w3.weight": "model-00015-of-00021.safetensors",
+ "model.layers.35.ffn_norm.weight": "model-00015-of-00021.safetensors",
+ "model.layers.36.attention.wo.weight": "model-00016-of-00021.safetensors",
+ "model.layers.36.attention.wqkv.weight": "model-00016-of-00021.safetensors",
+ "model.layers.36.attention_norm.weight": "model-00016-of-00021.safetensors",
+ "model.layers.36.feed_forward.w1.weight": "model-00016-of-00021.safetensors",
+ "model.layers.36.feed_forward.w2.weight": "model-00016-of-00021.safetensors",
+ "model.layers.36.feed_forward.w3.weight": "model-00016-of-00021.safetensors",
+ "model.layers.36.ffn_norm.weight": "model-00016-of-00021.safetensors",
+ "model.layers.37.attention.wo.weight": "model-00016-of-00021.safetensors",
+ "model.layers.37.attention.wqkv.weight": "model-00016-of-00021.safetensors",
+ "model.layers.37.attention_norm.weight": "model-00016-of-00021.safetensors",
+ "model.layers.37.feed_forward.w1.weight": "model-00016-of-00021.safetensors",
+ "model.layers.37.feed_forward.w2.weight": "model-00016-of-00021.safetensors",
+ "model.layers.37.feed_forward.w3.weight": "model-00016-of-00021.safetensors",
+ "model.layers.37.ffn_norm.weight": "model-00016-of-00021.safetensors",
+ "model.layers.38.attention.wo.weight": "model-00016-of-00021.safetensors",
+ "model.layers.38.attention.wqkv.weight": "model-00016-of-00021.safetensors",
+ "model.layers.38.attention_norm.weight": "model-00017-of-00021.safetensors",
+ "model.layers.38.feed_forward.w1.weight": "model-00016-of-00021.safetensors",
+ "model.layers.38.feed_forward.w2.weight": "model-00017-of-00021.safetensors",
+ "model.layers.38.feed_forward.w3.weight": "model-00017-of-00021.safetensors",
+ "model.layers.38.ffn_norm.weight": "model-00017-of-00021.safetensors",
+ "model.layers.39.attention.wo.weight": "model-00017-of-00021.safetensors",
+ "model.layers.39.attention.wqkv.weight": "model-00017-of-00021.safetensors",
+ "model.layers.39.attention_norm.weight": "model-00017-of-00021.safetensors",
+ "model.layers.39.feed_forward.w1.weight": "model-00017-of-00021.safetensors",
+ "model.layers.39.feed_forward.w2.weight": "model-00017-of-00021.safetensors",
+ "model.layers.39.feed_forward.w3.weight": "model-00017-of-00021.safetensors",
+ "model.layers.39.ffn_norm.weight": "model-00017-of-00021.safetensors",
+ "model.layers.4.attention.wo.weight": "model-00003-of-00021.safetensors",
+ "model.layers.4.attention.wqkv.weight": "model-00003-of-00021.safetensors",
+ "model.layers.4.attention_norm.weight": "model-00003-of-00021.safetensors",
+ "model.layers.4.feed_forward.w1.weight": "model-00003-of-00021.safetensors",
+ "model.layers.4.feed_forward.w2.weight": "model-00003-of-00021.safetensors",
+ "model.layers.4.feed_forward.w3.weight": "model-00003-of-00021.safetensors",
+ "model.layers.4.ffn_norm.weight": "model-00003-of-00021.safetensors",
+ "model.layers.40.attention.wo.weight": "model-00017-of-00021.safetensors",
+ "model.layers.40.attention.wqkv.weight": "model-00017-of-00021.safetensors",
+ "model.layers.40.attention_norm.weight": "model-00017-of-00021.safetensors",
+ "model.layers.40.feed_forward.w1.weight": "model-00017-of-00021.safetensors",
+ "model.layers.40.feed_forward.w2.weight": "model-00017-of-00021.safetensors",
+ "model.layers.40.feed_forward.w3.weight": "model-00017-of-00021.safetensors",
+ "model.layers.40.ffn_norm.weight": "model-00017-of-00021.safetensors",
+ "model.layers.41.attention.wo.weight": "model-00018-of-00021.safetensors",
+ "model.layers.41.attention.wqkv.weight": "model-00018-of-00021.safetensors",
+ "model.layers.41.attention_norm.weight": "model-00018-of-00021.safetensors",
+ "model.layers.41.feed_forward.w1.weight": "model-00018-of-00021.safetensors",
+ "model.layers.41.feed_forward.w2.weight": "model-00018-of-00021.safetensors",
+ "model.layers.41.feed_forward.w3.weight": "model-00018-of-00021.safetensors",
+ "model.layers.41.ffn_norm.weight": "model-00018-of-00021.safetensors",
+ "model.layers.42.attention.wo.weight": "model-00018-of-00021.safetensors",
+ "model.layers.42.attention.wqkv.weight": "model-00018-of-00021.safetensors",
+ "model.layers.42.attention_norm.weight": "model-00018-of-00021.safetensors",
+ "model.layers.42.feed_forward.w1.weight": "model-00018-of-00021.safetensors",
+ "model.layers.42.feed_forward.w2.weight": "model-00018-of-00021.safetensors",
+ "model.layers.42.feed_forward.w3.weight": "model-00018-of-00021.safetensors",
+ "model.layers.42.ffn_norm.weight": "model-00018-of-00021.safetensors",
+ "model.layers.43.attention.wo.weight": "model-00018-of-00021.safetensors",
+ "model.layers.43.attention.wqkv.weight": "model-00018-of-00021.safetensors",
+ "model.layers.43.attention_norm.weight": "model-00019-of-00021.safetensors",
+ "model.layers.43.feed_forward.w1.weight": "model-00018-of-00021.safetensors",
+ "model.layers.43.feed_forward.w2.weight": "model-00019-of-00021.safetensors",
+ "model.layers.43.feed_forward.w3.weight": "model-00019-of-00021.safetensors",
+ "model.layers.43.ffn_norm.weight": "model-00019-of-00021.safetensors",
+ "model.layers.44.attention.wo.weight": "model-00019-of-00021.safetensors",
+ "model.layers.44.attention.wqkv.weight": "model-00019-of-00021.safetensors",
+ "model.layers.44.attention_norm.weight": "model-00019-of-00021.safetensors",
+ "model.layers.44.feed_forward.w1.weight": "model-00019-of-00021.safetensors",
+ "model.layers.44.feed_forward.w2.weight": "model-00019-of-00021.safetensors",
+ "model.layers.44.feed_forward.w3.weight": "model-00019-of-00021.safetensors",
+ "model.layers.44.ffn_norm.weight": "model-00019-of-00021.safetensors",
+ "model.layers.45.attention.wo.weight": "model-00019-of-00021.safetensors",
+ "model.layers.45.attention.wqkv.weight": "model-00019-of-00021.safetensors",
+ "model.layers.45.attention_norm.weight": "model-00019-of-00021.safetensors",
+ "model.layers.45.feed_forward.w1.weight": "model-00019-of-00021.safetensors",
+ "model.layers.45.feed_forward.w2.weight": "model-00019-of-00021.safetensors",
+ "model.layers.45.feed_forward.w3.weight": "model-00019-of-00021.safetensors",
+ "model.layers.45.ffn_norm.weight": "model-00019-of-00021.safetensors",
+ "model.layers.46.attention.wo.weight": "model-00020-of-00021.safetensors",
+ "model.layers.46.attention.wqkv.weight": "model-00020-of-00021.safetensors",
+ "model.layers.46.attention_norm.weight": "model-00020-of-00021.safetensors",
+ "model.layers.46.feed_forward.w1.weight": "model-00020-of-00021.safetensors",
+ "model.layers.46.feed_forward.w2.weight": "model-00020-of-00021.safetensors",
+ "model.layers.46.feed_forward.w3.weight": "model-00020-of-00021.safetensors",
+ "model.layers.46.ffn_norm.weight": "model-00020-of-00021.safetensors",
+ "model.layers.47.attention.wo.weight": "model-00020-of-00021.safetensors",
+ "model.layers.47.attention.wqkv.weight": "model-00020-of-00021.safetensors",
+ "model.layers.47.attention_norm.weight": "model-00020-of-00021.safetensors",
+ "model.layers.47.feed_forward.w1.weight": "model-00020-of-00021.safetensors",
+ "model.layers.47.feed_forward.w2.weight": "model-00020-of-00021.safetensors",
+ "model.layers.47.feed_forward.w3.weight": "model-00020-of-00021.safetensors",
+ "model.layers.47.ffn_norm.weight": "model-00020-of-00021.safetensors",
+ "model.layers.5.attention.wo.weight": "model-00003-of-00021.safetensors",
+ "model.layers.5.attention.wqkv.weight": "model-00003-of-00021.safetensors",
+ "model.layers.5.attention_norm.weight": "model-00003-of-00021.safetensors",
+ "model.layers.5.feed_forward.w1.weight": "model-00003-of-00021.safetensors",
+ "model.layers.5.feed_forward.w2.weight": "model-00003-of-00021.safetensors",
+ "model.layers.5.feed_forward.w3.weight": "model-00003-of-00021.safetensors",
+ "model.layers.5.ffn_norm.weight": "model-00003-of-00021.safetensors",
+ "model.layers.6.attention.wo.weight": "model-00004-of-00021.safetensors",
+ "model.layers.6.attention.wqkv.weight": "model-00004-of-00021.safetensors",
+ "model.layers.6.attention_norm.weight": "model-00004-of-00021.safetensors",
+ "model.layers.6.feed_forward.w1.weight": "model-00004-of-00021.safetensors",
+ "model.layers.6.feed_forward.w2.weight": "model-00004-of-00021.safetensors",
+ "model.layers.6.feed_forward.w3.weight": "model-00004-of-00021.safetensors",
+ "model.layers.6.ffn_norm.weight": "model-00004-of-00021.safetensors",
+ "model.layers.7.attention.wo.weight": "model-00004-of-00021.safetensors",
+ "model.layers.7.attention.wqkv.weight": "model-00004-of-00021.safetensors",
+ "model.layers.7.attention_norm.weight": "model-00004-of-00021.safetensors",
+ "model.layers.7.feed_forward.w1.weight": "model-00004-of-00021.safetensors",
+ "model.layers.7.feed_forward.w2.weight": "model-00004-of-00021.safetensors",
+ "model.layers.7.feed_forward.w3.weight": "model-00004-of-00021.safetensors",
+ "model.layers.7.ffn_norm.weight": "model-00004-of-00021.safetensors",
+ "model.layers.8.attention.wo.weight": "model-00004-of-00021.safetensors",
+ "model.layers.8.attention.wqkv.weight": "model-00004-of-00021.safetensors",
+ "model.layers.8.attention_norm.weight": "model-00005-of-00021.safetensors",
+ "model.layers.8.feed_forward.w1.weight": "model-00004-of-00021.safetensors",
+ "model.layers.8.feed_forward.w2.weight": "model-00005-of-00021.safetensors",
+ "model.layers.8.feed_forward.w3.weight": "model-00005-of-00021.safetensors",
+ "model.layers.8.ffn_norm.weight": "model-00005-of-00021.safetensors",
+ "model.layers.9.attention.wo.weight": "model-00005-of-00021.safetensors",
+ "model.layers.9.attention.wqkv.weight": "model-00005-of-00021.safetensors",
+ "model.layers.9.attention_norm.weight": "model-00005-of-00021.safetensors",
+ "model.layers.9.feed_forward.w1.weight": "model-00005-of-00021.safetensors",
+ "model.layers.9.feed_forward.w2.weight": "model-00005-of-00021.safetensors",
+ "model.layers.9.feed_forward.w3.weight": "model-00005-of-00021.safetensors",
+ "model.layers.9.ffn_norm.weight": "model-00005-of-00021.safetensors",
+ "model.norm.weight": "model-00020-of-00021.safetensors",
+ "model.tok_embeddings.weight": "model-00001-of-00021.safetensors",
+ "output.weight": "model-00021-of-00021.safetensors"
+ }
+}
diff --git a/modeling_internlm2.py b/modeling_internlm2.py
new file mode 100644
index 0000000..e0fbc30
--- /dev/null
+++ b/modeling_internlm2.py
@@ -0,0 +1,1391 @@
+# Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
+#
+# This code is based on transformers/src/transformers/models/llama/modeling_llama.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+""" PyTorch InternLM2 model."""
+import math
+import queue
+import threading
+import warnings
+from typing import List, Optional, Tuple, Union
+
+import torch
+import torch.nn.functional as F
+import torch.utils.checkpoint
+from einops import rearrange
+from torch import nn
+from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
+from transformers.activations import ACT2FN
+from transformers.modeling_outputs import (
+ BaseModelOutputWithPast,
+ CausalLMOutputWithPast,
+ SequenceClassifierOutputWithPast,
+)
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import (
+ add_start_docstrings,
+ add_start_docstrings_to_model_forward,
+ logging,
+ replace_return_docstrings,
+)
+
+try:
+ from transformers.generation.streamers import BaseStreamer
+except: # noqa # pylint: disable=bare-except
+ BaseStreamer = None
+
+from .configuration_internlm2 import InternLM2Config
+
+logger = logging.get_logger(__name__)
+
+_CONFIG_FOR_DOC = "InternLM2Config"
+
+flash_attn_func, flash_attn_varlen_func = None, None
+pad_input, index_first_axis, unpad_input = None, None, None
+def _import_flash_attn():
+ global flash_attn_func, flash_attn_varlen_func
+ global pad_input, index_first_axis, unpad_input
+ try:
+ from flash_attn import flash_attn_func as _flash_attn_func, flash_attn_varlen_func as _flash_attn_varlen_func
+ from flash_attn.bert_padding import pad_input as _pad_input, index_first_axis as _index_first_axis, unpad_input as _unpad_input
+ flash_attn_func, flash_attn_varlen_func = _flash_attn_func, _flash_attn_varlen_func
+ pad_input, index_first_axis, unpad_input = _pad_input, _index_first_axis, _unpad_input
+ except ImportError:
+ raise ImportError("flash_attn is not installed.")
+
+# Copied from transformers.models.llama.modeling_llama._get_unpad_data
+def _get_unpad_data(attention_mask):
+ seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
+ indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
+ max_seqlen_in_batch = seqlens_in_batch.max().item()
+ cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.torch.int32), (1, 0))
+ return (
+ indices,
+ cu_seqlens,
+ max_seqlen_in_batch,
+ )
+
+
+# Copied from transformers.models.bart.modeling_bart._make_causal_mask
+def _make_causal_mask(
+ input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0
+):
+ """
+ Make causal mask used for bi-directional self-attention.
+ """
+ bsz, tgt_len = input_ids_shape
+ mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
+ mask_cond = torch.arange(mask.size(-1), device=device)
+ mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
+ mask = mask.to(dtype)
+
+ if past_key_values_length > 0:
+ mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
+ return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
+
+
+# Copied from transformers.models.bart.modeling_bart._expand_mask
+def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
+ """
+ Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
+ """
+ bsz, src_len = mask.size()
+ tgt_len = tgt_len if tgt_len is not None else src_len
+
+ expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
+
+ inverted_mask = 1.0 - expanded_mask
+
+ return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)
+
+
+# Copied from transformers.models.llama.modeling_llama.LlamaRMSNorm with Llama->InternLM2
+class InternLM2RMSNorm(nn.Module):
+ def __init__(self, hidden_size, eps=1e-6):
+ """
+ InternLM2RMSNorm is equivalent to T5LayerNorm
+ """
+ super().__init__()
+ self.weight = nn.Parameter(torch.ones(hidden_size))
+ self.variance_epsilon = eps
+
+ def forward(self, hidden_states):
+ input_dtype = hidden_states.dtype
+ hidden_states = hidden_states.to(torch.float32)
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
+ return self.weight * hidden_states.to(input_dtype)
+
+
+# Copied from transformers.model.llama.modeling_llama.LlamaRotaryEmbedding with Llama->InternLM2
+class InternLM2RotaryEmbedding(nn.Module):
+ def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
+ super().__init__()
+
+ self.dim = dim
+ self.max_position_embeddings = max_position_embeddings
+ self.base = base
+ inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
+
+ # Build here to make `torch.jit.trace` work.
+ self._set_cos_sin_cache(
+ seq_len=max_position_embeddings, device=self.inv_freq.device, dtype=torch.get_default_dtype()
+ )
+
+ def _set_cos_sin_cache(self, seq_len, device, dtype):
+ self.max_seq_len_cached = seq_len
+ t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)
+
+ freqs = torch.einsum("i,j->ij", t, self.inv_freq)
+        # Unlike the paper, the two halves are concatenated rather than interleaved; the permutation differs but the rotation is the same.
+ emb = torch.cat((freqs, freqs), dim=-1)
+ self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False)
+ self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False)
+
+ def forward(self, x, seq_len=None):
+ # x: [bs, num_attention_heads, seq_len, head_size]
+ if seq_len > self.max_seq_len_cached:
+ self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=torch.float32)
+
+ return (
+ self.cos_cached[:seq_len].to(dtype=x.dtype),
+ self.sin_cached[:seq_len].to(dtype=x.dtype),
+ )
+
+
+# Copied from transformers.model.llama.modeling_llama.LlamaLinearScalingRotaryEmbedding with Llama->InternLM2
+class InternLM2LinearScalingRotaryEmbedding(InternLM2RotaryEmbedding):
+ """InternLM2RotaryEmbedding extended with linear scaling. Credits to the Reddit user /u/kaiokendev"""
+
+ def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0):
+ self.scaling_factor = scaling_factor
+ super().__init__(dim, max_position_embeddings, base, device)
+
+ def _set_cos_sin_cache(self, seq_len, device, dtype):
+ self.max_seq_len_cached = seq_len
+ t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)
+ t = t / self.scaling_factor
+
+ freqs = torch.einsum("i,j->ij", t, self.inv_freq)
+        # Unlike the paper, the two halves are concatenated rather than interleaved; the permutation differs but the rotation is the same.
+ emb = torch.cat((freqs, freqs), dim=-1)
+ self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False)
+ self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False)
+
+
+# Copied from transformers.model.llama.modeling_llama.LlamaDynamicNTKScalingRotaryEmbedding with Llama->InternLM2
+class InternLM2DynamicNTKScalingRotaryEmbedding(InternLM2RotaryEmbedding):
+ """InternLM2RotaryEmbedding extended with Dynamic NTK scaling.
+ Credits to the Reddit users /u/bloc97 and /u/emozilla.
+ """
+
+ def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0):
+ self.scaling_factor = scaling_factor
+ super().__init__(dim, max_position_embeddings, base, device)
+
+ def _set_cos_sin_cache(self, seq_len, device, dtype):
+ self.max_seq_len_cached = seq_len
+
+ if seq_len > self.max_position_embeddings:
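+            # NTK-aware scaling: when the sequence exceeds the training length, increase the rotary
+            # base so that the wavelengths grow to cover the longer context instead of rescaling positions.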
+ base = self.base * (
+ (self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)
+ ) ** (self.dim / (self.dim - 2))
+ inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
+
+ t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)
+
+ freqs = torch.einsum("i,j->ij", t, self.inv_freq)
+        # Unlike the paper, the two halves are concatenated rather than interleaved; the permutation differs but the rotation is the same.
+ emb = torch.cat((freqs, freqs), dim=-1)
+ self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False)
+ self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False)
+
+
+# Copied from transformers.model.llama.modeling_llama.rotate_half
+def rotate_half(x):
+ """Rotates half the hidden dims of the input."""
+ x1 = x[..., : x.shape[-1] // 2]
+ x2 = x[..., x.shape[-1] // 2 :]
+ return torch.cat((-x2, x1), dim=-1)
+
+
+# Copied from transformers.model.llama.modeling_llama.apply_rotary_pos_emb
+def apply_rotary_pos_emb(q, k, cos, sin, position_ids, unsqueeze_dim=1):
+ """Applies Rotary Position Embedding to the query and key tensors."""
+ cos = cos[position_ids].unsqueeze(unsqueeze_dim)
+ sin = sin[position_ids].unsqueeze(unsqueeze_dim)
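+    # Rotate each (x[i], x[i + d/2]) channel pair by the position-dependent angle:
+    # first half -> x1*cos - x2*sin, second half -> x2*cos + x1*sin (via rotate_half).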
+ q_embed = (q * cos) + (rotate_half(q) * sin)
+ k_embed = (k * cos) + (rotate_half(k) * sin)
+ return q_embed, k_embed
+
+
+class InternLM2MLP(nn.Module):
+ def __init__(self, config):
+ super().__init__()
+ self.config = config
+ self.hidden_size = config.hidden_size
+ self.intermediate_size = config.intermediate_size
+ self.w1 = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
+ self.w3 = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
+ self.w2 = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+ self.act_fn = ACT2FN[config.hidden_act]
+
+ def forward(self, x):
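+        # Gated MLP (SwiGLU when hidden_act is silu): w1 is the gate projection, w3 the up
+        # projection, and w2 the down projection back to hidden_size.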
+ down_proj = self.w2(self.act_fn(self.w1(x)) * self.w3(x))
+
+ return down_proj
+
+
+# Copied from transformers.model.llama.modeling_llama.repeat_kv
+def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
+ """
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
+ """
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
+ if n_rep == 1:
+ return hidden_states
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
+
+
+# Modified from transformers.model.llama.modeling_llama.LlamaAttention
+class InternLM2Attention(nn.Module):
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
+
+ def __init__(self, config: InternLM2Config):
+ super().__init__()
+ self.config = config
+ self.hidden_size = config.hidden_size
+ self.num_heads = config.num_attention_heads
+ self.head_dim = self.hidden_size // self.num_heads
+ self.num_key_value_heads = config.num_key_value_heads
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
+ self.max_position_embeddings = config.max_position_embeddings
+ self.is_causal = True
+
+ if (self.head_dim * self.num_heads) != self.hidden_size:
+ raise ValueError(
+ f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
+ f" and `num_heads`: {self.num_heads})."
+ )
+
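+        # Packed QKV projection for grouped-query attention: for each of the `num_key_value_heads`
+        # groups, the projection holds `num_key_value_groups` query heads followed by one key head
+        # and one value head.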
+ self.wqkv = nn.Linear(
+ self.hidden_size,
+ (self.num_heads + 2 * self.num_key_value_heads) * self.head_dim,
+ bias=config.bias,
+ )
+
+ self.wo = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=config.bias)
+ self._init_rope()
+
+ def _init_rope(self):
+ if self.config.rope_scaling is None:
+ self.rotary_emb = InternLM2RotaryEmbedding(
+ self.head_dim,
+ max_position_embeddings=self.max_position_embeddings,
+ base=self.config.rope_theta,
+ )
+ else:
+ scaling_type = self.config.rope_scaling["type"]
+ scaling_factor = self.config.rope_scaling["factor"]
+ if scaling_type == "dynamic":
+ self.rotary_emb = InternLM2DynamicNTKScalingRotaryEmbedding(
+ self.head_dim,
+ max_position_embeddings=self.max_position_embeddings,
+ base=self.config.rope_theta,
+ scaling_factor=scaling_factor,
+ )
+ elif scaling_type == "linear":
+ self.rotary_emb = InternLM2LinearScalingRotaryEmbedding(
+ self.head_dim,
+ max_position_embeddings=self.max_position_embeddings,
+ base=self.config.rope_theta,
+ scaling_factor=scaling_factor,
+ )
+ else:
+                raise ValueError("Currently only rotary embedding types 'dynamic' and 'linear' are supported.")
+ return self.rotary_emb
+
+ def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
+ return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ position_ids: Optional[torch.LongTensor] = None,
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
+ output_attentions: bool = False,
+ use_cache: bool = False,
+ **kwargs,
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
+ if "padding_mask" in kwargs:
+ warnings.warn(
+ "Passing `padding_mask` is deprecated and will be removed in v4.37. "
+                "Please make sure to use `attention_mask` instead."
+ )
+
+ bsz, q_len, _ = hidden_states.size()
+
+ qkv_states = self.wqkv(hidden_states)
+
+ qkv_states = rearrange(
+ qkv_states,
+ "b q (h gs d) -> b q h gs d",
+ gs=2 + self.num_key_value_groups,
+ d=self.head_dim,
+ )
+
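+        # Split the packed projection: within each group, the first `num_key_value_groups` slots are
+        # query heads and the last two slots are the key and value heads.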
+ query_states = qkv_states[..., : self.num_key_value_groups, :]
+ query_states = rearrange(query_states, "b q h gs d -> b q (h gs) d")
+ key_states = qkv_states[..., -2, :]
+ value_states = qkv_states[..., -1, :]
+
+ query_states = query_states.transpose(1, 2)
+ key_states = key_states.transpose(1, 2)
+ value_states = value_states.transpose(1, 2)
+
+ kv_seq_len = key_states.shape[-2]
+ if past_key_value is not None:
+ kv_seq_len += past_key_value[0].shape[-2]
+ cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
+
+ if past_key_value is not None:
+ # reuse k, v, self_attention
+ key_states = torch.cat([past_key_value[0], key_states], dim=2)
+ value_states = torch.cat([past_key_value[1], value_states], dim=2)
+
+ past_key_value = (key_states, value_states) if use_cache else None
+
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
+
+ attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
+
+ if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
+ raise ValueError(
+ f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
+ f" {attn_weights.size()}"
+ )
+
+ if attention_mask is not None:
+ if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
+ raise ValueError(
+ f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
+ )
+ attn_weights = attn_weights + attention_mask
+
+ # upcast attention to fp32
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
+ attn_output = torch.matmul(attn_weights, value_states)
+
+ if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
+ raise ValueError(
+ f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
+ f" {attn_output.size()}"
+ )
+
+ attn_output = attn_output.transpose(1, 2).contiguous()
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
+
+ attn_output = self.wo(attn_output)
+
+ if not output_attentions:
+ attn_weights = None
+
+ return attn_output, attn_weights, past_key_value
+
+
+# Modified from transformers.model.llama.modeling_llama.LlamaFlashAttention2
+class InternLM2FlashAttention2(InternLM2Attention):
+ """
+    InternLM2 flash attention module. This module inherits from `InternLM2Attention`, as the weights of the module stay
+    untouched. The only required change is in the forward pass, where it needs to correctly call the public API of
+    flash attention and deal with padding tokens if the input contains any.
+ """
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.LongTensor] = None,
+ position_ids: Optional[torch.LongTensor] = None,
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
+ output_attentions: bool = False,
+ use_cache: bool = False,
+ **kwargs,
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
+ # InternLM2FlashAttention2 attention does not support output_attentions
+ if "padding_mask" in kwargs:
+ warnings.warn(
+ "Passing `padding_mask` is deprecated and will be removed in v4.37. "
+                "Please make sure to use `attention_mask` instead."
+ )
+
+ # overwrite attention_mask with padding_mask
+ attention_mask = kwargs.pop("padding_mask")
+
+ output_attentions = False
+
+ bsz, q_len, _ = hidden_states.size()
+
+ qkv_states = self.wqkv(hidden_states)
+
+ qkv_states = rearrange(
+ qkv_states,
+ "b q (h gs d) -> b q h gs d",
+ gs=2 + self.num_key_value_groups,
+ d=self.head_dim,
+ )
+
+ query_states = qkv_states[..., : self.num_key_value_groups, :]
+ query_states = rearrange(query_states, "b q h gs d -> b q (h gs) d")
+ key_states = qkv_states[..., -2, :]
+ value_states = qkv_states[..., -1, :]
+
+ query_states = query_states.transpose(1, 2)
+ key_states = key_states.transpose(1, 2)
+ value_states = value_states.transpose(1, 2)
+
+ kv_seq_len = key_states.shape[-2]
+ if past_key_value is not None:
+ kv_seq_len += past_key_value[0].shape[-2]
+
+ cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
+
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
+
+ if past_key_value is not None:
+ # reuse k, v, self_attention
+ key_states = torch.cat([past_key_value[0], key_states], dim=2)
+ value_states = torch.cat([past_key_value[1], value_states], dim=2)
+
+ past_key_value = (key_states, value_states) if use_cache else None
+
+ query_states = query_states.transpose(1, 2)
+ key_states = key_states.transpose(1, 2)
+ value_states = value_states.transpose(1, 2)
+
+ attn_output = self._flash_attention_forward(
+ query_states, key_states, value_states, attention_mask, q_len
+ )
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size).contiguous()
+ attn_output = self.wo(attn_output)
+
+ if not output_attentions:
+ attn_weights = None
+
+ return attn_output, attn_weights, past_key_value
+
+ def _flash_attention_forward(
+ self, query_states, key_states, value_states, attention_mask, query_length, dropout=0.0, softmax_scale=None
+ ):
+ """
+        Calls the forward method of Flash Attention. If the input hidden states contain at least one padding token,
+        the input is first unpadded, attention is computed over the packed tokens, and the output is then re-padded.
+
+ Args:
+ query_states (`torch.Tensor`):
+ Input query states to be passed to Flash Attention API
+ key_states (`torch.Tensor`):
+ Input key states to be passed to Flash Attention API
+ value_states (`torch.Tensor`):
+ Input value states to be passed to Flash Attention API
+ attention_mask (`torch.Tensor`):
+ The padding mask - corresponds to a tensor of size `(batch_size, seq_len)` where 0 stands for the
+ position of padding tokens and 1 for the position of non-padding tokens.
+            dropout (`float`, *optional*):
+                Attention dropout probability
+            softmax_scale (`float`, *optional*):
+                The scaling of QK^T before applying softmax. Defaults to 1 / sqrt(head_dim)
+ """
+        causal = self.is_causal and query_length != 1
+        # The sequence contains at least one padding token
+        if attention_mask is not None:
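+            # Variable-length path: strip padding, run flash-attn over the packed tokens with
+            # cumulative sequence lengths, then scatter the outputs back to the padded layout.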
+ batch_size = query_states.shape[0]
+ query_states, key_states, value_states, indices_q, cu_seq_lens, max_seq_lens = self._unpad_input(
+ query_states, key_states, value_states, attention_mask, query_length
+ )
+
+ cu_seqlens_q, cu_seqlens_k = cu_seq_lens
+ max_seqlen_in_batch_q, max_seqlen_in_batch_k = max_seq_lens
+
+ attn_output_unpad = flash_attn_varlen_func(
+ query_states,
+ key_states,
+ value_states,
+ cu_seqlens_q=cu_seqlens_q,
+ cu_seqlens_k=cu_seqlens_k,
+ max_seqlen_q=max_seqlen_in_batch_q,
+ max_seqlen_k=max_seqlen_in_batch_k,
+ dropout_p=dropout,
+ softmax_scale=softmax_scale,
+ causal=causal,
+ )
+
+ attn_output = pad_input(attn_output_unpad, indices_q, batch_size, query_length)
+ else:
+ attn_output = flash_attn_func(
+ query_states, key_states, value_states, dropout, softmax_scale=softmax_scale, causal=causal
+ )
+
+ return attn_output
+
+ def _unpad_input(self, query_layer, key_layer, value_layer, attention_mask, query_length):
+ indices_k, cu_seqlens_k, max_seqlen_in_batch_k = _get_unpad_data(attention_mask)
+ batch_size, kv_seq_len, num_key_value_heads, head_dim = key_layer.shape
+
+ key_layer = index_first_axis(
+ key_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k
+ )
+ value_layer = index_first_axis(
+ value_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k
+ )
+
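+        # Queries need separate handling: full prefill reuses the key indices, single-token decoding
+        # needs no unpadding, and the general case assumes left padding.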
+ if query_length == kv_seq_len:
+ query_layer = index_first_axis(
+ query_layer.reshape(batch_size * kv_seq_len, self.num_heads, head_dim), indices_k
+ )
+ cu_seqlens_q = cu_seqlens_k
+ max_seqlen_in_batch_q = max_seqlen_in_batch_k
+ indices_q = indices_k
+ elif query_length == 1:
+ max_seqlen_in_batch_q = 1
+ cu_seqlens_q = torch.arange(
+ batch_size + 1, dtype=torch.int32, device=query_layer.device
+ ) # There is a memcpy here, that is very bad.
+ indices_q = cu_seqlens_q[:-1]
+ query_layer = query_layer.squeeze(1)
+ else:
+ # The -q_len: slice assumes left padding.
+ attention_mask = attention_mask[:, -query_length:]
+ query_layer, indices_q, cu_seqlens_q, max_seqlen_in_batch_q = unpad_input(query_layer, attention_mask)
+
+ return (
+ query_layer,
+ key_layer,
+ value_layer,
+ indices_q.to(torch.int64),
+ (cu_seqlens_q, cu_seqlens_k),
+ (max_seqlen_in_batch_q, max_seqlen_in_batch_k),
+ )
+
+INTERNLM2_ATTENTION_CLASSES = {
+ "eager": InternLM2Attention,
+ "flash_attention_2": InternLM2FlashAttention2,
+}
+
+# Modified from transformers.model.llama.modeling_llama.LlamaDecoderLayer
+class InternLM2DecoderLayer(nn.Module):
+ def __init__(self, config: InternLM2Config):
+ super().__init__()
+ self.hidden_size = config.hidden_size
+
+ self.attention = INTERNLM2_ATTENTION_CLASSES[config.attn_implementation](config=config)
+
+ self.feed_forward = InternLM2MLP(config)
+ self.attention_norm = InternLM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+ self.ffn_norm = InternLM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ position_ids: Optional[torch.LongTensor] = None,
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
+ output_attentions: Optional[bool] = False,
+ use_cache: Optional[bool] = False,
+ **kwargs,
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+ """
+ Args:
+ hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
+ attention_mask (`torch.FloatTensor`, *optional*):
+ attention mask of size `(batch_size, sequence_length)` if flash attention is used or `(batch_size, 1,
+ query_sequence_length, key_sequence_length)` if default attention is used.
+ output_attentions (`bool`, *optional*):
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under
+ returned tensors for more detail.
+ use_cache (`bool`, *optional*):
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
+ (see `past_key_values`).
+ past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
+ """
+ if "padding_mask" in kwargs:
+ warnings.warn(
+ "Passing `padding_mask` is deprecated and will be removed in v4.37. "
+                "Please make sure to use `attention_mask` instead."
+ )
+
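+        # Pre-norm block: attention_norm -> attention -> residual add, then ffn_norm -> feed_forward -> residual add.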
+ residual = hidden_states
+
+ hidden_states = self.attention_norm(hidden_states)
+
+ # Self Attention
+ hidden_states, self_attn_weights, present_key_value = self.attention(
+ hidden_states=hidden_states,
+ attention_mask=attention_mask,
+ position_ids=position_ids,
+ past_key_value=past_key_value,
+ output_attentions=output_attentions,
+ use_cache=use_cache,
+ **kwargs,
+ )
+ hidden_states = residual + hidden_states
+
+ # Fully Connected
+ residual = hidden_states
+ hidden_states = self.ffn_norm(hidden_states)
+ hidden_states = self.feed_forward(hidden_states)
+ hidden_states = residual + hidden_states
+
+ outputs = (hidden_states,)
+
+ if output_attentions:
+ outputs += (self_attn_weights,)
+
+ if use_cache:
+ outputs += (present_key_value,)
+
+ return outputs
+
+
+InternLM2_START_DOCSTRING = r"""
+    This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
+    library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
+    etc.).
+
+    This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
+    Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
+    and behavior.
+
+ Parameters:
+ config ([`InternLM2Config`]):
+ Model configuration class with all the parameters of the model. Initializing with a config file does not
+ load the weights associated with the model, only the configuration. Check out the
+ [`~PreTrainedModel.from_pretrained`] method to load the model weights.
+"""
+
+
+# Copied from transformers.models.llama.modeling_llama.LlamaPreTrainedModel with Llama->InternLM2
+@add_start_docstrings(
+ "The bare InternLM2 Model outputting raw hidden-states without any specific head on top.",
+ InternLM2_START_DOCSTRING,
+)
+class InternLM2PreTrainedModel(PreTrainedModel):
+ config_class = InternLM2Config
+ base_model_prefix = "model"
+ supports_gradient_checkpointing = True
+ _no_split_modules = ["InternLM2DecoderLayer"]
+ _skip_keys_device_placement = "past_key_values"
+
+ def _init_weights(self, module):
+ std = self.config.initializer_range
+ if isinstance(module, nn.Linear):
+ module.weight.data.normal_(mean=0.0, std=std)
+ if module.bias is not None:
+ module.bias.data.zero_()
+ elif isinstance(module, nn.Embedding):
+ module.weight.data.normal_(mean=0.0, std=std)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+
+InternLM2_INPUTS_DOCSTRING = r"""
+ Args:
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
+ it.
+
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
+ [`PreTrainedTokenizer.__call__`] for details.
+
+ [What are input IDs?](../glossary#input-ids)
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
+
+ - 1 for tokens that are **not masked**,
+ - 0 for tokens that are **masked**.
+
+ [What are attention masks?](../glossary#attention-mask)
+
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
+ [`PreTrainedTokenizer.__call__`] for details.
+
+ If `past_key_values` is used, optionally only the last `input_ids` have to be input (see
+ `past_key_values`).
+
+ If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
+ and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
+ information on the default strategy.
+
+ - 1 indicates the head is **not masked**,
+ - 0 indicates the head is **masked**.
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
+ config.n_positions - 1]`.
+
+ [What are position IDs?](../glossary#position-ids)
+ past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or
+ when `config.use_cache=True`):
+ Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
+ `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
+ `(batch_size, num_heads, decoder_sequence_length, embed_size_per_head)`.
+
+ Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
+ blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
+
+ If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't
+ have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `input_ids`
+ of shape `(batch_size, sequence_length)`.
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
+ model's internal embedding lookup matrix.
+ use_cache (`bool`, *optional*):
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
+ `past_key_values`).
+ output_attentions (`bool`, *optional*):
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
+ tensors for more detail.
+ output_hidden_states (`bool`, *optional*):
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
+ more detail.
+ return_dict (`bool`, *optional*):
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
+"""
+
+
+# Modified from transformers.model.llama.modeling_llama.LlamaModel
+@add_start_docstrings(
+ "The bare InternLM2 Model outputting raw hidden-states without any specific head on top.",
+ InternLM2_START_DOCSTRING,
+)
+class InternLM2Model(InternLM2PreTrainedModel):
+ """
+ Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`InternLM2DecoderLayer`]
+
+ Args:
+ config: InternLM2Config
+ """
+
+ _auto_class = "AutoModel"
+
+ def __init__(self, config: InternLM2Config):
+ super().__init__(config)
+ self.padding_idx = config.pad_token_id
+ self.vocab_size = config.vocab_size
+ self.config = config
+
+ self.tok_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+
+ self.layers = nn.ModuleList([InternLM2DecoderLayer(config) for _ in range(config.num_hidden_layers)])
+ self.norm = InternLM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+
+ self.gradient_checkpointing = False
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.tok_embeddings
+
+ def set_input_embeddings(self, value):
+ self.tok_embeddings = value
+
+ def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
+ # create causal mask
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
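+        # Combine a lower-triangular causal mask over the new tokens with the expanded padding mask;
+        # masked positions receive the dtype minimum so that softmax assigns them zero weight.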
+ combined_attention_mask = None
+ if input_shape[-1] > 1:
+ combined_attention_mask = _make_causal_mask(
+ input_shape,
+ inputs_embeds.dtype,
+ device=inputs_embeds.device,
+ past_key_values_length=past_key_values_length,
+ )
+
+ if attention_mask is not None:
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
+ expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to(
+ inputs_embeds.device
+ )
+ combined_attention_mask = (
+ expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
+ )
+
+ return combined_attention_mask
+
+ @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ position_ids: Optional[torch.LongTensor] = None,
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
+
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ if self.config.attn_implementation == "flash_attention_2":
+ _import_flash_attn()
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+ elif input_ids is not None:
+ batch_size, seq_length = input_ids.shape[:2]
+ elif inputs_embeds is not None:
+ batch_size, seq_length = inputs_embeds.shape[:2]
+ else:
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+ seq_length_with_past = seq_length
+ past_key_values_length = 0
+ if past_key_values is not None:
+ past_key_values_length = past_key_values[0][0].shape[2]
+ seq_length_with_past = seq_length_with_past + past_key_values_length
+
+ if position_ids is None:
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
+ position_ids = torch.arange(
+ past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
+ )
+ position_ids = position_ids.unsqueeze(0)
+
+ if inputs_embeds is None:
+ inputs_embeds = self.tok_embeddings(input_ids)
+
+ if self.config.attn_implementation == "flash_attention_2":
+ # 2d mask is passed through the layers
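+            # keep the 2-D mask only when the batch actually contains padding; otherwise pass None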
+ attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None
+ else:
+ if attention_mask is None:
+ attention_mask = torch.ones(
+ (batch_size, seq_length_with_past), dtype=torch.bool, device=inputs_embeds.device
+ )
+ attention_mask = self._prepare_decoder_attention_mask(
+ attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length
+ )
+
+ # embed positions
+ hidden_states = inputs_embeds
+
+ if self.gradient_checkpointing and self.training:
+ if use_cache:
+ logger.warning_once(
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
+ )
+ use_cache = False
+
+ # decoder layers
+ all_hidden_states = () if output_hidden_states else None
+ all_self_attns = () if output_attentions else None
+ next_decoder_cache = () if use_cache else None
+
+ for idx, decoder_layer in enumerate(self.layers):
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ past_key_value = past_key_values[idx] if past_key_values is not None else None
+
+ if self.gradient_checkpointing and self.training:
+
+ def create_custom_forward(module):
+ def custom_forward(*inputs):
+ # None for past_key_value
+ return module(*inputs, output_attentions, None)
+
+ return custom_forward
+
+ layer_outputs = torch.utils.checkpoint.checkpoint(
+ create_custom_forward(decoder_layer),
+ hidden_states,
+ attention_mask,
+ position_ids,
+ None,
+ )
+ else:
+ layer_outputs = decoder_layer(
+ hidden_states,
+ attention_mask=attention_mask,
+ position_ids=position_ids,
+ past_key_value=past_key_value,
+ output_attentions=output_attentions,
+ use_cache=use_cache,
+ )
+
+ hidden_states = layer_outputs[0]
+
+ if use_cache:
+ next_decoder_cache += (layer_outputs[2 if output_attentions else 1],)
+
+ if output_attentions:
+ all_self_attns += (layer_outputs[1],)
+
+ hidden_states = self.norm(hidden_states)
+
+ # add hidden states from the last decoder layer
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ next_cache = next_decoder_cache if use_cache else None
+ if not return_dict:
+ return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
+ return BaseModelOutputWithPast(
+ last_hidden_state=hidden_states,
+ past_key_values=next_cache,
+ hidden_states=all_hidden_states,
+ attentions=all_self_attns,
+ )
+
+
+# Modified from transformers.model.llama.modeling_llama.LlamaForCausalLM
+class InternLM2ForCausalLM(InternLM2PreTrainedModel):
+ _auto_class = "AutoModelForCausalLM"
+
+ _tied_weights_keys = ["output.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.model = InternLM2Model(config)
+ self.vocab_size = config.vocab_size
+ self.output = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.model.tok_embeddings
+
+ def set_input_embeddings(self, value):
+ self.model.tok_embeddings = value
+
+ def get_output_embeddings(self):
+ return self.output
+
+ def set_output_embeddings(self, new_embeddings):
+ self.output = new_embeddings
+
+ def set_decoder(self, decoder):
+ self.model = decoder
+
+ def get_decoder(self):
+ return self.model
+
+ @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
+ @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ position_ids: Optional[torch.LongTensor] = None,
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ labels: Optional[torch.LongTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ r"""
+ Args:
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
+
+ Returns:
+
+ Example:
+
+ ```python
+ >>> from transformers import AutoTokenizer, InternLM2ForCausalLM
+
+ >>> model = InternLM2ForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
+ >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
+
+ >>> prompt = "Hey, are you conscious? Can you talk to me?"
+ >>> inputs = tokenizer(prompt, return_tensors="pt")
+
+ >>> # Generate
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
+ "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
+ ```"""
+
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
+ outputs = self.model(
+ input_ids=input_ids,
+ attention_mask=attention_mask,
+ position_ids=position_ids,
+ past_key_values=past_key_values,
+ inputs_embeds=inputs_embeds,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict,
+ )
+
+ hidden_states = outputs[0]
+ logits = self.output(hidden_states)
+ logits = logits.float()
+
+ loss = None
+ if labels is not None:
+ # Shift so that tokens < n predict n
+ shift_logits = logits[..., :-1, :].contiguous()
+ shift_labels = labels[..., 1:].contiguous()
+ # Flatten the tokens
+ loss_fct = CrossEntropyLoss()
+ shift_logits = shift_logits.view(-1, self.config.vocab_size)
+ shift_labels = shift_labels.view(-1)
+ # Enable model parallelism
+ shift_labels = shift_labels.to(shift_logits.device)
+ loss = loss_fct(shift_logits, shift_labels)
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return CausalLMOutputWithPast(
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ )
+
+ def prepare_inputs_for_generation(
+ self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs
+ ):
+ if past_key_values is not None:
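+            # Trim input_ids so that only tokens not yet covered by the KV cache are fed to the model.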
+ past_length = past_key_values[0][0].shape[2]
+
+ # Some generation methods already pass only the last input ID
+ if input_ids.shape[1] > past_length:
+ remove_prefix_length = past_length
+ else:
+ # Default to old behavior: keep only final ID
+ remove_prefix_length = input_ids.shape[1] - 1
+
+ input_ids = input_ids[:, remove_prefix_length:]
+
+ position_ids = kwargs.get("position_ids", None)
+ if attention_mask is not None and position_ids is None:
+ # create position_ids on the fly for batch generation
+ position_ids = attention_mask.long().cumsum(-1) - 1
+ position_ids.masked_fill_(attention_mask == 0, 1)
+ if past_key_values:
+ position_ids = position_ids[:, -input_ids.shape[1] :]
+
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+ if inputs_embeds is not None and past_key_values is None:
+ model_inputs = {"inputs_embeds": inputs_embeds}
+ else:
+ model_inputs = {"input_ids": input_ids}
+
+ model_inputs.update(
+ {
+ "position_ids": position_ids,
+ "past_key_values": past_key_values,
+ "use_cache": kwargs.get("use_cache"),
+ "attention_mask": attention_mask,
+ }
+ )
+ return model_inputs
+
+ @staticmethod
+ def _reorder_cache(past_key_values, beam_idx):
+ reordered_past = ()
+ for layer_past in past_key_values:
+ reordered_past += (
+ tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past),
+ )
+ return reordered_past
+
+ def build_inputs(self, tokenizer, query: str, history: List[Tuple[str, str]] = [], meta_instruction=""):
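+        # The prompt follows a ChatML-style layout (<|im_start|>role ... <|im_end|>); the BOS token is
+        # only prepended manually when the tokenizer does not already add it.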
+ if tokenizer.add_bos_token:
+ prompt = ""
+ else:
+ prompt = tokenizer.bos_token
+ if meta_instruction:
+ prompt += f"""<|im_start|>system\n{meta_instruction}<|im_end|>\n"""
+ for record in history:
+ prompt += f"""<|im_start|>user\n{record[0]}<|im_end|>\n<|im_start|>assistant\n{record[1]}<|im_end|>\n"""
+ prompt += f"""<|im_start|>user\n{query}<|im_end|>\n<|im_start|>assistant\n"""
+ return tokenizer([prompt], return_tensors="pt")
+
+ @torch.no_grad()
+ def chat(
+ self,
+ tokenizer,
+ query: str,
+ history: List[Tuple[str, str]] = [],
+ streamer: Optional[BaseStreamer] = None,
+ max_new_tokens: int = 1024,
+ do_sample: bool = True,
+ temperature: float = 0.8,
+ top_p: float = 0.8,
+ meta_instruction: str = "You are an AI assistant whose name is InternLM (书生·浦语).\n"
+ "- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.\n"
+ "- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文.",
+ **kwargs,
+ ):
+ inputs = self.build_inputs(tokenizer, query, history, meta_instruction)
+ inputs = {k: v.to(self.device) for k, v in inputs.items() if torch.is_tensor(v)}
+        # also treat the end-of-assistant token (<|im_end|>) as an EOS token to avoid unnecessary generation
+ eos_token_id = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids(["<|im_end|>"])[0]]
+ outputs = self.generate(
+ **inputs,
+ streamer=streamer,
+ max_new_tokens=max_new_tokens,
+ do_sample=do_sample,
+ temperature=temperature,
+ top_p=top_p,
+ eos_token_id=eos_token_id,
+ **kwargs,
+ )
+ outputs = outputs[0].cpu().tolist()[len(inputs["input_ids"][0]) :]
+ response = tokenizer.decode(outputs, skip_special_tokens=True)
+ response = response.split("<|im_end|>")[0]
+ history = history + [(query, response)]
+ return response, history
+
+ @torch.no_grad()
+ def stream_chat(
+ self,
+ tokenizer,
+ query: str,
+ history: List[Tuple[str, str]] = [],
+ max_new_tokens: int = 1024,
+ do_sample: bool = True,
+ temperature: float = 0.8,
+ top_p: float = 0.8,
+ **kwargs,
+ ):
+ """
+        Return a generator that yields (response, history) tuples, e.g.
+ ('你好,有什么可以帮助您的吗', [('你好', '你好,有什么可以帮助您的吗')])
+ ('你好,有什么可以帮助您的吗?', [('你好', '你好,有什么可以帮助您的吗?')])
+ """
+ if BaseStreamer is None:
+ raise ModuleNotFoundError(
+ "The version of `transformers` is too low. Please make sure "
+ "that you have installed `transformers>=4.28.0`."
+ )
+
+ response_queue = queue.Queue(maxsize=20)
+
+ class ChatStreamer(BaseStreamer):
+ def __init__(self, tokenizer) -> None:
+ super().__init__()
+ self.tokenizer = tokenizer
+ self.queue = response_queue
+ self.query = query
+ self.history = history
+ self.response = ""
+ self.cache = []
+ self.received_inputs = False
+ self.queue.put((self.response, history + [(self.query, self.response)]))
+
+ def put(self, value):
+ if len(value.shape) > 1 and value.shape[0] > 1:
+ raise ValueError("ChatStreamer only supports batch size 1")
+ elif len(value.shape) > 1:
+ value = value[0]
+
+ if not self.received_inputs:
+ # The first received value is input_ids, ignore here
+ self.received_inputs = True
+ return
+
+ self.cache.extend(value.tolist())
+ token = self.tokenizer.decode(self.cache, skip_special_tokens=True)
+ if token.strip() != "<|im_end|>":
+ self.response = self.response + token
+ history = self.history + [(self.query, self.response)]
+ self.queue.put((self.response, history))
+ self.cache = []
+ else:
+ self.end()
+
+ def end(self):
+ self.queue.put(None)
+
+ def stream_producer():
+ return self.chat(
+ tokenizer=tokenizer,
+ query=query,
+ streamer=ChatStreamer(tokenizer=tokenizer),
+ history=history,
+ max_new_tokens=max_new_tokens,
+ do_sample=do_sample,
+ temperature=temperature,
+ top_p=top_p,
+ **kwargs,
+ )
+
+ def consumer():
+ producer = threading.Thread(target=stream_producer)
+ producer.start()
+ while True:
+ res = response_queue.get()
+ if res is None:
+ return
+ yield res
+
+ return consumer()
+
+
+# Copied from transformers.model.llama.modeling_llama.LlamaForSequenceClassification with Llama->InternLM2
+@add_start_docstrings(
+ """
+ The InternLM2 Model transformer with a sequence classification head on top (linear layer).
+
+ [`InternLM2ForSequenceClassification`] uses the last token in order to do the classification,
+ as other causal models (e.g. GPT-2) do.
+
+    Since it does classification on the last token, it needs to know the position of the last token. If a
+ `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
+ no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
+ padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
+ each row of the batch).
+ """,
+ InternLM2_START_DOCSTRING,
+)
+class InternLM2ForSequenceClassification(InternLM2PreTrainedModel):
+ def __init__(self, config):
+ super().__init__(config)
+ self.num_labels = config.num_labels
+ self.model = InternLM2Model(config)
+ self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.model.tok_embeddings
+
+ def set_input_embeddings(self, value):
+ self.model.tok_embeddings = value
+
+ @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ position_ids: Optional[torch.LongTensor] = None,
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ labels: Optional[torch.LongTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
+ r"""
+ labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
+ Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
+ config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
+ `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
+ """
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ transformer_outputs = self.model(
+ input_ids,
+ attention_mask=attention_mask,
+ position_ids=position_ids,
+ past_key_values=past_key_values,
+ inputs_embeds=inputs_embeds,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict,
+ )
+ hidden_states = transformer_outputs[0]
+ logits = self.score(hidden_states)
+
+ if input_ids is not None:
+ batch_size = input_ids.shape[0]
+ else:
+ batch_size = inputs_embeds.shape[0]
+
+ if self.config.pad_token_id is None and batch_size != 1:
+ raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
+ if self.config.pad_token_id is None:
+ sequence_lengths = -1
+ else:
+ if input_ids is not None:
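+                # argmax locates the first padding token in each row (or index 0 when there is none,
+                # which wraps around to the last position), so the preceding index is the last real token.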
+ sequence_lengths = (torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1).to(
+ logits.device
+ )
+ else:
+ sequence_lengths = -1
+
+ pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
+
+ loss = None
+ if labels is not None:
+ labels = labels.to(logits.device)
+ if self.config.problem_type is None:
+ if self.num_labels == 1:
+ self.config.problem_type = "regression"
+ elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
+ self.config.problem_type = "single_label_classification"
+ else:
+ self.config.problem_type = "multi_label_classification"
+
+ if self.config.problem_type == "regression":
+ loss_fct = MSELoss()
+ if self.num_labels == 1:
+ loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
+ else:
+ loss = loss_fct(pooled_logits, labels)
+ elif self.config.problem_type == "single_label_classification":
+ loss_fct = CrossEntropyLoss()
+ loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
+ elif self.config.problem_type == "multi_label_classification":
+ loss_fct = BCEWithLogitsLoss()
+ loss = loss_fct(pooled_logits, labels)
+ if not return_dict:
+ output = (pooled_logits,) + transformer_outputs[1:]
+ return ((loss,) + output) if loss is not None else output
+
+ return SequenceClassifierOutputWithPast(
+ loss=loss,
+ logits=pooled_logits,
+ past_key_values=transformer_outputs.past_key_values,
+ hidden_states=transformer_outputs.hidden_states,
+ attentions=transformer_outputs.attentions,
+ )
diff --git a/special_tokens_map.json b/special_tokens_map.json
new file mode 100644
index 0000000..9bfed75
--- /dev/null
+++ b/special_tokens_map.json
@@ -0,0 +1,6 @@
+{
+  "bos_token": "<s>",
+  "eos_token": "</s>",
+  "pad_token": "</s>",
+  "unk_token": "<unk>"
+}
diff --git a/tokenization_internlm2.py b/tokenization_internlm2.py
new file mode 100644
index 0000000..ff53eba
--- /dev/null
+++ b/tokenization_internlm2.py
@@ -0,0 +1,236 @@
+# coding=utf-8
+# Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
+#
+# This code is based on transformers/src/transformers/models/llama/tokenization_llama.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Tokenization classes for InternLM."""
+import os
+from shutil import copyfile
+from typing import Any, Dict, List, Optional, Tuple
+
+import sentencepiece as spm
+from transformers.tokenization_utils import PreTrainedTokenizer
+from transformers.utils import logging
+
+logger = logging.get_logger(__name__)
+
+VOCAB_FILES_NAMES = {"vocab_file": "./tokenizer.model"}
+
+PRETRAINED_VOCAB_FILES_MAP = {}
+
+
+# Modified from transformers.model.llama.tokenization_llama.LlamaTokenizer
+class InternLM2Tokenizer(PreTrainedTokenizer):
+ """
+ Construct a InternLM2 tokenizer. Based on byte-level Byte-Pair-Encoding.
+
+ Args:
+ vocab_file (`str`):
+ Path to the vocabulary file.
+ """
+
+ vocab_files_names = VOCAB_FILES_NAMES
+ pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
+ model_input_names = ["input_ids", "attention_mask"]
+ _auto_class = "AutoTokenizer"
+
+ def __init__(
+ self,
+ vocab_file,
+        unk_token="<unk>",
+        bos_token="<s>",
+        eos_token="</s>",
+        pad_token="</s>",
+ sp_model_kwargs: Optional[Dict[str, Any]] = None,
+ add_bos_token=True,
+ add_eos_token=False,
+ decode_with_prefix_space=False,
+ clean_up_tokenization_spaces=False,
+ **kwargs,
+ ):
+ self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
+ self.vocab_file = vocab_file
+ self.add_bos_token = add_bos_token
+ self.add_eos_token = add_eos_token
+ self.decode_with_prefix_space = decode_with_prefix_space
+ self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
+ self.sp_model.Load(vocab_file)
+ self._no_prefix_space_tokens = None
+ super().__init__(
+ bos_token=bos_token,
+ eos_token=eos_token,
+ unk_token=unk_token,
+ pad_token=pad_token,
+ clean_up_tokenization_spaces=clean_up_tokenization_spaces,
+ **kwargs,
+ )
+
+ @property
+ def no_prefix_space_tokens(self):
+ if self._no_prefix_space_tokens is None:
+ vocab = self.convert_ids_to_tokens(list(range(self.vocab_size)))
+ self._no_prefix_space_tokens = {i for i, tok in enumerate(vocab) if not tok.startswith("▁")}
+ return self._no_prefix_space_tokens
+
+ @property
+ def vocab_size(self):
+ """Returns vocab size"""
+ return self.sp_model.get_piece_size()
+
+ @property
+ def bos_token_id(self) -> Optional[int]:
+ return self.sp_model.bos_id()
+
+ @property
+ def eos_token_id(self) -> Optional[int]:
+ return self.sp_model.eos_id()
+
+ def get_vocab(self):
+ """Returns vocab as a dict"""
+ vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
+ vocab.update(self.added_tokens_encoder)
+ return vocab
+
+ def _tokenize(self, text):
+ """Returns a tokenized string."""
+ return self.sp_model.encode(text, out_type=str)
+
+ def _convert_token_to_id(self, token):
+ """Converts a token (str) in an id using the vocab."""
+ return self.sp_model.piece_to_id(token)
+
+ def _convert_id_to_token(self, index):
+ """Converts an index (integer) in a token (str) using the vocab."""
+ token = self.sp_model.IdToPiece(index)
+ return token
+
+ def _maybe_add_prefix_space(self, tokens, decoded):
+ if tokens and tokens[0] not in self.no_prefix_space_tokens:
+ return " " + decoded
+ else:
+ return decoded
+
+ def convert_tokens_to_string(self, tokens):
+ """Converts a sequence of tokens (string) in a single string."""
+ current_sub_tokens = []
+ out_string = ""
+ prev_is_special = False
+ for token in tokens:
+ # make sure that special tokens are not decoded using sentencepiece model
+ if token in self.all_special_tokens:
+ if not prev_is_special:
+ out_string += " "
+ out_string += self.sp_model.decode(current_sub_tokens) + token
+ prev_is_special = True
+ current_sub_tokens = []
+ else:
+ current_sub_tokens.append(token)
+ prev_is_special = False
+ out_string += self.sp_model.decode(current_sub_tokens)
+ out_string = self.clean_up_tokenization(out_string)
+ out_string = self._maybe_add_prefix_space(tokens=tokens, decoded=out_string)
+ return out_string[1:]
+
+ def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]:
+ """
+ Save the vocabulary and special tokens file to a directory.
+
+ Args:
+ save_directory (`str`):
+ The directory in which to save the vocabulary.
+
+ Returns:
+ `Tuple(str)`: Paths to the files saved.
+ """
+ if not os.path.isdir(save_directory):
+ logger.error(f"Vocabulary path ({save_directory}) should be a directory")
+ return
+ out_vocab_file = os.path.join(
+ save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
+ )
+
+ if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
+ copyfile(self.vocab_file, out_vocab_file)
+ elif not os.path.isfile(self.vocab_file):
+ with open(out_vocab_file, "wb") as fi:
+ content_spiece_model = self.sp_model.serialized_model_proto()
+ fi.write(content_spiece_model)
+
+ return (out_vocab_file,)
+
+ def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
+ if self.add_bos_token:
+ bos_token_ids = [self.bos_token_id]
+ else:
+ bos_token_ids = []
+
+ output = bos_token_ids + token_ids_0
+
+ if token_ids_1 is not None:
+ output = output + token_ids_1
+
+ if self.add_eos_token:
+ output = output + [self.eos_token_id]
+
+ return output
+
+ def get_special_tokens_mask(
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
+ ) -> List[int]:
+ """
+ Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
+ special tokens using the tokenizer `prepare_for_model` method.
+
+ Args:
+ token_ids_0 (`List[int]`):
+ List of IDs.
+ token_ids_1 (`List[int]`, *optional*):
+ Optional second list of IDs for sequence pairs.
+ already_has_special_tokens (`bool`, *optional*, defaults to `False`):
+ Whether or not the token list is already formatted with special tokens for the model.
+
+ Returns:
+ `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
+ """
+ if already_has_special_tokens:
+ return super().get_special_tokens_mask(
+ token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
+ )
+
+ if token_ids_1 is None:
+ return [1] + ([0] * len(token_ids_0)) + [1]
+ return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]
+
+ def create_token_type_ids_from_sequences(
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
+ ) -> List[int]:
+ """
+        Create a mask from the two sequences passed to be used in a sequence-pair classification task. InternLM2 does
+        not make use of token type ids, therefore a list of zeros is returned.
+
+ Args:
+ token_ids_0 (`List[int]`):
+ List of IDs.
+ token_ids_1 (`List[int]`, *optional*):
+ Optional second list of IDs for sequence pairs.
+
+ Returns:
+ `List[int]`: List of zeros.
+ """
+ eos = [self.eos_token_id]
+
+ if token_ids_1 is None:
+ return len(token_ids_0 + eos) * [0]
+ return len(token_ids_0 + eos + token_ids_1 + eos) * [0]
diff --git a/tokenization_internlm2_fast.py b/tokenization_internlm2_fast.py
new file mode 100644
index 0000000..1506e11
--- /dev/null
+++ b/tokenization_internlm2_fast.py
@@ -0,0 +1,214 @@
+# coding=utf-8
+# Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
+#
+# This code is based on transformers/src/transformers/models/llama/tokenization_llama_fast.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Tokenization Fast class for InternLM."""
+import os
+from shutil import copyfile
+from typing import Any, Dict, Optional, Tuple
+
+from tokenizers import processors, decoders, Tokenizer, normalizers
+from tokenizers.models import BPE
+
+from transformers.tokenization_utils_fast import PreTrainedTokenizerFast
+from transformers.utils import logging
+
+from transformers.convert_slow_tokenizer import (
+ SLOW_TO_FAST_CONVERTERS,
+ SpmConverter,
+ SentencePieceExtractor,
+)
+
+from .tokenization_internlm2 import InternLM2Tokenizer
+
+logger = logging.get_logger(__name__)
+
+VOCAB_FILES_NAMES = {"vocab_file": "./tokenizer.model"}
+
+# Modified from transformers.convert_slow_tokenizer.LlamaConverter
+class InternLM2Converter(SpmConverter):
+ handle_byte_fallback = True
+
+ def vocab(self, proto):
+ vocab = [
+            ("<unk>", 0.0),
+            ("<s>", 0.0),
+            ("</s>", 0.0),
+ ]
+ vocab += [(piece.piece, piece.score) for piece in proto.pieces[3:]]
+ return vocab
+
+ def unk_id(self, proto):
+ unk_id = 0
+ return unk_id
+
+ def decoder(self, replacement, add_prefix_space):
+ return decoders.Sequence(
+ [
+ decoders.Replace("▁", " "),
+ decoders.ByteFallback(),
+ decoders.Fuse(),
+ decoders.Strip(content=" ", left=1),
+ ]
+ )
+
+ def tokenizer(self, proto):
+ model_type = proto.trainer_spec.model_type
+ vocab_scores = self.vocab(proto)
+ # special tokens
+ added_tokens = self.original_tokenizer.added_tokens_decoder
+ for i in range(len(vocab_scores)):
+ piece, score = vocab_scores[i]
+ if i in added_tokens:
+ vocab_scores[i] = (added_tokens[i].content, score)
+ if model_type == 1:
+ raise RuntimeError("InternLM2 is supposed to be a BPE model!")
+
+ elif model_type == 2:
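+            # Rebuild a BPE tokenizer from the SentencePiece model: recover the merge rules from the
+            # vocabulary scores and enable byte fallback for characters outside the vocabulary.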
+ _, merges = SentencePieceExtractor(self.original_tokenizer.vocab_file).extract(vocab_scores)
+ bpe_vocab = {word: i for i, (word, _score) in enumerate(vocab_scores)}
+ tokenizer = Tokenizer(
+ BPE(bpe_vocab, merges, unk_token=proto.trainer_spec.unk_piece, fuse_unk=True, byte_fallback=True)
+ )
+ tokenizer.add_special_tokens(
+ [added_token for index, added_token in added_tokens.items()]
+ )
+ else:
+ raise Exception(
+ "You're trying to run a `Unigram` model but your file was trained with a different algorithm"
+ )
+
+ return tokenizer
+
+ def normalizer(self, proto):
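+ # Mirror sentencepiece preprocessing: optionally prepend the dummy prefix "▁",
+ # then turn every space into "▁" before the BPE model runs.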
+ normalizers_list = []
+ if proto.normalizer_spec.add_dummy_prefix:
+ normalizers_list.append(normalizers.Prepend(prepend="▁"))
+ normalizers_list.append(normalizers.Replace(pattern=" ", content="▁"))
+ return normalizers.Sequence(normalizers_list)
+
+ def pre_tokenizer(self, replacement, add_prefix_space):
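+ # No pre-tokenizer is needed; whitespace handling is already done by the normalizer.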
+ return None
+
+SLOW_TO_FAST_CONVERTERS["InternLM2Tokenizer"] = InternLM2Converter
+
+
+# Modified from transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast -> InternLM2TokenizerFast
+class InternLM2TokenizerFast(PreTrainedTokenizerFast):
+ vocab_files_names = VOCAB_FILES_NAMES
+ slow_tokenizer_class = InternLM2Tokenizer
+ padding_side = "left"
+ model_input_names = ["input_ids", "attention_mask"]
+ _auto_class = "AutoTokenizer"
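+ # Minimal usage sketch (the repo id below is only illustrative; any checkpoint
+ # that ships these tokenizer files works the same way):
+ #
+ # from transformers import AutoTokenizer
+ # tokenizer = AutoTokenizer.from_pretrained(
+ # "internlm/internlm2-math-20b", trust_remote_code=True
+ # )
+ # input_ids = tokenizer("1 + 1 = ?", return_tensors="pt").input_ids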
+
+ def __init__(
+ self,
+ vocab_file,
+ unk_token="<unk>",
+ bos_token="<s>",
+ eos_token="</s>",
+ pad_token="</s>",
+ sp_model_kwargs: Optional[Dict[str, Any]] = None,
+ add_bos_token=True,
+ add_eos_token=False,
+ decode_with_prefix_space=False,
+ clean_up_tokenization_spaces=False,
+ **kwargs,
+ ):
+ super().__init__(
+ vocab_file=vocab_file,
+ unk_token=unk_token,
+ bos_token=bos_token,
+ eos_token=eos_token,
+ pad_token=pad_token,
+ sp_model_kwargs=sp_model_kwargs,
+ add_bos_token=add_bos_token,
+ add_eos_token=add_eos_token,
+ decode_with_prefix_space=decode_with_prefix_space,
+ clean_up_tokenization_spaces=clean_up_tokenization_spaces,
+ **kwargs,
+ )
+ self._add_bos_token = add_bos_token
+ self._add_eos_token = add_eos_token
+ self.update_post_processor()
+ self.vocab_file = vocab_file
+
+ @property
+ def can_save_slow_tokenizer(self) -> bool:
+ return os.path.isfile(self.vocab_file) if self.vocab_file else False
+
+ def update_post_processor(self):
+ """
+ Updates the underlying post processor with the current `bos_token` and `eos_token`.
+ """
+ bos = self.bos_token
+ bos_token_id = self.bos_token_id
+ if bos is None and self.add_bos_token:
+ raise ValueError("add_bos_token = True but bos_token = None")
+
+ eos = self.eos_token
+ eos_token_id = self.eos_token_id
+ if eos is None and self.add_eos_token:
+ raise ValueError("add_eos_token = True but eos_token = None")
+
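+ # TemplateProcessing syntax: "$A:0" / "$B:1" stand for the first / second sequence
+ # with their token type ids; bos/eos are spliced around them when enabled.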
+ single = f"{(bos+':0 ') if self.add_bos_token else ''}$A:0{(' '+eos+':0') if self.add_eos_token else ''}"
+ pair = f"{single}{(' '+bos+':1') if self.add_bos_token else ''} $B:1{(' '+eos+':1') if self.add_eos_token else ''}"
+
+ special_tokens = []
+ if self.add_bos_token:
+ special_tokens.append((bos, bos_token_id))
+ if self.add_eos_token:
+ special_tokens.append((eos, eos_token_id))
+ self._tokenizer.post_processor = processors.TemplateProcessing(
+ single=single, pair=pair, special_tokens=special_tokens
+ )
+
+ @property
+ def add_eos_token(self):
+ return self._add_eos_token
+
+ @property
+ def add_bos_token(self):
+ return self._add_bos_token
+
+ @add_eos_token.setter
+ def add_eos_token(self, value):
+ self._add_eos_token = value
+ self.update_post_processor()
+
+ @add_bos_token.setter
+ def add_bos_token(self, value):
+ self._add_bos_token = value
+ self.update_post_processor()
+
+ def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
+ if not self.can_save_slow_tokenizer:
+ raise ValueError(
+ "Your fast tokenizer does not have the necessary information to save the vocabulary for a slow "
+ "tokenizer."
+ )
+
+ if not os.path.isdir(save_directory):
+ logger.error(f"Vocabulary path ({save_directory}) should be a directory")
+ return
+ out_vocab_file = os.path.join(
+ save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
+ )
+
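+ # Copy the sentencepiece model into the save directory unless it already lives there.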
+ if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):
+ copyfile(self.vocab_file, out_vocab_file)
+
+ return (out_vocab_file,)
diff --git a/tokenizer.model b/tokenizer.model
new file mode 100644
index 0000000..6600712
--- /dev/null
+++ b/tokenizer.model
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f868398fc4e05ee1e8aeba95ddf18ddcc45b8bce55d5093bead5bbf80429b48b
+size 1477754
diff --git a/tokenizer_config.json b/tokenizer_config.json
new file mode 100644
index 0000000..50ba041
--- /dev/null
+++ b/tokenizer_config.json
@@ -0,0 +1,90 @@
+{
+ "auto_map": {
+ "AutoTokenizer": [
+ "tokenization_internlm2.InternLM2Tokenizer",
+ "tokenization_internlm2_fast.InternLM2TokenizerFast"
+ ]
+ },
+ "bos_token": "<s>",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "</s>",
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "</s>",
+ "tokenizer_class": "InternLM2Tokenizer",
+ "unk_token": "<unk>",
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "92543": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "92542": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "92541": {
+ "content": "<|action_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "92540": {
+ "content": "<|action_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "92539": {
+ "content": "<|interpreter|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "92538": {
+ "content": "<|plugin|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "chat_template": "{{ bos_token }}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
+}
\ No newline at end of file