Initialize project; model provided by the ModelHub XC community
Model: numind/NuExtract-2-2B-experimental Source: Original Platform
.gitattributes (vendored, new file, 47 lines)
@@ -0,0 +1,47 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md (new file, 617 lines)
@@ -0,0 +1,617 @@
---
license: mit
language:
- multilingual
tags:
- nlp
base_model: OpenGVLab/InternVL2_5-2B
pipeline_tag: text-generation
inference: true
---

# NuExtract-2-2B [experimental version] by NuMind 🔥

NuExtract 2.0 experimental is a family of models trained specifically for structured information extraction tasks. It supports multimodal inputs and is multilingual.

NB: This is an experimental version that will be superseded by NuExtract 2.0.

We provide several versions of different sizes, all based on the InternVL2.5 family.

| Model Size | Model Name | Base Model | Huggingface Link |
|------------|------------|------------|------------------|
| 2B | NuExtract-2.0-2B | [InternVL2_5-2B](https://huggingface.co/OpenGVLab/InternVL2_5-2B) | [NuExtract-2-2B](https://huggingface.co/numind/NuExtract-2-2B) |
| 4B | NuExtract-2.0-4B | [InternVL2_5-4B](https://huggingface.co/OpenGVLab/InternVL2_5-4B) | [NuExtract-2-4B](https://huggingface.co/numind/NuExtract-2-4B) |
| 8B | NuExtract-2.0-8B | [InternVL2_5-8B](https://huggingface.co/OpenGVLab/InternVL2_5-8B) | [NuExtract-2-8B](https://huggingface.co/numind/NuExtract-2-8B) |

## Overview

To use the model, provide an input text/image and a JSON template describing the information you need to extract. The template should be a JSON object specifying field names and their expected types.

Supported types include:
* `verbatim-string` - instructs the model to extract text that is present verbatim in the input.
* `string` - a generic string field that can incorporate paraphrasing/abstraction.
* `integer` - a whole number.
* `number` - a whole or decimal number.
* `date-time` - an ISO-formatted date.
* Array of any of the above types (e.g. `["string"]`)
* `enum` - a choice from a set of possible answers (represented in the template as an array of options, e.g. `["yes", "no", "maybe"]`).
* `multi-label` - an enum that can have multiple possible answers (represented in the template as a double-wrapped array, e.g. `[["A", "B", "C"]]`).

If the model does not identify relevant information for a field, it will return `null` or `[]` (for arrays and multi-labels).

The following is an example template:
```json
{
    "first_name": "verbatim-string",
    "last_name": "verbatim-string",
    "description": "string",
    "age": "integer",
    "gpa": "number",
    "birth_date": "date-time",
    "nationality": ["France", "England", "Japan", "USA", "China"],
    "languages_spoken": [["English", "French", "Japanese", "Mandarin", "Spanish"]]
}
```
An example output:
```json
{
    "first_name": "Susan",
    "last_name": "Smith",
    "description": "A student studying computer science.",
    "age": 20,
    "gpa": 3.7,
    "birth_date": "2005-03-01",
    "nationality": "England",
    "languages_spoken": ["English", "French"]
}
```

⚠️ We recommend using NuExtract with a temperature at or very close to 0. Some inference frameworks, such as Ollama, use a default of 0.7, which is not well suited to many extraction tasks.
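
Since the output is plain JSON, it is straightforward to machine-check it against the template. Below is a minimal validation sketch, not part of the official tooling: the `TYPE_CHECKS` mapping and `validate_output` helper are illustrative names, and it only handles flat templates (nested objects like the later `SportResult` example would need recursion).

```python
import json

# Illustrative mapping (assumption, not part of NuExtract): template type
# names -> Python-level checks. `null` is always an allowed answer.
TYPE_CHECKS = {
    "verbatim-string": lambda v: isinstance(v, str),
    "string": lambda v: isinstance(v, str),
    "integer": lambda v: isinstance(v, int),
    "number": lambda v: isinstance(v, (int, float)),
    "date-time": lambda v: isinstance(v, str),  # could add ISO-8601 parsing
}

def validate_output(template: dict, output: dict) -> list:
    """Return a list of (field, reason) problems; an empty list means OK."""
    problems = []
    for field, spec in template.items():
        value = output.get(field)
        if value is None:
            continue  # null is a valid "not found" answer
        if isinstance(spec, str):
            # scalar type, e.g. "integer"
            if not TYPE_CHECKS[spec](value):
                problems.append((field, f"expected {spec}, got {type(value).__name__}"))
        elif isinstance(spec, list) and spec and isinstance(spec[0], list):
            # multi-label: answers must be a subset of the allowed options
            if not (isinstance(value, list) and set(value) <= set(spec[0])):
                problems.append((field, "invalid multi-label answer"))
        elif isinstance(spec, list) and spec and spec[0] in TYPE_CHECKS:
            # array of a scalar type, e.g. ["string"]
            if not (isinstance(value, list) and all(TYPE_CHECKS[spec[0]](v) for v in value)):
                problems.append((field, f"expected list of {spec[0]}"))
        elif isinstance(spec, list):
            # enum: the answer must be one of the options
            if value not in spec:
                problems.append((field, "invalid enum answer"))
    return problems

template = {"age": "integer", "nationality": ["France", "England"]}
print(validate_output(template, json.loads('{"age": 20, "nationality": "England"}')))  # []
```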

## Inference

Use the following code to handle loading and preprocessing of input data:

```python
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def build_transform(input_size):
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform

def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio

def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    # calculate the existing image aspect ratio
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
        i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    # find the closest aspect ratio to the target
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    # calculate the target width and height
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # resize the image
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        # split the image
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images

def load_image(image_file, input_size=448, max_num=12):
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values

def prepare_inputs(messages, image_paths, tokenizer, device='cuda', dtype=torch.bfloat16):
    """
    Prepares multi-modal input components (supports multiple images per prompt).

    Args:
        messages: List of input messages/prompts (strings or dicts with 'role' and 'content')
        image_paths: List where each element is either None (for text-only) or a list of image paths
        tokenizer: The tokenizer to use for applying chat templates
        device: Device to place tensors on ('cuda', 'cpu', etc.)
        dtype: Data type for image tensors (default: torch.bfloat16)

    Returns:
        dict: Contains 'prompts', 'pixel_values_list', and 'num_patches_list' ready for the model
    """
    # Make sure image_paths list is at least as long as messages
    if len(image_paths) < len(messages):
        # Pad with None for text-only messages
        image_paths = image_paths + [None] * (len(messages) - len(image_paths))

    # Process images and collect patch information
    loaded_images = []
    num_patches_list = []
    for paths in image_paths:
        if paths and isinstance(paths, list) and len(paths) > 0:
            # Load each image in this prompt
            prompt_images = []
            prompt_patches = []

            for path in paths:
                # Load the image
                img = load_image(path).to(dtype=dtype, device=device)

                # Ensure img has correct shape [patches, C, H, W]
                if len(img.shape) == 3:  # [C, H, W] -> [1, C, H, W]
                    img = img.unsqueeze(0)

                prompt_images.append(img)
                # Record the number of patches for this image
                prompt_patches.append(img.shape[0])

            loaded_images.append(prompt_images)
            num_patches_list.append(prompt_patches)
        else:
            # Text-only prompt
            loaded_images.append(None)
            num_patches_list.append([])

    # Create the concatenated pixel_values_list
    pixel_values_list = []
    for prompt_images in loaded_images:
        if prompt_images:
            # Concatenate all images for this prompt
            pixel_values_list.append(torch.cat(prompt_images, dim=0))
        else:
            # Text-only prompt
            pixel_values_list.append(None)

    # Format messages for the model
    if all(isinstance(m, str) for m in messages):
        # Simple string messages: convert to chat format
        batch_messages = [
            [{"role": "user", "content": message}]
            for message in messages
        ]
    else:
        # Assume messages are already in the right format
        batch_messages = messages

    # Apply chat template
    prompts = tokenizer.apply_chat_template(
        batch_messages,
        tokenize=False,
        add_generation_prompt=True
    )

    return {
        'prompts': prompts,
        'pixel_values_list': pixel_values_list,
        'num_patches_list': num_patches_list
    }

def construct_message(text, template, examples=None):
    """
    Construct the individual NuExtract message texts, prior to chat template formatting.
    """
    # add few-shot examples if needed
    if examples is not None and len(examples) > 0:
        icl = "# Examples:\n"
        for row in examples:
            icl += f"## Input:\n{row['input']}\n## Output:\n{row['output']}\n"
    else:
        icl = ""

    return f"""# Template:\n{template}\n{icl}# Context:\n{text}"""
```

To handle inference:

```python
IMG_START_TOKEN = '<img>'
IMG_END_TOKEN = '</img>'
IMG_CONTEXT_TOKEN = '<IMG_CONTEXT>'

def nuextract_generate(model, tokenizer, prompts, generation_config, pixel_values_list=None, num_patches_list=None):
    """
    Generate responses for a batch of NuExtract inputs.
    Supports multiple and varying numbers of images per prompt.

    Args:
        model: The vision-language model
        tokenizer: The tokenizer for the model
        pixel_values_list: List of tensor batches, one per prompt
            Each batch has shape [num_images, channels, height, width] or None for text-only prompts
        prompts: List of text prompts
        generation_config: Configuration for text generation
        num_patches_list: List of lists, each containing patch counts for images in a prompt

    Returns:
        List of generated responses
    """
    img_context_token_id = tokenizer.convert_tokens_to_ids(IMG_CONTEXT_TOKEN)
    model.img_context_token_id = img_context_token_id

    # Replace all image placeholders with appropriate tokens
    modified_prompts = []
    total_image_files = 0
    total_patches = 0
    image_containing_prompts = []
    for idx, prompt in enumerate(prompts):
        # check if this prompt has images
        has_images = (pixel_values_list and
                      idx < len(pixel_values_list) and
                      pixel_values_list[idx] is not None and
                      isinstance(pixel_values_list[idx], torch.Tensor) and
                      pixel_values_list[idx].shape[0] > 0)

        if has_images:
            # prompt with image placeholders
            image_containing_prompts.append(idx)
            modified_prompt = prompt

            patches = num_patches_list[idx] if (num_patches_list and idx < len(num_patches_list)) else []
            num_images = len(patches)
            total_image_files += num_images
            total_patches += sum(patches)

            # replace each <image> placeholder with image tokens
            for i, num_patches in enumerate(patches):
                image_tokens = IMG_START_TOKEN + IMG_CONTEXT_TOKEN * model.num_image_token * num_patches + IMG_END_TOKEN
                modified_prompt = modified_prompt.replace('<image>', image_tokens, 1)
        else:
            # text-only prompt
            modified_prompt = prompt

        modified_prompts.append(modified_prompt)

    # process all prompts in a single batch
    tokenizer.padding_side = 'left'
    model_inputs = tokenizer(modified_prompts, return_tensors='pt', padding=True)
    input_ids = model_inputs['input_ids'].to(model.device)
    attention_mask = model_inputs['attention_mask'].to(model.device)

    eos_token_id = tokenizer.convert_tokens_to_ids("<|im_end|>\n".strip())
    generation_config['eos_token_id'] = eos_token_id

    # prepare pixel values
    flattened_pixel_values = None
    if image_containing_prompts:
        # collect and concatenate all image tensors
        all_pixel_values = []
        for idx in image_containing_prompts:
            all_pixel_values.append(pixel_values_list[idx])

        flattened_pixel_values = torch.cat(all_pixel_values, dim=0)
        print(f"Processing batch with {len(prompts)} prompts, {total_image_files} actual images, and {total_patches} total patches")
    else:
        print(f"Processing text-only batch with {len(prompts)} prompts")

    # generate outputs
    outputs = model.generate(
        pixel_values=flattened_pixel_values,  # will be None for text-only prompts
        input_ids=input_ids,
        attention_mask=attention_mask,
        **generation_config
    )

    # Decode responses
    responses = tokenizer.batch_decode(outputs, skip_special_tokens=True)

    return responses
```

To load the model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = ""

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, padding_side='left')
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True,
                                             torch_dtype=torch.bfloat16,
                                             attn_implementation="flash_attention_2"  # we recommend using flash attention
                                             ).to("cuda")
```
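
If the flash-attn package is not installed, loading with `attn_implementation="flash_attention_2"` will fail. A fallback sketch (an assumption on our part, not part of the model card) is simply to omit that argument so transformers falls back to its default attention implementation, at the cost of speed and memory:

```python
# Fallback sketch (assumption): load without flash attention when flash-attn
# is unavailable; expect slower inference and higher memory use.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).to("cuda")
```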

Simple 0-shot text-only example:
```python
template = """{"names": ["verbatim-string"]}"""
text = "John went to the restaurant with Mary. James went to the cinema."

input_messages = [construct_message(text, template)]

input_content = prepare_inputs(
    messages=input_messages,
    image_paths=[],
    tokenizer=tokenizer,
)

generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}

with torch.no_grad():
    result = nuextract_generate(
        model=model,
        tokenizer=tokenizer,
        prompts=input_content['prompts'],
        pixel_values_list=input_content['pixel_values_list'],
        num_patches_list=input_content['num_patches_list'],
        generation_config=generation_config
    )
for y in result:
    print(y)
# {"names": ["John", "Mary", "James"]}
```

Text-only input with an in-context example:
```python
template = """{"names": ["verbatim-string"], "female_names": ["verbatim-string"]}"""
text = "John went to the restaurant with Mary. James went to the cinema."
examples = [
    {
        "input": "Stephen is the manager at Susan's store.",
        "output": """{"names": ["STEPHEN", "SUSAN"], "female_names": ["SUSAN"]}"""
    }
]

input_messages = [construct_message(text, template, examples)]

input_content = prepare_inputs(
    messages=input_messages,
    image_paths=[],
    tokenizer=tokenizer,
)

generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}

with torch.no_grad():
    result = nuextract_generate(
        model=model,
        tokenizer=tokenizer,
        prompts=input_content['prompts'],
        pixel_values_list=input_content['pixel_values_list'],
        num_patches_list=input_content['num_patches_list'],
        generation_config=generation_config
    )
for y in result:
    print(y)
# {"names": ["JOHN", "MARY", "JAMES"], "female_names": ["MARY"]}
```

Example with image input and an in-context example. Image inputs should use the `<image>` placeholder instead of text, and image paths should be provided in a list, in order of appearance in the prompt (in this example `0.jpg` is the in-context example and `1.jpg` the true input).
```python
template = """{"store": "verbatim-string"}"""
text = "<image>"
examples = [
    {
        "input": "<image>",
        "output": """{"store": "Walmart"}"""
    }
]

input_messages = [construct_message(text, template, examples)]

images = [
    ["0.jpg", "1.jpg"]
]

input_content = prepare_inputs(
    messages=input_messages,
    image_paths=images,
    tokenizer=tokenizer,
)

generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}

with torch.no_grad():
    result = nuextract_generate(
        model=model,
        tokenizer=tokenizer,
        prompts=input_content['prompts'],
        pixel_values_list=input_content['pixel_values_list'],
        num_patches_list=input_content['num_patches_list'],
        generation_config=generation_config
    )
for y in result:
    print(y)
# {"store": "Trader Joe's"}
```

Multi-modal batched input:
```python
inputs = [
    # image input with no ICL examples
    {
        "text": "<image>",
        "template": """{"store_name": "verbatim-string"}""",
        "examples": None,
    },
    # image input with 1 ICL example
    {
        "text": "<image>",
        "template": """{"store_name": "verbatim-string"}""",
        "examples": [
            {
                "input": "<image>",
                "output": """{"store_name": "Walmart"}""",
            }
        ],
    },
    # text input with no ICL examples
    {
        "text": "John went to the restaurant with Mary. James went to the cinema.",
        "template": """{"names": ["verbatim-string"]}""",
        "examples": None,
    },
    # text input with ICL example
    {
        "text": "John went to the restaurant with Mary. James went to the cinema.",
        "template": """{"names": ["verbatim-string"], "female_names": ["verbatim-string"]}""",
        "examples": [
            {
                "input": "Stephen is the manager at Susan's store.",
                "output": """{"names": ["STEPHEN", "SUSAN"], "female_names": ["SUSAN"]}"""
            }
        ],
    },
]

input_messages = [
    construct_message(
        x["text"],
        x["template"],
        x["examples"]
    ) for x in inputs
]

images = [
    ["0.jpg"],
    ["0.jpg", "1.jpg"],
    None,
    None
]

input_content = prepare_inputs(
    messages=input_messages,
    image_paths=images,
    tokenizer=tokenizer,
)

generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}

with torch.no_grad():
    result = nuextract_generate(
        model=model,
        tokenizer=tokenizer,
        prompts=input_content['prompts'],
        pixel_values_list=input_content['pixel_values_list'],
        num_patches_list=input_content['num_patches_list'],
        generation_config=generation_config
    )
for y in result:
    print(y)
# {"store_name": "WAL*MART"}
# {"store_name": "Trader Joe's"}
# {"names": ["John", "Mary", "James"]}
# {"names": ["JOHN", "MARY", "JAMES"], "female_names": ["MARY"]}
```

## Template Generation
If you want to convert existing schema files you have in other formats (e.g. XML, YAML, etc.) or start from an example, NuExtract 2 models can automatically generate a NuExtract template for you.

E.g. convert XML into a NuExtract template:
```python
def generate_template(description):
    input_messages = [description]
    input_content = prepare_inputs(
        messages=input_messages,
        image_paths=[],
        tokenizer=tokenizer,
    )
    generation_config = {"do_sample": True, "temperature": 0.4, "max_new_tokens": 256}
    with torch.no_grad():
        result = nuextract_generate(
            model=model,
            tokenizer=tokenizer,
            prompts=input_content['prompts'],
            pixel_values_list=input_content['pixel_values_list'],
            num_patches_list=input_content['num_patches_list'],
            generation_config=generation_config
        )
    return result[0]

xml_template = """<SportResult>
    <Date></Date>
    <Sport></Sport>
    <Venue></Venue>
    <HomeTeam></HomeTeam>
    <AwayTeam></AwayTeam>
    <HomeScore></HomeScore>
    <AwayScore></AwayScore>
    <TopScorer></TopScorer>
</SportResult>"""
result = generate_template(xml_template)

print(result)
# {
#     "SportResult": {
#         "Date": "date-time",
#         "Sport": "verbatim-string",
#         "Venue": "verbatim-string",
#         "HomeTeam": "verbatim-string",
#         "AwayTeam": "verbatim-string",
#         "HomeScore": "integer",
#         "AwayScore": "integer",
#         "TopScorer": "verbatim-string"
#     }
# }
```

E.g. generate a template from a natural language description:
```python
text = """Give me relevant info about startup companies mentioned."""
result = generate_template(text)

print(result)
# {
#     "Startup_Companies": [
#         {
#             "Name": "verbatim-string",
#             "Products": [
#                 "string"
#             ],
#             "Location": "verbatim-string",
#             "Company_Type": [
#                 "Technology",
#                 "Finance",
#                 "Health",
#                 "Education",
#                 "Other"
#             ]
#         }
#     ]
# }
```
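
A generated template can be fed straight back into the extraction pipeline. Below is a brief end-to-end sketch using the helpers defined above; the wrapper name `extract` and the example description are illustrative, not part of the model card:

```python
def extract(text, template):
    """Illustrative wrapper: run one text-only extraction with the helpers above."""
    input_content = prepare_inputs(
        messages=[construct_message(text, template)],
        image_paths=[],
        tokenizer=tokenizer,
    )
    generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}
    with torch.no_grad():
        result = nuextract_generate(
            model=model,
            tokenizer=tokenizer,
            prompts=input_content['prompts'],
            pixel_values_list=input_content['pixel_values_list'],
            num_patches_list=input_content['num_patches_list'],
            generation_config=generation_config
        )
    return result[0]

template = generate_template("Give me the names of people mentioned.")  # step 1: build a template
print(extract("John went to the restaurant with Mary.", template))      # step 2: extract with it
```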
added_tokens.json (new file, 11 lines)
@@ -0,0 +1,11 @@
{
  "</box>": 92552,
  "</img>": 92545,
  "</quad>": 92548,
  "</ref>": 92550,
  "<IMG_CONTEXT>": 92546,
  "<box>": 92551,
  "<img>": 92544,
  "<quad>": 92547,
  "<ref>": 92549
}
config.json (new file, 203 lines)
@@ -0,0 +1,203 @@
{
  "_commit_hash": null,
  "_name_or_path": "experiments/intervl2B_filter/checkpoint-6504",
  "architectures": [
    "InternVLChatModel"
  ],
  "auto_map": {
    "AutoConfig": "configuration_internvl_chat.InternVLChatConfig",
    "AutoModel": "numind/NuExtract-2-2B--modeling_internvl_chat.InternVLChatModel",
    "AutoModelForCausalLM": "numind/NuExtract-2-2B--modeling_internvl_chat.InternVLChatModel"
  },
  "downsample_ratio": 0.5,
  "dynamic_image_size": true,
  "force_image_size": 448,
  "llm_config": {
    "_attn_implementation_autoset": true,
    "_name_or_path": "internlm/internlm2_5-1_8b-chat",
    "add_cross_attention": false,
    "architectures": [
      "InternLM2ForCausalLM"
    ],
    "attn_implementation": "flash_attention_2",
    "auto_map": {
      "AutoConfig": "configuration_internlm2.InternLM2Config",
      "AutoModel": "modeling_internlm2.InternLM2ForCausalLM",
      "AutoModelForCausalLM": "modeling_internlm2.InternLM2ForCausalLM",
      "AutoModelForSequenceClassification": "modeling_internlm2.InternLM2ForSequenceClassification"
    },
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bias": false,
    "bos_token_id": 1,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": 2,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "silu",
    "hidden_size": 2048,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "initializer_range": 0.02,
    "intermediate_size": 8192,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "length_penalty": 1.0,
    "max_length": 20,
    "max_position_embeddings": 32768,
    "min_length": 0,
    "model_type": "internlm2",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 16,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_hidden_layers": 24,
    "num_key_value_heads": 8,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": 2,
    "prefix": null,
    "pretraining_tp": 1,
    "problem_type": null,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "rms_norm_eps": 1e-05,
    "rope_scaling": {
      "factor": 2.0,
      "type": "dynamic"
    },
    "rope_theta": 1000000,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": false,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": "bfloat16",
    "torchscript": false,
    "transformers_version": "4.49.0.dev0",
    "typical_p": 1.0,
    "use_bfloat16": true,
    "use_cache": true,
    "vocab_size": 92553
  },
  "max_dynamic_patch": 12,
  "min_dynamic_patch": 1,
  "model_type": "internvl_chat",
  "ps_version": "v2",
  "select_layer": -1,
  "template": "internvl2_5",
  "torch_dtype": "bfloat16",
  "transformers_version": null,
  "use_backbone_lora": 0,
  "use_llm_lora": 0,
  "use_thumbnail": true,
  "vision_config": {
    "_attn_implementation_autoset": true,
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": [
      "InternVisionModel"
    ],
    "attention_dropout": 0.0,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "drop_path_rate": 0.0,
    "dropout": 0.0,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": null,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "gelu",
    "hidden_size": 1024,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "image_size": 448,
    "initializer_factor": 1.0,
    "initializer_range": 0.02,
    "intermediate_size": 4096,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-06,
    "length_penalty": 1.0,
    "max_length": 20,
    "min_length": 0,
    "model_type": "intern_vit_6b",
    "no_repeat_ngram_size": 0,
    "norm_type": "layer_norm",
    "num_attention_heads": 16,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_channels": 3,
    "num_hidden_layers": 24,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": null,
    "patch_size": 14,
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "qk_normalization": false,
    "qkv_bias": true,
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": "bfloat16",
    "torchscript": false,
    "transformers_version": "4.49.0.dev0",
    "typical_p": 1.0,
    "use_bfloat16": true,
    "use_flash_attn": true
  }
}
configuration.json (new file, 1 line)
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}
configuration_intern_vit.py (new file, 120 lines)
@@ -0,0 +1,120 @@
# --------------------------------------------------------
# InternVL
# Copyright (c) 2024 OpenGVLab
# Licensed under The MIT License [see LICENSE for details]
# --------------------------------------------------------

import os
from typing import Union

from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging

logger = logging.get_logger(__name__)


class InternVisionConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`InternVisionModel`]. It is used to
    instantiate a vision encoder according to the specified arguments, defining the model architecture.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.

    Args:
        num_channels (`int`, *optional*, defaults to 3):
            Number of color channels in the input images (e.g., 3 for RGB).
        patch_size (`int`, *optional*, defaults to 14):
            The size (resolution) of each patch.
        image_size (`int`, *optional*, defaults to 224):
            The size (resolution) of each image.
        qkv_bias (`bool`, *optional*, defaults to `False`):
            Whether to add a bias to the queries and values in the self-attention layers.
        hidden_size (`int`, *optional*, defaults to 3200):
            Dimensionality of the encoder layers and the pooler layer.
        num_attention_heads (`int`, *optional*, defaults to 25):
            Number of attention heads for each attention layer in the Transformer encoder.
        intermediate_size (`int`, *optional*, defaults to 12800):
            Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
        qk_normalization (`bool`, *optional*, defaults to `True`):
            Whether to normalize the queries and keys in the self-attention layers.
        num_hidden_layers (`int`, *optional*, defaults to 48):
            Number of hidden layers in the Transformer encoder.
        use_flash_attn (`bool`, *optional*, defaults to `True`):
            Whether to use flash attention mechanism.
        hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
            The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
            `"relu"`, `"selu"` and `"gelu_new"` are supported.
        layer_norm_eps (`float`, *optional*, defaults to 1e-6):
            The epsilon used by the layer normalization layers.
        dropout (`float`, *optional*, defaults to 0.0):
            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
        drop_path_rate (`float`, *optional*, defaults to 0.0):
            Dropout rate for stochastic depth.
        attention_dropout (`float`, *optional*, defaults to 0.0):
            The dropout ratio for the attention probabilities.
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        initializer_factor (`float`, *optional*, defaults to 0.1):
            A factor for layer scale.
    """

    model_type = 'intern_vit_6b'

    def __init__(
            self,
            num_channels=3,
            patch_size=14,
            image_size=224,
            qkv_bias=False,
            hidden_size=3200,
            num_attention_heads=25,
            intermediate_size=12800,
            qk_normalization=True,
            num_hidden_layers=48,
            use_flash_attn=True,
            hidden_act='gelu',
            norm_type='rms_norm',
            layer_norm_eps=1e-6,
            dropout=0.0,
            drop_path_rate=0.0,
            attention_dropout=0.0,
            initializer_range=0.02,
            initializer_factor=0.1,
            **kwargs,
    ):
        super().__init__(**kwargs)

        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.dropout = dropout
        self.drop_path_rate = drop_path_rate
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.num_channels = num_channels
        self.patch_size = patch_size
        self.image_size = image_size
        self.initializer_range = initializer_range
        self.initializer_factor = initializer_factor
        self.attention_dropout = attention_dropout
        self.layer_norm_eps = layer_norm_eps
        self.hidden_act = hidden_act
        self.norm_type = norm_type
        self.qkv_bias = qkv_bias
        self.qk_normalization = qk_normalization
        self.use_flash_attn = use_flash_attn

    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> 'PretrainedConfig':
        config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)

        if 'vision_config' in config_dict:
            config_dict = config_dict['vision_config']

        if 'model_type' in config_dict and hasattr(cls, 'model_type') and config_dict['model_type'] != cls.model_type:
            logger.warning(
                f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
                f'{cls.model_type}. This is not supported for all configurations of models and can yield errors.'
            )

        return cls.from_dict(config_dict, **kwargs)
configuration_internlm2.py (new file, 150 lines)
@@ -0,0 +1,150 @@
# Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
#
# This code is based on transformers/src/transformers/models/llama/configuration_llama.py
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" InternLM2 model configuration"""

from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging

logger = logging.get_logger(__name__)

INTERNLM2_PRETRAINED_CONFIG_ARCHIVE_MAP = {}


# Modified from transformers.model.llama.configuration_llama.LlamaConfig
class InternLM2Config(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`InternLM2Model`]. It is used to instantiate
    an InternLM2 model according to the specified arguments, defining the model architecture. Instantiating a
    configuration with the defaults will yield a similar configuration to that of the InternLM2-7B.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 32000):
            Vocabulary size of the InternLM2 model. Defines the number of different tokens that can be represented
            by the `inputs_ids` passed when calling [`InternLM2Model`]
        hidden_size (`int`, *optional*, defaults to 4096):
            Dimension of the hidden representations.
        intermediate_size (`int`, *optional*, defaults to 11008):
            Dimension of the MLP representations.
        num_hidden_layers (`int`, *optional*, defaults to 32):
            Number of hidden layers in the Transformer encoder.
        num_attention_heads (`int`, *optional*, defaults to 32):
            Number of attention heads for each attention layer in the Transformer encoder.
        num_key_value_heads (`int`, *optional*):
            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
            `num_key_value_heads=1` the model will use Multi Query Attention (MQA); otherwise GQA is used. When
            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
            by meanpooling all the original heads within that group. For more details checkout [this
            paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
            `num_attention_heads`.
        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string) in the decoder.
        max_position_embeddings (`int`, *optional*, defaults to 2048):
            The maximum sequence length that this model might ever be used with. Typically set this to something large
            just in case (e.g., 512 or 1024 or 2048).
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        rms_norm_eps (`float`, *optional*, defaults to 1e-6):
            The epsilon used by the rms normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
            Whether to tie weight embeddings
    Example:

    """
    model_type = 'internlm2'
    _auto_class = 'AutoConfig'

    def __init__(  # pylint: disable=W0102
            self,
            vocab_size=103168,
            hidden_size=4096,
            intermediate_size=11008,
            num_hidden_layers=32,
            num_attention_heads=32,
            num_key_value_heads=None,
            hidden_act='silu',
            max_position_embeddings=2048,
            initializer_range=0.02,
            rms_norm_eps=1e-6,
            use_cache=True,
            pad_token_id=0,
            bos_token_id=1,
            eos_token_id=2,
            tie_word_embeddings=False,
            bias=True,
            rope_theta=10000,
            rope_scaling=None,
            attn_implementation='eager',
            **kwargs,
    ):
        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.bias = bias

        if num_key_value_heads is None:
            num_key_value_heads = num_attention_heads
        self.num_key_value_heads = num_key_value_heads

        self.hidden_act = hidden_act
        self.initializer_range = initializer_range
        self.rms_norm_eps = rms_norm_eps
        self.use_cache = use_cache
        self.rope_theta = rope_theta
        self.rope_scaling = rope_scaling
        self._rope_scaling_validation()

        self.attn_implementation = attn_implementation
        if self.attn_implementation is None:
            self.attn_implementation = 'eager'
        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )

    def _rope_scaling_validation(self):
        """
        Validate the `rope_scaling` configuration.
        """
        if self.rope_scaling is None:
            return

        if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 2:
            raise ValueError(
                '`rope_scaling` must be a dictionary with two fields, `type` and `factor`, '
                f'got {self.rope_scaling}'
            )
        rope_scaling_type = self.rope_scaling.get('type', None)
        rope_scaling_factor = self.rope_scaling.get('factor', None)
        if rope_scaling_type is None or rope_scaling_type not in ['linear', 'dynamic']:
            raise ValueError(
                f"`rope_scaling`'s type field must be one of ['linear', 'dynamic'], got {rope_scaling_type}"
            )
        if rope_scaling_factor is None or not isinstance(rope_scaling_factor, float) or rope_scaling_factor < 1.0:
            raise ValueError(f"`rope_scaling`'s factor field must be a float >= 1, got {rope_scaling_factor}")
configuration_internvl_chat.py (new file, 96 lines)
@@ -0,0 +1,96 @@
# --------------------------------------------------------
# InternVL
# Copyright (c) 2024 OpenGVLab
# Licensed under The MIT License [see LICENSE for details]
# --------------------------------------------------------

import copy

from transformers import AutoConfig, LlamaConfig
from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging

from .configuration_intern_vit import InternVisionConfig
from .configuration_internlm2 import InternLM2Config

logger = logging.get_logger(__name__)


class InternVLChatConfig(PretrainedConfig):
    model_type = 'internvl_chat'
    is_composition = True

    def __init__(
            self,
            vision_config=None,
            llm_config=None,
            use_backbone_lora=0,
            use_llm_lora=0,
            select_layer=-1,
            force_image_size=None,
            downsample_ratio=0.5,
            template=None,
            dynamic_image_size=False,
            use_thumbnail=False,
            ps_version='v1',
            min_dynamic_patch=1,
            max_dynamic_patch=6,
            **kwargs):
        super().__init__(**kwargs)

        if vision_config is None:
            vision_config = {'architectures': ['InternVisionModel']}
            logger.info('vision_config is None. Initializing the InternVisionConfig with default values.')

        if llm_config is None:
            llm_config = {'architectures': ['InternLM2ForCausalLM']}
            logger.info('llm_config is None. Initializing the llm config with default values (`InternLM2Config`).')

        self.vision_config = InternVisionConfig(**vision_config)
        if llm_config.get('architectures')[0] == 'LlamaForCausalLM':
            self.llm_config = LlamaConfig(**llm_config)
        elif llm_config.get('architectures')[0] == 'InternLM2ForCausalLM':
            self.llm_config = InternLM2Config(**llm_config)
        else:
            raise ValueError('Unsupported architecture: {}'.format(llm_config.get('architectures')[0]))
        self.use_backbone_lora = use_backbone_lora
        self.use_llm_lora = use_llm_lora
        self.select_layer = select_layer
        self.force_image_size = force_image_size
        self.downsample_ratio = downsample_ratio
        self.template = template
        self.dynamic_image_size = dynamic_image_size
        self.use_thumbnail = use_thumbnail
        self.ps_version = ps_version  # pixel shuffle version
        self.min_dynamic_patch = min_dynamic_patch
        self.max_dynamic_patch = max_dynamic_patch

        logger.info(f'vision_select_layer: {self.select_layer}')
        logger.info(f'ps_version: {self.ps_version}')
        logger.info(f'min_dynamic_patch: {self.min_dynamic_patch}')
        logger.info(f'max_dynamic_patch: {self.max_dynamic_patch}')

    def to_dict(self):
        """
        Serializes this instance to a Python dictionary. Override the default [`~PretrainedConfig.to_dict`].

        Returns:
            `Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance.
        """
        output = copy.deepcopy(self.__dict__)
        output['vision_config'] = self.vision_config.to_dict()
        output['llm_config'] = self.llm_config.to_dict()
        output['model_type'] = self.__class__.model_type
        output['use_backbone_lora'] = self.use_backbone_lora
        output['use_llm_lora'] = self.use_llm_lora
        output['select_layer'] = self.select_layer
        output['force_image_size'] = self.force_image_size
        output['downsample_ratio'] = self.downsample_ratio
        output['template'] = self.template
        output['dynamic_image_size'] = self.dynamic_image_size
        output['use_thumbnail'] = self.use_thumbnail
        output['ps_version'] = self.ps_version
        output['min_dynamic_patch'] = self.min_dynamic_patch
        output['max_dynamic_patch'] = self.max_dynamic_patch

        return output
conversation.py (new file, 391 lines)
@@ -0,0 +1,391 @@
"""
Conversation prompt templates.

We kindly request that you import fastchat instead of copying this file if you wish to use it.
If you have changes in mind, please contribute back so the community can benefit collectively and continue to maintain these valuable templates.

Modified from https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py
"""

import dataclasses
from enum import IntEnum, auto
from typing import Dict, List, Tuple, Union


class SeparatorStyle(IntEnum):
    """Separator styles."""

    ADD_COLON_SINGLE = auto()
    ADD_COLON_TWO = auto()
    ADD_COLON_SPACE_SINGLE = auto()
    NO_COLON_SINGLE = auto()
    NO_COLON_TWO = auto()
    ADD_NEW_LINE_SINGLE = auto()
    LLAMA2 = auto()
    CHATGLM = auto()
    CHATML = auto()
    CHATINTERN = auto()
    DOLLY = auto()
    RWKV = auto()
    PHOENIX = auto()
    ROBIN = auto()
    FALCON_CHAT = auto()
    CHATGLM3 = auto()
    INTERNVL_ZH = auto()
    MPT = auto()


@dataclasses.dataclass
class Conversation:
    """A class that manages prompt templates and keeps all conversation history."""

    # The name of this template
    name: str
    # The template of the system prompt
    system_template: str = '{system_message}'
    # The system message
    system_message: str = ''
    # The names of two roles
    roles: Tuple[str] = ('USER', 'ASSISTANT')
    # All messages. Each item is (role, message).
    messages: List[List[str]] = ()
    # The number of few shot examples
    offset: int = 0
    # The separator style and configurations
    sep_style: SeparatorStyle = SeparatorStyle.ADD_COLON_SINGLE
    sep: str = '\n'
    sep2: str = None
    # Stop criteria (the default one is EOS token)
    stop_str: Union[str, List[str]] = None
    # Stops generation if meeting any token in this list
    stop_token_ids: List[int] = None

    def get_prompt(self) -> str:
        """Get the prompt for generation."""
        system_prompt = self.system_template.format(system_message=self.system_message)
        if self.sep_style == SeparatorStyle.ADD_COLON_SINGLE:
            ret = system_prompt + self.sep
            for role, message in self.messages:
                if message:
                    ret += role + ': ' + message + self.sep
                else:
                    ret += role + ':'
            return ret
        elif self.sep_style == SeparatorStyle.ADD_COLON_TWO:
            seps = [self.sep, self.sep2]
            ret = system_prompt + seps[0]
            for i, (role, message) in enumerate(self.messages):
                if message:
                    ret += role + ': ' + message + seps[i % 2]
                else:
                    ret += role + ':'
            return ret
        elif self.sep_style == SeparatorStyle.ADD_COLON_SPACE_SINGLE:
            ret = system_prompt + self.sep
            for role, message in self.messages:
                if message:
                    ret += role + ': ' + message + self.sep
                else:
                    ret += role + ': '  # must be end with a space
            return ret
        elif self.sep_style == SeparatorStyle.ADD_NEW_LINE_SINGLE:
            ret = '' if system_prompt == '' else system_prompt + self.sep
            for role, message in self.messages:
                if message:
                    ret += role + '\n' + message + self.sep
                else:
                    ret += role + '\n'
            return ret
        elif self.sep_style == SeparatorStyle.NO_COLON_SINGLE:
            ret = system_prompt
            for role, message in self.messages:
                if message:
                    ret += role + message + self.sep
                else:
                    ret += role
            return ret
        elif self.sep_style == SeparatorStyle.NO_COLON_TWO:
            seps = [self.sep, self.sep2]
            ret = system_prompt
            for i, (role, message) in enumerate(self.messages):
                if message:
                    ret += role + message + seps[i % 2]
                else:
                    ret += role
            return ret
        elif self.sep_style == SeparatorStyle.RWKV:
            ret = system_prompt
            for i, (role, message) in enumerate(self.messages):
                if message:
                    ret += (
                        role
                        + ': '
                        + message.replace('\r\n', '\n').replace('\n\n', '\n')
                    )
                    ret += '\n\n'
                else:
                    ret += role + ':'
            return ret
        elif self.sep_style == SeparatorStyle.LLAMA2:
            seps = [self.sep, self.sep2]
            if self.system_message:
                ret = system_prompt
            else:
                ret = '[INST] '
            for i, (role, message) in enumerate(self.messages):
                tag = self.roles[i % 2]
                if message:
                    if i == 0:
                        ret += message + ' '
                    else:
                        ret += tag + ' ' + message + seps[i % 2]
                else:
                    ret += tag
            return ret
        elif self.sep_style == SeparatorStyle.CHATGLM:
            # source: https://huggingface.co/THUDM/chatglm-6b/blob/1d240ba371910e9282298d4592532d7f0f3e9f3e/modeling_chatglm.py#L1302-L1308
            # source2: https://huggingface.co/THUDM/chatglm2-6b/blob/e186c891cf64310ac66ef10a87e6635fa6c2a579/modeling_chatglm.py#L926
            round_add_n = 1 if self.name == 'chatglm2' else 0
            if system_prompt:
                ret = system_prompt + self.sep
            else:
                ret = ''

            for i, (role, message) in enumerate(self.messages):
                if i % 2 == 0:
                    ret += f'[Round {i//2 + round_add_n}]{self.sep}'

                if message:
                    ret += f'{role}:{message}{self.sep}'
                else:
                    ret += f'{role}:'
            return ret
        elif self.sep_style == SeparatorStyle.CHATML:
            ret = '' if system_prompt == '' else system_prompt + self.sep + '\n'
            for role, message in self.messages:
                if message:
                    ret += role + '\n' + message + self.sep + '\n'
                else:
                    ret += role + '\n'
            return ret
        elif self.sep_style == SeparatorStyle.CHATGLM3:
            ret = ''
            if self.system_message:
                ret += system_prompt
            for role, message in self.messages:
                if message:
                    ret += role + '\n' + ' ' + message
                else:
                    ret += role
            return ret
        elif self.sep_style == SeparatorStyle.CHATINTERN:
            # source: https://huggingface.co/internlm/internlm-chat-7b-8k/blob/bd546fa984b4b0b86958f56bf37f94aa75ab8831/modeling_internlm.py#L771
            seps = [self.sep, self.sep2]
            ret = system_prompt
            for i, (role, message) in enumerate(self.messages):
                # if i % 2 == 0:
                #     ret += "<s>"
                if message:
                    ret += role + ':' + message + seps[i % 2] + '\n'
                else:
                    ret += role + ':'
            return ret
        elif self.sep_style == SeparatorStyle.DOLLY:
            seps = [self.sep, self.sep2]
            ret = system_prompt
            for i, (role, message) in enumerate(self.messages):
                if message:
                    ret += role + ':\n' + message + seps[i % 2]
                    if i % 2 == 1:
                        ret += '\n\n'
                else:
                    ret += role + ':\n'
            return ret
        elif self.sep_style == SeparatorStyle.PHOENIX:
            ret = system_prompt
            for role, message in self.messages:
                if message:
                    ret += role + ': ' + '<s>' + message + '</s>'
                else:
                    ret += role + ': ' + '<s>'
            return ret
        elif self.sep_style == SeparatorStyle.ROBIN:
            ret = system_prompt + self.sep
            for role, message in self.messages:
                if message:
                    ret += role + ':\n' + message + self.sep
                else:
                    ret += role + ':\n'
            return ret
        elif self.sep_style == SeparatorStyle.FALCON_CHAT:
            ret = ''
            if self.system_message:
                ret += system_prompt + self.sep
            for role, message in self.messages:
                if message:
                    ret += role + ': ' + message + self.sep
                else:
                    ret += role + ':'

            return ret
        elif self.sep_style == SeparatorStyle.INTERNVL_ZH:
            seps = [self.sep, self.sep2]
            ret = self.system_message + seps[0]
            for i, (role, message) in enumerate(self.messages):
                if message:
                    ret += role + ': ' + message + seps[i % 2]
                else:
                    ret += role + ':'
            return ret
        elif self.sep_style == SeparatorStyle.MPT:
            ret = system_prompt + self.sep
            for role, message in self.messages:
                if message:
                    if type(message) is tuple:
                        message, _, _ = message
                    ret += role + message + self.sep
                else:
                    ret += role
            return ret
        else:
            raise ValueError(f'Invalid style: {self.sep_style}')

    def set_system_message(self, system_message: str):
        """Set the system message."""
        self.system_message = system_message

    def append_message(self, role: str, message: str):
        """Append a new message."""
        self.messages.append([role, message])

    def update_last_message(self, message: str):
        """Update the last output.

        The last message is typically set to be None when constructing the prompt,
        so we need to update it in-place after getting the response from a model.
        """
        self.messages[-1][1] = message

    def to_gradio_chatbot(self):
        """Convert the conversation to gradio chatbot format."""
        ret = []
        for i, (role, msg) in enumerate(self.messages[self.offset :]):
            if i % 2 == 0:
                ret.append([msg, None])
            else:
                ret[-1][-1] = msg
        return ret

    def to_openai_api_messages(self):
        """Convert the conversation to OpenAI chat completion format."""
        ret = [{'role': 'system', 'content': self.system_message}]

        for i, (_, msg) in enumerate(self.messages[self.offset :]):
            if i % 2 == 0:
                ret.append({'role': 'user', 'content': msg})
            else:
                if msg is not None:
                    ret.append({'role': 'assistant', 'content': msg})
        return ret

    def copy(self):
        return Conversation(
            name=self.name,
            system_template=self.system_template,
            system_message=self.system_message,
            roles=self.roles,
            messages=[[x, y] for x, y in self.messages],
            offset=self.offset,
            sep_style=self.sep_style,
            sep=self.sep,
            sep2=self.sep2,
            stop_str=self.stop_str,
            stop_token_ids=self.stop_token_ids,
        )

    def dict(self):
        return {
            'template_name': self.name,
            'system_message': self.system_message,
            'roles': self.roles,
            'messages': self.messages,
            'offset': self.offset,
        }


# A global registry for all conversation templates
conv_templates: Dict[str, Conversation] = {}


def register_conv_template(template: Conversation, override: bool = False):
    """Register a new conversation template."""
    if not override:
        assert (
            template.name not in conv_templates
        ), f'{template.name} has been registered.'

    conv_templates[template.name] = template


def get_conv_template(name: str) -> Conversation:
    """Get a conversation template."""
    return conv_templates[name].copy()


# Both Hermes-2 and internlm2-chat are chatml-format conversation templates. The difference
# is that during training, the preprocessing function for the Hermes-2 template doesn't add
# <s> at the beginning of the tokenized sequence, while the internlm2-chat template does.
# Therefore, they are completely equivalent during inference.
register_conv_template(
    Conversation(
        name='Hermes-2',
        system_template='<|im_start|>system\n{system_message}',
        # note: The new system prompt was not used here to avoid changes in benchmark performance.
        # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室、清华大学及多家合作单位联合开发的多模态大语言模型。',
        system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',
        roles=('<|im_start|>user\n', '<|im_start|>assistant\n'),
        sep_style=SeparatorStyle.MPT,
        sep='<|im_end|>',
        stop_str='<|endoftext|>',
    )
)


register_conv_template(
    Conversation(
        name='internlm2-chat',
        system_template='<|im_start|>system\n{system_message}',
        # note: The new system prompt was not used here to avoid changes in benchmark performance.
        # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室、清华大学及多家合作单位联合开发的多模态大语言模型。',
        system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',
        roles=('<|im_start|>user\n', '<|im_start|>assistant\n'),
        sep_style=SeparatorStyle.MPT,
        sep='<|im_end|>',
    )
)


register_conv_template(
    Conversation(
        name='phi3-chat',
        system_template='<|system|>\n{system_message}',
        # note: The new system prompt was not used here to avoid changes in benchmark performance.
        # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室、清华大学及多家合作单位联合开发的多模态大语言模型。',
        system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',
        roles=('<|user|>\n', '<|assistant|>\n'),
        sep_style=SeparatorStyle.MPT,
        sep='<|end|>',
    )
)


register_conv_template(
    Conversation(
        name='internvl2_5',
        system_template='<|im_start|>system\n{system_message}',
        system_message='你是书生·万象,英文名是InternVL,是由上海人工智能实验室、清华大学及多家合作单位联合开发的多模态大语言模型。',
        roles=('<|im_start|>user\n', '<|im_start|>assistant\n'),
        sep_style=SeparatorStyle.MPT,
        sep='<|im_end|>\n',
    )
)
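The registry above is this file's whole public surface: get_conv_template returns a copy, so callers can append turns without mutating the shared template. A minimal rendering sketch using the internvl2_5 template registered above (assuming the file is importable as conversation):

from conversation import get_conv_template

template = get_conv_template('internvl2_5')  # a copy; the registry stays clean
template.append_message(template.roles[0], 'Describe the image.')
template.append_message(template.roles[1], None)  # None marks the slot the model will fill
print(template.get_prompt())
# <|im_start|>system
# 你是书生·万象,...<|im_end|>
# <|im_start|>user
# Describe the image.<|im_end|>
# <|im_start|>assistant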
8
generation_config.json
Normal file
@@ -0,0 +1,8 @@
{
  "_from_model_config": true,
  "eos_token_id": [
    92542,
    92543
  ],
  "transformers_version": "4.49.0.dev0"
}
3
model.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:86cc8956add5f1144986e0c0acb3bba7d2274403f13294b952280ab3500c4f8a
size 4411571040
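This is a Git LFS pointer rather than the weights themselves: a clone made without LFS support leaves this three-line stub in place of the roughly 4.4 GB tensor file, and the actual payload is fetched with `git lfs install` followed by `git lfs pull`.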
430
modeling_intern_vit.py
Normal file
@@ -0,0 +1,430 @@
# --------------------------------------------------------
# InternVL
# Copyright (c) 2024 OpenGVLab
# Licensed under The MIT License [see LICENSE for details]
# --------------------------------------------------------

from typing import Optional, Tuple, Union

import torch
import torch.nn.functional as F
import torch.utils.checkpoint
from einops import rearrange
from timm.models.layers import DropPath
from torch import nn
from transformers.activations import ACT2FN
from transformers.modeling_outputs import (BaseModelOutput,
                                           BaseModelOutputWithPooling)
from transformers.modeling_utils import PreTrainedModel
from transformers.utils import logging

from .configuration_intern_vit import InternVisionConfig

try:
    from flash_attn.bert_padding import pad_input, unpad_input
    from flash_attn.flash_attn_interface import \
        flash_attn_varlen_qkvpacked_func
    has_flash_attn = True
except:
    print('FlashAttention2 is not installed.')
    has_flash_attn = False

logger = logging.get_logger(__name__)


class FlashAttention(nn.Module):
    """Implement the scaled dot product attention with softmax.
    Arguments
    ---------
        softmax_scale: The temperature to use for the softmax attention.
                       (default: 1/sqrt(d_keys) where d_keys is computed at
                       runtime)
        attention_dropout: The dropout rate to apply to the attention
                           (default: 0.0)
    """

    def __init__(self, softmax_scale=None, attention_dropout=0.0, device=None, dtype=None):
        super().__init__()
        self.softmax_scale = softmax_scale
        self.dropout_p = attention_dropout

    def forward(self, qkv, key_padding_mask=None, causal=False, cu_seqlens=None,
                max_s=None, need_weights=False):
        """Implements the multihead softmax attention.
        Arguments
        ---------
            qkv: The tensor containing the query, key, and value. (B, S, 3, H, D) if key_padding_mask is None
                if unpadded: (nnz, 3, h, d)
            key_padding_mask: a bool tensor of shape (B, S)
        """
        assert not need_weights
        assert qkv.dtype in [torch.float16, torch.bfloat16]
        assert qkv.is_cuda

        if cu_seqlens is None:
            batch_size = qkv.shape[0]
            seqlen = qkv.shape[1]
            if key_padding_mask is None:
                qkv = rearrange(qkv, 'b s ... -> (b s) ...')
                max_s = seqlen
                cu_seqlens = torch.arange(0, (batch_size + 1) * seqlen, step=seqlen, dtype=torch.int32,
                                          device=qkv.device)
                output = flash_attn_varlen_qkvpacked_func(
                    qkv, cu_seqlens, max_s, self.dropout_p if self.training else 0.0,
                    softmax_scale=self.softmax_scale, causal=causal
                )
                output = rearrange(output, '(b s) ... -> b s ...', b=batch_size)
            else:
                nheads = qkv.shape[-2]
                x = rearrange(qkv, 'b s three h d -> b s (three h d)')
                x_unpad, indices, cu_seqlens, max_s = unpad_input(x, key_padding_mask)
                x_unpad = rearrange(x_unpad, 'nnz (three h d) -> nnz three h d', three=3, h=nheads)
                output_unpad = flash_attn_varlen_qkvpacked_func(
                    x_unpad, cu_seqlens, max_s, self.dropout_p if self.training else 0.0,
                    softmax_scale=self.softmax_scale, causal=causal
                )
                output = rearrange(pad_input(rearrange(output_unpad, 'nnz h d -> nnz (h d)'),
                                             indices, batch_size, seqlen),
                                   'b s (h d) -> b s h d', h=nheads)
        else:
            assert max_s is not None
            output = flash_attn_varlen_qkvpacked_func(
                qkv, cu_seqlens, max_s, self.dropout_p if self.training else 0.0,
                softmax_scale=self.softmax_scale, causal=causal
            )

        return output, None


class InternRMSNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        input_dtype = hidden_states.dtype
        hidden_states = hidden_states.to(torch.float32)
        variance = hidden_states.pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        return self.weight * hidden_states.to(input_dtype)


try:
    from apex.normalization import FusedRMSNorm

    InternRMSNorm = FusedRMSNorm  # noqa

    logger.info('Discovered apex.normalization.FusedRMSNorm - will use it instead of InternRMSNorm')
except ImportError:
    # using the normal InternRMSNorm
    pass
except Exception:
    logger.warning('discovered apex but it failed to load, falling back to InternRMSNorm')
    pass


NORM2FN = {
    'rms_norm': InternRMSNorm,
    'layer_norm': nn.LayerNorm,
}


class InternVisionEmbeddings(nn.Module):
    def __init__(self, config: InternVisionConfig):
        super().__init__()
        self.config = config
        self.embed_dim = config.hidden_size
        self.image_size = config.image_size
        self.patch_size = config.patch_size

        self.class_embedding = nn.Parameter(
            torch.randn(1, 1, self.embed_dim),
        )

        self.patch_embedding = nn.Conv2d(
            in_channels=3, out_channels=self.embed_dim, kernel_size=self.patch_size, stride=self.patch_size
        )

        self.num_patches = (self.image_size // self.patch_size) ** 2
        self.num_positions = self.num_patches + 1

        self.position_embedding = nn.Parameter(torch.randn(1, self.num_positions, self.embed_dim))

    def _get_pos_embed(self, pos_embed, H, W):
        target_dtype = pos_embed.dtype
        pos_embed = pos_embed.float().reshape(
            1, self.image_size // self.patch_size, self.image_size // self.patch_size, -1).permute(0, 3, 1, 2)
        pos_embed = F.interpolate(pos_embed, size=(H, W), mode='bicubic', align_corners=False). \
            reshape(1, -1, H * W).permute(0, 2, 1).to(target_dtype)
        return pos_embed

    def forward(self, pixel_values: torch.FloatTensor) -> torch.Tensor:
        target_dtype = self.patch_embedding.weight.dtype
        patch_embeds = self.patch_embedding(pixel_values)  # shape = [*, channel, width, height]
        batch_size, _, height, width = patch_embeds.shape
        patch_embeds = patch_embeds.flatten(2).transpose(1, 2)
        class_embeds = self.class_embedding.expand(batch_size, 1, -1).to(target_dtype)
        embeddings = torch.cat([class_embeds, patch_embeds], dim=1)
        position_embedding = torch.cat([
            self.position_embedding[:, :1, :],
            self._get_pos_embed(self.position_embedding[:, 1:, :], height, width)
        ], dim=1)
        embeddings = embeddings + position_embedding.to(target_dtype)
        return embeddings


class InternAttention(nn.Module):
    """Multi-headed attention from 'Attention Is All You Need' paper"""

    def __init__(self, config: InternVisionConfig):
        super().__init__()
        self.config = config
        self.embed_dim = config.hidden_size
        self.num_heads = config.num_attention_heads
        self.use_flash_attn = config.use_flash_attn and has_flash_attn
        if config.use_flash_attn and not has_flash_attn:
            print('Warning: Flash Attention is not available, use_flash_attn is set to False.')
        self.head_dim = self.embed_dim // self.num_heads
        if self.head_dim * self.num_heads != self.embed_dim:
            raise ValueError(
                f'embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:'
                f' {self.num_heads}).'
            )

        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(self.embed_dim, 3 * self.embed_dim, bias=config.qkv_bias)
        self.attn_drop = nn.Dropout(config.attention_dropout)
        self.proj_drop = nn.Dropout(config.dropout)

        self.qk_normalization = config.qk_normalization

        if self.qk_normalization:
            self.q_norm = InternRMSNorm(self.embed_dim, eps=config.layer_norm_eps)
            self.k_norm = InternRMSNorm(self.embed_dim, eps=config.layer_norm_eps)

        if self.use_flash_attn:
            self.inner_attn = FlashAttention(attention_dropout=config.attention_dropout)
        self.proj = nn.Linear(self.embed_dim, self.embed_dim)

    def _naive_attn(self, x):
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        q, k, v = qkv.unbind(0)  # make torchscript happy (cannot use tensor as tuple)

        if self.qk_normalization:
            B_, H_, N_, D_ = q.shape
            q = self.q_norm(q.transpose(1, 2).flatten(-2, -1)).view(B_, N_, H_, D_).transpose(1, 2)
            k = self.k_norm(k.transpose(1, 2).flatten(-2, -1)).view(B_, N_, H_, D_).transpose(1, 2)

        attn = ((q * self.scale) @ k.transpose(-2, -1))
        attn = attn.softmax(dim=-1)
        attn = self.attn_drop(attn)

        x = (attn @ v).transpose(1, 2).reshape(B, N, C)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x

    def _flash_attn(self, x, key_padding_mask=None, need_weights=False):
        qkv = self.qkv(x)
        qkv = rearrange(qkv, 'b s (three h d) -> b s three h d', three=3, h=self.num_heads)

        if self.qk_normalization:
            q, k, v = qkv.unbind(2)
            q = self.q_norm(q.flatten(-2, -1)).view(q.shape)
            k = self.k_norm(k.flatten(-2, -1)).view(k.shape)
            qkv = torch.stack([q, k, v], dim=2)

        context, _ = self.inner_attn(
            qkv, key_padding_mask=key_padding_mask, need_weights=need_weights, causal=False
        )
        outs = self.proj(rearrange(context, 'b s h d -> b s (h d)'))
        outs = self.proj_drop(outs)
        return outs

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        x = self._naive_attn(hidden_states) if not self.use_flash_attn else self._flash_attn(hidden_states)
        return x


class InternMLP(nn.Module):
    def __init__(self, config: InternVisionConfig):
        super().__init__()
        self.config = config
        self.act = ACT2FN[config.hidden_act]
        self.fc1 = nn.Linear(config.hidden_size, config.intermediate_size)
        self.fc2 = nn.Linear(config.intermediate_size, config.hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        hidden_states = self.fc1(hidden_states)
        hidden_states = self.act(hidden_states)
        hidden_states = self.fc2(hidden_states)
        return hidden_states


class InternVisionEncoderLayer(nn.Module):
    def __init__(self, config: InternVisionConfig, drop_path_rate: float):
        super().__init__()
        self.embed_dim = config.hidden_size
        self.intermediate_size = config.intermediate_size
        self.norm_type = config.norm_type

        self.attn = InternAttention(config)
        self.mlp = InternMLP(config)
        self.norm1 = NORM2FN[self.norm_type](self.embed_dim, eps=config.layer_norm_eps)
        self.norm2 = NORM2FN[self.norm_type](self.embed_dim, eps=config.layer_norm_eps)

        self.ls1 = nn.Parameter(config.initializer_factor * torch.ones(self.embed_dim))
        self.ls2 = nn.Parameter(config.initializer_factor * torch.ones(self.embed_dim))
        self.drop_path1 = DropPath(drop_path_rate) if drop_path_rate > 0. else nn.Identity()
        self.drop_path2 = DropPath(drop_path_rate) if drop_path_rate > 0. else nn.Identity()

    def forward(
            self,
            hidden_states: torch.Tensor,
    ) -> Tuple[torch.FloatTensor, Optional[torch.FloatTensor], Optional[Tuple[torch.FloatTensor]]]:
        """
        Args:
            hidden_states (`Tuple[torch.FloatTensor, Optional[torch.FloatTensor]]`): input to the layer of shape `(batch, seq_len, embed_dim)`
        """
        hidden_states = hidden_states + self.drop_path1(self.attn(self.norm1(hidden_states).to(hidden_states.dtype)) * self.ls1)

        hidden_states = hidden_states + self.drop_path2(self.mlp(self.norm2(hidden_states).to(hidden_states.dtype)) * self.ls2)

        return hidden_states


class InternVisionEncoder(nn.Module):
    """
    Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a
    [`InternEncoderLayer`].

    Args:
        config (`InternConfig`):
            The corresponding vision configuration for the `InternEncoder`.
    """

    def __init__(self, config: InternVisionConfig):
        super().__init__()
        self.config = config
        # stochastic depth decay rule
        dpr = [x.item() for x in torch.linspace(0, config.drop_path_rate, config.num_hidden_layers)]
        self.layers = nn.ModuleList([
            InternVisionEncoderLayer(config, dpr[idx]) for idx in range(config.num_hidden_layers)])
        self.gradient_checkpointing = True

    def forward(
            self,
            inputs_embeds,
            output_hidden_states: Optional[bool] = None,
            return_dict: Optional[bool] = None,
    ) -> Union[Tuple, BaseModelOutput]:
        r"""
        Args:
            inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
                Embedded representation of the inputs. Should be float, not int tokens.
            output_hidden_states (`bool`, *optional*):
                Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
                for more detail.
            return_dict (`bool`, *optional*):
                Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
        """
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        encoder_states = () if output_hidden_states else None
        hidden_states = inputs_embeds

        for idx, encoder_layer in enumerate(self.layers):
            if output_hidden_states:
                encoder_states = encoder_states + (hidden_states,)
            if self.gradient_checkpointing and self.training:
                layer_outputs = torch.utils.checkpoint.checkpoint(
                    encoder_layer,
                    hidden_states)
            else:
                layer_outputs = encoder_layer(
                    hidden_states,
                )
            hidden_states = layer_outputs

        if output_hidden_states:
            encoder_states = encoder_states + (hidden_states,)

        if not return_dict:
            return tuple(v for v in [hidden_states, encoder_states] if v is not None)
        return BaseModelOutput(
            last_hidden_state=hidden_states, hidden_states=encoder_states
        )


class InternVisionModel(PreTrainedModel):
    main_input_name = 'pixel_values'
    _supports_flash_attn_2 = True
    config_class = InternVisionConfig
    _no_split_modules = ['InternVisionEncoderLayer']

    def __init__(self, config: InternVisionConfig):
        super().__init__(config)
        self.config = config

        self.embeddings = InternVisionEmbeddings(config)
        self.encoder = InternVisionEncoder(config)

    def resize_pos_embeddings(self, old_size, new_size, patch_size):
        pos_emb = self.embeddings.position_embedding
        _, num_positions, embed_dim = pos_emb.shape
        cls_emb = pos_emb[:, :1, :]
        pos_emb = pos_emb[:, 1:, :].reshape(1, old_size // patch_size, old_size // patch_size, -1).permute(0, 3, 1, 2)
        pos_emb = F.interpolate(pos_emb.float(), size=new_size // patch_size, mode='bicubic', align_corners=False)
        pos_emb = pos_emb.to(cls_emb.dtype).reshape(1, embed_dim, -1).permute(0, 2, 1)
        pos_emb = torch.cat([cls_emb, pos_emb], dim=1)
        self.embeddings.position_embedding = nn.Parameter(pos_emb)
        self.embeddings.image_size = new_size
        logger.info('Resized position embeddings from {} to {}'.format(old_size, new_size))

    def get_input_embeddings(self):
        return self.embeddings

    def forward(
            self,
            pixel_values: Optional[torch.FloatTensor] = None,
            output_hidden_states: Optional[bool] = None,
            return_dict: Optional[bool] = None,
            pixel_embeds: Optional[torch.FloatTensor] = None,
    ) -> Union[Tuple, BaseModelOutputWithPooling]:
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        if pixel_values is None and pixel_embeds is None:
            raise ValueError('You have to specify pixel_values or pixel_embeds')

        if pixel_embeds is not None:
            hidden_states = pixel_embeds
        else:
            if len(pixel_values.shape) == 4:
                hidden_states = self.embeddings(pixel_values)
            else:
                raise ValueError(f'wrong pixel_values size: {pixel_values.shape}')
        encoder_outputs = self.encoder(
            inputs_embeds=hidden_states,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        last_hidden_state = encoder_outputs.last_hidden_state
        pooled_output = last_hidden_state[:, 0, :]

        if not return_dict:
            return (last_hidden_state, pooled_output) + encoder_outputs[1:]

        return BaseModelOutputWithPooling(
            last_hidden_state=last_hidden_state,
            pooler_output=pooled_output,
            hidden_states=encoder_outputs.hidden_states,
            attentions=encoder_outputs.attentions,
        )
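Two details above are easy to miss: InternRMSNorm scales each hidden vector by its root mean square in float32 (no mean subtraction, unlike LayerNorm), and InternVisionEmbeddings bicubically interpolates its position table so the encoder accepts patch grids other than the one it was trained on. A self-contained numerical check of the normalization (pure torch, no repo imports):

import torch

x = torch.randn(2, 5, 8)
variance = x.pow(2).mean(-1, keepdim=True)
y = x * torch.rsqrt(variance + 1e-6)  # InternRMSNorm with weight == ones
# every normalized vector now has (approximately) unit RMS
assert torch.allclose(y.pow(2).mean(-1), torch.ones(2, 5), atol=1e-3)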
1415
modeling_internlm2.py
Normal file
File diff suppressed because it is too large
349
modeling_internvl_chat.py
Normal file
@@ -0,0 +1,349 @@
# --------------------------------------------------------
# InternVL
# Copyright (c) 2024 OpenGVLab
# Licensed under The MIT License [see LICENSE for details]
# --------------------------------------------------------

import warnings
from typing import List, Optional, Tuple, Union

import torch.utils.checkpoint
import transformers
from torch import nn
from torch.nn import CrossEntropyLoss
from transformers import (AutoModel, GenerationConfig, LlamaForCausalLM,
                          LlamaTokenizer)
from transformers.modeling_outputs import CausalLMOutputWithPast
from transformers.modeling_utils import PreTrainedModel
from transformers.utils import ModelOutput, logging

from .configuration_internvl_chat import InternVLChatConfig
from .conversation import get_conv_template
from .modeling_intern_vit import InternVisionModel, has_flash_attn
from .modeling_internlm2 import InternLM2ForCausalLM

logger = logging.get_logger(__name__)


def version_cmp(v1, v2, op='eq'):
    import operator

    from packaging import version
    op_func = getattr(operator, op)
    return op_func(version.parse(v1), version.parse(v2))


class InternVLChatModel(PreTrainedModel):
    config_class = InternVLChatConfig
    main_input_name = 'pixel_values'
    base_model_prefix = 'language_model'
    _supports_flash_attn_2 = True
    _no_split_modules = ['InternVisionModel', 'LlamaDecoderLayer', 'InternLM2DecoderLayer']

    def __init__(self, config: InternVLChatConfig, vision_model=None, language_model=None, use_flash_attn=True):
        super().__init__(config)

        assert version_cmp(transformers.__version__, '4.37.0', 'ge')
        image_size = config.force_image_size or config.vision_config.image_size
        patch_size = config.vision_config.patch_size
        self.patch_size = patch_size
        self.select_layer = config.select_layer
        self.template = config.template
        self.num_image_token = int((image_size // patch_size) ** 2 * (config.downsample_ratio ** 2))
        self.downsample_ratio = config.downsample_ratio
        self.ps_version = config.ps_version
        use_flash_attn = use_flash_attn if has_flash_attn else False
        config.vision_config.use_flash_attn = True if use_flash_attn else False
        config.llm_config.attn_implementation = 'flash_attention_2' if use_flash_attn else 'eager'

        logger.info(f'num_image_token: {self.num_image_token}')
        logger.info(f'ps_version: {self.ps_version}')
        if vision_model is not None:
            self.vision_model = vision_model
        else:
            self.vision_model = InternVisionModel(config.vision_config)
        if language_model is not None:
            self.language_model = language_model
        else:
            if config.llm_config.architectures[0] == 'LlamaForCausalLM':
                self.language_model = LlamaForCausalLM(config.llm_config)
            elif config.llm_config.architectures[0] == 'InternLM2ForCausalLM':
                self.language_model = InternLM2ForCausalLM(config.llm_config)
            else:
                raise NotImplementedError(f'{config.llm_config.architectures[0]} is not implemented.')

        vit_hidden_size = config.vision_config.hidden_size
        llm_hidden_size = config.llm_config.hidden_size

        self.mlp1 = nn.Sequential(
            nn.LayerNorm(vit_hidden_size * int(1 / self.downsample_ratio) ** 2),
            nn.Linear(vit_hidden_size * int(1 / self.downsample_ratio) ** 2, llm_hidden_size),
            nn.GELU(),
            nn.Linear(llm_hidden_size, llm_hidden_size)
        )

        self.img_context_token_id = None
        self.conv_template = get_conv_template(self.template)
        self.system_message = self.conv_template.system_message

    def forward(
            self,
            pixel_values: torch.FloatTensor,
            input_ids: torch.LongTensor = None,
            attention_mask: Optional[torch.Tensor] = None,
            position_ids: Optional[torch.LongTensor] = None,
            image_flags: Optional[torch.LongTensor] = None,
            past_key_values: Optional[List[torch.FloatTensor]] = None,
            labels: Optional[torch.LongTensor] = None,
            use_cache: Optional[bool] = None,
            output_attentions: Optional[bool] = None,
            output_hidden_states: Optional[bool] = None,
            return_dict: Optional[bool] = None,
    ) -> Union[Tuple, CausalLMOutputWithPast]:
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        image_flags = image_flags.squeeze(-1)
        input_embeds = self.language_model.get_input_embeddings()(input_ids).clone()

        vit_embeds = self.extract_feature(pixel_values)
        vit_embeds = vit_embeds[image_flags == 1]
        vit_batch_size = pixel_values.shape[0]

        B, N, C = input_embeds.shape
        input_embeds = input_embeds.reshape(B * N, C)

        if torch.distributed.is_initialized() and torch.distributed.get_rank() == 0:
            print(f'dynamic ViT batch size: {vit_batch_size}, images per sample: {vit_batch_size / B}, dynamic token length: {N}')

        input_ids = input_ids.reshape(B * N)
        selected = (input_ids == self.img_context_token_id)
        try:
            input_embeds[selected] = input_embeds[selected] * 0.0 + vit_embeds.reshape(-1, C)
        except Exception as e:
            vit_embeds = vit_embeds.reshape(-1, C)
            print(f'warning: {e}, input_embeds[selected].shape={input_embeds[selected].shape}, '
                  f'vit_embeds.shape={vit_embeds.shape}')
            n_token = selected.sum()
            input_embeds[selected] = input_embeds[selected] * 0.0 + vit_embeds[:n_token]

        input_embeds = input_embeds.reshape(B, N, C)

        outputs = self.language_model(
            inputs_embeds=input_embeds,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        logits = outputs.logits

        loss = None
        if labels is not None:
            # Shift so that tokens < n predict n
            shift_logits = logits[..., :-1, :].contiguous()
            shift_labels = labels[..., 1:].contiguous()
            # Flatten the tokens
            loss_fct = CrossEntropyLoss()
            shift_logits = shift_logits.view(-1, self.language_model.config.vocab_size)
            shift_labels = shift_labels.view(-1)
            # Enable model parallelism
            shift_labels = shift_labels.to(shift_logits.device)
            loss = loss_fct(shift_logits, shift_labels)

        if not return_dict:
            output = (logits,) + outputs[1:]
            return (loss,) + output if loss is not None else output

        return CausalLMOutputWithPast(
            loss=loss,
            logits=logits,
            past_key_values=outputs.past_key_values,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

    def pixel_shuffle(self, x, scale_factor=0.5):
        n, w, h, c = x.size()
        # N, W, H, C --> N, W, H * scale, C // scale
        x = x.view(n, w, int(h * scale_factor), int(c / scale_factor))
        # N, W, H * scale, C // scale --> N, H * scale, W, C // scale
        x = x.permute(0, 2, 1, 3).contiguous()
        # N, H * scale, W, C // scale --> N, H * scale, W * scale, C // (scale ** 2)
        x = x.view(n, int(h * scale_factor), int(w * scale_factor),
                   int(c / (scale_factor * scale_factor)))
        if self.ps_version == 'v1':
            warnings.warn("In ps_version 'v1', the height and width have not been swapped back, "
                          'which results in a transposed image.')
        else:
            x = x.permute(0, 2, 1, 3).contiguous()
        return x

    def extract_feature(self, pixel_values):
        if self.select_layer == -1:
            vit_embeds = self.vision_model(
                pixel_values=pixel_values,
                output_hidden_states=False,
                return_dict=True).last_hidden_state
        else:
            vit_embeds = self.vision_model(
                pixel_values=pixel_values,
                output_hidden_states=True,
                return_dict=True).hidden_states[self.select_layer]
        vit_embeds = vit_embeds[:, 1:, :]

        h = w = int(vit_embeds.shape[1] ** 0.5)
        vit_embeds = vit_embeds.reshape(vit_embeds.shape[0], h, w, -1)
        vit_embeds = self.pixel_shuffle(vit_embeds, scale_factor=self.downsample_ratio)
        vit_embeds = vit_embeds.reshape(vit_embeds.shape[0], -1, vit_embeds.shape[-1])
        vit_embeds = self.mlp1(vit_embeds)
        return vit_embeds

    def batch_chat(self, tokenizer, pixel_values, questions, generation_config, num_patches_list=None,
                   history=None, return_history=False, IMG_START_TOKEN='<img>', IMG_END_TOKEN='</img>',
                   IMG_CONTEXT_TOKEN='<IMG_CONTEXT>', verbose=False, image_counts=None):
        if history is not None or return_history:
            print('Now multi-turn chat is not supported in batch_chat.')
            raise NotImplementedError

        if image_counts is not None:
            num_patches_list = image_counts
            print('Warning: `image_counts` is deprecated. Please use `num_patches_list` instead.')

        img_context_token_id = tokenizer.convert_tokens_to_ids(IMG_CONTEXT_TOKEN)
        self.img_context_token_id = img_context_token_id

        if verbose and pixel_values is not None:
            image_bs = pixel_values.shape[0]
            print(f'dynamic ViT batch size: {image_bs}')

        queries = []
        for idx, num_patches in enumerate(num_patches_list):
            question = questions[idx]
            if pixel_values is not None and '<image>' not in question:
                question = '<image>\n' + question
            template = get_conv_template(self.template)
            template.system_message = self.system_message
            template.append_message(template.roles[0], question)
            template.append_message(template.roles[1], None)
            query = template.get_prompt()

            image_tokens = IMG_START_TOKEN + IMG_CONTEXT_TOKEN * self.num_image_token * num_patches + IMG_END_TOKEN
            query = query.replace('<image>', image_tokens, 1)
            queries.append(query)

        tokenizer.padding_side = 'left'
        model_inputs = tokenizer(queries, return_tensors='pt', padding=True)
        input_ids = model_inputs['input_ids'].to(self.device)
        attention_mask = model_inputs['attention_mask'].to(self.device)
        eos_token_id = tokenizer.convert_tokens_to_ids(template.sep.strip())
        generation_config['eos_token_id'] = eos_token_id
        generation_output = self.generate(
            pixel_values=pixel_values,
            input_ids=input_ids,
            attention_mask=attention_mask,
            **generation_config
        )
        responses = tokenizer.batch_decode(generation_output, skip_special_tokens=True)
        responses = [response.split(template.sep.strip())[0].strip() for response in responses]
        return responses

    def chat(self, tokenizer, pixel_values, question, generation_config, history=None, return_history=False,
             num_patches_list=None, IMG_START_TOKEN='<img>', IMG_END_TOKEN='</img>', IMG_CONTEXT_TOKEN='<IMG_CONTEXT>',
             verbose=False):

        if history is None and pixel_values is not None and '<image>' not in question:
            question = '<image>\n' + question

        if num_patches_list is None:
            num_patches_list = [pixel_values.shape[0]] if pixel_values is not None else []
        assert pixel_values is None or len(pixel_values) == sum(num_patches_list)

        img_context_token_id = tokenizer.convert_tokens_to_ids(IMG_CONTEXT_TOKEN)
        self.img_context_token_id = img_context_token_id

        template = get_conv_template(self.template)
        template.system_message = self.system_message
        eos_token_id = tokenizer.convert_tokens_to_ids(template.sep.strip())

        history = [] if history is None else history
        for (old_question, old_answer) in history:
            template.append_message(template.roles[0], old_question)
            template.append_message(template.roles[1], old_answer)
        template.append_message(template.roles[0], question)
        template.append_message(template.roles[1], None)
        query = template.get_prompt()

        if verbose and pixel_values is not None:
            image_bs = pixel_values.shape[0]
            print(f'dynamic ViT batch size: {image_bs}')

        for num_patches in num_patches_list:
            image_tokens = IMG_START_TOKEN + IMG_CONTEXT_TOKEN * self.num_image_token * num_patches + IMG_END_TOKEN
            query = query.replace('<image>', image_tokens, 1)

        model_inputs = tokenizer(query, return_tensors='pt')
        input_ids = model_inputs['input_ids'].to(self.device)
        attention_mask = model_inputs['attention_mask'].to(self.device)
        generation_config['eos_token_id'] = eos_token_id
        generation_output = self.generate(
            pixel_values=pixel_values,
            input_ids=input_ids,
            attention_mask=attention_mask,
            **generation_config
        )
        response = tokenizer.batch_decode(generation_output, skip_special_tokens=True)[0]
        response = response.split(template.sep.strip())[0].strip()
        history.append((question, response))
        if return_history:
            return response, history
        else:
            query_to_print = query.replace(IMG_CONTEXT_TOKEN, '')
            query_to_print = query_to_print.replace(f'{IMG_START_TOKEN}{IMG_END_TOKEN}', '<image>')
            if verbose:
                print(query_to_print, response)
            return response

    @torch.no_grad()
    def generate(
            self,
            pixel_values: Optional[torch.FloatTensor] = None,
            input_ids: Optional[torch.FloatTensor] = None,
            attention_mask: Optional[torch.LongTensor] = None,
            visual_features: Optional[torch.FloatTensor] = None,
            generation_config: Optional[GenerationConfig] = None,
            output_hidden_states: Optional[bool] = None,
            **generate_kwargs,
    ) -> torch.LongTensor:

        assert self.img_context_token_id is not None
        if pixel_values is not None:
            if visual_features is not None:
                vit_embeds = visual_features
            else:
                vit_embeds = self.extract_feature(pixel_values)
            input_embeds = self.language_model.get_input_embeddings()(input_ids)
            B, N, C = input_embeds.shape
            input_embeds = input_embeds.reshape(B * N, C)

            input_ids = input_ids.reshape(B * N)
            selected = (input_ids == self.img_context_token_id)
            assert selected.sum() != 0
            input_embeds[selected] = vit_embeds.reshape(-1, C).to(input_embeds.device)

            input_embeds = input_embeds.reshape(B, N, C)
        else:
            input_embeds = self.language_model.get_input_embeddings()(input_ids)

        outputs = self.language_model.generate(
            inputs_embeds=input_embeds,
            attention_mask=attention_mask,
            generation_config=generation_config,
            output_hidden_states=output_hidden_states,
            use_cache=True,
            **generate_kwargs,
        )

        return outputs
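chat() ties the pieces together: it renders the conversation template, expands each <image> placeholder into num_image_token * num_patches copies of <IMG_CONTEXT>, and generate() splices the projected ViT features into exactly those slots. A minimal invocation sketch; the checkpoint path is a placeholder, and pixel_values is assumed to already be the stacked, normalized image tiles produced by InternVL-style dynamic preprocessing:

import torch
from transformers import AutoModel, AutoTokenizer

path = '/path/to/checkpoint'  # placeholder, not a published model id
model = AutoModel.from_pretrained(path, torch_dtype=torch.bfloat16,
                                  trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

pixel_values = ...  # (num_patches, 3, H, W) bfloat16 tensor on the same device
generation_config = dict(max_new_tokens=512, do_sample=False)  # plain dict: chat() mutates it
response = model.chat(tokenizer, pixel_values, 'Describe the image.', generation_config)
print(response)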
47
special_tokens_map.json
Normal file
@@ -0,0 +1,47 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|action_start|>",
    "<|action_end|>",
    "<|interpreter|>",
    "<|plugin|>",
    "<img>",
    "</img>",
    "<IMG_CONTEXT>",
    "<quad>",
    "</quad>",
    "<ref>",
    "</ref>",
    "<box>",
    "</box>"
  ],
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
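The image markers declared here are what the modeling code resolves at runtime: chat() and batch_chat() call tokenizer.convert_tokens_to_ids('<IMG_CONTEXT>') and store the result as img_context_token_id, so this file must stay in sync with the token names hard-coded in modeling_internvl_chat.py.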
235
tokenization_internlm2.py
Normal file
@@ -0,0 +1,235 @@
|
|||||||
|
# Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
|
||||||
|
#
|
||||||
|
# This code is based on transformers/src/transformers/models/llama/tokenization_llama.py
|
||||||
|
#
|
||||||
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
|
# you may not use this file except in compliance with the License.
|
||||||
|
# You may obtain a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||||
|
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||||
|
# See the License for the specific language governing permissions and
|
||||||
|
# limitations under the License.
|
||||||
|
|
||||||
|
"""Tokenization classes for InternLM."""
|
||||||
|
import os
|
||||||
|
from shutil import copyfile
|
||||||
|
from typing import Any, Dict, List, Optional, Tuple
|
||||||
|
|
||||||
|
import sentencepiece as spm
|
||||||
|
from transformers.tokenization_utils import PreTrainedTokenizer
|
||||||
|
from transformers.utils import logging
|
||||||
|
|
||||||
|
logger = logging.get_logger(__name__)
|
||||||
|
|
||||||
|
VOCAB_FILES_NAMES = {'vocab_file': './tokenizer.model'}
|
||||||
|
|
||||||
|
PRETRAINED_VOCAB_FILES_MAP = {}
|
||||||
|
|
||||||
|
|
||||||
|
# Modified from transformers.model.llama.tokenization_llama.LlamaTokenizer
|
||||||
|
class InternLM2Tokenizer(PreTrainedTokenizer):
|
||||||
|
"""
|
||||||
|
Construct a InternLM2 tokenizer. Based on byte-level Byte-Pair-Encoding.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
vocab_file (`str`):
|
||||||
|
Path to the vocabulary file.
|
||||||
|
"""
|
||||||
|
|
||||||
|
vocab_files_names = VOCAB_FILES_NAMES
|
||||||
|
pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
|
||||||
|
model_input_names = ['input_ids', 'attention_mask']
|
||||||
|
_auto_class = 'AutoTokenizer'
|
||||||
|
|
||||||
|
def __init__(
|
||||||
|
self,
|
||||||
|
vocab_file,
|
||||||
|
unk_token='<unk>',
|
||||||
|
bos_token='<s>',
|
||||||
|
eos_token='</s>',
|
||||||
|
pad_token='</s>',
|
||||||
|
sp_model_kwargs: Optional[Dict[str, Any]] = None,
|
||||||
|
add_bos_token=True,
|
||||||
|
add_eos_token=False,
|
||||||
|
decode_with_prefix_space=False,
|
||||||
|
clean_up_tokenization_spaces=False,
|
||||||
|
**kwargs,
|
||||||
|
):
|
||||||
|
self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
|
||||||
|
self.vocab_file = vocab_file
|
||||||
|
self.add_bos_token = add_bos_token
|
||||||
|
self.add_eos_token = add_eos_token
|
||||||
|
self.decode_with_prefix_space = decode_with_prefix_space
|
||||||
|
self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
|
||||||
|
self.sp_model.Load(vocab_file)
|
||||||
|
self._no_prefix_space_tokens = None
|
||||||
|
super().__init__(
|
||||||
|
bos_token=bos_token,
|
||||||
|
eos_token=eos_token,
|
||||||
|
unk_token=unk_token,
|
||||||
|
pad_token=pad_token,
|
||||||
|
clean_up_tokenization_spaces=clean_up_tokenization_spaces,
|
||||||
|
**kwargs,
|
||||||
|
)
|
||||||
|
|
||||||
|
@property
|
||||||
|
def no_prefix_space_tokens(self):
|
||||||
|
if self._no_prefix_space_tokens is None:
|
||||||
|
vocab = self.convert_ids_to_tokens(list(range(self.vocab_size)))
|
||||||
|
self._no_prefix_space_tokens = {i for i, tok in enumerate(vocab) if not tok.startswith('▁')}
|
||||||
|
return self._no_prefix_space_tokens
|
||||||
|
|
||||||
|
@property
|
||||||
|
def vocab_size(self):
|
||||||
|
"""Returns vocab size"""
|
||||||
|
return self.sp_model.get_piece_size()
|
||||||
|
|
||||||
|
@property
|
||||||
|
def bos_token_id(self) -> Optional[int]:
|
||||||
|
return self.sp_model.bos_id()
|
||||||
|
|
||||||
|
@property
|
||||||
|
def eos_token_id(self) -> Optional[int]:
|
||||||
|
return self.sp_model.eos_id()
|
||||||
|
|
||||||
|
def get_vocab(self):
|
||||||
|
"""Returns vocab as a dict"""
|
||||||
|
vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
|
||||||
|
vocab.update(self.added_tokens_encoder)
|
||||||
|
return vocab
|
||||||
|
|
||||||
|
def _tokenize(self, text):
|
||||||
|
"""Returns a tokenized string."""
|
||||||
|
return self.sp_model.encode(text, out_type=str)
|
||||||
|
|
||||||
|
def _convert_token_to_id(self, token):
|
||||||
|
"""Converts a token (str) in an id using the vocab."""
|
||||||
|
return self.sp_model.piece_to_id(token)
|
||||||
|
|
||||||
|
def _convert_id_to_token(self, index):
|
||||||
|
"""Converts an index (integer) in a token (str) using the vocab."""
|
||||||
|
token = self.sp_model.IdToPiece(index)
|
||||||
|
return token
|
||||||
|
|
||||||
|
def _maybe_add_prefix_space(self, tokens, decoded):
|
||||||
|
if tokens and tokens[0] not in self.no_prefix_space_tokens:
|
||||||
|
return ' ' + decoded
|
||||||
|
else:
|
||||||
|
return decoded
|
||||||
|
|
||||||
|

    def convert_tokens_to_string(self, tokens):
        """Converts a sequence of tokens (string) into a single string."""
        current_sub_tokens = []
        out_string = ''
        prev_is_special = False
        for token in tokens:
            # make sure that special tokens are not decoded using sentencepiece model
            if token in self.all_special_tokens:
                if not prev_is_special:
                    out_string += ' '
                out_string += self.sp_model.decode(current_sub_tokens) + token
                prev_is_special = True
                current_sub_tokens = []
            else:
                current_sub_tokens.append(token)
                prev_is_special = False
        out_string += self.sp_model.decode(current_sub_tokens)
        out_string = self.clean_up_tokenization(out_string)
        out_string = self._maybe_add_prefix_space(tokens=tokens, decoded=out_string)
        return out_string[1:]
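
    # Decoding sketch (illustrative): special tokens bypass the SentencePiece
    # model and are re-inserted verbatim, so a sequence such as
    # ['<s>', '▁Hi', '</s>'] decodes to a string containing the literal markers
    # '<s>' and '</s>' around the decoded text.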

    def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]:
        """
        Save the vocabulary and special tokens file to a directory.

        Args:
            save_directory (`str`):
                The directory in which to save the vocabulary.

        Returns:
            `Tuple(str)`: Paths to the files saved.
        """
        if not os.path.isdir(save_directory):
            logger.error(f'Vocabulary path ({save_directory}) should be a directory')
            return
        out_vocab_file = os.path.join(
            save_directory, (filename_prefix + '-' if filename_prefix else '') + VOCAB_FILES_NAMES['vocab_file']
        )

        if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
            copyfile(self.vocab_file, out_vocab_file)
        elif not os.path.isfile(self.vocab_file):
            with open(out_vocab_file, 'wb') as fi:
                content_spiece_model = self.sp_model.serialized_model_proto()
                fi.write(content_spiece_model)

        return (out_vocab_file,)
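
    # Usage sketch (the path is hypothetical): copies the SentencePiece model
    # file into the target directory, named via VOCAB_FILES_NAMES defined
    # earlier in this file, and returns the resulting path as a 1-tuple:
    #   tokenizer.save_vocabulary('/tmp/tok')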

    def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
        if self.add_bos_token:
            bos_token_ids = [self.bos_token_id]
        else:
            bos_token_ids = []

        output = bos_token_ids + token_ids_0

        if token_ids_1 is not None:
            output = output + token_ids_1

        if self.add_eos_token:
            output = output + [self.eos_token_id]

        return output
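
    # Sketch, assuming add_bos_token=True and add_eos_token=False:
    #   build_inputs_with_special_tokens([10, 11])        -> [bos_token_id, 10, 11]
    #   build_inputs_with_special_tokens([10, 11], [12])  -> [bos_token_id, 10, 11, 12]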

    def get_special_tokens_mask(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
    ) -> List[int]:
        """
        Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
        special tokens using the tokenizer `prepare_for_model` method.

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.
            already_has_special_tokens (`bool`, *optional*, defaults to `False`):
                Whether or not the token list is already formatted with special tokens for the model.

        Returns:
            `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
        """
        if already_has_special_tokens:
            return super().get_special_tokens_mask(
                token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
            )

        if token_ids_1 is None:
            return [1] + ([0] * len(token_ids_0)) + [1]
        return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]
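
    # Mask sketch for the single-sequence case: a leading 1 for BOS, zeros for
    # the sequence tokens, and a trailing 1 for EOS:
    #   get_special_tokens_mask([10, 11]) -> [1, 0, 0, 1]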

    def create_token_type_ids_from_sequences(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    ) -> List[int]:
        """
        Create a mask from the two sequences passed to be used in a sequence-pair classification task. InternLM2 does
        not make use of token type ids, therefore a list of zeros is returned.

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.

        Returns:
            `List[int]`: List of zeros.
        """
        eos = [self.eos_token_id]

        if token_ids_1 is None:
            return len(token_ids_0 + eos) * [0]
        return len(token_ids_0 + eos + token_ids_1 + eos) * [0]
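
A minimal loading sketch (assumptions: the repo id `numind/NuExtract-2-2B-experimental` from the model card above, and that passing `trust_remote_code=True` is acceptable, since `InternLM2Tokenizer` is defined in this repository rather than inside `transformers`):

from transformers import AutoTokenizer

# trust_remote_code lets transformers import tokenization_internlm2.py from the repo.
tokenizer = AutoTokenizer.from_pretrained('numind/NuExtract-2-2B-experimental', trust_remote_code=True)
ids = tokenizer('NuExtract turns raw text into structured JSON.')['input_ids']
print(tokenizer.decode(ids))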
3
tokenizer.model
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f868398fc4e05ee1e8aeba95ddf18ddcc45b8bce55d5093bead5bbf80429b48b
size 1477754
180
tokenizer_config.json
Normal file
@@ -0,0 +1,180 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92538": {
      "content": "<|plugin|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92539": {
      "content": "<|interpreter|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92540": {
      "content": "<|action_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92541": {
      "content": "<|action_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92542": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92543": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92544": {
      "content": "<img>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92545": {
      "content": "</img>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92546": {
      "content": "<IMG_CONTEXT>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92547": {
      "content": "<quad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92548": {
      "content": "</quad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92549": {
      "content": "<ref>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92550": {
      "content": "</ref>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92551": {
      "content": "<box>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92552": {
      "content": "</box>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|action_start|>",
    "<|action_end|>",
    "<|interpreter|>",
    "<|plugin|>",
    "<img>",
    "</img>",
    "<IMG_CONTEXT>",
    "<quad>",
    "</quad>",
    "<ref>",
    "</ref>",
    "<box>",
    "</box>"
  ],
  "auto_map": {
    "AutoTokenizer": [
      "tokenization_internlm2.InternLM2Tokenizer",
      null
    ]
  },
  "bos_token": "<s>",
  "chat_template": "{{ bos_token }}{% for message in messages %}\n {%- if message['role'] == 'user' and template -%}\n {{- '<|im_start|>' + message['role'] -}}\n {{ '\n# Template:' }}\n {{- '\n' + template }}\n {% if examples %}\n {{- '# Examples:' }}\n {% for example in examples %}\n {{- '## Input:\n' }}\n {{- example['input'] + '\n' }}\n {{- '## Output:\n' }}\n {{- example['output'] | trim }}\n {% endfor %}\n {%- endif %}\n {{- '# Context:' }}\n {% if message['content'] is string %}\n {{- message['content'] | trim }}\n {% else %}\n {% for content in message['content'] %}\n {%- if content is string %}\n {{- content | trim }}\n {%- elif content['type'] == 'text' %}\n {{- content['text'] | trim }}\n {%- endif %}\n {% endfor %}\n {% endif %}\n {{- '<|im_end|> '}}\n {% else %}\n {{- '<|im_start|>' + message['role'] }}\n {% if message['content'] is string %}\n {{- message['content'] | trim }}\n {% else %}\n {% for content in message['content'] %}\n {%- if content is string %}\n {{- content | trim }}\n {%- elif content['type'] == 'text' %}\n {{- content['text'] | trim }}\n {%- endif %}\n {% endfor %}\n {% endif %}\n {{- '<|im_end|> '}}\n {% endif %}\n{% endfor -%}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant' }}\n{% endif -%}\n",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "extra_special_tokens": {},
  "model_max_length": 8192,
  "pad_token": "</s>",
  "tokenizer_class": "InternLM2Tokenizer",
  "unk_token": "<unk>"
}
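
The `chat_template` above takes two non-standard variables, `template` and `examples`, alongside the usual `messages`. A rendering sketch (hedged: the extraction schema and example list are illustrative, and it assumes a `transformers` version that forwards extra keyword arguments of `apply_chat_template` into the Jinja context):

messages = [{'role': 'user', 'content': 'John Smith lives in Paris.'}]
prompt = tokenizer.apply_chat_template(
    messages,
    template='{"name": "verbatim-string", "city": "verbatim-string"}',  # illustrative schema
    examples=[],  # optional list of {'input': ..., 'output': ...} dicts
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # prompt ends with '<|im_start|>assistant', ready for generation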