myownskyW7 committed on
Commit aca37dc · verified · 1 Parent(s): 3a3f54b

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ examples/dubai.png filter=lfs diff=lfs merge=lfs -text
37
+ examples/liuxiang.mp4 filter=lfs diff=lfs merge=lfs -text
.ipynb_checkpoints/README-checkpoint.md ADDED
@@ -0,0 +1,233 @@
1
+ ---
2
+ license: other
3
+ pipeline_tag: visual-question-answering
4
+ ---
5
+
6
+
7
+ <p align="center">
8
+ <img src="logo_en.png" width="600"/>
9
+ </p>
10
+
11
+ <p align="center">
12
+ <b><font size="6">InternLM-XComposer-2.5-Chat</font></b>
13
+ </p>
14
+
15
+ <div align="center">
16
+
17
+ [💻Github Repo](https://github.com/InternLM/InternLM-XComposer)
18
+
19
+ [Online Demo](https://huggingface.co/spaces/Willow123/InternLM-XComposer)
20
+
21
+ [Paper](https://huggingface.co/papers/2407.03320)
22
+
23
+ </div>
24
+
25
+ **InternLM-XComposer2.5-Chat** is a chat model trained on [internlm/internlm-xcomposer2d5-7b](https://huggingface.co/internlm/internlm-xcomposer2d5-7b),
26
+ offering improved multi-modal instruction following and open-ended dialogue capabilities.
27
+
28
+ ### Import from Transformers
29
+ To load the InternLM-XComposer2.5-Chat model using Transformers, use the following code:
30
+ ```python
31
+ import torch
32
+ from transformers import AutoTokenizer, AutoModelForCausalLM
33
+ ckpt_path = "internlm/internlm-xcomposer2d5-7b-chat"
34
+ tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True)
35
+ # Set `torch_dtype=torch.bfloat16` to load the model in bfloat16; otherwise it will be loaded as float32 and may cause an OOM error.
36
+ model = AutoModelForCausalLM.from_pretrained(ckpt_path, torch_dtype=torch.bfloat16, trust_remote_code=True).cuda()
37
+ model = model.eval()
38
+ ```
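+
+ The `chat` helper defined in `modeling_internlm_xcomposer2.py` (part of this upload) can also be called without any image for plain text dialogue. The snippet below is a minimal sketch rather than an official recipe: the query string is illustrative and the generation arguments simply mirror the examples further down.
+
+ ```python
+ # Text-only sketch; `model` and `tokenizer` come from the loading snippet above.
+ # `chat` returns the generated response and the updated conversation history.
+ query = 'Please introduce yourself briefly.'
+ response, history = model.chat(tokenizer, query, image=[], do_sample=False, num_beams=3, use_meta=True)
+ print(response)
+ ```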
39
+
40
+ ## Quickstart
41
+
42
+ We provide a simple example to show how to use InternLM-XComposer2.5 with 🤗 Transformers.
43
+
44
+ <details>
45
+ <summary>
46
+ <b>Video Understanding</b>
47
+ </summary>
48
+
49
+ ```python
50
+ import torch
51
+ from transformers import AutoModel, AutoTokenizer
52
+
53
+ torch.set_grad_enabled(False)
54
+
55
+ # init model and tokenizer
56
+ model = AutoModel.from_pretrained('internlm/internlm-xcomposer2d5-7b-chat', torch_dtype=torch.bfloat16, trust_remote_code=True).cuda().eval()
57
+ tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b-chat', trust_remote_code=True)
58
+ model.tokenizer = tokenizer
59
+
60
+ query = 'Here are some frames of a video. Describe this video in detail'
61
+ image = ['./examples/liuxiang.mp4',]
62
+ with torch.autocast(device_type='cuda', dtype=torch.float16):
63
+ response, his = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, use_meta=True)
64
+ print(response)
65
+ # The video begins with a man in a red and yellow uniform standing on the starting line of a track, preparing to compete in the 110-meter hurdles at the Athens 2004 Olympic Games. He is identified as Liu Xiang, a Chinese athlete, and his bib number is 1363. The scene is set in a stadium filled with spectators, indicating the significance of the event.
66
+ # As the race begins, all the athletes start running, but Liu Xiang quickly takes the lead. However, he encounters a hurdle and knocks it over. Despite this setback, he quickly recovers and continues to run. The race is intense, with athletes from various countries competing fiercely. In the end, Liu Xiang emerges as the winner with a time of 12.91 seconds, securing the gold medal for China.
67
+ # The video then transitions to a slow-motion replay of the race, focusing on Liu Xiang's performance and the knockdown of the hurdle. This allows viewers to appreciate the skill and determination of the athlete.
68
+ # Following the race, Liu Xiang is seen lying on the track, possibly exhausted from the intense competition. He then stands up and begins to celebrate his victory, waving his arms in the air and running around the track. The crowd cheers and celebrates with him, creating a joyful atmosphere.
69
+ # The video concludes with a replay of Liu Xiang's gold medal-winning moment, emphasizing the significance of his achievement at the Athens 2004 Olympic Games.
70
+ # Throughout the video, the Olympic logo is prominently displayed, reminding viewers of the global significance of the event and the athletes' dedication and perseverance in their pursuit of victory.
71
+
72
+ query = 'tell me the athlete code of Liu Xiang'
73
+ image = ['./examples/liuxiang.mp4',]
74
+ with torch.autocast(device_type='cuda', dtype=torch.float16):
75
+ response, _ = model.chat(tokenizer, query, image, history=his, do_sample=False, num_beams=3, use_meta=True)
76
+ print(response)
77
+ # The athlete code of Liu Xiang is 1363.
78
+ ```
79
+
80
+ </details>
81
+
82
+ <details>
83
+ <summary>
84
+ <b>Multi-Image Multi-Turn Dialogue</b>
85
+ </summary>
86
+
87
+ ```python
88
+ import torch
89
+ from transformers import AutoModel, AutoTokenizer
90
+
91
+ torch.set_grad_enabled(False)
92
+
93
+ # init model and tokenizer
94
+ model = AutoModel.from_pretrained('internlm/internlm-xcomposer2d5-7b-chat', torch_dtype=torch.bfloat16, trust_remote_code=True).cuda().eval()
95
+ tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b-chat', trust_remote_code=True)
96
+ model.tokenizer = tokenizer
97
+
98
+ query = 'Image1 <ImageHere>; Image2 <ImageHere>; Image3 <ImageHere>; I want to buy a car from the three given cars, analyze their advantages and weaknesses one by one'
99
+ image = ['./examples/cars1.jpg',
100
+ './examples/cars2.jpg',
101
+ './examples/cars3.jpg',]
102
+ with torch.autocast(device_type='cuda', dtype=torch.float16):
103
+ response, his = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, use_meta=True)
104
+ print(response)
105
+ # Certainly! Let's analyze the advantages and disadvantages of each car:
106
+ #
107
+ # 1. **Mercedes-Benz G-Class (SUV)**:
108
+ # - **Advantages**:
109
+ # - **Off-Road Capability**: The G-Class is renowned for its off-road prowess. It can handle a variety of terrains, making it ideal for adventurous driving.
110
+ # - **Reliability**: Mercedes-Benz is known for producing reliable vehicles, and the G-Class is no exception. It has a reputation for durability and longevity.
111
+ # - **Luxury Features**: As a Mercedes-Benz, the G-Class comes with a host of luxury features, including high-quality materials and advanced technology.
112
+ # - **Disadvantages**:
113
+ # - **Fuel Efficiency**: The G-Class is not known for its fuel efficiency. It consumes a significant amount of gasoline, which can be a disadvantage for those concerned with fuel economy.
114
+ # - **Size and Weight**: The G-Class is large and heavy, which can affect its handling and maneuverability, especially in urban environments.
115
+ # - **Cost**: The G-Class is generally more expensive compared to other SUVs, which can be a deterrent for some buyers.
116
+ #
117
+ # 2. **Bugatti Chiron (Sports Car)**:
118
+ # - **Advantages**:
119
+ # - **Performance**: The Bugatti Chiron is one of the fastest production cars available. It boasts impressive acceleration and top speed, making it a thrilling driving experience.
120
+ # - **Design**: The Chiron has a sleek and futuristic design that is both aesthetically pleasing and aerodynamically efficient.
121
+ # - **Status Symbol**: Owning a Bugatti is often seen as a status symbol, attracting attention and admiration.
122
+ # - **Disadvantages**:
123
+ # - **Cost**: The Bugatti Chiron is one of the most expensive cars in the world, making it out of reach for many potential buyers.
124
+ # - **Maintenance**: Bugatti vehicles require specialized maintenance, which can be costly and difficult to find.
125
+ # - **Road Legality**: In some regions, the Chiron may not be road-legal due to its high top speed and powerful engine.
126
+ #
127
+ # 3. **BMW i Vision Dynamics (Electric Concept Car)**:
128
+ # - **Advantages**:
129
+ # - **Environmental Impact**: As an electric vehicle, the i Vision Dynamics has a lower environmental impact compared to traditional gasoline-powered cars.
130
+ # - **Design**: The i Vision Dynamics has a unique and futuristic design that sets it apart from conventional cars.
131
+ # - **Performance**: Despite being an electric vehicle, the i Vision Dynamics boasts impressive performance, including a top speed of over 300 km/h.
132
+ # - **Disadvantages**:
133
+ # - **Cost**: Like the Bugatti Chiron, the i Vision Dynamics is an expensive concept car, making it inaccessible to most buyers.
134
+ # - **Range**: Electric vehicles typically have a limited range compared to gasoline-powered cars, which can be a concern for long-distance travel.
135
+ # - **Infrastructure**: The availability of charging stations can vary widely by region, which can limit the practicality of owning an electric car.
136
+
137
+ # In summary, the Mercedes-Benz G-Class is a reliable and capable off-road vehicle with luxury features, but it may not be the most fuel-efficient or affordable option. The Bugatti Chiron offers unparalleled performance and status, but it is extremely expensive and requires specialized maintenance. The BMW i Vision Dynamics is an innovative and environmentally friendly concept car with impressive performance, but it is also costly and limited by range and charging infrastructure. Each car has its own unique set of advantages and disadvantages, and the choice ultimately depends on individual preferences and priorities.
138
+
139
+ query = 'Image4 <ImageHere>; How about the car in Image4'
140
+ image.append('./examples/cars4.jpg')
141
+ with torch.autocast(device_type='cuda', dtype=torch.float16):
142
+ response, _ = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, history= his, use_meta=True)
143
+ print(response)
144
+ # The car in Image4 is a **Ferrari LaFerrari**. The LaFerrari is a hybrid supercar produced by Ferrari, combining a V12 engine with an electric motor. Here's a detailed analysis of its advantages and disadvantages:
145
+ #
146
+ # ### **Advantages:**
147
+ # 1. **Performance**: The LaFerrari is one of the fastest production cars available, with a top speed of over 217 mph (349 km/h). It boasts impressive acceleration and handling, making it a thrilling driving experience.
148
+ # 2. **Design**: The LaFerrari has a distinctive and aggressive design that sets it apart from other supercars. Its aerodynamic features and sleek lines contribute to its performance and visual appeal.
149
+ # 3. **Hybrid Technology**: The LaFerrari uses a hybrid powertrain, combining a 6.3-liter V12 engine with an electric motor. This hybrid system provides a balance of power and efficiency, reducing emissions compared to traditional gasoline engines.
150
+ # 4. **Status Symbol**: Owning a LaFerrari is often seen as a status symbol, attracting attention and admiration. It represents a pinnacle of automotive engineering and luxury.
151
+ # 5. **Reliability**: Ferrari is known for producing high-quality, reliable vehicles. The LaFerrari benefits from the brand's reputation for excellence in engineering and craftsmanship.
152
+
153
+ # ### **Disadvantages:**
154
+ # 1. **Cost**: The LaFerrari is one of the most expensive cars in the world, making it inaccessible to most potential buyers. Its high price can be a significant deterrent.
155
+ # 2. **Maintenance**: Ferrari vehicles require specialized maintenance, which can be costly and difficult to find. The hybrid system may also add to the complexity and expense of servicing the car.
156
+ # 3. **Road Legality**: In some regions, the LaFerrari may not be road-legal due to its high top speed and powerful engine. This can limit its usability and appeal.
157
+ # 4. **Fuel Efficiency**: Despite the hybrid system, the LaFerrari consumes a significant amount of fuel, which can be a disadvantage for those concerned with fuel economy.
158
+ # 5. **Size and Weight**: The LaFerrari is a large and heavy vehicle, which can affect its handling and maneuverability, especially in urban environments.
159
+
160
+ # In summary, the Ferrari LaFerrari is a high-performance hybrid supercar with a distinctive design and impressive capabilities. However, its high cost, specialized maintenance requirements, and limited road legality can be significant disadvantages for some buyers. The LaFerrari is best suited for those who prioritize performance, luxury, and status over practicality and affordability.
161
+ ```
162
+
163
+
164
+ </details>
165
+
166
+ <details>
167
+ <summary>
168
+ <b>High-Resolution Image Understanding</b>
169
+ </summary>
170
+
171
+ ```python
172
+ import torch
173
+ from transformers import AutoModel, AutoTokenizer
174
+
175
+ torch.set_grad_enabled(False)
176
+
177
+ # init model and tokenizer
178
+ model = AutoModel.from_pretrained('internlm/internlm-xcomposer2d5-7b-chat', torch_dtype=torch.bfloat16, trust_remote_code=True).cuda().eval()
179
+ tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b-chat', trust_remote_code=True)
180
+ model.tokenizer = tokenizer
181
+
182
+ query = 'Analyze the given image in a detailed manner'
183
+ image = ['./examples/dubai.png']
184
+ with torch.autocast(device_type='cuda', dtype=torch.float16):
185
+ response, _ = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, use_meta=True)
186
+ print(response)
187
+ # The image is an infographic titled "Amazing Facts About Dubai." Here's a detailed analysis of its content:
188
+ #
189
+ # 1. **Title and Introduction:**
190
+ # - The title is prominently displayed at the top of the image in bold, blue letters.
191
+ # - The image features a colorful skyline of Dubai, highlighting the city's modern architecture.
192
+ #
193
+ # 2. **Facts About Palm Jumeirah:**
194
+ # - Palm Jumeirah is the largest artificial island and is visible from space.
195
+ # - In 1968, there were only 1.5 million cars in Dubai.
196
+ #
197
+ # 3. **Dubai's Gold Chain:**
198
+ # - Dubai has the world's largest Gold Chain, which is 4.2 km long.
199
+ # - 7 out of the 10 tallest hotels in the world are located in Dubai.
200
+ #
201
+ # 4. **Crime Rate and Income Tax:**
202
+ # - The crime rate is near 0%.
203
+ # - The income tax rate is 0%.
204
+ #
205
+ # 5. **Dubai Mall:**
206
+ # - Dubai Mall is the largest shopping mall in the world with 1200 stores.
207
+ # - 17% of the population is Emirati, and 83% are immigrants.
208
+ #
209
+ # 6. **Dubai's Address System:**
210
+ # - Dubai has no standard address system, with no zip codes, area codes, or postal services.
211
+ #
212
+ # 7. **Dispense Gold:**
213
+ # - Dubai is building a climate-controlled City, 2.25 times as big as Monaco.
214
+ # - The Royal Suite at Burj Al Arab is $24,000 per night.
215
+ #
216
+ # 8. **License and Billionaires:**
217
+ # - You need a license to drink alcohol even at home.
218
+ # - The net worth of the four listed billionaires is roughly equal to the GDP of Honduras.
219
+ #
220
+ # 9. **Sources:**
221
+ # - The infographic cites sources from Wikipedia, Forbes, Gulf News, and The Guardian.
222
+ #
223
+ # 10. **Design and Compilation:**
224
+ # - The image is designed and compiled by FMEXtensions, a company based in the United Arab Emirates.
225
+ #
226
+ # The infographic uses a combination of text, icons, and images to convey interesting facts about Dubai, emphasizing its modernity, wealth, and unique features.
227
+
228
+ ```
229
+
230
+ </details>
231
+
232
+ ### Open Source License
233
+ The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English) / 申请表 (Chinese). For other questions or collaborations, please contact [email protected].
.ipynb_checkpoints/build_mlp-checkpoint.py ADDED
@@ -0,0 +1,249 @@
1
+ import torch
2
+ import torch.nn as nn
3
+ import re
4
+ import math
5
+ from transformers import CLIPVisionModel, CLIPImageProcessor, CLIPVisionConfig
6
+
7
+
8
+ def build_vision_tower():
9
+ vision_tower = 'internlm/internlm-xcomposer2d5-clip'
10
+ return CLIPVisionTower(vision_tower)
11
+
12
+
13
+ def build_vision_projector():
14
+ projector_type = 'mlp2x_gelu'
15
+ mm_hidden_size = 4096
16
+ mid_hidden_size = 4096
17
+ hidden_size = 4096
18
+
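+ # Note: 'mlpNx_gelu' encodes the projector depth, so 'mlp2x_gelu' expands to
+ # Linear(4096, 4096) -> GELU -> Linear(4096, 4096), mapping the merged visual
+ # features from CLIPVisionTower.forward into the language model's hidden size.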
19
+ mlp_gelu_match = re.match(r'^mlp(\d+)x_gelu$', projector_type)
20
+ if mlp_gelu_match:
21
+ mlp_depth = int(mlp_gelu_match.group(1))
22
+ modules = [nn.Linear(mm_hidden_size, mid_hidden_size)]
23
+ for _ in range(1, mlp_depth):
24
+ modules.append(nn.GELU())
25
+ modules.append(nn.Linear(mid_hidden_size, mid_hidden_size))
26
+
27
+ return nn.Sequential(*modules)
28
+
29
+ if projector_type == 'identity':
30
+ return IdentityMap()
31
+
32
+ raise ValueError(f'Unknown projector type: {projector_type}')
33
+
34
+ class IdentityMap(nn.Module):
35
+ def __init__(self):
36
+ super().__init__()
37
+
38
+ def forward(self, x, *args, **kwargs):
39
+ return x
40
+
41
+ @property
42
+ def config(self):
43
+ return {"mm_projector_type": 'identity'}
44
+
45
+
46
+ class CLIPVisionTower(nn.Module):
47
+ def __init__(self, vision_tower):
48
+ super().__init__()
49
+
50
+ self.is_loaded = False
51
+
52
+ self.vision_tower_name = vision_tower
53
+ #self.conv_dim = 8192
54
+ #self.conv = torch.nn.Conv2d(1024, self.conv_dim,3,2,1)
55
+ self.select_layer = -1
56
+ self.select_feature = 'patch'
57
+ self.load_model()
58
+
59
+ def load_model(self):
60
+ self.vision_tower = CLIPVisionModel.from_pretrained(self.vision_tower_name)
61
+ self.vision_tower.requires_grad_(False)
62
+
63
+ self.is_loaded = True
64
+
65
+ def resize_pos(self):
66
+ print ('Dummy Resized')
67
+
68
+ def feature_select(self, image_forward_outs):
69
+ image_features = image_forward_outs.hidden_states[self.select_layer]
70
+ if self.select_feature == 'patch':
71
+ image_features = image_features[:, 1:]
72
+ elif self.select_feature == 'cls_patch':
73
+ image_features = image_features
74
+ else:
75
+ raise ValueError(f'Unexpected select feature: {self.select_feature}')
76
+ return image_features
77
+
78
+ def forward(self, images, glb_GN, sub_GN):
79
+ if not self.is_loaded:
80
+ self.load_model()
81
+ assert type(images) is list
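+ # Each image arrives with H and W already padded to multiples of 560 (see ixc_utils).
+ # It is cut into a grid of 560x560 sub-tiles plus one bicubically resized 560x560
+ # global view, and all crops are encoded by the CLIP tower in a single batch below.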
82
+ shapes = []
83
+ input_imgs = []
84
+ for img in images:
85
+ _, C, H, W = img.shape
86
+ shapes.append([H//560, W//560])
87
+ sub_img = img.reshape(1,3,H//560,560,W//560,560).permute(0,2,4,1,3,5).reshape(-1,3,560,560).contiguous()
88
+ glb_img = torch.nn.functional.interpolate(img.float(), size=(560,560), mode='bicubic',).to(sub_img.dtype)
89
+ input_imgs.append(glb_img)
90
+ input_imgs.append(sub_img)
91
+ input_imgs = torch.cat(input_imgs, dim=0)
92
+
93
+ image_forward_outs = self.vision_tower(input_imgs.to(device=self.device, dtype=self.dtype), output_hidden_states=True)
94
+ image_features = self.feature_select(image_forward_outs).to(input_imgs.dtype) ### B*?, N, C
95
+ _, N, C = image_features.shape
96
+ H = int(math.sqrt(N))
97
+ assert N == 40 ** 2
98
+
99
+ output_imgs = []
100
+ output_len = []
101
+ for [h, w] in shapes:
102
+ B_ = h*w
103
+ glb_img = image_features[:1] ### 1, N, C
104
+ glb_img = glb_img.reshape(1,H,H,C).reshape(1,H//2,2,H//2,2,C).contiguous().permute(0,1,3,2,4,5).reshape(1,H//2,H//2,4*C).contiguous()
105
+ temp_glb_GN = sub_GN.repeat(1, H//2, 1, 1)
106
+ glb_img = torch.cat([glb_img, temp_glb_GN], dim=2).reshape(1,-1,4*C)
107
+
108
+ sub_img = image_features[1:1+B_] ### ?, N, C
109
+ sub_img = sub_img.reshape(B_,H,H,C).reshape(B_,H//2,2,H//2,2,C).contiguous().permute(0,1,3,2,4,5).reshape(B_,-1,4*C).contiguous()
110
+ sub_img = sub_img.reshape(1, h, w, 20, 20, -1).permute(0,1,3,2,4,5).reshape(1,h*20,w*20,4*C)
111
+ temp_sub_GN = sub_GN.repeat(1, h*20, 1, 1)
112
+ sub_img = torch.cat([sub_img, temp_sub_GN], dim=2).reshape(1,-1,4*C)
113
+
114
+ output_imgs.append(torch.cat([glb_img, glb_GN, sub_img], dim=1))
115
+ temp_len = int((h*w+1)*400 + 1 + (h+1)*20)
116
+ assert temp_len == output_imgs[-1].shape[1]
117
+ output_len.append(temp_len)
118
+
119
+ image_features = image_features[1+h*w:]
120
+
121
+ output_imgs = torch.cat(output_imgs, dim=1)
122
+
123
+ return output_imgs, output_len
124
+
125
+ @property
126
+ def dummy_feature(self):
127
+ return torch.zeros(1, self.hidden_size, device=self.device, dtype=self.dtype)
128
+
129
+ @property
130
+ def dtype(self):
131
+ return self.vision_tower.dtype
132
+
133
+ @property
134
+ def device(self):
135
+ return self.vision_tower.device
136
+
137
+ @property
138
+ def config(self):
139
+ if self.is_loaded:
140
+ return self.vision_tower.config
141
+ else:
142
+ return self.cfg_only
143
+
144
+ @property
145
+ def hidden_size(self):
146
+ return self.config.hidden_size
147
+
148
+ @property
149
+ def num_patches(self):
150
+ return (self.config.image_size // self.config.patch_size) ** 2
151
+
152
+ class PLoRA(nn.Linear):
153
+ def __init__(self,
154
+ in_features: int,
155
+ out_features: int,
156
+ bias: bool = True,
157
+ device=None,
158
+ dtype=None,
159
+ lora_r=8,
160
+ lora_alpha=16,
161
+ lora_dropout=0.05,
162
+ lora_len=0,
163
+ **kwargs) -> None:
164
+ super().__init__(in_features, out_features, bias, device, dtype)
165
+ self.lora_r = lora_r
166
+ self.lora_alpha = lora_alpha
167
+ self.lora_len = lora_len
168
+ if lora_dropout > 0.:
169
+ self.lora_dropout = nn.Dropout(p=lora_dropout)
170
+ else:
171
+ self.lora_dropout = lambda x: x
172
+ self.lora_scaling = self.lora_alpha / self.lora_r
173
+
174
+ self.Plora_A = nn.Linear(in_features,
175
+ self.lora_r,
176
+ bias=False,
177
+ device=device,
178
+ dtype=dtype)
179
+ self.Plora_B = nn.Linear(self.lora_r,
180
+ out_features,
181
+ bias=False,
182
+ device=device,
183
+ dtype=dtype)
184
+
185
+ self.lora_sft_A = nn.Linear(in_features,
186
+ 256,
187
+ bias=False,
188
+ device=device,
189
+ dtype=dtype)
190
+ self.lora_sft_B = nn.Linear(256,
191
+ out_features,
192
+ bias=False,
193
+ device=device,
194
+ dtype=dtype)
195
+
196
+ self.lora_dpo_A = nn.Linear(in_features,
197
+ 256,
198
+ bias=False,
199
+ device=device,
200
+ dtype=dtype)
201
+ self.lora_dpo_B = nn.Linear(256,
202
+ out_features,
203
+ bias=False,
204
+ device=device,
205
+ dtype=dtype)
206
+
207
+ self.lora_web_A = nn.Linear(in_features,
208
+ 512,
209
+ bias=False,
210
+ device=device,
211
+ dtype=dtype)
212
+ self.lora_web_B = nn.Linear(512,
213
+ out_features,
214
+ bias=False,
215
+ device=device,
216
+ dtype=dtype)
217
+
218
+ self.reset_parameters()
219
+
220
+ def reset_parameters(self):
221
+ if hasattr(self, 'lora_A'):
222
+ # initialize A the same way as the default for nn.Linear and B to zero
223
+ nn.init.kaiming_uniform_(self.lora_A.weight, a=math.sqrt(5))
224
+ nn.init.zeros_(self.lora_B.weight)
225
+ #print ("lora weight init {} {}".format(torch.mean(self.lora_A.weight), torch.mean(self.lora_B.weight)))
226
+
227
+ def forward(self, x, im_mask=None, infer_mode='base'):
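+ # The plain nn.Linear output is computed first. Depending on infer_mode, task-specific
+ # low-rank branches are added on top ('web' uses lora_web_*, 'write' adds lora_sft_* and
+ # lora_dpo_*). The partial-LoRA branch (Plora_A/Plora_B) is then applied only at the
+ # positions flagged by im_mask, i.e. image tokens; the zero-scaled call in the else
+ # branch just keeps those parameters in the computation graph when no image is present.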
228
+ B, N, C = x.shape
229
+ im_mask = im_mask.view(-1)
230
+ x = x.reshape(-1, C)
231
+ res = super().forward(x)
232
+ if infer_mode == 'web':
233
+ res += self.lora_web_B(self.lora_web_A(x))
234
+ elif infer_mode == 'write':
235
+ res += self.lora_sft_B(self.lora_sft_A(x))
236
+ res += self.lora_dpo_B(self.lora_dpo_A(x))
237
+ else:
238
+ pass
239
+ if im_mask is not None:
240
+ if torch.sum(im_mask) > 0:
241
+ part_x = x[im_mask]
242
+ res[im_mask] += self.Plora_B(self.Plora_A(
243
+ self.lora_dropout(part_x))) * self.lora_scaling
244
+ else:
245
+ part_x = x[:1]
246
+ res[:1] += self.Plora_B(self.Plora_A(
247
+ self.lora_dropout(part_x))) * 0
248
+
249
+ return res.reshape(B, N, -1)
.ipynb_checkpoints/config-checkpoint.json ADDED
@@ -0,0 +1,37 @@
1
+ {
2
+ "_name_or_path": "/fs-computility/mllm/shared/zangyuhang/share_models/internlm-xcomposer2d5-7b-dpo-turn",
3
+ "architectures": [
4
+ "InternLMXComposer2ForCausalLM"
5
+ ],
6
+ "attn_implementation": "flash_attention_2",
7
+ "auto_map": {
8
+ "AutoConfig": "configuration_internlm_xcomposer2.InternLMXcomposer2Config",
9
+ "AutoModel": "modeling_internlm_xcomposer2.InternLMXComposer2ForCausalLM",
10
+ "AutoModelForCausalLM": "modeling_internlm_xcomposer2.InternLMXComposer2ForCausalLM"
11
+ },
12
+ "bias": false,
13
+ "bos_token_id": 1,
14
+ "eos_token_id": 2,
15
+ "hidden_act": "silu",
16
+ "hidden_size": 4096,
17
+ "initializer_range": 0.02,
18
+ "intermediate_size": 14336,
19
+ "max_length": 16384,
20
+ "max_position_embeddings": 24576,
21
+ "model_type": "internlm2",
22
+ "num_attention_heads": 32,
23
+ "num_hidden_layers": 32,
24
+ "num_key_value_heads": 8,
25
+ "pad_token_id": 2,
26
+ "rms_norm_eps": 1e-05,
27
+ "rope_scaling": {
28
+ "factor": 2.0,
29
+ "type": "dynamic"
30
+ },
31
+ "rope_theta": 1000000,
32
+ "tie_word_embeddings": false,
33
+ "torch_dtype": "float16",
34
+ "transformers_version": "4.33.1",
35
+ "use_cache": false,
36
+ "vocab_size": 92544
37
+ }
.ipynb_checkpoints/ixc_utils-checkpoint.py ADDED
@@ -0,0 +1,145 @@
1
+ import os
2
+ import torch
3
+ import numpy as np
4
+ import torchvision
5
+ from urllib.request import urlopen
6
+ from PIL import Image, ImageDraw, ImageFont
7
+ from torchvision.transforms.functional import InterpolationMode
8
+ import torchvision.transforms as transforms
9
+ from decord import VideoReader
10
+
11
+ def get_font():
12
+ truetype_url = 'https://huggingface.co/internlm/internlm-xcomposer2d5-7b/resolve/main/SimHei.ttf?download=true'
13
+ ff = urlopen(truetype_url)
14
+ font = ImageFont.truetype(ff, size=40)
15
+ return font
16
+
17
+ def padding_336(b, pad=336):
18
+ width, height = b.size
19
+ tar = int(np.ceil(height / pad) * pad)
20
+ top_padding = 0 # int((tar - height)/2)
21
+ bottom_padding = tar - height - top_padding
22
+ left_padding = 0
23
+ right_padding = 0
24
+ b = transforms.functional.pad(b, [left_padding, top_padding, right_padding, bottom_padding], fill=[255,255,255])
25
+
26
+ return b
27
+
28
+ def Image_transform(img, hd_num=25):
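+ # Resize the image so both sides become multiples of 560 while using at most hd_num
+ # 560x560 tiles; portrait images are transposed to landscape first and restored at the end.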
29
+ width, height = img.size
30
+ trans = False
31
+ if width < height:
32
+ img = img.transpose(Image.TRANSPOSE)
33
+ trans = True
34
+ width, height = img.size
35
+ ratio = (width/ height)
36
+ scale = 1
37
+ while scale*np.ceil(scale/ratio) <= hd_num:
38
+ scale += 1
39
+ scale -= 1
40
+ scale = min(np.ceil(width / 560), scale)
41
+ new_w = int(scale * 560)
42
+ new_h = int(new_w / ratio)
43
+ #print (scale, f'{height}/{new_h}, {width}/{new_w}')
44
+
45
+ img = transforms.functional.resize(img, [new_h, new_w],)
46
+ img = padding_336(img, 560)
47
+ width, height = img.size
48
+ if trans:
49
+ img = img.transpose(Image.TRANSPOSE)
50
+
51
+ return img
52
+
53
+
54
+ def Video_transform(img, hd_num=25):
55
+ width, height = img.size
56
+ trans = False
57
+ if width < height:
58
+ img = img.transpose(Image.TRANSPOSE)
59
+ trans = True
60
+ width, height = img.size
61
+ ratio = (width/ height)
62
+ scale = 1
63
+ new_h = int(scale * 560)
64
+ new_w = int(new_h * ratio)
65
+ #print (new_h, new_w)
66
+
67
+ img = transforms.functional.resize(img, [new_h, new_w],)
68
+ img = img.transpose(Image.TRANSPOSE)
69
+ img = padding_336(img, 560)
70
+ width, height = img.size
71
+ if not trans:
72
+ img = img.transpose(Image.TRANSPOSE)
73
+
74
+ return img
75
+
76
+ def frame2img(imgs, font):
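+ # Stitch the sampled video frames into one labelled canvas: each frame is resized to fit
+ # a 1120-pixel box, captioned with '<IMAGE idx>', and separated by a divider line, so a
+ # whole video can be fed to the model as a single high-resolution image.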
77
+ new_imgs = []
78
+ for img in imgs:
79
+ w, h = img.size
80
+ scale = w/h
81
+ if w > h:
82
+ new_w = 560 * 2
83
+ new_h = int(560 * 2 / scale)
84
+ else:
85
+ new_w = int(560 * 2 * scale)
86
+ new_h = 560 * 2
87
+ img = transforms.functional.resize(img, [new_h, new_w],)
88
+ new_imgs.append(img)
89
+ imgs = new_imgs
90
+ new_w = 0
91
+ new_h = 0
92
+ pad = 40
93
+ if w > h:
94
+ for im in imgs:
95
+ w,h = im.size
96
+ new_w = max(new_w, w)
97
+ new_h += h + 10 + pad
98
+ new_img = Image.new('RGB', (new_w, new_h), 'white')
99
+ draw = ImageDraw.Draw(new_img)
100
+ curr_h = 0
101
+ for idx, im in enumerate(imgs):
102
+ w,h = im.size
103
+ new_img.paste(im, (0, pad + curr_h))
104
+ draw.text((0, curr_h ), f'<IMAGE {idx}>', font=font, fill='black')
105
+ if idx + 1 < len(imgs):
106
+ draw.line([(0, pad +curr_h + h +5), (new_w, pad +curr_h + h +5)], fill = 'black', width=2)
107
+ curr_h += h + 10 + pad
108
+ #print (new_w, new_h)
109
+ else:
110
+ for im in imgs:
111
+ w,h = im.size
112
+ new_w += w + 10
113
+ new_h = max(new_h, h)
114
+ new_h += pad
115
+ new_img = Image.new('RGB', (new_w, new_h), 'white')
116
+ draw = ImageDraw.Draw(new_img)
117
+ curr_w = 0
118
+ for idx, im in enumerate(imgs):
119
+ w,h = im.size
120
+ new_img.paste(im, (curr_w, pad))
121
+ draw.text((curr_w, 0), f'<IMAGE {idx}>', font=font, fill='black')
122
+ if idx + 1 < len(imgs):
123
+ draw.line([(curr_w + w + 5, 0), (curr_w + w + 5, new_h)], fill = 'black', width=2)
124
+ curr_w += w + 10
125
+ return new_img
126
+
127
+ def load_video(video_path, num_frm=32, start=None, end=None):
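+ # Decode roughly one frame per second with decord, uniformly subsample to at most
+ # num_frm frames, and return them as PIL images.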
128
+ vid = VideoReader(video_path, num_threads=1)
129
+ fps = vid.get_avg_fps()
130
+ t_stride = int(round(float(fps) / int(1)))
131
+ start_idx = 0 if start is None else start
132
+ end_idx = len(vid) if end is None else end
133
+ all_pos = list(range(start_idx, end_idx, t_stride))
134
+ try:
135
+ images = [vid[i].numpy() for i in all_pos]
136
+ except:
137
+ images = [vid[i].asnumpy() for i in all_pos]
138
+ if len(images) > num_frm:
139
+ num_frm = min(num_frm, len(images))
140
+ step_size = len(images) / (num_frm + 1)
141
+ indices = [int(i*step_size) for i in range(num_frm)]
142
+ images = [images[i] for i in indices]
143
+ images = [Image.fromarray(arr) for arr in images]
144
+ return images
145
+
.ipynb_checkpoints/modeling_internlm_xcomposer2-checkpoint.py ADDED
@@ -0,0 +1,662 @@
1
+ # Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # This code is based on transformers/src/transformers/models/llama/modeling_llama.py
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+
17
+ """PyTorch InternLMXComposer2 model."""
18
+ import os
19
+ import re
20
+ import copy
21
+ import queue
22
+ import threading
23
+ from typing import List, Optional, Tuple, Union
24
+
25
+ import torch
26
+ import torch.utils.checkpoint
27
+ from PIL import Image
28
+ import numpy as np
29
+ import random
30
+ from torch import nn
31
+ from torch.nn import CrossEntropyLoss
32
+ from torchvision import transforms
33
+ from torchvision.transforms.functional import InterpolationMode
34
+ from transformers.modeling_outputs import CausalLMOutputWithPast
35
+ from transformers.utils import (add_start_docstrings_to_model_forward,
36
+ replace_return_docstrings)
37
+ from transformers import StoppingCriteria, StoppingCriteriaList
38
+ from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
39
+ try:
40
+ from transformers.generation.streamers import BaseStreamer
41
+ except: # noqa # pylint: disable=bare-except
42
+ BaseStreamer = None
43
+
44
+ import torchvision.transforms as transforms
45
+ from torchvision.transforms.functional import InterpolationMode
46
+
47
+ from .build_mlp import build_vision_projector, build_vision_tower
48
+ from .ixc_utils import Image_transform, Video_transform, load_video, frame2img, get_font
49
+ from .configuration_internlm_xcomposer2 import InternLMXcomposer2Config
50
+ from .modeling_internlm2 import (InternLM2_INPUTS_DOCSTRING, InternLM2Model,
51
+ InternLM2PreTrainedModel)
52
+
53
+ _CONFIG_FOR_DOC = 'InternLMXcomposer2Config'
54
+
55
+ image_extensions = {'.jpg', '.jpeg', '.png', '.gif', '.bmp', '.webp'}
56
+ video_extensions = {'.mp4', '.avi', '.mkv', '.mov', '.wmv'}
57
+
58
+ class StoppingCriteriaSub(StoppingCriteria):
59
+
60
+ def __init__(self, stops=[], encounters=1):
61
+ super().__init__()
62
+ self.stops = stops
63
+
64
+ def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
65
+ for stop in self.stops:
66
+ if torch.all((stop == input_ids[0][-len(stop):])).item():
67
+ return True
68
+ return False
69
+
70
+
71
+ def get_stopping_criteria(stop_words_ids):
72
+ stop_words_ids = [torch.tensor([i]).cuda() for i in stop_words_ids]
73
+ stopping_criteria = StoppingCriteriaList(
74
+ [StoppingCriteriaSub(stops=stop_words_ids)])
75
+ return stopping_criteria
76
+
77
+ def set_random_seed(seed, set_cudnn=False):
78
+ """Set the random seed for reproducibility.
79
+
80
+ Parameters:
81
+ seed (int): The seed to use for generating random numbers.
82
+ """
83
+ torch.manual_seed(seed)
84
+ if torch.cuda.is_available():
85
+ torch.cuda.manual_seed_all(seed) # For multi-GPU.
86
+ np.random.seed(seed)
87
+ random.seed(seed)
88
+ if set_cudnn and torch.backends.cudnn.is_available():
89
+ torch.backends.cudnn.deterministic = True
90
+ torch.backends.cudnn.benchmark = False
91
+
92
+ class InternLMXComposer2ForCausalLM(InternLM2PreTrainedModel):
93
+ _auto_class = 'AutoModelForCausalLM'
94
+
95
+ _tied_weights_keys = ['output.weight']
96
+
97
+ def __init__(self, config):
98
+ super().__init__(config)
99
+ self.model = InternLM2Model(config)
100
+ self.vocab_size = config.vocab_size
101
+ self.output = nn.Linear(
102
+ config.hidden_size, config.vocab_size, bias=False)
103
+ self.tokenizer = None
104
+ self.hd_num = 25
105
+ self.font = get_font()
106
+
107
+ self.max_length = config.max_length
108
+ print(f'Set max length to {self.max_length}')
109
+ # Initialize weights and apply final processing
110
+ self.post_init()
111
+ self.plora_glb_GN = nn.Parameter(torch.zeros([1, 1, 4096]))
112
+ self.plora_sub_GN = nn.Parameter(torch.zeros([1, 1, 1, 4096]))
113
+
114
+ self.vit = build_vision_tower()
115
+ self.vision_proj = build_vision_projector()
116
+
117
+ self.vis_processor = transforms.Compose([
118
+ transforms.ToTensor(),
119
+ transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
120
+ (0.26862954, 0.26130258, 0.27577711)),
121
+ ])
122
+
123
+
124
+
125
+
126
+ def _set_gradient_checkpointing(self, module, value=False):
127
+ if isinstance(module, InternLM2Model):
128
+ module.gradient_checkpointing = value
129
+ if value:
130
+ self.vit.vision_tower.vision_model.encoder.gradient_checkpointing = value
131
+
132
+ def get_input_embeddings(self):
133
+ return self.model.tok_embeddings
134
+
135
+ def set_input_embeddings(self, value):
136
+ self.model.tok_embeddings = value
137
+
138
+ def get_output_embeddings(self):
139
+ return self.output
140
+
141
+ def set_output_embeddings(self, new_embeddings):
142
+ self.output = new_embeddings
143
+
144
+ def set_decoder(self, decoder):
145
+ self.model = decoder
146
+
147
+ def get_decoder(self):
148
+ return self.model
149
+
150
+ def encode_text(self, text, add_special_tokens=False):
151
+ token = self.tokenizer(
152
+ text, return_tensors='pt',
153
+ add_special_tokens=add_special_tokens).input_ids.to(self.device)
154
+ embs = self.model.tok_embeddings(token)
155
+ return embs
156
+
157
+ def encode_img(self, image, hd_num=25):
158
+ if image is None:
159
+ return None
160
+ if isinstance(image, str):
161
+ _, ext = os.path.splitext(image)
162
+ if ext.lower() in image_extensions:
163
+ image = Image.open(image).convert('RGB')
164
+ image = Image_transform(image, hd_num = hd_num)
165
+ elif ext.lower() in video_extensions:
166
+ image = load_video(image)
167
+ image = frame2img(image, self.font)
168
+ image = Video_transform(image, hd_num = hd_num)
169
+ else:
170
+ print ('Unknown input format', image)
171
+ return None
172
+ image = self.vis_processor(image).unsqueeze(0).to(self.device)
173
+ else:
174
+ assert isinstance(image, torch.Tensor)
175
+
176
+ img_embeds, atts_img, img_target = self.img2emb(image)
177
+ return img_embeds
178
+
179
+ def img2emb(self, image):
180
+ img_embeds, img_split = self.vit([image],
181
+ self.plora_glb_GN, self.plora_sub_GN)
182
+ if len(img_split) > 1:
183
+ print ('Batch Size >1 is not supported.')
184
+ assert 0
185
+ #print (img_embeds.shape)
186
+ img_embeds = self.vision_proj(img_embeds)
187
+ atts_img = torch.ones(
188
+ img_embeds.size()[:-1], dtype=torch.long).to(img_embeds.device)
189
+
190
+ img_target = torch.ones(
191
+ img_embeds.size()[:2], dtype=torch.long).to(
192
+ img_embeds.device) * -100
193
+
194
+ return img_embeds, atts_img, img_target
195
+
196
+ def prompt_wrap(self, img_embeds, prompt):
197
+ batch_size = img_embeds.shape[0]
198
+ p_before, p_after = prompt.split('<ImageHere>')
199
+ p_before_tokens = self.tokenizer(
200
+ p_before, return_tensors='pt',
201
+ add_special_tokens=True).to(img_embeds.device)
202
+
203
+ p_before_embeds = self.model.tok_embeddings(
204
+ p_before_tokens.input_ids).expand(batch_size, -1, -1)
205
+ wrapped_img_embeds = torch.cat([p_before_embeds, img_embeds], dim=1)
206
+
207
+ wrapped_atts_img = torch.ones(
208
+ wrapped_img_embeds.size()[:-1],
209
+ dtype=torch.long).to(img_embeds.device)
210
+
211
+ wrapped_target = torch.ones(
212
+ batch_size, wrapped_img_embeds.shape[1], dtype=torch.long).to(
213
+ img_embeds.device) * -100
214
+
215
+ return wrapped_img_embeds, wrapped_atts_img, wrapped_target
216
+
217
+ def text2emb(self, text, add_special_tokens=False):
218
+ to_regress_tokens = self.tokenizer(
219
+ text,
220
+ return_tensors='pt',
221
+ padding='longest',
222
+ truncation=True,
223
+ max_length=self.max_length,
224
+ add_special_tokens=add_special_tokens
225
+ ).to(self.device)
226
+
227
+ targets = self.mask_human_targets(to_regress_tokens.input_ids)
228
+ targets = targets.to(self.device)
229
+ return to_regress_tokens, targets
230
+
231
+ def interleav_wrap_chat(self, query, image, history = [], meta_instruction='', max_length=16384, hd_num=24):
232
+ self.max_length = max_length
233
+ prompt = ''
234
+ if meta_instruction:
235
+ prompt += f"""[UNUSED_TOKEN_146]system\n{meta_instruction}[UNUSED_TOKEN_145]\n"""
236
+ for record in history:
237
+ prompt += f"""[UNUSED_TOKEN_146]user\n{record[0]}[UNUSED_TOKEN_145]\n[UNUSED_TOKEN_146]assistant\n{record[1]}[UNUSED_TOKEN_145]\n"""
238
+ prompt += f"""[UNUSED_TOKEN_146]user\n{query}[UNUSED_TOKEN_145]\n[UNUSED_TOKEN_146]assistant\n"""
239
+
240
+ image_nums = len(image)
241
+ if image_nums == 1 and prompt.find('<ImageHere>') == -1:
242
+ #print ('auto append image at the beginning')
243
+ prompt = '<ImageHere>' + prompt
244
+
245
+ parts = prompt.split('<ImageHere>')
246
+ wrap_embeds, wrap_im_mask = [], []
247
+ temp_len = 0
248
+ need_bos = True
249
+
250
+ if len(parts) != image_nums + 1:
251
+ #raise ValueError('Invalid <ImageHere> prompt format.')
252
+ print ('Warning! The number of images does not match the number of <ImageHere> placeholders!')
253
+ if image_nums > 1:
254
+ hd_num = 6
255
+ else:
256
+ hu_num = hd_num
257
+ for idx, part in enumerate(parts):
258
+ if need_bos or len(part) > 0:
259
+ part_tokens = self.tokenizer(
260
+ part,
261
+ return_tensors='pt',
262
+ padding='longest',
263
+ add_special_tokens=need_bos).to(self.device)
264
+ if need_bos:
265
+ need_bos = False
266
+
267
+ part_embeds = self.model.tok_embeddings(
268
+ part_tokens.input_ids)
269
+ wrap_embeds.append(part_embeds)
270
+ wrap_im_mask.append(torch.zeros(part_embeds.shape[:2]))
271
+ temp_len += part_embeds.shape[1]
272
+ if idx < image_nums:
273
+ img = self.encode_img(image[idx], hd_num)
274
+ wrap_embeds.append(img)
275
+ wrap_im_mask.append(torch.ones(img.shape[:2]))
276
+ temp_len += img.shape[1]
277
+
278
+ if temp_len > self.max_length:
279
+ break
280
+
281
+ wrap_embeds = torch.cat(wrap_embeds, dim=1)
282
+ wrap_im_mask = torch.cat(wrap_im_mask, dim=1)
283
+ wrap_embeds = wrap_embeds[:, :self.max_length].to(self.device)
284
+ wrap_im_mask = wrap_im_mask[:, :self.max_length].to(self.device).bool()
285
+ inputs = {
286
+ 'inputs_embeds': wrap_embeds
287
+ }
288
+ return inputs, wrap_im_mask, temp_len
289
+
290
+ def interleav_wrap(self, img_list, text_list, image_nums):
291
+ temp_embeds = []
292
+ temp_im_mask = []
293
+ temp_tars = []
294
+
295
+ # encode_image
296
+ img_embeds, img_split = self.vit(img_list, self.plora_glb_GN, self.plora_sub_GN)
297
+ img_embeds = self.vision_proj(img_embeds)
298
+
299
+ text_list = text_list[0]
300
+ for idx, text in enumerate(text_list):
301
+ image_num = image_nums[idx]
302
+ im_id = int(np.sum(image_nums[:idx]))
303
+ images = []
304
+ for i in range(image_nums[idx]):
305
+ st = int(np.sum(img_split[:im_id + i]))
306
+ sp = img_split[im_id + i]
307
+ temp_img = img_embeds[:, st:st+sp]
308
+ images.append(temp_img)
309
+ atts_img = torch.ones((len(images), images[0].shape[1]), dtype=torch.long).to(self.device)
310
+ img_target = torch.ones(
311
+ (len(images), images[0].shape[1]), dtype=torch.long).to(
312
+ self.device) * -100
313
+
314
+ if image_num == 1 and text.find('<ImageHere>') == -1:
315
+ text = '<ImageHere>' + text
316
+ parts = text.split('<ImageHere>')
317
+
318
+ wrap_tokens, wrap_embeds, wrap_im_mask = [], [], []
319
+ temp_len = 0
320
+ need_bos = True
321
+ for idx, part in enumerate(parts):
322
+ if need_bos or len(part) > 0:
323
+ part_tokens = self.tokenizer(part, return_tensors='pt', padding='longest',
324
+ add_special_tokens=need_bos).to(self.device)
325
+ if need_bos:
326
+ need_bos = False
327
+ wrap_tokens.append(part_tokens.input_ids)
328
+ part_embeds = self.model.tok_embeddings(part_tokens.input_ids)
329
+ wrap_embeds.append(part_embeds)
330
+ wrap_im_mask.append(torch.zeros(part_embeds.shape[:2]).to(self.device))
331
+ temp_len += part_embeds.shape[1]
332
+ if idx < image_num:
333
+ wrap_embeds.append(images[idx])
334
+ wrap_token = torch.ones(images[idx].shape[:2], dtype=torch.long).to(self.device) * -100
335
+ wrap_tokens.append(wrap_token)
336
+ wrap_im_mask.append(torch.ones(images[idx].shape[:2]).to(self.device))
337
+ temp_len += images[idx].shape[1]
338
+ if temp_len > self.max_length:
339
+ break
340
+ wrap_tokens = torch.cat(wrap_tokens, dim=1)
341
+ wrap_embeds = torch.cat(wrap_embeds, dim=1)
342
+ wrap_im_mask = torch.cat(wrap_im_mask, dim=1)
343
+
344
+ wrap_target = self.mask_human_targets(wrap_tokens).to(self.device)
345
+
346
+ temp_embeds.append(wrap_embeds)
347
+ temp_im_mask.append(wrap_im_mask)
348
+ temp_tars.append(wrap_target)
349
+
350
+ temp_max_len = np.max([i.shape[1] for i in temp_embeds])
351
+ temp_max_len = min(temp_max_len, self.max_length)
352
+
353
+ final_input, final_atts, final_tars, final_mask = [], [], [], []
354
+ pad = torch.ones([1, 1]) * self.tokenizer.pad_token_id
355
+ pad = pad.long().to(self.device)
356
+ pad_emb = self.model.tok_embeddings(pad)
357
+
358
+ for idx in range(len(temp_embeds)):
359
+ temp_len = temp_embeds[idx].shape[1]
360
+ if temp_len >= temp_max_len:
361
+ final_input.append(temp_embeds[idx][:, :temp_max_len])
362
+ final_atts.append(torch.ones(1, temp_max_len).to(wrap_target.dtype).to(self.device))
363
+ final_tars.append(temp_tars[idx][:, :temp_max_len])
364
+ final_mask.append(temp_im_mask[idx][:, :temp_max_len])
365
+ else:
366
+ final_input.append(torch.cat([temp_embeds[idx], pad_emb.repeat(1, temp_max_len-temp_len, 1)], dim=1))
367
+ final_atts.append(torch.cat([torch.ones(1, temp_len), torch.zeros(1, temp_max_len-temp_len)], dim=1).to(wrap_target.dtype).to(self.device))
368
+ final_tars.append(torch.cat([temp_tars[idx], (torch.ones(1, temp_max_len-temp_len)*-100).to(wrap_target.dtype).to(self.device)], dim=1))
369
+ final_mask.append(torch.cat([temp_im_mask[idx], (torch.zeros(1, temp_max_len-temp_len)).to(wrap_target.dtype).to(self.device)], dim=1))
370
+
371
+ inputs_embeds = torch.cat(final_input, dim=0)
372
+ attention_mask = torch.cat(final_atts, dim=0)
373
+ targets = torch.cat(final_tars, dim=0)
374
+ im_mask = torch.cat(final_mask, dim=0)
375
+
376
+ return inputs_embeds, attention_mask, targets, im_mask
377
+
378
+ def mask_human_targets(self, input_ids, pure=False):
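+ # Build language-model targets from token ids: spans belonging to system/user turns are
+ # masked to -100 so the loss is only computed on assistant replies. Token id 92542 is
+ # assumed to be the end-of-turn marker ([UNUSED_TOKEN_145] in the prompt template), and
+ # id 2 is the eos/pad token.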
379
+ target_batch = []
380
+ for bs in range(input_ids.shape[0]):
381
+ ids = input_ids[bs]
382
+ targets = copy.deepcopy(ids)
383
+ end_count = 0
384
+ last_eoa = 0
385
+ for i, temp_id in enumerate(ids):
386
+ if temp_id == 92542:
387
+ if end_count % 2 == 0:
388
+ targets[last_eoa:i + 6] = -100
389
+ else:
390
+ last_eoa = i + 1
391
+ end_count += 1
392
+ # # eos and following pad
393
+ elif temp_id == 2:
394
+ # loss on eos, but not on pad
395
+ targets[i + 1:] = -100
396
+ break
397
+ # truncation, end at last question
398
+ if temp_id != 2 and end_count % 2 == 0:
399
+ # mask all after the last answer
400
+ targets[last_eoa + 1:] = -100
401
+ target_batch.append(targets.unsqueeze(0))
402
+ target_batch = torch.cat(target_batch, dim=0)
403
+ return target_batch
404
+
405
+ @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
406
+ @replace_return_docstrings(
407
+ output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
408
+ def forward(self,
409
+ input_ids: torch.LongTensor = None,
410
+ attention_mask: Optional[torch.Tensor] = None,
411
+ position_ids: Optional[torch.LongTensor] = None,
412
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
413
+ inputs_embeds: Optional[torch.FloatTensor] = None,
414
+ labels: Optional[torch.LongTensor] = None,
415
+ use_cache: Optional[bool] = None,
416
+ output_attentions: Optional[bool] = None,
417
+ output_hidden_states: Optional[bool] = None,
418
+ return_dict: Optional[bool] = None,
419
+ **kwargs) -> Union[Tuple, CausalLMOutputWithPast]:
420
+ r"""
421
+ Args:
422
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
423
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
424
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
425
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
426
+ Returns:
427
+ """
428
+
429
+ samples = kwargs.get('samples', None)
430
+ if samples:
431
+ infer_mode = samples.get('infer_mode', 'base')
432
+ if samples['data_type'][0] == 'text':
433
+ has_img = False
434
+ elif samples['data_type'][0] == 'multi':
435
+ has_img = True
436
+ else:
437
+ raise NotImplementedError
438
+
439
+ # encode text
440
+ text = samples['text_input']
441
+ # encode image
442
+ if has_img:
443
+ image = samples['image'][0]
444
+ bs = len(samples['text_input'][0])
445
+ image_nums = []
446
+ temp_image = []
447
+ for im in image:
448
+ if type(im) is list:
449
+ image_nums.append(len(im))
450
+ temp_image.extend(im)
451
+ else:
452
+ image_nums.append(1)
453
+ temp_image.append(im)
454
+ image = temp_image
455
+ assert type(image) is list and len(image_nums) == bs
456
+
457
+ to_regress_embeds, attention_mask, targets, im_mask = self.interleav_wrap(
458
+ image, text, image_nums)
459
+ else:
460
+ to_regress_tokens, targets = self.text2emb(
461
+ text, add_special_tokens=True)
462
+ to_regress_embeds = self.model.tok_embeddings(
463
+ to_regress_tokens.input_ids)
464
+ attention_mask = to_regress_tokens.attention_mask
465
+ im_mask = torch.zeros(to_regress_embeds.shape[:2]).cuda()
466
+
467
+ inputs_embeds = to_regress_embeds[:, :self.max_length]
468
+ attention_mask = attention_mask[:, :self.max_length]
469
+ targets = targets[:, :self.max_length]
470
+ im_mask = im_mask[:, :self.max_length].bool()
471
+ labels = targets
472
+ else:
473
+ im_mask = kwargs.get('im_mask', None)
474
+ infer_mode = kwargs.get('infer_mode', 'base')
475
+ if im_mask is None and inputs_embeds is not None:
476
+ im_mask = torch.zeros(inputs_embeds.shape[:2]).to(
477
+ inputs_embeds.device)
478
+ im_mask = im_mask.bool()
479
+
480
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
481
+ output_hidden_states = (
482
+ output_hidden_states if output_hidden_states is not None else
483
+ self.config.output_hidden_states)
484
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
485
+
486
+ # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
487
+ outputs = self.model(
488
+ input_ids=input_ids,
489
+ attention_mask=attention_mask,
490
+ position_ids=position_ids,
491
+ past_key_values=past_key_values,
492
+ inputs_embeds=inputs_embeds,
493
+ use_cache=use_cache,
494
+ output_attentions=output_attentions,
495
+ output_hidden_states=output_hidden_states,
496
+ return_dict=return_dict,
497
+ im_mask=im_mask,
498
+ infer_mode=infer_mode,
499
+ )
500
+
501
+ hidden_states = outputs[0]
502
+ logits = self.output(hidden_states)
503
+ logits = logits.float()
504
+
505
+ loss = None
506
+ if labels is not None:
507
+ # Shift so that tokens < n predict n
508
+ shift_logits = logits[..., :-1, :].contiguous()
509
+ shift_labels = labels[..., 1:].contiguous()
510
+ # Flatten the tokens
511
+ loss_fct = CrossEntropyLoss()
512
+ shift_logits = shift_logits.view(-1, self.config.vocab_size)
513
+ shift_labels = shift_labels.view(-1)
514
+ # Enable model parallelism
515
+ shift_labels = shift_labels.to(shift_logits.device)
516
+ loss = loss_fct(shift_logits, shift_labels)
517
+
518
+ if not return_dict:
519
+ output = (logits, ) + outputs[1:]
520
+ return (loss, ) + output if loss is not None else output
521
+
522
+ return CausalLMOutputWithPast(
523
+ loss=loss,
524
+ logits=logits,
525
+ past_key_values=outputs.past_key_values,
526
+ hidden_states=outputs.hidden_states,
527
+ attentions=outputs.attentions,
528
+ )
529
+
530
+ def prepare_inputs_for_generation(self,
531
+ input_ids,
532
+ past_key_values=None,
533
+ attention_mask=None,
534
+ inputs_embeds=None,
535
+ im_mask=None,
536
+ infer_mode='base',
537
+ **kwargs):
538
+ if past_key_values is not None:
539
+ past_length = past_key_values[0][0].shape[2]
540
+
541
+ # Some generation methods already pass only the last input ID
542
+ if input_ids.shape[1] > past_length:
543
+ remove_prefix_length = past_length
544
+ else:
545
+ # Default to old behavior: keep only final ID
546
+ remove_prefix_length = input_ids.shape[1] - 1
547
+
548
+ input_ids = input_ids[:, remove_prefix_length:]
549
+
550
+ position_ids = kwargs.get('position_ids', None)
551
+ if attention_mask is not None and position_ids is None:
552
+ # create position_ids on the fly for batch generation
553
+ position_ids = attention_mask.long().cumsum(-1) - 1
554
+ position_ids.masked_fill_(attention_mask == 0, 1)
555
+ if past_key_values:
556
+ position_ids = position_ids[:, -input_ids.shape[1]:]
557
+
558
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
559
+ if inputs_embeds is not None and past_key_values is None:
560
+ model_inputs = {'inputs_embeds': inputs_embeds}
561
+ else:
562
+ model_inputs = {'input_ids': input_ids}
563
+
564
+ im_mask = im_mask
565
+
566
+ model_inputs.update({
567
+ 'position_ids': position_ids,
568
+ 'past_key_values': past_key_values,
569
+ 'use_cache': kwargs.get('use_cache'),
570
+ 'attention_mask': attention_mask,
571
+ 'im_mask': im_mask,
572
+ 'infer_mode': infer_mode,
573
+ })
574
+ return model_inputs
575
+
576
+ @staticmethod
577
+ def _reorder_cache(past_key_values, beam_idx):
578
+ reordered_past = ()
579
+ for layer_past in past_key_values:
580
+ reordered_past += (tuple(
581
+ past_state.index_select(0, beam_idx.to(past_state.device))
582
+ for past_state in layer_past), )
583
+ return reordered_past
584
+
585
+ def build_inputs(self,
586
+ tokenizer,
587
+ query: str,
588
+ history: List[Tuple[str, str]] = [],
589
+ meta_instruction=''):
590
+ prompt = ''
591
+ if meta_instruction:
592
+ prompt += f"""<s>[UNUSED_TOKEN_146]system\n{meta_instruction}[UNUSED_TOKEN_145]\n"""
593
+ else:
594
+ prompt += '<s>'
595
+ for record in history:
596
+ prompt += f"""[UNUSED_TOKEN_146]user\n{record[0]}[UNUSED_TOKEN_145]\n[UNUSED_TOKEN_146]assistant\n{record[1]}[UNUSED_TOKEN_145]\n"""
597
+ prompt += f"""[UNUSED_TOKEN_146]user\n{query}[UNUSED_TOKEN_145]\n[UNUSED_TOKEN_146]assistant\n"""
598
+ return tokenizer([prompt], return_tensors='pt')
599
+
600
+ @torch.no_grad()
601
+ def chat(
602
+ self,
603
+ tokenizer,
604
+ query: str,
605
+ image: List[Tuple[str, str]] = [],
606
+ hd_num: int = 24,
607
+ history: List[Tuple[str, str]] = [],
608
+ streamer: Optional[BaseStreamer] = None,
609
+ max_new_tokens: int = 1024,
610
+ do_sample: bool = True,
611
+ num_beams: int = 1,
612
+ temperature: float = 1.0,
613
+ top_p: float = 0.8,
614
+ repetition_penalty: float=1.005,
615
+ infer_mode: str = 'base',
616
+ use_meta: bool = False,
617
+ meta_instruction:
618
+ str = 'You are an AI assistant whose name is InternLM-XComposer (浦语·灵笔).\n'
619
+ '- InternLM-XComposer (浦语·灵笔) is a multi-modality conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.\n'
620
+ '- InternLM-XComposer (浦语·灵笔) can understand and communicate fluently in the language chosen by the user such as English and 中文.\n'
621
+ '- InternLM-XComposer (浦语·灵笔) is capable of comprehending and articulating responses effectively based on the provided image.',
622
+ **kwargs,
623
+ ):
624
+
625
+ if not use_meta:
626
+ meta_instruction = ''
627
+ if image is None:
628
+ inputs = self.build_inputs(tokenizer, query, history, meta_instruction)
629
+ im_mask = torch.zeros(inputs['input_ids'].shape[:2]).cuda().bool()
630
+ else:
631
+ inputs, im_mask, _ = self.interleav_wrap_chat(query, image, history=history, meta_instruction=meta_instruction, hd_num=hd_num)
632
+ inputs = {
633
+ k: v.to(self.device)
634
+ for k, v in inputs.items() if torch.is_tensor(v)
635
+ }
636
+ # also add end-of-assistant token in eos token id to avoid unnecessary generation
637
+ eos_token_id = [
638
+ tokenizer.eos_token_id,
639
+ tokenizer.convert_tokens_to_ids(['[UNUSED_TOKEN_145]'])[0]
640
+ ]
641
+ outputs = self.generate(
642
+ **inputs,
643
+ streamer=streamer,
644
+ max_new_tokens=max_new_tokens,
645
+ num_beams=num_beams,
646
+ do_sample=do_sample,
647
+ temperature=temperature,
648
+ top_p=top_p,
649
+ eos_token_id=eos_token_id,
650
+ repetition_penalty=repetition_penalty,
651
+ im_mask=im_mask,
652
+ infer_mode=infer_mode,
653
+ **kwargs,
654
+ )
655
+ if image is None:
656
+ outputs = outputs[0].cpu().tolist()[len(inputs['input_ids'][0]):]
657
+ else:
658
+ outputs = outputs[0].cpu().tolist()
659
+ response = tokenizer.decode(outputs, skip_special_tokens=True)
660
+ response = response.split('[UNUSED_TOKEN_145]')[0]
661
+ history = history + [(query, response)]
662
+ return response, history
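
The input-preparation logic at the top of this hunk derives `position_ids` from the attention mask during batched generation. A minimal, self-contained sketch of that computation (plain PyTorch with a toy mask, not tied to the model itself):

```python
import torch

# toy left-padded batch: 0 marks padding, 1 marks real tokens
attention_mask = torch.tensor([[0, 0, 1, 1, 1],
                               [1, 1, 1, 1, 1]])

# same recipe as above: running count of real tokens minus one, padding positions forced to 1
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)

print(position_ids)
# tensor([[1, 1, 0, 1, 2],
#         [0, 1, 2, 3, 4]])
```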
README.md CHANGED
@@ -62,17 +62,19 @@ image = ['./examples/liuxiang.mp4',]
62
  with torch.autocast(device_type='cuda', dtype=torch.float16):
63
  response, his = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, use_meta=True)
64
  print(response)
65
- #The video opens with a shot of an athlete, dressed in a red and yellow uniform with the word "CHINA" emblazoned across the front, preparing for a race.
66
- #The athlete, Liu Xiang, is seen in a crouched position, focused and ready, with the Olympic rings visible in the background, indicating the prestigious setting of the Olympic Games. As the race commences, the athletes are seen sprinting towards the hurdles, their determination evident in their powerful strides.
67
- #The camera captures the intensity of the competition, with the athletes' numbers and times displayed on the screen, providing a real-time update on their performance. The race reaches a climax as Liu Xiang, still in his red and yellow uniform, triumphantly crosses the finish line, his arms raised in victory.
68
- #The crowd in the stands erupts into cheers, their excitement palpable as they witness the athlete's success. The video concludes with a close-up shot of Liu Xiang, still basking in the glory of his victory, as the Olympic rings continue to symbolize the significance of the event.
 
 
69
 
70
  query = 'tell me the athlete code of Liu Xiang'
71
  image = ['./examples/liuxiang.mp4',]
72
  with torch.autocast(device_type='cuda', dtype=torch.float16):
73
  response, _ = model.chat(tokenizer, query, image, history=his, do_sample=False, num_beams=3, use_meta=True)
74
  print(response)
75
- #The athlete code of Liu Xiang, as displayed on his uniform in the video, is "1363".
76
  ```
77
 
78
  </details>
@@ -100,21 +102,62 @@ image = ['./examples/cars1.jpg',
100
  with torch.autocast(device_type='cuda', dtype=torch.float16):
101
  response, his = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, use_meta=True)
102
  print(response)
103
- #To analyze the advantages and disadvantages of each car, we need to consider factors such as brand reputation, performance, design, cost, and maintenance.
104
- #1. Mercedes-Benz: - Advantages: Known for its luxury and reliability, the Mercedes-Benz brand offers a high level of comfort, advanced technology, and superior craftsmanship. The vehicle in the image appears to be an SUV, which is versatile for both city driving and off-road conditions. - Disadvantages: Typically, Mercedes-Benz vehicles are more expensive compared to other brands, and they may require more frequent maintenance due to their luxury status.
105
- #2. Bugatti: - Advantages: Bugatti is renowned for producing some of the fastest and most powerful cars in the world. The vehicle in the image is a sports car, likely offering an exhilarating driving experience with its high-performance engine and advanced aerodynamics. - Disadvantages: Bugatti cars are extremely expensive, making them less accessible to the average consumer. They also require specialized knowledge for maintenance and may not be suitable for everyday driving due to their high-performance nature.
106
- #3. BMW: - Advantages: BMW is a well-respected manufacturer known for its innovative engineering and design. The car in the image appears to be an electric vehicle (EV), which offers advantages such as lower emissions, reduced noise pollution, and potentially lower operating costs. - Disadvantages: EVs like the one in the image may have a shorter range compared to traditional gasoline-powered vehicles, limiting their suitability for long-distance travel.
107
- #Additionally, the initial cost of purchasing an EV can be higher than that of a conventional car. In conclusion, each car has its own set of advantages and disadvantages. The Mercedes-Benz offers luxury and versatility, the Bugatti provides high performance and exclusivity, and the BMW represents innovation in electric vehicle technology. The choice ultimately depends on the buyer's priorities, such as comfort, performance, environmental impact, and budget.
108
 
109
  query = 'Image4 <ImageHere>; How about the car in Image4'
110
  image.append('./examples/cars4.jpg')
111
  with torch.autocast(device_type='cuda', dtype=torch.float16):
112
  response, _ = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, history= his, use_meta=True)
113
  print(response)
114
- #The car in Image4 is a red sports car, which appears to be a Ferrari. Ferrari is a renowned Italian manufacturer known for producing some of the most iconic and high-performance sports cars in the world. - Advantages: Ferrari vehicles are synonymous with speed, luxury, and engineering excellence.
115
- #The car in the image likely offers an exhilarating driving experience with its powerful engine, advanced aerodynamics, and high-quality craftsmanship. The red color adds to the car's aesthetic appeal, making it stand out on the road. - Disadvantages: Ferrari cars are extremely expensive, making them less accessible to the average consumer.
116
- #They also require specialized knowledge for maintenance and may not be suitable for everyday driving due to their high-performance nature. In conclusion, the Ferrari in Image4 represents a pinnacle of automotive engineering and design, offering unmatched performance and luxury.
117
- #However, its high cost and specialized maintenance requirements make it less practical for everyday use compared to the other vehicles in the images.
118
  ```
119
 
120
 
@@ -141,10 +184,46 @@ image = ['./examples/dubai.png']
141
  with torch.autocast(device_type='cuda', dtype=torch.float16):
142
  response, _ = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, use_meta=True)
143
  print(response)
144
- #The infographic is a visual representation of various facts about Dubai. It begins with a statement about Palm Jumeirah, highlighting it as the largest artificial island visible from space. It then provides a historical context, noting that in 1968, there were only a few cars in Dubai, contrasting this with the current figure of more than 1.5 million vehicles.
145
- #The infographic also points out that Dubai has the world's largest Gold Chain, with 7 of the top 10 tallest hotels located there. Additionally, it mentions that the crime rate is near 0%, and the income tax rate is also 0%, with 20% of the world's total cranes operating in Dubai. Furthermore, it states that 17% of the population is Emirati, and 83% are immigrants.
146
- #The Dubai Mall is highlighted as the largest shopping mall in the world, with 1200 stores. The infographic also notes that Dubai has no standard address system, with no zip codes, area codes, or postal services. It mentions that the Burj Khalifa is so tall that its residents on top floors need to wait longer to break fast during Ramadan.
147
- #The infographic also includes information about Dubai's climate-controlled City, with the Royal Suite at Burj Al Arab costing $24,000 per night. Lastly, it notes that the net worth of the four listed billionaires is roughly equal to the GDP of Honduras.
148
 
149
  ```
150
 
 
62
  with torch.autocast(device_type='cuda', dtype=torch.float16):
63
  response, his = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, use_meta=True)
64
  print(response)
65
+ # The video begins with a man in a red and yellow uniform standing on the starting line of a track, preparing to compete in the 110-meter hurdles at the Athens 2004 Olympic Games. He is identified as Liu Xiang, a Chinese athlete, and his bib number is 1363. The scene is set in a stadium filled with spectators, indicating the significance of the event.
66
+ # As the race begins, all the athletes start running, but Liu Xiang quickly takes the lead. However, he encounters a hurdle and knocks it over. Despite this setback, he quickly recovers and continues to run. The race is intense, with athletes from various countries competing fiercely. In the end, Liu Xiang emerges as the winner with a time of 12.91 seconds, securing the gold medal for China.
67
+ # The video then transitions to a slow-motion replay of the race, focusing on Liu Xiang's performance and the knockdown of the hurdle. This allows viewers to appreciate the skill and determination of the athlete.
68
+ # Following the race, Liu Xiang is seen lying on the track, possibly exhausted from the intense competition. He then stands up and begins to celebrate his victory, waving his arms in the air and running around the track. The crowd cheers and celebrates with him, creating a joyful atmosphere.
69
+ # The video concludes with a replay of Liu Xiang's gold medal-winning moment, emphasizing the significance of his achievement at the Athens 2004 Olympic Games.
70
+ # Throughout the video, the Olympic logo is prominently displayed, reminding viewers of the global significance of the event and the athletes' dedication and perseverance in their pursuit of victory.
71
 
72
  query = 'tell me the athlete code of Liu Xiang'
73
  image = ['./examples/liuxiang.mp4',]
74
  with torch.autocast(device_type='cuda', dtype=torch.float16):
75
  response, _ = model.chat(tokenizer, query, image, history=his, do_sample=False, num_beams=3, use_meta=True)
76
  print(response)
77
+ # The athlete code of Liu Xiang is 1363.
78
  ```
79
 
80
  </details>
 
102
  with torch.autocast(device_type='cuda', dtype=torch.float16):
103
  response, his = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, use_meta=True)
104
  print(response)
105
+ # Certainly! Let's analyze the advantages and disadvantages of each car:
106
+ #
107
+ # 1. **Mercedes-Benz G-Class (SUV)**:
108
+ # - **Advantages**:
109
+ # - **Off-Road Capability**: The G-Class is renowned for its off-road prowess. It can handle a variety of terrains, making it ideal for adventurous driving.
110
+ # - **Reliability**: Mercedes-Benz is known for producing reliable vehicles, and the G-Class is no exception. It has a reputation for durability and longevity.
111
+ # - **Luxury Features**: As a Mercedes-Benz, the G-Class comes with a host of luxury features, including high-quality materials and advanced technology.
112
+ # - **Disadvantages**:
113
+ # - **Fuel Efficiency**: The G-Class is not known for its fuel efficiency. It consumes a significant amount of gasoline, which can be a disadvantage for those concerned with fuel economy.
114
+ # - **Size and Weight**: The G-Class is large and heavy, which can affect its handling and maneuverability, especially in urban environments.
115
+ # - **Cost**: The G-Class is generally more expensive compared to other SUVs, which can be a deterrent for some buyers.
116
+ #
117
+ # 2. **Bugatti Chiron (Sports Car)**:
118
+ # - **Advantages**:
119
+ # - **Performance**: The Bugatti Chiron is one of the fastest production cars available. It boasts impressive acceleration and top speed, making it a thrilling driving experience.
120
+ # - **Design**: The Chiron has a sleek and futuristic design that is both aesthetically pleasing and aerodynamically efficient.
121
+ # - **Status Symbol**: Owning a Bugatti is often seen as a status symbol, attracting attention and admiration.
122
+ # - **Disadvantages**:
123
+ # - **Cost**: The Bugatti Chiron is one of the most expensive cars in the world, making it out of reach for many potential buyers.
124
+ # - **Maintenance**: Bugatti vehicles require specialized maintenance, which can be costly and difficult to find.
125
+ # - **Road Legality**: In some regions, the Chiron may not be road-legal due to its high top speed and powerful engine.
126
+ #
127
+ # 3. **BMW i Vision Dynamics (Electric Concept Car)**:
128
+ # - **Advantages**:
129
+ # - **Environmental Impact**: As an electric vehicle, the i Vision Dynamics has a lower environmental impact compared to traditional gasoline-powered cars.
130
+ # - **Design**: The i Vision Dynamics has a unique and futuristic design that sets it apart from conventional cars.
131
+ # - **Performance**: Despite being an electric vehicle, the i Vision Dynamics boasts impressive performance, including a top speed of over 300 km/h.
132
+ # - **Disadvantages**:
133
+ # - **Cost**: Like the Bugatti Chiron, the i Vision Dynamics is an expensive concept car, making it inaccessible to most buyers.
134
+ # - **Range**: Electric vehicles typically have a limited range compared to gasoline-powered cars, which can be a concern for long-distance travel.
135
+ # - **Infrastructure**: The availability of charging stations can vary widely by region, which can limit the practicality of owning an electric car.
136
+
137
+ # In summary, the Mercedes-Benz G-Class is a reliable and capable off-road vehicle with luxury features, but it may not be the most fuel-efficient or affordable option. The Bugatti Chiron offers unparalleled performance and status, but it is extremely expensive and requires specialized maintenance. The BMW i Vision Dynamics is an innovative and environmentally friendly concept car with impressive performance, but it is also costly and limited by range and charging infrastructure. Each car has its own unique set of advantages and disadvantages, and the choice ultimately depends on individual preferences and priorities.
138
 
139
  query = 'Image4 <ImageHere>; How about the car in Image4'
140
  image.append('./examples/cars4.jpg')
141
  with torch.autocast(device_type='cuda', dtype=torch.float16):
142
  response, _ = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, history= his, use_meta=True)
143
  print(response)
144
+ # The car in Image4 is a **Ferrari LaFerrari**. The LaFerrari is a hybrid supercar produced by Ferrari, combining a V12 engine with an electric motor. Here's a detailed analysis of its advantages and disadvantages:
145
+ #
146
+ # ### **Advantages:**
147
+ # 1. **Performance**: The LaFerrari is one of the fastest production cars available, with a top speed of over 217 mph (349 km/h). It boasts impressive acceleration and handling, making it a thrilling driving experience.
148
+ # 2. **Design**: The LaFerrari has a distinctive and aggressive design that sets it apart from other supercars. Its aerodynamic features and sleek lines contribute to its performance and visual appeal.
149
+ # 3. **Hybrid Technology**: The LaFerrari uses a hybrid powertrain, combining a 6.3-liter V12 engine with an electric motor. This hybrid system provides a balance of power and efficiency, reducing emissions compared to traditional gasoline engines.
150
+ # 4. **Status Symbol**: Owning a LaFerrari is often seen as a status symbol, attracting attention and admiration. It represents a pinnacle of automotive engineering and luxury.
151
+ # 5. **Reliability**: Ferrari is known for producing high-quality, reliable vehicles. The LaFerrari benefits from the brand's reputation for excellence in engineering and craftsmanship.
152
+
153
+ # ### **Disadvantages:**
154
+ # 1. **Cost**: The LaFerrari is one of the most expensive cars in the world, making it inaccessible to most potential buyers. Its high price can be a significant deterrent.
155
+ # 2. **Maintenance**: Ferrari vehicles require specialized maintenance, which can be costly and difficult to find. The hybrid system may also add to the complexity and expense of servicing the car.
156
+ # 3. **Road Legality**: In some regions, the LaFerrari may not be road-legal due to its high top speed and powerful engine. This can limit its usability and appeal.
157
+ # 4. **Fuel Efficiency**: Despite the hybrid system, the LaFerrari consumes a significant amount of fuel, which can be a disadvantage for those concerned with fuel economy.
158
+ # 5. **Size and Weight**: The LaFerrari is a large and heavy vehicle, which can affect its handling and maneuverability, especially in urban environments.
159
+
160
+ # In summary, the Ferrari LaFerrari is a high-performance hybrid supercar with a distinctive design and impressive capabilities. However, its high cost, specialized maintenance requirements, and limited road legality can be significant disadvantages for some buyers. The LaFerrari is best suited for those who prioritize performance, luxury, and status over practicality and affordability.
161
  ```
162
 
163
 
 
184
  with torch.autocast(device_type='cuda', dtype=torch.float16):
185
  response, _ = model.chat(tokenizer, query, image, do_sample=False, num_beams=3, use_meta=True)
186
  print(response)
187
+ # The image is an infographic titled "Amazing Facts About Dubai." Here's a detailed analysis of its content:
188
+ #
189
+ # 1. **Title and Introduction:**
190
+ # - The title is prominently displayed at the top of the image in bold, blue letters.
191
+ # - The image features a colorful skyline of Dubai, highlighting the city's modern architecture.
192
+ #
193
+ # 2. **Facts About Palm Jumeirah:**
194
+ # - Palm Jumeirah is the largest artificial island and is visible from space.
195
+ # - In 1968, there were only 1.5 million cars in Dubai.
196
+ #
197
+ # 3. **Dubai's Gold Chain:**
198
+ # - Dubai has the world's largest Gold Chain, which is 4.2 km long.
199
+ # - 7 out of the 10 tallest hotels in the world are located in Dubai.
200
+ #
201
+ # 4. **Crime Rate and Income Tax:**
202
+ # - The crime rate is near 0%.
203
+ # - The income tax rate is 0%.
204
+ #
205
+ # 5. **Dubai Mall:**
206
+ # - Dubai Mall is the largest shopping mall in the world with 1200 stores.
207
+ # - 17% of the population is Emirati, and 83% are immigrants.
208
+ #
209
+ # 6. **Dubai's Address System:**
210
+ # - Dubai has no standard address system, with no zip codes, area codes, or postal services.
211
+ #
212
+ # 7. **Dispense Gold:**
213
+ # - Dubai is building a climate-controlled City, 2.25 times as big as Monaco.
214
+ # - The Royal Suite at Burj Al Arab is $24,000 per night.
215
+ #
216
+ # 8. **License and Billionaires:**
217
+ # - You need a license to drink alcohol even at home.
218
+ # - The net worth of the four listed billionaires is roughly equal to the GDP of Honduras.
219
+ #
220
+ # 9. **Sources:**
221
+ # - The infographic cites sources from Wikipedia, Forbes, Gulf News, and The Guardian.
222
+ #
223
+ # 10. **Design and Compilation:**
224
+ # - The image is designed and compiled by FMEXtensions, a company based in the United Arab Emirates.
225
+ #
226
+ # The infographic uses a combination of text, icons, and images to convey interesting facts about Dubai, emphasizing its modernity, wealth, and unique features.
227
 
228
  ```
229
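
The `chat()` signature shown earlier also accepts an optional `streamer`. A hedged sketch of wiring up Hugging Face's `TextStreamer`, reusing the `model` and `tokenizer` objects initialised earlier in this README (greedy decoding, since beam search does not stream):

```python
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
query = 'Image1 <ImageHere>; Describe this image in detail'
image = ['./examples/dubai.png']
with torch.autocast(device_type='cuda', dtype=torch.float16):
    # tokens are printed to stdout as they are generated
    response, _ = model.chat(tokenizer, query, image, streamer=streamer,
                             do_sample=False, num_beams=1, use_meta=True)
```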
 
added_tokens.json ADDED
@@ -0,0 +1,8 @@
1
+ {
2
+ "<|action_end|>": 92547,
3
+ "<|action_start|>": 92546,
4
+ "<|im_end|>": 92545,
5
+ "<|im_start|>": 92544,
6
+ "<|interpreter|>": 92548,
7
+ "<|plugin|>": 92549
8
+ }
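
These ids line up with the tokenizer's added special tokens. A quick, hedged way to sanity-check them, assuming the tokenizer files have been downloaded:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b-chat', trust_remote_code=True)
print(tok.convert_tokens_to_ids(['<|im_start|>', '<|im_end|>', '<|plugin|>']))
# expected, per added_tokens.json above: [92544, 92545, 92549]
```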
build_mlp.py ADDED
@@ -0,0 +1,249 @@
1
+ import torch
2
+ import torch.nn as nn
3
+ import re
4
+ import math
5
+ from transformers import CLIPVisionModel, CLIPImageProcessor, CLIPVisionConfig
6
+
7
+
8
+ def build_vision_tower():
9
+ vision_tower = 'internlm/internlm-xcomposer2d5-clip'
10
+ return CLIPVisionTower(vision_tower)
11
+
12
+
13
+ def build_vision_projector():
14
+ projector_type = 'mlp2x_gelu'
15
+ mm_hidden_size = 4096
16
+ mid_hidden_size = 4096
17
+ hidden_size = 4096
18
+
19
+ mlp_gelu_match = re.match(r'^mlp(\d+)x_gelu$', projector_type)
20
+ if mlp_gelu_match:
21
+ mlp_depth = int(mlp_gelu_match.group(1))
22
+ modules = [nn.Linear(mm_hidden_size, mid_hidden_size)]
23
+ for _ in range(1, mlp_depth):
24
+ modules.append(nn.GELU())
25
+ modules.append(nn.Linear(mid_hidden_size, mid_hidden_size))
26
+
27
+ return nn.Sequential(*modules)
28
+
29
+ if projector_type == 'identity':
30
+ return IdentityMap()
31
+
32
+ raise ValueError(f'Unknown projector type: {projector_type}')
33
+
34
+ class IdentityMap(nn.Module):
35
+ def __init__(self):
36
+ super().__init__()
37
+
38
+ def forward(self, x, *args, **kwargs):
39
+ return x
40
+
41
+ @property
42
+ def config(self):
43
+ return {"mm_projector_type": 'identity'}
44
+
45
+
46
+ class CLIPVisionTower(nn.Module):
47
+ def __init__(self, vision_tower):
48
+ super().__init__()
49
+
50
+ self.is_loaded = False
51
+
52
+ self.vision_tower_name = vision_tower
53
+ #self.conv_dim = 8192
54
+ #self.conv = torch.nn.Conv2d(1024, self.conv_dim,3,2,1)
55
+ self.select_layer = -1
56
+ self.select_feature = 'patch'
57
+ self.load_model()
58
+
59
+ def load_model(self):
60
+ self.vision_tower = CLIPVisionModel.from_pretrained(self.vision_tower_name)
61
+ self.vision_tower.requires_grad_(False)
62
+
63
+ self.is_loaded = True
64
+
65
+ def resize_pos(self):
66
+ print ('Dummy Resized')
67
+
68
+ def feature_select(self, image_forward_outs):
69
+ image_features = image_forward_outs.hidden_states[self.select_layer]
70
+ if self.select_feature == 'patch':
71
+ image_features = image_features[:, 1:]
72
+ elif self.select_feature == 'cls_patch':
73
+ image_features = image_features
74
+ else:
75
+ raise ValueError(f'Unexpected select feature: {self.select_feature}')
76
+ return image_features
77
+
78
+ def forward(self, images, glb_GN, sub_GN):
79
+ if not self.is_loaded:
80
+ self.load_model()
81
+ assert type(images) is list
82
+ shapes = []
83
+ input_imgs = []
84
+ for img in images:
85
+ _, C, H, W = img.shape
86
+ shapes.append([H//560, W//560])
87
+ sub_img = img.reshape(1,3,H//560,560,W//560,560).permute(0,2,4,1,3,5).reshape(-1,3,560,560).contiguous()
88
+ glb_img = torch.nn.functional.interpolate(img.float(), size=(560,560), mode='bicubic',).to(sub_img.dtype)
89
+ input_imgs.append(glb_img)
90
+ input_imgs.append(sub_img)
91
+ input_imgs = torch.cat(input_imgs, dim=0)
92
+
93
+ image_forward_outs = self.vision_tower(input_imgs.to(device=self.device, dtype=self.dtype), output_hidden_states=True)
94
+ image_features = self.feature_select(image_forward_outs).to(input_imgs.dtype) ### B*?, N, C
95
+ _, N, C = image_features.shape
96
+ H = int(math.sqrt(N))
97
+ assert N == 40 ** 2
98
+
99
+ output_imgs = []
100
+ output_len = []
101
+ for [h, w] in shapes:
102
+ B_ = h*w
103
+ glb_img = image_features[:1] ### 1, N, C
104
+ glb_img = glb_img.reshape(1,H,H,C).reshape(1,H//2,2,H//2,2,C).contiguous().permute(0,1,3,2,4,5).reshape(1,H//2,H//2,4*C).contiguous()
105
+ temp_glb_GN = sub_GN.repeat(1, H//2, 1, 1)
106
+ glb_img = torch.cat([glb_img, temp_glb_GN], dim=2).reshape(1,-1,4*C)
107
+
108
+ sub_img = image_features[1:1+B_] ### ?, N, C
109
+ sub_img = sub_img.reshape(B_,H,H,C).reshape(B_,H//2,2,H//2,2,C).contiguous().permute(0,1,3,2,4,5).reshape(B_,-1,4*C).contiguous()
110
+ sub_img = sub_img.reshape(1, h, w, 20, 20, -1).permute(0,1,3,2,4,5).reshape(1,h*20,w*20,4*C)
111
+ temp_sub_GN = sub_GN.repeat(1, h*20, 1, 1)
112
+ sub_img = torch.cat([sub_img, temp_sub_GN], dim=2).reshape(1,-1,4*C)
113
+
114
+ output_imgs.append(torch.cat([glb_img, glb_GN, sub_img], dim=1))
115
+ temp_len = int((h*w+1)*400 + 1 + (h+1)*20)
116
+ assert temp_len == output_imgs[-1].shape[1]
117
+ output_len.append(temp_len)
118
+
119
+ image_features = image_features[1+h*w:]
120
+
121
+ output_imgs = torch.cat(output_imgs, dim=1)
122
+
123
+ return output_imgs, output_len
124
+
125
+ @property
126
+ def dummy_feature(self):
127
+ return torch.zeros(1, self.hidden_size, device=self.device, dtype=self.dtype)
128
+
129
+ @property
130
+ def dtype(self):
131
+ return self.vision_tower.dtype
132
+
133
+ @property
134
+ def device(self):
135
+ return self.vision_tower.device
136
+
137
+ @property
138
+ def config(self):
139
+ if self.is_loaded:
140
+ return self.vision_tower.config
141
+ else:
142
+ return self.cfg_only
143
+
144
+ @property
145
+ def hidden_size(self):
146
+ return self.config.hidden_size
147
+
148
+ @property
149
+ def num_patches(self):
150
+ return (self.config.image_size // self.config.patch_size) ** 2
151
+
152
+ class PLoRA(nn.Linear):
153
+ def __init__(self,
154
+ in_features: int,
155
+ out_features: int,
156
+ bias: bool = True,
157
+ device=None,
158
+ dtype=None,
159
+ lora_r=8,
160
+ lora_alpha=16,
161
+ lora_dropout=0.05,
162
+ lora_len=0,
163
+ **kwargs) -> None:
164
+ super().__init__(in_features, out_features, bias, device, dtype)
165
+ self.lora_r = lora_r
166
+ self.lora_alpha = lora_alpha
167
+ self.lora_len = lora_len
168
+ if lora_dropout > 0.:
169
+ self.lora_dropout = nn.Dropout(p=lora_dropout)
170
+ else:
171
+ self.lora_dropout = lambda x: x
172
+ self.lora_scaling = self.lora_alpha / self.lora_r
173
+
174
+ self.Plora_A = nn.Linear(in_features,
175
+ self.lora_r,
176
+ bias=False,
177
+ device=device,
178
+ dtype=dtype)
179
+ self.Plora_B = nn.Linear(self.lora_r,
180
+ out_features,
181
+ bias=False,
182
+ device=device,
183
+ dtype=dtype)
184
+
185
+ self.lora_sft_A = nn.Linear(in_features,
186
+ 256,
187
+ bias=False,
188
+ device=device,
189
+ dtype=dtype)
190
+ self.lora_sft_B = nn.Linear(256,
191
+ out_features,
192
+ bias=False,
193
+ device=device,
194
+ dtype=dtype)
195
+
196
+ self.lora_dpo_A = nn.Linear(in_features,
197
+ 256,
198
+ bias=False,
199
+ device=device,
200
+ dtype=dtype)
201
+ self.lora_dpo_B = nn.Linear(256,
202
+ out_features,
203
+ bias=False,
204
+ device=device,
205
+ dtype=dtype)
206
+
207
+ self.lora_web_A = nn.Linear(in_features,
208
+ 512,
209
+ bias=False,
210
+ device=device,
211
+ dtype=dtype)
212
+ self.lora_web_B = nn.Linear(512,
213
+ out_features,
214
+ bias=False,
215
+ device=device,
216
+ dtype=dtype)
217
+
218
+ self.reset_parameters()
219
+
220
+ def reset_parameters(self):
221
+ if hasattr(self, 'lora_A'):
222
+ # initialize A the same way as the default for nn.Linear and B to zero
223
+ nn.init.kaiming_uniform_(self.lora_A.weight, a=math.sqrt(5))
224
+ nn.init.zeros_(self.lora_B.weight)
225
+ #print ("lora weight init {} {}".format(torch.mean(self.lora_A.weight), torch.mean(self.lora_B.weight)))
226
+
227
+ def forward(self, x, im_mask=None, infer_mode='base'):
228
+ B, N, C = x.shape
229
+ im_mask = im_mask.view(-1)
230
+ x = x.reshape(-1, C)
231
+ res = super().forward(x)
232
+ if infer_mode == 'web':
233
+ res += self.lora_web_B(self.lora_web_A(x))
234
+ elif infer_mode == 'write':
235
+ res += self.lora_sft_B(self.lora_sft_A(x))
236
+ res += self.lora_dpo_B(self.lora_dpo_A(x))
237
+ else:
238
+ pass
239
+ if im_mask is not None:
240
+ if torch.sum(im_mask) > 0:
241
+ part_x = x[im_mask]
242
+ res[im_mask] += self.Plora_B(self.Plora_A(
243
+ self.lora_dropout(part_x))) * self.lora_scaling
244
+ else:
245
+ part_x = x[:1]
246
+ res[:1] += self.Plora_B(self.Plora_A(
247
+ self.lora_dropout(part_x))) * 0
248
+
249
+ return res.reshape(B, N, -1)
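
`PLoRA` adds the partial-LoRA branch only at positions flagged in `im_mask` (image tokens), plus the task-specific `sft`/`dpo`/`web` adapters selected by `infer_mode`. A rough shape-check sketch, assuming `build_mlp.py` is importable from the working directory; weights here are untrained, so only the shapes are meaningful:

```python
import torch
from build_mlp import PLoRA  # assumption: this file sits on the import path

layer = PLoRA(64, 64, bias=False, lora_r=8, lora_alpha=16)
x = torch.randn(2, 10, 64)                   # (batch, tokens, features)
im_mask = torch.zeros(2, 10, dtype=torch.bool)
im_mask[:, :4] = True                        # pretend the first 4 tokens come from the vision encoder
out = layer(x, im_mask, infer_mode='base')   # LoRA delta applied only at masked positions
print(out.shape)                             # torch.Size([2, 10, 64])
```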
config.json ADDED
@@ -0,0 +1,37 @@
1
+ {
2
+ "_name_or_path": "internlm/internlm-xcomposer2d5-7b-chat",
3
+ "architectures": [
4
+ "InternLMXComposer2ForCausalLM"
5
+ ],
6
+ "attn_implementation": "flash_attention_2",
7
+ "auto_map": {
8
+ "AutoConfig": "configuration_internlm_xcomposer2.InternLMXcomposer2Config",
9
+ "AutoModel": "modeling_internlm_xcomposer2.InternLMXComposer2ForCausalLM",
10
+ "AutoModelForCausalLM": "modeling_internlm_xcomposer2.InternLMXComposer2ForCausalLM"
11
+ },
12
+ "bias": false,
13
+ "bos_token_id": 1,
14
+ "eos_token_id": 2,
15
+ "hidden_act": "silu",
16
+ "hidden_size": 4096,
17
+ "initializer_range": 0.02,
18
+ "intermediate_size": 14336,
19
+ "max_length": 16384,
20
+ "max_position_embeddings": 24576,
21
+ "model_type": "internlm2",
22
+ "num_attention_heads": 32,
23
+ "num_hidden_layers": 32,
24
+ "num_key_value_heads": 8,
25
+ "pad_token_id": 2,
26
+ "rms_norm_eps": 1e-05,
27
+ "rope_scaling": {
28
+ "factor": 2.0,
29
+ "type": "dynamic"
30
+ },
31
+ "rope_theta": 1000000,
32
+ "tie_word_embeddings": false,
33
+ "torch_dtype": "float16",
34
+ "transformers_version": "4.33.1",
35
+ "use_cache": false,
36
+ "vocab_size": 92544
37
+ }
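
A hedged way to inspect this configuration through the standard `transformers` entry point (`trust_remote_code` is needed because the config class ships with the repo):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained('internlm/internlm-xcomposer2d5-7b-chat', trust_remote_code=True)
print(cfg.hidden_size, cfg.num_key_value_heads, cfg.rope_scaling)
# expected, per config.json above: 4096, 8, and the dynamic rope_scaling dict
```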
configuration_internlm_xcomposer2.py ADDED
@@ -0,0 +1,150 @@
1
+ # coding=utf-8
2
+ # Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # This code is based on transformers/src/transformers/models/llama/configuration_llama.py
5
+ #
6
+ # Licensed under the Apache License, Version 2.0 (the "License");
7
+ # you may not use this file except in compliance with the License.
8
+ # You may obtain a copy of the License at
9
+ #
10
+ # http://www.apache.org/licenses/LICENSE-2.0
11
+ #
12
+ # Unless required by applicable law or agreed to in writing, software
13
+ # distributed under the License is distributed on an "AS IS" BASIS,
14
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+ # See the License for the specific language governing permissions and
16
+ # limitations under the License.
17
+ """ InternLM2 model configuration"""
18
+
19
+ from transformers.configuration_utils import PretrainedConfig
20
+ from transformers.utils import logging
21
+
22
+ logger = logging.get_logger(__name__)
23
+
24
+ INTERNLM2_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
25
+
26
+
27
+ class InternLMXcomposer2Config(PretrainedConfig):
28
+ r"""
29
+ This is the configuration class to store the configuration of a [`InternLM2Model`]. It is used to instantiate
30
+ an InternLM2 model according to the specified arguments, defining the model architecture. Instantiating a
31
+ configuration with the defaults will yield a similar configuration to that of the InternLM2-7B.
32
+
33
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
34
+ documentation from [`PretrainedConfig`] for more information.
35
+
36
+
37
+ Args:
38
+ vocab_size (`int`, *optional*, defaults to 32000):
39
+ Vocabulary size of the InternLM2 model. Defines the number of different tokens that can be represented by the
40
+ `inputs_ids` passed when calling [`InternLM2Model`]
41
+ hidden_size (`int`, *optional*, defaults to 4096):
42
+ Dimension of the hidden representations.
43
+ intermediate_size (`int`, *optional*, defaults to 11008):
44
+ Dimension of the MLP representations.
45
+ num_hidden_layers (`int`, *optional*, defaults to 32):
46
+ Number of hidden layers in the Transformer encoder.
47
+ num_attention_heads (`int`, *optional*, defaults to 32):
48
+ Number of attention heads for each attention layer in the Transformer encoder.
49
+ num_key_value_heads (`int`, *optional*):
50
+ This is the number of key_value heads that should be used to implement Grouped Query Attention. If
51
+ `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
52
+ `num_key_value_heads=1 the model will use Multi Query Attention (MQA) otherwise GQA is used. When
53
+ converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
54
+ by meanpooling all the original heads within that group. For more details checkout [this
55
+ paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
56
+ `num_attention_heads`.
57
+ hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
58
+ The non-linear activation function (function or string) in the decoder.
59
+ max_position_embeddings (`int`, *optional*, defaults to 2048):
60
+ The maximum sequence length that this model might ever be used with. Typically set this to something large
61
+ just in case (e.g., 512 or 1024 or 2048).
62
+ initializer_range (`float`, *optional*, defaults to 0.02):
63
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
64
+ rms_norm_eps (`float`, *optional*, defaults to 1e-12):
65
+ The epsilon used by the rms normalization layers.
66
+ use_cache (`bool`, *optional*, defaults to `True`):
67
+ Whether or not the model should return the last key/values attentions (not used by all models). Only
68
+ relevant if `config.is_decoder=True`.
69
+ tie_word_embeddings(`bool`, *optional*, defaults to `False`):
70
+ Whether to tie weight embeddings
71
+ Example:
72
+
73
+ """
74
+ model_type = "internlm2"
75
+ _auto_class = "AutoConfig"
76
+
77
+ def __init__( # pylint: disable=W0102
78
+ self,
79
+ vocab_size=103168,
80
+ hidden_size=4096,
81
+ intermediate_size=11008,
82
+ num_hidden_layers=32,
83
+ num_attention_heads=32,
84
+ num_key_value_heads=None,
85
+ hidden_act="silu",
86
+ max_position_embeddings=2048,
87
+ initializer_range=0.02,
88
+ rms_norm_eps=1e-6,
89
+ use_cache=True,
90
+ pad_token_id=0,
91
+ bos_token_id=1,
92
+ eos_token_id=2,
93
+ tie_word_embeddings=False,
94
+ bias=True,
95
+ rope_theta=10000,
96
+ rope_scaling=None,
97
+ attn_implementation="flash_attention_2",
98
+ **kwargs,
99
+ ):
100
+ self.vocab_size = vocab_size
101
+ self.max_position_embeddings = max_position_embeddings
102
+ self.hidden_size = hidden_size
103
+ self.intermediate_size = intermediate_size
104
+ self.num_hidden_layers = num_hidden_layers
105
+ self.num_attention_heads = num_attention_heads
106
+ self.bias = bias
107
+
108
+ if num_key_value_heads is None:
109
+ num_key_value_heads = num_attention_heads
110
+ self.num_key_value_heads = num_key_value_heads
111
+
112
+ self.hidden_act = hidden_act
113
+ self.initializer_range = initializer_range
114
+ self.rms_norm_eps = rms_norm_eps
115
+ self.use_cache = use_cache
116
+ self.rope_theta = rope_theta
117
+ self.rope_scaling = rope_scaling
118
+ self._rope_scaling_validation()
119
+
120
+ self.attn_implementation = attn_implementation
121
+ if self.attn_implementation is None:
122
+ self.attn_implementation = "flash_attention_2"
123
+ super().__init__(
124
+ pad_token_id=pad_token_id,
125
+ bos_token_id=bos_token_id,
126
+ eos_token_id=eos_token_id,
127
+ tie_word_embeddings=tie_word_embeddings,
128
+ **kwargs,
129
+ )
130
+
131
+ def _rope_scaling_validation(self):
132
+ """
133
+ Validate the `rope_scaling` configuration.
134
+ """
135
+ if self.rope_scaling is None:
136
+ return
137
+
138
+ if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 2:
139
+ raise ValueError(
140
+ "`rope_scaling` must be a dictionary with two fields, `type` and `factor`, "
141
+ f"got {self.rope_scaling}"
142
+ )
143
+ rope_scaling_type = self.rope_scaling.get("type", None)
144
+ rope_scaling_factor = self.rope_scaling.get("factor", None)
145
+ if rope_scaling_type is None or rope_scaling_type not in ["linear", "dynamic"]:
146
+ raise ValueError(
147
+ f"`rope_scaling`'s type field must be one of ['linear', 'dynamic'], got {rope_scaling_type}"
148
+ )
149
+ if rope_scaling_factor is None or not isinstance(rope_scaling_factor, float) or rope_scaling_factor < 1.0:
150
+ raise ValueError(f"`rope_scaling`'s factor field must be a float >= 1, got {rope_scaling_factor}")
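
The `_rope_scaling_validation` hook above only accepts `linear` or `dynamic` scaling with a float factor of at least 1. A small sketch, assuming the file is importable locally as `configuration_internlm_xcomposer2`:

```python
from configuration_internlm_xcomposer2 import InternLMXcomposer2Config

ok = InternLMXcomposer2Config(rope_scaling={'type': 'dynamic', 'factor': 2.0})
print(ok.rope_scaling)  # {'type': 'dynamic', 'factor': 2.0}

try:
    InternLMXcomposer2Config(rope_scaling={'type': 'ntk', 'factor': 2.0})
except ValueError as err:
    print(err)  # type field must be one of ['linear', 'dynamic']
```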
examples/cars1.jpg ADDED
examples/cars2.jpg ADDED
examples/cars3.jpg ADDED
examples/cars4.jpg ADDED
examples/dubai.png ADDED
Git LFS Details

  • SHA256: d1791fdc7767a6e868da0e35d0158f02eae0c78229a0f4505580d756b4ea3929
  • Pointer size: 132 Bytes
  • Size of remote file: 2.8 MB
examples/liuxiang.mp4 ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:29e1448fe188d8cca2e85fd81c236c53fd61784063d93bc09e2301d33798937a
3
+ size 26855609
generation_config.json ADDED
@@ -0,0 +1,9 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "bos_token_id": 1,
4
+ "eos_token_id": 2,
5
+ "max_length": 16384,
6
+ "pad_token_id": 2,
7
+ "transformers_version": "4.33.1",
8
+ "use_cache": false
9
+ }
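
These defaults can be read back (and overridden per call) via `GenerationConfig`; a hedged sketch:

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained('internlm/internlm-xcomposer2d5-7b-chat')
print(gen_cfg.max_length, gen_cfg.eos_token_id, gen_cfg.use_cache)
# expected, per generation_config.json above: 16384 2 False
```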
ixc_utils.py ADDED
@@ -0,0 +1,145 @@
1
+ import os
2
+ import torch
3
+ import numpy as np
4
+ import torchvision
5
+ from urllib.request import urlopen
6
+ from PIL import Image, ImageDraw, ImageFont
7
+ from torchvision.transforms.functional import InterpolationMode
8
+ import torchvision.transforms as transforms
9
+ from decord import VideoReader
10
+
11
+ def get_font():
12
+ truetype_url = 'https://huggingface.co/internlm/internlm-xcomposer2d5-7b/resolve/main/SimHei.ttf?download=true'
13
+ ff = urlopen(truetype_url)
14
+ font = ImageFont.truetype(ff, size=40)
15
+ return font
16
+
17
+ def padding_336(b, pad=336):
18
+ width, height = b.size
19
+ tar = int(np.ceil(height / pad) * pad)
20
+ top_padding = 0 # int((tar - height)/2)
21
+ bottom_padding = tar - height - top_padding
22
+ left_padding = 0
23
+ right_padding = 0
24
+ b = transforms.functional.pad(b, [left_padding, top_padding, right_padding, bottom_padding], fill=[255,255,255])
25
+
26
+ return b
27
+
28
+ def Image_transform(img, hd_num=25):
29
+ width, height = img.size
30
+ trans = False
31
+ if width < height:
32
+ img = img.transpose(Image.TRANSPOSE)
33
+ trans = True
34
+ width, height = img.size
35
+ ratio = (width/ height)
36
+ scale = 1
37
+ while scale*np.ceil(scale/ratio) <= hd_num:
38
+ scale += 1
39
+ scale -= 1
40
+ scale = min(np.ceil(width / 560), scale)
41
+ new_w = int(scale * 560)
42
+ new_h = int(new_w / ratio)
43
+ #print (scale, f'{height}/{new_h}, {width}/{new_w}')
44
+
45
+ img = transforms.functional.resize(img, [new_h, new_w],)
46
+ img = padding_336(img, 560)
47
+ width, height = img.size
48
+ if trans:
49
+ img = img.transpose(Image.TRANSPOSE)
50
+
51
+ return img
52
+
53
+
54
+ def Video_transform(img, hd_num=25):
55
+ width, height = img.size
56
+ trans = False
57
+ if width < height:
58
+ img = img.transpose(Image.TRANSPOSE)
59
+ trans = True
60
+ width, height = img.size
61
+ ratio = (width/ height)
62
+ scale = 1
63
+ new_h = int(scale * 560)
64
+ new_w = int(new_h * ratio)
65
+ #print (new_h, new_w)
66
+
67
+ img = transforms.functional.resize(img, [new_h, new_w],)
68
+ img = img.transpose(Image.TRANSPOSE)
69
+ img = padding_336(img, 560)
70
+ width, height = img.size
71
+ if not trans:
72
+ img = img.transpose(Image.TRANSPOSE)
73
+
74
+ return img
75
+
76
+ def frame2img(imgs, font):
77
+ new_imgs = []
78
+ for img in imgs:
79
+ w, h = img.size
80
+ scale = w/h
81
+ if w > h:
82
+ new_w = 560 * 2
83
+ new_h = int(560 * 2 / scale)
84
+ else:
85
+ new_w = int(560 * 2 * scale)
86
+ new_h = 560 * 2
87
+ img = transforms.functional.resize(img, [new_h, new_w],)
88
+ new_imgs.append(img)
89
+ imgs = new_imgs
90
+ new_w = 0
91
+ new_h = 0
92
+ pad = 40
93
+ if w > h:
94
+ for im in imgs:
95
+ w,h = im.size
96
+ new_w = max(new_w, w)
97
+ new_h += h + 10 + pad
98
+ new_img = Image.new('RGB', (new_w, new_h), 'white')
99
+ draw = ImageDraw.Draw(new_img)
100
+ curr_h = 0
101
+ for idx, im in enumerate(imgs):
102
+ w,h = im.size
103
+ new_img.paste(im, (0, pad + curr_h))
104
+ draw.text((0, curr_h ), f'<IMAGE {idx}>', font=font, fill='black')
105
+ if idx + 1 < len(imgs):
106
+ draw.line([(0, pad +curr_h + h +5), (new_w, pad +curr_h + h +5)], fill = 'black', width=2)
107
+ curr_h += h + 10 + pad
108
+ #print (new_w, new_h)
109
+ else:
110
+ for im in imgs:
111
+ w,h = im.size
112
+ new_w += w + 10
113
+ new_h = max(new_h, h)
114
+ new_h += pad
115
+ new_img = Image.new('RGB', (new_w, new_h), 'white')
116
+ draw = ImageDraw.Draw(new_img)
117
+ curr_w = 0
118
+ for idx, im in enumerate(imgs):
119
+ w,h = im.size
120
+ new_img.paste(im, (curr_w, pad))
121
+ draw.text((curr_w, 0), f'<IMAGE {idx}>', font=font, fill='black')
122
+ if idx + 1 < len(imgs):
123
+ draw.line([(curr_w + w + 5, 0), (curr_w + w + 5, new_h)], fill = 'black', width=2)
124
+ curr_w += w + 10
125
+ return new_img
126
+
127
+ def load_video(video_path, num_frm=32, start=None, end=None):
128
+ vid = VideoReader(video_path, num_threads=1)
129
+ fps = vid.get_avg_fps()
130
+ t_stride = int(round(float(fps) / int(1)))
131
+ start_idx = 0 if start is None else start
132
+ end_idx = len(vid) if end is None else end
133
+ all_pos = list(range(start_idx, end_idx, t_stride))
134
+ try:
135
+ images = [vid[i].numpy() for i in all_pos]
136
+ except:
137
+ images = [vid[i].asnumpy() for i in all_pos]
138
+ if len(images) > num_frm:
139
+ num_frm = min(num_frm, len(images))
140
+ step_size = len(images) / (num_frm + 1)
141
+ indices = [int(i*step_size) for i in range(num_frm)]
142
+ images = [images[i] for i in indices]
143
+ images = [Image.fromarray(arr) for arr in images]
144
+ return images
145
+
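
Taken together, `load_video`, `frame2img`, and `Video_transform` turn a clip into the single labelled frame grid the model consumes. A rough, hedged usage sketch (assumes `decord` is installed, the file is importable as `ixc_utils`, and any local `.ttf` font stands in for the SimHei font that `get_font()` downloads):

```python
from PIL import ImageFont
from ixc_utils import load_video, frame2img, Video_transform

frames = load_video('./examples/liuxiang.mp4', num_frm=16)   # sample up to 16 frames as PIL images
font = ImageFont.truetype('SimHei.ttf', size=40)             # assumption: any local .ttf works for the labels
grid = frame2img(frames, font)                                # stitch frames with '<IMAGE i>' labels
grid = Video_transform(grid, hd_num=24)                       # resize/pad to multiples of 560 for the ViT
grid.save('liuxiang_frames.png')
```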
modeling_internlm2.py ADDED
@@ -0,0 +1,997 @@
1
+ # Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # This code is based on transformers/src/transformers/models/llama/modeling_llama.py
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+ """ PyTorch InternLM2 model."""
17
+ import math
18
+ import queue
19
+ import threading
20
+ import warnings
21
+ import copy
22
+ import numpy as np
23
+ from typing import List, Optional, Tuple, Union
24
+ from torchvision import transforms
25
+ from torchvision.transforms.functional import InterpolationMode
26
+ from PIL import Image
27
+
28
+ import torch
29
+ import torch.nn.functional as F
30
+ import torch.utils.checkpoint
31
+ from einops import rearrange
32
+ from torch import nn
33
+ from transformers.activations import ACT2FN
34
+ from transformers.modeling_outputs import (
35
+ BaseModelOutputWithPast,
36
+ CausalLMOutputWithPast,
37
+ SequenceClassifierOutputWithPast,
38
+ )
39
+ from transformers.modeling_utils import PreTrainedModel
40
+ from transformers.utils import (
41
+ add_start_docstrings,
42
+ add_start_docstrings_to_model_forward,
43
+ logging,
44
+ replace_return_docstrings,
45
+ )
46
+
47
+ try:
48
+ from transformers.generation.streamers import BaseStreamer
49
+ except: # noqa # pylint: disable=bare-except
50
+ BaseStreamer = None
51
+
52
+ from .build_mlp import PLoRA
53
+ from .configuration_internlm_xcomposer2 import InternLMXcomposer2Config as InternLM2Config
54
+
55
+ logger = logging.get_logger(__name__)
56
+
57
+ _CONFIG_FOR_DOC = "InternLM2Config"
58
+
59
+ flash_attn_func, flash_attn_varlen_func = None, None
60
+ pad_input, index_first_axis, unpad_input = None, None, None
61
+ def _import_flash_attn():
62
+ global flash_attn_func, flash_attn_varlen_func
63
+ global pad_input, index_first_axis, unpad_input
64
+ try:
65
+ from flash_attn import flash_attn_func as _flash_attn_func, flash_attn_varlen_func as _flash_attn_varlen_func
66
+ from flash_attn.bert_padding import pad_input as _pad_input, index_first_axis as _index_first_axis, unpad_input as _unpad_input
67
+ flash_attn_func, flash_attn_varlen_func = _flash_attn_func, _flash_attn_varlen_func
68
+ pad_input, index_first_axis, unpad_input = _pad_input, _index_first_axis, _unpad_input
69
+ except ImportError:
70
+ raise ImportError("flash_attn is not installed.")
71
+
72
+ # Copied from transformers.models.llama.modeling_llama._get_unpad_data
73
+ def _get_unpad_data(attention_mask):
74
+ seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
75
+ indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
76
+ max_seqlen_in_batch = seqlens_in_batch.max().item()
77
+ cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.torch.int32), (1, 0))
78
+ return (
79
+ indices,
80
+ cu_seqlens,
81
+ max_seqlen_in_batch,
82
+ )
83
+
84
+
85
+ # Copied from transformers.models.bart.modeling_bart._make_causal_mask
86
+ def _make_causal_mask(
87
+ input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0
88
+ ):
89
+ """
90
+ Make causal mask used for bi-directional self-attention.
91
+ """
92
+ bsz, tgt_len = input_ids_shape
93
+ mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
94
+ mask_cond = torch.arange(mask.size(-1), device=device)
95
+ mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
96
+ mask = mask.to(dtype)
97
+
98
+ if past_key_values_length > 0:
99
+ mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
100
+ return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
101
+
102
+
103
+ # Copied from transformers.models.bart.modeling_bart._expand_mask
104
+ def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
105
+ """
106
+ Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
107
+ """
108
+ bsz, src_len = mask.size()
109
+ tgt_len = tgt_len if tgt_len is not None else src_len
110
+
111
+ expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
112
+
113
+ inverted_mask = 1.0 - expanded_mask
114
+
115
+ return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)
116
+
117
+
118
+ # Copied from transformers.models.llama.modeling_llama.LlamaRMSNorm with Llama->InternLM2
119
+ class InternLM2RMSNorm(nn.Module):
120
+ def __init__(self, hidden_size, eps=1e-6):
121
+ """
122
+ InternLM2RMSNorm is equivalent to T5LayerNorm
123
+ """
124
+ super().__init__()
125
+ self.weight = nn.Parameter(torch.ones(hidden_size))
126
+ self.variance_epsilon = eps
127
+
128
+ def forward(self, hidden_states):
129
+ input_dtype = hidden_states.dtype
130
+ hidden_states = hidden_states.to(torch.float32)
131
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
132
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
133
+ return self.weight * hidden_states.to(input_dtype)
134
+
135
+
136
+ # Copied from transformers.model.llama.modeling_llama.LlamaRotaryEmbedding with Llama->InternLM2
137
+ class InternLM2RotaryEmbedding(nn.Module):
138
+ def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
139
+ super().__init__()
140
+
141
+ self.dim = dim
142
+ self.max_position_embeddings = max_position_embeddings
143
+ self.base = base
144
+ inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))
145
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
146
+
147
+ # Build here to make `torch.jit.trace` work.
148
+ self._set_cos_sin_cache(
149
+ seq_len=max_position_embeddings, device=self.inv_freq.device, dtype=torch.get_default_dtype()
150
+ )
151
+
152
+ def _set_cos_sin_cache(self, seq_len, device, dtype):
153
+ self.max_seq_len_cached = seq_len
154
+ t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)
155
+
156
+ freqs = torch.einsum("i,j->ij", t, self.inv_freq)
157
+ # Different from paper, but it uses a different permutation in order to obtain the same calculation
158
+ emb = torch.cat((freqs, freqs), dim=-1)
159
+ self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False)
160
+ self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False)
161
+
162
+ def forward(self, x, seq_len=None):
163
+ # x: [bs, num_attention_heads, seq_len, head_size]
164
+ if seq_len > self.max_seq_len_cached:
165
+ self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=torch.float32)
166
+
167
+ return (
168
+ self.cos_cached[:seq_len].to(dtype=x.dtype),
169
+ self.sin_cached[:seq_len].to(dtype=x.dtype),
170
+ )
171
+
172
+
173
+ # Copied from transformers.model.llama.modeling_llama.LlamaLinearScalingRotaryEmbedding with Llama->InternLM2
174
+ class InternLM2LinearScalingRotaryEmbedding(InternLM2RotaryEmbedding):
175
+ """InternLM2RotaryEmbedding extended with linear scaling. Credits to the Reddit user /u/kaiokendev"""
176
+
177
+ def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0):
178
+ self.scaling_factor = scaling_factor
179
+ super().__init__(dim, max_position_embeddings, base, device)
180
+
181
+ def _set_cos_sin_cache(self, seq_len, device, dtype):
182
+ self.max_seq_len_cached = seq_len
183
+ t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)
184
+ t = t / self.scaling_factor
185
+
186
+ freqs = torch.einsum("i,j->ij", t, self.inv_freq)
187
+ # Different from paper, but it uses a different permutation in order to obtain the same calculation
188
+ emb = torch.cat((freqs, freqs), dim=-1)
189
+ self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False)
190
+ self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False)
191
+
192
+
193
+ # Copied from transformers.model.llama.modeling_llama.LlamaDynamicNTKScalingRotaryEmbedding with Llama->InternLM2
194
+ class InternLM2DynamicNTKScalingRotaryEmbedding(InternLM2RotaryEmbedding):
195
+ """InternLM2RotaryEmbedding extended with Dynamic NTK scaling.
196
+ Credits to the Reddit users /u/bloc97 and /u/emozilla.
197
+ """
198
+
199
+ def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0):
200
+ self.scaling_factor = scaling_factor
201
+ super().__init__(dim, max_position_embeddings, base, device)
202
+
203
+ def _set_cos_sin_cache(self, seq_len, device, dtype):
204
+ self.max_seq_len_cached = seq_len
205
+
206
+ if seq_len > self.max_position_embeddings:
207
+ base = self.base * (
208
+ (self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)
209
+ ) ** (self.dim / (self.dim - 2))
210
+ inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))
211
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
212
+
213
+ t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)
214
+
215
+ freqs = torch.einsum("i,j->ij", t, self.inv_freq)
216
+ # Different from paper, but it uses a different permutation in order to obtain the same calculation
217
+ emb = torch.cat((freqs, freqs), dim=-1)
218
+ self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False)
219
+ self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False)
220
+
221
+
222
+ # Copied from transformers.model.llama.modeling_llama.rotate_half
223
+ def rotate_half(x):
224
+ """Rotates half the hidden dims of the input."""
225
+ x1 = x[..., : x.shape[-1] // 2]
226
+ x2 = x[..., x.shape[-1] // 2 :]
227
+ return torch.cat((-x2, x1), dim=-1)
228
+
229
+
230
+ # Copied from transformers.model.llama.modeling_llama.apply_rotary_pos_emb
231
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids, unsqueeze_dim=1):
232
+ """Applies Rotary Position Embedding to the query and key tensors."""
233
+ cos = cos[position_ids].unsqueeze(unsqueeze_dim)
234
+ sin = sin[position_ids].unsqueeze(unsqueeze_dim)
235
+ q_embed = (q * cos) + (rotate_half(q) * sin)
236
+ k_embed = (k * cos) + (rotate_half(k) * sin)
237
+ return q_embed, k_embed
238
+
239
+
240
+ class InternLM2MLP(nn.Module):
241
+ def __init__(self, config):
242
+ super().__init__()
243
+ self.config = config
244
+ self.hidden_size = config.hidden_size
245
+ self.intermediate_size = config.intermediate_size
246
+ #self.w1 = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
247
+ #self.w3 = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
248
+ #self.w2 = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
249
+
250
+ self.w1 = PLoRA(self.hidden_size, self.intermediate_size, bias=False,
251
+ lora_r=256, lora_alpha=256, lora_len=1225)
252
+ self.w3 = PLoRA(self.hidden_size, self.intermediate_size, bias=False,
253
+ lora_r=256, lora_alpha=256, lora_len=1225)
254
+ self.w2 = PLoRA(self.intermediate_size, self.hidden_size, bias=False,
255
+ lora_r=256, lora_alpha=256, lora_len=1225)
256
+
257
+ self.act_fn = ACT2FN[config.hidden_act]
258
+
259
+ def forward(self, x, im_mask, infer_mode):
260
+ down_proj = self.w2(self.act_fn(self.w1(x, im_mask, infer_mode)) * self.w3(x, im_mask, infer_mode), im_mask, infer_mode)
261
+
262
+ return down_proj
263
+
264
+
265
+ # Copied from transformers.model.llama.modeling_llama.repeat_kv
266
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
267
+ """
268
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
269
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
270
+ """
271
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
272
+ if n_rep == 1:
273
+ return hidden_states
274
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
275
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
276
+
277
+
278
+ # Modified from transformers.model.llama.modeling_llama.LlamaAttention
279
+ class InternLM2Attention(nn.Module):
280
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
281
+
282
+ def __init__(self, config: InternLM2Config):
283
+ super().__init__()
284
+ self.config = config
285
+ self.hidden_size = config.hidden_size
286
+ self.num_heads = config.num_attention_heads
287
+ self.head_dim = self.hidden_size // self.num_heads
288
+ self.num_key_value_heads = config.num_key_value_heads
289
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
290
+ self.max_position_embeddings = config.max_position_embeddings
291
+ self.is_causal = True
292
+
293
+ if (self.head_dim * self.num_heads) != self.hidden_size:
294
+ raise ValueError(
295
+ f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
296
+ f" and `num_heads`: {self.num_heads})."
297
+ )
298
+
299
+ #self.wqkv = nn.Linear(
300
+ self.wqkv = PLoRA(
301
+ self.hidden_size,
302
+ (self.num_heads + 2 * self.num_key_value_heads) * self.head_dim,
303
+ bias=config.bias,
304
+ lora_r=256, lora_alpha=256, lora_len=1225
305
+ )
306
+
307
+ #self.wo = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=config.bias)
308
+ self.wo = PLoRA(self.num_heads * self.head_dim, self.hidden_size, bias=config.bias,
309
+ lora_r=256, lora_alpha=256, lora_len=1225)
310
+ self._init_rope()
311
+
312
+ def _init_rope(self):
313
+ if self.config.rope_scaling is None:
314
+ self.rotary_emb = InternLM2RotaryEmbedding(
315
+ self.head_dim,
316
+ max_position_embeddings=self.max_position_embeddings,
317
+ base=self.config.rope_theta,
318
+ )
319
+ else:
320
+ scaling_type = self.config.rope_scaling["type"]
321
+ scaling_factor = self.config.rope_scaling["factor"]
322
+ if scaling_type == "dynamic":
323
+ self.rotary_emb = InternLM2DynamicNTKScalingRotaryEmbedding(
324
+ self.head_dim,
325
+ max_position_embeddings=self.max_position_embeddings,
326
+ base=self.config.rope_theta,
327
+ scaling_factor=scaling_factor,
328
+ )
329
+ elif scaling_type == "linear":
330
+ self.rotary_emb = InternLM2LinearScalingRotaryEmbedding(
331
+ self.head_dim,
332
+ max_position_embeddings=self.max_position_embeddings,
333
+ base=self.config.rope_theta,
334
+ scaling_factor=scaling_factor,
335
+ )
336
+ else:
337
+ raise ValueError("Currently we only support rotary embedding's type being 'dynamic' or 'linear'.")
338
+ return self.rotary_emb
339
+
340
+ def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
341
+ return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()
342
+
343
+ def forward(
344
+ self,
345
+ hidden_states: torch.Tensor,
346
+ attention_mask: Optional[torch.Tensor] = None,
347
+ position_ids: Optional[torch.LongTensor] = None,
348
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
349
+ output_attentions: bool = False,
350
+ use_cache: bool = False,
351
+ im_mask: Optional[Tuple[torch.Tensor]] = None,
352
+ infer_mode: str = 'base',
353
+ **kwargs,
354
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
355
+ if "padding_mask" in kwargs:
356
+ warnings.warn(
357
+ "Passing `padding_mask` is deprecated and will be removed in v4.37. "
358
+ "Please make sure use `attention_mask` instead.`"
359
+ )
360
+
361
+ bsz, q_len, _ = hidden_states.size()
362
+
363
+ qkv_states = self.wqkv(hidden_states, im_mask, infer_mode)
364
+
365
+ qkv_states = rearrange(
366
+ qkv_states,
367
+ "b q (h gs d) -> b q h gs d",
368
+ gs=2 + self.num_key_value_groups,
369
+ d=self.head_dim,
370
+ )
371
+
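+ # grouped-query layout: for each kv head the fused projection packs num_key_value_groups query slices followed by one key slice and one value slice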
372
+ query_states = qkv_states[..., : self.num_key_value_groups, :]
373
+ query_states = rearrange(query_states, "b q h gs d -> b q (h gs) d")
374
+ key_states = qkv_states[..., -2, :]
375
+ value_states = qkv_states[..., -1, :]
376
+
377
+ query_states = query_states.transpose(1, 2)
378
+ key_states = key_states.transpose(1, 2)
379
+ value_states = value_states.transpose(1, 2)
380
+
381
+ kv_seq_len = key_states.shape[-2]
382
+ if past_key_value is not None:
383
+ kv_seq_len += past_key_value[0].shape[-2]
384
+ cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
385
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
386
+
387
+ if past_key_value is not None:
388
+ # reuse k, v, self_attention
389
+ key_states = torch.cat([past_key_value[0], key_states], dim=2)
390
+ value_states = torch.cat([past_key_value[1], value_states], dim=2)
391
+
392
+ past_key_value = (key_states, value_states) if use_cache else None
393
+
394
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
395
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
396
+
397
+ attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
398
+
399
+ if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
400
+ raise ValueError(
401
+ f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
402
+ f" {attn_weights.size()}"
403
+ )
404
+
405
+ if attention_mask is not None:
406
+ if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
407
+ raise ValueError(
408
+ f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
409
+ )
410
+ attn_weights = attn_weights + attention_mask
411
+
412
+ # upcast attention to fp32
413
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
414
+ attn_output = torch.matmul(attn_weights, value_states)
415
+
416
+ if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
417
+ raise ValueError(
418
+ f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
419
+ f" {attn_output.size()}"
420
+ )
421
+
422
+ attn_output = attn_output.transpose(1, 2).contiguous()
423
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
424
+
425
+ attn_output = self.wo(attn_output, im_mask, infer_mode)
426
+
427
+ if not output_attentions:
428
+ attn_weights = None
429
+
430
+ return attn_output, attn_weights, past_key_value
431
+
432
+
433
+ # Modified from transformers.model.llama.modeling_llama.LlamaFlashAttention2
434
+ class InternLM2FlashAttention2(InternLM2Attention):
435
+ """
436
+ InternLM2 flash attention module. This module inherits from `InternLM2Attention`, as the weights of the module stay
437
+ untouched. The only required change would be on the forward pass where it needs to correctly call the public API of
438
+ flash attention and deal with padding tokens in case the input contains any of them.
439
+ """
440
+
441
+ def forward(
442
+ self,
443
+ hidden_states: torch.Tensor,
444
+ attention_mask: Optional[torch.LongTensor] = None,
445
+ position_ids: Optional[torch.LongTensor] = None,
446
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
447
+ output_attentions: bool = False,
448
+ use_cache: bool = False,
449
+ im_mask: Optional[Tuple[torch.Tensor]] = None,
450
+ infer_mode: str = 'base',
451
+ **kwargs,
452
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
453
+ # InternLM2FlashAttention2 attention does not support output_attentions
454
+ if "padding_mask" in kwargs:
455
+ warnings.warn(
456
+ "Passing `padding_mask` is deprecated and will be removed in v4.37. "
457
+ "Please make sure use `attention_mask` instead.`"
458
+ )
459
+
460
+ # overwrite attention_mask with padding_mask
461
+ attention_mask = kwargs.pop("padding_mask")
462
+
463
+ output_attentions = False
464
+
465
+ bsz, q_len, _ = hidden_states.size()
466
+
467
+ qkv_states = self.wqkv(hidden_states, im_mask, infer_mode)
468
+
469
+ qkv_states = rearrange(
470
+ qkv_states,
471
+ "b q (h gs d) -> b q h gs d",
472
+ gs=2 + self.num_key_value_groups,
473
+ d=self.head_dim,
474
+ )
475
+
476
+ query_states = qkv_states[..., : self.num_key_value_groups, :]
477
+ query_states = rearrange(query_states, "b q h gs d -> b q (h gs) d")
478
+ key_states = qkv_states[..., -2, :]
479
+ value_states = qkv_states[..., -1, :]
480
+
481
+ query_states = query_states.transpose(1, 2)
482
+ key_states = key_states.transpose(1, 2)
483
+ value_states = value_states.transpose(1, 2)
484
+
485
+ kv_seq_len = key_states.shape[-2]
486
+ if past_key_value is not None:
487
+ kv_seq_len += past_key_value[0].shape[-2]
488
+
489
+ cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
490
+
491
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
492
+
493
+ if past_key_value is not None:
494
+ # reuse k, v, self_attention
495
+ key_states = torch.cat([past_key_value[0], key_states], dim=2)
496
+ value_states = torch.cat([past_key_value[1], value_states], dim=2)
497
+
498
+ past_key_value = (key_states, value_states) if use_cache else None
499
+
500
+ query_states = query_states.transpose(1, 2)
501
+ key_states = key_states.transpose(1, 2)
502
+ value_states = value_states.transpose(1, 2)
503
+
504
+ attn_output = self._flash_attention_forward(
505
+ query_states, key_states, value_states, attention_mask, q_len
506
+ )
507
+
508
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size).contiguous()
509
+ attn_output = self.wo(attn_output, im_mask, infer_mode)
510
+
511
+ if not output_attentions:
512
+ attn_weights = None
513
+
514
+ return attn_output, attn_weights, past_key_value
515
+
516
+ def _flash_attention_forward(
517
+ self, query_states, key_states, value_states, attention_mask, query_length, dropout=0.0, softmax_scale=None
518
+ ):
519
+ """
520
+ Calls the forward method of Flash Attention - if the input hidden states contain at least one padding token
521
+ first unpad the input, then computes the attention scores and pad the final attention scores.
522
+
523
+ Args:
524
+ query_states (`torch.Tensor`):
525
+ Input query states to be passed to Flash Attention API
526
+ key_states (`torch.Tensor`):
527
+ Input key states to be passed to Flash Attention API
528
+ value_states (`torch.Tensor`):
529
+ Input value states to be passed to Flash Attention API
530
+ attention_mask (`torch.Tensor`):
531
+ The padding mask - corresponds to a tensor of size `(batch_size, seq_len)` where 0 stands for the
532
+ position of padding tokens and 1 for the position of non-padding tokens.
533
+ dropout (`int`, *optional*):
534
+ Attention dropout
535
+ softmax_scale (`float`, *optional*):
536
+ The scaling of QK^T before applying softmax. Default to 1 / sqrt(head_dim)
537
+ """
538
+ # Contains at least one padding token in the sequence
539
+ causal = self.is_causal and query_length != 1
540
+ if attention_mask is not None:
541
+ batch_size = query_states.shape[0]
542
+ query_states, key_states, value_states, indices_q, cu_seq_lens, max_seq_lens = self._unpad_input(
543
+ query_states, key_states, value_states, attention_mask, query_length
544
+ )
545
+
546
+ cu_seqlens_q, cu_seqlens_k = cu_seq_lens
547
+ max_seqlen_in_batch_q, max_seqlen_in_batch_k = max_seq_lens
548
+
549
+ attn_output_unpad = flash_attn_varlen_func(
550
+ query_states,
551
+ key_states,
552
+ value_states,
553
+ cu_seqlens_q=cu_seqlens_q,
554
+ cu_seqlens_k=cu_seqlens_k,
555
+ max_seqlen_q=max_seqlen_in_batch_q,
556
+ max_seqlen_k=max_seqlen_in_batch_k,
557
+ dropout_p=dropout,
558
+ softmax_scale=softmax_scale,
559
+ causal=causal,
560
+ )
561
+
562
+ attn_output = pad_input(attn_output_unpad, indices_q, batch_size, query_length)
563
+ else:
564
+ attn_output = flash_attn_func(
565
+ query_states, key_states, value_states, dropout, softmax_scale=softmax_scale, causal=causal
566
+ )
567
+
568
+ return attn_output
569
+
570
+ def _unpad_input(self, query_layer, key_layer, value_layer, attention_mask, query_length):
571
+ indices_k, cu_seqlens_k, max_seqlen_in_batch_k = _get_unpad_data(attention_mask)
572
+ batch_size, kv_seq_len, num_key_value_heads, head_dim = key_layer.shape
573
+
574
+ key_layer = index_first_axis(
575
+ key_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k
576
+ )
577
+ value_layer = index_first_axis(
578
+ value_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k
579
+ )
580
+
581
+ if query_length == kv_seq_len:
582
+ query_layer = index_first_axis(
583
+ query_layer.reshape(batch_size * kv_seq_len, self.num_heads, head_dim), indices_k
584
+ )
585
+ cu_seqlens_q = cu_seqlens_k
586
+ max_seqlen_in_batch_q = max_seqlen_in_batch_k
587
+ indices_q = indices_k
588
+ elif query_length == 1:
589
+ max_seqlen_in_batch_q = 1
590
+ cu_seqlens_q = torch.arange(
591
+ batch_size + 1, dtype=torch.int32, device=query_layer.device
592
+ ) # There is a memcpy here, that is very bad.
593
+ indices_q = cu_seqlens_q[:-1]
594
+ query_layer = query_layer.squeeze(1)
595
+ else:
596
+ # The -q_len: slice assumes left padding.
597
+ attention_mask = attention_mask[:, -query_length:]
598
+ query_layer, indices_q, cu_seqlens_q, max_seqlen_in_batch_q = unpad_input(query_layer, attention_mask)
599
+
600
+ return (
601
+ query_layer,
602
+ key_layer,
603
+ value_layer,
604
+ indices_q.to(torch.int64),
605
+ (cu_seqlens_q, cu_seqlens_k),
606
+ (max_seqlen_in_batch_q, max_seqlen_in_batch_k),
607
+ )
608
+
609
+ INTERNLM2_ATTENTION_CLASSES = {
610
+ "eager": InternLM2Attention,
611
+ "flash_attention_2": InternLM2FlashAttention2,
612
+ }
613
+
614
+ # Modified from transformers.model.llama.modeling_llama.LlamaDecoderLayer
615
+ class InternLM2DecoderLayer(nn.Module):
616
+ def __init__(self, config: InternLM2Config):
617
+ super().__init__()
618
+ self.hidden_size = config.hidden_size
619
+
620
+ self.attention = INTERNLM2_ATTENTION_CLASSES[config.attn_implementation](config=config)
621
+
622
+ self.feed_forward = InternLM2MLP(config)
623
+ self.attention_norm = InternLM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
624
+ self.ffn_norm = InternLM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
625
+
626
+ def forward(
627
+ self,
628
+ hidden_states: torch.Tensor,
629
+ attention_mask: Optional[torch.Tensor] = None,
630
+ position_ids: Optional[torch.LongTensor] = None,
631
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
632
+ output_attentions: Optional[bool] = False,
633
+ use_cache: Optional[bool] = False,
634
+ im_mask: Optional[Tuple[torch.Tensor]] = None,
635
+ infer_mode: str='base',
636
+ **kwargs,
637
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
638
+ """
639
+ Args:
640
+ hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
641
+ attention_mask (`torch.FloatTensor`, *optional*):
642
+ attention mask of size `(batch_size, sequence_length)` if flash attention is used or `(batch_size, 1,
643
+ query_sequence_length, key_sequence_length)` if default attention is used.
644
+ output_attentions (`bool`, *optional*):
645
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under
646
+ returned tensors for more detail.
647
+ use_cache (`bool`, *optional*):
648
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
649
+ (see `past_key_values`).
650
+ past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
651
+ """
652
+ if "padding_mask" in kwargs:
653
+ warnings.warn(
654
+ "Passing `padding_mask` is deprecated and will be removed in v4.37. "
655
+ "Please make sure use `attention_mask` instead.`"
656
+ )
657
+
658
+ residual = hidden_states
659
+
660
+ hidden_states = self.attention_norm(hidden_states)
661
+
662
+ # Self Attention
663
+ hidden_states, self_attn_weights, present_key_value = self.attention(
664
+ hidden_states=hidden_states,
665
+ attention_mask=attention_mask,
666
+ position_ids=position_ids,
667
+ past_key_value=past_key_value,
668
+ output_attentions=output_attentions,
669
+ use_cache=use_cache,
670
+ im_mask=im_mask,
671
+ infer_mode=infer_mode,
672
+ **kwargs,
673
+ )
674
+ hidden_states = residual + hidden_states
675
+
676
+ # Fully Connected
677
+ residual = hidden_states
678
+ hidden_states = self.ffn_norm(hidden_states)
679
+ hidden_states = self.feed_forward(hidden_states, im_mask, infer_mode)
680
+ hidden_states = residual + hidden_states
681
+
682
+ outputs = (hidden_states,)
683
+
684
+ if output_attentions:
685
+ outputs += (self_attn_weights,)
686
+
687
+ if use_cache:
688
+ outputs += (present_key_value,)
689
+
690
+ return outputs
691
+
692
+
693
+ InternLM2_START_DOCSTRING = r"""
694
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
695
+ library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
696
+ etc.)
697
+
698
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
699
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
700
+ and behavior.
701
+
702
+ Parameters:
703
+ config ([`InternLM2Config`]):
704
+ Model configuration class with all the parameters of the model. Initializing with a config file does not
705
+ load the weights associated with the model, only the configuration. Check out the
706
+ [`~PreTrainedModel.from_pretrained`] method to load the model weights.
707
+ """
708
+
709
+
710
+ # Copied from transformers.models.llama.modeling_llama.LlamaPreTrainedModel with Llama->InternLM2
711
+ @add_start_docstrings(
712
+ "The bare InternLM2 Model outputting raw hidden-states without any specific head on top.",
713
+ InternLM2_START_DOCSTRING,
714
+ )
715
+ class InternLM2PreTrainedModel(PreTrainedModel):
716
+ config_class = InternLM2Config
717
+ base_model_prefix = "model"
718
+ supports_gradient_checkpointing = True
719
+ _no_split_modules = ["InternLM2DecoderLayer"]
720
+ _skip_keys_device_placement = "past_key_values"
721
+
722
+ def _init_weights(self, module):
723
+ std = self.config.initializer_range
724
+ if isinstance(module, nn.Linear):
725
+ module.weight.data.normal_(mean=0.0, std=std)
726
+ if module.bias is not None:
727
+ module.bias.data.zero_()
728
+ elif isinstance(module, nn.Embedding):
729
+ module.weight.data.normal_(mean=0.0, std=std)
730
+ if module.padding_idx is not None:
731
+ module.weight.data[module.padding_idx].zero_()
732
+
733
+
734
+ InternLM2_INPUTS_DOCSTRING = r"""
735
+ Args:
736
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
737
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
738
+ it.
739
+
740
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
741
+ [`PreTrainedTokenizer.__call__`] for details.
742
+
743
+ [What are input IDs?](../glossary#input-ids)
744
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
745
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
746
+
747
+ - 1 for tokens that are **not masked**,
748
+ - 0 for tokens that are **masked**.
749
+
750
+ [What are attention masks?](../glossary#attention-mask)
751
+
752
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
753
+ [`PreTrainedTokenizer.__call__`] for details.
754
+
755
+ If `past_key_values` is used, optionally only the last `input_ids` have to be input (see
756
+ `past_key_values`).
757
+
758
+ If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
759
+ and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
760
+ information on the default strategy.
761
+
762
+ - 1 indicates the head is **not masked**,
763
+ - 0 indicates the head is **masked**.
764
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
765
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
766
+ config.n_positions - 1]`.
767
+
768
+ [What are position IDs?](../glossary#position-ids)
769
+ past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or
770
+ when `config.use_cache=True`):
771
+ Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
772
+ `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
773
+ `(batch_size, num_heads, decoder_sequence_length, embed_size_per_head)`.
774
+
775
+ Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
776
+ blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
777
+
778
+ If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't
779
+ have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `input_ids`
780
+ of shape `(batch_size, sequence_length)`.
781
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
782
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
783
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
784
+ model's internal embedding lookup matrix.
785
+ use_cache (`bool`, *optional*):
786
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
787
+ `past_key_values`).
788
+ output_attentions (`bool`, *optional*):
789
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
790
+ tensors for more detail.
791
+ output_hidden_states (`bool`, *optional*):
792
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
793
+ more detail.
794
+ return_dict (`bool`, *optional*):
795
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
796
+ """
797
+
798
+
799
+ # Modified from transformers.model.llama.modeling_llama.LlamaModel
800
+ @add_start_docstrings(
801
+ "The bare InternLM2 Model outputting raw hidden-states without any specific head on top.",
802
+ InternLM2_START_DOCSTRING,
803
+ )
804
+ class InternLM2Model(InternLM2PreTrainedModel):
805
+ """
806
+ Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`InternLM2DecoderLayer`]
807
+
808
+ Args:
809
+ config: InternLM2Config
810
+ """
811
+
812
+ _auto_class = "AutoModel"
813
+
814
+ def __init__(self, config: InternLM2Config):
815
+ super().__init__(config)
816
+ self.padding_idx = config.pad_token_id
817
+ self.vocab_size = config.vocab_size
818
+ self.config = config
819
+
820
+ self.tok_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
821
+
822
+ self.layers = nn.ModuleList([InternLM2DecoderLayer(config) for _ in range(config.num_hidden_layers)])
823
+ self.norm = InternLM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
824
+
825
+ self.gradient_checkpointing = False
826
+ # Initialize weights and apply final processing
827
+ self.post_init()
828
+
829
+ def get_input_embeddings(self):
830
+ return self.tok_embeddings
831
+
832
+ def set_input_embeddings(self, value):
833
+ self.tok_embeddings = value
834
+
835
+ def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
836
+ # create causal mask
837
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
838
+ combined_attention_mask = None
839
+ if input_shape[-1] > 1:
840
+ combined_attention_mask = _make_causal_mask(
841
+ input_shape,
842
+ inputs_embeds.dtype,
843
+ device=inputs_embeds.device,
844
+ past_key_values_length=past_key_values_length,
845
+ )
846
+
847
+ if attention_mask is not None:
848
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
849
+ expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to(
850
+ inputs_embeds.device
851
+ )
852
+ combined_attention_mask = (
853
+ expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
854
+ )
855
+
856
+ return combined_attention_mask
857
+
858
+ @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
859
+ def forward(
860
+ self,
861
+ input_ids: torch.LongTensor = None,
862
+ attention_mask: Optional[torch.Tensor] = None,
863
+ position_ids: Optional[torch.LongTensor] = None,
864
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
865
+ inputs_embeds: Optional[torch.FloatTensor] = None,
866
+ use_cache: Optional[bool] = None,
867
+ output_attentions: Optional[bool] = None,
868
+ output_hidden_states: Optional[bool] = None,
869
+ return_dict: Optional[bool] = None,
870
+ **kwargs
871
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
872
+
873
+ im_mask = kwargs.get('im_mask', None)
874
+ infer_mode = kwargs.get('infer_mode', 'base')
875
+
876
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
877
+ output_hidden_states = (
878
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
879
+ )
880
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
881
+
882
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
883
+
884
+ if self.config.attn_implementation == "flash_attention_2":
885
+ _import_flash_attn()
886
+
887
+ # retrieve input_ids and inputs_embeds
888
+ if input_ids is not None and inputs_embeds is not None:
889
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
890
+ elif input_ids is not None:
891
+ batch_size, seq_length = input_ids.shape[:2]
892
+ elif inputs_embeds is not None:
893
+ batch_size, seq_length = inputs_embeds.shape[:2]
894
+ else:
895
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
896
+
897
+ seq_length_with_past = seq_length
898
+ past_key_values_length = 0
899
+ if past_key_values is not None:
900
+ past_key_values_length = past_key_values[0][0].shape[2]
901
+ seq_length_with_past = seq_length_with_past + past_key_values_length
902
+
903
+ if position_ids is None:
904
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
905
+ position_ids = torch.arange(
906
+ past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
907
+ )
908
+ position_ids = position_ids.unsqueeze(0)
909
+
910
+ if inputs_embeds is None:
911
+ inputs_embeds = self.tok_embeddings(input_ids)
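+ # text-only call: no visual tokens, so the image mask is all False and PLoRA falls back to the base weights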
912
+ im_mask = torch.zeros(inputs_embeds.shape[:2]).to(inputs_embeds.device).bool()
913
+
914
+ if self.config.attn_implementation == "flash_attention_2":
915
+ # 2d mask is passed through the layers
916
+ attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None
917
+ else:
918
+ if attention_mask is None:
919
+ attention_mask = torch.ones(
920
+ (batch_size, seq_length_with_past), dtype=torch.bool, device=inputs_embeds.device
921
+ )
922
+ attention_mask = self._prepare_decoder_attention_mask(
923
+ attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length
924
+ )
925
+
926
+ # embed positions
927
+ hidden_states = inputs_embeds
928
+
929
+ if self.gradient_checkpointing and self.training:
930
+ if use_cache:
931
+ logger.warning_once(
932
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
933
+ )
934
+ use_cache = False
935
+
936
+ # decoder layers
937
+ all_hidden_states = () if output_hidden_states else None
938
+ all_self_attns = () if output_attentions else None
939
+ next_decoder_cache = () if use_cache else None
940
+
941
+ for idx, decoder_layer in enumerate(self.layers):
942
+ if output_hidden_states:
943
+ all_hidden_states += (hidden_states,)
944
+
945
+ past_key_value = past_key_values[idx] if past_key_values is not None else None
946
+
947
+ if self.gradient_checkpointing and self.training:
948
+
949
+ def create_custom_forward(module):
950
+ def custom_forward(*inputs):
951
+ # None for past_key_value
952
+ return module(*inputs, output_attentions, None, im_mask, infer_mode)
953
+
954
+ return custom_forward
955
+
956
+ layer_outputs = torch.utils.checkpoint.checkpoint(
957
+ create_custom_forward(decoder_layer),
958
+ hidden_states,
959
+ attention_mask,
960
+ position_ids,
961
+ None,
962
+ )
963
+ else:
964
+ layer_outputs = decoder_layer(
965
+ hidden_states,
966
+ attention_mask=attention_mask,
967
+ position_ids=position_ids,
968
+ past_key_value=past_key_value,
969
+ output_attentions=output_attentions,
970
+ use_cache=use_cache,
971
+ im_mask=im_mask,
972
+ infer_mode=infer_mode,
973
+ )
974
+
975
+ hidden_states = layer_outputs[0]
976
+
977
+ if use_cache:
978
+ next_decoder_cache += (layer_outputs[2 if output_attentions else 1],)
979
+
980
+ if output_attentions:
981
+ all_self_attns += (layer_outputs[1],)
982
+
983
+ hidden_states = self.norm(hidden_states)
984
+
985
+ # add hidden states from the last decoder layer
986
+ if output_hidden_states:
987
+ all_hidden_states += (hidden_states,)
988
+
989
+ next_cache = next_decoder_cache if use_cache else None
990
+ if not return_dict:
991
+ return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
992
+ return BaseModelOutputWithPast(
993
+ last_hidden_state=hidden_states,
994
+ past_key_values=next_cache,
995
+ hidden_states=all_hidden_states,
996
+ attentions=all_self_attns,
997
+ )
modeling_internlm_xcomposer2.py ADDED
@@ -0,0 +1,662 @@
1
+ # Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # This code is based on transformers/src/transformers/models/llama/modeling_llama.py
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+
17
+ """PyTorch InternLMXComposer2 model."""
18
+ import os
19
+ import re
20
+ import copy
21
+ import queue
22
+ import threading
23
+ from typing import List, Optional, Tuple, Union
24
+
25
+ import torch
26
+ import torch.utils.checkpoint
27
+ from PIL import Image
28
+ import numpy as np
29
+ import random
30
+ from torch import nn
31
+ from torch.nn import CrossEntropyLoss
32
+ from torchvision import transforms
33
+ from torchvision.transforms.functional import InterpolationMode
34
+ from transformers.modeling_outputs import CausalLMOutputWithPast
35
+ from transformers.utils import (add_start_docstrings_to_model_forward,
36
+ replace_return_docstrings)
37
+ from transformers import StoppingCriteria, StoppingCriteriaList
38
+ from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
39
+ try:
40
+ from transformers.generation.streamers import BaseStreamer
41
+ except: # noqa # pylint: disable=bare-except
42
+ BaseStreamer = None
43
+
44
+ import torchvision.transforms as transforms
45
+ from torchvision.transforms.functional import InterpolationMode
46
+
47
+ from .build_mlp import build_vision_projector, build_vision_tower
48
+ from .ixc_utils import Image_transform, Video_transform, load_video, frame2img, get_font
49
+ from .configuration_internlm_xcomposer2 import InternLMXcomposer2Config
50
+ from .modeling_internlm2 import (InternLM2_INPUTS_DOCSTRING, InternLM2Model,
51
+ InternLM2PreTrainedModel)
52
+
53
+ _CONFIG_FOR_DOC = 'InternLMXcomposer2Config'
54
+
55
+ image_extensions = {'.jpg', '.jpeg', '.png', '.gif', '.bmp', '.webp'}
56
+ video_extensions = {'.mp4', '.avi', '.mkv', '.mov', '.wmv'}
57
+
58
+ class StoppingCriteriaSub(StoppingCriteria):
59
+
60
+ def __init__(self, stops=[], encounters=1):
61
+ super().__init__()
62
+ self.stops = stops
63
+
64
+ def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
65
+ for stop in self.stops:
66
+ if torch.all((stop == input_ids[0][-len(stop):])).item():
67
+ return True
68
+ return False
69
+
70
+
71
+ def get_stopping_criteria(stop_words_ids):
72
+ stop_words_ids = [torch.tensor([i]).cuda() for i in stop_words_ids]
73
+ stopping_criteria = StoppingCriteriaList(
74
+ [StoppingCriteriaSub(stops=stop_words_ids)])
75
+ return stopping_criteria
76
+
77
+ def set_random_seed(seed, set_cudnn=False):
78
+ """Set the random seed for reproducibility.
79
+
80
+ Parameters:
81
+ seed (int): The seed to use for generating random numbers.
82
+ """
83
+ torch.manual_seed(seed)
84
+ if torch.cuda.is_available():
85
+ torch.cuda.manual_seed_all(seed) # For multi-GPU.
86
+ np.random.seed(seed)
87
+ random.seed(seed)
88
+ if set_cudnn and torch.backends.cudnn.is_available():
89
+ torch.backends.cudnn.deterministic = True
90
+ torch.backends.cudnn.benchmark = False
91
+
92
+ class InternLMXComposer2ForCausalLM(InternLM2PreTrainedModel):
93
+ _auto_class = 'AutoModelForCausalLM'
94
+
95
+ _tied_weights_keys = ['output.weight']
96
+
97
+ def __init__(self, config):
98
+ super().__init__(config)
99
+ self.model = InternLM2Model(config)
100
+ self.vocab_size = config.vocab_size
101
+ self.output = nn.Linear(
102
+ config.hidden_size, config.vocab_size, bias=False)
103
+ self.tokenizer = None
104
+ self.hd_num = 25
105
+ self.font = get_font()
106
+
107
+ self.max_length = config.max_length
108
+ print(f'Set max length to {self.max_length}')
109
+ # Initialize weights and apply final processing
110
+ self.post_init()
111
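+ # learnable separator embeddings inserted between global and sub-image visual features when the vision-tower output is assembled in img2emb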
+ self.plora_glb_GN = nn.Parameter(torch.zeros([1, 1, 4096]))
112
+ self.plora_sub_GN = nn.Parameter(torch.zeros([1, 1, 1, 4096]))
113
+
114
+ self.vit = build_vision_tower()
115
+ self.vision_proj = build_vision_projector()
116
+
117
+ self.vis_processor = transforms.Compose([
118
+ transforms.ToTensor(),
119
+ transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
120
+ (0.26862954, 0.26130258, 0.27577711)),
121
+ ])
122
+
123
+
124
+
125
+
126
+ def _set_gradient_checkpointing(self, module, value=False):
127
+ if isinstance(module, InternLM2Model):
128
+ module.gradient_checkpointing = value
129
+ if value:
130
+ self.vit.vision_tower.vision_model.encoder.gradient_checkpointing = value
131
+
132
+ def get_input_embeddings(self):
133
+ return self.model.tok_embeddings
134
+
135
+ def set_input_embeddings(self, value):
136
+ self.model.tok_embeddings = value
137
+
138
+ def get_output_embeddings(self):
139
+ return self.output
140
+
141
+ def set_output_embeddings(self, new_embeddings):
142
+ self.output = new_embeddings
143
+
144
+ def set_decoder(self, decoder):
145
+ self.model = decoder
146
+
147
+ def get_decoder(self):
148
+ return self.model
149
+
150
+ def encode_text(self, text, add_special_tokens=False):
151
+ token = self.tokenizer(
152
+ text, return_tensors='pt',
153
+ add_special_tokens=add_special_tokens).input_ids.to(self.device)
154
+ embs = self.model.tok_embeddings(token)
155
+ return embs
156
+
157
+ def encode_img(self, image, hd_num=25):
158
+ if image is None:
159
+ return None
160
+ if isinstance(image, str):
161
+ _, ext = os.path.splitext(image)
162
+ if ext.lower() in image_extensions:
163
+ image = Image.open(image).convert('RGB')
164
+ image = Image_transform(image, hd_num = hd_num)
165
+ elif ext.lower() in video_extensions:
166
+ image = load_video(image)
167
+ image = frame2img(image, self.font)
168
+ image = Video_transform(image, hd_num = hd_num)
169
+ else:
170
+ print('Unknown input format', image)
171
+ return None
172
+ image = self.vis_processor(image).unsqueeze(0).to(self.device)
173
+ else:
174
+ assert isinstance(image, torch.Tensor)
175
+
176
+ img_embeds, atts_img, img_target = self.img2emb(image)
177
+ return img_embeds
178
+
179
+ def img2emb(self, image):
180
+ img_embeds, img_split = self.vit([image],
181
+ self.plora_glb_GN, self.plora_sub_GN)
182
+ if len(img_split) > 1:
183
+ print ('Batch Size >1 is not supported.')
184
+ assert 0
185
+ #print (img_embeds.shape)
186
+ img_embeds = self.vision_proj(img_embeds)
187
+ atts_img = torch.ones(
188
+ img_embeds.size()[:-1], dtype=torch.long).to(img_embeds.device)
189
+
190
+ img_target = torch.ones(
191
+ img_embeds.size()[:2], dtype=torch.long).to(
192
+ img_embeds.device) * -100
193
+
194
+ return img_embeds, atts_img, img_target
195
+
196
+ def prompt_wrap(self, img_embeds, prompt):
197
+ batch_size = img_embeds.shape[0]
198
+ p_before, p_after = prompt.split('<ImageHere>')
199
+ p_before_tokens = self.tokenizer(
200
+ p_before, return_tensors='pt',
201
+ add_special_tokens=True).to(img_embeds.device)
202
+
203
+ p_before_embeds = self.model.tok_embeddings(
204
+ p_before_tokens.input_ids).expand(batch_size, -1, -1)
205
+ wrapped_img_embeds = torch.cat([p_before_embeds, img_embeds], dim=1)
206
+
207
+ wrapped_atts_img = torch.ones(
208
+ wrapped_img_embeds.size()[:-1],
209
+ dtype=torch.long).to(img_embeds.device)
210
+
211
+ wrapped_target = torch.ones(
212
+ batch_size, wrapped_img_embeds.shape[1], dtype=torch.long).to(
213
+ img_embeds.device) * -100
214
+
215
+ return wrapped_img_embeds, wrapped_atts_img, wrapped_target
216
+
217
+ def text2emb(self, text, add_special_tokens=False):
218
+ to_regress_tokens = self.tokenizer(
219
+ text,
220
+ return_tensors='pt',
221
+ padding='longest',
222
+ truncation=True,
223
+ max_length=self.max_length,
224
+ add_special_tokens=add_special_tokens
225
+ ).to(self.device)
226
+
227
+ targets = self.mask_human_targets(to_regress_tokens.input_ids)
228
+ targets = targets.to(self.device)
229
+ return to_regress_tokens, targets
230
+
231
+ def interleav_wrap_chat(self, query, image, history = [], meta_instruction='', max_length=16384, hd_num=24):
232
+ self.max_length = max_length
233
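+ # [UNUSED_TOKEN_146] / [UNUSED_TOKEN_145] are the begin- and end-of-turn markers of the InternLM2 chat template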
+ prompt = ''
234
+ if meta_instruction:
235
+ prompt += f"""[UNUSED_TOKEN_146]system\n{meta_instruction}[UNUSED_TOKEN_145]\n"""
236
+ for record in history:
237
+ prompt += f"""[UNUSED_TOKEN_146]user\n{record[0]}[UNUSED_TOKEN_145]\n[UNUSED_TOKEN_146]assistant\n{record[1]}[UNUSED_TOKEN_145]\n"""
238
+ prompt += f"""[UNUSED_TOKEN_146]user\n{query}[UNUSED_TOKEN_145]\n[UNUSED_TOKEN_146]assistant\n"""
239
+
240
+ image_nums = len(image)
241
+ if image_nums == 1 and prompt.find('<ImageHere>') == -1:
242
+ #print ('auto append image at the begining')
243
+ prompt = '<ImageHere>' + prompt
244
+
245
+ parts = prompt.split('<ImageHere>')
246
+ wrap_embeds, wrap_im_mask = [], []
247
+ temp_len = 0
248
+ need_bos = True
249
+
250
+ if len(parts) != image_nums + 1:
251
+ #raise ValueError('Invalid <ImageHere> prompt format.')
252
+ print('Warning! The number of images does not match the number of <ImageHere> placeholders!')
253
+ if image_nums > 1:
254
+ hd_num = 6
255
+ else:
256
+ pass  # keep the caller-provided hd_num
257
+ for idx, part in enumerate(parts):
258
+ if need_bos or len(part) > 0:
259
+ part_tokens = self.tokenizer(
260
+ part,
261
+ return_tensors='pt',
262
+ padding='longest',
263
+ add_special_tokens=need_bos).to(self.device)
264
+ if need_bos:
265
+ need_bos = False
266
+
267
+ part_embeds = self.model.tok_embeddings(
268
+ part_tokens.input_ids)
269
+ wrap_embeds.append(part_embeds)
270
+ wrap_im_mask.append(torch.zeros(part_embeds.shape[:2]))
271
+ temp_len += part_embeds.shape[1]
272
+ if idx < image_nums:
273
+ img = self.encode_img(image[idx], hd_num)
274
+ wrap_embeds.append(img)
275
+ wrap_im_mask.append(torch.ones(img.shape[:2]))
276
+ temp_len += img.shape[1]
277
+
278
+ if temp_len > self.max_length:
279
+ break
280
+
281
+ wrap_embeds = torch.cat(wrap_embeds, dim=1)
282
+ wrap_im_mask = torch.cat(wrap_im_mask, dim=1)
283
+ wrap_embeds = wrap_embeds[:, :self.max_length].to(self.device)
284
+ wrap_im_mask = wrap_im_mask[:, :self.max_length].to(self.device).bool()
285
+ inputs = {
286
+ 'inputs_embeds': wrap_embeds
287
+ }
288
+ return inputs, wrap_im_mask, temp_len
289
+
290
+ def interleav_wrap(self, img_list, text_list, image_nums):
291
+ temp_embeds = []
292
+ temp_im_mask = []
293
+ temp_tars = []
294
+
295
+ # encode_image
296
+ img_embeds, img_split = self.vit(img_list, self.plora_glb_GN, self.plora_sub_GN)
297
+ img_embeds = self.vision_proj(img_embeds)
298
+
299
+ text_list = text_list[0]
300
+ for idx, text in enumerate(text_list):
301
+ image_num = image_nums[idx]
302
+ im_id = int(np.sum(image_nums[:idx]))
303
+ images = []
304
+ for i in range(image_nums[idx]):
305
+ st = int(np.sum(img_split[:im_id + i]))
306
+ sp = img_split[im_id + i]
307
+ temp_img = img_embeds[:, st:st+sp]
308
+ images.append(temp_img)
309
+ atts_img = torch.ones((len(images), images[0].shape[1]), dtype=torch.long).to(self.device)
310
+ img_target = torch.ones(
311
+ (len(images), images[0].shape[1]), dtype=torch.long).to(
312
+ self.device) * -100
313
+
314
+ if image_num == 1 and text.find('<ImageHere>') == -1:
315
+ text = '<ImageHere>' + text
316
+ parts = text.split('<ImageHere>')
317
+
318
+ wrap_tokens, wrap_embeds, wrap_im_mask = [], [], []
319
+ temp_len = 0
320
+ need_bos = True
321
+ for idx, part in enumerate(parts):
322
+ if need_bos or len(part) > 0:
323
+ part_tokens = self.tokenizer(part, return_tensors='pt', padding='longest',
324
+ add_special_tokens=need_bos).to(self.device)
325
+ if need_bos:
326
+ need_bos = False
327
+ wrap_tokens.append(part_tokens.input_ids)
328
+ part_embeds = self.model.tok_embeddings(part_tokens.input_ids)
329
+ wrap_embeds.append(part_embeds)
330
+ wrap_im_mask.append(torch.zeros(part_embeds.shape[:2]).to(self.device))
331
+ temp_len += part_embeds.shape[1]
332
+ if idx < image_num:
333
+ wrap_embeds.append(images[idx])
334
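+ # image positions receive -100 placeholder token ids so mask_human_targets keeps them out of the loss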
+ wrap_token = torch.ones(images[idx].shape[:2], dtype=torch.long).to(self.device) * -100
335
+ wrap_tokens.append(wrap_token)
336
+ wrap_im_mask.append(torch.ones(images[idx].shape[:2]).to(self.device))
337
+ temp_len += images[idx].shape[1]
338
+ if temp_len > self.max_length:
339
+ break
340
+ wrap_tokens = torch.cat(wrap_tokens, dim=1)
341
+ wrap_embeds = torch.cat(wrap_embeds, dim=1)
342
+ wrap_im_mask = torch.cat(wrap_im_mask, dim=1)
343
+
344
+ wrap_target = self.mask_human_targets(wrap_tokens).to(self.device)
345
+
346
+ temp_embeds.append(wrap_embeds)
347
+ temp_im_mask.append(wrap_im_mask)
348
+ temp_tars.append(wrap_target)
349
+
350
+ temp_max_len = np.max([i.shape[1] for i in temp_embeds])
351
+ temp_max_len = min(temp_max_len, self.max_length)
352
+
353
+ final_input, final_atts, final_tars, final_mask = [], [], [], []
354
+ pad = torch.ones([1, 1]) * self.tokenizer.pad_token_id
355
+ pad = pad.long().to(self.device)
356
+ pad_emb = self.model.tok_embeddings(pad)
357
+
358
+ for idx in range(len(temp_embeds)):
359
+ temp_len = temp_embeds[idx].shape[1]
360
+ if temp_len >= temp_max_len:
361
+ final_input.append(temp_embeds[idx][:, :temp_max_len])
362
+ final_atts.append(torch.ones(1, temp_max_len).to(wrap_target.dtype).to(self.device))
363
+ final_tars.append(temp_tars[idx][:, :temp_max_len])
364
+ final_mask.append(temp_im_mask[idx][:, :temp_max_len])
365
+ else:
366
+ final_input.append(torch.cat([temp_embeds[idx], pad_emb.repeat(1, temp_max_len-temp_len, 1)], dim=1))
367
+ final_atts.append(torch.cat([torch.ones(1, temp_len), torch.zeros(1, temp_max_len-temp_len)], dim=1).to(wrap_target.dtype).to(self.device))
368
+ final_tars.append(torch.cat([temp_tars[idx], (torch.ones(1, temp_max_len-temp_len)*-100).to(wrap_target.dtype).to(self.device)], dim=1))
369
+ final_mask.append(torch.cat([temp_im_mask[idx], (torch.zeros(1, temp_max_len-temp_len)).to(wrap_target.dtype).to(self.device)], dim=1))
370
+
371
+ inputs_embeds = torch.cat(final_input, dim=0)
372
+ attention_mask = torch.cat(final_atts, dim=0)
373
+ targets = torch.cat(final_tars, dim=0)
374
+ im_mask = torch.cat(final_mask, dim=0)
375
+
376
+ return inputs_embeds, attention_mask, targets, im_mask
377
+
378
+ def mask_human_targets(self, input_ids, pure=False):
379
+ target_batch = []
380
+ for bs in range(input_ids.shape[0]):
381
+ ids = input_ids[bs]
382
+ targets = copy.deepcopy(ids)
383
+ end_count = 0
384
+ last_eoa = 0
385
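+ # 92542 is assumed to be the end-of-turn token id ([UNUSED_TOKEN_145]); user turns are masked to -100 and only assistant replies are kept as supervision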
+ for i, temp_id in enumerate(ids):
386
+ if temp_id == 92542:
387
+ if end_count % 2 == 0:
388
+ targets[last_eoa:i + 6] = -100
389
+ else:
390
+ last_eoa = i + 1
391
+ end_count += 1
392
+ # # eos and following pad
393
+ elif temp_id == 2:
394
+ # loss on eos, but not on pad
395
+ targets[i + 1:] = -100
396
+ break
397
+ # truncation, end at last question
398
+ if temp_id != 2 and end_count % 2 == 0:
399
+ # mask all after the last answer
400
+ targets[last_eoa + 1:] = -100
401
+ target_batch.append(targets.unsqueeze(0))
402
+ target_batch = torch.cat(target_batch, dim=0)
403
+ return target_batch
404
+
405
+ @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
406
+ @replace_return_docstrings(
407
+ output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
408
+ def forward(self,
409
+ input_ids: torch.LongTensor = None,
410
+ attention_mask: Optional[torch.Tensor] = None,
411
+ position_ids: Optional[torch.LongTensor] = None,
412
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
413
+ inputs_embeds: Optional[torch.FloatTensor] = None,
414
+ labels: Optional[torch.LongTensor] = None,
415
+ use_cache: Optional[bool] = None,
416
+ output_attentions: Optional[bool] = None,
417
+ output_hidden_states: Optional[bool] = None,
418
+ return_dict: Optional[bool] = None,
419
+ **kwargs) -> Union[Tuple, CausalLMOutputWithPast]:
420
+ r"""
421
+ Args:
422
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
423
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
424
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
425
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
426
+ Returns:
427
+ """
428
+
429
+ samples = kwargs.get('samples', None)
430
+ if samples:
431
+ infer_mode = samples.get('infer_mode', 'base')
432
+ if samples['data_type'][0] == 'text':
433
+ has_img = False
434
+ elif samples['data_type'][0] == 'multi':
435
+ has_img = True
436
+ else:
437
+ raise NotImplementedError
438
+
439
+ # encode text
440
+ text = samples['text_input']
441
+ # encode image
442
+ if has_img:
443
+ image = samples['image'][0]
444
+ bs = len(samples['text_input'][0])
445
+ image_nums = []
446
+ temp_image = []
447
+ for im in image:
448
+ if type(im) is list:
449
+ image_nums.append(len(im))
450
+ temp_image.extend(im)
451
+ else:
452
+ image_nums.append(1)
453
+ temp_image.append(im)
454
+ image = temp_image
455
+ assert type(image) is list and len(image_nums) == bs
456
+
457
+ to_regress_embeds, attention_mask, targets, im_mask = self.interleav_wrap(
458
+ image, text, image_nums)
459
+ else:
460
+ to_regress_tokens, targets = self.text2emb(
461
+ text, add_special_tokens=True)
462
+ to_regress_embeds = self.model.tok_embeddings(
463
+ to_regress_tokens.input_ids)
464
+ attention_mask = to_regress_tokens.attention_mask
465
+ im_mask = torch.zeros(to_regress_embeds.shape[:2]).cuda()
466
+
467
+ inputs_embeds = to_regress_embeds[:, :self.max_length]
468
+ attention_mask = attention_mask[:, :self.max_length]
469
+ targets = targets[:, :self.max_length]
470
+ im_mask = im_mask[:, :self.max_length].bool()
471
+ labels = targets
472
+ else:
473
+ im_mask = kwargs.get('im_mask', None)
474
+ infer_mode = kwargs.get('infer_mode', 'base')
475
+ if im_mask is None and inputs_embeds is not None:
476
+ im_mask = torch.zeros(inputs_embeds.shape[:2]).to(
477
+ inputs_embeds.device)
478
+ im_mask = im_mask.bool()
479
+
480
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
481
+ output_hidden_states = (
482
+ output_hidden_states if output_hidden_states is not None else
483
+ self.config.output_hidden_states)
484
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
485
+
486
+ # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
487
+ outputs = self.model(
488
+ input_ids=input_ids,
489
+ attention_mask=attention_mask,
490
+ position_ids=position_ids,
491
+ past_key_values=past_key_values,
492
+ inputs_embeds=inputs_embeds,
493
+ use_cache=use_cache,
494
+ output_attentions=output_attentions,
495
+ output_hidden_states=output_hidden_states,
496
+ return_dict=return_dict,
497
+ im_mask=im_mask,
498
+ infer_mode=infer_mode,
499
+ )
500
+
501
+ hidden_states = outputs[0]
502
+ logits = self.output(hidden_states)
503
+ logits = logits.float()
504
+
505
+ loss = None
506
+ if labels is not None:
507
+ # Shift so that tokens < n predict n
508
+ shift_logits = logits[..., :-1, :].contiguous()
509
+ shift_labels = labels[..., 1:].contiguous()
510
+ # Flatten the tokens
511
+ loss_fct = CrossEntropyLoss()
512
+ shift_logits = shift_logits.view(-1, self.config.vocab_size)
513
+ shift_labels = shift_labels.view(-1)
514
+ # Enable model parallelism
515
+ shift_labels = shift_labels.to(shift_logits.device)
516
+ loss = loss_fct(shift_logits, shift_labels)
517
+
518
+ if not return_dict:
519
+ output = (logits, ) + outputs[1:]
520
+ return (loss, ) + output if loss is not None else output
521
+
522
+ return CausalLMOutputWithPast(
523
+ loss=loss,
524
+ logits=logits,
525
+ past_key_values=outputs.past_key_values,
526
+ hidden_states=outputs.hidden_states,
527
+ attentions=outputs.attentions,
528
+ )
529
+
530
+ def prepare_inputs_for_generation(self,
531
+ input_ids,
532
+ past_key_values=None,
533
+ attention_mask=None,
534
+ inputs_embeds=None,
535
+ im_mask=None,
536
+ infer_mode='base',
537
+ **kwargs):
538
+ if past_key_values is not None:
539
+ past_length = past_key_values[0][0].shape[2]
540
+
541
+ # Some generation methods already pass only the last input ID
542
+ if input_ids.shape[1] > past_length:
543
+ remove_prefix_length = past_length
544
+ else:
545
+ # Default to old behavior: keep only final ID
546
+ remove_prefix_length = input_ids.shape[1] - 1
547
+
548
+ input_ids = input_ids[:, remove_prefix_length:]
549
+
550
+ position_ids = kwargs.get('position_ids', None)
551
+ if attention_mask is not None and position_ids is None:
552
+ # create position_ids on the fly for batch generation
553
+ position_ids = attention_mask.long().cumsum(-1) - 1
554
+ position_ids.masked_fill_(attention_mask == 0, 1)
555
+ if past_key_values:
556
+ position_ids = position_ids[:, -input_ids.shape[1]:]
557
+
558
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
559
+ if inputs_embeds is not None and past_key_values is None:
560
+ model_inputs = {'inputs_embeds': inputs_embeds}
561
+ else:
562
+ model_inputs = {'input_ids': input_ids}
563
+
564
+ im_mask = im_mask
565
+
566
+ model_inputs.update({
567
+ 'position_ids': position_ids,
568
+ 'past_key_values': past_key_values,
569
+ 'use_cache': kwargs.get('use_cache'),
570
+ 'attention_mask': attention_mask,
571
+ 'im_mask': im_mask,
572
+ 'infer_mode': infer_mode,
573
+ })
574
+ return model_inputs
575
+
576
+ @staticmethod
577
+ def _reorder_cache(past_key_values, beam_idx):
578
+ reordered_past = ()
579
+ for layer_past in past_key_values:
580
+ reordered_past += (tuple(
581
+ past_state.index_select(0, beam_idx.to(past_state.device))
582
+ for past_state in layer_past), )
583
+ return reordered_past
584
+
585
+ def build_inputs(self,
586
+ tokenizer,
587
+ query: str,
588
+ history: List[Tuple[str, str]] = [],
589
+ meta_instruction=''):
590
+ prompt = ''
591
+ if meta_instruction:
592
+ prompt += f"""<s>[UNUSED_TOKEN_146]system\n{meta_instruction}[UNUSED_TOKEN_145]\n"""
593
+ else:
594
+ prompt += '<s>'
595
+ for record in history:
596
+ prompt += f"""[UNUSED_TOKEN_146]user\n{record[0]}[UNUSED_TOKEN_145]\n[UNUSED_TOKEN_146]assistant\n{record[1]}[UNUSED_TOKEN_145]\n"""
597
+ prompt += f"""[UNUSED_TOKEN_146]user\n{query}[UNUSED_TOKEN_145]\n[UNUSED_TOKEN_146]assistant\n"""
598
+ return tokenizer([prompt], return_tensors='pt')
599
+
600
+ @torch.no_grad()
601
+ def chat(
602
+ self,
603
+ tokenizer,
604
+ query: str,
605
+ image: List[Tuple[str, str]] = [],
606
+ hd_num: int = 24,
607
+ history: List[Tuple[str, str]] = [],
608
+ streamer: Optional[BaseStreamer] = None,
609
+ max_new_tokens: int = 1024,
610
+ do_sample: bool = True,
611
+ num_beams: int = 1,
612
+ temperature: float = 1.0,
613
+ top_p: float = 0.8,
614
+ repetition_penalty: float=1.005,
615
+ infer_mode: str = 'base',
616
+ use_meta: bool = False,
617
+ meta_instruction:
618
+ str = 'You are an AI assistant whose name is InternLM-XComposer (浦语·灵笔).\n'
619
+ '- InternLM-XComposer (浦语·灵笔) is a multi-modality conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.\n'
620
+ '- InternLM-XComposer (浦语·灵笔) can understand and communicate fluently in the language chosen by the user such as English and 中文.\n'
621
+ '- InternLM-XComposer (浦语·灵笔) is capable of comprehending and articulating responses effectively based on the provided image.',
622
+ **kwargs,
623
+ ):
624
+
625
+ if not use_meta:
626
+ meta_instruction = ''
627
+ if image is None:
628
+ inputs = self.build_inputs(tokenizer, query, history, meta_instruction)
629
+ im_mask = torch.zeros(inputs['input_ids'].shape[:2]).cuda().bool()
630
+ else:
631
+ inputs, im_mask, _ = self.interleav_wrap_chat(query, image, history=history, meta_instruction=meta_instruction, hd_num=hd_num)
632
+ inputs = {
633
+ k: v.to(self.device)
634
+ for k, v in inputs.items() if torch.is_tensor(v)
635
+ }
636
+ # also add end-of-assistant token in eos token id to avoid unnecessary generation
637
+ eos_token_id = [
638
+ tokenizer.eos_token_id,
639
+ tokenizer.convert_tokens_to_ids(['[UNUSED_TOKEN_145]'])[0]
640
+ ]
641
+ outputs = self.generate(
642
+ **inputs,
643
+ streamer=streamer,
644
+ max_new_tokens=max_new_tokens,
645
+ num_beams=num_beams,
646
+ do_sample=do_sample,
647
+ temperature=temperature,
648
+ top_p=top_p,
649
+ eos_token_id=eos_token_id,
650
+ repetition_penalty=repetition_penalty,
651
+ im_mask=im_mask,
652
+ infer_mode=infer_mode,
653
+ **kwargs,
654
+ )
655
+ if image is None:
656
+ outputs = outputs[0].cpu().tolist()[len(inputs['input_ids'][0]):]
657
+ else:
658
+ outputs = outputs[0].cpu().tolist()
659
+ response = tokenizer.decode(outputs, skip_special_tokens=True)
660
+ response = response.split('[UNUSED_TOKEN_145]')[0]
661
+ history = history + [(query, response)]
662
+ return response, history
pytorch_model-00001-of-00003.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:eb1f1d717b23f72cec601ef3204636c939dc3f03e75eb92c4798f197927be963
3
+ size 7740922137
pytorch_model-00002-of-00003.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:65ac7b5c38ff2756a481c1ef8960bc348cdbdf4b5d832b7b3ce5f1b025a155f0
3
+ size 7583658858
pytorch_model-00003-of-00003.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1144532adb96fb4a0d33e481a333e0db794391f7ab2dda6ea5f3ebbf66719384
3
+ size 2035384735
pytorch_model.bin.index.json ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,38 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|im_start|>",
4
+ "<|im_end|>",
5
+ "<|action_start|>",
6
+ "<|action_end|>",
7
+ "<|interpreter|>",
8
+ "<|plugin|>"
9
+ ],
10
+ "bos_token": {
11
+ "content": "<s>",
12
+ "lstrip": false,
13
+ "normalized": false,
14
+ "rstrip": false,
15
+ "single_word": false
16
+ },
17
+ "eos_token": {
18
+ "content": "</s>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ },
24
+ "pad_token": {
25
+ "content": "</s>",
26
+ "lstrip": false,
27
+ "normalized": false,
28
+ "rstrip": false,
29
+ "single_word": false
30
+ },
31
+ "unk_token": {
32
+ "content": "<unk>",
33
+ "lstrip": false,
34
+ "normalized": false,
35
+ "rstrip": false,
36
+ "single_word": false
37
+ }
38
+ }
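This file registers the ChatML-style control tokens (`<|im_start|>`, `<|im_end|>`, `<|action_start|>`, `<|action_end|>`, `<|interpreter|>`, `<|plugin|>`) as additional special tokens and fixes `<s>`, `</s>`, and `<unk>` as the BOS, EOS/PAD, and UNK tokens. A quick sanity check (a sketch, not part of this commit) that these declarations are picked up when the tokenizer is loaded:

```python
# Sketch: verify that the special tokens declared in special_tokens_map.json
# are registered on the loaded tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b-chat',
                                    trust_remote_code=True)
print(tok.additional_special_tokens)            # includes '<|im_start|>' and '<|im_end|>'
print(tok.convert_tokens_to_ids('<|im_end|>'))  # id defined in added_tokens_decoder (see tokenizer_config.json below)
print(tok.eos_token, tok.pad_token)             # '</s>' serves as both EOS and padding token
```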
tokenization_internlm2.py ADDED
@@ -0,0 +1,236 @@
+ # coding=utf-8
+ # Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
+ #
+ # This code is based on transformers/src/transformers/models/llama/tokenization_llama.py
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """Tokenization classes for InternLM."""
+ import os
+ from shutil import copyfile
+ from typing import Any, Dict, List, Optional, Tuple
+
+ import sentencepiece as spm
+ from transformers.tokenization_utils import PreTrainedTokenizer
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+ VOCAB_FILES_NAMES = {"vocab_file": "./tokenizer.model"}
+
+ PRETRAINED_VOCAB_FILES_MAP = {}
+
+
+ # Modified from transformers.model.llama.tokenization_llama.LlamaTokenizer
+ class InternLM2Tokenizer(PreTrainedTokenizer):
+ """
+ Construct a InternLM2 tokenizer. Based on byte-level Byte-Pair-Encoding.
+
+ Args:
+ vocab_file (`str`):
+ Path to the vocabulary file.
+ """
+
+ vocab_files_names = VOCAB_FILES_NAMES
+ pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
+ model_input_names = ["input_ids", "attention_mask"]
+ _auto_class = "AutoTokenizer"
+
+ def __init__(
+ self,
+ vocab_file,
+ unk_token="<unk>",
+ bos_token="<s>",
+ eos_token="</s>",
+ pad_token="</s>",
+ sp_model_kwargs: Optional[Dict[str, Any]] = None,
+ add_bos_token=True,
+ add_eos_token=False,
+ decode_with_prefix_space=False,
+ clean_up_tokenization_spaces=False,
+ **kwargs,
+ ):
+ self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
+ self.vocab_file = vocab_file
+ self.add_bos_token = add_bos_token
+ self.add_eos_token = add_eos_token
+ self.decode_with_prefix_space = decode_with_prefix_space
+ self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
+ self.sp_model.Load(vocab_file)
+ self._no_prefix_space_tokens = None
+ super().__init__(
+ bos_token=bos_token,
+ eos_token=eos_token,
+ unk_token=unk_token,
+ pad_token=pad_token,
+ clean_up_tokenization_spaces=clean_up_tokenization_spaces,
+ **kwargs,
+ )
+
+ @property
+ def no_prefix_space_tokens(self):
+ if self._no_prefix_space_tokens is None:
+ vocab = self.convert_ids_to_tokens(list(range(self.vocab_size)))
+ self._no_prefix_space_tokens = {i for i, tok in enumerate(vocab) if not tok.startswith("▁")}
+ return self._no_prefix_space_tokens
+
+ @property
+ def vocab_size(self):
+ """Returns vocab size"""
+ return self.sp_model.get_piece_size()
+
+ @property
+ def bos_token_id(self) -> Optional[int]:
+ return self.sp_model.bos_id()
+
+ @property
+ def eos_token_id(self) -> Optional[int]:
+ return self.sp_model.eos_id()
+
+ def get_vocab(self):
+ """Returns vocab as a dict"""
+ vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
+ vocab.update(self.added_tokens_encoder)
+ return vocab
+
+ def _tokenize(self, text):
+ """Returns a tokenized string."""
+ return self.sp_model.encode(text, out_type=str)
+
+ def _convert_token_to_id(self, token):
+ """Converts a token (str) in an id using the vocab."""
+ return self.sp_model.piece_to_id(token)
+
+ def _convert_id_to_token(self, index):
+ """Converts an index (integer) in a token (str) using the vocab."""
+ token = self.sp_model.IdToPiece(index)
+ return token
+
+ def _maybe_add_prefix_space(self, tokens, decoded):
+ if tokens and tokens[0] not in self.no_prefix_space_tokens:
+ return " " + decoded
+ else:
+ return decoded
+
+ def convert_tokens_to_string(self, tokens):
+ """Converts a sequence of tokens (string) in a single string."""
+ current_sub_tokens = []
+ out_string = ""
+ prev_is_special = False
+ for token in tokens:
+ # make sure that special tokens are not decoded using sentencepiece model
+ if token in self.all_special_tokens:
+ if not prev_is_special:
+ out_string += " "
+ out_string += self.sp_model.decode(current_sub_tokens) + token
+ prev_is_special = True
+ current_sub_tokens = []
+ else:
+ current_sub_tokens.append(token)
+ prev_is_special = False
+ out_string += self.sp_model.decode(current_sub_tokens)
+ out_string = self.clean_up_tokenization(out_string)
+ out_string = self._maybe_add_prefix_space(tokens=tokens, decoded=out_string)
+ return out_string[1:]
+
+ def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]:
+ """
+ Save the vocabulary and special tokens file to a directory.
+
+ Args:
+ save_directory (`str`):
+ The directory in which to save the vocabulary.
+
+ Returns:
+ `Tuple(str)`: Paths to the files saved.
+ """
+ if not os.path.isdir(save_directory):
+ logger.error(f"Vocabulary path ({save_directory}) should be a directory")
+ return
+ out_vocab_file = os.path.join(
+ save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
+ )
+
+ if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
+ copyfile(self.vocab_file, out_vocab_file)
+ elif not os.path.isfile(self.vocab_file):
+ with open(out_vocab_file, "wb") as fi:
+ content_spiece_model = self.sp_model.serialized_model_proto()
+ fi.write(content_spiece_model)
+
+ return (out_vocab_file,)
+
+ def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
+ if self.add_bos_token:
+ bos_token_ids = [self.bos_token_id]
+ else:
+ bos_token_ids = []
+
+ output = bos_token_ids + token_ids_0
+
+ if token_ids_1 is not None:
+ output = output + token_ids_1
+
+ if self.add_eos_token:
+ output = output + [self.eos_token_id]
+
+ return output
+
+ def get_special_tokens_mask(
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
+ ) -> List[int]:
+ """
+ Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
+ special tokens using the tokenizer `prepare_for_model` method.
+
+ Args:
+ token_ids_0 (`List[int]`):
+ List of IDs.
+ token_ids_1 (`List[int]`, *optional*):
+ Optional second list of IDs for sequence pairs.
+ already_has_special_tokens (`bool`, *optional*, defaults to `False`):
+ Whether or not the token list is already formatted with special tokens for the model.
+
+ Returns:
+ `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
+ """
+ if already_has_special_tokens:
+ return super().get_special_tokens_mask(
+ token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
+ )
+
+ if token_ids_1 is None:
+ return [1] + ([0] * len(token_ids_0)) + [1]
+ return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]
+
+ def create_token_type_ids_from_sequences(
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
+ ) -> List[int]:
+ """
+ Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make
+ use of token type ids, therefore a list of zeros is returned.
+
+ Args:
+ token_ids_0 (`List[int]`):
+ List of IDs.
+ token_ids_1 (`List[int]`, *optional*):
+ Optional second list of IDs for sequence pairs.
+
+ Returns:
+ `List[int]`: List of zeros.
+ """
+ eos = [self.eos_token_id]
+
+ if token_ids_1 is None:
+ return len(token_ids_0 + eos) * [0]
+ return len(token_ids_0 + eos + token_ids_1 + eos) * [0]
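`InternLM2Tokenizer` is a thin SentencePiece wrapper around the `tokenizer.model` file added below; its `_auto_class` attribute together with the `auto_map` entry in `tokenizer_config.json` lets `AutoTokenizer` resolve it when `trust_remote_code=True`. A minimal round-trip sketch, under the assumption that this folder's files are available locally:

```python
# Sketch: drive the SentencePiece-backed tokenizer class directly.
# Assumes this repository folder is the working directory, so tokenization_internlm2.py
# is importable and ./tokenizer.model (the LFS pointer below) has been materialized.
from tokenization_internlm2 import InternLM2Tokenizer

tok = InternLM2Tokenizer('./tokenizer.model')     # add_bos_token=True, add_eos_token=False by default
ids = tok('InternLM-XComposer understands images and text.')['input_ids']
print(ids[0] == tok.bos_token_id)                 # True: BOS is prepended, no EOS is appended
print(tok.decode(ids, skip_special_tokens=True))  # decodes back to the input text
```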
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f868398fc4e05ee1e8aeba95ddf18ddcc45b8bce55d5093bead5bbf80429b48b
+ size 1477754
tokenizer_config.json ADDED
@@ -0,0 +1,99 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "92538": {
+ "content": "<|plugin|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "92539": {
+ "content": "<|interpreter|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "92540": {
+ "content": "<|action_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "92541": {
+ "content": "<|action_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "92542": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "92543": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|action_start|>",
+ "<|action_end|>",
+ "<|interpreter|>",
+ "<|plugin|>"
+ ],
+ "auto_map": {
+ "AutoTokenizer": [
+ "tokenization_internlm2.InternLM2Tokenizer",
+ null
+ ]
+ },
+ "bos_token": "<s>",
+ "chat_template": "{{ bos_token }}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "</s>",
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "</s>",
+ "padding_side": "right",
+ "tokenizer_class": "InternLM2Tokenizer",
+ "unk_token": "<unk>"
+ }
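The `chat_template` above is a ChatML-style Jinja template: it emits the BOS token, wraps every message as `<|im_start|>{role}\n{content}<|im_end|>\n`, and appends an `<|im_start|>assistant\n` header when `add_generation_prompt` is set; the model's own `chat()` method shown earlier builds the equivalent turn structure with the `[UNUSED_TOKEN_146]`/`[UNUSED_TOKEN_145]` markers. A hedged rendering sketch (not part of this commit):

```python
# Sketch: render the chat_template defined above via apply_chat_template.
# Assumes the tokenizer is loaded from this checkpoint with trust_remote_code=True.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2d5-7b-chat',
                                    trust_remote_code=True)
messages = [{'role': 'user', 'content': 'Who are you?'}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# Expected shape per the template:
# '<s><|im_start|>user\nWho are you?<|im_end|>\n<|im_start|>assistant\n'
```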