ClownRat committed
Commit 592e852 · verified · 1 Parent(s): 3eb707c

Upload processor

README.md ADDED
@@ -0,0 +1,199 @@
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->



## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

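Since this commit ships only the image processor, a minimal hedged sketch of direct use is preprocessing an image and a video clip into vision patches. The repository id below is a placeholder, and `trust_remote_code=True` is needed because the processor class is resolved through the `auto_map` entry in `preprocessor_config.json`; the merge sizes are illustrative:

```python
import numpy as np
from PIL import Image
from transformers import AutoImageProcessor

# Placeholder repo id -- substitute the actual Hub repository.
processor = AutoImageProcessor.from_pretrained(
    "your-org/your-videollama3-checkpoint", trust_remote_code=True
)

image = Image.new("RGB", (640, 480))           # a single image
video = np.zeros((8, 480, 640, 3), np.uint8)   # a clip as a (T, H, W, C) array

# One merge size per input (values here are illustrative).
inputs = processor(images=[image, video], merge_size=[1, 2], return_tensors="pt")
print(inputs["pixel_values"].shape)  # (total_patches, channels * patch_size ** 2)
print(inputs["grid_sizes"])          # one (t, grid_h, grid_w) triple per input
print(inputs["merge_sizes"])         # tensor([1, 2])
```

The three output keys match the processor's `model_input_names`; patches from all inputs are concatenated along the first axis, and `grid_sizes` records how to reassemble them.
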
### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

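Pending an official snippet, here is a minimal sketch (with a placeholder repository id) that loads the custom processor registered in this repo:

```python
from transformers import AutoImageProcessor

# The auto_map in preprocessor_config.json resolves AutoImageProcessor to
# image_processing_videollama3.Videollama3ImageProcessor, so remote code
# must be trusted. The repo id below is a placeholder.
processor = AutoImageProcessor.from_pretrained(
    "your-org/your-videollama3-checkpoint", trust_remote_code=True
)
print(type(processor).__name__)  # Videollama3ImageProcessor
```
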
## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary



## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
image_processing_videollama3.py ADDED
@@ -0,0 +1,473 @@
# Adapted from https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_vl/image_processing_qwen2_vl.py.
# Below is the original copyright:
# Copyright 2024 The Qwen team, Alibaba Group and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Image processor class for VideoLLaMA3."""

import math
from typing import List, Optional, Union

import numpy as np
import torch

from transformers.image_processing_utils import BaseImageProcessor, BatchFeature
from transformers.image_transforms import (
    convert_to_rgb,
    resize,
    to_channel_dimension_format,
)
from transformers.image_utils import (
    OPENAI_CLIP_MEAN,
    OPENAI_CLIP_STD,
    ChannelDimension,
    ImageInput,
    PILImageResampling,
    VideoInput,
    get_image_size,
    infer_channel_dimension_format,
    is_scaled_image,
    is_valid_image,
    make_list_of_images,
    to_numpy_array,
)
from transformers.utils import TensorType, is_vision_available, logging


logger = logging.get_logger(__name__)


if is_vision_available():
    from PIL import Image

def is_valid_video(video) -> bool:
    # A video is a list/tuple of valid frames or a 4-D array/tensor of frames.
    if isinstance(video, (list, tuple)):
        return all(is_valid_image(frame) for frame in video)
    elif isinstance(video, np.ndarray):
        return video.ndim == 4
    elif isinstance(video, torch.Tensor):
        return video.ndim == 4
    return False


def make_batched_images(images) -> List[List[ImageInput]]:
    """
    Accepts images in list or nested list format, and makes a list of images for preprocessing.

    Args:
        images (`Union[List[List[ImageInput]], List[ImageInput], ImageInput]`):
            The input image.

    Returns:
        list: A list of images.
    """
    if isinstance(images, (list, tuple)):
        # list of images/videos
        if not all(is_valid_video(image) or is_valid_image(image) for image in images):
            raise ValueError(f"Could not make batched images from {images}")
        return images
    elif is_valid_video(images) or is_valid_image(images):
        # single image/video
        return [images]

    raise ValueError(f"Could not make batched images from {images}")

def simple_batched_resize(
    images, factor: int = 28, min_tokens: int = 4 * 4, max_tokens: int = 16384, input_data_format: str = None
):
    # Convert the token budget to a pixel budget: one vision token covers
    # `factor x factor` pixels (factor = patch_size * merge_size).
    min_pixels = min_tokens * factor * factor
    max_pixels = max_tokens * factor * factor

    num_images = 0
    for image in images:
        if is_valid_video(image):
            num_images += len(image)
        else:
            num_images += 1

    image_sizes = []
    for image in images:
        if is_valid_video(image):
            image = image[0]
        if isinstance(image, Image.Image):
            width, height = image.size  # PIL reports (width, height)
        else:
            height, width = get_image_size(image, channel_dim=input_data_format)
        image_sizes.append([height, width])

    tmp_image_sizes = []
    for height, width in image_sizes:
        # Round to a multiple of `factor`, then rescale if the per-image share
        # of the global pixel budget is exceeded.
        h_bar = round(height / factor) * factor
        w_bar = round(width / factor) * factor
        if h_bar * w_bar > (max_pixels // num_images):
            beta = math.sqrt((height * width) / (max_pixels // num_images))
            h_bar = math.floor(height / beta / factor) * factor
            w_bar = math.floor(width / beta / factor) * factor
        # per image min_pixels
        if h_bar * w_bar < min_pixels:
            beta = math.sqrt(min_pixels / (height * width))
            h_bar = math.ceil(height * beta / factor) * factor
            w_bar = math.ceil(width * beta / factor) * factor
        tmp_image_sizes.append((h_bar, w_bar))
    image_sizes = tmp_image_sizes
    return image_sizes

def batched_resize(
    images, factors: List[int], min_tokens: int = 4 * 4, max_tokens: int = 16384, input_data_format: str = None
):
    image_sizes = []
    for image in images:
        if is_valid_video(image):
            num_frame = len(image)
            image = image[0]
        else:
            num_frame = 1
        if isinstance(image, Image.Image):
            width, height = image.size  # PIL reports (width, height)
        else:
            height, width = get_image_size(image, channel_dim=input_data_format)
        image_sizes.append([num_frame, height, width])

    # global max_pixels: count the tokens the whole batch would produce
    total_tokens = 0
    for (num_frame, height, width), factor in zip(image_sizes, factors):
        total_tokens += num_frame * math.ceil(height / factor) * math.ceil(width / factor)

    # TODO: add min_pixels
    if total_tokens > max_tokens:
        # Scale every input down by a common factor so the batch fits the budget.
        beta = math.sqrt(total_tokens / max_tokens)
        tmp_image_sizes = []
        for (_, height, width), factor in zip(image_sizes, factors):
            h_bar = math.floor(height / beta / factor) * factor
            w_bar = math.floor(width / beta / factor) * factor
            tmp_image_sizes.append((h_bar, w_bar))
        image_sizes = tmp_image_sizes
    else:
        tmp_image_sizes = []
        for (_, height, width), factor in zip(image_sizes, factors):
            height = round(height / factor) * factor
            width = round(width / factor) * factor
            tmp_image_sizes.append((height, width))
        image_sizes = tmp_image_sizes

    return image_sizes

class Videollama3ImageProcessor(BaseImageProcessor):
    r"""
    Constructs a VideoLLaMA3 image processor that dynamically resizes images based on the original images.

    Args:
        do_resize (`bool`, *optional*, defaults to `True`):
            Whether to resize the image's (height, width) dimensions.
        resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
            Resampling filter to use when resizing the image.
        do_rescale (`bool`, *optional*, defaults to `True`):
            Whether to rescale the image by the specified scale `rescale_factor`.
        rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
            Scale factor to use if rescaling the image.
        do_normalize (`bool`, *optional*, defaults to `True`):
            Whether to normalize the image.
        image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`):
            Mean to use if normalizing the image. This is a float or list of floats for each channel in the image.
        image_std (`float` or `List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`):
            Standard deviation to use if normalizing the image. This is a float or list of floats for each channel in the image.
        do_convert_rgb (`bool`, *optional*, defaults to `True`):
            Whether to convert the image to RGB.
        min_tokens (`int`, *optional*, defaults to `4 * 4`):
            The minimum number of vision tokens an image is resized to yield.
        max_tokens (`int`, *optional*, defaults to `16384`):
            The maximum number of vision tokens an image is resized to yield.
        patch_size (`int`, *optional*, defaults to 14):
            The spatial patch size of the vision encoder.
    """

    model_input_names = ["pixel_values", "grid_sizes", "merge_sizes"]

    def __init__(
        self,
        do_resize: bool = True,
        resample: PILImageResampling = PILImageResampling.BICUBIC,
        do_rescale: bool = True,
        rescale_factor: Union[int, float] = 1 / 255,
        do_normalize: bool = True,
        image_mean: Optional[Union[float, List[float]]] = None,
        image_std: Optional[Union[float, List[float]]] = None,
        do_convert_rgb: bool = True,
        min_tokens: int = 4 * 4,
        max_tokens: int = 16384,
        patch_size: int = 14,
        **kwargs,
    ) -> None:
        super().__init__(**kwargs)
        self.do_resize = do_resize
        self.resample = resample
        self.do_rescale = do_rescale
        self.rescale_factor = rescale_factor
        self.do_normalize = do_normalize
        self.image_mean = image_mean if image_mean is not None else OPENAI_CLIP_MEAN
        self.image_std = image_std if image_std is not None else OPENAI_CLIP_STD
        self.min_tokens = min_tokens
        self.max_tokens = max_tokens
        self.patch_size = patch_size
        self.do_convert_rgb = do_convert_rgb

    def _preprocess(
        self,
        images: Union[ImageInput, VideoInput],
        target_size: List[int],
        merge_size: int = 1,
        do_resize: bool = None,
        resample: PILImageResampling = None,
        do_rescale: bool = None,
        rescale_factor: float = None,
        do_normalize: bool = None,
        image_mean: Optional[Union[float, List[float]]] = None,
        image_std: Optional[Union[float, List[float]]] = None,
        do_convert_rgb: bool = None,
        data_format: Optional[ChannelDimension] = ChannelDimension.FIRST,
        input_data_format: Optional[Union[str, ChannelDimension]] = None,
    ):
        """
        Preprocess an image or batch of images. Copy of the `preprocess` method from `CLIPImageProcessor`.

        Args:
            images (`ImageInput`):
                Image or batch of images to preprocess. Expects pixel values ranging from 0 to 255. If pixel values range from 0 to 1, set `do_rescale=False`.
            target_size (`List[int]`):
                The target size to resize the image to. Should be a list of two integers: [target_height, target_width].
            merge_size (`int`, *optional*, defaults to `1`):
                The merge size after the vision encoder.
            do_resize (`bool`, *optional*, defaults to `self.do_resize`):
                Whether to resize the image.
            resample (`PILImageResampling`, *optional*, defaults to `self.resample`):
                Resampling filter to use if resizing the image. This can be one of the `PILImageResampling` enums.
            do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
                Whether to rescale the image.
            rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`):
                Scale factor to use if rescaling the image.
            do_normalize (`bool`, *optional*, defaults to `self.do_normalize`):
                Whether to normalize the image.
            image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
                Mean to use if normalizing the image. Can be a float or a list of floats corresponding to the number of channels in the image.
            image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
                Standard deviation to use if normalizing the image. Can be a float or a list of floats corresponding to the number of channels in the image.
            do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`):
                Whether to convert the image to RGB.
            data_format (`ChannelDimension`, *optional*, defaults to `ChannelDimension.FIRST`):
                The channel dimension format for the output image. Can be one of:
                - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
                - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
                - Unset: Use the channel dimension format of the input image.
            input_data_format (`ChannelDimension` or `str`, *optional*):
                The channel dimension format for the input image. Can be one of:
                - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
                - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
                - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
        """
        images = make_list_of_images(images)

        if do_convert_rgb:
            images = [convert_to_rgb(image) for image in images]

        # All transformations expect numpy arrays.
        images = [to_numpy_array(image) for image in images]

        if is_scaled_image(images[0]) and do_rescale:
            logger.warning_once(
                "It looks like you are trying to rescale already rescaled images. If the input"
                " images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again."
            )
        if input_data_format is None:
            # We assume that all images have the same channel dimension format.
            input_data_format = infer_channel_dimension_format(images[0])

        height, width = get_image_size(images[0], channel_dim=input_data_format)
        resized_height, resized_width = height, width
        processed_images = []
        for image in images:
            if do_resize:
                resized_height, resized_width = target_size
                image = resize(
                    image, size=(resized_height, resized_width), resample=resample, input_data_format=input_data_format
                )

            if do_rescale:
                image = self.rescale(image, scale=rescale_factor, input_data_format=input_data_format)

            if do_normalize:
                image = self.normalize(
                    image=image, mean=image_mean, std=image_std, input_data_format=input_data_format
                )

            image = to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format)
            processed_images.append(image)

        # Split each frame into patch_size x patch_size patches, ordered so that
        # the merge_size x merge_size neighbors merged after the vision encoder
        # are contiguous, then flatten to (num_patches, channel * patch_size**2).
        patches = np.array(processed_images)
        if data_format == ChannelDimension.LAST:
            patches = patches.transpose(0, 3, 1, 2)
        t = patches.shape[0]
        channel = patches.shape[1]
        grid_h, grid_w = resized_height // self.patch_size, resized_width // self.patch_size
        patches = patches.reshape(
            t,
            channel,
            grid_h // merge_size,
            merge_size,
            self.patch_size,
            grid_w // merge_size,
            merge_size,
            self.patch_size,
        )
        patches = patches.transpose(0, 2, 5, 3, 6, 1, 4, 7)
        flatten_patches = patches.reshape(
            t * grid_h * grid_w, channel * self.patch_size * self.patch_size
        )

        return flatten_patches, (t, grid_h, grid_w)

    def preprocess(
        self,
        images: ImageInput,
        do_resize: bool = None,
        resample: PILImageResampling = None,
        do_rescale: bool = None,
        rescale_factor: float = None,
        do_normalize: bool = None,
        image_mean: Optional[Union[float, List[float]]] = None,
        image_std: Optional[Union[float, List[float]]] = None,
        do_convert_rgb: bool = None,
        merge_size: Optional[Union[int, List[int]]] = None,
        return_tensors: Optional[Union[str, TensorType]] = None,
        data_format: Optional[ChannelDimension] = ChannelDimension.FIRST,
        input_data_format: Optional[Union[str, ChannelDimension]] = None,
    ):
        """
        Args:
            images (`ImageInput`):
                Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
                passing in images with pixel values between 0 and 1, set `do_rescale=False`.
            do_resize (`bool`, *optional*, defaults to `self.do_resize`):
                Whether to resize the image.
            resample (`int`, *optional*, defaults to `self.resample`):
                Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only
                has an effect if `do_resize` is set to `True`.
            do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
                Whether to rescale the image.
            rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`):
                Rescale factor to rescale the image by if `do_rescale` is set to `True`.
            do_normalize (`bool`, *optional*, defaults to `self.do_normalize`):
                Whether to normalize the image.
            image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
                Image mean to use for normalization. Only has an effect if `do_normalize` is set to `True`.
            image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
                Image standard deviation to use for normalization. Only has an effect if `do_normalize` is set to
                `True`.
            do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`):
                Whether to convert the image to RGB.
            merge_size (`int` or `List[int]`, *optional*, defaults to `1`):
                The merge size(s) after the vision encoder. A list supplies one value per image/video.
            return_tensors (`str` or `TensorType`, *optional*):
                The type of tensors to return. Can be one of:
                - Unset: Return a list of `np.ndarray`.
                - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
                - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
                - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
                - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
            data_format (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`):
                The channel dimension format for the output image. Can be one of:
                - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
                - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
                - Unset: Use the channel dimension format of the input image.
            input_data_format (`ChannelDimension` or `str`, *optional*):
                The channel dimension format for the input image. If unset, the channel dimension format is inferred
                from the input image. Can be one of:
                - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
                - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
                - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.

        """
        do_resize = do_resize if do_resize is not None else self.do_resize
        resample = resample if resample is not None else self.resample
        do_rescale = do_rescale if do_rescale is not None else self.do_rescale
        rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor
        do_normalize = do_normalize if do_normalize is not None else self.do_normalize
        image_mean = image_mean if image_mean is not None else self.image_mean
        image_std = image_std if image_std is not None else self.image_std
        # `merge_size` is not set in `__init__`, so fall back to 1 (the default
        # used by `_preprocess`) unless it was supplied via config kwargs.
        merge_size = merge_size if merge_size is not None else getattr(self, "merge_size", 1)
        do_convert_rgb = do_convert_rgb if do_convert_rgb is not None else self.do_convert_rgb

        images = make_batched_images(images)

        if isinstance(merge_size, (list, tuple)):
            assert len(merge_size) == len(images), "Merge size must be the same length as images."
            merge_sizes = merge_size
        else:
            merge_sizes = [merge_size for _ in images]

        # With a uniform merge size the pixel budget uses a single factor;
        # mixed merge sizes need per-image factors.
        if all(merge_size == merge_sizes[0] for merge_size in merge_sizes):
            target_sizes = simple_batched_resize(
                images,
                factor=self.patch_size * merge_sizes[0],
                min_tokens=self.min_tokens,
                max_tokens=self.max_tokens,
                input_data_format=input_data_format,
            )
        else:
            target_sizes = batched_resize(
                images,
                factors=[self.patch_size * merge_size for merge_size in merge_sizes],
                min_tokens=self.min_tokens,
                max_tokens=self.max_tokens,
                input_data_format=input_data_format,
            )

        pixel_values, grid_sizes = [], []
        for image, merge_size, target_size in zip(images, merge_sizes, target_sizes):
            patches, grid_size = self._preprocess(
                image,
                target_size=target_size,
                merge_size=merge_size,
                do_resize=do_resize,
                resample=resample,
                do_rescale=do_rescale,
                rescale_factor=rescale_factor,
                do_normalize=do_normalize,
                image_mean=image_mean,
                image_std=image_std,
                data_format=data_format,
                do_convert_rgb=do_convert_rgb,
                input_data_format=input_data_format,
            )
            pixel_values.append(patches)
            grid_sizes.append(grid_size)

        pixel_values = np.concatenate(pixel_values, axis=0)
        grid_sizes = np.array(grid_sizes)
        merge_sizes = np.array(merge_sizes)

        data = {
            "pixel_values": pixel_values,
            "grid_sizes": grid_sizes,
            "merge_sizes": merge_sizes,
        }

        return BatchFeature(data=data, tensor_type=return_tensors)
preprocessor_config.json ADDED
@@ -0,0 +1,25 @@
{
  "auto_map": {
    "AutoImageProcessor": "image_processing_videollama3.Videollama3ImageProcessor"
  },
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.5,
    0.5,
    0.5
  ],
  "image_processor_type": "Videollama3ImageProcessor",
  "image_std": [
    0.5,
    0.5,
    0.5
  ],
  "max_tokens": 16384,
  "min_tokens": 16,
  "patch_size": 14,
  "resample": 3,
  "rescale_factor": 0.00392156862745098
}
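
A brief note on the opaque numeric fields, with a small hedged sanity check (these are plain facts about PIL and the processor defaults above, not repository-specific claims):

```python
from PIL import Image

# "resample": 3 is PIL's bicubic filter, matching the processor's default.
assert Image.Resampling.BICUBIC == 3
# "rescale_factor" is exactly 1/255, i.e. uint8 pixels are scaled to [0, 1].
assert 0.00392156862745098 == 1 / 255
# "min_tokens": 16 (= 4 * 4) and "max_tokens": 16384 match the defaults in
# image_processing_videollama3.py, while the image_mean/image_std of 0.5
# override the OPENAI_CLIP_MEAN / OPENAI_CLIP_STD fallbacks used by the class.
```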