wnma3mz committed · verified
Commit 170b97b · 1 Parent(s): b6992a7

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ license: mit
+ license_name: deepseek
+ license_link: LICENSE
+ pipeline_tag: any-to-any
+ library_name: transformers
+ tags:
+ - multimodal
+ - text-to-image
+ - unified-model
+ ---
+
+ ## 0. Update
+ **2024.10.20**: We have uploaded the correct `tokenizer_config.json`. The previous file was missing the `pad_token`, which caused poor visual generation results.
+
+
+ ## 1. Introduction
+
+ Janus is a novel autoregressive framework that unifies multimodal understanding and generation.
+ It addresses the limitations of previous approaches by decoupling visual encoding into separate pathways, while still using a single, unified transformer architecture for processing. The decoupling not only alleviates the conflict between the visual encoder’s roles in understanding and generation, but also enhances the framework’s flexibility.
+ Janus surpasses previous unified models and matches or exceeds the performance of task-specific models.
+ The simplicity, high flexibility, and effectiveness of Janus make it a strong candidate for next-generation unified multimodal models.
+
+ [Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation](https://arxiv.org/abs/2410.13848)
+
+ [**GitHub Repository**](https://github.com/deepseek-ai/Janus)
+
+ <div align="center">
+ <img alt="image" src="teaser.png" style="width:90%;">
+ </div>
+
+
+ ## 2. Model Summary
+
+ Janus is a unified understanding and generation MLLM that decouples visual encoding for multimodal understanding and generation.
+ It is built on DeepSeek-LLM-1.3b-base, which was trained on a corpus of approximately 500B text tokens.
+ For multimodal understanding, Janus uses [SigLIP-L](https://huggingface.co/timm/ViT-L-16-SigLIP-384) as the vision encoder, which supports 384 x 384 image input. For image generation, it uses the image tokenizer from [LlamaGen](https://github.com/FoundationVision/LlamaGen) with a downsample rate of 16.
+
+ <div align="center">
+ <img alt="image" src="arch.jpg" style="width:90%;">
+ </div>
+
+ ## 3. Quick Start
+
+ Please refer to the [**GitHub Repository**](https://github.com/deepseek-ai/Janus).
+
+
+ ## 4. License
+
+ This code repository is licensed under [the MIT License](https://github.com/deepseek-ai/DeepSeek-LLM/blob/HEAD/LICENSE-CODE). The use of Janus models is subject to the [DeepSeek Model License](https://github.com/deepseek-ai/DeepSeek-LLM/blob/HEAD/LICENSE-MODEL).
+ ## 5. Citation
+
+ ```
+ @misc{wu2024janus,
+       title={Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation},
+       author={Chengyue Wu and Xiaokang Chen and Zhiyu Wu and Yiyang Ma and Xingchao Liu and Zizheng Pan and Wen Liu and Zhenda Xie and Xingkai Yu and Chong Ruan and Ping Luo},
+       year={2024},
+       eprint={2410.13848},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/2410.13848},
+ }
+ ```
+
+ ## 6. Contact
+
+ If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).
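
A minimal multimodal-understanding sketch in the spirit of the usage example in the GitHub repository linked from the Quick Start section above. The `janus` package, `VLChatProcessor`, `load_pil_images` helper, and the `deepseek-ai/Janus-1.3B` model id follow that repository and are assumptions here, not an API guaranteed by this card; defer to the repository for the canonical code.

```python
# Sketch only: assumes the janus package from the GitHub repository is
# installed and that the checkpoint loads via trust_remote_code.
import torch
from transformers import AutoModelForCausalLM
from janus.models import VLChatProcessor     # assumed import path (from the repo)
from janus.utils.io import load_pil_images   # assumed helper (from the repo)

model_path = "deepseek-ai/Janus-1.3B"  # example id; adjust to the checkpoint you use
processor = VLChatProcessor.from_pretrained(model_path)
tokenizer = processor.tokenizer

model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
model = model.to(torch.bfloat16).cuda().eval()

conversation = [
    {"role": "User",
     "content": "<image_placeholder>\nDescribe this image.",
     "images": ["example.jpg"]},
    {"role": "Assistant", "content": ""},
]

# Pack text + image into embeddings, then generate with the language model.
pil_images = load_pil_images(conversation)
inputs = processor(conversations=conversation, images=pil_images,
                   force_batchify=True).to(model.device)
inputs_embeds = model.prepare_inputs_embeds(**inputs)

outputs = model.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
)
print(tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True))
```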
arch.jpg ADDED
config.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "hidden_size": 2048,
+   "intermediate_size": 5632,
+   "max_position_embeddings": 16384,
+   "model_type": "llama",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "num_key_value_heads": 16,
+   "torch_dtype": "bfloat16",
+   "vocab_size": 102400,
+   "eos_token_id": 100001,
+   "pad_token_id": 100000,
+   "transformers_version": "4.38.2"
+ }
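
The `config.json` above describes a plain Llama-style decoder with the DeepSeek-LLM-1.3b-base dimensions mentioned in the model card (24 layers, hidden size 2048, vocabulary 102400). A small sanity-check sketch, assuming the file sits in the current directory:

```python
from transformers import LlamaConfig

# Reads ./config.json; the printed values mirror the file above.
config = LlamaConfig.from_pretrained(".")
print(config.num_hidden_layers, config.hidden_size, config.vocab_size)
# expected: 24 2048 102400
```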
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9328d29bb8feca7334b9fdcdcc644d6176ad4b6873117e6225fce64c9e3dda53
+ size 3305337584
preprocessor_config.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "background_color": [
+     127,
+     127,
+     127
+   ],
+   "do_normalize": true,
+   "image_mean": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "image_processor_type": "VLMImageProcessor",
+   "image_size": 384,
+   "image_std": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "min_size": 14,
+   "processor_class": "VLChatProcessor",
+   "rescale_factor": 0.00392156862745098
+ }
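
In this config, `rescale_factor` is 1/255 and the 0.5 mean/std map pixels to roughly [-1, 1]. Below is a rough sketch of the preprocessing these values imply; only the constants come from the file, while the resize-and-pad policy (long side to 384, padding with `background_color`) is an assumption about `VLMImageProcessor`, not its actual implementation.

```python
import numpy as np
from PIL import Image

# Constants taken from preprocessor_config.json
IMAGE_SIZE = 384
RESCALE_FACTOR = 0.00392156862745098   # == 1 / 255
IMAGE_MEAN = np.array([0.5, 0.5, 0.5], dtype=np.float32)
IMAGE_STD = np.array([0.5, 0.5, 0.5], dtype=np.float32)
BACKGROUND = (127, 127, 127)

def preprocess(path: str) -> np.ndarray:
    """Resize the long side to 384, pad to square with the background color,
    rescale to [0, 1], then normalize to roughly [-1, 1] (assumed policy)."""
    img = Image.open(path).convert("RGB")
    scale = IMAGE_SIZE / max(img.size)
    resized = img.resize((round(img.width * scale), round(img.height * scale)))
    canvas = Image.new("RGB", (IMAGE_SIZE, IMAGE_SIZE), BACKGROUND)
    canvas.paste(resized, ((IMAGE_SIZE - resized.width) // 2,
                           (IMAGE_SIZE - resized.height) // 2))
    x = np.asarray(canvas, dtype=np.float32) * RESCALE_FACTOR
    x = (x - IMAGE_MEAN) / IMAGE_STD   # per-channel normalization
    return x.transpose(2, 0, 1)        # HWC -> CHW for PyTorch-style models
```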
processor_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "add_special_token": false,
+   "ignore_id": -100,
+   "image_tag": "<image_placeholder>",
+   "mask_prompt": true,
+   "num_image_tokens": 576,
+   "processor_class": "VLChatProcessor",
+   "sft_format": "deepseek_old"
+ }
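
`num_image_tokens: 576` is consistent with the numbers in the model card: a 384 x 384 image with a downsample factor of 16 yields a 24 x 24 grid of tokens.

```python
# 384 / 16 = 24 tokens per side, so 24 * 24 = 576 image tokens per image.
image_size, downsample = 384, 16
grid = image_size // downsample
assert grid * grid == 576
```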
rename_key.py ADDED
@@ -0,0 +1,13 @@
+ from safetensors import safe_open
+
+ from safetensors.torch import save_file
+
+
+ if __name__ == "__main__":
+     tensors = {}  # language-model weights with the prefix stripped
+     with safe_open("model.safetensors", framework="pt", device="cpu") as f:  # lazy read on CPU, no GPU required
+         for k in f.keys():
+             if k.startswith("language_model."):  # keep only the LLM weights
+                 tensors[k.removeprefix("language_model.")] = f.get_tensor(k)  # strip the prefix
+
+     save_file(tensors, "model_fix.safetensors")  # write the renamed checkpoint
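
`rename_key.py` strips the `language_model.` prefix, leaving keys shaped like a plain Llama state dict. If the intent is to pair the resulting `model_fix.safetensors` with the `config.json` above (an assumption; the commit does not state it), loading could look roughly like this:

```python
import torch
from safetensors.torch import load_file
from transformers import LlamaConfig, LlamaForCausalLM

# Assumption: model_fix.safetensors (output of rename_key.py) holds a
# plain LlamaForCausalLM state dict that matches ./config.json.
config = LlamaConfig.from_pretrained(".")
model = LlamaForCausalLM(config)
state_dict = load_file("model_fix.safetensors")
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(f"missing: {len(missing)}, unexpected: {len(unexpected)}")
model = model.to(torch.bfloat16).eval()
```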
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "bos_token": "<|begin▁of▁sentence|>",
+   "eos_token": "<|end▁of▁sentence|>",
+   "pad_token": "<|▁pad▁|>"
+ }
teaser.png ADDED
tokenizer.json ADDED (diff too large to render)
tokenizer_config.json ADDED (diff too large to render)
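
Since the README's update note attributes earlier poor generations to a missing `pad_token` in `tokenizer_config.json`, a quick check that the padding token is now defined (a sketch, assuming the tokenizer files in this folder load with `AutoTokenizer`):

```python
from transformers import AutoTokenizer

# Loads tokenizer.json / tokenizer_config.json from the current folder.
tok = AutoTokenizer.from_pretrained(".")
print(repr(tok.pad_token), tok.pad_token_id)  # should no longer be None
```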