arxiv:2502.06788

EVEv2: Improved Baselines for Encoder-Free Vision-Language Models

Published on Feb 10 · Submitted by Paranioar on Feb 11
Authors:

Abstract

Existing encoder-free vision-language models (VLMs) are rapidly narrowing the performance gap with their encoder-based counterparts, highlighting the promising potential for unified multimodal systems with structural simplicity and efficient deployment. We systematically clarify the performance gap between VLMs using pre-trained vision encoders, discrete tokenizers, and minimalist visual layers from scratch, deeply excavating the under-examined characteristics of encoder-free VLMs. We develop efficient strategies for encoder-free VLMs that rival mainstream encoder-based ones. After an in-depth investigation, we launch EVEv2.0, a new and improved family of encoder-free VLMs. We show that: (i) Properly decomposing and hierarchically associating vision and language within a unified model reduces interference between modalities. (ii) A well-designed training strategy enables effective optimization for encoder-free VLMs. Through extensive evaluation, our EVEv2.0 represents a thorough study for developing a decoder-only architecture across modalities, demonstrating superior data efficiency and strong vision-reasoning capability. Code is publicly available at: https://github.com/baaivision/EVE.
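To make point (i) of the abstract concrete, here is a minimal sketch, assuming a PyTorch decoder-only block in which one shared causal self-attention mixes the full multimodal sequence while the normalization and feed-forward weights are split per modality. All names (ModalitySplitBlock, ffn_vis, ffn_txt, is_vision) are illustrative assumptions, not the released EVE code.

```python
# Minimal sketch (not the authors' implementation) of modality decomposition in a
# decoder-only block: shared causal attention over vision+text tokens, with
# modality-specific LayerNorm and FFN branches to reduce cross-modal interference.
import torch
import torch.nn as nn


class ModalitySplitBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Shared causal self-attention over the interleaved vision+text sequence.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Modality-specific normalization and feed-forward branches.
        self.norm_vis, self.norm_txt = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.ffn_vis = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.ffn_txt = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor, is_vision: torch.Tensor) -> torch.Tensor:
        # x: (B, L, D); is_vision: (B, L) boolean mask marking vision tokens.
        L = x.size(1)
        causal = torch.triu(torch.ones(L, L, dtype=torch.bool, device=x.device), diagonal=1)
        attn_out, _ = self.attn(x, x, x, attn_mask=causal)
        x = x + attn_out
        # Route each token through its modality-specific norm + FFN, then merge back.
        m = is_vision.unsqueeze(-1)
        vis_branch = self.ffn_vis(self.norm_vis(x))
        txt_branch = self.ffn_txt(self.norm_txt(x))
        return x + torch.where(m, vis_branch, txt_branch)


# Usage: a batch with 16 vision tokens followed by 8 text tokens.
block = ModalitySplitBlock(dim=256)
tokens = torch.randn(2, 24, 256)
mask = torch.zeros(2, 24, dtype=torch.bool)
mask[:, :16] = True
out = block(tokens, mask)  # (2, 24, 256)
```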

Community

Paper submitter

💡 Highlights:

🔥 Superior Capability: A from-scratch encoder-free LVLM with a minimalist patch embedding layer and support for arbitrary image aspect ratios, steadily closing the gap with several modular encoder-based LVLMs (see the sketch after these highlights).

🔥 Data Efficiency: Only 92M publicly available samples filtered from OpenImages, SAM, LAION, and Datacomp are used for pre-training, plus 7.3M Infinity-MM and LLaVA-OneVision SFT data for EVE-7B-HD-v2.0.

🔥 Pioneering Route: We aim to provide an efficient, transparent, and practical training strategy and procedure for developing a pure decoder-only architecture across modalities.
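The "minimalist patch embedding layer" and "arbitrary image aspect ratio" in the first highlight can be pictured as a single strided convolution that patchifies an input of any resolution into a flat token sequence for the LLM. The sketch below is an illustration under our own assumptions (the class name PatchEmbed, dimensions, and padding scheme are not from the released implementation).

```python
# Minimal sketch (assumptions, not the released EVE code) of a minimalist patch
# embedding layer: one strided convolution maps an image of arbitrary aspect
# ratio to a flat token sequence consumable by the decoder-only LLM.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchEmbed(nn.Module):
    def __init__(self, dim: int = 1024, patch: int = 14):
        super().__init__()
        self.patch = patch
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W) with any H, W; pad so both are divisible by the patch size.
        _, _, h, w = image.shape
        pad_h = (-h) % self.patch
        pad_w = (-w) % self.patch
        image = F.pad(image, (0, pad_w, 0, pad_h))
        x = self.proj(image)                 # (B, dim, H/p, W/p)
        return x.flatten(2).transpose(1, 2)  # (B, num_patches, dim)


# Usage: a 448x672 (landscape) image yields a 32x48 = 1536-token sequence.
tokens = PatchEmbed()(torch.randn(1, 3, 448, 672))
print(tokens.shape)  # torch.Size([1, 1536, 1024])
```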

Models citing this paper 1

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 1