The Perceptual Observatory: Characterizing Robustness and Grounding in MLLMs

Arizona State University
*Equal contribution.

WACV 2026


Overview

Overview illustration of The Perceptual Observatory
Overview of The PERCEPTUAL OBSERVATORY, which probes the otherwise opaque perceptual understanding of MLLMs by measuring properties motivated by human visual perception and robustness along multiple axes. We illustrate the framework for the properties that reveal true perceptual understanding of MLLMs.

Abstract

Recent advances in multimodal large language models (MLLMs) have yielded increasingly powerful models, yet their perceptual capacities remain poorly characterized. In practice, most model families scale the language component while reusing nearly identical vision encoders (e.g., Qwen2.5-VL 3B/7B/72B), which raises pivotal concerns about whether progress reflects genuine visual grounding or reliance on internet-scale textual world knowledge. Existing evaluation methods emphasize end-task accuracy, overlooking robustness, attribution fidelity, and reasoning under controlled perturbations. We present The PERCEPTUAL OBSERVATORY, a framework that characterizes MLLMs across complementary verticals: (i) simple vision tasks, such as face matching and text-in-vision comprehension; and (ii) local-to-global understanding, encompassing image matching, a grid pointing game, and attribute localization, which together probe general visual grounding. Each vertical is instantiated with ground-truth datasets of faces and words, systematically perturbed through pixel-based augmentations and diffusion-based stylized illusions. The PERCEPTUAL OBSERVATORY moves beyond leaderboard accuracy to yield insights into how MLLMs preserve perceptual grounding and relational structure under perturbation, providing a principled foundation for analyzing the strengths and weaknesses of current and future models.
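To make the perturbation axis concrete, the sketch below shows how pixel-based augmentations might be generated for a ground-truth image so that clean and perturbed copies can be shown to an MLLM and the resulting accuracy gap measured. This is an illustrative example only, not the paper's exact protocol: the function names (e.g., `perturbation_suite`) and the specific augmentations and parameter values are assumptions chosen for clarity.

```python
# Illustrative sketch (assumed, not the paper's exact pipeline): build a small
# suite of pixel-based perturbations of a ground-truth image. Each perturbed
# copy can be paired with the clean image when querying an MLLM on a task such
# as face matching or text-in-vision comprehension.
import io

import numpy as np
from PIL import Image, ImageFilter


def gaussian_blur(img: Image.Image, radius: float = 2.0) -> Image.Image:
    # Low-pass filtering removes high-frequency cues (e.g., fine facial detail).
    return img.filter(ImageFilter.GaussianBlur(radius))


def gaussian_noise(img: Image.Image, sigma: float = 15.0) -> Image.Image:
    # Additive pixel noise; sigma is in 8-bit intensity units.
    arr = np.asarray(img).astype(np.float32)
    noisy = arr + np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))


def jpeg_compress(img: Image.Image, quality: int = 15) -> Image.Image:
    # Aggressive JPEG re-encoding introduces blocking artifacts.
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()


PERTURBATIONS = {
    "blur": gaussian_blur,
    "noise": gaussian_noise,
    "jpeg": jpeg_compress,
}


def perturbation_suite(path: str) -> dict[str, Image.Image]:
    """Return the clean image plus one perturbed copy per augmentation."""
    clean = Image.open(path).convert("RGB")
    variants = {"clean": clean}
    for name, fn in PERTURBATIONS.items():
        variants[name] = fn(clean)
    return variants
```

Under this kind of setup, a robustness score for a given vertical could be the accuracy gap between the clean and perturbed conditions, with diffusion-based stylized illusions (not shown here) providing a second, stronger perturbation axis.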

BibTeX