MARVEL: Multidimensional Abstraction and Reasoning through Visual Evaluation and Learning

1 Information Sciences Institute, University of Southern California
2 Tencent AI Lab, Bellevue, WA
3 Department of Computer Science, Faculty of Science, Vrije Universiteit Amsterdam
*Equal Contribution

MARVEL is a multidimensional AVR benchmark with 770 puzzles composed of six core knowledge patterns, geometric and abstract shapes, and five different task configurations.

Abstract

While multi-modal large language models (MLLMs) have shown significant progress on many popular visual reasoning benchmarks, whether they possess abstract visual reasoning abilities remains an open question. Similar to Sudoku puzzles, abstract visual reasoning (AVR) problems require finding high-level patterns (e.g., repetition constraints) that control the input shapes (e.g., digits) in a specific task configuration (e.g., matrix). However, existing AVR benchmarks consider only a limited set of patterns (addition, conjunction), input shapes (rectangle, square), and task configurations (3-by-3 matrices). To evaluate MLLMs' reasoning abilities comprehensively, we introduce MARVEL, a multidimensional AVR benchmark with 770 puzzles composed of six core knowledge patterns, geometric and abstract shapes, and five different task configurations. To inspect whether model accuracy is grounded in perception and reasoning, MARVEL complements the general AVR question with perception questions in a hierarchical evaluation framework. We conduct comprehensive experiments on MARVEL with nine representative MLLMs in zero-shot and few-shot settings. Our experiments reveal that all models show near-random performance on the AVR question, with significant performance gaps (40%) compared to humans across all patterns and task configurations. Further analysis of the perception questions reveals that MLLMs struggle to comprehend the visual features (near-random performance) and even to count the panels in the puzzle (<45%), hindering their ability to reason abstractly. We release our entire code and dataset.
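
A minimal sketch (in Python) of the hierarchical evaluation idea described above: an AVR answer only counts as "grounded" when the paired perception questions about the same puzzle are also answered correctly. The field names (avr_question, perception_qas, etc.) and the model_answer interface are illustrative assumptions, not the released dataset schema.

# Hypothetical sketch of MARVEL's hierarchical evaluation idea; field names
# and the model interface are illustrative, not the released API.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class PerceptionQA:
    question: str
    answer: str            # gold answer, e.g. a panel count or a choice label

@dataclass
class Puzzle:
    image_path: str
    avr_question: str
    avr_answer: str        # gold choice label, e.g. "B"
    perception_qas: List[PerceptionQA] = field(default_factory=list)

def hierarchical_accuracy(
    puzzles: List[Puzzle],
    model_answer: Callable[[str, str], str],
) -> Tuple[float, float]:
    """Return (AVR accuracy, AVR accuracy grounded in correct perception)."""
    solved = grounded = 0
    for p in puzzles:
        avr_correct = model_answer(p.image_path, p.avr_question) == p.avr_answer
        perception_correct = all(
            model_answer(p.image_path, qa.question) == qa.answer
            for qa in p.perception_qas
        )
        solved += avr_correct
        grounded += avr_correct and perception_correct
    n = len(puzzles)
    return solved / n, grounded / n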

Main Result

Main zero-shot accuracy on MARVEL across all MLLMs. For both open- and closed-source categories, all models show near-random performance, with a large gap (40%) compared to human performance. Further results based on the perception questions (left columns) highlight the poor visual perception ability of current models.

Result across patterns and task configurations

MLLM and human performance across patterns and task configurations. Claude3 (Opus) exhibits balanced and strong reasoning ability, ranking in the top two across all patterns. Three out of four MLLMs rank first in different task configurations, which underscores the potential bias of single-configuration evaluation and the importance of multidimensional, comprehensive evaluation.

Few-shot Result

MLLM performance under different few-shot CoT settings. None of these approaches yields a positive impact; instead, they lead to a significant drop in performance. Given the complexity and challenging nature of the dataset, the effectiveness of few-shot prompting on MARVEL remains minimal.
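
For concreteness, a minimal sketch of how a few-shot CoT text scaffold can be assembled. In the actual multimodal setting each demonstration also interleaves the puzzle image, and the exact prompt wording used in the paper may differ; this helper is an illustrative assumption.

# Hypothetical few-shot CoT prompt builder; prompt wording is illustrative.
def build_few_shot_cot_prompt(demos, test_question):
    """demos: list of (question, reasoning_chain, answer) triples."""
    parts = []
    for question, reasoning, answer in demos:
        parts.append(
            f"Question: {question}\n"
            f"Reasoning: {reasoning}\n"
            f"Answer: {answer}\n"
        )
    # Leave the final reasoning slot open for the model to complete.
    parts.append(f"Question: {test_question}\nReasoning:")
    return "\n".join(parts)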

Few-shot example

An example of zero- and few-shot results from Claude3 (Opus). With the demonstration, the model learns to focus on the correct pattern (blue) at the beginning of its reasoning. However, it fails to adapt precisely to the input shapes in the puzzle (red), leading to errors in the subsequent reasoning.

Since poor visual perception is the main obstacle to improving MLLMs' abstract reasoning ability, in this section, we conduct two additional experiments to understand these models' potential when perceptual barriers are mitigated.

Experiment 1: Pattern Reasoning

We probe the models' reasoning ability by presenting them with the same puzzle but asking for the possible underlying pattern in a multiple-choice setting. Closed-source models show non-random results when reasoning about the underlying pattern, whereas nearly all open-source models struggle to outperform the random baseline. This gap indicates that closed-source models partially understand the patterns, but more accurate visual perception is needed to complete the entire task.
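
A sketch of how such a pattern-reasoning probe could be formatted: the model sees the same puzzle image but picks the governing high-level pattern from a candidate list instead of completing the puzzle. The prompt wording and the pattern names in the usage example are placeholders, not the exact benchmark text.

# Hypothetical multiple-choice prompt for the pattern-reasoning probe.
def build_pattern_mcq_prompt(pattern_choices):
    """Format the candidate patterns as lettered options."""
    options = "\n".join(
        f"({chr(ord('A') + i)}) {name}"
        for i, name in enumerate(pattern_choices)
    )
    return (
        "Which high-level pattern governs the shapes in this puzzle?\n"
        f"{options}\n"
        "Answer with a single option letter."
    )

# Placeholder pattern names, for illustration only.
print(build_pattern_mcq_prompt(["repetition", "addition", "rotation", "symmetry"]))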

Experiment 2: MARVEL_TEXT

We add accurate text descriptions of the puzzles for a subset of MARVEL. The results show a significant boost in performance, with GPT-4V achieving human-level accuracy (65%). In contrast, open-source MLLMs still lag behind, indicating that while enhanced textual descriptions improve performance, they do not fully bridge the gap between closed-source and open-source models. This further supports that closed-source models possess superior reasoning capabilities that are often overshadowed by weak visual perception. The distinction between different MLLMs also underscores the potential effectiveness of MARVEL in evaluating AVR ability, particularly when the weakness of visual perception is addressed.
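
A minimal sketch of how a text-augmented prompt in this setting could be assembled, assuming per-panel descriptions are available; the message structure and naming below are illustrative, and the released subset may format descriptions differently.

# Hypothetical prompt builder for the text-augmented (MARVEL_TEXT) setting.
def build_marvel_text_prompt(panel_descriptions, avr_question):
    """Prepend accurate per-panel text descriptions to the AVR question."""
    described = "\n".join(
        f"Panel {i + 1}: {desc}"
        for i, desc in enumerate(panel_descriptions)
    )
    return (
        "The puzzle consists of the following panels:\n"
        f"{described}\n\n"
        f"{avr_question}"
    )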

BibTeX

@article{jiang2024marvel,
  title={MARVEL: Multidimensional Abstraction and Reasoning through Visual Evaluation and Learning},
  author={Jiang, Yifan and Zhang, Jiarui and Sun, Kexuan and Sourati, Zhivar and Ahrabian, Kian and Ma, Kaixin and Ilievski, Filip and Pujara, Jay},
  journal={arXiv preprint arXiv:2404.13591},
  year={2024}
}