FINEMATCH: Aspect-based Fine-grained Image and Text Mismatch Detection and Correction

1University of Rochester, 2Adobe Research

ECCV 2024

Abstract

Recent progress in large-scale pre-training has led to the development of advanced vision-language models (VLMs) with remarkable proficiency in comprehending and generating multimodal content. Despite their impressive ability to perform complex reasoning, current VLMs often struggle to effectively and precisely capture compositional information on both the image and the text side. To address this, we propose FINEMATCH, a new aspect-based fine-grained text and image matching benchmark focusing on text and image mismatch detection and correction. The benchmark introduces a novel task for boosting and evaluating the compositionality of VLMs in aspect-based fine-grained text and image matching. In this task, given an image-text pair that may contain between 0 and 3 mismatches, models must identify the mismatched aspect phrases within the caption, determine each aspect's class, and propose corrections. To evaluate model performance on this new task, we propose a new evaluation metric named ITM-IoU, which our experiments show correlates highly with human evaluation. In addition, we provide a comprehensive experimental analysis of existing mainstream VLMs under both fully supervised learning and in-context learning settings. We find that models trained on FINEMATCH demonstrate enhanced proficiency in detecting fine-grained text and image mismatches. Moreover, models with strong multimodal in-context learning abilities (e.g., GPT-4V, Gemini Pro Vision) are not as skilled at fine-grained compositional image and text matching analysis. With FINEMATCH, we are able to build a system for detecting and correcting hallucinations in text-to-image generation.
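As a rough illustration of how an IoU-style matching score could work, the sketch below treats each predicted and ground-truth mismatch as an (aspect phrase, aspect class, correction) triple and computes the intersection-over-union of the two sets. This is a hypothetical simplification for intuition only, not the paper's exact definition of ITM-IoU.

```python
# Hypothetical sketch of an IoU-style image-text mismatch score.
# Each mismatch is an (aspect_phrase, aspect_class, correction) triple;
# the score is the set IoU between predictions and gold annotations.
# NOTE: this is an illustrative reading, not the official ITM-IoU metric.

def itm_iou(predicted, gold):
    """Set IoU between predicted and gold mismatch triples for one pair."""
    pred, ref = set(predicted), set(gold)
    if not pred and not ref:
        # No mismatches predicted and none annotated: perfect agreement.
        return 1.0
    return len(pred & ref) / len(pred | ref)

# Toy example: one of two gold mismatches is recovered.
pred = [("red car", "attribute", "blue car")]
gold = [("red car", "attribute", "blue car"),
        ("two dogs", "counting", "three dogs")]
print(itm_iou(pred, gold))  # 0.5
```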

Introduction

Task Illustration


Given an image-text pair, FINEMATCH requires VLMs to detect the mismatched aspect phrases and their aspect classes in the caption, and then provide the corresponding corrections.
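The task format described above can be sketched as a small data structure: each image-text pair carries up to three mismatches, and each mismatch pairs an aspect phrase with its class and a proposed correction. The field names below are hypothetical, chosen for illustration.

```python
# Illustrative FineMatch-style task record (field names are hypothetical):
# a caption plus 0-3 mismatches, each with an aspect phrase, an aspect
# class, and a proposed correction.
example = {
    "caption": "A red car parked next to two dogs.",
    "mismatches": [
        {"aspect_phrase": "red car", "aspect_class": "attribute",
         "correction": "blue car"},
        {"aspect_phrase": "two dogs", "aspect_class": "counting",
         "correction": "three dogs"},
    ],
}

def apply_corrections(caption, mismatches):
    """Produce a corrected caption by substituting each mismatched phrase."""
    for m in mismatches:
        caption = caption.replace(m["aspect_phrase"], m["correction"])
    return caption

print(apply_corrections(example["caption"], example["mismatches"]))
# A blue car parked next to three dogs.
```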


The initial data source distribution (inner circle) and domain distribution (outer circle) for the FINEMATCH training set (left) and test set (right).

BibTeX


@article{hua2024finematch,
  title={FINEMATCH: Aspect-based Fine-grained Image and Text Mismatch Detection and Correction},
  author={Hua, Hang and Shi, Jing and Kafle, Kushal and Jenni, Simon and Zhang, Daoan and Collomosse, John and Cohen, Scott and Luo, Jiebo},
  journal={arXiv preprint arXiv:2404.14715},
  year={2024}
}