MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models
- Chejian Xu,
- Jiawei Zhang,
- Zhaorun Chen,
- Chulin Xie,
- Mintong Kang,
- Zhuowen Yuan,
- Zidi Xiong,
- Chenhui Zhang,
- Lingzhi Yuan,
- Yi Zeng,
- Peiyang Xu,
- Chengquan Guo,
- Andy Zhou,
- Jeffrey Ziwei Tan,
- Zhun Wang,
- Alexander Xiong,
- Xuandong Zhao,
- Yu Gai,
- Francesco Pinto,
- Yujin Potter,
- Zhen Xiang,
- Zinan Lin,
- Dan Hendrycks,
- Dawn Song,
- Bo Li
ICLR 2025
Multimodal foundation models (MMFMs) play a crucial role in various applications, including autonomous driving, healthcare, and virtual assistants. However, several studies have revealed vulnerabilities in these models, such as text-to-image models generating unsafe content. Existing benchmarks for multimodal models either predominantly assess the helpfulness of these models or focus only on limited perspectives such as fairness and privacy. In this paper, we present the first unified platform, MMDT (Multimodal DecodingTrust), designed to provide a comprehensive safety and trustworthiness evaluation for MMFMs. Our platform assesses models from multiple perspectives, including safety, hallucination, fairness/bias, privacy, adversarial robustness, and out-of-distribution (OOD) generalization. For each perspective, we design various evaluation scenarios and red-teaming algorithms across different tasks to generate challenging data, forming a high-quality benchmark. We evaluate a range of multimodal models using MMDT, and our findings reveal a series of vulnerabilities and areas for improvement across these perspectives. This work introduces the first comprehensive safety and trustworthiness evaluation platform for MMFMs, paving the way for developing safer and more reliable MMFMs and systems.