
TL;DR
A comprehensive survey and position paper on building foundation models that are both reliable (robust, calibrated, safe) and responsible (fair, private, transparent)—essential reading for deploying AI systems in the real world.
Abstract
Foundation models have fundamentally transformed the artificial intelligence landscape, enabling unprecedented capabilities across diverse domains. However, deploying these powerful systems responsibly requires addressing critical challenges related to reliability and responsibility. This survey provides a comprehensive examination of the key dimensions of reliable and responsible foundation models, including robustness, uncertainty quantification, safety, fairness, privacy, and transparency. We discuss the current state of research, identify open challenges, and outline promising directions for building foundation models that are both capable and trustworthy.
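As a minimal illustration of the calibration dimension mentioned above, the sketch below computes expected calibration error (ECE) from a model's top-class confidences. The function name, the 15-bin default, and the toy data are illustrative assumptions, not code from the survey.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Weighted average gap between mean confidence and accuracy per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    # Bin index for each prediction; clip so confidence == 0.0 lands in the first bin.
    idx = np.clip(np.digitize(confidences, bins, right=True) - 1, 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = idx == b
        if not mask.any():
            continue
        avg_conf = confidences[mask].mean()   # mean reported confidence in the bin
        avg_acc = correct[mask].mean()        # empirical accuracy in the bin
        ece += mask.mean() * abs(avg_conf - avg_acc)
    return ece

# Toy usage: an overconfident model (reports 0.7-1.0 confidence, ~60% accuracy)
# yields a clearly nonzero ECE.
rng = np.random.default_rng(0)
conf = rng.uniform(0.7, 1.0, size=1000)
correct = rng.uniform(size=1000) < 0.6
print(f"ECE: {expected_calibration_error(conf, correct):.3f}")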
Citation
@article{yang2025reliable,
  title   = {Reliable and Responsible Foundation Models},
  author  = {Yang, Xinyu and Han, Junlin and Bommasani, Rishi and others},
  journal = {Transactions on Machine Learning Research},
  year    = {2025}
}