Research project funded by the Young Scientists Fund of the National Natural Science Foundation of China

Title: Uncertainty quantification-based methodology for trustworthiness
assessment of deep learning systems

PI: Xiaoge Zhang

Abstract: The malfunction of deep learning systems in safety-critical applications (e.g., aerospace) can lead to devastating outcomes. Ensuring the trustworthiness of deep learning systems in high-stakes decision settings is therefore an imperative problem. This project aims to develop an uncertainty quantification-based methodology for trustworthiness modeling of deep learning systems that accounts for (i) heterogeneous sources of risk in the input data that deep learning systems encounter in the open world, (ii) the multi-source uncertainty inherent in model reliability for each individual prediction, and (iii) the coupled relationship between input data-related risk and prediction-specific model reliability. We believe the proposed effort will contribute novel and effective theories and models for enhancing the trustworthiness of deep learning systems, offer new insights into trustworthiness modeling in open environments, and advance the development of trustworthy deep learning systems.
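
To make the notion of per-prediction uncertainty concrete, the sketch below illustrates one common uncertainty-quantification baseline, Monte Carlo dropout, in PyTorch. It is only an illustrative example, not the methodology proposed in this project; the model architecture, layer sizes, and inputs are hypothetical placeholders.

```python
# Illustrative sketch (not the project's proposed method): per-prediction
# uncertainty via Monte Carlo dropout, a common uncertainty-quantification
# baseline. All model and layer choices here are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    """Hypothetical classifier whose dropout is kept active at test time."""
    def __init__(self, in_dim=20, hidden=64, n_classes=3, p_drop=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Return mean class probabilities and predictive entropy per input.

    Dropout stays on (model.train()), so each forward pass samples a slightly
    different sub-network; the spread across samples serves as a crude proxy
    for the model's reliability on that individual input.
    """
    model.train()  # keep dropout stochastic during inference
    probs = torch.stack(
        [F.softmax(model(x), dim=-1) for _ in range(n_samples)], dim=0
    )
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

if __name__ == "__main__":
    torch.manual_seed(0)
    model = SmallClassifier()
    x = torch.randn(5, 20)  # hypothetical inputs
    mean_probs, entropy = mc_dropout_predict(model, x)
    for i, h in enumerate(entropy):
        print(f"input {i}: predicted class {mean_probs[i].argmax().item()}, "
              f"predictive entropy {h.item():.3f}")
```

Inputs with high predictive entropy can be flagged for abstention or human review, which is one simple way an uncertainty estimate can feed into a trustworthiness assessment.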