Congratulations to Long XUE on passing his PhD oral defense!
Principled quantification of predictive uncertainty in deep neural networks is crucial for ensuring reliable performance and trustworthy deployment in open-world environments. This thesis presents three complementary methodologies that advance uncertainty quantification and facilitate responsible utilization of deep learning in open-world settings.
First, we propose a continuous optimization framework for constructing neural network (NN)–based prediction intervals (PIs). The proposed method formulates PI construction as a differentiable optimization problem that explicitly prioritizes target coverage while minimizing the width of PIs. By incorporating distance-based differentiable constraints, a shared-bottom architecture, gradient-conflict mitigation, and a coverage–width–aware early stopping mechanism, the approach yields significantly tighter intervals than state-of-the-art PI methods.
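To illustrate the core idea of a differentiable coverage-width objective, here is a minimal numpy sketch. The function name `pi_loss`, the sigmoid softening, and the hyperparameters (`target`, `softness`, `lam`) are illustrative assumptions, not the thesis formulation: a point counts as "softly covered" in proportion to its sigmoid distance inside both interval bounds, so coverage becomes differentiable and can trade off against average width.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pi_loss(lower, upper, y, target=0.9, softness=10.0, lam=1.0):
    """Hypothetical differentiable coverage-width loss for prediction
    intervals (a sketch, not the thesis's exact objective).

    A target y is softly covered when it sits above `lower` and below
    `upper`; sigmoids make this indicator differentiable.
    """
    soft_cover = sigmoid(softness * (y - lower)) * sigmoid(softness * (upper - y))
    coverage = soft_cover.mean()
    width = np.mean(upper - lower)
    # Penalize any shortfall below target coverage, then minimize width.
    return np.maximum(0.0, target - coverage) ** 2 + lam * width
```

In a deep learning framework the same expression, written with tensor ops, would be minimized by gradient descent over the network producing `lower` and `upper`.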
Second, we develop an uncertainty-informed risk management framework for open-world environments using spectral-normalized neural Gaussian processes. Our method combines distance-preserving representation learning with a distance-aware output layer to yield Gaussian process-like, distance-sensitive uncertainty estimates. Through a well-defined thresholding mechanism based on Youden’s index, uncertainty estimates are translated into actionable risk levels, enabling reliable uncertainty-aware decision making across normal, shifted, and Out-of-Distribution (OOD) conditions.
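The thresholding step can be sketched generically: given uncertainty scores for in-distribution and OOD samples, Youden's index J = TPR − FPR picks the cutoff that best separates the two. The function below is a plain-numpy illustration under that standard definition; the scoring convention (higher score = more uncertain) and variable names are assumptions, not the thesis implementation.

```python
import numpy as np

def youden_threshold(scores, labels):
    """Choose the uncertainty threshold maximizing Youden's J = TPR - FPR.

    scores: uncertainty estimates (higher = more uncertain);
    labels: 1 for OOD/positive samples, 0 for in-distribution.
    Generic sketch of Youden's-index thresholding.
    """
    pos = labels.sum()
    neg = len(labels) - pos
    best_j, best_t = -np.inf, None
    for t in np.unique(scores):
        pred = scores >= t  # flag as high-risk at this cutoff
        tpr = np.sum(pred & (labels == 1)) / pos
        fpr = np.sum(pred & (labels == 0)) / neg
        j = tpr - fpr
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

Inputs scoring above the chosen threshold would then be routed to a higher risk level (e.g. flagged for human review) rather than trusted automatically.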
Third, we propose a unified framework that integrates conformal prediction with distance-based OOD detection. By filtering OOD inputs and optimizing prediction set size, the method seeks to preserve the statistical guarantee of conformal prediction while yielding tighter, more informative prediction sets. Computational experiments demonstrate competitive OOD detection performance and substantial reductions in the average prediction-set size, all achieved efficiently within a single forward pass through the neural network.
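A minimal sketch of the combined mechanism, assuming standard split conformal prediction with the 1 − p(label) nonconformity score and a hypothetical distance-based OOD score: OOD inputs are filtered out (the model abstains), and for the rest a prediction set is formed from the calibrated quantile. Names such as `ood_score` and `ood_threshold` are illustrative placeholders.

```python
import numpy as np

def conformal_quantile(cal_scores, alpha=0.1):
    """Split-conformal quantile of calibration nonconformity scores
    (standard construction; a sketch, not the thesis's exact recipe)."""
    n = len(cal_scores)
    rank = int(np.ceil((n + 1) * (1 - alpha)))  # conformal rank correction
    return np.sort(cal_scores)[min(rank, n) - 1]

def prediction_set(probs, qhat, ood_score, ood_threshold):
    """Return a label set, or None (abstain) when the input looks OOD.

    probs: softmax vector; nonconformity of label k is 1 - probs[k].
    ood_score / ood_threshold: hypothetical distance-based OOD test.
    """
    if ood_score > ood_threshold:
        return None  # flag as OOD instead of issuing a prediction set
    return {k for k, p in enumerate(probs) if 1.0 - p <= qhat}
```

Both the softmax probabilities and a distance-based OOD score can be read off the same forward pass, which is what makes the single-pass efficiency claim plausible.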
Collectively, this thesis advances uncertainty estimation from isolated modeling techniques to an end-to-end framework for reliable deep learning in open-world settings. The proposed methodologies provide practical pathways for the responsible and dependable deployment of deep learning models in real-world applications.


