Congratulations to Long XUE on passing his PhD oral defense!

Principled quantification of predictive uncertainty in deep neural networks is crucial for ensuring reliable performance and trustworthy deployment in open-world environments. This thesis presents three complementary methodologies that advance uncertainty quantification and facilitate responsible utilization of deep learning in open-world settings.

First, we propose a continuous optimization framework for constructing neural network (NN)–based prediction intervals (PIs). The proposed method formulates PI construction as a differentiable optimization problem that explicitly prioritizes target coverage while minimizing the width of PIs. By incorporating distance-based differentiable constraints, a shared-bottom architecture, gradient-conflict mitigation, and a coverage–width–aware early stopping mechanism, the approach yields significantly tighter intervals than state-of-the-art PI methods.
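As a rough illustration of this idea (a sketch, not the thesis's exact formulation), a coverage–width objective can be made differentiable by relaxing the hard interval-membership indicator with sigmoids; the weighting `lam` and sharpness `softness` below are illustrative hyperparameters:

```python
import numpy as np

def pi_loss(y, lower, upper, target=0.95, lam=20.0, softness=10.0):
    """Sketch of a coverage-width objective for prediction intervals.

    Sigmoid relaxations replace the hard "y inside [lower, upper]"
    indicator so the coverage term is differentiable.
    """
    inside = (1.0 / (1.0 + np.exp(-softness * (y - lower)))
              * 1.0 / (1.0 + np.exp(-softness * (upper - y))))
    coverage = inside.mean()
    width = (upper - lower).mean()
    # One-sided penalty: only coverage below the target is punished
    return width + lam * max(0.0, target - coverage) ** 2

y = np.array([1.0, 2.0, 3.0])
wide = pi_loss(y, y - 1.0, y + 1.0)      # covers everything, width 2
narrow = pi_loss(y, y - 0.01, y + 0.01)  # far too narrow, heavy penalty
```

Minimizing such a loss trades interval width against the soft coverage estimate; the thesis additionally addresses architecture sharing, gradient conflicts, and early stopping, which this sketch omits.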

Second, we develop an uncertainty-informed risk management framework for open-world environments using spectral-normalized neural Gaussian processes. Our method combines distance-preserving representation learning with a distance-aware output layer to yield Gaussian process-like, distance-sensitive uncertainty estimates. Through a well-defined thresholding mechanism based on Youden’s index, uncertainty estimates are translated into actionable risk levels, enabling reliable uncertainty-aware decision making across normal, shifted, and out-of-distribution (OOD) conditions.
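The thresholding step can be sketched as a search over observed uncertainty scores for the cutoff maximizing Youden's index J = TPR − FPR (a generic sketch; the OOD-as-positive convention and variable names are assumptions, not the thesis's code):

```python
import numpy as np

def youden_threshold(scores_id, scores_ood):
    """Pick the uncertainty threshold maximizing Youden's J = TPR - FPR.

    Convention (assumed here): OOD is the positive class, and a sample is
    flagged when its uncertainty score meets or exceeds the threshold.
    """
    best_t, best_j = None, -np.inf
    for t in np.unique(np.concatenate([scores_id, scores_ood])):
        tpr = np.mean(scores_ood >= t)  # OOD correctly flagged
        fpr = np.mean(scores_id >= t)   # in-distribution falsely flagged
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t, best_j

# Well-separated toy scores: the best cutoff sits at the smallest OOD score
t, j = youden_threshold(np.array([0.1, 0.2, 0.3]), np.array([0.8, 0.9, 1.0]))
```

With perfectly separated scores, J reaches 1.0; in practice the threshold is fit on validation data and then applied to map uncertainties into risk levels.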

Third, we propose a unified framework that integrates conformal prediction with distance-based OOD detection. By filtering OOD inputs and optimizing prediction set size, the method seeks to preserve the statistical guarantee of conformal prediction while yielding tighter, more informative prediction sets. Computational experiments demonstrate competitive OOD detection performance and substantial reductions in average prediction-set size, all achieved efficiently within a single forward pass through the neural network.
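For context, a minimal split conformal classifier (the standard formulation, without the OOD filtering and set-size optimization described above) builds prediction sets from a held-out calibration set:

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets from softmax probabilities.

    Nonconformity score: 1 - p(true class). The threshold is the
    ceil((n+1)(1-alpha))/n empirical quantile of calibration scores,
    which yields marginal coverage >= 1 - alpha under exchangeability.
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(scores, q_level, method="higher")  # NumPy >= 1.22
    # A class enters the set when its score 1 - p(class) is within the threshold
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]

# Toy usage: a confident, well-calibrated model yields a singleton set
cal_probs = np.tile([0.9, 0.1], (10, 1))
cal_labels = np.zeros(10, dtype=int)
sets = conformal_sets(cal_probs, cal_labels, np.array([[0.95, 0.05]]))
```

The coverage guarantee holds only for exchangeable in-distribution data, which is precisely why screening out OOD inputs before conformalization matters.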

Collectively, this thesis advances uncertainty estimation from isolated modeling techniques to an end-to-end framework for reliable deep learning in open-world settings. The proposed methodologies provide practical pathways for the responsible and dependable deployment of deep learning models in real-world applications.

Research paper accepted by IEEE Transactions on Reliability

Deep learning shows great potential for bearing fault diagnosis, but its effectiveness is severely limited by the prevalent issue of highly imbalanced data in real-world industrial settings, where fault events are extremely rare. This paper proposes a novel method for imbalanced bearing fault diagnosis that combines class-aware supervised contrastive learning with a quadratic network backbone. The integrated approach, named CCQNet, is designed to counter the effects of highly skewed data distributions by improving feature representation and classification fairness. Comprehensive experiments show that CCQNet substantially outperforms existing methods in handling imbalanced data, particularly at high imbalance ratios such as 50:1. This study provides an effective and innovative solution for imbalanced bearing fault diagnosis. The source code is available at https://github.com/yuweien1120/CCQNet for public evaluation.
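For readers unfamiliar with quadratic networks, one common quadratic-neuron formulation (a hedged sketch of the general idea; CCQNet's exact backbone may differ) augments the inner product of a conventional neuron with an interacting second linear term and a power term:

```python
import numpy as np

def quadratic_neuron(x, wr, br, wg, bg, wb, c):
    """One common quadratic-neuron form: (x.wr + br)(x.wg + bg) + (x*x).wb + c.

    Setting wg = 0, bg = 1, wb = 0, c = 0 recovers an ordinary linear neuron,
    so the quadratic terms strictly enlarge the function class.
    """
    return (x @ wr + br) * (x @ wg + bg) + (x * x) @ wb + c

rng = np.random.default_rng(0)
x, wr, wg, wb = (rng.standard_normal(4) for _ in range(4))
out = quadratic_neuron(x, wr, 0.1, wg, -0.2, wb, 0.0)
linear = quadratic_neuron(x, wr, 0.0, np.zeros(4), 1.0, np.zeros(4), 0.0)
```

The extra capacity per neuron is one reason quadratic backbones can extract more discriminative features from the scarce fault samples in imbalanced diagnosis.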

Research paper accepted by Reliability Engineering & System Safety

Although machine learning (ML) and deep learning (DL) methods are increasingly used for anomaly detection in industrial cyber-physical systems (ICPSs), their adoption is hindered by concerns about model trustworthiness, especially high false alarm rates (FARs). Excessive false alarms overwhelm operators, cause unnecessary shutdowns, and reduce operational efficiency. This study addresses these challenges by proposing a novel framework that integrates ML-based anomaly detectors with conformal prediction (CP), a model-agnostic uncertainty quantification technique. To handle distribution shifts in time-series data, our framework incorporates a temporal quantile adjustment method with a sliding calibration set, ensuring statistical guarantees on predefined FARs. A rejection mechanism is further integrated by excluding significant anomalies from the calibration set, improving detection capability while maintaining FAR guarantees. For real-time anomaly monitoring, two p-value-based indicators generated from CP are developed to track anomalous trends and enhance model interpretability. The framework is evaluated by comparing several baseline ML and DL methods to their conformalized counterparts using a public ICPS dataset. Comparative results based on Precision, Recall, F1, and AUROC validate the framework’s compatibility with various ML models and its effectiveness in improving anomaly detection performance by reducing false alarms and guaranteeing FARs across a range of predefined values.
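As a minimal sketch of the p-value machinery (the paper's temporal quantile adjustment, sliding calibration set, and rejection mechanism are omitted), a conformal p-value for a new anomaly score is its smoothed rank among calibration scores; alarming when p ≤ α bounds the false alarm rate by α under exchangeability:

```python
import numpy as np

def conformal_pvalue(cal_scores, test_score):
    """Conformal p-value: fraction of calibration anomaly scores at least as
    large as the test score, counting the test point itself.

    Under exchangeability, raising an alarm when p <= alpha keeps the
    false alarm rate at or below alpha.
    """
    n = len(cal_scores)
    return (np.sum(cal_scores >= test_score) + 1) / (n + 1)

cal = np.array([0.1, 0.2, 0.3, 0.4, 0.5])  # calibration anomaly scores
p_typical = conformal_pvalue(cal, 0.25)    # mid-range score -> large p-value
p_anom = conformal_pvalue(cal, 0.9)        # extreme score -> small p-value
```

Tracking such p-values over time is one way to turn a raw detector score into an interpretable, FAR-controlled alarm signal, in the spirit of the indicators described above.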