Research paper accepted by IEEE Transactions on Emerging Topics in Computational Intelligence

Equipping deep learning models with principled uncertainty quantification (UQ) has become essential for ensuring their reliable performance in the open world. To handle uncertainty arising from two prevalent sources in open-world settings, distribution shift and out-of-distribution (OOD) data, this paper presents a unified uncertainty-informed approach for quantifying and managing the risks these factors pose to the dependable functioning of deep learning models. Toward this goal, we propose leveraging a principled UQ approach, the Spectral-normalized Neural Gaussian Process (SNGP), to quantify the epistemic uncertainty associated with model predictions. Unlike other UQ methods in the literature, SNGP is characterized by two unique properties: (1) spectral normalization is applied to the weights of the neural network's hidden layers to preserve the relative distances among data points during data transformations; and (2) the traditional output layer of the neural network is replaced with a Gaussian process to enable distance-aware uncertainty estimation. Based on SNGP's uncertainty estimate, we apply Youden's index to determine an optimal threshold for categorizing the uncertainty into distinct levels, thereby enabling decision-makers to make uncertainty-informed decisions. Two datasets of varying scale are used to demonstrate how the proposed method facilitates risk assessment and management of deep learning models in open environments. Computational results reveal that the proposed method achieves prediction performance comparable to Monte Carlo dropout and deep ensembles. Importantly, it outperforms both methods by providing computationally efficient, consistent, and principled uncertainty estimates under no distribution shift, distribution shift, and OOD conditions.
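The Youden's-index thresholding step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes per-sample uncertainty scores (e.g., from SNGP) and binary labels marking which samples are OOD or shifted, and selects the cutoff maximizing Youden's J = sensitivity + specificity − 1 (equivalently, TPR − FPR). The function names are hypothetical.

```python
import numpy as np

def youden_threshold(uncertainty, is_ood):
    """Pick the uncertainty cutoff maximizing Youden's J = TPR - FPR.

    uncertainty : per-sample uncertainty scores (higher = more uncertain)
    is_ood      : binary labels, 1 for OOD/shifted samples, 0 for in-distribution
    """
    uncertainty = np.asarray(uncertainty, dtype=float)
    is_ood = np.asarray(is_ood, dtype=int)
    pos = is_ood == 1
    neg = ~pos
    # Candidate thresholds: the observed scores themselves
    best_t, best_j = None, -np.inf
    for t in np.unique(uncertainty):
        flagged = uncertainty >= t    # samples flagged as "high uncertainty"
        tpr = flagged[pos].mean()     # sensitivity on OOD samples
        fpr = flagged[neg].mean()     # false-alarm rate on in-distribution samples
        j = tpr - fpr                 # Youden's J statistic
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

Scores above the returned threshold would be routed to a "high uncertainty" category (e.g., deferred to a human decision-maker), while lower scores are treated as reliable predictions.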

Research project funded by the Natural Science Foundation of Guangdong Province (General Program)

Uncertainty quantification and spatiotemporal causal discovery for reliable traffic prediction

Research paper accepted by Reliability Engineering & System Safety

Multi-state systems (MSS) are widely used for modeling the behavior of engineering applications in which the system and its components can occupy more than two distinct states. Physics-Informed Neural Networks (PINNs) offer a viable solution for characterizing the dynamic state evolution of MSS. However, existing methods predominantly rely on uniformly sampled collocation points across the problem domain when training PINNs. Although some residual-based active learning methods exist, they are inherently static and local, and often fail to capture a crucial aspect of PINN training: identifying and accurately modeling the "critical transition regions" within the problem domain. To address this fundamental challenge, we treat the PINN as a dynamical system and introduce a novel active learning method grounded in chaos theory to identify regions of the problem domain that are highly sensitive to initial conditions. Specifically, our method quantifies the degree of chaos at candidate collocation points by introducing small perturbations and using the PINN's forward propagation to simulate the dynamic evolution of both the original and perturbed collocation points. Collocation points that exhibit pronounced chaotic behavior, i.e., whose evolutionary trajectories diverge rapidly following perturbation, are identified as the system's most unstable regions and the most valuable ones for PINN training. By prioritizing these dynamically unstable points, our method directs the PINN to focus its learning on accurately delineating the boundaries of state transitions, thereby significantly enhancing the accuracy of reliability analysis. Experimental results on multiple benchmark partial differential equations (PDEs) and several MSS demonstrate that, compared with other PINN training schemes, our method achieves superior accuracy and computational efficiency in MSS reliability assessment.
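The perturbation-and-divergence idea behind the chaos-based point selection can be sketched as follows. This is a simplified illustration under stated assumptions, not the paper's method: `model` is a stand-in callable for the PINN's forward pass, the score is a finite-difference, Lyapunov-style sensitivity (log-ratio of output divergence to input perturbation), and all function names are hypothetical.

```python
import numpy as np

def chaos_scores(model, points, eps=1e-3, rng=None):
    """Score candidate collocation points by sensitivity to small perturbations.

    model  : callable mapping an (n, d) array of inputs to (n, m) outputs
             (a stand-in for the PINN's forward propagation)
    points : (n, d) array of candidate collocation points
    eps    : magnitude of the random perturbation applied to each point
    Returns a length-n array; larger values indicate outputs that diverge
    faster under perturbation, i.e. more "chaotic" regions of the domain.
    """
    rng = np.random.default_rng(rng)
    points = np.asarray(points, dtype=float)
    perturbed = points + eps * rng.standard_normal(points.shape)
    base = np.atleast_2d(model(points))
    shifted = np.atleast_2d(model(perturbed))
    out_gap = np.linalg.norm(shifted - base, axis=1)
    in_gap = np.linalg.norm(perturbed - points, axis=1)
    # Log-ratio of output to input divergence: a crude finite-time
    # Lyapunov-style sensitivity estimate (eps added for numerical safety)
    return np.log((out_gap + 1e-12) / (in_gap + 1e-12))

def select_collocation(model, candidates, k, eps=1e-3, rng=None):
    """Keep the k candidates with the highest chaos score."""
    scores = chaos_scores(model, candidates, eps=eps, rng=rng)
    return np.asarray(candidates)[np.argsort(scores)[-k:]]
```

For a surrogate with a sharp state transition (e.g., `lambda x: np.tanh(50 * (x - 0.5))`), the selected points cluster around the transition at 0.5, mirroring how the method concentrates training on the boundaries of state transitions.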