
Dr. Xiaoge Zhang delivered a talk on “Enhancing the Performance of Neural Networks Through Causal Discovery and Integration of Domain Knowledge” at Sichuan University, China

In this talk, I will present a generic methodology to encode hierarchical causal structure among observed variables into a neural network to improve its prediction performance. The proposed causality-informed neural network (CINN) leverages three coherent steps to systematically map structural causal knowledge into the layer-to-layer design of the neural network while strictly preserving the orientation of every causal relationship. In the first step, CINN discovers causal relationships from observational data via directed acyclic graph (DAG) learning, where causal discovery is recast as a continuous optimization problem to avoid a combinatorial search over graph structures. In the second step, the discovered hierarchical causal structure among observed variables is encoded into the neural network through a dedicated architecture and customized loss function. By categorizing variables as root, intermediate, and leaf nodes, the hierarchical causal DAG is translated into a CINN with a one-to-one correspondence between nodes in the DAG and units in the CINN, while maintaining the relative order among these nodes. Regarding the loss function, both intermediate and leaf nodes in the DAG are treated as target outputs during CINN training to drive co-learning of causal relationships among the different types of nodes. In the final step, as multiple loss components emerge in CINN, we leverage the projection of conflicting gradients to mitigate gradient interference among the multiple learning tasks. Computational experiments across a broad spectrum of UCI datasets demonstrate substantial advantages of CINN in prediction performance over other state-of-the-art methods. In addition, we conduct an ablation study that incrementally injects structural and quantitative causal knowledge into the neural network to demonstrate their role in enhancing the network's prediction performance.
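For readers unfamiliar with the gradient-projection step mentioned above, the following is a minimal NumPy sketch of PCGrad-style conflict removal; the function name and the two toy gradients are purely illustrative and not taken from the talk.

```python
import numpy as np

def pcgrad(grads, rng=np.random.default_rng(0)):
    """Combine per-task gradients after projecting away pairwise
    conflicts (PCGrad, Yu et al., 2020)."""
    projected = []
    for i, g in enumerate(grads):
        g = g.astype(float).copy()
        others = [j for j in range(len(grads)) if j != i]
        for j in rng.permutation(others):
            dot = g @ grads[j]
            if dot < 0.0:  # conflict: remove the component of g along grads[j]
                g -= dot / (grads[j] @ grads[j]) * grads[j]
        projected.append(g)
    return np.sum(projected, axis=0)

# Two toy task gradients that point in partially opposing directions
g1 = np.array([1.0, 1.0])
g2 = np.array([-1.0, 0.5])
print(pcgrad([g1, g2]))
```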

Research paper accepted by Reliability Engineering & System Safety

Risk management often involves retrofit optimization to enhance the performance of buildings against extreme events, but it may entail huge upfront mitigation costs. Existing stochastic optimization frameworks can be computationally expensive, may require explicit programming, and often lack adaptivity. Hence, an intelligent risk optimization framework is proposed herein for building structures by developing a deep reinforcement learning-enabled actor-critic neural network model. The proposed framework consists of two parts: (1) a performance-based environment to assess mitigation costs and uncertain future consequences under hazards, and (2) a deep reinforcement learning-enabled risk optimization model for performance enhancement. The performance-based environment takes mitigation alternatives as input and provides consequences and retrofit costs as output through several steps, including hazard assessment, damage assessment, and consequence assessment. The risk optimization is performed by integrating the performance-based environment with actor-critic deep neural networks to simultaneously reduce retrofit costs and uncertain future consequences under seismic hazards. For illustration, the proposed framework is applied to a portfolio of building structures to demonstrate the new paradigm for intelligent risk optimization. The performance of the proposed method is also compared with genetic optimization, deep Q-networks, and proximal policy optimization.
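The actor-critic component can be pictured with a generic skeleton. The PyTorch sketch below is a hypothetical one-step advantage actor-critic update, not the architecture from the paper; the feature dimension, number of mitigation alternatives, and reward are placeholders.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Toy actor-critic: the actor scores discrete mitigation alternatives,
    the critic estimates the value of a building (portfolio) state."""
    def __init__(self, n_features, n_alternatives):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.actor = nn.Linear(64, n_alternatives)  # policy logits
        self.critic = nn.Linear(64, 1)              # state-value estimate

    def forward(self, state):
        h = self.shared(state)
        return torch.distributions.Categorical(logits=self.actor(h)), self.critic(h)

model = ActorCritic(n_features=8, n_alternatives=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

state = torch.randn(1, 8)          # placeholder state features
policy, value = model(state)
action = policy.sample()           # pick a retrofit alternative
reward = torch.tensor([[-1.0]])    # e.g. -(retrofit cost + expected loss)

advantage = reward - value         # one-step advantage (no bootstrapping)
loss = -policy.log_prob(action) * advantage.detach() + advantage.pow(2)
opt.zero_grad()
loss.mean().backward()
opt.step()
```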

Prof. Yan-Fu Li gave a talk on “Recent Research Progresses on Optimal System Reliability Design”

Optimal system reliability design is an important research field in reliability engineering. Since the 1950s, extensive studies have been conducted on various aspects of this issue. This field remains highly active today owing to the need to develop new generations of complex engineering systems, such as 5G telecom networks and high-performance computing clusters, which must be highly reliable to meet the stringent, dynamic, and often real-time quality demands of system operators and end-users. Over the past five years, numerous new studies on optimal system reliability design have been published, addressing the theoretical challenges posed by these new engineering systems. This presentation will systematically review these works with a focus on theoretical advancements, including models and methods for the redundancy allocation problem, redundancy allocation under mixed uncertainty, the joint reliability-redundancy allocation problem, and joint redundancy allocation and maintenance optimization. Through analysis and discussion, we will outline future research directions.
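As a point of reference for the redundancy allocation problem (RAP) mentioned in the abstract, a textbook series-parallel formulation (not necessarily any of the variants covered in the talk) reads:

```latex
\max_{x_1,\dots,x_s}\; R(\mathbf{x})
   = \prod_{i=1}^{s}\Bigl[\,1-\bigl(1-r_i\bigr)^{x_i}\Bigr]
\quad\text{s.t.}\quad
\sum_{i=1}^{s} c_i\, x_i \le C,
\qquad x_i \in \{1,2,\dots\},
```

where r_i is the reliability of a component in subsystem i, x_i the number of redundant components placed in parallel, and C the available budget; richer variants add weight or volume constraints, mixed component types, and maintenance decisions.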

Research paper accepted by Knowledge-Based Systems

Principled quantification of predictive uncertainty in neural networks (NNs) is essential to safeguard their applications in high-stakes decision settings. In this paper, we develop a differentiable mathematical formulation to quantify the uncertainty in NN predictions using prediction intervals (PIs). The formulated optimization problem is differentiable and compatible with the built-in gradient descent optimizers of prevailing deep learning platforms, and two performance metrics, prediction interval coverage probability (PICP) and mean prediction interval width (MPIW), are considered in the construction of PIs. Different from existing methods, the developed methodology features four salient characteristics. First, we design two differentiable distance-based functions to impose constraints associated with the target coverage in PI construction, where PICP is explicitly prioritized over MPIW in the devised composite loss function. Second, we adopt a shared-bottom NN architecture with intermediate layers to separate the learning of shared and task-specific feature representations in the construction of the lower and upper bounds. Third, we leverage the projection of conflicting gradients (PCGrad) to mitigate interference between the gradients of the two individual learning tasks, so as to increase convergence stability and solution quality. Finally, we design a customized early stopping mechanism that monitors PICP and MPIW simultaneously in order to select, as the final NN parameters, the set that not only meets the target coverage but also has minimal MPIW. A broad range of datasets is used to rigorously examine the performance of the developed methodology. Computational results suggest that the developed method significantly outperforms the classic LUBE method across the nine datasets, reducing the PI width by 31.26% on average. More importantly, it achieves competitive results compared with three other state-of-the-art methods, outperforming them on four out of ten datasets. An ablation study explicitly demonstrates the benefit of the shared-bottom NN architecture in the construction of PIs.
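The abstract describes the composite loss only qualitatively. The sketch below shows one common way to make PICP/MPIW training differentiable, using a sigmoid-smoothed coverage indicator; it is an illustrative stand-in under assumed hyperparameters (s, lam), not the paper's exact formulation.

```python
import torch

def pi_loss(y, lower, upper, alpha=0.05, s=50.0, lam=10.0):
    """Differentiable surrogate for PI training: a sigmoid-smoothed
    coverage indicator approximates PICP, and any shortfall below the
    target coverage (1 - alpha) is penalized on top of the mean width."""
    inside = torch.sigmoid(s * (y - lower)) * torch.sigmoid(s * (upper - y))
    picp = inside.mean()                 # soft coverage probability
    mpiw = (upper - lower).mean()        # mean prediction interval width
    shortfall = torch.clamp((1 - alpha) - picp, min=0.0)
    return mpiw + lam * shortfall ** 2

y = torch.randn(32)
lower, upper = y - 0.5, y + 0.8          # toy bounds from two output heads
print(pi_loss(y, lower, upper))
```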

Research paper accepted by Reliability Engineering & System Safety

Physics-Informed Neural Network (PINN) is a special type of deep learning model that encodes physical laws, in the form of partial differential equations, as a regularization term in the loss function of the neural network. In this paper, we develop a principled uncertainty quantification (UQ) approach to characterize the model uncertainty of PINN, and the estimated uncertainty is then exploited as an instructive indicator to identify collocation points where PINN produces a large prediction error. To this end, this paper seamlessly integrates the spectral-normalized neural Gaussian process (SNGP) into PINN for principled and accurate uncertainty quantification. In the first step, we apply spectral normalization to the weight matrices of the hidden layers in the PINN to make the data transformation from the input space to the latent space distance-preserving. Next, the dense output layer of PINN is replaced with a Gaussian process to make the quantified uncertainty distance-sensitive. Afterwards, to examine the performance of different UQ approaches, we define several performance metrics tailored to PINN for assessing distance awareness in the measured uncertainty and the uncertainty-informed error detection capability. Finally, we employ three representative physical problems to verify the effectiveness of the proposed method in the uncertainty quantification of PINN and compare the developed approach with Monte Carlo (MC) dropout using the developed performance metrics. Computational results suggest that the proposed approach exhibits superior performance in improving the prediction accuracy of PINN, and the estimated uncertainty serves as an informative indicator to detect PINN's prediction failures.
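The spectral-normalization step can be illustrated in a few lines of PyTorch. The sketch below covers only the SN half of SNGP; the Gaussian-process output layer (typically a random-feature approximation) is omitted, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

# Hidden layers wrapped with spectral normalization: bounding each weight
# matrix's spectral norm keeps the input-to-latent mapping approximately
# distance-preserving, which is the "SN" part of SNGP.
hidden = nn.Sequential(
    nn.utils.spectral_norm(nn.Linear(2, 128)), nn.Tanh(),
    nn.utils.spectral_norm(nn.Linear(128, 128)), nn.Tanh(),
)

x = torch.randn(16, 2)   # e.g. (x, t) collocation points for a 1-D PDE
z = hidden(x)            # latent features fed to the GP output layer
print(z.shape)
```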

Research paper accepted by IEEE Transactions on Instrumentation and Measurement

Deep learning has achieved remarkable success in the field of bearing fault diagnosis. However, this success comes with larger models and more complex computations, which hinders their transfer to industrial settings that require high speed, strong portability, and low power consumption. In this paper, we propose a lightweight and deployable model for bearing fault diagnosis, referred to as BearingPGA-Net, to address these challenges. First, aided by a well-trained large model, we train BearingPGA-Net via decoupled knowledge distillation. Despite its small size, our model demonstrates excellent fault diagnosis performance compared with other lightweight state-of-the-art methods. Second, we design an FPGA acceleration scheme for BearingPGA-Net using Verilog. This scheme involves customized quantization and the design of programmable logic for each layer of BearingPGA-Net on the FPGA, with an emphasis on parallel computing and module reuse to enhance computational speed. To the best of our knowledge, this is the first instance of deploying a CNN-based bearing fault diagnosis model on an FPGA. Experimental results reveal that our deployment scheme achieves over 200 times faster diagnosis speed than a CPU, while incurring a performance drop of less than 0.4% in F1, Recall, and Precision scores on our independently collected bearing dataset. Our code is available at https://github.com/asdvfghg/BearingPGA-Net.
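Decoupled knowledge distillation refines the classic logit-distillation objective. As a baseline illustration only, here is the classic Hinton-style KD loss (not the decoupled variant used in the paper), with random logits standing in for the large teacher and the compact student.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Classic logit distillation: cross-entropy on the labels plus KL
    divergence between temperature-softened teacher/student predictions."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T  # rescale so gradients are comparable to the CE term
    return alpha * ce + (1 - alpha) * kl

student = torch.randn(8, 10)   # compact student's logits (toy values)
teacher = torch.randn(8, 10)   # well-trained large model's logits (toy values)
labels = torch.randint(0, 10, (8,))
print(kd_loss(student, teacher, labels))
```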

Research paper accepted by IEEE Transactions on Reliability

As safety is the top priority in mission-critical engineering applications, uncertainty quantification emerges as a linchpin to the successful deployment of AI models in these high-stakes domains. In this paper, we seamlessly encode a simple and principled uncertainty quantification module, the Spectral-normalized Neural Gaussian Process (SNGP), into GoogLeNet to detect various defects in steel wire ropes (SWRs) accurately and reliably. To this end, the developed methodology consists of three coherent steps. In the first step, raw Magnetic Flux Leakage (MFL) waveform signals from normal and defective SWRs, where defect severity is manifested in the number of broken wires, are collected via a dedicated experimental setup. Next, the proposed approach utilizes the Gramian Angular Field to represent the 1-D MFL time series as 2-D images while preserving key spatial and temporal structures in the data. Third, built atop the backbone of GoogLeNet, we systematically integrate SNGP by adding a spectral normalization (SN) layer to normalize the weights and by replacing the output layers with a Gaussian process (GP) in both the main network and the auxiliary classifiers of GoogLeNet, where SN preserves distance in the data transformation and the GP makes the output layer of the neural network distance-aware when assigning uncertainty. Comprehensive comparisons with state-of-the-art models highlight the advantages of the developed methodology in classifying SWR defects and identifying out-of-distribution (OOD) SWR instances. In addition, a thorough ablation study quantitatively illustrates the significant role played by SN and GP in the principledness of the estimated uncertainty for detecting SWR instances with varying degrees of OODness.
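The Gramian Angular Field step is compact enough to sketch directly. The NumPy example below implements the summation variant (GASF); the sinusoidal signal is a stand-in for a real MFL trace, and the paper may use a different GAF variant or preprocessing.

```python
import numpy as np

def gramian_angular_field(x):
    """Encode a 1-D signal as a Gramian Angular Summation Field image:
    rescale to [-1, 1], map to polar angles, then take pairwise cosines
    of summed angles, G[i, j] = cos(phi_i + phi_j)."""
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))            # polar encoding
    return np.cos(phi[:, None] + phi[None, :])        # 2-D GASF image

signal = np.sin(np.linspace(0, 4 * np.pi, 64))        # stand-in for an MFL trace
image = gramian_angular_field(signal)
print(image.shape)   # (64, 64): an image a CNN such as GoogLeNet can ingest
```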

Prof. Stefan Feuerriegel gave a talk on “Learning policies for decision-making with causal machine learning: The case of development financing”

The Sustainable Development Goals (SDGs) of the United Nations provide a blueprint for a better future by “leaving no one behind”, and, to achieve the SDGs by 2030, poor countries require immense volumes of development aid. In this work, we develop a causal machine learning framework for estimating heterogeneous treatment effects of aid disbursements that inform optimal aid allocation. We demonstrate the effectiveness of our method using data on official development aid earmarked to end HIV/AIDS in 105 countries, amounting to more than USD 5.2 billion. For this, we first show that our method successfully computes heterogeneous treatment-response curves using semi-synthetic data. Then, using real-world HIV data, we find that an optimal aid allocation suggested by our method could reduce the total number of new HIV infections compared with current allocation practice. Our findings indicate the effectiveness of causal machine learning for informing cost-efficient allocations of development aid that maximize progress towards the SDGs.

Two group members attended the 5th International Conference on System Reliability and Safety Engineering (SRSE 2023) in Beijing, China

Two group members attended the 5th International Conference on System Reliability and Safety Engineering (SRSE 2023) in Beijing, China, from October 20 to 23, 2023, held in conjunction with the annual meeting of the Institute for Quality and Reliability (IQR), Tsinghua University. The conference was sponsored by Tsinghua University, supported by the National University of Singapore, organized by the Institute for Quality and Reliability, Tsinghua University, and co-organized by the Department of Industrial Engineering, Tsinghua University, with patron institutions including Beijing Institute of Technology, Harbin Institute of Technology, Nanjing University of Science and Technology, Qingdao University, Shanghai University, Shanghai Jiao Tong University, Northwestern Polytechnical University, Sun Yat-sen University, City University of Hong Kong, and the University of Alberta.

Dr. Xiaoge Zhang delivered a talk on “Safety assessment and risk analysis of complex systems under uncertainty” at Nanjing University, China

This talk showcases two different strategies to assess and analyze the safety of the air transportation system. First, taking advantage of the rich information in historical aviation accident events, we analyzed the accidents reported by the National Transportation Safety Board (NTSB) over the past two decades and developed a large-scale Bayesian network to model the causal relationships among the variety of factors contributing to the occurrence of aviation accidents. The constructed Bayesian network greatly facilitates root cause diagnosis and outcome analysis of aviation accidents. Next, we analyze how to leverage deep learning to forecast flight trajectories. Using a Bayesian neural network, we fully characterize the effect of exogenous variables on the flight trajectory. The predicted trajectory is then extended to multiple flights and used to assess safety based on the horizontal and vertical separation distances between two flights, thus enabling real-time monitoring of in-flight safety.
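As context for the trajectory-forecasting part, the snippet below shows MC dropout, one common approximation to a Bayesian neural network; the feature and output dimensions are hypothetical placeholders, not the model from the talk.

```python
import torch
import torch.nn as nn

# Minimal MC-dropout regressor: dropout is kept active at prediction time,
# and repeated stochastic forward passes yield a predictive mean and spread.
net = nn.Sequential(
    nn.Linear(6, 64), nn.ReLU(), nn.Dropout(0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.1),
    nn.Linear(64, 3),   # e.g. predicted (latitude, longitude, altitude)
)

x = torch.randn(1, 6)   # placeholder exogenous features for one time step
net.train()             # keep dropout stochastic during inference
with torch.no_grad():
    samples = torch.stack([net(x) for _ in range(100)])
mean, std = samples.mean(dim=0), samples.std(dim=0)
print(mean.shape, std.shape)
```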