Dr. Xiaoge Zhang delivered a talk on “Causality-Informed Neural Network (CINN)” at Hong Kong Institute for Advanced Study (HKIAS)

Despite the impressive performance of deep learning in solving long-standing problems (e.g., machine translation, image classification), it has often been criticized for learning spurious correlations, lacking robustness, and generalizing poorly. These deficiencies significantly hinder the adoption of deep learning in high-stakes decision settings, such as healthcare. Although the state-of-the-art literature has made some progress on these issues, existing attempts to improve robustness, interpretability, and generalizability remain ad hoc and unprincipled. In this talk, we develop a generic framework to inject causal knowledge, in structural, qualitative, or quantitative form (or any combination), into a neural network while strictly preserving the orientation of each causal relationship. The proposed causality-informed neural network (CINN) provides a one-stop solution with great potential to fundamentally resolve the existing issues in neural networks. More importantly, the incorporation of causal knowledge unlocks the power of neural networks in scenarios beyond prediction, such as causal effect estimation and interventional queries. Several examples are used to demonstrate the appealing features of CINN.
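To make the structural idea concrete, here is a minimal sketch of one generic way a causal DAG can be hard-wired into a network: a masked linear layer that lets each variable be predicted only from its causal parents, so every learned edge keeps the orientation given by the DAG. This is an illustration of the general idea only, not the CINN architecture from the talk; the toy chain X → Y → Z, the `MaskedLinear` class, and the synthetic data are all assumptions made for this example.

```python
# Illustrative sketch only: encode a toy causal DAG X -> Y -> Z into a network
# by masking weights so each variable is reconstructed from its DAG parents.
import torch
import torch.nn as nn

# mask[i, j] = 1 if variable j is a causal parent of variable i; order [X, Y, Z].
parents = torch.tensor([[0., 0., 0.],   # X has no parents
                        [1., 0., 0.],   # Y <- X
                        [0., 1., 0.]])  # Z <- Y

class MaskedLinear(nn.Linear):
    """Linear layer whose weights are zeroed wherever the DAG has no edge."""
    def __init__(self, mask):
        super().__init__(mask.shape[1], mask.shape[0], bias=True)
        self.register_buffer("mask", mask)

    def forward(self, x):
        return nn.functional.linear(x, self.weight * self.mask, self.bias)

model = MaskedLinear(parents)

# Synthetic data consistent with the toy DAG: X ~ N(0,1), Y = 2X + noise, Z = -Y + noise.
x = torch.randn(256, 1)
y = 2 * x + 0.1 * torch.randn(256, 1)
z = -y + 0.1 * torch.randn(256, 1)
data = torch.cat([x, y, z], dim=1)

opt = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(data), data)  # each variable from its parents only
    loss.backward()
    opt.step()

print(model.weight * parents)  # learned effects respect the causal orientation
```

After training, the only non-zero entries sit on the DAG edges (roughly 2 for Y ← X and -1 for Z ← Y), so the fitted model cannot express an effect that runs against the specified causal direction.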

Prof. Sankaran Mahadevan gave a talk on Probabilistic Digital Twins for System Monitoring and Decision-Making

The digital twin paradigm integrates information obtained from sensor data, system physics models, and the operational and inspection/maintenance/repair history of a physical system or process of interest. As more and more data become available, the resulting updated model becomes increasingly accurate in predicting the future behavior of the system or process, and can potentially be used to support several objectives, such as safety, quality, mission planning, operational maneuvers, process control, and risk management. This seminar will present recent advances in Bayesian computational methods that enable digital twin technology to support all of these objectives, based on four types of computation: current state diagnosis, model updating, future state prognosis, and decision-making. All of these computations are affected by uncertainty regarding system properties, operational parameters, usage, and environment, as well as by uncertainties in data and prediction models. Uncertainty quantification therefore becomes an important need in system diagnosis and prognosis, considering both aleatory and epistemic uncertainty sources. The Bayesian methodology is able to address this need in a comprehensive manner and aggregate the uncertainty from multiple sources. A wide range of use cases, such as additive manufacturing, aviation system safety, and power grid operations, will be presented.
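As a rough illustration of the Bayesian model-updating step at the core of this paradigm (not the speaker's implementation), the sketch below updates the rate parameter of a simple linear degradation model from noisy sensor data and then propagates the remaining epistemic uncertainty into a prognosis. The physics model, the grid-based posterior, and all numerical values are assumptions made for this example.

```python
# Illustrative sketch only: Bayesian updating of a degradation-rate parameter
# from noisy sensor data, followed by an uncertainty-aware prognosis.
import numpy as np

rng = np.random.default_rng(0)

# Physics model: damage(t) = rate * t. The true rate is unknown to the twin.
true_rate, noise_std = 0.8, 0.2
t_obs = np.arange(1, 11)
y_obs = true_rate * t_obs + rng.normal(0, noise_std, t_obs.size)  # sensor data

# Prior over the rate parameter (epistemic uncertainty), discretized on a grid.
rates = np.linspace(0.0, 2.0, 401)
prior = np.exp(-0.5 * ((rates - 1.0) / 0.5) ** 2)
prior /= prior.sum()

# Likelihood of the observations under each candidate rate (aleatory noise model).
pred = np.outer(rates, t_obs)                       # shape (n_rates, n_obs)
loglik = -0.5 * np.sum(((y_obs - pred) / noise_std) ** 2, axis=1)
posterior = prior * np.exp(loglik - loglik.max())
posterior /= posterior.sum()

# Diagnosis: updated estimate of the degradation rate.
rate_mean = np.sum(rates * posterior)

# Prognosis: predictive distribution of damage at a future time, with uncertainty.
t_future = 20.0
damage_future = rates * t_future
mean_future = np.sum(damage_future * posterior)
std_future = np.sqrt(np.sum((damage_future - mean_future) ** 2 * posterior))
print(f"posterior rate mean = {rate_mean:.3f}")
print(f"predicted damage at t={t_future:.0f}: {mean_future:.2f} ± {std_future:.2f}")
```

The same diagnose-update-predict loop generalizes to richer physics models and sampling-based posteriors; the grid is used here only to keep the example self-contained.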

Prof. Chao Hu gave a talk on physics-informed machine learning for battery degradation diagnostics

Battery diagnostics aims to monitor a lithium-ion battery’s state of health (SOH) by estimating its capacity and degradation parameters over the service life. The SOH estimates inform online maintenance and control decision-making, all performed within a battery management system. This talk will first give an overview of battery degradation diagnostics and then discuss the long-term testing and methodology development efforts led by a team of researchers at Iowa State University and the University of Connecticut. An emphasis will be placed on physics-informed machine learning for degradation diagnostics. Methodologies will be demonstrated using an industry-relevant application on implantable-grade lithium-ion batteries.
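The general physics-informed pattern can be illustrated with a small sketch (this is not the talk's methodology): an empirical physics term for capacity fade is fitted first, and a data-driven model then captures the residual the physics cannot explain. The square-root-of-time fade law, the cubic residual model, and the synthetic data are all assumptions made for this example.

```python
# Illustrative sketch only: physics term (sqrt-of-cycle capacity fade) plus a
# data-driven residual model, fitted to synthetic capacity measurements.
import numpy as np

rng = np.random.default_rng(1)

cycles = np.arange(0, 1001, 25).astype(float)
# Synthetic "measured" capacity: sqrt-time fade plus a mild linear cycling term.
capacity = 1.0 - 0.004 * np.sqrt(cycles) - 1.5e-5 * cycles + rng.normal(0, 0.002, cycles.size)

# Step 1: fit the physics term, q(n) = q0 - k * sqrt(n), by linear least squares.
A_phys = np.column_stack([np.ones_like(cycles), np.sqrt(cycles)])
coef_phys = np.linalg.lstsq(A_phys, capacity, rcond=None)[0]
physics_pred = A_phys @ coef_phys

# Step 2: fit a small data-driven model (here, a cubic polynomial) to the residual.
residual = capacity - physics_pred
coeffs = np.polyfit(cycles, residual, deg=3)
hybrid_pred = physics_pred + np.polyval(coeffs, cycles)

rmse_phys = np.sqrt(np.mean((capacity - physics_pred) ** 2))
rmse_hybrid = np.sqrt(np.mean((capacity - hybrid_pred) ** 2))
print(f"physics-only RMSE: {rmse_phys:.4f}, physics + residual RMSE: {rmse_hybrid:.4f}")
```

The design choice to keep the physics term explicit is what gives such hybrid models their extrapolation behavior: the data-driven part only corrects what the degradation law misses.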

Research paper accepted by IEEE Internet of Things Journal

Graph neural networks (GNNs) have transformed network analysis, achieving state-of-the-art performance across a variety of tasks. In particular, GNNs are increasingly being employed as detection tools in AIoT environments for various security applications. However, GNNs have also been shown to be vulnerable to adversarial graph perturbations. We present the first approach for certifying the robustness of general GNNs against attacks that add or remove graph edges at either training or prediction time. Extensive experiments demonstrate that our approach significantly outperforms prior art in certified robust predictions. In addition, we show that a non-certified adaptation of our method exhibits significantly better robust accuracy against state-of-the-art attacks than past approaches. Thus, we achieve both the best certified bounds and the best practical robustness of GNNs to structural attacks to date.
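To clarify what a structural robustness certificate asserts, the sketch below checks, by brute-force enumeration on a toy graph, that a node's prediction is unchanged under every single edge addition or removal. This is only an illustration of the notion of a certificate, not the paper's certification method; the one-layer GCN-style classifier, graph, features, and weights are all made-up values.

```python
# Illustrative sketch only: brute-force robustness check of a toy GCN-style
# node classifier against every single edge flip (add or remove).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, d, c = 6, 4, 2                       # nodes, feature dim, classes
X = rng.normal(size=(n, d))             # node features
W = rng.normal(size=(d, c))             # classifier weights
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    A[i, j] = A[j, i] = 1.0             # a small path graph

def predict(adj, target):
    """One round of mean-neighborhood aggregation followed by a linear classifier."""
    A_hat = adj + np.eye(n)                          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    H = (A_hat / deg) @ X @ W                        # mean aggregation + linear layer
    return int(np.argmax(H[target]))

target = 2
base_pred = predict(A, target)

# Enumerate every single-edge flip (add if absent, remove if present).
certified = True
for i, j in combinations(range(n), 2):
    A_pert = A.copy()
    A_pert[i, j] = A_pert[j, i] = 1.0 - A_pert[i, j]
    if predict(A_pert, target) != base_pred:
        certified = False
        break

print(f"node {target}: prediction {base_pred}, "
      f"{'certified' if certified else 'NOT certified'} against any single edge flip")
```

Exhaustive enumeration is only feasible on tiny graphs and tiny perturbation budgets, which is precisely why scalable certification techniques such as the one described in the paper are needed.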

Research paper accepted by Applied Mathematical Modelling

In recent years, multi-agent deep reinforcement learning has progressed rapidly, as reflected by its increasing adoption in industrial applications. This paper proposes a Guided Probabilistic Reinforcement Learning (Guided-PRL) model to tackle maintenance scheduling of multi-component systems in the presence of uncertainty, with the goal of minimizing the overall life-cycle cost. The proposed Guided-PRL is deeply rooted in the Actor-Critic (AC) scheme. Since traditional AC falls short in sampling efficiency and tends to get stuck in local minima in the context of multi-agent reinforcement learning, it is challenging for the actor network to converge to a solution of desirable quality even when the critic network is properly configured. To address these issues, we develop a generic framework to facilitate effective training of the actor network; the framework consists of environmental reward modeling, degradation formulation, state representation, and policy optimization. The convergence speed of the actor network is significantly improved with a guided sampling scheme that exploits rule-based domain expert policies during environment exploration. To handle data scarcity, the environmental modeling and policy optimization are approximated with Bayesian models for effective uncertainty quantification. The Guided-PRL model is evaluated using simulations of a 12-component system as well as GE90 and CFM56 engines. Compared with four alternative deep reinforcement learning schemes, Guided-PRL lowers the life-cycle cost by 34.92% to 88.07%. In comparison with rule-based expert policies, Guided-PRL decreases the life-cycle cost by 23.26% to 51.36%.
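The guided-sampling idea can be illustrated with a short sketch (this is not the paper's Guided-PRL algorithm): during exploration, the agent follows a rule-based expert maintenance policy with a decaying probability and otherwise samples from the learned actor. The thresholds, the placeholder softmax actor, and the toy state layout are all assumptions made for this example.

```python
# Illustrative sketch only: mixing a rule-based expert maintenance policy with a
# learned stochastic actor via a decaying guide probability during exploration.
import numpy as np

rng = np.random.default_rng(3)
ACTIONS = ("do_nothing", "repair", "replace")

def expert_policy(degradation):
    """Rule-based domain policy: replace when badly degraded, repair when moderate."""
    if degradation > 0.8:
        return ACTIONS.index("replace")
    if degradation > 0.5:
        return ACTIONS.index("repair")
    return ACTIONS.index("do_nothing")

def actor_policy(state, theta):
    """Placeholder stochastic actor: softmax over a linear scoring of the state."""
    logits = theta @ state
    p = np.exp(logits - logits.max())
    return int(rng.choice(len(ACTIONS), p=p / p.sum()))

def guided_sample(state, theta, episode, guide_decay=0.01):
    """With decaying probability, defer to the expert rules; otherwise use the actor."""
    guide_prob = np.exp(-guide_decay * episode)
    if rng.random() < guide_prob:
        return expert_policy(state[0])      # state[0] assumed to hold component degradation
    return actor_policy(state, theta)

# Example: early episodes lean on the expert, later ones on the actor.
theta = rng.normal(size=(len(ACTIONS), 2))
for episode in (0, 100, 500):
    state = np.array([rng.random(), rng.random()])   # toy state: [degradation, age]
    a = guided_sample(state, theta, episode)
    print(f"episode {episode}: action = {ACTIONS[a]}")
```

Seeding exploration with expert rules in this way narrows the early search to sensible maintenance actions, which is the intuition behind the reported gains in convergence speed.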