Research paper accepted by Journal of Manufacturing Processes

Selective laser melting (SLM) is a widely used additive manufacturing technique for producing metal components with complex geometries and high precision. However, poor process reproducibility and unstable product reliability have hindered its wide adoption in practice. Hence, there is a pressing demand for in-situ quality monitoring and real-time process control. In this paper, a feature-level multi-sensor fusion approach is proposed to combine acoustic emission signals with photodiode signals to realize in-situ quality monitoring for intelligence-driven production in SLM. An off-axis in-situ monitoring system featuring a microphone and a photodiode is developed to capture process signatures during the build process. Based on 2D porosity and 3D density measurements, the collected acoustic and optical signals are grouped into three categories indicating the quality of the produced parts. Taking the laser scanning information into account, an approach is developed to transform the 1D signals into 2D images. The converted images are then used to train a convolutional neural network that extracts and fuses the features derived from the two individual sensors. Compared with several baseline models, the proposed multi-sensor fusion approach achieves the best quality-monitoring performance.
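The 1D-to-2D conversion described above can be illustrated with a minimal sketch (not the authors' implementation; the serpentine raster scan pattern and the samples-per-track parameter are assumptions for illustration):

```python
# Illustrative sketch: fold a 1D process signal into a 2D image by
# following the laser scan path, so each pixel holds the sensor sample
# recorded while the laser was at that location.

def signal_to_image(signal, samples_per_track):
    """Fold a 1D signal into a 2D grid, one row per scan track.

    Assumes a bidirectional (serpentine) raster scan: odd-numbered
    tracks are reversed so neighbouring pixels are spatial neighbours.
    """
    n_tracks = len(signal) // samples_per_track
    image = []
    for t in range(n_tracks):
        row = signal[t * samples_per_track:(t + 1) * samples_per_track]
        if t % 2 == 1:  # serpentine scan: reverse every other track
            row = row[::-1]
        image.append(row)
    return image

# Example: 12 samples at 4 samples per track -> a 3x4 image
img = signal_to_image(list(range(12)), 4)
# -> [[0, 1, 2, 3], [7, 6, 5, 4], [8, 9, 10, 11]]
```

In practice the mapping would follow the actual scan trajectory and dwell times rather than a fixed track length; the sketch only conveys the idea of aligning signal samples with spatial positions.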

Prof. Pascal Van Hentenryck gave a talk on "Fusing AI and Optimization for Engineering"

This talk reviews new methodological developments in fusing data science, machine learning, and optimization, as well as their applications in energy systems, mobility, supply chains, and fair recommendations. It highlights the symbiotic relationships between deep learning, reinforcement learning, and optimization through optimization proxies and end-to-end learning.

Research paper accepted by Structural and Multidisciplinary Optimization

As an emerging technology in the era of Industry 4.0, the digital twin is gaining unprecedented attention because of its promise to further optimize process design, quality control, health monitoring, decision and policy making, and more, by comprehensively modeling the physical world as a group of interconnected digital models. In a two-part series of papers, we examine the fundamental role of different modeling techniques, twinning enabling technologies, and uncertainty quantification and optimization methods commonly used in digital twins. This first paper presents a thorough literature review of digital twin trends across the many disciplines currently pursuing this area of research. Then, digital twin modeling and twinning enabling technologies are further analyzed by classifying them into two main categories, physical-to-virtual and virtual-to-physical, based on the direction in which data flows. Finally, this paper provides perspectives on the trajectory of digital twin technology over the next decade and introduces a few emerging areas of research that will likely be of great use in future digital twin research. In part two of this review, the roles of uncertainty quantification and optimization are discussed, a battery digital twin is demonstrated, and further perspectives on the future of digital twins are shared.

Research paper accepted by Structural and Multidisciplinary Optimization

As an emerging technology in the era of Industry 4.0, digital twin is gaining unprecedented attention because of its promise to further optimize process design, quality control, health monitoring, decision and policy making, and more, by comprehensively modeling the physical world as a group of interconnected digital models. In a two-part series of papers, we examine the fundamental role of different modeling techniques, twinning enabling technologies, and uncertainty quantification and optimization methods commonly used in digital twins. This second paper presents a literature review of key enabling technologies of digital twins, with an emphasis on uncertainty quantification, optimization methods, open source datasets and tools, major findings, challenges, and future directions. Discussions focus on current methods of uncertainty quantification and optimization and how they are applied in different dimensions of a digital twin. Additionally, this paper presents a case study where a battery digital twin is constructed and tested to illustrate some of the modeling and twinning methods reviewed in this two-part review. Code and preprocessed data for generating all the results and figures presented in the case study are available on GitHub.

Research paper accepted by Reliability Engineering & System Safety

In this paper, we develop a generic physics-informed neural network (PINN)-based framework to assess the reliability of multi-state systems (MSSs). The proposed framework follows a two-step procedure. In the first step, we recast the reliability assessment of an MSS as a machine learning problem using the PINN framework. A feedforward neural network with two individual loss groups is constructed to encode the initial condition and the state transitions governed by ordinary differential equations in the MSS, respectively. Next, we tackle the high imbalance in the magnitudes of the back-propagated gradients from a multi-task learning perspective and establish a continuous latent function for system reliability assessment. In particular, we regard each element of the loss function as an individual learning task and project a task's gradient onto the normal plane of any other task with a conflicting gradient, using the projecting conflicting gradients (PCGrad) method. We demonstrate the applications of the proposed framework to MSS reliability assessment in a variety of scenarios, including time-independent and time-dependent state transitions, with system scales increasing from small to medium. The computational results indicate that the PINN-based framework achieves promising performance in MSS reliability assessment, and that incorporating PCGrad into the PINN substantially improves the solution quality and convergence speed of the algorithm.
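The PCGrad projection step can be sketched in a few lines (a minimal two-task illustration in pure Python, not the paper's implementation): when two task gradients conflict (negative inner product), one is projected onto the normal plane of the other before they are combined.

```python
# Minimal sketch of the PCGrad projection for two tasks. Gradients are
# plain Python lists; a real implementation would operate on framework
# tensors inside the training loop.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcgrad_pair(g1, g2):
    """Return g1 adjusted by PCGrad with respect to g2.

    If g1 conflicts with g2 (dot(g1, g2) < 0), subtract g1's component
    along g2, i.e. project g1 onto the normal plane of g2.
    """
    d = dot(g1, g2)
    if d < 0:  # conflicting gradients
        scale = d / dot(g2, g2)
        return [a - scale * b for a, b in zip(g1, g2)]
    return list(g1)

# Conflicting example: g1 = (1, 0), g2 = (-1, 1)
adj = pcgrad_pair([1.0, 0.0], [-1.0, 1.0])
# -> [0.5, 0.5]: the component of g1 along g2 has been removed
```

With more than two tasks, PCGrad applies this projection pairwise against each conflicting task gradient in random order before summing.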

Research paper accepted by Transportation Research Part C

Collisions during airport surface operations can create risk of injury to passengers, crew, or airport personnel, and damage to aircraft and ground equipment. A machine learning model that can predict the trajectories of ground objects can help to diminish the occurrence of such collision events. In this paper, we pursue this objective by building a spatial-temporal graph convolutional neural network (STG-CNN) model to predict the movement of objects/vehicles on the airport surface. The methodology adopted in this paper consists of three steps: (1) Raw data processing: leverage Apache Spark to parse a large volume of raw data in Flight Information Exchange Model (FIXM) format streamed from the Surface Movement Event Service (SMES), in order to derive the historical trajectory associated with each object on the ground; (2.1) Graph-based representations of ground object movements: build graph-based representations to characterize the movements of ground objects over time, where graph edges are used to capture the spatial relationships of ground objects with each other explicitly; (2.2) Trajectory forecasts of all ground objects: combine the STG-CNN with a Time-Extrapolator Convolution Neural Network (TXP-CNN) to forecast the future trajectories of all the ground objects as a whole; and (3) Separation distance-based safety assessment: define a probabilistic separation distance-based metric to assess the safety of airport surface movements. The performance of the developed model for trajectory prediction of ground objects is validated at two airports of varying scale, Hartsfield-Jackson Atlanta International Airport and LaGuardia Airport, under two different scenarios (peak hour and off-peak hour). Two quantitative performance metrics, Average Displacement Error (ADE) and Final Displacement Error (FDE), are used to compare the prediction performance of the proposed model with an alternative method.
The computational results indicate that the developed method achieves an ADE within the range [7.55, 9.33], significantly outperforming an alternative approach that combines an STG-CNN with a Convolutional Long Short-Term Memory (ConvLSTM) neural network, whose ADE falls within [15.79, 16.89], thus facilitating more accurate safety assessment during airport surface operations.
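The two reported metrics are straightforward to state precisely. A short sketch (with hypothetical trajectories): ADE averages the Euclidean error over all predicted time steps, while FDE measures only the error at the final step.

```python
# Sketch of the Average Displacement Error (ADE) and Final Displacement
# Error (FDE) metrics for 2D trajectories, given as lists of (x, y) points.
import math

def ade(pred, true):
    """Mean Euclidean distance over all predicted time steps."""
    errs = [math.dist(p, t) for p, t in zip(pred, true)]
    return sum(errs) / len(errs)

def fde(pred, true):
    """Euclidean distance at the final predicted time step."""
    return math.dist(pred[-1], true[-1])

# Hypothetical 3-step trajectory: errors 0, 1, 2 over the horizon
pred = [(0, 0), (1, 0), (2, 0)]
true = [(0, 0), (1, 1), (2, 2)]
# ade(pred, true) -> 1.0; fde(pred, true) -> 2.0
```

In multi-object settings such as the airport surface, these metrics are typically averaged over all ground objects in the scene.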

Research paper accepted by Advanced Engineering Informatics

Teams of aviation professionals are essential in maintaining a safe and efficient aerodrome environment. Nonetheless, the shared situational awareness between flight crews under adverse weather conditions might be impaired. This research aims to evaluate the impact of a proposed enhancement to the communication protocol on cognitive workload, and to develop a human-centred classification model to identify hazardous meteorological conditions. Thirty groups of subjects completed four post-landing taxiing tasks under two visibility conditions (CAVOK/CAT IIIA) while two different communication protocols (presence/absence of turning direction information) were adopted by the air traffic control officer (ATCO). Electroencephalography (EEG) and the NASA Task Load Index were used, respectively, to reflect the pilot's mental state and to evaluate the pilot's mental workload subjectively. Results indicated that impaired visibility increases the subjective workload significantly, while the inclusion of turning direction information in the ATCO's instructions does not significantly intensify the pilots' cognitive workload. Mutual information was used to quantitatively assess the shared situational awareness between the pilot flying and the pilot monitoring. Finally, this research proposes a human-centred approach to identify potentially hazardous weather conditions from EEG power spectral densities with Bayesian neural networks (BNNs). The classification model outperformed other baseline algorithms with an accuracy of 66.5%, an F1 score of 61.4%, and an area under the ROC curve of 0.749. Using the concept of explainable AI with Shapley Additive Explanations (SHAP) values, the exploration of latent mental patterns in the BNN model yields novel insights into the vital physiological indicators of the pilots in response to different scenarios. In the long term, the model facilitates decisions regarding the necessity of providing automation and decision-making aids to pilots.
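The mutual-information measure of shared situational awareness can be illustrated with a simple plug-in estimator over discretized signals (purely illustrative, not the study's code; the binary state sequences are hypothetical stand-ins for discretized EEG-derived states):

```python
# Sketch of a plug-in mutual information estimate (in bits) between two
# discrete sequences, e.g. discretized mental-state indicators of the
# pilot flying and the pilot monitoring.
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X; Y) = sum over (x, y) of p(x, y) * log2(p(x, y) / (p(x) p(y)))."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts for X
    py = Counter(ys)            # marginal counts for Y
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p(x) * p(y)) = c * n / (count_x * count_y)
        mi += p_joint * math.log2(c * n / (px[x] * py[y]))
    return mi

# Two identical binary sequences share exactly 1 bit of information
mi = mutual_information([0, 1, 0, 1], [0, 1, 0, 1])
# -> 1.0
```

Higher mutual information between the two crew members' state sequences would indicate more closely coupled awareness; independent sequences yield values near zero.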

Research project funded by Early Career Scheme (ECS) of Research Grants Council

The past few years have witnessed the rapid development of artificial intelligence (AI) and machine learning (ML) in solving long-standing problems. AI/ML has played an indispensable role in profoundly transforming business, transportation, and finance. However, the adoption of AI/ML in risk-sensitive areas is still in its infancy because AI/ML systems exhibit fundamental limits and practical shortcomings: the field lacks a rigorous framework for reasoning about risk, uncertainty, and their potentially catastrophic outcomes, while safety and quality are top priorities in practice across a broad array of high-stakes applications ranging from medical diagnosis to aerospace systems. Since the consequences of AI/ML system failures in life- and safety-critical applications can be disastrous, it is becoming abundantly clear that reasoning about uncertainty and risk will be a cornerstone of adopting AI/ML systems in large-scale risk-sensitive applications. As ML systems face varied risks arising from the input data (e.g., data bias, dataset shift) as well as the ML model (e.g., model bias, model misspecification), the goal of this project is to develop a systematic risk-aware ML framework consisting of a series of control checkpoints that safeguard ML systems against potential risks, thereby increasing the credibility of adopting ML systems in critical applications. Towards this goal, the project consists of three coherent tasks: (1) develop an integrated approach for data quality monitoring by combining feature-based anomaly detection techniques with outcome-based uncertainty measures; the integrated approach will produce a composite probabilistic risk indicator revealing input data quality; (2) develop a two-stage ML-based framework to estimate model reliability for individual prediction (MRIP); MRIP characterizes the probability that the observed difference between the model prediction and the actual value lies within a tiny interval while the model input varies within a small prescribed range, providing an individualized estimate of prediction reliability for each input x; and (3) develop an ML model to learn system-level risk, mapping the data-level and model-level risk indicators derived in the first two tasks to a risk measure at the system level. The proposed effort has profound practical implications: the risk-aware framework will act as an effective safety barrier, preventing ML models from making over-confident predictions on cases that are too noisy, anomalous, outside the domain of the trained model, or of low prediction reliability, thus facilitating safe and reliable adoption of ML systems in critical applications.
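The MRIP quantity described in task (2) can be approximated by a simple Monte Carlo sketch (a hedged illustration, not the project's method; the model, interval width eps, and perturbation range delta are hypothetical stand-ins):

```python
# Monte Carlo sketch of MRIP: the probability that |f(x') - y| stays
# within a tiny interval eps while the input x' varies within a small
# prescribed range delta around x.
import random

def mrip(model, x, y, eps=0.1, delta=0.05, n_samples=1000, seed=0):
    """Estimate P(|model(x') - y| <= eps) for x' uniform in [x-delta, x+delta]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x_pert = x + rng.uniform(-delta, delta)
        if abs(model(x_pert) - y) <= eps:
            hits += 1
    return hits / n_samples

# A smooth, well-calibrated model around x = 1.0 yields an MRIP near 1,
# flagging the prediction as individually reliable.
r = mrip(lambda v: 2.0 * v, 1.0, 2.0)
```

A low MRIP value would signal that small input perturbations move the prediction outside the acceptable interval, i.e. an individually unreliable prediction.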

Research paper accepted by Decision Support Systems

The adoption of artificial intelligence (AI) and machine learning (ML) in risk-sensitive environments is still in its infancy because the field lacks a systematic framework for reasoning about risk, uncertainty, and their potentially catastrophic consequences. In high-impact applications, inference on risk and uncertainty will become decisive in the adoption of AI/ML systems. To this end, there is a pressing need for a consolidated understanding of the varied risks arising from AI/ML systems, and of how these risks and their side effects emerge and unfold in practice. In this paper, we provide a systematic and comprehensive overview of a broad array of inherent risks that can arise in AI/ML systems. These risks are grouped into two categories: data-level risks (e.g., data bias, dataset shift, out-of-domain data, and adversarial attacks) and model-level risks (e.g., model bias, misspecification, and uncertainty). In addition, we highlight the research needed to develop a holistic risk-management framework dedicated to AI/ML systems that hedges the corresponding risks. Furthermore, we outline several research-related challenges and opportunities in the development of risk-aware AI/ML systems. Our research has the potential to significantly increase the credibility of deploying AI/ML models in high-stakes decision settings, facilitating safety assurance and preventing unintended consequences.

Research paper accepted by IEEE Transactions on Intelligent Transportation Systems

Landing is generally cited as one of the riskiest phases of flight, as indicated by its much higher accident rate relative to other flight phases. In this paper, we focus on the hard landing problem (defined as the touchdown vertical speed exceeding a predefined threshold) and build a probabilistic predictive model to forecast the aircraft's vertical speed at touchdown using DASHlink data. Previous work has treated hard landing as a classification problem, where the vertical speed is represented as a categorical variable based on a predefined threshold. In this paper, we build a machine learning model to numerically predict the touchdown vertical speed during aircraft landing. Probabilistic forecasting is used to quantify the uncertainty in model prediction, which in turn supports risk-informed decision-making. A Bayesian neural network approach is leveraged to construct the predictive model. The overall methodology consists of five steps. First, a clustering method based on the minimum separation between different airports is developed to identify flights in the dataset that landed at the same airport. Second, since identifying the touchdown point itself is not straightforward, it is determined by comparing the vertical speed distributions derived from different candidate touchdown indicators. Third, a forward and backward filtering (filtfilt) approach is used to smooth the data without introducing phase lag. Fourth, a minimal-redundancy-maximal-relevance (mRMR) analysis is used to reduce the dimensionality of the input variables. Finally, a Bayesian recurrent neural network is trained to predict the touchdown vertical speed and quantify the uncertainty in the prediction. The model is validated on several flights in the test dataset, and computational results demonstrate the satisfactory performance of the proposed approach.
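The zero-phase smoothing idea in the third step can be illustrated with a simple sketch (a hypothetical pure-Python stand-in for scipy.signal.filtfilt, using a moving average instead of the paper's actual filter): the signal is filtered forward, then the result is filtered backward, so the phase lags of the two passes cancel.

```python
# Sketch of forward-backward (zero-phase) filtering in the spirit of
# filtfilt, using a simple causal moving average as the base filter.

def moving_average(x, k):
    """Causal moving average with a growing window at the start."""
    out = []
    for i in range(len(x)):
        window = x[max(0, i - k + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

def filt_filt(x, k=3):
    """Apply the filter forward, then backward, cancelling phase lag."""
    forward = moving_average(x, k)
    backward = moving_average(forward[::-1], k)
    return backward[::-1]

# A constant signal passes through unchanged: no lag, no distortion
smoothed = filt_filt([1.0] * 8, k=3)
# -> [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```

The trade-off is that each pass doubles the effective filter order, and the backward pass makes the overall operation non-causal, which is acceptable for offline flight-data analysis but not for real-time use.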