
Research project funded by Early Career Scheme (ECS) of Research Grants Council

The past few years have witnessed the rapid development of artificial intelligence (AI) and machine learning (ML) in solving long-standing problems. AI/ML has played an indispensable role in profoundly transforming business, transportation, and finance. However, the adoption of AI/ML in risk-sensitive areas is still in its infancy because AI/ML systems exhibit fundamental limits and practical shortcomings: the field lacks a rigorous framework for reasoning about risk, uncertainty, and their potentially catastrophic outcomes, while safety and quality are top priorities across a broad array of high-stakes applications ranging from medical diagnosis to aerospace systems. Since the consequences of AI/ML system failures in life- and safety-critical applications can be disastrous, reasoning about uncertainty and risk will become a cornerstone of adopting AI/ML systems in large-scale risk-sensitive applications. As ML systems face varied risks arising from both the input data (e.g., data bias, dataset shift) and the ML model (e.g., model bias, model misspecification), the goal of this project is to develop a systematic risk-aware ML framework consisting of a series of control checkpoints that safeguard ML systems against potential risks and thereby increase the credibility of adopting ML systems in critical applications. Towards this goal, the project consists of three coherent tasks: (1) develop an integrated approach for data quality monitoring that combines feature-based anomaly detection with an outcome-based uncertainty measure, producing a composite probabilistic risk indicator of input data quality; (2) develop a two-stage ML-based framework to estimate model reliability for individual predictions (MRIP). MRIP characterizes the probability that the difference between the model prediction and the actual value stays within a tiny interval while the model input varies within a small prescribed range, thus providing an individualized estimate of prediction reliability for each input x; (3) develop an ML model to learn system-level risk by mapping the data-level and model-level risk indicators derived in the first two tasks to a risk measure at the system level. The proposed effort has profound practical implications: the risk-aware framework will act as an effective safety barrier that prevents ML models from making over-confident predictions on cases that are too noisy, anomalous, outside the domain of the trained model, or of low prediction reliability, thus facilitating the safe and reliable adoption of ML systems in critical applications.
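To make the MRIP notion concrete, it can be estimated by plain Monte Carlo sampling: perturb the input within a small box and count how often the prediction stays within a tolerance of the observed value. The sketch below is illustrative only — the stand-in model f, the ranges delta and eps, and the sampling scheme are assumptions, not the project's actual two-stage framework:

```python
import random

def mrip(model, x, y_obs, delta=0.05, eps=0.1, n=1000, seed=0):
    """Monte Carlo estimate of model reliability for an individual
    prediction (MRIP): the probability that |model(x') - y_obs| < eps
    while each input coordinate x' varies within +/- delta of x."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x_pert = [xi + rng.uniform(-delta, delta) for xi in x]
        if abs(model(x_pert) - y_obs) < eps:
            hits += 1
    return hits / n

# hypothetical stand-in model for illustration
f = lambda x: 2.0 * x[0] + x[1]
reliability = mrip(f, [1.0, 0.5], y_obs=2.5, delta=0.01, eps=0.1)
```

A reliability near 1 means the prediction is insensitive to small input perturbations around x; values well below 1 would flag the prediction as unreliable for that individual input.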

Research paper accepted by Decision Support Systems

The adoption of artificial intelligence (AI) and machine learning (ML) in risk-sensitive environments is still in its infancy because the field lacks a systematic framework for reasoning about risk, uncertainty, and their potentially catastrophic consequences. In high-impact applications, inference on risk and uncertainty will become decisive in the adoption of AI/ML systems. To this end, there is a pressing need for a consolidated understanding of the varied risks arising from AI/ML systems, and of how these risks and their side effects emerge and unfold in practice. In this paper, we provide a systematic and comprehensive overview of a broad array of inherent risks that can arise in AI/ML systems. These risks are grouped into two categories: data-level risks (e.g., data bias, dataset shift, out-of-domain data, and adversarial attacks) and model-level risks (e.g., model bias, misspecification, and uncertainty). In addition, we highlight the research needed to develop a holistic risk management framework dedicated to AI/ML systems that hedges against these risks. Furthermore, we outline several research challenges and opportunities related to the development of risk-aware AI/ML systems. Our research has the potential to significantly increase the credibility of deploying AI/ML models in high-stakes decision settings, facilitating safety assurance and guarding systems against unintended consequences.

Research paper accepted by IEEE Transactions on Intelligent Transportation Systems

Landing is generally cited as one of the riskiest phases of a flight, as indicated by its much higher accident rate compared with other flight phases. In this paper, we focus on the hard landing problem (defined as the touchdown vertical speed exceeding a predefined threshold) and build a probabilistic predictive model to forecast the aircraft’s vertical speed at touchdown, using DASHlink data. Previous work has treated hard landing as a classification problem, where the vertical speed is represented as a categorical variable based on a predefined threshold. In this paper, we build a machine learning model to numerically predict the touchdown vertical speed during aircraft landing. Probabilistic forecasting is used to quantify the uncertainty in the model prediction, which in turn supports risk-informed decision-making. A Bayesian neural network approach is leveraged to construct the predictive model. The overall methodology consists of five steps. First, a clustering method based on the minimum separation between different airports is developed to identify flights in the dataset that landed at the same airport. Second, since identifying the touchdown point itself is not straightforward, it is determined by comparing the vertical speed distributions derived from different candidate touchdown indicators. Third, a forward and backward filtering (filtfilt) approach is used to smooth the data without introducing phase lag. Fourth, a minimal-redundancy-maximal-relevance (mRMR) analysis is used to reduce the dimensionality of the input variables. Finally, a Bayesian recurrent neural network is trained to predict the touchdown vertical speed and quantify the uncertainty in the prediction. The model is validated using several flights in the test dataset, and computational results demonstrate the satisfactory performance of the proposed approach.
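The zero-phase smoothing idea in the third step can be illustrated with a toy forward-backward filter: filter the signal, reverse it, filter again, and reverse back, so the phase lags of the two passes cancel. The moving-average kernel below is a simplified stand-in for the standard filtfilt routine applied to the flight data:

```python
def smooth(x, k=3):
    """Causal moving average of window k (introduces phase lag)."""
    out = []
    for i in range(len(x)):
        window = x[max(0, i - k + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def filtfilt_ma(x, k=3):
    """Forward-backward (zero-phase) smoothing: filter forward,
    reverse, filter again, reverse back. The backward pass cancels
    the lag introduced by the forward pass."""
    forward = smooth(x, k)
    backward = smooth(forward[::-1], k)
    return backward[::-1]
```

In practice one would use a proper low-pass filter (as scipy.signal.filtfilt does) rather than a moving average; the sketch only shows why the two-pass scheme avoids shifting features such as the touchdown point in time.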

Research paper accepted by Knowledge-Based Systems

The poor explainability of deep learning models has hindered their adoption in safety- and quality-critical applications. This paper focuses on image classification models and aims to enhance their explainability through the development of an uncertainty quantification-based framework. The proposed methodology consists of three major steps. In the first step, we adopt a dropout-based Bayesian neural network to characterize the structure and parameter uncertainty inherent in deep learning models, and propagate such uncertainties to the model prediction, which is represented as a distribution. We then employ entropy as a quantitative indicator of the uncertainty in the model prediction, and develop an Empirical Cumulative Distribution Function (ECDF)-based approach to determine an appropriate threshold for deciding when to accept or reject the model prediction. In the second step, for cases with high prediction uncertainty, we combine the prediction difference analysis (PDA) approach with the dropout-based Bayesian neural network to quantify the uncertainty in pixel-wise feature importance, and identify the locations in the input image that correlate strongly with the model prediction uncertainty. In the third step, we develop a robustness-based design optimization formulation to enhance the relevance between input features and model prediction, and leverage a differential evolution approach to optimize the pixels in the input image with high uncertainty in feature importance. Experimental studies on MNIST and CIFAR-10 image classification demonstrate the effectiveness of the proposed approach in increasing the explainability of deep learning models.
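The entropy-based accept/reject step can be sketched as follows. The toy functions assume the MC-dropout class probabilities have already been averaged into a single predictive distribution, and the quantile-style cutoff is a simplified stand-in for the paper's ECDF procedure:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a mean predictive distribution,
    e.g. MC-dropout samples averaged class-wise. Higher entropy
    means a more uncertain prediction."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def ecdf_threshold(entropies, q=0.9):
    """Pick the entropy value below which a fraction q of validation
    predictions fall; predictions whose entropy exceeds it are
    flagged for rejection (simplified ECDF-based cutoff)."""
    s = sorted(entropies)
    idx = min(len(s) - 1, int(q * len(s)))
    return s[idx]
```

A confident prediction such as [0.98, 0.01, 0.01] has near-zero entropy, while a uniform distribution attains the maximum log(K); the threshold then turns this continuous score into an accept/reject decision.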

Research paper accepted by Journal of Mechanical Design

Identifying a reliable path in uncertain environments is essential for designing reliable off-road autonomous ground vehicles (AGVs) that account for post-design operations. This paper presents a novel bio-inspired approach for model-based multi-vehicle mission planning under uncertainty for off-road AGVs subject to mobility reliability constraints in dynamic environments. A physics-based vehicle dynamics simulation model is first employed to predict vehicle mobility (i.e., maximum attainable speed) for any given terrain and soil conditions. Based on these physics-based simulations, the vehicle state mobility reliability in operation is analyzed using an adaptive surrogate modeling method that overcomes the computational burden of mobility reliability analysis. Subsequently, a bio-inspired Physarum-based algorithm is used in conjunction with a navigation mesh to identify an optimal path satisfying a specific mobility reliability requirement. The developed Physarum-based framework is applied to reliability-based path planning for both single-vehicle and multi-vehicle scenarios. A case study demonstrates the efficacy of the proposed methods and algorithms. The results show that the proposed framework can effectively identify optimal paths in both scenarios while requiring less computational time than the widely used Dijkstra-based method.
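For reference, the kind of Dijkstra-based baseline the framework is compared against can be sketched as a reliability-constrained shortest-path search: prune edges whose mobility reliability falls below the requirement, then run ordinary Dijkstra. The graph, edge weights, and edge-wise reliability model below are hypothetical; the paper's Physarum-based algorithm operates on a navigation mesh instead:

```python
import heapq

def reliable_shortest_path(edges, src, dst, r_min):
    """Dijkstra over a directed graph whose edges carry
    (travel_time, reliability); edges below the mobility-reliability
    requirement r_min are pruned, so any returned path satisfies the
    reliability constraint edge-wise. Returns total travel time or
    None if no feasible path exists."""
    graph = {}
    for u, v, t, r in edges:
        if r >= r_min:
            graph.setdefault(u, []).append((v, t))
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, t in graph.get(u, []):
            nd = d + t
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None

# hypothetical edges: (from, to, travel_time, mobility_reliability)
edges = [("A", "B", 1.0, 0.90),
         ("B", "C", 1.0, 0.95),
         ("A", "C", 1.0, 0.50)]
```

Raising r_min shrinks the feasible graph, which is exactly the cost/reliability trade-off the path planner must navigate.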

ML model in production at FedEx Express

FedEx Express handles more than 6.5 million packages every day in nearly 220 countries and regions. Customers expect timely and accurate information on their package deliveries. To address this demand, I worked with teams across different departments (e.g., IT, marketing) and colleagues in the Operations Research and Spatial Analytics (ORSA) group to develop and deploy a machine learning-based solution that produces customized expected delivery time windows for millions of packages every day. I am proud that the deployed ML model has been in production with reliable performance since March 2021.

Research paper accepted by Safety Science

In this paper, we apply a set of data-mining and sequential deep learning techniques to accident investigation reports published by the National Transportation Safety Board (NTSB) in support of the prognosis of adverse events. Our focus is on learning from text data that describes sequences of events. The NTSB creates post-hoc investigation reports that contain raw text narratives of the investigation along with the corresponding concise event sequences. Classification models are developed for passenger air carriers that take either an observed sequence of events or the corresponding raw text narrative as input and predict whether an accident or an incident is the likely outcome, whether the aircraft is likely to be damaged, and whether any fatalities are likely. The classification models are built using word embeddings and the Long Short-Term Memory (LSTM) neural network. The proposed methodology is implemented in two steps: (i) transform the NTSB data extracts into a labeled dataset for building supervised machine learning models; and (ii) develop deep learning (DL) models for the prognosis of adverse events such as accidents, aircraft damage, or fatalities. We also develop a prototype of an interactive query interface that lets end-users test various scenarios, including complete or partial event sequences or narratives, and obtain predictions regarding the adverse events. The sequential deep learning models help safety professionals audit, review, and analyze accident investigation reports and perform what-if scenario analyses to quantify the contributions of various hazardous events to the occurrence of aviation accidents and incidents.
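Step (i), turning event sequences into a labeled supervised dataset, can be sketched as a simple vocabulary-building and padding routine before the sequences are fed to an embedding layer and LSTM. The event names, label coding, and maximum length below are hypothetical placeholders, not the NTSB coding scheme:

```python
def build_dataset(records, max_len=6):
    """Turn (event_sequence, outcome_label) pairs into fixed-length
    integer sequences plus labels: build a vocabulary on the fly,
    integer-encode each event, and pad/truncate to max_len with 0."""
    vocab = {"<pad>": 0}
    X, y = [], []
    for events, label in records:
        ids = [vocab.setdefault(ev, len(vocab)) for ev in events]
        X.append((ids + [0] * max_len)[:max_len])
        y.append(label)
    return X, y, vocab

# hypothetical event sequences with a binary adverse-outcome label
records = [(["loss of control", "hard landing"], 1),
           (["normal landing"], 0)]
X, y, vocab = build_dataset(records, max_len=4)
```

The resulting integer matrix X and label vector y are the standard inputs expected by an embedding-plus-LSTM classifier.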

Research paper accepted by Reliability Engineering & System Safety

Safety assurance is of paramount importance in the air transportation system. In this paper, we analyze the historical passenger airline accidents that occurred from 1982 to 2006, as reported in the National Transportation Safety Board (NTSB) aviation accident database. A four-step procedure is formulated to construct a Bayesian network that captures the causal relationships embedded in the sequences of these accidents. First, for each accident, a graphical representation is developed to facilitate visualization of how initiating events escalate into aviation accidents. Next, we develop a Bayesian network representation of all the accidents by aggregating the accident-wise graphical representations, capturing the causal and dependence relationships among a wide variety of contributory factors and outcomes in terms of aircraft damage and personnel injury. In the Bayesian network, the prior probabilities are estimated from the accident occurrence data and the aircraft departure data from the Bureau of Transportation Statistics (BTS). To estimate the conditional probabilities, we develop a monotonically increasing function whose parameters are calibrated using the probability information on single events in the available data. Finally, we develop a computer program to automate the generation of the Bayesian network in compliance with the XML format used by the commercial GeNIe modeler, into which the constructed network is then fed for accident analysis. The mapping of the NTSB data to a Bayesian network supports both forward propagation and backward inference in probabilistic analysis, thereby aiding accident investigations and risk analysis. Several accident cases are used to demonstrate the developed approach.
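Forward propagation through a single arc of such a Bayesian network reduces to the law of total probability, and backward inference to Bayes' rule. The prior and conditional probabilities below are made-up numbers for illustration, not values calibrated from the NTSB data:

```python
def propagate(prior, cpt):
    """Forward propagation through one arc of a Bayesian network:
    P(outcome) = sum over event states e of P(outcome | e) * P(e)."""
    return sum(cpt[e] * p for e, p in prior.items())

# hypothetical two-state initiating event and P(damage | event)
prior = {"engine_failure": 0.01, "no_failure": 0.99}
cpt = {"engine_failure": 0.60, "no_failure": 0.001}

p_damage = propagate(prior, cpt)

# backward inference (Bayes' rule): P(engine_failure | damage)
p_failure_given_damage = cpt["engine_failure"] * prior["engine_failure"] / p_damage
```

Even this two-node example shows the asymmetry the paper exploits: a rare initiating event can dominate the posterior once the adverse outcome is observed.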

Research paper accepted by IEEE Transactions on Reliability

Resilience is an important capability of many complex systems: it mitigates the impact of extreme events and enables timely restoration of system performance in the aftermath of a disruptive event. In this paper, we investigate a bi-level pre-disaster resilience-based design optimization approach for the configuration of logistics service centers. In the bi-level program, the upper-level model considers the impact of potential disruptive events and characterizes the system planner’s decisions regarding the service center configuration, which consists of two decision variables: whether to construct a service center at each candidate site, and its specific capacity. The upper-level optimization considers both each customer’s travel time from origin to service center and the within-center service time, including the average waiting time in the queue and the mean processing time. The lower-level model captures customers’ behavior in choosing the distribution center to fulfill their requests, with the goal of minimizing the cumulative travel time over all customers. The objective of the formulated bi-level program is to maximize the resilience of the service center configuration, thereby increasing the system’s ability to withstand unexpected events. To tackle this NP-hard optimization problem, an adaptive importance sampling approach, the cross entropy-based method, is leveraged to iteratively generate samples that gradually concentrate their mass in the proximity of the optimal solution. A numerical example illustrates the procedure and demonstrates the effectiveness of the proposed methodology.
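The cross entropy-based search can be illustrated on a one-dimensional toy objective: sample candidates from a Gaussian, keep the elite fraction, refit the Gaussian to the elites, and repeat until the sampling distribution concentrates near the optimum. The Gaussian parameterization, sample sizes, and toy objective below are illustrative choices, not the paper's configuration:

```python
import random

def cross_entropy_max(f, mu=0.0, sigma=5.0, n=200, elite=20, iters=30, seed=0):
    """Cross-entropy method for maximization: draw n Gaussian samples,
    keep the elite best under f, refit (mu, sigma) to the elites, and
    iterate so the distribution concentrates near the optimum."""
    rng = random.Random(seed)
    for _ in range(iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        xs.sort(key=f, reverse=True)
        top = xs[:elite]
        mu = sum(top) / elite
        sigma = (sum((x - mu) ** 2 for x in top) / elite) ** 0.5 + 1e-12
    return mu

# toy concave objective with its maximum at x = 3
best = cross_entropy_max(lambda x: -(x - 3.0) ** 2)
```

In the paper's setting the samples are candidate service-center configurations and f is the resilience objective, but the concentrate-and-refit mechanics are the same.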

Research project funded by ARPA-E

The increasing role of renewable energy sources is challenging grid operations, which have traditionally relied on highly predictable load and generation. Future grid operations must balance generation costs and system-level risk, shifting from deterministic to stochastic optimization and risk management. The Risk-Aware Market Clearing (RAMC) project will provide a blueprint for an end-to-end, data-driven approach where risk is explicitly modeled, quantified, and optimized, striking a trade-off between cost and system-level risk minimization. The RAMC project focuses on challenges arising from increased stochasticity in generation, load, flow interchanges with adjacent markets, and extreme weather. RAMC addresses these challenges through innovations in machine learning, sampling, and optimization. Starting with the risk quantification of each individual asset obtained from historical data, RAMC learns the correlations between the performance and risk of individual assets, optimizes the selection of asset bundles, and quantifies the system-level risk.
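Risk quantification of an individual asset from historical data is commonly done with a tail measure such as conditional value-at-risk (CVaR). The sketch below is a generic empirical estimator included for illustration, not RAMC's actual risk model:

```python
def cvar(losses, alpha=0.95):
    """Empirical conditional value-at-risk: the mean loss over the
    worst (1 - alpha) fraction of historical outcomes. Unlike plain
    VaR, it accounts for how bad the tail is, not just where it starts."""
    s = sorted(losses)
    k = int(len(s) * alpha)
    tail = s[k:] or [s[-1]]
    return sum(tail) / len(tail)
```

Because CVaR is coherent (in particular, subadditive), per-asset estimates like this can be combined and optimized at the portfolio level, which is the system-level trade-off between cost and risk that RAMC targets.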