Research paper accepted by Decision Support Systems
The adoption of artificial intelligence (AI) and machine learning (ML) in risk-sensitive environments is still in its infancy, in part because the field lacks a systematic framework for reasoning about risk, uncertainty, and their potentially catastrophic consequences. In high-impact applications, reasoning about risk and uncertainty will become decisive in the adoption of AI/ML systems. To this end, there is a pressing need for a consolidated understanding of the varied risks arising from AI/ML systems, and of how these risks and their side effects emerge and unfold in practice. In this paper, we provide a systematic and comprehensive overview of a broad array of inherent risks in AI/ML systems, grouped into two categories: data-level risks (e.g., data bias, dataset shift, out-of-domain data, and adversarial attacks) and model-level risks (e.g., model bias, misspecification, and uncertainty). In addition, we highlight the research needed to develop a holistic risk-management framework dedicated to AI/ML systems that hedges against these risks. Furthermore, we outline several research challenges and opportunities related to the development of risk-aware AI/ML systems. Our research has the potential to significantly increase the credibility of deploying AI/ML models in high-stakes decision settings by facilitating safety assurance and guarding against unintended consequences.
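Since this paper is a survey and provides no reference implementation, the following is a minimal sketch of one data-level risk check it discusses, dataset shift, using a two-sample Kolmogorov-Smirnov test on a single feature. The synthetic feature values and the 0.01 significance level are illustrative assumptions, not the paper's method.

    # Minimal sketch: flagging possible dataset shift in one feature with a
    # two-sample Kolmogorov-Smirnov test (illustrative; not the paper's method).
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    train_feature = rng.normal(0.0, 1.0, size=1000)   # distribution seen at training time
    deploy_feature = rng.normal(0.5, 1.0, size=1000)  # shifted deployment distribution

    stat, p_value = ks_2samp(train_feature, deploy_feature)
    if p_value < 0.01:  # assumed significance level
        print(f"possible dataset shift: KS statistic={stat:.3f}, p={p_value:.2e}")

In practice such a univariate test would be run per feature, alongside model-level safeguards such as out-of-distribution detection and uncertainty quantification.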
Research paper accepted by IEEE Transactions on Intelligent Transportation Systems
Landing is generally cited as one of the riskiest phases of flight, as indicated by an accident rate much higher than that of other flight phases. In this paper, we focus on the hard landing problem (defined as the touchdown vertical speed exceeding a predefined threshold) and build a probabilistic predictive model to forecast the aircraft's vertical speed at touchdown, using NASA DASHlink data. Previous work has treated hard landing as a classification problem, in which the vertical speed is represented as a categorical variable based on a predefined threshold. In this paper, we instead build a machine learning model that numerically predicts the touchdown vertical speed during aircraft landing. Probabilistic forecasting is used to quantify the uncertainty in the model prediction, which in turn supports risk-informed decision-making. A Bayesian neural network approach is leveraged to construct the predictive model. The overall methodology consists of five steps. First, a clustering method based on the minimum separation between different airports is developed to identify flights in the dataset that landed at the same airport. Second, since identifying the touchdown point itself is not straightforward, it is determined by comparing the vertical speed distributions derived from different candidate touchdown indicators. Third, a forward and backward filtering (filtfilt) approach is used to smooth the data without introducing phase lag. Fourth, a minimal-redundancy-maximal-relevance (mRMR) analysis is used to reduce the dimensionality of the input variables. Finally, a Bayesian recurrent neural network is trained to predict the touchdown vertical speed and quantify the uncertainty in that prediction. The model is validated on several flights in the test dataset, and computational results demonstrate the satisfactory performance of the proposed approach.
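The abstract includes no code; the snippet below is a rough sketch of two of the five steps: zero-phase smoothing with SciPy's filtfilt, followed by the Monte Carlo dropout sampling pattern commonly used to approximate a Bayesian recurrent network. The sampling rate, filter order, window length, feature count, and network size are all assumptions for illustration, and the model is left untrained, so only the sampling pattern is meaningful.

    # Sketch of pipeline steps 3 and 5 under assumed parameters (not the paper's code).
    import numpy as np
    import tensorflow as tf
    from scipy.signal import butter, filtfilt

    # Step 3: forward-backward (zero-phase) smoothing of a noisy vertical-speed trace.
    fs = 16.0  # assumed sampling rate in Hz; actual DASHlink rates vary by channel
    t = np.arange(0.0, 60.0, 1.0 / fs)
    rng = np.random.default_rng(0)
    noisy_vs = -8.0 + 2.0 * np.sin(0.5 * t) + rng.normal(0.0, 0.8, t.size)
    b, a = butter(4, 0.1)                 # 4th-order low-pass, normalized cutoff 0.1
    smooth_vs = filtfilt(b, a, noisy_vs)  # filtering both directions cancels phase lag

    # Step 5: a dropout-based LSTM as an approximate Bayesian recurrent network.
    T, F = 50, 8  # hypothetical window length and number of mRMR-selected features
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(T, F)),
        tf.keras.layers.LSTM(32, dropout=0.2, recurrent_dropout=0.2),
        tf.keras.layers.Dense(1),  # touchdown vertical speed
    ])

    # Monte Carlo dropout: keep dropout active at test time and sample repeatedly.
    # The model is untrained here, so the numbers only illustrate the pattern.
    x = rng.normal(size=(1, T, F)).astype("float32")
    samples = np.stack([model(x, training=True).numpy().ravel() for _ in range(100)])
    print(f"predicted touchdown VS: {samples.mean():.2f} +/- {samples.std():.2f}")

The spread of the sampled predictions provides the predictive uncertainty that supports the risk-informed decision-making the abstract describes.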
Research paper accepted by Knowledge-Based Systems
The poor explainability of deep learning models has hindered their adoption in safety- and quality-critical applications. This paper focuses on image classification models and aims to enhance the explainability of deep learning models through an uncertainty quantification-based framework. The proposed methodology consists of three major steps. In the first step, we adopt a dropout-based Bayesian neural network to characterize the structure and parameter uncertainty inherent in deep learning models, and propagate these uncertainties to the model prediction, which is represented as a distribution. We then employ entropy as a quantitative indicator of the uncertainty in the model prediction, and develop an empirical cumulative distribution function (ECDF)-based approach to determine an appropriate threshold for deciding when to accept or reject the model prediction. In the second step, for cases with high model prediction uncertainty, we combine the prediction difference analysis (PDA) approach with the dropout-based Bayesian neural network to quantify the uncertainty in pixel-wise feature importance, and identify the locations in the input image that correlate strongly with the model prediction uncertainty. In the third step, we develop a robustness-based design optimization formulation to strengthen the relevance between input features and the model prediction, and leverage a differential evolution approach to optimize the pixels in the input image with high uncertainty in feature importance. Experimental studies on MNIST and CIFAR-10 image classification demonstrate the effectiveness of the proposed approach in increasing the explainability of deep learning models.
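No code accompanies this abstract; the sketch below illustrates only the first step: Monte Carlo dropout sampling, predictive entropy, and an ECDF-based accept/reject threshold. The toy untrained CNN, the random placeholder data, the 50 dropout samples, and the 95th-percentile cutoff are all assumed choices standing in for the paper's actual architecture and calibration.

    # Sketch of step 1 under assumed choices (not the paper's implementation).
    import numpy as np
    import tensorflow as tf

    # Toy CNN with a dropout layer that stays active at inference (MC dropout).
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    def predictive_entropy(x, n_samples=50):
        # Sample softmax outputs with dropout active, average them, then
        # compute the entropy of the mean predictive distribution per image.
        probs = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
        mean_probs = probs.mean(axis=0)
        return -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)

    # ECDF-based threshold: reject the most uncertain 5% of validation predictions.
    # Random noise stands in for MNIST-like validation images here.
    rng = np.random.default_rng(0)
    val_x = rng.normal(size=(200, 28, 28, 1)).astype("float32")
    val_entropy = predictive_entropy(val_x)
    threshold = np.quantile(val_entropy, 0.95)  # 95th percentile of the ECDF

    test_entropy = predictive_entropy(val_x[:5])
    accept = test_entropy <= threshold  # accept predictions only at low uncertainty

Predictions whose entropy exceeds the threshold would then be passed to the second step (PDA-based feature-importance analysis) rather than accepted outright.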