Research paper accepted by Advanced Engineering Informatics

Teams formed by aviation professionals are essential to maintaining a safe and efficient aerodrome environment. Nonetheless, shared situational awareness between flight crew members may be impaired under adverse weather conditions. This research aims to evaluate the impact of a proposed enhancement to the communication protocol on cognitive workload and to develop a human-centred classification model to identify hazardous meteorological conditions. Thirty groups of subjects completed four post-landing taxiing tasks under two visibility conditions (CAVOK/CAT IIIA) while two different communication protocols (presence/absence of turning direction information) were adopted by the air traffic control officer (ATCO). Electroencephalography (EEG) and the NASA Task Load Index were used to reflect the pilot's mental state and to subjectively evaluate the pilot's mental workload, respectively. Results indicated that impaired visibility significantly increases subjective workload, whereas the inclusion of turning direction information in the ATCO's instructions does not significantly intensify pilots' cognitive workload. Mutual information was used to quantitatively assess the shared situational awareness between the pilot flying and the pilot monitoring. Finally, this research proposes a human-centred approach to identifying potentially hazardous weather conditions from EEG power spectral densities with Bayesian neural networks (BNNs). The classification model outperformed baseline algorithms with an accuracy of 66.5%, an F1 score of 61.4%, and an area under the ROC curve of 0.749. Using explainable AI in the form of Shapley Additive Explanations (SHAP) values, the exploration of latent mental patterns in the BNN model yields new insights into the key physiological indicators of pilots' responses to different scenarios. In the long term, the model can inform decisions on whether automation and decision-making aids should be provided to pilots.
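To make the final modelling step more concrete, the following is a minimal Python sketch of a BNN-style classifier on EEG power-spectral-density features, using Monte Carlo dropout as a stand-in Bayesian approximation and reporting the same three metrics. The synthetic features, network size, and the choice of MC dropout are illustrative assumptions; they do not reproduce the paper's actual feature extraction, architecture, or SHAP analysis.

```python
# Sketch: MC-dropout "Bayesian" neural network classifying visibility
# condition (CAVOK vs. CAT IIIA) from EEG power spectral density features.
# All data are synthetic; feature dimensionality and hyperparameters are
# illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

rng = np.random.default_rng(0)
n, d = 600, 20                               # trials x PSD features (e.g., band powers per channel)
X = rng.normal(size=(n, d)).astype(np.float32)
y = (X[:, :5].mean(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

class MCDropoutNet(nn.Module):
    """Small MLP whose dropout layers stay active at test time (MC dropout)."""
    def __init__(self, d_in, p=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, 32), nn.ReLU(), nn.Dropout(p),
            nn.Linear(32, 1),
        )
    def forward(self, x):
        return self.net(x).squeeze(-1)

model = MCDropoutNet(d)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
X_tr_t = torch.from_numpy(X_tr)
y_tr_t = torch.from_numpy(y_tr.astype(np.float32))
for _ in range(200):                          # simple full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(X_tr_t), y_tr_t)
    loss.backward()
    opt.step()

model.train()                                 # keep dropout active to sample predictions
with torch.no_grad():
    X_te_t = torch.from_numpy(X_te)
    probs = torch.stack([torch.sigmoid(model(X_te_t))
                         for _ in range(50)]).mean(0).numpy()
pred = (probs > 0.5).astype(int)
print("accuracy:", accuracy_score(y_te, pred))
print("F1:", f1_score(y_te, pred))
print("ROC AUC:", roc_auc_score(y_te, probs))
```

Averaging over repeated stochastic forward passes gives both a predicted probability and, if desired, a per-trial spread that can serve as an uncertainty estimate.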

Research project funded by the Early Career Scheme (ECS) of the Research Grants Council

The past few years have witnessed the rapid development of artificial intelligence (AI) and machine learning (ML) in solving long-standing problems. AI/ML has played an indispensable role in profoundly transforming business, transportation, and finance. However, the adoption of AI/ML in risk-sensitive areas is still in its infancy because AI/ML systems exhibit fundamental limits and practical shortcomings: the field lacks a rigorous framework for reasoning about risk, uncertainty, and their potentially catastrophic outcomes, while safety and quality are top priorities across a broad array of high-stakes applications ranging from medical diagnosis to aerospace systems. Since the consequences of AI/ML system failures in life- and safety-critical applications can be disastrous, it is becoming abundantly clear that reasoning about uncertainty and risk will be a cornerstone of adopting AI/ML systems in large-scale risk-sensitive applications. Because ML systems face varied risks arising from both the input data (e.g., data bias, dataset shift) and the ML model (e.g., model bias, model misspecification), the goal of this project is to develop a systematic risk-aware ML framework consisting of a series of control checkpoints that safeguard ML systems against potential risks and increase the credibility of adopting ML systems in critical applications. Towards this goal, the project consists of three coherent tasks: (1) develop an integrated approach for data quality monitoring that combines a feature-based anomaly detection technique with an outcome-based uncertainty measure; this integrated approach will produce a composite probabilistic risk indicator that reveals input data quality; (2) develop a two-stage ML-based framework to estimate model reliability for individual predictions (MRIP); MRIP characterizes the probability that the difference between the model prediction and the actual value stays within a small interval while the model input varies within a small prescribed range, thereby providing an individualized estimate of prediction reliability for each input x; and (3) develop an ML model to learn system-level risk, mapping the data-level and model-level risk indicators derived in the first two tasks to a risk measure at the system level. The proposed effort has profound practical implications: the risk-aware framework will act as an effective safety barrier that prevents ML models from making over-confident predictions on cases that are too noisy, anomalous, outside the domain of the trained model, or of low prediction reliability, thus facilitating the safe and reliable adoption of ML systems in critical applications.
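As an illustration of the first task, the minimal sketch below combines a feature-based anomaly score (an isolation forest) with an outcome-based uncertainty measure (the spread of predictions across a small bootstrap ensemble) into a single composite risk indicator. The specific detectors, the min-max rescaling, and the equal weighting are assumptions made for illustration, not the project's prescribed design.

```python
# Sketch of a composite data-quality risk indicator: a feature-based anomaly
# score is fused with an outcome-based uncertainty measure. All data and the
# 0.5 review threshold are synthetic/illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestRegressor

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 8))
y_train = X_train @ rng.normal(size=8) + 0.1 * rng.normal(size=500)
X_new = np.vstack([rng.normal(size=(20, 8)),           # in-distribution inputs
                   rng.normal(loc=4.0, size=(5, 8))])  # anomalous inputs

# Feature-based anomaly score: higher = more anomalous.
iso = IsolationForest(random_state=0).fit(X_train)
anomaly = -iso.score_samples(X_new)

# Outcome-based uncertainty: prediction spread across a bootstrap ensemble.
preds = []
for seed in range(10):
    idx = rng.integers(0, len(X_train), len(X_train))
    m = RandomForestRegressor(n_estimators=50, random_state=seed)
    m.fit(X_train[idx], y_train[idx])
    preds.append(m.predict(X_new))
uncertainty = np.std(preds, axis=0)

# Rescale each signal to [0, 1] and average into a composite risk indicator.
def rescale(v):
    return (v - v.min()) / (v.max() - v.min() + 1e-12)

risk = 0.5 * rescale(anomaly) + 0.5 * rescale(uncertainty)
for i, r in enumerate(risk):
    flag = "REVIEW" if r > 0.5 else "ok"
    print(f"input {i:2d}: risk={r:.2f} [{flag}]")
```

Inputs that score high on either signal are routed to review rather than being passed silently to the downstream model, which is the "control checkpoint" role the project assigns to this indicator.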

Research paper accepted by Decision Support Systems

The adoption of artificial intelligence (AI) and machine learning (ML) in risk-sensitive environments is still in its infancy because the field lacks a systematic framework for reasoning about risk, uncertainty, and their potentially catastrophic consequences. In high-impact applications, inference on risk and uncertainty will be decisive in the adoption of AI/ML systems. To this end, there is a pressing need for a consolidated understanding of the varied risks arising from AI/ML systems and of how these risks and their side effects emerge and unfold in practice. In this paper, we provide a systematic and comprehensive overview of a broad array of inherent risks that can arise in AI/ML systems. These risks are grouped into two categories: data-level risks (e.g., data bias, dataset shift, out-of-domain data, and adversarial attacks) and model-level risks (e.g., model bias, misspecification, and uncertainty). In addition, we highlight the research needed to develop a holistic risk-management framework dedicated to AI/ML systems that hedges the corresponding risks. Furthermore, we outline several research challenges and opportunities along the path towards risk-aware AI/ML systems. Our research has the potential to significantly increase the credibility of deploying AI/ML models in high-stakes decision settings, facilitating safety assurance and guarding systems against unintended consequences.
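As a small illustration of one data-level risk in this taxonomy, the sketch below flags a suspected covariate (dataset) shift by running per-feature two-sample Kolmogorov-Smirnov tests between a training batch and a deployment batch. The injected shift and the significance threshold are illustrative assumptions, not a method proposed in the paper.

```python
# Sketch: flagging covariate (dataset) shift between training and deployment
# data with per-feature two-sample Kolmogorov-Smirnov tests.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
X_train = rng.normal(size=(1000, 4))
X_deploy = rng.normal(size=(300, 4))
X_deploy[:, 2] += 0.8          # inject a shift in one feature for illustration

for j in range(X_train.shape[1]):
    stat, p = ks_2samp(X_train[:, j], X_deploy[:, j])
    status = "SHIFT suspected" if p < 0.01 else "stable"
    print(f"feature {j}: KS={stat:.3f}, p={p:.3g} -> {status}")
```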