Research project funded by the Early Career Scheme (ECS) of the Research Grants Council

The past few years have witnessed the rapid development of artificial intelligence (AI) and machine learning (ML) in solving long-standing problems. AI/ML has played an indispensable role in profoundly transforming business, transportation, and finance. However, the adoption of AI/ML in risk-sensitive areas is still in its infancy because AI/ML systems exhibit fundamental limits and practical shortcomings: the field lacks a rigorous framework for reasoning about risk, uncertainty, and their potentially catastrophic outcomes, while safety and quality are top priorities across a broad array of high-stakes applications ranging from medical diagnosis to aerospace systems. Since the consequences of AI/ML system failures in life- and safety-critical applications can be disastrous, reasoning about uncertainty and risk will become a cornerstone of adopting AI/ML systems in large-scale risk-sensitive applications. Because ML systems face varied risks arising from both the input data (e.g., data bias, dataset shift) and the ML model (e.g., model bias, model misspecification), the goal of this project is to develop a systematic risk-aware ML framework consisting of a series of control checkpoints that safeguard ML systems against potential risks and increase the credibility of adopting ML systems in critical applications.

Toward this goal, the project consists of three coherent tasks: (1) develop an integrated approach to data quality monitoring that combines a feature-based anomaly detection technique with an outcome-based uncertainty measure; this integrated approach will produce a composite probabilistic risk indicator that reveals input data quality; (2) develop a two-stage ML-based framework to estimate model reliability for individual prediction (MRIP); MRIP characterizes the probability that the observed difference between the model prediction and the actual value falls within a tiny interval while the model input varies within a small prescribed range, providing an individualized estimate of prediction reliability for each input x; and (3) develop an ML model to learn system-level risk by mapping the data-level and model-level risk indicators derived in the first two tasks to a risk measure at the system level.

The proposed effort has profound practical implications: the risk-aware framework will act as an effective safety barrier that prevents ML models from making over-confident predictions on cases that are too noisy, anomalous, outside the domain of the trained model, or of low prediction reliability, thus facilitating the safe and reliable adoption of ML systems in critical applications.
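As a rough illustration of the Task 1 idea only (not the project's actual implementation), the sketch below combines a feature-based anomaly score with an outcome-based uncertainty estimate into a single probabilistic risk indicator. The choice of IsolationForest for the anomaly score, the spread of a random forest's per-tree predictions as the uncertainty proxy, and the logistic combination are all assumptions made here for concreteness.

```python
# Hypothetical sketch of a composite data-quality risk indicator (Task 1).
# Assumptions: IsolationForest supplies the feature-based anomaly score,
# the spread of per-tree predictions serves as the outcome-based
# uncertainty, and a logistic squashing maps the result into [0, 1].
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestRegressor

def composite_risk_indicator(X_train, y_train, X_new):
    # Feature-based anomaly score (larger = more anomalous input).
    detector = IsolationForest(random_state=0).fit(X_train)
    anomaly_new = -detector.score_samples(X_new)
    anomaly_ref = -detector.score_samples(X_train)

    # Outcome-based uncertainty: standard deviation of per-tree predictions.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    spread = lambda X: np.stack([t.predict(X) for t in model.estimators_]).std(axis=0)
    unc_new, unc_ref = spread(X_new), spread(X_train)

    # Standardize both signals against the training data, average them,
    # and squash into a [0, 1] composite probabilistic risk indicator.
    z = lambda v, ref: (v - ref.mean()) / (ref.std() + 1e-12)
    combined = 0.5 * (z(anomaly_new, anomaly_ref) + z(unc_new, unc_ref))
    return 1.0 / (1.0 + np.exp(-combined))
```

A value near 1 would flag an input as anomalous or high-uncertainty, i.e., a case the downstream checkpoints should treat with caution, matching the "safety barrier" role described above.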
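Stated as a formula, the MRIP quantity in Task 2 can be read as follows, where the symbols \(\hat{f}\) (the trained model), \(y\) (the actual outcome), \(\varepsilon\) (the tiny error interval), and \(\delta\) (the prescribed input range) are notation introduced here for illustration rather than taken from the project:

\[
\mathrm{MRIP}(\mathbf{x}) \;=\; \Pr\!\left( \left|\hat{f}(\mathbf{x}') - y(\mathbf{x}')\right| \le \varepsilon \;\middle|\; \left\lVert \mathbf{x}' - \mathbf{x} \right\rVert \le \delta \right),
\]

where the probability is taken over the perturbed input \(\mathbf{x}'\) in the neighborhood of \(\mathbf{x}\) (and over any noise in the observed outcome).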

Research project funded by ARPA-E

The increasing role of renewable energy sources is challenging grid operations, which have traditionally relied on highly predictable load and generation. Future grid operations must balance generation costs against system-level risk, shifting from deterministic optimization to stochastic optimization and risk management. The Risk-Aware Market Clearing (RAMC) project will provide a blueprint for an end-to-end, data-driven approach in which risk is explicitly modeled, quantified, and optimized, striking a trade-off between cost and system-level risk. The RAMC project focuses on challenges arising from increased stochasticity in generation, load, flow interchanges with adjacent markets, and extreme weather. RAMC addresses these challenges through innovations in machine learning, sampling, and optimization. Starting with the risk quantification of each individual asset obtained from historical data, RAMC learns the correlations between the performance and risk of individual assets, optimizes the selection of asset bundles, and quantifies the system-level risk.
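As a toy illustration only (the actual RAMC methodology is not described here), the sketch below quantifies per-asset risk from historical forecast errors, estimates the correlation between assets, and aggregates both into a bundle-level risk figure. Taking the standard deviation of historical errors as the per-asset risk and aggregating through the empirical correlation matrix are illustrative assumptions, not RAMC's method.

```python
# Hypothetical sketch of asset-level risk aggregation in the spirit of RAMC.
# Assumptions: per-asset risk = std. dev. of historical forecast errors;
# bundle (system-level) risk is aggregated via the empirical correlations.
import numpy as np

def bundle_risk(historical_errors: np.ndarray, weights: np.ndarray) -> float:
    """historical_errors: shape (T, n_assets); weights: shape (n_assets,)."""
    sigma = historical_errors.std(axis=0)                 # per-asset risk
    corr = np.corrcoef(historical_errors, rowvar=False)   # asset correlations
    cov = np.outer(sigma, sigma) * corr                   # implied covariance
    return float(np.sqrt(weights @ cov @ weights))        # bundle-level risk

# Example: two strongly correlated wind assets bundled together carry more
# joint risk than their individual risks alone would suggest.
rng = np.random.default_rng(0)
shared = rng.normal(size=1000)
errors = np.column_stack([shared + 0.3 * rng.normal(size=1000),
                          shared + 0.3 * rng.normal(size=1000)])
print(bundle_risk(errors, weights=np.array([0.5, 0.5])))
```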