
Research paper accepted by European Journal of Operational Research

It is common for multiple firms, such as manufacturers, retailers, and third-party insurers, to coexist and compete in the aftermarket for durable products. In this paper, we study price competition in a partially concentrated aftermarket where one firm offers multiple extended warranty (EW) contracts while the others offer a single one. The demand for EWs is described by the multinomial logit model. We show that, at equilibrium, such an aftermarket behaves like a combination of monopoly and oligopoly. Building upon this base model, we further investigate sequential pricing games for a durable product and its EWs to accommodate the ancillary nature of after-sales services. We consider two scenarios: one where the manufacturer (as the market leader) sets product and EW prices \emph{simultaneously}, and another where these decisions are made \emph{sequentially}. Our analysis demonstrates that offering EWs incentivizes the manufacturer to lower the product price, thereby expanding the market potential for EWs. Simultaneous product-EW pricing leads to a price concession on EWs compared to sequential pricing, effectively reducing the intensity of competition in the aftermarket. Overall, the competitiveness of an EW hinges on its ability to deliver high value to consumers at low marginal cost to its provider. While our focus is on EWs, the proposed game-theoretical pricing models apply broadly to other ancillary after-sales services.
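The multinomial logit demand structure underlying the model can be illustrated with a small numerical sketch. The valuations, prices, and outside-option normalization below are hypothetical and only show how logit shares shift when one EW provider raises its price; the paper's exact parameterization may differ.

```python
import math

def mnl_shares(valuations, prices, outside_utility=0.0):
    """Multinomial-logit choice probabilities for EW contracts.

    Each contract i yields utility v_i - p_i; the no-purchase option
    yields `outside_utility`. Illustrative notation only, not the
    paper's exact parameterization.
    """
    utils = [v - p for v, p in zip(valuations, prices)]
    denom = math.exp(outside_utility) + sum(math.exp(u) for u in utils)
    return [math.exp(u) / denom for u in utils]

# Two symmetric competing EWs; provider 1 then raises its price by 0.5.
base = mnl_shares([3.0, 3.0], [1.0, 1.0])
hiked = mnl_shares([3.0, 3.0], [1.5, 1.0])
```

Raising one price diverts demand both to the rival EW and to the outside (no-purchase) option, which is why the shares never sum to one.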

Dr. Xiaoge Zhang delivered a talk on “Bayesian Deep Learning for Aircraft Hard Landing Safety Assessment” at East China Normal University, China

Landing is generally cited as one of the riskiest phases of flight, with a much higher accident rate than the other flight phases. In this talk, we focus on the hard landing problem, defined as a touchdown vertical speed exceeding a predefined threshold, and build a probabilistic deep learning model that forecasts the aircraft’s vertical speed at touchdown using DASHlink data. Previous studies have treated hard landing as a classification problem, in which the vertical speed is represented as a categorical variable based on a predefined threshold; instead, we predict the touchdown vertical speed directly and use probabilistic forecasting to quantify the uncertainty in the model’s predictions, supporting risk-informed decision-making. A Bayesian neural network approach is leveraged to build the predictive model. The overall methodology consists of five steps. First, a clustering method based on the minimum separation between different airports is developed to identify flights in the dataset that landed at the same airport. Second, since identifying the touchdown point itself is not straightforward, it is determined by comparing the vertical speed distributions derived from different candidate touchdown indicators. Third, a forward and backward filtering (filtfilt) approach is used to smooth the data without introducing phase lag. Fourth, a minimal-redundancy-maximal-relevance (mRMR) analysis is used to reduce the dimensionality of the input variables. Finally, a Bayesian recurrent neural network is trained to predict the touchdown vertical speed and quantify the uncertainty in the prediction. The model is validated on several flights from the test dataset, and computational results demonstrate the satisfactory performance of the proposed approach.
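The zero-phase smoothing step can be sketched with SciPy's `filtfilt`, which runs a filter forward and then backward so the phase shifts of the two passes cancel. The Butterworth order, cutoff, and synthetic signal below are placeholder choices for illustration, not the settings used on the DASHlink data.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Synthetic stand-in for a noisy vertical-speed trace during descent.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)
clean = np.sin(2 * np.pi * 0.3 * t)              # slow underlying trend
noisy = clean + 0.3 * rng.standard_normal(t.size)

# 4th-order low-pass Butterworth; Wn is a normalized cutoff frequency.
b, a = butter(N=4, Wn=0.05)
smooth = filtfilt(b, a, noisy)                   # forward + backward pass, zero phase lag
```

A single forward pass (`lfilter`) would delay the signal in time, which matters when the quantity of interest is the value at a specific instant such as touchdown; the forward-backward pass avoids that lag.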

Research paper accepted by IEEE Transactions on Automation Science and Engineering

The demand for disruption-free fault diagnosis of mechanical equipment under constantly changing operating environments poses a great challenge to the deployment of data-driven diagnosis models in practice. Existing continual learning-based diagnosis models require a large number of labeled samples to adapt to new diagnostic tasks and fail to account for the diagnosis of heterogeneous fault types across different machines. In this paper, we take a representative class of mechanical equipment, rotating machinery, as an example and develop an uncertainty-aware continual learning framework (UACLF) to provide a unified interface for fault diagnosis of rotating machinery under various dynamic scenarios: the class continual scenario, the domain continual scenario, and both. The proposed UACLF takes a three-step approach to tackle fault diagnosis of rotating machinery with homogeneous and heterogeneous faults under dynamic environments. First, an inter-class classification loss function and an intra-class discrimination loss function are devised to extract informative feature representations from the raw vibration signal for fault classification. Second, an uncertainty-aware pseudo-labeling mechanism is developed to select those unlabeled fault samples to which pseudo labels can be assigned confidently, thus expanding the training samples for faults arising in the new environment. Third, an adaptive prototypical feedback mechanism is used to sharpen the decision boundary of fault classification and reduce the model's misclassification rate. Experimental results on three datasets suggest that the proposed UACLF outperforms several alternatives in the literature on fault diagnosis of rotating machinery across various working conditions and different machines.
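The pseudo-labeling idea in the second step can be sketched generically: keep only the unlabeled samples on which the current model is confident, as measured here by predictive entropy. The entropy criterion and threshold below are illustrative assumptions; UACLF's actual uncertainty measure may differ.

```python
import numpy as np

def confident_pseudo_labels(probs, entropy_threshold=0.3):
    """Select unlabeled samples whose predictive entropy is low enough
    to pseudo-label confidently. A generic sketch of the idea, not
    UACLF's exact mechanism.

    probs: (n_samples, n_classes) softmax outputs of the current model.
    Returns (indices, pseudo_labels) for the confident subset.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    keep = np.flatnonzero(entropy < entropy_threshold)
    return keep, probs[keep].argmax(axis=1)

probs = np.array([
    [0.97, 0.02, 0.01],   # low entropy: accepted, pseudo-label 0
    [0.40, 0.35, 0.25],   # high entropy: discarded
    [0.02, 0.96, 0.02],   # low entropy: accepted, pseudo-label 1
])
idx, labels = confident_pseudo_labels(probs)
```

Accepted samples are then folded into the training set for the new environment, while uncertain ones are left out so that noisy labels do not corrupt adaptation.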

Prof. Tong Wang gave a talk on “Using Advanced LLMs to Enhance Smaller LLMs: An Interpretable Knowledge Distillation Approach”

Large language models (LLMs) like GPT-4 or Llama 3 provide superior performance in complex human-like interactions. But they are costly, too large for edge devices such as smartphones, and harder to self-host, which raises security and privacy concerns. This paper introduces a novel interpretable knowledge distillation approach to enhance the performance of smaller, more economical LLMs that firms can self-host. We study this problem in the context of building a customer service agent aimed at achieving high customer satisfaction through goal-oriented dialogues. Unlike traditional knowledge distillation, where the “student” model learns directly from the “teacher” model’s responses via fine-tuning, our interpretable “strategy” teaching approach has the teacher provide strategies that improve the student’s performance in various scenarios. The method alternates between a “scenario generation” step and a “strategies for improvement” step, creating a customized library of scenarios and optimized strategies for automated prompting. It requires only black-box access to both the student and teacher models, so it can be used without manipulating model parameters. In our customer service application, the method improves performance, and the learned strategies are transferable to other LLMs and to scenarios beyond the training set. The method’s interpretability also helps safeguard against potential harms through human audit.
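The alternating loop can be sketched with stubbed black-box calls. `student_respond` and `teacher_propose_strategy` are hypothetical stand-ins for the student and teacher LLM APIs (here trivial string functions), so the sketch only shows the control flow of building a scenario-to-strategy library, not real model behavior.

```python
def student_respond(scenario, strategy):
    # Stub for a black-box student LLM call: the taught strategy is
    # injected into the prompt, here just prepended to the reply.
    return f"[{strategy}] reply to: {scenario}"

def teacher_propose_strategy(scenario, student_reply):
    # Stub for a black-box teacher LLM call: return a natural-language
    # strategy for handling this scenario better.
    return f"acknowledge the issue in '{scenario}' before offering a fix"

def distill(scenarios, rounds=1):
    """Alternate between generating scenarios and asking the teacher
    for improvement strategies, accumulating a prompt library."""
    library = {}                                  # scenario -> strategy
    for _ in range(rounds):
        for s in scenarios:                       # "scenario generation" step
            reply = student_respond(s, library.get(s, "be helpful"))
            library[s] = teacher_propose_strategy(s, reply)  # "improvement" step
    return library

lib = distill(["billing complaint", "late delivery"])
```

Because the loop only reads model outputs, no gradients or weights are touched, which is what makes the approach usable with closed, hosted models.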

Dr. Xiaoge Zhang delivered a talk on “Reliability Engineering in the Era of AI: An Uncertainty Quantification-Based Framework” at National University of Singapore, Singapore

Establishing trustworthiness is fundamental for the responsible utilization of medical artificial intelligence (AI), particularly in cancer diagnostics, where misdiagnosis can lead to devastating consequences. However, there is currently a lack of systematic approaches to resolve the reliability challenges stemming from model limitations and the unpredictable variability in the application domain. In this work, we address trustworthiness from two complementary aspects, data trustworthiness and model trustworthiness, in the task of subtyping non-small cell lung cancers using whole slide images. We introduce TRUECAM, a framework that provides trustworthiness-focused, uncertainty-aware, end-to-end cancer diagnosis with model-agnostic capabilities by leveraging a spectral-normalized neural Gaussian process (SNGP) and conformal prediction (CP) to simultaneously ensure data and model trustworthiness. Specifically, SNGP enables the identification of inputs beyond the scope of the trained model, while CP offers a statistical guarantee that prediction sets contain the correct classification. Systematic experiments on both internal and external cancer cohorts, using a widely adopted specialized model and two foundation models, indicate that TRUECAM achieves significant improvements in classification accuracy, robustness, fairness, and data efficiency (i.e., selectively identifying and utilizing only informative tiles for classification). These results highlight TRUECAM as a general wrapper framework around medical AI models of different sizes, architectures, purposes, and complexities that enables their responsible use.
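The statistical guarantee that CP provides can be sketched with split conformal prediction for classification. This is the generic textbook procedure under simple assumptions (a held-out calibration set, exchangeable data, score = one minus the softmax probability of the true class), not TRUECAM's exact implementation.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification (generic sketch).
    With exchangeable data, the returned sets contain the true label
    with probability >= 1 - alpha on average."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]    # nonconformity scores
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    return [np.flatnonzero(1.0 - p <= q) for p in test_probs]

# Toy simulation of a reasonably informative 3-class "model".
rng = np.random.default_rng(0)
def simulate(n, k=3):
    y = rng.integers(0, k, n)
    logits = rng.normal(0.0, 1.0, (n, k))
    logits[np.arange(n), y] += 2.0                # true class gets a boost
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True), y

cal_probs, cal_y = simulate(200)
test_probs, test_y = simulate(100)
sets = conformal_sets(cal_probs, cal_y, test_probs, alpha=0.1)
coverage = np.mean([y in s for y, s in zip(test_y, sets)])
```

Note that the guarantee is marginal: the sets are valid on average over exchangeable draws, and uncertain inputs simply receive larger sets rather than a single forced label.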

Prof. Lei Ma gave a talk on “Towards Building the Trust of Complex AI Systems in the LLM Era”

In recent years, deep learning-enabled systems have made remarkable progress, powering a surge in advanced intelligent applications. This growth and its real-world impact have been further amplified by the advent of large foundation models (e.g., LLMs, Stable Diffusion). Yet, the rapid evolution of these AI systems often proceeds without comprehensive quality assurance and engineering support. This gap is evident in the integration of standards for quality, reliability, and safety assurance, as well as in the need for mature toolchain support that provides systematic and explainable feedback throughout the development lifecycle. In this talk, I will present a high-level overview of our team’s ongoing initiatives to lay the groundwork for Trustworthy Assurance of AI Systems and its industrial applications, including (1) AI software testing and analysis, (2) our latest trustworthiness assurance efforts for AI-driven cyber-physical systems, with an emphasis on the sim2real transition, and (3) risk and safety assessment for large foundation models, including large language models and vision transformers.

Prof. Bart Baesens gave a talk on “Using AI for Fraud Detection: Recent Research Insights and Emerging Opportunities”

Typically, organizations lose around five percent of their revenue to fraud. In this presentation, we explore advanced AI techniques to address this issue. Drawing on our recent research, we begin by examining cost-sensitive fraud detection methods, such as CS-Logit, which integrates the economic imbalances inherent in fraud detection into the optimization of AI models. We then move on to data engineering strategies that enhance the predictive capabilities of both the data and AI models through intelligent instance and feature engineering. We also delve into network data, showcasing our innovative research methods like Gotcha and CATCHM for effective data featurization. A significant focus is placed on Explainable AI (XAI), which demystifies high-performance AI models used in fraud detection, aiding in the development of effective fraud prevention strategies. We provide practical examples from various sectors, including credit card fraud, anti-money laundering, insurance fraud, tax evasion, and payment transaction fraud. Furthermore, we discuss the overarching issue of model risk, which encompasses everything from data input to AI model deployment. Throughout the presentation, the speaker will discuss his recent research, conducted in partnership with leading global financial institutions such as BNP Paribas Fortis, Allianz, ING, and Ageas.
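The cost-sensitive idea can be illustrated with a minimal sketch: weight each instance's contribution to the logistic loss by its misclassification cost, so missed frauds are penalized more than false alarms. This is a generic scheme in the spirit of CS-Logit, not the authors' exact formulation; the costs, data, and optimizer below are illustrative.

```python
import numpy as np

def cost_sensitive_logit(X, y, cost_fp=1.0, cost_fn=10.0, lr=0.1, epochs=500):
    """Logistic regression fit by gradient descent, with each instance
    weighted by its misclassification cost (generic sketch, not CS-Logit
    itself). A missed fraud (y=1) costs `cost_fn`; a false alarm costs
    `cost_fp`."""
    w = np.zeros(X.shape[1])
    sample_w = np.where(y == 1, cost_fn, cost_fp)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted fraud probability
        w -= lr * X.T @ (sample_w * (p - y)) / len(y)
    return w

# Toy imbalanced data: 200 legitimate transactions, 20 frauds.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-1.0, 1.0, 200), rng.normal(1.0, 1.0, 20)])
y = np.concatenate([np.zeros(200), np.ones(20)])
X = np.column_stack([np.ones_like(x), x])         # intercept + one feature

w_plain = cost_sensitive_logit(X, y, cost_fn=1.0)   # cost-insensitive baseline
w_cs = cost_sensitive_logit(X, y, cost_fn=10.0)     # frauds weighted 10x
recall = lambda w: float(((X @ w > 0) & (y == 1)).sum() / (y == 1).sum())
```

Up-weighting the rare fraud class shifts the decision boundary toward the legitimate class, trading extra false alarms for fewer missed frauds, which is the economically sensible trade when a missed fraud is far more expensive.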

Research project funded by the Young Scientists Fund of the National Natural Science Foundation of China

The malfunction of deep learning systems in safety-critical applications (e.g., aerospace) can lead to devastating outcomes. How to ensure the trustworthiness of deep learning systems in high-stakes decision settings is therefore an imperative problem. This project aims to devise an uncertainty quantification-based method for modeling the trustworthiness of deep learning systems, one that accounts for the heterogeneous sources of risk in the input data encountered in the open world, the multi-source uncertainty inherent in model reliability for individual predictions, and the coupling between input-data risk and prediction-level model reliability. We believe the proposed effort will make substantial contributions to the development of novel and effective theories and models for enhancing the trustworthiness of deep learning systems, offer new insights into trustworthiness modeling in open environments, and boost the advancement of trustworthy deep learning systems.