Research paper accepted by IEEE Transactions on Automation Science and Engineering

The demand for disruption-free fault diagnosis of mechanical equipment under constantly changing operating environments poses a great challenge to the deployment of data-driven diagnosis models in practice. Existing continual learning-based diagnosis models require a large number of labeled samples for training to adapt to new diagnostic tasks, and they fail to account for the diagnosis of heterogeneous fault types across different machines. In this paper, we use a representative class of mechanical equipment, rotating machinery, as an example and develop an uncertainty-aware continual learning framework (UACLF) to provide a unified interface for fault diagnosis of rotating machinery under various dynamic scenarios: the class-continual scenario, the domain-continual scenario, and their combination. The proposed UACLF takes a three-step approach to tackle fault diagnosis of rotating machinery with homogeneous-heterogeneous faults under dynamic environments. First, an inter-class classification loss function and an intra-class discrimination loss function are devised to extract informative feature representations from the raw vibration signal for fault classification. Second, an uncertainty-aware pseudo-labeling mechanism is developed to select unlabeled fault samples to which pseudo labels can be confidently assigned, thus expanding the training samples for faults arising in the new environment. Third, an adaptive prototypical feedback mechanism is used to sharpen the decision boundary of fault classification and reduce the model's misclassification rate. Experimental results on three datasets suggest that the proposed UACLF outperforms several alternatives in the literature on fault diagnosis of rotating machinery across various working conditions and different machines.
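The uncertainty-aware pseudo-labeling step lends itself to a short illustration. The sketch below is not the paper's implementation: it uses predictive entropy as the uncertainty measure and retains only those unlabeled samples whose entropy falls below a threshold; the model interface, tensor shapes, and threshold value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_pseudo_labels(model, unlabeled_x, entropy_threshold=0.1):
    """Assign pseudo labels only to samples the model is confident about."""
    logits = model(unlabeled_x)                 # shape: (N, num_classes)
    probs = F.softmax(logits, dim=1)
    # Predictive entropy as the uncertainty score (lower = more confident).
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)
    confident = entropy < entropy_threshold
    # Confident samples receive the argmax class as their pseudo label and
    # can be added to the training pool for faults in the new environment.
    return unlabeled_x[confident], probs.argmax(dim=1)[confident]
```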

Prof. Tong Wang gave a talk on “Using Advanced LLMs to Enhance Smaller LLMs: An Interpretable Knowledge Distillation Approach”

Large language models (LLMs) like GPT-4 or Llama 3 provide superior performance in complex human-like interactions. However, they are costly, too large for edge devices such as smartphones, and harder to self-host, which raises security and privacy concerns. This paper introduces a novel interpretable knowledge distillation approach to enhance the performance of smaller, more economical LLMs that firms can self-host. We study this problem in the context of building a customer service agent aimed at achieving high customer satisfaction through goal-oriented dialogues. Unlike traditional knowledge distillation, where the “student” model learns directly from the “teacher” model’s responses via fine-tuning, our interpretable “strategy” teaching approach involves the teacher providing strategies to improve the student’s performance in various scenarios. This method alternates between a “scenario generation” step and a “strategies for improvement” step, creating a customized library of scenarios and optimized strategies for automated prompting. The method requires only black-box access to both the student and teacher models; hence, it can be used without manipulating model parameters. In our customer service application, the method improves performance, and the learned strategies are transferable to other LLMs and to scenarios beyond the training set. The method’s interpretability helps safeguard against potential harms through human audit.
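The alternating loop at the heart of the approach can be sketched as follows. The snippet treats both models as black boxes behind user-supplied callables, in line with the black-box access requirement; the prompts, the library format, and all function names are illustrative assumptions rather than the paper's actual implementation.

```python
def distill_strategies(call_teacher, call_student, seed_scenarios, num_rounds=3):
    """call_teacher / call_student wrap black-box LLM endpoints (prompt -> text)."""
    library = {}  # scenario -> optimized strategy, later used for automated prompting
    scenarios = list(seed_scenarios)
    for _ in range(num_rounds):
        # "Scenario generation" step: the teacher proposes new scenarios
        # beyond those already covered.
        new = call_teacher(
            "Propose challenging customer-service scenarios, one per line, "
            "distinct from the following:\n" + "\n".join(scenarios)
        )
        scenarios.extend(s for s in new.splitlines() if s.strip())
        # "Strategies for improvement" step: the teacher critiques the
        # student's response and distills a reusable strategy, rather than
        # supplying a verbatim answer for the student to imitate.
        for scenario in scenarios:
            reply = call_student(f"Customer: {scenario}\nAgent:")
            library[scenario] = call_teacher(
                f"Scenario: {scenario}\nStudent reply: {reply}\n"
                "State a concise, general strategy that would improve the "
                "student's handling of scenarios like this one."
            )
    return library
```

Because the output is a human-readable library of strategies rather than updated weights, it can be audited directly, which is what makes the approach interpretable and transferable across student models.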

Dr. Xiaoge Zhang delivered a talk on “Reliability Engineering in the Era of AI: An Uncertainty Quantification-Based Framework” at the National University of Singapore, Singapore

Establishing trustworthiness is fundamental for the responsible utilization of medical artificial intelligence (AI), particularly in cancer diagnostics, where misdiagnosis can lead to devastating consequences. However, there is currently a lack of systematic approaches to resolve the reliability challenges stemming from model limitations and the unpredictable variability of the application domain. In this work, we address trustworthiness from two complementary aspects, data trustworthiness and model trustworthiness, in the task of subtyping non-small cell lung cancers using whole slide images. We introduce TRUECAM, a framework that provides trustworthiness-focused, uncertainty-aware, end-to-end cancer diagnosis with model-agnostic capabilities by leveraging a spectral-normalized neural Gaussian process (SNGP) and conformal prediction (CP) to simultaneously ensure data and model trustworthiness. Specifically, SNGP enables the identification of inputs beyond the scope of the trained model, while CP offers a statistical guarantee that a model's prediction set contains the correct classification. Systematic experiments performed on both internal and external cancer cohorts, utilizing a widely adopted specialized model and two foundation models, indicate that TRUECAM achieves significant improvements in classification accuracy, robustness, fairness, and data efficiency (i.e., selectively identifying and utilizing only informative tiles for classification). These findings highlight TRUECAM as a general wrapper framework around medical AI models of different sizes, architectures, purposes, and complexities, enabling their responsible use.
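The conformal prediction component admits a compact illustration. The sketch below implements standard split conformal prediction for classification, treating the (SNGP-equipped) classifier as a black box that outputs softmax probabilities; the variable names and nonconformity score are conventional choices, not necessarily those used in TRUECAM.

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """cal_probs: (n, k) softmax outputs; cal_labels: (n,) true class indices."""
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability given to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile so that prediction sets cover the
    # true label with probability >= 1 - alpha.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def prediction_set(test_probs, q):
    """Classes whose nonconformity score falls below the calibrated threshold."""
    return np.where(1.0 - test_probs <= q)[0]
```

A singleton prediction set signals a confident, statistically covered classification, while a large set (or an SNGP out-of-scope flag) can route the case to a human expert.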