Prof. Tong Wang gave a talk on “Using Advanced LLMs to Enhance Smaller LLMs: An Interpretable Knowledge Distillation Approach”
Large language models (LLMs) like GPT-4 or Llama 3 provide superior performance in complex, human-like interactions. However, they are costly, often too large for edge devices such as smartphones, and harder to self-host, which raises security and privacy concerns. This paper introduces a novel interpretable knowledge distillation approach to enhance the performance of smaller, more economical LLMs that firms can self-host. We study this problem in the context of building a customer service agent aimed at achieving high customer satisfaction through goal-oriented dialogues. Unlike traditional knowledge distillation, where the “student” model learns directly from the “teacher” model’s responses via fine-tuning, our interpretable “strategy” teaching approach has the teacher provide strategies that improve the student’s performance in various scenarios. This method alternates between a “scenario generation” step and a “strategies for improvement” step, creating a customized library of scenarios and optimized strategies for automated prompting. The method requires only black-box access to both the student and teacher models, so it can be used without manipulating model parameters. In our customer service application, the method improves performance, and the learned strategies transfer to other LLMs and to scenarios beyond the training set. The method’s interpretability helps safeguard against potential harms through human audit.
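The abstract describes the method only at a high level. A minimal sketch of the alternating loop, assuming a generic black-box chat API, might look like the following; the function names (call_llm, distill_strategies) and prompt wording are illustrative assumptions, not the authors’ code.

```python
# Sketch of the "strategy teaching" loop described in the abstract.
# call_llm() is a placeholder for any black-box chat-completion API;
# prompts, scoring, and names are illustrative assumptions only.

def call_llm(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to a black-box model and return its reply."""
    raise NotImplementedError("plug in your own API client here")


def distill_strategies(teacher: str, student: str, n_rounds: int = 10) -> list[str]:
    strategy_library: list[str] = []
    for _ in range(n_rounds):
        # 1. Scenario generation: the teacher proposes a customer-service scenario.
        scenario = call_llm(teacher, "Generate a challenging customer-service scenario.")

        # 2. The student answers, prompted with the strategies learned so far.
        student_prompt = (
            "Strategies:\n" + "\n".join(strategy_library)
            + f"\n\nScenario:\n{scenario}\nRespond to the customer."
        )
        answer = call_llm(student, student_prompt)

        # 3. Strategies for improvement: the teacher critiques the student's reply
        #    and proposes a reusable, human-readable strategy.
        strategy = call_llm(
            teacher,
            f"Scenario:\n{scenario}\nStudent reply:\n{answer}\n"
            "Suggest one concise, general strategy that would improve this reply.",
        )
        strategy_library.append(strategy)
    return strategy_library
```

Because the learned strategies live only in the prompt, the same library can in principle be reused with a different student model, which is consistent with the transferability and human-auditability the abstract reports.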
Prof. Lei Ma gave a talk on “Towards Building the Trust of Complex AI Systems in the LLM Era”
In recent years, deep learning-enabled systems have made remarkable progress, powering a surge in advanced intelligent applications. This growth and its real-world impact have been further amplified by the advent of large foundation models (e.g., LLMs, Stable Diffusion). Yet the rapid evolution of these AI systems often proceeds without comprehensive quality assurance and engineering support. This gap is evident in the integration of standards for quality, reliability, and safety assurance, as well as in the need for mature toolchain support that provides systematic and explainable feedback throughout the development lifecycle. In this talk, I will present a high-level overview of our team’s ongoing initiatives to lay the groundwork for Trustworthy Assurance of AI Systems and its industrial applications, including (1) AI software testing and analysis, (2) our latest trustworthiness assurance efforts for AI-driven cyber-physical systems, with an emphasis on the sim2real transition, and (3) risk and safety assessment for large foundation models, including large language models and vision transformers.
Prof. Bart Baesens gave a talk on “Using AI for Fraud Detection: Recent Research Insights and Emerging Opportunities”
Typically, organizations lose around five percent of their revenue to fraud. In this presentation, we explore advanced AI techniques to address this issue. Drawing on our recent research, we begin by examining cost-sensitive fraud detection methods, such as CS-Logit, which integrates the economic imbalances inherent in fraud detection into the optimization of AI models. We then move on to data engineering strategies that enhance the predictive capabilities of both the data and the AI models through intelligent instance and feature engineering. We also delve into network data, showcasing our innovative research methods, such as Gotcha and CATCHM, for effective data featurization. A significant focus is placed on Explainable AI (XAI), which demystifies high-performance AI models used in fraud detection, aiding in the development of effective fraud prevention strategies. We provide practical examples from various sectors, including credit card fraud, anti-money laundering, insurance fraud, tax evasion, and payment transaction fraud. Furthermore, we discuss the overarching issue of model risk, which encompasses everything from data input to AI model deployment. Throughout the presentation, the speaker will discuss his recent research, conducted in partnership with leading global financial institutions such as BNP Paribas Fortis, Allianz, ING, and Ageas.
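The abstract does not spell out CS-Logit’s exact objective, but the general idea of cost-sensitive logistic regression, weighting each transaction by its own misclassification cost (for example, the transaction amount for a missed fraud versus a fixed investigation fee for a false alarm), can be sketched as follows. The cost values, synthetic data, and optimizer choice here are illustrative assumptions, not the published method.

```python
# Sketch of instance-dependent cost-sensitive logistic regression for fraud detection.
import numpy as np
from scipy.optimize import minimize

def expected_cost(w, X, y, c_fn, c_fp, c_tp=0.0, c_tn=0.0):
    """Average expected misclassification cost of a logistic model.

    c_fn: per-instance cost of missing a fraud (e.g., the transaction amount)
    c_fp: per-instance cost of a false alarm (e.g., a fixed investigation fee)
    """
    p = 1.0 / (1.0 + np.exp(-X @ w))  # predicted fraud probability
    cost = y * (p * c_tp + (1 - p) * c_fn) + (1 - y) * (p * c_fp + (1 - p) * c_tn)
    return cost.mean()

# Synthetic, highly imbalanced data purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.05).astype(float)   # ~5% fraud cases
amounts = rng.exponential(200.0, size=1000)   # cost of a missed fraud
c_fp = np.full(1000, 10.0)                    # cost of investigating a legitimate case

# Minimize expected cost instead of the standard log-loss.
res = minimize(expected_cost, x0=np.zeros(5), args=(X, y, amounts, c_fp))
print("cost-sensitive coefficients:", res.x)
```

Optimizing the expected cost directly, rather than accuracy or log-loss, is what lets the model reflect the economic imbalance the abstract refers to: a missed large-value fraud is penalized far more heavily than an unnecessary investigation.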