Prof. Bart Baesens gave a talk on “Using AI for Fraud Detection: Recent Research Insights and Emerging Opportunities”

Typically, organizations lose around five percent of their revenue to fraud. In this presentation, we explore advanced AI techniques to address this issue. Drawing on our recent research, we begin by examining cost-sensitive fraud detection methods, such as CS-Logit, which integrates the economic imbalances inherent in fraud detection into the optimization of AI models. We then move on to data engineering strategies that enhance the predictive capabilities of both the data and AI models through intelligent instance and feature engineering. We also delve into network data, showcasing our innovative research methods like Gotcha and CATCHM for effective data featurization. A significant focus is placed on Explainable AI (XAI), which demystifies high-performance AI models used in fraud detection, aiding in the development of effective fraud prevention strategies. We provide practical examples from various sectors, including credit card fraud, anti-money laundering, insurance fraud, tax evasion, and payment transaction fraud. Furthermore, we discuss the overarching issue of model risk, which encompasses everything from data input to AI model deployment. Throughout the presentation, we discuss our recent research, conducted in partnership with leading global financial institutions such as BNP Paribas Fortis, Allianz, ING, and Ageas.
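The cost-sensitive idea behind methods like CS-Logit can be illustrated with a minimal sketch: instead of the standard log-loss, the model minimizes the expected misclassification cost, where each instance carries its own cost of a missed fraud (false negative) versus a false alarm (false positive). The NumPy code below is our own illustrative example-dependent cost-sensitive logistic regression, not the actual CS-Logit implementation; the function names and the simple gradient-descent fit are assumptions for the sketch.

```python
import numpy as np

def cost_sensitive_logistic_loss(w, X, y, cost_fn, cost_fp):
    """Expected misclassification cost of a logistic model.

    cost_fn: per-instance cost of missing a fraud case (false negative)
    cost_fp: per-instance cost of flagging a legitimate case (false positive)
    """
    p = 1.0 / (1.0 + np.exp(-X @ w))  # predicted fraud probability
    # Expected cost: miss a fraud with prob (1-p), false-alarm with prob p
    loss = y * (1 - p) * cost_fn + (1 - y) * p * cost_fp
    return loss.mean()

def fit(X, y, cost_fn, cost_fp, lr=0.1, steps=500):
    """Minimize the expected-cost objective by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        # d(loss)/dz = p(1-p) * ((1-y)*cost_fp - y*cost_fn), z = X @ w
        grad = X.T @ (p * (1 - p) * ((1 - y) * cost_fp - y * cost_fn)) / len(y)
        w -= lr * grad
    return w
```

Because each instance can carry its own costs (e.g., proportional to the transaction amount), the fitted model shifts its decision boundary toward catching the economically important fraud cases rather than simply maximizing accuracy.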

Welcoming two new PhD students to the group!

The group for risk, reliability, and resilience informatics warmly welcomes the following PhD students to the team:

Tao Wang and Xinru Zhang.

Research project funded by the Young Scientists Fund of the National Natural Science Foundation of China

The malfunction of deep learning systems in safety-critical applications (e.g., aerospace) can lead to devastating outcomes. Ensuring the trustworthiness of deep learning systems in high-stakes decision settings is therefore an imperative problem. This project aims to devise an uncertainty quantification-based method for modeling the trustworthiness of deep learning systems by jointly accounting for heterogeneous sources of risk in the input data encountered in the open world, the multi-source uncertainty inherent in model reliability for each individual prediction, and the coupled relationship between input data-related risk and per-prediction model reliability. We believe that the proposed effort will make substantial contributions to the development of novel and effective theories and models for enhancing the trustworthiness of deep learning systems, offer new insights into trustworthiness modeling in open environments, and boost the advancement of trustworthy deep learning systems.
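A widely used building block for this kind of per-prediction uncertainty quantification is Monte Carlo dropout: keeping dropout active at test time, running several stochastic forward passes, and using the spread of the outputs as an uncertainty signal. The NumPy sketch below is our own illustrative example of that general technique, not the method developed in this project; the network shape and function name are assumptions.

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, drop_p=0.5, n_samples=100, rng=None):
    """Monte Carlo dropout on a one-hidden-layer network.

    Runs n_samples stochastic forward passes with dropout enabled and
    returns the mean prediction and its standard deviation, which serves
    as a per-prediction uncertainty estimate.
    """
    rng = rng or np.random.default_rng(0)
    outs = []
    for _ in range(n_samples):
        h = np.maximum(0.0, x @ W1)           # ReLU hidden layer
        mask = rng.random(h.shape) > drop_p   # fresh dropout mask each pass
        h = h * mask / (1.0 - drop_p)         # inverted-dropout scaling
        outs.append(h @ W2)
    outs = np.array(outs)
    return outs.mean(axis=0), outs.std(axis=0)
```

Inputs that draw large predictive standard deviations can then be flagged as unreliable, which is one concrete way an uncertainty-aware system can decline to act on individual predictions it cannot trust.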