Research paper accepted by IEEE Transactions on Instrumentation and Measurement

Deep learning has achieved remarkable success in bearing fault diagnosis. However, this success comes at the cost of larger models and more complex computations, which cannot be transferred to industrial settings that require models with high speed, strong portability, and low power consumption. In this paper, we propose a lightweight and deployable model for bearing fault diagnosis, referred to as BearingPGA-Net, to address these challenges. Firstly, aided by a well-trained large model, we train BearingPGA-Net via decoupled knowledge distillation. Despite its small size, our model demonstrates excellent fault diagnosis performance compared with other lightweight state-of-the-art methods. Secondly, we design an FPGA acceleration scheme for BearingPGA-Net using Verilog, which applies customized quantization and designs programmable logic for each layer of BearingPGA-Net on the FPGA, with an emphasis on parallel computing and module reuse to increase computational speed. To the best of our knowledge, this is the first deployment of a CNN-based bearing fault diagnosis model on an FPGA. Experimental results show that our deployment scheme diagnoses faults more than 200 times faster than a CPU while incurring a performance drop of less than 0.4% in F1, Recall, and Precision on our independently collected bearing dataset. Our code is available at https://github.com/asdvfghg/BearingPGA-Net.
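For readers who want a concrete picture of the training and deployment pipeline, the sketch below shows a decoupled knowledge distillation loss in the style of Zhao et al. (2022), which splits the classical KD term into a target-class part and a non-target-class part, followed by a fixed-point rounding helper of the kind used before FPGA deployment. This is a minimal illustration: the weights alpha and beta, the temperature T, and the bit widths are placeholders, not the values used in the paper.

```python
import torch
import torch.nn.functional as F

def dkd_loss(student_logits, teacher_logits, target, alpha=1.0, beta=8.0, T=4.0):
    """Decoupled knowledge distillation: TCKD + NCKD (illustrative hyperparameters)."""
    gt = F.one_hot(target, student_logits.size(1)).float()  # (B, C) one-hot target mask

    p_s = F.softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)

    # Target-class KD: KL divergence over the binary (target vs. all others) split.
    p_s_bin = torch.stack([(p_s * gt).sum(1), (p_s * (1 - gt)).sum(1)], dim=1)
    p_t_bin = torch.stack([(p_t * gt).sum(1), (p_t * (1 - gt)).sum(1)], dim=1)
    tckd = F.kl_div(p_s_bin.clamp_min(1e-8).log(), p_t_bin, reduction="batchmean")

    # Non-target-class KD: KL divergence over classes re-normalized without the target.
    log_p_s_nt = F.log_softmax(student_logits / T - 1000.0 * gt, dim=1)
    p_t_nt = F.softmax(teacher_logits / T - 1000.0 * gt, dim=1)
    nckd = F.kl_div(log_p_s_nt, p_t_nt, reduction="batchmean")

    return (alpha * tckd + beta * nckd) * T ** 2

def to_fixed_point(w, frac_bits=13, total_bits=16):
    """Round a weight tensor to signed fixed-point, as done before FPGA deployment."""
    scale = 2.0 ** frac_bits
    q_min, q_max = -2 ** (total_bits - 1), 2 ** (total_bits - 1) - 1
    return torch.clamp(torch.round(w * scale), q_min, q_max) / scale
```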

Research paper accepted by IEEE Transactions on Reliability

As safety is the top priority in mission-critical engineering applications, uncertainty quantification is a linchpin for the successful deployment of AI models in these high-stakes domains. In this paper, we seamlessly encode a simple and principled uncertainty quantification module, the Spectral-normalized Neural Gaussian Process (SNGP), into GoogLeNet to detect various defects in steel wire ropes (SWRs) accurately and reliably. The developed methodology consists of three coherent steps. In the first step, raw Magnetic Flux Leakage (MFL) waveform signals from normal and defective SWRs, where defect severity is manifested in the number of broken wires, are collected via a dedicated experimental setup. Next, the proposed approach uses the Gramian Angular Field to represent the 1-D MFL time series as 2-D images while preserving key spatial and temporal structures in the data. Thirdly, built atop the GoogLeNet backbone, we systematically integrate SNGP by adding spectral normalization (SN) layers to normalize the weights and by replacing the output layers with a Gaussian process (GP) in both the main network and the auxiliary classifiers of GoogLeNet, where SN preserves distances under the data transformation and the GP makes the output layer of the neural network distance-aware when assigning uncertainty. Comprehensive comparisons with state-of-the-art models highlight the advantages of the developed methodology in classifying SWR defects and identifying out-of-distribution (OOD) SWR instances. In addition, a thorough ablation study quantitatively illustrates the significant roles played by SN and the GP in the principledness of the estimated uncertainty when detecting SWR instances with varying OODness.
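The Gramian Angular Field step admits a compact sketch: each rescaled sample is mapped to a polar angle, and the image is built from pairwise trigonometric sums (or differences) of those angles. The abstract does not say whether the summation or difference variant is used, so both are shown as options.

```python
import numpy as np

def gramian_angular_field(x, method="summation"):
    """Encode a 1-D signal (e.g., an MFL waveform) as a 2-D GAF image."""
    x = np.asarray(x, dtype=float)
    # Rescale to [-1, 1] so that arccos is well defined.
    x_scaled = 2.0 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1.0
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))  # polar angle per time step

    if method == "summation":                   # GASF: cos(phi_i + phi_j)
        return np.cos(phi[:, None] + phi[None, :])
    return np.sin(phi[:, None] - phi[None, :])  # GADF: sin(phi_i - phi_j)
```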
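Likewise, the SNGP recipe reduces to two moves: wrap the hidden layers in spectral normalization and swap the dense classification head for a random-feature Gaussian process. The sketch below, assuming an RBF kernel and PyTorch, omits the Laplace-approximation covariance update that SNGP uses at test time to turn feature-space distances into predictive variance; layer sizes and the class count are placeholders.

```python
import math
import torch
import torch.nn as nn

class RandomFeatureGP(nn.Module):
    """Random-Fourier-feature approximation of a GP output layer (SNGP-style head)."""
    def __init__(self, in_dim, num_classes, num_features=1024):
        super().__init__()
        # Frozen random projection approximates an RBF-kernel GP prior.
        self.register_buffer("W", torch.randn(num_features, in_dim))
        self.register_buffer("b", 2 * math.pi * torch.rand(num_features))
        self.out = nn.Linear(num_features, num_classes, bias=False)  # learnable GP weights

    def forward(self, h):
        phi = math.sqrt(2.0 / self.W.size(0)) * torch.cos(h @ self.W.t() + self.b)
        return self.out(phi)

# Spectral normalization keeps hidden mappings approximately distance preserving;
# in the paper it wraps GoogLeNet's layers, shown here on a stand-in convolution.
conv = nn.utils.spectral_norm(nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3))
gp_head = RandomFeatureGP(in_dim=1024, num_classes=4)  # class count is a placeholder
```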

Prof. Stefan Feuerriegel gave a talk on “Learning policies for decision-making with causal machine learning: The case of development financing”

The Sustainable Development Goals (SDGs) of the United Nations provide a blueprint for a better future by “leaving no one behind”, and, to achieve the SDGs by 2030, poor countries require immense volumes of development aid. In this work, we develop a causal machine learning framework for estimating heterogeneous treatment effects of aid disbursements to inform optimal aid allocation. We demonstrate the effectiveness of our method using data on official development aid earmarked to end HIV/AIDS in 105 countries, amounting to more than USD 5.2 billion. First, we show that our method successfully computes heterogeneous treatment-response curves on semi-synthetic data. Then, using real-world HIV data, we find that the optimal aid allocation suggested by our method could reduce the total number of new HIV infections compared with current allocation practice. Our findings indicate the effectiveness of causal machine learning in informing cost-efficient allocations of development aid that maximize progress towards the SDGs.
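To make the estimated object concrete, the toy sketch below fits a plug-in outcome model over covariates and a continuous aid dose, then sweeps the dose for one unit to trace a treatment-response curve. It is deliberately naive: all names and data are synthetic, and this plug-in is confounded unless treatment assignment is adjusted for, which is precisely what the paper's causal machine learning framework addresses; the sketch only illustrates the shape of the estimand, not the estimator.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))      # country covariates (synthetic)
t = rng.uniform(0, 10, size=500)   # aid dose, e.g. USD millions (synthetic)
y = -0.5 * t + 0.1 * X[:, 0] * t + rng.normal(size=500)  # outcome, e.g. new infections

# Plug-in outcome model mu(x, t) fit on covariates plus the treatment dose.
mu = GradientBoostingRegressor().fit(np.column_stack([X, t]), y)

# Treatment-response curve for one unit: sweep the dose, hold covariates fixed.
doses = np.linspace(0, 10, 50)
curve = mu.predict(np.column_stack([np.tile(X[0], (50, 1)), doses]))
```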