Prof. Olga Fink gave a talk on “Integrating Domain Knowledge and Physics in AI: Harnessing Inductive Bias for Advanced PHM Solutions”
In the field of prognostics and health management (PHM), the integration of machine learning has enabled the development of advanced predictive models that ensure the reliable and safe operation of complex assets. However, challenges such as sparse, noisy, and incomplete data necessitate incorporating prior knowledge and inductive bias to improve model generalization, interpretability, and robustness.
Inductive bias, defined as the set of assumptions embedded in machine learning models, plays a crucial role in guiding these models to generalize effectively from limited training data to real-world scenarios. In PHM applications, where physical laws and domain-specific knowledge are fundamental, the use of inductive bias can significantly enhance a model’s ability to predict system behavior under diverse operating conditions. By embedding physical principles into learning algorithms, inductive bias reduces the reliance on large datasets, ensures that model predictions are physically consistent, and enhances both the generalizability and interpretability of the models.
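To make the idea concrete, the sketch below (illustrative only, not a method from the talk) shows physics acting as an inductive bias: a small network is fit to sparse measurements while being penalized whenever its predictions violate an assumed governing equation, here a hypothetical first-order decay ODE dx/dt = -k·x. All names and parameters are assumptions for illustration.

```python
import torch

# Illustrative sketch: a small network x(t) fit to sparse measurements,
# with an extra penalty for violating an assumed ODE dx/dt = -k*x.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
k = 0.5  # hypothetical, assumed-known physical parameter

def physics_informed_loss(t_data, x_data, t_collocation):
    # Supervised fit on the (possibly sparse, noisy) measurements.
    data_loss = torch.mean((model(t_data) - x_data) ** 2)

    # Physics residual at collocation points: autograd supplies dx/dt,
    # which the assumed ODE says should equal -k*x.
    t = t_collocation.clone().requires_grad_(True)
    x = model(t)
    dx_dt = torch.autograd.grad(x.sum(), t, create_graph=True)[0]
    physics_loss = torch.mean((dx_dt + k * x) ** 2)

    return data_loss + physics_loss  # relative weighting omitted for brevity
```

Because the physics residual constrains the learned function wherever collocation points are placed, the network needs far fewer labeled measurements to produce physically consistent predictions.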
This talk will explore various forms of inductive bias tailored for PHM systems, with a particular focus on heterogeneous-temporal graph neural networks, as well as physics-informed and algorithm-informed graph neural networks. These approaches will be applied to virtual sensing, the modelling of multi-body dynamical systems, and anomaly detection.
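For readers unfamiliar with graph neural networks, the sketch below implements one generic mean-aggregation message-passing layer in PyTorch; the heterogeneous-temporal and physics- or algorithm-informed variants discussed in the talk extend this basic pattern, and all names here are illustrative rather than taken from the talk.

```python
import torch

class MessagePassingLayer(torch.nn.Module):
    """One round of mean-aggregation message passing (a generic GNN layer,
    not the specific heterogeneous-temporal architecture from the talk)."""

    def __init__(self, dim):
        super().__init__()
        self.message = torch.nn.Linear(2 * dim, dim)
        self.update = torch.nn.Linear(2 * dim, dim)

    def forward(self, h, edges):
        # h: (num_nodes, dim) node features; edges: (num_edges, 2) (src, dst) pairs.
        src, dst = edges[:, 0], edges[:, 1]
        # Compute a message for each edge from sender and receiver features.
        m = torch.relu(self.message(torch.cat([h[src], h[dst]], dim=-1)))
        # Mean-aggregate incoming messages per destination node.
        agg = torch.zeros_like(h).index_add_(0, dst, m)
        deg = torch.zeros(h.size(0), 1).index_add_(
            0, dst, torch.ones(len(dst), 1)
        ).clamp(min=1)
        # Update each node's state with its aggregated neighborhood.
        return torch.relu(self.update(torch.cat([h, agg / deg], dim=-1)))
```

Stacking several such layers lets information propagate across the graph, which is what enables applications like virtual sensing, where measured nodes inform predictions at unmeasured ones.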
Review paper on AI system reliability accepted by Journal of Reliability Science and Engineering
As the potential applications of AI continue to expand, a central question remains unresolved: will users trust and adopt AI-powered technologies? Since AI’s promise hinges closely on perceptions of its trustworthiness, guaranteeing the reliability and trustworthiness of AI plays a fundamental role in fostering its broad adoption in practice. However, the theories, mathematical models, and methods of reliability engineering and risk management have not kept pace with the rapid technological progress in AI. As a result, the lack of essential components (e.g., reliability, trustworthiness) in the resulting models has emerged as a major roadblock to regulatory approval and widespread adoption of AI-powered solutions in high-stakes decision environments such as healthcare, aviation, finance, and nuclear power plants. To fully harness AI’s power for automating decision making in these safety-critical applications, it is essential to manage expectations of what AI can realistically deliver in order to build appropriate levels of trust. In this paper, we focus on the functional reliability of AI systems developed through supervised learning and discuss the unique characteristics of AI systems that necessitate specialized reliability engineering and risk management theories and methods for creating functionally reliable AI systems. Next, we thoroughly review five prevalent engineering mechanisms in the existing literature for approaching functionally reliable and trustworthy AI: uncertainty quantification (UQ), comprising model-based UQ and model-agnostic conformal prediction; failure prediction; learning with abstention; formal verification; and knowledge-enabled AI. Furthermore, we outline several research challenges and opportunities related to the development of reliability engineering and trustworthiness assurance methods for AI systems. Our research aims to deepen the understanding of reliability and trustworthiness issues associated with AI systems and to inspire researchers in the field of risk and reliability engineering and beyond to contribute to this area of emerging importance.
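Among the reviewed mechanisms, split conformal prediction lends itself to a compact illustration. The sketch below is a generic textbook construction, not code from the paper: it converts the residuals of any pretrained regressor on a held-out calibration set into distribution-free prediction intervals with a finite-sample coverage guarantee.

```python
import numpy as np

def split_conformal_interval(residuals_cal, y_pred_test, alpha=0.1):
    """Split conformal prediction for regression (generic construction).

    residuals_cal: |y - y_hat| on a held-out calibration set
    y_pred_test:   point predictions for new inputs
    alpha:         target miscoverage rate (0.1 -> ~90% coverage)
    """
    n = len(residuals_cal)
    # Finite-sample-corrected quantile of the calibration residuals.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(residuals_cal, q_level, method="higher")
    # Symmetric interval around each point prediction.
    return y_pred_test - q, y_pred_test + q
```

The appeal of this model-agnostic approach is that the coverage guarantee holds regardless of which supervised model produced the predictions, requiring only exchangeability between calibration and test data.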
Research paper accepted by European Journal of Operational Research
It is common for multiple firms, such as manufacturers, retailers, and third-party insurers, to coexist and compete in the aftermarket for durable products. In this paper, we study price competition in a partially concentrated aftermarket where one firm offers multiple extended warranty (EW) contracts while the others each offer a single one. The demand for EWs is described by the multinomial logit model. We show that, at equilibrium, such an aftermarket behaves like a combination of monopoly and oligopoly. Building upon this base model, we further investigate sequential pricing games for a durable product and its EWs to accommodate the ancillary nature of after-sales services. We consider two scenarios: one where the manufacturer (as the market leader) sets product and EW prices simultaneously, and another where these decisions are made sequentially. Our analysis demonstrates that offering EWs incentivizes the manufacturer to lower the product price, thereby expanding the market potential for EWs. Simultaneous product-EW pricing leads to a price concession on EWs compared to sequential pricing, effectively reducing the intensity of competition in the aftermarket. Overall, the competitiveness of an EW hinges on its ability to deliver high value to consumers at low marginal cost to its provider. While our focus is on EWs, the proposed game-theoretical pricing models apply broadly to other ancillary after-sales services.
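For a sense of the demand side, the sketch below (illustrative numbers and names only, not the paper's notation) computes multinomial logit purchase probabilities for competing EW contracts: each contract's deterministic utility is its consumer value minus a price-sensitivity term, and the outside option of buying no EW is normalized to zero utility.

```python
import numpy as np

def mnl_shares(prices, values, price_sensitivity=1.0):
    # Multinomial logit demand: contract j yields deterministic utility
    # v_j - beta * p_j; the outside option (no EW) has utility 0.
    u = np.asarray(values) - price_sensitivity * np.asarray(prices)
    expu = np.exp(u)
    return expu / (1.0 + expu.sum())  # the 1.0 is the outside option

# Illustrative numbers only: three competing EW contracts.
shares = mnl_shares(prices=[2.0, 1.8, 3.0], values=[3.0, 2.5, 4.0])
print(shares.round(3), "outside-option share:", round(1 - shares.sum(), 3))
```

Under such a demand system, each firm's optimal price trades off margin against choice probability, which is the mechanism behind the mixed monopoly/oligopoly behavior the paper identifies at equilibrium.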