Research paper accepted by Reliability Engineering & System Safety
Risk management often involves retrofit optimization to enhance the performance of buildings against extreme events, but retrofitting can entail substantial upfront mitigation costs. Existing stochastic optimization frameworks can be computationally expensive, may require explicit problem-specific programming, and often lack the ability to learn from experience. Hence, an intelligent risk optimization framework is proposed herein for building structures by developing a deep reinforcement learning-enabled actor-critic neural network model. The proposed framework comprises two parts: (1) a performance-based environment that assesses mitigation costs and uncertain future consequences under hazards, and (2) a deep reinforcement learning-enabled risk optimization model for performance enhancement. The performance-based environment takes mitigation alternatives as input and returns consequences and retrofit costs as output through a sequence of steps: hazard assessment, damage assessment, and consequence assessment. Risk optimization is performed by integrating the performance-based environment with actor-critic deep neural networks to simultaneously reduce retrofit costs and uncertain future consequences under seismic hazards. For illustration, the proposed framework is applied to a portfolio of building structures to demonstrate the new paradigm of intelligent risk optimization. The performance of the proposed method is also compared with that of genetic optimization, deep Q-networks, and proximal policy optimization.
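The abstract does not include implementation details; the sketch below only illustrates how an actor-critic loop of the kind described might be coupled to a performance-based environment in PyTorch. The environment class `PerformanceBasedEnv`, the discrete set of retrofit alternatives, and the reward definition (negative sum of retrofit cost and a sampled seismic consequence) are illustrative assumptions, not the authors' code.

```python
# Minimal actor-critic sketch for retrofit selection (illustrative only).
# `env` stands in for the paper's performance-based environment: it maps
# a retrofit action to a reward of -(retrofit cost + sampled consequence).
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, n_states, n_actions, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_states, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)  # logits over retrofit alternatives
        self.critic = nn.Linear(hidden, 1)         # state-value estimate

    def forward(self, state):
        h = self.shared(state)
        policy = torch.distributions.Categorical(logits=self.actor(h))
        return policy, self.critic(h).squeeze(-1)

def train_step(model, optimizer, env, state):
    policy, value = model(state)
    action = policy.sample()
    reward = env.step(action.item())  # hypothetical PerformanceBasedEnv call
    advantage = reward - value        # one-step advantage, no bootstrapping
    loss = -policy.log_prob(action) * advantage.detach() + advantage.pow(2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Sampling the reward from the environment captures the uncertain future consequences, while the critic's value estimate reduces the variance of the policy gradient.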
Prof. Yan-Fu Li gave a talk titled “Recent Research Progresses on Optimal System Reliability Design”.
Optimal system reliability design is an important research field in reliability engineering. Since the 1950s, extensive studies have been conducted on various aspects of this problem. The field remains highly active today because new generations of complex engineering systems, such as 5G telecom networks and high-performance computing clusters, are expected to be highly reliable to meet the stringent, dynamic, and often real-time quality demands of system operators and end-users. Over the past five years, a large body of new research on optimal system reliability design has been published, addressing the theoretical challenges posed by these new engineering systems. This presentation will systematically review these works with a focus on theoretical advances, including models and methods for the redundancy allocation problem, redundancy allocation under mixed uncertainty, the joint reliability-redundancy allocation problem, and joint redundancy allocation and maintenance optimization. Through analysis and discussion, we will outline future research directions.
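As a concrete instance of the redundancy allocation problem mentioned above, consider a series system in which subsystem i carries n_i parallel copies of a component with reliability r_i and cost c_i, and the goal is to maximize system reliability subject to a budget. The brute-force sketch below uses invented component data purely for illustration and is not drawn from the talk.

```python
# Illustrative redundancy allocation: maximize series-system reliability
# under a budget. Reliabilities, costs, and budget are invented numbers.
from itertools import product

r = [0.90, 0.85, 0.95]   # component reliability per subsystem
c = [2.0, 3.0, 1.5]      # component cost per subsystem
budget = 15.0

def system_reliability(n):
    # Series system of parallel subsystems: R = prod_i (1 - (1 - r_i)^n_i)
    rel = 1.0
    for ri, ni in zip(r, n):
        rel *= 1.0 - (1.0 - ri) ** ni
    return rel

best = max(
    (n for n in product(range(1, 5), repeat=len(r))
     if sum(ci * ni for ci, ni in zip(c, n)) <= budget),
    key=system_reliability,
)
print(best, system_reliability(best))
```

Exhaustive enumeration works only for toy cases; the reviewed literature addresses exactly the models and algorithms needed when the design space grows combinatorially.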
Research paper accepted by Knowledge-Based Systems
Principled quantification of predictive uncertainty in neural networks (NNs) is essential to safeguard their applications in high-stakes decision settings. In this paper, we develop a differentiable mathematical formulation to quantify the uncertainty in NN predictions using prediction intervals (PIs). The formulated optimization problem is differentiable and compatible with the built-in gradient descent optimizers of prevailing deep learning platforms, and two performance metrics, prediction interval coverage probability (PICP) and mean prediction interval width (MPIW), are considered in the construction of PIs. Unlike existing methods, the developed methodology features four salient characteristics. First, we design two differentiable distance-based functions to impose constraints associated with the target coverage in PI construction, where PICP is explicitly prioritized over MPIW in the devised composite loss function. Second, we adopt a shared-bottom NN architecture with intermediate layers to separate the learning of shared and task-specific feature representations when constructing the lower and upper bounds. Third, we leverage the projection of conflicting gradients (PCGrad) to mitigate interference between the gradients of the two learning tasks, thereby increasing convergence stability and solution quality. Finally, we design a customized early stopping mechanism that monitors PICP and MPIW simultaneously and selects, as the final NN parameters, the parameter set that not only meets the target coverage but also has a minimal MPIW. A broad range of datasets is used to rigorously examine the performance of the developed methodology. Computational results suggest that the developed method significantly outperforms the classic LUBE method across nine datasets, reducing the PI width by 31.26% on average. More importantly, it achieves competitive results compared with three other state-of-the-art methods, outperforming them on four out of ten datasets. An ablation study explicitly demonstrates the benefit of the shared-bottom NN architecture in the construction of PIs.
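The paper's exact loss is not reproduced here; the sketch below shows a differentiable PI loss in the same spirit, with a shared-bottom trunk feeding separate lower/upper-bound heads and a sigmoid-based soft coverage indicator so that a PICP-style term is differentiable. The smoothing factor `tau`, width weight `lam`, and the `SharedBottomPI` layout are assumptions for illustration, not the authors' formulation.

```python
# Sketch of a differentiable PI loss: soft PICP via sigmoids, with the
# coverage shortfall dominating a small width term. Hyperparameters invented.
import torch
import torch.nn as nn

class SharedBottomPI(nn.Module):
    """Shared trunk with separate heads for the lower and upper PI bounds."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.lower = nn.Linear(hidden, 1)
        self.upper = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.lower(h).squeeze(-1), self.upper(h).squeeze(-1)

def pi_loss(lo, hi, y, target=0.95, tau=10.0, lam=0.05):
    # Soft membership of y in [lo, hi]; sigmoids make coverage differentiable.
    covered = torch.sigmoid(tau * (y - lo)) * torch.sigmoid(tau * (hi - y))
    picp = covered.mean()
    mpiw = (hi - lo).mean()
    # Quadratic penalty on undercoverage prioritizes PICP over the width term.
    coverage_penalty = torch.relu(target - picp) ** 2
    return coverage_penalty + lam * mpiw
```

Replacing the hard coverage indicator with a sigmoid is one common way to obtain gradients through PICP; the paper's distance-based constraint functions, PCGrad step, and early-stopping rule are not reproduced in this sketch.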