Efficient Probabilistic Analysis Methods for Complex Numerical Models with Large Numbers of Random Variables, 20-9190


Principal Investigators
Sitakanta Mohanty
Michael P. Enright

Inclusive Dates: 04/01/00 - Current

Background - Probabilistic modeling of engineered and natural systems is widely recognized as key to future safety assessment and to the efficient development of products with better performance and reliability. Motivated by the need to replace expensive and time-consuming field and laboratory tests, government agencies and private industry are increasingly adopting physics-based probabilistic modeling technologies and have recently invested significant resources in numerical simulation techniques and uncertainty quantification methods. New challenges have emerged for probabilistic reliability and risk analysis because highly complicated, computationally intensive, physics-based models require a large number of variables to conduct accurate system-level failure analyses. For example, in the last several years, the number of parameters in the Total-System Performance Assessment code, jointly developed by Southwest Research Institute and the U.S. Nuclear Regulatory Commission, has grown from 200 to more than 800. Reliability-based design problems in motor vehicle design likewise involve hundreds of responses, each with hundreds of input variables. Such problems cannot be solved efficiently and accurately by the existing methods that have been used successfully for problems with a small number of variables. To the best of our knowledge, no existing method can accurately calculate a small probability of failure (say, less than 1 × 10-4) with fewer than a few thousand computer runs. The ability to efficiently conduct reliability analysis with a large number of variables will give the Institute the edge needed to compete in current and future markets.
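The sample-size barrier behind this motivation can be illustrated with the standard coefficient-of-variation formula for a crude Monte Carlo estimate of a failure probability; the function below is an illustrative calculation, not part of the project's method:

```python
import math

def mc_samples_needed(p_f, cov_target=0.1):
    """Crude Monte Carlo sample size needed to estimate a failure
    probability p_f at a target coefficient of variation, using
    CoV = sqrt((1 - p_f) / (N * p_f))."""
    return math.ceil((1.0 - p_f) / (p_f * cov_target**2))

# For p_f = 1e-4 at 10% CoV, crude sampling needs roughly a million runs,
# far beyond "a few thousand" model evaluations.
print(mc_samples_needed(1e-4))
```

This is why sampling alone is impractical for computationally expensive models and why tail-focused reliability methods are attractive.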

Approach - The objective of this project is to develop and demonstrate a fast and accurate method for risk and reliability problems involving computationally intensive, numerically complex models with large numbers of variables. The framework features a hybrid approach that combines a sampling method, which explores the parameter space, with advanced reliability methods, which focus the analysis on the tail of the probability distribution. A screening procedure that uses probabilistic sensitivity measures to identify the most important random variables is being developed and demonstrated. The method is being extended to combine low-probability, high-consequence scenarios with the most likely scenarios to obtain a single performance measure. An efficient reliability analysis method and an efficient reliability-based design optimization method are also being developed. Demonstration examples include nuclear waste management, oil and gas exploration and transportation, and automotive applications.
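To make the screening idea concrete, the sketch below ranks input variables by the absolute sample correlation between each input and the model response. This is a minimal stand-in for the project's probabilistic sensitivity measures; the function name, Gaussian inputs, and correlation criterion are illustrative assumptions, not the actual algorithm:

```python
import math
import random

def screen_variables(model, dists, n_samples=2000, seed=1):
    """Rank input variables by |sample correlation| with the response.

    model : callable mapping a list of inputs to a scalar response
    dists : list of (mean, std) pairs; inputs are drawn as independent
            normals (illustrative -- the project also treats non-Gaussian
            variables)
    Returns (ranking, scores): variable indices sorted most- to
    least-influential, and the correlation-based score of each variable.
    """
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n_samples):
        x = [rng.gauss(m, s) for m, s in dists]
        xs.append(x)
        ys.append(model(x))
    my = sum(ys) / n_samples
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n_samples)
    scores = []
    for i in range(len(dists)):
        col = [x[i] for x in xs]
        mi = sum(col) / n_samples
        si = math.sqrt(sum((c - mi) ** 2 for c in col) / n_samples)
        cov = sum((c - mi) * (y - my) for c, y in zip(col, ys)) / n_samples
        scores.append(abs(cov / (si * sy)) if si * sy > 0 else 0.0)
    ranking = sorted(range(len(dists)), key=lambda i: -scores[i])
    return ranking, scores
```

After screening, the expensive tail-focused reliability analysis need only retain the highest-ranked variables, which is the point of the hybrid framework.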

Accomplishments - A formal and comprehensive study of the method was initiated by investigating two sampling-based sensitivity measures for linear and nonlinear response functions with Gaussian and non-Gaussian random variables. Analytical solutions have been derived for linear response functions with Gaussian variables, and numerical solutions for second-order response surfaces with normal or lognormal variables are being developed. The analytical solutions suggest that the mean-sensitivity (derivative of the response CDF with respect to a variable's mean) generally has better discriminating power in identifying influential variables than the sigma-sensitivity (derivative of the response CDF with respect to a variable's standard deviation); both measures, however, discriminate well at the tails of the distribution.
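For the linear-Gaussian case described above, both sensitivity measures have closed forms. Taking Z = Σᵢ aᵢXᵢ with independent Xᵢ ~ N(μᵢ, σᵢ²), the response CDF is F(z) = Φ((z − μ_Z)/σ_Z), so ∂F/∂μᵢ = −φ(u)·aᵢ/σ_Z and ∂F/∂σᵢ = −φ(u)·u·aᵢ²σᵢ/σ_Z², where u = (z − μ_Z)/σ_Z. The sketch below evaluates these derivatives; function names are illustrative, and the formulas follow from the stated linear-Gaussian assumption rather than from the project report itself:

```python
import math

def phi(x):
    """Standard normal PDF."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def response_cdf(z, a, mu, sigma):
    """CDF of Z = sum_i a_i*X_i with independent X_i ~ N(mu_i, sigma_i^2)."""
    mu_z = sum(ai * mi for ai, mi in zip(a, mu))
    sd_z = math.sqrt(sum(ai**2 * si**2 for ai, si in zip(a, sigma)))
    return Phi((z - mu_z) / sd_z)

def cdf_sensitivities(z, a, mu, sigma):
    """Analytical mean-sensitivities dF/dmu_i and sigma-sensitivities
    dF/dsigma_i of the response CDF at level z."""
    mu_z = sum(ai * mi for ai, mi in zip(a, mu))
    sd_z = math.sqrt(sum(ai**2 * si**2 for ai, si in zip(a, sigma)))
    u = (z - mu_z) / sd_z
    d_mu = [-phi(u) * ai / sd_z for ai in a]
    d_sigma = [-phi(u) * u * ai**2 * si / sd_z**2
               for ai, si in zip(a, sigma)]
    return d_mu, d_sigma
```

Note that each sigma-sensitivity carries an extra factor of u relative to the mean-sensitivities, so it vanishes at the median (u = 0) and grows in the tails, consistent with both measures discriminating well there.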

Intelligent Systems, Advanced Computer and
Electronic Technology, and Automation Program