Treatment of Multiple Failure Modes and Epistemic Uncertainties with
the Efficient Global Reliability Analysis Method, 18-R9895


Principal Investigators
Barron J. Bichon
John M. McFarland
David S. Riha

Inclusive Dates:  10/01/08 – 01/04/10

Background - Reliability analysis is the process of studying how uncertainties and variations in model inputs such as material properties, boundary conditions, or loading conditions affect the resulting outputs or system performance. Simply put, given the variations known to exist in properties, operating conditions, and other factors, the goal is to estimate the probability that a particular system will satisfy a required performance measure. This probability is referred to as the reliability. Unfortunately, reliability analysis can add considerable complexity and computational expense to traditional deterministic analysis, making it cost-prohibitive for many applications of interest. This is especially true for systems that possess multiple failure modes. Additionally, uncertainty in the inputs to the reliability computation leads to uncertainty in the estimated reliability, which reduces confidence in the result.
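To make the idea concrete, the minimal sketch below (our illustration, not project software) estimates reliability by simple Monte Carlo sampling: a limit-state function g encodes the performance requirement, g > 0 denotes success, and the reliability is the fraction of sampled inputs that satisfy it. The limit state and input distributions are assumed purely for illustration.

```python
import numpy as np

def limit_state(x):
    """Illustrative performance measure: g > 0 means the requirement is met
    (here, a notional capacity minus a notional demand)."""
    capacity, demand = x[:, 0], x[:, 1]
    return capacity - demand

rng = np.random.default_rng(0)
n = 1_000_000
# Assumed input variations (values are illustrative only).
samples = np.column_stack([
    rng.normal(3.0, 0.3, n),   # capacity
    rng.normal(2.0, 0.3, n),   # demand
])

# Reliability = probability that the performance requirement is satisfied.
reliability = np.mean(limit_state(samples) > 0.0)
print(f"Estimated reliability: {reliability:.5f}")
```

The brute-force approach shown here is what makes reliability analysis expensive: each sample is one run of the deterministic model, and rare-event probabilities require very many runs.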

Approach - The first phase of this project aims to simultaneously reduce the computational expense and increase the accuracy of estimating the reliability of large-scale engineering systems. This will be accomplished through several improvements to the recently developed Efficient Global Reliability Analysis (EGRA) method, specifically targeting its application to (a) extremely expensive, (b) potentially noisy, and (c) system-level problems; (d) parallel execution of the underlying deterministic analyses; and (e) inverse reliability analysis. The second phase addresses uncertainty quantification: the research aims to study, develop, and improve methods for quantifying several types of uncertainty that can affect computed reliability results, namely (a) uncertainty in estimated probability distribution model parameters, (b) uncertainty in the probability distribution model type, and (c) uncertainty introduced through the use of a Gaussian process surrogate model within the EGRA framework.

Accomplishments -

System-Level Reliability Analysis: By modifying a core part of EGRA, a method with an unprecedented combination of accuracy and efficiency has been created for systems with multiple failure modes. For instance, on one example problem, EGRA achieved accuracy equivalent to that of random sampling with 3 million evaluations at an average expense of only 35 function evaluations. Previous efficient methods remain both more computationally expensive than EGRA and less accurate because of their reliance on simplifying approximations.
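The report describes the system-level extension only at a high level; as background, the sketch below (hypothetical limit states and distributions) illustrates the standard composite limit-state view of a multi-failure-mode problem that such methods target. For a series system, failure in any single mode fails the system, so the composite limit state is the minimum over the modes' limit states; a parallel system would use the maximum.

```python
import numpy as np

def g1(x):  # first failure mode (illustrative)
    return 3.0 - x[:, 0]**2 / 4.0 - x[:, 1]

def g2(x):  # second failure mode (illustrative)
    return x[:, 0] + x[:, 1] + 2.5

def g_series(x):
    """Series system: the system fails if ANY mode fails (g_i <= 0),
    so the composite limit state is the minimum over the modes."""
    return np.minimum(g1(x), g2(x))

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(3_000_000, 2))
pf = np.mean(g_series(x) <= 0.0)
print(f"System probability of failure: {pf:.2e}  (reliability {1.0 - pf:.6f})")
```

The min/max composition is generally non-smooth along the boundaries between modes, which is part of what makes system-level problems difficult for surrogate-based methods and motivated the modification to EGRA.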

Distribution Uncertainty: The probability density functions for the random variables used in reliability analysis are often estimated by fitting probability distribution models to observed test data. Because only a finite amount of test data can ever be obtained, there is always some degree of uncertainty about the underlying probability density functions that generated the observed data, and consequently some uncertainty in any reliability estimate computed from the assumed distribution models. This project has developed a rigorous approach, founded in Bayesian inference, for quantifying the uncertainty in both the distribution model form and its parameters. The approach computes the uncertainty in the reliability by averaging over multiple candidate models weighted by their relative likelihoods given the observed data. While this type of analysis is certainly possible with other reliability methods such as Monte Carlo sampling, it is the vast computational savings of EGRA that finally make quantifying the uncertainty in reliability estimates caused by distribution uncertainty practical.
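As a simplified illustration of the parameter-uncertainty portion of this idea (model-form uncertainty would additionally average over candidate distribution families weighted by their likelihoods), the sketch below draws posterior samples of a normal distribution's parameters from a small data set and propagates each draw to a reliability estimate. The data, prior, capacity value, and limit state are all assumed for illustration and are not from the project.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical observed test data for an uncertain input (e.g., a demand).
data = rng.normal(10.0, 1.5, size=20)
n, ybar, s2 = len(data), data.mean(), data.var(ddof=1)

capacity = 15.0          # assumed fixed capacity; g = capacity - demand
n_draws = 5000
reliabilities = np.empty(n_draws)
for i in range(n_draws):
    # Posterior draws for (mu, sigma^2) under the Jeffreys prior with a
    # normal likelihood: sigma^2 follows a scaled inverse chi-square
    # distribution, and mu | sigma^2 is normal.
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))
    # Reliability conditional on these parameters: P(demand < capacity).
    reliabilities[i] = stats.norm.cdf(capacity, loc=mu, scale=np.sqrt(sigma2))

lo, hi = np.percentile(reliabilities, [2.5, 97.5])
print(f"Reliability: mean {reliabilities.mean():.4f}, "
      f"95% interval [{lo:.4f}, {hi:.4f}]")
```

The output is a distribution over the reliability itself rather than a single number, which is exactly the kind of result that conveys how much confidence the available data support.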

Inverse Reliability Analysis: Typically, reliability analysis calculates the reliability of a design given a desired performance level. An inverse analysis instead seeks the performance level that corresponds to a desired reliability. By modifying its search algorithm, EGRA has been extended to perform this type of analysis with greater efficiency and accuracy than previously available methods.
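A brute-force statement of the inverse problem is sketched below (illustrative only; the response function and distributions are assumptions, and EGRA solves this far more efficiently than sampling): find the performance level z* such that the response stays at or below z* with the target reliability, i.e., the corresponding quantile of the response.

```python
import numpy as np

def g(x):
    """Illustrative response function (e.g., a tip deflection)."""
    return x[:, 0]**2 + 0.5 * x[:, 1]

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, size=(2_000_000, 2))
target_reliability = 0.999

# Inverse problem: find the performance level z* that the system meets
# with the target reliability, i.e., P(g(X) <= z*) = 0.999.
z_star = np.quantile(g(x), target_reliability)
print(f"Performance level met with reliability {target_reliability}: {z_star:.4f}")
```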

Parallel Execution: One objective of the EGRA method is to estimate reliability using the fewest possible performance model evaluations. This is a key factor when the method is used with expensive models, such as finely meshed finite element models. Most modern computer systems support parallel processing, whether through multiple processing cores, multiple discrete processors, or a distributed computing network with multiple computational nodes. Such systems allow an expensive analysis function to be evaluated in parallel for multiple combinations of the inputs (e.g., multiple samples in a reliability analysis context). EGRA was therefore modified to identify several good locations to explore at each iteration, where previously only the single best point was sought. Because a parallel computer can evaluate multiple instances of the response function simultaneously, these points can be evaluated at roughly the cost of a single evaluation. With this capability, the number of required function evaluations may be unchanged, but because some evaluations are effectively "free," the run time of the analysis is reduced without sacrificing accuracy.
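A minimal sketch of the batch-evaluation idea using Python's standard library follows (the response function and candidate points are placeholders, and this is not the EGRA implementation): once the algorithm proposes several promising points per iteration, they can be dispatched to worker processes so the iteration costs roughly the wall-clock time of one evaluation.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def expensive_response(point):
    """Stand-in for a costly simulation (e.g., one finite element run)."""
    x1, x2 = point
    return x1**2 + np.sin(5.0 * x2)  # illustrative only

if __name__ == "__main__":
    # Suppose the adaptive algorithm has proposed a batch of candidate
    # points for this iteration instead of a single best point.
    batch = [(0.1, 0.2), (1.3, -0.4), (-0.7, 0.9), (2.0, 0.0)]

    # All points in the batch are evaluated concurrently, so the iteration
    # costs roughly the wall-clock time of one evaluation.
    with ProcessPoolExecutor() as pool:
        responses = list(pool.map(expensive_response, batch))

    for pt, y in zip(batch, responses):
        print(pt, "->", f"{y:.4f}")
```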

Noisy Performance Functions: Performance functions may exhibit erratic or otherwise non-smooth behavior, particularly for small changes in the input variables. For example, in many transient analyses such as impact dynamics, blast loading, and computational fluid dynamics, the response may appear "noisy" because of the chosen finite element mesh size or time integration step. This behavior can be problematic for EGRA, which constructs a Gaussian process (GP) surrogate model for the response under the assumption that it is a smooth function of the inputs. By modifying the underlying GP model and the expected feasibility function, EGRA can be successfully applied to problems involving noisy response functions: it builds a model that "ignores" the noise and captures a smooth approximation of the underlying trend. If the noise is artificial, i.e., an error in the model that does not properly represent reality, the GP model created by EGRA may actually be a more accurate representation of the true response. If the noise is real, the smooth approximation will introduce some error; however, for expensive response functions where a sampling method is impractical, EGRA can still provide an approximate solution at a small fraction of the computational expense. A significant challenge remains in selecting the proper noise level with which to construct the GP model. As the example problems in the report demonstrated, the approach works quite well when this value is properly set, but it can be difficult to know what value to use. Future work may seek to select an appropriate value automatically when constructing the GP model.
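One common way to realize this kind of noise-tolerant GP is to add a "nugget" (white-noise) term to the covariance, so the fitted mean smooths over the noise rather than interpolating every point. The sketch below shows this with scikit-learn (an illustration under assumed data; the project's modified GP and expected feasibility function are not reproduced here).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)

# Noisy observations of a smooth underlying trend (noise is illustrative).
X = np.linspace(0.0, 5.0, 40).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0.0, 0.1, X.shape[0])

# The WhiteKernel term plays the role of a nugget: it attributes part of
# the observed variation to noise, so the fitted mean stays smooth instead
# of interpolating every point. Its level is learned from the data here,
# but as noted above, choosing it well is the hard part in practice.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

mean, std = gp.predict(np.array([[2.5]]), return_std=True)
print(f"Smoothed prediction at x=2.5: {mean[0]:.3f} +/- {std[0]:.3f}")
```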

Gaussian Process Surrogate Uncertainty: Much progress was made during this project in identifying and studying approaches for quantifying the impact of surrogate model uncertainty on reliability predictions. The most rigorous approach, full random process simulation, was determined to be impractical for most problems because of the data storage and computational expense involved in simulating random process realizations at a large number of sample points. An efficient confidence-bound approach was implemented instead and found to be far more practical while still providing valuable quantitative information about the uncertainty in the reliability predictions. This capability was also identified as a useful means of assessing convergence of the EGRA algorithm: convergence can be based on achieving a specified level of confidence in the reliability prediction, and the user can set more or less stringent convergence criteria depending on the application requirements.
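The confidence-bound idea can be sketched as follows (an illustration under assumed functions and distributions, not the project's implementation): evaluate the failure indicator not only at the GP's predicted mean but also at the mean shifted by a multiple of the predictive standard deviation, so the spread of the resulting failure-probability estimates reflects the surrogate's own uncertainty.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)

def true_g(x):  # stand-in for an expensive limit state (illustrative)
    return 2.0 - x[:, 0]**2 - 0.5 * x[:, 1]

# Fit a GP surrogate to a small set of training runs.
X_train = rng.normal(0.0, 1.5, size=(30, 2))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              normalize_y=True).fit(X_train, true_g(X_train))

# Sample the inputs once, then evaluate the failure probability using the
# GP mean and using the mean shifted by +/- 2 standard deviations: the
# spread of the three estimates quantifies how much the surrogate's own
# uncertainty affects the answer.
X = rng.normal(0.0, 1.0, size=(200_000, 2))
mean, std = gp.predict(X, return_std=True)
pf_nominal = np.mean(mean <= 0.0)
pf_upper = np.mean(mean - 2.0 * std <= 0.0)   # pessimistic bound
pf_lower = np.mean(mean + 2.0 * std <= 0.0)   # optimistic bound
print(f"pf approximately in [{pf_lower:.4f}, {pf_upper:.4f}], "
      f"nominal {pf_nominal:.4f}")
```

A convergence test in the spirit described above could then refine the surrogate until the gap between the optimistic and pessimistic estimates falls below a user-specified tolerance.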

NESSUS® and CENTAUR™ Extensions: Using separate funds, these new capabilities have been formally incorporated into the NESSUS and CENTAUR software packages. These new extensions provide SwRI with unique capabilities to solve complex reliability problems and gain additional insight into the results. In time, this is anticipated to lead to additional project revenue and NESSUS license sales.
