The Pennsylvania State University
The Graduate School
College of Engineering

SENSITIVITY AND UNCERTAINTY STUDY OF CTF USING THE UNCERTAINTY ANALYSIS IN MODELING BENCHMARK

A Thesis in Nuclear Engineering
by Nathan W. Porter

2015 Nathan W. Porter

Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science

December 2015

The thesis of Nathan W. Porter was reviewed and approved by the following:

Maria Avramova, Adjunct Professor of Nuclear Engineering, Thesis Advisor
Kostadin Ivanov, Adjunct Professor of Nuclear Engineering
Vincent Mousseau, Engineer at Sandia National Laboratories, Special Signatory
Arthur Motta, Professor of Nuclear Engineering and Materials Science and Engineering, Chair of Nuclear Engineering Department

Signatures are on file in the Graduate School.

Abstract

This work describes the results of a quantitative sensitivity and uncertainty analysis of the thermal hydraulic subchannel code Coolant-Boiling in Rod Arrays-Three Field (COBRA-TF). Four steady state cases from Phase II, Exercise 3 of the Organisation for Economic Co-operation and Development/Nuclear Energy Agency Uncertainty Analysis in Modeling Benchmark (OECD/NEA UAM) are analyzed using the statistical analysis tool Design Analysis Kit for Optimization and Terascale Applications (Dakota). The input uncertainties include the boundary condition and geometry uncertainties specified in the benchmark, as well as modeling uncertainties which are selected based on preliminary sensitivity studies and expert judgment. A large variety of output parameters are analyzed for each case: maximum void fractions and temperatures, bundle pressure drop, and a variety of axial distributions for a central subchannel. The predicted uncertainty in all parameters remains below 10% for all cases. The dominant sources of uncertainty are inferred from sensitivity studies and from the rank correlation coefficients computed during the uncertainty analysis.

The results agree well with comparable past studies, but include a number of important improvements. A thorough analysis of geometry uncertainties is used to conclude that these uncertainties are negligible for all UAM cases. Wilks Formula is shown to be inadequate for sample size selection in uncertainty quantification studies of nuclear codes. In addition, the pitfalls of using traditional black box uncertainty analysis methods are explored. An in-depth description of the bulk mass transfer model is used as an example to demonstrate that current uncertainty analysis methods are vastly insufficient to fully understand the intricacies of complex computational tools. Significant improvements can be made; for example, Bayesian Calibration can be used to directly relate experimental results to input parameter uncertainty. This method is demonstrated for the Lee and Ryley correlation and can give much more accurate input uncertainties than expert opinion.

Table of Contents

List of Figures vii
List of Tables viii
List of Symbols ix
Acknowledgments xii

Chapter 1 Introduction 1
1.1 Uncertainty Analysis 2
1.2 UAM Benchmark 4
1.3 COBRA-TF 7
1.4 Dakota 8

Chapter 2 Methods 9
2.1 CTF-Dakota Coupling 9
2.2 Sensitivity Methods 10
2.2.1 Parameter Studies 11
2.2.2 Morris Screening 11
2.2.3 Random Sampling Study 12
2.3 Uncertainty Methods 13
2.3.1 Input Uncertainty Selection 13
2.3.1.1 CTF VUQ Parameters 13
2.3.1.2 Geometry Uncertainties 15
2.3.1.3 Bayesian Calibration 15
2.3.2 Sample Size Selection 17
2.3.2.1 Wilks Formula 17
2.3.2.2 New Method 18

Chapter 3 UAM Results 20
3.1 Sensitivity Study Results 20
3.1.1 Case 1a 21
3.1.2 Case 2a 23
3.1.3 Case 4a 24
3.1.4 Case 5a 25
3.1.5 Selected Input Uncertainties 25
3.2 Uncertainty Quantification Results 27
3.2.1 Case 1a 27
3.2.2 Case 2a 28
3.2.3 Case 4a 32
3.2.4 Case 5a 32
3.2.5 Discussion 34
3.3 Comparison to Wilks Formula 36
3.4 Geometry Method Comparison 37

Chapter 4 Comprehensive Uncertainty Analysis 38
4.1 Physical Basis 38
4.2 Calculation of Interfacial Heat Transfer Coefficients 39
4.2.1 Spatial Smoothing 40
4.2.2 Correlations 41
4.2.3 State Space Smoothing 42
4.2.3.1 Normal Regimes 42
4.2.3.2 Hot Wall Regimes 44
4.2.4 Temporal Smoothing 44
4.2.5 Limits, Switching and Ramps 44
4.3 Sensitivity Studies 47

Chapter 5 Bayesian Calibration 50
5.1 The Lee and Ryley Correlation 50
5.2 Data Selection 51
5.3 Calibration Results 54

Chapter 6 Conclusions 59

Appendix A Geometry Uncertainty Algorithm 62
A.1 AN: Subchannel Area 62
A.2 PW: Wetted Perimeter 64
A.3 GAP: Gap Size 65
A.4 Numbering System 65

Appendix B Interfacial Heat Transfer Correlations 67
B.1 Forced Convection Correlations 67
B.1.1 Small Bubble 67
B.1.2 Large Bubble 70
B.1.3 Droplet 70
B.1.4 Film 71
B.2 Hot Wall Correlations 73
B.2.1 Droplet 74
B.2.2 Film 75

Bibliography 76

List of Figures

1.1 Diagram of a black box method 3
1.2 Geometry for all cases 5
2.1 Coupling between Dakota and CTF 10
3.1 Elementary effects of power distributions 22
3.2 Case 1a correlation coefficients 28
3.3 Case 1a axial uncertainties 29
3.4 Case 2a correlation coefficients 30
3.5 Case 2a axial uncertainties 31
3.6 Case 4a correlation coefficients 32
3.7 Case 4a axial uncertainties 33
3.8 Case 5a correlation coefficients 34
3.9 Case 5a axial uncertainties 35
4.1 Simple diagram of the heat transfer model in CTF 39
4.2 Spatial averaging due to staggered grid 40
4.3 Forced convection flow regime map 42
4.4 Maximum volumetric heat transfer coefficient in CTF 45
4.5 Ramps on the interfacial heat transfer coefficients 47
4.6 Sensitivity study of interfacial heat transfer correlations 49
5.1 Raw data for Bayesian Calibration 53
5.2 Solid/air bivariate projections 54
5.3 Liquid/air bivariate projections 55
5.4 Solid/liquid bivariate projections 56
5.5 Liquid/liquid bivariate projections 56
5.6 Marginal distributions from Bayesian Calibration 57
A.1 Demonstration of geometry methodology 63
A.2 Geometry parameter definitions 63
A.3 Calculated areas 64
A.4 Changes in gap size 65
A.5 Geometry numbering systems 66

List of Tables

1.1 Boundary conditions for the steady state cases 5
1.2 Uncertainty parameters from UAM Benchmark 6
1.3 Suggested modeling uncertainties for CTF 6
2.1 Sensitivity methods and computational cost 11
2.2 List of CTF VUQ parameters 14
2.3 Sample size necessary from two-sided Wilks Formula 17
3.1 Case 1a parameter study results 21
3.2 Case 1a Morris Screening results 22
3.3 Case 2a parameter study results 23
3.4 Case 2a Morris Screening results 23
3.5 Case 4a parameter study results 24
3.6 Case 4a Morris Screening results 24
3.7 Case 5a parameter study results 25
3.8 Case 5a Morris Screening results 25
3.9 Selected input uncertainties 26
3.10 Case 1a uncertainty results 27
3.11 Case 2a uncertainty results 30
3.12 Case 4a uncertainty results 32
3.13 Case 5a uncertainty results 34
3.14 Change in results when using Wilks Formula 37
3.15 Change in results when using all geometry perturbations 37
4.1 Test problem parameters 47
5.1 Data used in Bayesian Calibration 51
5.2 Means and standard deviations of marginal distributions 58
B.1 CTF interfacial heat transfer correlations (H_i = h_i A_i [Btu/s·°F]) 68
B.2 Lee and Ryley ranges 69
B.3 Henstock and Hanratty ranges 73
B.4 Frössling ranges 74
B.5 Yuen and Chen ranges 75

List of Symbols

Greek Letters
α  Void fraction [-]
β  Single phase mixing coefficient [-]
Γ  Mass transfer [kg/s]
δ  Film thickness [m]
ε  Under relaxation parameter [-]
ɛ  Convergence parameter [-]
ζ  Pressure loss coefficient [-]
η  Fraction of vapor generation from droplet field [-]
θ  Parameter for Bayesian Calibration [-]
Θ  Two phase mixing multiplier [-]
µ  Dynamic viscosity [kg/m·s]
ρ  Density [kg/m³]
σ  Surface tension [J/m²]
τ  Interfacial drag coefficient [-]

Nondimensional Numbers
Gr  Grashof
Ja  Jakob = ρ(h_l - h_f)/(ρ_v h_fg)
Nu  Nusselt = hL/k
Pe  Peclet = Re·Pr = C_p ρvL/k
Pr  Prandtl = C_p µ/k
Re  Reynolds = ρvL/µ
We  Weber = ρu²D_p/σ

Roman Letters
A  Area [m²]
C_D  Drag coefficient [-]
C_p  Specific heat [kJ/kg·K]
D  Diameter [m]
d  Elementary effect [-]
F  Ramping or modification factor [-]
f  Friction factor [-]
H  Heat transfer coefficient [W/K]
k  Thermal conductivity [W/m·K]
K_a  Void drift scaling factor [-]
M  Number of input uncertainty parameters [-]
N  Number of particles or sample size [-]
P  Pressure [kPa]
p  Partitions of each input in a sensitivity study [-]
Q  Total power [kW]
q  Number of replicates used for Morris Screening [-]
R  Radius [m]
s  Standard deviation
T  Temperature [K]
h  Heat transfer coefficient per unit area [W/m²·K]
h  Enthalpy [kJ/kg]
ṁ  Mass flow rate [kg/s]
u  Velocity in a single direction [m/s]
u  CTF vector velocity (axial and transverse components) [m/s]
V  Volume [m³]
x̄  Mean

Subscripts
b  Bubble
c  Continuous or cladding
d  Dispersed or droplet
e  (Entrained) droplet

f  Saturated fluid/liquid or fuel
g  Saturated gas/vapor
h  Hydraulic
i  Interfacial
J  Momentum mesh cell index
j  Scalar mesh cell index
k  For the k-phase
l  Fluid/liquid
p  Particle (bubble or droplet)
s  Surface
t  Transverse
v  Gas/vapor
w  Wall

Superscripts
n  Time step index
s  Saturated

Acknowledgments

I would like to express my gratitude to Dr. Avramova and Dr. Ivanov for their continuous support and guidance throughout my studies. In addition, I thank my mentor, Dr. Mousseau, for his commitment to my learning and his willingness to share insight with me. Finally, many thanks to my close friend, Arielle Schoblom, for her time during the drafting process.

This research was partially supported by the Consortium for Advanced Simulation of Light Water Reactors (www.casl.gov), an Energy Innovation Hub (www.energy.gov/hubs) for Modeling and Simulation of Nuclear Reactors under U.S. Department of Energy Contract Number DE-AC05-00OR22725.

Chapter 1
Introduction

Direct experimentation in the nuclear industry is inherently dangerous and prohibitively expensive. Therefore, computational tools have emerged as an acceptable supplement. Early codes were written in the 1970s and 1980s and are often referred to as legacy codes. These were designed early in the evolution of computers, when computational limitations required a large number of simplifications which were not always applicable and introduced significant amounts of uncertainty. Further, limitations in experimental data could make it difficult to quantify model uncertainties. Codes were required to ensure safe operation of reactors using limited computational capabilities. Under these constraints, it was necessary to design legacy codes to be overly conservative. Codes were intended to overestimate the responses relevant to the safety of the reactor; if the computational tool predicted safe operation, then the reactor was deemed safe. This allowed the general trend of reactor behavior to be estimated with the belief that the real reactor would be safer than the simulation.

Conservative methods have been used in the nuclear industry for decades, but as computers and knowledge have improved, it has become possible to estimate the uncertainty of simulations. Modern uncertainty analysis methods provide bounds on the output of a simulation. This kind of method, referred to as a Best Estimate Plus Uncertainty (BEPU) method, contains no conservative assumptions. In 1988, the US Nuclear Regulatory Commission (NRC) changed 10 CFR 50.46 to allow licensing decisions based on BEPU analyses [36]. This can eliminate much of the safety margin that is fundamental to traditional conservative analyses, enabling efficient operation and greater power production. Legacy codes were never intended to utilize these methods, and using them presents a variety of unique challenges. Nonetheless, the modernization of a legacy code often requires less effort than the creation of a new code.

The acceptance of BEPU methods by regulators has required the development of formal methods for conducting these analyses. For example, the NRC developed the Code Scaling, Applicability, and Uncertainty (CSAU) method. The Predictive Capability Maturity Model (PCMM), a modernized version of CSAU with a focus on engineering applications, was developed at Sandia National Laboratories (SNL). These methods consider similar issues, which will be discussed broadly in Section 1.1. Some projects, such as the Organisation for Economic Co-operation and Development/Nuclear Energy Agency Uncertainty Analysis in Modeling Benchmark (OECD/NEA UAM) [7], are being developed to apply BEPU methods to nuclear codes. The Benchmark is used in the current work and will be discussed in Section 1.2.

This work will use the thermal hydraulic section of the UAM Benchmark to perform a sensitivity and uncertainty analysis on a thermal hydraulic subchannel code, Coolant-Boiling in Rod Arrays-Three Field (COBRA-TF). Design Analysis Kit for Optimization and Terascale Applications (Dakota), an uncertainty and sensitivity tool developed at SNL, will be used to perform the analysis. The Pennsylvania State University (PSU) Reactor Dynamics and Fuel Management Group (RDFMG) version of COBRA-TF (CTF) and Dakota will be discussed in Sections 1.3 and 1.4, respectively.

After the general background and concepts have been introduced in this chapter, methods are discussed in Chapter 2. Results for the UAM study are presented in Chapter 3. Chapter 4 then demonstrates some of the shortcomings of current uncertainty analysis methods and the difficulties encountered when attempting to make improvements. Chapter 5 applies a new method which addresses some of these issues. The final chapter provides a general conclusion with a discussion of the results and future work.

1.1 Uncertainty Analysis

Uncertainties in computational tools originate from a variety of sources.
Different methods and authors group them differently, but here they are divided into four general categories: code bugs, numerical errors, model form uncertainty, and parameter uncertainty.

The uncertainty from code bugs is minimized using thorough regression testing. A large number of small unit tests are designed to ensure that each section of the code is running correctly. These tests are usually automated to ensure that any changes or additions do not destroy existing capabilities. Because code bugs can cause unpredictable results, they must be minimized before any uncertainty analysis can be performed.

The quantification of numerical error is generally referred to as verification. Verification is split into two parts: code verification, which ensures that the numerical method is implemented correctly by comparison with an exact solution, and solution verification, which addresses the uncertainty due to sub-optimal nodalization by estimating how the solution changes as the time step or cell size is varied.

Model form uncertainty accounts for assumptions that are inconsistent between the coded models and the cases to which they are applied. This includes scaling errors and the extrapolation of data outside experimental ranges. Also included in this type of uncertainty is the use of a model or correlation where it is not appropriate, such as applying a correlation for spherical particles to bubbles. Model form uncertainties are quantified by comparing code solutions to experimental data, which is called validation.

After regression testing, verification, and validation have been thoroughly explored for a computational tool, the parameter uncertainty can be examined. This final source deals with the design of correlations. Each coefficient in a correlation has some uncertainty, which can be formally defined using a variety of methods. One such method is Bayesian Calibration, which will be discussed in Section 2.3.1.3 and demonstrated in Chapter 5.

Traditionally, uncertainty quantification studies focus only on parameter uncertainty and assume that all other contributions are small. These methods are generally referred to as black box, or non-intrusive, because they allow the user to propagate uncertainty without any knowledge of the code internals. Most black box methods allow the analyst to select a subset of input uncertainties based on their own judgment, which can exclude any number of important models. This kind of method is demonstrated in Figure 1.1.

Figure 1.1. Diagram of a black box method (input uncertainties pass through the code, treated as a black box, to produce output uncertainties)

Black box methods are only applicable when all other sources of uncertainty have been minimized, but code developers often make this assumption based on no quantitative data.
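The solution verification step described above, which estimates how the solution changes as the time step or cell size is varied, is commonly formalized with Richardson extrapolation. A minimal sketch under that interpretation, using hypothetical exit void fractions on three successively refined axial meshes (not actual CTF results):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from solutions on three grids
    related by a constant refinement ratio r."""
    return math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)

def richardson(f_medium, f_fine, r, p):
    """Extrapolated (mesh-converged) estimate from the two finest grids."""
    return f_fine + (f_fine - f_medium) / (r**p - 1.0)

# Hypothetical exit void fractions on 20, 40, and 80 axial nodes (r = 2)
f20, f40, f80 = 0.412, 0.403, 0.4007

p = observed_order(f20, f40, f80, 2.0)   # should be near the scheme's nominal order
f_conv = richardson(f40, f80, 2.0, p)
num_uncertainty = abs(f_conv - f80)      # a simple numerical-error estimate
```

For these illustrative values the observed order comes out close to 2, and the difference between the extrapolated value and the finest-grid solution serves as a rough estimate of the remaining numerical error.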
To avoid this issue, code developers should aim to quantify each source of uncertainty in a consistent and quantitative way, and then focus on reducing the largest uncertainty until the code is sufficiently accurate:

UQ focus = max(numerical, model form, parameter, etc.)    (1.1)

A variety of authors have developed quantitative methods to address all sources of uncertainty in computational tools. The first of these is the CSAU method, which was developed by the NRC to support its new acceptance of BEPU analysis [27]. The method is systematic, comprehensive, and applicable to a variety of codes and models. Scalability issues are addressed first, then the ability of the code to model a specific case. Finally, the uncertainty of the results is quantified. A top-down approach is used to first identify important input uncertainties, followed by a bottom-up approach to quantify the uncertainty.

Another example is the PCMM, which groups evidence that a code is mature into the following six categories: code bugs, code verification, solution verification, validation, parameter uncertainty, and calibration [37]. Each category must be sufficiently complete before the code can be used in any safety capacity. Much of this process is the same as in the CSAU method, since all formal BEPU methods address the same problems.

These methods both demonstrate that an intimate knowledge of the code is necessary to quantify uncertainty. This level of detail is not used in black box methods, and therefore their results are less justifiable and less accurate. Though black box methods will be used in the current work, it is important to recognize that there is significant room for improvement. Chapters 4 and 5 will demonstrate this point using an in-depth analysis of a single input uncertainty.

1.2 UAM Benchmark

The OECD/NEA Nuclear Science Committee Expert Group on UAM started developing the UAM Benchmark in 2006 [7]. The Benchmark is in progress and aims to assess the uncertainty in reactor physics, fuel performance, and thermal hydraulic codes. After each type of code is addressed individually in Phases I and II of the Benchmark, Phase III will use the combined knowledge from the previous phases to assess the uncertainty of coupled codes. This work is a large undertaking and has many participants.

Phase II, Exercise 3 of the Benchmark details twelve test cases that focus on the uncertainty in thermal hydraulic codes.
Each case models a single assembly, and the cases are divided equally among three types of Light Water Reactors (LWRs): Pressurized Water Reactors (PWRs), Boiling Water Reactors (BWRs), and Water-Water Energetic Reactors (VVERs). Half of the cases are based on hypothetical full-sized reactors, with a few changes where data is lacking. The second half are models of experiments, so that the results can be compared to real-world data. The Benchmark is also divided equally between steady state and transient cases. For this work, four steady state cases (1a, 2a, 4a, and 5a) will be analyzed. Each case is briefly outlined here; more specific data can be found in the Benchmark [7].

An axial cross section of each case is shown in Figure 1.2. The black circles and channels indicate locations from which axial distributions are taken. The light gray circles are gadolinium rods, which have a lower radial power due to burnable poison. In Figure 1.2(b), the dark gray circles and the black semicircle symbolize guide tubes and an instrumentation tube, respectively. The large gray circle in Case 4a is a water rod.

Figure 1.2. Geometry for all cases: (a) Case 1: PB-2; (b) Case 2: TMI-1; (c) Case 4: BFBT; (d) Case 5: PSBT

All CTF models are coolant-centered and have approximately 80 axial nodes. Each control volume is about 4 or 5 centimeters tall. The boundary conditions for each case are shown in Table 1.1. Note that the parameters for Case 2a are for a quarter of the assembly, while the parameters for all other cases are for the entire assembly.

Table 1.1. Boundary conditions for the steady state cases

Parameter          Symbol  Unit  Case 1a  Case 2a  Case 4a  Case 5a
Power              Q       kW_t  4310     3915     3520     3376
Outlet pressure    P       MPa   7.0      15.2     7.16     16.43
Inlet flow rate    ṁ       kg/s  15.6     20.5     15.3     10.1
Inlet temperature  T_in    K     543      565      551      580

Case 1a models a single 7x7 fuel bundle from Peach Bottom Unit 2 (PB-2), which is a representative BWR. The second case models a quarter of a PWR assembly from Three Mile Island Unit 1 (TMI-1). This case is unique because there is very little void generation in the bundle; as such, its uncertainty is expected to be lower than that of the other three cases. The third case, 4a, is Test 4101-58 from the OECD BWR Full-size Fine-mesh Bundle Test (BFBT) Benchmark [35]. It has operating conditions similar to the PB-2 bundle, but with a large water rod in the center, a uniform axial power profile, and heated rods instead of nuclear fuel rods. Case 5a is a 5x5 bundle from the OECD PWR Subchannel and Bundle Tests (PSBT) Benchmark. This case is referred to as a PWR case in the Benchmark, but it shares similarities with the BWR cases because it has significant void.
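As a rough consistency check on Table 1.1, an energy balance gives the specific enthalpy rise across each bundle as Δh = Q/ṁ. A small sketch of this arithmetic (an illustrative back-of-the-envelope estimate only; it neglects losses and simply uses the tabulated values):

```python
# Total power [kW] and inlet flow rate [kg/s] from Table 1.1
cases = {
    "1a": (4310.0, 15.6),
    "2a": (3915.0, 20.5),  # quarter-assembly values
    "4a": (3520.0, 15.3),
    "5a": (3376.0, 10.1),
}

# Specific enthalpy rise dh = Q / mdot [kJ/kg] for each case
enthalpy_rise = {name: q / mdot for name, (q, mdot) in cases.items()}

for name, dh in sorted(enthalpy_rise.items()):
    print(f"Case {name}: dh = {dh:6.1f} kJ/kg")
```

Case 2a shows the smallest enthalpy rise per unit mass, which is consistent with the expectation stated above that it produces very little void.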

Within the Benchmark, input uncertainties are divided into three categories: boundary condition, manufacturing, and code uncertainties. The boundary condition uncertainties include thermal power, outlet pressure, mass flow rate, and inlet coolant temperature. Uncertainty is also applied to each value in the axial and radial power distributions, after which the distributions are re-normalized so the total power remains constant. The geometry uncertainty is applied by perturbing the outer diameter and position of a single corner rod. The values of the boundary condition and geometry uncertainties given in the Benchmark are shown in Table 1.2. For each parameter with a normal distribution, the bounds are defined as three standard deviations.

Table 1.2. Uncertainty parameters from the UAM Benchmark

Parameter           Symbol  Unit  BWR bounds  PWR bounds  Distribution
Power               Q       %     1.5         1.0         normal
Outlet pressure     P       %     1.0         1.0         normal
Inlet flow rate     ṁ       %     1.0         1.5         normal
Inlet temperature   T_in    K     ±1.5        ±1.0        uniform
Power distribution  -       %     3.0         3.0         normal
Rod displacement    d       mm    0.45        0.45        normal
Rod diameter        D       mm    0.04        0.02        normal

The modeling uncertainties are code-dependent and left to the discretion of the Benchmark participants. The suggested code uncertainties are shown in Table 1.3, all of which are based on expert opinion for CTF models. Input uncertainties selected based on expert opinion are subjective and difficult to justify; therefore, the selection of modeling uncertainties will be of paramount importance to the uncertainty results. As such, additional input uncertainties will be selected based on preliminary sensitivity studies, which will be outlined in Section 2.3.1.

Table 1.3.
Suggested modeling uncertainties for CTF

Parameter                          Symbol  3σ
Single-phase mixing coefficient    β       63%
Two-phase mixing multiplier        Θ       36%
Void drift scaling factor          K_a     21%
Heat transfer coefficient          H       36%
Bubbly interfacial drag coefficient   τ_i,b  48%
Droplet interfacial drag coefficient  τ_i,d  39%
Film interfacial drag coefficient     τ_i,f  54%
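In practice, each normally distributed parameter in Tables 1.2 and 1.3 is sampled with a standard deviation of one third of its stated bound, and a perturbed power distribution is renormalized so the total power is conserved. A minimal sketch of that sampling scheme, assuming illustrative function names (this is not Dakota's internal implementation):

```python
import random

def sample_normal_parameter(nominal, bound_pct, rng):
    """Sample a parameter whose 3-sigma bound is bound_pct percent of nominal."""
    sigma = nominal * bound_pct / 100.0 / 3.0
    return rng.gauss(nominal, sigma)

def perturb_power_distribution(powers, bound_pct, rng):
    """Perturb each node power independently, then renormalize so the
    total power is unchanged, as prescribed by the Benchmark."""
    perturbed = [p * rng.gauss(1.0, bound_pct / 100.0 / 3.0) for p in powers]
    scale = sum(powers) / sum(perturbed)
    return [p * scale for p in perturbed]

rng = random.Random(42)
axial = [0.6, 1.0, 1.3, 1.3, 1.0, 0.6]   # notional axial power profile
perturbed = perturb_power_distribution(axial, 3.0, rng)
assert abs(sum(perturbed) - sum(axial)) < 1e-9   # total power conserved
```

The renormalization step is what keeps the power distribution uncertainty from changing the total power, which is perturbed separately.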

1.3 COBRA-TF

COBRA-TF is a thermal hydraulic subchannel code designed for the analysis of rod bundles inside a nuclear core. The code was originally developed as part of COBRA/TRAC at Pacific Northwest National Laboratory in the 1980s. Various versions of COBRA-TF have been created throughout academia and industry over the last few decades. One of these versions, rebranded as CTF, has been maintained by the RDFMG at PSU since the 1990s. CTF has recently been incorporated into the Consortium for Advanced Simulation of LWRs (CASL) project, a US Department of Energy Innovation Hub for Modeling and Simulation of Nuclear Reactors. The program coordinates efforts between industry, national labs, and universities to produce a multiphysics environment to simulate LWRs. Inclusion in CASL has led to rapid improvements in the modeling capabilities, parallelization, performance, documentation, and validation of CTF. The CASL version of CTF uses a two-fluid, three-field representation of two-phase flow. A total of eight conservation equations are solved for the fluid region: the mass, momentum, and energy equations for the continuous liquid and vapor, as well as mass and momentum equations for the entrained droplets. The entrained liquid is assumed to be in thermal equilibrium with the continuous liquid, so there is no energy equation for the droplet field. CTF also solves for conduction through various physical structures, including nuclear fuel rods, heated conductors, and unheated conductors. The code can model both normal operating conditions and accident scenarios, and it includes models for a wide range of thermal hydraulic phenomena that are important to nuclear reactor analysis. CTF realistically represents physical processes in reactors, such as spacer grid effects, two-phase phenomena, and critical heat flux effects.
All of these features, and others that have not been mentioned, make the code extremely versatile and well-suited for various applications in both academia and industry.

This work performs an uncertainty analysis on CTF, similar to a number of previous studies. A 2005 study from PSU performed a sensitivity and uncertainty analysis on a number of BFBT cases [5], examining the effects of boundary condition and modeling uncertainties on the outlet void profile. It was found that the outlet pressure has the largest impact on the void profile. Uncertainties were low for all cases, which is a direct consequence of the input uncertainties selected. The UAM Benchmark uses the uncertainties established in this study (Table 1.3), and the study is particularly relevant to Case 4a of the UAM Benchmark, which is also taken from the BFBT Benchmark. Qualitative comparisons can provide confidence in the results of the current study. A more recent study of the uncertainty in CTF was performed by Perin, which analyzes cases from the UAM Benchmark [38]. Perin's analysis uses the input uncertainties from

the Benchmark and Wilks' Formula, concluding that all output uncertainties are small. Perin also presents sensitivity information for global parameters like the maximum void fraction and maximum fuel temperature. This work is very relevant to the current study since it analyzes the same cases and uses similar input uncertainties. Small differences are expected because of slight changes in the CTF input decks and the selected input uncertainties. To confirm the methods used in the current work, a study was presented at the 16th International Topical Meeting on Nuclear Reactor Thermal Hydraulics (NURETH-16) with results comparable to the study by Perin [40]. It used the same CTF models and input uncertainties but with updated versions of the Benchmark and code. It also used a different method for selecting the sample size, which will be discussed in the next chapter.

The current work makes five distinct improvements on past studies. First, it includes additional input uncertainties because those suggested by the Benchmark are not exhaustive. Second, small improvements in the CTF input decks have been made, including a more refined axial mesh and, in some cases, the use of more appropriate models. Third, a quantitative comparison is made between results obtained using Wilks' Formula and results using the new sample size selection method. Fourth, an in-depth analysis of the geometry uncertainty methods is presented. Finally, results are presented for the uncertainty quantification.

1.4 Dakota

Dakota is a statistical analysis tool that provides an extensive interface between simulation codes and iterative analysis methods [2]. It is developed at SNL and includes a variety of tools to aid in uncertainty and sensitivity analysis, code validation and verification, as well as calibration and creation of surrogate models.
Dakota is the recommended software for Verification, Validation, and Uncertainty Quantification (VUQ) work within CASL [1], and as such is a natural choice for this work. The relevant Dakota methods will be discussed in the next chapter.

Chapter 2

Methods

Relevant Dakota sensitivity and uncertainty methods are outlined in this chapter. First, the coupling between CTF and Dakota is discussed in Section 2.1, which provides the foundation for all other methods. Next, sensitivity studies are used to give general information and narrow down the input parameters for the uncertainty analyses. Sensitivity methods are discussed in Section 2.2. The uncertainty methods are given in Section 2.3, which includes a discussion of the selected input uncertainties and sample sizes.

2.1 CTF-Dakota Coupling

Dakota can be coupled to codes using a built-in tool or with specialized scripts. This ambiguity in the coupling scheme requires that it be presented before any other methods are discussed. Dakota is connected to CTF using five files designed specifically for this work:

1. a Dakota input file that defines the methods and the input/output parameters considered,
2. a CTF input deck that serves as a template for each UAM case,
3. a driving script that tells Dakota which scripts to call during each iteration,
4. a preprocessor that edits the template deck and creates the CTF input deck,
5. a postprocessing Python script that pulls the results from the CTF output into a form that can be interpreted by Dakota.

For each iteration, the input parameters are generated based on information in the Dakota input deck; they are either given directly by the user or sampled from provided parameter distributions. Then the driving script is called, which executes the preprocessor, CTF, and the postprocessor. Finally, Dakota reads the results and saves them for later use. The simulation is repeated until some sample size or convergence criterion is achieved,

at which point Dakota calculates all statistical outputs. This process is the same for all Dakota analyses, which simplifies the implementation because only the Dakota input needs to be edited to change the method. The coupling procedure between CTF and Dakota is shown in Figure 2.1.

Figure 2.1. Coupling between Dakota and CTF

2.2 Sensitivity Methods

In general, the goal of sensitivity analysis is to determine which input parameters influence the response variables. These analyses are used to eliminate inputs that have relatively little importance, which can focus future development and make calibration, optimization, and uncertainty analysis simpler and more manageable. Additionally, some methods can measure model smoothness, nonlinear trends, simulation robustness, or interactions between inputs. As such, preliminary sensitivity studies are critical to any uncertainty analysis. This section outlines the sensitivity methods used in this study. Parameter studies and Morris Screening will be described in Sections 2.2.1 and 2.2.2, respectively. The output from the uncertainty analysis can retrospectively be used to calculate rank correlation coefficients, as described in Section 2.2.3. All sensitivity methods are summarized in Table 2.1. Here, M is the number of input parameters studied, p is the number of increments or partitions for each variable, N is a user-defined sample size, and q is the number of replicates used in Morris Screening.

Table 2.1. Sensitivity methods and computational cost

Method            Section  Design points  Results
Parameter study   2.2.1    M(p + 1)       univariate effects
Morris Screening  2.2.2    q(M + 1)       elementary effects
Random sampling   2.2.3    N              correlation coefficients

2.2.1 Parameter Studies

The first analysis performed for each UAM case is a parameter study of all possible input uncertainties. Parameter studies vary only one input parameter at a time and therefore yield only univariate effects. Nonetheless, this method is extremely important because, in addition to giving a general idea of input parameter importance, it validates the model interface, assesses response smoothness, and provides a first test of robustness. A variety of parameter study methods are available in Dakota, but the current work uses the centered parameter study. This allows the analysis of all input parameters using a single Dakota input deck. The user supplies the nominal value and increment for each input variable as well as the number of partitions. Dakota uses this information to vary each input around the nominal state. The results can be used to plot each output against each input and give a general idea of the response sensitivity and smoothness. This analysis will yield M(p + 1) state points, each of which will contain a result for each output variable. Considering the large number of input and output variables for the UAM cases, this will be a sizable amount of data. To make this presentable in a reasonable amount of space, only the largest change in each global output over the range of each input parameter will be presented. This data, presented in tabular form, will be used to summarize the univariate sensitivity of each output to each input.

2.2.2 Morris Screening

Morris Screening, or Morris One At a Time (MOAT) analysis, is a valuable tool for sensitivity analysis because it can provide information about input interactions with relatively little computational expense [2, 33].
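The centered parameter study of Section 2.2.1 produces the M(p + 1) state points counted in Table 2.1. One way to realize that design is sketched below; Dakota generates these points internally from its input keywords, so the function here is purely illustrative:

```python
def centered_parameter_study(nominals, increments, p):
    """Generate design points for a centered parameter study: each of the
    M inputs is varied one at a time through p + 1 levels centered on the
    nominal state, giving M(p + 1) design points in total."""
    points = []
    m = len(nominals)
    for i in range(m):
        # p + 1 symmetric offsets around the nominal value of input i
        for k in range(-(p // 2), p - p // 2 + 1):
            point = list(nominals)
            point[i] = nominals[i] + k * increments[i]
            points.append(tuple(point))
    return points

# Two inputs (e.g. outlet pressure in MPa, inlet temperature in K)
pts = centered_parameter_study([7.0, 543.0], [0.07, 1.5], p=4)
assert len(pts) == 2 * (4 + 1)   # M(p + 1) = 10 state points
```

Each point perturbs exactly one input, which is why the results yield only univariate effects.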
The MOAT method partitions each input variable into p levels, which creates a grid of p^M possible evaluation points. The simulation is run to the desired sample size, varying one input at a time over q replicates. The elementary effect for each sample is computed using a forward difference:

d_i(x) = \frac{y(x + \Delta e_i) - y(x)}{\Delta}   (2.1)

where e_i is the i-th coordinate vector and \Delta is the step size. The distribution of these elementary

effects over the input space characterizes the effect of the i-th input on the output parameter of interest. The mean (\bar{d}), modified mean (\bar{d}^{*}), and standard deviation (s) of the elementary effects are calculated for each input i after N samples have been generated:

\bar{d}_i = \frac{1}{N} \sum_{j=1}^{N} d_{i,j}   (2.2a)

\bar{d}_i^{*} = \frac{1}{N} \sum_{j=1}^{N} \left| d_{i,j} \right|   (2.2b)

s_i = \sqrt{ \frac{1}{N-1} \sum_{j=1}^{N} \left( d_{i,j} - \bar{d}_i \right)^{2} }   (2.2c)

The two means indicate the overall effect of the i-th input on the output, which is similar to the univariate results from the parameter study. Since the standard deviations are indicators of how the elementary effects vary throughout the input space, they give an indication of nonlinear effects due to interactions between input parameters. This study uses q = 10 replicates for each input, which requires a sample size of q(M + 1).

2.2.3 Random Sampling Study

The uncertainty analysis will yield its own sensitivity information in the form of correlation coefficients, which measure the linear relationship between two variables. Correlation coefficients are bounded by -1 and 1, where zero indicates no linear relationship between the two parameters and a large magnitude indicates that the two parameters are closely correlated. Dakota calculates simple (Pearson), partial, and rank (Spearman) correlations whenever a Monte Carlo method is used. The Pearson correlation can be calculated for any two parameters x and y:

Corr(x, y) = \frac{ \sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y}) }{ \sqrt{ \sum_{i=1}^{N} (x_i - \bar{x})^{2} } \sqrt{ \sum_{i=1}^{N} (y_i - \bar{y})^{2} } }   (2.3)

The partial correlations adjust for the effects of other variables, and the Spearman coefficients are computed on ranked data. Ideally, a random sampling sensitivity study would be performed before the uncertainty analysis, but this would be computationally costly. The correlation coefficients from the uncertainty analysis are statistically relevant because of the large sample size it requires, and the results can be retrospectively compared to the other sensitivity methods as a final check.
Spearman correlations will be used for this study since the output parameters vary significantly in magnitude.
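A Spearman coefficient is simply the Pearson correlation of Equation 2.3 applied to ranked data, which is why it is insensitive to differences in output magnitude. A self-contained sketch, with ties handled by average ranks:

```python
def ranks(values):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    """Equation 2.3: sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def spearman(x, y):
    """Pearson correlation computed on ranks."""
    return pearson(ranks(x), ranks(y))

# A monotonic but nonlinear relation gives a Spearman coefficient of 1
assert abs(spearman([1, 2, 3, 4, 5], [1, 8, 27, 64, 125]) - 1.0) < 1e-12
```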

2.3 Uncertainty Methods

Uncertainty quantification is similar to sensitivity analysis because it provides information about how input uncertainties impact a simulation. The difference is that uncertainty quantification defines input uncertainties using probability distribution functions rather than analyst-defined exploratory values. Therefore, uncertainty quantification can provide quantitative estimates of the total uncertainty, whereas sensitivity analysis is used to determine how uncertainty can be apportioned to different inputs. Uncertainty analysis is the process of defining all input uncertainties, propagating them through a computational tool, and performing a statistical analysis to bound the results. The selection of input uncertainties is discussed in Section 2.3.1. After the input uncertainties are selected, the computation is run for a specified number of samples; the selection of the sample size, discussed in Section 2.3.2, can have a drastic effect on the results. Once the simulation has finished, the mean and standard deviation are calculated for every response parameter. Nearly all possible code output will fall within three standard deviations of the mean, so all plots will demonstrate uncertainty using these bounds.

2.3.1 Input Uncertainty Selection

The selection of input uncertainty bounds is vital to any analysis because unrealistic distributions can produce invalid results; therefore, this work will consider a large number of input uncertainties in addition to those already suggested by the UAM Benchmark. A number of uncertainty parameters from the CTF VUQ interface will be included in the analysis and are discussed in Section 2.3.1.1. The suggested treatment of geometry uncertainties is limited, so an improved procedure will be outlined in Section 2.3.1.2. Finally, Bayesian Calibration, a method for determining input uncertainty bounds based on experimental data, will be described in Section 2.3.1.3.
This method is used in Chapter 4 as an example of comprehensive uncertainty analysis.

2.3.1.1 CTF VUQ Parameters

CASL has implemented a number of parameter multipliers in CTF. Thirty-five multipliers have been implemented on the following models: mixing and drift flux, friction, wall heat transfer, mass transfer, fuel thermal conductivity, spacer grid effects, and others. Each parameter corresponds to an entire model, most of which contain multiple correlations. This is effective when using expert opinion, since it is more realistic for an analyst to judge the accuracy of a model than to propose distributions for each constant in a correlation. These multipliers and the models to which they are applied are listed in Table 2.2.

Table 2.2. List of CTF VUQ parameters

Parameter  Symbol     Description
k_rodqq    Q          Local heat rate (= total power when applied everywhere)
k_cond     k_f        Thermal conductivity of rod
k_gama     Γ          Mass transfer from liquid to vapor phase
k_htcl     H_l        Heat transfer from rod surface to liquid
k_htcv     H_v        Heat transfer from rod surface to vapor
k_cd       ζ_grid     Grid spacer loss coefficient
k_cdfb     ζ          Pressure loss coefficient (Rehme multiplier)
k_wkr      ζ_gap      Lateral pressure loss coefficient for gap
k_xk       τ_i,vl     Vertical interfacial drag between liquid and vapor
k_xkes     τ_i,es     Sink interfacial drag from droplets
k_xkge     τ_i,ev     Vertical interfacial drag between droplets and vapor
k_xkl      τ_i,t,vl   Transverse interfacial drag between liquid and vapor
k_xkle     τ_i,t,ev   Transverse interfacial drag between droplets and vapor
k_xkvls    τ_i,vls    Sink interfacial drag between liquid and vapor
k_xkwvx    τ_w,v      Vertical wall drag on vapor phase
k_xkwlx    τ_w,l      Vertical wall drag on liquid phase
k_xkwvw    τ_w,t,v    Transverse wall drag on vapor phase
k_xkwlw    τ_w,t,l    Transverse wall drag on liquid phase
k_xkwew    τ_w,t,e    Transverse form loss coefficient on droplet phase
k_qvapl    q_cond     Conductive heat transfer from spacer grid to vapor
k_qradd    q_rad,e    Radiative heat transfer from wall to droplets
k_qradv    q_rad,v    Radiative heat transfer from wall to vapor
k_eta      η          Fraction of vapor generation rate from the droplet field
k_sent     ṁ_e        Entrainment mass flow rate
k_sdent    ṁ_d        Deposition mass flow rate
k_qliht    q_block    Blockage-related heat transfer to liquid
k_sphts    C_p        Specific heat of conductor
k_masl     W_l^D      Loss of liquid mass due to mixing and void drift
k_masv     W_v^D      Loss of vapor mass due to mixing and void drift
k_masg     W_g^D      Loss of gas mass due to mixing and void drift
k_moml     W_l^M      Loss of liquid momentum due to mixing and void drift
k_momv     W_v^M      Loss of vapor momentum due to mixing and void drift
k_mome     W_e^M      Loss of droplet momentum due to mixing and void drift
k_tnrgv    W_v^H      Loss of vapor enthalpy due to mixing and void drift
k_tnrgl    W_l^H      Loss of liquid enthalpy due to mixing and void drift
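The preprocessor of Section 2.1 must write each sampled multiplier into the template CTF deck. A hypothetical sketch of that step, assuming a template that marks each parameter with a `{k_name}`-style placeholder; the placeholder convention and file layout are illustrative, not the actual CTF deck or Dakota parameter-file format:

```python
def fill_template(template_text, samples):
    """Replace {parameter} placeholders with sampled multiplier values."""
    deck = template_text
    for name, value in samples.items():
        deck = deck.replace("{" + name + "}", "{:.6f}".format(value))
    return deck

template = "k_htcl = {k_htcl}\nk_gama = {k_gama}\n"
deck = fill_template(template, {"k_htcl": 1.0213, "k_gama": 0.9871})
assert deck == "k_htcl = 1.021300\nk_gama = 0.987100\n"
```

Because the placeholders are named after the VUQ parameters in Table 2.2, a single template serves every sampled realization of a case.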

The conductance of the pellet-cladding gap is an additional input uncertainty because it is important when calculating the centerline temperature. Each input parameter will be included in the initial sensitivity studies. If a parameter has a sufficient impact on the simulation, it will be bounded and incorporated into the uncertainty analysis.

2.3.1.2 Geometry Uncertainties

The UAM Benchmark accounts for geometry uncertainties by perturbing the outer diameter and location of a single corner rod. This method was intended to demonstrate the effect of geometry uncertainties on each of the characteristic subchannel geometries (corner, side, and central) while minimizing computational costs. Geometry uncertainties in CTF have repeatedly been shown to be negligible when using these methods [38, 40]. To confirm that the geometry uncertainties can be neglected, a more rigorous analysis is performed in this study. In reality, there is a tolerance in the diameter and placement of each rod. Applying this tolerance to only one rod can give unphysical results. For example, if two or more adjacent rods are displaced in opposite directions, extreme values of the flow area can be observed. The perturbation of a corner rod will also have little effect on the center of the assembly, which is where maximum void fractions and temperatures are generally located. This will hide any effect that the geometry uncertainties have on important safety parameters. To address these issues, the current work employs a new method for applying geometry uncertainties. The same diameter and displacement distributions are applied individually to every rod in the model. In this way, the uncertainty in the response parameters should be greater than when using the UAM method. If the uncertainties are small even when perturbing every rod, it can be concluded that geometry uncertainties are negligible. The specific algorithm used is detailed in Appendix A.
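A simplified sketch of per-rod perturbation is shown below (this is not the full Appendix A algorithm): every rod's diameter and position are sampled independently, and the resulting rod-to-rod gap shows how opposing displacements of adjacent rods can compound in a way the single-corner-rod treatment cannot capture. The lattice dimensions here are notional; the tolerances are the BWR values from Table 1.2.

```python
import random

def perturb_lattice(n, pitch, diameter, sigma_d, sigma_disp, rng):
    """Perturb every rod in an n x n lattice: diameter ~ N(D, sigma_d),
    displacement components ~ N(0, sigma_disp). Units are mm."""
    rods = {}
    for i in range(n):
        for j in range(n):
            rods[(i, j)] = (
                i * pitch + rng.gauss(0.0, sigma_disp),  # x position
                j * pitch + rng.gauss(0.0, sigma_disp),  # y position
                rng.gauss(diameter, sigma_d),            # rod diameter
            )
    return rods

def gap(rods, a, b):
    """Surface-to-surface gap between two rods."""
    xa, ya, da = rods[a]
    xb, yb, db = rods[b]
    dist = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
    return dist - (da + db) / 2.0

rng = random.Random(0)
# 3-sigma bounds: 0.45 mm displacement, 0.04 mm diameter (Table 1.2, BWR)
rods = perturb_lattice(7, pitch=16.2, diameter=12.3,
                       sigma_d=0.04 / 3.0, sigma_disp=0.45 / 3.0, rng=rng)
g = gap(rods, (0, 0), (0, 1))
assert abs(g - (16.2 - 12.3)) < 1.5   # gap stays near the nominal 3.9 mm
```

Because both rods bounding a gap are perturbed, the gap variance is roughly double what a single-rod perturbation produces, which is the point of the more rigorous treatment.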
2.3.1.3 Bayesian Calibration

As discussed in Chapter 1, conservatism increases safety margins and reduces confidence, so quantitative methods, referred to as calibration, are preferred. Calibration uses experimental data to construct input distributions, which can then be used in place of expert opinion in the analysis. This provides higher quality and more justifiable results. Bayesian inference will be used as a probabilistic model calibration method in this study. This approach begins with a prior density function and uses experimental data to update it to a more appropriate posterior distribution. The resulting distribution is consistent with experimental data, completely justifiable, not constrained to any predefined distribution

type, and generally has very tight bounds. Distributions based on expert opinion will not satisfy any of these criteria.

Consider some vector of input uncertainties to be calibrated, \theta = [\theta_1, ..., \theta_M]. For N experimental data points with no systematic bias, the statistical model can be expressed using simple notation:

d = f(\theta) + \varepsilon   (2.4a)

\begin{bmatrix} d_1 \\ \vdots \\ d_N \end{bmatrix} = \begin{bmatrix} f(x_1, \theta) \\ \vdots \\ f(x_N, \theta) \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_N \end{bmatrix}   (2.4b)

The observed experimental data is denoted by d, and f(\theta) represents the model outputs, with x_i indicating the input settings describing the i-th experiment. The errors, \varepsilon, are typically assumed to be independently and identically distributed with a mean of zero and a common variance, \varepsilon \sim N(0, \sigma^2 I). The specification of errors can be less restrictive, allowing for non-Gaussian forms, and can encompass both experimental measurement errors and simulation model errors. Bayes' relation is used to update the prior distribution using the observed data [26]:

\pi(\theta, \sigma^2 \mid d) = \frac{ L(\theta, \sigma^2 \mid d)\, \pi_0(\theta, \sigma^2) }{ \int L(\theta, \sigma^2 \mid d)\, \pi_0(\theta, \sigma^2)\, d\theta\, d\sigma^2 }   (2.5)

The posterior distribution, \pi(\theta, \sigma^2 \mid d), is proportional to the likelihood function, L(\theta, \sigma^2 \mid d), multiplied by the prior distribution, \pi_0(\theta, \sigma^2), which is then normalized. The prior density incorporates any knowledge about the distribution that is available before obtaining the experimental data. If no knowledge is available or the knowledge is unreliable, an uninformative prior should be used (usually a uniform distribution). The likelihood function incorporates information about the samples and the model. It can be interpreted as quantifying the probability of obtaining the observation d for a given set of parameters \theta. Once the prior and likelihood are defined, the denominator in Equation 2.5 (the normalizing constant) is difficult to obtain in most applications. Sampling methods, such as Markov Chain Monte Carlo (MCMC), can alleviate some of these difficulties.
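The mechanics of such a sampler can be illustrated with a plain random-walk Metropolis algorithm (DRAM adds delayed rejection and proposal adaptation on top of this). The sketch below calibrates a single parameter \theta of the trivial model f(\theta) = \theta against synthetic data with a known error variance and a uniform prior; every value and name here is illustrative, not part of Dakota or this study's models:

```python
import math
import random

def log_likelihood(theta, data, sigma):
    """Gaussian likelihood for d_i = theta + eps_i, eps ~ N(0, sigma^2)."""
    return -0.5 * sum((d - theta) ** 2 for d in data) / sigma ** 2

def metropolis(data, sigma, prior_lo, prior_hi, steps, rng):
    """Random-walk Metropolis with a uniform (uninformative) prior."""
    theta = 0.5 * (prior_lo + prior_hi)
    ll = log_likelihood(theta, data, sigma)
    chain = []
    for _ in range(steps):
        prop = theta + rng.gauss(0.0, 0.05)
        if prior_lo <= prop <= prior_hi:               # prior support
            ll_prop = log_likelihood(prop, data, sigma)
            if math.log(rng.random()) < ll_prop - ll:  # accept/reject
                theta, ll = prop, ll_prop
        chain.append(theta)
    return chain

rng = random.Random(7)
true_theta, sigma = 2.0, 0.1
data = [true_theta + rng.gauss(0.0, sigma) for _ in range(50)]
chain = metropolis(data, sigma, 0.0, 4.0, steps=5000, rng=rng)
posterior = chain[1000:]                               # discard burn-in
mean = sum(posterior) / len(posterior)
assert abs(mean - true_theta) < 0.1
```

The retained chain samples approximate the posterior \pi(\theta \mid d), and its spread tightens as more data points are included, which is why calibrated bounds are generally much narrower than expert-opinion bounds.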
MCMC aims to construct a sampling-based chain whose stationary distribution is the desired posterior distribution. This study uses the Delayed Rejection Adaptive Metropolis (DRAM) method, which is a type of MCMC method [1, 3]. Bayesian calibration also allows for the consideration of systematic biases due to different experimental setups or conditions, which introduces a second error, \delta, for each

laboratory. So for the i-th experiment at the j-th laboratory, the notation changes to include the additional errors:

d_{ij} = [f(x_{ij}, \theta) + \varepsilon_i] + \delta_j   (2.6)

This is solved using methods similar to those detailed above. For the work presented in Chapter 4, the laboratories are treated as different working fluids because this is the primary difference between the experimental data sets.

2.3.2 Sample Size Selection

Any statistical analysis requires a large sample size before the results become statistically significant. A variety of ways to determine the sample size are available, two of which will be used in the current work. The results of these two methods will be compared in Chapter 3 to demonstrate the advantages of the second method. First, Wilks' Formula, which is used throughout the nuclear industry for black-box uncertainty analysis methods, will be discussed in Section 2.3.2.1. This method gives a suggested sample size that will bound the response parameters while minimizing computational cost. The other option is proposed in Section 2.3.2.2 and is based purely on the convergence of the response statistics.

2.3.2.1 Wilks' Formula

Wilks' Formula was designed as a statistical tool for application to experimental design [52]. Given a number of experimental state points, it can be used to give the certainty that the experimental values won't exceed a given percentile of the probability distribution. Used according to its original intent, a number of experiments specified by the formula would be performed and the results used to infer information about the response bounds. The method can also be applied to computational tools, where the formula is used to find the number of code calculations, N, necessary to obtain a particular confidence, b, that the extreme results do not exceed a percentile, a. The required sample sizes are shown in Table 2.3.

Table 2.3. Sample size necessary from the two-sided Wilks' Formula

b \ a  0.90  0.95  0.99
0.90     38    77   388
0.95     46    93   473
0.99     64   130   662
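The entries in Table 2.3 follow from the first-order two-sided form of the order-statistics relation: the required N is the smallest sample size satisfying 1 - a^N - N(1 - a)a^(N-1) >= b. A short sketch that reproduces the table:

```python
def wilks_two_sided(a, b):
    """Smallest N such that the sample extremes bound the central
    a-content interval with confidence at least b (two-sided, first order)."""
    n = 2
    while 1.0 - a ** n - n * (1.0 - a) * a ** (n - 1) < b:
        n += 1
    return n

# Reproduce entries of Table 2.3: wilks_two_sided(a, b)
assert wilks_two_sided(0.90, 0.90) == 38
assert wilks_two_sided(0.95, 0.90) == 77
assert wilks_two_sided(0.95, 0.95) == 93
```

Note how steeply the cost grows with the percentile a: moving from the 95th to the 99th percentile at 95% confidence raises the required sample size from 93 to 473 code runs.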