Simultaneous Robust Design and Tolerancing of Compressor Blades


Simultaneous Robust Design and Tolerancing of Compressor Blades
Eric Dow
Aerospace Computational Design Laboratory, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology
GTL Seminar, October 1, 2013

Motivation
Geometric variability is unavoidable and undesirable. [Figure: illustration of manufacturing variability, from [Garzon, 2003]]
The impact of geometric variability can be reduced in two ways: robust design changes the nominal design, while tolerancing changes the level of variability. Current design and tolerancing methods treat these steps sequentially. [Figure: cost versus tolerance for designs A, B, and C, marking the minimum-cost tolerance]

Performance Impacts of Geometric Variability
Geometric variability introduces both variability and a mean shift into compressor performance. [Figure: illustration of performance mean shift and variability, from [Lavainne, 2003]]
The mean adiabatic efficiency of a flank-milled integrally bladed rotor (IBR) is reduced by approximately 1% [Garzon and Darmofal, 2003].

Research Objectives
1. Develop a framework for simultaneous robust design and tolerance optimization that incorporates manufacturing and operating costs.
2. Develop an approach for probabilistic sensitivity analysis of performance with respect to the level of variability.
3. Demonstrate framework effectiveness for design and tolerancing of turbomachinery compressor blades.

The Pitfalls of Single-point Optimization
Single-point optimized designs often perform poorly away from the design point. [Figure: polars for the baseline and optimized DAE-11 airfoil, from [Drela, 1998]]
The optimizer exploits flow features to improve performance at the design point, and small changes in those flow features away from the design point may degrade performance. Adding additional design parameters does not improve off-design performance (it actually gets worse!).

Robust Design Optimization
Robust optimization: determine a design whose performance is relatively unchanged when the system is perturbed as a result of variability. [Figure: cost versus design variable for the single-point and robust designs]
Probabilistic robust (multi-objective) formulation: minimize some combination of the mean and variance of the cost. Solution approaches can be gradient-based or derivative-free. The approach is computationally expensive: it typically involves evaluating performance at a large number of conditions.

Modelling Variability: Random Fields
A random field is a collection of random variables indexed by a spatial variable, and is well suited for modelling spatially distributed variability. Its parameters can be chosen to model observed behavior (correlation length, non-stationarity, smoothness). Gaussian random fields are uniquely characterized by their mean and covariance. [Figures: (a) smooth random field realizations, (b) non-smooth random field realizations; e(s) plotted against s]

The Karhunen-Loève Expansion
Simulate e(s, ω) using the Karhunen-Loève (K-L) expansion:
e(s, ω) = ē(s) + Σ_{i=1}^{∞} √λ_i φ_i(s) ξ_i(ω)
where the eigenpairs (λ_i, φ_i) solve
∫_D C(s_1, s_2) φ_i(s_2) ds_2 = λ_i φ_i(s_1)
and ξ_i(ω) ~ N(0, 1), i.i.d.
[Figures: (a) K-L eigenvalues, (b) K-L eigenfunctions]
Truncate the K-L expansion according to the decay of the eigenvalues:
e(s, ω) ≈ ē(s) + Σ_{i=1}^{N_KL} √λ_i φ_i(s) ξ_i(ω)
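To make the truncated expansion concrete, here is a minimal numerical sketch in Python/NumPy: the covariance kernel is discretized on a uniform 1-D grid, the resulting matrix eigenvalue problem stands in for the integral equation above, and a realization is assembled from i.i.d. standard normals. The squared-exponential correlation (used later for the cascade example), the grid, and the function names are illustrative assumptions, not part of the original talk.

```python
import numpy as np

def kl_expansion(s, sigma, corr_len, n_kl):
    """Discrete Karhunen-Loeve expansion of a zero-mean Gaussian random field.

    s        : 1-D array of (uniformly spaced) surface coordinates
    sigma    : pointwise standard deviations sigma(s) (may be non-stationary)
    corr_len : correlation length L of the squared-exponential correlation
    n_kl     : number of retained K-L modes
    """
    # Covariance kernel C(s1, s2) = sigma(s1) sigma(s2) rho(s1, s2)
    d = s[:, None] - s[None, :]
    rho = np.exp(-d**2 / (2.0 * corr_len**2))
    C = np.outer(sigma, sigma) * rho
    # Discrete analogue of the Fredholm eigenproblem (uniform quadrature weight ds)
    ds = s[1] - s[0]
    lam, phi = np.linalg.eigh(C * ds)
    idx = np.argsort(lam)[::-1]                    # sort eigenpairs by decreasing eigenvalue
    lam = lam[idx[:n_kl]]
    phi = phi[:, idx[:n_kl]] / np.sqrt(ds)         # eigenfunctions orthonormal w.r.t. ds
    return lam, phi

def sample_field(e_bar, lam, phi, xi):
    """One realization e(s) = e_bar(s) + sum_i sqrt(lam_i) phi_i(s) xi_i."""
    return e_bar + phi @ (np.sqrt(lam) * xi)

# Example usage
s = np.linspace(-1.0, 1.0, 201)
sigma = 5e-4 * np.ones_like(s)                     # stationary variance, for illustration
lam, phi = kl_expansion(s, sigma, corr_len=0.1, n_kl=20)
xi = np.random.standard_normal(20)
e = sample_field(np.zeros_like(s), lam, phi, xi)
```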

Non-stationary Random Fields
The covariance kernel consists of a fixed correlation function ρ(s_1, s_2) and a spatially varying standard deviation σ(s):
C(s_1, s_2) = σ(s_1) σ(s_2) ρ(s_1, s_2)
[Figures: stationary case, constant σ(s) and sample realizations e(s); non-stationary case, spatially varying σ(s) and sample realizations e(s)]

Random Field Model of Manufacturing Variability
Manufacturing variability is modeled using a non-stationary Gaussian random field, characterized by its mean ē(s) and covariance C(s_1, s_2). The random field is mapped to the blade surface through the arclength. The manufactured blade surface is constructed by perturbing the design-intent geometry in the normal direction:
x(s, ω) = x_d(s) + e(s, ω) n̂(s)
[Figures: (a) error-field realization, (b) error field mapped to the blade, showing baseline and manufactured geometry]
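As a rough illustration of the normal-direction perturbation x(s, ω) = x_d(s) + e(s, ω) n̂(s), the sketch below offsets a discretized closed 2-D section along finite-difference normals. The curve representation and the helper name are assumptions made here; the actual framework uses the blade's arclength parameterization directly.

```python
import numpy as np

def perturb_geometry(x_d, e):
    """Offset a closed 2-D design-intent curve x_d (N x 2 array) by e(s) along its
    unit normals: x(s) = x_d(s) + e(s) n_hat(s).  A sketch only; a real blade tool
    would work with the CAD parameterization and arclength mapping."""
    # Tangents by central differences on the closed curve
    t = np.roll(x_d, -1, axis=0) - np.roll(x_d, 1, axis=0)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    n_hat = np.column_stack([-t[:, 1], t[:, 0]])   # rotate tangent by +90 degrees
    return x_d + e[:, None] * n_hat
```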

Relating Tolerances to Variance
Simulate the effect of spatially varying manufacturing tolerances using a spatially varying variance σ²(s): small σ² corresponds to strict tolerances.
[Figures: (a) loose tolerances, (b) strict tolerances; solid line: design-intent geometry, dashed lines: design-intent geometry ± 2σ(s)]

Performance and Manufacturing Costs
Choose the blade design and manufacturing tolerances that minimize overall cost.
C_perf(d, σ): economic value of aerodynamic performance per blade, proportional to the moments of the performance of the system:
C_perf(d, σ) = k_m E[η(ω; d, σ)] + k_v Var[η(ω; d, σ)]
C_man(σ): manufacturing cost per blade, a monotonically decreasing function of the allowed geometric variability:
C_man(σ) = k_man ∫_{Ω_s} 1/σ(s) ds
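A hedged sketch of how the two cost terms might be evaluated from Monte Carlo performance samples and a discretized tolerance distribution σ(s); the weights k_m, k_v, and k_man are placeholders, since their values are not given in the talk.

```python
import numpy as np

def blade_cost(eta_samples, sigma, s, k_m=1.0, k_v=1.0, k_man=1.0):
    """Operating plus manufacturing cost per blade, mirroring C_perf + C_man above.
    eta_samples : Monte Carlo samples of the performance metric eta
    sigma, s    : discretized tolerance distribution sigma(s) on a uniform grid s
    The weights are illustrative placeholders."""
    c_perf = k_m * np.mean(eta_samples) + k_v * np.var(eta_samples, ddof=1)
    ds = s[1] - s[0]
    c_man = k_man * np.sum(1.0 / sigma) * ds       # decreases as tolerances are loosened
    return c_perf + c_man
```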

Optimization Statement
The optimal design d* and manufacturing tolerances σ* are determined by minimizing the sum of manufacturing and operating costs, with the mean pressure ratio Π constrained above a minimum allowable value Π̄:
(d*, σ*) = argmin_{d, σ} C_perf(d, σ) + C_man(σ)  s.t.  E[Π(d, σ, ω)] ≥ Π̄
Gradient-based optimization: use sensitivity information to choose the search directions (Sequential Quadratic Programming, SQP).
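The statement above maps naturally onto an off-the-shelf SQP solver. The toy sketch below uses SciPy's SLSQP with simple surrogates standing in for C_perf, C_man, and E[Π]; all numerical values (surrogate forms, bounds, starting point) are invented purely to show the constrained problem structure, not taken from the talk.

```python
import numpy as np
from scipy.optimize import minimize

PI_MIN = 1.088                                      # minimum allowable mean pressure ratio

def toy_total_cost(z):
    """z = [d, sigma]; quadratic + 1/sigma surrogate for C_perf(d, sigma) + C_man(sigma)."""
    d, sigma = z
    c_perf = (d - 0.3) ** 2 + 0.5 * sigma ** 2      # stands in for k_m E[eta] + k_v Var[eta]
    c_man = 1e-3 / sigma                            # stands in for k_man * integral of 1/sigma
    return c_perf + c_man

def toy_mean_pressure_ratio(z):
    """Surrogate for E[Pi(d, sigma, omega)]."""
    return 1.09 - 0.05 * z[0] ** 2

res = minimize(toy_total_cost, x0=np.array([0.0, 0.05]), method="SLSQP",
               bounds=[(-1.0, 1.0), (1e-4, 0.5)],
               constraints=[{"type": "ineq",
                             "fun": lambda z: toy_mean_pressure_ratio(z) - PI_MIN}])
print(res.x)                                        # optimal (d*, sigma*) of the toy problem
```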

Model Problem of Interest
In the absence of variability, the nominal design variables d determine the PDE solution u(y; d). The random field e(s, ω; σ) introduces random noise into the solution, which becomes u(y, ω; d, σ). System performance is characterized by the moments of functionals F(u) of the solution, e.g. E[F(u)] and Var[F(u)].
[Diagram: the correlation structure ρ, the nominal design variables d, and the standard deviation σ define the random field e(s, ω; σ); together with d this determines the PDE solution u(y, ω; d, σ), from which the performance moments E[F(u(y, ω; d, σ))] are computed]

Monte Carlo Method
The Monte Carlo method approximates moments using a sample average. For each sample:
1. Sample i.i.d. Gaussian ξ_i and construct the error field through the truncated K-L expansion e(s, ω) = ē(s) + Σ_{i=1}^{N_KL} √λ_i φ_i(s) ξ_i(ω).
2. Perturb the geometry according to the error field: x(s, ω) = x_d(s) + e(s, ω) n̂(s).
3. Compute the flow solution for the perturbed geometry.
4. Compute the performance quantities of interest for the sample geometry.
Then E[F] ≈ (1/N_MC) Σ_{n=1}^{N_MC} F_n. This requires a large number of samples, since the error is O(N_MC^(−1/2)).
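A minimal Monte Carlo loop corresponding to the recipe above; `performance(e)` is a stand-in for the CFD evaluation of a perturbed geometry, and the K-L factors (lam, phi) are assumed to come from a decomposition such as the sketch on the K-L slide.

```python
import numpy as np

def monte_carlo_mean(e_bar, lam, phi, performance, n_mc=1000, seed=0):
    """Plain Monte Carlo estimate of E[F]; the CFD evaluation is abstracted as
    performance(e), a placeholder for the flow solve on the perturbed geometry."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_mc):
        xi = rng.standard_normal(lam.size)          # i.i.d. N(0,1) K-L coefficients
        e = e_bar + phi @ (np.sqrt(lam) * xi)       # error-field realization
        samples.append(performance(e))              # e.g. loss coefficient from CFD
    samples = np.asarray(samples)
    # Standard error decays like N_MC^(-1/2)
    return samples.mean(), samples.std(ddof=1) / np.sqrt(n_mc)
```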

From Design and Tolerances to Mean Performance
[Diagram: the design-intent geometry x_d(s; d), the correlation structure ρ, and the standard deviation σ define the Karhunen-Loève expansion (φ_i(s), λ_i); samples ω_1, ..., ω_{N_MC} generate random fields e_n(s, ω_n; σ), geometry realizations x_n(s, ω_n; d, σ), and CFD evaluations of the performance η_n(ω_n, d, σ), which are averaged to obtain the performance moments E[η(d, σ)]]

Sensitivity Analysis Overview
Gradient-based optimization requires
∂E[F]/∂d_i (sensitivity of mean performance to the nominal design) and
∂E[F]/∂σ_i (sensitivity of mean performance to the tolerances).
Pathwise sensitivity: exchange differentiation and integration,
∂E[F]/∂d_i = E[∂F/∂d_i],  ∂E[F]/∂σ_i = E[∂F/∂σ_i] = E[(∂F/∂e)(∂e/∂σ_i)]
The shape sensitivities ∂F/∂d_i and ∂F/∂e can be computed with the adjoint method, the direct sensitivity method, or the finite-difference/complex-step method. The sample path sensitivity ∂e/∂σ_i is derived from the K-L expansion.

Pathwise Sensitivity Analysis
The objective function is some moment of a performance quantity of interest:
J = E[F(e(σ_i, ξ))] = ∫ F(e(σ_i, ξ)) p_Ξ(ξ) dξ
Exchange differentiation and integration:
∂J/∂σ_i = ∫ (∂F/∂e)(∂e/∂σ_i) p_Ξ(ξ) dξ = E[(∂F/∂e)(∂e/∂σ_i)]
Key idea: fix the random numbers and perturb the sample realizations.
Pros: well suited to handle spatially distributed uncertainty. Cons: requires continuous F (excludes failure-probability sensitivities).

Monte Carlo and Pathwise Sensitivities
Recall the Monte Carlo approximation: E[F] ≈ (1/N_MC) Σ_{n=1}^{N_MC} F_n.
Pathwise sensitivity analysis exchanges differentiation and summation:
∂E[F]/∂d_i ≈ (1/N_MC) Σ_{n=1}^{N_MC} ∂F_n/∂d_i,  ∂E[F]/∂σ_i ≈ (1/N_MC) Σ_{n=1}^{N_MC} ∂F_n/∂σ_i
∂F_n/∂σ_i is computed for a fixed realization, i.e. fixed ξ in the K-L expansion
e(s, ω; σ) = ē(s) + Σ_{i=1}^{N_KL} √λ_i φ_i(s) ξ_i(ω).
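The "fixed realization" idea can be checked (or replaced, when ∂F/∂e is unavailable) with common-random-numbers finite differences: perturb one σ_j at a time while reusing the same ξ. The sketch below assumes a user-supplied map F(σ, ξ) from a tolerance field and K-L coefficients to a scalar performance; that map and the step size are illustrative choices.

```python
import numpy as np

def pathwise_fd_gradient(F, sigma, n_kl, n_mc=500, h=1e-6, seed=0):
    """Approximate dE[F]/dsigma_j by finite differences with common random numbers.
    F(sigma, xi) maps a tolerance field and fixed K-L coefficients xi to the
    performance of the corresponding sample geometry (a CFD solve in the real
    problem).  Reusing xi for the baseline and perturbed sigma is the same
    'fixed realization' idea as the pathwise estimator."""
    rng = np.random.default_rng(seed)
    grad = np.zeros(sigma.size)
    for _ in range(n_mc):
        xi = rng.standard_normal(n_kl)
        base = F(sigma, xi)
        for j in range(sigma.size):
            sigma_p = sigma.copy()
            sigma_p[j] += h
            grad[j] += (F(sigma_p, xi) - base) / h
    return grad / n_mc
```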

Sample Path Sensitivity
For each realization of the random field, compute the sensitivity of the random field e with respect to each σ_j. The Karhunen-Loève eigenvalues and eigenvectors are differentiable functions of σ:
e(s, ω; σ) = ē(s) + Σ_{i=1}^{N_KL} √λ_i φ_i(s) ξ_i(ω)
∂e(s, ω; σ)/∂σ_j = Σ_{i=1}^{N_KL} [ (1/(2√λ_i)) (∂λ_i/∂σ_j) φ_i(s) + √λ_i (∂φ_i(s)/∂σ_j) ] ξ_i(ω)
with
∂λ_i/∂σ_j = φ_i^T (∂C/∂σ_j) φ_i,  ∂φ_i/∂σ_j = −(C − λ_i I)^+ (∂C/∂σ_j) φ_i
These derivatives exist if the eigenvalues have algebraic multiplicity one.
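In the discrete setting (covariance kernel assembled as a matrix, as in the K-L sketch earlier), the eigenpair derivatives above can be evaluated directly with a pseudo-inverse. The sign convention follows standard perturbation theory for simple eigenvalues; the function name and the dense pseudo-inverse are illustrative simplifications, not the implementation used in the talk.

```python
import numpy as np

def kl_eigen_sensitivities(C, dC, lam, phi):
    """Derivatives of simple eigenpairs of the discretized covariance matrix C with
    respect to one tolerance parameter, given dC = dC/dsigma_j.
    Valid only when each retained eigenvalue is simple, as noted on the slide."""
    n_kl = lam.size
    dlam = np.empty(n_kl)
    dphi = np.empty_like(phi)
    for i in range(n_kl):
        v = phi[:, i]
        dlam[i] = v @ dC @ v                                    # dlam_i = phi_i^T dC phi_i
        dphi[:, i] = -np.linalg.pinv(C - lam[i] * np.eye(C.shape[0])) @ (dC @ v)
    return dlam, dphi
```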

Sample Path Sensitivity
Holding ξ fixed in the K-L expansion ensures that, for a small tolerance perturbation δσ(s), the error fields e(s) and e(s) + (∂e/∂σ)δσ(s) remain close.
[Figures: initial and perturbed standard deviation, σ(s) and σ(s) + δσ(s); initial and perturbed error field, e(s) and e(s) + (∂e/∂σ)δσ(s); initial and perturbed manufactured blade, x_d, x_d + e, and x_d + e + δe]

Subsonic Cascade Example
MISES: coupled inviscid/viscous flow solver. The blade shape is parameterized with Chebyshev polynomials, and shape sensitivities are computed using finite differences.
Baseline (no geometric variability): Π_0 = 1.089, θ_0 = 1.70×10⁻².
Manufacturing variability is modeled using a squared-exponential kernel with σ(s) = 5×10⁻⁴ and L = 2/20:
C(s_1, s_2) = σ(s_1) σ(s_2) exp(−‖s_1 − s_2‖² / (2L²))
With variability: E[θ] = 1.721×10⁻², E[Π] = 1.088.
The mean pressure ratio is constrained to be above Π̄ = 1.088, and the performance cost function includes only the mean efficiency.

Subsonic Cascade Results
The optimal design attains a lower loss coefficient and allows more variability. The performance cost function (C_perf) is reduced by 6%, and the tolerance cost function (C_man) is reduced by 47%.
[Figures: (a) baseline and optimized blade, (b) optimal σ(s) distribution]

Summary and Future Work
A new framework for simultaneous robust design and tolerancing: manufacturing and operating costs are incorporated into the optimization, creating a feedback loop between designers and manufacturers. A novel probabilistic sensitivity analysis of performance with respect to the level of variability. The optimal blade performs better and is cheaper to manufacture.
Future work: more accurate/efficient shape sensitivities (direct sensitivity method), transonic compressor optimization, and investigating solution quality.

Questions?

Pathwise Sensitivity: Sufficient Conditions for Unbiasedness (following [Glasserman, 2004])
Assume the output Y is a function of m random variables:
Y(θ) = f(X_1(θ), ..., X_m(θ)) = f(X(θ))
The pathwise estimate is unbiased if
E[ lim_{h→0} (Y(θ + h) − Y(θ))/h ] = lim_{h→0} E[ (Y(θ + h) − Y(θ))/h ]
(A1) X_i′(θ) exists w.p. 1 for all θ ∈ Θ, i = 1, ..., m.
(A2) Denote by D_f ⊆ R^m the set where f is differentiable, and require P(X(θ) ∈ D_f) = 1.
Then Y′(θ) exists w.p. 1 and is given by
Y′(θ) = Σ_{i=1}^{m} (∂f/∂x_i)(X(θ)) X_i′(θ)

Pathwise Sensitivity: Sufficient Conditions for Unbiasedness
(A3) The function f is Lipschitz continuous, i.e. there exists k_f such that for all x, y ∈ R^m, |f(x) − f(y)| ≤ k_f ‖x − y‖.
(A4) There exist random variables k_i, i = 1, ..., m, with E[k_i] < ∞, such that for all θ_1, θ_2 ∈ Θ,
|X_i(θ_2) − X_i(θ_1)| ≤ k_i |θ_2 − θ_1|
Conditions (A3) and (A4) imply that Y is almost surely Lipschitz continuous in θ:
|Y(θ_2) − Y(θ_1)| ≤ k_Y |θ_2 − θ_1|
Thus |(Y(θ + h) − Y(θ))/h| ≤ k_Y, and the interchange of expectation and differentiation is justified by the dominated convergence theorem (DCT).

Adjoint Method
Consider an objective function F that depends on the solution u of some PDE, which in turn depends on a parameter p: F = F(u; p). Linearizing,
δF = (∂F/∂u)^T δu + (∂F/∂p)^T δp
The PDE solution satisfies a residual equation R(u; p) = 0, which can be linearized to give
[∂R/∂u] δu + [∂R/∂p] δp = 0

Adjoint Method
Introduce the adjoint state ψ (a Lagrange multiplier) and treat the linearized residual equation as a constraint:
δF = (∂F/∂u)^T δu + (∂F/∂p)^T δp − ψ^T ( [∂R/∂u] δu + [∂R/∂p] δp )
If the adjoint state ψ is chosen to satisfy
[∂R/∂u]^T ψ = ∂F/∂u
then the coefficient of δu vanishes, and the sensitivity gradient can be computed as
dF/dp = ∂F/∂p − [∂R/∂p]^T ψ
The cost is 2 N_MC flow solutions (vs. 2 N_MC [N_d + N_σ] flow solutions for finite differences).
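A self-contained numeric illustration of the adjoint identity for a linear residual R(u; p) = A(p)u − b = 0 and a linear objective F = f^T u: a single solve with A^T gives dF/dp for all parameters at once, which is what makes the cost independent of the number of design and tolerance parameters. The matrix, parameters, and objective below are made up solely to demonstrate the mechanics.

```python
import numpy as np

n = 5
p = np.array([1.0, 0.5])
A = np.diag(np.full(n, 2.0 + p[0])) + np.diag(np.full(n - 1, -p[1]), k=1)
b = np.ones(n)
f = np.linspace(0.0, 1.0, n)

u = np.linalg.solve(A, b)                       # "flow" solve: R(u; p) = A u - b = 0
psi = np.linalg.solve(A.T, f)                   # adjoint solve: (dR/du)^T psi = dF/du

# dR/dp_j = (dA/dp_j) u and dF/dp (explicit) = 0 here, so dF/dp_j = -psi^T (dA/dp_j) u
dA_dp = [np.eye(n), -np.diag(np.ones(n - 1), k=1)]
dF_dp = np.array([-psi @ (dA @ u) for dA in dA_dp])
print(dF_dp)                                    # gradient w.r.t. both parameters from one adjoint
```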

Eigenvalue Level Repulsion: von Neumann-Wigner Theorem
The codimension of the set of positive definite matrices with repeated eigenvalues is greater than one.
The space of all symmetric positive definite (SPD) n × n matrices forms a linear space of dimension N = n(n + 1)/2. Two ways to count this: the number of diagonal and above-diagonal entries, or
- n dimensions corresponding to the eigenvalues
- (n − 1) dimensions corresponding to the first eigenvector, subject to ‖φ_1‖ = 1
- (n − 2) dimensions corresponding to the second eigenvector, subject to ‖φ_2‖ = 1 and φ_1^T φ_2 = 0
- ...
- a single dimension corresponding to the second-to-last eigenvector; the final eigenvector is uniquely determined by all the others.
In total, n + Σ_{i=1}^{n−1} (n − i) = n + n(n − 1)/2 = n(n + 1)/2 = N.

Eigenvalue Level Repulsion: von Neumann-Wigner Theorem
Now consider the space of all SPD n × n matrices with exactly two equal eigenvalues. The space of real SPD n × n matrices with more than two equal eigenvalues is certainly no larger than this space. A similar counting applies:
- (n − 1) dimensions corresponding to the eigenvalues
- (n − 1) dimensions corresponding to the first simple eigenvector, subject to ‖φ_1‖ = 1
- (n − 2) dimensions corresponding to the second simple eigenvector, subject to ‖φ_2‖ = 1 and φ_1^T φ_2 = 0
- ...
- two dimensions corresponding to the last simple eigenvector; the eigenspace corresponding to the equal eigenvalues is then uniquely determined.
In total, (n − 1) + Σ_{i=1}^{n−2} (n − i) = N − 2.
Starting from a random matrix and moving in a random direction will almost surely result in simple eigenvalues.
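The counting argument can also be seen numerically: perturbing a matrix with a repeated eigenvalue in a random symmetric direction splits the eigenvalues for every nonzero step size. This quick check is an illustration added here, not part of the original slides.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.eye(3)                                   # eigenvalue 1 repeated three times
B = rng.standard_normal((3, 3))
B = 0.5 * (B + B.T)                             # random symmetric perturbation direction
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, np.linalg.eigvalsh(A + eps * B)) # eigenvalues separate for every eps > 0
```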

Antithetic Variates
Consider estimating the mean M = E[F(ξ)], ξ = (ξ_1, ..., ξ_{N_KL}), with two samples:
M̂ = (F(ξ^(1)) + F(ξ^(2)))/2 = (F_1 + F_2)/2
with estimator variance
Var(M̂) = (Var(F_1) + Var(F_2) + 2 Cov(F_1, F_2))/4
If F(ξ) is monotone, choosing ξ^(2) = −ξ^(1) gives Cov(F_1, F_2) < 0 and the variance is reduced. Quantities of interest and their sensitivities are nearly linear when the level of uncertainty is small.
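A small self-contained sketch of the antithetic pairing for a nearly linear quantity of interest: each coefficient vector ξ is paired with −ξ, and the standard error is compared against plain Monte Carlo at matched cost. The test function and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_kl = 2000, 10
w = np.linspace(1.0, 0.1, n_kl)                     # weights of a nearly linear QoI

def F(X):
    """Mildly nonlinear test function of the K-L coefficients (one sample per row of X)."""
    y = X @ w
    return y + 0.05 * y ** 2

plain = F(rng.standard_normal((2 * n_pairs, n_kl))) # 2*n_pairs independent samples
xi = rng.standard_normal((n_pairs, n_kl))
anti = 0.5 * (F(xi) + F(-xi))                        # same number of function evaluations

print("plain     :", plain.mean(), plain.std(ddof=1) / np.sqrt(plain.size))
print("antithetic:", anti.mean(), anti.std(ddof=1) / np.sqrt(anti.size))
```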

References
[Drela, 1998] Drela, M. (1998). Pros and cons of airfoil optimization. In Frontiers of Computational Fluid Dynamics 1998, chapter 19, pages 363-380. World Scientific Publishing.
[Garzon, 2003] Garzon, V. E. (2003). Probabilistic Aerothermal Design of Compressor Airfoils. PhD dissertation, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.
[Garzon and Darmofal, 2003] Garzon, V. E. and Darmofal, D. (2003). Impact of geometric variability on axial compressor performance. Journal of Turbomachinery, 125(4):692-703.
[Glasserman, 2004] Glasserman, P. (2004). Estimating sensitivities. In Monte Carlo Methods in Financial Engineering, chapter 7, pages 386-401. Springer-Verlag, New York.
[Lavainne, 2003] Lavainne, J. (2003). Sensitivity of a Compressor Repeating-Stage to Geometry Variation. Master's dissertation, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.