On the Karhunen-Loève basis for continuous mechanical systems

Rubens Sampaio, Department of Mechanical Engineering, Pontifícia Universidade Católica do Rio de Janeiro, Brazil. E-mail: rsampaio@mec.puc-rio.br

Some perspective to introduce the talk: Newton, Maxwell, Boltzmann, Carnot (the beginning of Thermodynamics), and a side comment.

Newton

Maxwell

Boltzmann

Beginning of Thermodynamics

The idea of Carnot

Carnot's Theorem

Why theorems are important. Let ℕ be the set of natural numbers 1, 2, 3, ... Question: find the greatest natural number. Suppose N is the solution. If N ≠ 1 then N² > N, so such an N cannot be the solution. Hence N = 1 is the solution. Answer: N = 1 is the greatest natural number. (The argument is of course fallacious: it silently assumes that a greatest natural number exists, which is exactly the kind of statement a theorem must guarantee.)

Outline:
- Naive look / more detailed look
- Some history; main applications
- Main idea of the KL decomposition
- The mathematical problem
- Construction of the KL basis
- How to combine KL with Galerkin
- Reduced model given by KL
- Practical question: how to compute
- An example
- Different guises of KL; basic ingredients
- Karhunen-Loève expansion: main hypothesis
- Karhunen-Loève Theorem
- Basic properties
- Applications to Random Mechanics
- Examples

Some history. Beginning: works in Statistics, Probability, and Spectral Theory in Hilbert spaces. Some contributions: Kosambi (1943), Loève (1945), Karhunen (1946), Pougachev (1953), Obukhov (1954). Applications: Lumley (1967) applied the method to turbulence; Sirovich (1987) introduced the snapshot method. An important book appeared in 1996: Holmes, Lumley, Berkooz. In Solid Mechanics the applications started around 1993. In finite dimension the method appears under different guises:
- Principal Component Analysis (PCA): Statistics and image processing
- Empirical orthogonal functions: Oceanography and Meteorology
- Factor analysis: Psychology and Economics

Main applications:
- Data analysis: Principal Component Analysis (PCA)
- Reduced models, through Galerkin approximations
- Dynamical systems: to understand the dynamics
- Image processing
- Signal analysis

Two main purposes:
- order reduction, by projecting high-dimensional data onto a lower-dimensional space
- feature extraction, by revealing relevant but unexpected structure hidden in the data

Main idea of the KL decomposition. In plain words: the key idea of KL is to reduce a large number of interdependent variables to a much smaller number of uncorrelated variables, while retaining as much as possible of the variation present in the original data.

More precisely: suppose we have an ensemble {u_k} of scalar fields, each a function defined on (a,b) ⊂ R. We work in the Hilbert space L²((a,b)). We want to find an orthonormal basis {ψ_n}_{n=1}^∞ of L² that is optimal for the given data set, in the sense that the finite-dimensional representation

û(x) = Σ_{k=1}^N a_k ψ_k(x)

describes a typical member of the ensemble better than a representation of the same dimension in any other basis. The notion of "typical" implies the use of an average over the ensemble {u_k}, and optimality means that this average representation error is minimized.

The mathematical problem. Suppose, for simplicity, that we look for just one function ψ. The optimality requirement is

max_{ψ ∈ L²} E(< u, ψ >²) / ‖ψ‖²,

which, introducing a Lagrange multiplier λ for the normalization constraint, amounts to making

J(ψ) = E(< u, ψ >²) − λ(‖ψ‖² − 1)

stationary. Setting

d/dε J(ψ + εφ)|_{ε=0} = 0 for every φ,

and writing R(x,y) = E(u(x)u(y)), this leads to the eigenvalue problem

∫_a^b R(x,y) ψ(y) dy = λ ψ(x).

Construction of the KL basis:
1. Construct R(x,y) from the data.
2. Solve the eigenvalue problem ∫_D R(x,y) ψ(y) dy = λ ψ(x) to get the pairs (λ_i, ψ_i).
3. If u is the field, its N-order approximation is û_N(t,x) = E(u(t,x)) + Σ_{i=1}^N a_i(t) ψ_i(x).
4. To make predictions, use the Galerkin method, taking the ψ's as trial functions.
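
The following sketch, not part of the original talk, shows one way to solve the eigenvalue problem of step 2 numerically by Nyström quadrature; the exponential covariance R(x,y) = σ² exp(−|x−y|/ℓ) and all parameter values are assumed purely for illustration.

```python
# Minimal Nystrom sketch for  int_D R(x,y) psi(y) dy = lambda psi(x)  on D = (a,b).
import numpy as np

def kl_basis_nystrom(R, a=0.0, b=1.0, n=200, n_modes=10):
    """Discretize the Fredholm eigenvalue problem on a quadrature grid."""
    x, h = np.linspace(a, b, n, retstep=True)
    W = np.full(n, h); W[0] *= 0.5; W[-1] *= 0.5          # trapezoidal weights
    # the symmetrized matrix sqrt(W) R sqrt(W) has the same eigenvalues as the Nystrom operator
    B = np.sqrt(W)[:, None] * R(x[:, None], x[None, :]) * np.sqrt(W)[None, :]
    lam, V = np.linalg.eigh(B)
    idx = np.argsort(lam)[::-1][:n_modes]
    psi = V[:, idx] / np.sqrt(W)[:, None]                  # so that sum_i W_i psi_k(x_i)^2 = 1
    return x, lam[idx], psi

sigma, ell = 1.0, 0.2                                      # assumed covariance parameters
R = lambda x, y: sigma**2 * np.exp(-np.abs(x - y) / ell)
x, lam, psi = kl_basis_nystrom(R)
print(lam[:5])                                             # decaying eigenvalues ("energies")
```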

Galerkin projections. Suppose we have a dynamical system governed by

v_t = A(v),  t ∈ (a,b),  x ∈ D ⊂ R^n,
v(0,x) = v_0(x)  (initial condition),
B(v) = 0  (boundary condition).

The Galerkin method is a discretization scheme for PDEs based on separation of variables. One searches for solutions of the form

v̂(t,x) = Σ_{k=1}^∞ a_k(t) ψ_k(x).

Reduced equations. The reduced equations are obtained by making the error of the approximation orthogonal to the first N elements of the KL basis:

error_equation(t,x) = v̂_t − A(v̂),
error_initcond(x) = v̂(0,x) − v_0(x),
< errors, ψ_i > = 0 for i = 1,...,N.

This gives

da_i/dt (t) = ∫_D A(Σ_{n=1}^N a_n(t) ψ_n(x)) ψ_i(x) dx,  i = 1,...,N,
a_i(0) = ∫_D v_0(x) ψ_i(x) dx,  i = 1,...,N.
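
As an illustration only (not from the talk), the sketch below integrates the reduced equations for the simple linear choice A(v) = ν v_xx on (0,1) with homogeneous Dirichlet boundary conditions; the basis ψ_k, the initial condition, and ν are all assumed. For a nonlinear A, the right-hand side would instead be evaluated by reconstructing v̂ from the coefficients at each step and projecting, exactly as in the formula above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def galerkin_rom(psi, x, nu, v0, t_span, n_t=200):
    """Reduced equations da_i/dt = <A(v_hat), psi_i> for the linear A(v) = nu v_xx."""
    dx = x[1] - x[0]
    d2psi = np.gradient(np.gradient(psi, dx, axis=0), dx, axis=0)   # psi_k'' column-wise
    L = nu * (psi.T @ d2psi) * dx                 # L_ij = nu * int psi_j'' psi_i dx
    a0 = psi.T @ v0 * dx                          # a_i(0) = int v0 psi_i dx
    sol = solve_ivp(lambda t, a: L @ a, t_span, a0,
                    t_eval=np.linspace(*t_span, n_t))
    v_hat = psi @ sol.y                           # v_hat(t,x) = sum_k a_k(t) psi_k(x)
    return sol.t, sol.y, v_hat

# usage with sine modes standing in for a KL basis (assumed example)
x = np.linspace(0.0, 1.0, 201)
psi = np.sqrt(2) * np.sin(np.pi * np.outer(x, np.arange(1, 6)))
t, a, v_hat = galerkin_rom(psi, x, nu=0.01, v0=x * (1 - x), t_span=(0.0, 1.0))
```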

Computation of the KL basis: direct method. In this method, the displacements of a dynamical system are measured or calculated at N locations and labeled u_1(t,x_1), u_2(t,x_2), ..., u_N(t,x_N). Sampling these displacements M times, we form the M × N ensemble matrix

U = [ u_1(t_1,x_1)  u_2(t_1,x_2)  ...  u_N(t_1,x_N) ]
    [      ...           ...      ...       ...     ]
    [ u_1(t_M,x_1)  u_2(t_M,x_2)  ...  u_N(t_M,x_N) ]

The spatial correlation matrix of dimension N × N is then formed as R_u = (1/M) U^T U. The PO modes are given by the eigenvectors of R_u.
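
A minimal sketch of the direct method as just described (illustrative names and synthetic data; U is the M × N ensemble matrix of sampled displacements):

```python
import numpy as np

def pod_direct(U):
    """PO modes and energies from the spatial correlation matrix R_u = U^T U / M."""
    M = U.shape[0]
    U0 = U - U.mean(axis=0)              # subtract the time average E[u]
    R_u = U0.T @ U0 / M                  # N x N spatial correlation matrix
    lam, psi = np.linalg.eigh(R_u)       # eigenvalues come out in ascending order
    order = np.argsort(lam)[::-1]
    return lam[order], psi[:, order]     # energies and PO modes (columns)

# illustrative usage with synthetic data
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)[:, None]
x = np.linspace(0, 1, 64)[None, :]
U = np.sin(2 * np.pi * t) * np.sin(np.pi * x) + 0.1 * rng.standard_normal((500, 64))
lam, psi = pod_direct(U)
print(lam[:3] / lam.sum())               # fraction of "energy" in the leading modes
```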

Figure: Flowchart of the direct method (labels translated from Portuguese): data u → subtract the time average E[u] → spatial correlation matrix R (N × N, symmetric) → eigenvectors ψ (the POMs) and eigenvalues λ (the energies).

Figure: Flowchart of the snapshot method (labels translated from Portuguese): data u → subtract the time average E[u] → inner-product (snapshot) matrix C (M × M, symmetric) → eigenvectors A and eigenvalues λ → POMs obtained as combinations of the snapshots, Σ_{m=1}^M A_km v^(m), and energies.
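
A minimal sketch of the snapshot method (illustrative; useful when the number of spatial points N is much larger than the number of snapshots M): the M × M matrix of snapshot inner products is diagonalized and the POMs are recovered as combinations of the snapshots.

```python
import numpy as np

def pod_snapshot(U):
    """PO modes from the snapshot matrix C = U U^T / M (U is M x N)."""
    M = U.shape[0]
    U0 = U - U.mean(axis=0)                  # subtract the time average E[u]
    C = U0 @ U0.T / M                        # M x M matrix of snapshot inner products
    lam, A = np.linalg.eigh(C)
    order = np.argsort(lam)[::-1]
    lam, A = lam[order], A[:, order]
    keep = lam > 1e-12 * lam[0]              # discard numerically zero modes
    lam, A = lam[keep], A[:, keep]
    psi = U0.T @ A                           # POMs as combinations of the snapshots
    psi /= np.linalg.norm(psi, axis=0)       # normalize the columns
    return lam, psi
```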

Different guises of KL; basic ingredients. Sets:
- D ⊂ R^l;
- Ω, the space of events;
- R^n, the codomain of the functions.
L²(D,R^n) is a Hilbert space of functions with inner product <·,·>_D and associated norm ‖·‖_D; the elements of this space are deterministic functions. (Ω,F,P) is a probability space: F is a sigma-algebra and P a probability measure; ω ∈ Ω is an event, that is, a realization of a random function. The mean value of a random variable X: Ω → R^n, z ↦ X(z), is E(X) = ∫_Ω X(z) dP(z). Ω also carries a Hilbert-space structure, denoted L²(Ω,R^n), if we take the inner product <X,Y>_Ω = E(XY), with associated norm ‖·‖_Ω.

Basic ingredients of KL. In order to compute a KL basis one needs two basic ingredients:
- an L² space of functions;
- an averaging operator.
In the literature one finds mainly three forms of KL decomposition. To understand their similarities and differences it is worth thinking of the fields as defined on a cartesian product of two sets, which provides exactly the two ingredients just mentioned:

X: D × Ω → R^n,  (z,ω) ↦ X(z,ω).

Interpretation: X(z,·) is a random variable, that is, all possible realizations of the field for a fixed z ∈ D; we need the averaging operator to do statistics with these random variables, one for each z ∈ D. X(·,ω) is one realization of the field, hence a function in L²(D,R^n); physical quantities are defined in terms of this field, so we need the first structure.

Karhunen-Loève expansion: main hypothesis. Consider a random field {X(z)}_{z∈D} defined on a probability space (Ω,F,P),

X: D(⊂ R^l) × Ω → R^n,  (z;ω) ↦ X(z;ω).

Assumption I: {X(z)}_{z∈D} is a second-order random field, i.e.

E(‖X(z)‖²) = E(< X(z), X(z) >) < ∞ for all z ∈ D,

where E(·) denotes the ensemble average and <·,·> is the inner product in R^n.

Assumption II: {X(z)}_{z∈D} is continuous in quadratic mean, i.e.

‖X(z+h) − X(z)‖_{L²(Ω,R^n)} → 0 as h → 0.

Under Assumptions I and II, X(z) ∈ L²(Ω,R^n) for every z ∈ D (with < Y_1, Y_2 >_Ω = E(< Y_1, Y_2 >)). Second-order moment characteristics:

m_X(z) = E(X(z)),
R_X(z_1,z_2) = E(X(z_1) ⊗ X(z_2)),
C_X(z_1,z_2) = E((X(z_1) − E(X(z_1))) ⊗ (X(z_2) − E(X(z_2)))).

When the random field has zero mean, C_X = R_X. We will assume in the sequel that {X(z)}_{z∈D} is a zero-mean field. The covariance function C_X is then continuous on D × D.

The self-adjoint operator. Under these assumptions, the integral operator

Q: L²(D,R^n) → L²(D,R^n),  ψ ↦ (Qψ)(z) = ∫_D C_X(z,z′) ψ(z′) dz′,

with kernel C_X(z,z′), defines a continuous, self-adjoint, Hilbert-Schmidt operator on the Hilbert space L²(D,R^n).

Eigenvalue property: the operator Q has a countable number of eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_n ≥ ... ≥ 0, i.e.

(Qψ_n)(z) = λ_n ψ_n(z),

where ψ_1, ..., ψ_n, ... denote the associated eigenfunctions. The set of eigenfunctions constitutes an orthonormal basis of L²(D,R^n):

< ψ_n, ψ_m >_D = ∫_D < ψ_n(z), ψ_m(z) > dz = δ_nm,

where <·,·>_D denotes the inner product in L²(D,R^n), with associated norm ‖·‖_D.

Karhunen-Loève Theorem. A continuous second-order random field can be expanded in a series of the eigenfunctions ψ_n as

X(z) = Σ_{n=1}^∞ ξ_n ψ_n(z)  (convergence in L²(Ω,R^n)),

where ξ_1, ξ_2, ..., ξ_n, ... are scalar, uncorrelated random variables defined by

ξ_n = ∫_D < X(z), ψ_n(z) > dz,

with

E(ξ_n ξ_m) = λ_n δ_nm = { λ_n if n = m, 0 if n ≠ m }.

The {ψ_k} are called the KL modes (also proper orthogonal modes, POMs).

Energy property. The eigenvalues λ_n of Q are related to the mean energy of the random field through

E(‖X‖²_D) = Σ_{n=1}^∞ λ_n.

Optimality property. The Karhunen-Loève expansion satisfies

E(‖X − Σ_{k=1}^q ξ_k ψ_k‖²_D) ≤ E(‖X − Σ_{k=1}^q ξ̃_k ψ̃_k‖²_D)

for any integer q and any other orthonormal basis (ψ̃_k)_{k≥1} of L²(D,R^n), where ξ̃_1, ξ̃_2, ..., ξ̃_k, ... are the scalar random variables given by

ξ̃_k = ∫_D < X(z), ψ̃_k(z) > dz.

The expansion is optimal in the sense that, for a fixed number q of modes, no other linear decomposition can capture as much energy as the KL expansion.
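
The optimality property can also be checked numerically. The sketch below (entirely synthetic and not from the talk) compares the mean-square reconstruction error of the empirical KL/POD basis with that of a generic orthonormal basis (Fourier sine modes) for the same number q of modes.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, q = 2000, 128, 5
x = np.linspace(0, 1, N)
# synthetic ensemble: random superposition of a few smooth "structures" plus noise
U = sum(rng.standard_normal((M, 1)) * np.exp(-((x - c) / 0.1) ** 2)
        for c in (0.3, 0.5, 0.7)) + 0.05 * rng.standard_normal((M, N))

def mean_error(U, Phi, q):
    """Mean-square reconstruction error using the first q columns of the basis Phi."""
    P = Phi[:, :q]
    Uq = (U @ P) @ P.T                          # projection of each sample onto span(P)
    return np.mean(np.sum((U - Uq) ** 2, axis=1))

# KL/POD basis from the spatial correlation matrix
lam, psi = np.linalg.eigh(U.T @ U / M)
psi = psi[:, np.argsort(lam)[::-1]]
# generic comparison basis: orthonormalized Fourier sine modes
F = np.sin(np.pi * np.outer(x, np.arange(1, N + 1)))
F, _ = np.linalg.qr(F)

print(mean_error(U, psi, q) <= mean_error(U, F, q))   # expected: True
```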

Summary: Karhunen-Loève Theorem. Given {X(z)}_{z∈D} defined on a probability space (Ω,F,P), with covariance matrix function C_X(z_1,z_2),

X(z;ω) = Σ_{n=1}^∞ ξ_n(ω) ψ_n(z)  in L²(Ω,R^n),

where
- the ψ_n are the eigenfunctions of (Qψ)(z) = ∫_D C_X(z,z′) ψ(z′) dz′, orthonormal in L²(D,R^n): ∫_D < ψ_n(z), ψ_m(z) > dz = δ_nm;
- the ξ_n are scalar random variables, ξ_n(ω) = ∫_D < X(z,ω), ψ_n(z) > dz, with E(ξ_n ξ_m) = λ_n δ_nm.
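
A common use of the theorem is simulation. Below is a minimal sketch (assumed exponential covariance and Gaussian field, so that the uncorrelated coefficients are ξ_n ~ N(0, λ_n)) that generates realizations of a zero-mean random field from its truncated KL expansion.

```python
import numpy as np

n, q = 200, 20                                     # grid points, retained modes
z = np.linspace(0.0, 1.0, n)
dz = z[1] - z[0]
C = np.exp(-np.abs(z[:, None] - z[None, :]) / 0.2) # assumed covariance C_X(z1,z2)
lam, psi = np.linalg.eigh(C * dz)                  # discrete KL eigenpairs
order = np.argsort(lam)[::-1][:q]
lam, psi = lam[order], psi[:, order] / np.sqrt(dz) # normalize: sum_i psi_n(z_i)^2 dz = 1

rng = np.random.default_rng(1)
xi = np.sqrt(lam)[:, None] * rng.standard_normal((q, 5))  # uncorrelated KL coefficients
X = psi @ xi                                       # five realizations X(z) = sum_n xi_n psi_n(z)
print(np.allclose(psi.T @ psi * dz, np.eye(q)))    # orthonormality check
```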

Applications to Random Mechanics. In random mechanics, the random characteristics are often modeled using random fields {u(z)}_{z∈D}, where the domain D is either
- D = D_x ⊂ R^p (with p = 1, 2, or 3) for static problems, or
- D = D_t × D_x ⊂ R × R^p for dynamic problems.
Without loss of generality we assume D_t = [0,T], with T ∈ R_+. In order to find a low-order model that still reveals the main features of the dynamics, one often searches for an expansion in separated-variables form,

u(t,x) = Σ_{k=1}^∞ a_k(t) φ_k(x),

where the φ_k are deterministic R^n-valued functions and {a_k(t)}_{t∈D_t} are scalar random processes in time. Let us see how to adapt the KL theory to these cases.

Approach 1. Apply the KL theorem to the random field {u(t,x)}_{(t,x)∈D}, with covariance matrix function C_u(t_1,x_1,t_2,x_2):

u(t,x;ω) = Σ_{n=1}^∞ ξ_n(ω) ψ_n(t,x)  in L²(Ω,R^n),

where
- the ψ_n are the eigenfunctions of (Qψ)(t,x) = ∫_D C_u(t,x,t′,x′) ψ(t′,x′) dt′ dx′, orthonormal in L²(D,R^n): ∫_D < ψ_n(t,x), ψ_m(t,x) > dt dx = δ_nm;
- the ξ_n are scalar random variables, ξ_n(ω) = ∫_D < u(t,x;ω), ψ_n(t,x) > dt dx, with E(ξ_n ξ_m) = λ_n δ_nm.

Approach 2. For fixed t ∈ D_t, apply the KL theorem to the random field {u(t,x)}_{x∈D_x}, with covariance matrix function C_u(t,x_1,t,x_2):

u(t,x;ω) = Σ_{n=1}^∞ ξ_n(t,ω) ψ_n(t,x)  in L²(Ω,R^n),

where
- the ψ_n(t,·) are the eigenfunctions of (Qψ)(x) = ∫_{D_x} C_u(t,x,t,x′) ψ(x′) dx′, orthonormal in L²(D_x,R^n): ∫_{D_x} < ψ_n(t,x), ψ_m(t,x) > dx = δ_nm;
- the ξ_n(t,·) are scalar random variables, ξ_n(t,ω) = ∫_{D_x} < u(t,x;ω), ψ_n(t,x) > dx, with E(ξ_n(t,·) ξ_m(t,·)) = λ_n(t) δ_nm.

Approach 3. Work in L²([0,T] × Ω, R^n) with < Y, Z >_{[0,T]×Ω} = Ẽ(< Y, Z >), where Ẽ(·) = (1/T) ∫_0^T E(·) dt. Apply the KL theorem to the random field {u(·,x)}_{x∈D_x}, with covariance matrix function C_u(x,x′) = Ẽ(u(·,x) ⊗ u(·,x′)):

u(x;t,ω) = Σ_{n=1}^∞ ξ_n(t,ω) ψ_n(x),

where
- the ψ_n are the eigenfunctions of (Qψ)(x) = ∫_{D_x} C_u(x,x′) ψ(x′) dx′, orthonormal in L²(D_x,R^n): ∫_{D_x} < ψ_n(x), ψ_m(x) > dx = δ_nm;
- the ξ_n are scalar random processes, ξ_n(t,ω) = ∫_{D_x} < u(t,x;ω), ψ_n(x) > dx, with Ẽ(ξ_n ξ_m) = λ_n δ_nm in L²([0,T]×Ω,R).

Approach 3, discrete case: a random process {u(t)}_{t∈[0,T]} in L²([0,T] × Ω, R^n), with

u: [0,T] × Ω → R^n,  (t,ω) ↦ u(t;ω),
< Y, Z >_{[0,T]×Ω} = Ẽ(< Y, Z >),  Ẽ(·) = (1/T) ∫_0^T E(·) dt.

Covariance matrix C_u = Ẽ(u(·) u(·)^T). Then

u(t,ω) = Σ_n ξ_n(t,ω) ψ_n  in R^n (a finite sum over the n eigenvectors of C_u),

where
- the ψ_n are the eigenvectors of C_u: C_u ψ_n = λ_n ψ_n, with ψ_n^T ψ_m = δ_nm;
- the ξ_n are scalar random processes, ξ_n(t,ω) = < u(t,ω), ψ_n >, with Ẽ(ξ_n ξ_m) = λ_n δ_nm in L²([0,T]×Ω,R).
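
A minimal sketch of this discrete case (illustrative names; `samples` is an M × n array whose rows are the sampled vectors u(t_k), assumed zero-mean): diagonalize the time-averaged covariance and recover the scalar processes ξ_n(t) = < u(t), ψ_n >.

```python
import numpy as np

def kl_discrete(samples):
    """KL modes of a discrete R^n-valued process from its time-averaged covariance."""
    M = samples.shape[0]
    C_u = samples.T @ samples / M                # estimate of C_u = E~(u u^T)
    lam, psi = np.linalg.eigh(C_u)
    order = np.argsort(lam)[::-1]
    lam, psi = lam[order], psi[:, order]
    xi = samples @ psi                           # xi_n(t_k) = <u(t_k), psi_n>
    return lam, psi, xi

# check: the time-averaged correlation of the coefficients is diagonal,
# xi.T @ xi / M  ~  diag(lam), i.e.  E~(xi_n xi_m) = lam_n delta_nm
```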

Some remarks:
1. The existence of the Karhunen-Loève expansion as described in approach 3 does not require any assumption of stationarity or ergodicity.
2. The Karhunen-Loève expansion as described in approach 3 usually depends on the time parameter T.
3. If the random field {u(t,x)}_{(t,x)∈D_t×D_x} is weakly stationary with respect to the time variable, i.e. C_u(t,x,t′,x′) = C_u(t − t′,x,x′), then approaches 2 and 3 give the same results; moreover, the KL expansion does not depend on the time parameter T.

KL modes and the equivalent linear system. Consider the discrete nonlinear case, U(t) ∈ R^n:

M Ü(t) + C U̇(t) + F(U(t)) = B w(t),

where {w(t)}_t is a Gaussian white-noise process and {U(t)} is a stationary process. Then the KL modes of the stationary nonlinear response coincide with the KL modes of the stationary response of the associated equivalent linear system given by the true stochastic linearization method,

M Ü(t) + C U̇(t) + K_eq U(t) = B w(t),

where K_eq minimizes E((F(U) − KU)^T (F(U) − KU)).
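
For reference, the minimizer of E((F(U) − KU)^T(F(U) − KU)) over matrices K has the closed form K_eq = E(F(U) U^T) [E(U U^T)]^{-1} (true stochastic linearization). A minimal sketch of estimating it from samples of the stationary response follows; the sample array and the restoring-force function F are assumed given, and all names are illustrative.

```python
import numpy as np

def equivalent_stiffness(U_samples, F):
    """K_eq = E(F(U) U^T) [E(U U^T)]^{-1} estimated from an (M, n) array of samples."""
    M = U_samples.shape[0]
    FU = np.apply_along_axis(F, 1, U_samples)        # F(U) evaluated row by row, (M, n)
    A = FU.T @ U_samples / M                         # estimate of E(F(U) U^T)
    G = U_samples.T @ U_samples / M                  # estimate of E(U U^T)
    return A @ np.linalg.inv(G)

# example restoring force (Duffing-type, assumed): F(u) = K @ u + eps * u**3
```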

How to compute the PO modes? The techniques used to estimate the covariance function of a random field depend on its properties. If the random field is non-stationary, there is a general method for estimating its second-moment characteristics, which assumes that several realizations of the random field are available. If the random field is weakly stationary with respect to the time variable, there is a method for estimating C_u(x_1,x_2) based on a single realization of the random field. Two methods can be used to compute the PO modes:
- the direct method;
- the snapshot method.

Example. Figure: vibro-impact beam (schematic; labels L/2, L, F_f, 50 mm, ε, h). Some results are presented here, obtained from simulated data generated from a mathematical model of a linear clamped beam impacting a flexible barrier.

Simulation of the experiment. The beam is modeled by

EI ∂⁴w(x,t)/∂x⁴ + ρA ∂²w(x,t)/∂t² = F_f(t) δ(x − x_f) + Σ_{i=1}^N F_b(w(x_ci,t)) δ(x − x_ci).

- Ten mode shapes of the associated linear system: ŵ(x,t) = Σ_{i=1}^{10} q_i(t) φ_i(x).
- Galerkin method (10 DOF):

Q̈ + [2 ω_i τ_i] Q̇ + [ω_i²] Q + F_c(Q) = B F_f(t),

where modal damping was added to the discretized model.

Figure: comparison between the PO modes ψ_i and the linear mode shapes φ_i (modes 1 to 8). The first two KL modes differ significantly from the first two mode shapes, reflecting the influence of the barrier on the system.

Reduced-order model formulation. From the PO modes ψ_i we construct a reduced model of

EI ∂⁴w(x,t)/∂x⁴ + ρA ∂²w(x,t)/∂t² = F_f(t) δ(x − x_f) + Σ_{i=1}^N F_b(w(x_ci,t)) δ(x − x_ci).

- n KL modes, with 1 ≤ n ≤ 10: ŵ(x,t) = Σ_{i=1}^n a_i(t) ψ_i(x).
- Galerkin method (n DOF):

Ä + [2 ω_i τ_i] Ȧ + F_KL(A) = B_KL F_f(t),

where the same modal damping was added to the discretized model.
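
A schematic sketch, not the code behind the results shown here, of how a generic discretized second-order model M q̈ + C q̇ + K q + f_nl(q) = b F_f(t) is projected onto a basis Ψ of n PO modes (q ≈ Ψ a); every name is illustrative.

```python
import numpy as np

def reduce_model(M, C, K, b, Psi):
    """Galerkin projection of the full matrices onto the columns of Psi."""
    return Psi.T @ M @ Psi, Psi.T @ C @ Psi, Psi.T @ K @ Psi, Psi.T @ b

def reduced_rhs(t, y, Mr, Cr, Kr, br, Psi, f_nl, F_f):
    """First-order form of the reduced equations, y = [a, a']."""
    n = Mr.shape[0]
    a, da = y[:n], y[n:]
    f_red = Psi.T @ f_nl(Psi @ a)                 # project the nonlinear (impact) force
    dda = np.linalg.solve(Mr, br * F_f(t) - Cr @ da - Kr @ a - f_red)
    return np.concatenate([da, dda])
```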

This figure presents a comparison between the original and the reduced-order models constructed with 5 and 10 PO modes.

Figure: neutral-axis displacement at x = 0.46 m versus time (0 to 0.07 s), comparing the original model with reduced-order models built with 5 and 10 PO modes.

The result is clearly not as good as expected, and even the full reduced-order model is not capable of reproducing the original response. A probable explanation is that the use of the modal damping ratios for the first and second KL modes is inappropriate, as these modes are physically different from the linear ones.

Concluding remarks:
- In order to use the KL theory to expand a random field in separated-variables form (time/random variables and spatial variable), it is necessary to use the adequate spatial covariance function.
- The use of the PO modes to develop a reduced-order model in the presence of damping may not be robust.