HT14 A COMPARISON OF DIFFERENT PARAMETER ESTIMATION TECHNIQUES FOR THE IDENTIFICATION OF THERMAL CONDUCTIVITY COMPONENTS OF ORTHOTROPIC SOLIDS


Inverse Problems in Engineering: Theory and Practice
3rd Int. Conference on Inverse Problems in Engineering
June 13-18, 1999, Port Ludlow, WA, USA

HT14

A COMPARISON OF DIFFERENT PARAMETER ESTIMATION TECHNIQUES FOR THE IDENTIFICATION OF THERMAL CONDUCTIVITY COMPONENTS OF ORTHOTROPIC SOLIDS

M. M. Mejias and H. R. B. Orlande
Department of Mechanical Engineering, EE/COPPE
Federal University of Rio de Janeiro, UFRJ
Cid. Universitária, Cx. Postal: 68503
Rio de Janeiro, RJ, Brasil

M. N. Ozisik
Department of Mechanical and Aerospace Engineering
North Carolina State University, NCSU
P.O. Box 7910
Raleigh, NC, USA

ABSTRACT

In this paper we examine the inverse problem of identification of the three thermal conductivity components of an orthotropic cube. For the solution of such a parameter estimation problem, we consider two different versions of the Levenberg-Marquardt method and four different versions of the conjugate gradient method. The techniques are compared in terms of rate of reduction of the objective function with respect to the number of iterations, CPU time and accuracy of the estimated parameters. Simulated measurements with and without measurement errors are used in the analysis, for three different sets of exact values for the parameters.

NOMENCLATURE
I            number of transient measurements per sensor
J            sensitivity coefficients
J            sensitivity matrix defined by equation (7)
k1, k2, k3   thermal conductivities in the x, y and z directions, respectively
M            number of sensors
P            vector of unknown parameters
S            least-squares norm defined by equation (5)
T            vector of estimated temperatures
t_h, t_f     heating time and final time
Y            vector of measured temperatures

GREEKS
σ            standard deviation of the measurement errors

INTRODUCTION

In nature, several materials have direction-dependent thermal conductivities, including, among others, woods and crystals. This is also the case for some man-made materials, for example, composites.
Such kinds of materials are denoted anisotropic, as opposed to isotropic materials, in which the thermal conductivity does not vary with direction. A special case of anisotropic materials involves those where the off-diagonal elements of the conductivity tensor are null and the diagonal elements are the principal conductivities along three mutually orthogonal directions. They are referred to as orthotropic materials (Ozisik, 1993). As a result of the importance of orthotropic materials in present-day engineering, a lot of attention has been devoted in the recent past to the estimation of their thermal properties by using inverse analysis techniques of parameter estimation (Sawaf and Ozisik, 1995, Sawaf et al., 1995, Taktak et al., 1993, Taktak, 1992, Dowding et al., 1995, 1996, Mejias et al., 1999).

In this paper we present a comparison of different methods of parameter estimation, as applied to the identification of the three thermal conductivity components of an orthotropic solid, by using simulated experimental data. Such a physical problem was chosen for the comparison of the methods because it requires non-linear estimation procedures, since the sensitivity coefficients are functions of the unknown parameters. Experimental variables used in the analysis, such as the duration of the experiment, location of sensors and boundary conditions, were optimally chosen (Mejias et al., 1999). The methods examined in this work include: the Levenberg-Marquardt method (Levenberg, 1944, Marquardt, 1963, Beck and Arnold, 1977, Moré, 1977, Mejias et al., 1999, Ozisik and Orlande,

Copyright 1999 by ASME

1999) and the Conjugate Gradient Method versions of Fletcher-Reeves, Polak-Ribiere, Powell-Beale and Hestenes-Stiefel (Alifanov, 1994, Daniel, 1971, Ozisik and Orlande, 1999, Powell, 1977, Hestenes and Stiefel, 1952, Fletcher and Reeves, 1964, Colaço and Orlande, 1999). Such methods are compared in terms of the rate of reduction of the objective function, accuracy of estimated parameters and effects of the initial guess on the convergence. Different sets of values are used for the thermal conductivity components in order to generate the simulated measurements, thus representing several practical engineering materials (Mejias et al., 1999), as described next.

DIRECT PROBLEM

The physical problem considered here involves three-dimensional linear heat conduction in an orthotropic solid, with thermal conductivity components k1, k2 and k3 in the x, y and z directions, respectively. The solid is considered to be a parallelepiped with sides a, b and c, initially at zero temperature. For times t > 0, uniform heat fluxes q1(t), q2(t) and q3(t) are supplied at the surfaces x = a, y = b and z = c, respectively, while the three remaining boundaries at x = 0, y = 0 and z = 0 are supposed insulated. The mathematical formulation of such a physical problem is given in dimensionless form by

k1 ∂²T/∂x² + k2 ∂²T/∂y² + k3 ∂²T/∂z² = ∂T/∂t,  in 0 < x < a, 0 < y < b, 0 < z < c, for t > 0   (1.a)

∂T/∂x = 0 at x = 0;   k1 ∂T/∂x = q1(t) at x = a, for t > 0   (1.b,c)
∂T/∂y = 0 at y = 0;   k2 ∂T/∂y = q2(t) at y = b, for t > 0   (1.d,e)
∂T/∂z = 0 at z = 0;   k3 ∂T/∂z = q3(t) at z = c, for t > 0   (1.f,g)
T = 0 for t = 0,  in 0 ≤ x ≤ a, 0 ≤ y ≤ b, 0 ≤ z ≤ c   (1.h)

In the direct problem associated with the physical problem described above, the three thermal conductivity components k1, k2 and k3, as well as the solid geometry, initial and boundary conditions, are known. The objective of the direct problem is then to determine the transient temperature field T(x, y, z, t) in the body.
By following the same approach of Taktak et al. (1993), we assume the boundary heat fluxes to be pulses of finite duration t_h, that is,

q_j(t) = q_j  for 0 < t ≤ t_h;   q_j(t) = 0  for t > t_h,   for j = 1, 2, 3   (2)

where q_j is the dimensionless magnitude of the applied heat flux. The solution of problem (1) with boundary heat fluxes given by equation (2) can be obtained analytically, as a superposition of three one-dimensional solutions, by using the split-up procedure (Mikhailov and Ozisik, 1994). For 0 < t ≤ t_h we obtain

T(x, y, z, t) = (q1 a / k1) θ(x/a, k1 t/a²) + (q2 b / k2) θ(y/b, k2 t/b²) + (q3 c / k3) θ(z/c, k3 t/c²)   (3.a)

where

θ(ξ, τ) = τ + (3ξ² − 1)/6 + (2/π²) Σ_{i=1}^{∞} [(−1)^(i+1)/i²] cos(iπξ) exp[−(iπ)²τ]   (3.b)

After the heating period, problem (1) becomes homogeneous, with initial temperature distribution given by equation (3.a) for t = t_h. Hence, it can easily be solved by separation of variables (Ozisik, 1993). Since the initial condition at t = t_h obtained from equation (3.a) is a superposition of three one-dimensional solutions, such is also the case for the solution for t > t_h. By superposing a delayed copy of the heating solution, that is, θ(ξ, τ) − θ(ξ, τ − τ_h), the temperature field for t > t_h is given by

T(x, y, z, t) = (q1 a / k1) θ*(x/a, k1 t/a², k1 t_h/a²) + (q2 b / k2) θ*(y/b, k2 t/b², k2 t_h/b²) + (q3 c / k3) θ*(z/c, k3 t/c², k3 t_h/c²)   (4.a)

where

θ*(ξ, τ, τ_h) = τ_h + (2/π²) Σ_{i=1}^{∞} [(−1)^i/i²] cos(iπξ) exp[−(iπ)²(τ − τ_h)] {1 − exp[−(iπ)²τ_h]}   (4.b)

INVERSE PROBLEM

For the inverse problem considered here, the thermal conductivity components k1, k2 and k3 are regarded as unknown, while the other quantities appearing in the formulation of the direct problem described above are assumed to be known with a high degree of accuracy. For the estimation of the vector of unknown parameters P = [k1, k2, k3], we assume available the transient readings of M temperature sensors. Since it is desirable to have a non-intrusive experiment, we consider the sensors to be located at the insulated surfaces x = 0, y = 0 and z = 0. We note that the temperature measurements may contain random errors.
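The split-up solution above lends itself to direct numerical evaluation. The following is a Python sketch (the paper's own implementation was in FORTRAN 90); the function names and the series truncation level are our own choices, and θ is the one-dimensional solution of equation (3.b) for a slab insulated at ξ = 0 and heated by a unit flux at ξ = 1:

```python
import math

def theta(xi, tau, nterms=200):
    # One-dimensional dimensionless temperature rise for a unit slab,
    # insulated at xi = 0 and heated by a unit flux at xi = 1
    # (equation 3.b), with the series truncated at nterms terms.
    s = tau + (3.0 * xi ** 2 - 1.0) / 6.0
    for i in range(1, nterms + 1):
        s += (2.0 / math.pi ** 2) * ((-1) ** (i + 1) / i ** 2) \
             * math.cos(i * math.pi * xi) * math.exp(-(i * math.pi) ** 2 * tau)
    return s

def temperature(x, y, z, t, k, q=(1.0, 1.0, 1.0), sides=(1.0, 1.0, 1.0)):
    # Split-up superposition of three one-dimensional solutions
    # (equation 3.a), valid during the heating period 0 < t <= t_h.
    a, b, c = sides
    return (q[0] * a / k[0] * theta(x / a, k[0] * t / a ** 2)
            + q[1] * b / k[1] * theta(y / b, k[1] * t / b ** 2)
            + q[2] * c / k[2] * theta(z / c, k[2] * t / c ** 2))
```

At t = 0 the series cancels the polynomial term, recovering the zero initial condition, while for large τ the solution grows linearly in time, as expected for a steadily applied flux.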
Such errors are assumed here to be additive, uncorrelated and normally distributed, with zero mean and a known constant standard deviation σ (Beck and Arnold, 1977). Therefore, the solution of the present parameter estimation problem can be obtained through the minimization of the ordinary least-squares norm

S(P) = [Y − T(P)]^T [Y − T(P)]   (5)

where

[Y − T(P)]^T = [Y_1 − T_1(P), Y_2 − T_2(P), …, Y_I − T_I(P)]   (6.a)

and each element [Y_i − T_i(P)] is a row vector of length equal to the number of sensors M, that is,

[Y_i − T_i(P)] = [Y_i1 − T_i1(P), Y_i2 − T_i2(P), …, Y_iM − T_iM(P)]  for i = 1, …, I   (6.b)

We note that Y_im and T_im(P) are the measured and estimated temperatures, respectively, for time t_i, i = 1, …, I, and for sensor m, m = 1, …, M. The estimated temperatures are obtained from the solution of the direct problem given by equations (3, 4), by using the current available estimate for the vector of unknown parameters P = [k1, k2, k3]. The methods considered in this paper for the minimization of the least-squares norm (5) make use of the sensitivity matrix. For the present case, involving multiple measurements to estimate the vector of unknown parameters P = [k1, k2, k3], the sensitivity matrix is defined as

J(P) = [∂T^T(P) / ∂P]^T   (7)

whose rows contain the derivatives of the estimated temperatures, at each time and sensor, with respect to k1, k2 and k3. The elements of the sensitivity matrix are denoted sensitivity coefficients. Due to the analytical nature of the solution of the direct problem given by equations (3, 4), we can also obtain analytic expressions for the sensitivity coefficients. Since the solution of the direct problem is obtained as a superposition of one-dimensional solutions, we note that the sensitivity coefficient with respect to k1 is a function of x, but not of y and z. The expression for the sensitivity coefficient with respect to k1 is obtained as

J_k1 = −(q1 a / k1²) θ(x/a, k1 t/a²) + [q1 t / (k1 a)] {1 + 2 Σ_{i=1}^{∞} (−1)^i cos(iπx/a) exp[−(iπ)² k1 t/a²]}   (8.a)

for 0 < t ≤ t_h, and

J_k1 = [2 q1 / (k1 a)] Σ_{i=1}^{∞} (−1)^i cos(iπx/a) exp[−(iπ)² k1 (t − t_h)/a²] { t_h − t + t exp[−(iπ)² k1 t_h/a²] − [a²/(k1 (iπ)²)] (1 − exp[−(iπ)² k1 t_h/a²]) }   (8.b)

for t > t_h. Analogous expressions can be obtained for the sensitivity coefficients J_k2 and J_k3 with respect to k2 and k3, respectively. We note that the present estimation problem is non-linear, since the sensitivity coefficients are functions of the unknown parameters.
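Because the solution is analytic, the sensitivity coefficient J_k1 can be verified against a central finite difference of the x-direction contribution to the temperature. The sketch below (Python rather than the paper's FORTRAN 90; all names are ours) performs this check for the heating period 0 < t ≤ t_h; agreement of the two values supports the series expression for J_k1 given above:

```python
import math

def theta(xi, tau, nterms=200):
    # One-dimensional solution of equation (3.b).
    s = tau + (3.0 * xi ** 2 - 1.0) / 6.0
    for i in range(1, nterms + 1):
        s += (2.0 / math.pi ** 2) * ((-1) ** (i + 1) / i ** 2) \
             * math.cos(i * math.pi * xi) * math.exp(-(i * math.pi) ** 2 * tau)
    return s

def T1(k1, x, t, q1=1.0, a=1.0):
    # x-direction contribution to the split-up solution (3.a).
    return q1 * a / k1 * theta(x / a, k1 * t / a ** 2)

def J1_analytic(k1, x, t, q1=1.0, a=1.0, nterms=200):
    # Sensitivity coefficient dT1/dk1 for 0 < t <= t_h (equation 8.a).
    series = 1.0
    for i in range(1, nterms + 1):
        series += 2.0 * (-1) ** i * math.cos(i * math.pi * x / a) \
                  * math.exp(-(i * math.pi) ** 2 * k1 * t / a ** 2)
    return (-q1 * a / k1 ** 2) * theta(x / a, k1 * t / a ** 2) \
           + (q1 * t / (k1 * a)) * series

def J1_finite_difference(k1, x, t, h=1e-6):
    # Central-difference approximation of dT1/dk1, for verification.
    return (T1(k1 + h, x, t) - T1(k1 - h, x, t)) / (2.0 * h)
```

For instance, at the sensor location x = 0 with k1 = 1 and t = 1, the analytic and finite-difference values agree to many digits.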
METHODS OF SOLUTION

For the minimization of the least-squares norm (5), we consider here the Levenberg-Marquardt method (Levenberg, 1944, Marquardt, 1963, Beck and Arnold, 1977, Mejias et al., 1999, Ozisik and Orlande, 1999) and the conjugate gradient method versions of Fletcher-Reeves, Polak-Ribiere, Powell-Beale and Hestenes-Stiefel (Alifanov, 1994, Daniel, 1971, Ozisik and Orlande, 1999, Powell, 1977, Hestenes and Stiefel, 1952, Fletcher and Reeves, 1964, Colaço and Orlande, 1999). These methods can be suitably arranged in iterative procedures of the form

P^(k+1) = P^k + ΔP^k   (9)

where ΔP^k is the increment in the vector of unknown parameters at iteration k. The computation of ΔP^k for each of the methods considered here is addressed next.

1. Levenberg-Marquardt Method

The so-called Levenberg-Marquardt method was first derived by Levenberg (1944) by modifying the ordinary least-squares norm. Later, Marquardt (1963) derived basically the same technique by using a different approach. Marquardt's intention was to obtain a method that would tend to the Gauss method in the neighborhood of the minimum of the ordinary least-squares norm, and would tend to the steepest-descent method in the neighborhood of the initial guess used for the iterative procedure. The increment ΔP^k for the Levenberg-Marquardt method can be written as

ΔP^k = [(J^k)^T J^k + μ^k Ω^k]^(−1) (J^k)^T [Y − T(P^k)]   (10)

where μ^k is a positive scalar named the damping parameter and Ω^k is a diagonal matrix. The purpose of the term μ^k Ω^k in equation (10) is to damp oscillations and instabilities due to the ill-conditioned character of the problem, by making its components large as compared to those of (J^k)^T J^k, if necessary. The damping parameter is made large in the beginning of the iterations, since the problem is generally ill-conditioned in the region around the initial guess used for the iterative procedure, which can be quite far from the exact parameters.
With such an approach, the matrix (J^k)^T J^k is not required to be non-singular in the beginning of the iterations, and the Levenberg-Marquardt method tends to the steepest-descent method, that is, a very small step is taken in the negative gradient direction. The parameter μ^k is then gradually reduced as the iterative procedure advances to the solution of the parameter estimation problem, and the Levenberg-Marquardt method then tends to the Gauss method (Beck and Arnold, 1977, Ozisik and Orlande, 1999). The following criteria are used to stop the iterative procedure of the Levenberg-Marquardt method:

(i)   S(P^(k+1)) < ε1   (11.a)
(ii)  ||(J^k)^T [Y − T(P^k)]|| < ε2   (11.b)
(iii) ||P^(k+1) − P^k|| < ε3   (11.c)

where ε1, ε2 and ε3 are user-prescribed tolerances and ||·|| is the vector Euclidean norm, i.e., ||x|| = (x^T x)^(1/2), where the superscript T denotes transpose. The criterion given by equation (11.a) tests whether the least-squares norm is sufficiently small, which is expected in the neighborhood of the solution of the problem. Similarly, equation (11.b) checks whether the norm of the gradient of S(P) is sufficiently small, since it is expected to vanish at the point where S(P) is minimum. Although such a condition of vanishing gradient is also valid for maximum and saddle points of S(P), the Levenberg-Marquardt method is very unlikely to converge to such points. The last criterion, given by equation (11.c), results from the fact that the changes in the vector of parameters are very small when the method has converged.

Different versions of the Levenberg-Marquardt method can be found in the literature, depending on the choice of the diagonal matrix Ω^k and on the form chosen for the variation of the damping parameter μ^k. We illustrate here the computational algorithm of the method with the matrix Ω^k taken as

Ω^k = diag[(J^k)^T J^k]   (12)

Suppose that temperature measurements are given at times t_i, i = 1, …, I. Also, suppose that an initial guess P^0 is available for the vector of unknown parameters P. Choose a value for the damping parameter μ^0 and set k = 0. Then,

Step 1. Solve the direct heat transfer problem given by equations (1) with the available estimate P^k.
Step 2. Compute S(P^k) from equation (5).
Step 3. Compute the sensitivity matrix J^k defined by equation (7) and then the matrix Ω^k given by equation (12).
Step 4. Compute the increment ΔP^k for the unknown parameters by using equation (10).
Step 5. Compute the new estimate P^(k+1) as

P^(k+1) = P^k + ΔP^k   (13)

Step 6. Solve the direct problem (1) with the new estimate P^(k+1) in order to find T(P^(k+1)). Then compute S(P^(k+1)), as defined by equation (5).
Step 7. If S(P^(k+1)) ≥ S(P^k), replace μ^k by 10 μ^k and return to step 4.
Step 8. If S(P^(k+1)) < S(P^k), accept the new estimate P^(k+1) and replace μ^k by 0.1 μ^k.
Step 9. Check the stopping criteria given by equations (11.a-c). Stop the iterative procedure if any of them is satisfied; otherwise, replace k by k + 1 and return to step 3.

In another version of the Levenberg-Marquardt method, due to Moré (1977), the matrix Ω^k is taken as the identity matrix and the damping parameter μ^k is chosen based on the so-called trust-region algorithm. The subroutines of the IMSL (1987) are based on this version of the Levenberg-Marquardt method.

2. Conjugate Gradient Method

The conjugate gradient method is a straightforward and powerful iterative technique for solving inverse problems of parameter estimation. In the iterative procedure of the conjugate gradient method, at each iteration a suitable step size is taken along a direction of descent in order to minimize the objective function. The direction of descent is obtained as a linear combination of the negative gradient direction at the current iteration with directions of descent from previous iterations. The conjugate gradient method with an appropriate stopping criterion belongs to the class of iterative regularization techniques, in which the number of iterations is chosen so that stable solutions are obtained for the inverse problem (Alifanov, 1994, Ozisik and Orlande, 1999). The increment in the vector of unknown parameters at each iteration of the conjugate gradient method is given by

ΔP^k = β^k d^k   (14)

The search step size β^k is obtained by minimizing the objective function given by equation (5) with respect to β^k.
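The damping-control logic of the Levenberg-Marquardt steps 1-9 above does not depend on the particular model being fitted. As an illustration only, the sketch below applies it to a hypothetical two-parameter model y = p1 (1 − exp(−p2 t)), not to the heat conduction problem of the paper, with Ω^k = diag[(J^k)^T J^k] as in equation (12): μ is multiplied by 10 when a step fails to reduce S and by 0.1 when it succeeds:

```python
import math

def model(p, t):
    # Hypothetical two-parameter saturation model used for illustration.
    return p[0] * (1.0 - math.exp(-p[1] * t))

def jacobian_row(p, t):
    # Analytic sensitivities dy/dp1 and dy/dp2 for the model above.
    e = math.exp(-p[1] * t)
    return 1.0 - e, p[0] * t * e

def sse(p, ts, ys):
    # Ordinary least-squares norm S(P), equation (5).
    return sum((y - model(p, t)) ** 2 for t, y in zip(ts, ys))

def levenberg_marquardt(ts, ys, p0, mu=1e-3, tol=1e-10, max_iter=200):
    p = list(p0)
    s = sse(p, ts, ys)
    for _ in range(max_iter):
        # Steps 2-3: build J^T J and the gradient term J^T [Y - T(P)].
        a11 = a12 = a22 = g1 = g2 = 0.0
        for t, y in zip(ts, ys):
            j1, j2 = jacobian_row(p, t)
            r = y - model(p, t)
            a11 += j1 * j1; a12 += j1 * j2; a22 += j2 * j2
            g1 += j1 * r;   g2 += j2 * r
        while True:
            # Step 4: damped normal equations with Omega = diag(J^T J),
            # so the diagonal is scaled by (1 + mu) (equations 10, 12).
            b11 = a11 * (1.0 + mu)
            b22 = a22 * (1.0 + mu)
            det = b11 * b22 - a12 * a12
            dp1 = (b22 * g1 - a12 * g2) / det
            dp2 = (b11 * g2 - a12 * g1) / det
            p_new = [p[0] + dp1, p[1] + dp2]
            s_new = sse(p_new, ts, ys)
            if s_new < s:       # step 8: accept and relax the damping
                mu *= 0.1
                break
            mu *= 10.0          # step 7: reject and increase the damping
            if mu > 1e12:
                return p        # no acceptable step could be found
        p, s = p_new, s_new
        if s < tol or abs(dp1) + abs(dp2) < tol:   # criteria (11.a,c)
            break
    return p
```

With errorless data the iterations drive S(P) essentially to zero and recover the generating parameters.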
By using a first-order Taylor series approximation for the estimated temperatures, the following expression results for the search step size (Ozisik and Orlande, 1999):

β^k = {[J^k d^k]^T [Y − T(P^k)]} / {[J^k d^k]^T [J^k d^k]}   (15)

The direction of descent d^k is given in the following general form:

d^k = −∇S(P^k) + γ^k d^(k−1) + ψ^k d^q   (16)

where γ^k and ψ^k are conjugation coefficients and ∇S(P^k) is the gradient vector given by

∇S(P^k) = −2 (J^k)^T [Y − T(P^k)]   (17)

The superscript q in equation (16) denotes the iteration number where a restarting strategy is applied to the iterative procedure of the conjugate gradient method. Restarting strategies were suggested for the conjugate gradient method of parameter estimation in order to improve its convergence rate (Powell, 1977). Different versions of the conjugate gradient method can be found in the literature, depending on the form used for the

computation of the direction of descent given by equation (16) (Alifanov, 1994, Daniel, 1971, Ozisik and Orlande, 1999, Powell, 1977, Hestenes and Stiefel, 1952, Fletcher and Reeves, 1964, Colaço and Orlande, 1999).

In the Fletcher-Reeves version, the conjugation coefficients γ^k and ψ^k are obtained from the following expressions:

γ^k = ||∇S(P^k)||² / ||∇S(P^(k−1))||²,  with γ^0 = 0 for k = 0   (18.a)
ψ^k = 0  for k = 0, 1, 2, …   (18.b)

In the Polak-Ribiere version of the conjugate gradient method, the conjugation coefficients are given by

γ^k = {[∇S(P^k)]^T [∇S(P^k) − ∇S(P^(k−1))]} / ||∇S(P^(k−1))||²,  with γ^0 = 0 for k = 0   (19.a)
ψ^k = 0  for k = 0, 1, 2, …   (19.b)

For the Hestenes-Stiefel version of the conjugate gradient method we have

γ^k = {[∇S(P^k)]^T [∇S(P^k) − ∇S(P^(k−1))]} / {[d^(k−1)]^T [∇S(P^k) − ∇S(P^(k−1))]},  with γ^0 = 0 for k = 0   (20.a)
ψ^k = 0  for k = 0, 1, 2, …   (20.b)

Powell (1977) suggested the following expressions for the conjugation coefficients, which give the so-called Powell-Beale version of the conjugate gradient method:

γ^k = {[∇S(P^k)]^T [∇S(P^k) − ∇S(P^(k−1))]} / {[d^(k−1)]^T [∇S(P^k) − ∇S(P^(k−1))]},  with γ^0 = 0 for k = 0   (21.a)
ψ^k = {[∇S(P^k)]^T [∇S(P^(q+1)) − ∇S(P^q)]} / {[d^q]^T [∇S(P^(q+1)) − ∇S(P^q)]},  with ψ^0 = 0 for k = 0   (21.b)

In accordance with Powell (1977), the application of the conjugate gradient method with the conjugation coefficients given by equations (21) requires restarting when the gradients at successive iterations tend to be non-orthogonal (which is a measure of the local non-linearity of the problem) and when the direction of descent is not sufficiently downhill. Restarting is performed by making ψ^k = 0 in equation (16). The non-orthogonality of gradients at successive iterations is tested by using

ABS([∇S(P^(k−1))]^T [∇S(P^k)]) ≥ 0.2 ||∇S(P^k)||²   (22.a)

where ABS(·) denotes the absolute value. A non-sufficiently downhill direction of descent (i.e., the angle between the direction of descent and the negative gradient direction is too large) is identified if either of the following inequalities is satisfied:

[d^k]^T [∇S(P^k)] ≤ −1.2 ||∇S(P^k)||²   (22.b)
or
[d^k]^T [∇S(P^k)] ≥ −0.8 ||∇S(P^k)||²   (22.c)

In the Powell-Beale version of the conjugate gradient method, the direction of descent given by equation (16) is computed in accordance with the following algorithm, for k ≥ 1:

STEP 1: Test the inequality (22.a).
If it is true, set q = k − 1.
STEP 2: Compute γ^k with equation (21.a).
STEP 3: If k = q + 1, set ψ^k = 0. If k ≠ q + 1, compute ψ^k with equation (21.b).
STEP 4: Compute the search direction d^k with equation (16).
STEP 5: If k ≠ q + 1, test the inequalities (22.b,c). If either one of them is satisfied, set q = k − 1 and ψ^k = 0. Then recompute the search direction with equation (16).

The iterative procedure given by equations (9, 14-21) does not provide the conjugate gradient method with the stabilization necessary for the minimization of the objective function (5) to be classified as well-posed. Therefore, as the estimated temperatures approach the measured temperatures containing errors during the minimization of the function (5), large oscillations may appear in the inverse problem solution. However, the conjugate gradient method may become well-posed if the Discrepancy Principle (Alifanov, 1994) is used to stop the iterative procedure. In the discrepancy principle, the iterative procedure is stopped when the following criterion is satisfied:

S(P^(k+1)) < ε   (23)

where the value of the tolerance ε is chosen so that sufficiently stable solutions are obtained. In this case, we stop the iterative procedure when the residuals between measured and estimated temperatures are of the same order of magnitude as the measurement errors, that is,

Y_im − T_im ≈ σ   (24)

where σ is the standard deviation of the measurement errors, which is supposed constant and known. By substituting equation (24) into equation (5), we obtain ε as

ε = I M σ²   (25)

If the measurements are regarded as errorless, the tolerance ε can be chosen as a sufficiently small number, since the expected minimum value for the objective function (5) is zero.

The iterative procedure of the conjugate gradient method can be suitably arranged in the following computational algorithm. Suppose that temperature measurements and an initial guess P^0 are available for the vector of unknown parameters P. Set k = 0 and then

Step 1.
Solve the direct heat transfer problem (1) by using the available estimate P^k and obtain the vector of estimated temperatures T(P^k).

6 Step. Chec the stopping criterion given by equation (). Continue if not satisfied. Step. Compute the sensitivity matrix J defined by equation (7). Step 4. Compute the gradient direction S( P ) from equation (7) and then the conugation coefficients from equations (8), (9), (0) or (). Note that Powell-Beale s version requires restarting if any of the inequalities (.a-c) are satisfied. Step 5. Compute the direction of descent d by using equation (6). Step 6. Compute the search step size β from equation (5). Step 7. Compute the increment P with equation (4) and then the new estimate P + with equation (9). Replace by + and return to step. RESULS AND DISCUSSIONS For the results presented below, we assumed the solid with unnown thermal conductivities to be available in the form of a cube, so that a = b = c =. Also, the heat fluxes applied on the boundaries x = a =, y = b = and z = c = were assumed to be of equal magnitude during the heating period 0 < t t h, so that q = in equation (), for =,,. Different experimental variables, such as the number and locations of sensors, heating time and final time, optimally chosen by Meias et al (999), were used here for the analysis of test-cases involving different values for the unnown thermal conductivity components. he test-cases examined here are summarized in table, together with their respective optimal values of heating and final times. he readings of three sensors (M=) located at the positions (0,0.9,0.9), (0.9,0,0.9) and (0.9,0.9,0) were used for the estimation of the unnown thermal conductivity components. One hundred simulated measurements per sensor, containing additive, uncorrelated and normally distributed errors with zero mean and constant standard-deviation, were assumed available for the estimation procedure. 
The computational algorithms presented above for the Levenberg-Marquardt method, as well as for the different versions of the conjugate gradient method, were programmed in FORTRAN 90 and applied to the estimation of the parameters shown in Table 1, by using simulated measurements with standard deviations of σ = 0 (errorless measurements) and σ = 0.01 T_max, where T_max is the maximum measured temperature. For the comparison presented below, we also considered the IMSL (1987) version of the Levenberg-Marquardt method, in the form of the subroutine DBCLSJ. Lower and upper limits for the unknown parameters, required by such a subroutine, were taken as 10^(-3) and 10^4, respectively. The same limits were also considered in the implementation of the other estimation techniques. Table 2 summarizes the techniques used in the present paper.

Tables 3 to 5 present the results obtained for the CPU time, average rate of reduction of the objective function with respect to the number of iterations (r) and RMS error (e_RMS), obtained with each of the techniques summarized in Table 2, for the test-cases 1 to 3, respectively. The computations were performed on a Pentium microcomputer, under the Fortran PowerStation platform. The results presented in Tables 3 and 4 were obtained with an initial guess P^0 = [0.1, 0.1, 0.1] for the unknown parameters, while an initial guess of P^0 = [0.5, 0.5, 0.5] was required for convergence of any of the methods for test-case 3 (Table 5). For those cases involving simulated measurements with random errors, the results shown in Tables 3-5 were averaged over 10 different runs, in order to reduce the effects of the random number generator utilized. The average rates of reduction of the objective function (r) were obtained from the following approximation for the variation of S(P) with the number of iterations (N):

S(P) = C N^(−r)   (26)

where C is a constant depending on the data. The RMS errors were computed as

e_RMS = { (1/3) Σ_{j=1}^{3} (k_{ex,j} − k_{est,j})² }^(1/2)   (27)

where the subscripts ex and est refer to the exact and estimated parameters, respectively.

Table 1. Test-cases with respective heating and final times (case number, exact parameters k1, k2, k3, t_h and t_f).

Table 2. Techniques used for the estimation of the unknown parameters.
Technique   Method                      Version
A1          Levenberg-Marquardt         This paper
B1          Levenberg-Marquardt         IMSL
A2          Conjugate Gradient Method   Fletcher-Reeves
B2          Conjugate Gradient Method   Polak-Ribiere
C2          Conjugate Gradient Method   Hestenes-Stiefel
D2          Conjugate Gradient Method   Powell-Beale

The tolerances for the stopping criteria of technique A1 were taken as ε1 = ε2 = ε3 = 10^(-5) in equations (11.a-c), for cases involving errorless measurements as well as measurements with random errors. For those cases involving errorless measurements, the tolerances for the stopping criterion of techniques A2-D2 were taken as ε = 10^(-5) in equation (23). For those cases involving measurements with random errors, such tolerances for techniques A2-D2 were obtained from

equation (25), based on the discrepancy principle. We note that the tolerances for the stopping criteria of technique B1 were set internally by the subroutine DBCLSJ of the IMSL (1987).

Table 3. Results for test-case 1: CPU time (s), r and e_RMS for techniques A1-D2, with σ = 0 and σ = 0.01 T_max.

Table 4. Results for test-case 2: CPU time (s), r and e_RMS for techniques A1-D2, with σ = 0 and σ = 0.01 T_max.

Table 5. Results for test-case 3: CPU time (s), r and e_RMS for techniques A1-D2, with σ = 0 and σ = 0.01 T_max.

Let us first consider, in the analysis of Tables 3-5, the cases involving errorless measurements (σ = 0). Tables 3-5 show that all the techniques were able to estimate exactly the three different sets of unknown parameters examined, resulting in e_RMS = 0. The highest rates of reduction of the objective function were obtained with the Levenberg-Marquardt method and, for such a method, technique B1 had a better performance than technique A1, except for one of the test-cases, where both techniques were equivalent. The rates of reduction of the objective function were of the order of 10 (minimum of 8) or higher for the Levenberg-Marquardt method, while such rates were substantially smaller (maximum of 4) for the conjugate gradient method. The CPU times were generally smaller for the Levenberg-Marquardt method (techniques A1 and B1) than for the conjugate gradient method (techniques A2-D2).

Tables 3-5 show a strong reduction of the rate of reduction of the objective function when measurements with random errors were used in the analysis, instead of errorless measurements. Such a behavior was also observed in a non-linear function estimation problem (Colaço and Orlande, 1999). As in the cases with errorless measurements, the Levenberg-Marquardt method had a performance superior to that of the conjugate gradient method, in terms of CPU time and rate of reduction of the objective function, for the cases with measurements containing random errors.
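The average rate of reduction r reported in the tables follows from fitting equation (26) to the history of the objective function; in log-log coordinates this is an ordinary linear regression, whose slope is −r. A sketch (function name ours):

```python
import math

def reduction_rate(s_values):
    # Fit S = C * N**(-r) to the objective-function history S(N),
    # N = 1, ..., len(s_values), by linear least squares on
    # log S versus log N (equation 26); return r.
    pts = [(math.log(n + 1), math.log(s)) for n, s in enumerate(s_values)]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -slope
```

For an exactly power-law history S = C N^(−r), the fit recovers r to machine precision.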
All techniques resulted in accurate estimates for the unknown parameters when measurements with random errors were used in the analysis, but relatively high RMS errors were noticed with techniques A2 and C2 for one of the test-cases. We note that the use of the discrepancy principle was not required to provide the Levenberg-Marquardt method with the regularization necessary to obtain stable solutions for those cases involving measurements with random errors. The computational experiments revealed that the Levenberg-Marquardt method, through its automatic control of the damping parameter μ^k, drastically reduced the increment in the vector of estimated parameters at the iteration where the measurement errors started to cause instabilities in the inverse problem solution. The iterative procedure of the Levenberg-Marquardt method was then stopped by the criterion given by equation (11.c). For some cases we also noticed that the iterative procedure of the Levenberg-Marquardt method was stopped by the criterion (11.b) when measurements with random errors were used in the analysis, that is, the norm of the gradient vector became very small.

Table 6 is prepared to illustrate the effect of the initial guess for the parameters, i.e., P^0, on the rate of reduction of the objective function. Three different initial guesses were considered for a single test-case, by using errorless measurements in the analysis. Table 6 shows that, although the rate of reduction jumped to 4 for technique A2 for one of the larger initial guesses, the values of such a rate were otherwise insensitive to the three initial guesses, with all the techniques examined in this paper. We note that techniques A2-D2, based on the conjugate gradient method, did not converge to the exact parameters for sufficiently large initial guesses. On the other hand, techniques A1 and B1, based on the Levenberg-Marquardt method, were able to converge to the exact parameters even with initial guesses as large as P^0 = [10, 10, 10].
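For reference, the conjugation coefficients that distinguish the Fletcher-Reeves, Polak-Ribiere and Hestenes-Stiefel versions compared above can be written compactly as below (a sketch; the arguments are the gradients ∇S at the current and previous iterations and the previous descent direction). With an exact line search, the three expressions coincide whenever successive gradients are orthogonal:

```python
def dot(u, v):
    # Euclidean inner product of two parameter-space vectors.
    return sum(a * b for a, b in zip(u, v))

def gamma_fletcher_reeves(g_new, g_old):
    # Equation (18.a): ratio of squared gradient norms.
    return dot(g_new, g_new) / dot(g_old, g_old)

def gamma_polak_ribiere(g_new, g_old):
    # Equation (19.a): change in the gradient enters the numerator.
    diff = [a - b for a, b in zip(g_new, g_old)]
    return dot(g_new, diff) / dot(g_old, g_old)

def gamma_hestenes_stiefel(g_new, g_old, d_old):
    # Equation (20.a): normalized by the previous descent direction.
    diff = [a - b for a, b in zip(g_new, g_old)]
    return dot(g_new, diff) / dot(d_old, diff)
```

For example, with g_old = [1, 0], g_new = [0, 2] (orthogonal, as after an exact line search) and d_old = −g_old, all three coefficients evaluate to 4.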

Table 6. Effect of the initial guess on the rate of reduction of the objective function, for techniques A and B of the Levenberg-Marquardt method and A, B, C and D of the conjugate gradient method. (Numerical entries not recoverable from the transcription.)

CONCLUSIONS

In this paper we compared six different parameter estimation techniques, based on the Levenberg-Marquardt method and on the conjugate gradient method, for the identification of the three thermal conductivity components of orthotropic solids. The techniques were compared in terms of the rate of reduction of the objective function, CPU time and accuracy of the estimated parameters. The foregoing analysis reveals that, besides yielding the highest rates of reduction of the objective function, the Levenberg-Marquardt method also resulted in the smallest CPU times and in the smallest RMS errors of the estimated parameters. The method was able to converge to the exact parameters even for initial guesses quite far from the exact values. Hence, the Levenberg-Marquardt method appears to be the best, among those tested, for the estimation of the three thermal conductivity components of orthotropic solids.

ACKNOWLEDGMENTS

The solution of the direct problem, given by equations () and (4) as a superposition of three one-dimensional solutions, was derived by Prof. M. D. Mikhailov, who is currently a Visiting Professor in the Department of Mechanical Engineering of the Federal University of Rio de Janeiro.

REFERENCES

Alifanov, O.M., 1994, Inverse Heat Transfer Problems, Springer-Verlag.
Beck, J.V. and Arnold, K.J., 1977, Parameter Estimation in Engineering and Science, Wiley, New York.
Colaço, M.J. and Orlande, H.R.B., 1999, A Comparison of Different Versions of the Conjugate Gradient Method of Function Estimation, Num. Heat Transfer, Part A (in press).
Daniel, J.W., 1971, The Approximate Minimization of Functionals, Prentice-Hall, Englewood Cliffs.
Dowding, K.J., Beck, J.V. and Blackwell, B., 1996, Estimation of Directional-Dependent Thermal Properties in a Carbon-Carbon Composite, Int. J. Heat Mass Transfer, Vol. 39, No. 15, pp. 3157-3164.
Dowding, K., Beck, J.V., Ulbrich, A., Blackwell, B. and Hayes, J., 1995, Estimation of Thermal Properties and Surface Heat Flux in Carbon-Carbon Composite, J. of Thermophysics and Heat Transfer, Vol. 9, No. 2.
Fletcher, R. and Reeves, C.M., 1964, Function Minimization by Conjugate Gradients, Computer J., Vol. 7, pp. 149-154.
Hestenes, M.R. and Stiefel, E., 1952, Methods of Conjugate Gradients for Solving Linear Systems, J. Research Nat. Bur. Standards, Vol. 49, pp. 409-436.
IMSL Library, 1987, Edition 10, Houston, Texas.
Levenberg, K., 1944, A Method for the Solution of Certain Non-linear Problems in Least Squares, Quart. Appl. Math., Vol. 2, pp. 164-168.
Marquardt, D.W., 1963, An Algorithm for Least Squares Estimation of Nonlinear Parameters, J. Soc. Ind. Appl. Math., Vol. 11, pp. 431-441.
Mejias, M.M., Orlande, H.R.B. and Ozisik, M.N., 1999, Design of Optimum Experiments for the Estimation of the Thermal Conductivity Components of Orthotropic Solids, Hybrid Methods in Engineering, Vol. 1, No. 1.
Mikhailov, M.D. and Ozisik, M.N., 1994, Unified Analysis and Solutions of Heat and Mass Diffusion, Dover, New York.
Moré, J.J., 1977, The Levenberg-Marquardt Algorithm: Implementation and Theory, in Numerical Analysis, Lecture Notes in Mathematics, Vol. 630, G.A. Watson, ed., Springer-Verlag, Berlin, pp. 105-116.
Ozisik, M.N. and Orlande, H.R.B., 1999, Inverse Heat Transfer: Fundamentals and Applications, Taylor & Francis, Pennsylvania.
Özisik, M.N., 1993, Heat Conduction, 2nd ed., Wiley, New York.
Powell, M.J.D., 1977, Restart Procedures for the Conjugate Gradient Method, Mathematical Programming, Vol. 12, pp. 241-254.
Sawaf, B.W. and Ozisik, M.N., 1995, Determining the Constant Thermal Conductivities of Orthotropic Materials, Int. Comm. Heat Mass Transfer, Vol. 22, pp. 201-211.
Sawaf, B., Özisik, M.N. and Jarny, Y., 1995, An Inverse Analysis to Estimate Linearly Temperature Dependent Thermal Conductivity Components and Heat Capacity of an Orthotropic Medium, Int. J. Heat Mass Transfer, Vol. 38, No. 16.
Taktak, R., Beck, J.V. and Scott, E.P., 1993, Optimal Experimental Design for Estimating Thermal Properties of Composite Materials, Int. J. Heat Mass Transfer, Vol. 36, No. 12, pp. 2977-2986.

Copyright 1999 by ASME
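The Levenberg-Marquardt iteration favored by this comparison minimizes the least-squares norm S(P) = [Y - T(P)]^T [Y - T(P)] by solving damped normal equations built from the sensitivity matrix J. The sketch below is a generic illustration of that update on a simple surrogate model (an exponential decay with three hypothetical parameters), not the paper's orthotropic heat conduction solution; the function names and step-size heuristics are illustrative assumptions.

```python
import numpy as np

def levenberg_marquardt(model, t, Y, P0, mu=1e-3, max_iter=200, tol=1e-12):
    """Estimate parameters P by minimizing S(P) = [Y - T(P)]^T [Y - T(P)]
    with the damped update [J^T J + mu * diag(J^T J)] dP = J^T [Y - T(P)]."""
    P = np.asarray(P0, dtype=float)

    def S(P):
        r = Y - model(t, P)
        return r @ r

    cost = S(P)
    for _ in range(max_iter):
        r = Y - model(t, P)
        # Sensitivity matrix J[i, j] = dT_i/dP_j by forward finite differences
        J = np.empty((r.size, P.size))
        for j in range(P.size):
            h = 1e-7 * max(abs(P[j]), 1.0)
            Ph = P.copy()
            Ph[j] += h
            J[:, j] = (model(t, Ph) - model(t, P)) / h
        A = J.T @ J
        dP = np.linalg.solve(A + mu * np.diag(np.diag(A)), J.T @ r)
        new_cost = S(P + dP)
        if new_cost < cost:          # accept the step, relax the damping
            P, cost = P + dP, new_cost
            mu = max(mu / 10.0, 1e-12)
            if cost < tol:
                break
        else:                        # reject the step, increase the damping
            mu *= 10.0
    return P

# Recover three parameters of a surrogate model from noise-free synthetic
# "measurements" (a stand-in for the simulated temperature data of the paper).
def model(t, P):
    k1, k2, k3 = P
    return k1 * np.exp(-k2 * t) + k3

t = np.linspace(0.0, 2.0, 50)
P_exact = np.array([2.0, 1.5, 0.5])
Y = model(t, P_exact)
P_est = levenberg_marquardt(model, t, Y, P0=[1.0, 1.0, 1.0])
print(np.round(P_est, 4))
```

Rejected steps raise the damping parameter toward a steepest-descent-like step, while accepted steps lower it toward a Gauss-Newton step, which is what allows convergence from initial guesses far from the exact parameters.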


More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Optimization: Nonlinear Optimization without Constraints. Nonlinear Optimization without Constraints 1 / 23

Optimization: Nonlinear Optimization without Constraints. Nonlinear Optimization without Constraints 1 / 23 Optimization: Nonlinear Optimization without Constraints Nonlinear Optimization without Constraints 1 / 23 Nonlinear optimization without constraints Unconstrained minimization min x f(x) where f(x) is

More information

An Inverse Boundary Design Problem in a Radiant Enclosure

An Inverse Boundary Design Problem in a Radiant Enclosure he 6 th. International Chemical Engineering Congress & Exhibition IChEC 009 6 0 November 009, ish Island, Iran An Inverse Boundary Design Problem in a Radiant Enclosure A. arimipour *, S.M.H. Sarvari,

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods

More information

Conjugate Gradient Method

Conjugate Gradient Method Conjugate Gradient Method Hung M Phan UMass Lowell April 13, 2017 Throughout, A R n n is symmetric and positive definite, and b R n 1 Steepest Descent Method We present the steepest descent method for

More information

AN EVOLUTIONARY ALGORITHM TO ESTIMATE UNKNOWN HEAT FLUX IN A ONE- DIMENSIONAL INVERSE HEAT CONDUCTION PROBLEM

AN EVOLUTIONARY ALGORITHM TO ESTIMATE UNKNOWN HEAT FLUX IN A ONE- DIMENSIONAL INVERSE HEAT CONDUCTION PROBLEM Proceedings of the 5 th International Conference on Inverse Problems in Engineering: Theory and Practice, Cambridge, UK, 11-15 th July 005. AN EVOLUTIONARY ALGORITHM TO ESTIMATE UNKNOWN HEAT FLUX IN A

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS

SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS HONOUR SCHOOL OF MATHEMATICS, OXFORD UNIVERSITY HILARY TERM 2005, DR RAPHAEL HAUSER 1. The Quasi-Newton Idea. In this lecture we will discuss

More information