© 2013 Society for Industrial and Applied Mathematics


SIAM J. MATRIX ANAL. APPL. Vol. 34, No. 3, pp. 1213-1229

χ² TESTS FOR THE CHOICE OF THE REGULARIZATION PARAMETER IN NONLINEAR INVERSE PROBLEMS

J. L. MEAD AND C. C. HAMMERQUIST

Abstract. We address discrete nonlinear inverse problems with weighted least squares and Tikhonov regularization. Regularization is a way to add more information to the problem when it is ill-posed or ill-conditioned. However, it is still an open question how to weight this information. The discrepancy principle considers the residual norm to determine the regularization weight or parameter, while the χ² method [J. Mead, J. Inverse Ill-Posed Probl., 16 (2008); J. Mead and R. A. Renaut, Inverse Problems, 25 (2009), 025002; J. Mead, Appl. Math. Comput.; R. A. Renaut, I. Hnetynkova, and J. L. Mead, Comput. Statist. Data Anal., 54 (2010)] uses the regularized residual. Using the regularized residual has the benefit of giving a clear χ² test with a fixed noise level when the number of parameters is equal to or greater than the number of data. Previous work with the χ² method has been for linear problems, and here we extend it to nonlinear problems. In particular, we determine the appropriate χ² tests for the Gauss-Newton and Levenberg-Marquardt algorithms, and these tests are used to find a regularization parameter or weights on initial parameter estimate errors. The algorithm is applied to a two-dimensional cross-well tomography problem and a one-dimensional electromagnetic problem from [R. C. Aster, B. Borchers, and C. Thurber, Parameter Estimation and Inverse Problems, Academic Press, New York, 2005].

Key words. least squares, regularization, nonlinear, covariance

AMS subject classifications. 92E24, 65F22, 65H10, 62J02

1. Introduction. We address nonlinear inverse problems of the form F(x) = d, where d ∈ R^m represents measurements and F is a nonlinear model that depends on parameters x ∈ R^n.
It is fairly straightforward, although possibly computationally demanding, to recover parameter estimates x from data d for a linear operator F = A when A is well-conditioned. If A is ill-conditioned, constraints or penalties are typically added to the problem, and the problem is regularized. The most common regularization technique is Tikhonov regularization, where least squares is used to minimize

(1.1)    ||d - Ax||_2^2 + α² ||Lx||_2^2.

The minimum occurs at

    x̂ = (A^T A + α² L^T L)^{-1} A^T d,

if the invertibility condition N(A) ∩ N(L) = {0} is satisfied. When x̂ is written explicitly in this manner, it is clear how an appropriate choice of α creates a well-conditioned problem. Popular methods for choosing α are

(Received by the editors July 13, 2012; accepted for publication (in revised form) by J. G. Nagy May 3, 2013; published electronically August 6, 2013. This work was supported by an NSF grant (DMS). Department of Mathematics, Boise State University, Boise, ID (jmead@boisestate.edu, chammerquist@gmail.com).)

the discrepancy principle, the L-curve, and generalized cross validation (GCV) [5], while L can be chosen to be the identity matrix or a discrete approximation to a derivative operator. The problem with this approach is that large values of α may significantly change the problem or create undesired smooth solutions. Nonlinear problems have the same issues but carry the added burden of linear approximations [1]. The parameter estimate of the regularized nonlinear problem minimizes

(1.2)    ||d - F(x)||_2^2 + α² ||Lx||_2^2.

In [3] it is shown that this regularized problem is stable and that there is a weak connection between the ill-posedness of the nonlinear problem and the linearization. The authors of [3] and [12] prove the well-posedness of the regularized problem and convergence of regularized solutions.

The least squares estimate can be viewed as the maximum a posteriori (MAP) estimate [13] when the data and the initial parameter estimate x_p have errors that are normally distributed with error covariances C_ε and C_f, respectively. Regularization in this case takes the form of adding a priori information about the parameters. The regularized maximum likelihood estimate minimizes the following objective function:

(1.3)    J = (d - F(x))^T C_ε^{-1} (d - F(x)) + (x - x_p)^T C_f^{-1} (x - x_p).

The difficulty with this viewpoint is that the distributions of the data and initial parameter errors may be unknown, or not normal. In addition, good estimates of C_ε and C_f can be difficult to acquire. However, different norms or functionals can be sampled which represent error distributions other than normal, and scientific knowledge can be used to form prior error covariance matrices. Empirical Bayes methods [2] can also be used to find hyperparameters, i.e., parameters which define the prior distribution. In this work we use the least squares estimator and weight the squared errors with inverse covariance matrices but do not assume the errors are normally distributed.
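The closed form x̂ = (A^T A + α² L^T L)^{-1} A^T d from (1.1) is easy to check numerically. The following sketch (NumPy; the function name, matrix sizes, and test data are illustrative assumptions, not from the paper) solves the Tikhonov problem through its normal equations:

```python
import numpy as np

def tikhonov_solve(A, d, alpha, L=None):
    """Minimize ||d - A x||_2^2 + alpha^2 ||L x||_2^2 via the normal equations."""
    n = A.shape[1]
    L = np.eye(n) if L is None else L
    # x_hat = (A^T A + alpha^2 L^T L)^{-1} A^T d
    return np.linalg.solve(A.T @ A + alpha**2 * (L.T @ L), A.T @ d)

# Made-up well-conditioned test problem
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
d = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = tikhonov_solve(A, d, alpha=0.1)
```

As the text notes, a large α dominates the data term: sending alpha to a huge value shrinks the solution toward zero (for L = I), which is exactly the "undesired smooth solutions" risk.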
Matrix weights alleviate the problem of undesired smooth solutions because they give piecewise smooth solutions; thus discontinuous parameters can be obtained. In preliminary work [7, 9] diagonal matrix weights C_f were found, and future work involves more robust methods for finding C_f. In this initial work on nonlinear problems, we approximate the unknown error covariance matrix with a scalar times the identity matrix, but future work involves finding denser matrices. To find the scalar we use the χ² method, which is similar to the discrepancy principle. The discrepancy principle can be viewed as a χ² test on the data residual, but this is not possible when the number of parameters is greater than or equal to the number of data because the degrees of freedom in the χ² test are zero or negative. This issue is typically not recognized because the degrees of freedom in the discrepancy principle are often taken to be m, the number of data, while they should be reduced by the number of parameters, i.e., reduced to m - n. The reduction is necessary because parameter estimates are found using the data, and hence their dependency should be subtracted from the degrees of freedom. This difference can be significant when the number of parameters differs significantly from the number of data. Alternatively, we suggest applying the χ² test to the regularized residual (1.1), in which case the degrees of freedom are equal to the number of data [1]. This is called the χ² method, and previous work with the method applied to linear problems has been done in [7, 8, 9, 11].

The χ² method is described in section 2, where we also develop it for nonlinear problems. In this work χ² tests are developed for the Gauss-Newton and Levenberg-Marquardt algorithms, rather than for the nonlinear functional, because we have found that the statistical tests on the theoretical functional are not satisfied when the optimum value is approximated. We will explain how iterations in these nonlinear algorithms form a sequence of linear least squares problems and show how to apply the χ² principle at each iterate. From these tests we determine a regularization parameter, and this can be viewed as an estimate of the error covariance matrices. In section 3 we give results on benchmark problems from [1] and compare the nonlinear χ² method to other methods. In section 4 we give conclusions and future work.

2. Nonlinear χ² method. We formulate the problem as minimizing weighted errors in a least squares sense, where the errors ε and f are defined as

(2.1)    d = F(x) + ε,
(2.2)    x = x_p + f.

The weighted least squares fit of the measurements d and initial parameter estimate x_p is found by minimizing the objective function in (1.3). This means the weights are chosen to be the inverse covariance matrices for the respective errors. Statistical information comes in the form of error covariance matrices, but they are used only to weight the errors or residuals and are not used in the sense of Bayesian inference because the errors may not be Gaussian. An initial estimate x_p is required for this method, but this often comes from some understanding of the problem. The goal is then to find a way to weight this initial estimate. The elements in C_f should reflect whether x_p is a good or poor estimate and ideally quantify the correlation in the parameter errors. Estimation of C_f is thus a challenge with limited understanding of the parameters, but regularization methods are an approach to its approximation. In this work we use a modified form of the discrepancy principle, called the χ² method [7], to estimate it.
It is well known that when the objective function (1.3) is applied to a linear model, it follows a χ² distribution [1, 7], and this fact is often used to check the validity of estimates. This property is given in the following theorem.

Theorem 1. If F(x) = Ax, and ε and f are independent and identically distributed random variables from unknown distributions with E(ε) = E(f) = 0, E(εε^T) = C_ε, E(ff^T) = C_f, and E(εf^T) = 0, and the number of data m is large, then the minimum value of (1.3) asymptotically follows a χ² distribution with m degrees of freedom.

A proof of this is given in [7]. In the case that x_p is not the mean of x, (1.3) follows a noncentral χ² distribution, as shown in [11]. The requirement that E(εf^T) = 0 assumes that the errors in d and x_p are uncorrelated, and hence we do not get x_p from d.

The χ² method is used to find weights for misfits in the initial parameter estimates, C_f, or data, C_ε, and is based on the χ² tests given in Theorem 1. The method can also be used to find weights for the regularization term when it contains a first or second derivative operator L. In this case the inverse covariance matrix is viewed as weighting the error in an initial estimate of the appropriate derivative. The minimum value of (1.3) is J(x̂) = r^T (A C_f A^T + C_ε)^{-1} r with r = d - A x_p [7]. Its expected value is the number of degrees of freedom, which is the number of measurements m in d. The χ² method thus estimates the scalar weight C_f

or C_ε by solving the nonlinear equation

(2.3)    r^T (A C_f A^T + C_ε)^{-1} r = m.

For example, with d given by the data and x_p an initial parameter estimate (possibly 0), if we have estimates of the data uncertainty C_ε, we can solve for C_f (alternatively, we can solve for C_ε given C_f). This differs from the discrepancy principle in that the regularized residual is used for the χ² test rather than just the data residual. One advantage of this approach over the discrepancy principle is that it can be applied when the number of parameters is greater than or equal to the number of data. Results from the χ² method for the case C_f = σ_f² I, i.e., the scalar χ² method, are given in [8, 11]. It is shown there that this is an attractive alternative to the L-curve, GCV, and the discrepancy principle, among other methods. Preliminary work on denser estimates of C_f or C_ε has been done in [7, 9], and future work involves efficient solution of nonlinear systems similar to (2.3) to estimate denser C_f or C_ε.

Here we extend the estimation of C_f or C_ε by χ² tests to nonlinear problems. It has been shown that the nonlinear data residual has properties similar to those of the linear data residual [1]. In particular, it behaves like a χ² random variable with m - n degrees of freedom. Here we show that when Newton-type methods are used to find the minimum of (1.3), the corresponding cost functions at each iterate are also χ², and we give the χ² tests for both the Gauss-Newton and Levenberg-Marquardt methods.

2.1. Gauss-Newton method. The Gauss-Newton method uses Newton's method to estimate the minimum value of a function. When used to find parameters that minimize (1.3), the following estimate is obtained at each iterate:

(2.4)    x_{k+1} = x_k + (J_k^T C_ε^{-1} J_k + C_f^{-1})^{-1} (J_k^T C_ε^{-1} r_k - C_f^{-1} Δx_k),

where J_k is the Jacobian of F(x) about the previous estimate x_k, r_k = d - F(x_k), and Δx_k = x_k - x_p. The iteratively regularized Gauss-Newton method updates the regularization parameter at each estimate so that C_f^{-1} = α_k I.
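Returning to (2.3): when C_f = σ_f² I it is a scalar root-finding problem, since the left side decreases monotonically in σ_f², so bisection works. A minimal sketch (NumPy; the function name, bracket, and sizes are illustrative assumptions, not from the paper), deliberately set up with more parameters than data, the case where the discrepancy principle has no valid χ² test:

```python
import numpy as np

def chi2_scalar_weight(A, r, C_eps, lo=1e-12, hi=1e12, iters=200):
    """Solve r^T (s * A A^T + C_eps)^{-1} r = m for s = sigma_f^2, eq. (2.3).
    The left side decreases monotonically in s, so use geometric bisection."""
    m = len(r)
    g = lambda s: r @ np.linalg.solve(s * (A @ A.T) + C_eps, r) - m
    if g(lo) < 0 or g(hi) > 0:
        raise ValueError("chi^2 equation has no root in the bracket")
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return np.sqrt(lo * hi)

rng = np.random.default_rng(0)
m, n = 30, 40                          # more parameters than data
A = rng.standard_normal((m, n))
C_eps = 0.1**2 * np.eye(m)
# r = d - A x_p = A f + eps, with f ~ N(0, 0.5^2 I) and eps ~ N(0, C_eps)
r = A @ (0.5 * rng.standard_normal(n)) + 0.1 * rng.standard_normal(m)
s_hat = chi2_scalar_weight(A, r, C_eps)   # estimate of sigma_f^2
```

At the returned s_hat the regularized residual equals its expected value m, which is the defining property of the χ² method.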
At each iterate, this estimate can be viewed as minimizing the following functional:

(2.5)    J̃_k(x) = (d̃_k - J_k x)^T C_ε^{-1} (d̃_k - J_k x) + α_k (x - x_p)^T (x - x_p),

with d̃_k = d - F(x_k) + J_k x_k. With this view, at each iterate the Gauss-Newton functional minimizes weighted errors ε̃_k and f defined by

(2.6)    d̃_k = J_k x + ε̃_k,
(2.7)    x = x_p + f,

where ε̃_k = υ_k + ε, υ_k represents the error in the linear approximation at step k, i.e., υ_k = F(x) - F(x_k) - J_k (x - x_k), and E(ff^T) = α_k^{-1} I.

Inverse methods typically identify error in the data, but it is less common to account for error in the numerical approximation of the model. In most applications, bias in the data error is subtracted out so that E(ε) = 0 is a reasonable assumption. In the following theorem we will assume that E(ε̃_k) = 0, which is not valid unless we remove the bias in

the linearization error. Estimating bias in the linearization error is a topic for future work; however, we continue with the theoretical development using this assumption. The fact that the regularization parameter is iteratively adjusted helps to alleviate the effects of this linearization error.

The linear cost function (2.5) is the basis for the linear χ² method, which states that

    ||d̃ - J x̂||²_{C_ε̃^{-1}} + ||x̂ - x_p||²_{C_f^{-1}} = r^T P^{-1} r ~ χ²_m

with P = J C_f J^T + C_ε̃ and r = d̃ - J x_p. The Gauss-Newton method can be viewed as optimizing the linear cost function at each iterate, but the algorithm is not completely linear in that the optimal estimate at each iterate is x_{k+1} in (2.4) rather than x̂. The difference is that x_{k+1} depends on the estimate at the previous iteration; i.e., x̂ is (2.4) with x_k = 0. We will show in the following theorem that under specific assumptions, the linear cost function at the Gauss-Newton iterate also follows a χ² distribution with m degrees of freedom:

    ||d̃_k - J_k x_{k+1}||²_{C_ε̃_k^{-1}} + ||x_{k+1} - x_p||²_{C_f^{-1}} = (r_k + J_k Δx_k)^T P_k^{-1} (r_k + J_k Δx_k) ~ χ²_m.

Theorem 2. Let E(ε̃_k) = E(f) = 0, E(ε̃_k f^T) = 0, E(ε̃_k ε̃_k^T) = C_ε̃_k, and E(ff^T) = C_f; then the minimum value of (2.5) at each iterate is

(2.8)    J̃_k(x_{k+1}) = (r_k + J_k Δx_k)^T P_k^{-1} (r_k + J_k Δx_k)

with P_k = J_k C_f J_k^T + C_ε̃_k, and J̃_k(x_{k+1}) follows a χ² distribution with m degrees of freedom.

Proof. The estimate x_{k+1} is given by (2.4). Define Q_k = J_k^T C_ε̃_k^{-1} J_k + C_f^{-1} so that

(2.9)    x_{k+1} = x_k + Q_k^{-1} (J_k^T C_ε̃_k^{-1} r_k - C_f^{-1} Δx_k).

The linear cost function (2.5) at this estimate is

(2.10)   J̃_k(x_{k+1}) = r_k^T P_k^{-1} r_k + 2 Δx_k^T J_k^T P_k^{-1} r_k + Δx_k^T (C_f^{-1} - C_f^{-1} Q_k^{-1} C_f^{-1}) Δx_k.

Simplifying further, we note that

    C_f^{-1} - C_f^{-1} Q_k^{-1} C_f^{-1} = C_f^{-1} Q_k^{-1} (Q_k - C_f^{-1}) = C_f^{-1} Q_k^{-1} J_k^T C_ε̃_k^{-1} J_k = J_k^T P_k^{-1} J_k,

where the last equality uses the fact that Q_k^{-1} J_k^T C_ε̃_k^{-1} = C_f J_k^T P_k^{-1}. Now we have the quadratic form

    J̃_k(x_{k+1}) = (r_k + J_k Δx_k)^T P_k^{-1} (r_k + J_k Δx_k) = z_k^T z_k,

where z_k = P_k^{-1/2} (r_k + J_k Δx_k) and r_k + J_k Δx_k = d̃_k - J_k x_p. The square root of P_k is defined since C_ε̃_k and C_f are covariance matrices, and hence P_k is symmetric positive definite.

The mean of z_k is E(z_k) = P_k^{-1/2} (E(ε̃_k) + J_k E(f)), which is zero under our assumptions. In addition, the covariance of z_k is

    E(z_k z_k^T) = P_k^{-1/2} (J_k C_f J_k^T + C_ε̃_k) P_k^{-1/2} = I.

Note that C_ε̃_k = C_υ_k + C_ε, and C_υ_k can be approximated using the Hessian H_k at each iterate; i.e.,

(2.11)   C_υ_k ≈ (H_k/2) cov((x - x_k)²) (H_k/2)^T.

2.2. Occam's method. Occam's method estimates parameters that minimize the nonlinear functional (1.3) by updating x_p with x_k found by minimizing the linear functional (2.5). The regularization parameter is chosen according to the discrepancy principle at each step, and C_f^{-1} = α² L^T L with the operator L typically chosen to represent the second derivative [1]. The discrepancy principle finds the value of α for which ||d - F(x_{k+1})||²_{C_ε^{-1}} ≈ δ². The goal in Occam's inversion is to find smooth parameters that fit the data [4]. When x_p, and hence the second derivative estimate, is chosen to be 0, smooth results are obtained. Guessing the second derivative of the parameter values to be zero may be a good choice for prior information if little is known about x_p. The discrepancy principle is then applied, where a χ² test for the data residual is used to find the weights α_k. The χ² test in Theorem 2 differs from Occam's inversion in that the regularized residual is used to find the weights α_k, and the Gauss-Newton update rather than the optimum of the linearized cost function is used for the test.

When the operator L is not full rank, the degrees of freedom in the χ² test change [8]. This leads to the following corollary to Theorem 2.

Corollary 3. Let

(2.12)   J̃_k^L(x) = (d̃_k - J_k x)^T C_ε̃_k^{-1} (d̃_k - J_k x) + (L(x - x_p))^T C_{Lf}^{-1} (L(x - x_p)),

and assume the invertibility condition

(2.13)   N(J_k) ∩ N(L) = {0},

where N(J_k) is the null space of the matrix J_k. In addition, let the assumptions in Theorem 2 hold except that E(Lf (Lf)^T) = C_{Lf}. If L has rank q, the minimum value of the functional (2.12) follows a χ² distribution with m - n + q degrees of freedom.

Proof. The proof for linear problems is given in [8].

2.3. Levenberg-Marquardt method.
With the Gauss-Newton method it may be the case that the objective function does not decrease, or it may decrease slowly at each iteration. The solution can be incremented in the direction of steepest descent by introducing a damping parameter λ_k. The iterate in this case at each step is

(2.14)   x_{k+1} = x_k + (J_k^T C_ε^{-1} J_k + C_f^{-1} + λ_k I)^{-1} (J_k^T C_ε^{-1} r_k - C_f^{-1} Δx_k).
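The update (2.14) can be sketched directly. The helper below (NumPy; the name, sizes, and test matrices are illustrative assumptions) reduces to the Gauss-Newton step (2.4) when λ_k = 0, and a positive λ_k provably shortens the increment, since Q_k and Q_k + λ_k I share eigenvectors and the damped inverse has strictly smaller eigenvalues:

```python
import numpy as np

def lm_step(J, r, dx, Ce_inv, Cf_inv, lam):
    """One Levenberg-Marquardt increment, eq. (2.14); lam = 0 gives the
    Gauss-Newton increment of eq. (2.4). Here dx = x_k - x_p."""
    H = J.T @ Ce_inv @ J + Cf_inv + lam * np.eye(J.shape[1])
    g = J.T @ Ce_inv @ r - Cf_inv @ dx
    return np.linalg.solve(H, g)

# Made-up linearized problem at some iterate k
rng = np.random.default_rng(2)
m, n = 8, 5
J = rng.standard_normal((m, n))
r = rng.standard_normal(m)
dx = rng.standard_normal(n)
Ce_inv = np.eye(m)
Cf_inv = 0.5 * np.eye(n)

gn = lm_step(J, r, dx, Ce_inv, Cf_inv, 0.0)      # Gauss-Newton step
damped = lm_step(J, r, dx, Ce_inv, Cf_inv, 10.0) # shorter, damped step
```

This is the "shorten the step" interpretation of λ_k discussed after Theorem 4: the damping does not change the search direction arbitrarily, it contracts the Gauss-Newton increment toward steepest descent.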

In [1] it is stated that this approach differs from Tikhonov regularization because it does not alter the objective function (1.3). However, at each iterate it does minimize an altered form of J̃_k(x) in (2.5); i.e., it minimizes

(2.15)   Ĵ_k(x) = J̃_k(x) + λ_k (x - x_k)^T (x - x_k).

This cost function minimizes the weighted errors ε̃_k, f, and δ_k defined by

    d̃_k = J_k x + ε̃_k,    x = x_p + f,    x = x_k + δ_k,

where δ_k represents the error in the parameter estimate at step k.

Theorem 4. With the same assumptions as in Theorem 2, and if E(δ_k) = 0, E(δ_k δ_k^T) = λ_k^{-1} I, E(δ_k f^T) = 0, and E(δ_k ε̃_k^T) = 0, then the minimum value of (2.15) at each iterate is

    Ĵ_k(x_{k+1}) = (r_k + J_k B_k Δx_k)^T (P_k^λ)^{-1} (r_k + J_k B_k Δx_k) - s_k

with B_k = (C_f^{-1} + λ_k I)^{-1} C_f^{-1}, s_k = Δx_k^T C_f^{-1} (B_k - I) Δx_k, and P_k^λ = J_k B_k C_f J_k^T + C_ε̃_k. Here Ĵ_k(x_{k+1}) + s_k follows a χ² distribution with m degrees of freedom.

Proof. The proof follows similarly to that of Theorem 2; however, we define Q_k^λ = Q_k + λ_k I so that

(2.16)   x_{k+1} = x_k + (Q_k^λ)^{-1} (J_k^T C_ε̃_k^{-1} r_k - C_f^{-1} Δx_k).

The minimum value is

    Ĵ_k(x_{k+1}) = r_k^T (P_k^λ)^{-1} r_k + 2 Δx_k^T (J_k B_k)^T (P_k^λ)^{-1} r_k + Δx_k^T (C_f^{-1} - C_f^{-1} (Q_k^λ)^{-1} C_f^{-1}) Δx_k.

Here the last term does not lead to a quadratic functional. However, the first two terms suggest the quadratic form z_λ^T z_λ with

    z_λ = (P_k^λ)^{-1/2} (r_k + J_k B_k Δx_k).

To determine s_k we note that

    (J_k B_k)^T (P_k^λ)^{-1} J_k B_k = C_f^{-1} ((C_f^{-1} + λ_k I)^{-1} - (Q_k^λ)^{-1}) C_f^{-1},

where we used the fact that (Q_k^λ)^{-1} J_k^T C_ε̃_k^{-1} = (C_f^{-1} + λ_k I)^{-1} J_k^T (P_k^λ)^{-1}. Unfortunately, this is not C_f^{-1} - C_f^{-1} (Q_k^λ)^{-1} C_f^{-1}, which would give Ĵ_k(x_{k+1}) quadratic form, but we can use it to find s_k. Since

    (C_f^{-1} - C_f^{-1}(Q_k^λ)^{-1}C_f^{-1}) - (J_k B_k)^T (P_k^λ)^{-1} J_k B_k = C_f^{-1} - C_f^{-1} (C_f^{-1} + λ_k I)^{-1} C_f^{-1},

we have that

    s_k = Δx_k^T (C_f^{-1}(C_f^{-1} + λ_k I)^{-1}C_f^{-1} - C_f^{-1}) Δx_k = Δx_k^T C_f^{-1} (B_k - I) Δx_k.

Finally, we show that the mean and covariance of the quadratic part z_λ are zero and the identity matrix, respectively. First, note that if E(ε̃_k) = E(f) = E(δ_k) = 0, then E(Δx_k) = E(f - δ_k) = 0, and hence E(z_λ) = 0. For the covariance, note that

    r_k + J_k B_k Δx_k = d̃_k - J_k x_k + J_k B_k Δx_k = ε̃_k + J_k δ_k + J_k B_k (f - δ_k).

With the assumption that the errors are uncorrelated, we thus have

    E((r_k + J_k B_k Δx_k)(r_k + J_k B_k Δx_k)^T) = C_ε̃_k + J_k (λ_k^{-1} I - λ_k^{-1} B_k^T + B_k C_f B_k^T - λ_k^{-1} B_k + λ_k^{-1} B_k B_k^T) J_k^T.

Simplifying further, using the identity B_k + λ_k B_k C_f = I, the term in parentheses reduces to B_k C_f = (C_f^{-1} + λ_k I)^{-1}. Thus

    E(z_λ z_λ^T) = (P_k^λ)^{-1/2} (C_ε̃_k + J_k (C_f^{-1} + λ_k I)^{-1} J_k^T) (P_k^λ)^{-1/2} = I.

We note that by setting λ_k = 0 we recover the result of Theorem 2. The assumptions in Theorem 4 may be difficult to verify or may not hold exactly. For example, we assume that the damping parameter λ_k represents the inverse of the error covariance for the parameter estimate at step k. The damping parameter is not chosen to estimate the covariance but rather to shorten the step in the Gauss-Newton iteration. There is a problem with its interpretation as the inverse of the variance of the error because as λ_k decreases to zero the covariance is undefined. Alternatively, we take the view that as λ_k → ∞, infinite weight is given to x - x_k in the objective function since x_k is considered a good estimate of x. In section 3 we give histograms of J̃_k(x) and Ĵ_k(x) and show experimentally that these theorems hold.

3. Numerical experiments. We present two problems to illustrate both the validity of the χ² tests and the applicability of these tests to solving nonlinear inverse problems.
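Before turning to the paper's experiments, note that the closed forms in Theorems 2 and 4 are exact algebraic identities for the linearized functionals, so they can be verified directly on random matrices. The sketch below (NumPy; all matrices are made-up test data, not the paper's problems) evaluates Ĵ_k at the update (2.14) and checks it against the quadratic form, with λ_k = 0 reducing to Theorem 2:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 9, 6
J = rng.standard_normal((m, n))
d_til = rng.standard_normal(m)
x_k = rng.standard_normal(n)
x_p = rng.standard_normal(n)
Ce = 0.2 * np.eye(m)          # stand-in for C_eps_tilde
Cf = 2.0 * np.eye(n)          # stand-in for C_f
Ce_inv, Cf_inv = np.linalg.inv(Ce), np.linalg.inv(Cf)

def J_hat(x, lam):
    """Damped linearized functional, eq. (2.15)."""
    res = d_til - J @ x
    return (res @ Ce_inv @ res
            + (x - x_p) @ Cf_inv @ (x - x_p)
            + lam * (x - x_k) @ (x - x_k))

def lm_update(lam):
    """Levenberg-Marquardt iterate (2.14) with r_k = d_til - J x_k."""
    r = d_til - J @ x_k
    dx = x_k - x_p
    Q_lam = J.T @ Ce_inv @ J + Cf_inv + lam * np.eye(n)
    return x_k + np.linalg.solve(Q_lam, J.T @ Ce_inv @ r - Cf_inv @ dx)

def quadratic_form(lam):
    """(r + J B dx)^T (P_lam)^{-1} (r + J B dx) and the shift s_k of Theorem 4."""
    r = d_til - J @ x_k
    dx = x_k - x_p
    B = np.linalg.solve(Cf_inv + lam * np.eye(n), Cf_inv)
    P_lam = J @ B @ Cf @ J.T + Ce
    w = r + J @ B @ dx
    s = dx @ Cf_inv @ (B - np.eye(n)) @ dx
    return w @ np.linalg.solve(P_lam, w), s
```

For any λ the relation Ĵ_k(x_{k+1}) + s_k = z_λ^T z_λ holds to machine precision, and at λ = 0 the shift s_k vanishes, recovering (2.8).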
The nonlinear problems are from Chapter 10 of [1], where the authors not only describe and illustrate solutions to these problems, but also provide corresponding MATLAB codes that both set up the forward problems and find solutions to the inverse problems. In particular, we used the m-files available with the text to compute approximations using Gauss-Newton with the discrepancy principle as described in Algorithm 1 and Occam's method as described in Algorithm 2.

Algorithm 1 (Gauss-Newton with discrepancy principle).
  Input L, C_ε, x_p
  for i = 1, 2, 3, ... do
    Generate logarithmically spaced α_i
    for k = 1, 2, 3, ... do
      Calculate x_{k+1} = (J_k^T C_ε^{-1} J_k + α_i² L^T L)^{-1} (J_k^T C_ε^{-1} d̃_k + α_i² L^T L x_p)
      if ||x_{k+1} - x_k|| < tol then x_i = x_{k+1}; break
    end for
    Calculate J^i_data = ||d - F(x_i)||²_{C_ε^{-1}}
  end for
  Choose the i with the smallest value of J^i_data satisfying the discrepancy principle; x ← x_i.

Algorithm 2 (Occam's inversion [1]).
  Input L, C_ε, x_0, δ
  for k = 1, 2, 3, ..., 30 do
    Calculate J_k and d̃_k
    Define x_{k+1}(α) = (J_k^T C_ε^{-1} J_k + α² L^T L)^{-1} J_k^T C_ε^{-1} d̃_k
    while ||x_{k+1} - x_k|| / ||x_k|| > 5e-3 or ||d - F(x_{k+1})||²_{C_ε^{-1}} > 1.1 δ² do
      Choose the largest value of α such that ||d - F(x_{k+1})||²_{C_ε^{-1}} ≤ δ²
      If no such α exists, then choose the α that minimizes ||d - F(x_{k+1})||²_{C_ε^{-1}}
    end while
  end for

3.1. Nonlinear inverse problem descriptions. The first problem is an implementation of nonlinear cross-well tomography. The forward model includes ray path refraction, and the refracted rays tend to travel through high-velocity regions and avoid low-velocity regions, adding nonlinearity to the problem. It is set up with two wells spaced 1600 m apart, with 15 sources and receivers equally spaced at depths down the wells. The travel time between each pair of opposing sources and receivers is recorded, and the objective is to recover the two-dimensional velocity structure between the two wells. The true velocity structure has a background of 2900 m/s with an embedded Gaussian-shaped region that is about 10% faster and another Gaussian-shaped region that is about 15% slower than the background. The observations for this particular problem consist of 64 travel times between pairs of opposing sources and receivers. The true velocity model along with the 64 ray paths are plotted in Figure 1. The region between the two wells is discretized into 64 square blocks, so there are 64 model parameters (the slowness of each block) and 64 observations (the ray path travel times).

The second problem considered is the estimation of the soil electrical conductivity profile from above-ground electromagnetic induction measurements, as illustrated in Figure 2.
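The shape of Algorithm 1 (an inner Gauss-Newton loop at fixed α inside an outer sweep over α) can be sketched on a toy problem. The forward model below is a made-up stand-in, not the travel-time physics, and the sizes, noise level, and α grid are all illustrative assumptions; the discrepancy target uses the reduced degrees of freedom m - n discussed in the introduction:

```python
import numpy as np

# Toy stand-in for a nonlinear forward model: F(x) = exp(A x) elementwise,
# with Jacobian J(x) = diag(exp(A x)) A.
rng = np.random.default_rng(3)
m, n = 12, 4
A = 0.3 * rng.standard_normal((m, n))
F = lambda x: np.exp(A @ x)
Jac = lambda x: np.exp(A @ x)[:, None] * A

x_true = rng.standard_normal(n)
sigma = 0.01                              # assumed data noise level
d = F(x_true) + sigma * rng.standard_normal(m)
x_p = np.zeros(n)
L = np.eye(n)

def gauss_newton(alpha, x0, iters=50, tol=1e-10):
    """Inner loop of Algorithm 1: Gauss-Newton at a fixed regularization alpha."""
    x = x0.copy()
    for _ in range(iters):
        J = Jac(x)
        d_til = d - F(x) + J @ x          # linearized data
        lhs = J.T @ J / sigma**2 + alpha**2 * (L.T @ L)
        rhs = J.T @ d_til / sigma**2 + alpha**2 * (L.T @ L) @ x_p
        x_new = np.linalg.solve(lhs, rhs)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Outer loop: sweep logarithmically spaced alphas, then pick the one whose
# data misfit is closest to its expectation m - n.
alphas = np.logspace(-2, 3, 26)
misfits = [np.sum((d - F(gauss_newton(a, x_p)))**2) / sigma**2 for a in alphas]
best = alphas[int(np.argmin(np.abs(np.array(misfits) - (m - n))))]
x_best = gauss_newton(best, x_p)
```

The outer sweep is exactly the computational cost the paper's χ² method avoids: the inverse problem is re-solved once per candidate α here, versus once in total with a dynamically estimated α.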
The forward problem models a ground conductivity meter which has two coils on a one-meter-long bar.

Fig. 1. The setup of the cross-well tomography problem. Shown here is the true velocity model (m/s) and the corresponding ray paths.

Fig. 2. A representation of the soil conductivity estimation. The instrument depicted at the top of the image represents a ground conductivity meter creating a time-varying electromagnetic field in the layered earth beneath.

Alternating current is sent in one of the coils, which induces currents in the soil, and both coils measure the magnetic field that is created by the subsurface currents. For a complete treatment of the instrument and corresponding mathematical model, see [6]. There are a total of 18 observations, and the subsurface electrical conductivity of the ground is discretized into 10 layers, 20 cm thick, with a semi-infinite layer below 2 m, resulting in 11 conductivities to be estimated. As noted in [1], in solving this inverse problem the Gauss-Newton method does not always converge. Therefore, finding the solution necessitated the use of the Levenberg-Marquardt algorithm.

3.2. Numerical validation of χ² tests. We numerically tested Theorem 2 on the cross-well tomography problem by applying the Gauss-Newton method to minimize (1.3) for 100 different realizations of ε and f, which were sampled from N(0, (0.001)²I) and N(0, (0.00001)²I), respectively. For perspective, the values of d are O(0.1), while the values of x are O(0.0001), so that 1% noise is added to the data while the initial parameter estimate has 10% added noise. We then used the Gauss-Newton method to solve the nonlinear inverse problem 100 times, once for each realization of noise, which is essentially equivalent to sampling J̃ 100 times. Each of these converged in six iterations. Histograms of samples of J̃_k at each iteration are shown in Figure 3, and the mean is labeled in red.

Fig. 3. Histogram showing the sample distribution of J̃_k(x) at each Gauss-Newton iteration for the cross-well tomography problem. The mean of the sample is shown as the middle red tick.

Since there are 64 observations and 64 model parameters, the theory says that J̃_k ~ χ²_64 with E(J̃_k) = 64. In the beginning iterations C_ε slightly underestimates C_ε̃_k, so the mean of J̃_k is somewhat larger than the theoretical mean.

We also tested Corollary 3 using this tomography problem when L does not have full rank. We followed [1], where a discrete approximation of the Laplacian is used for L in order to regularize this problem. Noise was generated in the same manner as in Figure 3, and histograms of J̃_L from (2.12) are plotted in Figure 4. The rank of this operator is 63, so that E(J̃_L) = 63. The histograms in Figures 3 and 4 and their respective means at the final iterations show that the sampled distributions are good approximations of the theoretical χ² distributions in Theorem 2 and Corollary 3.

The electrical conductivity problem was used to numerically test Theorem 4, since the Levenberg-Marquardt algorithm was used to estimate the model parameters when minimizing (1.3). As in the previous simulations, this inversion was repeated for 100 different realizations of ε and f, which were sampled from N(0, (0.1)²I) and N(0, (1)²I), respectively. The values of d are O(10), and the values of x are O(100), so that 1% noise is added to the data while the initial parameter estimate has 1% added noise. Almost all of these trials converged in six iterations or fewer, with the majority converging in five iterations. Histograms for Ĵ_k(x_{k+1}) are shown in Figure 5. There were 18 observations for this problem and 11 parameters, and L = I, so the expected value is 18. Theorem 4 states that E(Ĵ_k(x_{k+1}) + s_k) = 18, while at convergence, when λ_k = 0, we have s_k = 0.
From the histograms we see that even at early iterations s_k is negligible, and Ĵ_k(x_{k+1}) behaves like a χ²_18 random variable. Therefore, in the inversion results we neglect the effects of s_k when calculating the regularization parameter.

This experiment was also repeated when L does not have full rank, and hence there are reduced degrees of freedom in the χ² distribution. We used a discrete second-order differential operator with rank 9 to regularize the inversion; therefore, E(Ĵ_k(x_{k+1}) + s_k) = 16. In this experiment f was sampled from N(0, (1)²I), since the values of Lx are O(1). All of these Levenberg-Marquardt runs converged in five or fewer iterations, with the majority converging in four iterations. These results are shown in Figure 6, where we see that the sample mean is 17 at convergence. In addition, Ĵ_k(x_{k+1}) behaves like χ²_16 at the early iterations, so we also neglect the effects of s_k when L does not have full rank.

Fig. 4. Histogram showing the sample distribution of J̃_L(x) at each Gauss-Newton iteration for the cross-well tomography problem with the Laplacian operator. The mean of the sample is shown as the middle red tick.

3.3. Inverse problem results.

3.3.1. Cross-well tomography. In [1] the authors solve the nonlinear tomography problem for a range of values of α and choose L to represent the two-dimensional Laplacian operator. These results follow Algorithm 1, where a value of α that satisfies the discrepancy principle is ultimately chosen to give the best parameter estimate. A reproduction of those results is given on the left in Figure 7. Here we also found parameter estimates with L = I, which is a good choice for regularization if there is a good initial estimate x_p; those results are on the left in Figure 8. Given that the results with L = I are worse than those with the Laplacian operator, we are left to conclude that the initial estimate of a constant x_p equal to 2900 m/s across the entire grid is not as effective as assuming that x is smoothly varying. In Figures 7 and 8 the Gauss-Newton estimate with the discrepancy principle is compared to the estimate from the iteratively regularized Gauss-Newton method with the χ² test in Theorem 2, as described in Algorithm 3. The iterations were stopped when the change in the parameter estimate was less than the tolerance.
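The kind of check behind Figures 3-6 can be reproduced in miniature for the linear case: sample the errors, evaluate the minimum value (2.8) from Theorem 2, and compare the sample mean with the predicted degrees of freedom. A sketch (NumPy; the sizes and covariances are made-up, not the paper's problems, though the square m = n case mirrors the tomography setup where the discrepancy principle's m - n degrees of freedom would be zero):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 30, 30                              # square problem, like the tomography example
J = rng.standard_normal((m, n))
C_eps = 0.05**2 * np.eye(m)
C_f = 0.4**2 * np.eye(n)
P_inv = np.linalg.inv(J @ C_f @ J.T + C_eps)

samples = []
for _ in range(2000):
    f = 0.4 * rng.standard_normal(n)       # x = x_p + f
    eps = 0.05 * rng.standard_normal(m)    # d_til = J x + eps
    r = J @ f + eps                        # r = d_til - J x_p
    samples.append(r @ P_inv @ r)          # minimum of (2.5), eq. (2.8)
samples = np.array(samples)
```

With Gaussian errors the sample mean should sit near m = 30 and the sample variance near 2m, the mean and variance of a χ²_m random variable; the paper's histograms show the same behavior for the nonlinear problems once the linearization error is small.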
In Figure 7 we see that the estiate ound with L representing the Laplacian operator appears the sae with both ethods. However, in Figure 8 the estiate ro Algorith 3 with L = I appears slightly ore clear than that ro Algorith 1. It is evident ro these igures that estiates ound with the Laplacian operator are soother than the

13 NONLINEAR χ 2 METHOD At iteration =1 2 At iteration =2 2 At iteration = At iteration = At iteration =5 6 4 At iteration = Fig. 5. Histogras o J x) at each Levenberg Marquardt iteration or the electrical conductivities proble. C = σ 2 I. The saple ean is shown as the iddle tic. solutions ound with L = I. Algorith 3 iteratively regularized Gauss Newton with χ 2 test). Input L, C, x p x 1 = x p or =1, 2, 3,... do Calculate J and d Deine J x,α)= d J x 2 C + α 2 Lx x p ) 2 2 Choose α such that J x +1,α ) Φ n+q 95%), whereφ Calculate x +1 = J T C J + α 2 LT L ) J T C i x +1 x <tolthen x x +1 return end i end or n+q is the inverse CDF o χ2 n+q. d + α 2 LT Lx p ) Figures 7 and 8 are the results o only one realization o. In order to establish a good coparison, the above procedure or both Algoriths 1 and 3 were repeated or 2 dierent realizations o. The ean and standard deviation o ˆx x true / x true or the 2 trials or each ethod are given in Table 1. The χ 2 ethod gave better results on average when L = I, but the discrepancy principle did better on average with the second derivative operator. While these dierences are only increental, the χ 2 ethod was aster coputationally because it only solves the inverse proble once and dynaically estiates α at each iteration. This ipleentation o the dis-

14 1226 J. L. MEAD AND C. C. HAMMERQUIST 2 At iteration =1 2 At iteration =2 2 At iteration = At iteration =4 6 At iteration = Fig. 6. Histogras o J x) at each Levenberg Marquardt iteration or the electrical conductivities proble. C = σ 2L L. The saple ean is shown as the iddle tic Fig. 7. Solutions ound or the toography proble with L discrete approxiation to the Laplacian. Let: Solution ound using the discrepancy principle. Right: Solution ound with Algorith 1. crepancy principle requires the inverse proble to be solved ultiple ties, incurring coputational cost, as is evident by the outer loop in Algorith Subsurace conductivity estiation. The subsurace electrical conductivities inverse proble is in soe ways ore diicult than the previous proble. The Gauss Newton step does not always lead to a reduction in the nonlinear cost unction, and it is not always possible to ind a regularizing paraeter or which the solution satisies the discrepancy principle. In [1] the Levenberg Marquardt algorith was ipleented to iniize the unregularized least squares proble to deonstrate the ill-posedness o this proble. This estiate, plotted in Figure 9, is wildly oscillating, has extree values, and is not even close to being a physically possible solution. However, this is not evident ro just looing at the data isit, as this solution

actually fits the data quite well. The inverse problem was also solved in [1] using Occam's inversion as given in Algorithm 2, and this estimate is also shown in Figure 9. Occam's inversion uses a discrete second derivative approximation for L. We also implemented it with L = I, but the algorithm diverged.

Fig. 8. Solutions found for the tomography problem when L = I. Left: Solution found using the discrepancy principle. Right: Solution found with Algorithm 1.

Table 1
Comparison of the discrepancy principle to the χ² method for the cross-well tomography problem.

Method        χ² method            Discrepancy principle
C = α²I       μ = .1628, σ = .6    μ = .26, σ = .456
C = α²LᵀL     μ = .1672, σ = .5    μ = .18, σ = .21

Theorem 4 lends itself to a χ² method similar to Algorithm 3, but with J replaced by the regularized Levenberg–Marquardt functional J̃. The regularized Levenberg–Marquardt method with the regularization parameter found by the χ² test from Theorem 4 is given in Algorithm 4. The Levenberg–Marquardt parameter λ was initialized at 0.1 and then decreased by a factor of 10 at each iteration. However, if the functional was decreasing rapidly with that choice, λ was decreased further, while if it was not decreasing, another choice was found that decreased the functional. We applied this algorithm with both L = I and a discrete second derivative operator for L to find estimates for the subsurface conductivity problem. The resulting parameter estimates are plotted in Figure 10. Comparing the estimates found using the discrete second derivative for L in Algorithms 2 and 4, in Figures 9 and 10, it is apparent that both estimate the true solution fairly well for this realization of the noise. While the χ² method was still able to find a solution with L = I, it can be seen in Figure 10 that this choice does not estimate the true solution as well.

Fig. 9. Left: The solution found using a Levenberg–Marquardt algorithm. Right: The solution found with Occam's inversion with C = α²LᵀL. (Both panels plot conductivity (S/m) against depth (m), with the true model shown for comparison.)

Algorithm 4 (regularized Levenberg–Marquardt with χ² test).
Input L, C, x_p, λ_1
for k = 1, 2, 3, ... do
    Calculate J_k and d_k
    if k > 1 then
        Define: J̃_k(x, λ) = ‖d_k − J_k x‖²_{C⁻¹} + α_k²‖L(x − x_p)‖²₂ + λ²‖x − x_k‖²₂,
                x_{k+1} = (J_kᵀC⁻¹J_k + α_k²LᵀL + λ²I)⁻¹ (J_kᵀC⁻¹d_k + α_k²LᵀLx_p + λ²x_k)
        Update the parameter by finding a small λ_k that ensures J̃_k(x_{k+1}, λ_k) < J̃_k(x_k, λ_k)
    end if
    Define: J̃_k(x, α) = ‖d_k − J_k x‖²_{C⁻¹} + α²‖L(x − x_p)‖²₂ + λ_k²‖x − x_k‖²₂
    Choose α_{k+1} such that J̃_k(x_{k+1}, α_{k+1}) ≈ Φ⁻¹_{n+q}(95%), where Φ⁻¹_{n+q} is the inverse CDF of χ²_{n+q}
    Calculate: x_{k+1} = (J_kᵀC⁻¹J_k + α²_{k+1}LᵀL + λ_k²I)⁻¹ (J_kᵀC⁻¹d_k + α²_{k+1}LᵀLx_p + λ_k²x_k)
    if ‖x_{k+1} − x_k‖ < tol then
        x ← x_{k+1}
        return
    end if
end for

Once again, to establish a good comparison, each of these methods was run for 20 different realizations of the noise. The mean and standard deviation of ‖x̂ − x_true‖/‖x_true‖ for the 20 trials for each method are given in Table 2. While Occam's inversion with C = α²LᵀL was able to find good solutions for some realizations of the noise, such as the solution plotted in Figure 9, the standard deviation for this case given in Table 2 indicates that it sometimes found poor estimates. Occam's inversion consistently diverged with the choice C = α²I over multiple realizations of the noise. These results show that the χ² method consistently gave much better answers than Occam's inversion for this problem. Even the nonsmoothed χ² method with L = I found better solutions on average than Occam's with the smoothing choice C = α²LᵀL. The relatively small values for σ in Table 2 for the χ² method suggest that the solutions found were fairly consistent with each other.
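The central step of Algorithm 4, choosing α so that the minimum value of the regularized functional matches the 95th percentile of a χ² distribution, reduces to scalar root-finding, because that minimum value increases monotonically with α. The following Python sketch is illustrative only, not the authors' code: all names are hypothetical, the misfit is left unweighted (C = I), the degrees of freedom are simply set to the number of data, and a Wilson–Hilferty approximation stands in for a χ² inverse CDF such as scipy.stats.chi2.ppf.

```python
import numpy as np

def chi2_ppf95(df):
    # Wilson-Hilferty approximation to the 95th percentile of chi-squared(df);
    # a stand-in for a library inverse CDF (e.g. scipy.stats.chi2.ppf(0.95, df)).
    z = 1.6449  # standard normal 95th percentile
    return df * (1.0 - 2.0 / (9.0 * df) + z * np.sqrt(2.0 / (9.0 * df))) ** 3

def functional_min(J, d, L, xp, alpha):
    # Minimum value and minimizer of ||d - J x||^2 + alpha^2 ||L (x - xp)||^2.
    x = np.linalg.solve(J.T @ J + alpha**2 * (L.T @ L),
                        J.T @ d + alpha**2 * (L.T @ L) @ xp)
    val = np.sum((d - J @ x) ** 2) + alpha**2 * np.sum((L @ (x - xp)) ** 2)
    return val, x

def chi2_choose_alpha(J, d, L, xp, df, lo=1e-6, hi=1e6, iters=200):
    # Geometric bisection on alpha: the minimum functional value grows
    # monotonically with alpha, so drive it to the chi-squared target.
    target = chi2_ppf95(df)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        val, _ = functional_min(J, d, L, xp, mid)
        if val < target:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

With a bracket in hand, a few hundred bisection steps pin α down essentially to machine precision; in Algorithm 4 this test is re-applied at each linearization.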

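The μ and σ values reported in Tables 1 and 2 are the mean and standard deviation of the relative error ‖x̂ − x_true‖/‖x_true‖ over repeated noise realizations. A minimal Python sketch of that bookkeeping (the solver callback and all names are hypothetical):

```python
import numpy as np

def relative_error_stats(estimate_fn, x_true, n_trials=20, seed=0):
    # Mean and standard deviation of ||x_hat - x_true|| / ||x_true|| over
    # repeated noise realizations; estimate_fn(rng) is a hypothetical solver
    # returning an estimate x_hat for one noisy realization of the data.
    rng = np.random.default_rng(seed)
    errors = [np.linalg.norm(estimate_fn(rng) - x_true) / np.linalg.norm(x_true)
              for _ in range(n_trials)]
    return float(np.mean(errors)), float(np.std(errors))
```

A small σ across trials, as seen for the χ² method in Table 2, indicates that the estimates are consistent from one noise realization to the next.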
Fig. 10. Left: The parameters found using the χ² method and C = α²LᵀL. Right: The parameters found using the χ² method and C = α²I. (Both panels plot conductivity (S/m) against depth (m), with the true model shown for comparison.)

Table 2
Comparison of the χ² method to Occam's inversion for the estimation of subsurface conductivities.

Method               C = α²I                C = α²LᵀL
χ² method            μ = .1827, σ = .295    μ = .38, σ = .281
Occam's inversion    diverges               μ = .4376, σ =

Summary and conclusions. Weighted least squares with Tikhonov regularization is a popular approach to solving ill-posed inverse problems. For nonlinear problems, the resulting parameter estimates minimize a nonlinear functional, and Gauss–Newton or Levenberg–Marquardt algorithms are commonly used to find the minimizer. These nonlinear optimization methods iteratively solve a linear form of the objective function, and we have shown for both methods that the minimum value of the functionals at each iterate follows a χ² distribution. The minimum values of the Gauss–Newton and Levenberg–Marquardt functionals differ, but both have degrees of freedom equal to the number of data. We illustrate this χ² behavior for the Gauss–Newton method on a two-dimensional cross-well tomography problem, and for the Levenberg–Marquardt method we use a one-dimensional electromagnetic problem. Since the minimum value of the functionals follows a χ² distribution at each iterate, this gives χ² tests from which a regularization parameter can be found for nonlinear problems. We give the resulting algorithms and test them by estimating parameters for the cross-well tomography and electromagnetic problems. It was shown that Algorithm 1 provided parameter estimates of accuracy similar to that of the discrepancy principle in a nonlinear cross-well tomography problem. In a subsurface electrical conductivity problem, the χ² method with the Levenberg–Marquardt algorithm proved to be more robust than Occam's inversion, providing parameter estimates without the use of a smoothing operator.
This algorithm also provided much better estimates than Occam's inversion on average when the smoothing operator was used. We conclude that the nonlinear χ² method is an attractive alternative to the discrepancy principle and Occam's inversion. However, it does share a disadvantage with these methods in that they all require the covariance of the data to be known.

If an estimate of the data covariance is not known, then the nonlinear χ² method will not be appropriate for solving such a problem. Future work includes estimating more complex covariance matrices for the parameter estimates. In [9] Mead shows that it is possible to use multiple χ² tests to estimate such a covariance, and it seems likely that this could also be extended to solving nonlinear problems.

REFERENCES

[1] R. C. Aster, B. Borchers, and C. Thurber, Parameter Estimation and Inverse Problems, Academic Press, New York, 2005.
[2] B. P. Carlin and T. A. Louis, Bayes and Empirical Bayes Methods for Data Analysis, Chapman and Hall, London, 1996.
[3] H. W. Engl, K. Kunisch, and A. Neubauer, Convergence rates for Tikhonov regularisation of nonlinear ill-posed problems, Inverse Problems, 5 (1989), pp. 523–540.
[4] W. P. Gouveia and J. A. Scales, Resolution of seismic waveform inversion: Bayes versus Occam, Inverse Problems, 13 (1997), pp. 323–349.
[5] P. C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion, SIAM Monogr. Math. Model. Comput. 4, SIAM, Philadelphia, 1998.
[6] J. M. H. Hendrickx, B. Borchers, D. L. Corwin, S. M. Lesch, A. C. Hilgendorf, and J. Schule, Inversion of soil conductivity profiles from electromagnetic induction measurements: Theory and experimental verification, Soil Sci. Soc. Amer. J., 66 (2002), pp. 673–685.
[7] J. Mead, Parameter estimation: A new approach to weighting a priori information, J. Inverse Ill-Posed Probl., 16 (2008), pp. 175–194.
[8] J. Mead and R. A. Renaut, A Newton root-finding algorithm for estimating the regularization parameter for solving ill-conditioned least squares problems, Inverse Problems, 25 (2009), 025002.
[9] J. Mead, Discontinuous parameter estimates with least squares estimators, Appl. Math. Comput., 219 (2013), pp. 5210–5223.
[10] A. Rieder, On the regularization of nonlinear ill-posed problems via inexact Newton iterations, Inverse Problems, 15 (1999), pp. 309–327.
[11] R. A. Renaut, I. Hnetynkova, and J. L. Mead, Regularization parameter estimation for large-scale Tikhonov regularization using a priori information, Comput. Statist. Data Anal., 54 (2010), pp. 3430–3445.
[12] T. I. Seidman and C. R. Vogel, Well-posedness and convergence of some regularisation methods for non-linear ill posed problems, Inverse Problems, 5 (1989), pp. 227–238.
[13] A. Tarantola, Inverse Problem Theory and Methods for Model Parameter Estimation, SIAM, Philadelphia, 2005.


More information

Multivariate Methods. Matlab Example. Principal Components Analysis -- PCA

Multivariate Methods. Matlab Example. Principal Components Analysis -- PCA Multivariate Methos Xiaoun Qi Principal Coponents Analysis -- PCA he PCA etho generates a new set of variables, calle principal coponents Each principal coponent is a linear cobination of the original

More information

Envelope frequency Response Function Analysis of Mechanical Structures with Uncertain Modal Damping Characteristics

Envelope frequency Response Function Analysis of Mechanical Structures with Uncertain Modal Damping Characteristics Copyright c 2007 Tech Science Press CMES, vol.22, no.2, pp.129-149, 2007 Envelope frequency Response Function Analysis of Mechanical Structures with Uncertain Modal Daping Characteristics D. Moens 1, M.

More information

Recovering Data from Underdetermined Quadratic Measurements (CS 229a Project: Final Writeup)

Recovering Data from Underdetermined Quadratic Measurements (CS 229a Project: Final Writeup) Recovering Data fro Underdeterined Quadratic Measureents (CS 229a Project: Final Writeup) Mahdi Soltanolkotabi Deceber 16, 2011 1 Introduction Data that arises fro engineering applications often contains

More information

Generalized AOR Method for Solving System of Linear Equations. Davod Khojasteh Salkuyeh. Department of Mathematics, University of Mohaghegh Ardabili,

Generalized AOR Method for Solving System of Linear Equations. Davod Khojasteh Salkuyeh. Department of Mathematics, University of Mohaghegh Ardabili, Australian Journal of Basic and Applied Sciences, 5(3): 35-358, 20 ISSN 99-878 Generalized AOR Method for Solving Syste of Linear Equations Davod Khojasteh Salkuyeh Departent of Matheatics, University

More information

COS 424: Interacting with Data. Written Exercises

COS 424: Interacting with Data. Written Exercises COS 424: Interacting with Data Hoework #4 Spring 2007 Regression Due: Wednesday, April 18 Written Exercises See the course website for iportant inforation about collaboration and late policies, as well

More information

ANALYSIS OF HALL-EFFECT THRUSTERS AND ION ENGINES FOR EARTH-TO-MOON TRANSFER

ANALYSIS OF HALL-EFFECT THRUSTERS AND ION ENGINES FOR EARTH-TO-MOON TRANSFER IEPC 003-0034 ANALYSIS OF HALL-EFFECT THRUSTERS AND ION ENGINES FOR EARTH-TO-MOON TRANSFER A. Bober, M. Guelan Asher Space Research Institute, Technion-Israel Institute of Technology, 3000 Haifa, Israel

More information

Bootstrapping Dependent Data

Bootstrapping Dependent Data Bootstrapping Dependent Data One of the key issues confronting bootstrap resapling approxiations is how to deal with dependent data. Consider a sequence fx t g n t= of dependent rando variables. Clearly

More information

Konrad-Zuse-Zentrum für Informationstechnik Berlin Heilbronner Str. 10, D Berlin - Wilmersdorf

Konrad-Zuse-Zentrum für Informationstechnik Berlin Heilbronner Str. 10, D Berlin - Wilmersdorf Konrad-Zuse-Zentru für Inforationstechnik Berlin Heilbronner Str. 10, D-10711 Berlin - Wilersdorf Folkar A. Borneann On the Convergence of Cascadic Iterations for Elliptic Probles SC 94-8 (Marz 1994) 1

More information

Interactive Markov Models of Evolutionary Algorithms

Interactive Markov Models of Evolutionary Algorithms Cleveland State University EngagedScholarship@CSU Electrical Engineering & Coputer Science Faculty Publications Electrical Engineering & Coputer Science Departent 2015 Interactive Markov Models of Evolutionary

More information

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels Extension of CSRSM for the Paraetric Study of the Face Stability of Pressurized Tunnels Guilhe Mollon 1, Daniel Dias 2, and Abdul-Haid Soubra 3, M.ASCE 1 LGCIE, INSA Lyon, Université de Lyon, Doaine scientifique

More information

GLOBALLY CONVERGENT LEVENBERG-MARQUARDT METHOD FOR PHASE RETRIEVAL

GLOBALLY CONVERGENT LEVENBERG-MARQUARDT METHOD FOR PHASE RETRIEVAL GLOBALLY CONVERGENT LEVENBERG-MARQUARDT METHOD FOR PHASE RETRIEVAL CHAO MA, XIN LIU, AND ZAIWEN WEN Abstract. In this paper, we consider a nonlinear least squares odel for the phase retrieval proble. Since

More information

Supplementary Material for Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion

Supplementary Material for Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion Suppleentary Material for Fast and Provable Algoriths for Spectrally Sparse Signal Reconstruction via Low-Ran Hanel Matrix Copletion Jian-Feng Cai Tianing Wang Ke Wei March 1, 017 Abstract We establish

More information

HIGH RESOLUTION NEAR-FIELD MULTIPLE TARGET DETECTION AND LOCALIZATION USING SUPPORT VECTOR MACHINES

HIGH RESOLUTION NEAR-FIELD MULTIPLE TARGET DETECTION AND LOCALIZATION USING SUPPORT VECTOR MACHINES ICONIC 2007 St. Louis, O, USA June 27-29, 2007 HIGH RESOLUTION NEAR-FIELD ULTIPLE TARGET DETECTION AND LOCALIZATION USING SUPPORT VECTOR ACHINES A. Randazzo,. A. Abou-Khousa 2,.Pastorino, and R. Zoughi

More information

A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words)

A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words) 1 A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine (1900 words) Contact: Jerry Farlow Dept of Matheatics Univeristy of Maine Orono, ME 04469 Tel (07) 866-3540 Eail: farlow@ath.uaine.edu

More information

A Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair

A Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair Proceedings of the 6th SEAS International Conference on Siulation, Modelling and Optiization, Lisbon, Portugal, Septeber -4, 006 0 A Siplified Analytical Approach for Efficiency Evaluation of the eaving

More information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information Cite as: Straub D. (2014). Value of inforation analysis with structural reliability ethods. Structural Safety, 49: 75-86. Value of Inforation Analysis with Structural Reliability Methods Daniel Straub

More information

REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION

REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION ISSN 139 14X INFORMATION TECHNOLOGY AND CONTROL, 008, Vol.37, No.3 REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION Riantas Barauskas, Vidantas Riavičius Departent of Syste Analysis, Kaunas

More information

Journal of Solid Mechanics and Materials Engineering

Journal of Solid Mechanics and Materials Engineering Journal o Solid Mechanics and Materials Engineering First Order Perturbation-based Stochastic oogeniation Analysis or Short Fiber Reinorced Coposite Materials* Sei-ichiro SAKATA**, Fuihiro ASIDA** and

More information

Order Recursion Introduction Order versus Time Updates Matrix Inversion by Partitioning Lemma Levinson Algorithm Interpretations Examples

Order Recursion Introduction Order versus Time Updates Matrix Inversion by Partitioning Lemma Levinson Algorithm Interpretations Examples Order Recursion Introduction Order versus Tie Updates Matrix Inversion by Partitioning Lea Levinson Algorith Interpretations Exaples Introduction Rc d There are any ways to solve the noral equations Solutions

More information

On Constant Power Water-filling

On Constant Power Water-filling On Constant Power Water-filling Wei Yu and John M. Cioffi Electrical Engineering Departent Stanford University, Stanford, CA94305, U.S.A. eails: {weiyu,cioffi}@stanford.edu Abstract This paper derives

More information

Spine Fin Efficiency A Three Sided Pyramidal Fin of Equilateral Triangular Cross-Sectional Area

Spine Fin Efficiency A Three Sided Pyramidal Fin of Equilateral Triangular Cross-Sectional Area Proceedings of the 006 WSEAS/IASME International Conference on Heat and Mass Transfer, Miai, Florida, USA, January 18-0, 006 (pp13-18) Spine Fin Efficiency A Three Sided Pyraidal Fin of Equilateral Triangular

More information

CHAPTER 19: Single-Loop IMC Control

CHAPTER 19: Single-Loop IMC Control When I coplete this chapter, I want to be able to do the following. Recognize that other feedback algoriths are possible Understand the IMC structure and how it provides the essential control features

More information