Krylov-based minimization for optimal H2 model reduction


Proceedings of the 46th IEEE Conference on Decision and Control, New Orleans, LA, USA, Dec. 2007

Krylov-based minimization for optimal H2 model reduction

Christopher A. Beattie and Serkan Gugercin

Abstract: We present an approach to model reduction for linear dynamical systems that is numerically stable, computationally tractable even for very large order systems, produces a sequence of monotone decreasing H2 error norms, and (under modest hypotheses) is globally convergent to a reduced order model that is guaranteed to satisfy first-order optimality conditions with respect to the H2 error.

I. INTRODUCTION

Suppose we are given a stable single-input/single-output linear dynamical system described by the transfer function

H(s) := c(sI - A)^{-1} b,   (1)

with A in R^{n x n} and b, c^T in R^n. We have particular interest in cases where the system order, n, is very large; our goal is to find, for any given order r << n, a reduced-order model described by

H_r(s) := c_r(sI_r - A_r)^{-1} b_r   (2)

with A_r in R^{r x r} and b_r, c_r^T in R^r, such that H_r(s) is the optimal H2 approximation to H(s), i.e.,

||H - H_r||_{H2} = min_{G_r stable, of order r} ||H - G_r||_{H2},   (3)

where

||G||_{H2} := ( (1/(2π)) ∫_{-∞}^{+∞} |G(jω)|² dω )^{1/2}.   (4)

A reduced-order model that is a global minimizer of the H2 error criterion is guaranteed to exist in the single-input/single-output case. However, the existence of global minimizers in the multi-input/multi-output case is still an open question. Finding global minimizers can be very difficult, so the more modest goal of finding local minimizers is more usual. Optimal H2 approximation in this sense has been investigated extensively; see for instance [1], [3], [28], [9], [21], [5], [30], [29], [4], [2], [20] and references therein. Most existing optimal H2 methods require dense matrix operations, e.g., solving a series of Lyapunov equations. This rapidly becomes intractable as the dimension increases, so such methods are rarely usable even for medium scale problems.
We propose here a computationally effective Krylov-based minimization algorithm for optimal H2 approximation that is effective even for systems having on the order of many thousands of state variables.

(This work was supported in part by NSF Grants DMS and DMS and AFOSR Grant FA. Christopher A. Beattie and Serkan Gugercin are with the Department of Mathematics, Virginia Tech, 46 McBryde Hall, Blacksburg, VA, USA; {beattie,gugercin}@math.vt.edu)

We will construct reduced order models H_r(s) by a Galerkin projection process. Let V in R^{n x r} and Z in R^{n x r} be given so that Z^T V = I_r. Then, with x_r(t) in R^r, the vector V x_r(t) in R^n will approximate x(t), obtained by forcing

Z^T ( V x_r'(t) - A V x_r(t) - b u(t) ) = 0.

The reduced order model in (2) is then obtained as follows:

A_r = Z^T A V,  b_r = Z^T b,  c_r^T = c^T V.   (5)

The reduced order transfer function H_r(s) is determined principally by the range spaces of V and Z. We will choose the columns of V and Z to span rational Krylov subspaces designed to minimize the H2 error.

A. Moment matching and rational Krylov methods

Given H(s) as in (1), model reduction by moment matching amounts to finding a reduced-order system H_r(s) that interpolates the values of H(s), and those of its derivatives, at selected points σ_k in the complex plane. For reasons we discuss later, we consider only Hermite interpolation, so the moment matching problem requires finding A_r, b_r, and c_r so that

H_r(σ_k) = H(σ_k) and H_r'(σ_k) = H'(σ_k) for k = 1, ..., r,

or equivalently,

c(σ_k I - A)^{-1} b = c_r(σ_k I_r - A_r)^{-1} b_r and
c(σ_k I - A)^{-2} b = c_r(σ_k I_r - A_r)^{-2} b_r

for k = 1, ..., r. (The quantity c(σ_k I - A)^{-(j+1)} b is called the j-th moment of H(s) at σ_k.) Rational interpolation by projection was first proposed by Skelton et al. in [10], [31], [32]. Grimme [16] showed how one can obtain the required projection using the rational Krylov method of Ruhe [26].
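The projection step (5) together with the Hermite moment matching just described can be checked numerically in a few lines. The sketch below is illustrative only: the random stable system, the real shifts, and all variable names are placeholder assumptions, not data from the paper. It builds V from shifted solves with b, a companion basis W from shifted solves with c^T, rescales to get Z^T V = I, and verifies that the projected model matches H and H' at each shift:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 12, 3

# Toy stable SISO system (placeholder data)
A = -2 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal((n, 1))
c = rng.standard_normal((1, n))

def H(s):        # H(s) = c (sI - A)^{-1} b
    return (c @ np.linalg.solve(s * np.eye(n) - A, b)).item()

def dH(s):       # H'(s) = -c (sI - A)^{-2} b
    x = np.linalg.solve(s * np.eye(n) - A, b)
    return -(c @ np.linalg.solve(s * np.eye(n) - A, x)).item()

shifts = [1.0, 2.0, 3.0]   # assumed real interpolation points

# Bases spanning the rational Krylov subspaces; Z rescaled so Z^T V = I
V = np.hstack([np.linalg.solve(s * np.eye(n) - A, b) for s in shifts])
W = np.hstack([np.linalg.solve(s * np.eye(n) - A.T, c.T) for s in shifts])
Z = W @ np.linalg.inv(W.T @ V).T

Ar, br, cr = Z.T @ A @ V, Z.T @ b, c @ V   # projected model as in (5)

def Hr(s):
    return (cr @ np.linalg.solve(s * np.eye(r) - Ar, br)).item()

def dHr(s):
    x = np.linalg.solve(s * np.eye(r) - Ar, br)
    return -(cr @ np.linalg.solve(s * np.eye(r) - Ar, x)).item()

for s in shifts:   # two moments matched per shift (Hermite interpolation)
    assert abs(H(s) - Hr(s)) <= 1e-8 * max(1.0, abs(H(s)))
    assert abs(dH(s) - dHr(s)) <= 1e-6 * max(1.0, abs(dH(s)))
```

Replacing the dense solves with a sparse factorization of (σI - A) gives the large-scale setting the paper targets; only r shifted solves per basis are needed.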
We state a particular version of Grimme's result that suffices to effect Hermite interpolation:

Proposition 1.1: [16] Consider H(s) = c(sI - A)^{-1} b, distinct shifts σ_1, σ_2, ..., σ_r, and matrices V and Z with Z^T V = I,

Ran(V) = span{ (σ_1 I - A)^{-1} b, ..., (σ_r I - A)^{-1} b }   (6)
Ran(Z) = span{ (σ_1 I - A^T)^{-1} c^T, ..., (σ_r I - A^T)^{-1} c^T }.   (7)

The reduced order system H_r defined by A_r = Z^T A V, b_r = Z^T b, c_r^T = c^T V matches two moments of H(s) at each of the interpolation points σ_k, k = 1, ..., r.

Proposition 1.1 shows that Krylov-based methods are able to match moments without ever computing them explicitly. This is important since the computation of moments is, in

general, ill-conditioned. This is one of the main advantages of Krylov-based methods [12]. Unlike gramian-based model reduction methods such as balanced truncation (see [6]), Krylov-based model reduction constructs the modeling subspaces V and Z using only matrix-vector multiplications and some sparse linear solves. These approaches can also be implemented iteratively, which broadens their range of computational effectiveness even further; for details, see [14], [15].

B. First-order optimality conditions for H2 model reduction

Here we present first-order necessary conditions for optimal H2 approximation due to Meier and Luenberger [5]. These conditions amount to requiring the optimal reduced order model to be a Hermite interpolant to the full-order system at particular points; this motivates our use of Krylov-based methods. Assume that H(s) has simple poles λ_1, λ_2, ..., λ_n, and let φ_i denote the residue of H(s) at the pole λ_i, so that φ_i = lim_{s→λ_i} H(s)(s - λ_i), i = 1, ..., n. Similarly, let H_r(s) have simple poles λ̂_1, λ̂_2, ..., λ̂_r with residues φ̂_i = lim_{s→λ̂_i} H_r(s)(s - λ̂_i), i = 1, ..., r. Then:

Theorem 1.1: [5] Given the system H(s) = Σ_{k=1}^{n} φ_k/(s - λ_k), let H_r(s) = Σ_{k=1}^{r} φ̂_k/(s - λ̂_k) be an optimal solution to the H2 model reduction problem (3). Then H_r(s) interpolates H(s) and its first derivative at -λ̂_i, i = 1, ..., r, i.e.,

H_r(-λ̂_k) = H(-λ̂_k) and H_r'(-λ̂_k) = H'(-λ̂_k), for k = 1, ..., r.   (8)

Recent work by Gugercin et al. [20] provides a new and simple proof of these necessary conditions for H2-optimality, also showing their equivalence with a variety of other (apparently distinct) necessary conditions for H2-optimality. Theorem 1.1 also has straightforward extensions to systems with multiple poles; see, for example, [6], [22], [5]. Theorem 1.1 illustrates why the rational Krylov framework is compelling for the optimal H2 problem: the first-order conditions (8) amount to Hermite interpolation at the specific interpolation points -λ̂_i, i = 1, ..., r.
Based on this observation, a Krylov-based algorithm was developed in [20] that generates a reduced model satisfying the first-order necessary conditions (8). We build on this Krylov/interpolation framework and incorporate strategies that assure a step-wise decrease of the H2 error ||H - H_r||_{H2} and guarantee a superlinear rate of convergence to a (local) minimizer. The key to such an error descent method will be gradient and Hessian expressions for ||H - H_r||_{H2}, viewed ultimately as a function of the interpolation points {σ_i}_{i=1}^{r}.

II. H2 ERROR GRADIENTS AND HESSIANS

The reduced models that we consider are constructed via a Galerkin projection:

A_r = Z^T A V,  b_r = Z^T b,  c_r^T = c^T V,   (9)

using rational Krylov subspaces as in Proposition 1.1:

Ran(V) = span{ (σ_i I - A)^{-1} b }_{i=1}^{r} and Ran(Z) = span{ (σ_i I - A^T)^{-1} c^T }_{i=1}^{r},   (10)

where the σ_i are distinct complex numbers. H_r(s), and consequently the H2 error J = ||H - H_r||²_{H2}, both depend on the r free parameters {σ_i}_{i=1}^{r}, so we seek a formulation that treats {σ_i}_{i=1}^{r} as the optimization parameters. We begin with an expression for the H2 error recently proved by Gugercin and Antoulas [19], [18], [6]:

Lemma 2.1: Given the full-order system H(s) = Σ_{k=1}^{n} φ_k/(s - λ_k) and any reduced-order model H_r(s) = Σ_{k=1}^{r} φ̂_k/(s - λ̂_k), the H2 norm of the error system, denoted by J := ||H(s) - H_r(s)||²_{H2}, is given by

J = Σ_{i=1}^{n} φ_i ( H(-λ_i) - H_r(-λ_i) ) + Σ_{j=1}^{r} φ̂_j ( H_r(-λ̂_j) - H(-λ̂_j) ).   (11)

Both H_r(s) and J present themselves here as functions of the 2r parameters {λ̂_i}_{i=1}^{r} (reduced-order poles) and {φ̂_i}_{i=1}^{r} (reduced-order residues), so it is straightforward to first derive the gradient and Hessian of J with respect to these variables. Although one could derive an H2-descent optimization algorithm directly with respect to these 2r parameters, we have found it far more effective to transform the problem further so that the descent can be organized with respect to the r parameters {σ_i}_{i=1}^{r}.
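For systems with simple poles, the expression (11) is easy to evaluate directly from a pole-residue decomposition. The sketch below (assumed toy data; the random full and reduced models and all names are placeholders) checks (11) against the standard Lyapunov-gramian computation of the H2 norm of the error system:

```python
import numpy as np
from scipy.linalg import eig, solve_continuous_lyapunov

rng = np.random.default_rng(2)
n, r = 8, 3

# Toy stable full model and an arbitrary (non-optimal) stable reduced model
A = -np.diag(rng.uniform(0.5, 3.0, n)) + 0.05 * rng.standard_normal((n, n))
b = rng.standard_normal((n, 1)); c = rng.standard_normal((1, n))
Ar = -np.diag(rng.uniform(0.5, 3.0, r))
br = rng.standard_normal((r, 1)); cr = rng.standard_normal((1, r))

def poles_residues(A, b, c):
    lam, X = eig(A)
    return lam, (c @ X).ravel() * np.linalg.solve(X, b).ravel()

lam, phi = poles_residues(A, b, c)
lhat, phihat = poles_residues(Ar, br, cr)

def make_tf(phis, lams):     # transfer function; s may be a vector of points
    return lambda s: (phis / (np.atleast_1d(s)[:, None] - lams)).sum(axis=1)

H, Hr = make_tf(phi, lam), make_tf(phihat, lhat)

# J from the pole-residue expression (11)
J_formula = (phi * (H(-lam) - Hr(-lam))).sum() \
          + (phihat * (Hr(-lhat) - H(-lhat))).sum()

# J from the gramian characterization ||G||_{H2}^2 = c_e P c_e^T, where
# A_e P + P A_e^T + b_e b_e^T = 0 for the error system G = H - H_r
Ae = np.block([[A, np.zeros((n, r))], [np.zeros((r, n)), Ar]])
be = np.vstack([b, br]); ce = np.hstack([c, -cr])
P = solve_continuous_lyapunov(Ae, -be @ be.T)
J_lyap = (ce @ P @ ce.T).item()

assert abs(J_formula.real - J_lyap) <= 1e-6 * max(1.0, abs(J_lyap))
```

The pole-residue route needs only an eigendecomposition, whereas the gramian route requires a Lyapunov solve of order n + r; this contrast is exactly why (11) is attractive for large n.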
We can determine the two Jacobian matrices representing ∂λ̂_i/∂σ_j and ∂φ̂_i/∂σ_j and use the chain rule to introduce this further change of variables, obtaining the gradient and Hessian of J with respect to the interpolation shifts {σ_i}_{i=1}^{r}. This forms the core of a Krylov/interpolation-based H2-descent optimization algorithm.

A. Gradient of J with respect to poles and residues

Gugercin et al. [20] derived the derivatives of the cost function J with respect to φ̂_m and λ̂_m:

Theorem 2.1: [20] Let H(s) = Σ_{k=1}^{n} φ_k/(s - λ_k) and H_r(s) = Σ_{k=1}^{r} φ̂_k/(s - λ̂_k), where both H(s) and H_r(s) have distinct poles as before. Then

∂J/∂φ̂_m = -2 H(-λ̂_m) + 2 H_r(-λ̂_m),  m = 1, ..., r,   (12)

∂J/∂λ̂_m = 2 φ̂_m ( H'(-λ̂_m) - H_r'(-λ̂_m) ),  m = 1, ..., r.   (13)

If we define the 2r-dimensional parameter vector

q = [ φ̂_1, ..., φ̂_r, λ̂_1, ..., λ̂_r ]^T,   (14)

Theorem 2.1 defines the gradient of J with respect to q:

∇_q J = [ ∂J/∂φ̂_1, ..., ∂J/∂φ̂_r, ∂J/∂λ̂_1, ..., ∂J/∂λ̂_r ]^T.   (15)
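Formulas (12) and (13) admit a quick finite-difference check built directly on the error expression (11). Everything below is assumed toy pole-residue data (placeholders, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
lam = -rng.uniform(0.5, 3.0, 6)      # full-order poles (toy, real, stable)
phi = rng.standard_normal(6)         # full-order residues
lhat = -rng.uniform(0.5, 3.0, 2)     # reduced-order poles
phihat = rng.standard_normal(2)      # reduced-order residues

def J(phihat, lhat):                 # the H2 error via (11)
    H  = lambda s: (phi / (s[:, None] - lam)).sum(axis=1)
    Hr = lambda s: (phihat / (s[:, None] - lhat)).sum(axis=1)
    return (phi * (H(-lam) - Hr(-lam))).sum() \
         + (phihat * (Hr(-lhat) - H(-lhat))).sum()

H1   = lambda s: (phi / (s - lam)).sum()           # H at a scalar point
dH1  = lambda s: -(phi / (s - lam) ** 2).sum()     # H'
Hr1  = lambda s: (phihat / (s - lhat)).sum()
dHr1 = lambda s: -(phihat / (s - lhat) ** 2).sum()

m = 0
g_phi = -2 * H1(-lhat[m]) + 2 * Hr1(-lhat[m])              # (12)
g_lam = 2 * phihat[m] * (dH1(-lhat[m]) - dHr1(-lhat[m]))   # (13)

eps = 1e-6
e = np.zeros(2); e[m] = eps
fd_phi = (J(phihat + e, lhat) - J(phihat - e, lhat)) / (2 * eps)
fd_lam = (J(phihat, lhat + e) - J(phihat, lhat - e)) / (2 * eps)
assert abs(g_phi - fd_phi) <= 1e-5 * max(1.0, abs(fd_phi))
assert abs(g_lam - fd_lam) <= 1e-5 * max(1.0, abs(fd_lam))
```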

B. Hessian of J with respect to poles and residues

By differentiating (12) and (13) with respect to φ̂_p and λ̂_p (followed by some tedious manipulations), we obtain the second-order partial derivatives of J with respect to the pole-residue parameters φ̂ and λ̂:

Theorem 2.2: Let H(s) = Σ_{k=1}^{n} φ_k/(s - λ_k) and H_r(s) = Σ_{k=1}^{r} φ̂_k/(s - λ̂_k) as above. Then

∂²J/(∂φ̂_p ∂φ̂_m) = -2/(λ̂_m + λ̂_p),   (16)

∂²J/(∂λ̂_p ∂λ̂_m) = -4 φ̂_m φ̂_p / (λ̂_m + λ̂_p)³  for p ≠ m,   (17)

∂²J/∂λ̂_m² = -2 φ̂_m ( H''(-λ̂_m) - H_r''(-λ̂_m) ) - φ̂_m² / (2 λ̂_m³),   (18)

∂²J/(∂λ̂_p ∂φ̂_m) = 2 φ̂_p / (λ̂_m + λ̂_p)²  for p ≠ m,   (19)

∂²J/(∂λ̂_m ∂φ̂_m) = 2 ( H'(-λ̂_m) - H_r'(-λ̂_m) ) + φ̂_m / (2 λ̂_m²).   (20)

Theorem 2.2, in effect, gives the Hessian of J with respect to the 2r-dimensional parameter vector q defined in (14):

[∇²_q J]_{ij} = ∂²J/(∂q_i ∂q_j),  with q_i = φ̂_i for i = 1, ..., r and q_i = λ̂_{i-r} for i = r+1, ..., 2r.   (21)

Remark 2.1: One can develop a descent method (either line search or trust-region) for optimal H2 model reduction by combining Theorems 2.1 and 2.2. However, for an r-th order reduced model, the optimization variable q then has dimension 2r. In the next section, we develop a Krylov-based Newton method whose only variables are the r interpolation points {σ_i}_{i=1}^{r}; hence the number of variables is cut in half without any loss of accuracy in the underlying optimization problem. Our numerical experience suggests that the Krylov-based framework converges faster than the pole-residue formulation.

C. The Jacobian matrix J_λ := [ ∂λ̂_i/∂σ_j ]

Let λ(σ) = [λ̂_1, λ̂_2, ..., λ̂_r]^T denote the r-tuple of reduced order poles, emphasizing that we now view the reduced order poles as functions of the interpolation points only. Recall that the reduced models are obtained via rational Krylov projection as shown in (9) and (10). Since the gradient and Hessian of the H2 cost with respect to {φ̂_i} and {λ̂_i} were obtained in the previous section, what remains for a Krylov-based optimization method are the Jacobians representing ∂λ̂_i/∂σ_j and ∂φ̂_i/∂σ_j.
We will derive these two quantities using the underlying Krylov-based reduction framework in (9) and (10). Define the complex r-tuple σ = [σ_1, σ_2, ..., σ_r]^T in C^r together with the related matrices V and W:

V = [ (σ_1 I - A)^{-1} b, ..., (σ_r I - A)^{-1} b ],
W^T = [ c(σ_1 I - A)^{-1} ; ... ; c(σ_r I - A)^{-1} ]  (stacked rows).   (22)

With this notation, the reduced-order matrix A_r in (9) becomes A_r = Z^T A V with Z^T = (W^T V)^{-1} W^T, which satisfies Z^T V = I_r. As before, λ̂_i and φ̂_i denote, respectively, the poles and residues of the reduced order system. We may calculate the entries of the Jacobian matrix of λ(σ) by differentiating the eigenvalue relation W^T A V x̂_i = λ̂_i W^T V x̂_i and using the definitions of V and W in (22). This yields

Lemma 2.2: Let x̂ be a unit eigenvector of A_r = (W^T V)^{-1} W^T A V associated with λ̂_i, so that W^T A V x̂ = λ̂_i W^T V x̂. Then

∂λ̂_i/∂σ_j = [ x̂^T (∂_j W^T)(A V x̂ - λ̂_i V x̂) + ( x̂^T W^T A - λ̂_i x̂^T W^T )(∂_j V) x̂ ] / ( x̂^T W^T V x̂ ),   (23)

where

∂_j W^T = ∂W^T/∂σ_j = -e_j c(σ_j I - A)^{-2}  and  ∂_j V = ∂V/∂σ_j = -(σ_j I - A)^{-2} b e_j^T.   (24)

The entries of the Jacobian matrix J_λ = [ ∂λ̂_i/∂σ_j ] provide a measure of the sensitivity of the reduced order poles λ(σ) to perturbations of σ. Notice that V x̂ and x̂^T W^T are Galerkin approximations to right and left eigenvectors, respectively, of A. Equation (23) shows how to compute J_λ: it requires solving a small r x r generalized eigenvalue problem to compute λ̂_i and x̂, plus 2r additional linear solves to compute (σ_i I_n - A)^{-2} b and c(σ_i I_n - A)^{-2}. However, since constructing V and W^T already requires computing (σ_i I_n - A)^{-1} b and c(σ_i I_n - A)^{-1}, J_λ needs no additional factorizations; only some additional triangular solves are required.

D. The Jacobian matrix J_φ = [ ∂φ̂_i/∂σ_j ]

Define the r-tuple φ(σ) = [φ̂_1, φ̂_2, ..., φ̂_r]^T. As above, the notation φ(σ) emphasizes that the reduced order residues are functions of the interpolation points only. Similarly to J_λ, the entries of J_φ = [ ∂φ̂_i/∂σ_j ] provide a measure of the sensitivity of the reduced order residues φ(σ) to perturbations of σ.
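Before stating the corresponding result for J_φ, here is a compact numerical sanity check of the pole-sensitivity formulas (23)-(24), on assumed toy data (all names and values are placeholders). It forms the pencil (W^T A V, W^T V), evaluates one entry of J_λ via (23), and compares against a finite difference of the recomputed pole:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(6)
n, r = 10, 3
A = -np.diag(rng.uniform(0.5, 4.0, n)) + 0.05 * rng.standard_normal((n, n))
b = rng.standard_normal((n, 1)); c = rng.standard_normal((1, n))
sigma = np.array([0.5, 1.5, 2.5])          # assumed real shifts

def bases(sig):
    V = np.hstack([np.linalg.solve(s * np.eye(n) - A, b) for s in sig])
    WT = np.vstack([c @ np.linalg.inv(s * np.eye(n) - A) for s in sig])
    return V, WT

V, WT = bases(sigma)
lam, X = eig(WT @ A @ V, WT @ V)           # reduced poles and eigenvectors

i, j = 0, 1
x = X[:, i]
R = np.linalg.inv(sigma[j] * np.eye(n) - A)
dWT = np.zeros((r, n)); dWT[j, :] = -(c @ R @ R).ravel()   # (24)
dV = np.zeros((n, r));  dV[:, j] = -(R @ R @ b).ravel()

# Entry (i, j) of J_lambda via (23); x^T serves as a left eigenvector here
# because W^T V and W^T A V are symmetric (resolvents of A commute)
dlam = (x @ dWT @ (A @ V @ x - lam[i] * (V @ x))
        + (x @ WT @ A - lam[i] * (x @ WT)) @ dV @ x) / (x @ WT @ V @ x)

def poles(sig):
    V_, WT_ = bases(sig)
    return eig(WT_ @ A @ V_, WT_ @ V_, right=False)

h = 1e-6
ej = np.zeros(r); ej[j] = h
lam_p, lam_m = poles(sigma + ej), poles(sigma - ej)
fd = (lam_p[np.argmin(abs(lam_p - lam[i]))]
      - lam_m[np.argmin(abs(lam_m - lam[i]))]) / (2 * h)
assert abs(dlam - fd) <= 1e-4 * max(1.0, abs(fd))
```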
Lemma 2.3: Given H(s) = c(sI - A)^{-1} b, let the reduced model H_r(s) = c_r(sI_r - A_r)^{-1} b_r be obtained as in (9) with Z = W(W^T V)^{-1}, where V and W are as in (22). Define M = W^T A V, N = W^T V, and the generalized

eigendecomposition M X = N X Λ, with Λ = diag(λ̂_i) a diagonal matrix of reduced system poles (eigenvalues) and X an invertible matrix of (right) eigenvectors. Define Y = (N X)^{-1}, an associated matrix of left eigenvectors, so that Y M = Λ Y N. Then the residues of H_r(s) are given by

φ̂_i = e_i^T Y W^T b c V X e_i.   (25)

Moreover,

∂φ̂_i/∂σ_j = e_i^T [ (∂_j Y) W^T b c V X + Y (∂_j W^T) b c V X + Y W^T b c (∂_j V) X + Y W^T b c V (∂_j X) ] e_i,   (26)

where ∂_j X = ∂X/∂σ_j solves the Sylvester equation

N^{-1} M (∂_j X) - (∂_j X) Λ + Q = 0,   (27)

with Q = N^{-1} (∂_j M) X - N^{-1} (∂_j N) X Λ - X (∂_j Λ), and ∂_j Y = ∂Y/∂σ_j satisfies

∂_j Y = -Y [ (∂_j N) X + N (∂_j X) ] Y,   (28)

where ∂_j W^T and ∂_j V are as in (24), and

∂_j M = ∂M/∂σ_j = (∂_j W^T) A V + W^T A (∂_j V),  ∂_j N = ∂N/∂σ_j = (∂_j W^T) V + W^T (∂_j V).

Proof: (25) is obtained by transforming the reduced model into modal form; (26) follows from differentiating (25) with respect to σ_j.

Lemma 2.3 explains how to compute J_φ. Note that J_φ requires solving r small r x r Sylvester equations as in (27). In addition to having small dimension r, (27) is easy to solve because of the diagonal coefficient term Λ. Once more, the additional computational cost due to the Jacobian computation is small. Moreover, even though the Sylvester equation (27) is singular, it is guaranteed to be consistent by construction.

E. The gradient and Hessian of J with respect to the shifts σ

Having established the two Jacobians J_λ and J_φ, we are now ready to state the main result of the paper, giving the gradient and Hessian of the H2 error with respect to the shifts in a Krylov-based model reduction setting:

Lemma 2.4: Given the full-order model H(s), let H_r(s) = Σ_{k=1}^{r} φ̂_k/(s - λ̂_k) be obtained via Krylov-based model reduction as in (9)-(10) with shifts σ = [σ_1, σ_2, ..., σ_r]^T. Define the stacked Jacobian (not to be confused with the scalar cost J)

J = [ J_φ ; J_λ ],   (29)

with J_λ and J_φ as defined in (23) and (26). Then ∇_σ J, the gradient of the H2 error J = ||H - H_r||²_{H2} with respect to σ, is given by

∇_σ J = J^T ∇_q J,   (30)

where ∇_q J is given by (15).
Moreover, ∇²_σ J, the Hessian of J with respect to σ, is

∇²_σ J = J^T (∇²_q J) J + Σ_{i=1}^{r} [ (∂J/∂φ̂_i) ∇²_σ φ̂_i + (∂J/∂λ̂_i) ∇²_σ λ̂_i ],   (31)

where ∇²_q J is as in (21), and ∇²_σ φ̂_i and ∇²_σ λ̂_i denote, respectively, the Hessians of φ̂_i and λ̂_i with respect to σ.

Proof: (30) follows from combining Theorem 2.1 with Lemmas 2.2 and 2.3; (31) follows from Theorem 2.2 together with Lemmas 2.2 and 2.3.

We can also establish Lipschitz continuity of J and ∇_σ J. To emphasize dependence on the interpolation points σ = [σ_1, σ_2, ..., σ_r]^T, we use a superscript [σ] on the reduced order model.

Theorem 2.3: Let H(s) be defined as in (1) and σ = [σ_1, σ_2, ..., σ_r]^T. Denote by H_r^[σ](s) the reduced-order model associated with σ, calculated from (9)-(10). The H2 error viewed as a function of the shifts, J(σ): C^r → R, is defined as J(σ) = ||H - H_r^[σ]||²_{H2}. For some finite J_0 > 0, let N be the level set N = {σ : J(σ) ≤ J_0}. Then both J(σ) and ∇_σ J(σ) are Lipschitz continuous functions on the level set N.

Proof: The result follows from the continuity, differentiability, and boundedness of J(σ) and ∇_σ J(σ) on the level set N.

III. A KRYLOV-BASED MINIMIZATION ALGORITHM

Our minimization algorithm is structured overall as a line search/modified Newton method. We present only a generic description, omitting details (such as termination and step acceptance criteria) associated with a standard optimization implementation; for such details, we refer to [24]. We have also developed a trust region variant of this approach that we do not discuss here but will include in the full paper. Note that the Hessian expression (31) suggests that the calculation of ∇²_σ J requires computing the individual Hessians ∇²_σ φ̂_i and ∇²_σ λ̂_i. However, in (31) these terms are multiplied by elements of ∇_q J, which become negligible in the vicinity of a minimizer.
In our numerical implementation, we approximate ∇²_σ J either by the Gauss-Newton-type approximation

B = J^T (∇²_q J) J   (32)

or by the partial secant update

B = J^T (∇²_q J) J + S,   (33)

where S is a least-change secant approximation to Σ_{i=1}^{r} [ (∂J/∂φ̂_i) ∇²_σ φ̂_i + (∂J/∂λ̂_i) ∇²_σ λ̂_i ], as described, for example, by Dennis et al. [11] for nonlinear least squares problems. Further modifications to B are also performed to assure that it is safely positive definite (thus ensuring a descent direction); see [24].
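The positive-definiteness safeguard can be as simple as a symmetric eigenvalue shift. The helper below is a generic sketch of one common choice, not necessarily the modification used by the authors:

```python
import numpy as np

def make_spd(B, delta=1e-8):
    """Symmetrize B and, if needed, shift its spectrum so that the
    smallest eigenvalue is at least delta (safely positive definite)."""
    Bs = 0.5 * (B + B.T)
    lmin = np.linalg.eigvalsh(Bs).min()
    if lmin < delta:
        Bs = Bs + (delta - lmin) * np.eye(B.shape[0])
    return Bs

B = np.array([[1.0, 2.0], [2.0, -3.0]])    # an indefinite example
Bspd = make_spd(B)
assert np.linalg.eigvalsh(Bspd).min() >= 1e-9
```

With B positive definite, the direction p = -B^{-1} ∇_σ J is guaranteed to be a descent direction for J.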

Algorithm 3.1: Krylov-based H2 minimization
1) Choose initial shifts σ^(0) = [σ_1^(0), ..., σ_r^(0)]^T.
2) For k = 0, 1, 2, ... until convergence:
   a) Compute ∇_σ J(σ^(k)) using (30).
   b) Compute the Hessian approximation B_k using (32) or (33); if necessary, modify B_k to be positive definite.
   c) Compute the modified Newton direction p^(k) = -B_k^{-1} ∇_σ J(σ^(k)).
   d) Compute a step length α^(k) satisfying the Wolfe conditions, i.e., with σ_+^(k) = σ^(k) + α^(k) p^(k),
      J(σ_+^(k)) ≤ J(σ^(k)) + c_1 α^(k) ∇_σ J(σ^(k))^T p^(k)  and
      ∇_σ J(σ_+^(k))^T p^(k) ≥ c_2 ∇_σ J(σ^(k))^T p^(k),
      with 0 < c_1 < c_2 < 1.
   e) Update σ^(k+1) = σ^(k) + α^(k) p^(k).
   f) Compute the reduced order model H_r^(k+1) for the new shifts σ^(k+1).
3) End.

The following theorem is a direct consequence of Algorithm 3.1 and Theorem 2.3.

Theorem 3.1: Given H(s) = c(sI - A)^{-1} b, let the r-th order reduced model H_r^(k) be obtained via Algorithm 3.1, with corresponding H2 error J^(k) = ||H - H_r^(k)||²_{H2}. Assume that the initial reduced model H_r^(0) is stable and that H_r^(k)(s) is stable for each k. Then

J^(k+1) ≤ J^(k),  k = 0, 1, 2, ...,  and  lim_{k→∞} ∇_σ J(σ^(k)) = 0.

Moreover, if, upon convergence, the Hessian ∇²_σ J is positive definite, then Algorithm 3.1 converges to a local minimizer.

Remark 3.1: In computing the step length α^(k), we only require that the final step length satisfy the Wolfe conditions, even though different ways of accomplishing this are available. The details of how the line search is performed are omitted; we refer the reader to [24].

Remark 3.2: This is the first Krylov-based descent algorithm for optimal H2 model reduction. Previous descent approaches require solving two large-scale Lyapunov equations to compute the search direction; in our case, the dominant cost in calculating search directions is sparse linear solves. To our knowledge, this is also the first approach in which the Hessian (hence a Newton direction) is used in optimal H2 approximation.
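To make the overall loop concrete, here is a heavily simplified, runnable stand-in for Algorithm 3.1 on assumed toy data: the analytic gradient (30) is replaced by finite differences, the modified Newton step by plain steepest descent, and the Wolfe line search by Armijo backtracking. It illustrates only the shift-space descent structure, not the authors' implementation; the symmetric toy system (c = b^T) is chosen so that the projected model is provably stable at every iterate:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(4)
n, r = 20, 2
A = -np.diag(rng.uniform(0.5, 5.0, n))   # symmetric stable toy system
b = rng.standard_normal((n, 1))
c = b.T                                  # c = b^T: projection stays stable

def reduce_model(sigma):                 # rational Krylov projection (9)-(10)
    V = np.hstack([np.linalg.solve(s * np.eye(n) - A, b) for s in sigma])
    W = np.hstack([np.linalg.solve(s * np.eye(n) - A.T, c.T) for s in sigma])
    Z = W @ np.linalg.inv(W.T @ V).T
    return Z.T @ A @ V, Z.T @ b, c @ V

def J(sigma):                            # H2 error of the projected model
    Ar, br, cr = reduce_model(sigma)
    if np.linalg.eigvals(Ar).real.max() >= 0:
        return np.inf                    # reject unstable reduced models
    Ae = np.block([[A, np.zeros((n, r))], [np.zeros((r, n)), Ar]])
    be = np.vstack([b, br]); ce = np.hstack([c, -cr])
    P = solve_continuous_lyapunov(Ae, -be @ be.T)
    return (ce @ P @ ce.T).item()

def grad(sigma, h=1e-6):                 # finite-difference stand-in for (30)
    g = np.zeros(r)
    for j in range(r):
        e = np.zeros(r); e[j] = h
        g[j] = (J(sigma + e) - J(sigma - e)) / (2 * h)
    return g

sigma = np.array([1.0, 3.0])             # initial shifts
J0 = J(sigma)
for k in range(30):
    g = grad(sigma)
    p = -g                               # steepest-descent direction
    alpha = 1.0                          # Armijo backtracking line search
    while alpha > 1e-12 and J(sigma + alpha * p) > J(sigma) - 1e-4 * alpha * (g @ g):
        alpha *= 0.5
    if J(sigma + alpha * p) >= J(sigma):
        break                            # no further descent found
    sigma = sigma + alpha * p

assert np.isfinite(J(sigma)) and J(sigma) <= J0   # monotone descent
```

Each accepted step strictly decreases the H2 error, mirroring the monotone-descent property of Theorem 3.1; the paper's actual method replaces both the gradient and the search direction with the far cheaper and faster analytic Newton machinery.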
IV. NUMERICAL EXAMPLES

A. A low order example

In this example, we illustrate the application of the proposed method on a simple fourth order system from [30], with state-space representation (A, b, c^T) as given there. We reduce the order to r = 3, 2, 1 using the proposed Krylov-based descent algorithm with line search, i.e., Algorithm 3.1. The resulting relative H2 error norms, ||H - H_r||_{H2}/||H||_{H2}, are tabulated in Table I. In each case, the method attains the minimal H2 error norms reported in [30], [20], while staying within the numerically effective Krylov framework and guaranteeing descent at each step. The convergence behavior of ||H - H_r||_{H2}/||H||_{H2} and ||∇_σ J|| for r = 1, 2, and 3 is shown, respectively, in Figures 1, 2, and 3. In each case, convergence is very fast and the quadratic behavior of the Newton step is clearly reflected.

TABLE I: Relative H2 errors for r = 1, 2, 3.

Fig. 1. Evolution of Algorithm 3.1 for r = 1.

Fig. 2. Evolution of Algorithm 3.1 for r = 2.

B. A random model

The full-order model is a stable random model of order n = 5. We reduce the order to r = 4 using Algorithm 3.1.

Fig. 3. Evolution of Algorithm 3.1 for r = 3.

The convergence behavior for this example is shown in Figure 4. As in the previous example, the proposed method quickly converges to the optimal model while staying within the Krylov framework.

Fig. 4. Evolution of Algorithm 3.1 for the random model, r = 3.

REFERENCES

[1] L. Baratchart, M. Cardelli and M. Olivi, Identification and rational l2 approximation: a gradient algorithm, Automatica, 27, 1991.
[2] P. Fulcheri and M. Olivi, Matrix rational H2 approximation: a gradient algorithm based on Schur analysis, SIAM Journal on Control and Optimization, 36, 1998.
[3] D.C. Hyland and D.S. Bernstein, The optimal projection equations for model reduction and the relationships among the methods of Wilson, Skelton, and Moore, IEEE Trans. Automat. Contr., Vol. 30, No. 12, 1985.
[4] A. Lepschy, G.A. Mian, G. Pinato and U. Viaro, Rational L2 approximation: a non-gradient algorithm, in Proc. 30th IEEE Conference on Decision and Control, 1991.
[5] L. Meier and D.G. Luenberger, Approximation of linear constant systems, IEEE Trans. Automat. Contr., Vol. 12, 1967.
[6] A.C. Antoulas, Lectures on the approximation of linear dynamical systems, Advances in Design and Control, SIAM, Philadelphia, 2004.
[7] A.C. Antoulas, D.C. Sorensen, and S. Gugercin, A survey of model reduction methods for large-scale systems, Contemporary Mathematics, AMS Publications, 280, 2001.
[8] G.A. Baker, Jr. and P. Graves-Morris, Padé Approximants, Part I: Basic Theory, Addison-Wesley, Reading, MA, 1981.
[9] A.E. Bryson and A. Carrier, Second-order algorithm for optimal model order reduction, J. Guidance Contr. Dynam., 1990.
[10] C. De Villemagne and R.E. Skelton, Model reduction using a projection formulation, International Journal of Control, Vol. 46, 1987.
[11] J.E. Dennis, D.M. Gay and R.E. Welsch, An adaptive nonlinear least-squares algorithm, ACM Transactions on Mathematical Software, Vol. 7, 1981.
[12] P. Feldmann and R.W. Freund, Efficient linear circuit analysis by Padé approximation via the Lanczos process, IEEE Trans. Computer-Aided Design, 14, 1995.
[13] D. Gaier, Lectures on Complex Approximation, Birkhäuser, 1987.
[14] K. Gallivan, E. Grimme, and P. Van Dooren, A rational Lanczos algorithm for model reduction, Numerical Algorithms, 12(1-2): 33-63, April 1996.
[15] K. Gallivan, P. Van Dooren, and E. Grimme, On some recent developments in projection-based model reduction, in ENUMATH 97 (Heidelberg), World Scientific Publishing, River Edge, NJ, 1998.
[16] E.J. Grimme, Krylov Projection Methods for Model Reduction, Ph.D. Thesis, ECE Dept., University of Illinois, Urbana-Champaign, 1997.
[17] S. Gugercin and A.C. Antoulas, A comparative study of 7 model reduction algorithms, in Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, December 2000.
[18] S. Gugercin, Projection Methods for Model Reduction of Large-Scale Dynamical Systems, Ph.D. Dissertation, ECE Dept., Rice University, December 2002.
[19] S. Gugercin and A.C. Antoulas, An H2 error expression for the Lanczos procedure, in Proceedings of the 42nd IEEE Conference on Decision and Control, December 2003.
[20] S. Gugercin, A.C. Antoulas and C.A. Beattie, Rational Krylov methods for optimal H2 model reduction, submitted to SIMAX, 2006.
[21] Y. Halevi, Frequency weighted model reduction via optimal projection, in Proc. IEEE Conference on Decision and Control, 1990.
[22] Y. Halevi, Projection properties of the optimal L2 reduced order model, International Journal of Control, Vol. 79(4), 2006.
[23] J.G. Korvink and E.B. Rudnyi, Oberwolfach Benchmark Collection, in P. Benner, G. Golub, V. Mehrmann, and D. Sorensen, editors, Dimension Reduction of Large-Scale Systems, Lecture Notes in Computational Science and Engineering, Vol. 45, Springer-Verlag, Berlin/Heidelberg, 2005.
[24] J. Nocedal and S.J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer-Verlag, New York, 1999.
[25] J.J. Moré and D.C. Sorensen, Computing a trust region step, SIAM Journal on Scientific and Statistical Computing, Vol. 4, 1983.
[26] A. Ruhe, Rational Krylov algorithms for nonsymmetric eigenvalue problems II: matrix pairs, Linear Algebra Appl., 197, 1994.
[27] T. Penzl, Algorithms for model reduction of large dynamical systems, Technical Report SFB393/99-4, Sonderforschungsbereich 393 Numerische Simulation auf massiv parallelen Rechnern, TU Chemnitz, FRG, 1999.
[28] J.T. Spanos, M.H. Milman, and D.L. Mingori, A new algorithm for L2 optimal model reduction, Automatica, 1992.
[29] D.A. Wilson, Optimum solution of model-reduction problem, Proc. Inst. Elec. Eng., 1970.
[30] W.-Y. Yan and J. Lam, An approximate approach to H2 optimal model reduction, IEEE Transactions on Automatic Control, AC-44, 1999.
[31] A. Yousuff and R.E. Skelton, Covariance equivalent realizations with applications to model reduction of large-scale systems, in Control and Dynamic Systems, C.T. Leondes, ed., Academic Press, Vol. 22, 1985.
[32] A. Yousuff, D.A. Wagie, and R.E. Skelton, Linear system approximation via covariance equivalent realizations, Journal of Mathematical Analysis and Applications, Vol. 196.


More information

arxiv: v1 [math.na] 10 Oct 2016

arxiv: v1 [math.na] 10 Oct 2016 GREEDY GAUSS-NEWTON ALGORITHM FOR FINDING SPARSE SOLUTIONS TO NONLINEAR UNDERDETERMINED SYSTEMS OF EQUATIONS MÅRTEN GULLIKSSON AND ANNA OLEYNIK arxiv:6.395v [ath.na] Oct 26 Abstract. We consider the proble

More information

Ştefan ŞTEFĂNESCU * is the minimum global value for the function h (x)

Ştefan ŞTEFĂNESCU * is the minimum global value for the function h (x) 7Applying Nelder Mead s Optiization Algorith APPLYING NELDER MEAD S OPTIMIZATION ALGORITHM FOR MULTIPLE GLOBAL MINIMA Abstract Ştefan ŞTEFĂNESCU * The iterative deterinistic optiization ethod could not

More information

Fixed-to-Variable Length Distribution Matching

Fixed-to-Variable Length Distribution Matching Fixed-to-Variable Length Distribution Matching Rana Ali Ajad and Georg Böcherer Institute for Counications Engineering Technische Universität München, Gerany Eail: raa2463@gail.co,georg.boecherer@tu.de

More information

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION Vol. IX Uncertainty Models For Robustness Analysis - A. Garulli, A. Tesi and A. Vicino

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION Vol. IX Uncertainty Models For Robustness Analysis - A. Garulli, A. Tesi and A. Vicino UNCERTAINTY MODELS FOR ROBUSTNESS ANALYSIS A. Garulli Dipartiento di Ingegneria dell Inforazione, Università di Siena, Italy A. Tesi Dipartiento di Sistei e Inforatica, Università di Firenze, Italy A.

More information

arxiv: v1 [math.na] 20 Dec 2017

arxiv: v1 [math.na] 20 Dec 2017 Interpolatory Model Reduction of Paraeterized Bilinear Dynaical Systes Andrea Carracedo Rodriguez Serkan Gugercin Jeff Borggaard arxiv:72.738v [ath.na] 2 Dec 27 Originally subitted for publication in July

More information

Chapter 6 1-D Continuous Groups

Chapter 6 1-D Continuous Groups Chapter 6 1-D Continuous Groups Continuous groups consist of group eleents labelled by one or ore continuous variables, say a 1, a 2,, a r, where each variable has a well- defined range. This chapter explores:

More information

Recovering Data from Underdetermined Quadratic Measurements (CS 229a Project: Final Writeup)

Recovering Data from Underdetermined Quadratic Measurements (CS 229a Project: Final Writeup) Recovering Data fro Underdeterined Quadratic Measureents (CS 229a Project: Final Writeup) Mahdi Soltanolkotabi Deceber 16, 2011 1 Introduction Data that arises fro engineering applications often contains

More information

Least Squares Fitting of Data

Least Squares Fitting of Data Least Squares Fitting of Data David Eberly, Geoetric Tools, Redond WA 98052 https://www.geoetrictools.co/ This work is licensed under the Creative Coons Attribution 4.0 International License. To view a

More information

Explicit solution of the polynomial least-squares approximation problem on Chebyshev extrema nodes

Explicit solution of the polynomial least-squares approximation problem on Chebyshev extrema nodes Explicit solution of the polynoial least-squares approxiation proble on Chebyshev extrea nodes Alfredo Eisinberg, Giuseppe Fedele Dipartiento di Elettronica Inforatica e Sisteistica, Università degli Studi

More information

Finding Rightmost Eigenvalues of Large Sparse. Non-symmetric Parameterized Eigenvalue Problems. Abstract. Introduction

Finding Rightmost Eigenvalues of Large Sparse. Non-symmetric Parameterized Eigenvalue Problems. Abstract. Introduction Finding Rightost Eigenvalues of Large Sparse Non-syetric Paraeterized Eigenvalue Probles Applied Matheatics and Scientific Coputation Progra Departent of Matheatics University of Maryland, College Par,

More information

H 2 -optimal model reduction of MIMO systems

H 2 -optimal model reduction of MIMO systems H 2 -optimal model reduction of MIMO systems P. Van Dooren K. A. Gallivan P.-A. Absil Abstract We consider the problem of approximating a p m rational transfer function Hs of high degree by another p m

More information

Convexity-Based Optimization for Power-Delay Tradeoff using Transistor Sizing

Convexity-Based Optimization for Power-Delay Tradeoff using Transistor Sizing Convexity-Based Optiization for Power-Delay Tradeoff using Transistor Sizing Mahesh Ketkar, and Sachin S. Sapatnekar Departent of Electrical and Coputer Engineering University of Minnesota, Minneapolis,

More information

A model reduction approach to numerical inversion for a parabolic partial differential equation

A model reduction approach to numerical inversion for a parabolic partial differential equation Inverse Probles Inverse Probles 30 (204) 250 (33pp) doi:0.088/0266-56/30/2/250 A odel reduction approach to nuerical inversion for a parabolic partial differential equation Liliana Borcea, Vladiir Drusin

More information

Supplementary Material for Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion

Supplementary Material for Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion Suppleentary Material for Fast and Provable Algoriths for Spectrally Sparse Signal Reconstruction via Low-Ran Hanel Matrix Copletion Jian-Feng Cai Tianing Wang Ke Wei March 1, 017 Abstract We establish

More information

Lecture 9 November 23, 2015

Lecture 9 November 23, 2015 CSC244: Discrepancy Theory in Coputer Science Fall 25 Aleksandar Nikolov Lecture 9 Noveber 23, 25 Scribe: Nick Spooner Properties of γ 2 Recall that γ 2 (A) is defined for A R n as follows: γ 2 (A) = in{r(u)

More information

Boosting with log-loss

Boosting with log-loss Boosting with log-loss Marco Cusuano-Towner Septeber 2, 202 The proble Suppose we have data exaples {x i, y i ) i =... } for a two-class proble with y i {, }. Let F x) be the predictor function with the

More information

Randomized Recovery for Boolean Compressed Sensing

Randomized Recovery for Boolean Compressed Sensing Randoized Recovery for Boolean Copressed Sensing Mitra Fatei and Martin Vetterli Laboratory of Audiovisual Counication École Polytechnique Fédéral de Lausanne (EPFL) Eail: {itra.fatei, artin.vetterli}@epfl.ch

More information

Block designs and statistics

Block designs and statistics Bloc designs and statistics Notes for Math 447 May 3, 2011 The ain paraeters of a bloc design are nuber of varieties v, bloc size, nuber of blocs b. A design is built on a set of v eleents. Each eleent

More information

Convex Programming for Scheduling Unrelated Parallel Machines

Convex Programming for Scheduling Unrelated Parallel Machines Convex Prograing for Scheduling Unrelated Parallel Machines Yossi Azar Air Epstein Abstract We consider the classical proble of scheduling parallel unrelated achines. Each job is to be processed by exactly

More information

NORMAL MATRIX POLYNOMIALS WITH NONSINGULAR LEADING COEFFICIENTS

NORMAL MATRIX POLYNOMIALS WITH NONSINGULAR LEADING COEFFICIENTS NORMAL MATRIX POLYNOMIALS WITH NONSINGULAR LEADING COEFFICIENTS NIKOLAOS PAPATHANASIOU AND PANAYIOTIS PSARRAKOS Abstract. In this paper, we introduce the notions of weakly noral and noral atrix polynoials,

More information

Solving initial value problems by residual power series method

Solving initial value problems by residual power series method Theoretical Matheatics & Applications, vol.3, no.1, 13, 199-1 ISSN: 179-9687 (print), 179-979 (online) Scienpress Ltd, 13 Solving initial value probles by residual power series ethod Mohaed H. Al-Sadi

More information

COS 424: Interacting with Data. Written Exercises

COS 424: Interacting with Data. Written Exercises COS 424: Interacting with Data Hoework #4 Spring 2007 Regression Due: Wednesday, April 18 Written Exercises See the course website for iportant inforation about collaboration and late policies, as well

More information

On Conditions for Linearity of Optimal Estimation

On Conditions for Linearity of Optimal Estimation On Conditions for Linearity of Optial Estiation Erah Akyol, Kuar Viswanatha and Kenneth Rose {eakyol, kuar, rose}@ece.ucsb.edu Departent of Electrical and Coputer Engineering University of California at

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee227c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee227c@berkeley.edu October

More information

Research Article Data Reduction with Quantization Constraints for Decentralized Estimation in Wireless Sensor Networks

Research Article Data Reduction with Quantization Constraints for Decentralized Estimation in Wireless Sensor Networks Matheatical Probles in Engineering Volue 014, Article ID 93358, 8 pages http://dxdoiorg/101155/014/93358 Research Article Data Reduction with Quantization Constraints for Decentralized Estiation in Wireless

More information

Lecture 21. Interior Point Methods Setup and Algorithm

Lecture 21. Interior Point Methods Setup and Algorithm Lecture 21 Interior Point Methods In 1984, Kararkar introduced a new weakly polynoial tie algorith for solving LPs [Kar84a], [Kar84b]. His algorith was theoretically faster than the ellipsoid ethod and

More information

A Generalized Permanent Estimator and its Application in Computing Multi- Homogeneous Bézout Number

A Generalized Permanent Estimator and its Application in Computing Multi- Homogeneous Bézout Number Research Journal of Applied Sciences, Engineering and Technology 4(23): 5206-52, 202 ISSN: 2040-7467 Maxwell Scientific Organization, 202 Subitted: April 25, 202 Accepted: May 3, 202 Published: Deceber

More information

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley osig 1 Winter Seester 2018 Lesson 6 27 February 2018 Outline Perceptrons and Support Vector achines Notation...2 Linear odels...3 Lines, Planes

More information

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization Recent Researches in Coputer Science Support Vector Machine Classification of Uncertain and Ibalanced data using Robust Optiization RAGHAV PAT, THEODORE B. TRAFALIS, KASH BARKER School of Industrial Engineering

More information

On the Communication Complexity of Lipschitzian Optimization for the Coordinated Model of Computation

On the Communication Complexity of Lipschitzian Optimization for the Coordinated Model of Computation journal of coplexity 6, 459473 (2000) doi:0.006jco.2000.0544, available online at http:www.idealibrary.co on On the Counication Coplexity of Lipschitzian Optiization for the Coordinated Model of Coputation

More information

Konrad-Zuse-Zentrum für Informationstechnik Berlin Heilbronner Str. 10, D Berlin - Wilmersdorf

Konrad-Zuse-Zentrum für Informationstechnik Berlin Heilbronner Str. 10, D Berlin - Wilmersdorf Konrad-Zuse-Zentru für Inforationstechnik Berlin Heilbronner Str. 10, D-10711 Berlin - Wilersdorf Folkar A. Borneann On the Convergence of Cascadic Iterations for Elliptic Probles SC 94-8 (Marz 1994) 1

More information

P016 Toward Gauss-Newton and Exact Newton Optimization for Full Waveform Inversion

P016 Toward Gauss-Newton and Exact Newton Optimization for Full Waveform Inversion P016 Toward Gauss-Newton and Exact Newton Optiization for Full Wavefor Inversion L. Métivier* ISTerre, R. Brossier ISTerre, J. Virieux ISTerre & S. Operto Géoazur SUMMARY Full Wavefor Inversion FWI applications

More information

paper prepared for the 1996 PTRC Conference, September 2-6, Brunel University, UK ON THE CALIBRATION OF THE GRAVITY MODEL

paper prepared for the 1996 PTRC Conference, September 2-6, Brunel University, UK ON THE CALIBRATION OF THE GRAVITY MODEL paper prepared for the 1996 PTRC Conference, Septeber 2-6, Brunel University, UK ON THE CALIBRATION OF THE GRAVITY MODEL Nanne J. van der Zijpp 1 Transportation and Traffic Engineering Section Delft University

More information

Research Article Approximate Multidegree Reduction of λ-bézier Curves

Research Article Approximate Multidegree Reduction of λ-bézier Curves Matheatical Probles in Engineering Volue 6 Article ID 87 pages http://dxdoiorg//6/87 Research Article Approxiate Multidegree Reduction of λ-bézier Curves Gang Hu Huanxin Cao and Suxia Zhang Departent of

More information

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices CS71 Randoness & Coputation Spring 018 Instructor: Alistair Sinclair Lecture 13: February 7 Disclaier: These notes have not been subjected to the usual scrutiny accorded to foral publications. They ay

More information

Least squares fitting with elliptic paraboloids

Least squares fitting with elliptic paraboloids MATHEMATICAL COMMUNICATIONS 409 Math. Coun. 18(013), 409 415 Least squares fitting with elliptic paraboloids Heluth Späth 1, 1 Departent of Matheatics, University of Oldenburg, Postfach 503, D-6 111 Oldenburg,

More information

Adaptive Stabilization of a Class of Nonlinear Systems With Nonparametric Uncertainty

Adaptive Stabilization of a Class of Nonlinear Systems With Nonparametric Uncertainty IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 46, NO. 11, NOVEMBER 2001 1821 Adaptive Stabilization of a Class of Nonlinear Systes With Nonparaetric Uncertainty Aleander V. Roup and Dennis S. Bernstein

More information

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval Unifor Approxiation and Bernstein Polynoials with Coefficients in the Unit Interval Weiang Qian and Marc D. Riedel Electrical and Coputer Engineering, University of Minnesota 200 Union St. S.E. Minneapolis,

More information

Stochastic Subgradient Methods

Stochastic Subgradient Methods Stochastic Subgradient Methods Lingjie Weng Yutian Chen Bren School of Inforation and Coputer Science University of California, Irvine {wengl, yutianc}@ics.uci.edu Abstract Stochastic subgradient ethods

More information

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search Quantu algoriths (CO 781, Winter 2008) Prof Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search ow we begin to discuss applications of quantu walks to search algoriths

More information

Weighted- 1 minimization with multiple weighting sets

Weighted- 1 minimization with multiple weighting sets Weighted- 1 iniization with ultiple weighting sets Hassan Mansour a,b and Özgür Yılaza a Matheatics Departent, University of British Colubia, Vancouver - BC, Canada; b Coputer Science Departent, University

More information

Effective joint probabilistic data association using maximum a posteriori estimates of target states

Effective joint probabilistic data association using maximum a posteriori estimates of target states Effective joint probabilistic data association using axiu a posteriori estiates of target states 1 Viji Paul Panakkal, 2 Rajbabu Velurugan 1 Central Research Laboratory, Bharat Electronics Ltd., Bangalore,

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

Experimental Design For Model Discrimination And Precise Parameter Estimation In WDS Analysis

Experimental Design For Model Discrimination And Precise Parameter Estimation In WDS Analysis City University of New York (CUNY) CUNY Acadeic Works International Conference on Hydroinforatics 8-1-2014 Experiental Design For Model Discriination And Precise Paraeter Estiation In WDS Analysis Giovanna

More information

Kernel Methods and Support Vector Machines

Kernel Methods and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley ENSIAG 2 / osig 1 Second Seester 2012/2013 Lesson 20 2 ay 2013 Kernel ethods and Support Vector achines Contents Kernel Functions...2 Quadratic

More information

e-companion ONLY AVAILABLE IN ELECTRONIC FORM

e-companion ONLY AVAILABLE IN ELECTRONIC FORM OPERATIONS RESEARCH doi 10.1287/opre.1070.0427ec pp. ec1 ec5 e-copanion ONLY AVAILABLE IN ELECTRONIC FORM infors 07 INFORMS Electronic Copanion A Learning Approach for Interactive Marketing to a Custoer

More information

Introduction to Machine Learning. Recitation 11

Introduction to Machine Learning. Recitation 11 Introduction to Machine Learning Lecturer: Regev Schweiger Recitation Fall Seester Scribe: Regev Schweiger. Kernel Ridge Regression We now take on the task of kernel-izing ridge regression. Let x,...,

More information

Ufuk Demirci* and Feza Kerestecioglu**

Ufuk Demirci* and Feza Kerestecioglu** 1 INDIRECT ADAPTIVE CONTROL OF MISSILES Ufuk Deirci* and Feza Kerestecioglu** *Turkish Navy Guided Missile Test Station, Beykoz, Istanbul, TURKEY **Departent of Electrical and Electronics Engineering,

More information

Decentralized Adaptive Control of Nonlinear Systems Using Radial Basis Neural Networks

Decentralized Adaptive Control of Nonlinear Systems Using Radial Basis Neural Networks 050 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 44, NO., NOVEMBER 999 Decentralized Adaptive Control of Nonlinear Systes Using Radial Basis Neural Networks Jeffrey T. Spooner and Kevin M. Passino Abstract

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE7C (Spring 018: Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee7c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee7c@berkeley.edu October 15,

More information

Use of PSO in Parameter Estimation of Robot Dynamics; Part One: No Need for Parameterization

Use of PSO in Parameter Estimation of Robot Dynamics; Part One: No Need for Parameterization Use of PSO in Paraeter Estiation of Robot Dynaics; Part One: No Need for Paraeterization Hossein Jahandideh, Mehrzad Navar Abstract Offline procedures for estiating paraeters of robot dynaics are practically

More information

Using a De-Convolution Window for Operating Modal Analysis

Using a De-Convolution Window for Operating Modal Analysis Using a De-Convolution Window for Operating Modal Analysis Brian Schwarz Vibrant Technology, Inc. Scotts Valley, CA Mark Richardson Vibrant Technology, Inc. Scotts Valley, CA Abstract Operating Modal Analysis

More information

Compressive Distilled Sensing: Sparse Recovery Using Adaptivity in Compressive Measurements

Compressive Distilled Sensing: Sparse Recovery Using Adaptivity in Compressive Measurements 1 Copressive Distilled Sensing: Sparse Recovery Using Adaptivity in Copressive Measureents Jarvis D. Haupt 1 Richard G. Baraniuk 1 Rui M. Castro 2 and Robert D. Nowak 3 1 Dept. of Electrical and Coputer

More information

Testing equality of variances for multiple univariate normal populations

Testing equality of variances for multiple univariate normal populations University of Wollongong Research Online Centre for Statistical & Survey Methodology Working Paper Series Faculty of Engineering and Inforation Sciences 0 esting equality of variances for ultiple univariate

More information

Numerical Studies of a Nonlinear Heat Equation with Square Root Reaction Term

Numerical Studies of a Nonlinear Heat Equation with Square Root Reaction Term Nuerical Studies of a Nonlinear Heat Equation with Square Root Reaction Ter Ron Bucire, 1 Karl McMurtry, 1 Ronald E. Micens 2 1 Matheatics Departent, Occidental College, Los Angeles, California 90041 2

More information

The Hilbert Schmidt version of the commutator theorem for zero trace matrices

The Hilbert Schmidt version of the commutator theorem for zero trace matrices The Hilbert Schidt version of the coutator theore for zero trace atrices Oer Angel Gideon Schechtan March 205 Abstract Let A be a coplex atrix with zero trace. Then there are atrices B and C such that

More information

Scalable Symbolic Model Order Reduction

Scalable Symbolic Model Order Reduction Scalable Sybolic Model Order Reduction Yiyu Shi Lei He -J Richard Shi Electrical Engineering Dept, ULA Electrical Engineering Dept, UW Los Angeles, alifornia, 924 Seattle, WA, 985 {yshi, lhe}eeuclaedu

More information

ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE

ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE CHRISTOPHER J. HILLAR Abstract. A long-standing conjecture asserts that the polynoial p(t = Tr(A + tb ] has nonnegative coefficients whenever is

More information

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS A Thesis Presented to The Faculty of the Departent of Matheatics San Jose State University In Partial Fulfillent of the Requireents

More information

A RESTARTED KRYLOV SUBSPACE METHOD FOR THE EVALUATION OF MATRIX FUNCTIONS. 1. Introduction. The evaluation of. f(a)b, where A C n n, b C n (1.

A RESTARTED KRYLOV SUBSPACE METHOD FOR THE EVALUATION OF MATRIX FUNCTIONS. 1. Introduction. The evaluation of. f(a)b, where A C n n, b C n (1. A RESTARTED KRYLOV SUBSPACE METHOD FOR THE EVALUATION OF MATRIX FUNCTIONS MICHAEL EIERMANN AND OLIVER G. ERNST Abstract. We show how the Arnoldi algorith for approxiating a function of a atrix ties a vector

More information

Bernoulli Wavelet Based Numerical Method for Solving Fredholm Integral Equations of the Second Kind

Bernoulli Wavelet Based Numerical Method for Solving Fredholm Integral Equations of the Second Kind ISSN 746-7659, England, UK Journal of Inforation and Coputing Science Vol., No., 6, pp.-9 Bernoulli Wavelet Based Nuerical Method for Solving Fredhol Integral Equations of the Second Kind S. C. Shiralashetti*,

More information

Constrained Consensus and Optimization in Multi-Agent Networks arxiv: v2 [math.oc] 17 Dec 2008

Constrained Consensus and Optimization in Multi-Agent Networks arxiv: v2 [math.oc] 17 Dec 2008 LIDS Report 2779 1 Constrained Consensus and Optiization in Multi-Agent Networks arxiv:0802.3922v2 [ath.oc] 17 Dec 2008 Angelia Nedić, Asuan Ozdaglar, and Pablo A. Parrilo February 15, 2013 Abstract We

More information

Order Recursion Introduction Order versus Time Updates Matrix Inversion by Partitioning Lemma Levinson Algorithm Interpretations Examples

Order Recursion Introduction Order versus Time Updates Matrix Inversion by Partitioning Lemma Levinson Algorithm Interpretations Examples Order Recursion Introduction Order versus Tie Updates Matrix Inversion by Partitioning Lea Levinson Algorith Interpretations Exaples Introduction Rc d There are any ways to solve the noral equations Solutions

More information

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels Extension of CSRSM for the Paraetric Study of the Face Stability of Pressurized Tunnels Guilhe Mollon 1, Daniel Dias 2, and Abdul-Haid Soubra 3, M.ASCE 1 LGCIE, INSA Lyon, Université de Lyon, Doaine scientifique

More information

An Improved Particle Filter with Applications in Ballistic Target Tracking

An Improved Particle Filter with Applications in Ballistic Target Tracking Sensors & ransducers Vol. 72 Issue 6 June 204 pp. 96-20 Sensors & ransducers 204 by IFSA Publishing S. L. http://www.sensorsportal.co An Iproved Particle Filter with Applications in Ballistic arget racing

More information

The Solution of One-Phase Inverse Stefan Problem. by Homotopy Analysis Method

The Solution of One-Phase Inverse Stefan Problem. by Homotopy Analysis Method Applied Matheatical Sciences, Vol. 8, 214, no. 53, 2635-2644 HIKARI Ltd, www.-hikari.co http://dx.doi.org/1.12988/as.214.43152 The Solution of One-Phase Inverse Stefan Proble by Hootopy Analysis Method

More information

Asynchronous Gossip Algorithms for Stochastic Optimization

Asynchronous Gossip Algorithms for Stochastic Optimization Asynchronous Gossip Algoriths for Stochastic Optiization S. Sundhar Ra ECE Dept. University of Illinois Urbana, IL 680 ssrini@illinois.edu A. Nedić IESE Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu

More information

H 2 optimal model reduction - Wilson s conditions for the cross-gramian

H 2 optimal model reduction - Wilson s conditions for the cross-gramian H 2 optimal model reduction - Wilson s conditions for the cross-gramian Ha Binh Minh a, Carles Batlle b a School of Applied Mathematics and Informatics, Hanoi University of Science and Technology, Dai

More information

A Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair

A Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair Proceedings of the 6th SEAS International Conference on Siulation, Modelling and Optiization, Lisbon, Portugal, Septeber -4, 006 0 A Siplified Analytical Approach for Efficiency Evaluation of the eaving

More information

An iterative SVD-Krylov based method for model reduction of large-scale dynamical systems

An iterative SVD-Krylov based method for model reduction of large-scale dynamical systems Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 1964 1986 www.elsevier.com/locate/laa An iterative SVD-Krylov based method for model reduction of large-scale dynamical

More information

The Methods of Solution for Constrained Nonlinear Programming

The Methods of Solution for Constrained Nonlinear Programming Research Inventy: International Journal Of Engineering And Science Vol.4, Issue 3(March 2014), PP 01-06 Issn (e): 2278-4721, Issn (p):2319-6483, www.researchinventy.co The Methods of Solution for Constrained

More information

The Fundamental Basis Theorem of Geometry from an algebraic point of view

The Fundamental Basis Theorem of Geometry from an algebraic point of view Journal of Physics: Conference Series PAPER OPEN ACCESS The Fundaental Basis Theore of Geoetry fro an algebraic point of view To cite this article: U Bekbaev 2017 J Phys: Conf Ser 819 012013 View the article

More information

Average Consensus and Gossip Algorithms in Networks with Stochastic Asymmetric Communications

Average Consensus and Gossip Algorithms in Networks with Stochastic Asymmetric Communications Average Consensus and Gossip Algoriths in Networks with Stochastic Asyetric Counications Duarte Antunes, Daniel Silvestre, Carlos Silvestre Abstract We consider that a set of distributed agents desire

More information

Design of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding

Design of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding IEEE TRANSACTIONS ON INFORMATION THEORY (SUBMITTED PAPER) 1 Design of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding Lai Wei, Student Meber, IEEE, David G. M. Mitchell, Meber, IEEE, Thoas

More information

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical IEEE TRANSACTIONS ON INFORMATION THEORY Large Alphabet Source Coding using Independent Coponent Analysis Aichai Painsky, Meber, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE arxiv:67.7v [cs.it] Jul

More information