ANALYSIS OF SOME KRYLOV SUBSPACE METHODS FOR NORMAL MATRICES VIA APPROXIMATION THEORY AND CONVEX OPTIMIZATION

M. BELLALIJ, Y. SAAD, AND H. SADOK

Dedicated to Gerard Meurant on the occasion of his 60th birthday

Abstract. Krylov subspace methods are strongly related to polynomial spaces, and their convergence analysis can often be derived naturally from approximation theory. Analyses of this type lead to discrete min-max approximation problems over the spectrum of the matrix, from which upper bounds on the relative Euclidean residual norm are derived. A second approach to analyzing the convergence rate of the GMRES method or the Arnoldi iteration uses as a primary indicator the (1,1) entry of the inverse of K_m* K_m, where K_m is the Krylov matrix, i.e., the matrix whose column vectors are the first m vectors of the Krylov sequence. This viewpoint makes it possible to provide, among other things, a convergence analysis for normal matrices using constrained convex optimization. The goal of this paper is to explore the relationships between these two approaches. Specifically, we show that for normal matrices the Karush-Kuhn-Tucker (KKT) optimality conditions derived from the convex maximization problem and the characterization properties of the polynomial of best approximation on a finite set of points are identical. Therefore, these two approaches are mathematically equivalent. In developing tools to prove our main result, we will also give an improved upper bound on the distance of a given eigenvector from a Krylov subspace.

Key words. Krylov subspaces, polynomials of best approximation, min-max problem, interpolation, convex optimization, KKT optimality conditions

1. Introduction. This paper is concerned with the study of the convergence of Krylov subspace methods for solving linear systems of equations,

    Ax = b,    (1.1)

or eigenvalue problems,

    Au = λu.    (1.2)

Here, A is a given matrix of size N x N, possibly complex. These are projection methods onto Krylov subspaces

    K_m(A, v) = span{ v, Av, ..., A^{m-1} v },

generated by v and A, where v ∈ C^N is a given initial vector. A wide variety of iterative
methods fall within the Krylov subspace framework. This paper focuses on two methods for non-Hermitian matrix problems: Arnoldi's method [15], which computes eigenvalues and eigenvectors of A, and the generalized minimal residual method (GMRES) [14], which solves linear systems of equations. GMRES extracts an approximate solution x^(m) from the affine subspace x^(0) + K_m(A, r^(0)), where r^(0) = b - Ax^(0) is the initial residual and x^(0) ∈ C^N is a given initial approximate

---
Université de Valenciennes, Le Mont Houy, 59313 Valenciennes Cedex, France. E-mail: Mohamed.Bellalij@univ-valenciennes.fr
University of Minnesota, Dept. of Computer Science and Engineering. E-mail: saad@cs.umn.edu. Work supported by NSF under grant ACI, and by the Minnesota Supercomputing Institute.
LMPA, Université du Littoral, 50 rue F. Buisson, BP 699, 62228 Calais Cedex, France. E-mail: sadok@lmpa.univ-littoral.fr

solution of (1.1), by requiring that this approximation yield a minimum residual norm. The question of estimating the convergence rate of iterative methods of this type has received much attention in the past and is still an active area of research. Researchers have taken different paths to answer this question, which is a very hard one in the non-normal case. Numerous papers have dealt with this issue by deriving upper bounds for the residual norm, which reveal some intrinsic links between the convergence properties and the spectral information available for A. The standard technique in most of these works [6, 5, 16, 15, 8, 2, 3] is based on a polynomial approach. More precisely, the link between residual vectors and polynomials inspired a search for bounds on the residual norm derived from analytic properties of certain normalized associated polynomials, viewed as functions defined on the complex plane.

In recent years a different approach, taking a purely algebraic point of view, was advocated for studying the convergence of the GMRES method. This approach, discussed initially by Sadok [17, 18] and Ipsen [7], and followed by Zavorin, O'Leary and Elman [21] and Liesen and Tichy [10], is distinct from the one based on approximation theory. Related theoretical residual bounds have been established by exploring certain classes of matrices, in an attempt to explain the obscure behavior of this method, in particular the stagnation phenomenon. Nevertheless, a great number of open questions remain. Exploiting results shown in [18], we recently presented in [1] an alternative way to analyze the convergence of the GMRES and Arnoldi methods, based on an expression for the residual norm in terms of determinants of Krylov matrices. An appealing feature of this viewpoint is that it allows us to provide, in particular, a thorough analysis of the convergence for normal matrices, using results from constrained convex optimization. It provides an upper bound for the residual norm, at any step, which can be expressed as a product of relative
eigenvalue differences. The purpose of the present work is to show the connection between these two approaches: on the one hand the min-max polynomial approximation viewpoint, and on the other the constrained convex optimization viewpoint. Specifically, we establish that the Karush-Kuhn-Tucker (KKT) optimality conditions derived from the convex maximization problem and the characterization properties of the polynomial of best approximation on a finite set of points are mathematically equivalent.

The paper is organized as follows. Section 2 sets the notation and states the main result, which will be key to showing the connection between the approximation theory viewpoint on the one hand and convex optimization on the other. Also included in the same section is a useful lemma whose application to the GMRES and Arnoldi cases leads to the introduction of the optimization viewpoint. Sections 3 and 4 begin with brief reviews of the two Krylov methods under consideration and then derive upper bounds for the GMRES and Arnoldi algorithms, respectively. Results concerning the characterization of the polynomial of best approximation on finite point sets are discussed in Section 5 and then applied to our situation. Section 6 outlines the proof of the main result by examining the KKT optimality conditions derived from the convex maximization problem, and establishes the equivalence of the two formulations. Finally, a few concluding remarks are made in Section 7.

2. Preliminaries and statement of the main result. Throughout the paper it is assumed that the matrix under consideration, namely A, is normal. In addition, all results hold under the assumption that exact arithmetic is used. The Euclidean two-norm on C^N and the matrix norm induced by it will be denoted by ||.||. The identity matrix of order m (resp. N) is denoted by I_m (resp. I). We use e_i to denote

the i-th column of the identity of appropriate order.

Let A ∈ C^{N x N} be a complex matrix with spectrum σ(A) = {λ_1, λ_2, ..., λ_N}. Since A is normal, there exists a unitary matrix U such that A = UΛU*, where Λ = diag(λ_1, λ_2, ..., λ_N). The superscripts T and * indicate the transpose and the conjugate transpose, respectively. The diagonal matrix with diagonal entries β_i, i = 1, ..., m, will be denoted by

    D_β = diag(β_1, β_2, ..., β_m).    (2.1)

The Krylov matrix whose columns are r^(0), A r^(0), ..., A^{m-1} r^(0) is denoted by K_m.

Polynomials, which are closely related to Krylov subspaces, will play an important role in our analysis. We denote by P_m the set of all polynomials of degree not exceeding m, and by P_m^(ω) the set of polynomials of P_m with value one at ω. Recall that for any p ∈ P_m we have p(A) = p(UΛU*) = U p(Λ) U*. For any vector µ = (µ_1, µ_2, ..., µ_M)^T we denote by V_m(µ) the rectangular Vandermonde matrix:

    V_m(µ) = [ 1   µ_1   ...   µ_1^{m-1}
               1   µ_2   ...   µ_2^{m-1}
               ...
               1   µ_M   ...   µ_M^{m-1} ].    (2.2)

For example, we will denote by V_m(λ) the matrix of size N x m whose (i, j) entry is λ_i^{j-1}, where λ_1, λ_2, ..., λ_N are the eigenvalues of A. We will also need a notation for a specific row of a matrix of the form (2.2). We define the vector function

    s_m(ω) = (1, ω, ..., ω^{m-1})^T.    (2.3)

Note that the i-th row of the Vandermonde matrix (2.2) is s_m(µ_i)^T. Finally, for a complex number z = ρ e^{iθ}, z̄ = ρ e^{-iθ} denotes the complex conjugate of z, |z| = ρ its modulus, and sgn(z) = z/|z| its sign.

2.1. Main result. The result to be presented next will be used in the convergence analysis of both GMRES and the Arnoldi method. For this reason it is stated in general terms, without reference to a specific algorithm. We denote by Δ_M the standard simplex of R^M:

    Δ_M = { γ ∈ R^M : γ ≥ 0 and e^T γ = 1 },

where e = (1, 1, ..., 1)^T ∈ R^M. The common notation γ ≥ 0 means that γ_i ≥ 0 for i = 1, ..., M. Let ω, µ_1, ..., µ_M be (M + 1) distinct complex numbers, and let m be an integer such that m + 1 < M. Define the following function of γ:

    F_{m,ω}(γ) = 1 / [ s_{m+1}(ω)* ( V_{m+1}(µ)* D_γ V_{m+1}(µ) )^{-1} s_{m+1}(ω) ].    (2.4)

Then, we can state the following.

Theorem 2.1. Let D_M ⊆ Δ_M be the domain of definition of F_{m,ω}. Then the supremum of F_{m,ω} over D_M is reached, and we have

    ( min_{p ∈ P_m^(ω)} max_{j=1,...,M} |p(µ_j)| )^2 = max_{γ ∈ Δ_M} F_{m,ω}(γ).    (2.5)

The above theorem will be helpful in linking the two approaches, one based on approximation theory and the other on optimization. It shows in effect that the min-max problem is equivalent to a convex optimization problem. The proof of this theorem is based on the Karush-Kuhn-Tucker (KKT) conditions applied to the maximization problem stated above. From these KKT conditions, we will derive two linear systems which appear in the characterization of the polynomial of best approximation.

2.2. A lemma on the projection error. Next, we state a known lemma which will be key in establishing some relations between various results. This lemma gives in simple terms an expression for what we will refer to as the projection error, i.e., the difference between a given vector and its orthogonal projection onto a subspace.

Lemma 2.2. Let X be an arbitrary subspace with a basis represented by the matrix B, and let c ∉ X. Let P be the orthogonal projector onto X. Then we have

    ||(I - P)c||^2 = 1 / ( e_1^T C^{-1} e_1 ),    (2.6)

where

    C = [ c*c   c*B
          B*c   B*B ].

Proof. The proof, given in [1], is reproduced here for completeness. Given an arbitrary vector c ∈ C^N, observe that

    ||(I - P)c||^2 = c*(I - P)*(I - P)c = c*(I - P)c = c*c - c*Pc,

with P = B(B*B)^{-1}B*. From this it follows that

    ||(I - P)c||^2 = c*c - c*B(B*B)^{-1}B*c.    (2.7)

The right-hand side of (2.7) is simply the Schur complement of the block B*B in C, which, as is well known, equals the inverse of the (1,1) entry of C^{-1}.

The expression (2.6) can also be derived from basic identities satisfied by least squares residuals, as was shown first in [19]. This was later formulated explicitly and proved in [9] by exploiting properties of the Moore-Penrose generalized inverse.

3. Convergence analysis of GMRES for normal matrices. The basic idea of the GMRES algorithm for solving linear systems is to project the problem onto the Krylov subspace of dimension m ≤ N. The GMRES algorithm starts with an initial guess x^(0) for the solution of (1.1) and seeks the
m-th approximate solution x^(m) in the affine subspace x^(m) ∈ x^(0) + K_m(A, r^(0)),

satisfying the residual norm minimization property

    ||b - Ax^(m)|| = min_{u ∈ x^(0)+K_m(A,r^(0))} ||b - Au|| = min_{z ∈ K_m(A,r^(0))} ||r^(0) - Az||.    (3.1)

As can readily be seen, this approximation is of the form x^(m) = x^(0) + p_{m-1}(A) r^(0), where p_{m-1} ∈ P_{m-1}. Therefore the residual r^(m) has the polynomial representation r^(m) = (I - A p_{m-1}(A)) r^(0), and problem (3.1) translates to

    ||r^(m)|| = min_{p ∈ P_m^(0)} ||p(A) r^(0)||.    (3.2)

Characteristic properties of GMRES are that the sequence of norms ||r^(m)|| is non-increasing in m, and that the method terminates in m steps if r^(m) = 0 and r^(m-1) ≠ 0. Moreover, we have r^(m) ≠ 0 if and only if dim(K_{m+1}(A, r^(0))) = m + 1. Therefore, in analyzing the convergence of GMRES we will assume that the Krylov matrix K_{m+1} is of rank m + 1. Before we turn our attention to the bounds for analyzing convergence, we will explore two different ways of studying the convergence rate. They will be given in terms of the so-called optimal polynomials for one, and in terms of the spectral decomposition of Krylov matrices for the other.

3.1. Analysis based on best uniform approximation. The contribution of the initial residual in (3.2) is usually simplified by exploiting the inequality ||p(A) r^(0)|| ≤ ||p(A)|| ||r^(0)||. The issue then becomes one of finding an upper bound for ||p(A)|| for all p ∈ P_m^(0). It follows that an estimate of the relative Euclidean residual norm ||r^(m)|| / ||r^(0)|| is associated with the so-called ideal GMRES polynomial problem for a matrix: min_{p ∈ P_m^(0)} ||p(A)||. If we expand r^(0) in the eigenbasis, r^(0) = Uα, then

    ||r^(m)|| ≤ ||U p(Λ) U* r^(0)|| = ||U p(Λ) α|| = ||p(Λ) α|| ≤ ||α|| max_{i=1,...,N} |p(λ_i)|,

which then shows that

    ||r^(m)|| / ||r^(0)|| ≤ min_{p ∈ P_m^(0)} max_{λ ∈ σ(A)} |p(λ)|.

3.2. Analysis based on convex optimization. Next, an alternative viewpoint for analyzing residual norms will be formulated. This alternative, developed in [18, 1], uses as a primary indicator the (1,1) entry of the inverse of K_l* K_l, where K_l is the Krylov matrix with l columns associated with the GMRES method. It is assumed that rank(K_{m+1}) = m + 1. Setting c = r^(0) and B = A K_m in Lemma 2.2 yields the following expression for the residual norm r^(m):

    ||r^(m)||^2 = 1 / ( e_1^T ( K_{m+1}* K_{m+1} )^{-1} e_1 ).    (3.3)

Assume that r^(0) has the eigen-expansion r^(0) = Uα. A
little calculation shows (see [21]) that we can write K_{m+1} as K_{m+1} = U D_α V_{m+1}(λ) (spectral factorization of K_{m+1}). We refer to (2.1) and (2.2) for the definitions of D_α and V_{m+1}(λ). Thus,

in the normal case (U*U = I), we obtain K_{m+1}* K_{m+1} = ||α||^2 V_{m+1}(λ)* D_β V_{m+1}(λ), where

    β_i = |α_i|^2 / ||α||^2 = |α_i|^2 / ||r^(0)||^2.

We can therefore rewrite (3.3) as follows:

    ||r^(m)||^2 / ||r^(0)||^2 = 1 / ( e_1^T ( V_{m+1}(λ)* D_β V_{m+1}(λ) )^{-1} e_1 ).    (3.4)

Note that β ∈ Δ_N. Thus, an optimal bound for ||r^(m)||^2 / ||r^(0)||^2 can be obtained by maximizing the right-hand side of (3.4); i.e., the maximum over β ∈ Δ_N of 1 / [ e_1^T ( V_{m+1}(λ)* D_β V_{m+1}(λ) )^{-1} e_1 ] is another upper bound for ||r^(m)||^2 / ||r^(0)||^2. In fact, thanks to Theorem 2.1, the two bounds given in this section coincide. Indeed, substituting M = N, µ_j = λ_j and ω = 0 in (2.5) yields:

    max_{β ∈ Δ_N} 1 / ( e_1^T ( V_{m+1}(λ)* D_β V_{m+1}(λ) )^{-1} e_1 ) = max_{β ∈ Δ_N} F_{m,0}(β) = ( min_{p ∈ P_m^(0)} max_{j=1,...,N} |p(λ_j)| )^2.

We end this section by stating the following assertion. Let I(β) be the set of indices j such that β_j ≠ 0. If rank(K_{m+1}) = m + 1 (r^(m) ≠ 0), then clearly the cardinality of I(β) is at least m + 1.

4. Outline of the convergence analysis for Arnoldi's method. Arnoldi's method approximates solutions of the eigenvalue problem (1.2) by computing an approximate eigenpair (λ̃^(m), ũ^(m)) obtained from the Galerkin condition:

    ũ^(m) ∈ K_m(A, v), and (A ũ^(m) - λ̃^(m) ũ^(m), A^i v) = 0 for i = 0, ..., m - 1.

Let P_m be the orthogonal projector onto the Krylov subspace K_m(A, v), and let (λ, u) be an exact eigenpair of A. In [15] the following result was shown, which analyzes the convergence of the process in terms of the projection error ||u - P_m u|| of a given eigenvector u from the subspace K_m(A, v).

Theorem 4.1. Let A_m = P_m A P_m and let γ_m = ||P_m (A - λI)(I - P_m)||. Then the residual norms of the pairs (λ, P_m u) and (λ, u) for the linear operator A_m satisfy, respectively,

    ||(A_m - λI) P_m u|| ≤ γ_m ||(I - P_m) u||,
    ||(A_m - λI) u|| ≤ sqrt( |λ|^2 + γ_m^2 ) ||(I - P_m) u||.

This theorem establishes that the convergence of the Arnoldi method can be analyzed by estimating ||(I - P_m) u||. Note that γ_m ≤ ||A||. The result shows that, under the condition that the projected problem is not too ill-conditioned, there will be an approximate eigenpair close to the exact one when ||(I - P_m) u|| is small.

4.1. Analysis based on (uniform) approximation theory. The result just discussed shows how the convergence analysis of the Arnoldi method can be stated in terms of the projection
error ||(I - P_m) u|| of the exact eigenvector u from the Krylov subspace. The usual technique for estimating this projection error assumes

that A is diagonalizable and expands the initial vector v in the eigenbasis as v = Σ_{j=1}^N α_j u_j. We examine the convergence toward a given eigenvalue, which is indexed by 1; i.e., we consider u_1, the first column of U. Adapting Lemma 6.2 from [15], stated there for diagonalizable matrices, to the special situation of normal matrices gives the following theorem.

Theorem 4.2. Let A be normal (A = UΛU*, U*U = I) and let v = Σ_{j=1}^N α_j u_j = Uα. Then

    ||(I - P_m) u_1|| ≤ ( sqrt( Σ_{j=2}^N |α_j|^2 ) / |α_1| ) ε_m^(1),    (4.1)

where

    ε_m^(1) = min_{p ∈ P_{m-1}^{(λ_1)}} max_{j=2,...,N} |p(λ_j)|.

The right-hand side of (4.1) may exceed one, but we know that ||(I - P_m) u_1|| ≤ 1, since P_m is an orthogonal projector and ||u_1|| = 1. The new bound provided next for the left-hand side of (4.1) does not exceed one. This result is based on optimization theory.

4.2. Analysis based on convex optimization. Let L_{m+1} be the (rectangular) matrix of C^{N x (m+1)} with column vectors α_1 u_1, v, A v, ..., A^{m-1} v. Lemma 2.2, with c = α_1 u_1 and B = [v, A v, ..., A^{m-1} v], yields:

    ||(I - P_m) α_1 u_1||^2 = 1 / ( e_1^T ( L_{m+1}* L_{m+1} )^{-1} e_1 ),

where it is assumed that L_{m+1} is of full rank, so that L_{m+1}* L_{m+1} is nonsingular. As with the Krylov matrix, we can write L_{m+1} as L_{m+1} = U D_α W_{m+1}, with W_{m+1} = [e_1, V_m(λ)]. Thus, in the normal case (U*U = I), we have

    ||(I - P_m) u_1||^2 |α_1|^2 / ||α||^2 = 1 / ( e_1^T ( Z_{m+1}^W(β) )^{-1} e_1 ),    (4.2)

where Z_{m+1}^W(β) ∈ C^{(m+1) x (m+1)} is the matrix Z_{m+1}^W(β) = W_{m+1}* D_β W_{m+1} and β_i = ( |α_i| / ||α|| )^2. Another formulation of ||(I - P_m) u_1|| is given next. Definitions of V_m(λ), s_m, and other quantities may be found in Section 2.

Theorem 4.3. If the matrix function Z_{m+1}^W(β) is nonsingular with β_1 > 0, then

    e_1^T ( Z_{m+1}^W(β) )^{-1} e_1 = ( 1 + β_1 ϕ_m(β̃) ) / β_1,

where β̃ = (β_2, ..., β_N)^T and the function ϕ_m is independent of β_1. More precisely, we have

    ||(I - P_m) u_1||^2 = 1 / ( 1 + β_1 ϕ_m(β̃) ),

where ϕ_m(β̃) = s_m(λ_1)* ( Ṽ* D_β̃ Ṽ )^{-1} s_m(λ_1), in which Ṽ = V_m(λ̃) with λ̃ = [λ_2, λ_3, ..., λ_N]^T.

Proof. We write the matrix Z_{m+1}^W(β) in the partitioned form

    Z_{m+1}^W(β) = [ β_1              β_1 s_m(λ_1)*
                     β_1 s_m(λ_1)    V_m(λ)* D_β V_m(λ) ].

First, using block matrix inversion and then applying the Sherman-Morrison-Woodbury formula to the (1,1) block (see, e.g., [20]) leads to:

    e_1^T ( Z_{m+1}^W(β) )^{-1} e_1 = 1/β_1 + s_m(λ_1)* ( V_m(λ)* D_β V_m(λ) - β_1 s_m(λ_1) s_m(λ_1)* )^{-1} s_m(λ_1).

Note that

    V_m(λ)* D_β V_m(λ) = Σ_{i=1}^N β_i s_m(λ_i) s_m(λ_i)*,

hence

    V_m(λ)* D_β V_m(λ) - β_1 s_m(λ_1) s_m(λ_1)* = Σ_{i=2}^N β_i s_m(λ_i) s_m(λ_i)* = Ṽ* D_β̃ Ṽ.

Therefore,

    e_1^T ( Z_{m+1}^W(β) )^{-1} e_1 = 1/β_1 + s_m(λ_1)* ( Ṽ* D_β̃ Ṽ )^{-1} s_m(λ_1).

Applying relation (4.2) results in ||(I - P_m) u_1||^2 = 1 / ( 1 + β_1 ϕ_m(β̃) ).

Next, we state a bound for ||(I - P_m) u_1|| which slightly improves the one given in Theorem 4.2.

Theorem 4.4. If m < N, (I - P_j) u_1 ≠ 0 for j ∈ {1, ..., m}, and the matrix A is normal, then

    ||(I - P_m) u_1|| ≤ ε_m^(1) / sqrt( ( |α_1| / ||α̃|| )^2 + ( ε_m^(1) )^2 ),

where α̃ = (α_2, ..., α_N)^T.

Proof. First observe that

    β_1 ϕ_m(β̃) = |α_1|^2 ϕ_m( |α_2|^2, ..., |α_N|^2 ) = ( |α_1|^2 / ||α̃||^2 ) ϕ_m( |α_2|^2/||α̃||^2, ..., |α_N|^2/||α̃||^2 ) = ( |α_1|^2 / ||α̃||^2 ) ϕ_m(γ),

where γ = (γ_1, ..., γ_{N-1}) with γ_i = |α_{i+1}|^2 / ||α̃||^2. It is easy to see that γ ∈ Δ_{N-1}. Invoking Theorem 4.3, we obtain

    ||(I - P_m) u_1||^2 ≤ 1 / ( 1 + ( |α_1|^2 / ||α̃||^2 ) min_{γ ∈ Δ_{N-1}} ϕ_m(γ) ).

Using Theorem 2.1 with M = N - 1, µ_j = λ_{j+1} and ω = λ_1 (applied with degree m - 1) leads to

    ||(I - P_m) u_1||^2 ≤ 1 / ( 1 + ( |α_1|^2 / ||α̃||^2 ) ( ε_m^(1) )^{-2} ),

and this completes the proof.
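Theorem 4.3 lends itself to a direct numerical check. The sketch below (an illustration with a hypothetical spectrum, not code from the paper) takes A diagonal, so that U = I and u_1 = e_1, computes the projection error of u_1 from K_m(A, v) directly, and compares it with 1/(1 + β_1 ϕ_m(β̃)):

```python
import numpy as np

rng = np.random.default_rng(3)
lam = np.array([1.0, 2.0, 3.0, 5.0, 8.0])   # hypothetical eigenvalues, lambda_1 = 1
N, m = 5, 2
A = np.diag(lam)                             # normal matrix with U = I, so u_1 = e_1
alpha = rng.standard_normal(N)
v = alpha                                    # v = U alpha = alpha

# Projection error of u_1 from K_m(A, v), computed directly via QR
K = np.column_stack([np.linalg.matrix_power(A, j) @ v for j in range(m)])
Q, _ = np.linalg.qr(K)
u1 = np.zeros(N); u1[0] = 1.0
err2 = np.linalg.norm(u1 - Q @ (Q.T @ u1)) ** 2

# Theorem 4.3: err2 = 1 / (1 + beta_1 * phi_m(beta~)), where
# phi_m = s_m(lam_1)* (V~* D_beta~ V~)^{-1} s_m(lam_1), V~ = V_m(lam_2, ..., lam_N)
beta = alpha ** 2 / np.linalg.norm(alpha) ** 2
sm = lam[0] ** np.arange(m)                  # s_m(lambda_1)
Vt = lam[1:, None] ** np.arange(m)           # V~ = V_m(lambda~)
phi = sm @ np.linalg.solve(Vt.T @ np.diag(beta[1:]) @ Vt, sm)
```

The two quantities agree to machine precision, and the formula makes the bound of Theorem 4.4 transparent: the error can only be reduced by making β_1 ϕ_m(β̃) large.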

5. Characterization of the polynomial of best approximation. To characterize polynomials of best uniform approximation, we will follow the treatment of Lorentz [11, Chap. 2]. Our main goal is to derive two linear systems which characterize the optimal polynomial. These systems are fundamental in establishing the link with the optimization approach to be covered in the next section.

5.1. General context. We begin this section with some additional notation. Let C(S) denote the space of complex- or real-valued continuous functions on a compact subset S of K (R or C). If g ∈ C(S), then the uniform norm of g is ||g||_∞ = max_{z ∈ S} |g(z)|. We set

    E(g, S) := { z ∈ S : |g(z)| = ||g||_∞ }.

A set Ψ = {ψ_1, ψ_2, ..., ψ_m} from C(S) is a Chebyshev system if it satisfies the Haar condition, i.e., if each polynomial p = a_1 ψ_1 + a_2 ψ_2 + ... + a_m ψ_m, with coefficients a_i not all equal to zero, has at most (m - 1) distinct zeros on S. The m-dimensional space E_m spanned by such a Ψ is called a Chebyshev space. One can verify that Ψ is a Chebyshev system if and only if, for any m distinct points z_i ∈ S, the following determinant is nonzero:

    det( ψ_j(z_i) ) := | ψ_1(z_1)  ...  ψ_m(z_1)
                         ...
                         ψ_1(z_m)  ...  ψ_m(z_m) | ≠ 0.

Let f ∈ C(S), f ∉ E_m. We say that q* ∈ E_m is a best approximation to f from E_m if ||f - q*||_∞ ≤ ||f - p||_∞ for all p ∈ E_m; in other words,

    ||f - q*||_∞ = min_{p ∈ E_m} max_{z ∈ S} |f(z) - p(z)|.

Our first result exploits the following well-known characterization of the best uniform approximation, which can be found for example in [11]. An elegant characterization of best approximations is also given in [13].

Theorem 5.1. A polynomial q* is a polynomial of best approximation to f ∈ C(S) from E_m if and only if there exist r extremal points, i.e., r points z_1, z_2, ..., z_r ∈ E(f - q*, S), and r positive scalars β_l, l = 1, ..., r, with Σ_{l=1}^r β_l = 1, where r ≤ 2m + 1 in the complex case and r ≤ m + 1 in the real case, which satisfy the equations:

    Σ_{l=1}^r β_l [ f(z_l) - q*(z_l) ] ψ̄_j(z_l) = 0,   j = 1, ..., m.    (5.1)

Two remarks regarding this result are important to make. First, it can be applied to characterize the best uniform approximation over any finite subset σ of S with at least (m + 1) points. Second, the uniqueness of the polynomial of best approximation is
guaranteed if E_m is a Chebyshev space [11, Chap. 2, p. 26]. Moreover, we have r = m + 1 if S ⊂ R, and m + 1 ≤ r ≤ 2m + 1 if S ⊂ C. This will be the case here because we will deal with polynomials of P_m. The above result can be applied to our finite min-max approximation problem:

    min_{p ∈ P_m^(ω)} max_{j=1,...,M} |p(µ_j)|.

This is the goal of the next section.
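Theorem 5.1 can be checked concretely on a small instance of the min-max problem (a hypothetical point set {1, 2, 4, 8}, m = 1, ω = 0, chosen for illustration; this is a sketch, not the paper's computation). The best polynomial p*(z) = 1 - 2z/9 equioscillates on the extremal set {1, 8}, the weights β demanded by (5.1) are (8/9, 1/9), and those same weights, viewed as a point of the simplex, attain the maximum of F_{1,0} in Theorem 2.1:

```python
import numpy as np

lam = np.array([1.0, 2.0, 4.0, 8.0])     # finite point set sigma; m = 1, omega = 0
p = lambda z: 1.0 - 2.0 * z / 9.0        # candidate best polynomial in P_1^(0)

# Equioscillation: |p*| attains its maximum 7/9 exactly at the extremal points {1, 8}
vals = np.abs(p(lam))
delta_half = vals.max()                  # ||f - q*||_inf = 7/9
extremal = lam[np.isclose(vals, delta_half)]

# Characterization (5.1) with psi_1(z) = z (real case, r = m + 1 = 2):
# beta_1 p*(1) * 1 + beta_2 p*(8) * 8 = 0 together with beta_1 + beta_2 = 1
beta = np.array([8 / 9, 1 / 9])
cond = beta[0] * p(1.0) * 1.0 + beta[1] * p(8.0) * 8.0   # should vanish

# The same weights, placed on {1, 8} inside the simplex, maximize F_{1,0}:
V = lam[:, None] ** np.arange(2)         # V_2(lam)
g = np.array([8 / 9, 0.0, 0.0, 1 / 9])   # gamma* on the standard simplex
Z = V.T @ np.diag(g) @ V
Fstar = 1.0 / np.linalg.inv(Z)[0, 0]     # equals (7/9)^2, the squared min-max value
```

The numbers work out in closed form here: F(γ) restricted to weights (b, 0, 0, 1-b) equals 49 b (1 - b) / (64 - 63 b), which is maximized at b = 8/9 with value 49/81 = (7/9)^2, exactly as (2.5) predicts.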

5.2. Application to the min-max problem. Let σ denote the set of the complex numbers µ_1, ..., µ_M and let Q_m^(ω) (m < M) denote the set of all polynomials of degree not exceeding m with value zero at ω. Let (ψ_j)_{j=1}^m be the set of polynomials of Q_m^(ω) defined by ψ_j(z) = z^j - ω^j. Our problem corresponds to taking f ≡ 1 and E_m = Q_m^(ω). This yields:

    min_{p ∈ P_m^(ω)} max_{µ ∈ σ} |p(µ)| = min_{q ∈ Q_m^(ω)} max_{µ ∈ σ} |f(µ) - q(µ)| = ||f - q*||_∞.

According to the remarks made following Theorem 5.1, the polynomial of best approximation q* ∈ Q_m^(ω) for f (with respect to σ) exists and is unique. The following theorem, which gives an equivalent formulation of relation (5.1), can now be stated. This theorem will lead to auxiliary results from which the maximum and the polynomial of best approximation can be characterized.

Theorem 5.2. The polynomial q*(z) = a_1 ψ_1(z) + a_2 ψ_2(z) + ... + a_m ψ_m(z) is the best approximation polynomial for the function f(z) ≡ 1 on σ from Q_m^(ω) if and only if there exist r extremal points, i.e., r points z_1, z_2, ..., z_r ∈ E(f - q*, σ), and r positive scalars β_l, l = 1, ..., r, with Σ_{l=1}^r β_l = 1, where r ≤ 2m + 1 in the complex case and r ≤ m + 1 in the real case, verifying:

    t_1 - Σ_{j=2}^{m+1} t_j µ_l^{j-1} = ε_l,   l = 1, ..., r,    (5.2)

with δ^{1/2} = ||f - q*||_∞, t_{j+1} = a_j / δ^{1/2} for j = 1, ..., m, and t_1 = 1/δ^{1/2} + Σ_{j=2}^{m+1} t_j ω^{j-1}, and

    Σ_{l=1}^r β_l ε̄_l = δ^{1/2},    (5.3)

    Σ_{l=1}^r β_l ε̄_l ( µ_l^j - ω^j ) = 0,   j = 1, ..., m,    (5.4)

where ε_l = sgn( (f - q*)(µ_l) ).

Proof. µ_l ∈ E(f - q*, σ) is equivalent to |1 - q*(µ_l)| = ||f - q*||_∞ = δ^{1/2}. Without loss of generality, we can assume the r extremal points to consist of the first r items of σ. According to the above definition of ε_l, the polynomial q* satisfies the following interpolation conditions:

    1 - q*(µ_l) = δ^{1/2} ε_l,   l = 1, ..., r.    (5.5)

Dividing through by δ^{1/2} and using the definitions of t_1 and t_{j+1} above, we obtain (5.2). Equation (5.5) shows that

    Σ_{l=1}^r β_l ε̄_l ψ_j(z_l) = 0,   j = 1, ..., m,

is a restatement of (5.1), which is exactly (5.4). We then have

    Σ_{l=1}^r β_l ε̄_l q(z_l) = 0   for all polynomials q ∈ Q_m^(ω).    (5.6)

Furthermore, from (5.5) we have the relation δ^{1/2} β_l = β_l ε̄_l f(z_l) - β_l ε̄_l q*(z_l), l = 1, ..., r. To see that Σ_{l=1}^r β_l = 1 is equivalent to (5.3), it suffices to sum up the terms in this relation and apply the conjugate of (5.6).

As a remark, we may transform the interpolation conditions (5.5) into π_m(µ_j) = ε_j for all j = 1, ..., r, with π_m(z) = (1/δ^{1/2}) (1 - q*(z)). Then π_m can be written in the form of the Lagrange interpolation formula

    π_m(z) = Σ_{j=1}^{m+1} ε_j l_j^{(m+1)}(z),   with   l_j^{(m+1)}(z) = Π_{k=1, k≠j}^{m+1} (z - µ_k) / (µ_j - µ_k).

Note that l_j^{(m+1)}(z) is the j-th Lagrange interpolating polynomial of degree m associated with {µ_1, ..., µ_{m+1}}. Finally, recalling that π_m(ω) = 1/δ^{1/2}, we obtain

    Σ_{j=1}^{m+1} ε_j l_j^{(m+1)}(ω) = 1/δ^{1/2}.    (5.7)

A consequence of this is that:

    min_{p ∈ P_m^(ω)} max_{µ ∈ σ} |p(µ)| = ||f - q*||_∞ = δ^{1/2} = 1 / | Σ_{j=1}^{m+1} ε_j l_j^{(m+1)}(ω) |.

6. Proof of the main result. In this section we will show that

    max_{β ∈ Δ_M} F_{m,ω}(β) = ( ε_m^(ω) )^2,

where F_{m,ω}(β) is defined in (2.4) and

    ε_m^(ω) = min_{p ∈ P_m^(ω)} max_{j=1,...,M} |p(µ_j)|.    (6.1)

The proof will be based on applying the Karush-Kuhn-Tucker (KKT) optimality conditions to our convex maximization problem. We begin by establishing the following lemma, which shows a few important properties of the function F_{m,ω}. Since there is no ambiguity, we will simplify notation by writing V_{m+1} for V_{m+1}(µ) in the remainder of this section.

Lemma 6.1. Let D_M ⊆ Δ_M be the domain of definition of the function F_{m,ω} in (2.4). Then the following properties hold:
1. F_{m,ω} is a concave function defined on the convex set D_M.
2. F_{m,ω} is differentiable at β ∈ D_M, and we have

    ∂F_{m,ω}/∂β_j (β) = ( F_{m,ω}(β) )^2 | e_j^T V_{m+1} t |^2,

where t = (t_1, t_2, ..., t_{m+1})^T is such that V_{m+1}* D_β V_{m+1} t = s_{m+1}(ω). Moreover, we have

    Σ_{i=1}^M β_i ∂F_{m,ω}/∂β_i (β) = F_{m,ω}(β).    (6.2)

Proof. We begin by proving the first property. Let r be a real positive scalar. Since D_{rβ} = r D_β, we have

    F_{m,ω}(rβ) = 1 / [ s_{m+1}(ω)* ( r V_{m+1}* D_β V_{m+1} )^{-1} s_{m+1}(ω) ] = r F_{m,ω}(β).

Thus F_{m,ω} is homogeneous of degree 1. Let β, β' ∈ D_M and 0 ≤ r ≤ 1. It is easy to see that rβ + (1 - r)β' ∈ D_M. We now take x = s_{m+1}(ω), G_1 = V_{m+1}* D_{rβ} V_{m+1} and G_2 = V_{m+1}* D_{(1-r)β'} V_{m+1} in the following version of Bergstrom's inequality, given in [12, p. 227]:

    1 / ( x* (G_1 + G_2)^{-1} x ) ≥ 1 / ( x* G_1^{-1} x ) + 1 / ( x* G_2^{-1} x ),

where G_1 and G_2 are positive definite Hermitian matrices. Then, from the homogeneity of F_{m,ω}, it follows that

    F_{m,ω}( rβ + (1 - r)β' ) ≥ r F_{m,ω}(β) + (1 - r) F_{m,ω}(β').

Hence F_{m,ω} is concave.

Next we establish the second part. Let us define Z_{m+1}(β) to be the matrix Z_{m+1}(β) = V_{m+1}* D_β V_{m+1} ∈ C^{(m+1) x (m+1)}. We have Z_{m+1}*(β) = Z_{m+1}(β) and F_{m,ω}(β) = [ s_{m+1}(ω)* Z_{m+1}^{-1}(β) s_{m+1}(ω) ]^{-1} for β ∈ D_M. Clearly, F_{m,ω} is differentiable at β ∈ D_M. Using the formula for the derivative of the inverse of a matrix function, we have

    ∂Z_{m+1}^{-1}/∂β_i (β) = - Z_{m+1}^{-1}(β) ( ∂Z_{m+1}/∂β_i (β) ) Z_{m+1}^{-1}(β).

It follows that

    ∂/∂β_i [ s_{m+1}(ω)* Z_{m+1}^{-1}(β) s_{m+1}(ω) ] = - t* V_{m+1}* e_i e_i^T V_{m+1} t,

where t is such that Z_{m+1}(β) t = s_{m+1}(ω). As a consequence,

    ∂F_{m,ω}/∂β_i (β) = t* V_{m+1}* e_i e_i^T V_{m+1} t / ( s_{m+1}(ω)* Z_{m+1}^{-1}(β) s_{m+1}(ω) )^2.

We can write more succinctly ∂F_{m,ω}/∂β_i (β) = ( F_{m,ω}(β) )^2 | e_i^T V_{m+1} t |^2. The equality (6.2) follows from a simple algebraic manipulation. Indeed, we have

    Σ_{i=1}^M β_i ∂F_{m,ω}/∂β_i (β) = ( F_{m,ω}(β) )^2 t* V_{m+1}* ( Σ_{i=1}^M β_i e_i e_i^T ) V_{m+1} t = ( F_{m,ω}(β) )^2 t* Z_{m+1}(β) t = ( F_{m,ω}(β) )^2 t* s_{m+1}(ω) = F_{m,ω}(β).
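The two properties just proved, homogeneity of degree 1 and the Euler-type identity (6.2), can be sanity-checked numerically. The sketch below uses hypothetical data and approximates the gradient by central differences (an illustration only, not the paper's computation):

```python
import numpy as np

mu = np.array([1.0, 2.0, 3.0, 4.0, 6.0])   # M = 5 distinct points (hypothetical)
m, omega = 2, 0.0

def F(beta):
    """F_{m,omega}(beta) = 1 / (s* (V* D_beta V)^{-1} s), V = V_{m+1}(mu)."""
    V = mu[:, None] ** np.arange(m + 1)
    sv = omega ** np.arange(m + 1)          # s_{m+1}(omega); here e_1 since omega = 0
    Z = V.T @ np.diag(beta) @ V
    return 1.0 / (sv @ np.linalg.solve(Z, sv))

beta = np.array([0.3, 0.2, 0.2, 0.2, 0.1])

# Homogeneity of degree 1: F(r beta) = r F(beta)
hom_ok = np.isclose(F(2.0 * beta), 2.0 * F(beta))

# Euler's identity (6.2): sum_i beta_i dF/dbeta_i = F(beta)
h = 1e-6
grad = np.array([(F(beta + h * e) - F(beta - h * e)) / (2 * h) for e in np.eye(5)])
euler_ok = np.isclose(beta @ grad, F(beta), rtol=1e-4)
```

Both checks pass, which is exactly what makes the KKT argument of the next page work: on the support of the maximizer all partial derivatives equal the multiplier δ*, and (6.2) then forces F(β*) = δ*.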

With the lemma proved, we now proceed with the proof of the theorem. Let us consider the characterization of the solution of the convex maximization problem max_{β ∈ Δ_M} F_{m,ω}(β) by means of the standard KKT conditions. We begin by defining the functions g_i(β) = -β_i and g(β) = e^T β - 1. Thus β ∈ Δ_M means that g(β) = 0 and g_i(β) ≤ 0. The Lagrangian function is therefore formulated as

    L(β, δ, η) = F_{m,ω}(β) - δ g(β) - Σ_{i=1}^M η_i g_i(β),

where δ ∈ R and η = (η_1, ..., η_M) ∈ R^M. Note that the functions g, g_i are convex (affine functions) and, by Lemma 6.1, the objective function F_{m,ω} is concave. It follows that this maximization problem can be viewed as a constrained convex optimization problem. Thus, according to the KKT conditions [4], if F_{m,ω} has a local maximizer β* in Δ_M, then there exist Lagrange multipliers δ*, η* = (η*_1, ..., η*_M) such that (β*, δ*, η*) satisfies the following conditions:

(i) ∂L/∂β_j (β*, δ*, η*) = 0;
(ii) g(β*) = 0, g_j(β*) ≤ 0, and η*_j ≥ 0 for all j = 1, ..., M;
(iii) η*_j g_j(β*) = 0 for all j = 1, ..., M.

Since β* is in Δ_M, at least m + 1 components of β* are nonzero. Thus there exist m + κ (κ ≥ 1) components of β*, which we label β*_1, β*_2, ..., β*_{m+κ} for simplicity, such that β*_j ≠ 0 for all j = 1, ..., m + κ. The complementarity conditions (iii) yield η*_j = 0 for all j = 1, ..., m + κ, and η*_j > 0 for all j = m + κ + 1, ..., M. Hence condition (i) can be re-expressed as

    ∂F_{m,ω}/∂β_j (β*) = δ*,   for j = 1, ..., m + κ,   and
    ∂F_{m,ω}/∂β_j (β*) = δ* - η*_j,   for j = m + κ + 1, ..., M.    (6.3)

Again using the formula for ∂F_{m,ω}/∂β_j (β*) given in the lemma, the relations in (6.3) become

    ( F_{m,ω}(β*) )^2 | e_j^T V_{m+1} t |^2 = δ*   for j = 1, ..., m + κ,    (6.4)

and

    ( F_{m,ω}(β*) )^2 | e_j^T V_{m+1} t |^2 = δ* - η*_j   for j = m + κ + 1, ..., M,    (6.5)

where t is such that

    V_{m+1}* D_{β*} V_{m+1} t = s_{m+1}(ω).    (6.6)

Now, observing that β*_j = 0 for all j = m + κ + 1, ..., M and Σ_{j=1}^{m+κ} β*_j = 1, and using the first part of (6.3), equation (6.2) of the lemma shows that F_{m,ω}(β*) = δ*.

The remaining part of the proof is devoted to establishing the same formulas given in Theorem 5.2 which characterize the polynomial of best approximation. In view of (6.4), we then have

    e_j^T V_{m+1} t = ε̄_j / δ*^{1/2}   for all j = 1, ..., m + κ.    (6.7)

Here we have written ε_j = e^{iθ_j} in the complex case and ε_j = ±1 in the real case. Combining [ s_{m+1}(ω)* ( V_{m+1}* D_{β*} V_{m+1} )^{-1} s_{m+1}(ω) ]^{-1} = F_{m,ω}(β*) with (6.6), we obtain s_{m+1}(ω)* t = 1/δ*. On the other hand, the optimal solutions β*_j and the numbers ε_j can be derived from (6.6). Indeed, using (6.6) and (6.7), we have

    e_j^T D_{β*} V_{m+1} t = β*_j ε̄_j / δ*^{1/2}   for all j = 1, ..., m + κ.

Therefore, applying V_{m+1}*, we have

    Σ_{j=1}^{m+κ} β*_j ε̄_j = δ*^{1/2}   and   Σ_{j=1}^{m+κ} β*_j ε̄_j µ_j^l = δ*^{1/2} ω^l = Σ_{j=1}^{m+κ} β*_j ε̄_j ω^l   for l = 1, ..., m.

Hence we find that

    Σ_{j=1}^{m+κ} β*_j ε̄_j ( µ_j^l - ω^l ) = 0   for l = 1, ..., m.

The relations (5.2), (5.4) and (5.3) of Theorem 5.2 are thus all satisfied, with f ≡ 1, z_j = µ_j, ψ_j(z) = z^j - ω^j, and r = m + κ. It follows that the Lagrange multiplier δ* is the same as the δ of the previous section. As a consequence we have δ* = F_{m,ω}(β*) = ||1 - q*||_∞^2, and (2.5) is established.

7. Concluding remarks. We have established the equivalence, for normal matrices, between the approximation theory approach on the one hand and the optimization approach on the other, for solving a min-max problem that arises in convergence studies of the GMRES and Arnoldi methods. Because of their convenient properties, the KKT equations allow us to give a complete characterization of the residual bounds at each step for both methods. They also unravel a strong connection between the two viewpoints: the KKT equations give more precise information about the extremal points than the approximation theory approach. We point out the importance of the KKT equations obtained, of which only the non-active part is needed to prove the equivalence. For the GMRES method, for example, from the active part (6.5) we infer that

    η*_j = δ* - ( F_{m,ω}(β*) )^2 | e_j^T V_{m+1}(λ) t |^2 > 0,   for j = m + κ + 1, ..., N,

so that | e_j^T V_{m+1}(λ) t | < 1/δ*^{1/2} for j = m + κ + 1, ..., N. This shows that the extremal points can be characterized by

    1/δ*^{1/2} = | e_i^T V_{m+1}(λ) t | = max_{1 ≤ j ≤ N} | e_j^T V_{m+1}(λ) t |   for i = 1, ..., m + κ.

The connections established in this paper provide new insights into the different ways in which residual bounds can be derived. It is hoped that the developments with the optimization approach will pave the way for extensions beyond the case
of normal matrices.

REFERENCES

[1] M. Bellalij, Y. Saad and H. Sadok, On the convergence of the Arnoldi process for eigenvalue problems, Technical Report umsi, Minnesota Supercomputer Institute, University of Minnesota, Minneapolis, MN, 2007.
[2] M. Eiermann and O. G. Ernst, Geometric aspects of the theory of Krylov subspace methods, Acta Numerica, 10 (2001), pp. 251-312.
[3] M. Eiermann, O. G. Ernst, and O. Schneider, Analysis of acceleration strategies for restarted minimal residual methods, J. Comput. Appl. Math., 123 (2000), pp. 261-292.

[4] R. Fletcher, Practical Methods of Optimization, 2nd Edition, Wiley, Chichester, 1987.
[5] A. Greenbaum and L. Gurvits, Max-min properties of matrix factor norms, SIAM J. Sci. Comput., 15 (1994), pp. 348-358.
[6] A. Greenbaum, Iterative Methods for Solving Linear Systems, SIAM, Philadelphia, 1997.
[7] I. C. F. Ipsen, Expressions and bounds for the GMRES residual, BIT, 40 (2000), pp. 524-535.
[8] W. Joubert, A robust GMRES-based adaptive polynomial preconditioning algorithm for nonsymmetric linear systems, SIAM J. Sci. Comput., 15 (1994), pp. 427-439.
[9] J. Liesen, M. Rozložník, and Z. Strakoš, Least squares residuals and minimal residual methods, SIAM J. Sci. Comput., 23 (2002), pp. 1503-1525.
[10] J. Liesen and P. Tichý, The worst-case GMRES for normal matrices, BIT, 44 (2004), pp. 79-98.
[11] G. G. Lorentz, Approximation of Functions, 2nd Edition, Chelsea Publishing Company, New York, 1986.
[12] J. R. Magnus and H. Neudecker, Matrix Differential Calculus with Applications in Statistics and Econometrics, 2nd Edition, Wiley, Chichester, 1999.
[13] T. J. Rivlin and H. S. Shapiro, A unified approach to certain problems of approximation and minimization, J. Soc. Indust. Appl. Math., 9 (1961), pp. 670-699.
[14] Y. Saad and M. H. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 7 (1986), pp. 856-869.
[15] Y. Saad, Numerical Methods for Large Eigenvalue Problems, Halstead Press, New York, 1992.
[16] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, 2003.
[17] H. Sadok, Méthodes de projections pour les systèmes linéaires et non linéaires, Habilitation thesis, University of Lille, Lille, France, 1994.
[18] H. Sadok, Analysis of the convergence of the minimal and the orthogonal residual methods, Numer. Algorithms, 40 (2005), pp. 101-115.
[19] G. W. Stewart, Collinearity and least squares regression, Statist. Sci., 2 (1987), pp. 68-100.
[20] F. Zhang, Matrix Theory, Springer Verlag, New York, 1999.
[21] I. Zavorin, D. O'Leary and H. Elman, Complete stagnation of GMRES, Linear Algebra and its Applications, 367 (2003), pp. 165-183.


Chapter 6 1-D Continuous Groups

Chapter 6 1-D Continuous Groups Chapter 6 1-D Continuous Groups Continuous groups consist of group eleents labelled by one or ore continuous variables, say a 1, a 2,, a r, where each variable has a well- defined range. This chapter explores:

More information

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon Model Fitting CURM Background Material, Fall 014 Dr. Doreen De Leon 1 Introduction Given a set of data points, we often want to fit a selected odel or type to the data (e.g., we suspect an exponential

More information

RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS

RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS BIT Nuerical Matheatics 43: 459 466, 2003. 2003 Kluwer Acadeic Publishers. Printed in The Netherlands 459 RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS V. SIMONCINI Dipartiento di

More information

ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE

ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE CHRISTOPHER J. HILLAR Abstract. A long-standing conjecture asserts that the polynoial p(t = Tr(A + tb ] has nonnegative coefficients whenever is

More information

NORMAL MATRIX POLYNOMIALS WITH NONSINGULAR LEADING COEFFICIENTS

NORMAL MATRIX POLYNOMIALS WITH NONSINGULAR LEADING COEFFICIENTS NORMAL MATRIX POLYNOMIALS WITH NONSINGULAR LEADING COEFFICIENTS NIKOLAOS PAPATHANASIOU AND PANAYIOTIS PSARRAKOS Abstract. In this paper, we introduce the notions of weakly noral and noral atrix polynoials,

More information

A note on the multiplication of sparse matrices

A note on the multiplication of sparse matrices Cent. Eur. J. Cop. Sci. 41) 2014 1-11 DOI: 10.2478/s13537-014-0201-x Central European Journal of Coputer Science A note on the ultiplication of sparse atrices Research Article Keivan Borna 12, Sohrab Aboozarkhani

More information

Explicit solution of the polynomial least-squares approximation problem on Chebyshev extrema nodes

Explicit solution of the polynomial least-squares approximation problem on Chebyshev extrema nodes Explicit solution of the polynoial least-squares approxiation proble on Chebyshev extrea nodes Alfredo Eisinberg, Giuseppe Fedele Dipartiento di Elettronica Inforatica e Sisteistica, Università degli Studi

More information

Kernel Methods and Support Vector Machines

Kernel Methods and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley ENSIAG 2 / osig 1 Second Seester 2012/2013 Lesson 20 2 ay 2013 Kernel ethods and Support Vector achines Contents Kernel Functions...2 Quadratic

More information

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices CS71 Randoness & Coputation Spring 018 Instructor: Alistair Sinclair Lecture 13: February 7 Disclaier: These notes have not been subjected to the usual scrutiny accorded to foral publications. They ay

More information

Using EM To Estimate A Probablity Density With A Mixture Of Gaussians

Using EM To Estimate A Probablity Density With A Mixture Of Gaussians Using EM To Estiate A Probablity Density With A Mixture Of Gaussians Aaron A. D Souza adsouza@usc.edu Introduction The proble we are trying to address in this note is siple. Given a set of data points

More information

A Bernstein-Markov Theorem for Normed Spaces

A Bernstein-Markov Theorem for Normed Spaces A Bernstein-Markov Theore for Nored Spaces Lawrence A. Harris Departent of Matheatics, University of Kentucky Lexington, Kentucky 40506-0027 Abstract Let X and Y be real nored linear spaces and let φ :

More information

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley osig 1 Winter Seester 2018 Lesson 6 27 February 2018 Outline Perceptrons and Support Vector achines Notation...2 Linear odels...3 Lines, Planes

More information

Fixed-Point Iterations, Krylov Spaces, and Krylov Methods

Fixed-Point Iterations, Krylov Spaces, and Krylov Methods Fixed-Point Iterations, Krylov Spaces, and Krylov Methods Fixed-Point Iterations Solve nonsingular linear syste: Ax = b (solution ˆx = A b) Solve an approxiate, but sipler syste: Mx = b x = M b Iprove

More information

Iterative Linear Solvers and Jacobian-free Newton-Krylov Methods

Iterative Linear Solvers and Jacobian-free Newton-Krylov Methods Eric de Sturler Iterative Linear Solvers and Jacobian-free Newton-Krylov Methods Eric de Sturler Departent of Matheatics, Virginia Tech www.ath.vt.edu/people/sturler/index.htl sturler@vt.edu Efficient

More information

Perturbation on Polynomials

Perturbation on Polynomials Perturbation on Polynoials Isaila Diouf 1, Babacar Diakhaté 1 & Abdoul O Watt 2 1 Départeent Maths-Infos, Université Cheikh Anta Diop, Dakar, Senegal Journal of Matheatics Research; Vol 5, No 3; 2013 ISSN

More information

Partitions of finite frames

Partitions of finite frames Rowan University Rowan Digital Works Theses and Dissertations 5--06 Partitions of finite fraes Jaes Michael Rosado Rowan University, jarosado09@gail.co Follow this and additional works at: http://rdw.rowan.edu/etd

More information

Sharp Time Data Tradeoffs for Linear Inverse Problems

Sharp Time Data Tradeoffs for Linear Inverse Problems Sharp Tie Data Tradeoffs for Linear Inverse Probles Saet Oyak Benjain Recht Mahdi Soltanolkotabi January 016 Abstract In this paper we characterize sharp tie-data tradeoffs for optiization probles used

More information

A new type of lower bound for the largest eigenvalue of a symmetric matrix

A new type of lower bound for the largest eigenvalue of a symmetric matrix Linear Algebra and its Applications 47 7 9 9 www.elsevier.co/locate/laa A new type of lower bound for the largest eigenvalue of a syetric atrix Piet Van Mieghe Delft University of Technology, P.O. Box

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 41, pp. 159-166, 2014. Copyright 2014,. ISSN 1068-9613. ETNA MAX-MIN AND MIN-MAX APPROXIMATION PROBLEMS FOR NORMAL MATRICES REVISITED JÖRG LIESEN AND

More information

Lecture 13 Eigenvalue Problems

Lecture 13 Eigenvalue Problems Lecture 13 Eigenvalue Probles MIT 18.335J / 6.337J Introduction to Nuerical Methods Per-Olof Persson October 24, 2006 1 The Eigenvalue Decoposition Eigenvalue proble for atrix A: Ax = λx with eigenvalues

More information

. The univariate situation. It is well-known for a long tie that denoinators of Pade approxiants can be considered as orthogonal polynoials with respe

. The univariate situation. It is well-known for a long tie that denoinators of Pade approxiants can be considered as orthogonal polynoials with respe PROPERTIES OF MULTIVARIATE HOMOGENEOUS ORTHOGONAL POLYNOMIALS Brahi Benouahane y Annie Cuyt? Keywords Abstract It is well-known that the denoinators of Pade approxiants can be considered as orthogonal

More information

The Methods of Solution for Constrained Nonlinear Programming

The Methods of Solution for Constrained Nonlinear Programming Research Inventy: International Journal Of Engineering And Science Vol.4, Issue 3(March 2014), PP 01-06 Issn (e): 2278-4721, Issn (p):2319-6483, www.researchinventy.co The Methods of Solution for Constrained

More information

Algebraic Montgomery-Yang problem: the log del Pezzo surface case

Algebraic Montgomery-Yang problem: the log del Pezzo surface case c 2014 The Matheatical Society of Japan J. Math. Soc. Japan Vol. 66, No. 4 (2014) pp. 1073 1089 doi: 10.2969/jsj/06641073 Algebraic Montgoery-Yang proble: the log del Pezzo surface case By DongSeon Hwang

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

An l 1 Regularized Method for Numerical Differentiation Using Empirical Eigenfunctions

An l 1 Regularized Method for Numerical Differentiation Using Empirical Eigenfunctions Journal of Matheatical Research with Applications Jul., 207, Vol. 37, No. 4, pp. 496 504 DOI:0.3770/j.issn:2095-265.207.04.0 Http://jre.dlut.edu.cn An l Regularized Method for Nuerical Differentiation

More information

CS Lecture 13. More Maximum Likelihood

CS Lecture 13. More Maximum Likelihood CS 6347 Lecture 13 More Maxiu Likelihood Recap Last tie: Introduction to axiu likelihood estiation MLE for Bayesian networks Optial CPTs correspond to epirical counts Today: MLE for CRFs 2 Maxiu Likelihood

More information

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval Unifor Approxiation and Bernstein Polynoials with Coefficients in the Unit Interval Weiang Qian and Marc D. Riedel Electrical and Coputer Engineering, University of Minnesota 200 Union St. S.E. Minneapolis,

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE7C (Spring 018: Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee7c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee7c@berkeley.edu October 15,

More information

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization Recent Researches in Coputer Science Support Vector Machine Classification of Uncertain and Ibalanced data using Robust Optiization RAGHAV PAT, THEODORE B. TRAFALIS, KASH BARKER School of Industrial Engineering

More information

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket.

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket. Generalized eigenfunctions and a Borel Theore on the Sierpinski Gasket. Kasso A. Okoudjou, Luke G. Rogers, and Robert S. Strichartz May 26, 2006 1 Introduction There is a well developed theory (see [5,

More information

The Hilbert Schmidt version of the commutator theorem for zero trace matrices

The Hilbert Schmidt version of the commutator theorem for zero trace matrices The Hilbert Schidt version of the coutator theore for zero trace atrices Oer Angel Gideon Schechtan March 205 Abstract Let A be a coplex atrix with zero trace. Then there are atrices B and C such that

More information

Least Squares Fitting of Data

Least Squares Fitting of Data Least Squares Fitting of Data David Eberly, Geoetric Tools, Redond WA 98052 https://www.geoetrictools.co/ This work is licensed under the Creative Coons Attribution 4.0 International License. To view a

More information

RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS MEMBRANE

RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS MEMBRANE Proceedings of ICIPE rd International Conference on Inverse Probles in Engineering: Theory and Practice June -8, 999, Port Ludlow, Washington, USA : RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS

More information

1 Bounding the Margin

1 Bounding the Margin COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #12 Scribe: Jian Min Si March 14, 2013 1 Bounding the Margin We are continuing the proof of a bound on the generalization error of AdaBoost

More information

Optimal Jamming Over Additive Noise: Vector Source-Channel Case

Optimal Jamming Over Additive Noise: Vector Source-Channel Case Fifty-first Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 2-3, 2013 Optial Jaing Over Additive Noise: Vector Source-Channel Case Erah Akyol and Kenneth Rose Abstract This paper

More information

arxiv: v1 [math.na] 10 Oct 2016

arxiv: v1 [math.na] 10 Oct 2016 GREEDY GAUSS-NEWTON ALGORITHM FOR FINDING SPARSE SOLUTIONS TO NONLINEAR UNDERDETERMINED SYSTEMS OF EQUATIONS MÅRTEN GULLIKSSON AND ANNA OLEYNIK arxiv:6.395v [ath.na] Oct 26 Abstract. We consider the proble

More information

On the Communication Complexity of Lipschitzian Optimization for the Coordinated Model of Computation

On the Communication Complexity of Lipschitzian Optimization for the Coordinated Model of Computation journal of coplexity 6, 459473 (2000) doi:0.006jco.2000.0544, available online at http:www.idealibrary.co on On the Counication Coplexity of Lipschitzian Optiization for the Coordinated Model of Coputation

More information

Finding Rightmost Eigenvalues of Large Sparse. Non-symmetric Parameterized Eigenvalue Problems. Abstract. Introduction

Finding Rightmost Eigenvalues of Large Sparse. Non-symmetric Parameterized Eigenvalue Problems. Abstract. Introduction Finding Rightost Eigenvalues of Large Sparse Non-syetric Paraeterized Eigenvalue Probles Applied Matheatics and Scientific Coputation Progra Departent of Matheatics University of Maryland, College Par,

More information

Curious Bounds for Floor Function Sums

Curious Bounds for Floor Function Sums 1 47 6 11 Journal of Integer Sequences, Vol. 1 (018), Article 18.1.8 Curious Bounds for Floor Function Sus Thotsaporn Thanatipanonda and Elaine Wong 1 Science Division Mahidol University International

More information

IN modern society that various systems have become more

IN modern society that various systems have become more Developent of Reliability Function in -Coponent Standby Redundant Syste with Priority Based on Maxiu Entropy Principle Ryosuke Hirata, Ikuo Arizono, Ryosuke Toohiro, Satoshi Oigawa, and Yasuhiko Takeoto

More information

A Note on Scheduling Tall/Small Multiprocessor Tasks with Unit Processing Time to Minimize Maximum Tardiness

A Note on Scheduling Tall/Small Multiprocessor Tasks with Unit Processing Time to Minimize Maximum Tardiness A Note on Scheduling Tall/Sall Multiprocessor Tasks with Unit Processing Tie to Miniize Maxiu Tardiness Philippe Baptiste and Baruch Schieber IBM T.J. Watson Research Center P.O. Box 218, Yorktown Heights,

More information

which together show that the Lax-Milgram lemma can be applied. (c) We have the basic Galerkin orthogonality

which together show that the Lax-Milgram lemma can be applied. (c) We have the basic Galerkin orthogonality UPPSALA UNIVERSITY Departent of Inforation Technology Division of Scientific Coputing Solutions to exa in Finite eleent ethods II 14-6-5 Stefan Engblo, Daniel Elfverson Question 1 Note: a inus sign in

More information

Introduction to Robotics (CS223A) (Winter 2006/2007) Homework #5 solutions

Introduction to Robotics (CS223A) (Winter 2006/2007) Homework #5 solutions Introduction to Robotics (CS3A) Handout (Winter 6/7) Hoework #5 solutions. (a) Derive a forula that transfors an inertia tensor given in soe frae {C} into a new frae {A}. The frae {A} can differ fro frae

More information

Interactive Markov Models of Evolutionary Algorithms

Interactive Markov Models of Evolutionary Algorithms Cleveland State University EngagedScholarship@CSU Electrical Engineering & Coputer Science Faculty Publications Electrical Engineering & Coputer Science Departent 2015 Interactive Markov Models of Evolutionary

More information

On Constant Power Water-filling

On Constant Power Water-filling On Constant Power Water-filling Wei Yu and John M. Cioffi Electrical Engineering Departent Stanford University, Stanford, CA94305, U.S.A. eails: {weiyu,cioffi}@stanford.edu Abstract This paper derives

More information

A note on the realignment criterion

A note on the realignment criterion A note on the realignent criterion Chi-Kwong Li 1, Yiu-Tung Poon and Nung-Sing Sze 3 1 Departent of Matheatics, College of Willia & Mary, Williasburg, VA 3185, USA Departent of Matheatics, Iowa State University,

More information

Lecture 21. Interior Point Methods Setup and Algorithm

Lecture 21. Interior Point Methods Setup and Algorithm Lecture 21 Interior Point Methods In 1984, Kararkar introduced a new weakly polynoial tie algorith for solving LPs [Kar84a], [Kar84b]. His algorith was theoretically faster than the ellipsoid ethod and

More information

Methods for Large Unsymmetric Linear. Systems. Zhongxiao Jia y. Abstract. The convergence problem of Krylov subspace methods, e.g.

Methods for Large Unsymmetric Linear. Systems. Zhongxiao Jia y. Abstract. The convergence problem of Krylov subspace methods, e.g. The Convergence of Krylov Subspace Methods for Large Unsyetric Linear Systes Zhongxiao Jia y Abstract The convergence proble of Krylov subspace ethods, e.g. OM, MRES, CR and any others, for solving large

More information

Mechanics Physics 151

Mechanics Physics 151 Mechanics Physics 5 Lecture Oscillations (Chapter 6) What We Did Last Tie Analyzed the otion of a heavy top Reduced into -diensional proble of θ Qualitative behavior Precession + nutation Initial condition

More information

Konrad-Zuse-Zentrum für Informationstechnik Berlin Heilbronner Str. 10, D Berlin - Wilmersdorf

Konrad-Zuse-Zentrum für Informationstechnik Berlin Heilbronner Str. 10, D Berlin - Wilmersdorf Konrad-Zuse-Zentru für Inforationstechnik Berlin Heilbronner Str. 10, D-10711 Berlin - Wilersdorf Folkar A. Borneann On the Convergence of Cascadic Iterations for Elliptic Probles SC 94-8 (Marz 1994) 1

More information

The Simplex Method is Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate

The Simplex Method is Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate The Siplex Method is Strongly Polynoial for the Markov Decision Proble with a Fixed Discount Rate Yinyu Ye April 20, 2010 Abstract In this note we prove that the classic siplex ethod with the ost-negativereduced-cost

More information

3.3 Variational Characterization of Singular Values

3.3 Variational Characterization of Singular Values 3.3. Variational Characterization of Singular Values 61 3.3 Variational Characterization of Singular Values Since the singular values are square roots of the eigenvalues of the Heritian atrices A A and

More information

Complex Quadratic Optimization and Semidefinite Programming

Complex Quadratic Optimization and Semidefinite Programming Coplex Quadratic Optiization and Seidefinite Prograing Shuzhong Zhang Yongwei Huang August 4 Abstract In this paper we study the approxiation algoriths for a class of discrete quadratic optiization probles

More information

Exact tensor completion with sum-of-squares

Exact tensor completion with sum-of-squares Proceedings of Machine Learning Research vol 65:1 54, 2017 30th Annual Conference on Learning Theory Exact tensor copletion with su-of-squares Aaron Potechin Institute for Advanced Study, Princeton David

More information

Ch 12: Variations on Backpropagation

Ch 12: Variations on Backpropagation Ch 2: Variations on Backpropagation The basic backpropagation algorith is too slow for ost practical applications. It ay take days or weeks of coputer tie. We deonstrate why the backpropagation algorith

More information

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search Quantu algoriths (CO 781, Winter 2008) Prof Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search ow we begin to discuss applications of quantu walks to search algoriths

More information

Physics 215 Winter The Density Matrix

Physics 215 Winter The Density Matrix Physics 215 Winter 2018 The Density Matrix The quantu space of states is a Hilbert space H. Any state vector ψ H is a pure state. Since any linear cobination of eleents of H are also an eleent of H, it

More information

The linear sampling method and the MUSIC algorithm

The linear sampling method and the MUSIC algorithm INSTITUTE OF PHYSICS PUBLISHING INVERSE PROBLEMS Inverse Probles 17 (2001) 591 595 www.iop.org/journals/ip PII: S0266-5611(01)16989-3 The linear sapling ethod and the MUSIC algorith Margaret Cheney Departent

More information

Algorithms for parallel processor scheduling with distinct due windows and unit-time jobs

Algorithms for parallel processor scheduling with distinct due windows and unit-time jobs BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES Vol. 57, No. 3, 2009 Algoriths for parallel processor scheduling with distinct due windows and unit-tie obs A. JANIAK 1, W.A. JANIAK 2, and

More information

3.8 Three Types of Convergence

3.8 Three Types of Convergence 3.8 Three Types of Convergence 3.8 Three Types of Convergence 93 Suppose that we are given a sequence functions {f k } k N on a set X and another function f on X. What does it ean for f k to converge to

More information

Lower Bounds for Quantized Matrix Completion

Lower Bounds for Quantized Matrix Completion Lower Bounds for Quantized Matrix Copletion Mary Wootters and Yaniv Plan Departent of Matheatics University of Michigan Ann Arbor, MI Eail: wootters, yplan}@uich.edu Mark A. Davenport School of Elec. &

More information

M ath. Res. Lett. 15 (2008), no. 2, c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS. Van H. Vu. 1.

M ath. Res. Lett. 15 (2008), no. 2, c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS. Van H. Vu. 1. M ath. Res. Lett. 15 (2008), no. 2, 375 388 c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS Van H. Vu Abstract. Let F q be a finite field of order q and P be a polynoial in F q[x

More information

A RESTARTED KRYLOV SUBSPACE METHOD FOR THE EVALUATION OF MATRIX FUNCTIONS. 1. Introduction. The evaluation of. f(a)b, where A C n n, b C n (1.

A RESTARTED KRYLOV SUBSPACE METHOD FOR THE EVALUATION OF MATRIX FUNCTIONS. 1. Introduction. The evaluation of. f(a)b, where A C n n, b C n (1. A RESTARTED KRYLOV SUBSPACE METHOD FOR THE EVALUATION OF MATRIX FUNCTIONS MICHAEL EIERMANN AND OLIVER G. ERNST Abstract. We show how the Arnoldi algorith for approxiating a function of a atrix ties a vector

More information

Polygonal Designs: Existence and Construction

Polygonal Designs: Existence and Construction Polygonal Designs: Existence and Construction John Hegean Departent of Matheatics, Stanford University, Stanford, CA 9405 Jeff Langford Departent of Matheatics, Drake University, Des Moines, IA 5011 G

More information

The Weierstrass Approximation Theorem

The Weierstrass Approximation Theorem 36 The Weierstrass Approxiation Theore Recall that the fundaental idea underlying the construction of the real nubers is approxiation by the sipler rational nubers. Firstly, nubers are often deterined

More information

arxiv: v1 [cs.ds] 3 Feb 2014

arxiv: v1 [cs.ds] 3 Feb 2014 arxiv:40.043v [cs.ds] 3 Feb 04 A Bound on the Expected Optiality of Rando Feasible Solutions to Cobinatorial Optiization Probles Evan A. Sultani The Johns Hopins University APL evan@sultani.co http://www.sultani.co/

More information

Estimating Parameters for a Gaussian pdf

Estimating Parameters for a Gaussian pdf Pattern Recognition and achine Learning Jaes L. Crowley ENSIAG 3 IS First Seester 00/0 Lesson 5 7 Noveber 00 Contents Estiating Paraeters for a Gaussian pdf Notation... The Pattern Recognition Proble...3

More information

Physics 139B Solutions to Homework Set 3 Fall 2009

Physics 139B Solutions to Homework Set 3 Fall 2009 Physics 139B Solutions to Hoework Set 3 Fall 009 1. Consider a particle of ass attached to a rigid assless rod of fixed length R whose other end is fixed at the origin. The rod is free to rotate about

More information

Multi-Dimensional Hegselmann-Krause Dynamics

Multi-Dimensional Hegselmann-Krause Dynamics Multi-Diensional Hegselann-Krause Dynaics A. Nedić Industrial and Enterprise Systes Engineering Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu B. Touri Coordinated Science Laboratory

More information

A1. Find all ordered pairs (a, b) of positive integers for which 1 a + 1 b = 3

A1. Find all ordered pairs (a, b) of positive integers for which 1 a + 1 b = 3 A. Find all ordered pairs a, b) of positive integers for which a + b = 3 08. Answer. The six ordered pairs are 009, 08), 08, 009), 009 337, 674) = 35043, 674), 009 346, 673) = 3584, 673), 674, 009 337)

More information

New upper bound for the B-spline basis condition number II. K. Scherer. Institut fur Angewandte Mathematik, Universitat Bonn, Bonn, Germany.

New upper bound for the B-spline basis condition number II. K. Scherer. Institut fur Angewandte Mathematik, Universitat Bonn, Bonn, Germany. New upper bound for the B-spline basis condition nuber II. A proof of de Boor's 2 -conjecture K. Scherer Institut fur Angewandte Matheati, Universitat Bonn, 535 Bonn, Gerany and A. Yu. Shadrin Coputing

More information

On the Use of A Priori Information for Sparse Signal Approximations

On the Use of A Priori Information for Sparse Signal Approximations ITS TECHNICAL REPORT NO. 3/4 On the Use of A Priori Inforation for Sparse Signal Approxiations Oscar Divorra Escoda, Lorenzo Granai and Pierre Vandergheynst Signal Processing Institute ITS) Ecole Polytechnique

More information

Fairness via priority scheduling

Fairness via priority scheduling Fairness via priority scheduling Veeraruna Kavitha, N Heachandra and Debayan Das IEOR, IIT Bobay, Mubai, 400076, India vavitha,nh,debayan}@iitbacin Abstract In the context of ulti-agent resource allocation

More information

Lecture 9 November 23, 2015

Lecture 9 November 23, 2015 CSC244: Discrepancy Theory in Coputer Science Fall 25 Aleksandar Nikolov Lecture 9 Noveber 23, 25 Scribe: Nick Spooner Properties of γ 2 Recall that γ 2 (A) is defined for A R n as follows: γ 2 (A) = in{r(u)

More information

Bipartite subgraphs and the smallest eigenvalue

Bipartite subgraphs and the smallest eigenvalue Bipartite subgraphs and the sallest eigenvalue Noga Alon Benny Sudaov Abstract Two results dealing with the relation between the sallest eigenvalue of a graph and its bipartite subgraphs are obtained.

More information

a a a a a a a m a b a b

a a a a a a a m a b a b Algebra / Trig Final Exa Study Guide (Fall Seester) Moncada/Dunphy Inforation About the Final Exa The final exa is cuulative, covering Appendix A (A.1-A.5) and Chapter 1. All probles will be ultiple choice

More information

Stability Ordinates of Adams Predictor-Corrector Methods

Stability Ordinates of Adams Predictor-Corrector Methods BIT anuscript No. will be inserted by the editor Stability Ordinates of Adas Predictor-Corrector Methods Michelle L. Ghrist Jonah A. Reeger Bengt Fornberg Received: date / Accepted: date Abstract How far

More information

Tail estimates for norms of sums of log-concave random vectors

Tail estimates for norms of sums of log-concave random vectors Tail estiates for nors of sus of log-concave rando vectors Rados law Adaczak Rafa l Lata la Alexander E. Litvak Alain Pajor Nicole Toczak-Jaegerann Abstract We establish new tail estiates for order statistics

More information

OPTIMIZATION in multi-agent networks has attracted

OPTIMIZATION in multi-agent networks has attracted Distributed constrained optiization and consensus in uncertain networks via proxial iniization Kostas Margellos, Alessandro Falsone, Sione Garatti and Maria Prandini arxiv:603.039v3 [ath.oc] 3 May 07 Abstract

More information

MA304 Differential Geometry
