A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems
Wei Ma, Zheng-Jian Bai

September 18, 2012

Abstract. In this paper, we give a regularized directional derivative-based Newton method for solving the inverse singular value problem. The proposed method is globalized by employing the directional derivative-based Wolfe line search conditions. Under some mild assumptions, the global and quadratic convergence of our method is established. To improve the practical effectiveness, we also propose a hybrid method for solving the inverse singular value problem. We show that the hybrid method converges locally quadratically, and globally in the sense that a stationary point of a merit function for the inverse singular value problem is computed. Numerical tests demonstrate that the proposed hybrid method is very effective for solving the inverse singular value problem with both distinct and multiple singular values.

Keywords: inverse singular value problem, generalized Newton method, directional derivative, Wolfe conditions, global convergence

AMS subject classification: 65F18, 65F15, 15A18, 49J52, 90C56

1 Introduction

The inverse problem of reconstructing a matrix from given singular values, i.e., the inverse singular value problem, has growing importance in many applications, such as the optimal sequence design for direct-spread code division multiple access [40], passivity enforcement in nonlinear circuit simulation [34], the construction of Toeplitz-related matrices from prescribed singular values [1, 2, 13], the inverse problem in some quadratic group [27], the construction of nonnegative matrices, positive matrices, and anti-bisymmetric matrices with prescribed singular values [23, 24, 43], and others [31]. In this paper, we consider the inverse singular value problem defined as follows.

School of Mathematical Sciences, Xiamen University, Xiamen, People's Republic of China (mawei830424@yahoo.com.cn).
School of Mathematical Sciences, Xiamen University, Xiamen, People's Republic of China (zjbai@xmu.edu.cn). The research of this author was partially supported by the National Natural Science Foundation of China, the Natural Science Foundation of Fujian Province of China for Distinguished Young Scholars (No. 2010J06002), and NCET.
ISVP: Let $\{A_i\}_{i=0}^n$ be $n+1$ real $m \times n$ matrices ($m \ge n$) and let $\sigma_1^* \ge \sigma_2^* \ge \cdots \ge \sigma_n^*$ be $n$ nonnegative numbers. Find a vector $c^* \in \mathbb{R}^n$ such that the singular values of the matrix

$$A(c^*) := A_0 + \sum_{i=1}^n c_i^* A_i \quad (1)$$

are exactly $\sigma_1^*, \ldots, \sigma_n^*$.

This is a special kind of inverse singular value problem, which was originally proposed by Chu [5]. It is easy to check that the ISVP can be converted into the following inverse eigenvalue problem (IEP): Given $n+1$ real $m \times n$ matrices $\{A_i\}_{i=0}^n$ and $n$ numbers $\sigma_1^* \ge \sigma_2^* \ge \cdots \ge \sigma_n^* \ge 0$, find a vector $c^* \in \mathbb{R}^n$ such that the eigenvalues of the matrix

$$B(c^*) := \begin{bmatrix} 0 & A_0 \\ A_0^T & 0 \end{bmatrix} + \sum_{i=1}^n c_i^* \begin{bmatrix} 0 & A_i \\ A_i^T & 0 \end{bmatrix}$$

are exactly $\sigma_1^*, \ldots, \sigma_n^*$, $-\sigma_1^*, \ldots, -\sigma_n^*$, and zero of multiplicity $m - n$. Therefore, the ISVP is solvable if and only if the above IEP is solvable. To our knowledge, there is no literature on the existence and uniqueness questions for the ISVP. For more discussion on the ISVP, one may refer to [6, 7]. However, some numerical methods have been developed for computational purposes. For any $c \in \mathbb{R}^n$, denote the singular values of $A(c)$ by $\{\sigma_i(A(c))\}_{i=1}^n$ with the ordering $\sigma_1(A(c)) \ge \sigma_2(A(c)) \ge \cdots \ge \sigma_n(A(c)) \ge 0$. Then the ISVP is to find a solution $c^* \in \mathbb{R}^n$ of the following nonlinear equation:

$$f(c) := \big(\sigma_1(A(c)) - \sigma_1^*,\; \sigma_2(A(c)) - \sigma_2^*,\; \ldots,\; \sigma_n(A(c)) - \sigma_n^*\big)^T = 0. \quad (2)$$

Based on the nonlinear equation (2), Newton-type methods and an Ulm-like method were developed for solving the ISVP [4, 5, 41]. The advantage of Newton-type methods and Ulm's method lies in their superlinear convergence. However, these methods converge only locally. Based on the generalized Jacobian and the directional derivative, many nonsmooth versions of Newton's method were developed for solving nonsmooth equations [26, 29, 30], and their globalized versions were proposed for solving nonlinear complementarity problems [9, 20, 21, 26].
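For readers who want to experiment with the problem setup, the affine map $c \mapsto A(c)$ and the residual $f$ in (2) take only a few lines of NumPy. This is a minimal sketch with our own helper names (`f_residual`, `A`, `sigma_star`), not code from the paper:

```python
import numpy as np

def f_residual(c, A, sigma_star):
    """Residual f(c) of equation (2): singular values of A(c) minus the targets.

    A is a list [A_0, A_1, ..., A_n] of m-by-n matrices (m >= n) and
    sigma_star holds the prescribed singular values in non-increasing order.
    """
    Ac = A[0] + sum(ci * Ai for ci, Ai in zip(c, A[1:]))  # A(c) = A_0 + sum c_i A_i
    sigma = np.linalg.svd(Ac, compute_uv=False)           # non-increasing order
    return sigma - sigma_star

# Tiny sanity check: with A_0 = 0 and A_i "diagonal basis" matrices,
# the entries of c themselves become the singular values of A(c).
m, n = 4, 3
A = [np.zeros((m, n)) for _ in range(n + 1)]
for i in range(n):
    A[i + 1][i, i] = 1.0
c = np.array([3.0, 2.0, 1.0])
print(f_residual(c, A, np.array([3.0, 2.0, 1.0])))  # ~ [0. 0. 0.]
```

Solving the ISVP amounts to driving this residual to zero in `c`.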
A typical nonlinear complementarity problem is to find a vector $x \in \mathbb{R}^n$ such that

$$x \ge 0, \quad F(x) \ge 0, \quad x^T F(x) = 0,$$

where $F : \mathbb{R}^n \to \mathbb{R}^n$ is a continuously differentiable function and $x \ge 0$ means that $x_j \ge 0$ for $j = 1, \ldots, n$. For various applications of nonlinear complementarity problems, the interested reader may refer to the survey papers by Harker and Pang [18] and Ferris and Pang [11]. The directional differentiability of all singular values of a rectangular matrix was discussed in [35], and the directional derivatives of all eigenvalues of a symmetric matrix were provided in [19]. Also, it was shown in [39] that all singular values of a rectangular matrix are strongly semismooth, and the generalized Newton method in [38] can be employed to solve the ISVP. Regularization techniques have also been used in generalized Newton methods for nonlinear complementarity problems [10, 37]. Motivated by this, in this paper, we propose a
regularized directional derivative-based Newton method for solving the ISVP. We first give explicit formulas for the directional derivatives of sums of singular values of $A(c)$ for any $c \in \mathbb{R}^n$. Then we present a regularized directional derivative-based Newton method for the ISVP. Our method is globalized by the directional derivative-based Wolfe line search conditions. Under the assumptions that all elements in the generalized Jacobian of a nonlinear function of the ISVP are nonsingular at a solution of the ISVP (see Assumption 4.3 below) and that the subset $\mathcal{D} := \{c \in \mathbb{R}^n : A(c) \text{ has distinct singular values}\}$ of $\mathbb{R}^n$ is dense in $\mathbb{R}^n$, the global and quadratic convergence of our method is established. To further improve the practical effectiveness, inspired by [17, 29], we also propose a hybrid method for solving the ISVP by combining the directional derivative-based generalized Newton method with an Armijo-like line search based on a merit function of the ISVP. Under the same assumptions, the hybrid method is shown to converge quadratically and globally (in the sense of finding a stationary point of the merit function for the ISVP). Compared to the generalized Newton method in [38], our method makes use of the directional derivatives of singular values of $A(c)$ instead of their generalized Jacobians [8], and the direction generated by the directional derivative-based generalized Newton equation can be used to globalize our method via the Armijo-like line search or the Wolfe line search. Some numerical tests are reported to illustrate the effectiveness of the proposed hybrid method for solving the ISVP with both distinct and multiple singular values.

This paper is organized as follows. In Section 2 we review some preliminary results on nonsmooth analysis. In Section 3 we propose a regularized directional derivative-based Newton method for solving the ISVP. In Section 4 we give the convergence analysis. In Section 5 we present a hybrid method for solving the ISVP.
In Section 6 we report some numerical experiments.

2 Preliminaries

In this section, we review some necessary concepts and basic properties related to Lipschitz continuous functions. Let $X$ be a finite-dimensional real vector space equipped with the Euclidean inner product $\langle \cdot, \cdot \rangle$ and its induced norm $\|\cdot\|$. A function $\Phi : X \to \mathbb{R}^n$ is directionally differentiable at $x \in X$ if the directional derivative

$$\Phi'(x; \Delta x) = \lim_{t \downarrow 0} \frac{\Phi(x + t\,\Delta x) - \Phi(x)}{t}$$

exists for all $\Delta x \in X$. In the following, we recall the definition of B-differentiability [32].

Definition 2.1 A function $\Phi : X \to \mathbb{R}^n$ is said to be B-differentiable at $x \in X$ if it is directionally differentiable at $x$ and

$$\lim_{\Delta x \to 0} \frac{\|\Phi(x + \Delta x) - \Phi(x) - \Phi'(x; \Delta x)\|}{\|\Delta x\|} = 0.$$

For a locally Lipschitzian function $\Phi : X \to \mathbb{R}^n$, it was shown that the B-differentiability of $\Phi$ is equivalent to its directional differentiability [36].
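The directional derivative above can be probed numerically with a one-sided difference quotient. The sketch below (our own helper names, not the paper's notation) illustrates two cases relevant here: the absolute value, which is directionally differentiable at $0$ without being differentiable there, and a simple (non-repeated) largest singular value, whose derivative is the classical $u_1^T H v_1$ formula:

```python
import numpy as np

def dd(phi, x, d, t=1e-7):
    """One-sided difference quotient approximating phi'(x; d)."""
    return (phi(x + t * d) - phi(x)) / t

# |.| is directionally differentiable at 0 with |.|'(0; d) = |d|,
# although it is not (Frechet) differentiable there.
print(dd(abs, 0.0, -2.0))  # ~ 2.0 = |-2|

# For a matrix with a simple largest singular value, sigma_1 is smooth and
# sigma_1'(M; H) = u_1^T H v_1, with u_1, v_1 the leading singular vectors.
M = np.array([[3.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # gap between sigma_1, sigma_2
H = np.array([[0.5, -1.0], [2.0, 0.3], [0.1, 0.7]])
U, s, Vt = np.linalg.svd(M)
sigma1 = lambda A: np.linalg.svd(A, compute_uv=False)[0]
print(dd(sigma1, M, H), U[:, 0] @ H @ Vt[0])  # the two values agree
```

When singular values coincide, only the one-sided quotient survives, which is exactly why the directional derivative (rather than a Jacobian) is the right tool in what follows.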
Let $\Phi : X \to \mathbb{R}^n$ be a locally Lipschitz continuous function. We study the following nonlinear equation:

$$\Phi(x) = 0. \quad (3)$$

If $\Phi$ is nonsmooth, one may use Clarke's generalized Jacobian-based Newton method for solving (3) [30]. Notice that $\Phi$ is Fréchet differentiable almost everywhere [33]. Clarke's generalized Jacobian $\partial\Phi(x)$ [8] of $\Phi$ at $x \in X$ is defined by

$$\partial\Phi(x) := \mathrm{co}\,\partial_B\Phi(x) := \mathrm{co}\{\lim \Phi'(x^j) : x^j \to x,\ \Phi \text{ is differentiable at } x^j\},$$

where co denotes the convex hull and $\Phi'(x)$ denotes the Jacobian of $\Phi$ at $x \in X$. Then the generalized Newton method for solving (3) is given as follows [30]: set $x^{k+1} = x^k + d^k$, where

$$d^k = -V_k^{-1}\Phi(x^k), \quad V_k \in \partial\Phi(x^k). \quad (4)$$

If $\Phi$ is both locally Lipschitz continuous and directionally differentiable, then a directional derivative-based Newton method for solving (3) is given as follows [26, 29]: set $x^{k+1} = x^k + d^k$, where $d^k$ solves

$$\Phi(x^k) + \Phi'(x^k; d^k) = 0. \quad (5)$$

We point out that (5) coincides with (4), since there is a relation between Clarke's generalized Jacobian and the directional derivative of a locally Lipschitzian function [30, Lemma 2.2].

Lemma 2.2 Let $\Phi : X \to \mathbb{R}^m$ be a locally Lipschitzian function, directionally differentiable on a neighborhood of $x \in X$. Then $\Phi'(x; \cdot)$ is Lipschitzian and for any $\Delta x \in X$ there exists a $V \in \partial\Phi(x)$ such that $\Phi'(x; \Delta x) = V\,\Delta x$.

To give the convergence analysis of (4) or (5), we need the following definition of semismoothness. For the original concept of semismoothness for functionals and vector-valued functions, one may refer to [22, 30].

Definition 2.3 Let $\Phi : X \to \mathbb{R}^m$ be a locally Lipschitz continuous function.

(a) $\Phi$ is said to be semismooth at $x \in X$ if for any $V \in \partial\Phi(x + h)$ and $h \to 0$, $Vh - \Phi'(x; h) = o(\|h\|)$.

(b) $\Phi$ is said to be strongly semismooth at $x$ if for any $V \in \partial\Phi(x + h)$ and $h \to 0$, $Vh - \Phi'(x; h) = O(\|h\|^2)$.

Definition 2.3 shows that a (strongly) semismooth function must be B-differentiable.
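As a toy illustration of iterations (4)/(5) (our own example, not from the paper), consider the piecewise-linear scalar equation $\Phi(x) = x + |x|/2 - 1 = 0$, whose unique root is $x^* = 2/3$. At each iterate, an element of the generalized Jacobian is simply the slope of the active linear piece:

```python
def phi(x):
    """Phi(x) = x + |x|/2 - 1, nonsmooth at x = 0, root x* = 2/3."""
    return x + abs(x) / 2 - 1.0

def directional_newton(x, tol=1e-12, maxit=50):
    """Generalized Newton iteration (4)/(5) for the scalar phi above."""
    for _ in range(maxit):
        r = phi(x)
        if abs(r) < tol:
            break
        slope = 1.5 if x >= 0 else 0.5  # an element of Clarke's generalized Jacobian
        x = x - r / slope               # solve phi(x) + phi'(x; d) = 0, step x + d
    return x

print(directional_newton(-1.0))  # converges to 2/3
```

Starting from $x^0 = -1$, the iteration crosses the kink at $0$ once and then terminates at the root in one further step, illustrating the fast local behavior that Proposition 2.4 below formalizes.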
Therefore, if $\Phi$ is semismooth (respectively, strongly semismooth) at $x \in X$, then for any $h \to 0$,

$$\Phi(x + h) - \Phi(x) - \Phi'(x; h) = o(\|h\|) \quad (\text{respectively, } O(\|h\|^2)).$$

We now give the convergence result for (4) [30, Theorem 3.2]. The result on the superlinear convergence of (5) can be found in [29, Theorem 4.3].
Proposition 2.4 Let $\Phi : X \to \mathbb{R}^n$ be a locally Lipschitz continuous function. Let $\bar{x} \in X$ be an accumulation point of a sequence $\{x^k\}$ generated by (4) with $\Phi(\bar{x}) = 0$. Assume that $\Phi$ is semismooth at $\bar{x}$ and all $V \in \partial\Phi(\bar{x})$ are nonsingular. Then the sequence $\{x^k\}$ converges to $\bar{x}$ superlinearly provided that the initial guess $x^0$ is sufficiently close to $\bar{x}$. Moreover, if $\Phi$ is strongly semismooth at $\bar{x}$, then the convergence rate is quadratic.

On the nonsingularity of Clarke's generalized Jacobian of a locally Lipschitz continuous function, we have the following lemma [29, 30].

Lemma 2.5 Let $\Phi : X \to \mathbb{R}^m$ be a locally Lipschitz continuous function. Suppose that all elements in $\partial_B\Phi(x)$ are nonsingular. Then there exist a constant $\kappa$ and a neighborhood $\mathcal{N}(x)$ of $x$ such that, for any $y \in \mathcal{N}(x)$ and any $V \in \partial_B\Phi(y)$, $V$ is nonsingular and $\|V^{-1}\| \le \kappa$. If $\Phi$ is semismooth at $y$, then, for any $h \in X$, there exists $V \in \partial_B\Phi(y)$ such that $\Phi'(y; h) = Vh$ and $\|h\| \le \kappa\|\Phi'(y; h)\|$.

Finally, we have the following lemma on the (strong) semismoothness of composite functions [12, 22].

Lemma 2.6 Suppose that $\Phi : X \to \mathbb{R}^m$ is (strongly) semismooth at $x \in X$ and $\Psi : \mathbb{R}^m \to \mathbb{R}^q$ is (strongly) semismooth at $\Phi(x)$. Then the composite function $\Upsilon = \Psi \circ \Phi$ is (strongly) semismooth at $x$.

3 A regularized directional derivative-based Newton method

In this section, we first reformulate the ISVP as a new nonsmooth equation and then propose a regularized directional derivative-based Newton method for solving it. In what follows, let $M^{m \times n}$, $S^n$, and $O(n)$ denote the set of all $m$-by-$n$ real matrices, the set of all $n$-by-$n$ real symmetric matrices, and the set of all $n$-by-$n$ orthogonal matrices, respectively. Denote by $M^T$ the transpose of a matrix $M$. Let $I_n$ be the identity matrix of order $n$. Let $\{\sigma_j(M)\}_{j=1}^n$ stand for the singular values of a matrix $M \in M^{m \times n}$ with $\sigma_1(M) \ge \sigma_2(M) \ge \cdots \ge \sigma_n(M) \ge 0$. In this case, we call $\sigma_j(M)$ the $j$-th largest singular value of $M$.
Define the linear operator $\Theta : M^{m \times n} \to S^{m+n}$ by

$$\Theta(M) := \begin{bmatrix} 0 & M \\ M^T & 0 \end{bmatrix}, \quad M \in M^{m \times n}.$$

The singular value decomposition of a matrix $M \in M^{m \times n}$ ($m \ge n$) is given by [16]

$$P^T M Q = \begin{bmatrix} \Sigma(M) \\ 0 \end{bmatrix}, \qquad \Sigma(M) = \mathrm{diag}(\sigma_1(M), \sigma_2(M), \ldots, \sigma_n(M)),$$

where $P \in O(m)$, $Q \in O(n)$, and $\sigma_1(M) \ge \sigma_2(M) \ge \cdots \ge \sigma_n(M) \ge 0$ are the singular values of $M$. Partition $P$ as $P = [P^{(1)}, P^{(2)}]$, where $P^{(1)} \in M^{m \times n}$ and $P^{(2)} \in M^{m \times (m-n)}$. Let $U \in O(m+n)$ be defined by

$$U := \frac{1}{\sqrt{2}} \begin{bmatrix} P^{(1)} & P^{(1)} & \sqrt{2}\,P^{(2)} \\ Q & -Q & 0 \end{bmatrix}. \quad (6)$$
It is easy to check that the matrix $\Theta(M)$ admits the following spectral decomposition:

$$\Theta(M) = U \begin{bmatrix} \Sigma(M) & & \\ & -\Sigma(M) & \\ & & 0 \end{bmatrix} U^T. \quad (7)$$

That is, $\Theta(M)$ has the eigenvalues $\pm\sigma_i(M)$, $i = 1, 2, \ldots, n$, and $0$ of multiplicity $m - n$. For $j = 1, \ldots, n$, define

$$\phi_j(M) := \sum_{i=1}^j \sigma_i(M), \quad M \in M^{m \times n}.$$

By using the strong semismoothness of all eigenvalues of a real symmetric matrix [38], the linearity of $\Theta(\cdot)$, and Lemma 2.6, we have the following result on the strong semismoothness of $\{\sigma_j(\cdot)\}_{j=1}^n$ and $\{\phi_j(\cdot)\}_{j=1}^n$ [39].

Lemma 3.1 The functions $\{\sigma_j(\cdot)\}_{j=1}^n$ and $\{\phi_j(\cdot)\}_{j=1}^n$ are strongly semismooth over $M^{m \times n}$.

Now we consider the ISVP. For the convenience of theoretical analysis, instead of (2), in what follows we focus on a new nonsmooth reformulation of the ISVP: find a solution $c^* \in \mathbb{R}^n$ of the nonsmooth equation

$$g(c) := \phi(A(c)) - \phi^* = \big(\phi_1(A(c)) - \phi_1^*,\; \phi_2(A(c)) - \phi_2^*,\; \ldots,\; \phi_n(A(c)) - \phi_n^*\big)^T = 0, \quad (8)$$

where

$$\phi_j(A(c)) = \sum_{i=1}^j \sigma_i(A(c)), \quad c \in \mathbb{R}^n, \qquad \phi_j^* = \sum_{i=1}^j \sigma_i^*, \quad j = 1, \ldots, n.$$

We point out that the function $g : \mathbb{R}^n \to \mathbb{R}^n$ defined in (8) is a composite nonsmooth function, where $\phi : M^{m \times n} \to \mathbb{R}^n$ is strongly semismooth and the linear map $A : \mathbb{R}^n \to M^{m \times n}$ defined in (1) is continuously differentiable. It follows from Lemma 2.6 that $g$ is strongly semismooth, and thus for any $c, h \in \mathbb{R}^n$,

$$g'(c; h) = \phi'(A(c); A'(c)h), \qquad A'(c)h = \sum_{j=1}^n h_j\,\frac{\partial A(c)}{\partial c_j} = \sum_{j=1}^n h_j A_j.$$

For any $c \in \mathbb{R}^n$, define the sets $\mathcal{P}(c)$ and $\mathcal{Q}(c)$ by

$$(\mathcal{P}(c), \mathcal{Q}(c)) := \left\{ (P(c), Q(c)) \in O(m) \times O(n) : P(c)^T A(c) Q(c) = \begin{bmatrix} \Sigma(A(c)) \\ 0 \end{bmatrix} \right\},$$

where $\Sigma(A(c)) := \mathrm{diag}(\sigma_1(A(c)), \sigma_2(A(c)), \ldots, \sigma_n(A(c)))$. For any $c \in \mathbb{R}^n$, let $P(c) := [p_1(c), \ldots, p_m(c)] \in \mathcal{P}(c)$ and $Q(c) := [q_1(c), \ldots, q_n(c)] \in \mathcal{Q}(c)$.
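The eigenvalue claim behind (7) is easy to confirm numerically. The sketch below (our own helper `theta`; NumPy assumed) builds the symmetric embedding and checks that its spectrum consists of $\pm\sigma_i(M)$ plus $m - n$ zeros; it also shows that the map $\phi$ in (8) is just a cumulative sum of singular values:

```python
import numpy as np

def theta(M):
    """Symmetric embedding Theta(M) = [[0, M], [M^T, 0]] from (7)."""
    m, n = M.shape
    return np.block([[np.zeros((m, m)), M], [M.T, np.zeros((n, n))]])

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 3))                     # m = 5, n = 3
sigma = np.linalg.svd(M, compute_uv=False)

eigs = np.sort(np.linalg.eigvalsh(theta(M)))
expected = np.sort(np.concatenate([sigma, -sigma, np.zeros(5 - 3)]))
print(np.allclose(eigs, expected))                  # eigenvalues are +/-sigma_i and zeros

phi = np.cumsum(sigma)                              # phi_j(M) = sigma_1 + ... + sigma_j
print(phi)
```

This embedding is what lets eigenvalue results (strong semismoothness, directional derivatives) be transferred to singular values.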
For the $j$-th largest singular value $\sigma_j(A(c))$ of $A(c)$ with multiplicity $r_j$, let $s_j$ be the number of singular values, ranking before $j$, which are equal to $\sigma_j(A(c))$, and define

$$P_{j1}(c) := [p_1(c), \ldots, p_{j-s_j}(c)], \qquad P_{j2}(c) := [p_{j-s_j+1}(c), \ldots, p_{j+r_j-s_j}(c)],$$
$$Q_{j1}(c) := [q_1(c), \ldots, q_{j-s_j}(c)], \qquad Q_{j2}(c) := [q_{j-s_j+1}(c), \ldots, q_{j+r_j-s_j}(c)].$$

By using (6), (7), and [19, Theorem 4.4], we can easily derive the following proposition on the directional derivative of $g$ in (8).

Proposition 3.2 For any $c \in \mathbb{R}^n$ and $h \in \mathbb{R}^n$, set

$$E_1(h) := P_{j1}(c)^T \Big( \sum_{i=1}^n h_i\,\partial A(c)/\partial c_i \Big) Q_{j1}(c) \quad \text{and} \quad E_2(h) := P_{j2}(c)^T \Big( \sum_{i=1}^n h_i\,\partial A(c)/\partial c_i \Big) Q_{j2}(c).$$

Then the directional derivative of $g(\cdot) := (g_1(\cdot), \ldots, g_n(\cdot))$ at $c \in \mathbb{R}^n$ in the direction $h \in \mathbb{R}^n$ is given by

$$g_j'(c; h) = \mathrm{tr}\Big( \tfrac{1}{2}\big(E_1(h) + E_1(h)^T\big) \Big) + \sum_{i=1}^{s_j} \mu_i\Big( \tfrac{1}{2}\big(E_2(h) + E_2(h)^T\big) \Big)$$

for $j = 1, \ldots, n$, where $\mathrm{tr}(\cdot)$ and $\mu_i(\cdot)$ denote the trace and the $i$-th largest eigenvalue of a matrix, respectively.

Proposition 3.2 shows that for any $c \in \mathbb{R}^n$, the directional derivative $g'(c; h)$ in the direction $h \in \mathbb{R}^n$ can be formulated easily once the singular value decomposition of $A(c)$ is computed. This motivates us to propose the following directional derivative-based Newton method for solving the ISVP: set $c^{k+1} = c^k + d^k$, where $d^k$ solves

$$g(c^k) + g'(c^k; d^k) = 0. \quad (9)$$

Notice that $g$ is strongly semismooth. By Lemma 2.5, there exists a $V_k \in \partial_B g(c^k)$ such that $g'(c^k; d^k) = V_k d^k$. Hence, the directional derivative-based Newton method (9) coincides with the generalized Newton method [29, 30]

$$c^{k+1} = c^k - V_k^{-1} g(c^k), \quad V_k \in \partial_B g(c^k).$$

Therefore, equation (9) can be solved by an iterative method (e.g., the transpose-free quasi-minimal residual (TFQMR) method [14]). In addition, by Proposition 2.4, we know that (9) converges locally quadratically if the initial guess $c^0$ is sufficiently close to a zero $\bar{c}$ of $g$, under the nonsingularity assumption on $\partial g(\bar{c})$.
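In the generic case of distinct singular values ($r_j = 1$, $s_j = 0$), Proposition 3.2 collapses to $g_j'(c; h) = \sum_{i \le j} p_i(c)^T \big(\sum_l h_l A_l\big) q_i(c)$. The sketch below (our own helper names; NumPy assumed) implements this reduced formula from one SVD of $A(c)$ and checks it against a finite-difference quotient; the constant $\phi^*$ in (8) is omitted since it does not affect the derivative:

```python
import numpy as np

def g_dir_deriv(c, h, A):
    """g'(c; h) from Proposition 3.2 when A(c) has distinct singular values:
    g_j'(c; h) = sum_{i<=j} p_i^T (sum_l h_l A_l) q_i.  A = [A_0, ..., A_n]."""
    Ac = A[0] + sum(ci * Ai for ci, Ai in zip(c, A[1:]))
    P, s, Qt = np.linalg.svd(Ac)                       # singular values descending
    H = sum(hi * Ai for hi, Ai in zip(h, A[1:]))       # A'(c)h
    n = Ac.shape[1]
    terms = np.array([P[:, i] @ H @ Qt[i, :] for i in range(n)])
    return np.cumsum(terms)                            # partial sums give g_j'

rng = np.random.default_rng(2)
m, n = 5, 3
A = [rng.standard_normal((m, n)) for _ in range(n + 1)]
c, h = rng.standard_normal(n), rng.standard_normal(n)

def g(c):  # cumulative sums of singular values of A(c), as in (8)
    Ac = A[0] + sum(ci * Ai for ci, Ai in zip(c, A[1:]))
    return np.cumsum(np.linalg.svd(Ac, compute_uv=False))

t = 1e-6
fd = (g(c + t * h) - g(c)) / t
print(np.max(np.abs(fd - g_dir_deriv(c, h, A))))       # small
```

Note that each term $p_i^T H q_i$ is invariant under the joint sign ambiguity of singular vector pairs, so any SVD routine gives the same value.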
However, when the initial guess $c^0$ is far away from a zero $\bar{c}$ of $g$, the directional derivative-based Newton equation (9) is not necessarily solvable, even if we assume that all elements in $\partial g(\bar{c})$ are nonsingular. To improve the global solvability and practical effectiveness of the directional derivative-based Newton equation (9), one may use regularization techniques similar to those for nonlinear complementarity problems [10, 37]; i.e., we replace $g$ by $g_\epsilon$, where

$$g_\epsilon(c) := g(c) + \epsilon c, \quad (\epsilon, c) \in \mathbb{R} \times \mathbb{R}^n.$$
For the ISVP, we define a regularized function $w : \mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ by

$$w(z) := \begin{bmatrix} \epsilon \\ g_\epsilon(c) \end{bmatrix}, \qquad z := (\epsilon, c) \in \mathbb{R} \times \mathbb{R}^n.$$

Then $w$ is strongly semismooth, and $c^*$ solves (8) if and only if $(0, c^*)$ solves $w(z) = 0$. To develop global convergence, we also define the merit function $\omega : \mathbb{R}^{n+1} \to \mathbb{R}$ by

$$\omega(z) := \frac{1}{2}\|w(z)\|^2, \qquad z = (\epsilon, c) \in \mathbb{R} \times \mathbb{R}^n. \quad (10)$$

By Lemma 2.6 and Lemma 3.1, it is easy to see that $\omega$ is strongly semismooth. By the chain rule, the directional derivative of $\omega$ at any $z \in \mathbb{R}^{n+1}$ is given by

$$\omega'(z; d) = w(z)^T w'(z; d), \qquad w'(z; d) = \begin{bmatrix} \Delta\epsilon \\ g'(c; \Delta c) + \epsilon\,\Delta c + \Delta\epsilon\, c \end{bmatrix} \quad (11)$$

for all $d := (\Delta\epsilon, \Delta c) \in \mathbb{R}^{n+1}$. By Proposition 3.2 and (11), we know that for any $z = (\epsilon, c) \in \mathbb{R} \times \mathbb{R}^n$, the directional derivative $w'(z; d)$ in the direction $d := (\Delta\epsilon, \Delta c) \in \mathbb{R}^{n+1}$ can be formulated explicitly, since $g'(c; \Delta c)$ is available once the singular value decomposition of $A(c)$ is obtained. Also, inspired by the generalized Newton method in [26] for solving B-differentiable equations, we propose the following regularized directional derivative-based Newton method for solving $w(z) = 0$.

Algorithm I: A Regularized Directional Derivative-Based Newton Method

Step 0. Given $0 < \lambda_1 < \lambda_2 < 1$, $\eta \in (0, (1 - \lambda_2)/(1 + \lambda_2))$, and $0 \ne \bar\epsilon \in (-1, 1)$. Let $z^0 := (\epsilon_0, c^0)$ be the initial point, where $\epsilon_0 := \bar\epsilon$ and $c^0 \in \mathbb{R}^n$ is arbitrary. Set $k := 0$.

Step 1. If $w(z^k) = 0$, stop. Otherwise, go to Step 2.

Step 2. Apply an iterative method (e.g., the TFQMR method) to solve

$$w(z^k) + w'(z^k; d^k) = 0 \quad (12)$$

for $d^k := (\Delta\epsilon_k, \Delta c^k) \in \mathbb{R}^{n+1}$ such that

$$\|w(z^k) + w'(z^k; d^k)\| \le \eta_k\|w(z^k)\|, \quad (13)$$

and

$$\omega'(z^k; d^k) \le -\eta_k\langle d^k, d^k\rangle, \quad (14)$$

where $\eta_k := \min\{\eta, \|w(z^k)\|\}$.

Step 3. Compute the steplength $\alpha_k > 0$ satisfying the directional derivative-based standard Wolfe conditions [25]:

$$\omega(z^k + \alpha_k d^k) \le \omega(z^k) + \lambda_1\alpha_k\,\omega'(z^k; d^k), \quad (15)$$

$$\omega'(z^k + \alpha_k d^k; d^k) > \lambda_2\,\omega'(z^k; d^k). \quad (16)$$

Set $z^{k+1} := z^k + \alpha_k d^k$.
Step 4. Replace $k$ by $k + 1$ and go to Step 1.

We point out that in Algorithm I we use a line search satisfying the directional derivative-based Wolfe conditions (15) and (16), instead of the directional derivative-based Armijo rule (15) alone, since the Armijo rule may not reduce the slope sufficiently [25].

4 Convergence analysis

In this section, we establish the global and quadratic convergence of Algorithm I. We first recall some necessary perturbation results, which can be found in [3, 4, 41].

Lemma 4.1 (see [3, Lemma 2]) For any $x, y \in \mathbb{R}^n$, we have

$$\|A(x) - A(y)\| \le \zeta\|x - y\|, \qquad \text{where } \zeta = \Big( \sum_{i=1}^n \|A_i\|^2 \Big)^{1/2}.$$

Lemma 4.2 (see [3, Lemma 4]) For any $x \in \mathbb{R}^n$, let $\{\sigma_i(A(x))\}_{i=1}^n$, $\{p_i(x)\}_{i=1}^n$, and $\{q_i(x)\}_{i=1}^n$ be the singular values and associated normalized left and right singular vectors of $A(x)$, respectively. Suppose that $\{\sigma_i(A(x))\}_{i=1}^n$ are all distinct. Then there exist positive numbers $\delta_0$ and $\xi$ such that, for any $y \in \mathbb{R}^n$ with $\|y - x\| \le \delta_0$,

$$\|p_i(y) - p_i(x)\| \le \xi\|y - x\| \quad \text{and} \quad \|q_i(y) - q_i(x)\| \le \xi\|y - x\|, \quad 1 \le i \le n.$$

Notice that $w$ is already strongly semismooth. By Proposition 2.4, to show the superlinear convergence of Algorithm I, we also need the following nonsingularity assumption on the generalized Jacobian $\partial_B w(\cdot)$.

Assumption 4.3 All elements in $\partial_B w(\bar{z})$ are nonsingular, where $\bar{z}$ is an accumulation point of the sequence $\{z^k\}$ generated by Algorithm I.

Remark 4.4 Let $\bar{z} = (0, \bar{c})$ be an accumulation point of the sequence $\{z^k := (\epsilon_k, c^k)\}$ generated by Algorithm I such that $w(\bar{z}) = 0$. Obviously,

$$\partial_B w(\bar{z}) = \left\{ \begin{bmatrix} 1 & 0 \\ \bar{c} & V \end{bmatrix} : V \in \partial_B g(\bar{c}) \right\}.$$

If $A(\bar{c})$ has distinct singular values, then

$$\partial w(\bar{z}) = \partial_B w(\bar{z}) = \left\{ \begin{bmatrix} 1 & 0 \\ \bar{c} & g'(\bar{c}) \end{bmatrix} \right\},$$

where, for any $P(\bar{c}) := [p_1(\bar{c}), \ldots, p_m(\bar{c})] \in \mathcal{P}(\bar{c})$ and $Q(\bar{c}) := [q_1(\bar{c}), \ldots, q_n(\bar{c})] \in \mathcal{Q}(\bar{c})$, the Jacobian of $g$ at $\bar{c}$ is given by ([42, Theorem 1.9.3] and [41])

$$[g'(\bar{c})]_{j,l} := \sum_{i=1}^j (p_i(\bar{c}))^T A_l\, q_i(\bar{c}), \quad 1 \le j, l \le n. \quad (17)$$
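Formula (17) is straightforward to evaluate numerically from one SVD of $A(\bar{c})$. The sketch below (our own helper names; NumPy assumed) builds the Jacobian this way and checks it against finite differences; the same Jacobian, shifted by $\epsilon_k I_n$, is what the practical test of Assumption 4.3 monitors:

```python
import numpy as np

def g_jacobian(c, A):
    """Jacobian of g via (17), valid where A(c) has distinct singular values:
    [g'(c)]_{j,l} = sum_{i<=j} p_i(c)^T A_l q_i(c).  A = [A_0, ..., A_n]."""
    Ac = A[0] + sum(ci * Ai for ci, Ai in zip(c, A[1:]))
    P, s, Qt = np.linalg.svd(Ac)
    n = Ac.shape[1]
    G = np.array([[P[:, i] @ A[l + 1] @ Qt[i, :] for l in range(n)]
                  for i in range(n)])     # G[i, l] = p_i^T A_l q_i
    return np.cumsum(G, axis=0)           # row j sums the first j terms

rng = np.random.default_rng(3)
m, n = 5, 3
A = [rng.standard_normal((m, n)) for _ in range(n + 1)]
c = rng.standard_normal(n)

def g(c):  # cumulative sums of singular values of A(c)
    Ac = A[0] + sum(ci * Ai for ci, Ai in zip(c, A[1:]))
    return np.cumsum(np.linalg.svd(Ac, compute_uv=False))

J = g_jacobian(c, A)
t = 1e-6
J_fd = np.column_stack([(g(c + t * e) - g(c)) / t for e in np.eye(n)])
print(np.max(np.abs(J - J_fd)))                       # small

# Assumption 4.3 can be monitored by the conditioning of g'(c^k) + eps_k I_n:
eps_k = 1e-3
print(np.linalg.cond(J + eps_k * np.eye(n)))
```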
In this case, Assumption 4.3 holds if and only if $g'(\bar{c})$ is nonsingular. If, for all $k$ sufficiently large, $A(c^k)$ has distinct singular values, then

$$\partial w(z^k) = \partial_B w(z^k) = \left\{ \begin{bmatrix} 1 & 0 \\ c^k & g'(c^k) + \epsilon_k I_n \end{bmatrix} \right\}.$$

In this case, Assumption 4.3 can be measured by

$$\limsup_{k\to\infty} \big\| \big(g'(c^k) + \epsilon_k I_n\big)^{-1} \big\| < \infty.$$

Therefore, in practice, Assumption 4.3 can be tested by checking whether the matrix $g'(c^k) + \epsilon_k I_n$ is nonsingular for all $k$ sufficiently large. Define

$$\mathcal{D} := \{c \in \mathbb{R}^n : A(c) \text{ has distinct singular values}\}.$$

To establish the global and quadratic convergence of Algorithm I, we prove the following preliminary lemma.

Lemma 4.5 Let $\bar{z} \in \mathbb{R}^{n+1}$ be an accumulation point of the sequence $\{z^k\}$ generated by Algorithm I. Suppose that Assumption 4.3 holds and $\mathcal{D}$ is dense in $\mathbb{R}^n$. If equation (12) is solvable for $d^k \in \mathbb{R}^{n+1}$ such that the conditions (13) and (14) are satisfied, then there exists a constant $\kappa > 0$ such that, for all $k$ sufficiently large,

$$\|d^k\| \le \kappa(1 + \eta)\|w(z^k)\|, \quad (18)$$

$$|\omega'(\bar{z}; d^k) - \omega'(z^k + d^k; d^k)| \le L_1(z^k, d^k)\,\|z^k + d^k - \bar{z}\|\,\|d^k\|, \quad (19)$$

where $L_1(z^k, d^k) := L^2 + 2(2 + \xi\zeta n\sqrt{n})\|w(z^k + d^k)\|$ and $L$ is the Lipschitz constant of $w$ around $\bar{z}$. In addition, if $w(\bar{z}) = 0$, then for any $0 < \delta < 1/2$ and for all $k$ sufficiently large,

$$\|z^k + d^k - \bar{z}\| = O(\|z^k - \bar{z}\|^2), \quad (20)$$

$$\|w(z^k + d^k)\| \le \delta\|w(z^k)\|. \quad (21)$$

Proof: We show the conclusions (18)–(21) as follows. We first prove (18). By the semismoothness of $w$, Assumption 4.3, and Lemma 2.5, for all $k$ sufficiently large there exist a constant $\kappa > 0$ and a $V_k \in \partial_B w(z^k)$ such that

$$w'(z^k; z^k - \bar{z}) = V_k(z^k - \bar{z}), \qquad w'(z^k; d^k) = V_k d^k, \qquad \|V_k^{-1}\| \le \kappa. \quad (22)$$

By the hypothesis and (22), for all $k$ sufficiently large,

$$\|d^k\| \le \|V_k^{-1}\|\,\|V_k d^k + w(z^k) - w(z^k)\| \le \kappa\big( \|w'(z^k; d^k) + w(z^k)\| + \|w(z^k)\| \big) \le \kappa(1 + \eta_k)\|w(z^k)\| \le \kappa(1 + \eta)\|w(z^k)\|,$$
which confirms (18). Next, we show (19). Notice that $d^k = (\Delta\epsilon_k, \Delta c^k) \in \mathbb{R} \times \mathbb{R}^n$ for all $k$. By the semismoothness of $w$ and Lemma 2.5, for all $k$ sufficiently large there exist $V(\bar{c}) \in \partial_B g(\bar{c})$ and $V(c^k + \Delta c^k) \in \partial_B g(c^k + \Delta c^k)$ such that

$$g'(\bar{c}; \Delta c^k) = V(\bar{c})\Delta c^k \quad \text{and} \quad g'(c^k + \Delta c^k; \Delta c^k) = V(c^k + \Delta c^k)\Delta c^k.$$

By the definition of $\partial_B g(\bar{c})$ and $\partial_B g(c^k + \Delta c^k)$, there exist $x^k, y^k \in \mathbb{R}^n$ at which $g$ is differentiable (i.e., $A(\cdot)$ has distinct singular values at $x^k$ and $y^k$) such that, for all $k$ sufficiently large,

$$\|x^k - \bar{c}\| \le \frac{\|c^k + \Delta c^k - \bar{c}\|}{2}, \qquad \|y^k - (c^k + \Delta c^k)\| \le \frac{\|c^k + \Delta c^k - \bar{c}\|}{2}, \quad (23)$$

and

$$\|V(\bar{c}) - g'(x^k)\| \le \|c^k + \Delta c^k - \bar{c}\|, \qquad \|V(c^k + \Delta c^k) - g'(y^k)\| \le \|c^k + \Delta c^k - \bar{c}\|. \quad (24)$$

By (23), we get, for all $k$ sufficiently large,

$$\|x^k - y^k\| \le \|x^k - \bar{c}\| + \|y^k - (c^k + \Delta c^k)\| + \|c^k + \Delta c^k - \bar{c}\| \le 2\|c^k + \Delta c^k - \bar{c}\|. \quad (25)$$

By (25) and Lemma 4.2, there exists a positive number $\xi$ such that, for all $k$ sufficiently large,

$$\|p_i(x^k) - p_i(y^k)\| \le \xi\|x^k - y^k\| \quad \text{and} \quad \|q_i(x^k) - q_i(y^k)\| \le \xi\|x^k - y^k\|, \quad 1 \le i \le n. \quad (26)$$

Since $A(\cdot)$ has distinct singular values $\{\sigma_l(A(\cdot))\}_{l=1}^n$ at $x^k$ and $y^k$, we have [41]

$$\frac{\partial\sigma_l(x^k)}{\partial c_p} = p_l^T(x^k)\,\frac{\partial A(x^k)}{\partial c_p}\,q_l(x^k), \qquad \frac{\partial\sigma_l(y^k)}{\partial c_p} = p_l^T(y^k)\,\frac{\partial A(y^k)}{\partial c_p}\,q_l(y^k), \quad 1 \le l, p \le n.$$

This, together with (24), (25), (26), and Lemma 4.1, implies that, for all $k$ sufficiently large,

$$\begin{aligned}
\|g'(\bar{c}; \Delta c^k) - g'(c^k + \Delta c^k; \Delta c^k)\|
&= \|V(\bar{c})\Delta c^k - V(c^k + \Delta c^k)\Delta c^k\| \\
&\le \|V(\bar{c}) - g'(x^k)\|\,\|\Delta c^k\| + \|V(c^k + \Delta c^k) - g'(y^k)\|\,\|\Delta c^k\| + \|g'(x^k)\Delta c^k - g'(y^k)\Delta c^k\| \\
&\le 2\|c^k + \Delta c^k - \bar{c}\|\,\|\Delta c^k\| \\
&\quad + \sqrt{n}\,\max_{1\le j\le n} \sum_{l=1}^j \Big| p_l^T(x^k)\Big(\sum_{p=1}^n (\Delta c^k)_p A_p\Big) q_l(x^k) - p_l^T(y^k)\Big(\sum_{p=1}^n (\Delta c^k)_p A_p\Big) q_l(x^k) \Big| \\
&\quad + \sqrt{n}\,\max_{1\le j\le n} \sum_{l=1}^j \Big| p_l^T(y^k)\Big(\sum_{p=1}^n (\Delta c^k)_p A_p\Big) q_l(x^k) - p_l^T(y^k)\Big(\sum_{p=1}^n (\Delta c^k)_p A_p\Big) q_l(y^k) \Big| \\
&\le 2\|c^k + \Delta c^k - \bar{c}\|\,\|\Delta c^k\| + \sqrt{n}\,\max_{1\le j\le n} \sum_{l=1}^j \|p_l(x^k) - p_l(y^k)\|\,\Big\|\sum_{p=1}^n (\Delta c^k)_p A_p\Big\| \\
&\quad + \sqrt{n}\,\max_{1\le j\le n} \sum_{l=1}^j \Big\|\sum_{p=1}^n (\Delta c^k)_p A_p\Big\|\,\|q_l(x^k) - q_l(y^k)\| \\
&\le 2\|c^k + \Delta c^k - \bar{c}\|\,\|\Delta c^k\| + 2\xi\zeta n\sqrt{n}\,\|c^k + \Delta c^k - \bar{c}\|\,\|\Delta c^k\| \\
&= 2(1 + \xi\zeta n\sqrt{n})\,\|c^k + \Delta c^k - \bar{c}\|\,\|\Delta c^k\| \le 2(1 + \xi\zeta n\sqrt{n})\,\|z^k + d^k - \bar{z}\|\,\|d^k\|. \quad (27)
\end{aligned}$$

We further derive from (11) and (27) that, for all $k$ sufficiently large,

$$\begin{aligned}
|\omega'(\bar{z}; d^k) - \omega'(z^k + d^k; d^k)|
&= |w(\bar{z})^T w'(\bar{z}; d^k) - w(z^k + d^k)^T w'(z^k + d^k; d^k)| \\
&\le \|w(\bar{z}) - w(z^k + d^k)\|\,\|w'(\bar{z}; d^k)\| + \|w(z^k + d^k)\|\,\|w'(\bar{z}; d^k) - w'(z^k + d^k; d^k)\| \\
&\le L\|z^k + d^k - \bar{z}\|\cdot L\|d^k\| + \|w(z^k + d^k)\|\,\|g'(\bar{c}; \Delta c^k) - g'(c^k + \Delta c^k; \Delta c^k)\| \\
&\quad + \|w(z^k + d^k)\|\,\|(\epsilon_k + \Delta\epsilon_k - 0)\Delta c^k + \Delta\epsilon_k(c^k + \Delta c^k - \bar{c})\| \\
&\le L^2\|z^k + d^k - \bar{z}\|\,\|d^k\| + \|w(z^k + d^k)\|\,\|g'(\bar{c}; \Delta c^k) - g'(c^k + \Delta c^k; \Delta c^k)\| \\
&\quad + 2\|w(z^k + d^k)\|\,\|z^k + d^k - \bar{z}\|\,\|d^k\| \\
&\le L_1(z^k, d^k)\,\|z^k + d^k - \bar{z}\|\,\|d^k\|. \quad (28)
\end{aligned}$$

Therefore, the estimate (19) holds. We now verify (20). By the strong semismoothness of $w$ and (22), for all $k$ sufficiently large,

$$\|w'(z^k; z^k - \bar{z}) - w'(\bar{z}; z^k - \bar{z})\| = \|V_k(z^k - \bar{z}) - w'(\bar{z}; z^k - \bar{z})\| = O(\|z^k - \bar{z}\|^2). \quad (29)$$

Also, by the hypothesis, one has

$$\|w(z^k) + w'(z^k; d^k)\| \le \eta_k\|w(z^k)\| \le \|w(z^k)\|^2 = \|w(z^k) - w(\bar{z})\|^2 = O(\|z^k - \bar{z}\|^2). \quad (30)$$

Using the strong semismoothness of $w$, (22), (29), and (30), we obtain, for all $k$ sufficiently large,

$$\begin{aligned}
\|z^k + d^k - \bar{z}\|
&= \|V_k^{-1}\big(V_k(z^k - \bar{z}) + V_k d^k\big)\| \\
&\le \|V_k^{-1}\|\Big( \|w(z^k) - w(\bar{z}) - w'(\bar{z}; z^k - \bar{z})\| + \|w'(z^k; z^k - \bar{z}) - w'(\bar{z}; z^k - \bar{z})\| + \|w(z^k) + w'(z^k; d^k)\| \Big) \\
&= O(\|z^k - \bar{z}\|^2),
\end{aligned}$$

which verifies (20). Finally, we establish (21). Since $w$ is strongly semismooth, by Lemma 2.2 and (20), one has, for any $\delta \in (0, 1/2)$ and for all $k$ sufficiently large,

$$\|w(z^k + d^k) - w'(\bar{z}; z^k + d^k - \bar{z})\| = \|w(z^k + d^k) - w(\bar{z}) - w'(\bar{z}; z^k + d^k - \bar{z})\| \le \delta\|z^k + d^k - \bar{z}\|. \quad (31)$$
By (20), for all $k$ sufficiently large,

$$\|z^k + d^k - \bar{z}\| \le \frac{\delta}{1 + 2\kappa(1 + \eta)(L + 1)}\|z^k - \bar{z}\| \le \delta\|z^k - \bar{z}\|. \quad (32)$$

It follows from (31) and (32) that, for all $k$ sufficiently large,

$$\|w(z^k + d^k)\| \le \|w'(\bar{z}; z^k + d^k - \bar{z})\| + \|w(z^k + d^k) - w'(\bar{z}; z^k + d^k - \bar{z})\| \le L\|z^k + d^k - \bar{z}\| + \delta\|z^k + d^k - \bar{z}\| \le (L + \delta)\|z^k + d^k - \bar{z}\| \le \frac{(L + \delta)\delta}{1 + 2\kappa(1 + \eta)(L + 1)}\|z^k - \bar{z}\|. \quad (33)$$

Also, by (18) and (32), we get, for all $k$ sufficiently large,

$$\|z^k - \bar{z}\| \le \|d^k\| + \|z^k + d^k - \bar{z}\| \le \kappa(1 + \eta)\|w(z^k)\| + \delta\|z^k - \bar{z}\|,$$

which implies that, for all $k$ sufficiently large,

$$\|z^k - \bar{z}\| \le \frac{\kappa(1 + \eta)}{1 - \delta}\|w(z^k)\|. \quad (34)$$

By (33) and (34), we obtain, for all $k$ sufficiently large,

$$\|w(z^k + d^k)\| \le \frac{(L + \delta)\delta}{1 + 2\kappa(1 + \eta)(L + 1)}\,\frac{\kappa(1 + \eta)}{1 - \delta}\|w(z^k)\| \le \frac{2\kappa(1 + \eta)(L + 1)\delta}{1 + 2\kappa(1 + \eta)(L + 1)}\|w(z^k)\| \le \delta\|w(z^k)\|.$$

This completes the proof of (21).

We now offer a lower and an upper bound for the sequence $\{\omega'(z^k; d^k)\}$ generated by Algorithm I.

Lemma 4.6 Let $\{z^k\}$ and $\{d^k\}$ be generated by Algorithm I. Then one has

$$\omega'(z^k; d^k) \ge -(1 + \eta)\|w(z^k)\|^2 = -2(1 + \eta)\omega(z^k), \quad (35)$$

$$\omega'(z^k; d^k) \le (\eta - 1)\|w(z^k)\|^2. \quad (36)$$
Proof: It follows from (11) and (13) that

$$\begin{aligned}
\omega'(z^k; d^k) &= w(z^k)^T w'(z^k; d^k) = \langle w(z^k),\, w'(z^k; d^k) + w(z^k)\rangle - \|w(z^k)\|^2 \\
&\ge -\|w(z^k)\|\,\|w'(z^k; d^k) + w(z^k)\| - \|w(z^k)\|^2 \ge -(1 + \eta_k)\|w(z^k)\|^2 \\
&\ge -(1 + \eta)\|w(z^k)\|^2 = -2(1 + \eta)\omega(z^k),
\end{aligned}$$

and

$$\begin{aligned}
\omega'(z^k; d^k) &= \langle w(z^k),\, w'(z^k; d^k) + w(z^k)\rangle - \|w(z^k)\|^2 \le \|w(z^k)\|\,\|w'(z^k; d^k) + w(z^k)\| - \|w(z^k)\|^2 \\
&\le (\eta_k - 1)\|w(z^k)\|^2 \le (\eta - 1)\|w(z^k)\|^2.
\end{aligned}$$

The following theorem gives the global convergence of Algorithm I.

Theorem 4.7 Let $\{z^k\}$ be the sequence generated by Algorithm I. Suppose that the level set $\{z \in \mathbb{R}^{n+1} : \|w(z)\| \le \|w(z^0)\|\}$ is bounded and each generalized Newton equation (12) can be solved inexactly such that the conditions (13) and (14) are satisfied. Assume that $w(z^k) \ne 0$ for all $k$. Then the following conclusions (a)–(d) hold:

(a) $\|w(z^{k+1})\| < \|w(z^k)\|$;

(b) the sequence $\{z^k\}$ is bounded;

(c) if $\bar{z}$ is any accumulation point of $\{z^k\}$, Assumption 4.3 holds, and $\mathcal{D}$ is dense in $\mathbb{R}^n$, then $w(\bar{z}) = 0$;

(d) if $\liminf \alpha_k > 0$, then any accumulation point $\bar{z}$ of $\{z^k\}$ satisfies $w(\bar{z}) = 0$.

Proof: By the hypothesis, conclusions (a) and (b) obviously hold. In the following, we verify conclusions (c) and (d). The sequence $\{\omega(z^k) \ge 0\}$ is strictly monotone decreasing; hence $\{\omega(z^k)\}$ is convergent and

$$\lim_{k\to\infty}\big( \omega(z^k) - \omega(z^{k+1}) \big) = 0. \quad (37)$$

Using (15) and (36), we have

$$\omega(z^k) - \omega(z^{k+1}) \ge -\lambda_1\alpha_k\,\omega'(z^k; d^k) \ge \lambda_1(1 - \eta)\alpha_k\|w(z^k)\|^2 \ge 0. \quad (38)$$

Relations (37) and (38) imply that

$$\lim_{k\to\infty} \alpha_k\|w(z^k)\|^2 = 0.$$
If $\liminf \alpha_k > 0$, then any accumulation point $\bar{z}$ of $\{z^k\}$ is a solution of $w(\bar{z}) = 0$. Now assume that $\liminf \alpha_k = 0$. Let $\bar{z}$ be an accumulation point of $\{z^k\}$. By taking a subsequence if necessary, we may assume that $\{z^k\}$ converges to $\bar{z}$ and $\lim \alpha_k = 0$. Thus $\{w(z^k)\}$ converges to $w(\bar{z})$. From Lemma 4.5, we know that the sequence $\{d^k\}$ is bounded for all $k$ sufficiently large. Hence $\{d^k\}$ has an accumulation point $\bar{d}$. Without loss of generality, we may assume that $\{d^k\}$ converges to $\bar{d}$. As in the proofs of (27) and (28), we can derive that, for all $k$ sufficiently large,

$$\begin{cases} |\omega'(z^k + \alpha_k d^k; \bar{d}) - \omega'(\bar{z}; \bar{d})| \le L_2(z^k, \alpha_k d^k)\,\|z^k + \alpha_k d^k - \bar{z}\|\,\|\bar{d}\|, \\ |\omega'(z^k; \bar{d}) - \omega'(\bar{z}; \bar{d})| \le L_3(z^k)\,\|z^k - \bar{z}\|\,\|\bar{d}\|, \end{cases} \quad (39)$$

where

$$L_2(z^k, \alpha_k d^k) := L^2 + 2(2 + \xi\zeta n\sqrt{n})\|w(z^k + \alpha_k d^k)\|, \qquad L_3(z^k) := L^2 + 2(2 + \xi\zeta n\sqrt{n})\|w(z^k)\|.$$

By the strong semismoothness of $\omega$ and Lemma 2.2,

$$\begin{cases} |\omega'(z^k + \alpha_k d^k; d^k) - \omega'(z^k + \alpha_k d^k; \bar{d})| \le W_1\|d^k - \bar{d}\|, \\ |\omega'(z^k; d^k) - \omega'(z^k; \bar{d})| \le W_2\|d^k - \bar{d}\|, \end{cases} \quad (40)$$

where $W_1$ and $W_2$ are, respectively, the Lipschitz constants of $\omega'(z^k + \alpha_k d^k; \cdot)$ and $\omega'(z^k; \cdot)$ around $\bar{d}$. One has, by (39) and (40), for all $k$ sufficiently large,

$$\begin{cases} |\omega'(z^k + \alpha_k d^k; d^k) - \omega'(\bar{z}; \bar{d})| \le W_1\|d^k - \bar{d}\| + L_2(z^k, \alpha_k d^k)\,\|z^k + \alpha_k d^k - \bar{z}\|\,\|\bar{d}\|, \\ |\omega'(z^k; d^k) - \omega'(\bar{z}; \bar{d})| \le W_2\|d^k - \bar{d}\| + L_3(z^k)\,\|z^k - \bar{z}\|\,\|\bar{d}\|. \end{cases} \quad (41)$$

Relations (16), (35), and (36) imply that

$$\begin{cases} \omega'(z^k + \alpha_k d^k; d^k) > \lambda_2\,\omega'(z^k; d^k) \ge -\lambda_2(1 + \eta)\|w(z^k)\|^2, \\ \omega'(z^k; d^k) \le -(1 - \eta)\|w(z^k)\|^2. \end{cases} \quad (42)$$

Passing to the limit as $k \to \infty$, we have, by (41) and (42),

$$-\lambda_2(1 + \eta)\|w(\bar{z})\|^2 \le \omega'(\bar{z}; \bar{d}) \le -(1 - \eta)\|w(\bar{z})\|^2,$$

and hence

$$\big\{(1 - \eta) - \lambda_2(1 + \eta)\big\}\|w(\bar{z})\|^2 \le 0.$$

Therefore $w(\bar{z}) = 0$, since $\lambda_2(1 + \eta) < 1 - \eta$ by the choice of $\eta$ in Step 0.

Finally, we can establish the quadratic convergence of Algorithm I as follows.

Theorem 4.8 Let $\bar{z} \in \mathbb{R}^{n+1}$ be an accumulation point of the sequence $\{z^k\}$ generated by Algorithm I with $w(\bar{z}) = 0$. Assume that $w(z^k) \ne 0$ for all $k$ and each generalized Newton equation (12) can be solved inexactly such that the conditions (13) and (14) are satisfied.
Suppose that Assumption 4.3 is satisfied and $\mathcal{D}$ is dense in $\mathbb{R}^n$. Then the sequence $\{z^k\}$ converges to $\bar{z}$ quadratically and $\alpha_k$ eventually becomes 1.
Proof: By Lemma 4.5, for all $k$ sufficiently large,

$$\|z^k + d^k - \bar{z}\| = O(\|z^k - \bar{z}\|^2). \quad (43)$$

Let $\gamma = \kappa(1 + \eta) > 0$. By Lemma 4.5 again, for all $k$ sufficiently large,

$$\|z^k - \bar{z}\| \le \|z^k + d^k - \bar{z}\| + \|d^k\| \le \|z^k + d^k - \bar{z}\| + \kappa(1 + \eta)\|w(z^k)\| = \|z^k + d^k - \bar{z}\| + \gamma\|w(z^k)\|. \quad (44)$$

By (43), there exists a sufficiently small scalar $\theta > 0$ such that, for all $k$ sufficiently large,

$$\|z^k + d^k - \bar{z}\| \le \theta\|z^k - \bar{z}\|.$$

This, together with (44), implies that, for all $k$ sufficiently large,

$$\|z^k + d^k - \bar{z}\| \le \frac{\theta\gamma}{1 - \theta}\|w(z^k)\|.$$

Thus one has, for all $k$ sufficiently large,

$$\|w(z^k + d^k)\| = \|w(z^k + d^k) - w(\bar{z})\| \le L\|z^k + d^k - \bar{z}\| \le \frac{L\theta\gamma}{1 - \theta}\|w(z^k)\|,$$

or

$$\omega(z^k + d^k) \le \Big( \frac{L\theta\gamma}{1 - \theta} \Big)^2 \omega(z^k), \quad (45)$$

where $L$ is the Lipschitz constant of $w$ at $\bar{z}$. Furthermore, we can choose a sufficiently small $\theta$ satisfying the condition

$$1 - \Big( \frac{L\theta\gamma}{1 - \theta} \Big)^2 > 2(1 + \eta)\lambda_1.$$

Then we have, by (35) and (45), for all $k$ sufficiently large,

$$\omega(z^k + d^k) - \omega(z^k) \le -\Big( 1 - \Big( \frac{L\theta\gamma}{1 - \theta} \Big)^2 \Big)\omega(z^k) \le -2(1 + \eta)\lambda_1\,\omega(z^k) \le \lambda_1\,\omega'(z^k; d^k). \quad (46)$$

On the other hand, using (43), we get, for all $k$ sufficiently large,

$$\|z^k + d^k - \bar{z}\| \le \frac{\lambda_1(1 - \eta)}{\lambda_1(1 - \eta) + L_0\gamma^2}\|z^k - \bar{z}\|,$$

where $L_0 := L^2 + 2(2 + \xi\zeta n\sqrt{n})\|w(z^0)\|$. This, together with (44), implies that

$$\|z^k + d^k - \bar{z}\| \le \frac{\lambda_1(1 - \eta)}{L_0\gamma}\|w(z^k)\|. \quad (47)$$
By the strong semismoothness of $\omega$, Lemma 4.5, Theorem 4.7(a), (36), and (47), noting that $\omega'(\bar{z}; d^k) = 0$ (since $w(\bar{z}) = 0$), we have, for all $k$ sufficiently large,

$$\begin{aligned}
\omega'(z^k + d^k; d^k) &= \omega'(\bar{z}; d^k) - \big( \omega'(\bar{z}; d^k) - \omega'(z^k + d^k; d^k) \big) \ge -|\omega'(\bar{z}; d^k) - \omega'(z^k + d^k; d^k)| \\
&\ge -L_1(z^k, d^k)\,\|z^k + d^k - \bar{z}\|\,\|d^k\| = -\big( L^2 + 2(2 + \xi\zeta n\sqrt{n})\|w(z^k + d^k)\| \big)\|z^k + d^k - \bar{z}\|\,\|d^k\| \\
&\ge -\big( L^2 + 2(2 + \xi\zeta n\sqrt{n})\|w(z^0)\| \big)\|z^k + d^k - \bar{z}\|\,\|d^k\| \\
&\ge -L_0\,\frac{\lambda_1(1 - \eta)}{L_0\gamma}\|w(z^k)\|\cdot\gamma\|w(z^k)\| = -\lambda_1(1 - \eta)\|w(z^k)\|^2 \ge \lambda_1\,\omega'(z^k; d^k). \quad (48)
\end{aligned}$$

Notice that $\lambda_2 > \lambda_1 > 0$ and $\omega'(z^k; d^k) < 0$. We have, by (48), for all $k$ sufficiently large,

$$\omega'(z^k + d^k; d^k) \ge \lambda_1\,\omega'(z^k; d^k) > \lambda_2\,\omega'(z^k; d^k). \quad (49)$$

Combining (46) with (49) yields, for all $k$ sufficiently large,

$$\alpha_k = 1 \quad \text{and} \quad z^{k+1} = z^k + d^k.$$

This, together with (43), completes the proof.

Remark 4.9 Theorem 4.8 shows that, under the conditions that Assumption 4.3 is satisfied and $\mathcal{D}$ is dense in $\mathbb{R}^n$, $\bar{z}$ is a zero of $w$ if and only if $\{z^k\}$ converges to $\bar{z}$ and $\alpha_k$ eventually becomes 1. In other words, $\bar{z}$ is not a zero of $w$ if and only if either $\{z^k\}$ diverges or $\lim_{k\to\infty}\alpha_k = 0$. This implies that the generalized Newton equation (12) may not give a globally descent direction for the merit function $\omega$ in (10), especially when $\lim_{k\to\infty}\alpha_k = 0$.

Remark 4.10 In Theorem 4.7(c) and Theorem 4.8, we have assumed that $\mathcal{D}$ is dense in $\mathbb{R}^n$, i.e., all singular values of $A(\cdot)$ are differentiable almost everywhere in $\mathbb{R}^n$. This is reasonable, since the subset $\mathbb{R}^n \setminus \mathcal{D}$ is usually a null set. One may see the similar remark for eigenvalues of a real symmetric matrix in [15] and [38, p. 2365].

5 A hybrid method

We observe from Theorem 4.8 that, when $z^k$ is sufficiently near a zero $\bar{z}$ of $w$, the generalized Newton equation (12) exhibits a good local property: if Assumption 4.3 is satisfied and $\mathcal{D}$ is dense in $\mathbb{R}^n$, then (12) is solvable and the convergence rate is quadratic. However, globally, the vector $d^k$ generated by (12) is not necessarily a descent direction for the merit function $\omega$ in (10).
Some globally convergent descent methods were provided in [17] for solving nonlinear equations defined by locally Lipschitzian functions. To improve the practical effectiveness of Algorithm I, inspired by [17, 29], we give a hybrid method for solving w(z) = 0, which combines the directional derivative-based generalized Newton equation (12) with an Armijo-like line search based on the merit function ω in (10). We will show that the hybrid method converges quadratically and globally (in the sense of computing a stationary point of the merit function ω).

Algorithm II: A Hybrid Method

Step 0. Given η ∈ (0, 1/2), ρ ∈ (0, 1), λ ∈ (0, 1 − η), and ε ∈ R. Let z⁰ := (ε₀, c⁰) be the initial point, where ε₀ := ε and c⁰ ∈ R^n is arbitrary. Set k := 0.

Step 1. Solve (12) for d^k := (Δε_k, Δc^k) ∈ R × R^n such that the conditions (13) and (14) are satisfied.

Step 2. Compute the steplength α_k = ρ^{l_k}, where l_k is the smallest nonnegative integer l satisfying the Armijo-like condition:

ω(z^k + ρ^l d^k) − ω(z^k) ≤ −2λρ^l ω(z^k). (50)

Set z^{k+1} := z^k + α_k d^k.

Step 3. Replace k by k + 1 and go to Step 1.

We observe that, in Algorithm II, the Armijo-like rule (50) ensures that {ω(z^k)} is strictly decreasing and bounded below (by zero), i.e.,

0 < ω(z^{k+1}) ≤ (1 − 2λα_k)ω(z^k) ≤ ω(z^k).

In our numerical tests, we will report the numerical performance of Algorithm II instead of Algorithm I, since Algorithm II is more effective and feasible in practice. As in [17], we can derive the following lemma on the Armijo-like rule (50) in Algorithm II.

Lemma 5.1 Let η ∈ (0, 1/2) and ρ ∈ (0, 1). Suppose that ω(z^k) ≠ 0 for all k. If Equation (12) is solvable such that the conditions (13) and (14) are satisfied, then there exists a scalar τ̄ > 0 such that, for all τ ∈ [0, τ̄],

ω(z^k + τd^k) − ω(z^k) ≤ −2λτω(z^k).

Proof: We show the conclusion by contradiction. Suppose there exists a sequence {τ_i > 0} such that {τ_i} converges to zero and

ω(z^k + τ_i d^k) − ω(z^k) > −2λτ_i ω(z^k) for all i.

Dividing both sides by τ_i and then passing to the limit i → ∞, we get

ω′(z^k; d^k) ≥ −2λω(z^k). (51)

On the other hand, by (11) and (13),

ω′(z^k; d^k) = w(z^k)^T w′(z^k; d^k) = w(z^k)^T (−w(z^k) + (w(z^k) + w′(z^k; d^k)))
≤ −w(z^k)^T w(z^k) + ‖w(z^k)‖ ‖w(z^k) + w′(z^k; d^k)‖
≤ −‖w(z^k)‖² + η_k ‖w(z^k)‖²
≤ −‖w(z^k)‖² + η ‖w(z^k)‖²
= −(1 − η)‖w(z^k)‖² = −2(1 − η)ω(z^k). (52)
By (51) and (52),

−2λω(z^k) ≤ ω′(z^k; d^k) ≤ −2(1 − η)ω(z^k).

This is a contradiction by the choice of λ ∈ (0, 1 − η) and the assumption that ω(z^k) ≠ 0 for all k.

We see from Lemma 5.1 that the Armijo-like rule (50) always holds for some nonnegative integer l_k under the assumption that each generalized Newton equation (12) is solvable such that the conditions (13) and (14) are satisfied. On the global and quadratic convergence of Algorithm II, we have the following result.

Theorem 5.2 Let z̄ be any accumulation point of the sequence {z^k} generated by Algorithm II. Suppose that the level set {z ∈ R^{n+1} : ‖w(z)‖ ≤ ‖w(z⁰)‖} is bounded and each generalized Newton equation (12) can be solved inexactly such that the conditions (13) and (14) are satisfied. Assume that w(z^k) ≠ 0 for all k. Then the conclusions (a)–(d) hold:

(a) ‖w(z^{k+1})‖ < ‖w(z^k)‖;
(b) the sequence {z^k} is bounded;
(c) if all elements in ∂_B w(z̄) are nonsingular and D is dense in R^n, then w(z̄) = 0;
(d) if lim inf_{k→∞} α_k > 0, then w(z̄) = 0.

In addition, suppose that w(z̄) = 0, all elements in ∂_B w(z̄) are nonsingular, and D is dense in R^n. Then the sequence {z^k} converges to z̄ quadratically and α_k eventually becomes 1.

Proof: Obviously, the conclusions (a) and (b) hold. If all elements in ∂_B w(z̄) are nonsingular and D is dense in R^n, the conclusions (c) and (d) follow by an argument similar to the proof of Theorem 4.7. Suppose that w(z̄) = 0, all elements in ∂_B w(z̄) are nonsingular, and D is dense in R^n. Then, by Theorem 4.8, {z^k} converges to z̄ quadratically and α_k eventually becomes 1.

6 Numerical tests

In this section, we report some numerical tests to illustrate the effectiveness of Algorithm II for solving the ISVP. The tests were carried out in MATLAB 7.10 running on a PC with an Intel Pentium IV 3.0 GHz CPU. In our numerical experiments, we set λ = 10⁻⁴ and η = . We implement the Armijo-like rule (50) via the equivalent test

‖w(z^{k+1})‖ ≤ (1 − 2λα_k)^{1/2} ‖w(z^k)‖.
For demonstration purposes, for the examples below we set the stopping criterion to be ‖w(z^k)‖ − ‖w(z^{k+1})‖ ≤ 10⁻⁵, and the linear system (12) is solved by the TFQMR method [14] (we set the largest number of iterations in TFQMR to 100). One may also solve (12) by other iterative methods.
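The interplay of the Newton step and the Armijo-like rule (50) can be sketched in a few lines of code. The following Python sketch is illustrative only: it uses an exact Jacobian solve on a toy smooth residual as a stand-in for the inexact directional derivative-based generalized Newton equation (12), and the function names (`armijo_newton`, `w`, `J`) are assumptions, not from the paper.

```python
import numpy as np

def armijo_newton(w, J, z0, lam=1e-4, rho=0.5, tol=1e-10, max_iter=100):
    """Hybrid-method sketch: Newton direction plus the Armijo-like rule (50).

    w : residual function; J : its Jacobian. These stand in for the
    generalized Newton equation (12) of the paper.
    """
    z = np.asarray(z0, dtype=float)
    omega = lambda x: 0.5 * np.dot(w(x), w(x))  # merit function omega in (10)
    for _ in range(max_iter):
        wz = w(z)
        if np.linalg.norm(wz) <= tol:
            break
        d = np.linalg.solve(J(z), -wz)          # Newton step for (12)
        # Rule (50): omega(z + a*d) - omega(z) <= -2*lam*a*omega(z), a = rho^l
        alpha = 1.0
        while omega(z + alpha * d) - omega(z) > -2.0 * lam * alpha * omega(z):
            alpha *= rho
        z = z + alpha * d
    return z

# Toy residual with zero at (1, 2); purely illustrative.
w = lambda z: np.array([z[0] ** 2 - 1.0, z[1] - 2.0])
J = lambda z: np.array([[2.0 * z[0], 0.0], [0.0, 1.0]])
z_star = armijo_newton(w, J, [3.0, 0.0])
```

Near the solution the full step α_k = 1 satisfies (50) with a small λ such as 10⁻⁴, so the backtracking loop rarely triggers and the local quadratic rate is preserved, which mirrors the behavior of α_k reported in the tables below.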
20 Example 6.1 This is an inverse problem with distinct singular values (m = 7 and n = 4). The test data is given as follows: A 1 = A 3 = A 0 = , A 2 =, A 4 =, σ = ( , , , ) T, c = (1, 2, 3, 4) T. We report our numerical results for different starting points: (a) c 0 = ( 1, 1, 1, 1) T ; (b) c 0 = (0, 0, 0, 0) T ; (c) c 0 = (10, 10, 10, 10) T ; (d) c 0 = (50, 50, 50, 50) T ; (e) c 0 = (100, 100, 100, 100) T. Example 6.2 This is an inverse problem with distinct singular values (m = 5 and n = 4). The test data is defined by: A 0 = , A i = I m (:, i)i n (:, i) T, i = 1, 2, 3, 4, σ = (5.1602, , , ) T, c = (1, 2, 3, 5) T, where I n (:, i) denotes the i-th column of the identity matrix I n. We report our numerical results for different starting points: (a) c 0 = (0, 0, 0, 0) T ; (b) c 0 = (1, 1, 1, 1) T ; (c) c 0 = (10, 10, 10, 10) T ; (d) c 0 = (50, 50, 50, 50) T ; (e) c 0 = (10, 20, 30, 50) T.,, 20
Example 6.3 This is an inverse problem with multiple singular values (m = 6 and n = 4). The test data is defined by:

A_0 = 0, A_1 = , A_2 = , A_3 = , A_4 = ,

σ = (2, 1, 1, 1)^T, c = (1, 1, 1, 1)^T.

We report our numerical results for different starting points: (a) c⁰ = (0, 0, 0, 0)^T; (b) c⁰ = ( 3, 2, 4, 7)^T; (c) c⁰ = (5, 6, 7, 8)^T; (d) c⁰ = (11, 13, 15, 19)^T; (e) c⁰ = (100, 121, 155, 199)^T.

Example 6.4 [2] This is a Toeplitz-plus-Hankel inverse problem with distinct singular values (m = n = 5). Define

A_0 = 0, A_1 = , A_2 = , A_3 = , A_4 = , A_5 = ,

σ = (4, 3, 2, 1, 0)^T, c = (0, , , , )^T.

We report our numerical results for different starting points: (a) c⁰ = (0, 0, 0, 0, 0)^T; (b) c⁰ = (1, 1, 1, 1, 1)^T; (c) c⁰ = (8, 5, 8, 5, 8)^T; (d) c⁰ = (20, 17, 15, 12, 11)^T; (e) c⁰ = (37, 43, 48, 50, 60)^T.

Example 6.5 [2] This is a Toeplitz-plus-Hankel inverse problem with multiple singular values (m = n = 5). The basis matrices {A_i}, i = 1, …, 5, are defined as in Example 6.4, and

σ = (2, 2, 2, 1, 0)^T, c = (0, , , , )^T.
We report our numerical results for various starting points: (a) c⁰ = (0, 0, 0, 0, 0)^T; (b) c⁰ = (1, 1, 1, 1, 1)^T; (c) c⁰ = (8, 10, 13, 17, 19)^T; (d) c⁰ = (30, 27, 25, 22, 21)^T; (e) c⁰ = (68, 77, 88, 100, 120)^T.

Tables 1–10 display the numerical results for Examples 6.1–6.5, where SP., IT., and NF. stand for the starting point, the number of Newton iterations, and the number of function evaluations, respectively; ‖w(z^k)‖ and the condition number κ₂(g′(c^k) + ε_k I_n) := ‖g′(c^k) + ε_k I_n‖₂ ‖(g′(c^k) + ε_k I_n)⁻¹‖₂ are reported at the last 3 iterates of Algorithm II, the latter only where A(c^k) has distinct singular values; and α_k is the eventual value of the steplength at the final iteration of Algorithm II. Here, "*" denotes that κ₂(g′(c^k) + ε_k I_n) could not be estimated at the current iterate c^k since A(c^k) has multiple singular values.

We observe from Tables 1–10 that both Algorithm II with regularization (i.e., ε ≠ 0) and Algorithm II without regularization (i.e., ε = 0) converge for all the different starting points. Furthermore, the values of κ₂(g′(c^k) + ε_k I_n) are not large, so g′(c^k) + ε_k I_n is nonsingular for all sufficiently large k at which A(c^k) has distinct singular values; hence Assumption 4.3 is satisfied. Quadratic convergence occurs at the final stage of the iterations. This confirms our theoretical prediction. We also see that our algorithm converges to a solution c of the ISVP which is not necessarily equal to the original c. In our numerical tests, we observe that different values of ε and ρ may yield different numbers of outer iterations. An interesting question is how to find the optimal parameters ε and ρ, which needs further study.

Table 1: Numerical results for Example 6.1 Alg. II with regularization SP. ɛ ρ IT. NF.
w(z k ) α k κ 2 (g (c k ) + ɛ k I n) (a) ( , , ) 1.0 ( , , ) (b) ( , , ) 1.0 ( , , ) (c) ( , , ) 1.0 ( , , ) (d) ( , , ) 1.0 ( , , ) (e) ( , , ) 1.0 ( , , ) Alg. II without regularization SP. ɛ ρ IT. NF. w(z k ) α k κ 2 (g (c k ) + ɛ k I n) (a) ( , , ) 1.0 ( , , ) (b) ( , , ) 1.0 ( , , ) (c) ( , , ) 1.0 ( , , ) (d) ( , , ) 1.0 ( , , ) (e) ( , , ) 1.0 ( , , ) Acknowledgments We would like to thank the editor and the anonymous referees for their helpful and valuable comments. 22
23 Table 2: Final values of c k for Example 6.1 Alg. II with regularization Alg. II without regularization SP. c c (a) ( , , , ) ( , , , ) (b) ( , , , ) ( , , , ) (c) (4.7112, , , ) (1.0946, , , ) (d) (0.8170, , , ) (3.9373, , , ) (e) (1.2070, , , ) (3.9373, , , ) Table 3: Numerical results for Example 6.2 Alg. II with regularization SP. ɛ ρ IT. NF. w(z k ) α k κ 2 (g (c k ) + ɛ k I n) (a) ( , , ) 1.0 ( , , ) (b) ( , , ) 1.0 (9.0796, , ) (c) ( , , ) 1.0 (6.7536, , ) (d) ( , , ) 1.0 ( , , ) (e) ( , , ) 1.0 (5.8986, , ) Alg. II without regularization SP. ɛ ρ IT. NF. w(z k ) α k κ 2 (g (c k ) + ɛ k I n) (a) ( , , ) 1.0 ( , , ) (b) ( , , ) 1.0 (6.0137, , ) (c) ( , , ) 1.0 (8.9459, , ) (d) ( , , ) 1.0 (8.9440, , ) (e) ( , , ) 1.0 (5.6568, , ) References [1] P. Alonso, G. Flores-Becerra, and A. M. Vidal, Sequential and parallel algorithms for the inverse Toeplitz singular value problem, Proc. CSC, 2006, pp [2] Z. J. Bai, X. Q. Jin, and S. W. Vong, On some inverse singular value problems with Toeplitzrelated structure, Numerical Algebra, Control and Optimization, 2 (2012), pp [3] Z. J. Bai, B. Morini, S. F. Xu, On the local convergence of an iterative approach for inverse singular value problems, J. Comput. Appl. Math., 198 (2007), pp [4] Z. J. Bai and S. F. Xu, An inexact Newton-type method for inverse singular value problems, Linear Algebra Appl., 429 (2008), pp [5] M. T. Chu, Numerical methods for inverse singular value problems, SIAM J. Numer. Anal., 29 (1992), pp [6] M. T. Chu and G. H. Golub, Structured inverse eigenvalue problems, Acta Numer., 11 (2002), pp [7] M. T. Chu and G. H. Golub, Inverse Eigenvalue Problems: Theory, Algorithms, and Applications, Oxford University Press, Oxford,
24 Table 4: Final values of c k for Example 6.2 Alg. II with regularization Alg. II without regularization SP. c c (a) ( , , , ) ( , , , ) (b) (0.9407, , , ) ( , , , ) (c) (3.5784, , , ) ( , , , ) (d) ( , , , ) ( , , , ) (e) (5.6839, , , ) (5.0938, , , ) Table 5: Numerical results for Example 6.3 Alg. II with regularization SP. ɛ ρ IT. NF. w(z k ) α k κ 2 (g (c k ) + ɛ k I n) (a) ( , , ) 1.0 ( , , ) (b) ( , , ) 1.0 ( , , ) (c) ( , , ) 1.0 ( , , ) (d) ( , , ) 1.0 ( , , ) (e) ( , , ) 1.0 ( , , ) Alg. II without regularization SP. ɛ ρ IT. NF. w(z k ) α k κ 2 (g (c k ) + ɛ k I n) (a) ( , , ) 1.0 ( , , ) (b) ( , , ) 1.0 ( , , ) (c) ( , , ) 1.0 ( , , ) (d) ( , , ) 1.0 ( , , ) (e) ( , , ) 1.0 ( , , ) [8] F. Clarke, Optimization and Nonsmooth Analysis, John Wiley, New York, [9] T. De Luca, F. Facchinei, and C. Kanzow, A semismooth equation approach to the solution of nonlinear complementarity problems, Math. Programming, 75 (1996), pp [10] F. Facchinei and C. Kanzow, Beyond monotonicity in regularization methods for nonlinear complementarity problems, SIAM J. Control Optim., 37 (1999), pp [11] M. C. Ferris and J. S. Pang, Engineering and economic applications of complementarity problems, SIAM Rev., 39 (1997), pp [12] A. Fischer, Solution of monotone complementarity problems with Lipschitzian functions, Math. Programming, 76 (1997), pp [13] G. Flores-Becerra, V. M. Garcia, A. M. Vidal, Parallelization of a method for the solution of the inverse additive singular value problem, WSEAS Transactions on Mathematics, 5 (2006), pp [14] R. W. Freund, A transpose-free quasi-minimal residual algorithm for non-hermitian linear systems, SIAM J. Sci. Comput., 14 (1993), pp
25 Table 6: Final values of c k for Example 6.3 Alg. II with regularization Alg. II without regularization SP. c c (a) (1.0000, , , ) (1.0000, , , ) (b) ( , , , ) ( , , , ) (c) (1.0000, , , ) (1.0000, , , ) (d) (1.0000, , , ) (1.0000, , , ) (e) ( , , , ) (1.0000, , , ) Table 7: Numerical results for Example 6.4 Alg. II with regularization SP. ɛ ρ IT. NF. w(z k ) α k κ 2 (g (c k ) + ɛ k I n) (a) ( , , ) 1.0 ( , , ) (b) ( , , ) 1.0 ( , , ) (c) ( , , ) 1.0 ( , , ) (d) ( , , ) 1.0 ( , , ) (e) ( , , ) 1.0 ( , , ) Alg. II without regularization SP. ɛ ρ IT. NF. w(z k ) α k κ 2 (g (c k ) + ɛ k I n) (a) ( , , ) 1.0 ( , , ) (b) ( , , ) 1.0 ( , , ) (c) ( , , ) 1.0 ( , , ) (d) ( , , ) 1.0 ( , , ) (e) ( , , ) 1.0 ( , , ) [15] S. Friedland, J. Nocedal, and M. L. Overton, The formulation and analysis of numerical methods for inverse eigenvalue problems, SIAM. J. Numer. Anal., 24 (1987), pp [16] G. H. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, USA, Third Edition, 1996 [17] S. P. Han, J. S. Pang, and N. Rangaraj,Globally convergent Newton methods for nonsmooth equations, Math. Oper. Res., 17(1992), pp [18] P. T. Harker and J. S. Pang, Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications, Math. Programming, 48 (1990), pp [19] J.-B. Hiriart-Urruty and D. Ye, Sensitivity analysis of all eigenvalues of a symmetric matrix, Numer. Math., 70 (1995), pp [20] H. Y. Jiang, Global convergence analysis of the generalized Newton and Gauss-Newton methods of the Fischer-Burmester equation for complementarity problem, Math. Oper. Res., 24 (1999), pp [21] H. Y. Jiang and L. Q. Qi, A new nonsmooth equations approach to nonlinear complementarity problems, SIAM J. Control Optim., 35 (1997), pp
More informationIterative algorithms based on the hybrid steepest descent method for the split feasibility problem
Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (206), 424 4225 Research Article Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem Jong Soo
More informationStructural and Multidisciplinary Optimization. P. Duysinx and P. Tossings
Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be
More informationAN EQUIVALENCY CONDITION OF NONSINGULARITY IN NONLINEAR SEMIDEFINITE PROGRAMMING
J Syst Sci Complex (2010) 23: 822 829 AN EQUVALENCY CONDTON OF NONSNGULARTY N NONLNEAR SEMDEFNTE PROGRAMMNG Chengjin L Wenyu SUN Raimundo J. B. de SAMPAO DO: 10.1007/s11424-010-8057-1 Received: 2 February
More informationUnconstrained optimization
Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout
More informationIterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming
Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Zhaosong Lu October 5, 2012 (Revised: June 3, 2013; September 17, 2013) Abstract In this paper we study
More informationGENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS. Jong Kyu Kim, Salahuddin, and Won Hee Lim
Korean J. Math. 25 (2017), No. 4, pp. 469 481 https://doi.org/10.11568/kjm.2017.25.4.469 GENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS Jong Kyu Kim, Salahuddin, and Won Hee Lim Abstract. In this
More informationSome Properties of the Augmented Lagrangian in Cone Constrained Optimization
MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented
More informationLecture 13 Newton-type Methods A Newton Method for VIs. October 20, 2008
Lecture 13 Newton-type Methods A Newton Method for VIs October 20, 2008 Outline Quick recap of Newton methods for composite functions Josephy-Newton methods for VIs A special case: mixed complementarity
More informationAbsolute Value Programming
O. L. Mangasarian Absolute Value Programming Abstract. We investigate equations, inequalities and mathematical programs involving absolute values of variables such as the equation Ax + B x = b, where A
More informationyou expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form
Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)
More informationSemismooth Support Vector Machines
Semismooth Support Vector Machines Michael C. Ferris Todd S. Munson November 29, 2000 Abstract The linear support vector machine can be posed as a quadratic program in a variety of ways. In this paper,
More informationc 2002 Society for Industrial and Applied Mathematics
SIAM J. OPTIM. Vol. 13, No. 2, pp. 386 405 c 2002 Society for Industrial and Applied Mathematics SUPERLINEARLY CONVERGENT ALGORITHMS FOR SOLVING SINGULAR EQUATIONS AND SMOOTH REFORMULATIONS OF COMPLEMENTARITY
More informationFinite Convergence for Feasible Solution Sequence of Variational Inequality Problems
Mathematical and Computational Applications Article Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Wenling Zhao *, Ruyu Wang and Hongxiang Zhang School of Science,
More informationAn Inexact Newton Method for Nonlinear Constrained Optimization
An Inexact Newton Method for Nonlinear Constrained Optimization Frank E. Curtis Numerical Analysis Seminar, January 23, 2009 Outline Motivation and background Algorithm development and theoretical results
More informationFIXED POINT ITERATIONS
FIXED POINT ITERATIONS MARKUS GRASMAIR 1. Fixed Point Iteration for Non-linear Equations Our goal is the solution of an equation (1) F (x) = 0, where F : R n R n is a continuous vector valued mapping in
More informationPermutation transformations of tensors with an application
DOI 10.1186/s40064-016-3720-1 RESEARCH Open Access Permutation transformations of tensors with an application Yao Tang Li *, Zheng Bo Li, Qi Long Liu and Qiong Liu *Correspondence: liyaotang@ynu.edu.cn
More informationExact penalty decomposition method for zero-norm minimization based on MPEC formulation 1
Exact penalty decomposition method for zero-norm minimization based on MPEC formulation Shujun Bi, Xiaolan Liu and Shaohua Pan November, 2 (First revised July 5, 22) (Second revised March 2, 23) (Final
More informationLECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel
LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count
More informationBOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION
K Y BERNETIKA VOLUM E 46 ( 2010), NUMBER 4, P AGES 655 664 BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION Guang-Da Hu and Qiao Zhu This paper is concerned with bounds of eigenvalues of a complex
More informationarxiv: v1 [math.na] 1 Sep 2018
On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing
More informationSemismooth Newton methods for the cone spectrum of linear transformations relative to Lorentz cones
to appear in Linear and Nonlinear Analysis, 2014 Semismooth Newton methods for the cone spectrum of linear transformations relative to Lorentz cones Jein-Shan Chen 1 Department of Mathematics National
More informationNonlinear equations. Norms for R n. Convergence orders for iterative methods
Nonlinear equations Norms for R n Assume that X is a vector space. A norm is a mapping X R with x such that for all x, y X, α R x = = x = αx = α x x + y x + y We define the following norms on the vector
More informationA sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm
Volume 31, N. 1, pp. 205 218, 2012 Copyright 2012 SBMAC ISSN 0101-8205 / ISSN 1807-0302 (Online) www.scielo.br/cam A sensitivity result for quadratic semidefinite programs with an application to a sequential
More informationAbsolute value equations
Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West
More informationA proximal-like algorithm for a class of nonconvex programming
Pacific Journal of Optimization, vol. 4, pp. 319-333, 2008 A proximal-like algorithm for a class of nonconvex programming Jein-Shan Chen 1 Department of Mathematics National Taiwan Normal University Taipei,
More informationUniversity of Houston, Department of Mathematics Numerical Analysis, Fall 2005
3 Numerical Solution of Nonlinear Equations and Systems 3.1 Fixed point iteration Reamrk 3.1 Problem Given a function F : lr n lr n, compute x lr n such that ( ) F(x ) = 0. In this chapter, we consider
More informationA FAST CONVERGENT TWO-STEP ITERATIVE METHOD TO SOLVE THE ABSOLUTE VALUE EQUATION
U.P.B. Sci. Bull., Series A, Vol. 78, Iss. 1, 2016 ISSN 1223-7027 A FAST CONVERGENT TWO-STEP ITERATIVE METHOD TO SOLVE THE ABSOLUTE VALUE EQUATION Hamid Esmaeili 1, Mahdi Mirzapour 2, Ebrahim Mahmoodabadi
More informationAffine covariant Semi-smooth Newton in function space
Affine covariant Semi-smooth Newton in function space Anton Schiela March 14, 2018 These are lecture notes of my talks given for the Winter School Modern Methods in Nonsmooth Optimization that was held
More informationDownloaded 12/13/16 to Redistribution subject to SIAM license or copyright; see
SIAM J. OPTIM. Vol. 11, No. 4, pp. 962 973 c 2001 Society for Industrial and Applied Mathematics MONOTONICITY OF FIXED POINT AND NORMAL MAPPINGS ASSOCIATED WITH VARIATIONAL INEQUALITY AND ITS APPLICATION
More informationInverse Eigenvalue Problems and Their Associated Approximation Problems for Matrices with J-(Skew) Centrosymmetry
Inverse Eigenvalue Problems and Their Associated Approximation Problems for Matrices with -(Skew) Centrosymmetry Zhong-Yun Liu 1 You-Cai Duan 1 Yun-Feng Lai 1 Yu-Lin Zhang 1 School of Math., Changsha University
More informationWe describe the generalization of Hazan s algorithm for symmetric programming
ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,
More informationAn Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems
Int. Journal of Math. Analysis, Vol. 4, 1, no. 45, 11-8 An Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems Santhosh George Department of Mathematical
More informationA generalization of the Gauss-Seidel iteration method for solving absolute value equations
A generalization of the Gauss-Seidel iteration method for solving absolute value equations Vahid Edalatpour, Davod Hezari and Davod Khojasteh Salkuyeh Faculty of Mathematical Sciences, University of Guilan,
More informationUsing exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities
Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities Thiago A. de André Paulo J. S. Silva March 24, 2007 Abstract In this paper, we present
More information