A note on Legendre-Fenchel conjugate of the product of two positive-definite quadratic forms
Yong Xia

Abstract. The Legendre-Fenchel conjugate of the product of two positive-definite quadratic forms was posed as an open question in the field of nonlinear analysis and optimization by Hiriart-Urruty [Question 11 in SIAM Review 49 (2007)]. Under a convexity assumption on the function, it was answered by Zhao [SIAM J. Matrix Analysis & Applications, 31(4) (2010)]. In this note, we answer the open question without making the convexity assumption.

Keywords: Legendre-Fenchel conjugate, quadratic form

PACS 15A48 65F15 65K05 90C25

1 Introduction

The Legendre-Fenchel conjugate of a function $h(y)$ is defined as
$$h^*(x) = \sup_{y \in \mathbb{R}^n} \{ x^T y - h(y) \}.$$
For example, if $h(y) = \frac{1}{2} y^T A y$, where $A$ is positive definite, then $h^*(x) = \frac{1}{2} x^T A^{-1} x$. In the field of nonlinear analysis and optimization, J.B. Hiriart-Urruty [1] raised an open question: what is the expression or formula of the conjugate of the product function
$$f(y) = \frac{1}{4} (y^T A y)(y^T B y),$$

This research was supported by National Natural Science Foundation of China under grants and /A011702, and by the fund of State Key Laboratory of Software Development Environment under grant SKLSDE-2011ZX-15.

Y. Xia, State Key Laboratory of Software Development Environment, LMIB of the Ministry of Education, School of Mathematics and System Sciences, Beihang University, Beijing, P. R. China. E-mail: dearyxia@gmail.com
where $A, B$ are two $n \times n$ positive-definite matrices. Recently, Zhao [3] answered the open question under the assumption that $f(y)$ is convex. The main result is as follows.

Theorem 1 ([3]) Let the function $f(y)$ be convex. At any point $x \in \mathbb{R}^n$, the value of the conjugate $f^*(x)$ is finite, and $f^*(x) = 0$ if $x = 0$; otherwise, if $x \ne 0$,
$$f^*(x) = p(\alpha) := 3\alpha^{1/3} \left( \frac{x^T (A + \alpha B)^{-1} x}{4} \right)^{2/3},$$
where $\alpha$ is any real root of the univariate equation
$$g(\alpha) := \alpha - \frac{x^T (A + \alpha B)^{-1} A (A + \alpha B)^{-1} x}{x^T (A + \alpha B)^{-1} B (A + \alpha B)^{-1} x} = 0. \quad (1)$$

The following gives a sufficient condition under which the real root of equation (1) is unique.

Theorem 2 ([3]) If $\max\{\kappa(A), \kappa(B)\} \le 2.5$, where $\kappa(\cdot)$ denotes the condition number, then $f(y)$ is convex and, for any $x \ne 0$, there exists a unique real root of equation (1).

In this note, we answer the open question without the convexity assumption. Our proof is much shorter than that of Theorem 1 given in [3]. As a corollary, the sufficient condition (Theorem 2) is improved.

2 Main Result

In this section, we study the conjugate
$$f^*(x) = \sup_{y \in \mathbb{R}^n} \{ x^T y - f(y) \}$$
without assuming that $f(y)$ is convex.

Lemma 1 The conjugate $f^*(x)$ is finite. $f^*(x) = 0$ if $x = 0$; otherwise, if $x \ne 0$, $f^*(x) > 0$.

Proof. The finiteness of $f^*(x)$ follows from
$$\lim_{\|y\| \to \infty} x^T y - \frac{1}{4}(y^T A y)(y^T B y) \le \lim_{\|y\| \to \infty} \|x\| \|y\| - \frac{1}{4} \lambda_{\min}(A) \lambda_{\min}(B) \|y\|^4 = -\infty,$$
where $\lambda_{\min}(A) > 0$ and $\lambda_{\min}(B) > 0$ are the minimum eigenvalues of $A$ and $B$, respectively. If $x = 0$, it is trivial to verify $f^*(x) = 0$. Now we assume $x \ne 0$. Since
$$\max_{\epsilon} \{ x^T (\epsilon x) - f(\epsilon x) \} = \max_{\epsilon} \left\{ (x^T x)\epsilon - \frac{(x^T A x)(x^T B x)}{4} \epsilon^4 \right\} = \frac{3}{4} \frac{(x^T x)^{4/3}}{\left( (x^T A x)(x^T B x) \right)^{1/3}} > 0,$$
we have $f^*(x) > 0$. The proof is complete.
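Both claims in the proof of Lemma 1 are easy to spot-check numerically. The sketch below (Python with NumPy; the positive-definite matrices $A, B$ and the point $x$ are arbitrary test data, not taken from the paper) verifies the coercivity bound $f(y) \ge \frac{1}{4}\lambda_{\min}(A)\lambda_{\min}(B)\|y\|^4$ on random points, and the positivity of $x^T(\epsilon x) - f(\epsilon x)$ for a small $\epsilon > 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Arbitrary positive-definite test matrices: M^T M + I is always PD.
MA = rng.standard_normal((n, n)); A = MA.T @ MA + np.eye(n)
MB = rng.standard_normal((n, n)); B = MB.T @ MB + np.eye(n)
x = rng.standard_normal(n)

def f(y):
    return 0.25 * (y @ A @ y) * (y @ B @ y)

# Coercivity: f(y) >= (1/4) lambda_min(A) lambda_min(B) ||y||^4,
# which forces x^T y - f(y) -> -infinity, hence f*(x) is finite.
c = 0.25 * np.linalg.eigvalsh(A)[0] * np.linalg.eigvalsh(B)[0]
for _ in range(200):
    y = rng.standard_normal(n) * rng.uniform(0.1, 10.0)
    assert f(y) >= c * np.dot(y, y) ** 2 - 1e-9

# Positivity: along y = eps * x, the linear term dominates the quartic
# one for small eps > 0, so f*(x) > 0 whenever x != 0.
eps = 1e-3
assert x @ (eps * x) - f(eps * x) > 0
```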
Lemma 2 Suppose $x \ne 0$. $y$ is a stationary point of $x^T y - f(y)$ if and only if $x^T y - f(y) = p(\alpha)$, where $\alpha$ is a real root of the univariate equation (1).

Proof. Suppose $y$ is a stationary point of $x^T y - f(y)$; then
$$x = \nabla f(y) = \left( \frac{1}{2}(y^T A y) B + \frac{1}{2}(y^T B y) A \right) y.$$
The assumption $x \ne 0$ implies that $y \ne 0$. Therefore,
$$y = 2(\beta B + \gamma A)^{-1} x, \quad (2)$$
where $\beta = y^T A y > 0$ and $\gamma = y^T B y > 0$. Substituting (2) into $y^T A y$ and $y^T B y$, we have
$$\beta = 4 x^T (\beta B + \gamma A)^{-1} A (\beta B + \gamma A)^{-1} x, \quad (3)$$
$$\gamma = 4 x^T (\beta B + \gamma A)^{-1} B (\beta B + \gamma A)^{-1} x. \quad (4)$$
Dividing (3) by (4) yields
$$\frac{\beta}{\gamma} - \frac{x^T (A + \frac{\beta}{\gamma} B)^{-1} A (A + \frac{\beta}{\gamma} B)^{-1} x}{x^T (A + \frac{\beta}{\gamma} B)^{-1} B (A + \frac{\beta}{\gamma} B)^{-1} x} = 0.$$
That is, $\alpha := \beta/\gamma$ is a real root of (1). According to (3) and (4), we have
$$x^T (A + \alpha B)^{-1} x = x^T (A + \alpha B)^{-1} A (A + \alpha B)^{-1} x + \alpha \, x^T (A + \alpha B)^{-1} B (A + \alpha B)^{-1} x = \frac{1}{4}\beta\gamma^2 + \frac{1}{4}\alpha\gamma^3 = \frac{1}{2}\alpha\gamma^3,$$
which implies that
$$\gamma = \left( \frac{2\, x^T (A + \alpha B)^{-1} x}{\alpha} \right)^{1/3}. \quad (5)$$
Therefore, it follows from (2), (5) and the definitions of $\beta$, $\gamma$ and $\alpha$ that
$$x^T y - f(y) = \frac{2}{\gamma}\, x^T (A + \alpha B)^{-1} x - \frac{1}{4}\beta\gamma = 3\alpha^{1/3} \left( \frac{x^T (A + \alpha B)^{-1} x}{4} \right)^{2/3}.$$
On the other hand, let $\alpha$ be a real root of (1). Define $\gamma$ as in (5), let $\beta = \alpha\gamma$, and define $y$ as in (2). Then $y$ is a stationary point. The proof is complete.

Lemma 3 Suppose $x \ne 0$. $x^T y - f(y)$ has at most $2n - 1$ stationary points, which can be computed to any given precision in polynomial time.
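The construction in the proof of Lemma 2 can be exercised numerically. The sketch below (Python with NumPy; $A$, $B$ and $x$ are arbitrary test data) locates one real root $\alpha$ of equation (1) by bisection — a shortcut of ours, not the paper's root-enumeration procedure, but one real root is all Lemma 2 needs — then rebuilds $\gamma$, $\beta$ and $y$ through (5) and (2), and confirms that $y$ is stationary with objective value $p(\alpha)$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# Arbitrary positive-definite test data.
MA = rng.standard_normal((n, n)); A = MA.T @ MA + np.eye(n)
MB = rng.standard_normal((n, n)); B = MB.T @ MB + np.eye(n)
x = rng.standard_normal(n)

def g(alpha):
    w = np.linalg.solve(A + alpha * B, x)       # w = (A + alpha B)^{-1} x
    return alpha - (w @ A @ w) / (w @ B @ w)    # g(alpha) of Eq. (1)

# g(0) < 0 and g(alpha) -> +infinity, so bisection locates a real root.
lo, hi = 0.0, 1.0
while g(hi) < 0:
    hi *= 2
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
alpha = 0.5 * (lo + hi)

# Reconstruct the stationary point y from alpha via (5) and (2).
X = x @ np.linalg.solve(A + alpha * B, x)
gamma = (2.0 * X / alpha) ** (1.0 / 3.0)
beta = alpha * gamma
y = 2.0 * np.linalg.solve(beta * B + gamma * A, x)

grad_f = 0.5 * (y @ A @ y) * (B @ y) + 0.5 * (y @ B @ y) * (A @ y)
p_alpha = 3.0 * alpha ** (1.0 / 3.0) * (X / 4.0) ** (2.0 / 3.0)

assert np.allclose(grad_f, x)   # stationarity: gradient of f at y equals x
assert abs((x @ y - 0.25 * (y @ A @ y) * (y @ B @ y)) - p_alpha) < 1e-8
```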
Proof. According to Lemma 2, the number of stationary points of $x^T y - f(y)$ is equal to the number of real roots of equation (1). Since $A, B$ are positive definite, there exists a nonsingular matrix $P$ such that both $P^T A P$ and $P^T B P$ are diagonal matrices. Assume $P^T A P = \mathrm{Diag}(a_1, \ldots, a_n)$ and $P^T B P = \mathrm{Diag}(b_1, \ldots, b_n)$. Then $a_i > 0$ and $b_i > 0$ for $i = 1, \ldots, n$. Let $z = P^T x$. Since $P$ is nonsingular and $x \ne 0$, we have $z \ne 0$. It is trivial to verify that each root of equation (1) satisfies
$$\left( \prod_{i=1}^n (a_i + \alpha b_i)^2 \right) \left( \alpha \sum_{i=1}^n \frac{b_i z_i^2}{(a_i + \alpha b_i)^2} - \sum_{i=1}^n \frac{a_i z_i^2}{(a_i + \alpha b_i)^2} \right) = 0, \quad (6)$$
which is a univariate polynomial equation of degree $2n - 1$. Therefore, equation (1) has at most $2n - 1$ real roots, which can be computed to any given precision in polynomial time [2]. The proof is complete.

Now we present our main result.

Theorem 3 At any point $x \in \mathbb{R}^n$, the value of the conjugate $f^*(x)$ is finite, and $f^*(x) = 0$ if $x = 0$; otherwise, if $x \ne 0$, $f^*(x) = p(\alpha^*)$, where $\alpha^*$ is the maximizer of $p(\alpha)$ over the real roots of (1) and can be found in polynomial time.

Proof. Since the maximizer of $x^T y - f(y)$ must be a stationary point, it follows from Lemmas 1 and 2 that $f^*(x) = p(\alpha^*)$, where $\alpha^* = \arg\max_{g(\alpha) = 0} p(\alpha)$. It is not difficult to verify that the equations $dp(\alpha)/d\alpha = 0$ and $g(\alpha) = 0$ are equivalent. Therefore, $\alpha^* = \arg\max p(\alpha)$. Moreover, $\alpha^*$ can be found by enumerating all the roots of $g(\alpha) = 0$, which can be done in polynomial time according to the proof of Lemma 3.

As a corollary, we show that Theorem 2 is improved.

Theorem 4 ([3]) If $f(y)$ is convex, then for any $x \ne 0$, there exists a unique real root of equation (1).

Proof. If $f(y)$ is convex, a stationary point of $x^T y - f(y)$ is also a maximizer. It follows from Lemma 2 and Theorem 3 that any stationary point of $p(\alpha)$ is also a global maximizer. According to Lemma 3 and the equivalence between $dp(\alpha)/d\alpha = 0$ and $g(\alpha) = 0$, $p(\alpha)$ has $k \ (\le 2n - 1)$ stationary points, denoted by $\alpha_1 < \cdots < \alpha_k$. It is sufficient to show $k = 1$. Suppose this is not true, i.e., $k \ge 2$.
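The equivalence between $dp(\alpha)/d\alpha = 0$ and $g(\alpha) = 0$ is stated without proof; the following short computation (ours, filling in the step left to the reader) verifies it. Abbreviate $X(\alpha) = x^T(A+\alpha B)^{-1}x$, $N_A = x^T(A+\alpha B)^{-1}A(A+\alpha B)^{-1}x$ and $N_B = x^T(A+\alpha B)^{-1}B(A+\alpha B)^{-1}x$, so that $X = N_A + \alpha N_B$ and $X'(\alpha) = -N_B$. Taking the logarithmic derivative of $p(\alpha) = 3\alpha^{1/3}(X(\alpha)/4)^{2/3}$:

```latex
\[
\frac{p'(\alpha)}{p(\alpha)}
  = \frac{1}{3\alpha} + \frac{2}{3}\,\frac{X'(\alpha)}{X(\alpha)}
  = \frac{X(\alpha) + 2\alpha X'(\alpha)}{3\alpha X(\alpha)}
  = \frac{N_A - \alpha N_B}{3\alpha X(\alpha)}.
\]
```

Hence $p'(\alpha) = 0$ exactly when $\alpha = N_A/N_B$, i.e. exactly when $g(\alpha) = 0$.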
Since $p(\alpha_1) = p(\alpha_2)$ are the maximum value of $p(\alpha)$, for any $\alpha \in (\alpha_1, \alpha_2)$ we have $p(\alpha) < p(\alpha_1)$. Then there is a local minimizer in $(\alpha_1, \alpha_2)$, which is a stationary point and cannot be a maximizer. Thus we obtain a contradiction. The proof is complete.
3 Conclusion

The Legendre-Fenchel conjugate of the product of two positive-definite quadratic forms was posed as an open question by Hiriart-Urruty [1]. Under a convexity assumption on the function, it was answered by Zhao [3]. In this note, we give an answer to the open question without making the convexity assumption. Our proof is much shorter than that of [3]. Finally, we raise a new open question: what is the expression of the conjugate of
$$f(y) = \frac{1}{4}(y^T A y)(y^T B y) + \frac{1}{2} y^T C y,$$
where $A$, $B$, $C$ are positive definite matrices?

References

1. J.B. Hiriart-Urruty, Potpourri of conjectures and open questions in nonlinear analysis and optimization, SIAM Review 49 (2007).
2. M. Sagraloff, When Newton meets Descartes: A simple and fast algorithm to isolate the real roots of a polynomial, in Proceedings of the 37th International Symposium on Symbolic and Algebraic Computation, ACM, New York, NY, USA.
3. Y.B. Zhao, The Legendre-Fenchel conjugate of the product of two positive-definite quadratic forms, SIAM J. Matrix Analysis & Applications, 31(4) (2010).
to appear in Linear and Nonlinear Analysis, 2014 Semismooth Newton methods for the cone spectrum of linear transformations relative to Lorentz cones Jein-Shan Chen 1 Department of Mathematics National
More informationState space transformations
Capitolo 0. INTRODUCTION. State space transformations Let us consider the following linear time-invariant system: { ẋ(t) = A(t)+Bu(t) y(t) = C(t)+Du(t) () A state space transformation can be obtained using
More informationSubdifferential representation of convex functions: refinements and applications
Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential
More informationk is a product of elementary matrices.
Mathematics, Spring Lecture (Wilson) Final Eam May, ANSWERS Problem (5 points) (a) There are three kinds of elementary row operations and associated elementary matrices. Describe what each kind of operation
More informationImproved Newton s method with exact line searches to solve quadratic matrix equation
Journal of Computational and Applied Mathematics 222 (2008) 645 654 wwwelseviercom/locate/cam Improved Newton s method with exact line searches to solve quadratic matrix equation Jian-hui Long, Xi-yan
More informationRobust-to-Dynamics Linear Programming
Robust-to-Dynamics Linear Programg Amir Ali Ahmad and Oktay Günlük Abstract We consider a class of robust optimization problems that we call robust-to-dynamics optimization (RDO) The input to an RDO problem
More informationIyad T. Abu-Jeib and Thomas S. Shores
NEW ZEALAND JOURNAL OF MATHEMATICS Volume 3 (003, 9 ON PROPERTIES OF MATRIX I ( OF SINC METHODS Iyad T. Abu-Jeib and Thomas S. Shores (Received February 00 Abstract. In this paper, we study the determinant
More informationSection 3.3 Graphs of Polynomial Functions
3.3 Graphs of Polynomial Functions 179 Section 3.3 Graphs of Polynomial Functions In the previous section we eplored the short run behavior of quadratics, a special case of polynomials. In this section
More informationNow, if you see that x and y are present in both equations, you may write:
Matrices: Suppose you have two simultaneous equations: y y 3 () Now, if you see that and y are present in both equations, you may write: y 3 () You should be able to see where the numbers have come from.
More information