A NUMERICAL METHOD TO SOLVE A QUADRATIC CONSTRAINED MAXIMIZATION

ALIREZA ESNA ASHARI (INRIA, Rocquencourt BP 105, 78153 Le Chesnay Cedex, France, Alireza.esna ashari@inria.fr), RAMINE NIKOUKHAH (INRIA, Rocquencourt BP 105, 78153 Le Chesnay Cedex, France, Ramine.nikoukhah@inria.fr), AND STEPHEN L. CAMPBELL (Department of Mathematics and Center for Research in Scientific Computation, North Carolina State University, Raleigh, NC 27695-8205, USA, slc@math.ncsu.edu; research supported in part by NSF Grant DMS-0907832)

Abstract. The problem of maximizing a quadratic function subject to an ellipsoidal constraint is considered. The algorithm produces an approximation of the optimal solution in a finite number of iterations. In particular, the method can be used to solve ill-conditioned problems in which the solution consists of two parts lying in two orthogonal subspaces. Without restrictive assumptions, the solution generated by the method satisfies the first and second order necessary conditions for a maximizer of the objective function. Numerical experiments demonstrate that the method is successful.

Key words. constrained optimization, quadratic optimization, Lagrangian method

1. Introduction. In many identification and fault detection problems, uncertainties are represented by unknown parameters belonging to predefined ellipsoids. There are basically two ways of treating initial state uncertainty and system disturbances. One way is stochastic modeling: the initial condition and the inputs are considered to be random variables and stochastic processes. This type of approach does not directly define a set of behaviors for the system, but associates a probability with each trajectory. An alternative approach is to consider the input noises and initial states unknown except for the fact that they belong to given bounded sets [1]. The information about the system states in normal or faulty modes is then described by a family of sets, and for the purpose of fault detection it is enough to test the membership of the measurements in these sets [6]. Model selection methods based on this approach are robust in the sense that detection of the fault is guaranteed for the entire predefined set of uncertainties. In these approaches we may face the problem of finding the worst-case uncertainty for a given input. This application is a major motivation for studying the following class of constrained quadratic optimization problems:

    J = max_µ ‖a + Bµ‖²,  s.t. ‖µ‖² ≤ δ,    (1.1)

where a ∈ Rⁿ and B ∈ Rⁿ×ᵐ are a given vector and matrix of proper dimensions, µ ∈ Rᵐ is the unknown vector to be calculated, δ > 0 is a given bound, and ‖·‖ is the Euclidean norm. This type of maximization usually appears as the inner maximization of a min-max problem arising in signal design for model selection. Studying the properties of this problem helps us understand some unusual attributes of those problems [3].

In the literature, similar quadratic problems have been considered and solved approximately. As an example, consider the following minimization:

    J = min_µ gᵀµ + ½µᵀCµ,  s.t. ‖µ‖² ≤ δ.    (1.2)

Problem (1.2) has attracted much attention, as it is used in computing trust-region steps (see [5, 2]). Several algorithms have been proposed to solve (1.2) based on the assumption that it has a solution on the boundary ‖µ‖² = δ. The same situation occurs in (1.1). Note that (1.2) is more general than (1.1).
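The boundary property just mentioned is easy to spot-check numerically. The following is a minimal sketch (Python with NumPy is assumed; the data a and B are randomly generated stand-ins, not values from the paper): since the objective of (1.1) is convex in µ, sampled points on the sphere ‖µ‖ = 1 dominate sampled interior points.

    import numpy as np

    # Illustration only: random stand-ins for the data of (1.1), with delta = 1.
    rng = np.random.default_rng(0)
    a = rng.standard_normal(3)
    B = rng.standard_normal((3, 2))

    def J(mu_rows):
        # Objective of (1.1), ||a + B mu||^2, evaluated for each row of mu_rows.
        return np.linalg.norm(a + mu_rows @ B.T, axis=1) ** 2

    # The objective is convex in mu, so its maximum over the unit ball is
    # attained on the boundary ||mu|| = 1: boundary samples dominate
    # interior samples.
    raw = rng.standard_normal((100_000, 2))
    boundary = raw / np.linalg.norm(raw, axis=1, keepdims=True)
    interior = boundary * rng.uniform(0.0, 1.0, size=(100_000, 1))
    print(J(boundary).max() >= J(interior).max())   # True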

Both of the problems (1.1) and (1.2) look simple; however, difficulties arise in some special cases. In (1.1) the solution lies on the boundary of the set of uncertainties, ‖µ‖² = δ. Assuming δ = 1, the hard case is when we cannot find any λ ≥ 0 such that λI − BᵀB is positive definite and the solution satisfies

    ‖(λI − BᵀB)⁻¹Bᵀa‖² = 1,    (1.3)

where I represents the identity matrix. The problem then turns out to be similar to the hard case of problem (1.2). The solution of (1.2) is considered in [7, 5]. In [5] it is explained how a similar hard case leads to numerical difficulties, and an approximate, nearly optimal solution is provided (see also [4, 7, 2]).

In this paper, knowing that the hard case may occur, we develop an algorithm for problem (1.1) that, unlike the approach of [5], gives an exact solution to (1.1). Exploiting the structure of the problem, the proposed algorithm uses a bisection method and is also capable of solving the hard cases. Based on the complete solution to (1.1), we propose a recursive algorithm to find the solution in the hard case. We also consider the solution of a dynamic counterpart of the optimization, which can be solved by a dynamic programming approach, designing a filter that calculates the bound on the uncertainties from the forward solution of a Riccati equation.

2. Main results.

2.1. Static Maximization. In order to propose an efficient approach to solve the optimization problem, we need to understand its structure. Problem (1.2) has already been investigated in [7]. In this paper we study the structure of (1.1) in a different way, one more compatible with the algorithm given in Section 3 and the bisection method used there. For the sake of simplicity, we assume δ = 1 from now on; in the other cases the solution can be attained by scaling the answer. As a first step, the following theorem provides information about the solution of a more general optimization problem. Problem (1.1) is a special case of this problem; however, we need the general solution for the algorithm proposed in the next section.

THEOREM 2.1. Consider the following optimization problem:

    J = max_µ ‖a + Bµ‖²,  s.t. ‖Dµ‖² ≤ 1,    (2.1)

where B and D are matrices, not necessarily square, and a is a vector of proper dimensions. D is assumed to have full column rank. Let λ̄ denote the maximum eigenvalue of the matrix pair {BᵀB, DᵀD}, that is, the largest λ for which λDᵀD − BᵀB is singular, and denote the optimal value of µ by µ*. Also, consider the condition

    LeftKernel(λ̄DᵀD − BᵀB) ⊆ LeftKernel(Bᵀa).    (2.2)

There are two possibilities for the solution of J:

1. If (2.2) is not satisfied, or if (2.2) is satisfied but

    ‖(λ̄DᵀD − BᵀB)†Bᵀa‖ > 1,

then

    µ* = (λ*DᵀD − BᵀB)⁻¹Bᵀa,    (2.3)

where λ* is the value of λ > λ̄ for which ‖Dµ*‖² = 1 and (·)† represents the pseudoinverse.

2. If not, then

    µ* = (λ̄DᵀD − BᵀB)†Bᵀa + Ψζ,    (2.4)

where Ψ is a full column rank matrix whose columns span RightKernel(λ̄DᵀD − BᵀB) and ζ is calculated such that

    ‖ζ‖² = 1 − ‖(λ̄DᵀD − BᵀB)†Bᵀa‖².    (2.5)
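As a concrete rendering of the objects in Theorem 2.1, the following sketch computes λ̄, a kernel basis Ψ, the pseudoinverse term, and the test of condition (2.2). Python with NumPy and SciPy is assumed; the helper name theorem21_quantities and the random test data are illustrative, not from the paper.

    import numpy as np
    from scipy.linalg import eigh, null_space

    def theorem21_quantities(a, B, D):
        # Quantities appearing in Theorem 2.1 (names follow the text).
        BtB, DtD = B.T @ B, D.T @ D
        # lambda_bar: largest eigenvalue of the pair {B^T B, D^T D}, i.e. the
        # largest lambda making lambda D^T D - B^T B singular. D^T D is
        # positive definite because D has full column rank.
        lam_bar = eigh(BtB, DtD, eigvals_only=True)[-1]
        M = lam_bar * DtD - BtB               # singular by construction
        Psi = null_space(M)                   # basis of RightKernel(M)
        pinv_term = np.linalg.pinv(M) @ (B.T @ a)
        # Condition (2.2): M is symmetric, so its left kernel equals its
        # right kernel; (2.2) holds iff every kernel vector annihilates B^T a.
        cond_22 = bool(np.allclose(Psi.T @ (B.T @ a), 0.0))
        # Case 1 of the theorem applies when (2.2) fails, or when the
        # pseudoinverse candidate already violates the constraint.
        case_1 = (not cond_22) or np.linalg.norm(pinv_term) > 1.0
        return lam_bar, Psi, pinv_term, case_1

    # Usage with arbitrary data; D = I recovers problem (1.1).
    rng = np.random.default_rng(0)
    a, B = rng.standard_normal(4), rng.standard_normal((4, 3))
    print(theorem21_quantities(a, B, np.eye(3)))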

Proof. First of all, note that the solution occurs on the boundary, where ‖Dµ‖ = 1. Using the Lagrangian method, the Lagrangian is

    J = ‖a + Bµ‖² − λ‖Dµ‖².    (2.6)

Setting the derivative of J with respect to µ to zero gives

    (λDᵀD − BᵀB)µ = Bᵀa.    (2.7)

In order to have a maximum, it is necessary that

    λ ≥ λ̄.    (2.8)

Assuming that λ > λ̄, we may write

    µ = (λDᵀD − BᵀB)⁻¹Bᵀa.    (2.9)

We define

    f(λ) = ‖Dµ‖².    (2.10)

Here, f(λ) is a monotonically decreasing function of λ on (λ̄, ∞). It is easy to show, by calculating the derivative of f(λ) with respect to λ, that

    df(λ)/dλ = −2aᵀB(λDᵀD − BᵀB)⁻¹DᵀD(λDᵀD − BᵀB)⁻¹DᵀD(λDᵀD − BᵀB)⁻¹Bᵀa,    (2.11)

which is negative over the range of λ in (2.8). On the other hand,

    lim_{λ→∞} f(λ) = 0,    (2.12)

and, when (2.2) is not satisfied,

    lim_{λ→λ̄⁺} f(λ) = ∞.    (2.13)

So, for some value of λ, f(λ) equals one. If (2.2) is satisfied, the limit at λ̄ is finite instead:

    lim_{λ→λ̄⁺} f(λ) = ‖(λ̄DᵀD − BᵀB)†Bᵀa‖².    (2.14)

Here, if f(λ̄) > 1, then f(λ) equals 1 for a greater value of λ and µ* is calculated from (2.3). If f(λ̄) < 1, we can make ‖Dµ‖² equal to one as in (2.4).
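The monotone decrease of f(λ) established by (2.11) can be checked numerically. A small sketch (Python with NumPy assumed; random illustrative data with D = I):

    import numpy as np

    def f(lam, a, B, D):
        # f(lambda) = ||D mu||^2 with mu = (lambda D^T D - B^T B)^{-1} B^T a.
        mu = np.linalg.solve(lam * (D.T @ D) - B.T @ B, B.T @ a)
        return np.linalg.norm(D @ mu) ** 2

    # Sample f on (lambda_bar, infinity) and confirm that it decreases.
    rng = np.random.default_rng(1)
    a, B, D = rng.standard_normal(4), rng.standard_normal((4, 3)), np.eye(3)
    lam_bar = np.linalg.eigvalsh(B.T @ B)[-1]    # D = I: ordinary eigenvalues
    lams = lam_bar + np.geomspace(1e-3, 1e3, 50)
    vals = [f(lam, a, B, D) for lam in lams]
    print(all(x > y for x, y in zip(vals, vals[1:])))   # True: decreasing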

COROLLARY 2.2. f(λ) = ‖Dµ‖² is a monotonically decreasing function of λ on (λ̄, ∞).

The result of Corollary 2.2 is what allows us to propose the bisection algorithm in the next section. To simplify our notation we define

    Π = (λ̄DᵀD − BᵀB)†Bᵀa.    (2.15)

Now we are ready to give a solution for the second case of Theorem 2.1, where the vector ζ must be determined. Substituting µ from (2.4) into the original problem yields the new optimization problem

    C = max_ζ ‖a + BΠ + BΨζ‖²,  s.t. ‖DΠ + DΨζ‖² = 1.    (2.16)

Again we use the Lagrangian method, defining J₂ as

    J₂ = ‖a + BΠ + BΨζ‖² − ρ‖DΠ + DΨζ‖²,    (2.17)

where ρ is a Lagrange multiplier. The first derivative of (2.17) is set equal to zero to find the stationary points of the optimization problem,

    ∂J₂/∂ζ = 0,    (2.18)

that is,

    (ΨᵀBᵀBΨ − ρΨᵀDᵀDΨ)ζ + Ψᵀ(Bᵀa + BᵀBΠ − ρDᵀDΠ) = 0.    (2.19)

In order for the solution to be a maximum of the problem, we need to check the second derivative of J₂:

    ∂²J₂/∂ζ² ⪯ 0,    (2.20)

that is,

    Ψᵀ(BᵀB − ρDᵀD)Ψ ⪯ 0.    (2.21)

A sufficient condition for satisfying (2.21) is

    λ̄ ≤ ρ.    (2.22)

If ρ ≠ λ̄, the solution ζ is obtained as follows:

    ζ = (ρΨᵀDᵀDΨ − ΨᵀBᵀBΨ)⁻¹Ψᵀ(Bᵀa + BᵀBΠ − ρDᵀDΠ).    (2.23)

However, considering that the solution of (2.1) occurs on ‖Dµ‖ = 1 and that C = J, it is concluded that

    ρ = λ̄.    (2.24)

But (2.19) is already satisfied for ρ = λ̄ by any selection of ζ. Thus we have freedom in the selection of this vector.
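The degeneracy just described, namely that (2.19) imposes no restriction on ζ when ρ = λ̄ and condition (2.2) holds, can be verified numerically. Below is a sketch on a contrived hard-case instance with D = I (Python with NumPy and SciPy assumed; the data are illustrative, not from the paper):

    import numpy as np
    from scipy.linalg import null_space

    # A contrived hard-case instance: the top eigenvalue of B^T B is
    # repeated, and B^T a is orthogonal to the corresponding kernel of
    # lambda_bar I - B^T B, so condition (2.2) holds.
    B = np.diag([2.0, 2.0, 1.0])
    a = np.array([0.0, 0.0, 3.0])                # B^T a = (0, 0, 3)
    lam_bar = np.linalg.eigvalsh(B.T @ B)[-1]    # = 4
    M = lam_bar * np.eye(3) - B.T @ B
    Psi = null_space(M)                          # kernel basis (2 columns)
    Pi = np.linalg.pinv(M) @ (B.T @ a)           # Pi of (2.15), D = I

    # Residual of the stationarity condition (2.19) at rho = lambda_bar,
    # evaluated for an arbitrary zeta: it vanishes, so zeta is free.
    rng = np.random.default_rng(0)
    zeta = rng.standard_normal(Psi.shape[1])
    resid = (Psi.T @ (B.T @ B) @ Psi - lam_bar * Psi.T @ Psi) @ zeta \
            + Psi.T @ (B.T @ a + B.T @ B @ Pi - lam_bar * Pi)
    print(np.allclose(resid, 0.0))               # True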

3. An algorithm to find the solution. There is a Lagrange multiplier, λ, which must be determined to solve the problem. It is found using a bisection method over λ ≥ λ̄. Here, the algorithm to find the multiplier and the final solution of (2.1) is summarized; a sketch implementing it is given after the list. To solve problem (1.1), we choose D as the identity matrix. In the hard case, however, the solution comes from a new optimization problem of the general form (2.1) but of smaller dimension, which leads to a recursive algorithm. Note that λDᵀD − BᵀB may have more than one zero eigenvalue at λ = λ̄.

1. Select a large enough upper bound for λ, and call it λ_max. The lower limit is set to λ_min = λ̄. These two variables may change during the algorithm.
2. Select λ as λ = (λ_max + λ_min)/2. If λ − λ̄ < ɛ for a small positive value of ɛ, solve the problem for λ = λ̄.
3. If λ = λ̄, go to step 5. Otherwise calculate µ = (λDᵀD − BᵀB)⁻¹Bᵀa for the selected λ.
4. Let k = ‖Dµ‖². The algorithm stops when k = 1; in this case, µ = (λDᵀD − BᵀB)⁻¹Bᵀa is the solution. Since f is decreasing, if k > 1, set λ_min = λ, and if k < 1, set λ_max = λ; then go to step 2.
5. If λ = λ̄, calculate Π = (λ̄DᵀD − BᵀB)†Bᵀa and Ψ, a basis of the right kernel of λ̄DᵀD − BᵀB.
6. If ζ is a scalar, solve the second-order equation ‖DΨζ‖² = 1 − ‖DΠ‖² to find ζ, which gives two solutions. In this case the algorithm terminates at this point. Otherwise go to the next step.
7. If ζ is a vector, then there are multiple solutions. In order to determine one, introduce a new quadratic cost function by defining ā and B̄ to get the new problem

    J̄ = max_ζ ‖ā + B̄ζ‖²,  s.t. ‖DΨζ‖² = 1 − ‖DΠ‖².    (3.1)

ā and B̄ are free to meet any other design considerations.
8. Call this algorithm from the first step to solve (3.1). Note that (3.1) is a problem of type (2.1), but the dimension of ζ is smaller than the dimension of µ.
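The following is a compact sketch of this procedure (Python with NumPy and SciPy assumed; solve_qcm is a hypothetical name, not from the paper). It implements steps 1 through 6 with a fixed iteration budget; the recursive tie-breaking of steps 7 and 8 is omitted, and in the hard case ζ is simply taken along one kernel direction.

    import numpy as np
    from scipy.linalg import eigh, null_space

    def solve_qcm(a, B, D, tol=1e-9, lam_max=1e6):
        # Maximize ||a + B mu||^2 subject to ||D mu||^2 <= 1, as in (2.1).
        BtB, DtD = B.T @ B, D.T @ D
        lam_bar = eigh(BtB, DtD, eigvals_only=True)[-1]   # step 1: lower limit
        lam_lo, lam_hi = lam_bar, lam_max

        def k_of(lam):                                    # k = ||D mu||^2
            mu = np.linalg.solve(lam * DtD - BtB, B.T @ a)
            return np.linalg.norm(D @ mu) ** 2, mu

        for _ in range(200):                              # steps 2-4: bisection
            lam = 0.5 * (lam_lo + lam_hi)
            k, mu = k_of(lam)
            if abs(k - 1.0) < tol:
                return mu                                 # easy case solved
            if k > 1.0:
                lam_lo = lam          # k decreases in lambda: move right
            else:
                lam_hi = lam
        # Steps 5-6: hard case, the iterates have collapsed onto lambda_bar.
        M = lam_bar * DtD - BtB
        Pi = np.linalg.pinv(M) @ (B.T @ a)                # Pi of (2.15)
        psi = null_space(M)[:, 0]                         # one kernel direction
        # Step 6: solve ||D (Pi + t psi)||^2 = 1 for the scalar t.
        p, q = D @ psi, D @ Pi
        t = (-(p @ q) + np.sqrt((p @ q) ** 2 - (p @ p) * (q @ q - 1.0))) / (p @ p)
        return Pi + t * psi

    # Problem (1.1) corresponds to D = I.
    rng = np.random.default_rng(0)
    a, B = rng.standard_normal(4), rng.standard_normal((4, 3))
    mu = solve_qcm(a, B, np.eye(3))
    print(np.linalg.norm(mu))      # ~1.0: the solution lies on the boundary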

4. Examples. Consider the following system:

    a = [0.0068; 4.3380; 0.9731],

    B = [ 3.4010  -1.5781  -0.0812
         -0.2067  -0.4676   0.6879
          1.2059   2.7120  -2.1410 ].

The set of eigenvalues of the matrix BᵀB is {0.0637, 14.0637, 14.0637}, so λ̄ = 14.0637. This is an example of the hard case: the λ determined by the algorithm converges to λ̄, and we need to solve the problem for λ = λ̄. We calculate

    λ̄I − BᵀB = [ 1  2  3
                 2  4  6
                 3  6  9 ],   Bᵀa = [0.3; 0.6; 0.9].

In this case ζ is a vector of dimension 2, and

    Ψ = [ -0.8729  -0.4082
          -0.2182   0.8165
           0.4364  -0.4082 ],   Π = [0.0214; 0.0429; 0.0643].

A choice for ζ and µ is

    ζ = [0.1598; -0.9839],   µ = [0.2836; -0.7954; 0.5357].

The optimal value of the cost function is J = 5.8240 for any selection of ζ.

5. Conclusions. The problem of maximizing a quadratic function subject to an ellipsoidal constraint was considered, and a method was given to solve it. In particular, the method can be used to solve ill-conditioned problems in which the solution consists of two parts lying in two orthogonal subspaces. Numerical experiments demonstrate that the method is successful.

REFERENCES

[1] D. BERTSEKAS AND I. RHODES, Recursive state estimation for a set-membership description of uncertainty, IEEE Transactions on Automatic Control, 16 (1971), pp. 117-128.
[2] T. F. COLEMAN AND C. HEMPEL, Computing a trust region step for a penalty function, SIAM J. Sci. Statist. Comput., 11 (1990), pp. 180-201.
[3] A. ESNA ASHARI, R. NIKOUKHAH, AND S. L. CAMPBELL, Active robust fault detection of closed-loop systems: general cost case, 7th IFAC Symposium on Fault Detection, Supervision and Safety of Technical Processes, 2009.
[4] D. M. GAY, Computing optimal locally constrained steps, SIAM J. Sci. Statist. Comput., 2 (1981), pp. 186-197.
[5] J. J. MORÉ AND D. C. SORENSEN, Computing a trust region step, SIAM J. Sci. Statist. Comput., 4 (1983), pp. 553-572.
[6] R. NIKOUKHAH, Guaranteed active failure detection and isolation for linear dynamical systems, Automatica, 34 (1998), pp. 1345-1358.
[7] D. C. SORENSEN, Newton's method with a model trust region modification, SIAM J. Numer. Anal., 19 (1982), pp. 409-426.