Robust $\ell_1$ and $\ell_\infty$ Solutions of Linear Inequalities

Available at http://pvamu.edu/aam
Applications and Applied Mathematics: An International Journal (AAM)
ISSN: 1932-9466, Vol. 6, Issue 2 (December 2011), pp. 522-528

Robust $\ell_1$ and $\ell_\infty$ Solutions of Linear Inequalities

Department of Applied Mathematics, Faculty of Mathematical Sciences, University of Guilan, salahim@guilan.ac.ir

Received: March 28, 2011; Accepted: November 15, 2011

Abstract

Infeasible systems of linear inequalities appear in many disciplines. In this paper we investigate the $\ell_1$ and $\ell_\infty$ solutions of such systems in the presence of uncertainties in the problem data, and we give equivalent linear programming formulations of the robust problems. Finally, several illustrative numerical examples, solved with the CVX software package, show the importance of the robust model when the problem data are uncertain.

Keywords: Linear inequalities, Infeasibility, Uncertainty, Robust model, Linear programs

MSC 2010 No.: 15A39, 90C05, 90C25

1. Introduction

Robust optimization is an effective approach to many real-world optimization problems (Ben-Tal, 2001; Soyster, 1973). In this paper we consider linear inequalities of the form

$$Ax \le b, \qquad (1)$$

where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Solving such systems is a fundamental problem that arises in several applications, notably in medical image reconstruction from projections and in inverse problems in radiation therapy (Censor, 1982, 1988, 1997, 2008; Herman, 1975, 1978). In practice, the feasible region of (1) is often empty due to measurement errors in the data vector $b$. In such a case one seeks the smallest correction of $b$ that recovers feasibility (Dax, 2006; Ketabchi, 2009; Salahi, 2010). More precisely, the following two problems are solved in the $\ell_1$ and $\ell_\infty$ norms:

$$\min_x \ \|(b - Ax)_+\|_1, \qquad (2)$$

$$\min_x \ \|(b - Ax)_+\|_\infty, \qquad (3)$$

where $(b_i - a_i^T x)_+ = \max\{b_i - a_i^T x,\, 0\}$. Both (2) and (3) are equivalent to LPs that can be solved efficiently by simplex or interior point methods (Grant, 2010, 2008). Our goal in this paper is to study (2) and (3) when there are uncertainties in both $A$ and $b$; in that case, solving (2) and (3) as they stand no longer yields a reliable solution. We give new LP formulations of both problems in the presence of uncertainties. Finally, several randomly generated test problems, solved with the CVX software package (Grant, 2010, 2008), demonstrate the importance of the robust approach.
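For concreteness, here is a minimal CVX sketch of the nominal problems (2) and (3). This is our own illustration, not code from the paper; the variable names are ours, and the data A (m-by-n), b (m-by-1), and the dimension n are assumed to be in scope.

% Nominal l1 problem (2): minimize ||(b - A*x)_+||_1.
cvx_begin quiet
    variable x1(n)
    minimize( sum( max(b - A*x1, 0) ) )    % sum of positive parts = l1 norm here
cvx_end

% Nominal l_infinity problem (3): minimize ||(b - A*x)_+||_inf.
cvx_begin quiet
    variable xinf(n)
    minimize( max( max(b - A*xinf, 0) ) )  % largest positive part = l_inf norm here
cvx_end

Both objectives are built from elementwise maxima of affine expressions, so they satisfy CVX's disciplined convex programming rules and are converted to LPs internally.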

2. The $\ell_1$ Case

In this section we consider the robust solution of linear inequalities in the $\ell_1$ norm sense, i.e.,

$$\min_x \ \max_{\|[\Delta A\ \ \Delta b]\|_1 \le \rho} \ \|((b + \Delta b) - (A + \Delta A)x)_+\|_1, \qquad (4)$$

where $\Delta A$ and $\Delta b$ are the uncertainties in the matrix $A$ and the vector $b$, respectively, $\|\cdot\|_1$ denotes the induced matrix 1-norm, and $\rho$ is a given positive parameter. We may write (4) as

$$\min_x \ \max_{\|[\Delta A\ \ \Delta b]\|_1 \le \rho} \ \|(b - Ax + \Delta b - \Delta A x)_+\|_1, \qquad (5)$$

which minimizes the residual in the worst case. For a given $x \in \mathbb{R}^n$, consider the inner maximization in (5). Since $\Delta b - \Delta A x = [\Delta A\ \ \Delta b][-x^T\ \ 1]^T$, $(u + v)_+ \le u_+ + |v|$ componentwise, and $\|Ew\|_1 \le \|E\|_1 \|w\|_1$ for the induced 1-norm, we have

$$\|(b - Ax + [\Delta A\ \ \Delta b][-x^T\ \ 1]^T)_+\|_1 \le \|(b - Ax)_+\|_1 + \rho\, \|[-x^T\ \ 1]^T\|_1. \qquad (6)$$

In the sequel we show that for a specific choice of $\Delta A$ and $\Delta b$ the inequality (6) holds with equality. Consider

$$[\Delta A\ \ \Delta b] = \rho\, \frac{(b - Ax)_+}{\|(b - Ax)_+\|_1}\, \big(\operatorname{sgn}([-x^T\ \ 1]^T)\big)^T, \qquad (7)$$

where

$$\operatorname{sgn}(a) = \begin{cases} 1, & \text{if } a > 0, \\ -1, & \text{if } a \le 0. \end{cases}$$

Obviously $\|[\Delta A\ \ \Delta b]\|_1 = \rho$ (every column has absolute column sum $\rho$), and

$$(b - Ax + [\Delta A\ \ \Delta b][-x^T\ \ 1]^T)_+ = \Big(b - Ax + \rho\, \frac{(b - Ax)_+}{\|(b - Ax)_+\|_1}\, \big(\operatorname{sgn}([-x^T\ \ 1]^T)\big)^T [-x^T\ \ 1]^T\Big)_+$$
$$= \Big(b - Ax + \rho\, \frac{(b - Ax)_+}{\|(b - Ax)_+\|_1}\, \|[-x^T\ \ 1]^T\|_1\Big)_+$$
$$= (b - Ax)_+ + \rho\, \frac{(b - Ax)_+}{\|(b - Ax)_+\|_1}\, \|[-x^T\ \ 1]^T\|_1.$$

Taking the $\ell_1$ norm gives equality in (6). Therefore, (5) is equivalent to the problem

$$\min_x \ \|(b - Ax)_+\|_1 + \rho\, \|[-x^T\ \ 1]^T\|_1. \qquad (8)$$

This, in turn, is equivalent to the LP

$$\begin{aligned} \min \ & e^T z + \rho\, e^T s + \rho \\ \text{s.t.}\ & Ax + z \ge b, \\ & -s \le x \le s, \\ & s,\ z \ge 0, \end{aligned} \qquad (9)$$

where the vectors $e$ are all-ones vectors of the appropriate dimensions.
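A minimal CVX sketch of the robust problem in the form (8), again our own illustration under the same assumptions as before (A, b, n in scope; the value of rho is an arbitrary example):

% Robust l1 problem (8): nominal residual plus rho*||[-x; 1]||_1.
rho = 1;                                   % example uncertainty level
cvx_begin quiet
    variable x(n)
    minimize( sum(max(b - A*x, 0)) + rho*(norm(x, 1) + 1) )
cvx_end

Passing the explicit LP (9) to an LP solver gives the same optimum; CVX simply performs an equivalent transformation automatically.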

3. The $\ell_\infty$ Case

In this section we consider the robust solution of linear inequalities in the $\ell_\infty$ norm sense, i.e.,

$$\min_x \ \max_{\|[\Delta A\ \ \Delta b]\|_\infty \le \rho} \ \|((b + \Delta b) - (A + \Delta A)x)_+\|_\infty, \qquad (10)$$

where $\Delta A$ and $\Delta b$ are the uncertainties in the matrix $A$ and the vector $b$, respectively, $\|\cdot\|_\infty$ denotes the induced matrix $\infty$-norm, and $\rho$ is a given positive parameter. We may write (10) as

$$\min_x \ \max_{\|[\Delta A\ \ \Delta b]\|_\infty \le \rho} \ \|(b - Ax + \Delta b - \Delta A x)_+\|_\infty. \qquad (11)$$

For a given $x \in \mathbb{R}^n$, the inner maximization in (11) satisfies, since $\|Ew\|_\infty \le \|E\|_\infty \|w\|_\infty$ for the induced $\infty$-norm,

$$\|(b - Ax + [\Delta A\ \ \Delta b][-x^T\ \ 1]^T)_+\|_\infty \le \|(b - Ax)_+\|_\infty + \rho\, \|[-x^T\ \ 1]^T\|_\infty. \qquad (12)$$

Let $\bar x = [-x^T\ \ 1]^T$ and $r = \arg\max_{i = 1, \dots, n+1} |\bar x_i|$. We now show that for the following specific choice of $\Delta A$ and $\Delta b$ the inequality (12) holds with equality. Let

$$[\Delta A\ \ \Delta b] = \rho\, \operatorname{sgn}(\bar x_r)\, \frac{(b - Ax)_+}{\|(b - Ax)_+\|_\infty}\, e_r^T, \qquad (13)$$

where $e_r$ is the $r$th column of the identity matrix and

$$\operatorname{sgn}(a) = \begin{cases} 1, & \text{if } a > 0, \\ 0, & \text{if } a = 0, \\ -1, & \text{if } a < 0. \end{cases}$$

(Note that $\bar x_r \ne 0$, since the last component of $\bar x$ equals 1.) Obviously $\|[\Delta A\ \ \Delta b]\|_\infty = \rho$, and

$$(b - Ax + [\Delta A\ \ \Delta b]\bar x)_+ = \Big(b - Ax + \rho\, \operatorname{sgn}(\bar x_r)\, \frac{(b - Ax)_+}{\|(b - Ax)_+\|_\infty}\, e_r^T \bar x\Big)_+$$
$$= \Big(b - Ax + \rho\, \frac{(b - Ax)_+}{\|(b - Ax)_+\|_\infty}\, \|\bar x\|_\infty\Big)_+$$
$$= (b - Ax)_+ + \rho\, \frac{(b - Ax)_+}{\|(b - Ax)_+\|_\infty}\, \|\bar x\|_\infty.$$

Taking the $\ell_\infty$ norm gives equality in (12). Therefore, (10) is equivalent to

$$\min_x \ \|(b - Ax)_+\|_\infty + \rho\, \|[-x^T\ \ 1]^T\|_\infty. \qquad (14)$$

This itself is equivalent to the LP

$$\begin{aligned} \min \ & z + \rho\, t \\ \text{s.t.}\ & b - Ax \le y, \\ & y_i \le z, \quad i = 1, \dots, m, \\ & -t \le x_i \le t, \quad i = 1, \dots, n, \\ & t \ge 1, \quad y \ge 0. \end{aligned} \qquad (15)$$
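The corresponding CVX sketch of (14), again our own illustration with an example value of rho; the term max(norm(x, Inf), 1) models $\|[-x^T\ \ 1]^T\|_\infty = \max(\|x\|_\infty, 1)$:

% Robust l_infinity problem (14).
rho = 1;                                   % example uncertainty level
cvx_begin quiet
    variable x(n)
    minimize( max(max(b - A*x, 0)) + rho*max(norm(x, Inf), 1) )
cvx_end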

4. Numerical Results

In this section we present numerical results for randomly generated test problems of different dimensions. In both tables, the first three test problems are generated by the following simple MATLAB code:

clear all
clc
seed = 0;
randn('state', seed);
m = input('number of rows: ');
n = input('number of columns: ');
A = randn(m, n);
b = randn(m, 1);

The matrix A produced by this code is not necessarily ill-conditioned, so for the remaining problems we use MATLAB's hilb() command to generate the well-known ill-conditioned Hilbert matrix of the given dimension and then make the system infeasible. The last two test problems in both tables are generated by the following code, which appends the negation of the last row with a right-hand side that contradicts it:

A = hilb(m);
b = randn(m, 1);
A = [A; -A(m, :)];
b = [b; -(b(m) + 1)];
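Putting the pieces together, the following end-to-end sketch (our own, with illustrative values m = 100 and rho = 0.1) builds an infeasible ill-conditioned instance as above and compares the nominal and robust $\ell_1$ solutions:

% Build an infeasible, ill-conditioned instance (as in the generator above).
m = 100; rho = 0.1;
A = hilb(m); b = randn(m, 1);
A = [A; -A(m, :)]; b = [b; -(b(m) + 1)];
n = size(A, 2);

cvx_begin quiet                            % nominal problem (2)
    variable xn(n)
    minimize( sum(max(b - A*xn, 0)) )
cvx_end

cvx_begin quiet                            % robust problem (8)/(9)
    variable xr(n)
    minimize( sum(max(b - A*xr, 0)) + rho*(norm(xr, 1) + 1) )
cvx_end

fprintf('nominal: residual = %g, ||x||_1 = %g\n', sum(max(b - A*xn, 0)), norm(xn, 1));
fprintf('robust : residual = %g, ||x||_1 = %g\n', sum(max(b - A*xr, 0)), norm(xr, 1));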

We solve the resulting LPs with the CVX software package (Grant, 2010, 2008). Results for the $\ell_1$ and $\ell_\infty$ norms are summarized in Tables 1 and 2, respectively. We used three values of the uncertainty parameter $\rho$, namely 0.1, 1, and 5; the triples in parentheses in both tables correspond to these three values, respectively. As the results show, the robust solution differs from that of the original problem, even when the objective values are close. Moreover, when the coefficient matrix is ill-conditioned, even a small uncertainty can change the solution dramatically; see the last two rows of both tables. Therefore, in the presence of uncertainties, the robust model should be preferred to the original one.

Table 1: Comparison of Problem (2) and its robust version (9)

m, n      Method        ||(b - Ax*)_+||_1              ||x*||_1
50, 10    Problem (2)   12.0456                        3.9991
          Problem (9)   (12.0456, 12.415, 16.796)      (3.97, 3.28, 0.4620)
500, 100  Problem (2)   143.24                         7.398
          Problem (9)   (143.24, 143.62, 151.007)      (7.398, 6.735, 3.75)
700, 300  Problem (2)   152.9                          15.52
          Problem (9)   (152.9, 153.85, 171.25)        (14.97, 13.25, 7.18)
100, 30   Problem (2)   0.469                          3.4252e4
          Problem (9)   (3.41, 25.44, 38.13)           (62.27, 8.71, 8.7e-9)
500, 300  Problem (2)   0.871                          3.6729e5
          Problem (9)   (20.28, 126.36, 174.22)        (377.78, 30.56, 4.3e-8)

Table 2: Comparison of Problem (3) and its robust version (15)

m, n      Method         ||(b - Ax*)_+||_inf           ||x*||_inf
50, 10    Problem (3)    1.0893                        0.5045
          Problem (15)   (1.0893, 1.0893, 1.0893)      (0.5045, 0.5045, 0.5045)
500, 100  Problem (3)    1.243                         0.222
          Problem (15)   (1.243, 1.243, 1.243)         (0.222, 0.222, 0.222)
700, 300  Problem (3)    0.937                         0.3340
          Problem (15)   (0.937, 0.937, 0.937)         (0.3340, 0.3340, 0.3340)
100, 30   Problem (3)    0.234                         3.407e4
          Problem (15)   (1.091, 1.091, 1.091)         (1, 1, 1)
500, 300  Problem (3)    0.436                         2.1894e9
          Problem (15)   (1.5851, 1.5851, 1.5851)      (1, 1, 1)

5. Conclusions

In this paper we have studied the robust $\ell_1$ and $\ell_\infty$ solutions of linear inequalities in the presence of uncertainties in both $A$ and $b$, and we have given equivalent LP formulations of both robust problems. Finally, several numerical results illustrate the importance of the robust framework.

REFERENCES

Ben-Tal, A. and Nemirovski, A. (2001). Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, MPS-SIAM Series on Optimization, SIAM, Philadelphia.

Censor, Y., Altschuler, M.D. and Powlis, W.D. (1988). A computational solution of the inverse problem in radiation therapy treatment planning, Appl. Math. Comput., 25, pp. 57-87.

Censor, Y. and Elfving, T. (1982). New methods for linear inequalities, Linear Algebra Appl., 42, pp. 199-211.

Censor, Y. and Zenios, S.A. (1997). Parallel Optimization: Theory, Algorithms, and Applications, Oxford University Press, Oxford.

Censor, Y., Ben-Israel, A., Xiao, Y. and Galvin, J.M. (2008). On linear infeasibility arising in intensity-modulated radiation therapy inverse planning, Linear Algebra Appl., 428, pp. 1406-1420.

Dax, A. (2006). The $\ell_1$ solutions of linear inequalities, Computational Statistics & Data Analysis, 50, pp. 40-60.

Grant, M. and Boyd, S. (2010). CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx.

Grant, M. and Boyd, S. (2008). Graph implementations for nonsmooth convex programs, in Recent Advances in Learning and Control (a tribute to M. Vidyasagar), V. Blondel, S. Boyd, and H. Kimura, editors, pp. 95-110, Lecture Notes in Control and Information Sciences, Springer. http://stanford.edu/~boyd/graph_dcp.html.

Herman, G.T. (1975). A relaxation method for reconstructing objects from noisy x-rays, Math. Prog., 8, pp. 1-19.

Herman, G.T. and Lent, A. (1978). A family of iterative quadratic optimization algorithms for pairs of inequalities, with applications in diagnostic radiology, Math. Prog. Study, 9, pp. 15-29.

Ketabchi, S. and Salahi, M. (2009). Correcting an inconsistent set of linear inequalities by the generalized Newton method, Comput. Sci. J. Moldova, 17, 2(50), pp. 179-192.

Salahi, M. and Ketabchi, S. (2010). Correcting an inconsistent set of linear inequalities by the generalized Newton method, Optimization Methods and Software, 25(3), pp. 457-465.

Soyster, A.L. (1973). Convex programming with set-inclusive constraints and applications to inexact linear programming, Oper. Res., 21, pp. 1154-1157.