Approximate D-optimal designs of experiments on the convex hull of a finite set of information matrices


Radoslav Harman, Mária Trnovská
Department of Applied Mathematics and Statistics
Faculty of Mathematics, Physics and Informatics
Comenius University Bratislava

Abstract

In the paper we solve the problem of D_H-optimal design on a discrete experimental domain, which is formally equivalent to maximizing the determinant on the convex hull of a finite number of positive semidefinite matrices. The problem of D_H-optimality covers many special design settings, e.g. the D-optimal experimental design for regression models with grouped observations. For D_H-optimal designs we prove several theorems generalizing known properties of standard D-optimality. Moreover, we show that D_H-optimal designs can be numerically computed using a multiplicative algorithm, for which we give a proof of convergence. We illustrate the results on the problem of D-optimal augmentation of independent regression trials for the quadratic model on a rectangular grid of points in the plane.

Keywords: D-optimal design, grouped observations, D-optimal augmentation of trials, multiplicative algorithm

1 Introduction

Consider the standard homoscedastic linear regression model with uncorrelated observations Y(t) satisfying E(Y(t)) = β^T f(t), where β ∈ R^m is an unknown vector of parameters and f = (f_1, ..., f_m)^T is a vector of real-valued regression functions linearly independent on the experimental domain X = {x_1, ..., x_n}. For this model, constructing the D-optimal experimental design (see, e.g., the monographs [4] or [5]) is equivalent to finding a vector of D-optimal weights, which is any solution w* of the problem

    max { ln det( Σ_{i=1}^n w_i f(x_i) f^T(x_i) ) : w ∈ S_n },    (1)

where S_n is the unit simplex in R^n:

    S_n = { w ∈ R^n : Σ_{i=1}^n w_i = 1, w_1, ..., w_n ≥ 0 }.

In this paper we study a generalization of problem (1) that can be used in a variety of less standard optimal design settings and, at the same time, exhibits similar theoretical properties and permits the use of efficient algorithms, such as a generalization of the multiplicative algorithm for the standard problem of D-optimality on a discrete experimental domain.

Notation: By the symbols S^m, S^m_+ and S^m_++ we denote the sets of all symmetric, positive semidefinite and positive definite matrices of type m × m, respectively. The symbol ⪰ denotes the Loewner partial ordering on S^m, i.e., A ⪰ B ⇔ A − B ∈ S^m_+.

Let H be the convex hull of a set of nonzero positive semidefinite matrices H_1, ..., H_n of type m × m, such that

    H ∩ S^m_++ ≠ ∅,    (2)

i.e., H contains a regular matrix. Our aim is to find a vector w* = (w*_1, ..., w*_n)^T that solves the optimization problem

    max { ln det( Σ_{i=1}^n w_i H_i ) : w ∈ S_n }.    (3)

Note that there always exists a solution w* of (3), and the matrix M* = Σ_{i=1}^n w*_i H_i is unique and regular, which follows from the compactness and convexity of H, the existence of a regular matrix in H, and the strict concavity of ln det(·) on S^m_++. We will say that the vector w* is a D_H-optimal design, and its components w*_1, ..., w*_n are D_H-optimal weights corresponding to the elementary design matrices H_1, ..., H_n. The matrix M* will be called the D_H-optimal information matrix.

Clearly, the standard problem (1) is a special case of (3) with elementary information matrices corresponding to individual regression trials, i.e., H_i = f(x_i) f^T(x_i) for all i = 1, ..., n. However, (3) covers all problems of approximate D-optimal experimental design in which there exists a mapping H : X → S^m_+ such that the information matrix corresponding to trials in points t_1, ..., t_k ∈ X is equal to Σ_{i=1}^k H(t_i), that is, any model in which the information matrix is additive. For instance, in the regression model with grouped observations (see [4], Section II.5.3), the information matrix of a design w ∈ S_n is given by the formula

    M(w) = Σ_{i=1}^n w_i G_i K_i^{-1} G_i^T,

where G_i is an m × r_i matrix and K_i is a regular r_i × r_i covariance matrix of the r_i-dimensional vector of observations corresponding to the trial in the point x_i ∈ X. Hence, D-optimality for the model with grouped observations is a problem of D_H-optimality with the basic information matrices H_i = G_i K_i^{-1} G_i^T for i = 1, ..., n. In Section 6 we demonstrate that D_H-optimality also covers other design problems, such as D-optimal augmentation of independent regression trials.

2 Equivalence theorem for D_H-optimal designs

Consider the function Φ : S^m_+ → R ∪ {−∞} defined by Φ(M) = ln det(M) for M ∈ S^m_++ and Φ(M) = −∞ otherwise. We will call the function Φ the criterion of D-optimality. It is well known that Φ is a concave function whose gradient in M ∈ S^m_++ is ∇Φ(M) = M^{-1}; see, e.g., [4], Section IV.2.1. Hence the directional derivative in M along the direction H − M, where H ∈ S^m, is

    ∂Φ(M; H − M) = tr( ∇Φ(M) (H − M) ) = tr(M^{-1} H) − m.

Clearly, a matrix M* maximizes Φ on H iff M* is regular and ∂Φ(M*; H_i − M*) ≤ 0 for all i = 1, ..., n, which is equivalent to tr(M*^{-1} H_i) ≤ m for all i = 1, ..., n. That is, the optimization problem

    min { max_{i=1,...,n} tr(M^{-1} H_i) : M = Σ_{i=1}^n w_i H_i, w ∈ S_n }    (4)

has optimal value less than or equal to m, and M* = Σ_{i=1}^n w*_i H_i, where the w*_i are components of an optimal solution of (4), maximizes the determinant on the set H. Moreover, notice that

    m ≥ max_{i=1,...,n} tr(M*^{-1} H_i) ≥ Σ_{i=1}^n w*_i tr(M*^{-1} H_i) = tr( M*^{-1} Σ_{i=1}^n w*_i H_i ) = m,    (5)

that is, max_{i=1,...,n} tr(M*^{-1} H_i) = m, which means that the optimal value of problem (4) is exactly m. Therefore, we know the optimal value of problem (4), although we do not know the point at which it is attained. Moreover, from (5) we see that a coefficient w*_i of the optimal convex combination is nonzero only if tr(M*^{-1} H_i) = m. We obtain a generalization of the famous Kiefer-Wolfowitz theorem of equivalence between the problems of D- and G-optimality (see [3], or [4], Section IV.2.4):

Theorem 1. Let w* ∈ S_n. Then the following three statements are equivalent:
(i) w* is a D_H-optimal design;
(ii) w* is a solution of problem (4);
(iii) max_{i=1,...,n} tr(M^{-1} H_i) = m, where M = Σ_{i=1}^n w*_i H_i.

Note also that for any information matrix M ∈ H we have

    Φ(M*) − Φ(M) ≤ tr( (M* − M) ∇Φ(M) ) = tr(M* M^{-1}) − m = Σ_{i=1}^n w*_i tr(H_i M^{-1}) − m ≤ ε_M,    (6)

where

    ε_M = max_{i=1,...,n} tr(H_i M^{-1}) − m.    (7)

By Theorem 1, if M approaches M*, then ε_M converges to 0. Therefore, inequality (6) can be used to control the convergence of D_H-optimal design algorithms; cf. Section 5.

3 Bounds for D_H-optimal weights

A well-known fact in the theory of D-optimal design is that the components of the vector of D-optimal weights are bounded from above by 1/m (see, e.g., [5], Section 8.12). It turns out that in the case of general D_H-optimality, any optimal weight w*_i satisfies a constraint determined by the rank of the corresponding matrix H_i:

Theorem 2. Let w* be a D_H-optimal design. Then w*_i ≤ rank(H_i)/m for all i = 1, ..., n.

Proof. Let M* = Σ_{i=1}^n w*_i H_i be the D_H-optimal information matrix, where w* is a vector of D_H-optimal weights. Fix a single index i ∈ {1, ..., n} such that w*_i > 0 and denote N_i = M*^{-1/2} H_i M*^{-1/2}. From Theorem 1 we know that tr(M*^{-1} H_i) = tr(N_i) = m, which means that the sum of the eigenvalues of N_i is m. At the same time, the number of nonzero eigenvalues of N_i cannot exceed rank(N_i) = rank(H_i). Thus, there exists an eigenvalue λ of the matrix N_i such that λ ≥ m (rank(H_i))^{-1}. Since λ is an eigenvalue of N_i we have det(N_i − λ I_m) = 0, from which we obtain 0 = det(H_i − λ M*). In other words, M* − λ^{-1} H_i is singular. We will show that this implies w*_i ≤ λ^{-1}. Assume the converse, that is, w*_i > λ^{-1}. Then

    M* − λ^{-1} H_i = Σ_{j≠i} w*_j H_j + (w*_i − λ^{-1}) H_i
                    ⪰ ((w*_i − λ^{-1}) / w*_i) Σ_{j≠i} w*_j H_j + (w*_i − λ^{-1}) H_i
                    = ((w*_i − λ^{-1}) / w*_i) Σ_{j=1}^n w*_j H_j
                    = ((w*_i − λ^{-1}) / w*_i) M* ∈ S^m_++,

which contradicts the singularity of M* − λ^{-1} H_i. Therefore w*_i ≤ λ^{-1} ≤ rank(H_i) m^{-1}.

As a straightforward corollary of the previous theorem we obtain that if Σ_{i=1}^n rank(H_i) = m, then the D_H-optimal design is simply the vector w* with components w*_i = rank(H_i)/m.

4 Identification of zero weights of D_H-optimal designs

In this section we formulate a direct generalization of the method from the paper [2], which allows us to use any regular information matrix M = Σ_{i=1}^n w_i H_i to identify indices j ∈ {1, ..., n} such that w*_j = 0 for any D_H-optimal design w*.

Similarly as in the paper [2], let N = M^{-1/2} M* M^{-1/2}, where M* is the D_H-optimal information matrix, and let 0 < λ_1 ≤ ... ≤ λ_m denote the eigenvalues of N. Let w* be any D_H-optimal design and let j ∈ {1, ..., n} be such that w*_j > 0. Set Y_j = N^{-1/2} M^{-1/2} H_j^{1/2}. We have

    tr(M^{-1} H_j) = tr(N Y_j Y_j^T) ≥ λ_1 tr(Y_j Y_j^T) = λ_1 tr(M*^{-1} H_j) = λ_1 m,    (8)

    Σ_{i=1}^m λ_i^{-1} = tr(N^{-1}) = tr(M*^{-1} M) = Σ_{i=1}^n w_i tr(M*^{-1} H_i) ≤ m,    (9)

    Σ_{i=1}^m λ_i = tr(N) = tr(M* M^{-1}) = Σ_{i=1}^n w*_i tr(H_i M^{-1}) ≤ m + ε_M,    (10)

where ε_M is defined by (7). Using identical methods as in the paper [2], we can show that inequalities (9) and (10) imply

    λ_1 ≥ 1 + ε_M/2 − √( ε_M (4 + ε_M − 4/m) ) / 2,    (11)

which together with inequality (8) yields:

Theorem 3. Let w ∈ S_n be any design such that M = Σ_{i=1}^n w_i H_i is regular, let w* be any D_H-optimal design, and let j ∈ {1, ..., n} be such that w*_j > 0. Then

    tr(M^{-1} H_j) ≥ m [ 1 + ε_M/2 − √( ε_M (4 + ε_M − 4/m) ) / 2 ].    (12)

Therefore, using any regular information matrix M we can remove all the basic information matrices H_j for which the inequality (12) is not satisfied, since the corresponding weights of the D_H-optimal design must be 0. This can significantly speed up the computation of a D_H-optimal design (cf. [2]).
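The pruning rule of Theorem 3 is straightforward to implement. Below is a minimal Python sketch (NumPy assumed; the function name and the toy matrices are illustrative, not from the paper): given any design w with regular information matrix, it computes ε_M from (7), the bound (12), and the indices whose D_H-optimal weights are provably zero.

```python
import numpy as np

def prune_candidates(w, H):
    """Return the indices j violating the bound (12): for these,
    w*_j = 0 in every D_H-optimal design, so H_j can be dropped."""
    M = sum(wi * Hi for wi, Hi in zip(w, H))
    Minv = np.linalg.inv(M)
    m = M.shape[0]
    t = np.array([np.trace(Hi @ Minv) for Hi in H])
    eps = max(t.max() - m, 0.0)  # epsilon_M from (7), clipped at 0 for round-off
    bound = m * (1.0 + eps / 2.0 - np.sqrt(eps * (4.0 + eps - 4.0 / m)) / 2.0)
    return [j for j in range(len(H)) if t[j] < bound]

# Toy check: for H_i = e_i e_i^T the uniform design is D_H-optimal
# (eps = 0); a scaled-down duplicate of H_1 is flagged as removable.
E = [np.outer(e, e) for e in np.eye(3)]
H = E + [0.5 * E[0]]
w = np.array([1/3, 1/3, 1/3, 0.0])
drop = prune_candidates(w, H)
```

Note that for an exactly D_H-optimal w the clipped ε_M is 0 and the bound reduces to tr(M^{-1} H_j) ≥ m, recovering the support condition of Theorem 1.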

5 A multiplicative algorithm for constructing D_H-optimal designs

As a simple numerical method for calculating D_H-optimal designs, we will formulate a generalization of the Titterington-Torsney multiplicative algorithm (see, e.g., [6], [7]). Let w^(0) = (w^(0)_1, ..., w^(0)_n)^T be an initial design such that w^(0)_i > 0 for all i = 1, ..., n. Based on the design w^(j) = (w^(j)_1, ..., w^(j)_n)^T, j ≥ 0, and its information matrix M_j = Σ_{i=1}^n w^(j)_i H_i, we construct the new vector w^(j+1) = (w^(j+1)_1, ..., w^(j+1)_n)^T using the formula

    w^(j+1)_i = m^{-1} tr(M_j^{-1} H_i) w^(j)_i for all i = 1, ..., n.    (13)

(Note that (2) and the positivity of w^(j) imply the regularity of M_j.) Clearly

    Σ_{i=1}^n w^(j+1)_i = (1/m) tr( M_j^{-1} Σ_{i=1}^n w^(j)_i H_i ) = 1,

that is, w^(j+1) is also a design. Note that the algorithm is computationally very rapid, since it calculates the inverse of an m × m matrix only once per iteration, with m being usually small (less than 10 in most optimal design problems). The speed of calculation is more influenced by the number n of support matrices, but the number of candidate support matrices can be significantly reduced during the calculation using the technique of Section 4.

In the following, we will prove that the multiplicative algorithm produces a sequence (w^(j))_{j=1}^∞ of designs that converges to the D_H-optimal design in the sense that lim_{j→∞} det(M_j) = det(M*), where M* is the D_H-optimal information matrix. The proof of convergence of the multiplicative algorithm for standard D-optimality has been based on a technique of conditional expectations, see [4], Section V.3. For the proof of convergence of the multiplicative algorithm for general D_H-optimality, we will use a different approach based on the following lemmas:

Lemma 1. (Theorem 6.10 of [8], or Theorem IX.5.11 in [1]) If

    [ A_11  A_12 ; A_21  A_22 ] ⪰ 0

and all of the blocks are square matrices of the same size, then (det A_12)^2 ≤ det A_11 det A_22.

Lemma 2. (Problem 28, Section 6.2 of [8]) Let A, B be positive semidefinite. Then det(A + B) ≥ det A, and equality occurs if and only if A + B is singular or B = 0.

Theorem 4. For the algorithm defined by (13) it holds that det(M_j) ≤ det(M_{j+1}) for all j ≥ 0, with equality if and only if M_j = M*. Moreover, lim_{j→∞} det(M_j) = det(M*).

Proof. Let M = Σ_{i=1}^n w_i H_i for some vector w of positive weights and let M^+ = Σ_{i=1}^n w^+_i H_i, where w^+ is the vector of weights obtained from w by one step of the multiplicative algorithm, i.e., w^+_i = m^{-1} tr(M^{-1} H_i) w_i for all i = 1, ..., n. Define α_i = tr(M^{-1} H_i)/m = w^+_i / w_i and M̃ = Σ_{i=1}^n α_i^{-1} w_i H_i, and note that M^+ = Σ_{i=1}^n α_i w_i H_i. Clearly

    [ M^+  M ; M  M̃ ] = Σ_{i=1}^n [ α_i w_i H_i , w_i H_i ; w_i H_i , α_i^{-1} w_i H_i ]
                      = Σ_{i=1}^n X_i X_i^T ⪰ 0,  where X_i = [ √(α_i w_i) H_i^{1/2} ; √(w_i/α_i) H_i^{1/2} ].    (14)

From (14) and Lemma 1 it immediately follows that

    det²(M) ≤ det(M^+) det(M̃).    (15)

Let λ_i be the i-th eigenvalue of the matrix M^{-1} M̃. Using the inequality between the geometric and arithmetic means we obtain

    det^{1/m}(M^{-1} M̃) = ( Π_{i=1}^m λ_i )^{1/m} ≤ (1/m) Σ_{i=1}^m λ_i = (1/m) tr(M^{-1} M̃) = Σ_{i=1}^n α_i^{-1} w_i tr(M^{-1} H_i) / m = Σ_{i=1}^n w_i = 1.    (16)

Consequently

    det(M̃) ≤ det(M),    (17)

which together with (15) gives the required inequality

    det(M) ≤ det(M^+).    (18)

To prove the second part of the theorem, assume that

    det(M) = det(M^+).    (19)

From (15), (17) and (19) it follows that det(M) = det(M^+) = det(M̃). Therefore we have equality in (16), which implies that the eigenvalues λ_i of the matrix M^{-1} M̃ are all equal. Moreover, since det(M^{-1} M̃) = 1 and the matrices M^{-1}, M̃ are positive definite, we have M^{-1} M̃ = I, i.e., M̃ = M. Using this fact, (14), and the properties of the Schur complement we obtain

    M^+ − M = M^+ − M M̃^{-1} M ⪰ 0.

Hence we can apply Lemma 2 with A = M and B = M^+ − M, which, together with (19) and the positive definiteness of M^+, implies M^+ = M, i.e.,

    Σ_{i=1}^n α_i w_i H_i = Σ_{i=1}^n w_i H_i.

By multiplying both sides of this equality by (1/m) M^{-1} and taking the trace we obtain

    Σ_{i=1}^n α_i w_i tr(M^{-1} H_i) / m = Σ_{i=1}^n w_i tr(M^{-1} H_i) / m,

that is, Σ_{i=1}^n α_i² w_i = Σ_{i=1}^n α_i w_i. Since Σ_{i=1}^n α_i w_i = Σ_{i=1}^n w^+_i = 1, we have

    ( Σ_{i=1}^n α_i w_i )² = Σ_{i=1}^n α_i² w_i.    (20)

The equality condition of the weighted Cauchy-Schwarz inequality together with (20) implies

    α_i = tr(M^{-1} H_i) / m = 1 for all i = 1, ..., n.

Therefore M must be D_H-optimal by Theorem 1. The last statement of the theorem can be proved using the same arguments as in the proof of Proposition V.6 in [4], that is, using the compactness of the space S_n of weights and the monotonicity det(M_j) ≤ det(M_{j+1}), which is strict unless M_j is optimal.
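As an illustration of iteration (13) and of the monotonicity guaranteed by Theorem 4, here is a minimal Python sketch (NumPy assumed; the function name and the example are illustrative, not from the paper). It runs the multiplicative algorithm on the rank-one matrices H_i = f(x_i) f^T(x_i) of simple linear regression on {−1, 0, 1}, whose D-optimal design is known to put weight 1/2 on each endpoint:

```python
import numpy as np

def multiplicative_dh(H, iters=200):
    """Iteration (13): w_i <- w_i * tr(M_j^{-1} H_i) / m, started from the
    uniform (hence strictly positive) design; det(M_j) is recorded so the
    monotonicity det(M_j) <= det(M_{j+1}) of Theorem 4 can be observed."""
    n, m = len(H), H[0].shape[0]
    w = np.full(n, 1.0 / n)
    dets = []
    for _ in range(iters):
        M = sum(wi * Hi for wi, Hi in zip(w, H))
        dets.append(np.linalg.det(M))
        w = w * np.array([np.trace(Hi @ np.linalg.inv(M)) for Hi in H]) / m
    return w, dets

# Simple linear regression f(x) = (1, x)^T on {-1, 0, 1}:
H = [np.outer([1.0, x], [1.0, x]) for x in (-1.0, 0.0, 1.0)]
w, dets = multiplicative_dh(H)
monotone = all(b >= a - 1e-12 for a, b in zip(dets, dets[1:]))
```

In practice one would stop the loop once ε_M from (7) falls below a tolerance, which by (6) bounds the log-determinant shortfall, and apply the pruning rule of Section 4 periodically to shrink the candidate set.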

6 Example

In this section we demonstrate a technique of how to apply our results to the problem of D-optimal approximate augmentation of a set of regression trials. Consider the quadratic regression with independent responses Y modeled by

    E(Y) = β_1 + β_2 u + β_3 v + β_4 u² + β_5 v² + β_6 uv = f^T(u, v) β = (1, u, v, u², v², uv)(β_1, ..., β_6)^T,

where the design point (u, v)^T belongs to the experimental domain X = {x_1, ..., x_n}, corresponding to the n = 25 point equispaced discrete grid in the square I² = [−1, 1] × [−1, 1], that is,

    x_{5(i−1)+j} = ( (i − 3)/2, (j − 3)/2 )^T for i, j ∈ {1, ..., 5}.

Assume that we have already performed k trials uniformly on X, i.e., we have performed k/n ∈ N trials in each design point (for instance in order to verify the validity of the model). Our aim is to perform γk ∈ N additional trials in a way that maximizes the determinant of the final information matrix. Let M(x) = f(x) f^T(x) for all x ∈ X. In accord with the methodology of approximate design of experiments, we shall solve the problem

    argmax_{w ∈ S_n} ln det( Σ_{j=1}^n (k/n) M(x_j) + γk Σ_{i=1}^n w_i M(x_i) ),    (21)

where w_i is the proportion of the γk additional trials to be performed in x_i. The problem (21) is clearly equivalent to the problem of D_H-optimality with

    H_i = Σ_{j=1}^n (1/n) M(x_j) + γ M(x_i), for i = 1, ..., n.

Due to the symmetries of the quadratic model and the experimental domain X, the D-optimal augmentation design is supported on at most 9 points, with optimal weights w*_1 = w*_5 = w*_21 = w*_25 corresponding to the vertices of I², optimal weights w*_3 = w*_11 = w*_15 = w*_23 corresponding to the midpoints of the edges of I², and an optimal weight w*_13 corresponding to the central point (0, 0). Figure 1 exhibits the dependence of the optimal weights on the augmentation factor γ. Notice that if we add less than about 50% of trials, then the optimal method is to perform them only in the vertices of the square, and if we add between around 50% and 150% of trials, then we should perform them only in the vertices and edge midpoints of the square. Naturally, for γ → ∞ the initial phase of experimentation becomes negligible and the D-optimal augmentation design converges to the standard D-optimal design.

References

[1] Bhatia, R., 1996. Matrix Analysis (Graduate Texts in Mathematics). Springer.
[2] Harman, R., Pronzato, L., 2007. Improvements on removing non-optimal support points in D-optimum design algorithms. Statistics & Probability Letters 77, 90-94.
[3] Kiefer, J., Wolfowitz, J., 1960. The equivalence of two extremum problems. Canadian Journal of Mathematics 12, 363-366.
[4] Pázman, A., 1986. Foundations of Optimum Experimental Design. Reidel.
[5] Pukelsheim, F., 1993. Optimal Design of Experiments. John Wiley & Sons.
[6] Titterington, D., 1976. Algorithms for computing D-optimal designs on a finite design space. In: Proceedings of the 1976 Conference on Information Sciences and Systems. Department of Electronic Engineering, Johns Hopkins University, Baltimore, 213-216.
[7] Torsney, B., 1983. A moment inequality and monotonicity of an algorithm. In: Kortanek, K., Fiacco, A. (Eds.), Proceedings of the International Symposium on Semi-Infinite Programming and Applications. Springer, Heidelberg, pp. 249-260.
[8] Zhang, F., 1999. Matrix Theory: Basic Results and Techniques. Springer-Verlag.

Figure 1: Optimal weights depending on the augmentation factor γ, corresponding to the support points at the vertices of the square I² (solid line), the midpoints of the edges of the square I² (dashed line), and the central point (0, 0) (dotted line).