Appl Math Optim 29: 211-222 (1994)
Applied Mathematics & Optimization
© 1994 Springer-Verlag New York Inc.

An Algorithm for Finding the Chebyshev Center of a Convex Polyhedron¹

N. D. Botkin and V. L. Turova-Botina²

Institut für Angewandte Mathematik und Statistik der Universität Würzburg, Am Hubland, D-8700 Würzburg, Deutschland

Abstract. An algorithm for finding the Chebyshev center of a finite point set in the Euclidean space $R^n$ is proposed. The algorithm terminates after a finite number of iterations. In each iteration of the algorithm the current point is projected orthogonally onto the convex hull of a subset of the given point set.

Key Words. Chebyshev center, Finite algorithm.

AMS Classification. Primary 90C20, Secondary 52A.

Abbreviated version of the article title: An Algorithm for Finding the Chebyshev Center

¹ Part of this research was done at the Universität Würzburg (Institut für Angewandte Mathematik und Statistik) when the first author was supported by the Alexander von Humboldt Foundation, Germany.
² On leave from the Institute of Mathematics and Mechanics, Ural Branch of the Russian Academy of Sciences, 620219 Ekaterinburg, S. Kovalevskaya str. 16, Russia.
Introduction

Many problems of control theory and computational geometry require an effective procedure for finding the Chebyshev center of a convex polyhedron. In particular, it is natural to consider the Chebyshev center as the center of an information set in control problems with uncertain disturbances and state information errors. Chebyshev centers are also useful as an auxiliary tool for some problems of computational geometry. These are the reasons to propose a new algorithm for finding the Chebyshev center (see also [1], [4]).

1. Formulation of the Problem. Optimality Conditions

Let $Z = \{z_1, z_2, \ldots, z_m\}$ be a finite set of points in the Euclidean space $R^n$. A point $x^* \in R^n$ is called the Chebyshev center of the set $Z$ if

$$\max_{z \in Z} \|x^* - z\| = \min_{x \in R^n} \max_{z \in Z} \|x - z\|. \tag{1}$$

It is easily seen that the point $x^*$ satisfying relation (1) is unique and $x^* \in \mathrm{co}\,Z$, where the symbol "co" means the convex hull of a set. For any point $x \in R^n$ we denote

$$d(x) = \max_{z \in Z} \|x - z\|.$$

Let the symbol $E(x)$ denote the subset of points of $Z$ which have the largest distance from $x$, i.e.,

$$E(x) = \{z \in Z : \|x - z\| = d(x)\}.$$

The optimality conditions are given by the following theorem [2].

Theorem 1. A point $x^* \in R^n$ is the Chebyshev center of $Z$ if and only if $x^* \in \mathrm{co}\,E(x^*)$.

This fact will be used as the termination criterion of the algorithm.

2. The Idea of the Algorithm

Choose an initial point $x_0 \in \mathrm{co}\,Z$ and find the set $E(x_0)$. Assume that $x_0 \notin \mathrm{co}\,E(x_0)$; otherwise $x_0 = x^*$. Let

$$J = \{j \in \overline{1,m} : z_j \in E(x_0)\}, \qquad I = \{i \in \overline{1,m} : z_i \in Z \setminus E(x_0)\}.$$

Find a point $y_0 \in \mathrm{co}\,E(x_0)$ nearest to the point $x_0$. For any $\alpha \in [0,1]$ consider the point $x_\alpha = x_0 + \alpha(y_0 - x_0)$. Obviously, for $\alpha = 0$ the following inequality holds:

$$\max_{j \in J} \|x_\alpha - z_j\| - \max_{i \in I} \|x_\alpha - z_i\| > 0. \tag{2}$$

When we increase $\alpha$, the point $x_\alpha$ moves from the point $x_0$ towards $y_0$. We increase $\alpha$ until the left-hand side of inequality (2) becomes zero. If we reach the point $y_0$ before the left-hand side of (2) is zero, then the point $y_0$ is the desired solution $x^*$. If the left-hand side of (2) is equal to zero for some $\alpha = \alpha_0 < 1$,
then we take the point $x_{\alpha_0}$ as the new initial point and repeat the above process. Note that finding the value $\alpha_0$ does not require the solving of any equation: the algorithm provides an explicit formula for $\alpha_0$. Thus, the algorithm resembles the simplex method of linear programming. The set $E(x_0)$ is an analog of the support basis in the simplex method. After one iteration we obtain a new set $E(x_{\alpha_0})$ by adding to the set $E(x_0)$ some new points and removing some old points. Besides, the "objective" function $d(\cdot)$ decreases, i.e., $d(x_{\alpha_0}) < d(x_0)$.

3. The Algorithm

Algorithm 1

Step 0: Choose an initial point $x_0 \in \mathrm{co}\,Z$ and set $k := 0$.

Step 1: Find $E(x_k)$. If $E(x_k) = Z$, then $x^* := x_k$ and stop.

Step 2: Find $y_k \in \mathrm{co}\,E(x_k)$ as the nearest point to $x_k$. If $y_k = x_k$, then $x^* := x_k$ and stop.

Step 3: Calculate $\alpha_k$ by the formula

$$\alpha_k = \min_{i \in I_k^-} \frac{\|z_i - x_k\|^2 - d^2(x_k)}{2\langle y_k - x_k,\, z_i - y_k\rangle}, \tag{3}$$

$$I_k^- = \{i : z_i \in Z \setminus E(x_k),\ \langle y_k - x_k,\, z_i - y_k\rangle < 0\}$$

(we set formally $\alpha_k = +\infty$ whenever $I_k^- = \emptyset$). If $\alpha_k \ge 1$, then $x^* := y_k$ and stop.

Step 4: Let $x_{k+1} := x_k + \alpha_k(y_k - x_k)$, $k := k + 1$, and go to Step 1.

Remark. Note that Algorithm 1 comprises the non-trivial operation of finding the distance to the convex hull of a point set (Step 2). In our latest program realizing Algorithm 1 we used the recursive algorithm of [3] for the implementation of this operation. However, we will see below that the point $y_k$ of Step 2 is the Chebyshev center of $E(x_k)$. So we come to the following recursive algorithm.
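Both variants of the algorithm share the explicit Step 3. As an illustration, formula (3) can be evaluated directly; the following Python sketch is our own rendering (the paper's programs were written in QuickBASIC), and the iterate in the usage example is hypothetical, not taken from the paper.

```python
import math

# A sketch of Step 3 (formula (3)): given the current point x_k, the
# nearest point y_k of co E(x_k), the distance d(x_k), and the points
# of Z \ E(x_k), the step length alpha_k is an explicit minimum over
# the index set I_k^- -- no equation solving is required.

def step_length(x, y, d, others):
    """alpha_k of formula (3); `others` holds the points of Z \\ E(x_k)."""
    v = tuple(yj - xj for xj, yj in zip(x, y))        # y_k - x_k
    alpha = math.inf                                  # convention when I_k^- is empty
    for z in others:
        s = sum(vj * (zj - yj) for vj, zj, yj in zip(v, z, y))
        if s < 0:                                     # index belongs to I_k^-
            dist2 = sum((zj - xj) ** 2 for zj, xj in zip(z, x))
            alpha = min(alpha, (dist2 - d * d) / (2.0 * s))
    return alpha

# Hypothetical iterate: Z = vertices of the square [-1, 1]^2,
# x_k = (0.5, 0.3), so E(x_k) = {(-1, -1)} and hence y_k = (-1, -1).
x, y = (0.5, 0.3), (-1.0, -1.0)
d = math.dist(x, y)                 # d(x_k): here E(x_k) is a single point
alpha = step_length(x, y, d, [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0)])
# alpha = 3/13 < 1, so Step 4 sets x_{k+1} = x_k + alpha*(y_k - x_k)
```

Since $\alpha_k < 1$ here, the iteration continues; had $I_k^-$ been empty, the returned $+\infty$ would trigger the stop in Step 3.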
Algorithm 2

Step 0: Choose an initial point $x_0 \in \mathrm{co}\,Z$ and set $k := 0$.

Step 1: Find $E(x_k)$. If $E(x_k) = Z$, then $x^* := x_k$ and stop.

Step 2: Call Algorithm 2 with $Z = E(x_k)$ and let $y_k$ be the output of this call (the Chebyshev center of $E(x_k)$). If $y_k = x_k$, then $x^* := x_k$ and stop.

Step 3: Calculate $\alpha_k$ by the formula

$$\alpha_k = \min_{i \in I_k^-} \frac{\|z_i - x_k\|^2 - d^2(x_k)}{2\langle y_k - x_k,\, z_i - y_k\rangle},$$

$$I_k^- = \{i : z_i \in Z \setminus E(x_k),\ \langle y_k - x_k,\, z_i - y_k\rangle < 0\}$$

(we set formally $\alpha_k = +\infty$ whenever $I_k^- = \emptyset$). If $\alpha_k \ge 1$, then $x^* := y_k$ and stop.

Step 4: Let $x_{k+1} := x_k + \alpha_k(y_k - x_k)$, $k := k + 1$, and go to Step 1.

First we give the proof for Algorithm 1. The proof for Algorithm 2 will simply follow from the fact that the point $y_k$ obtained at Step 2 of Algorithm 2 is the nearest point to $x_k$.

4. Auxiliary Propositions

Consider the $k$-th iteration of Algorithm 1. We have the current approximation $x_k$ and the point $y_k \in \mathrm{co}\,E(x_k)$ nearest to $x_k$. Let us assume further that $x_k \ne y_k$; otherwise $x_k = x^*$ by Theorem 1. Introduce some notations. For any subset $S \subset Z$ we put

$$\mathrm{num}(S) = \{l \in \overline{1,m} : z_l \in S\}.$$

Define the following sets of indices:

$$J_k = \mathrm{num}(E(x_k)), \qquad J_k^0 = \{l \in J_k : \langle y_k - x_k,\, z_l\rangle = \min_{j \in J_k} \langle y_k - x_k,\, z_j\rangle\},$$

$$I_k = \mathrm{num}(Z) \setminus J_k, \qquad I_k^- = \{i \in I_k : \langle y_k - x_k,\, z_i - y_k\rangle < 0\}, \qquad I_k^+ = I_k \setminus I_k^-.$$

Note that the set $I_k^-$ is the same as in formula (3). We now prove two propositions characterizing the point $y_k$.
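The content of the two propositions below — that the projection $y_k$ makes the same scalar product $\langle y_k - x_k, z_j\rangle$ minimal exactly on the indices carrying $y_k$ — can be checked numerically in the simplest case, where $E(x_k)$ consists of two points and $\mathrm{co}\,E(x_k)$ is a segment. The following sketch and its points are our own, purely illustrative choices, not from the paper.

```python
# Numerical check (hypothetical points) of the projection properties
# proved below: project x onto the segment co{a, b} and verify that
# <y - x, y> = min_j <y - x, z_j>  (Proposition 1), with the minimum
# attained exactly on the points whose convex hull contains y
# (Proposition 2).

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def project_to_segment(x, a, b):
    """Nearest point to x on the segment [a, b]."""
    ab = tuple(bj - aj for aj, bj in zip(a, b))
    t = dot(tuple(xj - aj for aj, xj in zip(a, x)), ab) / dot(ab, ab)
    t = max(0.0, min(1.0, t))                 # clamp into the segment
    return tuple(aj + t * abj for aj, abj in zip(a, ab))

a, b = (0.0, 1.0), (0.0, -1.0)

# Interior projection: both endpoints attain the minimum (J^0 = {a, b}).
x = (2.0, 0.0)
y = project_to_segment(x, a, b)               # -> (0.0, 0.0)
v = tuple(yj - xj for xj, yj in zip(x, y))
assert abs(dot(v, y) - min(dot(v, a), dot(v, b))) < 1e-12

# Endpoint projection: the minimum is attained only at a, and y = a,
# consistent with y in co{z_j : j in J^0}.
x2 = (2.0, 3.0)
y2 = project_to_segment(x2, a, b)             # -> (0.0, 1.0)
v2 = tuple(yj - xj for xj, yj in zip(x2, y2))
assert dot(v2, a) < dot(v2, b) and y2 == a
```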
Proposition 1. The following equality holds:

$$\langle y_k - x_k,\, y_k\rangle = \min_{j \in J_k} \langle y_k - x_k,\, z_j\rangle.$$

Proof. Choose an arbitrary point $z \in \mathrm{co}\,E(x_k)$ and consider the function $\psi(\beta) = \|(1-\beta)y_k + \beta z - x_k\|^2$, $\beta \in [0,1]$. Because of the definition of $y_k$ the function $\psi(\cdot)$ takes its minimum at $\beta = 0$. Hence $\psi'(0) \ge 0$, which implies

$$\langle y_k - x_k,\, z - y_k\rangle \ge 0.$$

Therefore,

$$\langle y_k - x_k,\, y_k\rangle = \min_{z \in \mathrm{co}\,E(x_k)} \langle y_k - x_k,\, z\rangle.$$

Taking into account the obvious equality

$$\min_{z \in \mathrm{co}\,E(x_k)} \langle y_k - x_k,\, z\rangle = \min_{j \in J_k} \langle y_k - x_k,\, z_j\rangle,$$

we obtain the desired result.

Proposition 2. The following inclusion holds:

$$y_k \in \mathrm{co}\{z_j : j \in J_k^0\}.$$

Proof. Since $y_k \in \mathrm{co}\,E(x_k)$, we have

$$y_k = \sum_{j \in J_k} \lambda_j z_j, \qquad \lambda_j \ge 0, \qquad \sum_{j \in J_k} \lambda_j = 1.$$

Proposition 1 gives

$$\sum_{j \in J_k} \lambda_j \langle y_k - x_k,\, z_j\rangle = \min_{l \in J_k} \langle y_k - x_k,\, z_l\rangle,$$

or

$$\sum_{j \in J_k} \lambda_j \big[\langle y_k - x_k,\, z_j\rangle - \min_{l \in J_k} \langle y_k - x_k,\, z_l\rangle\big] = 0.$$

This implies $\lambda_j = 0$ for any $j \notin J_k^0$ (the term in the square brackets is positive for $j \notin J_k^0$). Thus, Proposition 2 is proved.

Suppose that $I_k^- \ne \emptyset$ and consider for $\alpha \ge 0$ the following function:

$$g(\alpha) = \max_{j \in J_k} \|x_k + \alpha(y_k - x_k) - z_j\|^2 - \max_{i \in I_k^-} \|x_k + \alpha(y_k - x_k) - z_i\|^2. \tag{4}$$

Lemma 1. The function $g(\cdot)$ is monotone decreasing and has a unique zero $\alpha_0 > 0$, which coincides with the value $\alpha_k$ given by formula (3).
Proof. First we note that $g(0) > 0$ by the definition of the sets $J_k$, $I_k$. Rewrite $g(\alpha)$ as follows:

$$g(\alpha) = \min_{i \in I_k^-} \Big[\max_{j \in J_k} \|x_k + \alpha(y_k - x_k) - z_j\|^2 - \|x_k + \alpha(y_k - x_k) - z_i\|^2\Big].$$

For fixed $j$ and $i$ the difference of squared norms transforms as

$$\|x_k + \alpha(y_k - x_k) - z_j\|^2 - \|x_k + \alpha(y_k - x_k) - z_i\|^2 = \langle 2x_k + 2\alpha(y_k - x_k) - (z_j + z_i),\, z_i - z_j\rangle$$
$$= 2\alpha\langle y_k - x_k,\, z_i - z_j\rangle + \langle (x_k - z_j) + (x_k - z_i),\, (z_i - x_k) - (z_j - x_k)\rangle$$
$$= 2\alpha\langle y_k - x_k,\, (z_i - y_k) + (y_k - z_j)\rangle + \|x_k - z_j\|^2 - \|x_k - z_i\|^2.$$

Taking the maximum over $j \in J_k$ and then the minimum over $i \in I_k^-$ gives

$$g(\alpha) = \min_{i \in I_k^-} \big[2\alpha\langle y_k - x_k,\, z_i - y_k\rangle - \|x_k - z_i\|^2\big] + \max_{j \in J_k} \big[2\alpha\langle y_k - x_k,\, y_k - z_j\rangle + \|x_k - z_j\|^2\big].$$

Since $\|x_k - z_j\| = d(x_k)$ for any $j \in J_k$, and

$$\max_{j \in J_k} \langle y_k - x_k,\, y_k - z_j\rangle = \langle y_k - x_k,\, y_k\rangle - \min_{j \in J_k} \langle y_k - x_k,\, z_j\rangle = 0$$

due to Proposition 1, we obtain the following representation of $g(\cdot)$:

$$g(\alpha) = \min_{i \in I_k^-} \big[2\alpha\langle y_k - x_k,\, z_i - y_k\rangle - \|x_k - z_i\|^2\big] + d^2(x_k). \tag{5}$$

Since $\langle y_k - x_k,\, z_i - y_k\rangle < 0$ for any $i \in I_k^-$, the function $g(\cdot)$ is, up to the constant $d^2(x_k)$, the minimum of a finite set of strictly decreasing affine functions of $\alpha$. Hence $g(\cdot)$ is strictly decreasing. Using the inequality $g(0) > 0$, we obtain that there exists a unique zero $\alpha_0 > 0$ of $g(\cdot)$. Due to the monotonicity of the function $g(\cdot)$ the value $\alpha_0$ can be found as

$$\alpha_0 = \max\{\alpha \ge 0 : g(\alpha) \ge 0\}.$$

Let us prove now that $\alpha_0$ coincides with $\alpha_k$ defined by formula (3). Choose an arbitrary $\alpha > \alpha_k$. Then from the definition of $\alpha_k$ there exists $i \in I_k^-$ such that

$$\frac{\|z_i - x_k\|^2 - d^2(x_k)}{2\langle y_k - x_k,\, z_i - y_k\rangle} < \alpha.$$

Since $\langle y_k - x_k,\, z_i - y_k\rangle < 0$, we obtain

$$2\alpha\langle y_k - x_k,\, z_i - y_k\rangle - \|z_i - x_k\|^2 + d^2(x_k) < 0.$$

A comparison with (5) gives $g(\alpha) < 0$. Using the monotonicity of $g(\cdot)$ we get $\alpha > \alpha_0$. Thus, for any $\alpha > \alpha_k$ we have $\alpha > \alpha_0$; hence $\alpha_k \ge \alpha_0$. Let us prove the opposite inequality. For any $i \in I_k^-$ we have

$$\frac{\|z_i - x_k\|^2 - d^2(x_k)}{2\langle y_k - x_k,\, z_i - y_k\rangle} \ge \alpha_k.$$
This implies

$$2\alpha_k\langle y_k - x_k,\, z_i - y_k\rangle - \|z_i - x_k\|^2 + d^2(x_k) \ge 0 \qquad \text{for any } i \in I_k^-.$$

A comparison with (5) gives $g(\alpha_k) \ge 0$. Hence $\alpha_k \le \alpha_0$, and Lemma 1 is proved.

Lemma 2. For any $\alpha > 0$ the maximum in the expression

$$\max_{j \in J_k} \|x_k + \alpha(y_k - x_k) - z_j\|^2$$

is attained on the subset $J_k^0$, and all $j \in J_k^0$ are maximizing.

Proof. Let $\alpha > 0$ and $s, q \in J_k$. Consider the function

$$\varphi(\alpha) = \|x_k + \alpha(y_k - x_k) - z_s\|^2 - \|x_k + \alpha(y_k - x_k) - z_q\|^2,$$

with $\varphi(0) = 0$. Since $\varphi'(\alpha) = 2\langle y_k - x_k,\, z_q - z_s\rangle > 0$ for $s \in J_k^0$, $q \in J_k \setminus J_k^0$, $\varphi(\alpha)$ increases strictly with $\alpha > 0$. On the other hand, $\varphi'(\alpha) \equiv 0$ for $s, q \in J_k^0$, which proves the lemma.

Lemma 3. For any $i \in I_k^+$ and any $\alpha \ge 0$ the following inequality holds:

$$\max_{j \in J_k} \|x_k + \alpha(y_k - x_k) - z_j\|^2 > \|x_k + \alpha(y_k - x_k) - z_i\|^2.$$

Proof. Let $i \in I_k^+$ and consider a fixed arbitrary index $s \in J_k^0$. One can easily see from the definition of $I_k^+$ and Proposition 1 that the function

$$\varphi(\alpha) = \|x_k + \alpha(y_k - x_k) - z_s\|^2 - \|x_k + \alpha(y_k - x_k) - z_i\|^2$$

satisfies $\varphi(0) > 0$ and $\varphi'(\alpha) = 2\langle y_k - x_k,\, z_i - z_s\rangle = 2\langle y_k - x_k,\, z_i - y_k\rangle + 2\langle y_k - x_k,\, y_k - z_s\rangle \ge 0$; hence $\varphi(\alpha)$ is nondecreasing for $\alpha \ge 0$. This gives the proof.

Now define $x_{k+1}$ by

$$x_{k+1} = \begin{cases} x_k + \alpha_k(y_k - x_k), & \alpha_k \le 1, \\ y_k, & \alpha_k > 1. \end{cases} \tag{6}$$

Lemma 4. The following equality holds:

$$\mathrm{num}(E(x_{k+1})) = \begin{cases} J_k^0 \cup I_k^0, & \alpha_k \le 1, \\ J_k^0, & \alpha_k > 1, \end{cases} \tag{7}$$

where

$$I_k^0 = \{l \in I_k^- : \|x_{k+1} - z_l\|^2 = \max_{i \in I_k^-} \|x_{k+1} - z_i\|^2\}.$$

Proof. Consider first the case $\alpha_k \le 1$. Since $\alpha_k$ is a zero of the function $g(\cdot)$ defined by (4), we have

$$\max_{j \in J_k} \|x_{k+1} - z_j\|^2 = \max_{i \in I_k^-} \|x_{k+1} - z_i\|^2. \tag{8}$$
Taking into account the definition of the set $I_k^0$, we obtain

$$\max_{j \in J_k} \|x_{k+1} - z_j\|^2 > \max_{i \in I_k^- \setminus I_k^0} \|x_{k+1} - z_i\|^2.$$

Using Lemma 3, we get

$$\max_{j \in J_k} \|x_{k+1} - z_j\|^2 > \max_{i \in (I_k^- \setminus I_k^0) \cup I_k^+} \|x_{k+1} - z_i\|^2.$$

With Lemma 2 we get for any $s \in J_k^0$

$$\|x_{k+1} - z_s\|^2 = \max_{j \in J_k} \|x_{k+1} - z_j\|^2 > \max_{i \in (I_k^- \setminus I_k^0) \cup I_k^+ \cup (J_k \setminus J_k^0)} \|x_{k+1} - z_i\|^2. \tag{9}$$

Taking into account the definition of the set $I_k^0$ and (8), we obtain that relation (9) is valid for any $s \in J_k^0 \cup I_k^0$. Since

$$J_k^0 \cup I_k^0 \cup (I_k^- \setminus I_k^0) \cup I_k^+ \cup (J_k \setminus J_k^0) = \mathrm{num}(Z),$$

we conclude that

$$\mathrm{num}(E(x_{k+1})) = J_k^0 \cup I_k^0.$$

Let now $\alpha_k > 1$ but $I_k^- \ne \emptyset$. Then $g(1) > 0$, which means that

$$\max_{j \in J_k} \|x_{k+1} - z_j\|^2 > \max_{i \in I_k^-} \|x_{k+1} - z_i\|^2.$$

With Lemma 3 this implies

$$\max_{j \in J_k} \|x_{k+1} - z_j\|^2 > \max_{i \in I_k^- \cup I_k^+} \|x_{k+1} - z_i\|^2.$$

Using Lemma 2, we obtain for any $s \in J_k^0$

$$\|x_{k+1} - z_s\|^2 = \max_{j \in J_k} \|x_{k+1} - z_j\|^2 > \max_{i \in I_k^- \cup I_k^+ \cup (J_k \setminus J_k^0)} \|x_{k+1} - z_i\|^2.$$

Since

$$J_k^0 \cup I_k^- \cup I_k^+ \cup (J_k \setminus J_k^0) = \mathrm{num}(Z),$$

we have

$$\mathrm{num}(E(x_{k+1})) = J_k^0.$$

If $I_k^- = \emptyset$ then $I_k = I_k^+$, and from Lemma 3 we get

$$\max_{j \in J_k} \|x_{k+1} - z_j\|^2 > \max_{i \in I_k} \|x_{k+1} - z_i\|^2.$$

With the help of Lemma 2 this gives for any $s \in J_k^0$

$$\|x_{k+1} - z_s\|^2 = \max_{j \in J_k} \|x_{k+1} - z_j\|^2 > \max_{i \in I_k \cup (J_k \setminus J_k^0)} \|x_{k+1} - z_i\|^2.$$
Since

$$J_k^0 \cup I_k \cup (J_k \setminus J_k^0) = \mathrm{num}(Z),$$

we conclude that

$$\mathrm{num}(E(x_{k+1})) = J_k^0,$$

which completes the proof.

Lemma 2 shows that for any $j \in J_k^0$ the expression $\|y_k - z_j\|$ assumes the same value. By Proposition 2 this value is the Chebyshev radius of the set $\{z_j : j \in J_k^0\}$, which we denote by $r_k$.

Lemma 5. The following formulas for $r_k$ are valid:

$$r_k = \sqrt{d^2(x_k) - \|y_k - x_k\|^2}, \tag{10}$$

$$r_k = \sqrt{d^2(x_{k+1}) - \|y_k - x_{k+1}\|^2}. \tag{11}$$

Proof. Choose an arbitrary $s \in J_k^0$. Then we have

$$z_s - x_k = (y_k - x_k) + (z_s - y_k).$$

By squaring both sides of this expression we obtain

$$\|z_s - x_k\|^2 = \|y_k - x_k\|^2 + r_k^2 + 2\langle y_k - x_k,\, z_s - y_k\rangle.$$

From Proposition 1 and the definition of the set $J_k^0$ we have

$$\langle y_k - x_k,\, z_s - y_k\rangle = 0. \tag{12}$$

Using that $\|z_s - x_k\| = d(x_k)$ for any $s \in J_k^0 \subset J_k$, we obtain (10). As for (11), we have

$$z_s - x_{k+1} = (y_k - x_{k+1}) + (z_s - y_k).$$

Squaring gives again

$$\|z_s - x_{k+1}\|^2 = \|y_k - x_{k+1}\|^2 + r_k^2 + 2\langle y_k - x_{k+1},\, z_s - y_k\rangle.$$

Since $y_k - x_{k+1} = (1 - \alpha_k)(y_k - x_k)$, we get, analogously to (12), $\langle y_k - x_{k+1},\, z_s - y_k\rangle = 0$. Since $s \in J_k^0 \subset \mathrm{num}(E(x_{k+1}))$ (see (7)), we conclude that

$$\|z_s - x_{k+1}\|^2 = d^2(x_{k+1}).$$

Thus we obtain (11), and the lemma is proved.

Lemma 6. If $\alpha_k < 1$ then $r_{k+1} > r_k$.

Proof. In fact, from (10) we have

$$r_{k+1} = \sqrt{d^2(x_{k+1}) - \|y_{k+1} - x_{k+1}\|^2}, \tag{13}$$
where $y_{k+1} \in \mathrm{co}\,E(x_{k+1})$ is the nearest point to $x_{k+1}$. Comparing (11) and (13), we see that we have to prove the inequality

$$\|y_{k+1} - x_{k+1}\|^2 < \|y_k - x_{k+1}\|^2. \tag{14}$$

To do this we note that from Proposition 2 and (7) the point $y_k \in \mathrm{co}\,E(x_{k+1})$. Choose an arbitrary point $z_q$, $q \in I_k^0$. It is seen from (7) that $(1-\beta)y_k + \beta z_q \in \mathrm{co}\,E(x_{k+1})$ for any $\beta \in [0,1]$. Consider the expression

$$\|x_{k+1} - (1-\beta)y_k - \beta z_q\|^2 = \|x_{k+1} - y_k + \beta(y_k - z_q)\|^2$$
$$= \|x_{k+1} - y_k\|^2 + 2\beta(1-\alpha_k)\langle y_k - x_k,\, z_q - y_k\rangle + \beta^2\|y_k - z_q\|^2.$$

Since $q \in I_k^0 \subset I_k^-$, the following inequality holds:

$$\langle y_k - x_k,\, z_q - y_k\rangle < 0.$$

Hence, taking a small enough $\beta_0 > 0$, we obtain

$$\|x_{k+1} - (1-\beta_0)y_k - \beta_0 z_q\|^2 < \|x_{k+1} - y_k\|^2.$$

Since $y_{k+1} \in \mathrm{co}\,E(x_{k+1})$ is the nearest point to $x_{k+1}$ and $(1-\beta_0)y_k + \beta_0 z_q \in \mathrm{co}\,E(x_{k+1})$, we get

$$\|x_{k+1} - y_{k+1}\|^2 \le \|x_{k+1} - (1-\beta_0)y_k - \beta_0 z_q\|^2 < \|x_{k+1} - y_k\|^2.$$

Thus (14) is true, and the inequality $r_{k+1} > r_k$ also holds.

5. The Properties of Finiteness and Monotonicity

Theorem 2. Algorithm 1 terminates within a finite number of iterations.

Proof. Notice that the inequality $r_{k+1} > r_k$ of Lemma 6 was obtained under the assumptions $x_k \notin \mathrm{co}\,E(x_k)$ and $\alpha_k < 1$. It is easily seen that after a finite number of iterations one of these conditions will be violated. Indeed, there is only a finite number of distinct sets $\{z_j : j \in J_k^0\}$, $k = 1, 2, \ldots$; this number does not exceed the number of all subsets of $Z$. By the properties of the algorithm each set $\{z_j : j \in J_k^0\}$ can arise only once, because the corresponding Chebyshev radius $r_k$ increases strictly with $k$. Hence, after a finite number of iterations we obtain either $x_k \in \mathrm{co}\,E(x_k)$ or $\alpha_k \ge 1$. The first case means termination of the algorithm. Consider the case $\alpha_k \ge 1$. Let $x_{k+1}$ be defined by formula (6). Then $x_{k+1} = y_k$, and from Proposition 2 and (7) we have

$$x_{k+1} = y_k \in \mathrm{co}\{z_j : j \in J_k^0\} \subset \mathrm{co}\,E(x_{k+1}).$$

Thus, by Theorem 1, $y_k$ is the Chebyshev center and the algorithm stops. Theorem 2 is proved.

Theorem 3. Algorithm 2 terminates within a finite number of iterations.

Proof.
If $y_k \in \mathrm{co}\,E(x_k)$ is the nearest point to $x_k$, then Lemma 2 (with $\alpha = 1$) implies that $\{z_j : j \in J_k^0\}$ is the subset of those points of $E(x_k)$ which have the largest
distance from $y_k$. Then from Proposition 2 and Theorem 1 we obtain that $y_k$ is the Chebyshev center of $E(x_k)$. So, with the uniqueness of the Chebyshev center, we deduce that the point $y_k$ obtained at Step 2 of Algorithm 2 is the nearest point to $x_k$. It remains to prove that there is an exit from the recursion. In fact, if the point $x_k$ is not the Chebyshev center of $Z$, then the set $E(x_k)$ has fewer points than $Z$ has. So, the depth of the recursion is at most $m$. This proves Theorem 3.

Theorem 4. Algorithms 1 and 2 have the property of monotonicity, that is,

$$d(x_{k+1}) < d(x_k).$$

Proof. From (10) and (11) we have

$$d(x_k) = \sqrt{r_k^2 + \|x_k - y_k\|^2},$$

$$d(x_{k+1}) = \sqrt{r_k^2 + \|x_{k+1} - y_k\|^2} = \sqrt{r_k^2 + (1-\alpha_k)^2\|x_k - y_k\|^2}.$$

This gives the desired result.

6. Numerical Examples

Let us consider some numerical examples characterizing the efficiency of the algorithm. These examples were implemented with a QuickBASIC program on a PC (80486, 33 MHz).

Example 1. $Z = \{z : z = (\zeta_1, \zeta_2, \ldots, \zeta_n)^T,\ \zeta_j = \pm 1,\ j = 1, \ldots, n\}$ (vertices of the unit cube in $R^n$). The number of vertices is $m = 2^n$. The Chebyshev center is $x^* = 0$. The program realizing Algorithm 1 was run with the initial point $x_0 = (1/1, 1/2, \ldots, 1/n)^T$ for the dimensions $n = 2, 3, \ldots, 10$. For each $n$ the solution was obtained in just $n$ iterations. The CPU time for $n = 10$ was 8.62 s.

Example 2. $Z = \{z_i : z_i = (\zeta_1^i, \ldots, \zeta_n^i)^T,\ i = 1, \ldots, m\}$, where $\zeta_j^i$ are random values uniformly distributed on the interval $[-1, 1]$. The results obtained for Algorithm 1 are presented in the following table. The initial point was the same as in Example 1. The average values were computed on the basis of 20 test runs.

n = 10      Number of iterations          Time (s)
   m        min.    av.    max.      min.     av.      max.
  100         8    12.55    18        1.1      3.4     14.66
  200         8    13.45    18        1.49     5.7     12.97
  500         9    15.65    24        2.85    10.26    25.04
 1000        11    17.4     26        8.56    16.77    33.61
 5000        13    21.6     29       71.46   115.06   151.21

The average speed of Algorithm 2 is approximately 1.5 times lower than the speed of Algorithm 1, but the corresponding computer program is shorter and looks more elegant.
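For readers who want to experiment, the recursive Algorithm 2 can be sketched compactly in pure Python. This transcription is our own, not the authors' QuickBASIC program; in particular, the tolerance `tol` used to compare floating-point distances is our addition to the paper's exact-arithmetic formulation.

```python
import math

# A sketch of Algorithm 2: repeat (find E(x); recurse on E(x) to get
# its Chebyshev center y, which is also the nearest point of co E(x)
# to x; step towards y by the explicit alpha of formula (3)).

def _sub(a, b):
    return tuple(ai - bi for ai, bi in zip(a, b))

def _dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def chebyshev_center(Z, x=None, tol=1e-9):
    """Chebyshev center of a finite point set Z (tuples of floats)."""
    if x is None:
        # Step 0: any point of co Z will do; take the centroid.
        x = tuple(sum(c) / len(Z) for c in zip(*Z))
    while True:
        # Step 1: E(x), the points of Z at the largest distance d(x).
        dists = [math.dist(x, z) for z in Z]
        d = max(dists)
        E = [z for z, dz in zip(Z, dists) if dz >= d - tol]
        if len(E) == len(Z):
            return x                      # E(x) = Z: x is the center
        # Step 2: recursive call on the strictly smaller set E(x).
        y = chebyshev_center(E, tol=tol)
        if math.dist(x, y) <= tol:
            return x
        # Step 3: explicit step length, formula (3).
        v = _sub(y, x)
        alpha = math.inf                  # convention when I^- is empty
        for z in Z:
            if z in E:
                continue
            s = _dot(v, _sub(z, y))
            if s < 0:                     # index set I^-
                zx = _sub(z, x)
                alpha = min(alpha, (_dot(zx, zx) - d * d) / (2.0 * s))
        if alpha >= 1.0:
            return y
        # Step 4: move the current point towards y.
        x = tuple(xj + alpha * vj for xj, vj in zip(x, v))

square = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]
center = chebyshev_center(square, x=(0.5, 0.3))   # close to (0.0, 0.0)
```

On the square example above the iteration terminates after two steps, matching the finiteness of Theorem 3; the recursion depth is bounded by the number of points, since each recursive call receives a strictly smaller set.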
Computer tests show that the efficiency of the proposed algorithms is comparable with the speed of the algorithm of [4]. Our test results compare favourably with the test results reported in [1] for various methods to compute the
Chebyshev center proposed by other authors.

Acknowledgement

This work was partly supported by the Alexander von Humboldt Foundation, Germany. The authors thank Prof. Dr. J. Stoer for his warm hospitality at the University of Würzburg, Germany, and for helpful discussions. The authors also thank Dr. M. A. Zarkh for his help with the computer program and for valuable remarks.

References

1. Goldbach R. (1992) Inkugel und Umkugel konvexer Polyeder. Diplomarbeit, Institut für Angewandte Mathematik und Statistik, Julius-Maximilians-Universität Würzburg.
2. Pschenichny B.N. (1980) Convex Analysis and Extremal Problems. Nauka, Moscow.
3. Sekitani K., Yamamoto Y. (1991) A recursive algorithm for finding the minimum norm point in a polytope and a pair of closest points in two polytopes. Sci. Report No. 462, Institute of Socio-Economic Planning, University of Tsukuba, Japan.
4. Welzl E. (1991) Smallest enclosing disks (balls and ellipsoids). Lecture Notes in Computer Science 555, Springer-Verlag, New York, pp. 359-370.