MATRIX BALANCING PROBLEM AND BINARY AHP


Journal of the Operations Research Society of Japan, 2007, Vol. 50, No. 4

Koichi Genma (Shizuoka University), Yutaka Kato (Hosei University), Kazuyuki Sekitani (Shizuoka University)

(Received October 4, 2006)

Abstract: A matrix balancing problem and an eigenvalue problem are transformed into two minimum-norm point problems whose only difference is the norm. The matrix balancing problem is solved by scaling algorithms that are as simple as the power method for the eigenvalue problem. This study gives a proof of global convergence for scaling algorithms and applies them to the Analytic Hierarchy Process (AHP), which traditionally derives priority weights from pairwise comparison values by the eigenvalue method (EM). Scaling algorithms provide the minimum χ² estimate from the pairwise comparison values. The estimate has properties desirable of priority weights, such as right-left symmetry and robust ranking, that are not guaranteed by the EM.

Keywords: AHP, minimum-norm point, matrix scaling, binary AHP, rank reversal

1. Introduction

A variety of biological, statistical and economic data appear in the form of cross-classified tables of counts, which are often expressed compactly as nonnegative matrices. In practice, such nonnegative matrices frequently contain biased entries and missing data, making them inconsistent with theoretical prior requirements. We must then adjust the nonnegative matrix so that it satisfies the prior consistency requirements. Adjusting the entries of a nonnegative matrix is commonly known as a matrix balancing problem or a matrix scaling problem, which involves mathematically and statistically well-posed issues of practical interest. The usual way to adjust the matrix is to multiply its rows and columns by positive constants.
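In matrix form, this adjustment replaces A with D₁AD₂ for positive diagonal matrices D₁ and D₂; a minimal numeric sketch (the 2×2 matrix and multipliers below are arbitrary illustrative values, not data from the paper):

```python
# Row/column adjustment of a nonnegative matrix: multiply row i by d1[i]
# and column j by d2[j], i.e. form D1 * A * D2 entrywise.
# The matrix and scaling factors are arbitrary illustrative values.

A = [[1.0, 4.0],
     [2.0, 3.0]]
d1 = [2.0, 0.5]   # row multipliers (diagonal of D1)
d2 = [1.0, 0.25]  # column multipliers (diagonal of D2)

scaled = [[d1[i] * A[i][j] * d2[j] for j in range(2)] for i in range(2)]

assert scaled == [[2.0, 2.0], [1.0, 0.375]]
```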
A well-studied instance of this problem, arising in transportation planning and input-output analysis, requires that the matrix be adjusted so that its row and column sums equal fixed positive values. A related problem for a square nonnegative matrix requires that the sum of the entries in each row equal the sum of the entries in the corresponding column. This study considers the latter matrix balancing problem, which is applied to the adjustment of social accounting matrices, as described in [5]. The matrix balancing problem is equivalent to minimizing a sum of linear ratios over the positive orthant; see Eaves et al. [5] for details of the equivalence proof. This paper aims to guarantee convergence of algorithms for the equivalent fractional minimization problem and to establish an application of the matrix balancing problem to the Analytic Hierarchy Process (AHP).

AHP, originally developed by Saaty [], is a widely applied multiple-criteria decision making technique in which subjective judgments by decision makers are quantified and summarized into a reciprocal matrix under each criterion. An important feature of AHP is that priority weights are given by a principal eigenvector of the reciprocal matrix. In AHP, solving the eigenvalue problem of a reciprocal matrix is called the eigenvalue method (EM). The validity of the EM is discussed in Saaty [3] and Sekitani and Yamaki [6]. The applicability and practicality of AHP can easily be confirmed in [3]. However, it is well known that the EM has shortcomings such as rank reversal phenomena and right-left asymmetry. To overcome these shortcomings of the EM, this paper applies the minimum χ² method [, 30] to the derivation of priority weights from the reciprocal matrix. It is expected that applying the minimum χ² method to AHP will open up a new evaluation method not only for the reciprocal matrix of AHP but also for nonnegative matrices corresponding to cross-classified tables, e.g., a supermatrix of the Analytic Network Process, the extension of AHP. This study shows that applying the minimum χ² method to AHP reduces to solving the matrix balancing problem, whose computation procedures are generally called scaling algorithms [0, 5]. Scaling algorithms are as simple as the power method, the well-known algorithm for the eigenvalue problem, whose availability contributes to AHP's practicality. By using the equivalence between the matrix balancing problem and the fractional minimization problem [5], this paper also gives a transparent proof of global convergence for scaling algorithms. Therefore, on the computational side, the minimum χ² method is competitive with the EM.

For an irreducible nonnegative matrix, including a reciprocal matrix, this paper introduces a unified framework, a class of minimum-norm point problems [7], covering both the matrix balancing problem and the eigenvalue problem. The unified framework shows that the two problems each find a closest point to the origin under distinct L_p norms. This fact gives necessary and sufficient conditions for equivalence between the matrix balancing problem and the eigenvalue problem.
The equivalence conditions play a key role in comparing the priority weights of the EM with those of the minimum χ² method in Binary AHP, a simplification of AHP due to Takahashi [8] and Jensen [3].

This article is organized as follows: Section 2 shows necessary and sufficient conditions for equivalence between the matrix balancing problem and the eigenvalue problem. Section 3 gives a convergence proof of scaling algorithms for the matrix balancing problem. Section 4 introduces the minimum χ² method for AHP and shows properties of the minimum χ² estimate. Section 5 compares the minimum χ² method with the EM in Binary AHP; the numerical experiments report that the minimum χ² method exhibits rank reversal less frequently than the EM. Conclusions and future extensions are documented in Section 6.

2. Matrix Balancing Problem and Eigenvalue Problem

Before discussing priority weights derived from a pairwise comparison matrix in AHP, we show more general results for nonnegative matrices. For an n×n nonnegative matrix A, we introduce three definitions: sum-symmetric, reducible and irreducible. The matrix A is called sum-symmetric if

$$\sum_{k} a_{lk} = \sum_{k} a_{kl} \quad \text{for all } l = 1, \dots, n. \tag{2.1}$$

The matrix A is said to be reducible either if A is the zero matrix, or if $n \ge 2$ and there exists a permutation matrix P such that
$$PAP^\top = \begin{bmatrix} B & O \\ C & D \end{bmatrix},$$
where B and D are square matrices and O is a zero matrix. The matrix A is irreducible if it is not reducible. Suppose that an n×n matrix A is irreducible; then a matrix balancing problem for A

is defined as the problem of finding a positive vector x such that the matrix $[a_{ij} x_j / x_i]$ is sum-symmetric. Hereafter, we assume that the matrix A is irreducible. For a positive vector x, we define

$$f(x) = \sum_{i=1}^n \sum_{j=1}^n a_{ij} \frac{x_j}{x_i}. \tag{2.2}$$

The following lemma shows that the matrix balancing problem is equivalent to minimizing f(x) of (2.2) over the positive orthant.

Lemma 2.1. There exists an optimal solution of
$$\min_{x>0} f(x), \tag{2.3}$$
and the optimal solution is unique up to a positive scalar multiple. Furthermore, x* is an optimal solution of (2.3) if and only if the n×n matrix $[a_{ij} x^*_j / x^*_i]$ is sum-symmetric.

Proof: See Theorem of [] for the existence of optimal solutions of (2.3). The equivalence between a solution of the matrix balancing problem for A and an optimal solution of (2.3) is shown in Theorem 3 of [5].

For a positive vector x, we define
$$r(x) = \left( \sum_j a_{1j}\frac{x_j}{x_1}, \;\dots,\; \sum_j a_{nj}\frac{x_j}{x_n} \right) \quad\text{and} \tag{2.4}$$
$$c(x) = \left( \sum_i a_{i1}\frac{x_1}{x_i}, \;\dots,\; \sum_i a_{in}\frac{x_n}{x_i} \right). \tag{2.5}$$

By using (2.4) and (2.5), Lemma 2.1 can be restated in the following manner:

Lemma 2.2. Let e be an n-dimensional vector whose entries are all one; then any positive vector x satisfies
$$f(x) = c(x)e = r(x)e. \tag{2.6}$$
A positive vector x satisfies
$$c(x) = r(x) \tag{2.7}$$
if and only if x is an optimal solution of (2.3).

Proof: The proof follows directly from the definitions of r and c and from Lemma 2.1.

A well-known theorem on the principal eigenvector of an irreducible matrix, the Perron-Frobenius Theorem, is the key to transforming the eigenvalue problem into an optimization problem:

Lemma 2.3. Finding the principal eigenvalue of A amounts to solving
$$\min_{x>0} \max\left\{ \sum_j a_{1j}\frac{x_j}{x_1}, \;\dots,\; \sum_j a_{nj}\frac{x_j}{x_n} \right\}, \tag{2.8}$$
whose optimal solution is unique up to a positive scalar multiple.

Proof: This follows directly from the Perron-Frobenius Theorem; see Chapter 4 of [9].

The two optimization problems (2.3) and (2.8) reduce to minimum-norm point problems, respectively, whose norms are dual:

Lemma 2.4. Problem (2.3) and the principal eigenvalue problem are equivalent to
$$\min_{x>0} \|r(x)\|_1 \quad\text{and} \tag{2.9}$$
$$\min_{x>0} \|r(x)\|_\infty, \tag{2.10}$$
respectively, where $\|x\|_1 = \sum_{i=1}^n |x_i|$ and $\|x\|_\infty = \max\{|x_1|, \dots, |x_n|\}$.

Proof: Choose a positive vector x. Since $r_i(x) > 0$ for all i = 1, …, n, we have $\|r(x)\|_1 = r(x)e$ and $\|r(x)\|_\infty = \max\{r_1(x), \dots, r_n(x)\}$. The proof is complete from Lemmas 2.2 and 2.3.

Kalantari et al. [6] also consider the matrix balancing problem (2.3) in a general framework of L_p-norm minimization problems; their norm, however, is measured for the n²-dimensional vector $(a_{11}, a_{12}x_2/x_1, \dots, a_{1n}x_n/x_1, \dots, a_{n1}x_1/x_n, \dots, a_{nn})$. The matrix balancing problem is equivalent to (2.3) and the principal eigenvalue problem to (2.8). By Lemma 2.4, the matrix balancing problem (2.3) and the eigenvalue problem (2.8) implicitly have dual norms of a common positive vector as their objective functions. This is summarized in the following relations between the matrix balancing problem and the eigenvalue problem:

Theorem 2.1. Let x* be an optimal solution of (2.3) and let x# be a right principal eigenvector of A. Then we have
$$r(x^\#) = \lambda e \quad\text{and} \tag{2.11}$$
$$\|r(x^\#)\|_\infty = \frac{1}{n}\|r(x^\#)\|_1 \ge \frac{1}{n}\|r(x^*)\|_1, \tag{2.12}$$
where λ is the principal eigenvalue of A and e is an n-dimensional row vector whose entries are all 1. Moreover, x# is equal to a positive scalar multiple of x* if and only if $\|r(x^\#)\|_\infty = \frac{1}{n}\|r(x^*)\|_1$.

Proof: Since $Ax^\# = \lambda x^\#$, we have
$$r(x^\#) = \left( \frac{\sum_j a_{1j}x^\#_j}{x^\#_1}, \dots, \frac{\sum_j a_{nj}x^\#_j}{x^\#_n} \right) = \left( \frac{\lambda x^\#_1}{x^\#_1}, \dots, \frac{\lambda x^\#_n}{x^\#_n} \right) = (\lambda, \dots, \lambda) = \lambda e, \tag{2.13}$$
implying from Lemma 2.4 that
$$\|r(x^\#)\|_\infty = \lambda = \frac{1}{n}\|r(x^\#)\|_1 \ge \frac{1}{n}\min_{x>0}\|r(x)\|_1 = \frac{1}{n}\|r(x^*)\|_1. \tag{2.14}$$

Suppose that $\|r(x^\#)\|_\infty = \frac{1}{n}\|r(x^*)\|_1$; then it follows from (2.14) that $\|r(x^\#)\|_1 = \|r(x^*)\|_1$. Therefore, x# is also an optimal solution of (2.9), which is equivalent to (2.3) by Lemma 2.4. Thus, Lemma 2.1 implies that x# is equal to a positive scalar multiple of x*. Conversely, suppose that x# equals a positive scalar multiple of x*; then x* is also a right principal eigenvector of A, and hence it follows from (2.13) that
$$\frac{1}{n}\|r(x^*)\|_1 = \frac{1}{n}\|\lambda e\|_1 = \lambda = \|\lambda e\|_\infty = \|r(x^\#)\|_\infty.$$
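The identity r(x#) = λe of (2.11) can be illustrated with the power method; the sketch below is an illustration, not the authors' code, and the consistent 3×3 reciprocal matrix is an arbitrary assumed example:

```python
# Power method for the principal eigenvector, then a check that
# r(x#) = (lambda, ..., lambda) as in (2.11)/(2.13).
# The reciprocal matrix A is an arbitrary illustrative example.

def r(A, x):
    n = len(A)
    return [sum(A[i][j] * x[j] for j in range(n)) / x[i] for i in range(n)]

def power_method(A, iters=200):
    n = len(A)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        s = sum(y)
        x = [yi / s for yi in y]          # normalize so the entries sum to 1
    lam = r(A, x)[0]                      # at a fixed point all entries of r agree
    return x, lam

A = [[1.0, 2.0, 6.0],
     [0.5, 1.0, 3.0],
     [1/6, 1/3, 1.0]]                     # consistent reciprocal matrix, lambda = 3

x, lam = power_method(A)
assert abs(lam - 3.0) < 1e-9
assert all(abs(ri - lam) < 1e-9 for ri in r(A, x))
```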
A pair of left and right principal eigenvectors of A characterizes the equivalence between the matrix balancing problem and the eigenvalue problem as follows:

Corollary 2.1. Suppose that x* is an optimal solution of (2.3) and that x# is a right principal eigenvector of A. If $(1/x^\#_1, \dots, 1/x^\#_n)$ is a left principal eigenvector of A, then
$$\|r(x^\#)\|_\infty = \frac{1}{n}\|r(x^*)\|_1, \tag{2.15}$$
and vice versa.

Proof: Let λ be the principal eigenvalue of A and let e be an n-dimensional row vector whose entries are all 1. Since x# is a right principal eigenvector of A, it follows from (2.11) of Theorem 2.1 that
$$r(x^\#) = \lambda e. \tag{2.16}$$
Suppose that $(1/x^\#_1, \dots, 1/x^\#_n)$ is a left principal eigenvector of A; then
$$c(x^\#) = \left( x^\#_1\sum_i \frac{a_{i1}}{x^\#_i}, \;\dots,\; x^\#_n\sum_i \frac{a_{in}}{x^\#_i} \right) = (\lambda, \dots, \lambda) = \lambda e,$$
implying from (2.16) that $\lambda e = c(x^\#) = r(x^\#)$. Therefore, it follows from Lemma 2.2 that x# is an optimal solution of (2.3) and $\|r(x^\#)\|_1 = \|r(x^*)\|_1$. Since $\lambda = \|r(x^\#)\|_\infty = \frac{1}{n}\|r(x^\#)\|_1$, an optimal solution of (2.3) and a right principal eigenvector of A satisfy (2.15).

Conversely, suppose that an optimal solution x* of (2.3) and a right principal eigenvector x# of A satisfy (2.15); then $\|r(x^\#)\|_1 = n\|r(x^\#)\|_\infty = \|r(x^*)\|_1$, and it follows from Lemma 2.4 that x# is also an optimal solution of (2.3). By Lemma 2.2 and (2.16) we have
$$\lambda e = r(x^\#) = c(x^\#) = \left( x^\#_1\sum_i \frac{a_{i1}}{x^\#_i}, \;\dots,\; x^\#_n\sum_i \frac{a_{in}}{x^\#_i} \right),$$
implying that
$$\left( \sum_i \frac{a_{i1}}{x^\#_i}, \;\dots,\; \sum_i \frac{a_{in}}{x^\#_i} \right) = \lambda\left( \frac{1}{x^\#_1}, \;\dots,\; \frac{1}{x^\#_n} \right).$$
Consequently, $(1/x^\#_1, \dots, 1/x^\#_n)$ is a left principal eigenvector of A.

Corollary 2.1 is evidence that a principal eigenvector of a pairwise comparison matrix exhibits right-left asymmetry [8, 4]. Right-left symmetry will be discussed in Theorem 4.2 of Section 4. The following examples illustrate the equivalence between the matrix balancing problem and the eigenvalue problem:

Example 2.1.
1. Consider a 5×5 doubly stochastic matrix A; then A has a left principal eigenvector (1, 1, 1, 1, 1) and a right principal eigenvector (1, 1, 1, 1, 1), which is also an optimal solution of (2.3). (Indeed, each row and column of A sums to one, so $[a_{ij}x_j/x_i] = A$ is already sum-symmetric at x = e.)
2. Consider the reciprocal matrix
$$A = \begin{bmatrix} 1 & 1/2 & 1/3 & 1/4 \\ 2 & 1 & 2/3 & 1/2 \\ 3 & 3/2 & 1 & 3/4 \\ 4 & 2 & 4/3 & 1 \end{bmatrix};$$
then A has a right principal eigenvector (1, 2, 3, 4), which is also an optimal solution of (2.3), and a left principal eigenvector (1, 1/2, 1/3, 1/4).
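The second matrix of Example 2.1 can be verified directly; the small Python check below is an illustration of the example, not part of the paper:

```python
# Check of Example 2.1: for the consistent reciprocal matrix A built
# from the weights w = (1, 2, 3, 4), the right principal eigenvector is w,
# (1/w_1, ..., 1/w_n) is a left one, and [a_ij * w_j / w_i] is sum-symmetric.

w = [1.0, 2.0, 3.0, 4.0]
n = len(w)
A = [[w[i] / w[j] for j in range(n)] for i in range(n)]

# Right eigenvector: A w = n * w for a consistent matrix.
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
assert all(abs(Aw[i] - n * w[i]) < 1e-9 for i in range(n))

# Left eigenvector: (1/w) A = n * (1/w).
vA = [sum((1.0 / w[i]) * A[i][j] for i in range(n)) for j in range(n)]
assert all(abs(vA[j] - n / w[j]) < 1e-9 for j in range(n))

# Sum-symmetry of B = [a_ij * w_j / w_i] (here B is the all-ones matrix),
# so w is also optimal for the matrix balancing problem (2.3).
B = [[A[i][j] * w[j] / w[i] for j in range(n)] for i in range(n)]
row = [sum(B[l][k] for k in range(n)) for l in range(n)]
col = [sum(B[k][l] for k in range(n)) for l in range(n)]
assert all(abs(row[l] - col[l]) < 1e-9 for l in range(n))
```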

3. Scaling Algorithms for the Matrix Balancing Problem

Algorithms for the class of matrix balancing problems fall into two classes, scaling algorithms and optimization algorithms, which are compared from various viewpoints, including quality of solutions and ease of implementation and use [5]. Schneider et al. [5] conclude that practitioners find scaling algorithms easier to implement and use than optimization algorithms. Scaling algorithms for (2.3) are called diagonal similarity scaling (DSS) algorithms by Schneider et al. [5]. Recently, Huang et al. [0] proposed one of the DSS algorithms, but they did not complete a proof of its global convergence. The RAS algorithm, developed by [, 7, ], is a generalization of the DSS algorithms that can solve (2.3) under the constraints that the row and column sums equal prespecified values. A matrix balancing problem with such equality constraints is not necessarily feasible under an assumption of irreducibility of the matrix A. Convergence of the RAS algorithms is proved under an assumption on the matrix A stronger than irreducibility, which is our only assumption; see, for instance, [] for the stronger assumption. This section gives a transparent proof of global convergence of the DSS algorithms for solving the matrix balancing problem (2.3) under the irreducibility assumption alone. We describe a DSS algorithm consisting of three steps, the second of which differs slightly from that of the existing DSS algorithms of [0] and [5]; the difference does not affect global convergence of DSS algorithms.

Algorithm for solving (2.3):
Step 1. Choose an initial positive vector x⁰ such that $\sum_{i=1}^n x^0_i = 1$ and let t := 0.
Step 2. Let $r := r(x^t) - (a_{11}, \dots, a_{nn})$ and $c := c(x^t) - (a_{11}, \dots, a_{nn})$. Let p satisfy
$$\frac{|r_p - c_p|}{x^t_p} = \max_{k=1,\dots,n} \frac{|r_k - c_k|}{x^t_k}. \tag{3.1}$$
If $|r_p - c_p| \le \varepsilon$, then $x^t$ is an optimal solution of (2.3); stop.
Step 3. Let $\mu := \sqrt{r_p/c_p}$ and
$$x(\mu)_i := \begin{cases} \mu x^t_i & i = p, \\ x^t_i & i \ne p. \end{cases} \tag{3.2}$$
Set $x^{t+1} := x(\mu)/\sum_i x(\mu)_i$ and t := t + 1, and go to Step 2.
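The three steps above can be sketched in Python; this is a minimal illustration rather than the authors' code, and the consistent 4×4 matrix used to exercise it is the one from Example 2.1:

```python
# Minimal sketch of the DSS scaling algorithm (Steps 1-3).
# r_i and c_i exclude the diagonal entry a_ii, as in Step 2;
# mu = sqrt(r_p / c_p) is the exact line search of Lemma 3.2.

def dss_balance(A, eps=1e-10, max_iter=100000):
    n = len(A)
    x = [1.0 / n] * n                                     # Step 1
    for _ in range(max_iter):
        r = [sum(A[i][j] * x[j] for j in range(n) if j != i) / x[i]
             for i in range(n)]
        c = [x[j] * sum(A[i][j] / x[i] for i in range(n) if i != j)
             for j in range(n)]
        p = max(range(n), key=lambda k: abs(r[k] - c[k]) / x[k])   # (3.1)
        if abs(r[p] - c[p]) <= eps:                       # Step 2: stop test
            break
        x[p] *= (r[p] / c[p]) ** 0.5                      # Step 3: scale entry p
        s = sum(x)
        x = [xi / s for xi in x]                          # renormalize so ex = 1
    return x

# The consistent matrix of Example 2.1 balances at x proportional to (1,2,3,4).
w = [1.0, 2.0, 3.0, 4.0]
A = [[wi / wj for wj in w] for wi in w]
x = dss_balance(A)
assert all(abs(x[i] / x[0] - w[i]) < 1e-6 for i in range(4))
```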
The proof of global convergence of the above algorithm consists of four lemmas and one theorem. As a corollary of the theorem, we also obtain global convergence of the two existing DSS algorithms. The following lemma corresponds to the termination criterion of Step 2:

Lemma 3.1. Let $r = r(x) - (a_{11}, \dots, a_{nn})$ and $c = c(x) - (a_{11}, \dots, a_{nn})$. A positive vector x is not an optimal solution of (2.3) if and only if r ≠ c.

Proof: Since r ≠ c is equivalent to r(x) ≠ c(x), the assertion follows directly from Lemma 2.2.

The following lemma shows that the update (3.2) of $x^t$ is a certain type of exact line search for the step size in a descent method of nonlinear programming [33]:

Lemma 3.2. Choose a positive vector x arbitrarily, and let $r = r(x) - (a_{11}, \dots, a_{nn})$ and $c = c(x) - (a_{11}, \dots, a_{nn})$. Suppose that $c_p \ne r_p$ and let
$$x(\mu)_i := \begin{cases} \mu x_i & i = p, \\ x_i & i \ne p, \end{cases} \tag{3.3}$$

then the problem
$$\min_{\mu>0} \; f(x(\mu)) - f(x) \tag{3.4}$$
has the unique optimal solution $\sqrt{r_p/c_p}$, and its optimal objective value is $-(\sqrt{r_p} - \sqrt{c_p})^2$.

Proof: It follows from the definition (3.3) of x(μ) that
$$f(x(\mu)) - f(x) = (\mu - 1)\sum_{i \ne p} a_{ip}\frac{x_p}{x_i} + \left( \frac{1}{\mu} - 1 \right)\sum_{j \ne p} a_{pj}\frac{x_j}{x_p} = \mu c_p + \frac{1}{\mu}r_p - r_p - c_p.$$
Let $g(\mu) = \mu c_p + \mu^{-1}r_p - r_p - c_p$; then, since $c_p > 0$ and $r_p > 0$, g is a strictly convex function over {μ | μ > 0}. The stationarity condition
$$\frac{dg}{d\mu} = c_p - \mu^{-2}r_p = 0$$
gives the unique optimal solution $\mu^* = \sqrt{r_p/c_p}$ of (3.4), and hence
$$g(\mu^*) = \sqrt{r_p c_p} + \sqrt{r_p c_p} - r_p - c_p = -\left( \sqrt{r_p} - \sqrt{c_p} \right)^2.$$

The update (3.2) of $x^t$ strictly reduces the objective value f(x) of (2.2); therefore the sequence {f(x⁰), f(x¹), …} generated by the algorithm is strictly decreasing. This is summarized in the following lemma:

Lemma 3.3. Let p be an index satisfying (3.1); then
$$-\left( \sqrt{r_p} - \sqrt{c_p} \right)^2 \ge f(x^{t+1}) - f(x^t). \tag{3.5}$$
Therefore, $f(x^t) > f(x^{t+1})$ for t = 0, 1, 2, ….

Proof: Since $x^{t+1}$ is equal to a positive scalar multiple of $x(\sqrt{r_p/c_p})$ by Step 3, and f is invariant under positive scaling of its argument, it follows from Lemma 3.2 that
$$0 > -\left( \sqrt{r_p} - \sqrt{c_p} \right)^2 = \min_{\mu>0} f(x(\mu)) - f(x^t) = f(x^{t+1}) - f(x^t).$$

The following lemma guarantees that there exists a compact set in the positive orthant containing {x⁰, x¹, …}:

Lemma 3.4. Let e be an n-dimensional row vector whose entries are all 1. Let Ω = {x | ex = 1 and x > 0} and δ = min{a_ij | a_ij > 0}. Choose x⁰ ∈ Ω arbitrarily; then the iterates $x^t$ of the algorithm satisfy
$$x^t \in \left\{ x \;\middle|\; ex = 1 \text{ and } x \ge \frac{1}{n}\left( \frac{\delta}{f(x^0)} \right)^{n-1} e \right\} \tag{3.6}$$
for all t = 0, 1, ….

Proof: Since $x^t \in \Omega$, there exists an index pair (k, l) such that $x^t_k = \min_{i} x^t_i$, $x^t_l = \max_{i} x^t_i$, and $x^t_l \ge 1/n$. Since the matrix A is irreducible, there exist distinct indices $i_0, i_1, \dots, i_q$ such that $i_0 = k$, $i_q = l$ and $a_{i_j i_{j+1}} > 0$ for all j = 0, …, q−1. By Lemma 3.3 and the arithmetic mean-geometric mean inequality, we have
$$f(x^0) \ge f(x^t) \ge \sum_{j=0}^{q-1} a_{i_j i_{j+1}}\frac{x^t_{i_{j+1}}}{x^t_{i_j}} \ge q\left( \prod_{j=0}^{q-1} a_{i_j i_{j+1}}\frac{x^t_{i_{j+1}}}{x^t_{i_j}} \right)^{1/q} \ge \delta\left( \frac{x^t_{i_q}}{x^t_{i_0}} \right)^{1/q} = \delta\left( \frac{x^t_l}{x^t_k} \right)^{1/q}.$$
Therefore $x^t_l/x^t_k \le (f(x^0)/\delta)^q$ with $q \le n-1$, and since $f(x^0) \ge \delta$ and $x^t_l \ge 1/n$, the definition of the index k yields
$$x^t_i \ge x^t_k \ge \frac{1}{n}\left( \frac{\delta}{f(x^0)} \right)^{n-1} \quad\text{for all } i = 1, \dots, n.$$
The above proof is based on part (a) of Lemma of Eaves et al. [5].

Lemma 3.4 means that the set $\{x \mid ex = 1 \text{ and } x \ge \frac{1}{n}(\delta/f(x^0))^{n-1}e\}$ of (3.6) is compact and is a proper subset of Ω. Therefore accumulation points of {x⁰, x¹, …} exist, and all the functions f, r and c are well defined at any accumulation point. We can now show global convergence of the algorithm:

Theorem 3.1. Let ε = 0 in Step 2 and suppose that the algorithm generates an infinite sequence {x⁰, x¹, …}; then any accumulation point of {x⁰, x¹, …} is an optimal solution of (2.3). Moreover, the algorithm converges globally.

Proof: Since {f(x⁰), f(x¹), …} is monotonically decreasing and $f(x^t) > 0$ for all t = 0, 1, 2, …, the sequence converges to a finite value, say $\bar f$. Let δ = min{a_ij | a_ij > 0} and let
$$\Omega_\delta = \left\{ x \;\middle|\; ex = 1 \text{ and } x \ge \frac{1}{n}\left( \frac{\delta}{f(x^0)} \right)^{n-1} e \right\},$$
where e is an n-dimensional row vector whose entries are all 1. By Lemma 3.4, the compact set $\Omega_\delta$ contains {x⁰, x¹, …}, which therefore has an accumulation point $\bar x \in \Omega_\delta$, and all the functions f, c and r are well defined at $\bar x$. Let $\{x^t \mid t \in T\}$ be a subsequence converging to $\bar x$. With r and c as in Step 2, the function $\max_{k} |r_k(x) - c_k(x)|/x_k$ is continuous over {x | ex = 1 and x > 0}, and the right-hand side of (3.1) is $\max_{k} |r_k(x^t) - c_k(x^t)|/x^t_k$.
Let $p_t$ be an index satisfying (3.1) at the t-th iteration. By the continuity of $\max_k |r_k(x) - c_k(x)|/x_k$, there exists an index $\bar p$ satisfying (3.1) at $\bar x$ such that $\{x^t \mid t \in T \text{ and } p_t = \bar p\}$ is an infinite sequence. Let $\bar T = \{t \in T \mid p_t = \bar p\}$; then it follows from $\lim_{t\to\infty} f(x^t) = \bar f$ that
$$\lim_{\bar T \ni t \to \infty} f(x^t) = \lim_{\bar T \ni t \to \infty} f(x^{t+1}) = \bar f.$$

Therefore, it follows from (3.5) that
$$0 = \lim_{\bar T \ni t \to \infty}\left( f(x^t) - f(x^{t+1}) \right) \ge \lim_{\bar T \ni t \to \infty}\left( \sqrt{r_{\bar p}(x^t) - a_{\bar p \bar p}} - \sqrt{c_{\bar p}(x^t) - a_{\bar p \bar p}} \right)^2 = \left( \sqrt{r_{\bar p}(\bar x) - a_{\bar p \bar p}} - \sqrt{c_{\bar p}(\bar x) - a_{\bar p \bar p}} \right)^2 \ge 0.$$
Thus we have $r_{\bar p}(\bar x) = c_{\bar p}(\bar x)$, and it follows from the definition of $\bar p$ that $r(\bar x) = c(\bar x)$. Lemma 2.2 implies that $\bar x$ is an optimal solution x* of (2.3), and hence $\bar f = f(\bar x) = f(x^*)$. Since the optimal solution x* of (2.3) is unique in {x | ex = 1 and x > 0}, every convergent subsequence of {x⁰, x¹, …} has the same limit point x*. Therefore the algorithm converges globally.

For any index p satisfying $r_p(x^t) \ne c_p(x^t)$, even if it does not attain the maximum in (3.1), all lemmas in this section hold. The selection rule (3.1) of the second step corresponds to the function $\max_{k} |r_k(x) - c_k(x)|/x_k$ at a positive vector x. As stated in the proof of Theorem 3.1, the continuity of the corresponding function over the positive orthant plays the key role in global convergence of the algorithm. Therefore, we may obtain a globally convergent algorithm by replacing (3.1) with an alternative rule for selecting p. In fact, the two existing DSS algorithms adopt selection rules other than (3.1). This is summarized in the following corollary:

Corollary 3.1. Consider the algorithm with (3.1) replaced by
$$r_p - c_p := \max_{k=1,\dots,n}\left( r_k - c_k \right) \quad\text{or} \tag{3.7}$$
$$|r_p - c_p| := \max_{k=1,\dots,n} |r_k - c_k|, \tag{3.8}$$
and let $\{x^t \mid t = 0, 1, 2, \dots\}$ be an infinite sequence generated by the modified algorithm; then the modified algorithm converges globally and the limit point of $\{x^t\}$ is an optimal solution of (2.3).

Proof: Let Ω = {x ∈ Rⁿ | Σᵢ xᵢ = 1 and x > 0}. With r and c as in Step 2, the function $\max_k (r_k(x) - c_k(x))$ is continuous over Ω, and the right-hand side of (3.7) is $\max_k (r_k(x^t) - c_k(x^t))$. Therefore, global convergence of the modified algorithm with (3.7) can be proved in the same manner as Theorem 3.1. In a similar way, we can also prove global convergence of the modified algorithm with (3.8). Huang et al. adopt (3.7) as the selection criterion for p.
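Corollary 3.1 can likewise be exercised numerically; the sketch below swaps the weighted rule (3.1) for the plain rule (3.7) and still balances the same illustrative matrix (this is an illustration under the paper's assumptions, not the authors' code):

```python
# DSS iteration with p chosen by max_k (r_k - c_k), as in (3.7).
# Since the entries of r and c both sum to f(x) minus the diagonal terms,
# max_k (r_k - c_k) is positive whenever r != c.

def dss_balance_plain(A, eps=1e-10, max_iter=100000):
    n = len(A)
    x = [1.0 / n] * n
    for _ in range(max_iter):
        r = [sum(A[i][j] * x[j] for j in range(n) if j != i) / x[i]
             for i in range(n)]
        c = [x[j] * sum(A[i][j] / x[i] for i in range(n) if i != j)
             for j in range(n)]
        p = max(range(n), key=lambda k: r[k] - c[k])      # selection rule (3.7)
        if max(abs(r[k] - c[k]) for k in range(n)) <= eps:
            break
        x[p] *= (r[p] / c[p]) ** 0.5                      # same update as Step 3
        s = sum(x)
        x = [xi / s for xi in x]
    return x

w = [1.0, 2.0, 3.0, 4.0]
A = [[wi / wj for wj in w] for wi in w]
x = dss_balance_plain(A)
assert all(abs(x[i] / x[0] - w[i]) < 1e-6 for i in range(4))
```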
The selection criterion (3.8) is used in [5]. The Zangwill global convergence theory [33] may also yield another proof of global convergence of these DSS algorithms. Optimization algorithms for (2.3) have been studied recently [5, 6]. Kalantari et al. [6] show polynomial-time solvability of (2.3) to any prescribed accuracy using interior-point Newton methods. Johnson et al. [5] develop the DomEig algorithm by reducing (2.3) to a minimum dominant eigenvalue problem.

4. A χ² Estimation Model for AHP and Binary AHP

AHP, first proposed by Saaty [], is a decision making approach in which a decision maker's preferences are elicited through pairwise comparisons of n alternatives on a ratio scale.

The comparison of alternative i with alternative j is quantified into a pairwise comparison value $a_{ij}$, meaning that alternative i is $a_{ij}$ times as important as alternative j. Let $a_{ii} = 1$ for all i = 1, …, n; then all pairwise comparison values are stored in a reciprocal matrix A = [a_ij], that is, $a_{ij} = 1/a_{ji}$ and $a_{ij} > 0$ for all i, j = 1, …, n. We often assume that a priority vector x, whose i-th entry is the ideal importance of alternative i, satisfies
$$a_{ij} = \frac{x_i}{x_j} \quad (i, j = 1, \dots, n). \tag{4.1}$$
Saaty recommends the eigenvalue method (EM) for deriving a priority vector from a reciprocal matrix A. Therefore, a priority vector of Saaty's AHP is a principal eigenvector of A, that is, an optimal solution of the eigenvalue problem (2.8). However, the EM has been criticized from both the prioritization and consistency points of view, and several new techniques have been developed based on the statistical model (4.1), e.g., the geometric means of the row entries of A, and an optimal solution of
$$\min_{x>0} \; Q(x) = \sum_{i=1}^n\sum_{j=1}^n \frac{\left( a_{ij} - x_i/x_j \right)^2}{x_i/x_j}, \tag{4.2}$$
which is called the minimum χ² method by Jensen []. Strictly speaking, Q(x) is not an index of Pearson's χ² goodness-of-fit, because $a_{ij}$ is not an observed frequency and $x_i/x_j$ is not an expected theoretical frequency. The minimum χ² method for the matrix A minimizes a sum of linear ratios over the positive orthant. This fractional minimization problem reduces to the matrix balancing problem (2.3) for the matrix $[a_{ij}^2 + 1]$ by the following two lemmas:

Lemma 4.1. For every matrix A and every positive vector x,
$$Q(x) = \sum_{i=1}^n\sum_{j=1}^n \left( a_{ij}^2 + 1 \right)\frac{x_j}{x_i} - 2\sum_{i=1}^n\sum_{j=1}^n a_{ij}. \tag{4.3}$$

Proof: Since $\sum_i\sum_j x_i/x_j = \sum_i\sum_j x_j/x_i$, it follows that
$$Q(x) = \sum_{i,j}\left( a_{ij}^2\frac{x_j}{x_i} - 2a_{ij} + \frac{x_i}{x_j} \right) = \sum_{i,j}\left( a_{ij}^2 + 1 \right)\frac{x_j}{x_i} - 2\sum_{i,j} a_{ij}.$$

Lemma 4.2. Suppose that A is an n×n matrix; then the optimization problem (4.2) has a unique optimal solution up to a positive scalar multiple, which is equal to an optimal solution of $\min_{x>0}\sum_{j}\sum_{i}(a_{ij}^2+1)x_j/x_i$. If the optimal value of (4.2) is 0, then $a_{ij} = x^*_i/x^*_j$ for all i, j = 1, …, n.

Proof: Lemma 4.1 implies that (4.2) is equivalent to $\min_{x>0}\sum_{j}\sum_{i}(a_{ij}^2+1)x_j/x_i$.
Let x be a positive vector; then $(a_{ij} - x_i/x_j)^2/(x_i/x_j) \ge 0$, where equality holds if and only if $a_{ij} = x_i/x_j$. Therefore, Q(x*) = 0 is equivalent to $a_{ij} = x^*_i/x^*_j$ for all i, j = 1, …, n.

Lemma 4.2 implies that the minimum χ² estimate for the reciprocal matrix A is unique up to a scalar multiple and is completely computed by applying the scaling algorithms to the matrix $[a_{ij}^2 + 1]$. Furthermore, the minimum χ² estimate derived from (4.2) enjoys the following two properties, which are desirable from the prioritization point of view:

Theorem 4.1 (Row dominance). Suppose that A is a nonnegative matrix and that there is a pair (l, k) such that $a_{lj} \ge a_{kj}$ and $a_{jl} \le a_{jk}$ for all j = 1, …, n; then an optimal solution x* of (4.2) satisfies $x^*_l \ge x^*_k$. Moreover, if there is an index h such that $a_{lh} > a_{kh}$ or $a_{hl} < a_{hk}$, then $x^*_l > x^*_k$.

Proof: Let $b_{ij} = a_{ij}^2 + 1$ for all i, j = 1, …, n. Without loss of generality, suppose that $a_{1j} \ge a_{2j}$ for all j and $a_{i1} \le a_{i2}$ for all i; then we have
$$b_{1j} \ge b_{2j} \text{ for all } j = 1, \dots, n \quad\text{and}\quad b_{i1} \le b_{i2} \text{ for all } i = 1, \dots, n. \tag{4.4}$$
Let x* be an optimal solution of (4.2); then it follows from Lemma 4.2 and Lemma 2.2 that
$$\sum_j b_{1j}\frac{x^*_j}{x^*_1} = x^*_1\sum_i \frac{b_{i1}}{x^*_i} \quad\text{and}\quad \sum_j b_{2j}\frac{x^*_j}{x^*_2} = x^*_2\sum_i \frac{b_{i2}}{x^*_i}. \tag{4.5}$$
It follows from (4.5), (4.4) and x* > 0 that
$$\left( x^*_1 \right)^2 = \frac{\sum_j b_{1j}x^*_j}{\sum_i b_{i1}/x^*_i} \ge \frac{\sum_j b_{2j}x^*_j}{\sum_i b_{i2}/x^*_i} = \left( x^*_2 \right)^2, \tag{4.6}$$
implying that $x^*_1 \ge x^*_2$. If $a_{1h} > a_{2h}$, then $b_{1h} > b_{2h}$ and hence $\sum_j b_{1j}x^*_j > \sum_j b_{2j}x^*_j$; this means by (4.6) that $x^*_1 > x^*_2$.

Both the minimum χ² method and the EM satisfy row dominance (for the EM this is proved in [] using the fact that A is reciprocal); however, the EM does not satisfy right-left symmetry, which is also desirable on psychological grounds [8, 4]. The minimum χ² method satisfies right-left symmetry:

Theorem 4.2 (Right-Left Symmetry). Suppose that A is an n×n matrix. Let x* be an optimal solution of (4.2) and let y* be an optimal solution of
$$\min_{y>0} \sum_{i=1}^n\sum_{j=1}^n \frac{\left( a_{ji} - y_i/y_j \right)^2}{y_i/y_j}; \tag{4.7}$$
then there exists a constant c > 0 such that $x^*_i = c/y^*_i$ for all i = 1, …, n.

Proof: It follows from Lemma 4.1 that (4.7) is equivalent to
$$\min_{y>0} \sum_{i=1}^n\sum_{j=1}^n \left( a_{ji}^2 + 1 \right)\frac{y_j}{y_i}. \tag{4.8}$$
Let $b_{ij} = a_{ij}^2 + 1$ and substitute $x_i = 1/y_i$; then
$$\min_{y>0} \sum_{i,j} b_{ji}\frac{y_j}{y_i} = \min_{y>0} \sum_{i,j} b_{ij}\frac{y_i}{y_j} = \min_{x>0} \sum_{i,j} b_{ij}\frac{x_j}{x_i}.$$
Therefore, it follows from Lemma 4.2 that there exists a positive number c such that $x^*_i = c/y^*_i$ for all i = 1, …, n.
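The algebra behind Lemma 4.1 and the substitution in the proof of Theorem 4.2 can be spot-checked numerically; a sketch with an arbitrary positive matrix and positive vector (illustrative values, not from the paper):

```python
# Numerical check of Lemma 4.1 and of the substitution y = 1/x used in
# the proof of Theorem 4.2.  A and x are arbitrary illustrative values.

n = 3
A = [[1.0, 3.0, 0.5],
     [1/3, 1.0, 2.0],
     [2.0, 0.5, 1.0]]
x = [1.0, 0.7, 2.5]

def Q(M, v):
    # chi-square style objective (4.2)
    return sum((M[i][j] - v[i] / v[j]) ** 2 / (v[i] / v[j])
               for i in range(n) for j in range(n))

# Lemma 4.1: Q(x) = sum (a_ij^2 + 1) x_j/x_i - 2 * sum a_ij.
rhs = (sum((A[i][j] ** 2 + 1) * x[j] / x[i] for i in range(n) for j in range(n))
       - 2 * sum(A[i][j] for i in range(n) for j in range(n)))
assert abs(Q(A, x) - rhs) < 1e-9

# Theorem 4.2's proof: evaluating (4.7)'s objective (A transposed) at
# y = (1/x_1, ..., 1/x_n) gives the same value as Q at x.
At = [[A[j][i] for j in range(n)] for i in range(n)]
y = [1.0 / xi for xi in x]
assert abs(Q(At, y) - Q(A, x)) < 1e-9
```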

Positivity of all entries of the matrix B is the key premise for the proofs of both Theorem 4.1 and Theorem 4.2. In fact, Theorem 4.1 holds for any positive matrix B satisfying (4.4) with $b_{1h} > b_{2h}$ for some h ∈ {1, …, n}, and Theorem 4.2 holds for any positive matrix B. Therefore, neither theorem requires the assumption that the given matrix A is reciprocal.

Saaty recommends choosing the pairwise comparison value $a_{ij}$ from {1/9, 1/7, 1/5, 1/3, 1, 3, 5, 7, 9}, called the 1-9 scale. Takahashi [8] and Jensen [3] independently simplify the 1-9 scale to {1/α, α}, where the coding parameter α is greater than 1. This simplest AHP, called Binary AHP, has every pairwise comparison value $a_{ij} \in \{1/\alpha, \alpha\}$, and the simplification leads to analytical priority weights [7, 8]; see Nishizawa [9, 0] for recent extensions of Binary AHP. For a given coding parameter α, the pairwise comparison matrix of Binary AHP is denoted by A(α); each off-diagonal element is α or 1/α, and A(α) is reciprocal. In AHP with the 1-9 scale, the minimum χ² estimate for the pairwise comparison matrix A is a solution of the matrix balancing problem for the matrix $[a_{ij}^2 + 1] \ne A$. In Binary AHP, however, the minimum χ² estimate for A(α) is also a solution of the matrix balancing problem for A(α) itself:

Theorem 4.3. Choose any α > 1 and let A = A(α); then an optimal solution x* of (4.2) coincides with that of (2.3), and $Q(x^*) = (\alpha + 1/\alpha)(f(x^*) - n^2)$.

Proof: Let $K^+ = \{(i,j) \mid a_{ij} = \alpha\}$ and $K^- = \{(i,j) \mid a_{ij} = 1/\alpha\}$; then $(i,j) \in K^+$ if and only if $(j,i) \in K^-$. Therefore,
$$\sum_{i,j} a_{ij}\frac{x_j}{x_i} = \alpha\sum_{(i,j)\in K^+}\frac{x_j}{x_i} + \frac{1}{\alpha}\sum_{(i,j)\in K^-}\frac{x_j}{x_i} + n = \sum_{(i,j)\in K^+}\left( \frac{\alpha x_j}{x_i} + \frac{x_i}{\alpha x_j} \right) + n. \tag{4.9}$$
In the same fashion, we have
$$\sum_{i,j}\left( a_{ij}^2 + 1 \right)\frac{x_j}{x_i} = \left( \alpha^2 + 1 \right)\sum_{(i,j)\in K^+}\frac{x_j}{x_i} + \left( \frac{1}{\alpha^2} + 1 \right)\sum_{(i,j)\in K^-}\frac{x_j}{x_i} + 2n = \left( \alpha + \frac{1}{\alpha} \right)\sum_{(i,j)\in K^+}\left( \frac{\alpha x_j}{x_i} + \frac{x_i}{\alpha x_j} \right) + 2n. \tag{4.10}$$
It follows from Lemma 4.2, (4.9) and (4.10) that both optimization problems (4.2) and (2.3) reduce to $\min_{x>0}\sum_{(i,j)\in K^+}\left( \alpha x_j/x_i + x_i/(\alpha x_j) \right)$.
Since $\sum_{(i,j)\in K^+}\left( \alpha x_j/x_i + x_i/(\alpha x_j) \right) = \sum_{i=1}^n\sum_{j\ne i} a_{ij}x_j/x_i$, it follows from Lemma 2.1 that both optimization problems (4.2) and

(2.3) have the same optimal solution as $\min_{x>0}\sum_{i=1}^n\sum_{j\ne i} a_{ij}x_j/x_i$. Let x* be an optimal solution of (4.2); then it follows from $\sum_{i,j} a_{ij} = (\alpha + 1/\alpha)n(n-1)/2 + n$ that
$$Q(x^*) = \left( \alpha + \frac{1}{\alpha} \right)\left( f(x^*) - n \right) + 2n - 2\sum_{i,j} a_{ij} = \left( \alpha + \frac{1}{\alpha} \right)\left( f(x^*) - n^2 \right).$$

Sekitani and Yamaki [6] give an interpretation of the EM in AHP with the 1-9 scale by using the i-th entry of r(x), i.e.,
$$r_i(x) = \sum_j a_{ij}\frac{x_j}{x_i} = \frac{x_i + \sum_{j\ne i} a_{ij}x_j}{x_i}. \tag{4.11}$$
In (4.11), $x_i$ is called the self-evaluation value of alternative i and $\sum_{j\ne i} a_{ij}x_j$ is called the external evaluation value of alternative i. Then $\sum_{j\ne i} a_{ij}x_j / x_i$ measures the gap between the self-evaluation value and the external evaluation value of alternative i, and is called the over-estimation ratio of alternative i. Therefore, each entry of r(x) is the corresponding over-estimation ratio plus 1. By Lemma 2.4, the EM solves $\min_{x>0}\|r(x)\|_\infty$. Letting e be the all-one vector, each entry of r(x) − e is the corresponding over-estimation ratio. Since $\min_{x>0}\|r(x) - e\|_\infty$ is equivalent to $\min_{x>0}\|r(x)\|_\infty$, the EM minimizes the largest over-estimation ratio [6]. Theorem 4.3 implies that the minimum χ² method is equivalent to $\min_{x>0}\|r(x)\|_1$; since $\min_{x>0}\|r(x)\|_1$ is equivalent to $\min_{x>0}\|r(x) - e\|_1$, the minimum χ² method for A(α) in Binary AHP minimizes the sum of the over-estimation ratios. Binary AHP admits the following relation between the EM and the minimum χ² method:

Corollary 4.1. Choose any α > 1. Let A = A(α) and let $\lambda_{\max}$ be the principal eigenvalue of A(α); then
$$n\left( \alpha + \frac{1}{\alpha} \right)\left( \lambda_{\max} - n \right) \ge Q(x^*),$$
where x* is an optimal solution of (4.2).

Proof: Let x* be an optimal solution of (4.2); then it follows from Theorem 4.3 and (2.12) that
$$Q(x^*) = \left( \alpha + \frac{1}{\alpha} \right)\left( \|r(x^*)\|_1 - n^2 \right) \le \left( \alpha + \frac{1}{\alpha} \right)\left( n\lambda_{\max} - n^2 \right) = n\left( \alpha + \frac{1}{\alpha} \right)\left( \lambda_{\max} - n \right).$$

5. Comparisons with the Eigenvalue Method on Binary AHP

Takahashi [8] and Genest et al. [7] independently show surprisingly elegant properties of the priority weights of a pairwise comparison matrix A(α) derived by the EM. By introducing these properties of priority weights, this section shows sufficient conditions for equivalence between the minimum χ² method and the EM. Genest et al.
[7] define two classes of A(α) as follows: A(α) is called maximally intransitive if $\sum_j a_{1j} = \dots = \sum_j a_{nj}$. A(α) is called ordinally transitive if $a_{ij} > 1$ and $a_{jk} > 1$ imply $a_{ik} > 1$. A pairwise comparison matrix A(α) is ordinally transitive if and only if there is a permutation matrix P such that every upper off-diagonal entry of $P^\top A(\alpha)P$ is α (see, e.g., [8]). Hereafter, without loss of generality, we assume that an ordinally transitive matrix A(α) satisfies $a_{ij} = \alpha$ for all i < j. The following lemmas show that a pairwise comparison matrix A(α) has a left principal eigenvector whose componentwise reciprocal coincides, up to a positive scalar multiple, with a right principal eigenvector if A(α) is ordinally transitive or maximally intransitive.
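Before turning to those lemmas, Theorem 4.3's identity can be spot-checked numerically; the derivation in its proof in fact holds at every positive x, which is what the sketch below tests (the ordinally transitive 4×4 matrix, α = 3 and the vector x are arbitrary illustrative assumptions):

```python
# Check of Theorem 4.3's identity Q(x) = (alpha + 1/alpha) * (f(x) - n^2)
# for a binary pairwise comparison matrix A(alpha).

alpha, n = 3.0, 4
# Ordinally transitive A(alpha): a_ij = alpha for i < j, 1/alpha for i > j.
A = [[1.0 if i == j else (alpha if i < j else 1 / alpha) for j in range(n)]
     for i in range(n)]
x = [0.9, 2.0, 0.4, 1.3]   # arbitrary positive vector

f = sum(A[i][j] * x[j] / x[i] for i in range(n) for j in range(n))
Q = sum((A[i][j] - x[i] / x[j]) ** 2 / (x[i] / x[j])
        for i in range(n) for j in range(n))

assert abs(Q - (alpha + 1 / alpha) * (f - n * n)) < 1e-9
```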

Lemma 5.1. Suppose that A(α) is ordinally transitive; then A(α) has a right principal eigenvector $(\alpha^{(n-1)/n}, \alpha^{(n-3)/n}, \dots, \alpha^{(1-n)/n})$ and a left one $(\alpha^{(1-n)/n}, \alpha^{(3-n)/n}, \dots, \alpha^{(n-1)/n})$.

Proof: See Theorem 4 of [8] and Proposition 4.3 of [7].

Lemma 5.2. Suppose that A(α) is maximally intransitive; then A(α) has a right principal eigenvector (1, 1, …, 1) and a left principal eigenvector (1, 1, …, 1).

Proof: Suppose that A(α) is maximally intransitive; then A has a right principal eigenvector (1, 1, …, 1). Since A(α) is reciprocal, we have $\{j \mid a_{ij} = \alpha\} = \{j \mid a_{ji} = 1/\alpha\}$ and $\{j \mid a_{ij} = 1/\alpha\} = \{j \mid a_{ji} = \alpha\}$. This means that $\sum_i a_{i1} = \dots = \sum_i a_{in}$, and hence A has a left principal eigenvector (1, 1, …, 1).

The equivalence between the left and right eigenvectors implies that the minimum χ² method is equivalent to the EM, as follows:

Lemma 5.3. Suppose that A(α) is ordinally transitive; then an optimal solution of (4.2) for A = A(α) is equal to a positive scalar multiple of $(\alpha^{(n-1)/n}, \alpha^{(n-3)/n}, \dots, \alpha^{(1-n)/n})$.

Proof: Let x# be a right principal eigenvector of the ordinally transitive A(α); then Lemma 5.1 and Corollary 2.1 imply that x# is an optimal solution of (2.3). Therefore, it follows from Theorem 4.3 that x# is also an optimal solution of (4.2).

Lemma 5.4. Suppose that A(α) is maximally intransitive; then an optimal solution of (4.2) is equal to a positive scalar multiple of (1, 1, …, 1).

Proof: This is proved in the same way as Lemma 5.3.

When the pairwise comparison matrix A(α) is of size n ≤ 3, the minimum χ² method coincides with the EM, as follows:

Lemma 5.5. Suppose that the pairwise comparison matrix A(α) is of size n ≤ 3; then a right principal eigenvector of A(α) and an optimal solution of (4.2) coincide for all α > 1.

Proof: If A(α) is of size n = 2, it suffices to consider
$$A(\alpha) = \begin{bmatrix} 1 & \alpha \\ 1/\alpha & 1 \end{bmatrix}.$$
A left principal eigenvector of A(α) is equal to a positive scalar multiple of (1/α, 1) and a right principal eigenvector of A(α) is (α, 1). Therefore, by Corollary 2.1 and Theorem 4.3, a right principal eigenvector of A(α) and an optimal solution of (4.2) coincide for all α > 1.
The matrix A(α) of size n = 3 is either ordinally transitive or maximally intransitive. Hence, it follows from Lemma 5.3 and Lemma 5.4 that a right principal eigenvector of A(α) of size n = 3 and an optimal solution of (4.1) coincide for all α. For a general size n of A(α), we have the following sufficient conditions for equivalence between the minimum χ² method and the EM:
Theorem 5.1 A right principal eigenvector of A(α) and an optimal solution of (4.1) coincide for all α whenever (1) A(α) is ordinally transitive; (2) A(α) is maximally intransitive; or (3) A(α) is of size n ≤ 3.
Proof: This follows immediately from Lemma 5.3, Lemma 5.4 and Lemma 5.5.
Hereafter, we compare the ranking derived from the priority weights of the minimum χ² method with the ranking derived from the priority weights of the EM. All priority weights w_1, …, w_n

are ordered according to their values. That is, the rank of i is higher than that of j if w_i > w_j, and i and j have the same rank if w_i = w_j.

Corollary 5.1 The two rankings derived from a right principal eigenvector of A(α) and from an optimal solution of (4.1) coincide for all α > 1 whenever (1) A(α) is ordinally transitive; (2) A(α) is maximally intransitive; or (3) A(α) is of size n ≤ 4 and there is no permutation matrix P such that

P^T A(α) P = [1, α, α, 1/α; 1/α, 1, α, α; 1/α, 1/α, 1, α; α, 1/α, 1/α, 1].   (5.1)

Proof: Let x* be the priority weights of the minimum χ² method and let x^# be the priority weights of the EM. If A(α) is of size n ≤ 3, ordinally transitive, or maximally intransitive, then it follows from Theorem 5.1 that the ranking from x* is equal to that from x^#. For any 4 × 4 matrix A(α) there is a permutation matrix P such that P^T A(α) P is one of the following four matrices:

A_0 = [1, α, α, α; 1/α, 1, α, α; 1/α, 1/α, 1, α; 1/α, 1/α, 1/α, 1],
A_1 = [1, α, α, α; 1/α, 1, α, 1/α; 1/α, 1/α, 1, α; 1/α, α, 1/α, 1],
A_2 = [1, 1/α, 1/α, 1/α; α, 1, 1/α, α; α, α, 1, 1/α; α, 1/α, α, 1],
A_3 = [1, α, α, 1/α; 1/α, 1, α, α; 1/α, 1/α, 1, α; α, 1/α, 1/α, 1].

Note that every permutation matrix P satisfies P^T A_i P ≠ A_k for all k ≠ i. Since A_3 is the matrix (5.1), it suffices to discuss the rankings for the three matrices A_0, A_1 and A_2.

Consider the case where A(α) reduces to A_0. The matrix A_0 is ordinally transitive, and it follows from Theorem 5.1 that x^# and x* provide the same ranking.

Consider the case where A(α) reduces to A_1. Genest et al. [7] show that the matrix A_1 has a right principal eigenvector x^# = (3αβ, 1, 1, 1), where β = [(α + 1/α)/2 + √((α + 1/α)²/4 + 3)]^{-1}. Since 3αβ > 1 for all α > 1, the ranking from the right principal eigenvector x^# is given as

x^#_1 > x^#_2 = x^#_3 = x^#_4.   (5.3)

The matrix balancing problem (2.3) for A_1 has an optimal solution x* = (α, 1, 1, 1). In fact, scaling A_1 by the optimal solution x* = (α, 1, 1, 1) yields the matrix

[1, 1, 1, 1; 1, 1, α, 1/α; 1, 1/α, 1, α; 1, α, 1/α, 1],

which is sum-symmetric: its i-th row sum equals its i-th column sum for every i. It follows from Theorem 4.3 that x* is also an optimal solution of (4.1) for A_1. Therefore, the ranking from x* is given as

x*_1 = α > 1 = x*_2 = x*_3 = x*_4,   (5.4)

for all α > 1. The ranking (5.4) from the optimal solution x* of (4.1) coincides with (5.3) from the right principal eigenvector x^# of A_1.

Consider the case of A_2; then A_2 = A_1^T, and a right principal eigenvector of A_2 is a positive scalar multiple of x^# = (3β/α, 1, 1, 1), where β = [(α + 1/α)/2 + √((α + 1/α)²/4 + 3)]^{-1}. Genest et al. [7] show that the principal eigenvalue λ_max of A_2 is β^{-1} + 1. Since Qx ≥ 0 for any positive vector x, it follows from Corollary 4.1 that 4 = n ≤ λ_max = β^{-1} + 1, which implies 3β ≤ 1 and hence 3β/α < 1 for all α > 1. The ranking from x^# is given as

x^#_1 < x^#_2 = x^#_3 = x^#_4.   (5.5)

Let x* be an optimal solution of (4.1) for A_2. Then it follows from A_2 = A_1^T and Theorem 4.2 that x* = (1/α, 1, 1, 1) for all α > 1. The ranking from x* is

x*_1 < x*_2 = x*_3 = x*_4,   (5.6)

which is the same order as that from (5.5).

As stated above, by using permutations of a pairwise comparison matrix A(α), the matrices A(α) with n = 4 are classified into four equivalence classes. Whenever A(α) with n = 4 is not equivalent to (5.1), the minimum χ² method provides the same ranking as the EM. For the equivalence class of (5.1), the ranking by the minimum χ² method differs from the ranking by the EM as follows:

Corollary 5.2 Suppose that A(α) is the matrix (5.1). Let x^# be a right principal eigenvector of A(α) and let x* be an optimal solution of (4.1). For any α > 1 we have

x^#_1 > x^#_2 > x^#_4 > x^#_3   (5.7)

and

x*_2 > x*_1 > x*_4 > x*_3.   (5.8)

Proof: See Proposition 4.4 of [8] for the proof of (5.7). The proof of (5.8) is given in the Appendix.

Consider the following 5 × 5 matrix similar to (5.1) [7]:

[1, α, α, α, 1/α; 1/α, 1, α, α, α; 1/α, 1/α, 1, α, α; 1/α, 1/α, 1/α, 1, α; α, 1/α, 1/α, 1/α, 1].   (5.9)

Genest et al. [7] report from numerical experiments that the ranking by the EM for (5.9) depends on the choice of the coding parameter α. For small values of α > 1, a right principal eigenvector x^# of (5.9) satisfies

x^#_1 > x^#_2 > x^#_3 > x^#_5 > x^#_4.   (5.10)

For any α > 3.7, a right principal eigenvector of (5.9) usually satisfies

x^#_1 > x^#_2 > x^#_5 > x^#_3 > x^#_4.   (5.11)

Checking the rankings from the minimum χ² estimate of the matrix (5.9) with α = 2, 3, …, 100, we confirm numerically that an optimal solution x* of (4.1) satisfies

x*_2 > x*_1 > x*_3 > x*_5 > x*_4   (5.12)

for all α = 2, 3, …, 100; hence, the ranking from x* for (5.9) appears to be independent of the choice of the coding parameter α.

Table 1: Comparison of the EM with the minimum χ² method on rank reversal

        Number of        Eigenvector method                χ² method
  n     equivalence      number of      percentage of      number of      percentage of
        classes          rank reversals rank reversals     rank reversals rank reversals
  2     1                0              0%                 0              0%
  3     2                0              0%                 0              0%
  4     4                0              0%                 0              0%
  5     12               1              8.3%               0              0%
  6     56               …              …                  …              3.8%
  7     456              …              …                  20             4.4%
  8     6880             …              …                  …              …
  9     191536           …              …                  …              8.…%

Genest et al. [7] document that such rank reversibility under the EM increases dramatically with the size n = 6, 7, 8, 9 of A(α). To compare the rank reversibility of the EM with that of the minimum χ² method, we carry out numerical experiments measuring the frequency of rank reversals generated by the two methods. For α = 2, 3, 5, 7, 9, let x*(α) be an optimal solution of (4.1) and let x^#(α) be a right principal eigenvector of A(α). For a sufficiently small positive number ε > 0, we define

K*_ε = {(i, j) : x*_i − x*_j ≥ ε}.   (5.13)

We say that a rank reversal occurs under the minimum χ² method if and only if there exists an α ∈ {2, 3, 5, 7, 9} such that the optimal solution x*(α) of (4.1) violates at least one of

x*_i(α) − x*_j(α) ≥ ε and (x*_i − x*_j)(x*_i(α) − x*_j(α)) > 0 for all (i, j) ∈ K*_ε,
|x*_i(α) − x*_j(α)| < ε for all (i, j) ∉ K*_ε.   (5.14)

In a way similar to (5.13) for the minimum χ² estimate, we define K^#_ε for a right principal eigenvector x^# and consider two conditions for x^#(α) analogous to (5.14). We then say that a rank reversal occurs under the EM if and only if there exists an α ∈ {2, 3, 5, 7, 9} such that x^#(α) violates at least one of the two conditions. As stated in the proof of Corollary 5.1, the set of matrices A(α) of size n = 4 is classified into four equivalence classes. The size n of A(α) and the number of equivalence classes of the set of all A(α) of the corresponding size n are listed in the first and second columns of Table 1, respectively.
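The behaviour of the two methods on the matrix (5.9) can be reproduced with a short script. The sketch below is our illustration, not the paper's code: the EM ranking is computed by power iteration, and the minimum χ² ranking by coordinate-wise scaling applied to the balancing form min_x Σ_ij (a_ji² + 1) x_i/x_j, a reduction of the χ² objective that we derive ourselves here; all function names are ours.

```python
import numpy as np

def matrix_5_9(alpha):
    """The 5x5 pattern of (5.9): i beats j for i < j, except 5 beats 1."""
    A = np.ones((5, 5))
    for i in range(5):
        for j in range(i + 1, 5):
            A[i, j], A[j, i] = alpha, 1.0 / alpha
    A[0, 4], A[4, 0] = 1.0 / alpha, alpha
    return A

def em_ranking(A, iters=2000):
    """Ranking (best first, 1-based) from the right principal eigenvector."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= x.sum()
    return tuple(np.argsort(-x) + 1)

def chi2_ranking(A, sweeps=2000):
    """Ranking from the minimum chi-square estimate, via coordinate-wise
    scaling of min_x sum_ij (a_ji^2 + 1) x_i / x_j.  Each sweep moves every
    coordinate toward its one-dimensional minimizer; the objective is convex
    in log-coordinates, so the iteration settles at the balanced scaling."""
    B = A.T ** 2 + 1.0
    n = A.shape[0]
    x = np.ones(n)
    for _ in range(sweeps):
        for i in range(n):
            x[i] = np.sqrt((B[:, i] @ x) / (B[i, :] @ (1.0 / x)))
        x /= x[0]   # fix the scale; the problem is scale-invariant
    return tuple(np.argsort(-x) + 1)
```

Under this sketch the EM ranking flips between α = 2 and α = 5 in accordance with (5.10) and (5.11), while the χ² ranking is the same at both values of α.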
Setting ε = 10⁻⁷ in K*_ε and K^#_ε and choosing one matrix A(α) from each equivalence class, we check the occurrence of rank reversals for the minimum χ² method and the EM. The third column of Table 1 indicates the number of equivalence classes in which rank reversals occur under the EM, and the fourth column gives the proportion of equivalence classes including rank reversals under the EM. In the same manner, the occurrences of rank reversals under the minimum χ² method are listed in the fifth and sixth columns of Table 1. For both the EM and the minimum χ² method, the proportion of equivalence classes including rank reversals increases with the size n. In particular, rank reversals at

size n = 9 occur under the EM with almost 50% probability. The occurrence of rank reversals under the EM is at least 4 times that under the minimum χ² method. It is well known in regression analysis [9] that L1-norm estimation is robust against outliers in comparison with L2-norm and L∞-norm estimation. The minimum χ² method is the minimum-norm point problem (2.9), whose norm is the L1-norm, and the EM is the minimum-norm point problem (2.10), whose norm is the L∞-norm. By analogy with regression analysis, it is natural that the rank preservation of the minimum χ² method is superior to that of the EM.

6. Conclusion
This research gives necessary and sufficient conditions (Theorem 2.1) for equivalence between the eigenvalue problem and the matrix balancing problem for a nonnegative matrix, using a unified framework for the two problems: the minimum-norm point problem. Furthermore, Theorem 3.1 shows global convergence of scaling algorithms for solving the matrix balancing problem. A variety of nonnegative matrix analyses appear frequently in the social sciences, e.g., economics, management science and operations research, where Theorem 2.1 and Theorem 3.1 may be useful for modeling and simulation. This study also illustrates a contribution of Theorem 2.1 and Theorem 3.1 by applying the minimum χ² method to AHP. Lemma 4.1 shows that the minimum χ² method (4.1) amounts to solving the matrix balancing problem. The minimum χ² method has the desirable properties of Theorem 4.1 and Theorem 4.2, which are not satisfied by the EM. Furthermore, Theorem 4.1 and Theorem 4.2 do not require the usual assumption of AHP that a pairwise comparison matrix is reciprocal. The simplest AHP, binary AHP, allows a direct comparison of the EM with the minimum χ² method by virtue of Theorem 4.3. Consequently, Theorem 5.1, Corollary 5.1, and Corollary 5.2 show what the pairwise comparison matrix A(α) of binary AHP looks like when the priority weights of the EM coincide with those of the minimum χ² method. In the other case, i.e., when the priority weights of the EM and of the minimum χ² method are not equivalent, the dual norms incorporated into the two methods create a remarkable gap between their frequencies of rank reversal. The numerical experiments indicate that the rank reversal frequency of the EM is at least 4 times that of the minimum χ² method.

The minimum χ² method of AHP has the further potential merit that a data reliability β_ij = β_ji for a pairwise comparison value a_ij can be incorporated into (4.1) as follows:

min_{x>0} Σ_i Σ_j β_ij (a_ij − x_i/x_j)² / (x_i/x_j).   (6.1)

The optimization problem (6.1) reduces to

min_{x>0} Σ_i Σ_j β_ij (a_ij² x_j/x_i + x_i/x_j),   (6.2)

which is equivalent to the matrix balancing problem for the matrix [β_ij(a_ij² + 1)]. By using β_ij, the minimum χ² method is directly applicable to the incomplete information case, where not all pairwise comparisons are carried out. Let C = {(i, j) : i and j are not compared} and D = {(i, j) : i and j are compared}, and set β_ij = 1 for all (i, j) ∈ D and β_ij = 0 for all

(i, j) ∈ C. Then the minimum χ² method under incomplete information solves the matrix balancing problem

min_{x>0} Σ_{(i,j)∈D} (a_ij² x_j/x_i + x_i/x_j).

Group AHP, i.e., AHP including multiple decision makers, can also be handled by the minimum χ² method. Suppose that L decision makers individually provide L pairwise comparison matrices [a^l_ij], l = 1, …, L; then the minimum χ² method (4.1) is modified straightforwardly as follows:

min_{x>0} Σ_{l=1}^L Σ_i Σ_j (a^l_ij − x_i/x_j)² / (x_i/x_j).   (6.3)

The modified problem (6.3) is equivalent to

min_{x>0} Σ_i Σ_j ( Σ_{l=1}^L (a^l_ij)² + L ) x_j/x_i.   (6.4)

Therefore, the minimum χ² method for group AHP solves the matrix balancing problem for the matrix [Σ_{l=1}^L (a^l_ij)² + L].

One direction for future investigation is to apply the minimum χ² method to real-world decision making problems such as merit-based personnel systems [31] and strategic decisions in supply chain management [18]. Another is to apply the minimum χ² method to parameter estimation in stochastic models, e.g., the Bradley-Terry model [3], the Huff model [11] and contingency analysis [6]. These models often use maximum likelihood estimation, where the validity of the model is checked by a likelihood ratio test. The minimum χ² estimate gives the minimum χ² value, which can be used immediately in Pearson's χ² goodness-of-fit test.

Acknowledgments
The third author is partly supported by a Grant-in-Aid from the Japan Society for the Promotion of Science. The authors wish to thank Prof. Yoshitsugu Yamamoto, Prof. Akiko Yoshise and Prof. Tomomi Matsui for stimulating discussions and constructive comments.

References
[1] A. Bachem and B. Korte: On the RAS-algorithm. Computing, 23 (1979).
[2] R.B. Bapat and T.E.S. Raghavan: Nonnegative Matrices and Applications (Cambridge University Press, Cambridge, 1997).
[3] R.A. Bradley and M.E. Terry: Rank analysis of incomplete block designs: the method of paired comparisons. Biometrika, 39 (1952).
[4] L.M. Bregman: Proof of the convergence of Sheleikhovskii's method for a problem with transportation constraints. USSR Computational Mathematics and Mathematical Physics, 7 (1967).
[5] B.C. Eaves, A.J. Hoffman, U.G. Rothblum and H. Schneider: Line-sum-symmetric scalings of square nonnegative matrices. Mathematical Programming Study, 25 (1985).

[6] D. Friedlander: A technique for estimating a contingency table, given the marginal totals and some supplementary data. Journal of the Royal Statistical Society: Series A, 124 (1961).
[7] C. Genest, F. Lapointe and S.W. Drury: On a proposal of Jensen for the analysis of ordinal pairwise preferences using Saaty's eigenvector scaling method. Journal of Mathematical Psychology, 37 (1993).
[8] C. Genest and C.-É. M'lan: Deriving priorities from the Bradley-Terry model. Mathematical and Computer Modelling, 29 (1999).
[9] R. Gonin and A.H. Money: Nonlinear L_p-Norm Estimation (Marcel Dekker, New York, 1989).
[10] T. Huang, W. Li and W. Sun: Estimates for bounds of some numerical characters of matrices. Linear Algebra and its Applications.
[11] D.L. Huff: A probabilistic analysis of shopping center trade areas. Land Economics, 39 (1963).
[12] R.E. Jensen: Comparisons of eigenvector, least squares, chi square and logarithmic least squares methods of scaling a reciprocal matrix. Working Paper, Trinity University, USA.
[13] R.E. Jensen: Comparison of consensus methods for priority ranking problems. Decision Sciences, 17 (1986).
[14] C.R. Johnson, W.B. Beine and T.J. Wang: Right-left asymmetry in an eigenvector ranking procedure. Journal of Mathematical Psychology, 19 (1979).
[15] C.R. Johnson, J. Pitkin and D.P. Stanford: Line-sum symmetry via the DomEig algorithm. Computational Optimization and Applications, 17 (2000).
[16] B. Kalantari, L. Khachiyan and A. Shokoufandeh: On the complexity of matrix balancing. SIAM Journal on Matrix Analysis and Applications, 18 (1997).
[17] J. Kruithof: Telefoonverkeersrekening. De Ingenieur (1937).
[18] T. Nakagawa and K. Sekitani: A use of analytic network process for supply chain management. Asia Pacific Management Review, 9 (2004).
[19] K. Nishizawa: A consistency improving method in binary AHP. Journal of the Operations Research Society of Japan.
[20] K. Nishizawa: A method to estimate results of unknown comparisons in binary AHP. Journal of the Operations Research Society of Japan.
[21] B.N. Parlett and T.L. Landis: Methods for scaling to doubly stochastic form. Linear Algebra and its Applications, 48 (1982).
[22] T.L. Saaty: The Analytic Hierarchy Process (McGraw-Hill, New York, 1980).
[23] T.L. Saaty: Decision-making with the AHP: Why is the principal eigenvector necessary. European Journal of Operational Research, 145 (2003).
[24] T.L. Saaty and L.G. Vargas: Inconsistency and rank preservation. Journal of Mathematical Psychology (1984).
[25] M.H. Schneider and S.A. Zenios: A comparative study of algorithms for matrix balancing. Operations Research, 38 (1990).
[26] K. Sekitani and N. Yamaki: A logical interpretation for the eigenvalue method in AHP. Journal of the Operations Research Society of Japan, 42 (1999).
[27] K. Sekitani and Y. Yamamoto: A recursive algorithm for finding the minimum norm point in a polytope and a pair of closest points in two polytopes. Mathematical Programming, 61 (1993).

[28] I. Takahashi: AHP applied to binary and ternary comparisons. Journal of the Operations Research Society of Japan (1990).
[29] A. Takayama: Mathematical Economics (Cambridge University Press, Cambridge, 1985).
[30] Q.H. Vuong and W. Wang: Minimum chi-square estimation and tests for model selection. Journal of Econometrics, 56 (1993).
[31] N. Yamaki and K. Sekitani: A large-scale AHP including incomplete information and multiple evaluators and its applications to personnel administration evaluation system. Journal of the Operations Research Society of Japan, 42 (1999). (In Japanese)
[32] F. Zahedi: The Analytic Hierarchy Process: a survey of the method and its applications. Interfaces, 16 (1986).
[33] W.I. Zangwill: Nonlinear Programming: A Unified Approach (Prentice-Hall, New Jersey, 1969).

Appendix: Proof of Corollary 5.2
Since x* is an optimal solution of (4.1), it follows from Theorem 4.3 that x* is also an optimal solution of problem (2.3) with A replaced by the matrix (5.1). Hence, it follows from Lemma 2.1 that the scaled matrix

[1, α x*_2/x*_1, α x*_3/x*_1, x*_4/(α x*_1);
 x*_1/(α x*_2), 1, α x*_3/x*_2, α x*_4/x*_2;
 x*_1/(α x*_3), x*_2/(α x*_3), 1, α x*_4/x*_3;
 α x*_1/x*_4, x*_2/(α x*_4), x*_3/(α x*_4), 1]   (6.5)

is sum-symmetric. The sum-symmetry of (6.5) is equivalent to the following four equations:

α x*_2 + α x*_3 + x*_4/α = (x*_1)² ( 1/(α x*_2) + 1/(α x*_3) + α/x*_4 ),   (6.6)
x*_1/α + α x*_3 + α x*_4 = (x*_2)² ( α/x*_1 + 1/(α x*_3) + 1/(α x*_4) ),   (6.7)
x*_1/α + x*_2/α + α x*_4 = (x*_3)² ( α/x*_1 + α/x*_2 + 1/(α x*_4) ),   (6.8)
α x*_1 + x*_2/α + x*_3/α = (x*_4)² ( 1/(α x*_1) + α/x*_2 + α/x*_3 ).   (6.9)

We first show that x*_1 > x*_4. The proof is by contradiction: assume x*_4 ≥ x*_1. Comparing (6.6) with (6.9) and rearranging, the assumption x*_4/x*_1 ≥ 1 together with α > 1 forces 0 to dominate a sum of terms that is strictly positive, a contradiction. Hence x*_1 > x*_4. Similar comparisons among (6.6)–(6.9) establish x*_2 > x*_1 and x*_4 > x*_3, which completes the proof of (5.8).


More information

From Satisfiability to Linear Algebra

From Satisfiability to Linear Algebra From Satisfiability to Linear Algebra Fangzhen Lin Department of Computer Science Hong Kong University of Science and Technology Clear Water Bay, Kowloon, Hong Kong Technical Report August 2013 1 Introduction

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

MATH36001 Perron Frobenius Theory 2015

MATH36001 Perron Frobenius Theory 2015 MATH361 Perron Frobenius Theory 215 In addition to saying something useful, the Perron Frobenius theory is elegant. It is a testament to the fact that beautiful mathematics eventually tends to be useful,

More information

A Questionable Distance-Regular Graph

A Questionable Distance-Regular Graph A Questionable Distance-Regular Graph Rebecca Ross Abstract In this paper, we introduce distance-regular graphs and develop the intersection algebra for these graphs which is based upon its intersection

More information

Mathematical foundations of the methods for multicriterial decision making

Mathematical foundations of the methods for multicriterial decision making Mathematical Communications 2(1997), 161 169 161 Mathematical foundations of the methods for multicriterial decision making Tihomir Hunjak Abstract In this paper the mathematical foundations of the methods

More information

Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x =

Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x = Linear Algebra Review Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1 x x = 2. x n Vectors of up to three dimensions are easy to diagram.

More information

Copositive Plus Matrices

Copositive Plus Matrices Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their

More information

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,

More information

6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC

6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 2003 2003.09.02.10 6. The Positivstellensatz Basic semialgebraic sets Semialgebraic sets Tarski-Seidenberg and quantifier elimination Feasibility

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

Budapest Bridges Benchmarking

Budapest Bridges Benchmarking Budapest Bridges Benchmarking András Farkas Óbuda University, Faculty of Economics 1084 Budapest, Tavaszmező u. 17, Hungary e-mail: farkas.andras@kgk.uni-obuda.hu Abstract: This paper is concerned with

More information

On optimal completions of incomplete pairwise comparison matrices

On optimal completions of incomplete pairwise comparison matrices On optimal completions of incomplete pairwise comparison matrices Sándor BOZÓKI 1,2, János FÜLÖP 2, Lajos RÓNYAI 3 Abstract An important variant of a key problem for multi-attribute decision making is

More information

Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX

Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX September 2007 MSc Sep Intro QT 1 Who are these course for? The September

More information

A Distributed Newton Method for Network Utility Maximization, II: Convergence

A Distributed Newton Method for Network Utility Maximization, II: Convergence A Distributed Newton Method for Network Utility Maximization, II: Convergence Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie October 31, 2012 Abstract The existing distributed algorithms for Network Utility

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

More First-Order Optimization Algorithms

More First-Order Optimization Algorithms More First-Order Optimization Algorithms Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 3, 8, 3 The SDM

More information

Intrinsic products and factorizations of matrices

Intrinsic products and factorizations of matrices Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences

More information

Linear Discrimination Functions

Linear Discrimination Functions Laurea Magistrale in Informatica Nicola Fanizzi Dipartimento di Informatica Università degli Studi di Bari November 4, 2009 Outline Linear models Gradient descent Perceptron Minimum square error approach

More information

Convex Analysis and Economic Theory Winter 2018

Convex Analysis and Economic Theory Winter 2018 Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Winter 2018 Topic 0: Vector spaces 0.1 Basic notation Here are some of the fundamental sets and spaces

More information

Maximizing the numerical radii of matrices by permuting their entries

Maximizing the numerical radii of matrices by permuting their entries Maximizing the numerical radii of matrices by permuting their entries Wai-Shun Cheung and Chi-Kwong Li Dedicated to Professor Pei Yuan Wu. Abstract Let A be an n n complex matrix such that every row and

More information

SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE

SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE BABHRU JOSHI AND M. SEETHARAMA GOWDA Abstract. We consider the semidefinite cone K n consisting of all n n real symmetric positive semidefinite matrices.

More information

ASSESSMENT FOR AN INCOMPLETE COMPARISON MATRIX AND IMPROVEMENT OF AN INCONSISTENT COMPARISON: COMPUTATIONAL EXPERIMENTS

ASSESSMENT FOR AN INCOMPLETE COMPARISON MATRIX AND IMPROVEMENT OF AN INCONSISTENT COMPARISON: COMPUTATIONAL EXPERIMENTS ISAHP 1999, Kobe, Japan, August 12 14, 1999 ASSESSMENT FOR AN INCOMPLETE COMPARISON MATRIX AND IMPROVEMENT OF AN INCONSISTENT COMPARISON: COMPUTATIONAL EXPERIMENTS Tsuneshi Obata, Shunsuke Shiraishi, Motomasa

More information

A Characterization of the Hurwitz Stability of Metzler Matrices

A Characterization of the Hurwitz Stability of Metzler Matrices 29 American Control Conference Hyatt Regency Riverfront, St Louis, MO, USA June -2, 29 WeC52 A Characterization of the Hurwitz Stability of Metzler Matrices Kumpati S Narendra and Robert Shorten 2 Abstract

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution

More information

N. L. P. NONLINEAR PROGRAMMING (NLP) deals with optimization models with at least one nonlinear function. NLP. Optimization. Models of following form:

N. L. P. NONLINEAR PROGRAMMING (NLP) deals with optimization models with at least one nonlinear function. NLP. Optimization. Models of following form: 0.1 N. L. P. Katta G. Murty, IOE 611 Lecture slides Introductory Lecture NONLINEAR PROGRAMMING (NLP) deals with optimization models with at least one nonlinear function. NLP does not include everything

More information

Eventually reducible matrix, eventually nonnegative matrix, eventually r-cyclic

Eventually reducible matrix, eventually nonnegative matrix, eventually r-cyclic December 15, 2012 EVENUAL PROPERIES OF MARICES LESLIE HOGBEN AND ULRICA WILSON Abstract. An eventual property of a matrix M C n n is a property that holds for all powers M k, k k 0, for some positive integer

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY published in IMA Journal of Numerical Analysis (IMAJNA), Vol. 23, 1-9, 23. OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY SIEGFRIED M. RUMP Abstract. In this note we give lower

More information

CS 6820 Fall 2014 Lectures, October 3-20, 2014

CS 6820 Fall 2014 Lectures, October 3-20, 2014 Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson

More information

Instituto de Engenharia de Sistemas e Computadores de Coimbra Institute of Systems Engineering and Computers INESC - Coimbra

Instituto de Engenharia de Sistemas e Computadores de Coimbra Institute of Systems Engineering and Computers INESC - Coimbra Instituto de Engenharia de Sistemas e Computadores de Coimbra Institute of Systems Engineering and Computers INESC - Coimbra Claude Lamboray Luis C. Dias Pairwise support maximization methods to exploit

More information

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman Kernels of Directed Graph Laplacians J. S. Caughman and J.J.P. Veerman Department of Mathematics and Statistics Portland State University PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu

More information

PREFERENCE MATRICES IN TROPICAL ALGEBRA

PREFERENCE MATRICES IN TROPICAL ALGEBRA PREFERENCE MATRICES IN TROPICAL ALGEBRA 1 Introduction Hana Tomášková University of Hradec Králové, Faculty of Informatics and Management, Rokitanského 62, 50003 Hradec Králové, Czech Republic e-mail:

More information

2. Matrix Algebra and Random Vectors

2. Matrix Algebra and Random Vectors 2. Matrix Algebra and Random Vectors 2.1 Introduction Multivariate data can be conveniently display as array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns

More information

Eigenvectors Via Graph Theory

Eigenvectors Via Graph Theory Eigenvectors Via Graph Theory Jennifer Harris Advisor: Dr. David Garth October 3, 2009 Introduction There is no problem in all mathematics that cannot be solved by direct counting. -Ernst Mach The goal

More information

b jσ(j), Keywords: Decomposable numerical range, principal character AMS Subject Classification: 15A60

b jσ(j), Keywords: Decomposable numerical range, principal character AMS Subject Classification: 15A60 On the Hu-Hurley-Tam Conjecture Concerning The Generalized Numerical Range Che-Man Cheng Faculty of Science and Technology, University of Macau, Macau. E-mail: fstcmc@umac.mo and Chi-Kwong Li Department

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

Z-Pencils. November 20, Abstract

Z-Pencils. November 20, Abstract Z-Pencils J. J. McDonald D. D. Olesky H. Schneider M. J. Tsatsomeros P. van den Driessche November 20, 2006 Abstract The matrix pencil (A, B) = {tb A t C} is considered under the assumptions that A is

More information

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional

More information

Detailed Proof of The PerronFrobenius Theorem

Detailed Proof of The PerronFrobenius Theorem Detailed Proof of The PerronFrobenius Theorem Arseny M Shur Ural Federal University October 30, 2016 1 Introduction This famous theorem has numerous applications, but to apply it you should understand

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

B best scales 51, 53 best MCDM method 199 best fuzzy MCDM method bound of maximum consistency 40 "Bridge Evaluation" problem

B best scales 51, 53 best MCDM method 199 best fuzzy MCDM method bound of maximum consistency 40 Bridge Evaluation problem SUBJECT INDEX A absolute-any (AA) critical criterion 134, 141, 152 absolute terms 133 absolute-top (AT) critical criterion 134, 141, 151 actual relative weights 98 additive function 228 additive utility

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Metoda porównywania parami

Metoda porównywania parami Metoda porównywania parami Podejście HRE Konrad Kułakowski Akademia Górniczo-Hutnicza 24 Czerwiec 2014 Advances in the Pairwise Comparisons Method Heuristic Rating Estimation Approach Konrad Kułakowski

More information

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS G. RAMESH Contents Introduction 1 1. Bounded Operators 1 1.3. Examples 3 2. Compact Operators 5 2.1. Properties 6 3. The Spectral Theorem 9 3.3. Self-adjoint

More information

Research Article Deriving Weights of Criteria from Inconsistent Fuzzy Comparison Matrices by Using the Nearest Weighted Interval Approximation

Research Article Deriving Weights of Criteria from Inconsistent Fuzzy Comparison Matrices by Using the Nearest Weighted Interval Approximation Advances in Operations Research Volume 202, Article ID 57470, 7 pages doi:0.55/202/57470 Research Article Deriving Weights of Criteria from Inconsistent Fuzzy Comparison Matrices by Using the Nearest Weighted

More information

A priori bounds on the condition numbers in interior-point methods

A priori bounds on the condition numbers in interior-point methods A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be

More information

Alternative Characterization of Ergodicity for Doubly Stochastic Chains

Alternative Characterization of Ergodicity for Doubly Stochastic Chains Alternative Characterization of Ergodicity for Doubly Stochastic Chains Behrouz Touri and Angelia Nedić Abstract In this paper we discuss the ergodicity of stochastic and doubly stochastic chains. We define

More information

EE613 Machine Learning for Engineers. Kernel methods Support Vector Machines. jean-marc odobez 2015

EE613 Machine Learning for Engineers. Kernel methods Support Vector Machines. jean-marc odobez 2015 EE613 Machine Learning for Engineers Kernel methods Support Vector Machines jean-marc odobez 2015 overview Kernel methods introductions and main elements defining kernels Kernelization of k-nn, K-Means,

More information

This operation is - associative A + (B + C) = (A + B) + C; - commutative A + B = B + A; - has a neutral element O + A = A, here O is the null matrix

This operation is - associative A + (B + C) = (A + B) + C; - commutative A + B = B + A; - has a neutral element O + A = A, here O is the null matrix 1 Matrix Algebra Reading [SB] 81-85, pp 153-180 11 Matrix Operations 1 Addition a 11 a 12 a 1n a 21 a 22 a 2n a m1 a m2 a mn + b 11 b 12 b 1n b 21 b 22 b 2n b m1 b m2 b mn a 11 + b 11 a 12 + b 12 a 1n

More information

Interlacing Inequalities for Totally Nonnegative Matrices

Interlacing Inequalities for Totally Nonnegative Matrices Interlacing Inequalities for Totally Nonnegative Matrices Chi-Kwong Li and Roy Mathias October 26, 2004 Dedicated to Professor T. Ando on the occasion of his 70th birthday. Abstract Suppose λ 1 λ n 0 are

More information

Computing closest stable non-negative matrices

Computing closest stable non-negative matrices Computing closest stable non-negative matrices Yu. Nesterov and V.Yu. Protasov August 22, 2017 Abstract Problem of finding the closest stable matrix for a dynamical system has many applications. It is

More information