GROUP JUDGMENTS IN FLOOD RISK MANAGEMENT: SENSITIVITY ANALYSIS AND THE AHP


ISAHP 2003, Bali, Indonesia, August 7-9, 2003

Jason K. Levy, Jens Hartmann, Hirokazu Tatano, Norio Okada, Shogo Tanaka
Disaster Prevention Research Institute, Kyoto University, Japan

Keith W. Hipel
University of Waterloo, Waterloo, Ontario, Canada

D. Marc Kilgour
Wilfrid Laurier University, Waterloo, Ontario, Canada

Chennat Gopalakrishnan
University of Hawaii, Manoa, Hawaii

Keywords: uncertainty, sensitivity analysis, flood management, consistency ratio

Summary: Flood management is an important issue in Japan, as Japanese rivers are steep in gradient and short in length, and many millions of people densely populate the river basins. The AHP is shown to be a valuable technology for aggregating group judgments. Sensitivity measures are developed to determine the robustness of the consistency ratio and the principal right eigenvector to perturbations in the group judgments of the pairwise comparison matrix. This paper investigates how uncertainty in each of the inputs affects the derived output variables.

1. Introduction

MCDA for flood management is one of the fastest growing areas in operations research, borrowing heavily from fields such as hydrology, geology, psychology and computer science. Most flood management problems in Japan are participatory processes which inherently involve input from a variety of decision makers, whose judgments must be aggregated. In the context of the AHP, the aggregation of group judgments is discussed in Section 2. Next, sensitivity metrics are proposed to quantify the notion of robustness to uncertainty. It is shown that the derived consistency ratio (Section 3) and principal eigenvector (Section 4) are insensitive to small perturbations of the consistent pairwise comparison matrix.

2. Aggregation of Individual Judgments and Sensitivity Analysis in the AHP

An important problem in decision analysis involves the combination or aggregation of problem-solving knowledge and judgments.
In the AHP it is well known that an m x m pairwise comparison matrix requires m(m-1)/2 such input judgments. Bolloju (1997) emphasizes the importance of combining or aggregating individual judgments into a group score: "Many typical business environments require such combination or aggregation of decision making for many reasons such as validation, consistency verification, and training. Any technique for elicitation and aggregation of problem-solving knowledge should deal with inconsistencies, conflicts, and decision makers' subjectivity." In the context of the AHP, this paper uses the geometric mean of individual judgments to obtain a combined group judgment. This may help to overcome difficulties arising from a lack of group consensus (Davies, 1994). Other techniques for aggregation include conceptual aggregation based on conceptual clustering and case-based learning for real-time dynamic decision making (Chaturvedi et al., 1993); the flexible modeling approach based on Bayesian analysis for aggregation of point estimates (Clemen and Winkler, 1993); and the aggregation of preference patterns using a social choice framework (Dubois and Koning, 1991). Comparative studies on

group preference aggregation are reported by Ramanathan and Ganesh (1994) and Perez and Barba-Romero (1995).

2.1 Mean of Individual Judgments

In the Tokai flooding problem, three important criteria are considered: the flood frequency criterion (1), the water velocity criterion (2), and the inundation depth criterion (3). Consider judgment a_{12}, the relative importance of the flood frequency compared with the water velocity. The sample geometric mean of a set {a_{12}^{(1)}, a_{12}^{(2)}, ..., a_{12}^{(N)}} of judgments from N experts is defined by:

\bar{a}_{12} = \left( a_{12}^{(1)} \cdot a_{12}^{(2)} \cdots a_{12}^{(N)} \right)^{1/N}

In such a way, a consensus matrix can be established by aggregating the individual judgments of the N decision makers using the geometric mean approach. While the geometric mean method is the most widely used and theoretically accepted approach in the AHP for combining individual judgments to form a group opinion, Ramanathan and Ganesh (1994) showed, using counter-examples, that the geometric mean method does not always satisfy the Pareto optimality axiom, one of the prominent social choice axioms.

2.2 Variance of Individual Judgments

It is also important to consider a measure of data variability. For illustrative purposes, consider again judgment a_{12}, the relative importance of the flood frequency versus the water velocity. An unbiased estimator, the sample variance of the judgments, can be established as the mean square deviation of the values from the geometric mean, \bar{a}_{12}:

\mathrm{var}[a_{12}] = \sigma^2[a_{12}] = \frac{1}{N-1} \sum_{k=1}^{N} \left( a_{12}^{(k)} - \bar{a}_{12} \right)^2

Since we are eliciting opinions from a sample of N decision makers drawn from a possibly large population, the geometric mean is only an approximation to the "true mean." Because this mean must be estimated from the available judgment data, the number of degrees of freedom is reduced by 1, hence the factor of 1/(N-1) in the variance. If we had judgments available from the total population of decision makers, or if the mean were known a priori, then the factor would be 1/N. Of course, the standard deviation of judgment a_{12} is well known to be \sigma[a_{12}], the square root of the variance.
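As an illustrative sketch of the consensus-matrix construction described above (the two expert matrices and their values are hypothetical, not data from the study), the element-wise geometric mean can be computed as follows; note that it automatically preserves reciprocity:

```python
import math

def consensus_matrix(expert_matrices):
    # Element-wise geometric mean of the experts' pairwise comparison
    # matrices; the N-th root of a product of reciprocals is the reciprocal
    # of the N-th root, so the group matrix remains reciprocal.
    size = len(expert_matrices[0])
    n_experts = len(expert_matrices)
    return [[math.prod(m[i][j] for m in expert_matrices) ** (1.0 / n_experts)
             for j in range(size)] for i in range(size)]

# Hypothetical judgments from two decision makers on the three criteria
expert1 = [[1.0, 2.0, 4.0], [1/2, 1.0, 2.0], [1/4, 1/2, 1.0]]
expert2 = [[1.0, 8.0, 4.0], [1/8, 1.0, 1/2], [1/4, 2.0, 1.0]]
group = consensus_matrix([expert1, expert2])  # group[0][1] = sqrt(2 * 8) = 4.0
```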
The standard deviation gives a measure of the "spread" of the judgments and can also be used to assign probabilities of lying within a certain range of values. Consider now the synthesis of the individual judgments for entry a_{12}, assuming that the judgments of three decision makers are a_{12}^{(1)}, a_{12}^{(2)}, and a_{12}^{(3)}. The combined group judgment, using the geometric mean, is then:

\bar{a}_{12} = \left( a_{12}^{(1)} \cdot a_{12}^{(2)} \cdot a_{12}^{(3)} \right)^{1/3}

with the corresponding variance computed from the sample variance formula above.
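The synthesis above can be sketched as follows (the three judgment values are hypothetical):

```python
import math

def geometric_mean(judgments):
    # Group judgment: N-th root of the product of the N individual judgments.
    return math.prod(judgments) ** (1.0 / len(judgments))

def sample_variance(judgments):
    # Unbiased sample variance about the geometric mean (divisor N - 1).
    g = geometric_mean(judgments)
    return sum((a - g) ** 2 for a in judgments) / (len(judgments) - 1)

# Hypothetical judgments a_12 from three decision makers
judgments = [2.0, 3.0, 4.0]
group = geometric_mean(judgments)               # about 2.884
spread = math.sqrt(sample_variance(judgments))  # standard deviation
```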

For convenience, assume that the geometric mean values for each entry in the pairwise comparison matrix for the Tokai flooding problem give rise to a perfectly consistent matrix. Similarly, let us assume a corresponding pairwise comparison matrix of variances.

2.3 Sensitivity Metrics

In this section we develop sensitivity metrics to quantify how output variables are affected by uncertainty in the group judgments of the pairwise comparison matrix. To obtain the eigenvalues and eigenvectors of a 3 by 3 matrix, we must solve the problem

\begin{pmatrix} 1 & a_{12} & a_{13} \\ a_{21} & 1 & a_{23} \\ a_{31} & a_{32} & 1 \end{pmatrix} \begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix} = \lambda_{\max} \begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix}

In the cubic and quartic cases, derived output variables, such as the consistency ratio, can be explicitly written as a function of the judgments in the pairwise comparison matrix. For example, in the cubic case, Saaty (2003) has explicitly derived CR = f(a_{12}, a_{13}, a_{23}) as follows:

CR = \frac{CI}{MRCI_n} = \frac{\lambda_{\max} - n}{(n-1)\, MRCI_n}, \qquad n = 3, \qquad \lambda_{\max} = 1 + t + \frac{1}{t}, \qquad t = \left( \frac{a_{12}\, a_{23}}{a_{13}} \right)^{1/3}

where CR, CI, and MRCI_n are the Consistency Ratio, Consistency Index, and Mean Random Consistency Index, respectively. Similarly, the weights, w_i, and the largest eigenvalue, \lambda_{\max}, are also functions of the input judgments, hence w_i = f(a_{12}, a_{13}, a_{23}) and \lambda_{\max} = f(a_{12}, a_{13}, a_{23}). A scenario can be defined as a vector of values for the pairwise comparison matrix input judgments, a, where:

a = (a_{12}, a_{13}, a_{23})

For each input, the geometric mean of the group judgments represents the nominal or base-case value. Denote these nominal input values \bar{a}_{12}, \bar{a}_{13}, and \bar{a}_{23}. Together, these three input values specify the nominal scenario:

\bar{a} = (\bar{a}_{12}, \bar{a}_{13}, \bar{a}_{23})
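A minimal sketch of the closed-form consistency ratio for the cubic (3 by 3) case, using 0.58, a commonly tabulated random index value for n = 3; the input judgments below are hypothetical:

```python
def cr_3x3(a12, a13, a23, mrci=0.58):
    # Closed-form consistency ratio of a 3x3 reciprocal matrix:
    # lambda_max = 1 + t + 1/t with t = (a12 * a23 / a13) ** (1/3),
    # CI = (lambda_max - 3) / 2, CR = CI / MRCI_3.
    t = (a12 * a23 / a13) ** (1.0 / 3.0)
    lam_max = 1.0 + t + 1.0 / t
    ci = (lam_max - 3.0) / 2.0
    return ci / mrci

consistent = cr_3x3(2.0, 6.0, 3.0)  # a13 = a12 * a23, so CR = 0
perturbed = cr_3x3(2.0, 4.0, 3.0)   # inconsistent, so CR > 0
```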

It is of interest to note how an output variable Y, such as the consistency ratio, changes with the values of the inputs. The corresponding nominal output variable is defined as:

\bar{Y} = f(\bar{a}_{12}, \bar{a}_{13}, \bar{a}_{23})

A sensitivity metric, S^{Y}_{a_{ij}}, is used to quantify the degree to which each of the group input judgments a_{ij} contributes to the uncertainty in the output variable Y:

S^{Y}_{a_{ij}} = \frac{\bar{a}_{ij}}{\bar{Y}} \cdot \frac{\partial Y}{\partial a_{ij}}

Note that all of the partial derivatives are evaluated at the nominal (i.e., geometric mean) scenario, \bar{a}. These measures can be combined to determine the total sensitivity, S^{Y}_{T}:

S^{Y}_{T} = S^{Y}_{a_{12}} + S^{Y}_{a_{13}} + S^{Y}_{a_{23}}

Morgan and Henrion (1990) refer to this metric as a measure of elasticity, and discuss the use of additional related sensitivity metrics.

3. Sensitivity of the Consistency Ratio to Uncertainty in Group Judgments

This section illustrates that the consistency ratio is robust to uncertainty (small perturbations) in the group judgments of a 3 by 3 pairwise comparison matrix. Using the metric above, the sensitivity of the CR arising from uncertainties in judgment a_{12} is denoted:

S^{CR}_{a_{12}} = \frac{\bar{a}_{12}}{\overline{CR}} \cdot \frac{\partial CR}{\partial a_{12}}

The factor \bar{a}_{12} / \overline{CR} ensures that this measure of uncertainty is unaffected by the units of measurement of both the input and output variables. Similar measures can be defined for the two remaining group judgments, a_{13} and a_{23}. As shown in Figures 1 through 3, S^{CR}_{T} should be relatively small, because the individual sensitivities S^{CR}_{a_{12}}, S^{CR}_{a_{13}}, and S^{CR}_{a_{23}} are shown to be small when the consistency ratio is the output variable. However, a drawback of this approach is that it does not consider the degree of variation in each input. A group input judgment that has a small sensitivity, but a large variation about its nominal value (due to uncertainty and perhaps disagreements among decision makers in the group), may lead to significant variation in the derived consistency ratio and eigenvector.
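The elasticity metric can be sketched numerically with a central finite difference; the nominal scenario below is hypothetical, chosen slightly inconsistent so that CR is nonzero and the ratio is well defined:

```python
def cr_3x3(a12, a13, a23, mrci=0.58):
    # Closed-form CR for a 3x3 reciprocal matrix (see text).
    t = (a12 * a23 / a13) ** (1.0 / 3.0)
    lam_max = 1.0 + t + 1.0 / t
    return (lam_max - 3.0) / (2.0 * mrci)

def elasticity(f, nominal, i, h=1e-6):
    # S^Y_a = (a / Y) * dY/da at the nominal scenario, with the partial
    # derivative estimated by a central finite difference.
    x = list(nominal)
    y0 = f(*x)
    x[i] = nominal[i] + h
    y_hi = f(*x)
    x[i] = nominal[i] - h
    y_lo = f(*x)
    return (nominal[i] / y0) * (y_hi - y_lo) / (2.0 * h)

nominal = (2.0, 4.0, 3.0)            # hypothetical group geometric means
s12 = elasticity(cr_3x3, nominal, 0)  # sensitivity of CR to a12
s13 = elasticity(cr_3x3, nominal, 1)  # sensitivity of CR to a13
```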

Figure 1. S^{CR}_{a_{12}} as a function of the input judgments, panels (a) and (b).

Figure 2. S^{CR}_{a_{13}} as a function of the input judgments, panels (a) and (b).

Figure 3. S^{CR}_{a_{23}} as a function of the input judgments, panels (a) and (b).

3.1 Gaussian Approximation to Measure the Variance of the Consistency Ratio

The Taylor series expansion provides a way to express deviations of an output from its nominal value, Y - \bar{Y}, in terms of deviations of the inputs from their nominal values, a_{ij} - \bar{a}_{ij}. Successive terms of the Taylor series contain higher-order powers of the deviations and higher-order derivatives of the function with respect to each input. If the deviations a_{ij} - \bar{a}_{ij} are relatively small, then the higher powers become very small. And if the output function is relatively smooth in the region of \bar{a}, the higher derivatives will also be small, and hence the higher-order terms can be safely ignored. Under these conditions, the Taylor series produces a good approximation. For example, consider an approximation for the variance of Y, assuming independence of the input judgments a_{12}, a_{13}, and a_{23}, using only Taylor series terms up to the second order:

\mathrm{var}[Y] \approx \sum_{ij} \left( \frac{\partial Y}{\partial a_{ij}} \right)^2 \mathrm{var}[a_{ij}]

From this expression one can see that, if the input judgments are independent, the variance of the output is approximately the sum of squares of the products of the standard deviation, \sigma[a_{ij}], and the sensitivity of each input:

\mathrm{var}[Y] \approx \left( \sigma[a_{12}] \frac{\partial Y}{\partial a_{12}} \right)^2 + \left( \sigma[a_{13}] \frac{\partial Y}{\partial a_{13}} \right)^2 + \left( \sigma[a_{23}] \frac{\partial Y}{\partial a_{23}} \right)^2

This implies that the variance of the output is given by a Gaussian approximation, where the total uncertainty in the output, expressed as a variance, is explicitly decomposed into the sum of the uncertainty contributions from the inputs. We can show that every pair of distinct input judgments, for example a_{12} and a_{13}, is uncorrelated by showing that their covariance is zero. The covariance of two elements a_{12} and a_{13} is defined by:

\mathrm{cov}[a_{12}, a_{13}] = E[(a_{12} - \mu_{12})(a_{13} - \mu_{13})]

where \mu_{12} and \mu_{13} are the means of a_{12} and a_{13}, respectively. Now, for any two distinct judgments in a pairwise comparison matrix, the geometric mean of the product of the two judgments is equal to the product of their geometric means. For example:

\overline{a_{12} a_{13}} = \bar{a}_{12} \cdot \bar{a}_{13}, \qquad \text{so} \qquad \mathrm{cov}[a_{12}, a_{13}] = 0

It follows that for a 3 x 3 matrix, \mathrm{cov}[a_{12}, a_{23}] = 0 and \mathrm{cov}[a_{13}, a_{23}] = 0. Finally, the covariance of two identical judgments can be expressed as the variance of a single judgment.
For example, the covariance between judgment a_{12} and itself becomes:

\mathrm{cov}[a_{12}, a_{12}] = E[(a_{12} - \mu_{12})^2] = \sigma^2[a_{12}]
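A numerical sketch of this first-order (Gaussian) error propagation, reusing the closed-form CR for the 3 by 3 case; the nominal values and input variances below are hypothetical:

```python
def cr_3x3(a12, a13, a23, mrci=0.58):
    # Closed-form CR for a 3x3 reciprocal matrix (see text).
    t = (a12 * a23 / a13) ** (1.0 / 3.0)
    lam_max = 1.0 + t + 1.0 / t
    return (lam_max - 3.0) / (2.0 * mrci)

def partial(f, nominal, i, h=1e-6):
    # Central-difference estimate of dY/da_ij at the nominal scenario.
    x = list(nominal)
    x[i] = nominal[i] + h
    y_hi = f(*x)
    x[i] = nominal[i] - h
    y_lo = f(*x)
    return (y_hi - y_lo) / (2.0 * h)

def var_output(f, nominal, input_vars):
    # First-order (Gaussian) approximation, assuming independent inputs:
    # var[Y] ~= sum_ij (dY/da_ij)^2 * var[a_ij]
    return sum(partial(f, nominal, i) ** 2 * v
               for i, v in enumerate(input_vars))

nominal = (2.0, 4.0, 3.0)        # hypothetical group geometric means
input_vars = (0.5, 1.0, 0.25)    # hypothetical judgment variances
v = var_output(cr_3x3, nominal, input_vars)
```

Because the approximation is linear in the input variances, doubling every var[a_ij] doubles the approximated var[CR].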

Figure 4. Variance of the consistency ratio, var[CR], as a function of the input variances var[a_{12}] and var[a_{13}].

The relationship between variations in the inputs, var[a_{12}] and var[a_{13}], and variations in the output, var[CR], is given in Figure 4, where illustrative values are assumed for the remaining quantities.

4. Sensitivity of Criteria Weights to Perturbations in Input Judgments

Assume that the pairwise comparisons in Figure 5 are obtained by taking the geometric mean and variance of the group judgments. Note that in Figure 5 there are three criteria: the flood frequency criterion (A), the flood velocity criterion (B), and the flood depth criterion (C).

Figure 5. Consistent matrix for the flooding criteria: frequency, velocity, and depth.

The weights of criteria A (frequency), B (velocity), and C (depth) are denoted w_A, w_B, and w_C. Using the sensitivity notation developed earlier, for group input a_{12} the sensitivity of w_A would be:

S^{w_A}_{a_{12}} = \frac{\bar{a}_{12}}{\bar{w}_A} \cdot \frac{\partial w_A}{\partial a_{12}}

The flooding criterion weight w_A should also be examined for sensitivity to changes in the group inputs a_{13} and a_{23}:

S^{w_A}_{a_{13}} = \frac{\bar{a}_{13}}{\bar{w}_A} \cdot \frac{\partial w_A}{\partial a_{13}} \qquad \text{and} \qquad S^{w_A}_{a_{23}} = \frac{\bar{a}_{23}}{\bar{w}_A} \cdot \frac{\partial w_A}{\partial a_{23}}

We next investigate how uncertainty in the aggregated judgment a_{12} affects the sensitivity of the frequency, velocity, and depth criteria weights. This is illustrated in Figures 6 through 8, respectively. Note that in Figure 7 the sensitivity metric

S^{w_B}_{a_{12}} = \frac{\bar{a}_{12}}{\bar{w}_B} \cdot \frac{\partial w_B}{\partial a_{12}}

is used because we are considering the sensitivity of the velocity criterion (criterion B). Finally, Figure 8 considers the sensitivity of the depth criterion (criterion C) to uncertainties in judgment a_{12} and hence uses the metric:

S^{w_C}_{a_{12}} = \frac{\bar{a}_{12}}{\bar{w}_C} \cdot \frac{\partial w_C}{\partial a_{12}}

Figure 6. S^{w_A}_{a_{12}} as a function of the input judgments, panels (a) and (b).
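The weight sensitivities defined above can be sketched numerically. The row geometric mean is used for the weights (for n = 3 it coincides with the principal eigenvector), and the nominal judgments below are hypothetical, chosen consistent so that the weights are exactly (0.6, 0.3, 0.1):

```python
def weights_3x3(a12, a13, a23):
    # Row geometric means of the reciprocal matrix, normalized to sum to 1;
    # for n = 3 this equals the principal right eigenvector.
    r1 = (1.0 * a12 * a13) ** (1.0 / 3.0)
    r2 = ((1.0 / a12) * 1.0 * a23) ** (1.0 / 3.0)
    r3 = ((1.0 / a13) * (1.0 / a23) * 1.0) ** (1.0 / 3.0)
    total = r1 + r2 + r3
    return [r1 / total, r2 / total, r3 / total]

def weight_elasticity(k, i, nominal, h=1e-6):
    # S^{w_k}_{a_ij} = (a_ij / w_k) * dw_k/da_ij at the nominal scenario,
    # with a central finite difference for the partial derivative.
    x = list(nominal)
    w0 = weights_3x3(*x)[k]
    x[i] = nominal[i] + h
    w_hi = weights_3x3(*x)[k]
    x[i] = nominal[i] - h
    w_lo = weights_3x3(*x)[k]
    return (nominal[i] / w0) * (w_hi - w_lo) / (2.0 * h)

nominal = (2.0, 6.0, 3.0)                # consistent: a13 = a12 * a23
w = weights_3x3(*nominal)                # (0.6, 0.3, 0.1)
s_wA = weight_elasticity(0, 0, nominal)  # sensitivity of w_A to a12
```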

Figure 7. S^{w_B}_{a_{12}} as a function of the input judgments, panels (a) and (b).

Figure 8. S^{w_C}_{a_{12}} as a function of the input judgments, panels (a) and (b).

Note that:

|S^{w_C}_{a_{12}}| > |S^{w_B}_{a_{12}}| > |S^{w_A}_{a_{12}}|

Hence, we can conclude that the weight of the depth criterion, w_C, is most affected by uncertainties in the group judgment a_{12}. We can also conclude that the weight of the velocity criterion, w_B, is more sensitive to uncertainties in a_{12} than is the frequency criterion weight, w_A.

References

Bolloju, N. 1997. On Problems and Issues in Discovery of Multi-Aggregations of Classification Problem-Solving Knowledge from Multiple Decision Makers. Association for Information Systems, 1997 Americas Conference, Indianapolis, Indiana, August 15-17, 1997.

Chan, C. and Wong, A.K.C. 1991. A Statistical Technique for Extracting Classificatory Knowledge from Databases. In: Knowledge Discovery in Databases, G. Piatetsky-Shapiro and W.J. Frawley, eds., AAAI Press.

Chaturvedi, C., Hutchinson, G.K., and Nazareth, D.L. 1993. Supporting complex real-time decision making through machine learning. Decision Support Systems.

Clemen, R. and Winkler, R.L. 1993. Aggregating Point Estimates: A Flexible Modeling Approach. Management Science, 39.

Davies, M. 1994. A Multicriteria Decision Model Application for Managing Group Decisions. Journal of the Operational Research Society, 45.

Markham, C. and Ragsdale, C.T. 1995. Combining Neural Networks and Statistical Predictions to Solve the Classification Problem in Discriminant Analysis. Decision Sciences.

Perez, C. and Barba-Romero, S. 1995. Three practical criteria of comparison among ordinal preference aggregating rules. European Journal of Operational Research, 85.

Ramanathan, R. and Ganesh, L.S. 1994. Group preference aggregation methods employed in AHP: An evaluation and an intrinsic process for deriving members' weightages. European Journal of Operational Research, 79, 249-265.

Saaty, T. 2003. An Axiomatic Framework for Aggregating Weights and Weight-Ratio Matrices. Proceedings of the Seventh International Symposium on the Analytic Hierarchy Process, August 7-9, 2003, Bali, Indonesia.


More information

Finding common ground when experts and models disagree:

Finding common ground when experts and models disagree: Finding common ground when experts and models disagree: Belief dominance and Climate Change R&D Policy Erin Baker, University of Massachusetts Valentina Bosetti, Bocconi University and FEEM Ahti Salo,

More information

Principal Component Analysis

Principal Component Analysis Principal Component Analysis Yingyu Liang yliang@cs.wisc.edu Computer Sciences Department University of Wisconsin, Madison [based on slides from Nina Balcan] slide 1 Goals for the lecture you should understand

More information

Machine learning comes from Bayesian decision theory in statistics. There we want to minimize the expected value of the loss function.

Machine learning comes from Bayesian decision theory in statistics. There we want to minimize the expected value of the loss function. Bayesian learning: Machine learning comes from Bayesian decision theory in statistics. There we want to minimize the expected value of the loss function. Let y be the true label and y be the predicted

More information

The approximation of piecewise linear membership functions and Łukasiewicz operators

The approximation of piecewise linear membership functions and Łukasiewicz operators Fuzz Sets and Sstems 54 25 275 286 www.elsevier.com/locate/fss The approimation of piecewise linear membership functions and Łukasiewicz operators József Dombi, Zsolt Gera Universit of Szeged, Institute

More information

Table of Contents. Multivariate methods. Introduction II. Introduction I

Table of Contents. Multivariate methods. Introduction II. Introduction I Table of Contents Introduction Antti Penttilä Department of Physics University of Helsinki Exactum summer school, 04 Construction of multinormal distribution Test of multinormality with 3 Interpretation

More information

PRINCIPAL COMPONENTS ANALYSIS

PRINCIPAL COMPONENTS ANALYSIS 121 CHAPTER 11 PRINCIPAL COMPONENTS ANALYSIS We now have the tools necessary to discuss one of the most important concepts in mathematical statistics: Principal Components Analysis (PCA). PCA involves

More information

Research Design - - Topic 15a Introduction to Multivariate Analyses 2009 R.C. Gardner, Ph.D.

Research Design - - Topic 15a Introduction to Multivariate Analyses 2009 R.C. Gardner, Ph.D. Research Design - - Topic 15a Introduction to Multivariate Analses 009 R.C. Gardner, Ph.D. Major Characteristics of Multivariate Procedures Overview of Multivariate Techniques Bivariate Regression and

More information

1 History of statistical/machine learning. 2 Supervised learning. 3 Two approaches to supervised learning. 4 The general learning procedure

1 History of statistical/machine learning. 2 Supervised learning. 3 Two approaches to supervised learning. 4 The general learning procedure Overview Breiman L (2001). Statistical modeling: The two cultures Statistical learning reading group Aleander L 1 Histor of statistical/machine learning 2 Supervised learning 3 Two approaches to supervised

More information

PREFERENCE MATRICES IN TROPICAL ALGEBRA

PREFERENCE MATRICES IN TROPICAL ALGEBRA PREFERENCE MATRICES IN TROPICAL ALGEBRA 1 Introduction Hana Tomášková University of Hradec Králové, Faculty of Informatics and Management, Rokitanského 62, 50003 Hradec Králové, Czech Republic e-mail:

More information

Figure 1: Visualising the input features. Figure 2: Visualising the input-output data pairs.

Figure 1: Visualising the input features. Figure 2: Visualising the input-output data pairs. Regression. Data visualisation (a) To plot the data in x trn and x tst, we could use MATLAB function scatter. For example, the code for plotting x trn is scatter(x_trn(:,), x_trn(:,));........... x (a)

More information

Dimensionality Reduction

Dimensionality Reduction Dimensionality Reduction Le Song Machine Learning I CSE 674, Fall 23 Unsupervised learning Learning from raw (unlabeled, unannotated, etc) data, as opposed to supervised data where a classification of

More information

ORDERING OBJECTS ON THE BASIS OF POTENTIAL FUZZY RELATION FOR GROUP EXPERT EVALUATION

ORDERING OBJECTS ON THE BASIS OF POTENTIAL FUZZY RELATION FOR GROUP EXPERT EVALUATION ORDERING OBJECTS ON THE BASIS OF POTENTIAL FUZZY RELATION FOR GROUP EXPERT EVALUATION B. Kh. Sanzhapov and R. B. Sanzhapov Volgograd State University of Architecture and Civil Engineering, Akademicheskaya

More information

PCA, Kernel PCA, ICA

PCA, Kernel PCA, ICA PCA, Kernel PCA, ICA Learning Representations. Dimensionality Reduction. Maria-Florina Balcan 04/08/2015 Big & High-Dimensional Data High-Dimensions = Lot of Features Document classification Features per

More information

Supplementary Material: A Robust Approach to Sequential Information Theoretic Planning

Supplementary Material: A Robust Approach to Sequential Information Theoretic Planning Supplementar aterial: A Robust Approach to Sequential Information Theoretic Planning Sue Zheng Jason Pacheco John W. Fisher, III. Proofs of Estimator Properties.. Proof of Prop. Here we show how the bias

More information

Kernel Methods. Machine Learning A W VO

Kernel Methods. Machine Learning A W VO Kernel Methods Machine Learning A 708.063 07W VO Outline 1. Dual representation 2. The kernel concept 3. Properties of kernels 4. Examples of kernel machines Kernel PCA Support vector regression (Relevance

More information

Comparison of Modern Stochastic Optimization Algorithms

Comparison of Modern Stochastic Optimization Algorithms Comparison of Modern Stochastic Optimization Algorithms George Papamakarios December 214 Abstract Gradient-based optimization methods are popular in machine learning applications. In large-scale problems,

More information

Introduction to Machine Learning

Introduction to Machine Learning 10-701 Introduction to Machine Learning PCA Slides based on 18-661 Fall 2018 PCA Raw data can be Complex, High-dimensional To understand a phenomenon we measure various related quantities If we knew what

More information

Biostatistics in Research Practice - Regression I

Biostatistics in Research Practice - Regression I Biostatistics in Research Practice - Regression I Simon Crouch 30th Januar 2007 In scientific studies, we often wish to model the relationships between observed variables over a sample of different subjects.

More information

Recursive Generalized Eigendecomposition for Independent Component Analysis

Recursive Generalized Eigendecomposition for Independent Component Analysis Recursive Generalized Eigendecomposition for Independent Component Analysis Umut Ozertem 1, Deniz Erdogmus 1,, ian Lan 1 CSEE Department, OGI, Oregon Health & Science University, Portland, OR, USA. {ozertemu,deniz}@csee.ogi.edu

More information

Uncertainty correlation and Monte Carlo sampling in LCA

Uncertainty correlation and Monte Carlo sampling in LCA The world s most consistent and transparent Life Cycle Inventory database Uncertainty correlation and Monte Carlo sampling in LCA LCAXV, Vancouver, Canada 07 October 2015 Guillaume Bourgault, Ph.D Project

More information

DECISION MAKING SUPPORT AND EXPERT SYSTEMS

DECISION MAKING SUPPORT AND EXPERT SYSTEMS 325 ITHEA DECISION MAKING SUPPORT AND EXPERT SYSTEMS UTILITY FUNCTION DESIGN ON THE BASE OF THE PAIRED COMPARISON MATRIX Stanislav Mikoni Abstract: In the multi-attribute utility theory the utility functions

More information

2: Distributions of Several Variables, Error Propagation

2: Distributions of Several Variables, Error Propagation : Distributions of Several Variables, Error Propagation Distribution of several variables. variables The joint probabilit distribution function of two variables and can be genericall written f(, with the

More information

INTEGRATION OF GIS AND MULTICRITORIAL HIERARCHICAL ANALYSIS FOR AID IN URBAN PLANNING: CASE STUDY OF KHEMISSET PROVINCE, MOROCCO

INTEGRATION OF GIS AND MULTICRITORIAL HIERARCHICAL ANALYSIS FOR AID IN URBAN PLANNING: CASE STUDY OF KHEMISSET PROVINCE, MOROCCO Geography Papers 2017, 63 DOI: http://dx.doi.org/10.6018/geografia/2017/280211 ISSN: 1989-4627 INTEGRATION OF GIS AND MULTICRITORIAL HIERARCHICAL ANALYSIS FOR AID IN URBAN PLANNING: CASE STUDY OF KHEMISSET

More information

Linear and Non-Linear Dimensionality Reduction

Linear and Non-Linear Dimensionality Reduction Linear and Non-Linear Dimensionality Reduction Alexander Schulz aschulz(at)techfak.uni-bielefeld.de University of Pisa, Pisa 4.5.215 and 7.5.215 Overview Dimensionality Reduction Motivation Linear Projections

More information

Business Statistics: A First Course

Business Statistics: A First Course Business Statistics: A First Course 5 th Edition Chapter 7 Sampling and Sampling Distributions Basic Business Statistics, 11e 2009 Prentice-Hall, Inc. Chap 7-1 Learning Objectives In this chapter, you

More information

COMPARISON STUDY OF SENSITIVITY DEFINITIONS OF NEURAL NETWORKS

COMPARISON STUDY OF SENSITIVITY DEFINITIONS OF NEURAL NETWORKS COMPARISON SUDY OF SENSIIVIY DEFINIIONS OF NEURAL NEORKS CHUN-GUO LI 1, HAI-FENG LI 1 Machine Learning Center, Facult of Mathematics and Computer Science, Hebei Universit, Baoding 07100, China Department

More information

Simultaneous Orthogonal Rotations Angle

Simultaneous Orthogonal Rotations Angle ELEKTROTEHNIŠKI VESTNIK 8(1-2): -11, 2011 ENGLISH EDITION Simultaneous Orthogonal Rotations Angle Sašo Tomažič 1, Sara Stančin 2 Facult of Electrical Engineering, Universit of Ljubljana 1 E-mail: saso.tomaic@fe.uni-lj.si

More information

6.869 Advances in Computer Vision. Prof. Bill Freeman March 1, 2005

6.869 Advances in Computer Vision. Prof. Bill Freeman March 1, 2005 6.869 Advances in Computer Vision Prof. Bill Freeman March 1 2005 1 2 Local Features Matching points across images important for: object identification instance recognition object class recognition pose

More information

SCIENTIFIC APPLICATIONS OF THE AHP METHOD IN TRANSPORT PROBLEMS

SCIENTIFIC APPLICATIONS OF THE AHP METHOD IN TRANSPORT PROBLEMS THE ARCHIVES OF TRANSPORT ISSN (print): 0866-9546 Volume 29, Issue 1, 2014 e-issn (online): 2300-8830 DOI: 10.5604/08669546.1146966 SCIENTIFIC APPLICATIONS OF THE AHP METHOD IN TRANSPORT PROBLEMS Valentinas

More information

Lévy stable distribution and [0,2] power law dependence of. acoustic absorption on frequency

Lévy stable distribution and [0,2] power law dependence of. acoustic absorption on frequency Lév stable distribution and [,] power law dependence of acoustic absorption on frequenc W. Chen Institute of Applied Phsics and Computational Mathematics, P.O. Box 89, Division Box 6, Beijing 88, China

More information

NON-NUMERICAL RANKING BASED ON PAIRWISE COMPARISONS

NON-NUMERICAL RANKING BASED ON PAIRWISE COMPARISONS NON-NUMERICAL RANKING BASED ON PAIRWISE COMPARISONS By Yun Zhai, M.Sc. A Thesis Submitted to the School of Graduate Studies in partial fulfilment of the requirements for the degree of Ph.D. Department

More information

Machine Learning Basics Lecture 2: Linear Classification. Princeton University COS 495 Instructor: Yingyu Liang

Machine Learning Basics Lecture 2: Linear Classification. Princeton University COS 495 Instructor: Yingyu Liang Machine Learning Basics Lecture 2: Linear Classification Princeton University COS 495 Instructor: Yingyu Liang Review: machine learning basics Math formulation Given training data x i, y i : 1 i n i.i.d.

More information

What is Principal Component Analysis?

What is Principal Component Analysis? What is Principal Component Analysis? Principal component analysis (PCA) Reduce the dimensionality of a data set by finding a new set of variables, smaller than the original set of variables Retains most

More information

Machine learning for pervasive systems Classification in high-dimensional spaces

Machine learning for pervasive systems Classification in high-dimensional spaces Machine learning for pervasive systems Classification in high-dimensional spaces Department of Communications and Networking Aalto University, School of Electrical Engineering stephan.sigg@aalto.fi Version

More information

Learning Multiple Tasks with a Sparse Matrix-Normal Penalty

Learning Multiple Tasks with a Sparse Matrix-Normal Penalty Learning Multiple Tasks with a Sparse Matrix-Normal Penalty Yi Zhang and Jeff Schneider NIPS 2010 Presented by Esther Salazar Duke University March 25, 2011 E. Salazar (Reading group) March 25, 2011 1

More information

Stability of Limit Cycles in Hybrid Systems

Stability of Limit Cycles in Hybrid Systems Proceedings of the 34th Hawaii International Conference on Sstem Sciences - 21 Stabilit of Limit Ccles in Hbrid Sstems Ian A. Hiskens Department of Electrical and Computer Engineering Universit of Illinois

More information

I L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN

I L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN Introduction Edps/Psych/Stat/ 584 Applied Multivariate Statistics Carolyn J Anderson Department of Educational Psychology I L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN c Board of Trustees,

More information

Perturbation Theory for Variational Inference

Perturbation Theory for Variational Inference Perturbation heor for Variational Inference Manfred Opper U Berlin Marco Fraccaro echnical Universit of Denmark Ulrich Paquet Apple Ale Susemihl U Berlin Ole Winther echnical Universit of Denmark Abstract

More information

A New Space for Comparing Graphs

A New Space for Comparing Graphs A New Space for Comparing Graphs Anshumali Shrivastava and Ping Li Cornell University and Rutgers University August 18th 2014 Anshumali Shrivastava and Ping Li ASONAM 2014 August 18th 2014 1 / 38 Main

More information

THE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA

THE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA THE ROYAL STATISTICAL SOCIETY 4 EXAINATIONS SOLUTIONS GRADUATE DIPLOA PAPER I STATISTICAL THEORY & ETHODS The Societ provides these solutions to assist candidates preparing for the examinations in future

More information

Keypoint extraction: Corners Harris Corners Pkwy, Charlotte, NC

Keypoint extraction: Corners Harris Corners Pkwy, Charlotte, NC Kepoint etraction: Corners 9300 Harris Corners Pkw Charlotte NC Wh etract kepoints? Motivation: panorama stitching We have two images how do we combine them? Wh etract kepoints? Motivation: panorama stitching

More information

Inverse Sensitive Analysis of Pairwise Comparison Matrices

Inverse Sensitive Analysis of Pairwise Comparison Matrices Inverse Sensitive Analysis of Pairwise Comparison Matrices Kouichi TAJI Keiji Matsumoto October 18, 2005, Revised September 5, 2006 Abstract In the analytic hierarchy process (AHP), the consistency of

More information

Kernel PCA, clustering and canonical correlation analysis

Kernel PCA, clustering and canonical correlation analysis ernel PCA, clustering and canonical correlation analsis Le Song Machine Learning II: Advanced opics CSE 8803ML, Spring 2012 Support Vector Machines (SVM) 1 min w 2 w w + C j ξ j s. t. w j + b j 1 ξ j,

More information

CLOSE-TO-CLEAN REGULARIZATION RELATES

CLOSE-TO-CLEAN REGULARIZATION RELATES Worshop trac - ICLR 016 CLOSE-TO-CLEAN REGULARIZATION RELATES VIRTUAL ADVERSARIAL TRAINING, LADDER NETWORKS AND OTHERS Mudassar Abbas, Jyri Kivinen, Tapani Raio Department of Computer Science, School of

More information

First-Level Transitivity Rule Method for Filling in Incomplete Pair-Wise Comparison Matrices in the Analytic Hierarchy Process

First-Level Transitivity Rule Method for Filling in Incomplete Pair-Wise Comparison Matrices in the Analytic Hierarchy Process Appl. Math. Inf. Sci. 8, No. 2, 459-467 (2014) 459 Applied Mathematics & Information Sciences An International Journal http://dx.doi.org/10.12785/amis/080202 First-Level Transitivity Rule Method for Filling

More information

An Accurate Incremental Principal Component Analysis method with capacity of update and downdate

An Accurate Incremental Principal Component Analysis method with capacity of update and downdate 0 International Conference on Computer Science and Information echnolog (ICCSI 0) IPCSI vol. 5 (0) (0) IACSI Press, Singapore DOI: 0.7763/IPCSI.0.V5. An Accurate Incremental Principal Component Analsis

More information