Weighted K-Nearest Neighbor Revisited


M. Bicego
University of Verona
Verona, Italy

M. Loog
Delft University of Technology
Delft, The Netherlands

Abstract

In this paper we show that weighted K-Nearest Neighbor, a variation of the classic K-Nearest Neighbor, can be reinterpreted from a classifier combining perspective, specifically as a fixed combiner rule, the sum rule. Subsequently, we experimentally demonstrate that it can be rather beneficial to consider other combining schemes as well. In particular, we focus on trained combiners and illustrate the positive effect these can have on classification performance.

I. INTRODUCTION

The K-nearest neighbor (KNN) rule is a widely used and easy to implement classification rule. It assigns a point x to the class most present among the K points in the training set nearest to x [1], [2], [3], [4]. Deciding which points are nearest is done according to some prespecified distance. In this procedure, all K points within the neighborhood contribute equally to the final decision for x. It seems obvious, therefore, to allow for weighted voting (of some sort) in order to improve performance. Royall was probably the first to seriously consider this option [5]: he demonstrated that improvements are indeed possible in the regression setting under squared loss. In the classification setting, Dudani [6] was the first to introduce a specific distance-weighted KNN rule and provided empirical evidence of its admissibility. He discussed some alternatives to define the weights, all with weights dropping in terms of the distance to x, with a weight of 1 for the first nearest neighbor and a weight of 0 for the K-th. Given the weights, each neighbor of x contributes to the final decision with its own weight: in particular, the Weighted K-Nearest-Neighbor (WKNN) rule assigns x to that class for which the weights of the representatives among the K nearest neighbors sum to the greatest value [6] (see Fig. 1).
The weighting scheme introduced by Dudani [6], even when weights are cleverly chosen, is not necessarily helpful as, for instance, demonstrated in [7]. That paper showed that, asymptotically, unweighted KNN is to be preferred over any weighted version in case we fix K. However, when dealing with the realistic setting of finite samples, improvements are possible (see [8] for instance). Clearly, whether weighting can help also depends on what we consider as improvement [5], [8], [9]. Though weighted KNN rules are used in various applications, little conceptual, theoretical, or methodological advances have been made in the past decades. Two recent additions to this literature include [10] and [11]. In [10], a so-called dual distance function is considered, which turns out to be less sensitive to the choice of K and supposedly avoids degradation of the classification accuracy in the small sample case and when dealing with outliers. In [11], the authors derive an asymptotically optimal way of defining nonnegative weights to be used within the WKNN scheme. In this work, we reinterpret the Weighted KNN (and the KNN) from a classifier combining perspective [12]: we show that KNN can be seen as a plain majority voting scheme and, generally, the weighted KNN as a fixed combiner rule (the sum rule). This view opens the door to the use of other classifier combiners and we show that it can indeed be quite beneficial to consider alternative and more advanced schemes.

Fig. 1: Example of (a) K-Nearest Neighbor and (b) Weighted K-Nearest Neighbor (K = 3). With KNN, every neighbor counts in the same way for the final decision: in the case shown in the figure, the cross is assigned to the circle class, the most frequent class in the neighborhood. On the contrary, with Weighted KNN every neighbor has an associated weight; in the final decision, each neighbor counts with its own weight: in the example, since the sum of the weights of the neighbors from the square class is larger than that of the neighbors of the circle class, the cross is assigned to the square class.
In particular, here we focus on trained combining schemes [13], [14], for which our experiments demonstrate potentially significant improvements in classification performance over the original weighting scheme by Dudani [6].

A. Outline

Section II introduces the necessary background on KNN, its weighted variant, and classifier combiners, while fixing notation. Section III offers our interpretation of KNN as a combining scheme and sketches how various combiners could be integrated using the terminology of matching scores. The next section, Section IV, describes the experiments that were carried out with our revisited KNN using a trained combiner. It also reports on the results and discusses them. Finally, Section V concludes.

II. PRELIMINARIES AND ADDITIONAL BACKGROUND

In this section we introduce the necessary background on KNN, the WKNN, and the theory of classifier combiners, while fixing notation.

A. K-Nearest Neighbor

Let us start with some definitions:
- x: the pattern to be classified;
- {x_i} (with 1 ≤ i ≤ N): the set of N points in the training set; each training pattern is equipped with a label y_i (with 1 ≤ i ≤ N). The label y_i can be one of the possible values 1, ..., C, where C is the number of classes of the problem at hand;
- ne_K(x) = {n_1, ..., n_K}: the K points in the training set which are nearest to x according to a certain distance d(·,·); y_{n_1}, ..., y_{n_K} are the corresponding labels. Please note that we consider {n_1, ..., n_K} as ordered according to the distance from x: n_1 is the nearest neighbor, n_K is the farthest of the K nearest neighbors.

Given these definitions, the standard KNN rule assigns x to the class ĉ most frequent in the set ne_K(x), i.e.

    x → argmax_c |{n_i : y_{n_i} = c}|    (1)

where |X| denotes the cardinality of the set X. Rule (1) can be rewritten as

    x → argmax_c Σ_{i=1}^{K} I_c(n_i)    (2)

where I_c(z) is the indicator function for class c:

    I_c(z) = 1 if z belongs to class c, 0 otherwise    (3)

The summation in (2), for a given c, simply counts the number of points in the neighborhood ne_K(x) belonging to class c.

B. Weighted K-Nearest Neighbor

Within the Weighted K-Nearest Neighbor rule [6], each neighbor n_i ∈ ne_K(x) is equipped with a weight w_{n_i}, which can be computed using for example the methods presented in [6], [11]. Note that in the general setting, we may have a different set of weights for every point to be classified: when changing the point x to be classified, the neighborhood ne_K(x) also changes, and therefore so do the corresponding weights, as they typically depend directly on the relation between the neighbors and the point x.
This is clear, for instance, when considering the definition of weights introduced in Equation (2) of [6]:

    w_{n_i} = 1 / d(x, n_i)    (4)

With this definition, the weight of a given training example is different when changing the point x to be classified, since it depends on the distance from such x: the more distant the neighbor, the lower its weight/importance in the classification of x. This definition of weights takes inspiration from ideas typical of the Parzen Windows estimator [15]. Given neighbors and weights, the Weighted K-nearest neighbor rule assigns x to the class ĉ for which the weights of its representatives in the neighborhood ne_K(x) sum to the greatest value. Following the notation of Equation (2),

    x → argmax_c Σ_{i=1}^{K} I_c(n_i) w_{n_i}    (5)

Clearly, the KNN and the Weighted KNN rules are equivalent when K = 1.

C. Classifier Combining

The main idea behind classifier combining theory [16], [12] is that it is possible to improve the classification accuracy by exploiting the diversity present in different pattern recognition systems. Such diversity can derive from the employment of different sensors, different features, different training sets, different classifiers, or others [12]. In particular, here we focus on the following scenario: we have a set of M different classifiers (experts) E_1, ..., E_M. Given a classification problem involving C classes, and a pattern x to be classified, every classifier E_l returns a set of values E_l(x):

    E_l(x) = [e_{l1}(x), e_{l2}(x), ..., e_{lC}(x)]

where e_{lc}(x) can be a posterior of the class c, i.e. e_{lc}(x) = P(c | x), or simply a matching score, i.e. a number indicating how likely it is that the class of x is c (called confidences in [12]). A given classifier (expert) E_l takes a decision on x with the following rule:

    x → argmax_c e_{lc}(x)    (6)

Given a pool of M classifiers, the goal is to combine the values present in the following matrix

    E(x) = [ e_{11}(x)  e_{12}(x)  ...  e_{1C}(x)
             e_{21}(x)  e_{22}(x)  ...  e_{2C}(x)
               ...
             e_{M1}(x)  e_{M2}(x)  ...  e_{MC}(x) ]

to reach a classification that is potentially better than those of the single classifiers.
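Before turning to combination methods, the two neighbor rules of Sections II-A and II-B can be sketched in a few lines. This is a minimal sketch, not the paper's code: function and variable names are ours, and the weights follow the inverse-distance definition of Eq. (4).

```python
import numpy as np

def knn(x, X, y, K, C):
    """Plain KNN, Eq. (2): each of the K nearest neighbors casts one vote."""
    nn = np.argsort(np.linalg.norm(X - x, axis=1))[:K]  # ne_K(x), ordered by distance
    return np.argmax(np.bincount(y[nn], minlength=C))   # most frequent class wins

def wknn(x, X, y, K, C):
    """Weighted KNN, Eq. (5), with the inverse-distance weights of Eq. (4)."""
    d = np.linalg.norm(X - x, axis=1)
    nn = np.argsort(d)[:K]
    w = 1.0 / np.maximum(d[nn], 1e-12)   # w_{n_i} = 1 / d(x, n_i), guarded against d = 0
    scores = np.zeros(C)
    np.add.at(scores, y[nn], w)          # each neighbor adds its weight to its class
    return np.argmax(scores)
```

On a toy set the two rules can disagree: with X = [[0], [1], [1.1]], y = [0, 1, 1], and K = 3, the query x = [0.1] goes to class 1 under knn (two votes against one) but to class 0 under wknn, since the single very close neighbor outweighs the two distant ones.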
Many methods have been proposed in the past to address this problem ([16], [12], [17], [18], just to cite a few), which are based on different ideas, intuitions, or hypotheses. Here we summarize the following three classes of approaches, which will become useful in the remainder of this work.

1) Combination of decisions: In this case, each expert E_l takes its own decision; the final classification is then obtained by combining such decisions. One relevant example is the majority voting rule, where the final decision is taken by looking at the class which received the majority of votes. More formally,

    x → argmax_c Σ_{l=1}^{M} Δ_{lc}(x)    (7)

where

    Δ_{lc}(x) = 1 if e_{lc}(x) = max_j e_{lj}(x), 0 otherwise    (8)

In other words, Δ_{lc}(x) is 1 only if the classifier E_l assigns x to the class c.

2) Fixed combination of matching scores: In this case, for a given class c, the matching scores e_{lc}(x) of the different classifiers (with 1 ≤ l ≤ M) are combined together, in order to return a unique matching score for the considered class. The combination of these scores follows fixed rules, such as their sum or product, the max or the min among them, a linear combination of them, and similar [12]. The final decision is then taken by looking at these aggregated matching scores. For example, with the Sum Rule, a pattern x is classified with the following rule:

    x → argmax_c Σ_{l=1}^{M} e_{lc}(x)    (9)

whereas with the Prod Rule we have

    x → argmax_c Π_{l=1}^{M} e_{lc}(x)    (10)

3) Trained combiners: This represents a more advanced scheme [13], [14], in which the idea is to directly use the scores gathered in the matrix E(x) as new features for the pattern x: in this way a classifier is learned on the outputs of other classifiers, following what is sometimes referred to as stacked combination [19]. In more detail, a pattern x is described by vec(E(x)), where the so-called vec(·) operator (vectorization) takes a matrix argument and returns a vector with the matrix elements stacked column by column. In the training phase, the vectorized E(x_i) matrix is computed for all objects x_i of the training set, resulting in a novel training set, which is used to train a classifier f. In the testing phase, the testing object x is first encoded with vec(E(x)) and then classified using the classifier f.

III. THE WEIGHTED KNN RULE REVISITED

In this section we propose an interpretation of the WKNN rule (and the KNN rule) from a combining classifier perspective. The main idea behind our interpretation is the following: in the (W)KNN the final decision on x is obtained by combining information provided by the K nearest neighbors ne_K(x) = {n_1, ..., n_K} of x.
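The first two families of rules operate directly on a score matrix E(x) of shape M × C. A minimal sketch (function names are ours, not the paper's):

```python
import numpy as np

def majority_vote(E):
    """Eq. (7): each expert (row) votes for its best class; the most voted class wins."""
    votes = np.bincount(np.argmax(E, axis=1), minlength=E.shape[1])
    return np.argmax(votes)

def sum_rule(E):
    """Eq. (9): sum the scores of each class over all experts."""
    return np.argmax(E.sum(axis=0))

def prod_rule(E):
    """Eq. (10): multiply the scores of each class over all experts."""
    return np.argmax(E.prod(axis=0))
```

With E = [[0.6, 0.4], [0.7, 0.3], [0.1, 0.9]], majority voting picks class 0 (two experts out of three prefer it), while the sum rule picks class 1 (aggregated score 1.6 vs 1.4): the rules are genuinely different decision strategies.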
Therefore it seems reasonable to consider these K points as different experts/classifiers, which provide information to be combined for reaching the final decision. Let us clarify our vision by first considering the KNN: we will show how to build the E(x) matrix, and which combination rule should be used to get exactly the KNN rule. As said before, we have K experts/classifiers, which we indicate as E_{n_1}, ..., E_{n_K}, each one related to one specific neighbor n_l. In the KNN case, the elements of the matrix E^KNN(x) are defined, for l ∈ {1..K} and c ∈ {1..C}, as:

    e_{n_l c}(x) = a if y_{n_l} = c, 0 otherwise    (11)

with a a fixed positive number¹ (it can also be 1). For example, if K = 3, C = 4, and y_{n_1} = 1, y_{n_2} = 1, and y_{n_3} = 2, the matrix E^KNN(x) is

    E^KNN(x) = [ a 0 0 0
                 a 0 0 0
                 0 a 0 0 ]

Given this formulation, if we apply the majority voting rule defined in Equation (7) we have to perform two steps: i) take a decision for each classifier (each row), which is done by taking the maximum over the row; ii) assign x to the class which received the majority of votes. In this way we obtain exactly the K-nearest neighbor classifier: given the definition in Equation (11), every expert (neighbor) votes for the class corresponding to its label, and the final class is decided by looking at the most voted class, which is exactly the most frequent class in the neighborhood².

In the case of Weighted KNN, we define the elements of the matrix E^W(x), for l ∈ {1..K} and c ∈ {1..C}, as:

    e_{n_l c}(x) = w_{n_l} if y_{n_l} = c, 0 otherwise    (12)

For example, for the problem introduced before (K = 3, C = 4, y_{n_1} = 1, y_{n_2} = 1, and y_{n_3} = 2),

    E^W(x) = [ w_{n_1} 0 0 0
               w_{n_2} 0 0 0
               0 w_{n_3} 0 0 ]

Given this definition, if we apply the Sum Rule described in Equation (9) to E^W(x), we have to perform two steps: i) aggregate the scores for every class, which is done by summing the values contained in each column; ii) assign x to the class for which this aggregated score is maximum. It is straightforward to note that this is exactly the decision rule proposed by the Weighted KNN rule described in Equation (5).

A. Normalization of E(x)

In many combination rules, before applying a combination scheme, the matching scores (confidences) E_l(x) should be normalized, in order to get values which are comparable among the different classifiers (see [12], chap. 5); this is especially true when using trained combiners [13], such as those described in the previous section. In the following we provide some intuitions on what happens when using a common and established normalization scheme, the so-called Soft-Max normalization [15].

¹ Please note that e_{n_l c}(x) can be defined in a more compact way using the indicator function I_c(z) of Equation (2). However, for clarity, here we presented this more verbose formulation.
² Please note that in this case we also get the KNN rule by applying the sum rule (since a is a constant).

After this normalization, the matching scores are in the range [0, 1]; moreover, for every classifier, they sum to 1, so that they can be interpreted, with somewhat an abuse of interpretation, as posterior probabilities; this is especially useful when trying to derive theoretical properties as in [16]. When applied to our case, each e_{lc}(x) of E(x) is transformed into ê_{lc}(x) via the following formula:

    ê_{lc}(x) = e^{e_{lc}(x)} / Σ_{j=1}^{C} e^{e_{lj}(x)}    (13)

With this normalization, E^KNN(x) is transformed into Ê^KNN(x), where

    ê_{n_l c}(x) = e^a / R if y_{n_l} = c, 1/R otherwise    (14)

where

    R = (C − 1) + e^a    (15)

is the normalization factor present in the denominator of Equation (13). It is straightforward to observe that, given this normalized Ê^KNN(x), the KNN rule is still obtained by applying the majority voting rule to Ê^KNN(x). On the contrary, after this normalization, the Weighted KNN rule becomes equivalent to another fixed rule, namely the prod rule. Actually, Ê^W(x) is defined as

    ê_{n_l c}(x) = e^{w_{n_l}} / R_l if y_{n_l} = c, 1/R_l otherwise    (16)

where R_l is again the normalization factor of Equation (13), which in this case is different for different neighbors n_l, and is defined as

    R_l = (C − 1) + e^{w_{n_l}}    (17)

If we consider the prod rule of Equation (10) applied to Ê^W(x), we have

    x → argmax_c Π_{l=1}^{K} ê_{n_l c}(x)    (18)

Taking the log does not affect the argument of the max, therefore an equivalent rule is:

    x → argmax_c Σ_{l=1}^{K} log ê_{n_l c}(x)    (19)

which becomes

    x → argmax_c Σ_{l=1}^{K} log ê_{n_l c}(x)    (20)
      = argmax_c Σ_{l=1}^{K} ( e_{n_l c}(x) − log R_l )    (21)
      = argmax_c [ Σ_{l=1}^{K} e_{n_l c}(x) − Σ_{l=1}^{K} log R_l ]    (22)
      = argmax_c Σ_{l=1}^{K} e_{n_l c}(x)    (23)

where we dropped the last term because it is equal among all classes. The resulting rule is equivalent to the Weighted KNN rule of Equation (5). Summarizing, here we provided a revisitation of the KNN and the Weighted KNN rules from the classifier combining perspective: this opens the door to the possibility of using different (even complex) combination strategies.
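The constructions above can be checked numerically. The following is a minimal sketch for the worked example of Section III (K = 3, C = 4, labels y_{n_1} = y_{n_2} = 1, y_{n_3} = 2, written 0-based); the weight values are illustrative assumptions, but the equivalence holds for any positive weights.

```python
import numpy as np

labels = np.array([0, 0, 1])   # y_{n_1} = y_{n_2} = 1, y_{n_3} = 2, as 0-based indices
C = 4
w = np.array([1.0, 0.8, 0.5])  # illustrative weights w_{n_l}

E_w = np.zeros((3, C))         # Eq. (12): w_{n_l} in column y_{n_l}, 0 elsewhere
E_w[np.arange(3), labels] = w

# Soft-Max of Eq. (13), applied row by row (one row per neighbor/expert)
E_hat = np.exp(E_w) / np.exp(E_w).sum(axis=1, keepdims=True)

sum_decision = np.argmax(E_w.sum(axis=0))      # sum rule on E^W: the WKNN rule, Eq. (5)
prod_decision = np.argmax(E_hat.prod(axis=0))  # prod rule on normalized scores, Eq. (18)
assert sum_decision == prod_decision           # the equivalence derived in Eqs. (20)-(23)
```

The assertion holds for any score matrix built this way: dividing each row by its normalization factor R_l shifts every class's log-score by the same amount, so the argmax is untouched.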
We will provide some evidence, in the experimental section, that using a trained combiner permits improving the performance of both the KNN and the WKNN rule.

IV. EXPERIMENTAL RESULTS

In this section, we provide some empirical evidence that the perspective introduced in this paper permits exploiting advanced combination techniques, such as those represented by trained combiners [13], [14]. In particular, in our empirical evaluation we compare three techniques:

1) KNN: this is the classic K-Nearest Neighbor rule. As we have shown in Section III, this corresponds to the majority vote rule applied to the E^KNN matrix defined in Equation (11) as well as to the Ê^W defined in Equation (16).

2) WKNN: this is the original Weighted K-Nearest Neighbor rule described in [6] and presented in Section II-B. This corresponds to the sum rule applied to the E^W matrix defined in Equation (12) or to the prod rule applied to the Ê^W defined in Equation (16).

3) WKNN (TrainedComb): in this case we applied a trained combiner scheme: as explained in Section II-C, with this scheme every pattern is described by the vectorization of its corresponding matrix of scores, which is used as the feature vector to represent it. In other words, all the objects of the problem are mapped into a novel feature space, where another classifier is used. Here we adopt the decision template scheme proposed in [12], which represents one of the first and most basic trained combiners. In more detail, for every pattern x_i of the training set we compute the matrix Ê^W(x_i), as defined in Equation (16); we used the normalized scores, as suggested in [13]. For every training point x_i, the corresponding neighborhood ne_K(x_i) is determined without considering x_i (this can partially prevent the overtraining situation which may occur with trained combiners; for a discussion on these aspects see [13]). Given this novel feature space, the Nearest Mean Classifier [15] is used as classifier.
In particular, for every class c, we compute the mean of the vectorized scores of the x_i belonging to class c: this averaged vectorized score then represents the template t_c of such class:

    t_c = mean_{x_i s.t. y_i = c} vec(Ê(x_i))    (24)

Finally, the testing object x is classified by looking at the similarity between its vec(Ê(x)) and the different classes' templates t_c, assigning it to the nearest template:

    x → argmin_c d(vec(Ê(x)), t_c)    (25)

where d(·, ·) is a distance between vectors. For more details, interested readers can refer to the corresponding subsection of [12]. The three techniques have been tested using 6 different classic datasets (from the UCI-ML repository), whose characteristics are summarized in Table I. All datasets have been normalized so that every feature has zero mean and unit variance. As distance to compute neighbors we used the classical Euclidean distance.

TABLE I: Description of the datasets (Objects, Classes, Features) for Sonar, Soybean, Ionosphere, Wine, Breast, and Bananas.

WKNN weights are computed using Equation (1) of Dudani's paper [6]:

    w_{n_i} = (d(x, n_K) − d(x, n_i)) / (d(x, n_K) − d(x, n_1))  if d(x, n_K) ≠ d(x, n_1),  1 otherwise    (26)

In this way weights are normalized between 0 and 1 (1 the weight of the nearest neighbor, 0 the weight of the farthest neighbor). We let K vary from 5 to 45 (step 2). Classification errors have been computed using the Averaged Holdout Cross-validation protocol: the dataset is randomly split into two parts, one used for training and the other used for testing; the procedure has been repeated 30 times. Results are shown in Figure 2. From the plots it can be observed that the Trained Combiner rule permits improving the accuracy of both KNN and WKNN. This is more evident when the problem lives in a high-dimensional space; for moderately dimensional spaces we cannot observe such a drastic improvement. One interesting observation derives from looking at the behavior for large K. Apparently, the Trained Combiner scheme does not suffer too much from a bad choice of K; this may be due to the fact that adding neighbors to the analysis simply corresponds to a different normalization of the feature space induced by vec(Ê(x)). In more detail, adding neighbors changes d(x, n_K), which results in a shift (the numerator) and in a rescaling (the denominator) of the weight defined in Equation (26).
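The decision-template combiner of Eqs. (24)-(25) reduces to a nearest-mean classifier in the space of vectorized scores. A minimal sketch, assuming the rows of V stand for vec(Ê(x_i)) (function and variable names are ours):

```python
import numpy as np

def fit_templates(V, y, C):
    """Eq. (24): the template t_c is the mean vectorized score vector of class c.
    V has one row per training object; y holds 0-based class labels."""
    return np.stack([V[y == c].mean(axis=0) for c in range(C)])

def classify(v, templates):
    """Eq. (25): assign x to the class whose template is nearest (Euclidean) to v."""
    return np.argmin(np.linalg.norm(templates - v, axis=1))
```

For instance, with V = [[1, 0], [0.9, 0.1], [0, 1], [0.1, 0.9]] and y = [0, 0, 1, 1], the fitted templates are [0.95, 0.05] and [0.05, 0.95], and a test vector [0.8, 0.2] is assigned to class 0.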
Since we consider such weights as features in the novel space, adding neighbors simply results in a different scaling, which seems not to affect the final classification too much.

V. CONCLUSIONS

In this paper we revisited the Weighted K-Nearest Neighbor (and the K-Nearest Neighbor) scheme under a classifier combining perspective. Assuming this view, WKNN implements a fixed combiner rule, whereas KNN a majority voting rule. Then we provided some evidence that classification improvements are possible when using other classifier combining techniques, such as trained combiners.

ACKNOWLEDGEMENTS

This work was partially supported by the University of Verona through the CooperInt Program 2014 Edition. The authors are extremely grateful for all the guidance and inspiration that O. Ai Preti offered.

REFERENCES

[1] E. Fix and J. L. Hodges Jr., "Discriminatory analysis - nonparametric discrimination: Consistency properties," DTIC Document, Tech. Rep.
[2] E. Fix and J. L. Hodges Jr., "Discriminatory analysis - nonparametric discrimination: Small sample performance," DTIC Document, Tech. Rep.
[3] T. Cover and P. Hart, "The nearest neighbor decision rule," IEEE Trans. Inform. Theory, vol. IT-13.
[4] L. Devroye, L. Györfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition. Springer Science & Business Media, 2013.
[5] R. M. Royall, "A class of non-parametric estimates of a smooth regression function," Ph.D. dissertation, Dept. of Statistics, Stanford University.
[6] S. Dudani, "The distance-weighted k-nearest-neighbor rule," IEEE Trans. on Systems, Man, and Cybernetics, vol. SMC-6, no. 4.
[7] T. Bailey and A. Jain, "A note on distance-weighted k-nearest neighbor rules," IEEE Transactions on Systems, Man, and Cybernetics, no. 4.
[8] J. E. MacLeod, A. Luk, and D. M. Titterington, "A re-examination of the distance-weighted k-nearest neighbor classification rule," IEEE Transactions on Systems, Man, and Cybernetics, vol. 17, no. 4.
[9] J. F. Banzhaf III, "Weighted voting doesn't work: A mathematical analysis," Rutgers L. Rev., vol. 19, p. 317.
[10] J. Gou, L. Du, Y. Zhang, and T. Xiong, "A new distance-weighted k-nearest neighbor classifier," Journal of Information & Computational Science, vol. 9, no. 6.
[11] R. Samworth, "Optimal weighted nearest neighbour classifiers," The Annals of Statistics, vol. 40, no. 5.
[12] L. Kuncheva, Combining Pattern Classifiers: Methods and Algorithms. Wiley.
[13] R. Duin, "The combining classifier: To train or not to train?" in Proc. Int. Conf. on Pattern Recognition, 2002.
[14] L. Kuncheva, J. Bezdek, and R. Duin, "Decision templates for multiple classifier fusion: an experimental comparison," Pattern Recognition, vol. 34, no. 2.
[15] R. Duda, P. Hart, and D. Stork, Pattern Classification, 2nd ed. John Wiley & Sons.
[16] J. Kittler, M. Hatef, R. Duin, and J. Matas, "On combining classifiers," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 3.
[17] A. Ross, K. Nandakumar, and A. Jain, Handbook of Multibiometrics. Springer.
[18] G. Fumera and F. Roli, "A theoretical and experimental analysis of linear combiners for multiple classifier systems," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 6.
[19] D. Wolpert, "Stacked generalization," Neural Networks, vol. 5, no. 2.

Fig. 2: Cross-validation errors of the tested techniques (KNN, WKNN, WKNN (TrainedComb)) for the different datasets: (a) Sonar; (b) Soybean2; (c) Ionosphere; (d) Wine; (e) Breast; (f) Bananas.


More information

Case I: 2 users In case of 2 users, the probability of error for user 1 was earlier derived to be 2 A1

Case I: 2 users In case of 2 users, the probability of error for user 1 was earlier derived to be 2 A1 MUTLIUSER DETECTION (Letures 9 and 0) 6:33:546 Wireless Communiations Tehnologies Instrutor: Dr. Narayan Mandayam Summary By Shweta Shrivastava (shwetash@winlab.rutgers.edu) bstrat This artile ontinues

More information

On the Bit Error Probability of Noisy Channel Networks With Intermediate Node Encoding I. INTRODUCTION

On the Bit Error Probability of Noisy Channel Networks With Intermediate Node Encoding I. INTRODUCTION 5188 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 11, NOVEMBER 2008 [8] A. P. Dempster, N. M. Laird, and D. B. Rubin, Maximum likelihood estimation from inomplete data via the EM algorithm, J.

More information

Chapter 8 Hypothesis Testing

Chapter 8 Hypothesis Testing Leture 5 for BST 63: Statistial Theory II Kui Zhang, Spring Chapter 8 Hypothesis Testing Setion 8 Introdution Definition 8 A hypothesis is a statement about a population parameter Definition 8 The two

More information

Advanced Computational Fluid Dynamics AA215A Lecture 4

Advanced Computational Fluid Dynamics AA215A Lecture 4 Advaned Computational Fluid Dynamis AA5A Leture 4 Antony Jameson Winter Quarter,, Stanford, CA Abstrat Leture 4 overs analysis of the equations of gas dynamis Contents Analysis of the equations of gas

More information

Feature Selection by Independent Component Analysis and Mutual Information Maximization in EEG Signal Classification

Feature Selection by Independent Component Analysis and Mutual Information Maximization in EEG Signal Classification Feature Seletion by Independent Component Analysis and Mutual Information Maximization in EEG Signal Classifiation Tian Lan, Deniz Erdogmus, Andre Adami, Mihael Pavel BME Department, Oregon Health & Siene

More information

Coding for Random Projections and Approximate Near Neighbor Search

Coding for Random Projections and Approximate Near Neighbor Search Coding for Random Projetions and Approximate Near Neighbor Searh Ping Li Department of Statistis & Biostatistis Department of Computer Siene Rutgers University Pisataay, NJ 8854, USA pingli@stat.rutgers.edu

More information

State Diagrams. Margaret M. Fleck. 14 November 2011

State Diagrams. Margaret M. Fleck. 14 November 2011 State Diagrams Margaret M. Flek 14 November 2011 These notes over state diagrams. 1 Introdution State diagrams are a type of direted graph, in whih the graph nodes represent states and labels on the graph

More information

Hankel Optimal Model Order Reduction 1

Hankel Optimal Model Order Reduction 1 Massahusetts Institute of Tehnology Department of Eletrial Engineering and Computer Siene 6.245: MULTIVARIABLE CONTROL SYSTEMS by A. Megretski Hankel Optimal Model Order Redution 1 This leture overs both

More information

A Probabilistic Fusion Framework

A Probabilistic Fusion Framework A Probabilisti Fusion Framework ABSTRACT Yael Anava Faulty of IE&M, Tehnion yaelan@tx.tehnion.a.il Oren Kurland Faulty of IE&M, Tehnion kurland@ie.tehnion.a.il There are numerous methods for fusing doument

More information

Performing Two-Way Analysis of Variance Under Variance Heterogeneity

Performing Two-Way Analysis of Variance Under Variance Heterogeneity Journal of Modern Applied Statistial Methods Volume Issue Artile 3 5--003 Performing Two-Way Analysis of Variane Under Variane Heterogeneity Sott J. Rihter University of North Carolina at Greensboro, sjriht@ung.edu

More information

Error Bounds for Context Reduction and Feature Omission

Error Bounds for Context Reduction and Feature Omission Error Bounds for Context Redution and Feature Omission Eugen Bek, Ralf Shlüter, Hermann Ney,2 Human Language Tehnology and Pattern Reognition, Computer Siene Department RWTH Aahen University, Ahornstr.

More information

Robust Recovery of Signals From a Structured Union of Subspaces

Robust Recovery of Signals From a Structured Union of Subspaces Robust Reovery of Signals From a Strutured Union of Subspaes 1 Yonina C. Eldar, Senior Member, IEEE and Moshe Mishali, Student Member, IEEE arxiv:87.4581v2 [nlin.cg] 3 Mar 29 Abstrat Traditional sampling

More information

REFINED UPPER BOUNDS FOR THE LINEAR DIOPHANTINE PROBLEM OF FROBENIUS. 1. Introduction

REFINED UPPER BOUNDS FOR THE LINEAR DIOPHANTINE PROBLEM OF FROBENIUS. 1. Introduction Version of 5/2/2003 To appear in Advanes in Applied Mathematis REFINED UPPER BOUNDS FOR THE LINEAR DIOPHANTINE PROBLEM OF FROBENIUS MATTHIAS BECK AND SHELEMYAHU ZACKS Abstrat We study the Frobenius problem:

More information

Estimating the probability law of the codelength as a function of the approximation error in image compression

Estimating the probability law of the codelength as a function of the approximation error in image compression Estimating the probability law of the odelength as a funtion of the approximation error in image ompression François Malgouyres Marh 7, 2007 Abstrat After some reolletions on ompression of images using

More information

Tests of fit for symmetric variance gamma distributions

Tests of fit for symmetric variance gamma distributions Tests of fit for symmetri variane gamma distributions Fragiadakis Kostas UADPhilEon, National and Kapodistrian University of Athens, 4 Euripidou Street, 05 59 Athens, Greee. Keywords: Variane Gamma Distribution,

More information

Research Article Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations

Research Article Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations Computational Intelligene and Neurosiene Volume 2015, Artile ID 423581, 8 pages http://dx.doi.org/10.1155/2015/423581 Researh Artile Combining MC and SVM Classifiers for earning Based Deision Making: Analysis

More information

7 Max-Flow Problems. Business Computing and Operations Research 608

7 Max-Flow Problems. Business Computing and Operations Research 608 7 Max-Flow Problems Business Computing and Operations Researh 68 7. Max-Flow Problems In what follows, we onsider a somewhat modified problem onstellation Instead of osts of transmission, vetor now indiates

More information

CMSC 451: Lecture 9 Greedy Approximation: Set Cover Thursday, Sep 28, 2017

CMSC 451: Lecture 9 Greedy Approximation: Set Cover Thursday, Sep 28, 2017 CMSC 451: Leture 9 Greedy Approximation: Set Cover Thursday, Sep 28, 2017 Reading: Chapt 11 of KT and Set 54 of DPV Set Cover: An important lass of optimization problems involves overing a ertain domain,

More information

Methods of evaluating tests

Methods of evaluating tests Methods of evaluating tests Let X,, 1 Xn be i.i.d. Bernoulli( p ). Then 5 j= 1 j ( 5, ) T = X Binomial p. We test 1 H : p vs. 1 1 H : p>. We saw that a LRT is 1 if t k* φ ( x ) =. otherwise (t is the observed

More information

INTRO VIDEOS. LESSON 9.5: The Doppler Effect

INTRO VIDEOS. LESSON 9.5: The Doppler Effect DEVIL PHYSICS BADDEST CLASS ON CAMPUS IB PHYSICS INTRO VIDEOS Big Bang Theory of the Doppler Effet Doppler Effet LESSON 9.5: The Doppler Effet 1. Essential Idea: The Doppler Effet desribes the phenomenon

More information

Applying CIECAM02 for Mobile Display Viewing Conditions

Applying CIECAM02 for Mobile Display Viewing Conditions Applying CIECAM2 for Mobile Display Viewing Conditions YungKyung Park*, ChangJun Li*, M.. Luo*, Youngshin Kwak**, Du-Sik Park **, and Changyeong Kim**; * University of Leeds, Colour Imaging Lab, UK*, **

More information

max min z i i=1 x j k s.t. j=1 x j j:i T j

max min z i i=1 x j k s.t. j=1 x j j:i T j AM 221: Advaned Optimization Spring 2016 Prof. Yaron Singer Leture 22 April 18th 1 Overview In this leture, we will study the pipage rounding tehnique whih is a deterministi rounding proedure that an be

More information

Probabilistic and nondeterministic aspects of Anonymity 1

Probabilistic and nondeterministic aspects of Anonymity 1 MFPS XX1 Preliminary Version Probabilisti and nondeterministi aspets of Anonymity 1 Catusia Palamidessi 2 INRIA and LIX Éole Polytehnique, Rue de Salay, 91128 Palaiseau Cedex, FRANCE Abstrat Anonymity

More information

Modeling Probabilistic Measurement Correlations for Problem Determination in Large-Scale Distributed Systems

Modeling Probabilistic Measurement Correlations for Problem Determination in Large-Scale Distributed Systems 009 9th IEEE International Conferene on Distributed Computing Systems Modeling Probabilisti Measurement Correlations for Problem Determination in Large-Sale Distributed Systems Jing Gao Guofei Jiang Haifeng

More information

ON A PROCESS DERIVED FROM A FILTERED POISSON PROCESS

ON A PROCESS DERIVED FROM A FILTERED POISSON PROCESS ON A PROCESS DERIVED FROM A FILTERED POISSON PROCESS MARIO LEFEBVRE and JEAN-LUC GUILBAULT A ontinuous-time and ontinuous-state stohasti proess, denoted by {Xt), t }, is defined from a proess known as

More information

IMPEDANCE EFFECTS OF LEFT TURNERS FROM THE MAJOR STREET AT A TWSC INTERSECTION

IMPEDANCE EFFECTS OF LEFT TURNERS FROM THE MAJOR STREET AT A TWSC INTERSECTION 09-1289 Citation: Brilon, W. (2009): Impedane Effets of Left Turners from the Major Street at A TWSC Intersetion. Transportation Researh Reord Nr. 2130, pp. 2-8 IMPEDANCE EFFECTS OF LEFT TURNERS FROM THE

More information

23.1 Tuning controllers, in the large view Quoting from Section 16.7:

23.1 Tuning controllers, in the large view Quoting from Section 16.7: Lesson 23. Tuning a real ontroller - modeling, proess identifiation, fine tuning 23.0 Context We have learned to view proesses as dynami systems, taking are to identify their input, intermediate, and output

More information

Quantum secret sharing without entanglement

Quantum secret sharing without entanglement Quantum seret sharing without entanglement Guo-Ping Guo, Guang-Can Guo Key Laboratory of Quantum Information, University of Siene and Tehnology of China, Chinese Aademy of Sienes, Hefei, Anhui, P.R.China,

More information

Supplementary Materials

Supplementary Materials Supplementary Materials Neural population partitioning and a onurrent brain-mahine interfae for sequential motor funtion Maryam M. Shanehi, Rollin C. Hu, Marissa Powers, Gregory W. Wornell, Emery N. Brown

More information

Frequency Domain Analysis of Concrete Gravity Dam-Reservoir Systems by Wavenumber Approach

Frequency Domain Analysis of Concrete Gravity Dam-Reservoir Systems by Wavenumber Approach Frequeny Domain Analysis of Conrete Gravity Dam-Reservoir Systems by Wavenumber Approah V. Lotfi & A. Samii Department of Civil and Environmental Engineering, Amirkabir University of Tehnology, Tehran,

More information

Average Rate Speed Scaling

Average Rate Speed Scaling Average Rate Speed Saling Nikhil Bansal David P. Bunde Ho-Leung Chan Kirk Pruhs May 2, 2008 Abstrat Speed saling is a power management tehnique that involves dynamially hanging the speed of a proessor.

More information

The universal model of error of active power measuring channel

The universal model of error of active power measuring channel 7 th Symposium EKO TC 4 3 rd Symposium EKO TC 9 and 5 th WADC Workshop nstrumentation for the CT Era Sept. 8-2 Kosie Slovakia The universal model of error of ative power measuring hannel Boris Stogny Evgeny

More information

THEORETICAL ANALYSIS OF EMPIRICAL RELATIONSHIPS FOR PARETO- DISTRIBUTED SCIENTOMETRIC DATA Vladimir Atanassov, Ekaterina Detcheva

THEORETICAL ANALYSIS OF EMPIRICAL RELATIONSHIPS FOR PARETO- DISTRIBUTED SCIENTOMETRIC DATA Vladimir Atanassov, Ekaterina Detcheva International Journal "Information Models and Analyses" Vol.1 / 2012 271 THEORETICAL ANALYSIS OF EMPIRICAL RELATIONSHIPS FOR PARETO- DISTRIBUTED SCIENTOMETRIC DATA Vladimir Atanassov, Ekaterina Detheva

More information

Hypothesis Testing for the Risk-Sensitive Evaluation of Retrieval Systems

Hypothesis Testing for the Risk-Sensitive Evaluation of Retrieval Systems Hypothesis Testing for the Risk-Sensitive Evaluation of Retrieval Systems B. Taner Dinçer Dept of Statistis & Computer Engineering Mugla University Mugla, Turkey dtaner@mu.edu.tr Craig Madonald and Iadh

More information

Phase Diffuser at the Transmitter for Lasercom Link: Effect of Partially Coherent Beam on the Bit-Error Rate.

Phase Diffuser at the Transmitter for Lasercom Link: Effect of Partially Coherent Beam on the Bit-Error Rate. Phase Diffuser at the Transmitter for Laserom Link: Effet of Partially Coherent Beam on the Bit-Error Rate. O. Korotkova* a, L. C. Andrews** a, R. L. Phillips*** b a Dept. of Mathematis, Univ. of Central

More information

Wave Propagation through Random Media

Wave Propagation through Random Media Chapter 3. Wave Propagation through Random Media 3. Charateristis of Wave Behavior Sound propagation through random media is the entral part of this investigation. This hapter presents a frame of referene

More information

JAST 2015 M.U.C. Women s College, Burdwan ISSN a peer reviewed multidisciplinary research journal Vol.-01, Issue- 01

JAST 2015 M.U.C. Women s College, Burdwan ISSN a peer reviewed multidisciplinary research journal Vol.-01, Issue- 01 JAST 05 M.U.C. Women s College, Burdwan ISSN 395-353 -a peer reviewed multidisiplinary researh journal Vol.-0, Issue- 0 On Type II Fuzzy Parameterized Soft Sets Pinaki Majumdar Department of Mathematis,

More information

Frequency hopping does not increase anti-jamming resilience of wireless channels

Frequency hopping does not increase anti-jamming resilience of wireless channels Frequeny hopping does not inrease anti-jamming resiliene of wireless hannels Moritz Wiese and Panos Papadimitratos Networed Systems Seurity Group KTH Royal Institute of Tehnology, Stoholm, Sweden {moritzw,

More information

Lecture 7: Sampling/Projections for Least-squares Approximation, Cont. 7 Sampling/Projections for Least-squares Approximation, Cont.

Lecture 7: Sampling/Projections for Least-squares Approximation, Cont. 7 Sampling/Projections for Least-squares Approximation, Cont. Stat60/CS94: Randomized Algorithms for Matries and Data Leture 7-09/5/013 Leture 7: Sampling/Projetions for Least-squares Approximation, Cont. Leturer: Mihael Mahoney Sribe: Mihael Mahoney Warning: these

More information

A model for measurement of the states in a coupled-dot qubit

A model for measurement of the states in a coupled-dot qubit A model for measurement of the states in a oupled-dot qubit H B Sun and H M Wiseman Centre for Quantum Computer Tehnology Centre for Quantum Dynamis Griffith University Brisbane 4 QLD Australia E-mail:

More information

Bäcklund Transformations: Some Old and New Perspectives

Bäcklund Transformations: Some Old and New Perspectives Bäklund Transformations: Some Old and New Perspetives C. J. Papahristou *, A. N. Magoulas ** * Department of Physial Sienes, Helleni Naval Aademy, Piraeus 18539, Greee E-mail: papahristou@snd.edu.gr **

More information

Evaluation of effect of blade internal modes on sensitivity of Advanced LIGO

Evaluation of effect of blade internal modes on sensitivity of Advanced LIGO Evaluation of effet of blade internal modes on sensitivity of Advaned LIGO T0074-00-R Norna A Robertson 5 th Otober 00. Introdution The urrent model used to estimate the isolation ahieved by the quadruple

More information

Relativistic Dynamics

Relativistic Dynamics Chapter 7 Relativisti Dynamis 7.1 General Priniples of Dynamis 7.2 Relativisti Ation As stated in Setion A.2, all of dynamis is derived from the priniple of least ation. Thus it is our hore to find a suitable

More information

COMBINED PROBE FOR MACH NUMBER, TEMPERATURE AND INCIDENCE INDICATION

COMBINED PROBE FOR MACH NUMBER, TEMPERATURE AND INCIDENCE INDICATION 4 TH INTERNATIONAL CONGRESS OF THE AERONAUTICAL SCIENCES COMBINED PROBE FOR MACH NUMBER, TEMPERATURE AND INCIDENCE INDICATION Jiri Nozika*, Josef Adame*, Daniel Hanus** *Department of Fluid Dynamis and

More information

After the completion of this section the student should recall

After the completion of this section the student should recall Chapter I MTH FUNDMENTLS I. Sets, Numbers, Coordinates, Funtions ugust 30, 08 3 I. SETS, NUMERS, COORDINTES, FUNCTIONS Objetives: fter the ompletion of this setion the student should reall - the definition

More information

Scalable Positivity Preserving Model Reduction Using Linear Energy Functions

Scalable Positivity Preserving Model Reduction Using Linear Energy Functions Salable Positivity Preserving Model Redution Using Linear Energy Funtions Sootla, Aivar; Rantzer, Anders Published in: IEEE 51st Annual Conferene on Deision and Control (CDC), 2012 DOI: 10.1109/CDC.2012.6427032

More information

UPPER-TRUNCATED POWER LAW DISTRIBUTIONS

UPPER-TRUNCATED POWER LAW DISTRIBUTIONS Fratals, Vol. 9, No. (00) 09 World Sientifi Publishing Company UPPER-TRUNCATED POWER LAW DISTRIBUTIONS STEPHEN M. BURROUGHS and SARAH F. TEBBENS College of Marine Siene, University of South Florida, St.

More information

Parallel disrete-event simulation is an attempt to speed-up the simulation proess through the use of multiple proessors. In some sense parallel disret

Parallel disrete-event simulation is an attempt to speed-up the simulation proess through the use of multiple proessors. In some sense parallel disret Exploiting intra-objet dependenies in parallel simulation Franeso Quaglia a;1 Roberto Baldoni a;2 a Dipartimento di Informatia e Sistemistia Universita \La Sapienza" Via Salaria 113, 198 Roma, Italy Abstrat

More information

A simple expression for radial distribution functions of pure fluids and mixtures

A simple expression for radial distribution functions of pure fluids and mixtures A simple expression for radial distribution funtions of pure fluids and mixtures Enrio Matteoli a) Istituto di Chimia Quantistia ed Energetia Moleolare, CNR, Via Risorgimento, 35, 56126 Pisa, Italy G.

More information

Graph-covers and iterative decoding of finite length codes

Graph-covers and iterative decoding of finite length codes Graph-overs and iterative deoding of finite length odes Ralf Koetter and Pasal O. Vontobel Coordinated Siene Laboratory Dep. of Elet. and Comp. Eng. University of Illinois at Urbana-Champaign 1308 West

More information

RESEARCH ON RANDOM FOURIER WAVE-NUMBER SPECTRUM OF FLUCTUATING WIND SPEED

RESEARCH ON RANDOM FOURIER WAVE-NUMBER SPECTRUM OF FLUCTUATING WIND SPEED The Seventh Asia-Paifi Conferene on Wind Engineering, November 8-1, 9, Taipei, Taiwan RESEARCH ON RANDOM FORIER WAVE-NMBER SPECTRM OF FLCTATING WIND SPEED Qi Yan 1, Jie Li 1 Ph D. andidate, Department

More information

Time and Energy, Inertia and Gravity

Time and Energy, Inertia and Gravity Time and Energy, Inertia and Gravity The Relationship between Time, Aeleration, and Veloity and its Affet on Energy, and the Relationship between Inertia and Gravity Copyright 00 Joseph A. Rybzyk Abstrat

More information

DIGITAL DISTANCE RELAYING SCHEME FOR PARALLEL TRANSMISSION LINES DURING INTER-CIRCUIT FAULTS

DIGITAL DISTANCE RELAYING SCHEME FOR PARALLEL TRANSMISSION LINES DURING INTER-CIRCUIT FAULTS CHAPTER 4 DIGITAL DISTANCE RELAYING SCHEME FOR PARALLEL TRANSMISSION LINES DURING INTER-CIRCUIT FAULTS 4.1 INTRODUCTION Around the world, environmental and ost onsiousness are foring utilities to install

More information

A New Version of Flusser Moment Set for Pattern Feature Extraction

A New Version of Flusser Moment Set for Pattern Feature Extraction A New Version of Flusser Moment Set for Pattern Feature Extration Constantin-Iulian VIZITIU, Doru MUNTEANU, Cristian MOLDER Communiations and Eletroni Systems Department Military Tehnial Aademy George

More information

2 The Bayesian Perspective of Distributions Viewed as Information

2 The Bayesian Perspective of Distributions Viewed as Information A PRIMER ON BAYESIAN INFERENCE For the next few assignments, we are going to fous on the Bayesian way of thinking and learn how a Bayesian approahes the problem of statistial modeling and inferene. The

More information

MOLECULAR ORBITAL THEORY- PART I

MOLECULAR ORBITAL THEORY- PART I 5.6 Physial Chemistry Leture #24-25 MOLECULAR ORBITAL THEORY- PART I At this point, we have nearly ompleted our rash-ourse introdution to quantum mehanis and we re finally ready to deal with moleules.

More information

A NONLILEAR CONTROLLER FOR SHIP AUTOPILOTS

A NONLILEAR CONTROLLER FOR SHIP AUTOPILOTS Vietnam Journal of Mehanis, VAST, Vol. 4, No. (), pp. A NONLILEAR CONTROLLER FOR SHIP AUTOPILOTS Le Thanh Tung Hanoi University of Siene and Tehnology, Vietnam Abstrat. Conventional ship autopilots are

More information

THE METHOD OF SECTIONING WITH APPLICATION TO SIMULATION, by Danie 1 Brent ~~uffman'i

THE METHOD OF SECTIONING WITH APPLICATION TO SIMULATION, by Danie 1 Brent ~~uffman'i THE METHOD OF SECTIONING '\ WITH APPLICATION TO SIMULATION, I by Danie 1 Brent ~~uffman'i Thesis submitted to the Graduate Faulty of the Virginia Polytehni Institute and State University in partial fulfillment

More information

7.1 Roots of a Polynomial

7.1 Roots of a Polynomial 7.1 Roots of a Polynomial A. Purpose Given the oeffiients a i of a polynomial of degree n = NDEG > 0, a 1 z n + a 2 z n 1 +... + a n z + a n+1 with a 1 0, this subroutine omputes the NDEG roots of the

More information

Counting Idempotent Relations

Counting Idempotent Relations Counting Idempotent Relations Beriht-Nr. 2008-15 Florian Kammüller ISSN 1436-9915 2 Abstrat This artile introdues and motivates idempotent relations. It summarizes haraterizations of idempotents and their

More information

EE 321 Project Spring 2018

EE 321 Project Spring 2018 EE 21 Projet Spring 2018 This ourse projet is intended to be an individual effort projet. The student is required to omplete the work individually, without help from anyone else. (The student may, however,

More information

Growing Evanescent Envelopes and Anomalous Tunneling in Cascaded Sets of Frequency-Selective Surfaces in Their Stop Bands

Growing Evanescent Envelopes and Anomalous Tunneling in Cascaded Sets of Frequency-Selective Surfaces in Their Stop Bands Growing Evanesent Envelopes and Anomalous Tunneling in Casaded Sets of Frequeny-Seletive Surfaes in Their Stop ands Andrea Alù Dept. of Applied Eletronis, University of Roma Tre, Rome, Italy. Nader Engheta

More information

FNSN 2 - Chapter 11 Searches and limits

FNSN 2 - Chapter 11 Searches and limits FS 2 - Chapter 11 Searhes and limits Paolo Bagnaia last mod. 19-May-17 11 Searhes and limits 1. Probability 2. Searhes and limits 3. Limits 4. Maximum likelihood 5. Interpretation of results methods ommonly

More information

Wavetech, LLC. Ultrafast Pulses and GVD. John O Hara Created: Dec. 6, 2013

Wavetech, LLC. Ultrafast Pulses and GVD. John O Hara Created: Dec. 6, 2013 Ultrafast Pulses and GVD John O Hara Created: De. 6, 3 Introdution This doument overs the basi onepts of group veloity dispersion (GVD) and ultrafast pulse propagation in an optial fiber. Neessarily, it

More information

A new initial search direction for nonlinear conjugate gradient method

A new initial search direction for nonlinear conjugate gradient method International Journal of Mathematis Researh. ISSN 0976-5840 Volume 6, Number 2 (2014), pp. 183 190 International Researh Publiation House http://www.irphouse.om A new initial searh diretion for nonlinear

More information

Development of Fuzzy Extreme Value Theory. Populations

Development of Fuzzy Extreme Value Theory. Populations Applied Mathematial Sienes, Vol. 6, 0, no. 7, 58 5834 Development of Fuzzy Extreme Value Theory Control Charts Using α -uts for Sewed Populations Rungsarit Intaramo Department of Mathematis, Faulty of

More information

Array Design for Superresolution Direction-Finding Algorithms

Array Design for Superresolution Direction-Finding Algorithms Array Design for Superresolution Diretion-Finding Algorithms Naushad Hussein Dowlut BEng, ACGI, AMIEE Athanassios Manikas PhD, DIC, AMIEE, MIEEE Department of Eletrial Eletroni Engineering Imperial College

More information

An Adaptive Optimization Approach to Active Cancellation of Repeated Transient Vibration Disturbances

An Adaptive Optimization Approach to Active Cancellation of Repeated Transient Vibration Disturbances An aptive Optimization Approah to Ative Canellation of Repeated Transient Vibration Disturbanes David L. Bowen RH Lyon Corp / Aenteh, 33 Moulton St., Cambridge, MA 138, U.S.A., owen@lyonorp.om J. Gregory

More information

Measuring & Inducing Neural Activity Using Extracellular Fields I: Inverse systems approach

Measuring & Inducing Neural Activity Using Extracellular Fields I: Inverse systems approach Measuring & Induing Neural Ativity Using Extraellular Fields I: Inverse systems approah Keith Dillon Department of Eletrial and Computer Engineering University of California San Diego 9500 Gilman Dr. La

More information