JAI 97 XVI. JORNADA DE ATUALIZAÇÃO EM INFORMÁTICA JAI 97 XVI. JOURNEY OF ACTUALIZATION IN COMPUTER SCIENCE


JAI 97 XVI. JORNADA DE ATUALIZAÇÃO EM INFORMÁTICA
XVII. CONGRESSO DA SOCIEDADE BRASILEIRA DE COMPUTAÇÃO
Brasília, DF, 1997

Mini-Curso: Reconhecimento de Padrões

Resumo: Este documento apresenta uma introdução para a área de reconhecimento de padrões com um destaque especial para classificação de padrões. Em primeiro lugar motiva-se o problema e dão-se os termos básicos usados, junto com três exemplos ilustrativos. Depois criam-se os fundamentos para aproximar o problema de classificação num contexto Bayesiano, introduzindo erro, rejeição e risco para fazer decisões. A parte principal do curso dedica-se aos modelos diversos de classificadores: Bayesiano baseado em modelos estatísticos, estimador de Parzen, classificador do vizinho mais próximo, funções de discriminação linear com um representante dos algoritmos de aprendizagem, a regra de delta, que é estendida para a regra de delta generalizada no perceptron multi-camadas, funções de base radiais e classificadores polinomiais. Apresenta-se a redução da dimensão do espaço das características como um importante passo de pré-processamento em reconhecimento de padrões, incluindo seleção de características e extração de características. Mostram-se estratégias de pesquisa e critérios de seleção para a seleção de características. A análise dos componentes principais e a análise discriminativa linear são os exemplos para a extração de características. O tema de estimativa de erro é a medição do desempenho do classificador. Utiliza-se o mapa de Sammon para a visualização de dados de alta dimensão. O texto fecha com o algoritmo de aglomeração c-means como um representante de aprendizagem não-supervisionada.

JAI 97 XVI. JOURNEY OF ACTUALIZATION IN COMPUTER SCIENCE
XVII. CONGRESS OF THE BRAZILIAN COMPUTER SCIENCE SOCIETY
Brasília, DF, Brazil

Short Course: Pattern Recognition

Abstract: This text gives a short introduction into the field of automatic pattern recognition, with a special emphasis on pattern classification.
First the motivation and the necessary terms are given, together with three illustrative examples. Then the groundwork is laid for how to approach classification in a Bayesian probabilistic framework, introducing error, rejection and risk for decision making. The principal part deals with the different classifier architectures: model-based parametric Bayesian classifiers, Parzen windows, nearest-neighbor classifiers, linear discriminant functions with one representative learning algorithm, the delta rule, which is extended to the generalized delta rule in the multilayer perceptron, radial basis function networks, and polynomial classifiers. Dimensionality reduction is presented as an important preprocessing step in pattern recognition, consisting of feature selection and feature extraction. Feature selection search strategies are outlined together with the selection criteria. Feature extraction comprises Principal Component Analysis and Linear Discriminant Analysis. The topic of error estimation is used to measure the performance of a classifier. The Sammon map is used to visualize high-dimensional data. The text closes with the c-means clustering algorithm as a representative of unsupervised learning.

Thomas Walter Rauber
Departamento de Informática
Universidade Federal do Espírito Santo
Av. F. Ferrari, Vitória - ES, BRASIL
Tel.: (+55)(27) Fax: (+55)(27)
E-mail: thomas@inf.ufes.br
WWW-homepage:

INDEX

1 MOTIVATION AND TERMINOLOGY
1.1 Introduction
1.2 Patterns, Pattern Recognition, Features and Classification
1.3 Constraints
1.3.1 Pattern classification and higher-level information processing
1.3.2 Numerical features vs. symbolic features
1.3.3 Probabilistic framework
1.3.4 Time stationary patterns vs. dynamic patterns
1.4 Basic Model of a Pattern Recognition System
1.5 Nomenclature
1.6 Examples of Pattern Generating Processes
1.6.1 Example "Iris": Iris Flowers
1.6.2 Example "Reactor": Fault detection in a chemical reactor
1.6.3 Example "Vision": Object recognition in an industrial assembly task
1.7 Further Reading
1.8 Software Support
2 PROBABILISTIC FRAMEWORK FOR PATTERN CLASSIFICATION
2.1 Bayes Theorem
2.2 Bayes Decision Rule For Minimum Error Classification
2.3 Bayes Decision Rule For Maximum Likelihood Classification
2.4 Bayes Decision Rule For Minimum Risk Classification
2.5 Inference and Decision
2.6 Model Assumption and Parameter Estimation
2.6.1 Model assumption
2.6.2 Parameter estimation
2.6.3 Assuming multivariate Gaussian distribution and estimating its parameters
3 CLASSIFIER MODELS, SUPERVISED LEARNING AND PROBLEMS
3.1 Model-based Parametric Classifiers
3.2 Parzen Windows
3.3 The 1-Nearest Neighbor Classifier
3.4 The K-Nearest Neighbor Classifier
3.5 The Nearest Prototype Classifier
3.6 Linear Discriminant Functions for Classification
3.6.1 The Model
3.6.2 Gradient Descent Learning and Least Mean Squared Error
3.7 Generalized Linear Discriminant Functions for Classification
3.7.1 Universal Function Approximators
3.7.2 Curse of Dimensionality and the Bias-Variance Dilemma
3.8 Polynomial Classifier
3.9 Radial Basis Function Networks
3.10 Multilayer Perceptron
3.10.1 Architecture
3.10.2 Delta Rule and Generalized Delta Rule

4 DIMENSIONALITY REDUCTION BY FEATURE SELECTION AND EXTRACTION
4.1 Feature Selection
4.1.1 Selection criteria
4.1.2 Search strategies
4.1.3 Limitations of Feature Selection
4.2 Feature Extraction
5 ERROR ESTIMATION
6 HIGH-DIMENSIONAL DATA VISUALIZATION
7 UNSUPERVISED LEARNING
BIBLIOGRAPHY

1. Motivation and Terminology

1.1 Introduction

Pattern recognition is one of the most challenging topics when it comes to the simulation of the cognitive capabilities of a human being. Application examples of pattern recognition are the identification of a person by their voice, fingerprints or the structure of the iris of the eye, the recognition of handwritten characters, guiding an autonomous vehicle by sensorial information, the supervision of processes or machinery, and many other fields. Despite the variety of the application areas, there exist essential parts of the pattern recognition process that are identical for all the examples mentioned above. The following course material tries to give a glimpse into some techniques that are necessary to build a pattern recognition system. It will be our goal to extract those methods of pattern recognition that can universally be applied to most potential application areas. Naturally, since we want our system to be automatic, the methods must be suitable for execution by a machine, specifically as a software implementation on a digital computer. First of all we have to lay the foundation for the subsequent material. This means that we have to define the basic model of pattern recognition. Furthermore, in order to limit the scope of this text, we will restrict our attention to the most prominent technique in pattern recognition, namely the automatic data-driven classification of patterns, represented by a multidimensional continuous feature vector, into a finite set of classes. Other areas of pattern recognition, like structural descriptions, symbolic reasoning, fuzzy approaches or regression problems, are outside the scope of this text. It can be stated that much progress has been made in recent years in the area of automatic pattern classification, especially due to the reborn interest in artificial neural networks.
Nevertheless, a solid amount of knowledge about the field remains up to date, and it would be disadvantageous for the reader to omit important topics of pattern recognition techniques that have been established by Bayes, Fisher or Duda & Hart, to mention only a few prominent names related to the field. The organization of the course will be as follows. First we will introduce the necessary terminology of pattern recognition and limit the material to the classification of time-stationary, multidimensional continuous and labeled patterns. A short outline is given of the problems that arise and how they will be approached. Then we will regard the act of pattern classification as a universal function approximation in a probabilistic framework. Different classifier architectures and associated learning techniques will be presented, originating from statistical and artificial neural network techniques. Feature extraction and feature selection are presented as an indispensable part of pattern recognition. The evaluation of how well the generated classifier will perform is the topic of error estimation. Also important is the visualization of high-dimensional data points in two or three dimensions, in order to make the intrinsic structure of the data accessible to a human observer. The course closes with the topic of unsupervised learning.

1.2 Patterns, Pattern Recognition, Features and Classification

What is a pattern? What is pattern recognition? The answers to these questions are intuitively very clear to a human being, since pattern recognition is constantly exercised during most of a lifetime. When the reader is looking at this text, the eyes capture the physical signal that represents the text, then characters are recognized, then words and sentences, and finally the semantics of the text is extracted on the highest level of information processing. Pattern recognition is transforming information from the subsymbolic level (signals) to the symbolic level (meanings). One can observe that pattern recognition is also a process of information reduction. From the uncountable amount of physical information that is connected to each atom on a piece of paper, only the visual black-to-white transitions are interesting in order to read a text. A character is recognized by the reader based on certain characteristics or features of the character. In the example of character recognition, the features are rather of a symbolic nature, like the existence or absence of a stroke or the shape of the curves. One should note, however, that a curve of a character could also be represented numerically, for instance by tracing the curve and measuring its curvature. For computer analysis it is most desirable to have numerical continuous features at hand. Another advantage of the numerical representation of a pattern is the availability of mature mathematical tools to handle multivariate continuous data, for instance statistics. The use of statistics is furthermore justified by the noisy and hence stochastic nature of the pattern generating source. The same class of objects is represented by a probability distribution. This approach enables the use of statistical inference, i.e. to say to which class a certain pattern belongs. This is the final step in the information reduction sequence, namely the classification of an object that is represented by the feature vector. In this text we will not consider the higher-level reasoning that can eventually follow the classification process. Referring to the example above, after the reader has recognized a word he could for instance detect a grammatical error in the sequence of isolated characters.
In this sense the classification of a pattern would be embedded into a more complex framework of symbolic reasoning.

1.3 Constraints

In order to define the topics of pattern recognition that are presented here, some further comments are given about the allowed complexity of the problem. As mentioned before, we will restrict ourselves to the most important aspects of pattern recognition.

1.3.1 Pattern classification and higher-level information processing

In the classical textbook of [Duda and Hart, 1973] it is mentioned that, for the descriptive approach, the classification of objects is not sufficient to solve a complex pattern recognition task. It would rather be important to obtain high-level descriptions of a given real world situation. For instance, the description of a scene inside a room is such a high-level description. It would be necessary not only to classify a certain region of a digital image as a pencil and another region as a notebook, but also to relate the objects in the scene to each other. Ever more complex reasoning can be imagined, for instance about the spatial relationship of the objects in the scene. As mentioned before, we will restrict the material of this course to the classification of objects. In many cases of practical pattern recognition problems the classification can be seen as a stand-alone problem, for instance the recognition of fingerprints or handwritten numerals.

1.3.2 Numerical features vs. symbolic features

If objects are described by categorical (symbolic, qualitative) features, like "ugly", "beautiful" or "acceptable", "unacceptable", no geometrical relationship can be established between different patterns. The tools to classify objects that are described by symbolic features usually differ from those for the handling of numerical features. Here we will only consider continuous numerical features x ∈ R. An object is described by more than one feature. Therefore the representation of an object results in a D-dimensional feature vector x = (x1, ..., xD)^T, which is equivalent to a point in a D-dimensional space. This space is called pattern space. A data point in a 2-D or 3-D pattern space can be visualized directly. For higher-dimensional pattern spaces appropriate mapping techniques must be employed, for instance the Sammon map that is presented later.

1.3.3 Probabilistic framework

We will regard the pattern generating source as a stochastic process. This benefits us with the possibility to use a probabilistic framework to model the relation between features and classes. Probabilistic techniques are further recommended because the data we are working with are noisy measurements and hence need a mathematical framework that can handle these random variations. Probabilities and probability densities will play an essential role in the presented techniques. It becomes possible to use techniques from statistics, artificial neural networks and optimization to create a classifier and to adapt its variables to the problem at hand.

1.3.4 Time stationary patterns vs. dynamic patterns

A further restriction that we impose on the nature of the patterns is that they do not change their position in time. Hence x(ti) = x(tj) = x for all times ti, tj. Often dynamic information can be transformed to time-stationary patterns by appropriate mappings; for instance, the Fourier analysis of periodic signals can deliver non-changing patterns in frequency space.

1.4 Basic Model of a Pattern Recognition System

Fig. 1 shows a sequence of processing steps that has been widely accepted as the model for automatic pattern classification. The model consists of a Divide and Conquer approach to the complexity problem. A sequence of dimensionality reduction steps and information filters is applied to the raw data in order to achieve the final classification results.

World -> Sensor -> Measurements -> Feature Extraction -> Dimensionality Reduction -> FEATURES -> Classifier Induction -> Classifier -> Classification -> Classes

Fig. 1 Classification model in pattern recognition

A sensor is attached to the signal generating process, which collects measurable physical data. A feature extractor converts the raw signals to machine-processable features, in our case continuous numerical values. Many features compose a multidimensional feature vector. Since the dimension of this feature vector is sometimes too big, or since irrelevant or redundant information is hiding within the features, a dimensionality reduction step is performed. This means that fewer features are derived from the original feature set. An obvious advantage of reducing the number of features is that the classifier based on the features is more efficient in terms of storage and processing time requirements. The new feature set serves as input to a classifier that categorizes the real world objects represented by the features. We present techniques that generate or adapt a classifier on the basis of a training data set, i.e. we are learning a classifier from examples. The basic motivation to separate the classification problem into a sequence of data reduction steps is the need to handle the complexity of the pattern recognition problem.
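The processing chain of Fig. 1 can be sketched in code. The sketch below is an illustration only, not part of the course material: the toy feature extractor, the "keep the first features" reduction and the nearest-prototype decision rule are all simplifying assumptions made here so that the pipeline runs end to end.

```python
import numpy as np

def extract_features(raw_signal):
    """Toy feature extractor (assumed): derive two illustrative features
    from a 1-D raw signal, its mean level and its spread."""
    return np.array([raw_signal.mean(), raw_signal.std()])

def reduce_dimensionality(x, keep):
    """Placeholder dimensionality reduction (assumed): keep the first
    `keep` features. A real system would use feature selection/extraction."""
    return x[:keep]

def classify(x, prototypes):
    """Nearest-prototype rule: assign the class whose prototype vector
    is closest in Euclidean distance."""
    labels = list(prototypes)
    dists = [np.linalg.norm(x - prototypes[c]) for c in labels]
    return labels[int(np.argmin(dists))]

# Hypothetical end-to-end run: sensor readings -> features -> reduction -> class
raw = np.array([60.0, 59.5, 60.5, 61.0])          # e.g. temperature readings
x = reduce_dimensionality(extract_features(raw), keep=2)
prototypes = {"Normal": np.array([60.0, 1.0]), "Fault": np.array([100.0, 5.0])}
print(classify(x, prototypes))   # -> Normal
```

Each stage only passes on a condensed representation of its input, which is exactly the information-reduction idea behind Fig. 1.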

The goal of classification goes hand in hand with data reduction. Each representation space in the sequence contains an order of magnitude less information than the anterior space. From the huge volume of information in the physical world, only a few bits remain, which represent the classes. In this course we will focus on the processing steps that follow after the features have been acquired, i.e. dimensionality reduction and the definition and adaptation of the classifier. Our basic goal will be the analysis of the features that describe the given problem, the definition of the classifier architecture, the adaptation of the free parameters of the classifier, and the analysis of the performance of the classifier.

1.5 Nomenclature

We will now introduce the necessary terminology that is needed for the formal definition of our classification tools. A real world object is represented in the form of a D-dimensional vector x = (x1, ..., xj, ..., xD)^T of continuous features, i.e. x ∈ R^D. Each object belongs to one class ωi from the set Ω = {ω1, ..., ωi, ..., ωc} of mutually exclusive classes. Patterns in the sense of pattern recognition are pairs (x, ω) which associate the feature vector with its meaning, i.e. to which class it belongs. In a probabilistic framework we are interested in probabilities and probability densities which can be used to model the probabilistic distribution of the pattern generating source and draw conclusions about the class membership of a certain object. In the next chapter we will refine these ideas about using probabilistic techniques to approach the classification problem. Our pattern classification is based on a learning-from-examples approach. This means that the problem is described by a finite set of example pairs (x, ω), each pair consisting of the feature vector and the class label. Each class ωi has ni data samples, such that x_ik is the kth sample of class ωi with k = 1, ..., ni, and x_ikj is the jth value of x_ik. These examples normally are the only available information about the problem at hand.
We must therefore rely on them to build our pattern classifier, i.e. define the structure of the classifier. Furthermore, if we use a parametric statistical or neural model, we must adapt the free parameters of the pattern classifier. Also for this task the data samples can be used. A good classifier generalizes the data well. This means that future unknown samples are classified conforming to the true distribution of the data. It also means that the classifier is not overfitted to the noise in the samples, i.e. we must find a good compromise between the degrees of freedom and the flexibility of our classifier models.

1.6 Examples of Pattern Generating Processes

As so often, the best way to support the theory is to give some practical examples. It will become clear that a pattern can represent a wide variety of real world objects or situations that have to be classified. In Fig. 2 three pattern classification examples are summarized in terms of the possible classes, the sensors that are used to acquire the information, and the features that characterize the real world objects or situations.

1.6.1 Example "Iris": Iris Flowers

In 1935 E. Anderson collected the measurements of the petal and sepal leaves of three different species of the Iris of the family Iridaceae: Iris setosa, Iris versicolor and Iris virginica. One year later R. A. Fisher used the (modified) data to illustrate the concept of Linear Discriminant Functions [Fisher, 1936]. Four different features were investigated: sepal length, sepal width, petal length and petal width. For each of the three flower classes 50 samples were collected, totalling 150 samples. The features are all continuous. Since then this data set has turned into a classic in pattern recognition. The real world objects in this example are species of flowers. The number of classes is c = 3, with Ω = {Setosa, Virginica, Versicolor}. The sensor in this case is of extremely simple nature, namely a device that measures length, for instance a ruler. The measurements can directly be used as the D = 4 features. The classification task is to discriminate the flowers by the four attributes. The only information available to the classifier designer is the data set of 50 examples for each of the 3 classes. This means that all the knowledge about the problem must be extracted from the available data. The data set is available at the following address: ftp://ftp.ics.uci.edu/pub/machine-learning-databases/

1.6.2 Example "Reactor": Fault detection in a chemical reactor

From a quite different application area comes the next example of pattern recognition. A liquid is poured into a stirred reactor tank where a chemical reaction takes place. The finished product leaves the reactor through an outflow pipe. The chemical reaction produces heat, so the temperature of the liquid must be controlled. Also the level in the tank must be kept within a certain interval to ensure the proper functioning of the reaction. The control of the process is not considered in more detail here, since it is of no interest for the pattern recognition part of the problem. The important fact is that there are two features which characterize the process: the temperature and the level of the liquid, hence D = 2. During normal operation only certain values are allowed for these two features. For instance, we have an abnormal or faulty situation if the temperature exceeds a certain level or if the level falls below a certain value.
The classification task is to distinguish between two classes, normal functioning and fault, hence c = 2. We define that also in this case the problem is specified by a number of examples; let us assume n = 5000 examples consisting of pairs of 2-D feature vectors and the respective class. For instance, ((60 degrees, 3 m)^T, Normal), ((59 degrees, 3.1 m)^T, Normal), ((100 degrees, 3 m)^T, Fault) could be the first three learning examples available.

1.6.3 Example "Vision": Object recognition in an industrial assembly task

The final example tries to categorize 3-D objects by their 2-D silhouette, which can be one of c = 3 classes from the set Ω = {Rectangle, Triangle, Circle}. In this example the sensor is more complex than in "Iris". The features are not directly available from the acquisition process and must be extracted (calculated) from the raw measurements, see Fig. 1. The visual information is captured by a camera and the digital image is transferred to an image processing facility, e.g. a frame grabber with processing capability. Then the 2-D image is analyzed by image processing algorithms (e.g. filtering, enhancement, segmentation) to finally calculate the D = 4 features length, width, area and perimeter. These features are the basis for the classification of the different geometric shapes. We will return to some of these examples in the subsequent parts of the course, especially in the consideration of the probability distribution of the pattern samples and the analysis of the features for the classification task.

1.7 Further Reading

Many good textbooks exist in the area of pattern recognition. Although recently there has been a number of publications, especially from the area of artificial neural networks, the older books that lean principally towards statistical approaches did not lose much of their relevance. [Duda and Hart, 1973], for which a second edition [Duda et al., 1996] is in preparation, is a must. Important is also [Fukunaga, 1990]. Another excellent work on statistical pattern recognition that both emphasizes the theoretical background and provides practical methods is [Devijver and Kittler, 1982]. Easier to read are [Tou and Gonzalez, 1974], [Schalkoff, 1992] and [Pao, 1989], where the latter two also give concepts for syntactic and neural classification methods. More on statistical classification can be found in [McLachlan, 1992] and [Vapnik, 1996]. The renewed interest in the area of artificial neural networks has generated a number of interesting insights into the relation of pattern recognition and neural computation, e.g. [Bishop, 1995], [Ripley, 1996], [Schürmann, 1996]. Topics that are omitted here are syntactic pattern recognition [Gonzalez and Thomason, 1982] and machine learning approaches applied to pattern recognition, e.g. [Breiman et al., 1984]. For unsupervised learning (clustering) techniques see e.g. [Anderberg, 1973]. A good starting point on the internet is the page of the pattern recognition group of the University of Delft at the following address:

1.8 Software Support

The majority of the pattern recognition methods presented in this text have been implemented in a highly portable C-coded program package called Tooldiag. The program can be freely copied for research purposes:

Fig. 2 Examples of pattern classification problems:

- IRIS FLOWERS. Classes: 3 flower species (Iris Setosa, Iris Virginica, Iris Versicolor). Sensors: ruler. Features: 4 shape characteristics (sepal length, sepal width, petal length, petal width).
- CHEMICAL PROCESS. Classes: 2 operation states (Normal, Fault). Sensors: level meter, thermometer. Features: 2 physical parameters of the reaction tank (level L [m], temperature T [°C]).
- OBJECT RECOGNITION. Classes: 3 shapes in 2-D (Rectangle, Triangle, Circle). Sensors: image processing (acquisition camera, image processing unit, measuring algorithm). Features: shape characteristics (length, width, area, perimeter).
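The learning-from-examples setting of Section 1.5 can be made concrete in code before moving on to the probabilistic framework. The sketch below is an illustration only; it stores the three Reactor training pairs quoted in Section 1.6.2, and the particular array layout (one row per example, feature order temperature then level) is an assumption of this sketch, not something the course prescribes.

```python
import numpy as np

# Each pattern is a pair (feature vector x, class label omega).
# Assumed feature order: (temperature [degrees], level [m]).
examples = [
    (np.array([60.0, 3.0]), "Normal"),
    (np.array([59.0, 3.1]), "Normal"),
    (np.array([100.0, 3.0]), "Fault"),
]

X = np.stack([x for x, _ in examples])   # n x D data matrix, here D = 2
y = [label for _, label in examples]     # class labels

print(X.shape)   # (3, 2)
print(sorted(set(y)))
```

In this representation, each row of X is a point in the 2-D pattern space, and y supplies the class membership needed for supervised learning.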
2. Probabilistic Framework for Pattern Classification

The Bayes rule for decision making is a theoretical optimum of the performance of the classifier beyond which it is impossible to go. Practical implementations of classifiers can only try to approach this limit as closely as possible. Therefore Bayes decision theory is a good starting point to understand the principles of pattern recognition, especially pattern classification as an act of decision making. All major textbooks treat this subject and should be consulted for more details. At this point of the course the reader should refresh his or her knowledge of statistics, respectively acquire the necessary basis to understand the ideas, for instance [Mood et al., 1974].

A classification is made if a pattern is categorized into one of the c possible classes. Let d(x) be a decision rule that makes the decision to which class the pattern belongs, d: R^D -> Ω. Note that d(x) is defined as a function. Let us for now use the notation d(x) = ωi to express the decision that x has been classified as belonging to ωi.

Sometimes it is wiser for an automatic classification system not to decide at all. This means that we introduce the option of rejection. The act of rejection is easily integrated into classification by introducing a new class ω0, the rejection class. The number of classes then increases to c + 1, i.e. the new set of possible outcomes of the classification becomes Ω = {ω0, ω1, ..., ωc}. The usual example for the use of the reject option is diagnosis in a medical context. Often it is better to reject the automatic classification of an illness and leave the final decision to a human expert, in order to avoid damage to the patient's health.

We can also associate a cost function with decision making, in order to penalize wrong decisions that have a considerable negative effect. If it was decided by some rule that a pattern belongs to class ωi, i = 0, ..., c, but in reality the pattern belongs to class ωj, j = 1, ..., c, then we will attribute a cost λ(ωi|ωj) to that decision. The introduction of the cost could provoke a decision that does not necessarily choose the most probable class. Nevertheless it is important to weigh one's decision in pattern recognition, because the consequences of that decision have to be taken into consideration.
In the "Reactor" example a cost function is quite appropriate to reflect the realities of process supervision. If we have a fault with a low probability in a chemical process and do not decide that we really have a fault, and consequently do not shut down the plant, we could provoke immense damage to the plant. Therefore λ(Normal|Fault) should have a high value. All mutual cost functions among the classes are summarized in the cost matrix C = [λ(ωi|ωj)]. The dimension of the matrix is c × c, respectively (c + 1) × c if the rejection option is allowed. One particular problem of cost functions is that they must be defined by a human being and therefore introduce a fair amount of heuristic knowledge into the decision making process. In most cases a unit cost is associated with a classification error and a zero cost with a correct decision. In this case the cost function becomes λ(ωi|ωj) = 1 − δij, where δij is the Kronecker delta symbol. As an example, take the classification of the ten digits. It surely makes no sense to associate a different cost for the confusion of a 1 for a 2 than for the confusion of a 3 for a 1.

2.1 Bayes Theorem

Let us consider the pattern generating source as a stochastic process that is governed by probabilities P(.), respectively by probability densities p(.) in the case of continuous variables. The source produces a feature vector x in conjunction with its meaning, i.e. its class membership ω, as outlined in Fig. 3. Both are random variables. What is interesting to us are the stochastic variables involved in this process. We shall consider the feature vector x as a multidimensional continuous random variable and the class membership ω as a discrete random variable. The probabilities, respectively probability density functions, involved are:

p(x, ω): The joint probability density of a pattern having the values of feature vector x and at the same time belonging to class ω.

p(x): The probability density function of a pattern having the values of feature vector x. This function tells us how the D-dimensional pattern space is filled up with the patterns, regardless of their class membership.

P(ω): The a priori probability of class ω. This value answers the question: how probable is it, without knowing anything about the feature vector of a pattern, that the pattern belongs to the class ω?

p(x|ω): The conditional probability density function of a pattern having the values of feature vector x, given that the pattern belongs to class ω.

P(ω|x): The conditional probability of a pattern belonging to class ω, given the fact that the pattern has the values of feature vector x; also called the a posteriori probability.

The latter value is exactly what we need for classification. We know x and want to infer the class membership ω. If we knew the law that defines the pattern generation source, p(x, ω), the classification would be trivial. Basic laws from probability calculus establish the relationship among the values. The joint probability density for a particular class ωi, p(x, ωi), can be expressed in two different ways, using the product rule:

p(x, ωi) = p(x|ωi) P(ωi) = P(ωi|x) p(x)    (1)

The Bayes theorem for the a posteriori probability can be derived directly from (1) by further expanding p(x) to:

p(x) = Σi p(x, ωi) = Σi p(x|ωi) P(ωi),  summing over i = 1, ..., c    (2)

Bayes theorem:

P(ωi|x) = p(x|ωi) P(ωi) / Σj p(x|ωj) P(ωj)    (3)

The Bayes theorem gives us the necessary tool to calculate the a posteriori probability for a given pattern belonging to a certain class. This is all we need for classification in its basic form: just acquire the feature vector x and then calculate the probability that this feature vector belongs to a particular class.

Fig. 3 Pattern generating source as a stochastic process: the source, governed by p(x, ω), emits patterns (x, ω), each consisting of a feature vector x together with its class ω.

Example 1: Calculating a posteriori probabilities

Let us choose a simple one-dimensional example that illustrates the Bayes theorem of (3) to calculate a posteriori probabilities. Assume that in the "Iris" pattern recognition problem we ignore all but the feature petal width. Now we have feature vectors of dimension D = 1, and the feature vector degenerates to a single real value x which represents petal width. Let us further assume that the a priori probabilities for the three flower species were P(Setosa) = 0.35, P(Virginica) = 0.37 and P(Versicolor) = 0.28. The last assumption concerns the way the values for petal width are distributed. Let us say that the conditional probability density function (pdf) is from the same family of density functions for all three classes. The most common type of pdf found in pattern recognition is the Gaussian (Normal) distribution. A one-dimensional Gaussian pdf p(x) of a random variable x is a parametric function that is defined by two parameters, the mean µ and the variance σ² (standard deviation σ). A more compact, special notation is N(µ, σ²).

Gaussian probability density function of random variable x:

p(x) = (1 / (σ √(2π))) exp[ −(x − µ)² / (2σ²) ]    (4)

Assume that we had the knowledge p(PetalWidth|ωi) about the class-conditional pdf for each class, as the following rules: p(PetalWidth|setosa) ~ N(0.24, 0.011), p(PetalWidth|virginica) ~ N(2.03, …) and p(PetalWidth|versicolor) ~ N(1.33, …). What is the a posteriori probability, for instance, for the particular value x = 1.5 with respect to each of the three classes? The value p(1.5) of the unconditional pdf of the feature follows from (2) as p(1.5) = p(1.5|setosa) P(setosa) + p(1.5|virginica) P(virginica) + p(1.5|versicolor) P(versicolor). We are now able to calculate the a posteriori probability for each class from the Bayes theorem (3): P(setosa|1.5) = 0.000, P(virginica|1.5) = 0.183 and P(versicolor|1.5) = 0.817. One can observe that the a posteriori probabilities sum up to unity, as required for the sure event (the true class is one of the three possible). The value of the unconditional pdf p(x) acts as a normalizing factor which ensures that the sum of all a posteriori probabilities satisfies this condition of summing to unity.

2.2 Bayes Decision Rule For Minimum Error Classification

The next logical step on the way to building a classifier is to design a decision rule d(x) which says that the feature vector x belongs to class ωi, i.e. d(x) = ωi. An equivalent term for decision rule is discriminant function. What do we want to achieve during classification? One obvious answer is that we want to decide in such a manner that we commit as few errors in the classification as possible, i.e. minimizing the average probability of error. We define an error for a given feature vector x if we decide that x belongs to a class ωj although the a posteriori probability for another class ωi is higher than for the class ωj.
Furthermore, we define the probability of error as the a posteriori probability of the wrongly decided class:

P(error | x) = P(ω_j | x)  if we decide ω_j although P(ω_i | x) > P(ω_j | x), i, j = 1,…,c, i ≠ j   (5)

To minimize the average error over the whole pattern space R^D,

P(error) = ∫_{R^D} p(error, x) dx = ∫_{R^D} P(error | x) p(x) dx   (6)

we simply have to minimize the probability of error P(error | x) for each feature vector x. This is equivalent to choosing the class with the highest a posteriori probability for x. We are now able to formulate the first classification rule, using the Bayes theorem in a probabilistic context.

Bayes Decision Rule for Minimum Error Classification:

d(x) = ω_i  if  P(ω_i | x) > P(ω_j | x), j = 1,…,c, i ≠ j   (7)

This rule does not mean that we do not commit any classification errors. It just ensures that the error reaches its minimum value if we always decide for the class that possesses the highest a posteriori probability. If we allow rejection we have to introduce a rejection threshold. For the sake of simplicity it is defined as being constant for all classes, i.e. λ(ω_0 | ω_j) = λ_r, j = 1,…,c. In this case the value of λ_r is the minimum a posteriori probability that is necessary to make a decision. It acts as a threshold value which says: unfortunately the probability that the pattern belongs to one of the possible classes is so low (namely less than or equal to λ_r) that the classification result cannot be accepted. Rule (7) then becomes:

Bayes Decision Rule for Minimum Error Classification with rejection option:

d(x) = ω_i  if  P(ω_i | x) > P(ω_j | x), j = 1,…,c, i ≠ j, and P(ω_i | x) > λ_r
d(x) = ω_0  otherwise   (8)
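Rule (8) reduces to an argmax with a threshold test; a minimal sketch, where `posteriors` is any mapping from class labels to P(ω_j | x) and `REJECT` is a hypothetical label standing in for the rejection class ω_0:

```python
REJECT = "reject"  # stands in for the rejection class omega_0

def decide_min_error(posteriors, reject_threshold=0.0):
    """Bayes minimum-error decision with optional rejection, Eq. (8)."""
    best_class = max(posteriors, key=posteriors.get)
    if posteriors[best_class] > reject_threshold:
        return best_class
    return REJECT

post = {"setosa": 0.000, "virginica": 0.183, "versicolor": 0.817}
print(decide_min_error(post))                        # 'versicolor'
print(decide_min_error(post, reject_threshold=0.9))  # 'reject'
```

With the default threshold of 0 the rule is the plain minimum-error rule (7); raising λ_r trades errors for rejections.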

2.3 Bayes Decision Rule For Maximum Likelihood Classification

If we observe the Bayes theorem of (3) we easily verify that the decision rule (7) is unaffected by the division by the normalizing factor p(x). The alternative decision rule then becomes:

Decide ω_i if p(x | ω_i) P(ω_i) > p(x | ω_j) P(ω_j), j = 1,…,c, i ≠ j   (9)

The class-conditional pdf p(x | ω_i) is also called the likelihood of ω_i with respect to x, and the ratio p(x | ω_i) / p(x | ω_j) is called the likelihood ratio. Sometimes no useful information about the a priori probabilities P(ω_j) of the classes is available. Classification based solely on the likelihoods defines the

Bayes Decision Rule for Maximum Likelihood Classification:

d(x) = ω_i  if  p(x | ω_i) > p(x | ω_j), j = 1,…,c, i ≠ j   (10)

In the example above this would mean that we based the flower classification on the probability density functions p(PetalWidth | flower), for instance: decide setosa if p(PetalWidth | setosa) > p(PetalWidth | virginica) and p(PetalWidth | setosa) > p(PetalWidth | versicolor). A rejection option can readily be introduced into (10), but this case is left out here.

2.4 Bayes Decision Rule For Minimum Risk Classification

The most general case for making decisions within the Bayesian framework is the decision rule that minimizes the risk of taking decisions. The risk is the expected value of the costs of the decisions that are made, over the whole set of possible classes and the whole (in our case continuous) pattern space. The expected loss or conditional risk for deciding that x belongs to ω_i is defined as

Conditional risk of classifying x as ω_i:  R_i(x) = Σ_{j=1}^{c} λ(ω_i | ω_j) P(ω_j | x),  i = 0,…,c   (11)

Note that this formula also includes the conditional risk R_0(x) of rejecting the classification. Since the decided class ω_i is a function of the decision rule, we can formulate the conditional risk as a function of the decision function d(x), such that (11) can equivalently be formulated as

Conditional risk of making a decision:  R(x) = Σ_{j=1}^{c} λ(d(x) | ω_j) P(ω_j | x)   (12)

The average risk over the whole pattern space is the expected value E{R(x)} of the conditional risk R(x):

Average risk:  E{R(x)} = ∫_{R^D} R(x) p(x) dx   (13)

An optimal rule d(x) tries to minimize this average risk. The integral in (13) is minimized by always choosing the minimum R(x) for each possible value x. By observing (11) we see that this is easily achieved by choosing the conditional risk R_i(x) that has the smallest value over all classes ω_i, i = 0,…,c. We can now state the

Bayes Decision Rule for Minimum Risk:

d(x) = ω_i  if  R_i(x) < R_j(x), j = 1,…,c, i ≠ j, i ∈ {0,…,c}   (14)

This rule is the most general mechanism that classifies a pattern into a class, allowing the possibility of rejection and the association of costs with certain decisions.

Example 2: Minimum risk classification

Let us assume the cost matrix C below in the Reactor example, allowing rejection. We associate no cost with correct classification. If a normal situation has been detected while in reality the reactor has a problem, we say the cost is high, e.g. 1000 in this example. This reflects the high risk of a situation in which the operation of the reactor continues while a serious problem might damage the system. If in normal operation a fault has been detected, we have the cost of unnecessarily having to take countermeasures against a fault that has not occurred, e.g. the costs of suspending the operation of the plant. We furthermore say that asking a human operator (i.e. rejecting the automatic classification) costs 10 in the case of a fault and 20 in the case of normal operation.

                                     True class
                                     Normal   Fault
C = [λ(ω_i | ω_j)] =   Reject           20      10
                       Normal            0    1000
                       Fault           100       0
                       (classified class in rows)

Let us further assume that we know the a priori probabilities of the two classes as P(Normal) = 0.95 and P(Fault) = 0.05, i.e. a fault occurs relatively seldom. Finally, let us assume that the conditional probability density functions for Normal, p(x | Normal), and Fault, p(x | Fault), were known and that the likelihoods of the two classes for the process situation x = (62 degrees, 2.8 m)^T had been calculated as p(x | Normal) = 2.5 and p(x | Fault) = 1.4. Using (2) on page 8 we calculate p(x) = 2.5 · 0.95 + 1.4 · 0.05 = 2.445. The two a posteriori probabilities from the Bayes theorem (3) on page 8 follow as P(Normal | x) = 2.375 / 2.445 = 0.97 and P(Fault | x) = 0.07 / 2.445 = 0.03. We can now calculate the conditional risk for each of the three possible decisions Reject, Normal and Fault, using (11):

R_Reject(x) = R_0(x) = λ(ω_0 | ω_1) P(ω_1 | x) + λ(ω_0 | ω_2) P(ω_2 | x) = 20 · 0.97 + 10 · 0.03 = 19.7
R_Normal(x) = R_1(x) = λ(ω_1 | ω_1) P(ω_1 | x) + λ(ω_1 | ω_2) P(ω_2 | x) = 0 · 0.97 + 1000 · 0.03 = 30
R_Fault(x)  = R_2(x) = λ(ω_2 | ω_1) P(ω_1 | x) + λ(ω_2 | ω_2) P(ω_2 | x) = 100 · 0.97 + 0 · 0.03 = 97

Finally, rule (14) decides that it is less risky to reject the decision. Although the Normal situation is much more probable, the high risk associated with a wrong decision overturns the result of the classification.

2.5 Inference and Decision

From the concepts of the Bayesian decision rules that were presented above we can establish a model for Bayesian pattern classification, as depicted in Fig. 4.
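Example 2 can be reproduced in a few lines; the cost matrix and the posteriors are those derived in the text:

```python
# Rows: decision (reject, normal, fault); columns: true class (normal, fault).
costs = {
    "reject": {"normal": 20, "fault": 10},
    "normal": {"normal": 0, "fault": 1000},
    "fault":  {"normal": 100, "fault": 0},
}
posterior = {"normal": 0.97, "fault": 0.03}

# Conditional risk of each decision, Eq. (11).
risk = {d: sum(costs[d][c] * posterior[c] for c in posterior) for d in costs}
decision = min(risk, key=risk.get)  # minimum-risk rule, Eq. (14)

print(risk)      # reject ~ 19.7, normal ~ 30, fault ~ 97
print(decision)  # 'reject'
```

With these costs the rejection decision wins even though P(Normal | x) = 0.97, exactly as in the worked example.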
Fig. 4  Bayesian pattern classification: inference (computation of P(ω_j | x), j = 1,…,c) and decision making (ω_i = d(x), i ∈ {0,…,c}), controlled by a cost function and a rejection rule

We have to distinguish clearly between the two stages of inference and decision making. In the inference part we calculate or estimate the a posteriori probabilities P(ω_j | x) for all possible classes. The second part, decision making, works on a logically higher level, weighing the consequences of accepting a classification result or of rejecting the classification. The introduction of a cost function and the definition of a rejection rule take care of this. The decision making part finally determines the class ω_i that will be attributed to the pattern x.

More generally, in the inference part we want to find discriminant functions d_j(x) which calculate a degree of membership for a class ω_j. Ideally this is the a posteriori probability P(ω_j | x), but it is not always possible to obtain this value directly, since no statistical model may be available to calculate probabilities. Let us assume that the inference part of the classifier delivers a discriminant function d_j(x) for each of the c possible classes ω_j, and that d_max and d_max2 are the highest and second highest values of the discriminant functions. Let us furthermore assume that t_j is the target vector of the 1-out-of-c coding of class ω_j and that d(x) is the decision vector of the decision rule (see section 3.6.2). For the rejection option we can define three different rules which determine when a rejection should be triggered:

- The maximum value of the discriminant function falls short of a threshold: d_max < τ_1
- The two highest values of the discriminant function are too similar: d_max − d_max2 < τ_2
- The value of the discriminant function is too far away from any plausible target value: ‖d(x) − t_j‖² > τ_3, j = 1,…,c

In the subsequent part of this text our main concern will be the inference stage of the Bayesian pattern classification task. It will be our goal to find functions which quantify the probability, or more generally the degree of membership, with which a pattern belongs to a certain class. We must remember the very objective that we have in mind, namely classifying patterns into classes. The probabilistic framework is not the only method for classification. We can also partition the pattern space directly into decision regions by defining boundaries between the different classes. Let us however first focus on the basic problems of data-driven pattern classification in a probabilistic framework.

2.6 Model Assumption and Parameter Estimation

The examples of section 2.1 and section 2.4 lead us to the main problems linked to the direct calculation of the a posteriori probabilities in the Bayesian framework.
First, how do we know that the features obey a certain probability density function, and second, where do we get the parameters of these distributions from? For instance, in Example 1 on page 8 we could ask: how do we know that the feature petal width is really normally distributed, i.e. PetalWidth ∼ N(µ, σ²), and what are the values of the two parameters that are necessary to characterize the distribution, the mean µ and the variance σ²? Usually, in a real-world pattern recognition task the only available information are the examples, the training set, a.k.a. the learning set. If we use the Bayesian framework to calculate a posteriori probabilities, we consequently have to solve two often unsolvable problems. The first is the model assumption problem: we have to assume that the data obeys a certain law of probability distributions. The second is the parameter estimation problem: from the available data we have to estimate the true parameters of the probability distribution. Both problems are potential sources of error that can deteriorate the classification performance of a system to such an extent that the achievable theoretical error rates are considerably surpassed. In order to acquaint the reader with the problems of assumption and estimation, we will direct our attention to the knowledge that is normally available to the classifier designer. As was already mentioned, the only information that is accessible are the training samples which characterize the classification problem. These examples are points in the multidimensional pattern space. They are a finite set of values for the

unknown joint probability of the stochastic pattern source of Fig. 3. For tutorial purposes usually only two of the D features are used, since it is not possible to visualize more than three dimensions directly. In Fig. 5a only two of the four possible features are used to plot the positions of the 150 examples of the Iris data set. Since for Reactor only two features exist a priori, we can use all features in Fig. 5b. From the plot the observer can derive a lot of information that is surely helpful in making assumptions or choosing a particular (probabilistic) model of the classifier. The dotted curves are boundaries that the observer would probably draw in this or a similar form as separators among the different classes. The boundaries define regions which divide the pattern space into the classes. It can be seen that the regions of the two species Versicolor and Virginica overlap in the Iris example. This means implicitly that the classes are not totally separable and also that we will automatically commit classification errors. The overlap defines a new region, for instance a rejection region, since the class of a sample is quite ambiguous in that overlapping region.

Fig. 5  Training samples in the pattern space: a.) Iris, using two of the four features (petal length vs. petal width; classes Iris-setosa, Iris-versicolor, Iris-virginica); b.) Reactor (temperature vs. tank level; classes Normal and Fault)

The real positions of the boundaries between decision regions are defined by the decision rule (discriminant function) d(x). We attribute the pattern x to the class with the highest value of d(x). Note that the use of discriminant functions divides the pattern space into mutually exclusive regions (without overlap). Some examples of discriminant functions are:

- Bayes minimum risk (negative): d_i(x) = −R_i(x) (Eq. (14))
- Bayes minimum error: d_i(x) = P(ω_i | x) (Eq. (7))
- Bayes minimum error with rejection (Eq. (8))
- Any monotonous function of the above, e.g. log P(ω_i | x), since the decision boundaries are not changed.
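That a monotonous transformation leaves the decision unchanged can be checked directly; a minimal sketch (the posterior values are illustrative and must be strictly positive for the logarithm):

```python
import math

posteriors = {"setosa": 0.0001, "virginica": 0.183, "versicolor": 0.817}

argmax_p = max(posteriors, key=posteriors.get)
# log is strictly increasing, so the argmax (and hence the decision) is unchanged.
argmax_logp = max(posteriors, key=lambda c: math.log(posteriors[c]))

print(argmax_p == argmax_logp)  # True
```

Only the location of the maximum matters for the decision, not the actual values of the discriminant function.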
Easier understanding or computation can result from this approach. The natures of the decision boundaries are also different. We can assume a linear separator between Setosa and the remaining two classes, and nonlinear separators between all other classes. The Reactor example has an interesting constellation of the classes and their boundaries. The Normal class is situated within the Fault class and the separator seems to be circular, which means that the boundary can be described by a quadratic function. If we look at the density distribution of the samples in Fig. 5 we can suppose a quite different situation for the modes of the probability density functions. Remember that a

mode is defined as a maximum of a pdf. The pdfs of the three flowers in Fig. 5a seem to be unimodal, i.e. all samples of the same class appear in the same cluster, whereas the pdf of the Fault class in Fig. 5b seems to be quite multimodal, i.e. there appears more than one cluster.

Model assumption

We have speculated about the distribution of the samples. The plot gave us an idea about what could really be the nature of the probability density functions of the classes. Since we need the definition of the pdf, and since we do not have it, we must assume it. This assumption should be in accordance with the real situation that characterizes the pattern source. In statistics there exist hypothesis tests which theoretically could be employed to verify, to a certain extent, that the sample data belong to a certain family of probability density distributions. In classification practice these tests do not make much sense. This is because the size of the training set is usually small. Especially if the dimension D of the pattern space is large, we need a practically impossibly high number of training samples to be able to make reliable statements about the statistical nature of the data. Therefore in practice we go the other way: first we assume a certain model, estimate its optimal parameters, look at the performance of the classifier and compare it to alternative classifiers. If the result matches our expectations we can say that the assumed distribution model is satisfactory.

Parameter estimation

Once the model has been defined, its free variables must be determined. So to say, we have defined the vehicle which will lead us to the desired classification, but now we have to tune its machinery in order to provide the best performance. The number of samples is limited. Therefore we have to rely on the data to estimate the parameters. Estimation is usually done by maximum-likelihood estimators, techniques that try to minimize the approximation error between the real values of the parameters and their estimated versions.
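For the one-dimensional Gaussian model the maximum-likelihood estimates have a closed form: the sample mean and the sample variance with the 1/N factor. A sketch on invented petal-width readings (the data values are ours, for illustration only):

```python
def ml_estimate_gaussian(samples):
    """Maximum-likelihood estimates (mu, sigma^2) of a 1-D Gaussian."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n  # ML uses 1/n, not 1/(n-1)
    return mu, var

# Hypothetical petal-width readings for one class.
widths = [0.2, 0.2, 0.3, 0.2, 0.4, 0.2]
mu, var = ml_estimate_gaussian(widths)
print(mu, var)
```

Note that the 1/n estimator of the variance is biased; for the large-sample sizes assumed by the theory the difference from the unbiased 1/(n−1) estimator is negligible.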
Parameter estimation is a particular case of supervised learning in the context of pattern recognition. We use the labeled examples to adapt the free variables of a given model. The goal is to optimize the internal variables of the model relative to a performance criterion.

Assuming a multivariate Gaussian distribution and estimating its parameters

The most cited example in the pattern recognition literature for an assumed probability density function is the multivariate Gaussian distribution. Equation (4) on page 8 already defined a unidimensional Gaussian probability density function. If we use more than one variable, the probability density function for a class ω_i is given as

p(x | ω_i) = 1 / ((2π)^(D/2) |Σ_i|^(1/2)) · exp[ −(1/2) (x − µ_i)^T Σ_i^(−1) (x − µ_i) ]   (15)

where µ_i is the mean vector of class ω_i and Σ_i is the covariance matrix of class ω_i. The compact notation for the multivariate Gaussian distribution is p(x | ω_i) ∼ N(µ_i, Σ_i). The mean vector µ_i is the multidimensional extension of the mean of one class, and the D × D covariance matrix Σ_i is the multidimensional extension of the variance σ_i², this time also including the covariances σ_ij between two features x_i and x_j. Furthermore, we have to know the a priori probabilities P(ω_i) of each class in order to calculate minimum-error and minimum-risk Bayesian classifiers.
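Equation (15), together with maximum-likelihood estimates of µ_i and Σ_i, can be sketched with NumPy; the two-dimensional samples below are invented for illustration (think of temperature and tank level in the Reactor example):

```python
import numpy as np

def fit_gaussian(samples):
    """ML estimates of the mean vector and covariance matrix for one class."""
    X = np.asarray(samples, dtype=float)
    mu = X.mean(axis=0)
    diff = X - mu
    sigma = diff.T @ diff / len(X)  # ML covariance uses the 1/N factor
    return mu, sigma

def multivariate_gaussian_pdf(x, mu, sigma):
    """Evaluate the multivariate Gaussian pdf of Eq. (15) at x."""
    d = len(mu)
    diff = np.asarray(x, dtype=float) - mu
    norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(sigma))
    return float(np.exp(-0.5 * diff @ np.linalg.solve(sigma, diff)) / norm)

# Invented 2-D samples for one class.
samples = [[60.0, 2.5], [62.0, 2.9], [61.0, 2.6], [63.0, 3.0]]
mu, sigma = fit_gaussian(samples)
print(multivariate_gaussian_pdf([62.0, 2.8], mu, sigma))
```

Plugging the fitted p(x | ω_i) and the class priors into the Bayes theorem (3) then yields the posteriors needed by the decision rules of sections 2.2 to 2.4.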


THE METHOD OF SECTIONING WITH APPLICATION TO SIMULATION, by Danie 1 Brent ~~uffman'i THE METHOD OF SECTIONING '\ WITH APPLICATION TO SIMULATION, I by Danie 1 Brent ~~uffman'i Thesis submitted to the Graduate Faulty of the Virginia Polytehni Institute and State University in partial fulfillment

More information

Multicomponent analysis on polluted waters by means of an electronic tongue

Multicomponent analysis on polluted waters by means of an electronic tongue Sensors and Atuators B 44 (1997) 423 428 Multiomponent analysis on polluted waters by means of an eletroni tongue C. Di Natale a, *, A. Maagnano a, F. Davide a, A. D Amio a, A. Legin b, Y. Vlasov b, A.

More information

4 Puck s action plane fracture criteria

4 Puck s action plane fracture criteria 4 Puk s ation plane frature riteria 4. Fiber frature riteria Fiber frature is primarily aused by a stressing σ whih ats parallel to the fibers. For (σ, σ, τ )-ombinations the use of a simple maximum stress

More information

On the Quantum Theory of Radiation.

On the Quantum Theory of Radiation. Physikalishe Zeitshrift, Band 18, Seite 121-128 1917) On the Quantum Theory of Radiation. Albert Einstein The formal similarity between the hromati distribution urve for thermal radiation and the Maxwell

More information

Distributed Gaussian Mixture Model for Monitoring Multimode Plant-wide Process

Distributed Gaussian Mixture Model for Monitoring Multimode Plant-wide Process istributed Gaussian Mixture Model for Monitoring Multimode Plant-wide Proess Jinlin Zhu, Zhiqiang Ge, Zhihuan Song. State ey Laboratory of Industrial Control Tehnology, Institute of Industrial Proess Control,

More information

ONLINE APPENDICES for Cost-Effective Quality Assurance in Crowd Labeling

ONLINE APPENDICES for Cost-Effective Quality Assurance in Crowd Labeling ONLINE APPENDICES for Cost-Effetive Quality Assurane in Crowd Labeling Jing Wang Shool of Business and Management Hong Kong University of Siene and Tehnology Clear Water Bay Kowloon Hong Kong jwang@usthk

More information

A new method of measuring similarity between two neutrosophic soft sets and its application in pattern recognition problems

A new method of measuring similarity between two neutrosophic soft sets and its application in pattern recognition problems Neutrosophi Sets and Systems, Vol. 8, 05 63 A new method of measuring similarity between two neutrosophi soft sets and its appliation in pattern reognition problems Anjan Mukherjee, Sadhan Sarkar, Department

More information

Bäcklund Transformations: Some Old and New Perspectives

Bäcklund Transformations: Some Old and New Perspectives Bäklund Transformations: Some Old and New Perspetives C. J. Papahristou *, A. N. Magoulas ** * Department of Physial Sienes, Helleni Naval Aademy, Piraeus 18539, Greee E-mail: papahristou@snd.edu.gr **

More information

A simple expression for radial distribution functions of pure fluids and mixtures

A simple expression for radial distribution functions of pure fluids and mixtures A simple expression for radial distribution funtions of pure fluids and mixtures Enrio Matteoli a) Istituto di Chimia Quantistia ed Energetia Moleolare, CNR, Via Risorgimento, 35, 56126 Pisa, Italy G.

More information

arxiv:physics/ v1 [physics.class-ph] 8 Aug 2003

arxiv:physics/ v1 [physics.class-ph] 8 Aug 2003 arxiv:physis/0308036v1 [physis.lass-ph] 8 Aug 003 On the meaning of Lorentz ovariane Lszl E. Szab Theoretial Physis Researh Group of the Hungarian Aademy of Sienes Department of History and Philosophy

More information

3 Tidal systems modelling: ASMITA model

3 Tidal systems modelling: ASMITA model 3 Tidal systems modelling: ASMITA model 3.1 Introdution For many pratial appliations, simulation and predition of oastal behaviour (morphologial development of shorefae, beahes and dunes) at a ertain level

More information

The universal model of error of active power measuring channel

The universal model of error of active power measuring channel 7 th Symposium EKO TC 4 3 rd Symposium EKO TC 9 and 5 th WADC Workshop nstrumentation for the CT Era Sept. 8-2 Kosie Slovakia The universal model of error of ative power measuring hannel Boris Stogny Evgeny

More information

Lightpath routing for maximum reliability in optical mesh networks

Lightpath routing for maximum reliability in optical mesh networks Vol. 7, No. 5 / May 2008 / JOURNAL OF OPTICAL NETWORKING 449 Lightpath routing for maximum reliability in optial mesh networks Shengli Yuan, 1, * Saket Varma, 2 and Jason P. Jue 2 1 Department of Computer

More information

A Spatiotemporal Approach to Passive Sound Source Localization

A Spatiotemporal Approach to Passive Sound Source Localization A Spatiotemporal Approah Passive Sound Soure Loalization Pasi Pertilä, Mikko Parviainen, Teemu Korhonen and Ari Visa Institute of Signal Proessing Tampere University of Tehnology, P.O.Box 553, FIN-330,

More information

Determination of the reaction order

Determination of the reaction order 5/7/07 A quote of the wee (or amel of the wee): Apply yourself. Get all the eduation you an, but then... do something. Don't just stand there, mae it happen. Lee Iaoa Physial Chemistry GTM/5 reation order

More information

Lecture 3 - Lorentz Transformations

Lecture 3 - Lorentz Transformations Leture - Lorentz Transformations A Puzzle... Example A ruler is positioned perpendiular to a wall. A stik of length L flies by at speed v. It travels in front of the ruler, so that it obsures part of the

More information

The Hanging Chain. John McCuan. January 19, 2006

The Hanging Chain. John McCuan. January 19, 2006 The Hanging Chain John MCuan January 19, 2006 1 Introdution We onsider a hain of length L attahed to two points (a, u a and (b, u b in the plane. It is assumed that the hain hangs in the plane under a

More information

MAC Calculus II Summer All you need to know on partial fractions and more

MAC Calculus II Summer All you need to know on partial fractions and more MC -75-Calulus II Summer 00 ll you need to know on partial frations and more What are partial frations? following forms:.... where, α are onstants. Partial frations are frations of one of the + α, ( +

More information

Case I: 2 users In case of 2 users, the probability of error for user 1 was earlier derived to be 2 A1

Case I: 2 users In case of 2 users, the probability of error for user 1 was earlier derived to be 2 A1 MUTLIUSER DETECTION (Letures 9 and 0) 6:33:546 Wireless Communiations Tehnologies Instrutor: Dr. Narayan Mandayam Summary By Shweta Shrivastava (shwetash@winlab.rutgers.edu) bstrat This artile ontinues

More information

Controller Design Based on Transient Response Criteria. Chapter 12 1

Controller Design Based on Transient Response Criteria. Chapter 12 1 Controller Design Based on Transient Response Criteria Chapter 12 1 Desirable Controller Features 0. Stable 1. Quik responding 2. Adequate disturbane rejetion 3. Insensitive to model, measurement errors

More information

Wave Propagation through Random Media

Wave Propagation through Random Media Chapter 3. Wave Propagation through Random Media 3. Charateristis of Wave Behavior Sound propagation through random media is the entral part of this investigation. This hapter presents a frame of referene

More information

2 The Bayesian Perspective of Distributions Viewed as Information

2 The Bayesian Perspective of Distributions Viewed as Information A PRIMER ON BAYESIAN INFERENCE For the next few assignments, we are going to fous on the Bayesian way of thinking and learn how a Bayesian approahes the problem of statistial modeling and inferene. The

More information

ON THE LEAST PRIMITIVE ROOT EXPRESSIBLE AS A SUM OF TWO SQUARES

ON THE LEAST PRIMITIVE ROOT EXPRESSIBLE AS A SUM OF TWO SQUARES #A55 INTEGERS 3 (203) ON THE LEAST PRIMITIVE ROOT EPRESSIBLE AS A SUM OF TWO SQUARES Christopher Ambrose Mathematishes Institut, Georg-August Universität Göttingen, Göttingen, Deutshland ambrose@uni-math.gwdg.de

More information

On the Bit Error Probability of Noisy Channel Networks With Intermediate Node Encoding I. INTRODUCTION

On the Bit Error Probability of Noisy Channel Networks With Intermediate Node Encoding I. INTRODUCTION 5188 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 11, NOVEMBER 2008 [8] A. P. Dempster, N. M. Laird, and D. B. Rubin, Maximum likelihood estimation from inomplete data via the EM algorithm, J.

More information

). In accordance with the Lorentz transformations for the space-time coordinates of the same event, the space coordinates become

). In accordance with the Lorentz transformations for the space-time coordinates of the same event, the space coordinates become Relativity and quantum mehanis: Jorgensen 1 revisited 1. Introdution Bernhard Rothenstein, Politehnia University of Timisoara, Physis Department, Timisoara, Romania. brothenstein@gmail.om Abstrat. We first

More information

Sensor management for PRF selection in the track-before-detect context

Sensor management for PRF selection in the track-before-detect context Sensor management for PRF seletion in the tra-before-detet ontext Fotios Katsilieris, Yvo Boers, and Hans Driessen Thales Nederland B.V. Haasbergerstraat 49, 7554 PA Hengelo, the Netherlands Email: {Fotios.Katsilieris,

More information

Relativistic Addition of Velocities *

Relativistic Addition of Velocities * OpenStax-CNX module: m42540 1 Relativisti Addition of Veloities * OpenStax This work is produed by OpenStax-CNX and liensed under the Creative Commons Attribution Liense 3.0 Abstrat Calulate relativisti

More information

Coding for Random Projections and Approximate Near Neighbor Search

Coding for Random Projections and Approximate Near Neighbor Search Coding for Random Projetions and Approximate Near Neighbor Searh Ping Li Department of Statistis & Biostatistis Department of Computer Siene Rutgers University Pisataay, NJ 8854, USA pingli@stat.rutgers.edu

More information

Calibration of Piping Assessment Models in the Netherlands

Calibration of Piping Assessment Models in the Netherlands ISGSR 2011 - Vogt, Shuppener, Straub & Bräu (eds) - 2011 Bundesanstalt für Wasserbau ISBN 978-3-939230-01-4 Calibration of Piping Assessment Models in the Netherlands J. Lopez de la Cruz & E.O.F. Calle

More information

The Second Postulate of Euclid and the Hyperbolic Geometry

The Second Postulate of Euclid and the Hyperbolic Geometry 1 The Seond Postulate of Eulid and the Hyperboli Geometry Yuriy N. Zayko Department of Applied Informatis, Faulty of Publi Administration, Russian Presidential Aademy of National Eonomy and Publi Administration,

More information

Remark 4.1 Unlike Lyapunov theorems, LaSalle s theorem does not require the function V ( x ) to be positive definite.

Remark 4.1 Unlike Lyapunov theorems, LaSalle s theorem does not require the function V ( x ) to be positive definite. Leture Remark 4.1 Unlike Lyapunov theorems, LaSalle s theorem does not require the funtion V ( x ) to be positive definite. ost often, our interest will be to show that x( t) as t. For that we will need

More information

Indian Institute of Technology Bombay. Department of Electrical Engineering. EE 325 Probability and Random Processes Lecture Notes 3 July 28, 2014

Indian Institute of Technology Bombay. Department of Electrical Engineering. EE 325 Probability and Random Processes Lecture Notes 3 July 28, 2014 Indian Institute of Tehnology Bombay Department of Eletrial Engineering Handout 5 EE 325 Probability and Random Proesses Leture Notes 3 July 28, 2014 1 Axiomati Probability We have learned some paradoxes

More information

CSC2515 Winter 2015 Introduc3on to Machine Learning. Lecture 5: Clustering, mixture models, and EM

CSC2515 Winter 2015 Introduc3on to Machine Learning. Lecture 5: Clustering, mixture models, and EM CSC2515 Winter 2015 Introdu3on to Mahine Learning Leture 5: Clustering, mixture models, and EM All leture slides will be available as.pdf on the ourse website: http://www.s.toronto.edu/~urtasun/ourses/csc2515/

More information

A NONLILEAR CONTROLLER FOR SHIP AUTOPILOTS

A NONLILEAR CONTROLLER FOR SHIP AUTOPILOTS Vietnam Journal of Mehanis, VAST, Vol. 4, No. (), pp. A NONLILEAR CONTROLLER FOR SHIP AUTOPILOTS Le Thanh Tung Hanoi University of Siene and Tehnology, Vietnam Abstrat. Conventional ship autopilots are

More information

Aharonov-Bohm effect. Dan Solomon.

Aharonov-Bohm effect. Dan Solomon. Aharonov-Bohm effet. Dan Solomon. In the figure the magneti field is onfined to a solenoid of radius r 0 and is direted in the z- diretion, out of the paper. The solenoid is surrounded by a barrier that

More information

Tarek Aissa, Christian Arnold, Steven Lambeck

Tarek Aissa, Christian Arnold, Steven Lambeck Preprints of the 9th orld Congress he International Federation of Automati Control Cape own, outh Afria. August 4-9, 04 Combined Approah of Fuzzy Deision Making and Preditive Funtional Control to Minimize

More information

Speed-feedback Direct-drive Control of a Low-speed Transverse Flux-type Motor with Large Number of Poles for Ship Propulsion

Speed-feedback Direct-drive Control of a Low-speed Transverse Flux-type Motor with Large Number of Poles for Ship Propulsion Speed-feedbak Diret-drive Control of a Low-speed Transverse Flux-type Motor with Large Number of Poles for Ship Propulsion Y. Yamamoto, T. Nakamura 2, Y. Takada, T. Koseki, Y. Aoyama 3, and Y. Iwaji 3

More information

Neuro-Fuzzy Modeling of Heat Recovery Steam Generator

Neuro-Fuzzy Modeling of Heat Recovery Steam Generator International Journal of Mahine Learning and Computing, Vol. 2, No. 5, Otober 202 Neuro-Fuzzy Modeling of Heat Reovery Steam Generator A. Ghaffari, A. Chaibakhsh, and S. Shahhoseini represented in a network

More information

Chapter Review of of Random Processes

Chapter Review of of Random Processes Chapter.. Review of of Random Proesses Random Variables and Error Funtions Conepts of Random Proesses 3 Wide-sense Stationary Proesses and Transmission over LTI 4 White Gaussian Noise Proesses @G.Gong

More information

SINCE Zadeh s compositional rule of fuzzy inference

SINCE Zadeh s compositional rule of fuzzy inference IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL. 14, NO. 6, DECEMBER 2006 709 Error Estimation of Perturbations Under CRI Guosheng Cheng Yuxi Fu Abstrat The analysis of stability robustness of fuzzy reasoning

More information

JAST 2015 M.U.C. Women s College, Burdwan ISSN a peer reviewed multidisciplinary research journal Vol.-01, Issue- 01

JAST 2015 M.U.C. Women s College, Burdwan ISSN a peer reviewed multidisciplinary research journal Vol.-01, Issue- 01 JAST 05 M.U.C. Women s College, Burdwan ISSN 395-353 -a peer reviewed multidisiplinary researh journal Vol.-0, Issue- 0 On Type II Fuzzy Parameterized Soft Sets Pinaki Majumdar Department of Mathematis,

More information

Performing Two-Way Analysis of Variance Under Variance Heterogeneity

Performing Two-Way Analysis of Variance Under Variance Heterogeneity Journal of Modern Applied Statistial Methods Volume Issue Artile 3 5--003 Performing Two-Way Analysis of Variane Under Variane Heterogeneity Sott J. Rihter University of North Carolina at Greensboro, sjriht@ung.edu

More information

Development of Fuzzy Extreme Value Theory. Populations

Development of Fuzzy Extreme Value Theory. Populations Applied Mathematial Sienes, Vol. 6, 0, no. 7, 58 5834 Development of Fuzzy Extreme Value Theory Control Charts Using α -uts for Sewed Populations Rungsarit Intaramo Department of Mathematis, Faulty of

More information

Wavetech, LLC. Ultrafast Pulses and GVD. John O Hara Created: Dec. 6, 2013

Wavetech, LLC. Ultrafast Pulses and GVD. John O Hara Created: Dec. 6, 2013 Ultrafast Pulses and GVD John O Hara Created: De. 6, 3 Introdution This doument overs the basi onepts of group veloity dispersion (GVD) and ultrafast pulse propagation in an optial fiber. Neessarily, it

More information

HILLE-KNESER TYPE CRITERIA FOR SECOND-ORDER DYNAMIC EQUATIONS ON TIME SCALES

HILLE-KNESER TYPE CRITERIA FOR SECOND-ORDER DYNAMIC EQUATIONS ON TIME SCALES HILLE-KNESER TYPE CRITERIA FOR SECOND-ORDER DYNAMIC EQUATIONS ON TIME SCALES L ERBE, A PETERSON AND S H SAKER Abstrat In this paper, we onsider the pair of seond-order dynami equations rt)x ) ) + pt)x

More information

An I-Vector Backend for Speaker Verification

An I-Vector Backend for Speaker Verification An I-Vetor Bakend for Speaker Verifiation Patrik Kenny, 1 Themos Stafylakis, 1 Jahangir Alam, 1 and Marel Kokmann 2 1 CRIM, Canada, {patrik.kenny, themos.stafylakis, jahangir.alam}@rim.a 2 VoieTrust, Canada,

More information

QCLAS Sensor for Purity Monitoring in Medical Gas Supply Lines

QCLAS Sensor for Purity Monitoring in Medical Gas Supply Lines DOI.56/sensoren6/P3. QLAS Sensor for Purity Monitoring in Medial Gas Supply Lines Henrik Zimmermann, Mathias Wiese, Alessandro Ragnoni neoplas ontrol GmbH, Walther-Rathenau-Str. 49a, 7489 Greifswald, Germany

More information

arxiv: v1 [physics.gen-ph] 5 Jan 2018

arxiv: v1 [physics.gen-ph] 5 Jan 2018 The Real Quaternion Relativity Viktor Ariel arxiv:1801.03393v1 [physis.gen-ph] 5 Jan 2018 In this work, we use real quaternions and the basi onept of the final speed of light in an attempt to enhane the

More information

Risk Analysis in Water Quality Problems. Souza, Raimundo 1 Chagas, Patrícia 2 1,2 Departamento de Engenharia Hidráulica e Ambiental

Risk Analysis in Water Quality Problems. Souza, Raimundo 1 Chagas, Patrícia 2 1,2 Departamento de Engenharia Hidráulica e Ambiental Risk Analysis in Water Quality Problems. Downloaded from aselibrary.org by Uf - Universidade Federal Do Ceara on 1/29/14. Coyright ASCE. For ersonal use only; all rights reserved. Souza, Raimundo 1 Chagas,

More information

UTC. Engineering 329. Proportional Controller Design. Speed System. John Beverly. Green Team. John Beverly Keith Skiles John Barker.

UTC. Engineering 329. Proportional Controller Design. Speed System. John Beverly. Green Team. John Beverly Keith Skiles John Barker. UTC Engineering 329 Proportional Controller Design for Speed System By John Beverly Green Team John Beverly Keith Skiles John Barker 24 Mar 2006 Introdution This experiment is intended test the variable

More information

7 Max-Flow Problems. Business Computing and Operations Research 608

7 Max-Flow Problems. Business Computing and Operations Research 608 7 Max-Flow Problems Business Computing and Operations Researh 68 7. Max-Flow Problems In what follows, we onsider a somewhat modified problem onstellation Instead of osts of transmission, vetor now indiates

More information