Danielle Maddix AA238 Final Project December 9, 2016


Structure and Parameter Learning in Bayesian Networks with Applications to Predicting Breast Cancer Tumor Malignancy in a Lower Dimension Feature Space

Danielle Maddix
AA238 Final Project
December 9, 2016

Abstract -- Bayesian structure and parameter learning can be used to predict the probability that a breast tumor is malignant or benign. Bayesian structure learning is implemented using a K2 search over the space of directed acyclic graphs (DAGs) to obtain a Bayesian network with a higher Bayesian score than that of the network corresponding to the Naive Bayes model. We model the conditional probability distributions (CPDs) of the continuous observed features as Gaussians, and the discrete class label as Bernoulli distributed. Parameter learning via maximum likelihood estimation (MLE) is used to learn the mean and variance of the continuous CPDs and the observed counts for the discrete probabilities. A key advantage of this method is that it can be used for feature selection to reduce the dimensionality of the space: d-separation and the chain rule eliminate variables that are conditionally independent of the class. Various metrics are used to compare the methods, in which uncertain probabilities are penalized less than outright misclassifications, and a reward function is defined to see which methods produce more false negatives, the more dangerous outcome in this application. Precision-recall and ROC curves are also used to compare the methods and to study the effect of rounding these probabilities at various thresholds.

I. INTRODUCTION

A. Problem Description

Decision making under uncertainty has a wide variety of applications in scientific computing. One particular application in the biological sciences, investigated here, is predicting whether a breast tumor is malignant or benign. Even though predictive medical procedures are available for diagnosis, one test may not be definitive enough, and there can be error margins of either false positives or false negatives.
An algorithm that could take into account many test diagnostics and make a prediction has the potential for broad impact in the medical field. In fact, the medical literature is already becoming rich in such methods, as seen in [1], [2], [3], [4], with the potential goal of patients having to undergo fewer extensive tests. It is clear that some features are very predictive, such as the size of the tumor, but considering this feature alone cannot give definitive results.

The comprehensive dataset utilized is the Breast Cancer Wisconsin (Diagnostic) Dataset from the UC Irvine Machine Learning Repository [5]. The dataset is fairly rich in examples, with m = 569 patients. It consists of a matrix with 32 columns, where each row is a patient sample. For a given row, the first column is the patient ID, which is ignored in this study, and the second column is the class label: M for the malignant class, c = 1, and B for the benign class, c = 0. The remaining n = 30 columns form the observation vector o ∈ R^n. There are ten distinct continuous observations, namely the radius, texture, perimeter, area, smoothness, compactness, concavity, concave points, symmetry and fractal dimension [5]. For each of these, the mean, standard error and worst case measurements are reported: the first ten columns correspond to the mean, columns 11-20 to the standard error, and columns 21-30 to the worst case measurements. The class distribution is 357 benign samples (62.7%) and 212 malignant samples (37.3%). The output is the probability that the tumor is malignant given the input data, that is, p(c_i = 1 | o_i) for patient i. By analyzing this dataset, we would like to determine the subset of the observed features that is most relevant for predicting this probability.

B. Prior Work

Past work on this dataset [1], [2], [3], [4] has addressed machine learning classification problems, where instead of outputting a probability that captures the uncertainty in the prediction, a label is assigned to the tumor being malignant or not. This problem reduces to computing the decision boundary between the malignant and benign sets. It was shown in [5] that the sets are linearly separable using all 30 input observed features. Moreover, the best predictive accuracy, as measured by k-fold cross validation (CV) with k = 10, was obtained using one separating plane in the 3-dimensional feature space of worst area, worst smoothness and mean texture. This was obtained using a common optimization method, linear programming (LP), as described in [6]. In particular, the separating plane was computed using the Multisurface Method-Tree (MSM-T), a classification method that solves an LP to construct a decision tree [7]. The relevant features were selected using an exhaustive search in the space of 1-4 features and 1-3 separating planes [5].

In [8], machine learning classification algorithms were implemented to compute linear decision boundaries, based on the prior literature's analysis that the data are linearly separable. The supervised learning algorithms tested were logistic regression, linear and quadratic Gaussian Discriminant Analysis (GDA) and Support Vector Machines (SVM). Logistic regression is a discriminative learning algorithm that directly models the conditional probability p(c | o) using a logistic function. The fitting parameter θ ∈ R^{n+1}, including the intercept term, is estimated via MLE, with an optimization algorithm used to find the optimal θ. In GDA, a model is built for both the malignant tumors, p(o | c = 1), and the benign tumors, p(o | c = 0). The posterior p(c = 1 | o) is then obtained from Bayes' rule, namely

p(c = 1 | o) = p(o | c = 1) p(c = 1) / (p(o | c = 0) p(c = 0) + p(o | c = 1) p(c = 1)).    (1)

In linear GDA, the class-conditional densities are assumed to be multivariate Gaussians with means μ_0, μ_1 ∈ R^n and a shared covariance matrix Σ ∈ R^{n×n}, calculated using MLE counts. The prior p(c) is assumed to be Bernoulli distributed. In the quadratic case, there are two separate covariance matrices Σ_0 and Σ_1. Since the denominator is the same for both p(c = 1 | o) and p(c = 0 | o), GDA assigns a positive malignant label of 1 if the numerator of p(c = 1 | o) is larger than the numerator of p(c = 0 | o). Lastly, SVM solves a convex optimization problem to maximize the distance between the points and the decision boundary. Maximizing this distance is desirable, since points near the decision boundary represent a higher level of uncertainty. In that study, the k-fold CV error is also computed with k = 10, and linear GDA achieves the lowest error of 4.46%, using a large number of features (29). Furthermore, the error, according to the various metrics, namely hold-out CV and k-fold CV, decreases as the number of observed features increases, up to some upper bound.
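For concreteness, the linear GDA posterior of Equation (1) can be sketched as follows. This is a minimal NumPy illustration with a shared covariance matrix and Bernoulli prior, not the code from [8]; the function names and toy data are illustrative:

```python
import numpy as np

def fit_linear_gda(X, y):
    """MLE for linear GDA: per-class means, shared covariance, Bernoulli prior."""
    mu = {c: X[y == c].mean(axis=0) for c in (0, 1)}
    centered = np.vstack([X[y == c] - mu[c] for c in (0, 1)])
    sigma = centered.T @ centered / len(X)  # shared covariance matrix
    return mu, sigma, y.mean()              # y.mean() estimates p(c = 1)

def gda_posterior(x, mu, sigma, phi):
    """p(c=1 | x) via Bayes' rule, Eq. (1). The shared Gaussian
    normalization constant cancels between numerator and denominator."""
    inv = np.linalg.inv(sigma)
    lik = {c: np.exp(-0.5 * (x - mu[c]) @ inv @ (x - mu[c])) for c in (0, 1)}
    num = lik[1] * phi
    return num / (lik[0] * (1 - phi) + num)

# Toy 2D data: benign near (0,0), malignant near (3,3).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.concatenate([np.zeros(100, int), np.ones(100, int)])
mu, sigma, phi = fit_linear_gda(X, y)
print(gda_posterior(np.array([3.0, 3.0]), mu, sigma, phi))  # near 1
```

In the quadratic case, one covariance matrix would be fit per class instead of the pooled `sigma`.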
The absolute error measures the misclassification rate, namely

(1/m_test) Σ_{i=1}^{m_test} |c_i − ĉ_i|,    (2)

where c_i is the exact label, ĉ_i is the predicted label and m_test is the test set size. A key limitation of these methods is that they are forced to make a classification, even if there is high uncertainty in the probability. This can lead to a large number of false negatives or false positives. Moreover, it shows that a high dimensional feature space is required for the best results.

II. METHODS

The goal of this work is to identify the important features, reducing the problem from a high-dimensional feature space to a smaller one. To do so, structure learning is implemented to learn an optimal Bayesian network, which is compared to the Bayesian network from Naive Bayes. MLE is utilized to estimate the parameters of the network, comparing continuous Gaussian to discrete CPDs. The output of this method is the probability of the tumor being malignant, given the observations connected to the class, c, in the Bayesian network. Let o ∈ R^n denote the n observations and let õ ∈ R^ñ contain the components of o that are not conditionally independent of c, as inferred from the Bayesian network. Then, using the definition of conditional probability, the desired probability is given by

p(c | o_{1:n}) = p(c | õ_{1:ñ}) = p(c, õ_{1:ñ}) / p(õ_{1:ñ}).    (3)

We use our inference methods to infer the joint distributions using the chain rule for Bayesian networks, namely p(x_1, ..., x_n) = Π_{i=1}^{n} p(x_i | pa_{x_i}), where pa_{x_i} are the parents of x_i in the Bayesian network [9]. By the law of total probability, the denominator is simply Σ_{c=0}^{1} p(c, õ_{1:ñ}). So p(c, õ_{1:ñ}) is the only probability that needs to be computed via the chain rule, and we can then simply normalize.

A. Naive Bayes with Gaussian CPDs

In this first method, we assume that the Bayesian structure is known and follows the Naive Bayes model. The Naive Bayes assumption is that (o_i ⊥ o_j | c) for all i ≠ j. Thus, the only edges in this simplified Bayesian network are from c to each of the observed features, since the observations are conditionally independent of each other given the class. The Bayesian network for the Naive Bayes assumption over the first 10 mean observed features is shown below.

Fig. 1: Bayes net for Naive Bayes over the mean features (class node c with an edge to each of radius, texture, perimeter, area, smoothness, compactness, concavity, concave points, symmetry and fractal dimension)

Since c has no parents in this model and each observation o_i has c as its only parent, using the chain rule we obtain

p(c, õ_{1:ñ}) = p(c) Π_{i=1}^{ñ} p(õ_i | c).    (4)

Thus, our desired probability is as follows:

p(c = 1 | o_{1:n}) = [Π_{i=1}^{ñ} p(õ_i | c = 1) p(c = 1)] / [Π_{i=1}^{ñ} p(õ_i | c = 0) p(c = 0) + Π_{i=1}^{ñ} p(õ_i | c = 1) p(c = 1)].    (5)

Since we assume a specific Bayesian network in this case, only the parameter learning step is needed to compute these probabilities. Since c is binary, it can take on only 2 discrete values, so it can be estimated by one independent parameter θ = p(c = 1), where p(c = 0) = 1 − p(c = 1). The parameters are computed using a training set of size m_train. From MLE parameter learning,

p(c = 1) = (1/m_train) Σ_{i=1}^{m_train} 1(c_i = 1),    (6)

where 1 is the indicator function, so 1(c_i = 1) = 1 if c_i = 1 and 0 otherwise. Thus, this simply reduces to counting the number of malignant samples in the training set and dividing by the total number of training samples [9]. The next step is to determine an appropriate model for p(õ_i | c), for each i = 1 : ñ. From the success of GDA in [8],

it is clear that modeling the CPDs of the continuous features as Gaussians is a good approximation. Thus, we compute the MLE parameters of a single variable Gaussian, namely μ̂_1, σ̂_1² and μ̂_0, σ̂_0², on the subsets of the data containing the malignant and benign samples, respectively. Let m_ben = m_train − m_mal. This results in the standard statistical estimates below:

μ̂_0 = (1/m_ben) Σ_{i=1}^{m_train} õ_i 1(c_i = 0),    σ̂_0² = (1/m_ben) Σ_{i=1}^{m_train} (õ_i − μ̂_0)² 1(c_i = 0),
μ̂_1 = (1/m_mal) Σ_{i=1}^{m_train} õ_i 1(c_i = 1),    σ̂_1² = (1/m_mal) Σ_{i=1}^{m_train} (õ_i − μ̂_1)² 1(c_i = 1).    (7)

This is similar to Equation (1), where the major difference is that GDA uses MLE to compute multivariate Gaussian parameters, while here we use MLE for a product of single variable Gaussians. Moreover, since the output is a probability, the denominator is computed in this case. After using the training data to learn the parameters, we loop over the test data to predict the probability of malignancy of a new patient's tumor given the observations. Thus, for a patient with label c and observation õ,

p(õ_i | c = j) = (1/√(2π σ̂_ij²)) exp(−(õ_i − μ̂_ij)² / (2 σ̂_ij²)),    j ∈ {0, 1}.

Equation (5) is then utilized for the final probability.

B. Bayesian Network with Gaussian CPDs for mean features

We no longer assume that the Bayesian network is known and follows a Naive Bayes model. Instead, we use structure learning to learn a Bayesian network with a higher Bayesian score than that of the Bayesian network for Naive Bayes. We first consider the case of finding an optimal Bayesian structure for the first 10 mean features. Since these observed features are continuous, the first step is to discretize them into a specified number of bins, with the bin width inversely proportional to the number of bins, namely (max(õ_i) − min(õ_i)) / n_bins. This was done using an in-house MATLAB code, but could also be done using the Discretizers.jl library in Julia. For these experiments, the optimal number of bins was 20.
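Putting Section II.A together, the parameter learning of Equations (6)-(7) and the prediction of Equation (5) can be sketched in a few lines. This is a minimal NumPy sketch under the stated Gaussian model, not the in-house implementation; the function and variable names are illustrative:

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """MLE parameters for Naive Bayes with Gaussian CPDs (Eqs. 6-7).
    X: (m_train, n) feature matrix, y: (m_train,) binary labels."""
    params = {}
    for c in (0, 1):
        Xc = X[y == c]
        params[c] = {
            "prior": len(Xc) / len(X),   # Eq. (6): class count / m_train
            "mu": Xc.mean(axis=0),       # Eq. (7): per-feature means
            "var": Xc.var(axis=0),       # Eq. (7): per-feature variances
        }
    return params

def predict_proba(params, x):
    """p(c=1 | x) from Eq. (5): normalize the two joint probabilities.
    For many features, summing log-likelihoods avoids underflow."""
    joint = np.zeros(2)
    for c in (0, 1):
        p = params[c]
        lik = np.exp(-(x - p["mu"])**2 / (2 * p["var"])) / np.sqrt(2 * np.pi * p["var"])
        joint[c] = p["prior"] * np.prod(lik)
    return joint[1] / joint.sum()

# Tiny synthetic example: class 1 has larger feature values.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
y = np.concatenate([np.zeros(50, int), np.ones(50, int)])
params = fit_gaussian_nb(X, y)
print(predict_proba(params, np.array([3.0, 3.0, 3.0])))  # close to 1
```

On the real dataset, X would hold the selected columns of the 569×30 observation matrix and y the binary malignancy labels.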
Using the same Gaussian model for the CPDs and Bernoulli prior p(c), we now compute a Bayesian structure with a higher Bayesian score than that of the Bayesian network shown in Figure 1 for Naive Bayes. This can be done using K2GraphSearch in BayesNets.jl to search over the space of DAGs for a graph with a higher Bayesian score than the previous graph, with the appropriate conditional probability distributions specified for each node. Since the output depends on the ordering of the nodes given, we loop over 1000 random orderings of the nodes and store the Bayesian network with the highest Bayesian score, as shown below:

Fig. 2: Bayesian net with Gaussian CPDs for the mean features (nodes: radius, texture, perimeter, area, smoothness, compactness, concavity, concave points, symmetry, fractal dimension)

This Bayesian network can be used to determine the relevant observed features, by taking the subset of the observations with edges connected to c. Using this structure, we can perform inference using the conditional independence assumptions that it encodes. Due to d-separation, several features can be eliminated from the model, reducing the dimensionality of the feature space. Since the path contains a chain of nodes, it is evident that given o_4, area, c is conditionally independent of o_1, radius, and o_3, perimeter. Similarly, given o_7, concave points, c is conditionally independent of o_6, compactness, and o_8, concavity. Furthermore, o_10, fractal dimension, is unconnected and so is also conditionally independent of c. Thus, our model reduces to p(c | o_{1:10}) = p(c | õ_{1:5}), where õ = (o_2, o_4, o_5, o_7, o_9)^T. Since this subgraph follows the Naive Bayes assumption on õ, Equations (4) and (5) hold and the implementation is the same as in the prior subsection.

C. Bayesian Network with discrete CPDs for mean features

In this approach, we compute the Bayesian network assuming that the CPDs are discrete. As in the prior subsection, the first step is to discretize the continuous observations into 20 bins.
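The structure search over random node orderings used here and in the previous subsection can be sketched in simplified form. The following Python sketch implements a greedy K2-style parent addition with the standard K2 (uniform Dirichlet) Bayesian score on discrete data; it stands in for the BayesNets.jl K2GraphSearch and in-house MATLAB codes, and all names and the toy data are illustrative:

```python
import math
import random
from collections import defaultdict

def family_score(data, child, parents, arities):
    """K2 Bayesian score of one node given its parents:
    sum over parent configs j of log[(r-1)!/(N_j+r-1)!] + sum_k log N_jk!."""
    r = arities[child]
    counts = defaultdict(lambda: [0] * r)
    for row in data:
        counts[tuple(row[p] for p in parents)][row[child]] += 1
    score = 0.0
    for njk in counts.values():
        nj = sum(njk)
        score += math.lgamma(r) - math.lgamma(nj + r)
        score += sum(math.lgamma(n + 1) for n in njk)
    return score

def k2_search(data, order, arities, max_parents=2):
    """Greedy K2: for each node, repeatedly add the predecessor that most
    improves the Bayesian score, until no improvement or max_parents."""
    parents = {v: [] for v in order}
    for idx, v in enumerate(order):
        best = family_score(data, v, parents[v], arities)
        improved = True
        while improved and len(parents[v]) < max_parents:
            improved = False
            for u in order[:idx]:
                if u in parents[v]:
                    continue
                s = family_score(data, v, parents[v] + [u], arities)
                if s > best:
                    best, best_u, improved = s, u, True
            if improved:
                parents[v].append(best_u)
    total = sum(family_score(data, v, parents[v], arities) for v in order)
    return parents, total

# Toy binary data: x1 copies x0 with 10% noise, x2 is independent.
random.seed(0)
data = []
for _ in range(500):
    x0 = random.randint(0, 1)
    x1 = x0 if random.random() < 0.9 else 1 - x0
    data.append((x0, x1, random.randint(0, 1)))
arities = {0: 2, 1: 2, 2: 2}

best_net, best_score = None, -float("inf")
for _ in range(20):                       # random restarts over orderings
    order = random.sample([0, 1, 2], 3)
    net, score = k2_search(data, order, arities)
    if score > best_score:
        best_net, best_score = net, score
print(best_net)                           # an edge between nodes 0 and 1
```

The paper's experiments use 1000 random orderings over 11 nodes (c and o_{1:10}) rather than this 3-node toy.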
Each observation is thus discrete and takes integer values from 1 to 20. We use an in-house MATLAB code to find the optimal Bayesian network, given discrete distributions. We begin with a graph containing the nodes (c, o_{1:10}) over the first 10 mean features, to compare with the method in the prior subsection. Given a random ordering of the nodes as input, we loop over the predecessors of each node and add the parent that maximizes the Bayesian score [9]. We loop over 1000 random orderings of the nodes and return the DAG with the highest Bayesian score. The computed Bayesian network for this discrete model is shown below.

Fig. 3: Bayes net with discrete CPDs for the mean features (nodes: radius, texture, perimeter, area, smoothness, compactness, concavity, concave points, symmetry, fractal dimension)

Its Bayesian score is higher than the -13,473 Bayesian score of the discrete CPD model of the Bayes net from Naive Bayes. Using the d-separation properties, it is evident that c is conditionally independent of every node except concavity, texture, area, smoothness and symmetry, reducing the dimensionality from 10 to 5. So we have p(c | o_{1:10}) = p(c | õ), where õ = (o_4, o_9, o_2, o_5, o_8)^T. Using the chain rule, we get:

p(c, õ_{1:ñ}) = Π_{i=1}^{4} p(õ_i | c) · p(c | õ_5) p(õ_5).    (8)

Thus, our desired probability p(c = 1 | o_{1:n}) is as follows:

[Π_{i=1}^{4} p(õ_i | c = 1) p(c = 1 | õ_5)] / [Π_{i=1}^{4} p(õ_i | c = 0) p(c = 0 | õ_5) + Π_{i=1}^{4} p(õ_i | c = 1) p(c = 1 | õ_5)].

Note that p(õ_5) cancels from the numerator and denominator, since it does not depend on c. To perform inference, we must first learn the parameters of the network. For each i, we must compute p(õ_i | c). Since c is binary and õ_i can take on 20 values, this can be represented by 38 independent parameters, using the fact that probabilities sum to 1. The MLE parameters θ̂_k = p(õ_i = k | c) are simply the observed counts in the data with õ_i = k over the appropriate malignant or benign samples, as given below:

p(õ_i = k | c = 1) = Σ_{i=1}^{m_train} 1(õ_i = k, c_i = 1) / Σ_{i=1}^{m_train} 1(c_i = 1) = Σ_{i=1}^{m_train} 1(õ_i = k, c_i = 1) / m_mal,    (9)

p(õ_i = k | c = 0) = Σ_{i=1}^{m_train} 1(õ_i = k, c_i = 0) / Σ_{i=1}^{m_train} 1(c_i = 0) = Σ_{i=1}^{m_train} 1(õ_i = k, c_i = 0) / m_ben.    (10)

Lastly, to calculate p(c | õ_5), we have 20 independent parameters, since c is binary and õ_5 can take on 20 discrete values. So,

p(c = 1 | õ_5 = k) = Σ_{i=1}^{m_train} 1(õ_5 = k, c_i = 1) / Σ_{i=1}^{m_train} 1(õ_5 = k).    (11)

Now that we have the necessary parameters to compute the final probability for new patients, we loop over the discretized test data and calculate the above probabilities for the discrete values of õ.

D. Bayesian Network with Gaussian CPDs for all features

Since the Gaussian CPDs performed better than the discrete model, as seen in the Results section, we compute an optimal Bayesian structure using all 30 features, assuming the Gaussian model. The computed Bayesian network is given below, displaying only the nodes not conditionally independent of c.

Fig. 4: Bayes net computed using all 30 features (nodes connected to c: worst area, worst smoothness, symmetry std, perimeter std, concave points, texture, worst symmetry)

Worst area, worst smoothness and mean texture were selected, which were also the features cited as obtaining the best results in [5]. This resulting Bayesian network also follows the Naive Bayes assumption that the observations are conditionally independent of each other given the class.
Thus, Equations (4) and (5) also hold for õ = (o_24, o_25, o_9, o_13, o_17, o_2, o_29)^T, and the implementation follows from above. This is a new contribution relative to the work in [8], which only looked at subsets of features stored as consecutive elements of the feature vector, whereas here we have the key features selected from each category.

III. RESULTS AND DISCUSSION

We define the following metric to compare the various methods, namely

(1/m_test) Σ_{i=1}^{m_test} (p(c_i = 1 | õ_i) − c_i)².    (12)

Due to the square, it penalizes uncertain probabilities less than the maximum error for misclassifications, that is, predicting 1 when c_i = 0 or 0 when c_i = 1. For the Naive Bayes method, we investigate the optimal number of features to obtain the lowest metric. To do so, the k-fold CV is computed, which holds out m/k of the training data each time for testing and trains on the remaining portion. The result is the average of these k = 10 error metrics. As expected, and similar to the findings in [8], this testing error decreases as the number of features is increased, reaching a minimum of approximately 0.06 at 20 to 25 features, and then begins to increase again. We would like to avoid using such a large feature dimension space to obtain the minimum metric.

Fig. 5: Naive Bayes k-fold CV for various numbers of features

The results for the various methods are summarized below:

TABLE I: Probability metric results (training error and k-fold CV for NB 10, Gaussian 10, Discrete 10 and Gaussian 30)

For the first three methods, we only consider the first 10 mean features, and in the final method we consider all 30 features. The first row is the training error, which is obtained by training on the entire dataset and then using this training set as the test set. The training error should clearly be lower than the k-fold CV in every case. The k-fold CV is more informative about the effectiveness of a method because it shows the method's potential to generalize, which is the desirable property that, given a subset of the dataset, it can properly diagnose new patients.
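The metric of Equation (12) is a mean squared error between predicted probabilities and exact labels; a minimal sketch (names are illustrative):

```python
import numpy as np

def probability_metric(p_malignant, labels):
    """Squared-error metric of Eq. (12): mean of (p(c=1|o) - c)^2.
    A confident wrong prediction contributes the maximum error of 1,
    while an uncertain 0.5 contributes only 0.25 either way."""
    p = np.asarray(p_malignant, dtype=float)
    c = np.asarray(labels, dtype=float)
    return np.mean((p - c) ** 2)

# (0.1^2 + 0.1^2 + 0.5^2) / 3
print(probability_metric([0.9, 0.1, 0.5], [1, 0, 1]))  # about 0.09
```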
The discrete CPD method using the first 10 features to calculate the Bayesian network has the lowest training error, but the highest k-fold CV test error. This illustrates one of the negative effects of using MLE parameter learning on a limited dataset: if the training set has no occurrence of õ_i = k together with the given class value, then it falsely assigns a probability of 0. This increases the number of false negatives, as indicated by the lower recall for this method in Table II. The full training set, on the other hand, is representative of each observation taking each of the discrete values. A Bayesian parameter learning approach with Beta and Dirichlet distributions to represent these counts from the dataset [9] may lead to improvement.

With Naive Bayes we need a large number of features, 29 to be precise, to obtain a k-fold CV error of approximately 0.06, whereas with structure learning over c and the observed features, only 7 key features are needed to obtain a lower k-fold CV error.

Another key component of this research is determining how to choose the appropriate class, using the probability as well as other factors specific to this application. Thus, we need to round the probabilities according to some threshold parameter ɛ. If p(c = 1 | õ) ≥ ɛ, we diagnose a malignant tumor with label c = 1, and if p(c = 1 | õ) < ɛ, we diagnose a benign tumor with label c = 0. The simplest ɛ to choose as a first attempt is 0.5. Using the k-fold CV and rounding to compute the labels, we calculate the precision, which is a measure of the number of false positives, the recall, which is a measure of the number of false negatives, and the specificity. Precision depends on the computed class label and is defined as p(c = 1 | ĉ = 1) = TP / (TP + FP), where TP is the number of true positives and FP is the number of false positives. Recall and specificity depend on the exact label and are defined as p(ĉ = 1 | c = 1) = TP / (TP + FN) and p(ĉ = 0 | c = 0) = TN / (FP + TN), respectively, where FN is the number of false negatives and TN is the number of true negatives.
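The rounding rule and the three count-based measures above can be sketched as follows (a minimal NumPy sketch; names and data are illustrative):

```python
import numpy as np

def threshold_metrics(p_malignant, labels, eps=0.5):
    """Precision, recall and specificity after rounding p(c=1|o) at eps."""
    p = np.asarray(p_malignant)
    c = np.asarray(labels)
    pred = (p >= eps).astype(int)          # diagnose malignant when p >= eps
    tp = np.sum((pred == 1) & (c == 1))
    fp = np.sum((pred == 1) & (c == 0))
    fn = np.sum((pred == 0) & (c == 1))
    tn = np.sum((pred == 0) & (c == 0))
    return {
        "precision": tp / (tp + fp),       # p(c=1 | predicted 1)
        "recall": tp / (tp + fn),          # p(predicted 1 | c=1)
        "specificity": tn / (tn + fp),     # p(predicted 0 | c=0)
    }

# Lowering eps trades precision for recall (fewer false negatives).
probs = [0.95, 0.7, 0.4, 0.2, 0.6, 0.1]
labels = [1, 1, 1, 0, 0, 0]
print(threshold_metrics(probs, labels, eps=0.5))
print(threshold_metrics(probs, labels, eps=0.3))
```

Sweeping eps over a grid of values and plotting the resulting pairs produces the precision-recall and ROC curves discussed next.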
TABLE II: Precision, recall and specificity for ɛ = 0.5

              NB 10     Gaussian 10  Discrete 10  Gaussian 30
Precision     87.16%    93.18%       90.60%       91.16%
Recall        87.81%    87.35%       83.71%       88.37%
Specificity   94.69%    97.10%       96.04%       96.67%

It is evident that the Bayesian network over all 30 observations with Gaussian CPDs has the highest recall. We now investigate different values of ɛ to increase the recall, since the decision should take into account the consequences of a misclassification, rather than just rounding up or down [9]. In this case, a false negative has higher consequences than a false positive, because diagnosing the tumor as benign when it is really malignant could be deadly, whereas for a false positive more tests may be required to confirm the diagnosis. In the figure below, we vary ɛ from 0.001 to 0.999 in equally spaced intervals and plot the precision versus the recall for each ɛ. The small values of ɛ correspond to the highest recall and lowest precision, so as ɛ decreases, the precision decreases with respect to the recall. This makes sense, since a large ɛ implies that we label more samples benign and so do not have as many false positives, and a small ɛ means that we label more samples malignant and so do not have as many false negatives.

Fig. 6: Precision-recall curve for various ɛ

From this curve, we see that the Bayesian network computed using Gaussian CPDs with all 30 features is clearly the preferable method, since for the same recall it has higher precision. For example, if we require a recall of at least 90% using this method, from the plot we can compute the desired threshold with the maximum precision: ɛ = 0.32 corresponds to 90.82% recall and 89.83% precision. If we require a higher recall of at least 95%, then we round up to malignant for every probability greater than a smaller threshold; the corresponding precision is 83.96% for a recall of 95.28%. Since the precision does not drop drastically with such a low threshold, this indicates another key feature of this method: it does not contain many uncertain probabilities around 0.5. Thus, many of the predicted probabilities lie near the extreme values of 0 and 1, unaffected by this rounding.

Fig. 7: ROC curve for various ɛ

The ROC curve above is another useful curve for measuring the comparative effectiveness of each method for classification. It plots the recall, that is, the true positive rate, versus 1 − specificity, that is, the false positive rate. It is evident again that the Bayesian network computed from the 30 features with Gaussian CPDs performs the best, since it has the steepest curve, representing a high true positive rate with a low false positive rate. The curves also show that all the methods form good separators, since a random classifier would be given by the line connecting (0,0) and (1,1).

Lastly, we consider a reward system, which assigns a penalty of 1 to a false negative and a penalty of λ, for 0 ≤ λ ≤ 1, to a false positive. We vary λ and compute the plots for the various methods. We define the following reward metric:

−(1/m_test) Σ_{i=1}^{m_test} [c_i (1 − p(c_i = 1 | õ_i))² + λ (1 − c_i) p(c_i = 1 | õ_i)²].    (13)

It is again clear that the Bayesian network from the Gaussian CPDs over all 30 features has the largest reward regardless of the value of λ. The plot also illustrates that the initial gap between Naive Bayes with 10 features and the Bayesian network from the Gaussian CPDs over 10 features is much smaller than the final gap, revealing that Naive Bayes has a smaller number of misclassified negatives than misclassified positives. On the contrary, the gap between the discrete Bayes net and Naive Bayes is large initially and small at λ = 1, indicating that the discrete Bayes net has a larger number of misclassified false negatives than false positives. This was explained above by the MLE discrete parameter learning and the lack of data for every bin. Note that for λ = 1, we recover the negative k-fold CV testing error from Table I.

Fig. 8: λ reward metric curve

IV. CONCLUSION

The benefits of returning a probability, rather than a classification, are clear, because it identifies the predictions with the highest uncertainties.
Physicians can utilize these probabilities to diagnose the disease only when the probability is within some confidence level, and to recommend further testing for probabilities around 0.5. Various rounding and reward techniques can be investigated to determine a threshold that achieves a specific recall or precision. It is clear from the results that Gaussians are good approximations for the conditional probability distributions of the continuous observed features. We see that utilizing structure learning can greatly improve upon the Bayesian structure corresponding to the Naive Bayes assumption, by reducing the dimensionality of the problem and the error metric. This study also shows that using the mean features alone is not enough, since the worst case and standard error measurements are also valuable, as displayed by the better performance of structure and parameter learning using all 30 features rather than just the first 10. It is also clear that a discrete model with MLE estimates does not generalize as well to test data containing values of the discretized variables that are not present in the training data. Future work consists of experimenting with different conditional probability distributions, such as in hybrid Bayesian networks, where the variables are a mix of discrete (the class c) and continuous (the observed features) variables. Potential distributions include logit and probit models. There are promising results in the area of applying decision making under uncertainty and machine learning to cancer diagnosis, for potential use in collaboration with established medical tests and to help avoid invasive diagnostic tests on patients. This is yet another example of a promising connection between the scientific computing and medical fields.

REFERENCES

[1] W.H. Wolberg, W.N. Street, and O.L. Mangasarian, "Machine learning techniques to diagnose breast cancer from fine-needle aspirates," Cancer Letters, vol. 77, 1994.
[2] O.L. Mangasarian, W.H. Wolberg, and W.N. Street, "Image analysis and machine learning applied to breast cancer diagnosis and prognosis," Analytical and Quantitative Cytology and Histology, vol. 17, 1995.
[3] W.H. Wolberg, W.N. Street, D.M. Heisey, and O.L. Mangasarian, "Computerized breast cancer diagnosis and prognosis from fine needle aspirates," Archives of Surgery, vol. 130, 1995.
[4] O.L. Mangasarian, W.H. Wolberg, W.N. Street, and D.M. Heisey, "Computer-derived nuclear features distinguish malignant from benign breast cytology," Human Pathology, vol. 26, 1995.
[5] UCI Machine Learning Repository, Center for Machine Learning and Intelligent Systems, "Breast cancer wisconsin (diagnostic) data set," https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic).
[6] K.P. Bennett and O.L. Mangasarian, "Robust linear programming discrimination of two linearly inseparable sets," Optimization Methods and Software, vol. 1, 1992.
[7] K.P. Bennett, "Decision tree construction via linear programming," Proceedings of the 4th Midwest Artificial Intelligence and Cognitive Science Society, 1992.
[8] D. Maddix, "Diagnosing malignant versus benign breast tumors via machine learning techniques in high dimensions," projects2014.html.
[9] M.J. Kochenderfer, Decision Making Under Uncertainty: Theory and Application. Cambridge, Massachusetts and London, England: MIT Press, 2015.


More information

Product Policy in Markets with Word-of-Mouth Communication. Technical Appendix

Product Policy in Markets with Word-of-Mouth Communication. Technical Appendix rodut oliy in Markets with Word-of-Mouth Communiation Tehnial Appendix August 05 Miro-Model for Inreasing Awareness In the paper, we make the assumption that awareness is inreasing in ustomer type. I.e.,

More information

Weighted K-Nearest Neighbor Revisited

Weighted K-Nearest Neighbor Revisited Weighted -Nearest Neighbor Revisited M. Biego University of Verona Verona, Italy Email: manuele.biego@univr.it M. Loog Delft University of Tehnology Delft, The Netherlands Email: m.loog@tudelft.nl Abstrat

More information

CS 687 Jana Kosecka. Uncertainty, Bayesian Networks Chapter 13, Russell and Norvig Chapter 14,

CS 687 Jana Kosecka. Uncertainty, Bayesian Networks Chapter 13, Russell and Norvig Chapter 14, CS 687 Jana Koseka Unertainty Bayesian Networks Chapter 13 Russell and Norvig Chapter 14 14.1-14.3 Outline Unertainty robability Syntax and Semantis Inferene Independene and Bayes' Rule Syntax Basi element:

More information

Feature Selection by Independent Component Analysis and Mutual Information Maximization in EEG Signal Classification

Feature Selection by Independent Component Analysis and Mutual Information Maximization in EEG Signal Classification Feature Seletion by Independent Component Analysis and Mutual Information Maximization in EEG Signal Classifiation Tian Lan, Deniz Erdogmus, Andre Adami, Mihael Pavel BME Department, Oregon Health & Siene

More information

Common Trends in European School Populations

Common Trends in European School Populations Common rends in European Shool Populations P. Sebastiani 1 (1) and M. Ramoni (2) (1) Department of Mathematis and Statistis, University of Massahusetts. (2) Children's Hospital Informatis Program, Harvard

More information

7 Max-Flow Problems. Business Computing and Operations Research 608

7 Max-Flow Problems. Business Computing and Operations Research 608 7 Max-Flow Problems Business Computing and Operations Researh 68 7. Max-Flow Problems In what follows, we onsider a somewhat modified problem onstellation Instead of osts of transmission, vetor now indiates

More information

eappendix for: SAS macro for causal mediation analysis with survival data

eappendix for: SAS macro for causal mediation analysis with survival data eappendix for: SAS maro for ausal mediation analysis with survival data Linda Valeri and Tyler J. VanderWeele 1 Causal effets under the ounterfatual framework and their estimators We let T a and M a denote

More information

Advanced Computational Fluid Dynamics AA215A Lecture 4

Advanced Computational Fluid Dynamics AA215A Lecture 4 Advaned Computational Fluid Dynamis AA5A Leture 4 Antony Jameson Winter Quarter,, Stanford, CA Abstrat Leture 4 overs analysis of the equations of gas dynamis Contents Analysis of the equations of gas

More information

QCLAS Sensor for Purity Monitoring in Medical Gas Supply Lines

QCLAS Sensor for Purity Monitoring in Medical Gas Supply Lines DOI.56/sensoren6/P3. QLAS Sensor for Purity Monitoring in Medial Gas Supply Lines Henrik Zimmermann, Mathias Wiese, Alessandro Ragnoni neoplas ontrol GmbH, Walther-Rathenau-Str. 49a, 7489 Greifswald, Germany

More information

A simple expression for radial distribution functions of pure fluids and mixtures

A simple expression for radial distribution functions of pure fluids and mixtures A simple expression for radial distribution funtions of pure fluids and mixtures Enrio Matteoli a) Istituto di Chimia Quantistia ed Energetia Moleolare, CNR, Via Risorgimento, 35, 56126 Pisa, Italy G.

More information

Probabilistic Graphical Models

Probabilistic Graphical Models Probabilisti Graphial Models 0-708 Undireted Graphial Models Eri Xing Leture, Ot 7, 2005 Reading: MJ-Chap. 2,4, and KF-hap5 Review: independene properties of DAGs Defn: let I l (G) be the set of loal independene

More information

The Effectiveness of the Linear Hull Effect

The Effectiveness of the Linear Hull Effect The Effetiveness of the Linear Hull Effet S. Murphy Tehnial Report RHUL MA 009 9 6 Otober 009 Department of Mathematis Royal Holloway, University of London Egham, Surrey TW0 0EX, England http://www.rhul.a.uk/mathematis/tehreports

More information

Lightpath routing for maximum reliability in optical mesh networks

Lightpath routing for maximum reliability in optical mesh networks Vol. 7, No. 5 / May 2008 / JOURNAL OF OPTICAL NETWORKING 449 Lightpath routing for maximum reliability in optial mesh networks Shengli Yuan, 1, * Saket Varma, 2 and Jason P. Jue 2 1 Department of Computer

More information

Convergence of reinforcement learning with general function approximators

Convergence of reinforcement learning with general function approximators Convergene of reinforement learning with general funtion approximators assilis A. Papavassiliou and Stuart Russell Computer Siene Division, U. of California, Berkeley, CA 94720-1776 fvassilis,russellg@s.berkeley.edu

More information

Stochastic Combinatorial Optimization with Risk Evdokia Nikolova

Stochastic Combinatorial Optimization with Risk Evdokia Nikolova Computer Siene and Artifiial Intelligene Laboratory Tehnial Report MIT-CSAIL-TR-2008-055 September 13, 2008 Stohasti Combinatorial Optimization with Risk Evdokia Nikolova massahusetts institute of tehnology,

More information

Lecture 7: Sampling/Projections for Least-squares Approximation, Cont. 7 Sampling/Projections for Least-squares Approximation, Cont.

Lecture 7: Sampling/Projections for Least-squares Approximation, Cont. 7 Sampling/Projections for Least-squares Approximation, Cont. Stat60/CS94: Randomized Algorithms for Matries and Data Leture 7-09/5/013 Leture 7: Sampling/Projetions for Least-squares Approximation, Cont. Leturer: Mihael Mahoney Sribe: Mihael Mahoney Warning: these

More information

A Functional Representation of Fuzzy Preferences

A Functional Representation of Fuzzy Preferences Theoretial Eonomis Letters, 017, 7, 13- http://wwwsirporg/journal/tel ISSN Online: 16-086 ISSN Print: 16-078 A Funtional Representation of Fuzzy Preferenes Susheng Wang Department of Eonomis, Hong Kong

More information

CSC2515 Winter 2015 Introduc3on to Machine Learning. Lecture 5: Clustering, mixture models, and EM

CSC2515 Winter 2015 Introduc3on to Machine Learning. Lecture 5: Clustering, mixture models, and EM CSC2515 Winter 2015 Introdu3on to Mahine Learning Leture 5: Clustering, mixture models, and EM All leture slides will be available as.pdf on the ourse website: http://www.s.toronto.edu/~urtasun/ourses/csc2515/

More information

Computer Science 786S - Statistical Methods in Natural Language Processing and Data Analysis Page 1

Computer Science 786S - Statistical Methods in Natural Language Processing and Data Analysis Page 1 Computer Siene 786S - Statistial Methods in Natural Language Proessing and Data Analysis Page 1 Hypothesis Testing A statistial hypothesis is a statement about the nature of the distribution of a random

More information

The Influences of Smooth Approximation Functions for SPTSVM

The Influences of Smooth Approximation Functions for SPTSVM The Influenes of Smooth Approximation Funtions for SPTSVM Xinxin Zhang Liaoheng University Shool of Mathematis Sienes Liaoheng, 5059 P.R. China ldzhangxin008@6.om Liya Fan Liaoheng University Shool of

More information

2 The Bayesian Perspective of Distributions Viewed as Information

2 The Bayesian Perspective of Distributions Viewed as Information A PRIMER ON BAYESIAN INFERENCE For the next few assignments, we are going to fous on the Bayesian way of thinking and learn how a Bayesian approahes the problem of statistial modeling and inferene. The

More information

A Unified View on Multi-class Support Vector Classification Supplement

A Unified View on Multi-class Support Vector Classification Supplement Journal of Mahine Learning Researh??) Submitted 7/15; Published?/?? A Unified View on Multi-lass Support Vetor Classifiation Supplement Ürün Doğan Mirosoft Researh Tobias Glasmahers Institut für Neuroinformatik

More information

Handling Uncertainty

Handling Uncertainty Handling Unertainty Unertain knowledge Typial example: Diagnosis. Name Toothahe Cavity Can we ertainly derive the diagnosti rule: if Toothahe=true then Cavity=true? The problem is that this rule isn t

More information

A Characterization of Wavelet Convergence in Sobolev Spaces

A Characterization of Wavelet Convergence in Sobolev Spaces A Charaterization of Wavelet Convergene in Sobolev Spaes Mark A. Kon 1 oston University Louise Arakelian Raphael Howard University Dediated to Prof. Robert Carroll on the oasion of his 70th birthday. Abstrat

More information

Optimization of Statistical Decisions for Age Replacement Problems via a New Pivotal Quantity Averaging Approach

Optimization of Statistical Decisions for Age Replacement Problems via a New Pivotal Quantity Averaging Approach Amerian Journal of heoretial and Applied tatistis 6; 5(-): -8 Published online January 7, 6 (http://www.sienepublishinggroup.om/j/ajtas) doi:.648/j.ajtas.s.65.4 IN: 36-8999 (Print); IN: 36-96 (Online)

More information

22.54 Neutron Interactions and Applications (Spring 2004) Chapter 6 (2/24/04) Energy Transfer Kernel F(E E')

22.54 Neutron Interactions and Applications (Spring 2004) Chapter 6 (2/24/04) Energy Transfer Kernel F(E E') 22.54 Neutron Interations and Appliations (Spring 2004) Chapter 6 (2/24/04) Energy Transfer Kernel F(E E') Referenes -- J. R. Lamarsh, Introdution to Nulear Reator Theory (Addison-Wesley, Reading, 1966),

More information

A Queueing Model for Call Blending in Call Centers

A Queueing Model for Call Blending in Call Centers A Queueing Model for Call Blending in Call Centers Sandjai Bhulai and Ger Koole Vrije Universiteit Amsterdam Faulty of Sienes De Boelelaan 1081a 1081 HV Amsterdam The Netherlands E-mail: {sbhulai, koole}@s.vu.nl

More information

Random Effects Logistic Regression Model for Ranking Efficiency in Data Envelopment Analysis

Random Effects Logistic Regression Model for Ranking Efficiency in Data Envelopment Analysis Random Effets Logisti Regression Model for Ranking Effiieny in Data Envelopment Analysis So Young Sohn Department of Information & Industrial Systems Engineering Yonsei University, 34 Shinhon-dong, Seoul

More information

11.1 Polynomial Least-Squares Curve Fit

11.1 Polynomial Least-Squares Curve Fit 11.1 Polynomial Least-Squares Curve Fit A. Purpose This subroutine determines a univariate polynomial that fits a given disrete set of data in the sense of minimizing the weighted sum of squares of residuals.

More information

BAYES CLASSIFIER. Ivan Michael Siregar APLYSIT IT SOLUTION CENTER. Jl. Ir. H. Djuanda 109 Bandung

BAYES CLASSIFIER. Ivan Michael Siregar APLYSIT IT SOLUTION CENTER. Jl. Ir. H. Djuanda 109 Bandung BAYES CLASSIFIER www.aplysit.om www.ivan.siregar.biz ALYSIT IT SOLUTION CENTER Jl. Ir. H. Duanda 109 Bandung Ivan Mihael Siregar ivan.siregar@gmail.om Data Mining 2010 Bayesian Method Our fous this leture

More information

A NETWORK SIMPLEX ALGORITHM FOR THE MINIMUM COST-BENEFIT NETWORK FLOW PROBLEM

A NETWORK SIMPLEX ALGORITHM FOR THE MINIMUM COST-BENEFIT NETWORK FLOW PROBLEM NETWORK SIMPLEX LGORITHM FOR THE MINIMUM COST-BENEFIT NETWORK FLOW PROBLEM Cen Çalışan, Utah Valley University, 800 W. University Parway, Orem, UT 84058, 801-863-6487, en.alisan@uvu.edu BSTRCT The minimum

More information

On the Bit Error Probability of Noisy Channel Networks With Intermediate Node Encoding I. INTRODUCTION

On the Bit Error Probability of Noisy Channel Networks With Intermediate Node Encoding I. INTRODUCTION 5188 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 11, NOVEMBER 2008 [8] A. P. Dempster, N. M. Laird, and D. B. Rubin, Maximum likelihood estimation from inomplete data via the EM algorithm, J.

More information

UPPER-TRUNCATED POWER LAW DISTRIBUTIONS

UPPER-TRUNCATED POWER LAW DISTRIBUTIONS Fratals, Vol. 9, No. (00) 09 World Sientifi Publishing Company UPPER-TRUNCATED POWER LAW DISTRIBUTIONS STEPHEN M. BURROUGHS and SARAH F. TEBBENS College of Marine Siene, University of South Florida, St.

More information

CONDITIONAL CONFIDENCE INTERVAL FOR THE SCALE PARAMETER OF A WEIBULL DISTRIBUTION. Smail Mahdi

CONDITIONAL CONFIDENCE INTERVAL FOR THE SCALE PARAMETER OF A WEIBULL DISTRIBUTION. Smail Mahdi Serdia Math. J. 30 (2004), 55 70 CONDITIONAL CONFIDENCE INTERVAL FOR THE SCALE PARAMETER OF A WEIBULL DISTRIBUTION Smail Mahdi Communiated by N. M. Yanev Abstrat. A two-sided onditional onfidene interval

More information

Error Bounds for Context Reduction and Feature Omission

Error Bounds for Context Reduction and Feature Omission Error Bounds for Context Redution and Feature Omission Eugen Bek, Ralf Shlüter, Hermann Ney,2 Human Language Tehnology and Pattern Reognition, Computer Siene Department RWTH Aahen University, Ahornstr.

More information

Assessing the Performance of a BCI: A Task-Oriented Approach

Assessing the Performance of a BCI: A Task-Oriented Approach Assessing the Performane of a BCI: A Task-Oriented Approah B. Dal Seno, L. Mainardi 2, M. Matteui Department of Eletronis and Information, IIT-Unit, Politenio di Milano, Italy 2 Department of Bioengineering,

More information

Improved likelihood inference for discrete data

Improved likelihood inference for discrete data Improved likelihood inferene for disrete data A. C. Davison Institute of Mathematis, Eole Polytehnique Fédérale de Lausanne, Station 8, 1015 Lausanne, Switzerland. D. A. S. Fraser and N. Reid Department

More information

An iterative least-square method suitable for solving large sparse matrices

An iterative least-square method suitable for solving large sparse matrices An iteratie least-square method suitable for soling large sparse matries By I. M. Khabaza The purpose of this paper is to report on the results of numerial experiments with an iteratie least-square method

More information

Development of Fuzzy Extreme Value Theory. Populations

Development of Fuzzy Extreme Value Theory. Populations Applied Mathematial Sienes, Vol. 6, 0, no. 7, 58 5834 Development of Fuzzy Extreme Value Theory Control Charts Using α -uts for Sewed Populations Rungsarit Intaramo Department of Mathematis, Faulty of

More information

18.05 Problem Set 6, Spring 2014 Solutions

18.05 Problem Set 6, Spring 2014 Solutions 8.5 Problem Set 6, Spring 4 Solutions Problem. pts.) a) Throughout this problem we will let x be the data of 4 heads out of 5 tosses. We have 4/5 =.56. Computing the likelihoods: 5 5 px H )=.5) 5 px H

More information

estimated pulse sequene we produe estimates of the delay time struture of ripple-red events. Formulation of the Problem We model the k th seismi trae

estimated pulse sequene we produe estimates of the delay time struture of ripple-red events. Formulation of the Problem We model the k th seismi trae Bayesian Deonvolution of Seismi Array Data Using the Gibbs Sampler Eri A. Suess Robert Shumway, University of California, Davis Rong Chen, Texas A&M University Contat: Eri A. Suess, Division of Statistis,

More information

An Adaptive Optimization Approach to Active Cancellation of Repeated Transient Vibration Disturbances

An Adaptive Optimization Approach to Active Cancellation of Repeated Transient Vibration Disturbances An aptive Optimization Approah to Ative Canellation of Repeated Transient Vibration Disturbanes David L. Bowen RH Lyon Corp / Aenteh, 33 Moulton St., Cambridge, MA 138, U.S.A., owen@lyonorp.om J. Gregory

More information

Singular Event Detection

Singular Event Detection Singular Event Detetion Rafael S. Garía Eletrial Engineering University of Puerto Rio at Mayagüez Rafael.Garia@ee.uprm.edu Faulty Mentor: S. Shankar Sastry Researh Supervisor: Jonathan Sprinkle Graduate

More information

The Second Postulate of Euclid and the Hyperbolic Geometry

The Second Postulate of Euclid and the Hyperbolic Geometry 1 The Seond Postulate of Eulid and the Hyperboli Geometry Yuriy N. Zayko Department of Applied Informatis, Faulty of Publi Administration, Russian Presidential Aademy of National Eonomy and Publi Administration,

More information

arxiv:math/ v4 [math.ca] 29 Jul 2006

arxiv:math/ v4 [math.ca] 29 Jul 2006 arxiv:math/0109v4 [math.ca] 9 Jul 006 Contiguous relations of hypergeometri series Raimundas Vidūnas University of Amsterdam Abstrat The 15 Gauss ontiguous relations for F 1 hypergeometri series imply

More information

Transformation to approximate independence for locally stationary Gaussian processes

Transformation to approximate independence for locally stationary Gaussian processes ransformation to approximate independene for loally stationary Gaussian proesses Joseph Guinness, Mihael L. Stein We provide new approximations for the likelihood of a time series under the loally stationary

More information

Control Theory association of mathematics and engineering

Control Theory association of mathematics and engineering Control Theory assoiation of mathematis and engineering Wojieh Mitkowski Krzysztof Oprzedkiewiz Department of Automatis AGH Univ. of Siene & Tehnology, Craow, Poland, Abstrat In this paper a methodology

More information

CMSC 451: Lecture 9 Greedy Approximation: Set Cover Thursday, Sep 28, 2017

CMSC 451: Lecture 9 Greedy Approximation: Set Cover Thursday, Sep 28, 2017 CMSC 451: Leture 9 Greedy Approximation: Set Cover Thursday, Sep 28, 2017 Reading: Chapt 11 of KT and Set 54 of DPV Set Cover: An important lass of optimization problems involves overing a ertain domain,

More information

Verka Prolović Chair of Civil Engineering Geotechnics, Faculty of Civil Engineering and Architecture, Niš, R. Serbia

Verka Prolović Chair of Civil Engineering Geotechnics, Faculty of Civil Engineering and Architecture, Niš, R. Serbia 3 r d International Conferene on New Developments in Soil Mehanis and Geotehnial Engineering, 8-30 June 01, Near East University, Niosia, North Cyprus Values of of partial fators for for EC EC 7 7 slope

More information

Case I: 2 users In case of 2 users, the probability of error for user 1 was earlier derived to be 2 A1

Case I: 2 users In case of 2 users, the probability of error for user 1 was earlier derived to be 2 A1 MUTLIUSER DETECTION (Letures 9 and 0) 6:33:546 Wireless Communiations Tehnologies Instrutor: Dr. Narayan Mandayam Summary By Shweta Shrivastava (shwetash@winlab.rutgers.edu) bstrat This artile ontinues

More information

Developing Excel Macros for Solving Heat Diffusion Problems

Developing Excel Macros for Solving Heat Diffusion Problems Session 50 Developing Exel Maros for Solving Heat Diffusion Problems N. N. Sarker and M. A. Ketkar Department of Engineering Tehnology Prairie View A&M University Prairie View, TX 77446 Abstrat This paper

More information

After the completion of this section the student should recall

After the completion of this section the student should recall Chapter I MTH FUNDMENTLS I. Sets, Numbers, Coordinates, Funtions ugust 30, 08 3 I. SETS, NUMERS, COORDINTES, FUNCTIONS Objetives: fter the ompletion of this setion the student should reall - the definition

More information

max min z i i=1 x j k s.t. j=1 x j j:i T j

max min z i i=1 x j k s.t. j=1 x j j:i T j AM 221: Advaned Optimization Spring 2016 Prof. Yaron Singer Leture 22 April 18th 1 Overview In this leture, we will study the pipage rounding tehnique whih is a deterministi rounding proedure that an be

More information

arxiv:cond-mat/ v1 [cond-mat.stat-mech] 16 Aug 2004

arxiv:cond-mat/ v1 [cond-mat.stat-mech] 16 Aug 2004 Computational omplexity and fundamental limitations to fermioni quantum Monte Carlo simulations arxiv:ond-mat/0408370v1 [ond-mat.stat-meh] 16 Aug 2004 Matthias Troyer, 1 Uwe-Jens Wiese 2 1 Theoretishe

More information

Packing Plane Spanning Trees into a Point Set

Packing Plane Spanning Trees into a Point Set Paking Plane Spanning Trees into a Point Set Ahmad Biniaz Alfredo Garía Abstrat Let P be a set of n points in the plane in general position. We show that at least n/3 plane spanning trees an be paked into

More information

UNCERTAINTY RELATIONS AS A CONSEQUENCE OF THE LORENTZ TRANSFORMATIONS. V. N. Matveev and O. V. Matvejev

UNCERTAINTY RELATIONS AS A CONSEQUENCE OF THE LORENTZ TRANSFORMATIONS. V. N. Matveev and O. V. Matvejev UNCERTAINTY RELATIONS AS A CONSEQUENCE OF THE LORENTZ TRANSFORMATIONS V. N. Matveev and O. V. Matvejev Joint-Stok Company Sinerta Savanoriu pr., 159, Vilnius, LT-315, Lithuania E-mail: matwad@mail.ru Abstrat

More information

Wavetech, LLC. Ultrafast Pulses and GVD. John O Hara Created: Dec. 6, 2013

Wavetech, LLC. Ultrafast Pulses and GVD. John O Hara Created: Dec. 6, 2013 Ultrafast Pulses and GVD John O Hara Created: De. 6, 3 Introdution This doument overs the basi onepts of group veloity dispersion (GVD) and ultrafast pulse propagation in an optial fiber. Neessarily, it

More information

Inter-fibre contacts in random fibrous materials: experimental verification of theoretical dependence on porosity and fibre width

Inter-fibre contacts in random fibrous materials: experimental verification of theoretical dependence on porosity and fibre width J Mater Si (2006) 41:8377 8381 DOI 10.1007/s10853-006-0889-7 LETTER Inter-fibre ontats in random fibrous materials: experimental verifiation of theoretial dependene on porosity and fibre width W. J. Bathelor

More information

V. Interacting Particles

V. Interacting Particles V. Interating Partiles V.A The Cumulant Expansion The examples studied in the previous setion involve non-interating partiles. It is preisely the lak of interations that renders these problems exatly solvable.

More information

Tight bounds for selfish and greedy load balancing

Tight bounds for selfish and greedy load balancing Tight bounds for selfish and greedy load balaning Ioannis Caragiannis Mihele Flammini Christos Kaklamanis Panagiotis Kanellopoulos Lua Mosardelli Deember, 009 Abstrat We study the load balaning problem

More information

Lecture 3 - Lorentz Transformations

Lecture 3 - Lorentz Transformations Leture - Lorentz Transformations A Puzzle... Example A ruler is positioned perpendiular to a wall. A stik of length L flies by at speed v. It travels in front of the ruler, so that it obsures part of the

More information

9 Geophysics and Radio-Astronomy: VLBI VeryLongBaseInterferometry

9 Geophysics and Radio-Astronomy: VLBI VeryLongBaseInterferometry 9 Geophysis and Radio-Astronomy: VLBI VeryLongBaseInterferometry VLBI is an interferometry tehnique used in radio astronomy, in whih two or more signals, oming from the same astronomial objet, are reeived

More information

THEORETICAL ANALYSIS OF EMPIRICAL RELATIONSHIPS FOR PARETO- DISTRIBUTED SCIENTOMETRIC DATA Vladimir Atanassov, Ekaterina Detcheva

THEORETICAL ANALYSIS OF EMPIRICAL RELATIONSHIPS FOR PARETO- DISTRIBUTED SCIENTOMETRIC DATA Vladimir Atanassov, Ekaterina Detcheva International Journal "Information Models and Analyses" Vol.1 / 2012 271 THEORETICAL ANALYSIS OF EMPIRICAL RELATIONSHIPS FOR PARETO- DISTRIBUTED SCIENTOMETRIC DATA Vladimir Atanassov, Ekaterina Detheva

More information

Multicomponent analysis on polluted waters by means of an electronic tongue

Multicomponent analysis on polluted waters by means of an electronic tongue Sensors and Atuators B 44 (1997) 423 428 Multiomponent analysis on polluted waters by means of an eletroni tongue C. Di Natale a, *, A. Maagnano a, F. Davide a, A. D Amio a, A. Legin b, Y. Vlasov b, A.

More information

Factorized Asymptotic Bayesian Inference for Mixture Modeling

Factorized Asymptotic Bayesian Inference for Mixture Modeling Fatorized Asymptoti Bayesian Inferene for Mixture Modeling Ryohei Fujimaki NEC Laboratories Ameria Department of Media Analytis rfujimaki@sv.ne-labs.om Satoshi Morinaga NEC Corporation Information and

More information

Peer Transparency in Teams: Does it Help or Hinder Incentives?

Peer Transparency in Teams: Does it Help or Hinder Incentives? Peer Transpareny in Teams: Does it Help or Hinder Inentives? Parimal Kanti Bag Nona Pepito June 19, 010 Abstrat In a joint projet involving two players who an ontribute in one or both rounds of a two-round

More information

Word of Mass: The Relationship between Mass Media and Word-of-Mouth

Word of Mass: The Relationship between Mass Media and Word-of-Mouth Word of Mass: The Relationship between Mass Media and Word-of-Mouth Roman Chuhay Preliminary version Marh 6, 015 Abstrat This paper studies the optimal priing and advertising strategies of a firm in the

More information

Modeling Probabilistic Measurement Correlations for Problem Determination in Large-Scale Distributed Systems

Modeling Probabilistic Measurement Correlations for Problem Determination in Large-Scale Distributed Systems 009 9th IEEE International Conferene on Distributed Computing Systems Modeling Probabilisti Measurement Correlations for Problem Determination in Large-Sale Distributed Systems Jing Gao Guofei Jiang Haifeng

More information

Estimation for Unknown Parameters of the Extended Burr Type-XII Distribution Based on Type-I Hybrid Progressive Censoring Scheme

Estimation for Unknown Parameters of the Extended Burr Type-XII Distribution Based on Type-I Hybrid Progressive Censoring Scheme Estimation for Unknown Parameters of the Extended Burr Type-XII Distribution Based on Type-I Hybrid Progressive Censoring Sheme M.M. Amein Department of Mathematis Statistis, Faulty of Siene, Taif University,

More information

A Spatiotemporal Approach to Passive Sound Source Localization

A Spatiotemporal Approach to Passive Sound Source Localization A Spatiotemporal Approah Passive Sound Soure Loalization Pasi Pertilä, Mikko Parviainen, Teemu Korhonen and Ari Visa Institute of Signal Proessing Tampere University of Tehnology, P.O.Box 553, FIN-330,

More information

THE EQUATION CONSIDERING CONCRETE STRENGTH AND STIRRUPS FOR DIAGONAL COMPRESSIVE CAPACITY OF RC BEAM

THE EQUATION CONSIDERING CONCRETE STRENGTH AND STIRRUPS FOR DIAGONAL COMPRESSIVE CAPACITY OF RC BEAM - Tehnial Paper - THE EQUATION CONSIDERING CONCRETE STRENGTH AND STIRRUPS FOR DIAGONAL COMPRESSIE CAPACITY OF RC BEAM Patarapol TANTIPIDOK *, Koji MATSUMOTO *, Ken WATANABE *3 and Junihiro NIWA *4 ABSTRACT

More information

ONLINE APPENDICES for Cost-Effective Quality Assurance in Crowd Labeling

ONLINE APPENDICES for Cost-Effective Quality Assurance in Crowd Labeling ONLINE APPENDICES for Cost-Effetive Quality Assurane in Crowd Labeling Jing Wang Shool of Business and Management Hong Kong University of Siene and Tehnology Clear Water Bay Kowloon Hong Kong jwang@usthk

More information

Maximum Likelihood Multipath Estimation in Comparison with Conventional Delay Lock Loops

Maximum Likelihood Multipath Estimation in Comparison with Conventional Delay Lock Loops Maximum Likelihood Multipath Estimation in Comparison with Conventional Delay Lok Loops Mihael Lentmaier and Bernhard Krah, German Aerospae Center (DLR) BIOGRAPY Mihael Lentmaier reeived the Dipl.-Ing.

More information

Reliability-Based Approach for the Determination of the Required Compressive Strength of Concrete in Mix Design

Reliability-Based Approach for the Determination of the Required Compressive Strength of Concrete in Mix Design Reliability-Based Approah for the Determination of the Required Compressive Strength of Conrete in Mix Design Nader M Okasha To ite this version: Nader M Okasha. Reliability-Based Approah for the Determination

More information

Advances in Radio Science

Advances in Radio Science Advanes in adio Siene 2003) 1: 99 104 Copernius GmbH 2003 Advanes in adio Siene A hybrid method ombining the FDTD and a time domain boundary-integral equation marhing-on-in-time algorithm A Beker and V

More information

Robust Flight Control Design for a Turn Coordination System with Parameter Uncertainties

Robust Flight Control Design for a Turn Coordination System with Parameter Uncertainties Amerian Journal of Applied Sienes 4 (7): 496-501, 007 ISSN 1546-939 007 Siene Publiations Robust Flight ontrol Design for a urn oordination System with Parameter Unertainties 1 Ari Legowo and Hiroshi Okubo

More information

Counting Idempotent Relations

Counting Idempotent Relations Counting Idempotent Relations Beriht-Nr. 2008-15 Florian Kammüller ISSN 1436-9915 2 Abstrat This artile introdues and motivates idempotent relations. It summarizes haraterizations of idempotents and their

More information

the following action R of T on T n+1 : for each θ T, R θ : T n+1 T n+1 is defined by stated, we assume that all the curves in this paper are defined

the following action R of T on T n+1 : for each θ T, R θ : T n+1 T n+1 is defined by stated, we assume that all the curves in this paper are defined How should a snake turn on ie: A ase study of the asymptoti isoholonomi problem Jianghai Hu, Slobodan N. Simić, and Shankar Sastry Department of Eletrial Engineering and Computer Sienes University of California

More information

Robust Recovery of Signals From a Structured Union of Subspaces

Robust Recovery of Signals From a Structured Union of Subspaces Robust Reovery of Signals From a Strutured Union of Subspaes 1 Yonina C. Eldar, Senior Member, IEEE and Moshe Mishali, Student Member, IEEE arxiv:87.4581v2 [nlin.cg] 3 Mar 29 Abstrat Traditional sampling

More information

Improved likelihood inference for discrete data

Improved likelihood inference for discrete data J. R. Statist. So. B (2006) 68, Part 3, pp. 495 508 Improved likelihood inferene for disrete data A. C. Davison Eole Polytehnique Fédérale de Lausanne, Switzerland and D. A. S. Fraser and N. Reid University

More information

A new method of measuring similarity between two neutrosophic soft sets and its application in pattern recognition problems

A new method of measuring similarity between two neutrosophic soft sets and its application in pattern recognition problems Neutrosophi Sets and Systems, Vol. 8, 05 63 A new method of measuring similarity between two neutrosophi soft sets and its appliation in pattern reognition problems Anjan Mukherjee, Sadhan Sarkar, Department

More information

7.1 Roots of a Polynomial

7.1 Roots of a Polynomial 7.1 Roots of a Polynomial A. Purpose Given the oeffiients a i of a polynomial of degree n = NDEG > 0, a 1 z n + a 2 z n 1 +... + a n z + a n+1 with a 1 0, this subroutine omputes the NDEG roots of the

More information