STOPPING SIMULATED PATHS EARLY


Proceedings of the 2001 Winter Simulation Conference
B. A. Peters, J. S. Smith, D. J. Medeiros, and M. W. Rohrer, eds.

Paul Glasserman
Graduate School of Business
Columbia University
New York, NY 10027, U.S.A.

Jeremy Staum
School of Operations Research and Industrial Engineering
Cornell University
Ithaca, NY 14853, U.S.A.

ABSTRACT

We provide results about stopping simulation paths early as a variance reduction technique, adding to our earlier work on this topic. The problem of pricing a financial instrument with cashflows at multiple times, such as a mortgage-backed security, motivates this approach, which is more broadly applicable to problems in which early steps are more informative than later steps of a path. We prove a limit theorem that demonstrates that this relative informativeness of simulation steps, not the number of steps, determines the effectiveness of the method. Next we consider an extension of the idea of stopping simulation paths early, showing how early stopping can be random and depend on the state a path has reached, yet still produce an unbiased estimator. We illustrate the potential effectiveness of such estimators, and describe directions for future research into their design.

1 INTRODUCTION

In a standard simulation for a purpose such as pricing a financial derivative, every simulated path reaches each of a fixed number of time steps. We consider simulations that estimate an expected total reward, for concreteness focusing on a sum of discounted cashflows that a derivative security pays. When discounted cashflows at early steps are more important than those at later steps, devoting equal computational resources to simulation at early steps and later steps seems to be a suboptimal allocation of resources. An example is a mortgage-backed security, whose monthly cashflows are the sum of all payments made on a pool of mortgages.
These payments are nominally constant, but the value of discounted cashflows tends to decline because of the time value of money and because some homeowners prepay their mortgages, thus exiting the pool. In previous work, we showed how to find a policy of early stopping, with some paths scheduled to terminate before the final time step, that minimizes the variance of an estimator with a fixed computational budget (Glasserman and Staum 2001b). In Section 2, we review the formalism and main results of the previous paper, including the use of the method for some problems to which it is seemingly inapplicable, such as those with only a terminal cashflow. Section 3 contains a new theorem, which proves that the variance reduction obtained by early stopping depends not on the number of time steps, but on the shape of the covariance matrix of discounted cashflows. An extension of early stopping occupies Section 4: here we introduce a method where the decision to stop is randomized and depends on the current state, rather than being planned in advance. We then suggest directions for future research into its effective use.

2 REVIEW OF EARLY STOPPING

This section presents without proof a result of Glasserman and Staum (2001b). Motivated by the problem of pricing a mortgage-backed security, we consider a simulation to estimate the expectation of a sum X = X_1 + ... + X_m, interpreting the terms X_1, ..., X_m as discounted cashflows. When some paths stop early, there are n_k paths that reach step k, producing the observations X_k^1, ..., X_k^{n_k}. The numbers of paths reaching each step must satisfy a monotonicity constraint: n_k ≥ n_{k+1}. Let the cost of step k be c_k and the total computational budget be C. The budget constraint is Σ_{k=1}^m c_k n_k ≤ C. The obvious estimator of the mean μ = E[X] is now

μ̂ = Σ_{k=1}^m (1/n_k) Σ_{i=1}^{n_k} X_k^i.    (1)

We want to minimize over all feasible choices of n_1, ..., n_m the variance of μ̂, which is Var[μ̂] = Σ_{k=1}^m v_k/n_k, where v_k = Var[X_k] + 2 Cov(X_k, Σ_{j=k+1}^m X_j).
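As a concrete illustration, the Python sketch below implements the estimator μ̂ with nested paths. The cashflow model (a discounted Gaussian random walk with zero-mean cashflows, so μ = 0) and the allocation n_1 ≥ ... ≥ n_m are my own arbitrary choices for illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 5                                   # number of time steps
n = [4000, 3000, 2000, 1500, 1000]      # nonincreasing allocation n_1 >= ... >= n_m

# Toy model (illustrative assumption): S_k is a Gaussian random walk and the
# discounted cashflow at step k is X_k = 0.9**k * S_k, so mu = E[X] = 0.
steps = rng.standard_normal((n[0], m))  # one row per path
S = np.cumsum(steps, axis=1)            # S_1, ..., S_m for every path
X = 0.9 ** np.arange(1, m + 1) * S      # discounted cashflows X_1, ..., X_m

# Path i reaches step k exactly when i < n_k, so only the first n_k paths
# contribute an observation of X_k.  This is the estimator mu-hat above:
mu_hat = sum(X[:n[k], k].mean() for k in range(m))
print(mu_hat)   # close to mu = 0
```

Because the paths are nested (each surviving path contributes to every step it reaches), the variance of this estimator is exactly Σ_k v_k/n_k with v_k as defined above.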

The variance components v_1, ..., v_m on which the solution is based are typically unknown, but they can be estimated from a set of initial simulated paths, and the optimal resource allocation applied to the rest of the simulation. The resource allocation problem is

min Σ_{k=1}^m v_k/n_k
s.t. Σ_{k=1}^m c_k n_k ≤ C    (2)
and n_k ≥ n_{k+1}, k = 1, ..., m−1.

The solution to problem (2) involves the convex hull of a graph determined by the costs c_k and the variance components v_k. Define the partial sums

V_k = Σ_{j=1}^k v_j and C_k = Σ_{j=1}^k c_j

and the graph {(C_k, V_k): k = 0, 1, ..., m}. Let the function V̄ be the upper convex hull of this graph, and let V̄_k denote V̄(C_k). The solution is based on the slopes u_k = v̄_k/c_k, where v̄_k is the increment V̄_k − V̄_{k−1}. Here we restate Theorem 1 of Glasserman and Staum (2001b).

Theorem 1. The solution to the resource allocation problem (2) is

n_k = √(u_k)/ν, where ν = (1/C) Σ_{j=1}^m c_j √(u_j),

and the ratio of optimal variance to standard variance is

R = Σ_i Σ_j v̄_i c_j √(u_j/u_i) / Σ_i Σ_j v_i c_j.

We go on to describe two methods that can enhance the effectiveness of early stopping and make it more broadly applicable. The first involves the statistical perspective on missing data. Viewing the simulation as an experiment, the data table has many missing observations X_k^i when path i stops before reaching step k. The theory of missing data leads to estimators which are superior to μ̂ because they take advantage of dependence between the variables X_1, ..., X_m and use information about the more frequently observed variables to learn about those less frequently observed.

The other approach is to study new random variables X̃_1, ..., X̃_m that satisfy the following two properties. At step k of the simulation, X̃_k can be computed. The sum X̃_1 + ... + X̃_m = X_1 + ... + X_m = X. The estimator based on the new random variables has the same expectation μ, but different variance components ṽ_1, ..., ṽ_m, which may render early stopping more effective. The optimal choice of X̃_1, ..., X̃_m would satisfy Σ_{j=1}^k X̃_j = E[X | F_k], but of course this conditional expectation is unknown. However, there are many problems for which we have a decent approximation to the conditional expectation E[X | F_k], and this enables early stopping to provide more variance reduction. It is this perspective to which we will return in Section 4 as we develop methods of random early stopping.

3 CONTINUOUS LIMIT

This section shows that the shape of the covariance matrix of discounted cashflows determines the solution and effectiveness of the resource allocation problem (2). First define a sequence of resource allocation problems with convergent shape as follows. Let v be a positive, continuous function on [0, 1], and let V(t) = ∫_0^t v(s) ds. The sequence of problems is indexed by size m: in the mth problem, C is the budget and the graph is the point set {(C_k^(m), V_k^(m)): k = 0, 1, ..., m}, where

C_k^(m) = kC/m and V_k^(m) = (1/m) Σ_{j=1}^k v(j/m).

The mth problem has variance components v_k^(m) = v(k/m)/m and equal costs c_k^(m) = C/m. Identifying the abscissa with the cost fraction t = C_k^(m)/C = k/m, we write V^(m) for the piecewise-linear function through these points, so that V^(m)(k/m) = V_k^(m), and we set v^(m)(t) = v(⌈mt⌉/m). Let V̄ be the upper convex hull of V and V̄^(m) be the upper convex hull of V^(m). We will prove in the lemmas in the Appendix that this sequence of problems has convergent shape, as described by V, and that the upper convex hulls V̄^(m) and their slopes v̄^(m) also converge respectively to V̄ and its derivative v̄. Then Theorem 2 asserts that this sequence of problems has solution and effectiveness that also converge. To make this precise, we define

n^(m)(t) = √(v̄^(m)(t)) / ∫_0^1 √(v̄^(m)(s)) ds    (3)
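Theorem 1 can be implemented directly. The sketch below (with made-up costs and variance components, not from the paper) builds the graph {(C_k, V_k)}, takes its upper convex hull, and allocates n_k proportionally to √(u_k); in practice the fractional n_k would be rounded to integers.

```python
import numpy as np

def optimal_allocation(v, c, C):
    """Allocation of Theorem 1: n_k proportional to sqrt(u_k), u_k = vbar_k/c_k,
    where vbar_k are increments of the upper convex hull of {(C_k, V_k)}."""
    Cs = np.concatenate([[0.0], np.cumsum(c)])   # C_0, ..., C_m
    Vs = np.concatenate([[0.0], np.cumsum(v)])   # V_0, ..., V_m
    hull = [0]                                   # indices of hull vertices
    for i in range(1, len(Cs)):
        # pop the last vertex while it lies on or below the chord to point i
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            cross = (Cs[a] - Cs[o]) * (Vs[i] - Vs[o]) - (Vs[a] - Vs[o]) * (Cs[i] - Cs[o])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    Vbar = np.interp(Cs, Cs[hull], Vs[hull])     # hull evaluated at each C_k
    u = np.diff(Vbar) / c                        # slopes u_k = vbar_k / c_k
    return C * np.sqrt(u) / np.sum(c * np.sqrt(u))

v = np.array([5.0, 3.0, 3.5, 1.0, 0.5])   # illustrative variance components
c = np.array([1.0, 1.0, 1.0, 1.0, 1.0])   # unit cost per step
n = optimal_allocation(v, c, C=100.0)      # n_2 = n_3: the hull bridges steps 2 and 3
```

Because v_2 < v_3 here, the hull averages their slopes, so the allocation satisfies the monotonicity constraint automatically and beats the equal allocation n_k = C/Σc_j on the objective Σ v_k/n_k.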

and

F^(m) = ∫_0^1 √(v̄^(m)(s)) ds ∫_0^1 (v^(m)(s)/√(v̄^(m)(s))) ds.    (4)

These are the quantities of interest.

Lemma 1. The mth problem of form (2) has solution given by n_k^(m) = n^(m)(k/m), and its optimal variance objective is F^(m).

Proof. Because V^(m) is continuous and piecewise linear, so is its upper convex hull V̄^(m), and v̄^(m) is piecewise constant. We have v̄^(m)(t) = m v̄_k^(m) for t ∈ ((k−1)/m, k/m], where v̄_k^(m) = V̄^(m)(k/m) − V̄^(m)((k−1)/m), just as v^(m)(t) = m v_k^(m). By Theorem 1, the solution to the mth problem is

n_k^(m) = C √(u_k^(m)) / Σ_j c_j^(m) √(u_j^(m)) = m √(v̄_k^(m)) / Σ_j √(v̄_j^(m)).

Because v̄^(m) is piecewise constant, n_k^(m) is indeed equal to n^(m)(t) as defined in equation (3) for t ∈ ((k−1)/m, k/m]. The variance objective of the mth problem is

Σ_k v_k^(m)/n_k^(m) = (1/m) (Σ_j √(v̄_j^(m))) (Σ_k v_k^(m)/√(v̄_k^(m))),

and again because v̄^(m) is piecewise constant, this equals F^(m) as defined in equation (4). □

Also define n(t) and F as in equations (3) and (4), but without the superscript (m). Now we can state the main theorem, whose proof, involving many lemmas, is deferred to the Appendix.

Theorem 2. For t ∈ (0, 1), lim_{m→∞} n^(m)(t) = n(t); also lim_{m→∞} F^(m) = F.

With no early stopping, the variance objective of the mth problem would be V_m^(m), which is the m-step Riemann approximation to the integral ∫_0^1 v(t) dt; these converge because we assumed v is continuous on the compact set [0, 1]. Therefore the convergence of the optimal variance objectives F^(m) to F is equivalent to convergence of the variance reduction ratios F^(m)/V_m^(m). The next theorem states that these limits of solutions to discrete optimization problems are also the solutions to a continuous optimization problem that is a limit of the discrete problems. Its proof is similar to that of the discrete result, Theorem 1 of Glasserman and Staum (2001b), but simpler and more elegant.

Theorem 3. If v̄ is continuously differentiable, then the limits n and F are the solution and objective of the calculus of variations problem

min ∫_0^1 (v(t)/n(t)) dt
s.t. ∫_0^1 n(t) dt ≤ 1

and n nonincreasing and continuously differentiable on (0, 1).

Proof. Because n is differentiable, the monotonicity constraint is n′(t) ≤ 0, so the Lagrangian integrand function is v(t)/n(t) + νn(t) + λ(t)n′(t). The Euler-Lagrange equation is then −v(t)/n(t)² + ν − λ′(t) = 0. Complementary slackness is λ(t)n′(t) = 0. We will show that

ν = (∫_0^1 √(v̄(t)) dt)², n(t) = √(v̄(t))/√ν, λ(t) = (V̄(t) − V(t))/n(t)²

satisfy these conditions as well as the constraints. Monotonicity holds because the slope v̄ of the upper convex hull V̄ must be nonincreasing. Because v is positive, so is v̄, and as v̄ is continuously differentiable, n must also be continuously differentiable on (0, 1). The budget constraint ∫_0^1 n(t) dt ≤ 1 is satisfied because the numerator of n(t) integrates to √ν. As for complementary slackness, where V̄(t) = V(t), λ(t) = 0. It remains to consider intervals [a, b] where V̄ and V are equal at the endpoints but not on the interior. Here the upper convex hull V̄ must be linear, i.e. its second derivative v̄′ is zero, so n′ is also zero, and complementary slackness is satisfied. Turning to the Euler-Lagrange equation, we differentiate λ to get

λ′(t) = (v̄(t) − v(t))/n(t)² − 2(V̄(t) − V(t))n′(t)/n(t)³ = ν − v(t)/n(t)² − 2λ(t)n′(t)/n(t).

By complementary slackness, the third term is zero, and the Euler-Lagrange equation is verified. □

The assumption that v̄ be continuously differentiable is technical: it simplifies the use of the calculus of variations. Likewise the constraint that n be continuously differentiable does not seem to be central to the result.

4 STATE-DEPENDENT STOPPING

The early stopping methodology of Section 2 succeeds when the unconditional variance components v_k are decreasing. It allocates more computational resources to early steps, at which their expenditure produces more variance reduction. Next we investigate how it might be possible to allocate resources on the basis of conditional variance. This is potentially much more effective because the conditional variance of future cashflows often varies greatly across the possible states at one time. For instance, if a barrier option gets knocked out, then it is certain that all future cashflows are zero, and it is obvious that a simulated path should stop at this step. More generally, when a path reaches a state with low conditional variance, it might be advantageous to do less work in simulating the rest of the path, which will provide little information about the expectation being estimated. But how can a lesser but positive amount of work be done conditional on the current simulated state, when the only choice can be to simulate the next step of this path or not? One possibility is to randomize the decision whether or not to continue simulating this path, with the probability of continuation positively related to the benefit of continuation.

We formalize such a scheme by introducing a new cemetery state † to the state space. From the probability measure P for the state vector path S_0, ..., S_m, we define a new probability measure P′ by

P′[S_{k+1} ∈ Q | S_k = s] = p(s) P[S_{k+1} ∈ Q | S_k = s]

for any event Q and state s ≠ †,

P′[S_{k+1} = † | S_k = s] = 1 − p(s)

for any state s ≠ †, and P′[S_{k+1} = † | S_k = †] = 1. That is, at each step of the simulation there is a state-dependent probability 1 − p(s) of entering an absorbing cemetery state, but if that does not happen, the transition probabilities are the same. Define the indicator variable A_k = 1{S_k ≠ †}, which has the value 1 when a path is still alive.

This is the opposite of the approach of Glasserman and Staum (2001a), where we considered conditioning on the event that the state vector not enter a real cemetery state, for instance, that a barrier option not knock out. There we introduced a scheme to reduce estimator variance, despite doing more work per path, by replacing the indicator variable A_k with a likelihood ratio. Here the likelihood ratio is

L_k = Π_{j=0}^{k−1} p(S_j),

which expresses the probability of surviving up to step k given the path S_0, ..., S_{k−1}. Here the goal is to achieve variance reduction, despite increased variance per path, by doing less work per path and thus running more paths with a fixed computational budget.

Let F_k be the sigma-algebra generated by S_0, ..., S_k and let A_k also denote the event {A_k = 1}.

Lemma 2. The Radon-Nikodym derivative of the probability measure P with respect to the restriction of P′ to A_k (which is merely a measure) relative to F_k is 1/L_k, i.e.

E′[X A_k / L_k] = E[X]

for any F_k-measurable random variable X such that E[X] exists and is finite.

This lemma strongly resembles well-known results of importance sampling, and the technicalities of its proof are analogous to those of Lemma 1 of Glasserman and Staum (2001a). The state-dependent probability of continuation p(s) controls how much expected work will be done on the rest
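Lemma 2 is easy to check numerically. The sketch below uses a toy ±1 random walk and an arbitrary continuation probability p(s), both my own illustrative choices: simulating under P′ and weighting the surviving paths by A_k/L_k recovers an expectation under P, here E[S_3²] = 3.

```python
import numpy as np

rng = np.random.default_rng(7)
k, n = 3, 200_000

def p(s):
    # state-dependent continuation probability (illustrative choice)
    return np.where(np.abs(s) > 1, 0.6, 0.9)

S = np.zeros(n)                  # S_0 = 0 for every path
L = np.ones(n)                   # L_0 = 1
alive = np.ones(n, dtype=bool)   # A_0 = 1
for _ in range(k):
    ps = p(S)
    L *= ps                               # L_{j+1} = L_j * p(S_j)
    alive &= rng.random(n) < ps           # enter the cemetery with prob. 1 - p(S_j)
    S += rng.choice([-1.0, 1.0], size=n)  # surviving transitions are unchanged
# Dead paths keep dummy S values; their contribution is zeroed by `alive`.
# Lemma 2: E'[f(S_k) A_k / L_k] = E[f(S_k)], and E[S_3**2] = 3 for the walk.
est = np.mean(np.where(alive, S**2 / L, 0.0))
print(est)   # close to 3
```

Note the large weights 1/L_k on rarely surviving paths: this is the "increased variance per path" the text refers to, paid for by the reduced cost per path.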

of a path which has reached state S_k. We can construct an unbiased estimator, using the likelihood ratio L_k, even though the simulation is less likely to reach some states than others. States which are unlikely tend to be reached with a low value of L_k, and receive a correspondingly large weight.

Let τ = max{k : A_k = 1} be the random step at which the simulated path stops. If it is less than m, the number of steps, there is early stopping, and we have not generated the total sum of discounted cashflows X = X_1 + ... + X_m, but we have learned something about it. Let Y_τ represent a guess at the expected value of X given this information; this guess could be a sophisticated approximation based on knowledge about the simulation problem, or as simple as X_1 + ... + X_τ, or even just zero. That is, for each value of k, Y_k = Y_k(S_0, ..., S_k) is a deterministic function of the path to date, which is generally not equal to the unknown conditional expectation E[X | F_k]. However, we require Y_m = X. An estimator of E[X] under P′ is

X′ = Y_τ/L_τ − Σ_{j=1}^τ ((1 − p(S_{j−1}))/L_j) Y_{j−1}.

The intuition is that Y_τ is our best guess at what X would be if we were to continue the simulation rather than stopping at step τ. This guess deserves a weight 1/L_τ to make up for the hazard of early stopping before reaching this step. To avoid bias, we must subtract something to make up for the guesses that we would have made had we stopped at earlier steps. Although this way of writing the estimator is most intuitive, another expression is more useful for theory. Let Y_0 denote 0.

Lemma 3. The estimator X′ also equals Σ_{j=1}^m (A_j/L_j)(Y_j − Y_{j−1}).

Proof. The indicator function for the event that the path stops at step j is A_j − A_{j+1}. That is, A_τ − A_{τ+1} = 1, while j ≠ τ implies A_j − A_{j+1} = 0. Therefore, using L_{j+1} = L_j p(S_j) and Y_0 = 0,

X′ = Y_τ/L_τ − Σ_{j=1}^τ ((1 − p(S_{j−1}))/L_j) Y_{j−1}
   = Y_τ/L_τ + Σ_{j=1}^{τ−1} Y_j (1/L_j − 1/L_{j+1})
   = Σ_{j=1}^m (A_j/L_j)(Y_j − Y_{j−1}). □

Using this, we can prove the key theorem.

Theorem 4. The estimator X′ is unbiased under the P′ simulation scheme, i.e. E′[X′] = E[X].

Proof. Using the formulation of Lemma 3,

E′[X′] = E′[Σ_{j=1}^m (A_j/L_j)(Y_j − Y_{j−1})]
       = Σ_{j=1}^m (E′[Y_j A_j/L_j] − E′[Y_{j−1} A_j/L_j])
       = Σ_{j=1}^m (E[Y_j] − E[Y_{j−1}])
       = E[Y_m] − E[Y_0] = E[X]. □

The third equality is justified by Lemma 2, since Y_j and Y_{j−1} are both F_j-measurable.

This approach is closely related to the methodology of introducing new variables X̃_1, ..., X̃_m, described at the end of Section 2. Just as there, the optimal choice of Y_j is the unknown conditional expectation E[X | F_j], and in practice one must attempt to use knowledge about the simulation problem to construct a good approximation. Here we also face two more challenges: finding a good randomized stopping policy P′, and designing a computationally efficient algorithm. The problem of finding the variance-minimizing P′ given a policy of guesses Y_1, ..., Y_m appears difficult. However, preliminary investigations suggest that the optimal continuation probabilities p(s) are large when proceeding to step k+1 from state S_k = s yields a large reduction in the sum of the conditional variance of remaining discounted payoffs and the squared bias of the guess for the value of remaining discounted payoffs. As before, these quantities are unknown, but a decent guess at them produces an unbiased estimator with reduced variance that is merely suboptimal in the sense of not providing the maximum possible variance reduction.

5 CONCLUSIONS

In this paper we provided further analysis and an extension of the idea of stopping simulated paths early as a variance reduction technique. We proved that when paths stop early independent of the state reached, the extent of variance reduction depends on the shape of the covariance matrix of discounted cashflows, not on its size. The method is appropriate for problems where simulation produces more valuable information at early time steps than at later ones. We also showed how to construct an unbiased estimator when the decision to stop paths early is random and depends on
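The construction can be exercised end to end. In the sketch below (toy walk, discount factor, and continuation probabilities are all my own illustrative choices), the guess is the running sum Y_k = X_1 + ... + X_k, so Y_j − Y_{j−1} = X_j and the Lemma 3 form of X′ reduces to Σ_j (A_j/L_j) X_j; the model is chosen so that E[X] = 0.

```python
import numpy as np

rng = np.random.default_rng(11)
m, n = 4, 200_000

def p(s):
    # continuation probability (illustrative choice), kept away from zero
    return np.where(np.abs(s) <= 1, 0.9, 0.7)

S = np.zeros(n)                  # S_0
L = np.ones(n)                   # L_0 = 1
alive = np.ones(n, dtype=bool)   # A_0 = 1
X_prime = np.zeros(n)
for k in range(1, m + 1):
    ps = p(S)
    L *= ps                               # L_k = L_{k-1} * p(S_{k-1})
    alive &= rng.random(n) < ps           # A_k
    S = S + rng.choice([-1.0, 1.0], n)    # S_k; transition law unchanged under P'
    X_k = 0.9**k * S                      # cashflow with E[X_k] = 0
    # Lemma 3: X' = sum_j (A_j/L_j)(Y_j - Y_{j-1}), and Y_j - Y_{j-1} = X_j here
    X_prime += np.where(alive, X_k / L, 0.0)
print(X_prime.mean())   # close to E[X] = 0
```

A better guess Y_k (closer to E[X | F_k]) would leave the mean unchanged, by Theorem 4, while reducing the variance of X′.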

the state reached. This opens up the possibility of greater variance reduction, if one can avoid doing work simulating the tails of paths when they are unlikely to produce much valuable information. These advances increase our ability to use our knowledge about the structure of simulation problems to apply computational resources efficiently.

ACKNOWLEDGMENTS

This research was supported in part by NSF grant DMS-0074637, an IBM University Faculty Award, and an NSF graduate research fellowship.

APPENDIX: PROOF OF THEOREM 2

Lemma 4. The sequence of functions V^(m) converges uniformly to V.

Proof. Each V^(m)(t) is a Riemann approximation to the integral V(t). Since v is continuous on the compact set [0, 1], it is uniformly continuous there, and it is Riemann-integrable. Uniform continuity says

∀ε > 0 ∃δ > 0: ∀s, t ∈ [0, 1], |t − s| < δ ⇒ |v(t) − v(s)| < ε.

Therefore, by choosing s = ⌈mt⌉/m, it follows that |t − s| < 1/m, and

∀ε > 0 ∃δ > 0: ∀t ∈ [0, 1], m > 1/δ ⇒ |v^(m)(t) − v(t)| = |v(⌈mt⌉/m) − v(t)| < ε.

This is uniform convergence of v^(m) to v. Next,

∀ε > 0 ∃δ > 0: ∀t ∈ [0, 1], m > 1/δ ⇒ |V^(m)(t) − V(t)| ≤ ∫_0^t |v^(m)(s) − v(s)| ds < tε ≤ ε.

That is, V^(m) converges uniformly to V. □

Lemma 5. The sequence of functions V̄^(m) converges uniformly to V̄.

Proof. To prove the convergence of V̄^(m) to V̄, define ε_m = sup_{t∈[0,1]} |V^(m)(t) − V(t)|. By uniform convergence of V^(m) to V, ε_m converges to zero. Because V̄(t) is concave, V̄(t) + ε_m is concave. Further, using V̄(t) ≥ V(t),

V̄(t) + ε_m ≥ V(t) + ε_m ≥ V^(m)(t).

Therefore, by definition of V̄^(m)(t) as the least concave curve above V^(m)(t), V̄^(m)(t) ≤ V̄(t) + ε_m. The exact same reasoning, switching the roles of V^(m) and V, shows that V̄(t) ≤ V̄^(m)(t) + ε_m. Combining these,

sup_{t∈[0,1]} |V̄^(m)(t) − V̄(t)| ≤ ε_m,

and thus V̄^(m) converges uniformly to V̄. □

Lemma 6. There exist nonincreasing functions v̄ and v̄^(m) such that V̄(t) = ∫_0^t v̄(s) ds and V̄^(m)(t) = ∫_0^t v̄^(m)(s) ds. Moreover, for t ∈ (0, 1), lim_{m→∞} v̄^(m)(t) = v̄(t).

Proof. As concave functions, V̄ and V̄^(m) have both left and right derivatives at every point in (0, 1). If s < t < u, focusing on V̄,

lim_{x→s+} (V̄(x) − V̄(s))/(x − s) ≥ lim_{x→t−} (V̄(t) − V̄(x))/(t − x) ≥ lim_{x→t+} (V̄(x) − V̄(t))/(x − t) ≥ lim_{x→u−} (V̄(u) − V̄(x))/(u − x).

The left and right derivatives are unequal only on a set of measure zero. For these assertions, see, for instance, Sundaram (1996). It follows that any function which at each point equals either one-sided derivative is nonincreasing and integrates to V̄. The same holds for V̄^(m), which indeed has at most m points where the one-sided derivatives are unequal.

Next, for the convergence of v̄^(m) to v̄. By Lemma 5, we know that for any open interval (a, b) ⊆ (0, 1),

lim_{m→∞} ∫_a^b (v̄^(m)(t) − v̄(t)) dt = lim_{m→∞} (V̄^(m)(b) − V̄(b) − V̄^(m)(a) + V̄(a)) = 0.    (A-1)

The strategy is to show that

∃t ∈ (0, 1): liminf_{m→∞} v̄^(m)(t) < v̄(t) or limsup_{m→∞} v̄^(m)(t) > v̄(t)    (A-2)

implies the contrary of (A-1), and thus (A-2) is false, i.e. for every t ∈ (0, 1), liminf_m v̄^(m)(t) ≥ v̄(t) ≥ limsup_m v̄^(m)(t), and thus lim_m v̄^(m)(t) exists and equals v̄(t).

Suppose then that there is an a ∈ (0, 1) such that liminf_m v̄^(m)(a) < v̄(a). Define ε = v̄(a) − liminf_m v̄^(m)(a) > 0. By definition of liminf, there is a sequence {m_l} such that for all l, v̄^(m_l)(a) < v̄(a) − ε/2. Because v̄ is continuous and nonincreasing and a < 1, there is a b ∈ (a, 1) such that 0 < v̄(a) − v̄(b) < ε/4. Both v̄^(m_l) and v̄ are nonincreasing, so for t ∈ (a, b),

v̄(t) − v̄^(m_l)(t) ≥ v̄(b) − v̄^(m_l)(a) = (v̄(b) − v̄(a)) + (v̄(a) − v̄^(m_l)(a)) > −ε/4 + ε/2 = ε/4 > 0.

Therefore ∫_a^b (v̄(t) − v̄^(m_l)(t)) dt > (b − a)ε/4 > 0 for all l, thus liminf_l ∫_a^b (v̄(t) − v̄^(m_l)(t)) dt > 0, contradicting (A-1). On the other hand, suppose there is a b ∈ (0, 1) such that limsup_m v̄^(m)(b) > v̄(b). Now let ε be limsup_m v̄^(m)(b) − v̄(b) > 0, and there is a sequence {m_l} such that for all l, v̄^(m_l)(b) > v̄(b) + ε/2. This time there is an a ∈ (0, b) such that 0 < v̄(a) − v̄(b) < ε/4, and we get for t ∈ (a, b)

v̄(t) − v̄^(m_l)(t) ≤ v̄(a) − v̄^(m_l)(b) = (v̄(a) − v̄(b)) + (v̄(b) − v̄^(m_l)(b)) < ε/4 − ε/2 = −ε/4 < 0

for all l, thus limsup_l ∫_a^b (v̄(t) − v̄^(m_l)(t)) dt < 0, contradicting (A-1). □

Lemma 7. The functions v(t), v^(m)(t), v̄(t), and v̄^(m)(t) are bounded above and bounded away from zero in m and t.

Proof. Because v is positive and continuous, it is both bounded above and bounded away from zero on [0, 1]. By the construction of v^(m), its values are values of v, so it too is both bounded above and bounded away from zero. Since v̄ and v̄^(m) are nonincreasing and positive, we can focus on t = 0 for proving their boundedness above and on t = 1 for proving their boundedness away from zero.

If there is a one-sided neighborhood of t = 0 on which V̄ = V, then v̄(0+) = v(0). If not, then there exists a point t′ > 0 such that v̄(t) = V(t′)/t′ for t ∈ (0, t′). Either way, v̄(0+) is finite and the function v̄ is bounded above. Likewise, if there is a neighborhood of t = 0 on which V̄^(m) = V^(m), then v̄^(m)(0+) = v^(m)(0+) = v(1/m). If not, there exists a point i/m > 0 such that

(i/m) v̄^(m)(0+) = V^(m)(i/m) − V^(m)(0) = (1/m) Σ_{j=1}^i v(j/m).

So v̄^(m)(0+) is a Riemann average of v on the interval [0, i/m], and this is bounded above in m because v is bounded above on the whole interval [0, 1]. Either way, v̄^(m)(0+) is bounded above, and so is the function v̄^(m).

The same reasoning applies to boundedness away from zero. If there is a neighborhood of t = 1 on which V̄ = V, then v̄(1−) = v(1), and if not, there exists t′ < 1 such that (1 − t′) v̄(1−) = V̄(1) − V̄(t′) = V(1) − V(t′), an average of v on [t′, 1]. If there is a neighborhood of t = 1 on which V̄^(m) = V^(m), then v̄^(m)(1−) = v^(m)(1) = v(1), and if not, for some i we have

(1 − i/m) v̄^(m)(1−) = V^(m)(1) − V^(m)(i/m) = (1/m) Σ_{j=i+1}^m v(j/m),

again a Riemann average of v, which is bounded away from zero. □

Proof of Theorem 2. This proof frequently relies on Lemma 7. The functions v̄ and v̄^(m) are bounded away from zero, so we can take square roots, which are themselves bounded away from zero. Where v̄^(m) converges to v̄, trivially √(v̄^(m)) converges to √(v̄). Since these are also positive and bounded above, by the Bounded Convergence Theorem,

∫_0^1 √(v̄^(m)(s)) ds → ∫_0^1 √(v̄(s)) ds.

Consequently, n^(m) converges to n on (0, 1). Also v^(m)/√(v̄^(m)) converges to v/√(v̄) because the denominators are bounded away from zero, and the numerators and denominators converge separately. With numerators positive and bounded above, and denominators bounded away from zero, these fractions are positive and bounded above. So again we can apply the Bounded Convergence Theorem, and F^(m) converges to F. □

REFERENCES

Glasserman, P., and J. Staum. 2001a. Conditioning on one-step survival for barrier option simulations. Operations Research, forthcoming.

Glasserman, P., and J. Staum. 2001b. Resource allocation among simulation time steps. Working paper, Graduate School of Business, Columbia University, New York, New York.

Sundaram, R. K. 1996. A First Course in Optimization Theory. New York: Cambridge University Press.

AUTHOR BIOGRAPHIES

PAUL GLASSERMAN is the Jack R. Anderson Professor and chairman of the division of Decision, Risk, and Operations at Columbia Business School, and holds a secondary appointment in the Department of Industrial Engineering and Operations Research. He currently serves as departmental editor of Management Science for Stochastic Models and Simulation. His e-mail address is <pg20@columbia.edu>.

JEREMY STAUM is a Visiting Assistant Professor in the School of Operations Research and Industrial Engineering at Cornell University. He received his Ph.D. from Columbia University in 2001. His research interests include variance reduction techniques and financial engineering.


More information

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket.

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket. Generalized eigenfunctions and a Borel Theore on the Sierpinski Gasket. Kasso A. Okoudjou, Luke G. Rogers, and Robert S. Strichartz May 26, 2006 1 Introduction There is a well developed theory (see [5,

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee227c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee227c@berkeley.edu October

More information

On Conditions for Linearity of Optimal Estimation

On Conditions for Linearity of Optimal Estimation On Conditions for Linearity of Optial Estiation Erah Akyol, Kuar Viswanatha and Kenneth Rose {eakyol, kuar, rose}@ece.ucsb.edu Departent of Electrical and Coputer Engineering University of California at

More information

are equal to zero, where, q = p 1. For each gene j, the pairwise null and alternative hypotheses are,

are equal to zero, where, q = p 1. For each gene j, the pairwise null and alternative hypotheses are, Page of 8 Suppleentary Materials: A ultiple testing procedure for ulti-diensional pairwise coparisons with application to gene expression studies Anjana Grandhi, Wenge Guo, Shyaal D. Peddada S Notations

More information

COS 424: Interacting with Data. Written Exercises

COS 424: Interacting with Data. Written Exercises COS 424: Interacting with Data Hoework #4 Spring 2007 Regression Due: Wednesday, April 18 Written Exercises See the course website for iportant inforation about collaboration and late policies, as well

More information

Algorithms for parallel processor scheduling with distinct due windows and unit-time jobs

Algorithms for parallel processor scheduling with distinct due windows and unit-time jobs BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES Vol. 57, No. 3, 2009 Algoriths for parallel processor scheduling with distinct due windows and unit-tie obs A. JANIAK 1, W.A. JANIAK 2, and

More information

A Simple Regression Problem

A Simple Regression Problem A Siple Regression Proble R. M. Castro March 23, 2 In this brief note a siple regression proble will be introduced, illustrating clearly the bias-variance tradeoff. Let Y i f(x i ) + W i, i,..., n, where

More information

A Theoretical Framework for Deep Transfer Learning

A Theoretical Framework for Deep Transfer Learning A Theoretical Fraewor for Deep Transfer Learning Toer Galanti The School of Coputer Science Tel Aviv University toer22g@gail.co Lior Wolf The School of Coputer Science Tel Aviv University wolf@cs.tau.ac.il

More information

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS A Thesis Presented to The Faculty of the Departent of Matheatics San Jose State University In Partial Fulfillent of the Requireents

More information

A note on the multiplication of sparse matrices

A note on the multiplication of sparse matrices Cent. Eur. J. Cop. Sci. 41) 2014 1-11 DOI: 10.2478/s13537-014-0201-x Central European Journal of Coputer Science A note on the ultiplication of sparse atrices Research Article Keivan Borna 12, Sohrab Aboozarkhani

More information

Computable Shell Decomposition Bounds

Computable Shell Decomposition Bounds Coputable Shell Decoposition Bounds John Langford TTI-Chicago jcl@cs.cu.edu David McAllester TTI-Chicago dac@autoreason.co Editor: Leslie Pack Kaelbling and David Cohn Abstract Haussler, Kearns, Seung

More information

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lesson 1 4 October 2017 Outline Learning and Evaluation for Pattern Recognition Notation...2 1. The Pattern Recognition

More information

Stochastic Subgradient Methods

Stochastic Subgradient Methods Stochastic Subgradient Methods Lingjie Weng Yutian Chen Bren School of Inforation and Coputer Science University of California, Irvine {wengl, yutianc}@ics.uci.edu Abstract Stochastic subgradient ethods

More information

Graphical Models in Local, Asymmetric Multi-Agent Markov Decision Processes

Graphical Models in Local, Asymmetric Multi-Agent Markov Decision Processes Graphical Models in Local, Asyetric Multi-Agent Markov Decision Processes Ditri Dolgov and Edund Durfee Departent of Electrical Engineering and Coputer Science University of Michigan Ann Arbor, MI 48109

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

Optimal Jamming Over Additive Noise: Vector Source-Channel Case

Optimal Jamming Over Additive Noise: Vector Source-Channel Case Fifty-first Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 2-3, 2013 Optial Jaing Over Additive Noise: Vector Source-Channel Case Erah Akyol and Kenneth Rose Abstract This paper

More information

A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words)

A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words) 1 A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine (1900 words) Contact: Jerry Farlow Dept of Matheatics Univeristy of Maine Orono, ME 04469 Tel (07) 866-3540 Eail: farlow@ath.uaine.edu

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

CS Lecture 13. More Maximum Likelihood

CS Lecture 13. More Maximum Likelihood CS 6347 Lecture 13 More Maxiu Likelihood Recap Last tie: Introduction to axiu likelihood estiation MLE for Bayesian networks Optial CPTs correspond to epirical counts Today: MLE for CRFs 2 Maxiu Likelihood

More information

Upper bound on false alarm rate for landmine detection and classification using syntactic pattern recognition

Upper bound on false alarm rate for landmine detection and classification using syntactic pattern recognition Upper bound on false alar rate for landine detection and classification using syntactic pattern recognition Ahed O. Nasif, Brian L. Mark, Kenneth J. Hintz, and Nathalia Peixoto Dept. of Electrical and

More information

Research Article On the Isolated Vertices and Connectivity in Random Intersection Graphs

Research Article On the Isolated Vertices and Connectivity in Random Intersection Graphs International Cobinatorics Volue 2011, Article ID 872703, 9 pages doi:10.1155/2011/872703 Research Article On the Isolated Vertices and Connectivity in Rando Intersection Graphs Yilun Shang Institute for

More information

Support recovery in compressed sensing: An estimation theoretic approach

Support recovery in compressed sensing: An estimation theoretic approach Support recovery in copressed sensing: An estiation theoretic approach Ain Karbasi, Ali Horati, Soheil Mohajer, Martin Vetterli School of Coputer and Counication Sciences École Polytechnique Fédérale de

More information

When Short Runs Beat Long Runs

When Short Runs Beat Long Runs When Short Runs Beat Long Runs Sean Luke George Mason University http://www.cs.gu.edu/ sean/ Abstract What will yield the best results: doing one run n generations long or doing runs n/ generations long

More information

Interactive Markov Models of Evolutionary Algorithms

Interactive Markov Models of Evolutionary Algorithms Cleveland State University EngagedScholarship@CSU Electrical Engineering & Coputer Science Faculty Publications Electrical Engineering & Coputer Science Departent 2015 Interactive Markov Models of Evolutionary

More information

On the Communication Complexity of Lipschitzian Optimization for the Coordinated Model of Computation

On the Communication Complexity of Lipschitzian Optimization for the Coordinated Model of Computation journal of coplexity 6, 459473 (2000) doi:0.006jco.2000.0544, available online at http:www.idealibrary.co on On the Counication Coplexity of Lipschitzian Optiization for the Coordinated Model of Coputation

More information

Hybrid System Identification: An SDP Approach

Hybrid System Identification: An SDP Approach 49th IEEE Conference on Decision and Control Deceber 15-17, 2010 Hilton Atlanta Hotel, Atlanta, GA, USA Hybrid Syste Identification: An SDP Approach C Feng, C M Lagoa, N Ozay and M Sznaier Abstract The

More information

Bipartite subgraphs and the smallest eigenvalue

Bipartite subgraphs and the smallest eigenvalue Bipartite subgraphs and the sallest eigenvalue Noga Alon Benny Sudaov Abstract Two results dealing with the relation between the sallest eigenvalue of a graph and its bipartite subgraphs are obtained.

More information

a a a a a a a m a b a b

a a a a a a a m a b a b Algebra / Trig Final Exa Study Guide (Fall Seester) Moncada/Dunphy Inforation About the Final Exa The final exa is cuulative, covering Appendix A (A.1-A.5) and Chapter 1. All probles will be ultiple choice

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE7C (Spring 018: Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee7c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee7c@berkeley.edu October 15,

More information

The Simplex Method is Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate

The Simplex Method is Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate The Siplex Method is Strongly Polynoial for the Markov Decision Proble with a Fixed Discount Rate Yinyu Ye April 20, 2010 Abstract In this note we prove that the classic siplex ethod with the ost-negativereduced-cost

More information

The Methods of Solution for Constrained Nonlinear Programming

The Methods of Solution for Constrained Nonlinear Programming Research Inventy: International Journal Of Engineering And Science Vol.4, Issue 3(March 2014), PP 01-06 Issn (e): 2278-4721, Issn (p):2319-6483, www.researchinventy.co The Methods of Solution for Constrained

More information

Topic 5a Introduction to Curve Fitting & Linear Regression

Topic 5a Introduction to Curve Fitting & Linear Regression /7/08 Course Instructor Dr. Rayond C. Rup Oice: A 337 Phone: (95) 747 6958 E ail: rcrup@utep.edu opic 5a Introduction to Curve Fitting & Linear Regression EE 4386/530 Coputational ethods in EE Outline

More information

Computational and Statistical Learning Theory

Computational and Statistical Learning Theory Coputational and Statistical Learning Theory Proble sets 5 and 6 Due: Noveber th Please send your solutions to learning-subissions@ttic.edu Notations/Definitions Recall the definition of saple based Radeacher

More information

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels Extension of CSRSM for the Paraetric Study of the Face Stability of Pressurized Tunnels Guilhe Mollon 1, Daniel Dias 2, and Abdul-Haid Soubra 3, M.ASCE 1 LGCIE, INSA Lyon, Université de Lyon, Doaine scientifique

More information

IN modern society that various systems have become more

IN modern society that various systems have become more Developent of Reliability Function in -Coponent Standby Redundant Syste with Priority Based on Maxiu Entropy Principle Ryosuke Hirata, Ikuo Arizono, Ryosuke Toohiro, Satoshi Oigawa, and Yasuhiko Takeoto

More information

3.3 Variational Characterization of Singular Values

3.3 Variational Characterization of Singular Values 3.3. Variational Characterization of Singular Values 61 3.3 Variational Characterization of Singular Values Since the singular values are square roots of the eigenvalues of the Heritian atrices A A and

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a ournal published by Elsevier. The attached copy is furnished to the author for internal non-coercial research and education use, including for instruction at the authors institution

More information

Introduction to Machine Learning. Recitation 11

Introduction to Machine Learning. Recitation 11 Introduction to Machine Learning Lecturer: Regev Schweiger Recitation Fall Seester Scribe: Regev Schweiger. Kernel Ridge Regression We now take on the task of kernel-izing ridge regression. Let x,...,

More information

USEFUL HINTS FOR SOLVING PHYSICS OLYMPIAD PROBLEMS. By: Ian Blokland, Augustana Campus, University of Alberta

USEFUL HINTS FOR SOLVING PHYSICS OLYMPIAD PROBLEMS. By: Ian Blokland, Augustana Campus, University of Alberta 1 USEFUL HINTS FOR SOLVING PHYSICS OLYMPIAD PROBLEMS By: Ian Bloland, Augustana Capus, University of Alberta For: Physics Olypiad Weeend, April 6, 008, UofA Introduction: Physicists often attept to solve

More information

Solutions of some selected problems of Homework 4

Solutions of some selected problems of Homework 4 Solutions of soe selected probles of Hoework 4 Sangchul Lee May 7, 2018 Proble 1 Let there be light A professor has two light bulbs in his garage. When both are burned out, they are replaced, and the next

More information

Lecture 21. Interior Point Methods Setup and Algorithm

Lecture 21. Interior Point Methods Setup and Algorithm Lecture 21 Interior Point Methods In 1984, Kararkar introduced a new weakly polynoial tie algorith for solving LPs [Kar84a], [Kar84b]. His algorith was theoretically faster than the ellipsoid ethod and

More information

Polygonal Designs: Existence and Construction

Polygonal Designs: Existence and Construction Polygonal Designs: Existence and Construction John Hegean Departent of Matheatics, Stanford University, Stanford, CA 9405 Jeff Langford Departent of Matheatics, Drake University, Des Moines, IA 5011 G

More information

Support Vector Machines MIT Course Notes Cynthia Rudin

Support Vector Machines MIT Course Notes Cynthia Rudin Support Vector Machines MIT 5.097 Course Notes Cynthia Rudin Credit: Ng, Hastie, Tibshirani, Friedan Thanks: Şeyda Ertekin Let s start with soe intuition about argins. The argin of an exaple x i = distance

More information

Consistent Multiclass Algorithms for Complex Performance Measures. Supplementary Material

Consistent Multiclass Algorithms for Complex Performance Measures. Supplementary Material Consistent Multiclass Algoriths for Coplex Perforance Measures Suppleentary Material Notations. Let λ be the base easure over n given by the unifor rando variable (say U over n. Hence, for all easurable

More information

MSEC MODELING OF DEGRADATION PROCESSES TO OBTAIN AN OPTIMAL SOLUTION FOR MAINTENANCE AND PERFORMANCE

MSEC MODELING OF DEGRADATION PROCESSES TO OBTAIN AN OPTIMAL SOLUTION FOR MAINTENANCE AND PERFORMANCE Proceeding of the ASME 9 International Manufacturing Science and Engineering Conference MSEC9 October 4-7, 9, West Lafayette, Indiana, USA MSEC9-8466 MODELING OF DEGRADATION PROCESSES TO OBTAIN AN OPTIMAL

More information

A model reduction approach to numerical inversion for a parabolic partial differential equation

A model reduction approach to numerical inversion for a parabolic partial differential equation Inverse Probles Inverse Probles 30 (204) 250 (33pp) doi:0.088/0266-56/30/2/250 A odel reduction approach to nuerical inversion for a parabolic partial differential equation Liliana Borcea, Vladiir Drusin

More information

CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13

CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13 CSE55: Randoied Algoriths and obabilistic Analysis May 6, Lecture Lecturer: Anna Karlin Scribe: Noah Siegel, Jonathan Shi Rando walks and Markov chains This lecture discusses Markov chains, which capture

More information

Experimental Design For Model Discrimination And Precise Parameter Estimation In WDS Analysis

Experimental Design For Model Discrimination And Precise Parameter Estimation In WDS Analysis City University of New York (CUNY) CUNY Acadeic Works International Conference on Hydroinforatics 8-1-2014 Experiental Design For Model Discriination And Precise Paraeter Estiation In WDS Analysis Giovanna

More information

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley osig 1 Winter Seester 2018 Lesson 6 27 February 2018 Outline Perceptrons and Support Vector achines Notation...2 Linear odels...3 Lines, Planes

More information

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD PROCEEDINGS OF THE YEREVAN STATE UNIVERSITY Physical and Matheatical Sciences 04,, p. 7 5 ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD M a t h e a t i c s Yu. A. HAKOPIAN, R. Z. HOVHANNISYAN

More information

Learnability and Stability in the General Learning Setting

Learnability and Stability in the General Learning Setting Learnability and Stability in the General Learning Setting Shai Shalev-Shwartz TTI-Chicago shai@tti-c.org Ohad Shair The Hebrew University ohadsh@cs.huji.ac.il Nathan Srebro TTI-Chicago nati@uchicago.edu

More information

Statistical Logic Cell Delay Analysis Using a Current-based Model

Statistical Logic Cell Delay Analysis Using a Current-based Model Statistical Logic Cell Delay Analysis Using a Current-based Model Hanif Fatei Shahin Nazarian Massoud Pedra Dept. of EE-Systes, University of Southern California, Los Angeles, CA 90089 {fatei, shahin,

More information

Estimating Entropy and Entropy Norm on Data Streams

Estimating Entropy and Entropy Norm on Data Streams Estiating Entropy and Entropy Nor on Data Streas Ait Chakrabarti 1, Khanh Do Ba 1, and S. Muthukrishnan 2 1 Departent of Coputer Science, Dartouth College, Hanover, NH 03755, USA 2 Departent of Coputer

More information

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search Quantu algoriths (CO 781, Winter 2008) Prof Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search ow we begin to discuss applications of quantu walks to search algoriths

More information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information Cite as: Straub D. (2014). Value of inforation analysis with structural reliability ethods. Structural Safety, 49: 75-86. Value of Inforation Analysis with Structural Reliability Methods Daniel Straub

More information

Convex Programming for Scheduling Unrelated Parallel Machines

Convex Programming for Scheduling Unrelated Parallel Machines Convex Prograing for Scheduling Unrelated Parallel Machines Yossi Azar Air Epstein Abstract We consider the classical proble of scheduling parallel unrelated achines. Each job is to be processed by exactly

More information

Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence

Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence Best Ar Identification: A Unified Approach to Fixed Budget and Fixed Confidence Victor Gabillon Mohaad Ghavazadeh Alessandro Lazaric INRIA Lille - Nord Europe, Tea SequeL {victor.gabillon,ohaad.ghavazadeh,alessandro.lazaric}@inria.fr

More information

An Adaptive UKF Algorithm for the State and Parameter Estimations of a Mobile Robot

An Adaptive UKF Algorithm for the State and Parameter Estimations of a Mobile Robot Vol. 34, No. 1 ACTA AUTOMATICA SINICA January, 2008 An Adaptive UKF Algorith for the State and Paraeter Estiations of a Mobile Robot SONG Qi 1, 2 HAN Jian-Da 1 Abstract For iproving the estiation accuracy

More information

Testing equality of variances for multiple univariate normal populations

Testing equality of variances for multiple univariate normal populations University of Wollongong Research Online Centre for Statistical & Survey Methodology Working Paper Series Faculty of Engineering and Inforation Sciences 0 esting equality of variances for ultiple univariate

More information

arxiv: v3 [cs.lg] 7 Jan 2016

arxiv: v3 [cs.lg] 7 Jan 2016 Efficient and Parsionious Agnostic Active Learning Tzu-Kuo Huang Alekh Agarwal Daniel J. Hsu tkhuang@icrosoft.co alekha@icrosoft.co djhsu@cs.colubia.edu John Langford Robert E. Schapire jcl@icrosoft.co

More information

In this chapter, we consider several graph-theoretic and probabilistic models

In this chapter, we consider several graph-theoretic and probabilistic models THREE ONE GRAPH-THEORETIC AND STATISTICAL MODELS 3.1 INTRODUCTION In this chapter, we consider several graph-theoretic and probabilistic odels for a social network, which we do under different assuptions

More information

Tail Estimation of the Spectral Density under Fixed-Domain Asymptotics

Tail Estimation of the Spectral Density under Fixed-Domain Asymptotics Tail Estiation of the Spectral Density under Fixed-Doain Asyptotics Wei-Ying Wu, Chae Young Li and Yiin Xiao Wei-Ying Wu, Departent of Statistics & Probability Michigan State University, East Lansing,

More information

Optimum Value of Poverty Measure Using Inverse Optimization Programming Problem

Optimum Value of Poverty Measure Using Inverse Optimization Programming Problem International Journal of Conteporary Matheatical Sciences Vol. 14, 2019, no. 1, 31-42 HIKARI Ltd, www.-hikari.co https://doi.org/10.12988/ijcs.2019.914 Optiu Value of Poverty Measure Using Inverse Optiization

More information

Necessity of low effective dimension

Necessity of low effective dimension Necessity of low effective diension Art B. Owen Stanford University October 2002, Orig: July 2002 Abstract Practitioners have long noticed that quasi-monte Carlo ethods work very well on functions that

More information

Tight Bounds for Maximal Identifiability of Failure Nodes in Boolean Network Tomography

Tight Bounds for Maximal Identifiability of Failure Nodes in Boolean Network Tomography Tight Bounds for axial Identifiability of Failure Nodes in Boolean Network Toography Nicola Galesi Sapienza Università di Roa nicola.galesi@uniroa1.it Fariba Ranjbar Sapienza Università di Roa fariba.ranjbar@uniroa1.it

More information

Boosting with log-loss

Boosting with log-loss Boosting with log-loss Marco Cusuano-Towner Septeber 2, 202 The proble Suppose we have data exaples {x i, y i ) i =... } for a two-class proble with y i {, }. Let F x) be the predictor function with the

More information

Qualitative Modelling of Time Series Using Self-Organizing Maps: Application to Animal Science

Qualitative Modelling of Time Series Using Self-Organizing Maps: Application to Animal Science Proceedings of the 6th WSEAS International Conference on Applied Coputer Science, Tenerife, Canary Islands, Spain, Deceber 16-18, 2006 183 Qualitative Modelling of Tie Series Using Self-Organizing Maps:

More information

On the block maxima method in extreme value theory

On the block maxima method in extreme value theory On the bloc axiethod in extree value theory Ana Ferreira ISA, Univ Tecn Lisboa, Tapada da Ajuda 349-7 Lisboa, Portugal CEAUL, FCUL, Bloco C6 - Piso 4 Capo Grande, 749-6 Lisboa, Portugal anafh@isa.utl.pt

More information