Lecture 4: November 13
Computational Learning Theory — Fall Semester, 2017/18

Lecture 4: November 13
Lecturer: Yishay Mansour        Scribe: Guy Dolinsky, Yogev Bar-On, Yuval Lewi

4.1 Fenchel Conjugate

Motivation

Until this lecture we saw our problem in the primal space, as pairs $(x, f(x))$. In this lecture we will look at the dual-space representation of the problem, i.e., pairs $(\nabla f(x), f^*(\nabla f(x)))$. For convex functions this representation contains all the data of the regular problem, while giving us a new geometric view. Let us define the dual (conjugate) function:

    $f^*(y) = \max_{w \in S} \; y^T w - f(w)$

Theorem 4.1  Assume that $x = \arg\max_{w \in S} \; (y^T w - f(w))$. Then $y \in \partial f(x)$.

Proof: The definition of $f^*$ gives:

    $\forall u: \; f^*(y) \ge y^T u - f(u)$, i.e., $\forall u: \; f(u) \ge y^T u - f^*(y)$.

From our assumption, $f^*(y) = y^T x - f(x)$, and it follows that

    $\forall u: \; f(u) \ge y^T u - y^T x + f(x) = f(x) + y^T (u - x)$,

which is the definition of $y \in \partial f(x)$. □

Examples

Example from economics. Assume that a manufacturer produces $d$ products with quantities $q \in \mathbb{R}^d_+$. Let us also assume that the cost of producing the quantities $q$ is given by a convex function $C(q)$. Then the revenue is defined by:

    $Rev(p, q) = p^T q - C(q)$,
where $p \in \mathbb{R}^d$ is the price per unit of product. The dual problem in this case is:

    $C^*(p) = \max_q \; p^T q - C(q) = \max_q Rev(p, q)$.

Namely, the dual problem, given the prices, outputs the quantities that maximize the revenue. In addition, the marginal cost per product is $\nabla C(q)$, meaning that at the optimum we have $p = \nabla C(q)$.

A single-dimension example. Define $f(w) = w \log w$, where $f : \mathbb{R} \to \mathbb{R}$. Then:

    $f^*(y) = \max_w \; y w - w \log w$.

Hence, by taking the derivative and comparing to $0$ we get:

    $y - 1 - \log w = 0 \;\Rightarrow\; w = e^{y-1}$.

Therefore

    $f^*(y) = y e^{y-1} - (y - 1) e^{y-1} = e^{y-1}$.

L2-distance example. Define $f(w) = \frac{1}{2} \|w\|_2^2$, where $f : \mathbb{R}^d \to \mathbb{R}$. As before:

    $f^*(y) = \max_w \; y^T w - \frac{1}{2} \|w\|_2^2$.

Taking the derivative and zeroing, we get $w = y$. This implies

    $f^*(y) = y^T y - \frac{1}{2} \|y\|_2^2 = \frac{1}{2} \|y\|_2^2$.

Fenchel-Young Inequality

Theorem 4.2 (Fenchel-Young inequality)  $f^*(y) + f(u) \ge y^T u$.

Proof: By definition, $f^*(y) = \max_{w \in S} \; y^T w - f(w)$. Hence:

    $\forall u: \; f^*(y) \ge y^T u - f(u)$.

Rearranging,

    $\forall u: \; f^*(y) + f(u) \ge y^T u$. □
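As a quick sanity check, the closed forms above can be verified numerically. The following sketch (our own illustration, not part of the original lecture) brute-forces $f^*(y) = \max_w (y w - f(w))$ on a grid and also checks Theorem 4.1 and the Fenchel-Young inequality:

```python
import numpy as np

# f(w) = w log w  =>  f*(y) = e^{y-1}, attained at w = e^{y-1}.
w = np.linspace(1e-6, 10.0, 1_000_000)
for y in [-1.0, 0.0, 1.0, 2.0]:
    vals = y * w - w * np.log(w)
    f_star = vals.max()                  # brute-force max_w (y*w - w log w)
    w_star = w[vals.argmax()]
    assert abs(f_star - np.exp(y - 1)) < 1e-4
    # Theorem 4.1: the maximizer satisfies y in subdiff f(w*), here y = f'(w*) = 1 + log w*.
    assert abs(y - (1.0 + np.log(w_star))) < 1e-3

# f(w) = (1/2)||w||^2  =>  f*(y) = (1/2)||y||^2, and Fenchel-Young holds:
rng = np.random.default_rng(0)
yv, uv = rng.normal(size=3), rng.normal(size=3)
f_star_yv = 0.5 * yv @ yv                # conjugate of the squared L2 norm at yv
assert f_star_yv + 0.5 * uv @ uv >= yv @ uv   # f*(y) + f(u) >= y^T u
print("conjugate checks passed")
```

The grid maximization is only an approximation, but both examples are smooth and strictly concave in $w$, so the grid argmax lands next to the true maximizer.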
Theorem 4.3  $f(w) \ge f^{**}(w)$.

Proof: Since $f^{**}(w) = (f^*)^*(w)$, we have:

    $f^{**}(w) = \max_y \; y^T w - f^*(y) = \max_y \left( y^T w - \max_z \; (z^T y - f(z)) \right)$.

Setting $z = w$ inside the inner maximum gives $\max_z (z^T y - f(z)) \ge w^T y - f(w)$, and therefore

    $f^{**}(w) \le \max_y \; y^T w - (w^T y - f(w)) = f(w)$. □

It is possible to prove that $f = f^{**}$ if the epigraph of $f$ is a closed and convex set. (Recall that the graph of $f$ is $\{(x, f(x)) : x \in S\}$ and the epigraph is $\{(x, t) : x \in S, \; f(x) \le t\}$.)

Lemma 4.4  If the epigraph of $f$ is closed and convex then:

    $y \in \partial f(x) \iff x = \arg\max_z \; (y^T z - f(z)) \iff x \in \partial f^*(y)$.

Proof:

1. Let us assume $x = \arg\max_z \; (y^T z - f(z))$. Hence:

    $f^*(y) = y^T x - f(x)$.

From that we conclude, for every $w$:

    $f^*(w) - f^*(y) \ge (w^T x - f(x)) - (y^T x - f(x)) = x^T (w - y)$.

Thus $x \in \partial f^*(y)$.

2. Let $x \in \partial f^*(y)$. Then

    $\forall w: \; f^*(w) - f^*(y) \ge x^T (w - y)$.

As a result,

    $\forall w: \; x^T y - f^*(y) \ge x^T w - f^*(w)$,

so

    $y = \arg\max_w \; (x^T w - f^*(w))$.
Combining this with the definition of $f^{**}$, this leads us to

    $f^{**}(x) = x^T y - f^*(y)$,

and since $f = f^{**}$ (the epigraph of $f$ is closed and convex) we get $f(x) = x^T y - f^*(y)$. Hence

    $f^*(y) = x^T y - f(x) = \max_z \; (z^T y - f(z))$,

so $x = \arg\max_z \; (z^T y - f(z))$.

3. We would like to show $y \in \partial f(x) \Rightarrow x = \arg\max_z \; (y^T z - f(z))$:

    $y \in \partial f(x) \;\Rightarrow\; \forall z \in S: \; f(z) \ge f(x) + y^T (z - x)$
    $\Rightarrow\; \forall z \in S: \; y^T x - f(x) \ge y^T z - f(z)$
    $\Rightarrow\; x = \arg\max_z \; (y^T z - f(z))$. □

Lemma 4.5  If $f$ and $f^*$ are differentiable then $y = \nabla f^*(\nabla f(y))$.

Proof: Set $z$ such that $f(y) + f^*(z) = z^T y$, i.e., such that the Fenchel-Young inequality holds with equality. Equality means that $y$ maximizes $y' \mapsto z^T y' - f(y')$, so $\nabla f(y) = z$; and that $z$ maximizes $z' \mapsto z'^T y - f^*(z')$, so $\nabla f^*(z) = y$. Therefore

    $\nabla f^*(\nabla f(y)) = \nabla f^*(z) = y$. □

Theorem 4.6  If $f = f^{**}$ then $x = \nabla f^*(\nabla f(x))$.

Proof: Recall the definition of $f^*$: $f^*(y) = \max_z \; z^T y - f(z)$. If $x$ maximizes this expression then $y = \nabla f(x)$, and hence

    $f^*(y) = x^T y - f(x)$, i.e., $f^*(y) + f(x) = x^T y$.

Since $f = f^{**}$,

    $f(x) = f^{**}(x) = x^T y - f^*(y) = \max_z \; (z^T x - f^*(z))$,

so $y$ attains the maximum, meaning $\nabla f^*(y) = x$. Combining $y = \nabla f(x)$ and $x = \nabla f^*(y)$ leads us to $x = \nabla f^*(\nabla f(x))$. □
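The gradient-duality identities above can be checked directly for $f(w) = w \log w$ from the earlier example, where $\nabla f(w) = 1 + \log w$ and $f^*(y) = e^{y-1}$ gives $\nabla f^*(y) = e^{y-1}$ (a small illustration of ours, not from the notes):

```python
import numpy as np

# For f(w) = w log w (w > 0): grad f(w) = 1 + log w, f*(y) = e^{y-1},
# so grad f*(y) = e^{y-1}. The two gradient maps should be mutual inverses.
grad_f = lambda w: 1.0 + np.log(w)
grad_f_star = lambda y: np.exp(y - 1.0)

for x in [0.1, 0.5, 1.0, 3.0]:
    # Theorem 4.6: x = grad f*(grad f(x))
    assert abs(grad_f_star(grad_f(x)) - x) < 1e-12
    # Lemma 4.5 in the other direction: y = grad f(grad f*(y))
    y = grad_f(x)
    assert abs(grad_f(grad_f_star(y)) - y) < 1e-12
print("gradient duality checks passed")
```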
4.2 Bregman Divergence

Bregman Divergence of a Convex Function

Let $R$ be some convex function on a set $S$. We would like to use its dual space for an algorithm we will present later this lecture (the Mirror Descent algorithm). We can go from a point in $S$ to a point in the dual space by using $\nabla R$, but we cannot always use $(\nabla R)^{-1}$ to go back: the resulting point is not necessarily in $S$. To fix this we will use the Bregman divergence.

Definition  The Bregman divergence of a convex function $R$ is defined as:

    $B_R(x \,\|\, y) = R(x) - R(y) - [\nabla R(y)]^T (x - y)$

We can now use $\arg\min_{x \in S} B_R(x \,\|\, y)$ as the projection of a point $y$ onto $S$.

Examples

$L_2$-norm. Let $R(w) = \frac{1}{2} \|w\|_2^2$. Then $\nabla R(w) = w$ and we obtain:

    $B_R(x \,\|\, y) = \frac{1}{2} \|x\|_2^2 - \frac{1}{2} \|y\|_2^2 - y^T (x - y) = \frac{1}{2} \|x\|_2^2 - \frac{1}{2} \|y\|_2^2 - y^T x + \|y\|_2^2 = \frac{1}{2} \|x - y\|_2^2$

Negative entropy. Let $R(w) = \sum_i w_i \log w_i$. We obtain $\nabla R(w) = (\ldots, \log(w_i) + 1, \ldots)^T$, so:

    $B_R(x \,\|\, y) = \sum_i x_i \log x_i - \sum_i y_i \log y_i - \sum_i (\log(y_i) + 1)(x_i - y_i)$
    $= \sum_i x_i (\log x_i - \log y_i) - \sum_i x_i + \sum_i y_i = \sum_i x_i \log \frac{x_i}{y_i} - \sum_i x_i + \sum_i y_i$

If $S = \{w : w_i \ge 0, \; \|w\|_1 = 1\}$ is the simplex, i.e., all distributions, we obtain that $B_R(x \,\|\, y)$ for $x, y \in S$ is the KL-divergence of $x$ and $y$. Also, in this case we obtain that the projection onto $S$ is:

    $\arg\min_{x \in S} B_R(x \,\|\, y) = \arg\min_{x \in S} \; \sum_i x_i \log \frac{x_i}{y_i} + \left( \sum_i y_i - 1 \right)$

We will solve using Lagrange multipliers:

    $F(x, \lambda) = \sum_i x_i \log \frac{x_i}{y_i} + \left( \sum_i y_i - 1 \right) - \lambda \left( \sum_i x_i - 1 \right)$
    $\frac{\partial F}{\partial x_i} = 1 + \log \frac{x_i}{y_i} - \lambda = 0 \;\Rightarrow\; x_i = y_i e^{\lambda - 1}$

And since $\sum_i x_i = e^{\lambda - 1} \sum_i y_i = 1$, we obtain that $x_i = \frac{y_i}{\|y\|_1}$. Thus, the projection of $y$ onto $S$ is the normalization of $y$.

4.3 Online Mirror Descent

The Online Mirror Descent Algorithm

The Online Mirror Descent algorithm is an online learning algorithm, similar to the ones we have already seen. The big difference is that OMD uses the dual space to update the current point, instead of the primal space, and projects the update onto the primal space with the Bregman divergence. We will present the algorithm with linear loss functions:

Online Mirror Descent
begin
    Set $y_1$ s.t. $\nabla R(y_1) = 0$
    Set $w_1 = \arg\min_{w \in S} B_R(w \,\|\, y_1)$
    for $t \in [1, T]$ do
        Play $w_t$ and get $f_t(x) = z_t^T x$
        Set $y_{t+1}$ s.t. $\nabla R(y_{t+1}) = \nabla R(y_t) - \eta \nabla f_t(w_t) = \nabla R(y_t) - \eta z_t$
            (namely, $y_{t+1} = (\nabla R)^{-1}(\nabla R(y_t) - \eta z_t) = \nabla R^*(\nabla R(y_t) - \eta z_t)$)
        Set $w_{t+1} = \arg\min_{w \in S} B_R(w \,\|\, y_{t+1})$
    end for
end

Online Mirror Descent Regret Analysis

Theorem 4.7  Let $R$ be some $\sigma$-strongly-convex function. Then OMD with linear loss functions outputs the same predictions as FoReL.

Proof: We will denote by $w_t^F$ and $w_t^O$ the predictions at time $t$ of FoReL and OMD, respectively. First, we notice that in OMD:

    $\nabla R(y_{t+1}) = \nabla R(y_t) - \eta z_t = \ldots = -\eta \sum_{i=1}^{t} z_i$
In FoReL, as we have seen in Lecture 2, the update rule is:

    $w_{t+1}^F = \arg\min_{w \in S} \left( \eta \sum_{i=1}^{t} z_i^T w + R(w) \right)$

Hence, setting the gradient at the (unconstrained) minimizer to zero, we obtain:

    $\eta \sum_{i=1}^{t} z_i + \nabla R(w_{t+1}^F) = 0 \;\Rightarrow\; \nabla R(w_{t+1}^F) = -\eta \sum_{i=1}^{t} z_i = \nabla R(y_{t+1})$

Since $R$ is a $\sigma$-strongly-convex function, $\nabla R$ is injective, and we obtain $w_{t+1}^F = y_{t+1}$. If $y_{t+1} \in S$, we also have $y_{t+1} = w_{t+1}^O$ and we are done. Otherwise, we obtain that:

    $w_{t+1}^O = \arg\min_{w \in S} B_R(w \,\|\, y_{t+1}) = \arg\min_{w \in S} \left( R(w) - R(y_{t+1}) - [\nabla R(y_{t+1})]^T (w - y_{t+1}) \right)$
    $= \arg\min_{w \in S} \left( R(w) - [\nabla R(y_{t+1})]^T w \right) = \arg\min_{w \in S} \left( R(w) + \eta \sum_{i=1}^{t} z_i^T w \right)$

which is again the same as $w_{t+1}^F$. □

4.4 Exponentiated Gradient Algorithm

The Exponentiated Gradient Algorithm

We can now proceed to retrieve the Randomized Weighted Majority algorithm (the Exponentiated Gradient algorithm in this context) from the Online Mirror Descent algorithm.

Regularization Analysis

Setting the regularization function to $R(w) = \sum_{i=1}^{d} w_i \log(w_i)$, we have that $\nabla R(w) = (\ldots, \log(w_i) + 1, \ldots)^T$. Now solving $\max_w -R(w)$ s.t. $\sum_i w_i = 1$ using the Lagrangian $F(w, \lambda) = -R(w) - \lambda (\sum_i w_i - 1)$ yields:

    $\frac{\partial F}{\partial w_i} = -(1 + \log w_i) - \lambda = 0 \;\Rightarrow\; \log w_i = \log w_j = -\lambda - 1 \;\Rightarrow\; w_i = \frac{1}{d}$

We therefore conclude that $-R(w) \le \log d$, i.e., $R(w) \ge -\log d$ on the simplex.
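To make the equivalence concrete, here is a minimal sketch (our own illustration, with arbitrary random losses) of OMD with the negative-entropy regularizer over the simplex. The dual update multiplies the weights coordinate-wise by $e^{-\eta z_t}$, and the Bregman projection is the normalization derived in Section 4.2; as Theorem 4.7 predicts, the iterates match the FoReL closed form $w_{t+1} \propto e^{-\eta \sum_{i \le t} z_i}$.

```python
import numpy as np

def omd_entropy(losses, eta):
    """OMD with R(w) = sum_i w_i log(w_i) over the simplex.

    grad R(w) = log(w) + 1, so the dual step grad R(y_{t+1}) = grad R(y_t) - eta*z_t
    becomes y_{t+1} = y_t * exp(-eta * z_t), and the Bregman projection onto the
    simplex is plain normalization (Section 4.2).
    """
    d = losses.shape[1]
    y = np.full(d, np.exp(-1.0))        # y_1 chosen so that grad R(y_1) = 0
    ws = []
    for z in losses:
        ws.append(y / y.sum())          # w_t = argmin_{w in S} B_R(w || y_t)
        y = y * np.exp(-eta * z)        # dual-space update
    return np.array(ws)

rng = np.random.default_rng(1)
T, d, eta = 50, 4, 0.1
losses = rng.uniform(size=(T, d))
ws = omd_entropy(losses, eta)

# Theorem 4.7: OMD coincides with FoReL, whose minimizer over the simplex
# is the softmax of the negated cumulative losses.
for t in range(1, T):
    cum = eta * losses[:t].sum(axis=0)
    forel = np.exp(-cum) / np.exp(-cum).sum()
    assert np.allclose(ws[t], forel)
print("OMD == FoReL check passed")
```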
An Online Mirror Descent Step

The OMD step is defined by $\nabla R(y_{t+1}) = \nabla R(y_t) - \eta l_t$, meaning that:

    $1 + \log y_{t+1}^{(i)} = 1 + \log y_t^{(i)} - \eta l_t^{(i)} \;\Rightarrow\; y_{t+1}^{(i)} = y_t^{(i)} e^{-\eta l_t^{(i)}}$

which is both the definition of RWM and coincides with FoReL.

Exponentiated Gradient Algorithm
begin
    Set $y_1 = \mathbf{1}$
    Set $w_1 = \frac{y_1}{\|y_1\|_1} = \frac{1}{d} \mathbf{1}$
    for $t \in [1, T]$ do
        Play $w_t$ and get $l_t$
        Set $y_{t+1}^{(i)} = y_t^{(i)} e^{-\eta l_t^{(i)}}$
        Set $w_{t+1}^{(i)} = \frac{y_{t+1}^{(i)}}{\sum_{j=1}^{d} y_{t+1}^{(j)}}$
    end for
end

Exponentiated Gradient Algorithm Regret Analysis

Lemma 4.8  The regret of the algorithm is bounded as follows:

    $\forall u: \quad \sum_{t=1}^{T} (w_t - u)^T z_t \le R(u) - R(w_1) + \sum_{t=1}^{T} B_{R^*}(Z_{1:t} \,\|\, Z_{1:t-1})$

where $Z_{1:t} = -\sum_{i=1}^{t} z_i$, the learning rate $\eta$ is absorbed into $R$ (so that $w_t = \nabla R^*(Z_{1:t-1})$), and equality holds for $u = \arg\min_u \left( R(u) + \sum_{t=1}^{T} u^T z_t \right)$.

Proof: By the Fenchel-Young inequality,

    $R(u) + R^*(Z_{1:T}) \ge u^T Z_{1:T}$,

and from the definition of OMD:

    $w_t = \nabla R^*(Z_{1:t-1})$.

Now, using a telescopic sum, we get that:

    $R^*(Z_{1:T}) = R^*(0) + \sum_{t=1}^{T} \left( R^*(Z_{1:t}) - R^*(Z_{1:t-1}) \right)$

which, by adding and subtracting $\nabla R^*(Z_{1:t-1})^T z_t$ in each summand and then splitting, equals:

    $R^*(0) - \sum_{t=1}^{T} \nabla R^*(Z_{1:t-1})^T z_t + \sum_{t=1}^{T} \left( R^*(Z_{1:t}) - R^*(Z_{1:t-1}) + \nabla R^*(Z_{1:t-1})^T z_t \right)$
    $= R^*(0) - \sum_{t=1}^{T} w_t^T z_t + \sum_{t=1}^{T} B_{R^*}(Z_{1:t} \,\|\, Z_{1:t-1})$

(the last step uses $Z_{1:t} - Z_{1:t-1} = -z_t$, so each summand is exactly a Bregman divergence of $R^*$).
Combining everything yields the following inequalities:

    $u^T Z_{1:T} - R(u) \le R^*(Z_{1:T}) = R^*(0) - \sum_{t=1}^{T} w_t^T z_t + \sum_{t=1}^{T} B_{R^*}(Z_{1:t} \,\|\, Z_{1:t-1})$

    $\Rightarrow \quad \sum_{t=1}^{T} (w_t - u)^T z_t \le R(u) + R^*(0) + \sum_{t=1}^{T} B_{R^*}(Z_{1:t} \,\|\, Z_{1:t-1})$

Finally, we evaluate $R^*(0)$:

    $R^*(0) = \max_w \; 0^T w - R(w) = \max_w -R(w) = -\min_w R(w) = -R(w_1)$. □

Theorem 4.9  For the Normalized Exponentiated Gradient algorithm, the regret satisfies:

    $\mathrm{regret} = \sum_{t=1}^{T} (w_t - u)^T z_t \le \frac{\log d}{\eta} + \eta \sum_{t=1}^{T} \sum_i w_t^{(i)} \left( z_t^{(i)} \right)^2$

Proof: We apply the previous lemma with $R(w) = \frac{1}{\eta} \sum_i w_i \log w_i$ (the negative entropy scaled by $\frac{1}{\eta}$, so that $w_t = \nabla R^*(Z_{1:t-1})$ is exactly the EG iterate). By the regularization analysis, $R(u) - R(w_1) \le \frac{\log d}{\eta}$, and on the simplex $R^*(\theta) = \frac{1}{\eta} \log \left( \sum_{i=1}^{d} e^{\eta \theta_i} \right)$, so it suffices to show that $B_{R^*}(Z_{1:t} \,\|\, Z_{1:t-1}) \le \eta \sum_i w_t^{(i)} (z_t^{(i)})^2$. We evaluate $B_{R^*}(Z_{1:t} \,\|\, Z_{1:t-1})$ to get:

    $B_{R^*}(Z_{1:t} \,\|\, Z_{1:t-1}) = R^*(Z_{1:t}) - R^*(Z_{1:t-1}) + w_t^T z_t = \frac{1}{\eta} \log \frac{\sum_i e^{\eta Z_{1:t}^{(i)}}}{\sum_j e^{\eta Z_{1:t-1}^{(j)}}} + w_t^T z_t$

However, $e^{\eta Z_{1:t}^{(i)}} = e^{\eta Z_{1:t-1}^{(i)}} e^{-\eta z_t^{(i)}}$ and $w_t^{(i)} = \frac{e^{\eta Z_{1:t-1}^{(i)}}}{\sum_j e^{\eta Z_{1:t-1}^{(j)}}}$, and therefore:

    $B_{R^*}(Z_{1:t} \,\|\, Z_{1:t-1}) = \frac{1}{\eta} \log \left( \sum_i w_t^{(i)} e^{-\eta z_t^{(i)}} \right) + w_t^T z_t$

Since $e^{-a} \le 1 - a + a^2$ for $a \ge -1$, that value is bounded from above by:

    $\frac{1}{\eta} \log \left( 1 - \eta w_t^T z_t + \eta^2 \sum_i w_t^{(i)} (z_t^{(i)})^2 \right) + w_t^T z_t$

Since $e^a \ge 1 + a$ we have $\log(1 + a) \le a$, and we bound from above by:

    $B_{R^*}(Z_{1:t} \,\|\, Z_{1:t-1}) \le \frac{1}{\eta} \left( -\eta w_t^T z_t + \eta^2 \sum_i w_t^{(i)} (z_t^{(i)})^2 \right) + w_t^T z_t = \eta \sum_i w_t^{(i)} (z_t^{(i)})^2$. □
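As a final sanity check (our own, with arbitrary losses in $[0, 1]$), the following runs Normalized Exponentiated Gradient and verifies the bound of Theorem 4.9 against the best fixed expert in hindsight:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, eta = 200, 5, 0.05
z = rng.uniform(size=(T, d))            # linear losses z_t with entries in [0, 1]

y = np.ones(d)                          # y_1 = (1, ..., 1)
alg_loss, second_order = 0.0, 0.0
for t in range(T):
    w = y / y.sum()                     # w_t: normalized weights
    alg_loss += w @ z[t]                # accumulate w_t^T z_t
    second_order += eta * (w @ z[t] ** 2)
    y = y * np.exp(-eta * z[t])         # exponentiated gradient step

best = z.sum(axis=0).min()              # best fixed expert u in hindsight
regret = alg_loss - best
bound = np.log(d) / eta + second_order  # log(d)/eta + eta * sum_t sum_i w_t[i] z_t[i]^2
assert regret <= bound
print(f"regret {regret:.2f} <= bound {bound:.2f}")
```

The comparator $u$ here is a corner of the simplex (a single expert); the theorem bounds the regret against every fixed $u$ in the simplex, and corners attain the minimum of a linear loss.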
More informationEconomics 8105 Macroeconomic Theory Recitation 6
Economics 8105 Macroeconomic Theory Reciaion 6 Conor Ryan Ocober 11h, 2016 Ouline: Opimal Taxaion wih Governmen Invesmen 1 Governmen Expendiure in Producion In hese noes we will examine a model in which
More informationPredator - Prey Model Trajectories and the nonlinear conservation law
Predaor - Prey Model Trajecories and he nonlinear conservaion law James K. Peerson Deparmen of Biological Sciences and Deparmen of Mahemaical Sciences Clemson Universiy Ocober 28, 213 Ouline Drawing Trajecories
More informationThe Optimal Stopping Time for Selling an Asset When It Is Uncertain Whether the Price Process Is Increasing or Decreasing When the Horizon Is Infinite
American Journal of Operaions Research, 08, 8, 8-9 hp://wwwscirporg/journal/ajor ISSN Online: 60-8849 ISSN Prin: 60-8830 The Opimal Sopping Time for Selling an Asse When I Is Uncerain Wheher he Price Process
More informationAn random variable is a quantity that assumes different values with certain probabilities.
Probabiliy The probabiliy PrA) of an even A is a number in [, ] ha represens how likely A is o occur. The larger he value of PrA), he more likely he even is o occur. PrA) means he even mus occur. PrA)
More informationBook Corrections for Optimal Estimation of Dynamic Systems, 2 nd Edition
Boo Correcions for Opimal Esimaion of Dynamic Sysems, nd Ediion John L. Crassidis and John L. Junins November 17, 017 Chaper 1 This documen provides correcions for he boo: Crassidis, J.L., and Junins,
More informationMixing times and hitting times: lecture notes
Miing imes and hiing imes: lecure noes Yuval Peres Perla Sousi 1 Inroducion Miing imes and hiing imes are among he mos fundamenal noions associaed wih a finie Markov chain. A variey of ools have been developed
More informationMacroeconomic Theory Ph.D. Qualifying Examination Fall 2005 ANSWER EACH PART IN A SEPARATE BLUE BOOK. PART ONE: ANSWER IN BOOK 1 WEIGHT 1/3
Macroeconomic Theory Ph.D. Qualifying Examinaion Fall 2005 Comprehensive Examinaion UCLA Dep. of Economics You have 4 hours o complee he exam. There are hree pars o he exam. Answer all pars. Each par has
More informationt is a basis for the solution space to this system, then the matrix having these solutions as columns, t x 1 t, x 2 t,... x n t x 2 t...
Mah 228- Fri Mar 24 5.6 Marix exponenials and linear sysems: The analogy beween firs order sysems of linear differenial equaions (Chaper 5) and scalar linear differenial equaions (Chaper ) is much sronger
More information23.5. Half-Range Series. Introduction. Prerequisites. Learning Outcomes
Half-Range Series 2.5 Inroducion In his Secion we address he following problem: Can we find a Fourier series expansion of a funcion defined over a finie inerval? Of course we recognise ha such a funcion
More informationHamilton Jacobi equations
Hamilon Jacobi equaions Inoducion o PDE The rigorous suff from Evans, mosly. We discuss firs u + H( u = 0, (1 where H(p is convex, and superlinear a infiniy, H(p lim p p = + This by comes by inegraion
More informationChristos Papadimitriou & Luca Trevisan November 22, 2016
U.C. Bereley CS170: Algorihms Handou LN-11-22 Chrisos Papadimiriou & Luca Trevisan November 22, 2016 Sreaming algorihms In his lecure and he nex one we sudy memory-efficien algorihms ha process a sream
More informationT L. t=1. Proof of Lemma 1. Using the marginal cost accounting in Equation(4) and standard arguments. t )+Π RB. t )+K 1(Q RB
Elecronic Companion EC.1. Proofs of Technical Lemmas and Theorems LEMMA 1. Le C(RB) be he oal cos incurred by he RB policy. Then we have, T L E[C(RB)] 3 E[Z RB ]. (EC.1) Proof of Lemma 1. Using he marginal
More informationMODULE 3 FUNCTION OF A RANDOM VARIABLE AND ITS DISTRIBUTION LECTURES PROBABILITY DISTRIBUTION OF A FUNCTION OF A RANDOM VARIABLE
Topics MODULE 3 FUNCTION OF A RANDOM VARIABLE AND ITS DISTRIBUTION LECTURES 2-6 3. FUNCTION OF A RANDOM VARIABLE 3.2 PROBABILITY DISTRIBUTION OF A FUNCTION OF A RANDOM VARIABLE 3.3 EXPECTATION AND MOMENTS
More informationEssential Microeconomics : OPTIMAL CONTROL 1. Consider the following class of optimization problems
Essenial Microeconomics -- 6.5: OPIMAL CONROL Consider he following class of opimizaion problems Max{ U( k, x) + U+ ( k+ ) k+ k F( k, x)}. { x, k+ } = In he language of conrol heory, he vecor k is he vecor
More information