Lecture 9: September 25
10-725: Optimization, Fall 2012
Lecturer: Geoff Gordon/Ryan Tibshirani    Scribes: Xuezhi Wang, Subhodeep Moitra, Abhimanu Kumar

Note: LaTeX template courtesy of UC Berkeley EECS dept.

Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications. They may be distributed outside this class only with the permission of the Instructor.

9.1 Review of Generalized Gradient Descent

Generalized gradient descent is used to solve the problem

    min_x f(x) = g(x) + h(x),

where g is convex and differentiable, and h is convex but not necessarily differentiable. There are some notable special cases of generalized gradient descent (all therefore with the O(1/k) convergence rate):

- h = 0: this is simply gradient descent.

- h = I_C: this is called projected gradient descent. Given a closed, convex set C ⊆ R^n, we want

    min_{x ∈ C} g(x),

which is equivalent to

    min_x g(x) + I_C(x),

where

    I_C(x) = 0 if x ∈ C, ∞ if x ∉ C

is the indicator function of C. Hence

    prox_t(x) = argmin_z (1/(2t)) ||x - z||^2 + I_C(z) = argmin_{z ∈ C} ||x - z||^2 = P_C(x),

the projection operator onto C. The generalized gradient update step is therefore

    x+ = P_C(x - t ∇g(x)),

which performs the usual gradient descent update and then projects back onto C.

Some easy-to-project-onto sets C are:

1. Affine images C = {Ax + b : x ∈ R^n}
2. Solution sets of linear systems C = {x ∈ R^n : Ax = b}
3. The nonnegative orthant C = {x ∈ R^n : x ≥ 0} = R^n_+
4. Norm balls C = {x ∈ R^n : ||x||_p ≤ t}, for p = 1, 2, ∞
5. Some simple polyhedra and simple cones

However, it is worth noting that P_C can be very hard to compute even for a seemingly simple set C: for example, it is generally very hard to project onto the solution set of arbitrary linear inequalities, i.e., an arbitrary polyhedron C = {x ∈ R^n : Ax ≤ b}.
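As a concrete illustration of the projected update x+ = P_C(x - t ∇g(x)), here is a minimal Python sketch using the nonnegative orthant, whose projection is just coordinate-wise clipping at zero. The problem data (B, c) and the step-size choice are illustrative assumptions, not from the notes.

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, t, iters=500):
    """x+ = P_C(x - t * grad(x)): a gradient step, then projection back onto C."""
    x = x0
    for _ in range(iters):
        x = project(x - t * grad(x))
    return x

# Example: minimize g(x) = ||B x - c||^2 over C = R^n_+ (nonnegative orthant).
rng = np.random.default_rng(0)
B = rng.standard_normal((20, 5))
c = rng.standard_normal(20)
grad = lambda x: 2 * B.T @ (B @ x - c)
project = lambda x: np.maximum(x, 0.0)        # P_C for the nonnegative orthant
t = 1.0 / (2 * np.linalg.norm(B, 2) ** 2)     # t = 1/L, since L = 2 * sigma_max(B)^2
x_hat = projected_gradient_descent(grad, project, np.zeros(5), t)
```

Every iterate stays feasible by construction, since the projection is applied after each gradient step.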
A third special case is g = 0, which is called proximal minimization. The generalized gradient update step is then just a prox update,

    x+ = argmin_z (1/(2t)) ||x - z||^2 + h(z).

This is faster than the subgradient method (O(1/k) compared to O(1/√k)), but it is not implementable unless we know the prox function in closed form.

One issue regarding the generalized gradient descent method: we assume that the minimization prox_t(x) = argmin_z (1/(2t)) ||x - z||^2 + h(z) can be done exactly. What if we cannot evaluate the prox function? If we just treat it as another minimization problem and obtain an approximate solution, all bets are off and the practical convergence rate can be very slow. There are, however, some exceptions, such as partial proximal minimization [B94].

In the next section we will talk about acceleration; Figure 9.1 gives some flavor of it.

Figure 9.1: Comparison of different gradient methods

9.2 Acceleration for composite functions

There are four acceleration ideas proposed by Nesterov (1983, 1988, 2005, 2007). The first two were proposed for smooth functions. The third is a smoothing technique for nonsmooth functions, coupled with the original acceleration idea. The fourth is an acceleration idea for composite functions, which requires the entire history of previous steps and makes two prox calls in each step. In 2008, Beck and Teboulle extended Nesterov's 1983 idea to composite functions, using information from only the last two steps and making one prox call per step. In these notes we mainly focus on this idea.
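The Beck-Teboulle style scheme just described, which keeps only the last two iterates and makes one prox call per step, is developed in detail in the next subsection. As a preview, here is a minimal sketch with a generic prox operator; the quadratic test problem Q and the names used are illustrative assumptions.

```python
import numpy as np

def accelerated_generalized_gradient(grad, prox, x0, t, iters=500):
    """y = x^(k-1) + ((k-2)/(k+1)) (x^(k-1) - x^(k-2)); x^(k) = prox_t(y - t grad(y))."""
    x_prev, x = x0, x0                        # x^(-1) = x^(0)
    for k in range(1, iters + 1):
        y = x + (k - 2) / (k + 1) * (x - x_prev)
        x_prev, x = x, prox(y - t * grad(y), t)
    return x

# With h = 0 the prox is the identity, and we recover accelerated gradient descent.
Q = np.diag([1.0, 10.0, 100.0])               # g(x) = x^T Q x / 2, so L = 100
grad = lambda x: Q @ x
x_hat = accelerated_generalized_gradient(grad, lambda v, t: v, np.ones(3), t=1.0 / 100.0)
```

With step size t = 1/L, the convergence theorem proved later in these notes bounds the suboptimality after k steps by 2 ||x^(0) - x*||^2 / (t (k+1)^2), which for this example (x* = 0, 500 iterations) is about 2.4e-3.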
9.2.1 Accelerated generalized gradient method

The problem is

    min_{x ∈ R^n} g(x) + h(x),

where g is convex and differentiable, and h is convex but not necessarily differentiable. The accelerated generalized gradient descent method works as follows: choose any initial x^(0) = x^(-1) ∈ R^n, and repeat for k = 1, 2, 3, ...

    y = x^(k-1) + ((k-2)/(k+1)) (x^(k-1) - x^(k-2))
    x^(k) = prox_{t_k}(y - t_k ∇g(y))

Some notes about the accelerated generalized gradient method:

1. The first step k = 1 is just the usual generalized gradient update: x^(1) = prox_t(x^(0) - t ∇g(x^(0))).
2. After the first step, the method carries some momentum from previous iterations.
3. h = 0 gives the accelerated gradient method.
4. The method accelerates more toward the end of the iterations, as shown in Figure 9.2.

Figure 9.2: The acceleration coefficient varying with the number of iterations

Two examples (Figure 9.3) show the performance of accelerated gradient descent compared with usual gradient descent.

9.2.2 Reformulation of the accelerated generalized gradient method

To make the convergence analysis easier, we can reformulate the accelerated generalized gradient method; the reformulated updates are given below.
Figure 9.3: Performance of accelerated gradient descent compared with usual gradient descent

Initialize x^(0) = u^(0), and repeat for k = 1, 2, 3, ...

    y = (1 - θ_k) x^(k-1) + θ_k u^(k-1)
    x^(k) = prox_{t_k}(y - t_k ∇g(y))
    u^(k) = x^(k-1) + (1/θ_k) (x^(k) - x^(k-1))

where θ_k = 2/(k+1). Note that this reformulation is equivalent to the accelerated generalized gradient method presented in Section 9.2.1: since u^(k-1) = x^(k-2) + (1/θ_{k-1}) (x^(k-1) - x^(k-2)), we have

    y = (1 - θ_k) x^(k-1) + θ_k u^(k-1)
      = (1 - θ_k) x^(k-1) + θ_k x^(k-2) + (θ_k/θ_{k-1}) (x^(k-1) - x^(k-2))
      = x^(k-1) + (θ_k/θ_{k-1} - θ_k) (x^(k-1) - x^(k-2))
      = x^(k-1) + ((k-2)/(k+1)) (x^(k-1) - x^(k-2)).

9.3 Convergence Analysis

Just as for the generalized gradient method, we minimize f(x) = g(x) + h(x), assuming that g is convex and differentiable with ∇g Lipschitz with constant L > 0, and that h is convex and its prox function can be evaluated.

Theorem 9.1 The accelerated generalized gradient method with fixed step size t ≤ 1/L satisfies

    f(x^(k)) - f(x*) ≤ 2 ||x^(0) - x*||^2 / (t (k+1)^2).

This theorem tells us that the accelerated generalized gradient method achieves the optimal O(1/k^2) rate for first-order methods; equivalently, if we want to get f(x^(k)) - f(x*) ≤ ɛ, we only need O(1/√ɛ) iterations.

Now we prove this theorem.

Proof: In the proof we focus on one iteration and drop the k notation, so x+, u+ are the updated versions of x, u. First we bound both g(x+) and h(x+).
Since t ≤ 1/L and ∇g is Lipschitz with constant L > 0, we have

    g(x+) ≤ g(y) + ∇g(y)^T (x+ - y) + (L/2) ||x+ - y||^2
          ≤ g(y) + ∇g(y)^T (x+ - y) + (1/(2t)) ||x+ - y||^2.    (9.1)

Suppose v = prox_t(w) = argmin_v (1/(2t)) ||w - v||^2 + h(v). Then

    0 ∈ ∂( (1/(2t)) ||w - v||^2 + h(v) ) = -(1/t)(w - v) + ∂h(v),

so (1/t)(w - v) ∈ ∂h(v). According to the definition of subgradient, we have for all z,

    h(z) ≥ h(v) + (1/t)(w - v)^T (z - v),

that is,

    h(v) ≤ h(z) + (1/t)(v - w)^T (z - v)

for all z, w, and v = prox_t(w). Since x+ = prox_t(y - t ∇g(y)), substituting v = x+ and w = y - t ∇g(y) into the above inequality gives, for all z,

    h(x+) ≤ h(z) + (1/t)(x+ - y + t ∇g(y))^T (z - x+)
          = h(z) + (1/t)(x+ - y)^T (z - x+) + ∇g(y)^T (z - x+).    (9.2)

Adding inequalities (9.1) and (9.2), we get for all z,

    f(x+) ≤ g(y) + h(z) + (1/t)(x+ - y)^T (z - x+) + (1/(2t)) ||x+ - y||^2 + ∇g(y)^T (z - y).

Using g(z) ≥ g(y) + ∇g(y)^T (z - y), since g is convex, we further get

    f(x+) ≤ f(z) + (1/t)(x+ - y)^T (z - x+) + (1/(2t)) ||x+ - y||^2.

Now take z = x and z = x*, and multiply both sides by (1 - θ) and θ respectively:

    (1 - θ) f(x+) ≤ (1 - θ) f(x) + ((1 - θ)/t)(x+ - y)^T (x - x+) + ((1 - θ)/(2t)) ||x+ - y||^2
    θ f(x+) ≤ θ f(x*) + (θ/t)(x+ - y)^T (x* - x+) + (θ/(2t)) ||x+ - y||^2.

Adding these two inequalities together, we get

    f(x+) - f(x*) - (1 - θ)(f(x) - f(x*)) ≤ (1/t)(x+ - y)^T ((1 - θ)x + θx* - x+) + (1/(2t)) ||x+ - y||^2.    (9.3)

Using u+ = x + (1/θ)(x+ - x) and y = (1 - θ)x + θu, we have (1 - θ)x + θx* - x+ = θ(x* - u+) and x+ - y = θ(u+ - u). Substituting these into the right-hand side of (9.3), we have

    f(x+) - f(x*) - (1 - θ)(f(x) - f(x*)) ≤ (θ^2/(2t)) (u+ - u)^T [2(x* - u+) + (u+ - u)]
        = (θ^2/(2t)) [(x* - u) - (x* - u+)]^T [(x* - u+) + (x* - u)]
        = (θ^2/(2t)) ( ||x* - u||^2 - ||x* - u+||^2 ).

Back in k notation, this reads

    (t/θ_k^2)(f(x^(k)) - f(x*)) + (1/2) ||u^(k) - x*||^2 ≤ ((1 - θ_k) t/θ_k^2)(f(x^(k-1)) - f(x*)) + (1/2) ||u^(k-1) - x*||^2.
Using (1 - θ_k)/θ_k^2 ≤ 1/θ_{k-1}^2, we have

    (t/θ_k^2)(f(x^(k)) - f(x*)) + (1/2) ||u^(k) - x*||^2 ≤ (t/θ_{k-1}^2)(f(x^(k-1)) - f(x*)) + (1/2) ||u^(k-1) - x*||^2.

Iterating this inequality and using θ_1 = 1 and u^(0) = x^(0), we get

    (t/θ_k^2)(f(x^(k)) - f(x*)) + (1/2) ||u^(k) - x*||^2 ≤ ((1 - θ_1) t/θ_1^2)(f(x^(0)) - f(x*)) + (1/2) ||u^(0) - x*||^2 = (1/2) ||x^(0) - x*||^2.

Hence we conclude

    f(x^(k)) - f(x*) ≤ (θ_k^2/(2t)) ||x^(0) - x*||^2 = 2 ||x^(0) - x*||^2 / (t (k+1)^2).

9.4 Accelerated Backtracking Line Search

In the proof for accelerated generalized gradient descent we saw an O(1/k^2) convergence rate, which is optimal. Gradient descent, on the other hand, has an O(1/k) convergence rate. The proofs for these convergence rates are very different and are made under different assumptions. There are a number of different accelerated backtracking schemes, likewise developed under different criteria. We will examine one of the simpler schemes. The assumptions that this scheme needs to make are:

- The Lipschitz gradient condition: g(x+) ≤ g(y) + ∇g(y)^T (x+ - y) + (1/(2t)) ||x+ - y||^2.
- A condition on θ_k: (1 - θ_k) t_k / θ_k^2 ≤ t_{k-1} / θ_{k-1}^2.
- t_k is monotonically decreasing, i.e., t_k ≤ t_{k-1}. The problem with this condition is that if you choose a small step size initially, you will need to continue using small step sizes further on as well.

Algorithm: choose β < 1, t_0 = 1.

    for k = 1, 2, 3, ... till convergence:
        t_k = t_{k-1}
        x+ = prox_{t_k}(y - t_k ∇g(y))
        while g(x+) > g(y) + ∇g(y)^T (x+ - y) + (1/(2 t_k)) ||x+ - y||^2:
            t_k = β t_k
            x+ = prox_{t_k}(y - t_k ∇g(y))

This method checks whether g(x+) is small enough; otherwise it shrinks t_k by a factor β and updates x+. This method satisfies the required conditions and is thus able to achieve an O(1/k^2) convergence rate.
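The backtracking algorithm above can be sketched directly in Python. This is a hedged sketch, not a reference implementation: the least-squares test problem, the function names, and the choice β = 0.5 are illustrative assumptions.

```python
import numpy as np

def accel_backtracking(g, grad, prox, x0, beta=0.5, iters=500):
    """Accelerated generalized gradient with backtracking: each iteration starts
    from the previous t_k and shrinks it by beta until
    g(x+) <= g(y) + grad(y)^T (x+ - y) + ||x+ - y||^2 / (2 t_k)."""
    x_prev, x, t = x0, x0, 1.0                # t_0 = 1
    for k in range(1, iters + 1):
        y = x + (k - 2) / (k + 1) * (x - x_prev)
        gy, dy = g(y), grad(y)
        while True:
            x_plus = prox(y - t * dy, t)
            d = x_plus - y
            if g(x_plus) <= gy + dy @ d + d @ d / (2 * t):
                break
            t *= beta                         # shrink; t_k stays monotonically decreasing
        x_prev, x = x, x_plus
    return x

# Illustrative least-squares example with h = 0 (identity prox).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)
g = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
x_hat = accel_backtracking(g, grad, lambda v, t: v, np.zeros(5))
```

Because t is carried over between iterations and only ever shrunk, this implements exactly the monotone-step-size condition discussed above, including its drawback: an early shrink is never undone.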
9.4.1 Convergence rates

From the above discussion, the following theorem summarizes the O(1/k^2) convergence rate.

Theorem 9.2 The accelerated generalized gradient method with backtracking satisfies

    f(x^(k)) - f(x*) ≤ 2 ||x^(0) - x*||^2 / (t_min (k+1)^2),

where t_min = min_{i=1,...,k} t_i.

9.5 FISTA

The Fast Iterative Soft-Thresholding Algorithm (FISTA) is an accelerated version of ISTA, which applies to problems with a convex differentiable objective plus an ℓ1 norm, such as the lasso. The lasso problem is defined as

    min_x (1/2) ||y - Ax||^2 + λ ||x||_1.

9.5.1 ISTA solution

This is the solution obtained by the normal generalized gradient method, also known as the iterative soft-thresholding algorithm (ISTA); see Lecture 8:

    x^(k) = S_{λ t_k}(x^(k-1) + t_k A^T (y - A x^(k-1))),    k = 1, 2, 3, ...

where S_λ(·) is the soft-thresholding operator,

    [S_λ(x)]_i = x_i - λ   if x_i > λ
               = 0         if -λ ≤ x_i ≤ λ
               = x_i + λ   if x_i < -λ.

This is obtained by solving the prox function:

    prox_t(x) = argmin_{z ∈ R^n} (1/(2t)) ||x - z||^2 + λ ||z||_1 = S_{λt}(x).

9.5.2 FISTA solution

The accelerated version involves solving the same prox problem, but with an additional momentum vector added to the input of the prox function:

    v = x^(k-1) + ((k-2)/(k+1)) (x^(k-1) - x^(k-2))
    x^(k) = S_{λ t_k}(v + t_k A^T (y - A v)),    k = 1, 2, 3, ...

Here we show two images comparing the performance of ISTA vs FISTA. We can see that FISTA clearly beats ISTA, converging an order of magnitude faster.
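Both updates can be sketched compactly; the synthetic problem data (A, y, the choice λ = 0.5) is an illustrative assumption. ISTA and FISTA differ only in whether the gradient step is taken at x^(k-1) or at the momentum point v.

```python
import numpy as np

def soft_threshold(x, lam):
    """[S_lam(x)]_i: shrink each coordinate toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam, iters=300):
    t = 1.0 / np.linalg.norm(A, 2) ** 2       # fixed step t = 1/L, L = sigma_max(A)^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + t * A.T @ (y - A @ x), lam * t)
    return x

def fista(A, y, lam, iters=300):
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    x_prev = x = np.zeros(A.shape[1])
    for k in range(1, iters + 1):
        v = x + (k - 2) / (k + 1) * (x - x_prev)              # momentum point
        x_prev, x = x, soft_threshold(v + t * A.T @ (y - A @ v), lam * t)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]                 # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = fista(A, y, lam=0.5)
```

The elementwise soft-thresholding produces exact zeros, which is what makes the lasso iterates sparse.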
Figure 9.4: Performance of ISTA vs FISTA for lasso regression (n = 100, p = 500)

Figure 9.5: Performance of ISTA vs FISTA for lasso logistic regression (n = 100, p = 500)

9.6 Failure Cases for Acceleration

Acceleration achieves the optimal O(1/k^2) convergence rate for gradient-based methods. However, it does not do so under all conditions. In some cases it might perform similarly to non-accelerated methods, and in others it might actually hurt performance. We now present some cases in which acceleration fails to do well.
9.6.1 Warm Starts

In iterative algorithms, warm starting can be an effective strategy to speed up convergence. For example, in cross-validation runs for the lasso, one can use the optimal x̂ found at the previous tuning parameter value as a warm start for the next one. Let the tuning parameters for the lasso be λ_1 ≥ λ_2 ≥ ... ≥ λ_r. When solving for λ_1, initialize x^(0) = 0 and record the solution x̂(λ_1). Then, reusing this value, when solving for λ_j, initialize x^(0)(λ_j) = x̂(λ_{j-1}). It has been observed that over a fine grid of λ values, generalized gradient descent can perform just as well as the accelerated version when using warm starts.

9.6.2 Matrix Completion

In the case of matrix completion, acceleration and even backtracking can hurt performance. The matrix completion problem is described in Lecture 8. Briefly: given a matrix A, of which only the entries (i, j) ∈ Ω are visible to you, you want to fill in the rest of the entries while keeping the matrix low rank. We solve

    min_X (1/2) ||P_Ω(A) - P_Ω(X)||_F^2 + λ ||X||_*

where ||X||_* = Σ_{i=1}^r σ_i(X) is the nuclear norm, r is the rank of X, and P_Ω(·) is the projection operator,

    [P_Ω(X)]_ij = X_ij   if (i, j) ∈ Ω
                = 0      if (i, j) ∉ Ω.

The generalized gradient updates, also known as the soft-impute algorithm, are

    X+ = S_λ(P_Ω(A) + P_Ω^⊥(X)),

where P_Ω^⊥ projects onto the unobserved entries and S_λ(·) is the matrix soft-thresholding operator, which requires an SVD to compute:

    S_λ(X) = U Σ_λ V^T,    (Σ_λ)_ii = max{Σ_ii - λ, 0}.

Calculating the SVD can be expensive, costing up to O(mn^2) operations. This is costly for the backtracking search method, since each backtracking loop evaluates the generalized gradient G_t(X) at various values of t, and this involves solving the prox function. That equates to calling the SVD several times and can be slow. The matrix completion problem also does not work well with acceleration, since acceleration changes the argument passed to the prox function: we pass y - t ∇g(y) instead of x - t ∇g(x). This can make the argument matrix high rank, which makes the computation of the SVD more expensive. Figure 9.6 shows an example where using acceleration performs worse than soft-impute [M].
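The soft-impute update can be sketched in a few lines of dense NumPy; the rank-1 test matrix and λ = 0.1 are illustrative assumptions. Note that a practical implementation would exploit structure in the SVD input rather than, as here, computing a full SVD of a dense matrix at every iteration.

```python
import numpy as np

def matrix_soft_threshold(X, lam):
    """S_lam(X) = U diag(max(sigma_i - lam, 0)) V^T, via a full SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt    # scale columns of U by shrunk sigmas

def soft_impute(A, mask, lam, iters=100):
    """X+ = S_lam(P_Omega(A) + P_Omega_perp(X)): take observed entries from A,
    unobserved entries from the current iterate, then matrix soft-threshold."""
    X = np.zeros_like(A)
    for _ in range(iters):
        X = matrix_soft_threshold(np.where(mask, A, X), lam)
    return X

# Toy example: a rank-1 matrix with roughly half of its entries observed.
rng = np.random.default_rng(3)
A = np.outer(rng.standard_normal(15), rng.standard_normal(10))
mask = rng.random(A.shape) < 0.5              # Omega = set of observed entries
X_hat = soft_impute(A, mask, lam=0.1)
```

Since g(X) = (1/2) ||P_Ω(A) - P_Ω(X)||_F^2 has a Lipschitz gradient with constant L = 1, this is proximal gradient descent with fixed step t = 1, so the objective is non-increasing across iterations.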
Figure 9.6: Performance of soft-impute vs accelerated gradient descent on a small and a big problem

References

[M]   R. Mazumder, T. Hastie, and R. Tibshirani. Spectral Regularization Algorithms for Learning Large Incomplete Matrices. The Journal of Machine Learning Research, 11, 2010.

[B94] D. Bertsekas and P. Tseng. Partial proximal minimization algorithms for convex programming. 1994.
More informationCHAPTER 12 DIRECT CURRENT CIRCUITS
CHAPTER 12 DIRECT CURRENT CIUITS DIRECT CURRENT CIUITS 257 12.1 RESISTORS IN SERIES AND IN PARALLEL When wo resisors are conneced ogeher as shown in Figure 12.1 we said ha hey are conneced in series. As
More informationSTATE-SPACE MODELLING. A mass balance across the tank gives:
B. Lennox and N.F. Thornhill, 9, Sae Space Modelling, IChemE Process Managemen and Conrol Subjec Group Newsleer STE-SPACE MODELLING Inroducion: Over he pas decade or so here has been an ever increasing
More informationFrom Particles to Rigid Bodies
Rigid Body Dynamics From Paricles o Rigid Bodies Paricles No roaions Linear velociy v only Rigid bodies Body roaions Linear velociy v Angular velociy ω Rigid Bodies Rigid bodies have boh a posiion and
More information14 Autoregressive Moving Average Models
14 Auoregressive Moving Average Models In his chaper an imporan parameric family of saionary ime series is inroduced, he family of he auoregressive moving average, or ARMA, processes. For a large class
More informationNotes for Lecture 17-18
U.C. Berkeley CS278: Compuaional Complexiy Handou N7-8 Professor Luca Trevisan April 3-8, 2008 Noes for Lecure 7-8 In hese wo lecures we prove he firs half of he PCP Theorem, he Amplificaion Lemma, up
More informationSolutions from Chapter 9.1 and 9.2
Soluions from Chaper 9 and 92 Secion 9 Problem # This basically boils down o an exercise in he chain rule from calculus We are looking for soluions of he form: u( x) = f( k x c) where k x R 3 and k is
More information23.5. Half-Range Series. Introduction. Prerequisites. Learning Outcomes
Half-Range Series 2.5 Inroducion In his Secion we address he following problem: Can we find a Fourier series expansion of a funcion defined over a finie inerval? Of course we recognise ha such a funcion
More informationState-Space Models. Initialization, Estimation and Smoothing of the Kalman Filter
Sae-Space Models Iniializaion, Esimaion and Smoohing of he Kalman Filer Iniializaion of he Kalman Filer The Kalman filer shows how o updae pas predicors and he corresponding predicion error variances when
More informationODEs II, Lecture 1: Homogeneous Linear Systems - I. Mike Raugh 1. March 8, 2004
ODEs II, Lecure : Homogeneous Linear Sysems - I Mike Raugh March 8, 4 Inroducion. In he firs lecure we discussed a sysem of linear ODEs for modeling he excreion of lead from he human body, saw how o ransform
More informationEchocardiography Project and Finite Fourier Series
Echocardiography Projec and Finie Fourier Series 1 U M An echocardiagram is a plo of how a porion of he hear moves as he funcion of ime over he one or more hearbea cycles If he hearbea repeas iself every
More informationLecture 33: November 29
36-705: Inermediae Saisics Fall 2017 Lecurer: Siva Balakrishnan Lecure 33: November 29 Today we will coninue discussing he boosrap, and hen ry o undersand why i works in a simple case. In he las lecure
More informationMath 527 Lecture 6: Hamilton-Jacobi Equation: Explicit Formulas
Mah 527 Lecure 6: Hamilon-Jacobi Equaion: Explici Formulas Sep. 23, 2 Mehod of characerisics. We r o appl he mehod of characerisics o he Hamilon-Jacobi equaion: u +Hx, Du = in R n, u = g on R n =. 2 To
More informationNetwork Newton Distributed Optimization Methods
Nework Newon Disribued Opimizaion Mehods Aryan Mokhari, Qing Ling, and Alejandro Ribeiro Absrac We sudy he problem of minimizing a sum of convex objecive funcions where he componens of he objecive are
More informationSlide03 Historical Overview Haykin Chapter 3 (Chap 1, 3, 3rd Ed): Single-Layer Perceptrons Multiple Faces of a Single Neuron Part I: Adaptive Filter
Slide3 Haykin Chaper 3 (Chap, 3, 3rd Ed): Single-Layer Perceprons CPSC 636-6 Insrucor: Yoonsuck Choe Hisorical Overview McCulloch and Pis (943): neural neworks as compuing machines. Hebb (949): posulaed
More informationFinal Spring 2007
.615 Final Spring 7 Overview The purpose of he final exam is o calculae he MHD β limi in a high-bea oroidal okamak agains he dangerous n = 1 exernal ballooning-kink mode. Effecively, his corresponds o
More information15. Vector Valued Functions
1. Vecor Valued Funcions Up o his poin, we have presened vecors wih consan componens, for example, 1, and,,4. However, we can allow he componens of a vecor o be funcions of a common variable. For example,
More informationNEWTON S SECOND LAW OF MOTION
Course and Secion Dae Names NEWTON S SECOND LAW OF MOTION The acceleraion of an objec is defined as he rae of change of elociy. If he elociy changes by an amoun in a ime, hen he aerage acceleraion during
More informationThe Contradiction within Equations of Motion with Constant Acceleration
The Conradicion wihin Equaions of Moion wih Consan Acceleraion Louai Hassan Elzein Basheir (Daed: July 7, 0 This paper is prepared o demonsrae he violaion of rules of mahemaics in he algebraic derivaion
More informationChapter 7: Solving Trig Equations
Haberman MTH Secion I: The Trigonomeric Funcions Chaper 7: Solving Trig Equaions Le s sar by solving a couple of equaions ha involve he sine funcion EXAMPLE a: Solve he equaion sin( ) The inverse funcions
More informationV The Fourier Transform
V he Fourier ransform Lecure noes by Assaf al 1. Moivaion Imagine playing hree noes on he piano, recording hem (soring hem as a.wav or.mp3 file), and hen ploing he resuling waveform on he compuer: 100Hz
More information18 Biological models with discrete time
8 Biological models wih discree ime The mos imporan applicaions, however, may be pedagogical. The elegan body of mahemaical heory peraining o linear sysems (Fourier analysis, orhogonal funcions, and so
More informationArticle from. Predictive Analytics and Futurism. July 2016 Issue 13
Aricle from Predicive Analyics and Fuurism July 6 Issue An Inroducion o Incremenal Learning By Qiang Wu and Dave Snell Machine learning provides useful ools for predicive analyics The ypical machine learning
More informationThis document was generated at 1:04 PM, 09/10/13 Copyright 2013 Richard T. Woodward. 4. End points and transversality conditions AGEC
his documen was generaed a 1:4 PM, 9/1/13 Copyrigh 213 Richard. Woodward 4. End poins and ransversaliy condiions AGEC 637-213 F z d Recall from Lecure 3 ha a ypical opimal conrol problem is o maimize (,,
More informationHW6: MRI Imaging Pulse Sequences (7 Problems for 100 pts)
HW6: MRI Imaging Pulse Sequences (7 Problems for 100 ps) GOAL The overall goal of HW6 is o beer undersand pulse sequences for MRI image reconsrucion. OBJECTIVES 1) Design a spin echo pulse sequence o image
More information= ( ) ) or a system of differential equations with continuous parametrization (T = R
XIII. DIFFERENCE AND DIFFERENTIAL EQUATIONS Ofen funcions, or a sysem of funcion, are paramerized in erms of some variable, usually denoed as and inerpreed as ime. The variable is wrien as a funcion of
More informationMATH 4330/5330, Fourier Analysis Section 6, Proof of Fourier s Theorem for Pointwise Convergence
MATH 433/533, Fourier Analysis Secion 6, Proof of Fourier s Theorem for Poinwise Convergence Firs, some commens abou inegraing periodic funcions. If g is a periodic funcion, g(x + ) g(x) for all real x,
More information