Primal-Dual Splitting: Recent Improvements and Variants


Thomas Pock (Institute for Computer Graphics and Vision, TU Graz, Austria)
Antonin Chambolle (CMAP & CNRS, École Polytechnique, France)

The proximal point algorithm

Consider the problem of finding a point x in a Hilbert space such that T(x) ∋ 0, where T is a maximal monotone operator.
The proximal point algorithm [Martinet 70], [Rockafellar 76]: choose some x_0 and generate the sequence {x_n} for each n ≥ 0 via

    0 ∈ T(x_{n+1}) + λ_n^{-1} (x_{n+1} - x_n)

The proximal point algorithm can be written as [Minty 62]

    x_{n+1} = (I + λ_n T)^{-1} (x_n),

where (I + λ_n T)^{-1} is the resolvent operator.
Let F(x) be a proper, l.s.c., convex function and T(x) = ∂F(x); then the algorithm reads

    x_{n+1} = argmin_x F(x) + (1 / (2 λ_n)) ||x - x_n||^2
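To make the iteration concrete, here is a minimal Python sketch (my illustration, not from the slides) that runs the proximal point algorithm for the one-dimensional function F(x) = |x|, whose resolvent is the soft-thresholding operator; the step size and iteration count are arbitrary choices.

```python
import numpy as np

def prox_abs(x, lam):
    # Resolvent of T = d|.|: argmin_z |z| + (1/(2*lam)) * (z - x)^2, i.e. soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def proximal_point(x0, lam=0.5, iters=20):
    # x_{n+1} = (I + lam*T)^{-1}(x_n) with T the subdifferential of |x|
    x = x0
    for _ in range(iters):
        x = prox_abs(x, lam)
    return x

print(proximal_point(5.0))  # converges to the minimizer x* = 0
```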

The proximal point algorithm is O(1/n)

Convergence of the proximal point algorithm in the case of summable errors was proven in [Rockafellar 76], and in the case of non-summable errors recently in [Zaslavski 11].
Moreover, if λ_{n+1} ≥ λ_n, one has [Dong 12]

    ||x_{n+1} - (I + λ_{n+1} T)^{-1}(x_{n+1})||_2^2  ≤  ||x_n - (I + λ_n T)^{-1}(x_n)||_2^2,

from which it follows that

    ||x_n - (I + λ_n T)^{-1}(x_n)||_2^2  ∈  O(1/n)

Unfortunately, computing the resolvent is often as difficult as solving the original problem.
However, in case T(x) can be written as the sum of two operators, each with a simple-to-compute resolvent, things change...

A class of problems

Let us consider the following class of structured convex optimization problems:

    min_{x ∈ X} F(Kx) + G(x),

where K : X → Y is a linear and continuous operator from a Hilbert space X to a Hilbert space Y, and F, G are convex, proper, l.s.c. functions.
Note that the subdifferential of this problem is the sum of two operators:

    T(x) = K^* ∂F(Kx) + ∂G(x)

Main assumption: F, G are simple in the sense that they have easy-to-compute resolvent operators:

    (I + λ ∂F)^{-1}(p̂) = argmin_p ||p - p̂||^2 / (2λ) + F(p)
    (I + λ ∂G)^{-1}(x̂) = argmin_x ||x - x̂||^2 / (2λ) + G(x)

It turns out that many standard problems can be cast in this framework.
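As an illustration of what "simple" resolvents look like, here is a small Python sketch (my addition, not from the slides) of two standard proximal maps: soft-thresholding for F(p) = ||p||_1 and the closed-form prox of a quadratic G(x) = (μ/2)||x - f||^2.

```python
import numpy as np

def prox_l1(p_hat, lam):
    # (I + lam*dF)^{-1}(p_hat) for F(p) = ||p||_1: componentwise soft-thresholding
    return np.sign(p_hat) * np.maximum(np.abs(p_hat) - lam, 0.0)

def prox_quadratic(x_hat, lam, f, mu):
    # (I + lam*dG)^{-1}(x_hat) for G(x) = (mu/2)*||x - f||^2, available in closed form
    return (x_hat + lam * mu * f) / (1.0 + lam * mu)
```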

Some examples

The ROF model:

    min_u ||∇u||_{2,1} + (λ/2) ||u - f||_2^2

Basis pursuit problem (LASSO):

    min_x ||x||_1 + (λ/2) ||Ax - b||_2^2

Linear support vector machine:

    min_{w,b} (λ/2) ||w||^2 + Σ_{i=1}^n max(0, 1 - y_i (⟨w, x_i⟩ + b))

General linear programming problems:

    min_x ⟨c, x⟩    s.t.  Ax = b,  x ≥ 0
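As a worked instance (my addition), the LASSO problem fits the template min F(Kx) + G(x) with one of several possible splittings; this is the splitting used in the code sketches further below.

```latex
% One possible splitting of the LASSO problem into the F(Kx) + G(x) template:
%   K = A,  F(y) = (lambda/2)*||y - b||^2,  G(x) = ||x||_1,
% so that both resolvents are simple: the prox of F is a closed-form quadratic step,
% the prox of G is soft-thresholding.
\[
  \min_x \|x\|_1 + \tfrac{\lambda}{2}\|Ax-b\|_2^2
  \;=\;
  \min_x F(Kx) + G(x),
  \qquad K = A,\; F(y)=\tfrac{\lambda}{2}\|y-b\|_2^2,\; G(x)=\|x\|_1 .
\]
```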

Primal, dual, primal-dual

The real power of convex optimization comes through duality.
Recall the convex conjugate:

    F^*(y) = max_{x ∈ X} ⟨x, y⟩ - F(x)

We can transform our initial problem:

    min_{x ∈ X} F(Kx) + G(x)                                   (Primal)

    min_{x ∈ X} max_{y ∈ Y} ⟨Kx, y⟩ + G(x) - F^*(y)            (Primal-Dual)

    max_{y ∈ Y} -(F^*(y) + G^*(-K^* y))                        (Dual)

There is a primal-dual gap

    𝒢(x, y) = F(Kx) + G(x) + F^*(y) + G^*(-K^* y)

that vanishes if and only if (x, y) is optimal.
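Continuing the LASSO splitting sketched above as a worked example (my addition), the conjugates and the resulting dual problem read as follows.

```latex
% Conjugates for the splitting K = A, F(y) = (lambda/2)*||y - b||^2, G(x) = ||x||_1:
\[
  F^*(y) = \tfrac{1}{2\lambda}\|y\|^2 + \langle y, b\rangle, \qquad
  G^*(z) = \begin{cases} 0 & \|z\|_\infty \le 1, \\ +\infty & \text{otherwise,} \end{cases}
\]
% so the dual problem  max_y -(F^*(y) + G^*(-A^T y))  becomes
\[
  \max_{\|A^\top y\|_\infty \le 1} \; -\langle y, b\rangle - \tfrac{1}{2\lambda}\|y\|^2 .
\]
```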

Optimality conditions

We focus on the primal-dual formulation:

    min_{x ∈ X} max_{y ∈ Y} ⟨Kx, y⟩ + G(x) - F^*(y)

We assume there exists a saddle point (x̂, ŷ) ∈ X × Y which satisfies the Euler-Lagrange equations

    K x̂ - ∂F^*(ŷ) ∋ 0
    K^* ŷ + ∂G(x̂) ∋ 0

[Figure: example of a saddle point of a convex-concave function.]

Known algorithms

Many standard algorithms exist to solve the considered class of problems:
- Classical Arrow-Hurwicz method [Arrow, Hurwicz 58]
- Extragradient methods [Korpelevich 76], [Popov 80]
- Douglas-Rachford splitting [Mercier, Lions 79]
- Alternating direction method of multipliers [Glowinski, Marroco 75], [Gabay, Mercier 76], [Eckstein, Bertsekas 89], [Goldstein, Osher 09]

Many more algorithms for special cases: [Nesterov 03], [Daubechies, Defrise, De Mol 04], [Combettes, Pesquet 08], [Beck, Teboulle 09], [Raguet, Fadili, Peyré 11]

A first-order primal-dual algorithm

Proposed in a series of papers: [P., Cremers, Bischof, Chambolle 09], [Chambolle, P. 10], [P., Chambolle 11]

Initialization: choose T, Σ ∈ S_{++}, θ ∈ [0, 1], (x_0, y_0) ∈ X × Y.
Iterations (n ≥ 0): update x_n, y_n as follows:

    x_{n+1} = (I + T ∂G)^{-1} (x_n - T K^* y_n)
    y_{n+1} = (I + Σ ∂F^*)^{-1} (y_n + Σ K (x_{n+1} + θ (x_{n+1} - x_n)))

- Alternates gradient descent in x and gradient ascent in y
- Linear extrapolation of the iterates of x in the y step
- T, Σ can be seen as preconditioning matrices
- Can be derived from a preconditioned ADMM algorithm
- Can be seen as a relaxed Arrow-Hurwicz scheme
- Can be seen as an approximate extragradient scheme
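A minimal Python sketch of these updates (my own illustration, not the authors' code), using scalar steps T = τI, Σ = σI and the LASSO splitting from above; the function names, step-size choice and iteration count are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Prox of t * ||.||_1 (componentwise shrinkage)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def primal_dual_lasso(A, b, lam, iters=500, theta=1.0):
    """Primal-dual iteration for min_x ||x||_1 + (lam/2)*||Ax - b||^2,
    with K = A, G = ||.||_1, F(y) = (lam/2)*||y - b||^2 and scalar steps."""
    L = np.linalg.norm(A, 2)      # operator norm ||A||
    tau = sigma = 0.99 / L        # so that tau * sigma * ||A||^2 < 1
    x = np.zeros(A.shape[1])
    y = np.zeros(A.shape[0])
    x_old = x.copy()
    for _ in range(iters):
        x_old[:] = x
        # Primal step: resolvent of tau * dG, i.e. soft-thresholding
        x = soft_threshold(x - tau * (A.T @ y), tau)
        # Linear extrapolation of the primal iterate
        x_bar = x + theta * (x - x_old)
        # Dual step: resolvent of sigma * dF^*, available in closed form here
        y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma / lam)
    return x
```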

Relations to the proximal point algorithm

Recall the proximal point algorithm:

    0 ∈ T(x_{n+1}) + λ_n^{-1} (x_{n+1} - x_n),    n ≥ 0

The iterations of the primal-dual algorithm can be rewritten as

    0 ∈ ( ∂G(x_{n+1}) + K^* y_{n+1},  ∂F^*(y_{n+1}) - K x_{n+1} ) + M ( x_{n+1} - x_n,  y_{n+1} - y_n ),    n ≥ 0,

with

    M = [ T^{-1}, -K^* ; -θK, Σ^{-1} ]

This is exactly the proximal point algorithm, but with the norm induced by M.
Convergence of the proximal point algorithm is ensured if M is symmetric and positive definite.
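To see where the step-size condition of the following convergence theorem comes from, here is a short calculation (my addition) for the scalar case T = τI, Σ = σI and θ = 1, in which M is symmetric.

```latex
% For T = tau*I, Sigma = sigma*I and theta = 1:
%   <M(x,y),(x,y)> = (1/tau)*||x||^2 + (1/sigma)*||y||^2 - 2*<Kx, y>,
% and bounding <Kx, y> by ||K||*||x||*||y|| shows positive definiteness
% whenever tau*sigma*||K||^2 < 1.
\[
  \langle M(x,y),(x,y)\rangle
  = \tfrac{1}{\tau}\|x\|^2 + \tfrac{1}{\sigma}\|y\|^2 - 2\langle Kx, y\rangle
  \ge \tfrac{1}{\tau}\|x\|^2 + \tfrac{1}{\sigma}\|y\|^2 - 2\|K\|\,\|x\|\,\|y\| > 0
  \quad\text{for all } (x,y)\neq 0 \text{ if } \tau\sigma\|K\|^2 < 1 .
\]
```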

Convergence

Theorem: Let θ = 1, and let T and Σ be symmetric positive definite maps satisfying ||Σ^{1/2} K T^{1/2}||^2 < 1; then the primal-dual algorithm converges to a saddle point.

The algorithm gives different convergence rates on different problem classes [Chambolle, P. 10]:
- F and G nonsmooth: O(1/n)
- F or G uniformly convex: O(1/n^2)
- F and G uniformly convex: O(ω^n), ω < 1

This coincides with the best rates known so far for first-order methods.
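In the scalar case T = τI, Σ = σI the condition reduces to τσ||K||^2 < 1. A small Python sketch (my addition) that estimates ||K|| by power iteration on K^T K and picks admissible steps:

```python
import numpy as np

def estimate_operator_norm(K, iters=50):
    # Power iteration on K^T K: the limit yields the largest singular value of K
    x = np.random.default_rng(0).standard_normal(K.shape[1])
    for _ in range(iters):
        x = K.T @ (K @ x)
        x /= np.linalg.norm(x)
    return np.sqrt(np.linalg.norm(K.T @ (K @ x)))

def scalar_steps(K, safety=0.99):
    # Choose tau = sigma such that tau * sigma * ||K||^2 < 1
    L = estimate_operator_norm(K)
    tau = sigma = safety / L
    return tau, sigma
```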

Extensions

Since the primal-dual algorithm is in principle a proximal point algorithm, we can perform an additional overrelaxation [Gol'shtein, Tret'yakov 79]:

    x_{n+1/2} = (I + T ∂G)^{-1} (x_n - T K^* y_n)
    y_{n+1/2} = (I + Σ ∂F^*)^{-1} (y_n + Σ K (2 x_{n+1/2} - x_n))
    (x_{n+1}, y_{n+1}) = (x_{n+1/2}, y_{n+1/2}) + γ (x_{n+1/2} - x_n, y_{n+1/2} - y_n),

where γ ∈ [0, 1). This speeds up convergence in many cases.

The algorithm has recently been generalized by Laurent Condat to the case where the function G(x) can be written as G(x) = G_1(x) + G_2(x), where G_1(x) is convex, proper, l.s.c. and G_2(x) is differentiable with Lipschitz continuous gradient. This has advantages when there is some smoothness in the problem.
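A sketch of how the overrelaxation wraps one plain primal-dual step (my illustration; `pd_step` is an assumed helper that performs the two resolvent updates above and returns the half-step iterates):

```python
def overrelaxed_step(x, y, pd_step, gamma=0.95):
    # One half-step of the basic primal-dual iteration, then overrelaxation:
    # (x, y) <- (x_half, y_half) + gamma * ((x_half, y_half) - (x, y)), gamma in [0, 1)
    x_half, y_half = pd_step(x, y)
    x_new = x_half + gamma * (x_half - x)
    y_new = y_half + gamma * (y_half - y)
    return x_new, y_new
```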

Effect of the overrelaxation

Application to the ROF model using the accelerated O(1/n^2) algorithm.
[Convergence plot]
- No overrelaxation: γ = 0, 1050 iterations
- Overrelaxation: γ = 0.95, 700 iterations

α-preconditioning

It is important to choose the preconditioner such that the prox operators are still easy to compute. We therefore restrict the preconditioning matrices to diagonal matrices.

Lemma: Let T = diag(τ_1, ..., τ_n) and Σ = diag(σ_1, ..., σ_m) with

    τ_j = 1 / Σ_{i=1}^m |K_{i,j}|^{2-α},    σ_i = 1 / Σ_{j=1}^n |K_{i,j}|^α ;

then for any α ∈ [0, 2],

    ||Σ^{1/2} K T^{1/2}||^2 = sup_{x ∈ X, x ≠ 0} ||Σ^{1/2} K T^{1/2} x||_2^2 / ||x||_2^2 ≤ 1.

The parameter α can be used to vary between pure primal (α = 0) and pure dual (α = 2) preconditioning.
It turns out that for α = 0, the primal-dual algorithm is equivalent to the alternating step method [Eckstein 89].
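A direct Python sketch of the lemma's diagonal preconditioners (my addition; K is assumed here to be a dense NumPy array, and the small `eps` guard against empty rows or columns is an illustrative safeguard):

```python
import numpy as np

def alpha_preconditioners(K, alpha=1.0, eps=1e-12):
    """Diagonal preconditioners tau_j = 1/sum_i |K_ij|^(2-alpha) and
    sigma_i = 1/sum_j |K_ij|^alpha, for alpha in [0, 2]."""
    absK = np.abs(K)
    tau = 1.0 / np.maximum(np.sum(absK ** (2.0 - alpha), axis=0), eps)   # one value per column j
    sigma = 1.0 / np.maximum(np.sum(absK ** alpha, axis=1), eps)         # one value per row i
    return tau, sigma
```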

Relations to matrix scaling

The α-preconditioner tries to normalize the row and column norms of the matrix K.
This technique is known as matrix scaling or matrix binormalization [Livne, Golub 04], [Bradley 10].
Given a symmetric n × n matrix A, find a diagonal scaling matrix D = diag(d_1, ..., d_n) such that DAD has row and column 2-norms equal to one.
This is equivalent to finding a positive solution to the equations

    Σ_{j=1}^n d_i A_{i,j}^2 d_j = 1,    i = 1, ..., n.

Extensions to general matrices have been discussed, but most theoretical results hold only in the symmetric case.

Left-right preconditioning

For the rectangular case we require that Σ^{1/2} K T^{1/2} should have row and column 2-norms as close as possible to one.
Optimized diagonal left-right preconditioners can be computed via

    min_{σ_i > 0, τ_j > 0}  Σ_{i=1}^m ( Σ_{j=1}^n σ_i (K_{i,j})^2 τ_j - 1 )^2  +  Σ_{j=1}^n ( Σ_{i=1}^m σ_i (K_{i,j})^2 τ_j - 1 )^2

This can be solved via alternating minimization (slow).
Finally, we have to rescale T and Σ such that ||Σ^{1/2} K T^{1/2}||^2 ≤ 1.

Linear programming

Many applications of LP relaxations in computer vision and machine learning.
LP program in inequality form:

    min c^T x    s.t.  Ax ≤ b,  x ≥ 0

Preconditioned primal-dual algorithm:

    x_{k+1} = proj_{[0,∞)} ( x_k - T (A^T y_k + c) )
    y_{k+1} = proj_{[0,∞)} ( y_k + Σ (A (2 x_{k+1} - x_k) - b) )
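A compact Python sketch of this LP iteration (my illustration, not the authors' implementation), combining the updates above with the α = 1 diagonal preconditioners from the earlier lemma; all names and the iteration count are assumptions.

```python
import numpy as np

def precond_pd_lp(A, b, c, iters=5000):
    """Preconditioned primal-dual iteration for min c^T x s.t. Ax <= b, x >= 0,
    using the diagonal alpha-preconditioners with alpha = 1."""
    absA = np.abs(A)
    tau = 1.0 / np.maximum(absA.sum(axis=0), 1e-12)     # primal steps, one per column
    sigma = 1.0 / np.maximum(absA.sum(axis=1), 1e-12)   # dual steps, one per row
    x = np.zeros(A.shape[1])
    y = np.zeros(A.shape[0])
    x_old = x.copy()
    for _ in range(iters):
        x_old[:] = x
        x = np.maximum(x - tau * (A.T @ y + c), 0.0)                 # projection onto x >= 0
        y = np.maximum(y + sigma * (A @ (2 * x - x_old) - b), 0.0)   # projection onto y >= 0
    return x, y
```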

A (simple) 2D example

Consider the following 2D linear program: c = (1, 1)^T, A = [20/3, 1; …], b = (20/3, 20)^T.
[Figures: primal and dual iterates and the step sizes τ, σ for each preconditioning variant.]

Iterations needed until convergence:
- No preconditioning: 14030 iterations
- α-preconditioning (α = 0): 910 iterations
- α-preconditioning (α = 1): 880 iterations
- α-preconditioning (α = 2): 4350 iterations
- Left-right preconditioning: 190 iterations

Linear programming, further examples

                 IP        PD        P-PD
    sc50b        0.01s     1.75s     0.49s
    densecolumn  0.17s     …         0.61s

Table: Comparison of IP, PD and P-PD on two standard LP test problems.

Graph cuts

Graph cuts are widely used in computer vision.
They can be written as a weighted total variation energy [Chambolle 05]:

    min_u ||D_w u||_1 + ⟨u, w_u⟩,    s.t.  0 ≤ u ≤ 1

Preconditioned primal-dual algorithm:

    u_{k+1} = proj_{[0,1]} ( u_k - T (D_w^T y_k + w_u) )
    y_{k+1} = proj_{[-1,1]} ( y_k + Σ D_w (2 u_{k+1} - u_k) )

Comparison:
- MAXFLOW: 0.160s
- PD: 15.75s
- P-PD: 8.56s
- P-PD-GPU: 0.045s

Continuous Potts model

The continuous Potts model [Chambolle, Cremers, P. 05] with k phases can be written as a convex problem:

    min_{u ∈ S} max_{q ∈ B} Σ_{l=1}^k ⟨Du_l, q_l⟩ + ⟨u_l, f_l⟩,

where S is the simplex constraint and B is an intersection of non-local l2 balls.

Comparison:

                             PD    P-PD      Speedup
    Synthetic (3 phases)     …     75.65s    2.9
    Synthetic (4 phases)     …     …         2.6
    Natural (8 phases)       …     …         3.26

Conclusion

- Preconditioned primal-dual algorithm for convex saddle-point problems with known structure
- Equivalent to the proximal point algorithm in a particular norm
- Different choices for diagonal preconditioning
- Applications to non-smooth problems (LP, graph cuts, Potts model, ...)
- Iterative update of the preconditioners, involving also information from F and G
- The accelerated algorithm performs some kind of scalar preconditioning; the relation needs to be understood better
- Can we improve the convergence rate factor?

Thank you for your attention!
