Appendix to Online ℓ1-Dictionary Learning with Application to Novel Document Detection


Appendix to Online ℓ1-Dictionary Learning with Application to Novel Document Detection

Shiva Prasad Kasiviswanathan, Huahua Wang, Arindam Banerjee, Prem Melville

A Background about ADMM

In this section, we give a brief review of the general framework of ADMM. ADMM has recently gathered significant attention in the machine learning community due to its wide applicability to a range of learning problems with complex objective functions [1, 2]. Let p(x): R^a → R and q(y): R^b → R be convex functions, F ∈ R^{c×a}, G ∈ R^{c×b}, and z ∈ R^c. Consider the following optimization problem

  min_{x,y} p(x) + q(y)  s.t.  Fx + Gy = z,   (1)

where the variable vectors x and y are separate in the objective, and coupled only in the constraint. The augmented Lagrangian for the above problem is given by

  L(x, y, ρ) = p(x) + q(y) + ⟨ρ, z − Fx − Gy⟩ + (ϕ/2)‖z − Fx − Gy‖²,

where ρ ∈ R^c is the Lagrangian multiplier and ϕ > 0 is a penalty parameter. ADMM utilizes the separable form of (1) and replaces the joint minimization over x and y with two simpler problems. The ADMM first minimizes L over x, then over y, and then applies a proximal minimization step with respect to the Lagrange multiplier ρ. The entire ADMM procedure is summarized in Algorithm 1. The γ > 0 is a constant. The subscript i denotes the ith iteration of the ADMM procedure. The ADMM procedure has been proved to converge to the global optimal solution under quite broad conditions [1].

Algorithm 1: ADMM Update Equations for Solving (1)
Iterate until convergence:
  x_{i+1} ← argmin_x L(x, y_i, ρ_i)
  y_{i+1} ← argmin_y L(x_{i+1}, y, ρ_i)
  ρ_{i+1} ← ρ_i + γϕ(z − Fx_{i+1} − Gy_{i+1})

A.1 ADMM Equations for updating the X_t's and A_t's

Consider the ℓ1-dictionary learning problem

  min_{A∈A, X≥0} ‖P − AX‖_1 + λ‖X‖_1,

where A is defined in Section 3. We use the following algorithm from [4] to solve this problem. It is quite easy to adapt the ADMM updates outlined in Algorithm 1 to update the X_t's and A_t's, when the other variable is fixed (see, e.g., [4]).

ADMM for updating X, given fixed A. Here we are given matrices P ∈ R^{m×n} and A ∈ R^{m×k}, and we want to solve the following optimization problem

  min_{X≥0} ‖P − AX‖_1 + λ‖X‖_1.

Algorithm 2 shows the ADMM update steps for solving this problem. The entire derivation is presented in [4] and we are reproducing it here for completeness.
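Both ADMM routines below repeatedly apply an elementwise soft-thresholding (shrinkage) operator, written soft(·, ·) in the pseudocode, together with a step size of the form 1/Ψ_max(·). The following is a minimal NumPy sketch of these two primitives; the function names soft and psi_max are illustrative choices, not identifiers from the paper.

```python
import numpy as np

def soft(M, theta):
    """Elementwise soft-thresholding: soft(M, theta)_ij = sign(M_ij) * max(|M_ij| - theta, 0)."""
    return np.sign(M) * np.maximum(np.abs(M) - theta, 0.0)

def psi_max(M):
    """Largest eigenvalue of M M^T, i.e., the squared spectral norm of M."""
    return np.linalg.norm(M, 2) ** 2
```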

In our experiments, we set ϕ = 5, κ = 1/Ψ_max(A), and γ = 1.89. These parameters are chosen based on the ADMM convergence results presented in [4, 6].

Algorithm 2: ADMM for Updating X
ADMM procedure for solving min_{X≥0} ‖P − AX‖_1 + λ‖X‖_1
Input: A ∈ R^{m×k}, P ∈ R^{m×n}, λ ≥ 0, γ > 0, ϕ > 0, κ > 0
X ← 0_{k×n}, E ← P, ρ ← 0_{m×n}
for i = 1, 2, ... to convergence do
  E_{i+1} ← soft(P − AX_i + ρ_i/ϕ, 1/ϕ)
  G ← A^⊤(AX_i + E_{i+1} − P − ρ_i/ϕ)
  X_{i+1} ← max{X_i − κG − λκ/ϕ, 0}
  ρ_{i+1} ← ρ_i + γϕ(P − AX_{i+1} − E_{i+1})
Return X at convergence

ADMM for Updating A, given fixed X. Given inputs P ∈ R^{m×n} and X ∈ R^{k×n}, consider the following optimization problem

  min_{A∈A} ‖P − AX‖_1.

When repeating this optimization over multiple timesteps, we use warm starts for faster convergence, i.e., instead of initializing A to 0_{m×k}, we initialize A to the dictionary obtained at the end of the previous timestep.

Algorithm 3: ADMM for Updating A
ADMM procedure for solving min_{A∈A} ‖P − AX‖_1
Input: X ∈ R^{k×n}, P ∈ R^{m×n}, γ > 0, ϕ > 0, κ > 0
A ← 0_{m×k}, E ← P, ρ ← 0_{m×n}
for i = 1, 2, ... to convergence do
  E_{i+1} ← soft(P − A_iX + ρ_i/ϕ, 1/ϕ)
  G ← (A_iX + E_{i+1} − P − ρ_i/ϕ)X^⊤
  A_{i+1} ← Π_A(max{A_i − κG, 0})
  ρ_{i+1} ← ρ_i + γϕ(P − A_{i+1}X − E_{i+1})
Return A at convergence
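To make the update rules of Algorithm 2 concrete, here is a minimal NumPy sketch of the X-update, reusing the soft and psi_max helpers sketched above. The fixed iteration budget and the default parameter values are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def admm_update_x(P, A, lam, phi=5.0, gamma=1.0, n_iter=200):
    """Sketch of Algorithm 2: min_{X >= 0} ||P - A X||_1 + lam * ||X||_1 via ADMM."""
    kappa = 1.0 / psi_max(A)                      # step size kappa = 1 / Psi_max(A)
    X = np.zeros((A.shape[1], P.shape[1]))
    rho = np.zeros_like(P)
    for _ in range(n_iter):                       # "iterate to convergence" replaced by a fixed budget
        E = soft(P - A @ X + rho / phi, 1.0 / phi)
        G = A.T @ (A @ X + E - P - rho / phi)
        X = np.maximum(X - kappa * G - lam * kappa / phi, 0.0)
        rho = rho + gamma * phi * (P - A @ X - E)
    return X
```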

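Similarly, a sketch of Algorithm 3 for the dictionary update, again using the helpers above. Here project_onto_A stands for the projection Π_A onto the feasible set of dictionaries and is passed in as a callable; choosing the step size from Ψ_max(X) is an assumption made by analogy with Algorithm 2.

```python
import numpy as np

def admm_update_a(P, X, A_init, project_onto_A, phi=5.0, gamma=1.0, n_iter=200):
    """Sketch of Algorithm 3: min_{A in calA} ||P - A X||_1 via ADMM, warm-started at A_init."""
    kappa = 1.0 / psi_max(X)                      # step size by analogy with Algorithm 2
    A = A_init.copy()                             # warm start from the previous timestep's dictionary
    rho = np.zeros_like(P)
    for _ in range(n_iter):
        E = soft(P - A @ X + rho / phi, 1.0 / phi)
        G = (A @ X + E - P - rho / phi) @ X.T
        A = project_onto_A(np.maximum(A - kappa * G, 0.0))
        rho = rho + gamma * phi * (P - A @ X - E)
    return A
```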
B Analysis of OIADMM: Proofs from Section 4

First, let us recap the OIADMM update rules:

  Γ_{t+1} = argmin_Γ ‖Γ‖_1 + ⟨Δ_t, Γ_t − Γ⟩ + (β_t/2)‖Γ_t − Γ‖_F²,   (2)
  Â_{t+1} = argmin_{A∈A} β_t⟨G_{t+1}, A − Â_t⟩ + (β_t/(2τ_t))‖A − Â_t‖_F²,   (3)
  Δ_{t+1} = Δ_t + β_t(P_t − Â_{t+1}X̂_t − Γ_{t+1}).   (4)

Let A_opt be the optimum solution to the batch problem min_{A∈A} Σ_{t=1}^T ‖P_t − AX_t‖_1. Let Γ_t = P_t − Â_tX̂_t and Γ̆_t = P_t − Â_{t+1}X̂_t. For any A ∈ A, let Γ̃_t = P_t − AX̂_t. The lemmas below hold for any A ∈ A, so in particular they hold for A set to A_opt.

Proof Flow. Although the algorithm is relatively simple, the analysis is somewhat involved. Define Γ_t^opt = P_t − A_optX_t. Then the regret of the OIADMM is

  R(T) = Σ_{t=1}^T (‖Γ_t‖_1 − ‖Γ_t^opt‖_1).

We split the proof into three technical lemmas. We first upper bound ⟨Δ_t, Γ̆_t − Γ̃_t⟩ (Lemma B.2), and use it to bound ‖Γ_{t+1}‖_1 − ‖Γ̃_t‖_1 (Lemma B.3). In the proof of Lemma B.4, we bound ‖Γ_t‖_1 − ‖Γ_{t+1}‖_1, and this, when added to the bound on ‖Γ_{t+1}‖_1 − ‖Γ̃_t‖_1 from Lemma B.3, gives a bound on ‖Γ_t‖_1 − ‖Γ̃_t‖_1. The proof of the regret bound uses a canceling telescoping sum on the bound on ‖Γ_t‖_1 − ‖Γ̃_t‖_1.

We use the following simple inequality in our proofs.

Lemma B.1. For matrices M_1, M_2, M_3, M_4 ∈ R^{m×n}, we have

  2⟨M_1 − M_2, M_3 − M_4⟩ = ‖M_1 − M_4‖_F² + ‖M_2 − M_3‖_F² − ‖M_1 − M_3‖_F² − ‖M_2 − M_4‖_F².

Lemma B.2. Let {Γ_t, Â_t, Δ_t} be the sequences generated by the OIADMM procedure. For any A ∈ A, we have

  ⟨Δ_t, Γ̆_t − Γ̃_t⟩ ≤ (β_t/(2τ_t))(‖A − Â_t‖_F² − ‖A − Â_{t+1}‖_F²) + (β_t/2)(‖Γ̃_t − Γ_{t+1}‖_F² − ‖Γ_{t+1} − Γ̆_t‖_F² − ‖Γ_t − Γ̃_t‖_F²) − (β_t/2)(1/τ_t − Ψ_max(X̂_t))‖Â_{t+1} − Â_t‖_F².

Proof. For any A ∈ A, (3) is equivalent to the following variational inequality [5]:

  ⟨β_tG_{t+1} + (β_t/τ_t)(Â_{t+1} − Â_t), A − Â_{t+1}⟩ ≥ 0.   (5)

Using Γ̆_t = P_t − Â_{t+1}X̂_t and substituting for G_{t+1}, we have

  β_t⟨G_{t+1}, A − Â_{t+1}⟩ = −β_t⟨(Δ_t/β_t + Γ_t − Γ_{t+1})X̂_t^⊤, A − Â_{t+1}⟩
  = β_t⟨Δ_t/β_t + Γ_t − Γ_{t+1}, Â_{t+1}X̂_t − AX̂_t⟩
  = β_t⟨Δ_t/β_t + Γ_t − Γ_{t+1}, (P_t − AX̂_t) − (P_t − Â_{t+1}X̂_t)⟩
  = ⟨Δ_t, Γ̃_t − Γ̆_t⟩ + β_t⟨Γ_t − Γ_{t+1}, Γ̃_t − Γ̆_t⟩.   (6)

Substituting (6) into (5) and rearranging the terms yields

  ⟨Δ_t, Γ̆_t − Γ̃_t⟩ ≤ β_t⟨Γ_t − Γ_{t+1}, Γ̃_t − Γ̆_t⟩ + (β_t/τ_t)⟨Â_{t+1} − Â_t, A − Â_{t+1}⟩.   (7)

By using Lemma B.1, the first term on the right side can be rewritten as

  2⟨Γ_t − Γ_{t+1}, Γ̃_t − Γ̆_t⟩ = ‖Γ_t − Γ̆_t‖_F² + ‖Γ_{t+1} − Γ̃_t‖_F² − ‖Γ_t − Γ̃_t‖_F² − ‖Γ_{t+1} − Γ̆_t‖_F².   (8)

Substituting the definitions of Γ_t and Γ̆_t, we have

  ‖Γ_t − Γ̆_t‖_F² = ‖(P_t − Â_tX̂_t) − (P_t − Â_{t+1}X̂_t)‖_F² = ‖(Â_{t+1} − Â_t)X̂_t‖_F² ≤ Ψ_max(X̂_t)‖Â_{t+1} − Â_t‖_F².   (9)

Remember that Ψ_max(X̂_t) is the maximum eigenvalue of X̂_tX̂_t^⊤. Using Lemma B.1, we get that the second term on the right-hand side of (7) is equivalent to

  2⟨Â_{t+1} − Â_t, A − Â_{t+1}⟩ = ‖A − Â_t‖_F² − ‖A − Â_{t+1}‖_F² − ‖Â_{t+1} − Â_t‖_F².   (10)

Combining the results in (7), (8), (9), and (10), we get the desired bound.
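Lemma B.1 is the four-point identity used in (8) and (10), and it is used again below. A quick numerical sanity check of the identity on random matrices (an illustration only, not part of the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
M1, M2, M3, M4 = (rng.standard_normal((3, 4)) for _ in range(4))

lhs = 2 * np.sum((M1 - M2) * (M3 - M4))                      # 2 <M1 - M2, M3 - M4>
rhs = (np.linalg.norm(M1 - M4, 'fro') ** 2 + np.linalg.norm(M2 - M3, 'fro') ** 2
       - np.linalg.norm(M1 - M3, 'fro') ** 2 - np.linalg.norm(M2 - M4, 'fro') ** 2)
assert np.isclose(lhs, rhs)                                  # the identity of Lemma B.1
```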

Lemma B.3. Let {Γ_t, Â_t, Δ_t} be the sequences generated by the OIADMM procedure. For any A ∈ A, we have

  ‖Γ_{t+1}‖_1 − ‖Γ̃_t‖_1 ≤ (1/(2β_t))(‖Δ_t‖_F² − ‖Δ_{t+1}‖_F²) + (β_t/(2τ_t))(‖A − Â_t‖_F² − ‖A − Â_{t+1}‖_F²) − (β_t/2)(1/τ_t − Ψ_max(X̂_t))‖Â_{t+1} − Â_t‖_F² − (β_t/2)‖Γ_{t+1} − Γ_t‖_F².

Proof. Let ∂‖Γ_{t+1}‖_1 denote the subgradient of ‖Γ_{t+1}‖_1. Now Γ_{t+1} is a minimizer of (2). Therefore,

  0_{m×n} ∈ ∂‖Γ_{t+1}‖_1 − Δ_t − β_t(Γ_t − Γ_{t+1}).

Rearranging the terms gives Δ_t + β_t(Γ_t − Γ_{t+1}) ∈ ∂‖Γ_{t+1}‖_1. Since ‖Γ‖_1 is a convex function, we have

  ‖Γ_{t+1}‖_1 − ‖Γ̃_t‖_1 ≤ ⟨Δ_t + β_t(Γ_t − Γ_{t+1}), Γ_{t+1} − Γ̃_t⟩ = ⟨Δ_t, Γ_{t+1} − Γ̆_t⟩ + ⟨Δ_t, Γ̆_t − Γ̃_t⟩ + β_t⟨Γ_t − Γ_{t+1}, Γ_{t+1} − Γ̃_t⟩.   (11)

Using Lemma B.1, the last term can be rewritten as

  2⟨Γ_t − Γ_{t+1}, Γ_{t+1} − Γ̃_t⟩ = ‖Γ_t − Γ̃_t‖_F² − ‖Γ_t − Γ_{t+1}‖_F² − ‖Γ_{t+1} − Γ̃_t‖_F².   (12)

Combining the inequality of Lemma B.2 with (12) gives

  ⟨Δ_t, Γ̆_t − Γ̃_t⟩ + β_t⟨Γ_t − Γ_{t+1}, Γ_{t+1} − Γ̃_t⟩ ≤ (β_t/(2τ_t))(‖A − Â_t‖_F² − ‖A − Â_{t+1}‖_F²) − (β_t/2)(1/τ_t − Ψ_max(X̂_t))‖Â_{t+1} − Â_t‖_F² − (β_t/2)‖Γ_{t+1} − Γ_t‖_F² − (β_t/2)‖Γ_{t+1} − Γ̆_t‖_F².   (13)

Since Γ_{t+1} − Γ̆_t = (Δ_t − Δ_{t+1})/β_t, we have

  ⟨Δ_t, Γ_{t+1} − Γ̆_t⟩ − (β_t/2)‖Γ_{t+1} − Γ̆_t‖_F² = (1/β_t)⟨Δ_t, Δ_t − Δ_{t+1}⟩ − (1/(2β_t))‖Δ_t − Δ_{t+1}‖_F² = (1/(2β_t))(‖Δ_t‖_F² − ‖Δ_{t+1}‖_F²).   (14)

Plugging (13) and (14) into (11) yields the result.

Lemma B.4. Let {Γ_t, Â_t, Δ_t} be the sequences generated by the OIADMM procedure. If τ_t satisfies 1/τ_t ≥ Ψ_max(X̂_t), then

  ‖Γ_t‖_1 − ‖Γ̃_t‖_1 ≤ (1/(2β_t))(‖Λ_t‖_F² + ‖Δ_t‖_F² − ‖Δ_{t+1}‖_F²) + (β_t/(2τ_t))(‖A − Â_t‖_F² − ‖A − Â_{t+1}‖_F²),

where Λ_t ∈ ∂‖Γ_t‖_1.

Proof. Let Λ_t ∈ ∂‖Γ_t‖_1. Therefore, ‖Γ_t‖_1 − ‖Γ_{t+1}‖_1 ≤ ⟨Λ_t, Γ_t − Γ_{t+1}⟩. Now,

  ⟨Λ_t, Γ_t − Γ_{t+1}⟩ = ⟨Λ_t/√β_t, √β_t(Γ_t − Γ_{t+1})⟩ ≤ (1/(2β_t))‖Λ_t‖_F² + (β_t/2)‖Γ_t − Γ_{t+1}‖_F².

Therefore,

  ‖Γ_t‖_1 − ‖Γ_{t+1}‖_1 ≤ (1/(2β_t))‖Λ_t‖_F² + (β_t/2)‖Γ_t − Γ_{t+1}‖_F².   (15)

Adding (15) and the inequality of Lemma B.3 together we get

  ‖Γ_t‖_1 − ‖Γ̃_t‖_1 ≤ (1/(2β_t))‖Λ_t‖_F² + (1/(2β_t))(‖Δ_t‖_F² − ‖Δ_{t+1}‖_F²) + (β_t/(2τ_t))(‖A − Â_t‖_F² − ‖A − Â_{t+1}‖_F²) − (β_t/2)(1/τ_t − Ψ_max(X̂_t))‖Â_{t+1} − Â_t‖_F².

Setting 1/τ_t ≥ Ψ_max(X̂_t) means that −(β_t/2)(1/τ_t − Ψ_max(X̂_t))‖Â_{t+1} − Â_t‖_F² ≤ 0. Therefore,

  ‖Γ_t‖_1 − ‖Γ̃_t‖_1 ≤ (1/(2β_t))(‖Λ_t‖_F² + ‖Δ_t‖_F² − ‖Δ_{t+1}‖_F²) + (β_t/(2τ_t))(‖A − Â_t‖_F² − ‖A − Â_{t+1}‖_F²).

Theorem B.5 (Theorem 4.1 Restated). Let {Γ_t, Â_t, Δ_t} be the sequences generated by the OIADMM procedure and let R(T) be defined as above. Assume the following conditions hold: (i) the Frobenius norm of ∂‖Γ_t‖_1 is upper bounded by Φ, (ii) Â_1 = 0_{m×k} and ‖A_opt‖_F ≤ D, (iii) Δ_1 = 0_{m×n}, and (iv) τ_t = τ for all t with 1/τ ≥ Ψ_max(X̂_t). Setting β_t = (Φ/D)√(τT), we have

  R(T) ≤ ΦD√(T/τ) + Σ_{t=1}^T ‖A_optE_t‖_1.

Proof. Substituting Γ̃_t^opt = P_t − A_optX̂_t for Γ̃_t and A_opt for A in Lemma B.4, writing β = β_t (the same value for all t), and summing the inequality over t from 1 to T, we get the following canceling telescoping sum:

  Σ_{t=1}^T (‖Γ_t‖_1 − ‖Γ̃_t^opt‖_1) ≤ Σ_{t=1}^T (1/(2β))‖Λ_t‖_F² + (1/(2β))(‖Δ_1‖_F² − ‖Δ_{T+1}‖_F²) + (β/(2τ))(‖A_opt − Â_1‖_F² − ‖A_opt − Â_{T+1}‖_F²)
  ≤ Σ_{t=1}^T (1/(2β))‖Λ_t‖_F² + (β/(2τ))‖A_opt‖_F² ≤ Φ²T/(2β) + βD²/(2τ),

where the second inequality uses Δ_1 = 0_{m×n}, Â_1 = 0_{m×k}, and drops the non-positive terms. Since

  Γ_t^opt = P_t − A_optX_t = P_t − A_opt(X̂_t + E_t) = Γ̃_t^opt − A_optE_t,

we then have ‖Γ̃_t^opt‖_1 ≤ ‖Γ_t^opt‖_1 + ‖A_optE_t‖_1. The regret is bounded as follows:

  R(T) = Σ_{t=1}^T (‖Γ_t‖_1 − ‖Γ_t^opt‖_1) ≤ Φ²T/(2β) + βD²/(2τ) + Σ_{t=1}^T ‖A_optE_t‖_1.

Setting β = (Φ/D)√(τT) yields the desired bound.

As mentioned in Section 4, OIADMM can violate the equality constraint at each t, i.e., P_t − Â_{t+1}X̂_t ≠ Γ_{t+1}. However, we show in Theorem B.6 that the accumulated loss caused by the violation of the equality constraint is sublinear in T, i.e., the equality constraint is satisfied on average in the long run.

Theorem B.6. Let {Γ_t, Â_t, Δ_t} be the sequences generated by the OIADMM procedure and let R(T) be defined as above. Assume the following conditions hold: (i) the Frobenius norm of ∂‖Γ_t‖_1 is upper bounded by Φ, (ii) Â_1 = 0_{m×k} and ‖A_opt‖_F ≤ D, (iii) Δ_1 = 0_{m×n}, (iv) τ_t = τ for all t with 1/τ ≥ 2Ψ_max(X̂_t), and (v) ‖Γ̃_t^opt‖_1 ≤ Υ. Setting β_t = (Φ/D)√(τT), we have

  Σ_{t=1}^T ‖Γ_{t+1} − Γ̆_t‖_F² ≤ 2D²/τ + (4ΥD/Φ)√(T/τ).

Proof. Let us look at ‖Γ_{t+1} − Γ̆_t‖_F²:

  ‖Γ_{t+1} − Γ̆_t‖_F² = ‖Γ_{t+1} − Γ_t + Γ_t − Γ̆_t‖_F² ≤ 2‖Γ_{t+1} − Γ_t‖_F² + 2‖Γ_t − Γ̆_t‖_F² ≤ 2‖Γ_{t+1} − Γ_t‖_F² + 2Ψ_max(X̂_t)‖Â_{t+1} − Â_t‖_F².   (16)

For the first inequality, we used the simple fact that for any two matrices M_1 and M_2, ‖M_1 + M_2‖_F² ≤ 2‖M_1‖_F² + 2‖M_2‖_F². The second inequality is because of (9). Firstly, since ‖Γ_{t+1}‖_1 ≥ 0,

  ‖Γ̃_t^opt‖_1 − ‖Γ_{t+1}‖_1 ≤ ‖Γ̃_t^opt‖_1 ≤ Υ.

Using this and rearranging terms in the inequality of Lemma B.3 (with A_opt instead of A, and β = β_t) gives

  ‖Γ_{t+1} − Γ_t‖_F² ≤ (1/β²)(‖Δ_t‖_F² − ‖Δ_{t+1}‖_F²) + (1/τ)(‖A_opt − Â_t‖_F² − ‖A_opt − Â_{t+1}‖_F²) − (1/τ − Ψ_max(X̂_t))‖Â_{t+1} − Â_t‖_F² + 2Υ/β.

Plugging this into (16) yields

  ‖Γ_{t+1} − Γ̆_t‖_F² ≤ (2/β²)(‖Δ_t‖_F² − ‖Δ_{t+1}‖_F²) + (2/τ)(‖A_opt − Â_t‖_F² − ‖A_opt − Â_{t+1}‖_F²) − 2(1/τ − 2Ψ_max(X̂_t))‖Â_{t+1} − Â_t‖_F² + 4Υ/β.

Letting 1/τ ≥ 2Ψ_max(X̂_t) and summing over t from 1 to T, we have

  Σ_{t=1}^T ‖Γ_{t+1} − Γ̆_t‖_F² ≤ (2/β²)(‖Δ_1‖_F² − ‖Δ_{T+1}‖_F²) + (2/τ)(‖A_opt − Â_1‖_F² − ‖A_opt − Â_{T+1}‖_F²) + 4ΥT/β ≤ 2D²/τ + 4ΥT/β.

Setting β = (Φ/D)√(τT) yields the desired bound.
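For reference, the single-round OIADMM updates (2)–(4) analyzed above have simple closed forms: (2) is an elementwise soft-thresholding and (3) is a projected gradient step on the dictionary. The sketch below assumes that G_{t+1} is the gradient of the smooth part of the augmented Lagrangian at Â_t scaled by 1/β_t, i.e., G_{t+1} = −(Δ_t/β_t + Γ_t − Γ_{t+1})X̂_t^⊤; this form is not stated in the appendix and is an assumption here. The helpers are the ones sketched in Section A.1, and the projection Π_A is passed in as a callable.

```python
import numpy as np

def oiadmm_round(P_t, X_hat_t, A_hat_t, Delta_t, beta_t, project_onto_A):
    """One OIADMM round: returns (Gamma_{t+1}, A_hat_{t+1}, Delta_{t+1})."""
    tau_t = 1.0 / psi_max(X_hat_t)                 # satisfies 1/tau_t >= Psi_max(X_hat_t)
    Gamma_t = P_t - A_hat_t @ X_hat_t              # residual; the loss at round t is ||Gamma_t||_1
    # Update (2): soft-thresholding of Gamma_t + Delta_t / beta_t.
    Gamma_next = soft(Gamma_t + Delta_t / beta_t, 1.0 / beta_t)
    # Update (3): projected gradient step, under the assumed form of G_{t+1}.
    G_next = -(Delta_t / beta_t + Gamma_t - Gamma_next) @ X_hat_t.T
    A_hat_next = project_onto_A(A_hat_t - tau_t * G_next)
    # Update (4): dual update on the constraint P_t = A X_hat_t + Gamma.
    Delta_next = Delta_t + beta_t * (P_t - A_hat_next @ X_hat_t - Gamma_next)
    return Gamma_next, A_hat_next, Delta_next
```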

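The choice β_t = (Φ/D)√(τT) in Theorems B.5 and B.6 is the value that balances the two β-dependent terms Φ²T/(2β) and βD²/(2τ) in the regret bound. A small symbolic check of this balancing (an illustration only):

```python
import sympy as sp

Phi, D, tau, T, beta = sp.symbols('Phi D tau T beta', positive=True)
bound = Phi**2 * T / (2 * beta) + beta * D**2 / (2 * tau)          # beta-dependent terms of the bound
beta_star = (Phi / D) * sp.sqrt(tau * T)                           # the value used in Theorem B.5
assert sp.simplify(sp.diff(bound, beta).subs(beta, beta_star)) == 0  # stationary point of the bound
print(sp.simplify(bound.subs(beta, beta_star)))                    # equals Phi*D*sqrt(T/tau)
```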
C Pseudo-Codes from Section 5

Let us start by extending the definition of A: define

  A_k = {A ∈ R^{m×k} : A ≥ 0_{m×k} and ‖A_j‖ ≤ 1 for all j = 1,...,k},

where A_j is the jth column of A. We use Π_{A_k} to denote the projection onto the nearest point in the convex set A_k.

Algorithm 4: BATCH-IMPL
Input: P_{[t−1]} ∈ R^{m×N_{t−1}}, X_{[t−1]} ∈ R^{k_t×N_{t−1}}, P_t = [p_1,...,p_{n_t}] ∈ R^{m×n_t}, A_t ∈ R^{m×k_t}, λ, ζ, η ≥ 0
Novel Document Detection Step:
for j = 1 to n_t do
  Solve: x_j = argmin_{x≥0} ‖p_j − A_tx‖_1 + λ‖x‖_1 (solved using Algorithm 2)
  if ‖p_j − A_tx_j‖_1 + λ‖x_j‖_1 > ζ, mark p_j as novel
Batch Dictionary Learning Step:
Set k_{t+1} ← k_t + η
Set Z_{[t]} ← [X_{[t−1]} x_1 ... x_{n_t}]
Set X_{[t]} ← [Z_{[t]}; 0_{η×N_t}]
Set P_{[t]} ← [P_{[t−1]} p_1 ... p_{n_t}]
for i = 1, 2, ... to convergence do
  Solve: A_{t+1} = argmin_{A∈A_{k_{t+1}}} ‖P_{[t]} − AX_{[t]}‖_1 (solved using Algorithm 3 with warm starts)
  Solve: X_{[t]} = argmin_{X≥0} ‖P_{[t]} − A_{t+1}X‖_1 + λ‖X‖_1 (solved using Algorithm 2)

Algorithm 5: L2-BATCH
Input: P_{[t−1]} ∈ R^{m×N_{t−1}}, P_t = [p_1,...,p_{n_t}] ∈ R^{m×n_t}, A_t ∈ R^{m×k_t}, λ ≥ 0, ζ ≥ 0, η ≥ 0
Novel Document Detection Step:
for j = 1 to n_t do
  Solve: x_j = argmin_{x≥0} ‖p_j − A_tx‖_2² + λ‖x‖_1 (solved using the LARS method [3])
  if ‖p_j − A_tx_j‖_2² + λ‖x_j‖_1 > ζ, mark p_j as novel
ℓ2-batch Dictionary Learning Step:
Set k_{t+1} ← k_t + η
Set P_{[t]} ← [P_{[t−1]} p_1 ... p_{n_t}]
[A_{t+1}, X_{[t]}] = argmin_{A∈A_{k_{t+1}}, X≥0} ‖P_{[t]} − AX‖_F² + λ‖X‖_1 (non-negative sparse coding problem)
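The novel document detection step of Algorithm 4 amounts to sparse-coding each incoming document against the current dictionary and flagging the documents whose objective value exceeds ζ. A minimal sketch, reusing the admm_update_x routine sketched in Section A.1 (the function name and calling convention are illustrative assumptions):

```python
import numpy as np

def detect_novel_documents(P_t, A_t, lam, zeta):
    """Return the column indices of P_t marked as novel by the Algorithm 4 test."""
    novel = []
    for j in range(P_t.shape[1]):
        p_j = P_t[:, [j]]                              # treat document j as an m x 1 matrix
        x_j = admm_update_x(p_j, A_t, lam)             # non-negative l1 sparse coding (Algorithm 2)
        objective = np.abs(p_j - A_t @ x_j).sum() + lam * np.abs(x_j).sum()
        if objective > zeta:                           # poorly explained by the dictionary => novel
            novel.append(j)
    return novel
```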

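The dictionary update in Algorithm 4 also needs the column-wise projection Π_{A_{k}} onto non-negative, norm-bounded dictionaries. The sketch below assumes each column is constrained to the non-negative ℓ1 ball {a ≥ 0, Σ_i a_i ≤ 1}; if the intended column constraint is a different norm ball, the per-column routine would change accordingly. The simplex projection used here is the standard sort-based algorithm, not code from the paper.

```python
import numpy as np

def _project_column(a):
    """Euclidean projection of a vector onto {x : x >= 0, sum(x) <= 1}."""
    y = np.maximum(a, 0.0)
    if y.sum() <= 1.0:
        return y
    # Otherwise the constraint sum(x) = 1 is active: project a onto the probability simplex.
    u = np.sort(a)[::-1]
    css = np.cumsum(u)
    idx = np.nonzero(u * np.arange(1, a.size + 1) > css - 1.0)[0][-1]
    theta = (css[idx] - 1.0) / (idx + 1.0)
    return np.maximum(a - theta, 0.0)

def project_onto_A(A):
    """Column-wise projection Pi_{A_k} used by the dictionary update."""
    return np.apply_along_axis(_project_column, 0, A)
```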
D Additional Experimental Evaluation

In Figure 1, we show the effect of the size of the dictionary on the performance of Algorithm ONLINE. The average AUC is computed as in Table 1. Not surprisingly, as the size of the dictionary k increases, the average AUC also increases, but correspondingly the running time of the algorithm also increases. The plot suggests that there is a diminishing return on AUC with increase in the size of the dictionary, and this increase in AUC comes at the cost of higher running times.

Post-processing done to Generate Table 2. In each timestep, instead of thresholding by ζ, we take the top 10% of tweets measured in terms of the sparse coding objective value and run a dictionary-based clustering, described in [4], on it. Further post-processing is done to discard clusters without much support and to pick a representative tweet for each cluster.

Figure 1: TDT dataset: Average AUC vs. CPU running time (in seconds) for different values of the dictionary size k in Algorithm ONLINE. The points plotted from left to right are for k = 50, 100, 150, and 200.

References

[1] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 2011.
[2] P. Combettes and J. Pesquet. Proximal Splitting Methods in Signal Processing. arXiv:0912.3522, 2009.
[3] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise Coordinate Optimization. The Annals of Applied Statistics, 2007.
[4] S. P. Kasiviswanathan, P. Melville, A. Banerjee, and V. Sindhwani. Emerging Topic Detection using Dictionary Learning. In CIKM, 2011.
[5] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis. Springer-Verlag, 2004.
[6] J. Yang and Y. Zhang. Alternating Direction Algorithms for ℓ1-Problems in Compressive Sensing. SIAM Journal on Scientific Computing, 33(1):250–278, 2011.
