Lecture 7 Conjugate Gradient Method (CG)

Jinn-Liang Liu 2017/6/7

The CG method is used for solving symmetric and positive definite (SPD) systems

A x = b  (7.1)

with

A = A^T,  (7.2)
x^T A x > 0 for all non-zero vectors x in R^N.  (7.3)

Algorithm CG: Conjugate Gradient Method

x^(0) = 0 (arbitrary)
r^(0) = b − A x^(0) (residual vector)
p^(1) = r^(0)
for k = 1, ..., N (Why N, not kmax?)
  (S1) α_k := <p^(k), r^(k−1)> / <p^(k), A p^(k)> = <r^(k−1), r^(k−1)> / <p^(k), A p^(k)> (Why??)
  (S2) x^(k) = x^(k−1) + α_k p^(k)
  (S3) r^(k) = b − A x^(k) = r^(k−1) − α_k A p^(k) (Why?)
       check convergence; continue if necessary
  (S4) β_k := −<r^(k), A p^(k)> / <p^(k), A p^(k)> = <r^(k), r^(k)> / <r^(k−1), r^(k−1)> (Why??)
  (S5) p^(k+1) = r^(k) + β_k p^(k)
end for

In this algorithm, α_k is called a stepping length and p^(k) is called a search direction (or predictor, or prediction vector). We can geometrically interpret (S2) as saying that we are currently at x^(k−1) and then take a next step of length α_k in the direction of p^(k) to go to the point x^(k). Therefore, it is critical to determine the next α_k and p^(k).
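For concreteness, here is a minimal Python/NumPy sketch of Algorithm CG (an illustrative implementation, with the tolerance tol an assumed input for the convergence check; cf. Project 7.1 below). The loop runs at most N times, anticipating Theorem 7.2.

import numpy as np

def cg(A, b, tol=1e-10):
    # Conjugate Gradient for an SPD matrix A, following (S1)-(S5).
    N = len(b)
    x = np.zeros(N)                        # x^(0) = 0
    r = b - A @ x                          # r^(0), residual vector
    p = r.copy()                           # p^(1) = r^(0)
    for k in range(1, N + 1):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)         # (S1) stepping length
        x = x + alpha * p                  # (S2) next iterate
        r_new = r - alpha * Ap             # (S3) updated residual
        if np.linalg.norm(r_new) < tol:    # check convergence
            return x, k
        beta = (r_new @ r_new) / (r @ r)   # (S4)
        p = r_new + beta * p               # (S5) next search direction
        r = r_new
    return x, N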

Example 7.1. Given A = [1 0; 0 9] (cf. Example 7.4) and b = (0, 0)^T, find the solution of A x = b by using the conjugate gradient method with the initial guess x^(0) = (1, 1/9)^T.

Project 7.1. First, write a program for the CG algorithm and test it by Example 7.1. Second, run your CG program for the 1D Poisson problem (1.1) (with f(x) = 2, g_D = 0, and g_N = 0) discretized by FDM.
Input: N, A, b, tol (write the input in the program).
Output: N, k, E_x, E_u, α

The CG is a nonstationary iterative method, i.e., in matrix form

x^(k) = B x^(k−1) + c  (7.4)

where B and c depend on the iteration (i.e., B and c change from iteration to iteration).

Table 7.1. Time Complexity of Various Methods

Method      2D           3D
GE          O(N^3)       O(N^3)
JM          O(N^2)       O(N^(5/3))
GS          O(N^2)       O(N^(5/3))
SOR         O(N^(3/2))   O(N^(4/3))
CG          O(N^(3/2))   O(N^(4/3))
PCG         O(N^1.2)     O(N^1.17)
Multigrid   O(N log N)   O(N log N)
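As a starting point for the second part of Project 7.1, the sketch below assembles a standard second-order finite-difference matrix for −u″ = f on a uniform grid and feeds it to the cg routine sketched above. For simplicity it assumes zero Dirichlet values at both ends; the Neumann condition g_N = 0 of problem (1.1) and the error columns E_x, E_u would still have to be added following the earlier lectures.

import numpy as np

def poisson_1d(N, f=2.0):
    # Assumed model problem: -u'' = f on (0,1), u(0) = u(1) = 0,
    # discretized by FDM on N interior grid points, spacing h = 1/(N+1).
    h = 1.0 / (N + 1)
    A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
    b = f * np.ones(N)
    return A, b

for N in (8, 16, 32):
    A, b = poisson_1d(N)
    x, k = cg(A, b, tol=1e-10)             # cg() from the sketch above
    print(N, k, np.linalg.norm(b - A @ x)) # grid size, iterations, residual norm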

Remark 7.1. (1) CG can also be used to solve unconstrained optimization problems. (2) The biconjugate gradient method (BiCG) is a generalization of CG for non-symmetric systems. (3) CG with preconditioning requires about N steps to determine an approximate solution.

We say that two non-zero vectors u and v are conjugate with respect to A if

<u, v>_A := u^T A v = 0.  (7.5)

The left hand side defines an inner product since A is SPD, i.e.,

<u, v>_A = u^T A v = <u, A v> = <A^T u, v> = <A u, v> = <v, u>_A,

where <·,·> is the standard inner product defined in R^N. This conjugacy is not related to the notion of complex conjugate.

Example 7.2. Show that A = [2 1; 1 2] is SPD. Let u = (1, 0)^T and v = (0, 1)^T. Show that <u, v> = 0, i.e., u ⊥ v in the sense of <·,·>. Does <u, v>_A = 0? Find any two vectors x and y such that <x, y>_A = 0, i.e., x ⊥ y in the sense of <·,·>_A.

Minimization Problem: With A being SPD, solve (7.1) for x via minimizing the quadratic function

φ(x) = (1/2) x^T A x − x^T b = (1/2)<A x, x> − <b, x>,  x ∈ R^N.  (7.6)

Example 7.3. The simplest example for (7.1) and (7.6) is given by N = 1, A = 2, b = 1. The solution of the linear problem 2x = 1 is x* = 1/2. Find the minimizer x* for the following quadratic function (a nonlinear problem)

φ(x) = (1/2) x^T A x − x^T b = x² − x.  (7.7)
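For instance, the claims in Example 7.2 can be checked numerically with a small illustrative script using the Example 7.2 data:

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])             # Example 7.2
u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

print(np.allclose(A, A.T), np.linalg.eigvalsh(A))  # symmetric, eigenvalues 1 and 3 > 0, so SPD
print(u @ v)                                       # 0: u and v are orthogonal in <.,.>
print(u @ (A @ v))                                 # 1: but <u, v>_A is not 0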

Example 7.4. Solve the linear problem

[1 0; 0 9] [x1; x2] = [0; 0]

and then solve it via minimizing the quadratic function

φ(x) = (1/2) x^T A x − x^T b = (1/2) [x1 x2] [1 0; 0 9] [x1; x2] − [x1 x2] [0; 0].  (7.8)

1 Minimization of the Quadratic Function

As noted above, φ(x) is nonlinear and A x = b is linear. Given any vectors x and p, let h be a function of α defined by

h(α) = φ(x + α p)  (7.9)
     = (1/2) <A(x + α p), x + α p> − <b, x + α p>
     = (α²/2) <A p, p> + α <A p, x> + (1/2) <A x, x> − <b, x> − α <b, p>
     = (1/2) <A p, p> α² + <A x − b, p> α + φ(x).  (7.10)

Theorem 7.1.

x* minimizes φ(x)  ⟺  A x* = b.  (7.11)

Proof: (⟹) For any vector p, define

h(α) = φ(x* + α p).

Since φ(x*) is minimum, the function h(α) attains its minimum at α = 0 with the minimum value h(0) = φ(x*).

This implies that

0 = dh(0)/dα = lim_{α→0} [h(α) − h(0)] / α
  = lim_{α→0} [φ(x* + α p) − φ(x*)] / α
  = lim_{α→0} [(1/2)<A p, p> α² + <A x* − b, p> α + φ(x*) − φ(x*)] / α
  = lim_{α→0} ((1/2)<A p, p> α + <A x* − b, p>)
  = <A x* − b, p>  for all p ≠ 0
  ⟹ A x* = b.

(⟸) A x* = b ⟹

φ(x* + p) = φ(x*) + <A x* − b, p> + (1/2)<A p, p> = φ(x*) + (1/2)<A p, p> ≥ φ(x*)

⟹ φ(x*) is minimum.

As a by-product from the proof, we see that, for any given vectors x and p ≠ 0,

0 = (d/dα) φ(x + α p) = h′(α) = <A p, p> α + <A x − b, p>  ⟹  α = −<A x − b, p> / <A p, p>,  (7.12)

which gives an optimal stepping length as defined in (S1).

Remark 7.2. This theorem says that solving the linear system (7.1) is equivalent to minimizing the nonlinear function (7.6). In general, a linear problem is much easier to solve than a nonlinear one.

Question 7.1. Why do we make a detour from an easy road to a rough road?
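As a quick numerical illustration of (7.12), using the matrix and right-hand side of Examples 7.5-7.6 below and an arbitrarily chosen point and direction:

import numpy as np

A = np.array([[3.0, 2.0], [2.0, 6.0]])        # the SPD matrix of Examples 7.5-7.6
b = np.array([2.0, -8.0])
phi = lambda x: 0.5 * x @ (A @ x) - b @ x

x = np.array([1.0, 1.0])                      # any current point
p = np.array([1.0, -1.0])                     # any nonzero direction
alpha = -((A @ x - b) @ p) / (p @ (A @ p))    # optimal stepping length, (7.12)

for a in (alpha - 0.1, alpha, alpha + 0.1):   # phi along the line is smallest at a = alpha
    print(a, phi(x + a * p))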

2 Directional Derivative (Gradient Operator)

D_p φ(x) = D_p φ(x1, x2) = lim_{t→0} [φ(x + t p) − φ(x)] / t

(x = (x1, x2) any fixed point, p = (p1, p2) any unit direction.)

= lim_{t→0} [φ(x1 + t p1, x2 + t p2) − φ(x1, x2)] / t
= lim_{t→0} [φ(x1 + t p1, x2 + t p2) − φ(x1, x2 + t p2) + φ(x1, x2 + t p2) − φ(x1, x2)] / t
= lim_{t→0} { [φ(x1 + t p1, x2 + t p2) − φ(x1, x2 + t p2)] p1 / (t p1) + [φ(x1, x2 + t p2) − φ(x1, x2)] p2 / (t p2) }  (7.13)
= lim_{t→0} { [φ(x1 + Δx1, x2 + t p2) − φ(x1, x2 + t p2)] p1 / Δx1 + [φ(x1, x2 + Δx2) − φ(x1, x2)] p2 / Δx2 }   (Δx1 = t p1, Δx2 = t p2)
= (∂φ(x1, x2)/∂x1) p1 + (∂φ(x1, x2)/∂x2) p2

(∇ = (∂/∂x1, ∂/∂x2) = grad = del, the vector differential operator gradient)

= ∇φ(x1, x2) · p  (7.14)
= |∇φ| |p| cos θ

The max value of D_p φ is |∇φ|, attained with θ = 0 and |p| = 1.
The min value of D_p φ is −|∇φ|, attained in the direction p = −∇φ/|∇φ|, θ = 180°.

Conclusion 7.1. At any point x, the functional φ decreases most rapidly (steepest) in the direction of the negative gradient −∇φ(x).
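For example, for the quadratic φ of Example 7.6 below, the difference quotient in (7.13) can be compared with ∇φ · p directly, and taking p along −∇φ gives the most negative value (an illustrative check; the gradient formula used here is derived as (7.18) below):

import numpy as np

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
phi  = lambda x: 0.5 * x @ (A @ x) - b @ x
grad = lambda x: A @ x - b                    # gradient of phi, see (7.18)

x = np.array([1.0, 1.0])
g = grad(x)
t = 1e-6
for p in (np.array([1.0, 0.0]),               # a coordinate direction
          -g / np.linalg.norm(g)):            # the steepest-descent direction
    D_p = (phi(x + t * p) - phi(x)) / t       # difference quotient as in (7.13)
    print(D_p, g @ p)                         # the two values agree; the second case is most negative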

For the quadratic function (7.6),

φ(x) = φ(x1, x2, ..., xN) = (1/2)<A x, x> − <b, x>
     = (1/2) [x1 ... xN] [a11 ... a1N; a21 ... a2N; ...; aN1 ... aNN] [x1; ...; xN] − [b1 ... bN] [x1; ...; xN]
     = (1/2) Σ_{i=1}^N Σ_{j=1}^N a_ij x_j x_i − Σ_{i=1}^N x_i b_i.  (7.16)

Writing the double sum out row by row, row i contributes (a_i1 x1 + a_i2 x2 + ... + a_ii x_i + ... + a_ik x_k + ...) x_i, so the variable x_k appears in the k-th row and once in every other row i (in the term a_ik x_k x_i). Differentiating with respect to x_k and using a_ik = a_ki therefore gives

φ_{x_k}(x) = ∂φ(x)/∂x_k = Σ_{i=1}^N a_ik x_i − b_k,  (a_ik = a_ki)  (7.17)

∇φ(x) = (φ_{x1}, φ_{x2}, ..., φ_{xN}) = A x − b = −(b − A x).  (7.18)
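A finite-difference check of (7.17)-(7.18) on a randomly generated SPD system (illustrative data only):

import numpy as np

rng = np.random.default_rng(0)
N = 4
M = rng.standard_normal((N, N))
A = M @ M.T + N * np.eye(N)                   # a random SPD matrix
b = rng.standard_normal(N)
x = rng.standard_normal(N)
phi = lambda y: 0.5 * y @ (A @ y) - b @ y

h = 1e-6
for k in range(N):
    e = np.zeros(N); e[k] = 1.0
    dphi_k = (phi(x + h * e) - phi(x)) / h    # approximates the partial derivative (7.17)
    print(dphi_k, (A @ x - b)[k])             # matches the k-th component of (7.18)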

The slope (grade) of a line φ(x) = a x + b (a scalar function) is defined as the rise over the run, m = Δφ/Δx = φ′(x) = dφ/dx. The gradient (or gradient vector field) of a scalar function φ(x) with respect to a vector variable x is denoted by ∇φ = grad φ, where ∇ (the nabla symbol) denotes the vector differential operator del (grad). By definition, the gradient is a vector field whose components are the partial derivatives of φ as in (7.18).

HW 7.1. Consider a unit circle. Define the unit circle by using a quadratic functional φ(x). Find ∇φ at the point (1, 0) on the circle. What can you say (geometrically) about the gradient at that point and then at any point? Tangent vector? Normal vector? So what is your conclusion? If we change the unit circle to a set of level curves, does your conclusion still hold? Prove and draw a picture to explain the conclusion.

3 Method of Steepest Descent

Therefore, we should choose the direction p as b − A x if we want to minimize φ(x, y) in the steepest way, i.e.,

D_p φ(x, y) = ∇φ(x) · p  (7.19)
p = −∇φ(x)  (7.20)
p = b − A x  (7.21)
r = b − A x  (7.22)

Iteration:

x^(k) = x^(k−1) + α_k p^(k)  (7.23)
p^(k) = −∇φ(x^(k−1)) = r^(k−1)  (7.24)

Algorithm MSD: Method of Steepest Descent

x^(0) = 0 (initial guess or any other vector)
for k = 1, 2, ..., kmax
  r^(k−1) = b − A x^(k−1)
  if r^(k−1) = 0 then quit (solution found)
  else p^(k) = r^(k−1) (search direction p^(k) = −∇φ)

  α_k = <p^(k), r^(k−1)> / <p^(k), A p^(k)> = <r^(k−1), r^(k−1)> / <r^(k−1), A r^(k−1)>  (by (7.12))
  x^(k) = x^(k−1) + α_k p^(k)
end for

Remark 7.3. The convergence results for MSD are (see J. R. Shewchuk 1994)

||e^(k)||_A ≤ ((κ − 1)/(κ + 1))^k ||e^(0)||_A  (7.25)

e^(0) = x* − x^(0),  e^(k) = x* − x^(k),  ||e||_A = sqrt(<e, A e>),  κ = λ_max/λ_min,  {λ_i} are eigenvalues of A.  (7.26)

A SPD ⟹ 0 < λ_1 ≤ λ_2 ≤ λ_3 ≤ ... ≤ λ_N,  κ = λ_N/λ_1.

The convergence is slow if A is ill-conditioned, i.e., κ >> 1.

HW 7.2. Prove (7.25).

Example 7.5.

A = [3 2; 2 6]

Solution:

det(A − λI) = det [3−λ 2; 2 6−λ] = 0
(3 − λ)(6 − λ) − 4 = 0
0 = λ² − 9λ + 14
λ = (9 ± √(81 − 56))/2 = (9 ± 5)/2
λ_max = 7,  λ_min = 2,  κ = 7/2.

Example 7.6.

A = [3 2; 2 6],  b = [2; −8]

Solution:

φ(x) = (1/2)<A x, x> − <b, x>
     = (1/2) <(3x1 + 2x2, 2x1 + 6x2), (x1, x2)> − <(2, −8), (x1, x2)>
     = (1/2)(3x1² + 4x1x2 + 6x2²) − (2x1 − 8x2) = C

(different contours for different constants C).

Example 7.7. N = 2,

A = [d1 0; 0 d2],  b = [0; 0],  x = [x; y],  d1, d2 > 0.

Solution: Recall that a rotation

x1′ = x1 cos θ + x2 sin θ,  x2′ = −x1 sin θ + x2 cos θ

brings a general ellipse to the standard form (x1′ − p1)²/a² + (x2′ − p2)²/b² = 1. Here

φ(x) = (1/2)<A x, x> = (1/2) [x y] [d1 0; 0 d2] [x; y] = (1/2)(d1 x² + d2 y²) = C1 (constant) > 0

is already a level curve in standard form, x²/a² + y²/b² = 1 with a² = 2C1/d1 and b² = 2C1/d2, so the ratio of the semi-axes is a/b = √(d2/d1), i.e., it is governed by the condition number κ.

HW 7.3. Let d1 = 1 and d2 = 1/9. Draw level curves (contours) of the function φ(x) and the gradient ∇φ(x) at the point x = (1, 1/9). Find the tangent line at x = (1, 1/9) to the level curve passing through (1, 1/9). What is the relation between ∇φ(1, 1/9) and this line? Find the minimum value and minimizer of φ(x) by MSD starting from the point (1, 1/9).
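A minimal Python sketch of Algorithm MSD. Here it is run on the Example 7.6 system; HW 7.3 can be treated the same way with A = diag(1, 1/9), b = 0, and starting point (1, 1/9).

import numpy as np

def msd(A, b, x0, kmax=1000, tol=1e-10):
    # Method of Steepest Descent (Algorithm MSD)
    x = np.array(x0, dtype=float)
    for k in range(1, kmax + 1):
        r = b - A @ x                      # r^(k-1) = -grad(phi), the search direction
        if np.linalg.norm(r) < tol:        # solution found
            return x, k - 1
        alpha = (r @ r) / (r @ (A @ r))    # optimal stepping length, by (7.12)
        x = x + alpha * r
    return x, kmax

A = np.array([[3.0, 2.0], [2.0, 6.0]])     # Example 7.6
b = np.array([2.0, -8.0])
x, k = msd(A, b, [-2.0, -2.0])             # an arbitrary starting point
print(x, k)                                # x approaches the solution (2, -2)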

Figure 1: Steepest Descent vs. Conjugate Gradient

Conclusion 7.2. Case (1): If the contour is a circle, i.e. κ = 1, the solution is found in one step by MSD. Case (2): A is ill-conditioned ⟹ κ ≫ 1 ⟹ level curves very elongated ⟹ a lot of iterations are required to find an approximate solution ⟹ MSD converges very slowly.

HW 7.4. Prove Case (1).

Example 7.8. A comparison (Fig. 1) of the convergence of gradient (steepest) descent with optimal step size (in green) and conjugate gradient (in red) for minimizing the quadratic form associated with a given linear system. Conjugate gradient converges in at most N steps where N is the size of the matrix of the system (here N = 2).

4 Conjugate Gradient Method

To avoid Case (2), another approach was proposed by choosing search directions

{p^(1), p^(2), p^(3), ...} ≠ {r^(0), r^(1), r^(2), ...}  (7.27)

that satisfy

<p^(i), A p^(j)> = 0,  i ≠ j.  (7.28)

This is called an A-orthogonal (A-conjugate) condition and the set {p^(1), p^(2), p^(3), ...} is said to be A-orthogonal. The vectors p^(i) are called conjugate directions. Since A is SPD, the set {p^(1), p^(2), p^(3), ..., p^(N)} is linearly independent, i.e.,

R^N = span{p^(1), p^(2), p^(3), ..., p^(N)}.  (Why?)  (7.29)

By (7.12), h′(α) = 0 gives

α_k = <b − A x^(k−1), p^(k)> / <A p^(k), p^(k)> = <r^(k−1), p^(k)> / <A p^(k), p^(k)>,  (7.30)

x^(k) = x^(k−1) + α_k p^(k)  (next iterate).  (7.31)

Theorem 7.2. Let {p^(1), p^(2), p^(3), ..., p^(N)} be an A-orthogonal set of nonzero vectors associated with the SPD matrix A and let x^(0) be arbitrary. Define the iterates (7.31) for k = 1, 2, 3, ..., with (7.30). Then, assuming exact arithmetic, we have

A x^(N) = b  (N steps to get the solution).  (7.32)

Proof: We have A x = b with A: N × N and x ∈ R^N,

φ(x) = (1/2)<A x, x> − <b, x>,  x ∈ R^N,
h(α) = φ(x + α p) = φ(x) + α <A x − b, p> + (α²/2) <A p, p>,
−∇φ(x) = r = b − A x,
x^(k) = x^(k−1) + α_k p^(k).

Then

A x^(N) = A x^(N−1) + α_N A p^(N)
        = A x^(N−2) + α_{N−1} A p^(N−1) + α_N A p^(N)
        = A x^(0) + α_1 A p^(1) + α_2 A p^(2) + ... + α_{N−1} A p^(N−1) + α_N A p^(N),

A x^(N) − b = A x^(0) − b + Σ_{i=1}^N α_i A p^(i).

Hence, for k = 1, ..., N,

<A x^(N) − b, p^(k)>
= <A x^(0) − b, p^(k)> + Σ_{i=1}^N α_i <A p^(i), p^(k)>
= <A x^(0) − b, p^(k)> + α_k <A p^(k), p^(k)>   (since {p^(1), p^(2), p^(3), ..., p^(N)} is A-orthogonal)
= <A x^(0) − b, p^(k)> + [<b − A x^(k−1), p^(k)> / <A p^(k), p^(k)>] <A p^(k), p^(k)>
= <A x^(0) − A x^(k−1), p^(k)>
  (A x^(k−1) = A x^(k−2) + α_{k−1} A p^(k−1) = A x^(0) + α_1 A p^(1) + ... + α_{k−1} A p^(k−1))
= −Σ_{i=1}^{k−1} α_i <A p^(i), p^(k)> = 0.

Therefore A x^(N) = b.

How do we determine the A-orthogonal set {p^(1), p^(2), p^(3), ..., p^(N)} (the set of search directions, conjugate directions)?
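One generic (but costly) way is Gram-Schmidt orthogonalization carried out in the A-inner product (7.5): start from any basis and subtract from each vector its A-projections onto the directions already accepted. The point of the CG recursion (S5), established in what follows, is that only the single previous direction is needed. A sketch of the generic construction (the starting basis, here the standard basis, is an arbitrary illustrative choice):

import numpy as np

def a_orthogonalize(A, V):
    # Gram-Schmidt in the A-inner product: the columns of V are turned
    # into an A-orthogonal set (assumes the columns of V are independent).
    P = []
    for v in V.T:
        p = v.copy()
        for q in P:
            p = p - (q @ (A @ v)) / (q @ (A @ q)) * q   # remove the A-projection onto q
        P.append(p)
    return np.column_stack(P)

A = np.array([[3.0, 2.0], [2.0, 6.0]])
P = a_orthogonalize(A, np.eye(2))                       # start from the standard basis
print(P[:, 0] @ (A @ P[:, 1]))                          # ~0, i.e. <p^(1), A p^(2)> = 0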

Lemma 7.1. Show that

<r^(k), p^(k)> = 0  for k = 1, ..., N,  (7.33)
<p^(k+1), A p^(k)> = 0  for k = 1, ..., N − 1,  (7.34)
<p^(k), A p^(k)> = <r^(k−1), A p^(k)>  for k = 1, ..., N,  (7.35)
<r^(k), r^(k−1)> = 0  for k = 1, ..., N.  (7.36)

Proof: Since x^(k) = x^(k−1) + α_k p^(k),

r^(k) = b − A x^(k) = b − A x^(k−1) − [<b − A x^(k−1), p^(k)> / <A p^(k), p^(k)>] A p^(k),

so

<r^(k), p^(k)> = <b − A x^(k−1), p^(k)> − <b − A x^(k−1), p^(k)> = 0  ⟹ (7.33).

From (S5), A p^(k+1) = A r^(k) + β_k A p^(k), so

<A p^(k+1), p^(k)> = <A r^(k), p^(k)> + β_k <A p^(k), p^(k)> = 0

by (S4), which proves (7.34); this is also the answer to the first "?" in (S4).

For k = 1, <p^(1), A p^(1)> = <r^(0), A p^(1)> by the definition of p^(1). For 2 ≤ k ≤ N,

<p^(k), A p^(k)> = <r^(k−1) + β_{k−1} p^(k−1), A p^(k)>   by (S5)
= <r^(k−1), A p^(k)> + β_{k−1} <p^(k−1), A p^(k)>
= <r^(k−1), A p^(k)>   by (7.34)  ⟹ (7.35).

HW 7.5. Show the second equality in (S1) first and then (7.36).

Theorem 7.3.

<p^(j), A p^(m)> = 0,  <r^(j−1), r^(m−1)> = 0  for j ≠ m.  (7.37)

Proof: This result is proved by induction on m. From (7.34) and (7.36), we first note that

<p^(2), A p^(1)> = <r^(1), r^(0)> = 0.

Assume now that

<p^(l), A p^(k)> = <r^(l−1), r^(k−1)> = 0  for 1 ≤ k < l ≤ m.  (7.38)

We want to show that the same relation holds true for 1 ≤ k < l ≤ m + 1. It has been shown in Lemma 7.1 for k = m and l = m + 1. Taking 1 ≤ k < m and l = m + 1, it follows that

<r^(m), r^(k−1)> = <r^(m−1), r^(k−1)> − α_m <A p^(m), r^(k−1)>
= −α_m <A p^(m), r^(k−1)>   by (7.38)
= −α_m <A p^(m), p^(k) − β_{k−1} p^(k−1)>   by (S5)
= 0   by (7.38),  (7.39)

<p^(m+1), A p^(k)> = <r^(m) + β_m p^(m), A p^(k)>
= <r^(m), A p^(k)> + β_m <p^(m), A p^(k)>
= <r^(m), A p^(k)>   by (7.38)
= <r^(m), (r^(k−1) − r^(k)) α_k^{−1}>   by (S3)
= 0   by (7.39).

This completes the induction and the proof.
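As a numerical sanity check of (7.37), the script below reruns the (S1)-(S5) recursion on a randomly generated SPD system, stores all directions and residuals, and prints the largest mutual A-inner product of the directions and the largest mutual inner product of the residuals; both come out at rounding-error level.

import numpy as np

rng = np.random.default_rng(1)
N = 6
M = rng.standard_normal((N, N))
A = M @ M.T + N * np.eye(N)            # a random SPD matrix
b = rng.standard_normal(N)

x = np.zeros(N)
r = b - A @ x                          # r^(0)
p = r.copy()                           # p^(1)
P, R = [], [r.copy()]
for k in range(N):
    P.append(p.copy())                 # store p^(k)
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)         # (S1)
    x = x + alpha * p                  # (S2)
    r_new = r - alpha * Ap             # (S3)
    R.append(r_new.copy())             # store r^(k)
    beta = (r_new @ r_new) / (r @ r)   # (S4)
    p = r_new + beta * p               # (S5)
    r = r_new

# (7.37): the p's are mutually A-orthogonal and the r's mutually orthogonal
print(max(abs(P[j] @ (A @ P[m])) for j in range(len(P)) for m in range(j)))
print(max(abs(R[j] @ R[m]) for j in range(len(R)) for m in range(j)))

In floating-point arithmetic these quantities are only approximately zero, which is why in practice CG is run as an iterative method with a convergence check rather than for exactly N steps.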
