On the Solution of the GPS Localization and Circle Fitting Problems
1 On the Solution of the GPS Localization and Circle Fitting Problems. Amir Beck, Technion - Israel Institute of Technology, Haifa, Israel. Joint work with Dror Pan, Technion. Applications of Optimization in Science and Engineering, Institute for Pure and Applied Mathematics (IPAM), Los Angeles, Nov. 30 - Dec. 3.
2 The Global Positioning System: a system of 31 transmitting satellites (originally 24). Objective: provide reliable location and time information anywhere on Earth where there is an unobstructed line of sight to four or more satellites.
3 Localization via GPS. Each satellite transmits its location (x_i, y_i, z_i) and a time stamp t_i. The GPS receiver estimates the distances to at least 4 satellites via d_i = c(T - t_i), where c is the speed of light and T is the GPS receiver's clock. The receiver's clock is inaccurate (an error of one microsecond corresponds to an error of 300 meters). The measured distances are called pseudoranges and include an unknown clock bias.
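A minimal numeric sketch of the pseudorange model above (the satellite positions, receiver location, and clock bias are hypothetical values chosen for illustration):

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

# Hypothetical satellite positions (m) and receiver position (m).
sats = np.array([[2.0e7, 0.0, 1.0e7],
                 [0.0, 2.1e7, 0.9e7],
                 [1.5e7, 1.5e7, 1.1e7],
                 [-1.0e7, 1.8e7, 1.2e7]])
x_true = np.array([1.2e6, -3.4e5, 6.4e6])
bias = 1e-6  # a 1 microsecond receiver clock error

true_ranges = np.linalg.norm(sats - x_true, axis=1)
# Pseudorange = true range + c * (clock bias): one microsecond adds ~300 m.
pseudoranges = true_ranges + C * bias
print(pseudoranges - true_ranges)  # each entry is ~299.79 m
```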
5 The GPS Localization Problem. Model: d_i ≈ ||x - a_i|| - r, i = 1,..., m, where a_i is the i-th satellite's location, r is the unknown clock bias, x is the unknown user's location, and d_i is the i-th pseudorange (which can even be negative); m ≥ n + 1. A least squares formulation: min_{x,r} sum_{i=1}^m (||x - a_i|| - d_i - r)^2.
7 The GPS Localization Problem. Without loss of generality d_i ≥ 0 (otherwise make a shift and redefine r), and under the mild assumption r ≥ 0, the GPS least squares problem is min_{x,r} { sum_{i=1}^m (||x - a_i|| - d_i - r)^2 : r ≥ 0 }. Problem reduction (minimizing with respect to r) gives the reduced form (GPS-LS): min_x sum_{i=1}^m (||x - a_i|| - d_i - r(x))^2, where r(x) := [ (1/m) sum_{i=1}^m (||x - a_i|| - d_i) ]_+.
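A small sketch of the reduced objective and the closed-form r(x), on a hypothetical noiseless instance where the bias should be recovered exactly:

```python
import numpy as np

def r_of_x(x, a, d):
    """Optimal bias for fixed x: r(x) = [ (1/m) sum_i (||x - a_i|| - d_i) ]_+ ."""
    return max(np.mean(np.linalg.norm(x - a, axis=1) - d), 0.0)

def f_gps_ls(x, a, d):
    """Reduced GPS-LS objective: sum_i (||x - a_i|| - d_i - r(x))^2."""
    res = np.linalg.norm(x - a, axis=1) - d - r_of_x(x, a, d)
    return np.sum(res ** 2)

# Hypothetical noiseless instance with known bias r = 2: the objective
# vanishes at the true location, and r(x_true) recovers the bias.
rng = np.random.default_rng(0)
a = rng.uniform(-10, 10, size=(6, 2))
x_true, r_true = np.array([1.0, -2.0]), 2.0
d = np.linalg.norm(x_true - a, axis=1) - r_true
print(r_of_x(x_true, a, d), f_gps_ls(x_true, a, d))  # ~2.0 and ~0.0
```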
9 The Bad News... The GPS-LS problem is nonsmooth and nonconvex. In principle, it should be difficult to find a global optimal solution.
10 The Circle Fitting Problem (d_i = 0). When d_i = 0, the problem reduces to circle fitting: given m points a_1,..., a_m, find the circle with center x and radius r that fits them in the best way.
13 The Circle Fitting Least Squares Problem. min_{x,r} { sum_{i=1}^m (||x - a_i|| - r)^2 : r ≥ 0 }. Reduced form: (CF-LS) min_x sum_{i=1}^m (||x - a_i|| - r(x))^2, where r(x) := (1/m) sum_{i=1}^m ||x - a_i||. (CF-LS) is the geometric fitting problem: find the circle that minimizes the sum of squared distances between the circle and the given points.
14 Applications of Circle Fitting: archaeology, computer graphics, coordinate metrology, petroleum engineering, quality inspection of mechanical parts, statistics.
15 Literature. GPS localization: Abel (1994) - a variable projection method. Source localization from range differences (using a reference measurement): Huang, Benesty, Elko and Mersereau (2001), Stoica and Li (2006), Beck, Stoica and Li (2008). Sensor network localization from range differences: Yang, Wang and Luo (2009) - using all differences, an SDR approach. Circle fitting: Kasa (1976) - solution of a related squared least squares problem in the 2D case. Gander, Golub and Strebel (1994) - algebraic fit + Gauss-Newton for (CF-LS). Chernov and Lesort (2005) - analysis in the 2D case.
17 The least squares GPS localization problem. Advantage: has a statistical and geometrical meaning. Disadvantage: nonconvex and nonsmooth - seems to be intractable. It is therefore important to find a good approximate solution, or a solution to an approximate problem.
19 The Squared Least Squares Approach. Replace ||x - a_i|| ≈ r + d_i with ||x - a_i||^2 ≈ (r + d_i)^2 and remove the constraint r ≥ 0. The squared least squares GPS problem: (GPS-SLS) min_{x,r} sum_{i=1}^m ( ||x - a_i||^2 - (r + d_i)^2 )^2. Disadvantage: loses the statistical/geometrical meaning of LS. Advantage: tractable! (although quartic)
22 Equivalence to GTRS. (GPS-SLS): min_{x,r} sum_{i=1}^m ( ||x - a_i||^2 - (r + d_i)^2 )^2. Introducing alpha = ||x||^2 - r^2, this equals min_{x,r,alpha} { sum_{i=1}^m ( -2a_i^T x + alpha - 2d_i r + ||a_i||^2 - d_i^2 )^2 : alpha = ||x||^2 - r^2 }. Lemma: problem (GPS-SLS) is equivalent to min_{y in R^{n+2}} { ||By - b||^2 : y^T D y - 2g^T y = 0 }, where y = (x; alpha; r), the i-th row of B is (-2a_i^T, 1, -2d_i), b_i = d_i^2 - ||a_i||^2, D = diag(I_n, 0, -1) and g = (0_n; 1/2; 0).
25 Tractability of GTRS Problems. Generalized trust region subproblem: (GTRS) min{ x^T A_1 x + 2b_1^T x + c_1 : x^T A_2 x + 2b_2^T x + c_2 = 0 }. Theorem (Moré, 93). Suppose that A_2 ≠ 0. Then x is an optimal solution of (GTRS) if and only if there exists lambda in R such that (A_1 + lambda A_2)x + (b_1 + lambda b_2) = 0, x^T A_2 x + 2b_2^T x + c_2 = 0, and A_1 + lambda A_2 is positive semidefinite. The problem can be solved by a dual approach via a one-dimensional search.
26 Tractability of the GTRS problem, cont'd. The global optimal solution of (GPS-SLS) is comprised of the first n components of the vector y(lambda*) = (B^T B + lambda* D)^{-1} (B^T b + lambda* g), where lambda* is the root of phi(lambda) := y(lambda)^T D y(lambda) - 2g^T y(lambda) = 0 over a predefined interval [mu_1, mu_2].
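The construction and the one-dimensional search can be sketched as follows. This is not the authors' implementation: the variable ordering y = (x, alpha, r) is my assumption from the slide's construction, and the interval [mu_1, mu_2] is replaced by a simple grid scan over lambda that picks the point where |phi| is smallest.

```python
import numpy as np

def build_gtrs(a, d):
    """Assemble B, b, D, g for min ||B y - b||^2 s.t. y^T D y - 2 g^T y = 0,
    with y = (x, alpha, r) and alpha = ||x||^2 - r^2 (assumed ordering)."""
    m, n = a.shape
    B = np.hstack([-2 * a, np.ones((m, 1)), (-2 * d)[:, None]])
    b = d ** 2 - np.sum(a ** 2, axis=1)
    D = np.diag(np.r_[np.ones(n), 0.0, -1.0])
    g = np.r_[np.zeros(n), 0.5, 0.0]
    return B, b, D, g

def solve_gtrs(B, b, D, g, lam_grid=np.linspace(-1.0, 1.0, 2001)):
    """y(lam) = (B^T B + lam D)^{-1}(B^T b + lam g); return the y whose
    constraint value phi(lam) = y^T D y - 2 g^T y is closest to zero."""
    best = (np.inf, None)
    for lam in lam_grid:
        try:
            y = np.linalg.solve(B.T @ B + lam * D, B.T @ b + lam * g)
        except np.linalg.LinAlgError:
            continue
        phi = y @ D @ y - 2 * g @ y
        if abs(phi) < best[0]:
            best = (abs(phi), y)
    return best[1]

# Noiseless hypothetical instance: the exact (x, r) should be recovered.
rng = np.random.default_rng(1)
a = rng.uniform(-10, 10, size=(6, 2))
x_true, r_true = np.array([1.0, -2.0]), 2.0
d = np.linalg.norm(x_true - a, axis=1) - r_true
B, b, D, g = build_gtrs(a, d)
y = solve_gtrs(B, b, D, g)
print(y[:2], y[-1])  # approximately x_true and r_true
```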
30 Existence of an Optimal Solution of (GPS-SLS). Under what assumption does (GPS-SLS) attain an optimal solution? The Basic Assumption: the matrix A~ in R^{m x (n+1)}, whose i-th row is (-2a_i^T, 1), has full column rank; that is, a_1,..., a_m do not reside in a lower-dimensional space. This is rather mild when m ≥ n + 1. Are further assumptions needed?
33 Existence of an Optimal Solution of (GPS-SLS). (GTRS): min_{y in R^{n+2}} { ||By - b||^2 : y^T D y - 2g^T y = 0 }, with B = (A~, -2d) and d = (d_1,..., d_m)^T. Theorem. The minimum of the GTRS problem is attained if at least one of the following conditions is satisfied: (i) d not in Range(A~) - mild when m ≥ n + 2, impossible when m = n + 1; (ii) d in Range(A~) and || [ (A~^T A~)^{-1} A~^T d ]_{1:n} || ≤ 1/2 (the first n components) - mild when m = n + 1.
39 Proof Layout. (GTRS): min_{y in R^{n+2}} { ||By - b||^2 : y^T D y - 2g^T y = 0 }. (i) d not in Range(A~) implies that B = (A~, -2d) has full column rank, so the objective function is coercive. (ii) A known sufficient condition: there exists lambda in R with B^T B + lambda D positive definite, i.e., [ A~^T A~ + lambda E, -2A~^T d ; -2d^T A~, 4||d||^2 - lambda ] ≻ 0, where E = diag(I_n, 0). By the Schur complement, this holds iff there exists lambda with g(lambda) := 4||d||^2 - lambda - 4d^T A~ (A~^T A~ + lambda E)^{-1} A~^T d > 0. Since d in Range(A~) gives g(0) = 0, it is enough to prove that g'(0) ≥ 0. Final step: g'(0) ≥ 0 iff || [ (A~^T A~)^{-1} A~^T d ]_{1:n} || ≤ 1/2.
42 The Circle Fitting SLS Problem. (CF-SLS): min_{x,r} { sum_{i=1}^m ( ||x - a_i||^2 - r^2 )^2 : x in R^n, r in R }. A stronger result - Theorem: (CF-SLS) is equivalent to the linear least squares problem min_y ||A~ y - b||^2, where b = (-||a_1||^2,..., -||a_m||^2)^T (2D case: Kasa 1976). Proof idea: write min_{x,r} sum_{i=1}^m ( -2a_i^T x + R + ||a_i||^2 )^2 with R := ||x||^2 - r^2, and observe that it is possible to discard the relation between R, x and r.
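A sketch of this reduction as an algebraic (Kasa-style) fit, with the center recovered from the first n components and the radius from r^2 = ||x||^2 - alpha; the test data is a hypothetical exact circle:

```python
import numpy as np

def fit_circle_sls(a):
    """Algebraic circle fit: min ||A~ y - b||^2 with y = (x, alpha),
    alpha = ||x||^2 - r^2, b_i = -||a_i||^2 (sketch of the slide's reduction)."""
    m, n = a.shape
    A_tilde = np.hstack([-2 * a, np.ones((m, 1))])
    b = -np.sum(a ** 2, axis=1)
    y, *_ = np.linalg.lstsq(A_tilde, b, rcond=None)
    x, alpha = y[:n], y[n]
    r = np.sqrt(max(x @ x - alpha, 0.0))
    return x, r

# Points sampled exactly on a circle of center (3, -1) and radius 5.
t = np.linspace(0, 2 * np.pi, 8, endpoint=False)
pts = np.array([3.0, -1.0]) + 5.0 * np.c_[np.cos(t), np.sin(t)]
center, radius = fit_circle_sls(pts)
print(center, radius)  # approximately [3, -1] and 5
```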
46 So far... Main objective: solution of the nonsmooth/nonconvex GPS-LS problem (GPS-LS): min_x sum_{i=1}^m (||x - a_i|| - d_i - r(x))^2. The related GPS-SLS problem (GPS-SLS): min sum_{i=1}^m ( ||x - a_i||^2 - (r + d_i)^2 )^2 is tractable (equivalent to a GTRS, or to linear least squares in the circle fitting case). Attainability of the GPS-SLS solution holds under rather mild conditions. What about the GPS-LS problem?
47 Illustration of the superiority of CF-LS over CF-SLS. (CF-LS) can give pretty good results, but... [figure: LS vs. SLS circle fits]
48 Analysis of the GPS-LS Problem. (GPS-LS): min_x f(x) := sum_{i=1}^m (||x - a_i|| - d_i - r(x))^2, where r(x) := [ (1/m) sum_{i=1}^m (||x - a_i|| - d_i) ]_+.
50 Existence of an Optimal Solution. A related question: what is liminf_{||x|| -> infinity} f(x)? Theorem: liminf_{||x|| -> infinity} f(x) = min { (Az + d)^T ( I_m - (1/m) 1_m 1_m^T ) (Az + d) : ||z|| = 1 } =: f_liminf, where A := (a_1,..., a_m)^T and 1_m = ones(m,1). f_liminf can be efficiently computed via the solution of a GTRS.
53 Auxiliary Lemmata. Lemma 1: let z be an optimal solution of the liminf problem. Then the sequence x_k = kz satisfies ||x_k|| -> infinity and lim_{k -> infinity} f(x_k) = f_liminf; i.e., liminf_{||x|| -> infinity} f(x) ≤ f_liminf. Lemma 2: f(x) ≥ A(x) f_liminf + C(x), where A(x) -> 1 and C(x) -> 0 as ||x|| -> infinity; i.e., liminf_{||x|| -> infinity} f(x) ≥ f_liminf. Together: liminf_{||x|| -> infinity} f(x) = f_liminf.
55 Sufficient Conditions for Attainability. [SC1]: there exists x̄ in R^n such that f(x̄) < f_liminf - essentially incomputable. [SC2]: f(x_sls) < f_liminf, where x_sls is an optimal solution of (GPS-SLS) - verifiable.
57 Is the sufficient condition likely to be satisfied? m = 6, n = 2. a_j and x randomly generated from [-10,10] x [-10,10]; r randomly generated via N(0, 10^2); repeated over many realizations; d_j = ||x - a_j|| - r + eps_j, eps_j ~ N(0, sigma^2). N_sigma - the number of runs in which [SC2] is satisfied. [table: N_sigma for various sigma; values lost in transcription] When sigma is not large, [SC2] is satisfied. sigma = 10 is a huge standard deviation (the pseudoranges are essentially random).
61 The meaning of f_liminf for (CF-LS). For circle fitting (d = 0): f_liminf = min { z^T A^T ( I_m - (1/m) 1_m 1_m^T ) A z : ||z|| = 1 } = lambda_min [ A^T ( I_m - (1/m) 1_m 1_m^T ) A ]. Question: what is the meaning of this eigenvalue? Answer: it is the optimal value of the orthogonal regression problem.
63 The Orthogonal Regression Problem. Given a set of points {a_1,..., a_m}, find a hyperplane H_{x,y} := { a in R^n : x^T a = y } minimizing the sum of squared Euclidean distances to the points: f_OR = min_{x,y} { sum_{i=1}^m d(a_i, H_{x,y})^2 : 0 ≠ x in R^n, y in R }. Theorem: f_OR = f_liminf.
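The eigenvalue characterization above can be checked numerically: for exactly collinear points the best-fit hyperplane (line) passes through all of them, so the smallest eigenvalue of the centered scatter matrix should be zero. A minimal sketch:

```python
import numpy as np

def f_or(a):
    """Smallest eigenvalue of A^T (I - (1/m) 1 1^T) A: the optimal value of
    orthogonal regression (sum of squared distances to the best hyperplane)."""
    m = a.shape[0]
    J = np.eye(m) - np.ones((m, m)) / m  # centering projection
    return np.linalg.eigvalsh(a.T @ J @ a).min()

# Exactly collinear 2D points: the orthogonal-regression value is 0.
line = np.array([[t, 2 * t + 1] for t in np.linspace(-3, 3, 9)])
print(f_or(line))  # ~0
```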
64 Circle Fitting versus Orthogonal Regression. A sequence of circles with corresponding objective function values converging to the liminf. [figure: fitted circles for k = 1, 2, 3, 4, flattening toward a line]
65 [SC1] for (CF-LS) Revisited. [SC1]: f(x̄) < f_liminf for some x̄. Interpretation: it is better to fit the points with a circle than with a line.
67 A Fixed Point Method for Solving (GPS-LS). First observation: f(x) = sum_{i=1}^m (||x - a_i|| - d_i - r(x))^2 = sum_{i=1}^m (||x - a_i|| - d_i)^2 - m r(x)^2. Here sum_{i=1}^m (||x - a_i|| - d_i)^2 is the objective function of the source localization problem, and m r(x)^2 = m [ (1/m) sum_{i=1}^m (||x - a_i|| - d_i) ]_+^2 is a convex function. The source localization problem: given noisy observations of the distances between the source and the sensors, d_i ≈ ||x - a_i||, find a good estimate of x.
71 A Fixed Point Method for Solving (GPS-LS). A generalization of a fixed point method constructed for the source localization problem (Beck, Teboulle, Chikishev, 2008). Let A = {a_1, a_2,..., a_m}. Optimality condition (x not in A): grad f(x) = 0, which is equivalent to x = (1/m) sum_{i=1}^m a_i + (1/m) sum_{i=1}^m (r(x) + d_i) (x - a_i) / ||x - a_i|| =: T(x). A fixed point method for solving (GPS-LS): Initialization - choose x_0 not in A. General step - x_{k+1} = T(x_k), k = 0, 1, 2,...
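The fixed point iteration above can be sketched as follows; the instance is hypothetical, and the initial point x_0 = 0 is simply assumed to avoid the set A. The monotone decrease of the objective (established later via the auxiliary function) is what the run exhibits:

```python
import numpy as np

def r_of_x(x, a, d):
    return max(np.mean(np.linalg.norm(x - a, axis=1) - d), 0.0)

def f_gps_ls(x, a, d):
    res = np.linalg.norm(x - a, axis=1) - d - r_of_x(x, a, d)
    return np.sum(res ** 2)

def T(x, a, d):
    """Fixed point operator: anchor mean + mean of (r(x)+d_i) unit directions."""
    diffs = x - a
    units = diffs / np.linalg.norm(diffs, axis=1)[:, None]
    return a.mean(axis=0) + np.mean((r_of_x(x, a, d) + d)[:, None] * units, axis=0)

rng = np.random.default_rng(2)
a = rng.uniform(-10, 10, size=(6, 2))
x_true, r_true = np.array([1.0, -2.0]), 2.0
d = np.linalg.norm(x_true - a, axis=1) - r_true + rng.normal(0, 0.01, 6)
x = np.zeros(2)  # hypothetical initial point, assumed not in {a_i}
vals = [f_gps_ls(x, a, d)]
for _ in range(200):
    x = T(x, a, d)
    vals.append(f_gps_ls(x, a, d))
print(vals[0], vals[-1])  # the objective values decrease monotonically
```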
73 A fixed point method for solving (CF-LS). Initialization - choose x_0 not in A. General step - x_{k+1} = (1/m) sum_{i=1}^m a_i + r(x_k) (1/m) sum_{i=1}^m (x_k - a_i) / ||x_k - a_i||, k = 0, 1, 2,... Question I: can we prove monotonicity/convergence to a stationary point? Question II: can we avoid the nondifferentiability points A?
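A sketch of the circle fitting special case, run on hypothetical noisy samples of a known circle and started at the centroid (assumed not to coincide with a data point):

```python
import numpy as np

def cf_step(x, a):
    """One CF-LS fixed point step: centroid + r(x) times the mean unit direction."""
    diffs = x - a
    norms = np.linalg.norm(diffs, axis=1)
    r = norms.mean()  # r(x) = (1/m) sum_i ||x - a_i||
    return a.mean(axis=0) + r * (diffs / norms[:, None]).mean(axis=0)

# Noisy samples of a circle centered at (3, -1) with radius 5.
rng = np.random.default_rng(3)
t = rng.uniform(0, 2 * np.pi, 40)
pts = np.array([3.0, -1.0]) + 5.0 * np.c_[np.cos(t), np.sin(t)]
pts += rng.normal(0, 0.05, pts.shape)

x = pts.mean(axis=0)  # start at the centroid
for _ in range(300):
    x = cf_step(x, pts)
print(x)  # close to the true center (3, -1)
```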
79 Convergence Analysis Technique for FP Methods. FP method: x_{k+1} = T(x_k), k = 0, 1, 2,... for solving min { f(x) : x in R^n }. Analysis technique: find an auxiliary function h(x, y) for which (a) T(y) = argmin_x h(x, y) (so that x_{k+1} = argmin_x h(x, x_k)), (b) f(x) ≤ h(x, y) for all x, y, and (c) f(x) = h(x, x) for all x. Results: the FP method is a decreasing scheme (f(x_{k+1}) < f(x_k)); any accumulation point is a stationary point; f(x_k) converges to a function value of a stationary point. Another example: the gradient method, with T(y) = y - (1/L) grad f(y) and h(x, y) = f(y) + <grad f(y), x - y> + (L/2) ||x - y||^2.
80 The Auxiliary Function. h(x, y) = sum_{i=1}^m || x - a_i - (r(y) + d_i) (y - a_i) / ||y - a_i|| ||^2.
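The two properties that make h a valid auxiliary function, h(x, y) ≥ f(x) and h(x, x) = f(x), can be checked numerically on a hypothetical random instance; a minimal sketch:

```python
import numpy as np

def r_of_x(x, a, d):
    return max(np.mean(np.linalg.norm(x - a, axis=1) - d), 0.0)

def f(x, a, d):
    res = np.linalg.norm(x - a, axis=1) - d - r_of_x(x, a, d)
    return np.sum(res ** 2)

def h(x, y, a, d):
    """h(x, y) = sum_i || x - a_i - (r(y)+d_i)(y - a_i)/||y - a_i|| ||^2."""
    diffs = y - a
    units = diffs / np.linalg.norm(diffs, axis=1)[:, None]
    targets = a + (r_of_x(y, a, d) + d)[:, None] * units
    return np.sum(np.linalg.norm(x - targets, axis=1) ** 2)

rng = np.random.default_rng(4)
a = rng.uniform(-10, 10, size=(6, 2))
d = rng.uniform(0, 5, size=6)
worst_gap = np.inf  # min over samples of h(x, y) - f(x); should stay >= 0
diag_err = 0.0      # max over samples of |h(x, x) - f(x)|; should stay ~ 0
for _ in range(200):
    x, y = rng.uniform(-10, 10, 2), rng.uniform(-10, 10, 2)
    worst_gap = min(worst_gap, h(x, y, a, d) - f(x, a, d))
    diag_err = max(diag_err, abs(h(x, x, a, d) - f(x, a, d)))
print(worst_gap, diag_err)
```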
84 Choosing the Initial Point. Potential difficulties: x_k in A for some k; ||x_k|| -> infinity. The solution: choose x_0 satisfying f(x_0) < min { f(a_1),..., f(a_m), f_liminf }. How? If f_liminf ≤ min { f(a_1),..., f(a_m) }: take x_0 = x_sls. If f_liminf > min { f(a_1),..., f(a_m) }: pick p in argmin_{i=1,...,m} f(a_i), find a descent direction d with f'(a_p; d) < 0, and define x_0 = a_p + eps d. Result: it is always possible to construct x_0 satisfying the condition.
86 Comparing (GPS-LS) and (GPS-SLS). m = 6, n = 2. a_j and x randomly generated from [-10,10] x [-10,10]; d_j = ||x - a_j|| - r + eps_j, eps_j ~ N(0, sigma^2). [table: rel. er. SLS, rel. er. LS and I_sigma for various sigma; values lost in transcription] I_sigma - the number of runs in which the LS solution is better than the SLS solution. rel. er. SLS (resp. LS) - the average of ||x_sls - x_true|| / ||x_true|| (resp. ||x_ls - x_true|| / ||x_true||).
87 The End Thank you!
More informationBISTA: a Bregmanian proximal gradient method without the global Lipschitz continuity assumption
BISTA: a Bregmanian proximal gradient method without the global Lipschitz continuity assumption Daniel Reem (joint work with Simeon Reich and Alvaro De Pierro) Department of Mathematics, The Technion,
More information8 Numerical methods for unconstrained problems
8 Numerical methods for unconstrained problems Optimization is one of the important fields in numerical computation, beside solving differential equations and linear systems. We can see that these fields
More informationAppendix A: Separation theorems in IR n
Appendix A: Separation theorems in IR n These notes provide a number of separation theorems for convex sets in IR n. We start with a basic result, give a proof with the help on an auxiliary result and
More informationConditional Gradient Algorithms for Rank-One Matrix Approximations with a Sparsity Constraint
Conditional Gradient Algorithms for Rank-One Matrix Approximations with a Sparsity Constraint Marc Teboulle School of Mathematical Sciences Tel Aviv University Joint work with Ronny Luss Optimization and
More informationMath 164-1: Optimization Instructor: Alpár R. Mészáros
Math 164-1: Optimization Instructor: Alpár R. Mészáros First Midterm, April 20, 2016 Name (use a pen): Student ID (use a pen): Signature (use a pen): Rules: Duration of the exam: 50 minutes. By writing
More informationCHAPTER 11. A Revision. 1. The Computers and Numbers therein
CHAPTER A Revision. The Computers and Numbers therein Traditional computer science begins with a finite alphabet. By stringing elements of the alphabet one after another, one obtains strings. A set of
More information1. Introduction. Consider the following quadratically constrained quadratic optimization problem:
ON LOCAL NON-GLOBAL MINIMIZERS OF QUADRATIC OPTIMIZATION PROBLEM WITH A SINGLE QUADRATIC CONSTRAINT A. TAATI AND M. SALAHI Abstract. In this paper, we consider the nonconvex quadratic optimization problem
More informationContinuous methods for numerical linear algebra problems
Continuous methods for numerical linear algebra problems Li-Zhi Liao (http://www.math.hkbu.edu.hk/ liliao) Department of Mathematics Hong Kong Baptist University The First International Summer School on
More informationA semi-algebraic look at first-order methods
splitting A semi-algebraic look at first-order Université de Toulouse / TSE Nesterov s 60th birthday, Les Houches, 2016 in large-scale first-order optimization splitting Start with a reasonable FOM (some
More informationPrimal and Dual Variables Decomposition Methods in Convex Optimization
Primal and Dual Variables Decomposition Methods in Convex Optimization Amir Beck Technion - Israel Institute of Technology Haifa, Israel Based on joint works with Edouard Pauwels, Shoham Sabach, Luba Tetruashvili,
More informationA New Trust Region Algorithm Using Radial Basis Function Models
A New Trust Region Algorithm Using Radial Basis Function Models Seppo Pulkkinen University of Turku Department of Mathematics July 14, 2010 Outline 1 Introduction 2 Background Taylor series approximations
More informationLinear Algebra. Paul Yiu. 6D: 2-planes in R 4. Department of Mathematics Florida Atlantic University. Fall 2011
Linear Algebra Paul Yiu Department of Mathematics Florida Atlantic University Fall 2011 6D: 2-planes in R 4 The angle between a vector and a plane The angle between a vector v R n and a subspace V is the
More informationCOURSE SUMMARY FOR MATH 504, FALL QUARTER : MODERN ALGEBRA
COURSE SUMMARY FOR MATH 504, FALL QUARTER 2017-8: MODERN ALGEBRA JAROD ALPER Week 1, Sept 27, 29: Introduction to Groups Lecture 1: Introduction to groups. Defined a group and discussed basic properties
More informationMath 273a: Optimization Overview of First-Order Optimization Algorithms
Math 273a: Optimization Overview of First-Order Optimization Algorithms Wotao Yin Department of Mathematics, UCLA online discussions on piazza.com 1 / 9 Typical flow of numerical optimization Optimization
More informationMaster 2 MathBigData. 3 novembre CMAP - Ecole Polytechnique
Master 2 MathBigData S. Gaïffas 1 3 novembre 2014 1 CMAP - Ecole Polytechnique 1 Supervised learning recap Introduction Loss functions, linearity 2 Penalization Introduction Ridge Sparsity Lasso 3 Some
More informationAmir Beck August 7, Abstract. 1 Introduction and Problem/Model Formulation. In this paper we consider the following minimization problem:
On the Convergence of Alternating Minimization for Convex Programming with Applications to Iteratively Reweighted Least Squares and Decomposition Schemes Amir Beck August 7, 04 Abstract This paper is concerned
More informationConvex Optimization M2
Convex Optimization M2 Lecture 8 A. d Aspremont. Convex Optimization M2. 1/57 Applications A. d Aspremont. Convex Optimization M2. 2/57 Outline Geometrical problems Approximation problems Combinatorial
More informationPart 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL)
Part 3: Trust-region methods for unconstrained optimization Nick Gould (RAL) minimize x IR n f(x) MSc course on nonlinear optimization UNCONSTRAINED MINIMIZATION minimize x IR n f(x) where the objective
More informationA New Look at the Performance Analysis of First-Order Methods
A New Look at the Performance Analysis of First-Order Methods Marc Teboulle School of Mathematical Sciences Tel Aviv University Joint work with Yoel Drori, Google s R&D Center, Tel Aviv Optimization without
More informationUNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems
UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction
More informationGlobally Solving the Trust Region Subproblem Using Simple First-Order Methods
Globally Solving the Trust Region Subproblem Using Simple First-Order Methods Amir Beck and Yakov Vaisbourd October 2, 2017 Abstract We consider the trust region subproblem which is given by a minimization
More informationContinuous Optimisation, Chpt 6: Solution methods for Constrained Optimisation
Continuous Optimisation, Chpt 6: Solution methods for Constrained Optimisation Peter J.C. Dickinson DMMP, University of Twente p.j.c.dickinson@utwente.nl http://dickinson.website/teaching/2017co.html version:
More informationDescent methods. min x. f(x)
Gradient Descent Descent methods min x f(x) 5 / 34 Descent methods min x f(x) x k x k+1... x f(x ) = 0 5 / 34 Gradient methods Unconstrained optimization min f(x) x R n. 6 / 34 Gradient methods Unconstrained
More informationMATH 167: APPLIED LINEAR ALGEBRA Least-Squares
MATH 167: APPLIED LINEAR ALGEBRA Least-Squares October 30, 2014 Least Squares We do a series of experiments, collecting data. We wish to see patterns!! We expect the output b to be a linear function of
More informationA Quick Tour of Linear Algebra and Optimization for Machine Learning
A Quick Tour of Linear Algebra and Optimization for Machine Learning Masoud Farivar January 8, 2015 1 / 28 Outline of Part I: Review of Basic Linear Algebra Matrices and Vectors Matrix Multiplication Operators
More informationUC Berkeley Department of Electrical Engineering and Computer Science. EECS 227A Nonlinear and Convex Optimization. Solutions 6 Fall 2009
UC Berkele Department of Electrical Engineering and Computer Science EECS 227A Nonlinear and Convex Optimization Solutions 6 Fall 2009 Solution 6.1 (a) p = 1 (b) The Lagrangian is L(x,, λ) = e x + λx 2
More informationLecture Notes: Geometric Considerations in Unconstrained Optimization
Lecture Notes: Geometric Considerations in Unconstrained Optimization James T. Allison February 15, 2006 The primary objectives of this lecture on unconstrained optimization are to: Establish connections
More informationStructural and Multidisciplinary Optimization. P. Duysinx and P. Tossings
Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be
More informationAssignment 1 Math 5341 Linear Algebra Review. Give complete answers to each of the following questions. Show all of your work.
Assignment 1 Math 5341 Linear Algebra Review Give complete answers to each of the following questions Show all of your work Note: You might struggle with some of these questions, either because it has
More informationOptimization for Machine Learning
Optimization for Machine Learning (Problems; Algorithms - A) SUVRIT SRA Massachusetts Institute of Technology PKU Summer School on Data Science (July 2017) Course materials http://suvrit.de/teaching.html
More informationNonlinear Programming
Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week
More informationA memory gradient algorithm for l 2 -l 0 regularization with applications to image restoration
A memory gradient algorithm for l 2 -l 0 regularization with applications to image restoration E. Chouzenoux, A. Jezierska, J.-C. Pesquet and H. Talbot Université Paris-Est Lab. d Informatique Gaspard
More information10-725/36-725: Convex Optimization Prerequisite Topics
10-725/36-725: Convex Optimization Prerequisite Topics February 3, 2015 This is meant to be a brief, informal refresher of some topics that will form building blocks in this course. The content of the
More informationLeast Sparsity of p-norm based Optimization Problems with p > 1
Least Sparsity of p-norm based Optimization Problems with p > Jinglai Shen and Seyedahmad Mousavi Original version: July, 07; Revision: February, 08 Abstract Motivated by l p -optimization arising from
More informationALGORITHMS FOR MINIMIZING DIFFERENCES OF CONVEX FUNCTIONS AND APPLICATIONS
ALGORITHMS FOR MINIMIZING DIFFERENCES OF CONVEX FUNCTIONS AND APPLICATIONS Mau Nam Nguyen (joint work with D. Giles and R. B. Rector) Fariborz Maseeh Department of Mathematics and Statistics Portland State
More informationLow-Rank Factorization Models for Matrix Completion and Matrix Separation
for Matrix Completion and Matrix Separation Joint work with Wotao Yin, Yin Zhang and Shen Yuan IPAM, UCLA Oct. 5, 2010 Low rank minimization problems Matrix completion: find a low-rank matrix W R m n so
More informationThe proximal mapping
The proximal mapping http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes Outline 2/37 1 closed function 2 Conjugate function
More informationDoubly Constrained Robust Capon Beamformer with Ellipsoidal Uncertainty Sets
Doubly Constrained Robust Capon Beamformer with Ellipsoidal Uncertainty Sets 1 Amir Beck and Yonina C. Eldar Abstract The doubly constrained robust DCR) Capon beamformer with a spherical uncertainty set
More informationConvex Optimization. Newton s method. ENSAE: Optimisation 1/44
Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)
More informationLEAST SQUARES SOLUTION TRICKS
LEAST SQUARES SOLUTION TRICKS VESA KAARNIOJA, JESSE RAILO AND SAMULI SILTANEN Abstract This handout is for the course Applications of matrix computations at the University of Helsinki in Spring 2018 We
More informationMathematical Optimisation, Chpt 2: Linear Equations and inequalities
Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson
More informationREAL RENORMINGS ON COMPLEX BANACH SPACES
REAL RENORMINGS ON COMPLEX BANACH SPACES F. J. GARCÍA PACHECO AND A. MIRALLES Abstract. In this paper we provide two ways of obtaining real Banach spaces that cannot come from complex spaces. In concrete
More informationSparsity Regularization
Sparsity Regularization Bangti Jin Course Inverse Problems & Imaging 1 / 41 Outline 1 Motivation: sparsity? 2 Mathematical preliminaries 3 l 1 solvers 2 / 41 problem setup finite-dimensional formulation
More informationMathematical Optimisation, Chpt 2: Linear Equations and inequalities
Introduction Gauss-elimination Orthogonal projection Linear Inequalities Integer Solutions Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl
More informationConstrained optimization. Unconstrained optimization. One-dimensional. Multi-dimensional. Newton with equality constraints. Active-set method.
Optimization Unconstrained optimization One-dimensional Multi-dimensional Newton s method Basic Newton Gauss- Newton Quasi- Newton Descent methods Gradient descent Conjugate gradient Constrained optimization
More informationAlgorithms for Nonsmooth Optimization
Algorithms for Nonsmooth Optimization Frank E. Curtis, Lehigh University presented at Center for Optimization and Statistical Learning, Northwestern University 2 March 2018 Algorithms for Nonsmooth Optimization
More informationNumerical Optimization Techniques
Numerical Optimization Techniques Léon Bottou NEC Labs America COS 424 3/2/2010 Today s Agenda Goals Representation Capacity Control Operational Considerations Computational Considerations Classification,
More informationOptimization. Yuh-Jye Lee. March 21, Data Science and Machine Intelligence Lab National Chiao Tung University 1 / 29
Optimization Yuh-Jye Lee Data Science and Machine Intelligence Lab National Chiao Tung University March 21, 2017 1 / 29 You Have Learned (Unconstrained) Optimization in Your High School Let f (x) = ax
More informationLeast Squares Optimization
Least Squares Optimization The following is a brief review of least squares optimization and constrained optimization techniques, which are widely used to analyze and visualize data. Least squares (LS)
More informationMATHEMATICAL OPTIMIZATION FOR THE INVERSE PROBLEM OF INTENSITY MODULATED RADIATION THERAPY
MATHEMATICAL OPTIMIZATION FOR THE INVERSE PROBLEM OF INTENSITY MODULATED RADIATION THERAPY Yair Censor, DSc Department of Mathematics, University of Haifa, Haifa, Israel The 2003 AAPM Summer School on
More informationMatrix Derivatives and Descent Optimization Methods
Matrix Derivatives and Descent Optimization Methods 1 Qiang Ning Department of Electrical and Computer Engineering Beckman Institute for Advanced Science and Techonology University of Illinois at Urbana-Champaign
More informationEE 546, Univ of Washington, Spring Proximal mapping. introduction. review of conjugate functions. proximal mapping. Proximal mapping 6 1
EE 546, Univ of Washington, Spring 2012 6. Proximal mapping introduction review of conjugate functions proximal mapping Proximal mapping 6 1 Proximal mapping the proximal mapping (prox-operator) of a convex
More informationLecture 12 Unconstrained Optimization (contd.) Constrained Optimization. October 15, 2008
Lecture 12 Unconstrained Optimization (contd.) Constrained Optimization October 15, 2008 Outline Lecture 11 Gradient descent algorithm Improvement to result in Lec 11 At what rate will it converge? Constrained
More informationAppendix A: Matrices
Appendix A: Matrices A matrix is a rectangular array of numbers Such arrays have rows and columns The numbers of rows and columns are referred to as the dimensions of a matrix A matrix with, say, 5 rows
More information13. Nonlinear least squares
L. Vandenberghe ECE133A (Fall 2018) 13. Nonlinear least squares definition and examples derivatives and optimality condition Gauss Newton method Levenberg Marquardt method 13.1 Nonlinear least squares
More informationECS289: Scalable Machine Learning
ECS289: Scalable Machine Learning Cho-Jui Hsieh UC Davis Sept 29, 2016 Outline Convex vs Nonconvex Functions Coordinate Descent Gradient Descent Newton s method Stochastic Gradient Descent Numerical Optimization
More informationNumerical behavior of inexact linear solvers
Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth
More information