ECE580 Exam 2                                        November 01, 2011

Name:                                    Score:        /100

You must show ALL of your work for full credit. This exam is closed-book. Calculators may NOT be used. Please leave fractions as fractions, etc.; I do not want the decimal equivalents. Please write on only one side of each page. Extra paper is available from the instructor.

Section I: Calculations. Solve the following problems.

1. (20 points) You are given the two data sets

       A_0 = [ 0  1 ]        b_0 = [ 0 ]
             [ 1  1 ]              [ 1 ]        (1)
             [ 1  0 ]              [ 1 ]

   and

       A_1 = [ 0  1 ]        b_1 = [ 2 ]        (2)

   Using the recursive least squares method to solve the combined system of equations, find x(1).
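For checking work afterward, one common form of the recursive least-squares update can be sketched in NumPy as follows: the batch solution for (A_0, b_0) is refined with the single new row (A_1, b_1). Variable names are illustrative.

```python
import numpy as np

# Batch least-squares solution and inverse Grammian for the first data set.
A0 = np.array([[0., 1.], [1., 1.], [1., 0.]])
b0 = np.array([0., 1., 1.])
P0 = np.linalg.inv(A0.T @ A0)        # P(0) = (A0^T A0)^{-1}
x0 = P0 @ (A0.T @ b0)                # x(0), the batch LS estimate

# Recursive update with the new row (A1, b1).
a1 = np.array([0., 1.])              # new regressor row
b1 = 2.0                             # new measurement
K = P0 @ a1 / (1.0 + a1 @ P0 @ a1)   # gain vector
x1 = x0 + K * (b1 - a1 @ x0)         # x(1)
P1 = P0 - np.outer(K, a1 @ P0)       # updated inverse Grammian P(1)

print(x0, x1)                        # x(0) = [1, 0], x(1) = [3/5, 4/5]
```

The update reproduces the batch solution of the stacked system, which is a quick way to verify a hand computation.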
2. (10 points) An armature-controlled DC motor with negligible armature inductance is modeled by the transfer function

       ω(s)/V_a(s) = (1/2) / (K_1 s + K_2),        (3)

   corresponding to the differential equation

       K_1 ω̇ + K_2 ω = V_a / 2.        (4)

   Given the following measurements of ω̇, ω, and V_a, find the least squares estimates of K_1 and K_2.

        ω̇     ω     V_a
        0     1      3
       -1     0     -4
        0    -1     -5
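As a numerical check, each measurement row gives one equation K_1 ω̇ + K_2 ω = V_a/2, so the estimates follow from an ordinary least-squares solve (a sketch, using the table above):

```python
import numpy as np

# Rows are (wdot, w, Va) from the measurement table.
data = np.array([[ 0.,  1.,  3.],
                 [-1.,  0., -4.],
                 [ 0., -1., -5.]])
A = data[:, :2]            # regressor matrix with columns [wdot, w]
b = data[:, 2] / 2.0       # right-hand side Va/2

# Least-squares estimate of [K1, K2].
k, *_ = np.linalg.lstsq(A, b, rcond=None)
print(k)                   # [K1, K2] = [2, 2]
```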
3. (5 points) With x(0) = 0, using the conjugate direction algorithm to find the minimizer of

       f(x_1, x_2) = (1/2) x^T [ 2  1 ] x  -  x^T [ 1 ]
                               [ 1  2 ]           [ 1 ]

   determine d(1).
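The core step here is making the new direction Q-conjugate to the previous one. A sketch of that Gram-Schmidt-style construction, assuming Q = [[2, 1], [1, 2]] as written above and an illustrative first direction d0 = [1, 0] (the actual d(0) depends on the variant of the algorithm used):

```python
import numpy as np

Q = np.array([[2., 1.], [1., 2.]])   # quadratic-form matrix (assumed)

# Make the candidate direction v Q-conjugate to d0.
d0 = np.array([1., 0.])              # illustrative first direction
v  = np.array([0., 1.])              # candidate second direction
d1 = v - (v @ Q @ d0) / (d0 @ Q @ d0) * d0

print(d1, d1 @ Q @ d0)               # conjugacy: d1^T Q d0 = 0
```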
4. (10 points) Starting at the origin, using the conjugate gradient algorithm to find the minimizer of

       f(x_1, x_2) = (1/2) x^T [ 2  1 ] x  -  x^T [ 1 ]
                               [ 1  2 ]           [ 1 ]

   determine x(1).
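The conjugate gradient iteration for a quadratic f(x) = (1/2) x^T Q x - x^T b can be sketched as below, assuming Q = [[2, 1], [1, 2]] and b = [1, 1] as written above (verify against your copy of the exam):

```python
import numpy as np

Q = np.array([[2., 1.], [1., 2.]])   # assumed quadratic-form matrix
b = np.array([1., 1.])               # assumed linear term

x = np.zeros(2)                      # x(0) = origin
g = Q @ x - b                        # gradient at x(0)
d = -g                               # d(0): negative gradient
for _ in range(2):                   # at most n = 2 exact steps
    if np.linalg.norm(g) < 1e-12:    # already at the minimizer
        break
    alpha = (g @ g) / (d @ Q @ d)    # exact line-search step size
    x = x + alpha * d                # x(k+1)
    g_new = Q @ x - b
    beta = (g_new @ g_new) / (g @ g) # beta for the quadratic case
    d = -g_new + beta * d            # conjugate direction update
    g = g_new

print(x)                             # converges to Q^{-1} b
```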
5. (10 pts) Using Newton's method to find the minimizer of

       f(x_1, x_2) = 3x_1^2 + 3x_1 x_2 - x_2^2 + x_1 - x_2 + 8        (5)

   with x(0) = [2  1]^T, find x(1).
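A sketch of the Newton step, taking the signs of the objective as f = 3x1^2 + 3x1x2 - x2^2 + x1 - x2 + 8 (an assumption where the printed signs are unclear; check your copy). Because f is quadratic, one Newton step lands on the stationary point:

```python
import numpy as np

def grad(x):
    # Gradient of the assumed objective.
    x1, x2 = x
    return np.array([6*x1 + 3*x2 + 1, 3*x1 - 2*x2 - 1])

H = np.array([[6., 3.], [3., -2.]])      # Hessian (constant for a quadratic)

x0 = np.array([2., 1.])
x1 = x0 - np.linalg.solve(H, grad(x0))   # x(1) = x(0) - H^{-1} g(0)

print(x1, grad(x1))                      # gradient vanishes after one step
```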
Section II: (30 pts.) Answer the following multiple choice questions. Circle all answers that apply. (If no answer applies, don't circle any of them.)

1. If A is n × n, which algorithm requires only n steps to solve for the minimizer?
   (a) Newton's Method
   (b) the Conjugate Direction Method
   (c) the Conjugate Gradient Method

2. What is the difference between Newton's method and the Quasi-Newton methods?
   (a) the Quasi-Newton method uses two parameters α_k and β_k instead of just α_k
   (b) the Quasi-Newton method avoids calculating the Hessian
   (c) the Quasi-Newton method is more accurate

3. How is the initial direction chosen for the conjugate gradient algorithm?
   (a) at random
   (b) as the gradient of the function evaluated at the initial point
   (c) opposite the gradient of the function evaluated at the initial point

4. In the rank one correction formula, which of the following has rank one?
   (a) H_k
   (b) α_k
   (c) z^(k) z^(k)T

5. How does the conjugate gradient method for nonquadratic problems differ from the conjugate gradient method for quadratic problems?
   (a) β_k is not a function of Q
   (b) β_k = 0
   (c) the direction update is a function of α_k rather than β_k

6. Which of the probabilistic methods for global search involve(s) simultaneous evaluation of the function at a number of different points?
   (a) Simulated Annealing
   (b) Particle Swarm Optimization
   (c) Genetic Algorithms

7. Which of the probabilistic methods for global search involves choosing a temperature schedule?
   (a) Simulated Annealing
   (b) Particle Swarm Optimization
   (c) Genetic Algorithms
8. Which of the probabilistic methods for global search involves a temperature schedule?
   (a) Simulated Annealing
   (b) Particle Swarm Optimization
   (c) Genetic Algorithms

9. Where is the best overall value used in the Particle Swarm Optimization method?
   (a) velocity update
   (b) position update
   (c) gradient update

10. Which of the following formulas for the pseudoinverse should be used in the case that A is m × n and has rank n, which is less than or equal to m?
   (a) A† = (A^T A)^{-1} A^T
   (b) A† = A^T (A A^T)^{-1}
   (c) A† = A^T (A^T A)^{-1}
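The full-column-rank case in question 10 is easy to check numerically: for an m × n matrix with rank n ≤ m, the left pseudoinverse (A^T A)^{-1} A^T agrees with NumPy's SVD-based pinv. A sketch, reusing the A_0 matrix from Section I as an example:

```python
import numpy as np

A = np.array([[0., 1.], [1., 1.], [1., 0.]])   # m = 3, n = 2, rank 2
A_dag = np.linalg.inv(A.T @ A) @ A.T           # left pseudoinverse

print(np.allclose(A_dag, np.linalg.pinv(A)))   # True
```

Note that A† A = I (a left inverse) here, while A A† is only a projection onto the column space of A.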
Section III: (20 pts) Short Answers

1. Is the pseudoinverse unique?

2. What is the initialization step of the Particle Swarm Optimization method?

3. Which optimization algorithms explicitly calculate a direction vector at each iteration?

4. When is the factorization method used for solving linear equations?
5. Why does one use recursive least squares?