QUALIFYING EXAM IN SYSTEMS ENGINEERING
Written Exam: May 23, 2017, 9:00AM to 1:00PM, EMB 105
Oral Exam: May 25 or 26, 2017; Time/Location TBA (~1 hour per student)
CLOSED BOOK, NO CHEAT SHEETS
BASIC SCIENTIFIC CALCULATOR PERMITTED
ALL EXAM MATERIALS STAY IN THE EXAM ROOM

GENERAL INSTRUCTIONS:
1) Please write on every sheet:
   a. Your Exam Number
   b. The page numbers (example: Page 1 of 4)
2) Only write on 1 side. Exams may be scanned and emailed to the faculty for grading. If using pencil, make sure it is dark.

COMPLETE THE REQUIRED SECTIONS AS BELOW:
The exam consists of three topical sections. Select three of the following five sections:
A. Dynamic Systems Theory (SE/EC/ME 501)
B. Continuous Stochastic Processes (EC 505) OR Discrete Stochastic Processes (EK 500 and SE/ME 714)
C. Optimization (SE/EC 524)
D. Dynamic Programming and Stochastic Control (SE/EC/ME 710)
E. Nonlinear Systems and Control (SE/ME 762)
Section A: SE 501, Baillieul
SE Linear Systems Qualifying Exam - 2017

1. Let $A$, $B$, $C$, and $D$ be $n \times n$ matrices. Compute the inverse of
$$M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$$
in terms of $A$, $B$, $C$, and $D$. State clearly any additional assumptions needed.

2. A real square matrix $A$ is symmetric if $A^T = A$. Show that for a symmetric matrix (a) the eigenvalues are all real; (b) if $e_i$ and $e_j$ are eigenvectors associated with eigenvalues $\lambda_i$ and $\lambda_j$ where $\lambda_i \neq \lambda_j$, then the inner product $\langle e_i, e_j \rangle = 0$.

3. Consider a linear control system
$$\dot{x} = A(t)x + B(t)u, \qquad (1)$$
where $x \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^m$, $A(t)$ is an $n \times n$ matrix whose entries are real continuous functions of $t$, and $B(t)$ is an $n \times m$ real matrix, also having real continuous entries.
(a) State in words what it means for this system to be controllable.
(b) Write down an algebraic condition that is necessary and sufficient for the system to be controllable.
(c) Let $x, y \in \mathbb{R}^n$, and let $u_{xy}(\cdot)$ be a continuous control input taking values in $\mathbb{R}^m$ that steers (1) in $T > 0$ units of time from $x(0) = x$ to $x(T) = y$ such that $\eta = \int_0^T \|u_{xy}(s)\|^2\,ds$ takes on the minimum value among all control inputs that steer the system from $x$ to $y$ in $T$ units of time. Write down an explicit expression for $u_{xy}(t)$.
(d) Consider the linear system
$$\begin{pmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u(t). \qquad (2)$$
Is it controllable? (Relate your answer to your response to part (a) above.)
(e) Find the control input that steers the state of (2) from $\binom{0}{0}$ to $\binom{\cos\theta}{\sin\theta}$ in one unit of time so as to minimize the performance metric
$$\eta = \int_0^1 \|u(t)\|^2\,dt. \qquad (3)$$
(f) Find $\theta$ such that the point $\binom{\cos\theta}{\sin\theta}$ is the minimum cost-to-reach point on the unit circle in terms of the quasimetric $\eta$.
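A numerical companion to parts (c)-(e) (not part of the exam): for the double integrator (2), the minimum-energy control steering $x(0) = x$ to $x(T) = y$ is $u(t) = B^T \Phi(T,t)^T W(0,T)^{-1}(y - \Phi(T,0)x)$, where $W(0,T)$ is the controllability Gramian. The sketch below builds the Gramian, applies this control, and checks that the state lands on the target; the target angle $\theta = 0.7$ is an arbitrary illustrative choice, not a value from the exam.

```python
import numpy as np

# Double-integrator data from part (d): A = [[0,1],[0,0]], B = [0,1]'.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
T = 1.0

def Phi(t, s):
    # State transition matrix exp(A(t-s)); A is nilpotent, so the series stops.
    return np.array([[1.0, t - s], [0.0, 1.0]])

# Controllability Gramian W(0,T) = int_0^T Phi(T,s) B B' Phi(T,s)' ds,
# here by a midpoint Riemann sum (analytically W = [[1/3, 1/2], [1/2, 1]]).
N = 20000
mid = np.linspace(0.0, T, N, endpoint=False) + T / (2 * N)
W = sum(Phi(T, s) @ B @ B.T @ Phi(T, s).T for s in mid) * (T / N)

theta = 0.7                            # hypothetical target angle
y = np.array([np.cos(theta), np.sin(theta)])
lam = np.linalg.inv(W) @ y             # x(0) = 0, so y - Phi(T,0)x = y

def u(t):
    # Minimum-energy control u(t) = B' Phi(T,t)' W^{-1} y.
    return (B.T @ Phi(T, t).T @ lam).item()

# Forward-Euler simulation of xdot = A x + B u; the state should land on y.
x = np.zeros(2)
dt = T / N
for k in range(N):
    x = x + dt * (A @ x + B.flatten() * u(k * dt))

print(x, y)                            # x is close to y
print("optimal energy:", y @ np.linalg.inv(W) @ y)
```

The optimal cost is $\eta = y^T W^{-1} y$, which is what part (f) asks to minimize over points on the unit circle.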
Section B: EC 505, Saligrama
SE 2017 Stochastic Processes Qualifiers

You are the head engineer on a star ship which has just made contact with the warlike Zap-ari, who are preparing to attack with their deadly Zap-a-tron™ ray. Your only hope for survival is to estimate the ray process $X(t)$ so that you can cancel its effects.

(a) The original Zap-a-tron device functioned by passing a zero-mean, white Gaussian process $V(t)$ with autocorrelation function $R_{VV}(\tau) = E[V(t)V(t+\tau)] = 2\delta(\tau)$ through a plasma resonance chamber which acted as a linear filter, as depicted in the figure below. What is the power spectral density $S_{XX}(f)$ of the ray $X(t)$? Is the ray $X(t)$ a stationary, weakly (or wide-sense) stationary, or non-stationary process? Explain.

[Figure: $V(t)$ passes through the plasma resonance chamber (a linear filter) to produce $X(t)$.]

Note: Parts (b)-(f) are independent of part (a).

(b) The Zap-ari, suspecting that you might try the old ray cancellation trick, have upgraded to Zap-a-tron98, which functions by randomly firing one of two possible zero-mean, stationary, Gaussian processes $X_0(t)$ or $X_1(t)$, with associated autocorrelation functions $R_{X_0X_0}(\tau)$ and $R_{X_1X_1}(\tau)$, chosen with equal probability. In other words, the actual beam fired $X(t) = X_Z(t)$ is controlled by an independent Bernoulli random variable $Z$ with $\Pr(Z = 0) = \Pr(Z = 1) = 1/2$, which is chosen once at the start of the battle. Is the ray process $X(t)$ so obtained a Gaussian random process? Explain.

(c) As the beam fires you can obtain one observation of the beam at time $t = 0$: $Y = X(0)$. If you knew that the beam being fired was $X_0(t)$ (i.e., you knew that the value of $Z = 0$), what would be the minimum mean square error estimate $\hat{X}(t)$ of the beam $X(t)$ at any other time $t$ based on $Y$? Give your answer in terms of $R_{X_0X_0}(\tau)$, $R_{X_1X_1}(\tau)$, and $Y$.

(d) What is the mean square error if you are wrong about which beam is being fired (i.e., what is $E\{[X(t) - \hat{X}(t)]^2 \mid Z = 1\}$)? Give your answer in terms of $R_{X_0X_0}(\tau)$ and $R_{X_1X_1}(\tau)$.
(e) Since you can block more power when you are correct about which process is being fired, you decide to use detection theory to decide which beam is being used. What is the minimum probability of error decision rule for deciding which beam is being fired (i.e., for deciding the value of $Z$) based on observation of $Y$ (assuming $R_{X_1X_1}(0) > R_{X_0X_0}(0)$)?

(f) Consider two approaches to estimating the unknown ray $X(t)$. One is to make the optimal decision for the value of $Z$ based on $Y$ (as in (e)) and then generate an estimate for $X(t)$ based on this value (as in (c)), resulting in the estimate $\hat{X}_{\text{det}}(t)$. Another approach would be to estimate the conditional mean: $\hat{X}_{\text{con}}(t) = E[X(t) \mid Y]$. Which estimate would give the lower overall MSE? Explain. (Note: You do not need to find the MSE for the two cases to answer this question.)
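A numerical companion to part (c) (not part of the exam): for a zero-mean stationary Gaussian process, the MMSE estimate of $X(t)$ from $Y = X(0)$ is the linear estimator $\hat{X}(t) = \frac{R(t)}{R(0)} Y$, with MMSE $R(0) - R(t)^2/R(0)$. The sketch below checks this by Monte Carlo; the autocorrelation $R(\tau) = e^{-|\tau|}$ is an assumed example, not from the exam.

```python
import numpy as np

# Monte-Carlo check of the jointly-Gaussian MMSE estimator.
# Assumption: an Ornstein-Uhlenbeck-style autocorrelation R(tau) = exp(-|tau|),
# used only as a concrete stand-in for R_{X0X0}.
rng = np.random.default_rng(0)
R = lambda tau: np.exp(-abs(tau))

t = 0.8                                    # estimate X(t) from Y = X(0)
cov = np.array([[R(0.0), R(t)], [R(t), R(0.0)]])
samples = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
Y, Xt = samples[:, 0], samples[:, 1]

Xhat = R(t) / R(0.0) * Y                   # MMSE estimate for zero-mean Gaussians
mse_emp = np.mean((Xt - Xhat) ** 2)
mse_th = R(0.0) - R(t) ** 2 / R(0.0)       # theoretical MMSE
print(mse_emp, mse_th)                     # the two agree to within ~1%
```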
Section B: EK 500/SE 714, Vakili
Qualifying Examination, Discrete Stochastic Processes, Spring 2017

Problem 1. Consider an infinite-server system where the arrivals are according to a Poisson process with rate $\lambda$ and the service times are deterministic, equal to $s$ units of time. Let $\{N(t);\ t \ge 0\}$ denote the number of arrivals up to time $t$ and $\{S_1, S_2, \ldots\}$ denote the arrival times. In what follows assume $t > s$.
(a) Assuming $N(t) = 1$, find the probability that the first customer has completed its service and left by time $t$.
(b) Find the conditional density of $S_2$ conditioned on $N(t) = 2$.
(c) Assuming $N(t) = 2$, find the probability that the second customer has completed its service and has left by time $t$.
(Note that your answers in parts (a)-(c) will be expressions in terms of $\lambda$ and $s$.)

Problem 2. Let $\{X_n;\ n \ge 1\}$ be a renewal process such that $X_n \sim \text{Uniform}(0, 4)$. Find
$$\lim_{t \to \infty} \frac{1}{t} \int_0^t P(Y(u) > 1)\,du.$$
Show your derivation. ($Y(t) = S_{N(t)+1} - t$ is the residual lifetime at time $t$.)

Problem 3. Consider $k$ nodes, denoted by $1, 2, \ldots, k$. A particle moves on these nodes according to a Markov chain with the following transition probabilities: $P_{i,i+1} = 0.8$ for $i = 1, \ldots, k-1$, $P_{k,1} = 0.8$; $P_{i,i-1} = 0.2$ for $i = 2, \ldots, k$, $P_{1,k} = 0.2$; and $P_{i,j} = 0$ otherwise. In other words, we can think of the nodes as being on a circle, and at each transition step the particle moves clockwise with probability 0.8 and counter-clockwise with probability 0.2.
(a) Does this chain have a stationary distribution? Explain your answer. If you answer yes, find the stationary distribution.
(b) Does this chain have a steady-state distribution? Explain your answer. If you answer yes, find the steady-state distribution.
(c) Is this Markov chain reversible? Explain your answer.
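A numerical companion to Problem 3 (not part of the exam): the sketch below builds the ring chain's transition matrix for one concrete size ($k = 5$ is an arbitrary illustrative choice), extracts the stationary distribution as the left eigenvector for eigenvalue 1, and checks convergence of $P^n$ for this odd (hence aperiodic) $k$.

```python
import numpy as np

# Ring-chain transition matrix from Problem 3, for a concrete k.
k = 5
P = np.zeros((k, k))
for i in range(k):
    P[i, (i + 1) % k] = 0.8   # clockwise step
    P[i, (i - 1) % k] = 0.2   # counter-clockwise step

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(abs(vals - 1.0))])
pi = pi / pi.sum()
print(pi)     # uniform: P is doubly stochastic (rows and columns sum to 1)

# For odd k the chain is aperiodic, so P^n converges to the matrix whose
# rows are all equal to pi (a steady-state distribution exists).
Pn = np.linalg.matrix_power(P, 500)
print(Pn[0])
```

For even $k$ the chain is periodic with period 2, so $P^n$ would not converge even though the uniform stationary distribution still exists.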
Section C: SE 524, Paschalidis
May 2017, Boston University, Division of Systems Engineering
Area Qualifying Exam: Optimization

There are two problems in this exam for a total of 100 points. Please justify your answers and provide detailed derivations. Answers without full and valid justifications will not get full credit. Good luck!

Problem 1 (40 points). For each one of the following statements, please state whether it is true or false, with some valid justification (no rigorous proof is required). Answers without justification will get only partial credit. Grading will be done as follows: correct answer: 4 points; wrong answer: 0 points; no answer: 1 point; correct answer without sufficient and valid justification: 2 points.

1. Consider the problem $\max \sum_{i=1}^n c_i x_i^2$ subject to $\sum_{i=1}^n a_i x_i^2 \le b$, where $c_i, a_i \ge 0$ for all $i = 1, \ldots, n$. This problem can be formulated as a linear programming problem.
2. If a linear programming problem has a degenerate basic feasible solution, then it must have at least one redundant constraint.
3. If $Ax \le b$ has a solution, then the constraint $p'b \ge 0$ is redundant in the system $\{p'A = 0,\ p \ge 0,\ p'b \ge 0\}$.
4. In an uncapacitated network flow problem, every optimal solution has a tree structure.
5. Each iteration of the affine scaling algorithm for linear programming involves solving a convex quadratic optimization problem.
6. There exists a polynomial-time algorithm for semidefinite optimization problems.
7. Consider an optimization problem and let $K$ be the largest number among the problem data (e.g., entries of $A$, $b$, and $c$ in a linear program). Suppose we have an algorithm to find a feasible solution with cost that is within $\epsilon$ of the optimal. Assume that the algorithm runs in $O(K \log(1/\epsilon))$ time. Then it is a polynomial-time algorithm.
8. There exists a polynomial-time algorithm for the assignment problem.
9. Consider the problem of minimizing $c'x$ subject to $Ax \le b$.
If we increase some component of b, then the optimal cost cannot increase.
10. Consider the optimization problem of minimizing $c'x$ subject to $Ax = b$ and $|x_i| \le \gamma$ for all $i$, where $\gamma \ge 0$. This problem can be formulated as a linear programming problem.

Problem 2 (10+15+10+10+15 = 60 points). Consider a variation of the Knapsack problem, in which we introduce an additional constraint which fixes the number of items in the knapsack to some given constant $M$:
$$\max \sum_{i=1}^n c_i x_i$$
$$\text{s.t.}\ \sum_{i=1}^n w_i x_i \le K, \qquad (1)$$
$$\sum_{i=1}^n x_i = M, \qquad (2)$$
$$x_i \in \{0, 1\}. \qquad (3)$$
Assume that $K$, $M$, $c_i$, and $w_i$ for all $i = 1, \ldots, n$ are positive integers. We will denote by $Z_{LP}$ the optimal cost of the LP relaxation of this problem.

Let us now consider a Lagrangean dual of the problem in which constraint (1) is dualized. Let $Z_D^1$ be the optimal cost of this dual problem.
(a) Formulate this dual problem by writing down the dual function and the dual problem.
(b) Use the dualization/convexification theorem to write down an equivalent problem to the dual. How would you solve the dual?
(c) Is it true that $Z_D^1 = Z_{LP}$? Justify your answer.

Consider next a different Lagrangean dual of the original problem in which constraint (2) is dualized. Let $Z_D^2$ be the optimal cost of this dual problem.
(d) Is it true that $Z_D^2 = Z_{LP}$? Justify your answer.

Finally, consider a Lagrangean dual of the original problem in which both constraints (1) and (2) are dualized. Let $Z_D^{12}$ be the optimal cost of this dual problem.
(e) If we are interested in the best (i.e., tightest) possible upper bound on the optimal value of the original problem, which one out of $Z_D^1$, $Z_D^2$, $Z_D^{12}$ would you pick? Justify your answer.
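A numerical companion to Problem 2 (not part of the exam): when the weight constraint (1) is dualized, the dual function is $q(\lambda) = \lambda K + \max\{\sum_i (c_i - \lambda w_i)x_i : \sum_i x_i = M,\ x \text{ binary}\}$, and the inner maximization just keeps the $M$ largest adjusted profits. The sketch below evaluates $Z_D^1$ by grid search on a tiny instance; the data $c$, $w$, $K$, $M$ are made up for illustration, not from the exam.

```python
import itertools

# Tiny hypothetical instance of the cardinality-constrained knapsack.
c = [10, 7, 6, 4, 3]
w = [5, 4, 3, 2, 1]
K, M, n = 8, 2, 5

# Exact optimum by brute force over all M-element item subsets.
best = max(sum(c[i] for i in S)
           for S in itertools.combinations(range(n), M)
           if sum(w[i] for i in S) <= K)

# Dual function when constraint (1) is priced at lam >= 0:
#   q(lam) = lam*K + max { sum_i (c_i - lam*w_i) x_i : sum_i x_i = M, x binary };
# the inner problem is solved by sorting the adjusted profits.
def q(lam):
    adj = sorted((c[i] - lam * w[i] for i in range(n)), reverse=True)
    return lam * K + sum(adj[:M])

# Z_D^1 = min_{lam >= 0} q(lam); q is piecewise linear and convex, so a
# coarse grid search is enough for this illustration.
ZD1 = min(q(l / 100) for l in range(0, 1001))
print(best, ZD1)   # weak duality guarantees ZD1 >= best
```

In this instance the dual bound is tight, consistent with the fact that the inner constraint set $\{\sum_i x_i = M,\ x \in \{0,1\}^n\}$ has an integral LP relaxation.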
Section D: SE 710, Caramanis
Systems PhD Written Qualifying Examination in Dynamic Programming, May 2017

Question 1. Consider the continuous imperfect state information problem below:
- Scalar state and control variables with dynamics $x_{k+1} = x_k + u_k + w_k$, where $w_k$ is uniformly distributed over $[0, x_k]$.
- Allowable control space $U = \{u : u \le 0\}$.
- Period cost $g_k(x_k, u_k) = x_k$; terminal cost $g_N(x_N) = x_N$.
- Observation equation $z_{k+1} = x_{k+1} + u_k v_{k+1}$, where $v_{k+1}$ is normal with mean 0 and standard deviation $\sigma_v$.

Assume that you know the sufficient statistic at time $k$, $P_{x_k|I_k}(x_k)$, with range space $R_{x_k} = [0, \infty)$.

1.1 Derive the conditional probability distribution of $z_{k+1}$ given $x_{k+1}$, namely $P(z_{k+1} \mid x_{k+1}, u_k, P_{x_k|I_k})$.
1.2 Derive the prior cumulative probability distribution of the random variable of the state at $k+1$, $X_{k+1}$, after $u_k$ is applied but before $z_{k+1}$ is observed, namely $F_{X_{k+1}}(x_{k+1} \mid u_k, P_{x_k|I_k}) = \Pr(X_{k+1} \le x_{k+1} \mid u_k, P_{x_k|I_k})$. Note: $P_{x_k|I_k}$ is the sufficient statistic of the state variable at time $k$, after $z_k$ has been observed. Please provide your answer as a definite double integral. Make sure to define the range space.
1.3 Derive an expression for the estimator of the state variable after $z_{k+1}$ is observed, namely $P_{x_{k+1}|I_{k+1}}(P_{x_k|I_k}, u_k, z_{k+1})$.
1.4 Comment on what you consider to be the trade-off in the optimal decision of $u_k$: why is it not simply optimal to adopt the myopic choice of a large-in-magnitude control that minimizes the period cost?

Question 2. Consider an infinite horizon discounted cost dynamic programming problem with known discount factor $\alpha$, discrete state space $S = \{x^1, x^2, \ldots, x^n\}$, control space $C = \{u^1, u^2, \ldots, u^m\}$, and allowable control space $U(x) \subseteq C$.
The period cost function $g(x_t, u_t, w_t)$, the state dynamics $x_{t+1} = f(x_t, u_t, w_t)$, and the probability distribution of the disturbance $P(w_t \mid x_t, u_t)$ are all unknown, although one may assume that they describe a discounted cost infinite horizon problem that is well behaved and admits an optimal feedback control policy $\mu^*(x)$. The entity responsible for deciding the control at each time $t$ has access to the following information: at time $t$ it observes $x_t$, applies control $u_t$, and then observes the value of the period cost $g(x_t, u_t, w_t)$. The process repeats at $t+1, t+2, \ldots, t+N$; $N \to \infty$.

Define a machine learning process that estimates the function $Q(x, u)$ for all $x \in S$, $u \in U(x)$, which asymptotically approaches $E_w\{g(x, u, w) + \alpha J^*(f(x, u, w))\}$, where $J^*(x)$ is the optimal discounted cost function, $x \in S$ and $u \in U(x)$. Comment on conditions that the machine learning process must satisfy for convergence, and on how you can use the function $Q$ to determine the optimal feedback policy $\mu^*(x)$.
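A numerical companion to Question 2 (not part of the exam): one such learning process is tabular Q-learning, sketched below on a tiny hypothetical MDP (2 states, 2 controls, costs to be minimized) whose model is hidden from the learner, which only sees (state, control, cost, next state) samples. The model `P` and all numbers are made up for illustration.

```python
import random

# Tabular Q-learning sketch on a hypothetical 2-state, 2-control MDP.
random.seed(0)
alpha = 0.9                            # discount factor, as in the exam
P = {  # P[(x, u)] = list of (probability, next_state, period_cost)
    (0, 0): [(0.9, 0, 1.0), (0.1, 1, 1.0)],
    (0, 1): [(0.2, 0, 3.0), (0.8, 1, 3.0)],
    (1, 0): [(0.5, 0, 0.0), (0.5, 1, 0.0)],
    (1, 1): [(1.0, 0, 3.0)],
}

def step(x, u):
    # Sample a transition; the learner never inspects P directly.
    r, acc = random.random(), 0.0
    for p, nx, cost in P[(x, u)]:
        acc += p
        if r <= acc:
            return nx, cost
    return P[(x, u)][-1][1], P[(x, u)][-1][2]

# Q-learning update: Q(x,u) += gamma_n * (g + alpha * min_u' Q(x',u') - Q(x,u)).
# Convergence requires every (x,u) to be visited infinitely often and step
# sizes satisfying the Robbins-Monro conditions (sum = inf, sum of squares < inf).
Q = {(x, u): 0.0 for x in (0, 1) for u in (0, 1)}
visits = {key: 0 for key in Q}
x = 0
for _ in range(200_000):
    u = random.choice((0, 1))          # persistent exploration
    nx, cost = step(x, u)
    visits[(x, u)] += 1
    gamma_n = visits[(x, u)] ** -0.65  # diminishing step size
    target = cost + alpha * min(Q[(nx, 0)], Q[(nx, 1)])
    Q[(x, u)] += gamma_n * (target - Q[(x, u)])
    x = nx

# The greedy policy w.r.t. the learned Q recovers the optimal feedback policy.
policy = {s: min((0, 1), key=lambda u: Q[(s, u)]) for s in (0, 1)}
print(Q)
print(policy)
```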
Section E: SE 762, Wang
Nonlinear Systems and Control

Problem 1. Consider a simple pendulum system
$$\ddot{\theta} + \dot{\theta} + \sin\theta = 0.$$
Using three different methods, show that the equilibrium point $\theta = 0$, $\dot{\theta} = 0$ is asymptotically stable.

Problem 2. Consider the following nonlinear control system
$$\dot{x}_1 = (-1-\alpha)x_1 - 2x_2 + (1+\alpha)u - u x_1 (1-\alpha)$$
$$\dot{x}_2 = (1-\alpha)x_1 + (1-\alpha^2)u - u x_2 (1-\alpha)$$
where $\alpha$ is a real number. Determine for which values of $\alpha$ there exists a continuously differentiable feedback control $u = k(x)$, $k(0) = 0$, such that $0$ is an asymptotically stable equilibrium point of the resulting closed-loop system.
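A numerical companion to Problem 1 (not part of the exam): writing $x_1 = \theta$, $x_2 = \dot{\theta}$, one route is linearization at the origin, which gives $A = \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix}$ with eigenvalues of negative real part; another is the energy-like Lyapunov function $V = (1 - \cos x_1) + x_2^2/2$, whose derivative along trajectories is $-x_2^2 \le 0$, combined with an invariance argument. The sketch below checks both numerically; the initial condition and horizon are arbitrary choices.

```python
import numpy as np

# Damped pendulum theta'' + theta' + sin(theta) = 0 as a first-order system.
# Linearization at the origin:
A = np.array([[0.0, 1.0], [-1.0, -1.0]])
eigs = np.linalg.eigvals(A)
print(eigs)                          # both eigenvalues have real part -1/2

def f(x):
    # Full nonlinear vector field: x1' = x2, x2' = -x2 - sin(x1).
    return np.array([x[1], -x[1] - np.sin(x[0])])

def V(x):
    # Energy-like Lyapunov function; V' = -x2^2 <= 0 along trajectories.
    return (1.0 - np.cos(x[0])) + 0.5 * x[1] ** 2

# RK4 simulation from a moderate initial angle; V decays and x -> (0, 0).
x = np.array([1.0, 0.0])
dt = 0.01
vals = [V(x)]
for _ in range(3000):                # 30 seconds of simulated time
    k1 = f(x); k2 = f(x + dt/2*k1); k3 = f(x + dt/2*k2); k4 = f(x + dt*k3)
    x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    vals.append(V(x))
print(x)                             # close to the equilibrium (0, 0)
print(vals[0], vals[-1])             # V has decayed
```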