QUALIFYING EXAM IN SYSTEMS ENGINEERING


Written Exam: May 23, 2017, 9:00 AM to 1:00 PM, EMB 105
Oral Exam: May 25 or 26, 2017; time/location TBA (~1 hour per student)

CLOSED BOOK, NO CHEAT SHEETS. BASIC SCIENTIFIC CALCULATOR PERMITTED. ALL EXAM MATERIALS STAY IN THE EXAM ROOM.

GENERAL INSTRUCTIONS:
1) Please write on every sheet:
   a. your exam number;
   b. the page numbers (example: Page 1 of 4).
2) Write on one side only. Exams may be scanned and emailed to the faculty for grading. If using pencil, make sure it is dark.

COMPLETE THE REQUIRED SECTIONS AS BELOW: The exam consists of three topical sections. Select three of the following five sections:
A. Dynamic Systems Theory (SE/EC/ME 501)
B. Continuous Stochastic Processes (EC 505) OR Discrete Stochastic Processes (EK 500 and SE/ME 714)
C. Optimization (SE/EC 524)
D. Dynamic Programming and Stochastic Control (SE/EC/ME 710)
E. Nonlinear Systems and Control (SE/ME 762)

Section A: SE 501, Baillieul
SE Linear Systems Qualifying Exam - 2017

1. Let A, B, C, and D be n x n matrices. Compute the inverse of

   M = [ A  B
         C  D ]

in terms of A, B, C, and D. State clearly any additional assumptions needed.

2. A real square matrix A is symmetric if A^T = A. Show that for a symmetric matrix
(a) the eigenvalues are all real;
(b) if e_i and e_j are eigenvectors associated with eigenvalues λ_i and λ_j where λ_i ≠ λ_j, then the inner product ⟨e_i, e_j⟩ = 0.

3. Consider a linear control system

   ẋ = A(t)x + B(t)u,   (1)

where x ∈ R^n, u(t) ∈ R^m, A(t) is an n x n matrix whose entries are real continuous functions of t, and B(t) is an n x m real matrix, also having real continuous entries.
(a) State in words what it means for this system to be controllable.
(b) Write down an algebraic condition that is necessary and sufficient for the system to be controllable.
(c) Let x, y ∈ R^n, and let u_xy(·) be a continuous control input taking values in R^m that steers (1) in T units of time from x(0) = x to x(T) = y such that

   η = ∫_0^T ‖u_xy(s)‖² ds

takes on the minimum value among all control inputs that steer the system from x to y in T units of time. Write down an explicit expression for u_xy(t).
(d) Consider the linear system

   ( ẋ₁(t) )   ( 0  1 ) ( x₁(t) )   ( 0 )
   (        ) = (      ) (        ) + (   ) u(t).   (2)
   ( ẋ₂(t) )   ( 0  0 ) ( x₂(t) )   ( 1 )

Is it controllable? (Relate your answer to your response to part (a) above.)
(e) Find the control input that steers the state of (2) from (0, 0)^T to (cos θ, sin θ)^T in one unit of time so as to minimize the performance metric

   η = ∫_0^1 u(t)² dt.   (3)

(f) Find θ such that the point (cos θ, sin θ)^T is the minimum cost-to-reach point on the unit circle in terms of the quasimetric η.
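As a quick numerical aside (not part of the exam), the controllability claim in part (d) can be sanity-checked by forming the controllability matrix [B, AB] for the double integrator and confirming it has full rank; the sketch below does this in plain Python.

```python
# Controllability check for the double integrator of part (d):
# A = [[0, 1], [0, 0]], B = [0, 1]^T.
# The system is controllable iff the controllability matrix [B, AB] has rank 2.

def mat_vec(A, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

A = [[0.0, 1.0], [0.0, 0.0]]
B = [0.0, 1.0]

AB = mat_vec(A, B)                  # AB = [1, 0]^T
# [B, AB] has rank 2 iff its determinant is nonzero.
det = B[0] * AB[1] - B[1] * AB[0]

print("det [B, AB] =", det)         # nonzero => controllable
```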

Section B: EC 505, Saligrama
SE 2017 Stochastic Processes Qualifiers

You are the head engineer on a star ship which has just made contact with the warlike Zap-ari, who are preparing to attack with their deadly Zap-a-tron™ ray. Your only hope for survival is to estimate the ray process X(t) so that you can cancel its effects.

(a) The original Zap-a-tron device functioned by passing a zero-mean, white Gaussian process V(t) with autocorrelation function R_vv(τ) = E[V(t)V(t + τ)] = 2δ(τ) through a plasma resonance chamber which acted as a linear filter, as depicted in the figure below. What is the power spectral density S_xx(f) of the ray X(t)? Is the ray X(t) a stationary, weakly (or wide-sense) stationary, or non-stationary process? Explain.

[Figure: V(t) passed through a linear filter to produce X(t).]

Note: Parts (b)-(f) are independent of part (a).

(b) The Zap-ari, suspecting that you might try the old ray-cancellation trick, have upgraded to the Zap-a-tron 98, which functions by randomly firing one of two possible zero-mean, stationary, Gaussian processes X₀(t) or X₁(t), with associated autocorrelation functions R_x0x0(τ) and R_x1x1(τ), chosen with equal probability. In other words, the actual beam fired X(t) = X_Z(t) is controlled by an independent Bernoulli random variable Z with Pr(Z = 0) = Pr(Z = 1) = 1/2, which is chosen once at the start of the battle. Is the ray process X(t) so obtained a Gaussian random process? Explain.

(c) As the beam fires you can obtain one observation of the beam at time t = 0: Y = X(0). If you knew that the beam being fired was X₀(t) (i.e., you knew that the value of Z = 0), what would be the minimum mean square error estimate X̂(t) of the beam X(t) at any other time based on Y? Give your answer in terms of R_x0x0(τ), R_x1x1(τ), and Y.

(d) What is the mean square error MSE if you are wrong about which beam is being fired (i.e., what is E{[X(t) − X̂(t)]² | Z = 1})? Give your answer in terms of R_x0x0(τ) and R_x1x1(τ).
(e) Since you can block more power when you are correct about which process is being fired, you decide to use detection theory to decide which beam is being used. What is the minimum probability of error decision rule for deciding which beam is being fired (i.e., for deciding the value of Z) based on observation of Y (assuming R_x1x1(0) > R_x0x0(0))?

(f) Consider two approaches to estimating the unknown ray X(t). One is to make the optimal decision for the value of Z based on Y (as in (e)) and then generate an estimate for X(t) based on this value (as in (c)), resulting in the estimate X̂_det(t). Another approach would be to estimate the conditional mean: X̂_con(t) = E[X(t) | Y]. Which estimate would give the lower overall MSE? Explain. (Note: You do not need to find the MSE for the two cases to answer this question.)
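As an illustration of the kind of rule part (e) asks for (not an exam solution), the sketch below assumes particular variances R_x0x0(0) = 1 and R_x1x1(0) = 4, reduces the equal-prior likelihood-ratio test between the two zero-mean Gaussians to a threshold on |Y|, and checks its error probability by Monte Carlo.

```python
import math
import random

# Under hypothesis Z=i, Y = X_i(0) ~ N(0, R_xixi(0)).  With equal priors the
# likelihood-ratio test reduces to a threshold on |Y|:
#   decide Z=1  iff  Y^2 > gamma^2,  gamma^2 = 2 ln(s1/s0) * s0^2 s1^2 / (s1^2 - s0^2),
# where s_i^2 = R_xixi(0).  The variances below are illustrative assumptions.

s0_sq, s1_sq = 1.0, 4.0                       # assumed R_x0x0(0), R_x1x1(0)
gamma_sq = (2.0 * math.log(math.sqrt(s1_sq / s0_sq))
            * s0_sq * s1_sq / (s1_sq - s0_sq))

def decide(y):
    """Return the decided hypothesis (0 or 1) for observation y."""
    return 1 if y * y > gamma_sq else 0

# Monte Carlo estimate of the rule's error probability.
random.seed(0)
trials, errors = 20000, 0
for _ in range(trials):
    z = random.randint(0, 1)                  # equally likely hypotheses
    y = random.gauss(0.0, math.sqrt(s1_sq if z else s0_sq))
    errors += (decide(y) != z)

p_err = errors / trials
print(f"gamma^2 = {gamma_sq:.3f}, empirical P(error) = {p_err:.3f}")
```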

Section B: EK 500/SE 714, Vakili
Qualifying Examination, Discrete Stochastic Processes, Spring 2017

Problem 1. Consider an infinite-server system where the arrivals are according to a Poisson process with rate λ and the service times are deterministic, equal to s units of time. Let {N(t); t ≥ 0} denote the number of arrivals up to time t and {S₁, S₂, ...} denote the arrival times. In what follows assume t > s.
(a) Assuming N(t) = 1, find the probability that the first customer has completed its service and left by time t.
(b) Find the conditional density of S₂ conditioned on N(t) = 2.
(c) Assuming N(t) = 2, find the probability that the second customer has completed its service and has left by time t.
(Note that your answers in parts (a)-(c) will be expressions in terms of λ and s.)

Problem 2. Let {X_n; n ≥ 1} be a renewal process such that X_n ~ Uniform(0, 4). Find

   lim_{t→∞} (1/t) ∫_0^t P(Y(u) > 1) du.

Show your derivation. (Y(t) = S_{N(t)+1} − t is the residual lifetime at time t.)

Problem 3. Consider k nodes, denoted by 1, 2, ..., k. A particle moves on these nodes according to a Markov chain with the following transition probabilities: P_{i,i+1} = 0.8 for i = 1, ..., k−1, P_{k,1} = 0.8; P_{i,i−1} = 0.2 for i = 2, ..., k, P_{1,k} = 0.2; and P_{i,j} = 0 otherwise. In other words, we can think of the nodes as being on a circle: at each transition step the particle moves clockwise with probability 0.8 and counter-clockwise with probability 0.2.
(a) Does this chain have a stationary distribution? Explain your answer. If you answer yes, find the stationary distribution.
(b) Does this chain have a steady state distribution? Explain your answer. If you answer yes, find the steady state distribution.
(c) Is this Markov chain reversible? Explain your answer.
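The circular chain of Problem 3 is easy to explore numerically (an aside, not a substitute for the requested explanations): the sketch below iterates π ← πP for an illustrative k = 5 and shows the distribution settling to uniform, consistent with the chain being doubly stochastic; with k odd the chain is also aperiodic, so this limit is the steady state.

```python
# Power iteration on the circular random walk of Problem 3, for k = 5
# (an assumed, illustrative value of k).

k = 5
p_cw, p_ccw = 0.8, 0.2          # clockwise / counter-clockwise step probabilities

# Start from an arbitrary distribution and iterate pi <- pi P.
pi = [1.0] + [0.0] * (k - 1)
for _ in range(500):
    nxt = [0.0] * k
    for i in range(k):
        nxt[(i + 1) % k] += pi[i] * p_cw    # clockwise move
        nxt[(i - 1) % k] += pi[i] * p_ccw   # counter-clockwise move
    pi = nxt

print(["%.4f" % p for p in pi])  # approaches the uniform distribution, 1/k each
```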

Section C: SE 524, Paschalidis
May 2017, Boston University, Division of Systems Engineering
Area Qualifying Exam: Optimization

There are two problems in this exam for a total of 100 points. Please justify your answers and provide detailed derivations. Answers without full and valid justifications will not get full credit. Good luck!

Problem 1 (40 points)

For each one of the following statements please state whether they are true or false, with some valid justification (no rigorous proof is required). Answers without justification will get only partial credit. Grading will be done as follows: correct answer: 4 points; wrong answer: 0 points; no answer: 1 point; correct answer with no sufficient and valid justification: 2 points.

1. Consider the problem max Σ_{i=1}^n c_i x_i² subject to Σ_{i=1}^n a_i x_i² ≤ b, where c_i, a_i ≥ 0 for all i = 1, ..., n. This problem can be formulated as a linear programming problem.

2. If a linear programming problem has a degenerate basic feasible solution then it must have at least one redundant constraint.

3. If Ax ≥ b has a solution, then the constraint p′b ≤ 0 is redundant in the system {p′A = 0′, p ≥ 0, p′b ≤ 0}.

4. In an uncapacitated network flow problem, every optimal solution has a tree structure.

5. Each iteration of the affine scaling algorithm for linear programming involves solving a convex quadratic optimization problem.

6. There exists a polynomial-time algorithm for semi-definite optimization problems.

7. Consider an optimization problem and let K be the largest number among the problem data (e.g., entries of A, b, and c in a linear program). Suppose we have an algorithm to find a feasible solution with cost that is within ε of the optimal, and assume that the algorithm runs in O(K log(1/ε)) time. Then it is a polynomial-time algorithm.

8. There exists a polynomial-time algorithm for the assignment problem.

9. Consider the problem of minimizing c′x subject to Ax ≥ b. If we increase some component of b, then the optimal cost cannot increase.

10. Consider the optimization problem of minimizing c′x subject to Ax = b and |x_i| ≤ γ for all i, where γ ≥ 0. This problem can be formulated as a linear programming problem.

Problem 2 (10+15+10+10+15 = 60 points)

Consider a variation of the knapsack problem, in which we introduce an additional constraint which fixes the number of items in the knapsack to some given constant M:

   max  Σ_{i=1}^n c_i x_i
   s.t. Σ_{i=1}^n w_i x_i ≤ K,   (1)
        Σ_{i=1}^n x_i = M,        (2)
        x_i ∈ {0, 1}.             (3)

Assume that K, M, c_i, and w_i for all i = 1, ..., n are positive integers. We will denote by Z_LP the optimal cost of the LP relaxation of this problem.

Let us now consider a Lagrangean dual of the problem in which constraint (1) is dualized. Let Z¹_D denote the optimal cost of this dual problem.
(a) Formulate this dual problem by writing down the dual function and the dual problem.
(b) Use the dualization/convexification theorem to write down an equivalent problem to the dual. How would you solve the dual?
(c) Is it true that Z¹_D = Z_LP? Justify your answer.

Consider next a different Lagrangean dual of the original problem in which constraint (2) is dualized. Let Z²_D denote the optimal cost of this dual problem.
(d) Is it true that Z²_D = Z_LP? Justify your answer.

Finally, consider a Lagrangean dual of the original problem in which both constraints (1) and (2) are dualized. Let Z¹²_D denote the optimal cost of this dual problem.
(e) If we are interested in the best (i.e., tightest) possible upper bound to the optimal value of the original problem, which one out of Z¹_D, Z²_D, Z¹²_D would you pick? Justify your answer.
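To make the dual of parts (a)-(b) concrete (an illustrative aside, with instance data that are assumptions, not exam data): when constraint (1) is dualized with multiplier λ ≥ 0, the inner problem max Σ(c_i − λw_i)x_i subject to Σx_i = M, x binary, is solved by picking the M largest adjusted profits, so the dual function is cheap to evaluate; the sketch below minimizes it by grid search and compares the bound with the brute-force integer optimum, illustrating weak duality.

```python
from itertools import combinations

# Lagrangean dual of the cardinality-constrained knapsack in which the
# weight constraint (1) is dualized.  Instance data below are assumed.

c = [10, 7, 6, 4, 3]        # profits c_i (illustrative)
w = [8, 5, 4, 3, 2]         # weights w_i (illustrative)
K, M = 10, 2                # capacity and required number of items

def dual_value(lam):
    """Dual function q(lam) = lam*K + max of the inner cardinality problem."""
    adjusted = sorted((ci - lam * wi for ci, wi in zip(c, w)), reverse=True)
    return lam * K + sum(adjusted[:M])      # pick the M largest adjusted profits

# Z_D^1: minimize q over lam >= 0 (coarse grid search, for illustration only).
zd1 = min(dual_value(l / 100.0) for l in range(0, 501))

# Exact integer optimum by brute force over all M-subsets.
z_opt = max((sum(c[i] for i in S)
             for S in combinations(range(len(c)), M)
             if sum(w[i] for i in S) <= K), default=float("-inf"))

print(f"Z_D^1 = {zd1:.2f} >= Z* = {z_opt}")   # weak duality: the dual bounds Z*
```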

Section D: SE 710, Caramanis
Systems PhD Written Qualifying Examination in Dynamic Programming, May 2017

Question 1. Consider the continuous imperfect-state-information problem below:

- Scalar state and control variables with dynamics x_{k+1} = x_k + u_k + w_k, where w_k is uniformly distributed over [0, x_k].
- Allowable control space U = {u : u ≤ 0}.
- Period cost g_k(x_k, u_k) = x_k + u_k; terminal cost g_N(x_N) = x_N.
- Observation equation z_{k+1} = x_{k+1} + u_k v_{k+1}, where v_{k+1} is normal with mean 0 and standard deviation σ_v.

Assume that you know the sufficient statistic at time k, P_{x_k|I_k}(x_k), with range space R_{x_k} = [0, ∞).

1.1 Derive the conditional probability distribution of z_{k+1} given x_{k+1}, namely P(z_{k+1} | x_{k+1}, u_k, P_{x_k|I_k}).

1.2 Derive the prior cumulative probability distribution of the random variable of the state at k+1, X_{k+1}, after u_k is applied but before z_{k+1} is observed, namely F_{X_{k+1}}(x_{k+1} | u_k, P_{x_k|I_k}) = Pr(X_{k+1} ≤ x_{k+1} | u_k, P_{x_k|I_k}). Note: P_{x_k|I_k} is the sufficient statistic of the state variable at time k, after z_k has been observed. Please provide your answer as a definite double integral. Make sure to define the range space.

1.3 Derive an expression for the estimator of the state variable after z_{k+1} is observed, namely P_{x_{k+1}|I_{k+1}}(P_{x_k|I_k}, u_k, z_{k+1}).

1.4 Comment on what you consider to be the trade-off in the optimal decision of u_k: why is it not simply optimal to adopt the myopic choice of a large-in-magnitude control that minimizes the period cost?

Question 2. Consider an infinite-horizon discounted-cost dynamic programming problem with known discount factor α, discrete state space S = {x¹, x², ..., xⁿ}, control space C = {u¹, u², ..., uᵐ}, and allowable control space U(x) ⊆ C. The period cost function g(x_t, u_t, w_t), the state dynamics x_{t+1} = f(x_t, u_t, w_t), and the probability distribution of the disturbance P(w_t | x_t, u_t) are all unknown, although one may assume that they describe a discounted-cost infinite-horizon problem that is well behaved and admits an optimal feedback control policy μ*(x). The entity responsible for deciding the control at each time t has access to the following information: at time t it observes x_t, applies control u_t, and then observes the value of the period cost g(x_t, u_t, w_t). The process repeats at t+1, t+2, ..., t+N; N → ∞.

Define a machine learning process that estimates the function Q(x, u) for all x ∈ S, u ∈ U(x), and that asymptotically approaches E_w{g(x, u, w) + α J*(f(x, u, w))}, where J*(x) is the optimal discounted cost function, x ∈ S and u ∈ U(x). Comment on conditions that the machine learning process must satisfy for convergence, and on how you can use the function Q to determine the optimal feedback policy μ*(x).
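The kind of process Question 2 asks for can be sketched as tabular Q-learning (an illustrative aside, not the required answer): it estimates Q(x, u) from observed transitions and period costs alone, without knowing g, f, or the disturbance distribution. The tiny two-state MDP below is an assumption for the sake of the demonstration.

```python
import random

# Minimal tabular Q-learning on an assumed two-state, two-action MDP
# (deterministic transitions, costs to be minimized, discount 0.9).

alpha_disc = 0.9                       # discount factor
cost = {(0, 0): 1.0, (0, 1): 2.0,      # g(x, u): observed period cost
        (1, 0): 0.0, (1, 1): 1.0}
step = {(0, 0): 0, (0, 1): 1,          # f(x, u): next state (unknown to learner)
        (1, 0): 1, (1, 1): 0}

Q = {xu: 0.0 for xu in cost}
visits = {xu: 0 for xu in cost}

random.seed(1)
x = 0
for _ in range(20000):
    u = random.randint(0, 1)           # persistent exploration: every (x, u)
    x_next = step[(x, u)]              # is tried infinitely often
    visits[(x, u)] += 1
    gamma_step = 1.0 / visits[(x, u)]  # diminishing step sizes (sum infinite,
                                       # sum of squares finite): needed for convergence
    target = cost[(x, u)] + alpha_disc * min(Q[(x_next, v)] for v in (0, 1))
    Q[(x, u)] += gamma_step * (target - Q[(x, u)])
    x = x_next

# The greedy policy mu(x) = argmin_u Q(x, u) recovers the optimal feedback law.
policy = {s: min((0, 1), key=lambda u: Q[(s, u)]) for s in (0, 1)}
print({k: round(v, 3) for k, v in Q.items()}, policy)
```

For this instance the learned values approach Q*(0,0) = 2.8, Q*(0,1) = 2, Q*(1,0) = 0, Q*(1,1) = 2.8, and the greedy policy is "move to state 1 and stay there."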

Section E: SE 762, Wang
Nonlinear Systems and Control

Problem 1. Consider a simple pendulum system

   θ̈ + θ̇ + sin θ = 0.

Using three different methods, show that the equilibrium point θ = 0, θ̇ = 0 is asymptotically stable.

Problem 2. Consider the following nonlinear control system

   ẋ₁ = (−1 − α)x₁ − 2x₂ + (1 + α)u − ux₁(1 − α)
   ẋ₂ = (1 − α)x₁ + (1 − α²)u − ux₂(1 − α)

where α is a real number. Determine for which values of α there exists a continuously differentiable feedback control u = k(x), k(0) = 0, such that 0 is an asymptotically stable equilibrium point of the resulting closed-loop system.
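A numerical illustration for Problem 1 (not one of the requested analytical methods): simulating θ̈ + θ̇ + sin θ = 0 from an assumed initial condition and watching the pendulum energy E = ω²/2 + (1 − cos θ) decay toward zero is consistent with asymptotic stability of the origin.

```python
import math

# Simulate theta'' = -theta' - sin(theta) with a semi-implicit Euler scheme
# and check that the state and energy decay to (approximately) zero.

def simulate(theta0, omega0, dt=1e-3, t_end=30.0):
    theta, omega = theta0, omega0
    for _ in range(int(t_end / dt)):
        omega += dt * (-omega - math.sin(theta))  # update velocity first
        theta += dt * omega                        # then position (semi-implicit)
    return theta, omega

theta_f, omega_f = simulate(theta0=1.0, omega0=0.0)   # assumed initial condition
energy_f = 0.5 * omega_f**2 + (1.0 - math.cos(theta_f))
print(f"state after 30 s: theta={theta_f:.2e}, omega={omega_f:.2e}, E={energy_f:.2e}")
```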