EE364b Homework 7


EE364b                                                        Prof. S. Boyd

EE364b Homework 7

1. MPC for output tracking. We consider the linear dynamical system

    x(t+1) = A x(t) + B u(t),    y(t) = C x(t),    t = 0, ..., T-1,

with state x(t) ∈ R^n, input u(t) ∈ R^m, and output y(t) ∈ R^p. The matrices A and B are known, and x(0) = 0. The goal is to choose the input sequence u(0), ..., u(T-1) to minimize the output tracking cost

    J = \sum_{t=1}^{T} ||y(t) - y_des(t)||_2^2,

subject to |u(t)| ≤ U_max, t = 0, ..., T-1.

In the remainder of this problem, we will work with the specific problem instance with data

    A = [1 1 0; 0 1 1; 0 0 1],    B = [0; 0.5; 1],    C = [-1 0 1],

T = 100, and U_max = 0.1. The desired output trajectory is given by

    y_des(t) = 0 for t < 30,    y_des(t) = 10 for 30 ≤ t < 70,    y_des(t) = 0 for t ≥ 70.

(a) Find the optimal input u*, and the associated optimal cost J*.

(b) Rolling look-ahead. Now consider the input obtained using an MPC-like method: at time t, we find the values u(t), ..., u(t+N-1) that minimize

    \sum_{τ=t+1}^{t+N} ||y(τ) - y_des(τ)||_2^2,

subject to |u(τ)| ≤ U_max, τ = t, ..., t+N-1, and the state dynamics, with x(t) fixed at its current value. We then use u(t) as the input to the system. (This is an informal description, but you can figure out what we mean.) In a tracking context, we call N the amount of look-ahead, since it tells us how much of the future of the desired output signal we are allowed to access when we decide on the current input.

Find the input signal for look-ahead values N = 8, N = 10, and N = 12. Compare the cost J obtained in these three cases to the optimal cost J* found in part (a). Plot the output y(t) for N = 8, N = 10, and N = 12.
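A remark on part (a) (standard, not stated in the problem): since x(0) = 0, the state can be eliminated, giving

    y(t) = \sum_{τ=0}^{t-1} C A^{t-1-τ} B u(τ),    t = 1, ..., T,

so each y(t) is a linear function of the inputs, J is a convex quadratic function of u(0), ..., u(T-1), and the constraints |u(t)| ≤ U_max are convex. Part (a) is thus a convex QP; the solution code below solves it with CVX, keeping the states as variables and imposing the dynamics as equality constraints, which is equivalent.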

Solution. The following code implements the method, and generates the plots.

n = 3; m = 1;
A = [1,1,0;0,1,1;0,0,1];
B = [0;0.5;1];
C = [-1,0,1];
Umax = 0.1;
T = 100;

ydes = zeros(150,1);
ydes(1:30) = 0;
ydes(31:70) = 10;
ydes(71:150) = 0;

% optimal
cvx_begin
    variables X(n,T+1) U(m,T)
    max(U') <= Umax; min(U') >= -Umax;
    X(:,2:T+1) == A*X(:,1:T)+B*U;
    X(:,1) == 0;
    minimize (sum(sum_square(C*X-ydes(1:T+1)')))
cvx_end
Jopt = cvx_optval;

tvec = 0:1:T;
plot(tvec,ydes(1:T+1)); hold on;
plot(C*X,'r');

% mpc
hor = 8;
Xmpc = zeros(n,T+1); x = zeros(n,1);
for t = 1:T
    cvx_begin
        variables X(n,hor+1) U(m,hor)
        max(U') <= Umax; min(U') >= -Umax;
        X(:,2:hor+1) == A*X(:,1:hor)+B*U;
        X(:,1) == x;
        minimize (sum(sum_square(C*X-ydes(t:t+hor)')))
    cvx_end
    x = X(:,2);
    Xmpc(:,t+1) = x;
end
J8 = sum(sum_square(C*Xmpc-ydes(1:T+1)'));
plot(C*Xmpc,'g');

hor = 10;
Xmpc = zeros(n,T+1); x = zeros(n,1);
for t = 1:T
    cvx_begin
        variables X(n,hor+1) U(m,hor)

        max(U') <= Umax; min(U') >= -Umax;
        X(:,2:hor+1) == A*X(:,1:hor)+B*U;
        X(:,1) == x;
        minimize (sum(sum_square(C*X-ydes(t:t+hor)')))
    cvx_end
    x = X(:,2);
    Xmpc(:,t+1) = x;
end
J10 = sum(sum_square(C*Xmpc-ydes(1:T+1)'));
plot(C*Xmpc,'k');

hor = 12;
Xmpc = zeros(n,T+1); x = zeros(n,1);
for t = 1:T
    cvx_begin
        variables X(n,hor+1) U(m,hor)
        max(U') <= Umax; min(U') >= -Umax;
        X(:,2:hor+1) == A*X(:,1:hor)+B*U;
        X(:,1) == x;
        minimize (sum(sum_square(C*X-ydes(t:t+hor)')))
    cvx_end
    x = X(:,2);
    Xmpc(:,t+1) = x;
end
J12 = sum(sum_square(C*Xmpc-ydes(1:T+1)'));
plot(C*Xmpc,'m');

axis([0,100,-5,15]);
xlabel('t'); ylabel('y');
print('-depsc', 'mpc_track.eps');

For our problem instance, the optimal cost is J* = 112.5. The cost obtained by applying MPC with N = 8 is 365.1; with N = 10, the cost is 131.7; and with N = 12, the cost is 119.5. The following figure shows the output trajectories for N = 8 (green), N = 10 (black), N = 12 (magenta), the optimal output trajectory (red), and the desired output trajectory (blue).

[Figure: output y(t) versus t for 0 ≤ t ≤ 100, showing the desired trajectory, the optimal trajectory, and the MPC trajectories for N = 8, 10, 12.]
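The three receding-horizon blocks above differ only in the value of hor. As a sketch (not part of the original solution; the helper name mpc_track is ours), the loop can be wrapped in a function so each look-ahead value is a single call:

% mpc_track.m: receding-horizon tracking with look-ahead hor (sketch).
function [J, Y] = mpc_track(A, B, C, ydes, T, Umax, hor)
    [n, m] = size(B);
    Xmpc = zeros(n, T+1);             % closed-loop state trajectory
    x = zeros(n, 1);                  % current state, x(0) = 0
    for t = 1:T
        cvx_begin quiet
            variables X(n, hor+1) U(m, hor)
            max(U') <= Umax; min(U') >= -Umax;
            X(:,2:hor+1) == A*X(:,1:hor) + B*U;
            X(:,1) == x;
            minimize(sum(sum_square(C*X - ydes(t:t+hor)')))
        cvx_end
        x = X(:,2);                   % state after applying the first planned input
        Xmpc(:,t+1) = x;
    end
    Y = C*Xmpc;                       % closed-loop output trajectory
    J = sum(sum_square(Y - ydes(1:T+1)'));
end

With the data above, for example, J8 = mpc_track(A, B, C, ydes, T, Umax, 8), and similarly for N = 10 and N = 12.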

2. Branch and bound for partitioning. We consider the two-way partitioning problem (see pages 219, 226, and 285 in the book),

    minimize    x^T W x
    subject to  x_i^2 = 1,    i = 1, ..., n.

We can, without any loss of generality, assume that x_1 = 1. You will perform several iterations of branch and bound for a random instance of this problem, with, say, n = 100. To run branch and bound, you'll need to find lower and upper bounds on the optimal value of the partitioning problem, with some of the variables fixed to given values, i.e.,

    minimize    x^T W x
    subject to  x_i^2 = 1,    i = 1, ..., n
                x_i = x_i^fixed,    i ∈ F,

where F ⊆ {1, ..., n} is the set of indices of the fixed entries of x, and x_i^fixed ∈ {-1, 1} are the associated values.

Lower bound. To get a lower bound, you will solve the SDP

    minimize    Tr(W X)
    subject to  X_ii = 1,    i = 1, ..., n
                [X x; x^T 1] ⪰ 0
                x_i = x_i^fixed,    i ∈ F,

with variables X ∈ S^n, x ∈ R^n.
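Why this SDP gives a lower bound (a standard argument, not spelled out in the problem statement): if x is feasible for the partitioning problem with the fixed entries, then the pair (X, x) with X = xx^T is feasible for the SDP, since X_ii = x_i^2 = 1 and

    [X x; x^T 1] = [x; 1][x; 1]^T ⪰ 0,

and its objective is Tr(W X) = Tr(W xx^T) = x^T W x. The SDP therefore minimizes the same objective over a larger feasible set, so its optimal value is a lower bound on the optimal value of the (partially fixed) partitioning problem.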

(If we add the constraint that the (n+1) × (n+1) matrix above is rank one, then X = xx^T, and this problem gives the exact solution.)

Upper bound. To get an upper bound we'll find a feasible point x̂, and evaluate its objective. To do this, we solve the SDP relaxation above, and find an eigenvector v associated with the largest eigenvalue of the (n+1) × (n+1) matrix appearing in the inequality. Choose x̂_i = sign(v_{n+1}) sign(v_i) for i = 1, ..., n.

Branch and bound. Develop a partial binary tree for this problem, as on page 21 of the lecture slides. Use n = 100. At each node, record upper and lower bounds. Along each branch, indicate on which variable you split. At each step, develop the node with the smallest lower bound. Develop four nodes. You can do this "by hand"; you do not need to write a complete branch and bound code in Matlab (which would not be a pretty sight). You can use CVX to solve the SDPs, and manually constrain some variables to fixed values.

Solution. We randomly generate a problem with n = 100. As mentioned in the problem statement, we can without loss of generality assume x_1 = 1. The partial binary tree developed using the above method appears below, followed by the Matlab code used to generate the tree. Of course, your tree and bounds will differ, unless you just happened to generate your data exactly the same way we did.

Partial binary tree (each node shows its [lower bound, upper bound]):

    root (x_1 = 1):          [-2449, -2090]
        x_40 = -1:           [-2437, -2087]
            x_26 = -1:       [-2399, -2011]
            x_26 =  1:       [-2431, -2067]
                x_91 = -1:   [-2418, -2075]
                x_91 =  1:   [-2400, -2028]
        x_40 =  1:           [-2434, -2101]

% bb.m: Develops a partial binary tree for a branch and bound two-way
% partitioning problem.

% Generate a random example.
randn('state', 109)
n = 100;

W = randn(n); W = W + W';

% Set x_1 to 1 without loss of generality.
fixedind = [1];
fixedvals = [1];

% Evaluate and display the bounds at the root node.
[v, lower, upper] = bbsdp(n, W, fixedind, fixedvals);
disp([lower upper]);

% Choose which node to split.
[y, i] = min(abs(v));
fixedind = [fixedind i];

% Solve new problems and display bounds.
[v, lower, upper] = bbsdp(n, W, fixedind, [fixedvals -1]);
disp([i -1]); disp([lower upper]);
[v, lower, upper] = bbsdp(n, W, fixedind, [fixedvals 1]);
disp([i 1]); disp([lower upper]);

% We will split with x_40 == -1.
fixedvals = [fixedvals -1];

% Find the node to split next time.
[y, i] = min(abs(v));
fixedind = [fixedind i];

% Solve new problems and display bounds.
[v, lower, upper] = bbsdp(n, W, fixedind, [fixedvals -1]);
disp([i -1]); disp([lower upper]);
[v, lower, upper] = bbsdp(n, W, fixedind, [fixedvals 1]);
disp([i 1]); disp([lower upper]);

% We will split with x_26 == 1.
fixedvals = [fixedvals 1];

% Find the node to split next time.
[y, i] = min(abs(v));
fixedind = [fixedind i];

% Solve new problems and display bounds.
[v, lower, upper] = bbsdp(n, W, fixedind, [fixedvals -1]);
disp([i -1]); disp([lower upper]);
[v, lower, upper] = bbsdp(n, W, fixedind, [fixedvals 1]);
disp([i 1]); disp([lower upper]);

% bbsdp.m: Solves an SDP relaxation for a branch and bound two-way partitioning
% problem.
function [v, lower, upper] = bbsdp(n, W, fixedind, fixedvals)
cvx_quiet(true);
cvx_begin
    variable x(n);

    variable X(n, n) symmetric;
    minimize(trace(W*X))
    [X x; x' 1] == semidefinite(n + 1);
    diag(X) == 1;
    x(fixedind) == fixedvals';
cvx_end
lower = cvx_optval;

% Find a feasible point.
[V, D] = eig([X x; x' 1]);
v = V(:,end);
v = v(1:end-1) * sign(v(end) + 1e-5);
xstar = sign(v);
upper = xstar'*W*xstar;