Feedback Control of Quantum Systems. Matt James, ANU.


Dedicated to Slava Belavkin who pioneered the way

Outline: What is quantum control? A Little History. Optimal Control Using Quantum Langevin Equations. H-Infinity Control for Linear Quantum Systems. Discussion.

What is quantum control? Watt used a governor to control steam engines - very macroscopic. Now we want to control things at the nanoscale - e.g. atoms.

Quantum Control: Control of physical systems whose behaviour is dominated by the laws of quantum mechanics. In 2003, Dowling and Milburn wrote: "The development of the general principles of quantum control theory is an essential task for a future quantum technology."

Types of Quantum Control: Open loop - control actions are predetermined, no feedback is involved. [Diagram: controller -> control actions -> quantum system]

Closed loop - control actions depend on information gained as the system is operating. [Diagram: controller -> control actions -> quantum system -> information -> controller]

Closed loop means feedback, just like in Watt's steam engines.

Boulton and Watt, 1788 (London Science Museum)

Types of Quantum Feedback: Using measurement. The classical measurement results are used by the controller (e.g. classical electronics) to provide a classical control signal. [Diagram: quantum system -> measurement -> classical information -> classical controller -> classical control actions -> quantum system]

Not using measurement. The controller is also a quantum system, and feedback involves a direct flow of quantum information. [Diagram: quantum system -> quantum information -> quantum controller -> quantum control actions -> quantum system]

The study of quantum feedback control has practical and fundamental value.

A Little History. To the best of my knowledge, Slava Belavkin was the first to publish results concerning quantum feedback control (late 1970s). However, there were independent pioneers in the physics community in the 1990s, including Wiseman, Milburn, Doherty and Jacobs, who also made fundamental contributions.

Quantum Feedback Chronology [Open Quantum Systems, Measurement and Filtering]:
- Late 1970s: Belavkin - linear, Gaussian filtering and control
- Early 1980s: Belavkin - optimal control using quantum operations
- Late 1980s: Belavkin - optimal filtering and control using quantum stochastic differential equations
- Early 1990s: Wiseman, Milburn - quantum optical measurement feedback
- Mid-late 1990s: Doherty, Jacobs - LQG optimal control; Mabuchi et al - experiments
- 2000s: Ahn, Belavkin, D'Helon, Doherty, Edwards, Gough, Bouten, van Handel, James, Kimura, Lloyd, Thomsen, Petersen, Schwab, Wiseman, Yanagisawa, and others - optimal control, Lyapunov control, robust control, applications, experiments

From Slava Belavkin's webpage... "Quantum Filtering and Control (QFC) as a dynamical theory of quantum feedback was initiated in my papers at the end of the 70's and completed in the preprint [1]. This was my positive response to the general negative opinion that quantum systems have uncontrollable behavior in the process of measurement. As was shown in this and the following discrete [2] and continuous time [3] papers, the problem of quantum controllability is related to the problem of quantum observability, which can be solved by means of quantum filtering. Thus quantum dynamical filtering was first invented for the solution of the problems of quantum optimal control. The explicit solution [4] of the quantum optimal linear filtering and control problem for quantum Gaussian Markov processes anticipated the general solution [5, 6] of quantum Markov filtering and control problems by quantum stochastic calculus techniques. The quantum nonlinear filtering equation for the posterior conditional expectations derived in [5, 6] was also represented in the form of stochastic wave equations. The general solution of these filtering equations in the renormalized (in the mean-square sense) linear form was constructed for bounded coefficients in [7] by quantum stochastic iterations. Quantum Filtering Theory (QFT) and the corresponding stochastic quantum equations now have wide applications for quantum continuous measurements, quantum dynamical reductions, quantum spontaneous localizations, quantum state diffusions, and quantum continuous trajectories. All these new quantum theories use particular types of stochastic Master equation which was initially derived from an extended quantum unitary evolution by the quantum filtering method."

Slava Belavkin's quantum feedback control papers (pre-2000):
1. V. P. Belavkin, "Optimal Measurement and Control in Quantum Dynamical Systems," Preprint 411, pp. 3-38, Instytut Fizyki, Copernicus University, Torun, 1979. (quant-ph/0208108)
2. V. P. Belavkin, "Optimization of Quantum Observation and Control," in Proc. 9th IFIP Conf. on Optimization Techniques, Notes in Control and Information Sciences 1, Springer-Verlag, Warszawa, 1979.
3. V. P. Belavkin, "Theory of the Control of Observable Quantum Systems," Automation and Remote Control 44(2), pp. 178-188, 1983. (quant-ph/0408003)
4. V. P. Belavkin, "Non-Demolition Measurement and Control in Quantum Dynamical Systems," in Information Complexity and Control in Quantum Systems, pp. 311-329, Springer-Verlag, Wien, 1987.
5. V. P. Belavkin, "Non-Demolition Measurements, Nonlinear Filtering and Dynamic Programming of Quantum Stochastic Processes," Lecture Notes in Control and Information Sciences 121, pp. 245-265, Springer-Verlag, Berlin, 1989.
6. V. P. Belavkin, "Non-Demolition Stochastic Calculus in Fock Space and Nonlinear Filtering and Control in Quantum Systems," in Stochastic Methods in Mathematics and Physics, pp. 310-324, World Scientific, Singapore, 1989.
7. V. P. Belavkin, "Stochastic Equations of Quantum Filtering," in Proc. 5th International Conference on Probability Theory and Mathematical Statistics, pp. 10-24, Vilnius, 1990.
8. V. P. Belavkin, "Continuous Non-Demolition Observation, Quantum Filtering and Optimal Estimation," in Quantum Aspects of Optical Communication, Lecture Notes in Physics 45, pp. 131-145, Springer, Berlin, 1991. (quant-ph/0509205)
9. V. P. Belavkin, "Dynamical Programming, Filtering and Control of Quantum Markov Processes," in Stochastic Analysis in Mathematical Physics, pp. 9-42, World Scientific, Singapore, 1998.
10. V. P. Belavkin, "Measurement, Filtering and Control in Quantum Open Dynamical Systems," Rep. Math. Phys. 43(3), pp. 405-425, 1999.

V.P. Belavkin, 1987 [1985 conference paper based on 1979 paper]

V.P. Belavkin, 1983

V.P. Belavkin, 1989 [1988 conference paper]

Optimal Control Using Quantum Stochastic Differential Equations. Quantum Langevin equations (QLEs) provide a general framework for open quantum systems and contain considerable physical information. QLEs expressed in terms of quantum stochastic differential equations (QSDEs) are very well suited for control engineering. Quantum operation models arise naturally after suitable conditioning. The system Hamiltonian may depend on a classical control parameter.

Model (Belavkin, 1988; Bouten-van Handel, 2005; Edwards-Belavkin, 2005; Gough-Belavkin-Smolyanov, 2005; James, 2005): Quantum probability space $(\mathcal{B} \otimes \mathcal{W}, \rho \otimes \phi)$, where $\mathcal{B} = \mathcal{B}(\mathfrak{h})$ is the algebra of system operators with state $\rho$, and $\mathcal{W} = \mathcal{B}(\mathfrak{F})$ is the algebra of operators on Fock space with vacuum state $\phi$. Unitary dynamics:
$$dU_t = \left\{ L\,dA_t^\dagger - L^\dagger\,dA_t - \tfrac{1}{2} L^\dagger L\,dt - iH(u(t))\,dt \right\} U_t, \qquad U_0 = I.$$
System operators $X \in \mathcal{B}$ evolve according to $X_t = j_t(X) = U_t^\dagger X U_t$:
$$dj_t(X) = j_t\big(\mathcal{L}_{u(t)}(X)\big)\,dt + j_t([L^\dagger, X])\,dA_t + j_t([X, L])\,dA_t^\dagger,$$
where the Lindblad generator is
$$\mathcal{L}_u(X) = i[H(u), X] + L^\dagger X L - \tfrac{1}{2}\big(L^\dagger L X + X L^\dagger L\big).$$
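The Lindblad generator can be checked numerically. Below is a minimal Python sketch (the qubit model $H(u) = u\,\sigma_z$ with coupling $L = \sigma_-$ is an assumed example, not from the slides) verifying two structural properties: $\mathcal{L}_u(I) = 0$ (the identity is conserved, so $j_t(I) = I$) and preservation of self-adjointness.

```python
import numpy as np

def dag(A):
    return A.conj().T

def lindblad(H, L, X):
    """Heisenberg-picture Lindblad generator from the slide:
    L_u(X) = i[H(u), X] + L^† X L - (1/2)(L^† L X + X L^† L)."""
    return (1j * (H @ X - X @ H)
            + dag(L) @ X @ L
            - 0.5 * (dag(L) @ L @ X + X @ dag(L) @ L))

# Illustrative qubit data (these operator choices are assumptions):
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_minus

u = 0.7                       # classical control parameter in H(u) = u * sz
H, L = u * sz, sm

I2 = np.eye(2, dtype=complex)
print(np.allclose(lindblad(H, L, I2), 0))        # True: L_u(I) = 0
G = lindblad(H, L, sx)
print(np.allclose(G, dag(G)))                    # True: self-adjointness kept
```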

Measurements:
$$Y_t = U_t^\dagger (A_t + A_t^\dagger) U_t, \qquad dY_t = j_t(L + L^\dagger)\,dt + dA_t + dA_t^\dagger.$$
$\mathcal{Y}_t = \mathrm{vN}\{Y_s : s \le t\}$ is a commutative family of von Neumann algebras, so $Y$ is equivalent to a classical process, which can be measured via homodyne detection. $\mathcal{C}_t = \mathrm{vN}\{Z_s = A_s + A_s^\dagger : s \le t\}$ is also commutative, and $\mathcal{Y}_t = U_t^\dagger \mathcal{C}_t U_t$. Belavkin non-demolition condition:
$$[X, Z_s] = [X_t, Y_s] = 0, \qquad 0 \le s \le t,$$
so that (in terms of commutants) $X \in \mathcal{C}_t'$ and $j_t(X) \in \mathcal{Y}_t'$. Thus the conditional expectations $\mathbb{P}[X \mid \mathcal{C}_t]$, $\mathbb{P}[X_t \mid \mathcal{Y}_t]$ are well-defined (hence filtering is possible).

With respect to the state $\mathbb{P}$, the process $Z_t = A_t + A_t^\dagger$ is equivalent to a standard Wiener process. Furthermore, with respect to the state $\mathbb{P}^0_T$ defined by $\mathbb{P}^0_T[X] = \mathbb{P}[U_T X U_T^\dagger]$, $X \in \mathcal{B} \otimes \mathcal{W}$, the measurement process $Y_t$ is equivalent to a standard Wiener process (cf. the reference measure in classical filtering). A controller is a causal function from measurement data to control signals: $u(t) = K(t, y_{[0,t]})$. The coefficients of the stochastic differential equations are then adapted to the measurement filtration $\mathcal{Y}_t$, and the filtering theory continues to apply. (Belavkin, 1988; Bouten-van Handel, 2005)

The Optimal Control Problem: Cost operators $C_1(u)$, $C_2$ (non-negative, self-adjoint) are used to specify the performance integral
$$\int_0^T C_1(t)\,dt + C_2(T), \qquad \text{where } C_1(t) = j_t\big(C_1(u(t))\big), \quad C_2(T) = j_T(C_2).$$
We wish to minimize the expected cost over all controllers $K$:
$$J(K) = \mathbb{P}\Big[\int_0^T C_1(t)\,dt + C_2(T)\Big].$$

Dynamic Programming: First, we represent the expected cost in terms of filtered quantities, as follows. Define $\tilde U_t$ by (Holevo, 1990; Belavkin, 1992; Bouten-van Handel, 2005; James, 2005)
$$d\tilde U_t = \left\{ L\,(dA_t + dA_t^\dagger) - \tfrac{1}{2} L^\dagger L\,dt - iH(u(t))\,dt \right\} \tilde U_t, \qquad \tilde U_0 = I.$$
Then $\tilde U_t^\dagger X \tilde U_t \in \mathcal{C}_t'$ for $X \in \mathcal{B}$, and the unnormalized filter state
$$\sigma_t(X) = U_t^\dagger\, \mathbb{P}[\tilde U_t^\dagger X \tilde U_t \mid \mathcal{C}_t]\, U_t$$
is well-defined. It evolves according to (Belavkin, 1992)
$$d\sigma_t(X) = \sigma_t\big(\mathcal{L}_{u(t)}(X)\big)\,dt + \sigma_t(L^\dagger X + X L)\,dY_t.$$
This is the form of the Belavkin quantum filter we use. It is analogous to the Duncan-Mortensen-Zakai equation of classical filtering. $\sigma_t$ is an information state (control theory terminology).
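As a rough illustration of how such a filter can be integrated numerically, here is an Euler-Maruyama sketch in the adjoint (density-matrix) form $d\tilde\rho_t = \mathcal{L}^\star(\tilde\rho_t)\,dt + (L\tilde\rho_t + \tilde\rho_t L^\dagger)\,dY_t$, with $\sigma_t(X) = \mathrm{tr}(\tilde\rho_t X)$. The qubit model ($H = \tfrac{1}{2}\sigma_z$, $L = \sigma_-$, fixed control) and the step size are assumptions for illustration; the measurement increments are drawn as Wiener increments, which is legitimate under the reference state $\mathbb{P}^0_T$.

```python
import numpy as np

rng = np.random.default_rng(0)

def dag(A):
    return A.conj().T

# Illustrative qubit (assumed model): H = 0.5 * sigma_z, L = sigma_minus
H = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
L = np.array([[0, 0], [1, 0]], dtype=complex)

def zakai_step(rho, dY, dt):
    """One Euler-Maruyama step of the Belavkin-Zakai filter in adjoint
    (density-matrix) form: d rho = L*(rho) dt + (L rho + rho L^†) dY."""
    drift = (-1j * (H @ rho - rho @ H)
             + L @ rho @ dag(L)
             - 0.5 * (dag(L) @ L @ rho + rho @ dag(L) @ L))
    return rho + drift * dt + (L @ rho + rho @ dag(L)) * dY

dt, steps = 1e-3, 1000
rho = np.array([[1, 0], [0, 0]], dtype=complex)   # unnormalized filter state
for _ in range(steps):
    dY = np.sqrt(dt) * rng.standard_normal()      # Y is Wiener under P^0_T
    rho = zakai_step(rho, dY, dt)

rho_n = rho / np.trace(rho)                       # normalized conditional state
print(abs(np.trace(rho_n) - 1) < 1e-9)            # True
```

Normalizing $\tilde\rho_t$ at the end gives the conditional state; the trace of the unnormalized state carries the likelihood of the observed record.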

Belavkin, 1992

We derive the following representation in terms of $\sigma_t$:
$$\begin{aligned}
J(K) &= \mathbb{P}\Big[\int_0^T \tilde U_t^\dagger C_1(u(t)) \tilde U_t\,dt + \tilde U_T^\dagger C_2 \tilde U_T\Big] \\
&= \mathbb{P}\Big[\int_0^T \mathbb{P}[\tilde U_t^\dagger C_1(u(t)) \tilde U_t \mid \mathcal{C}_t]\,dt + \mathbb{P}[\tilde U_T^\dagger C_2 \tilde U_T \mid \mathcal{C}_T]\Big] \\
&= \mathbb{P}\Big[\int_0^T U_t\,\sigma_t\big(C_1(u(t))\big)\,U_t^\dagger\,dt + U_T\,\sigma_T(C_2)\,U_T^\dagger\Big] \\
&= \mathbb{P}^0_T\Big[\int_0^T \sigma_t\big(C_1(u(t))\big)\,dt + \sigma_T(C_2)\Big].
\end{aligned}$$
This last expression is equivalent to a classical expectation with respect to Wiener measure:
$$J(K) = \mathbb{E}^0_T\Big[\int_0^T \sigma_t\big(C_1(u(t))\big)\,dt + \sigma_T(C_2)\Big].$$
The fact that the information state $\sigma_t$ is computable from dynamics driven by the measured data means that the methods of dynamic programming are applicable.

Define the value function
$$S(\sigma, t) = \inf_K \mathbb{E}^0_{\sigma,t}\Big[\int_t^T \sigma_s\big(C_1(u(s))\big)\,ds + \sigma_T(C_2)\Big],$$
which quantifies the optimal cost to go from a current state $\sigma$ at time $t$. The dynamic programming principle states that
$$S(\sigma, t) = \inf_K \mathbb{E}^0_{\sigma,t}\Big[\int_t^s \sigma_r\big(C_1(u(r))\big)\,dr + S(\sigma_s, s)\Big].$$
At least formally, $S(\sigma, t)$ satisfies the Hamilton-Jacobi-Bellman equation
$$\frac{\partial}{\partial t} S(\sigma, t) + \inf_{u \in \mathbf{U}} \left\{ \mathcal{L}^u S(\sigma, t) + \sigma(C_1(u)) \right\} = 0, \qquad S(\sigma, T) = \sigma(C_2),$$
where $\mathcal{L}^u$ is the Markov generator of $\sigma_t$ (for fixed value $u \in \mathbf{U}$).
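The dynamic programming principle is easiest to see in a toy classical, discrete-time, finite-state caricature, where the backward recursion $S(x,t) = \min_u \{ c(x,u) + \mathbb{E}[S(X_{t+1}, t+1) \mid X_t = x, u] \}$ can be computed exactly. All transition probabilities and costs below are made-up numbers, purely to illustrate the recursion (not the operator-valued quantum problem).

```python
import numpy as np

n_states, n_controls, T = 3, 2, 10

# P[u, x, y] = transition probability x -> y under control u (illustrative)
P = np.array([
    [[0.9, 0.1, 0.0], [0.2, 0.7, 0.1], [0.0, 0.3, 0.7]],   # control u = 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.1, 0.0, 0.9]],   # control u = 1
])
c = np.array([[1.0, 2.0], [0.5, 0.3], [2.0, 0.1]])          # running cost c[x, u]
c_final = np.array([0.0, 1.0, 5.0])                         # terminal cost

S = np.zeros((T + 1, n_states))
policy = np.zeros((T, n_states), dtype=int)
S[T] = c_final
for t in range(T - 1, -1, -1):
    # Q[x, u] = running cost + expected cost-to-go under control u
    Q = c + np.einsum('uxy,y->xu', P, S[t + 1])
    S[t] = Q.min(axis=1)            # dynamic programming recursion
    policy[t] = Q.argmin(axis=1)    # minimizing control, cf. u*(sigma, t)
```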

Optimal Controller: Suppose we have a solution $S(\sigma, t)$ of the HJB equation. Define
$$u^\star(\sigma, t) = \arg\min_u \left\{ \mathcal{L}^u S(\sigma, t) + \sigma(C_1(u)) \right\}.$$
This defines the optimal feedback controller $K^\star$:
$$d\sigma_t(X) = \sigma_t\big(\mathcal{L}_{u(t)}(X)\big)\,dt + \sigma_t(L^\dagger X + XL)\,dY_t, \qquad u(t) = u^\star(\sigma_t, t).$$
This controller has the separation structure, with filter dynamics given by the Belavkin quantum filter for the information state $\sigma_t$.

Another Type of Optimal Control Problem - Risk-Sensitive. Classical: Jacobson, 1973; Whittle, 1980; Bensoussan-van Schuppen, 1985; James-Baras-Elliott, 1993. Classical criterion [note the exponential]:
$$J^\mu(K) = \mathbb{E}\Big[\exp\Big\{\mu\Big(\int_0^T C_1(u(t), t)\,dt + C_2(T)\Big)\Big\}\Big].$$
Here, $\mu > 0$ is a risk parameter. Quantum criterion (James, 2004, 2005):
$$J^\mu(K) = \mathbb{P}\big[R^\dagger(T)\, e^{\mu C_2(T)}\, R(T)\big],$$
where $R(t)$ is the time-ordered exponential defined by
$$\frac{dR(t)}{dt} = \frac{\mu}{2} C_1(t) R(t), \quad R(0) = I, \qquad \text{i.e.} \quad R(t) = \overleftarrow{\exp}\Big(\frac{\mu}{2}\int_0^t C_1(s)\,ds\Big).$$
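A quick classical Monte Carlo sketch of what the exponential buys: for a Gaussian total cost $C$, $\frac{1}{\mu}\log\mathbb{E}[e^{\mu C}] = \mathbb{E}[C] + \frac{\mu}{2}\,\mathrm{Var}[C]$ exactly, so the risk-sensitive criterion penalizes the spread of the cost, not just its mean. The cost statistics below are arbitrary illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

mu = 0.5                              # risk parameter
m, s = 2.0, 1.5                       # illustrative cost mean / std. dev.
C = rng.normal(m, s, size=200_000)    # samples of the accumulated cost

# Certainty-equivalent risk-sensitive cost vs. the risk-neutral mean cost:
risk_sensitive = np.log(np.mean(np.exp(mu * C))) / mu
risk_neutral = C.mean()
print(risk_sensitive > risk_neutral)  # True: variance is penalized
```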

In general, it does not appear possible to solve this problem using the unnormalized conditional state $\sigma_t$. Accordingly, we introduce a risk-sensitive information state
$$\sigma^\mu_t(X) = U_t^\dagger\, \mathbb{P}[\tilde V_t^\dagger X \tilde V_t \mid \mathcal{C}_t]\, U_t, \qquad X \in \mathcal{B},$$
where $\tilde V_t$ is given by
$$d\tilde V_t = \Big\{ L\,(dA_t + dA_t^\dagger) - \Big(\tfrac{1}{2} L^\dagger L + iH(u(t)) - \tfrac{\mu}{2} C_1(u(t))\Big) dt \Big\} \tilde V_t.$$
We then have the representation
$$J^\mu(K) = \mathbb{P}^0_T\big[\sigma^\mu_T(e^{\mu C_2})\big],$$
which facilitates dynamic programming. The Hamilton-Jacobi-Bellman equation for this problem is
$$\frac{\partial}{\partial t} S^\mu(\sigma, t) + \inf_{u \in \mathbf{U}} \left\{ \mathcal{L}^{\mu,u} S^\mu(\sigma, t) \right\} = 0, \qquad S^\mu(\sigma, T) = \sigma(e^{\mu C_2}),$$
where $\mathcal{L}^{\mu,u}$ is the Markov generator of $\sigma^\mu_t$ (for fixed value $u \in \mathbf{U}$).

Optimal Risk-Sensitive Controller: Suppose we have a solution $S^\mu(\sigma, t)$ of the risk-sensitive HJB equation. Define
$$u^{\mu,\star}(\sigma, t) = \arg\min_u \left\{ \mathcal{L}^{\mu,u} S^\mu(\sigma, t) \right\}.$$
This defines the optimal feedback controller $K^{\mu,\star}$:
$$d\sigma^\mu_t(X) = \sigma^\mu_t\big(\mathcal{L}^{\mu,u(t)}(X)\big)\,dt + \sigma^\mu_t(L^\dagger X + XL)\,dY_t, \qquad u(t) = u^{\mu,\star}(\sigma^\mu_t, t),$$
where
$$\mathcal{L}^{\mu,u}(X) = \mathcal{L}_u(X) + \frac{\mu}{2}\big(C_1(u) X + X C_1(u)\big).$$
[Note the inclusion of the cost observable in the modified Lindblad.] This controller also has the separation structure, with filter dynamics given by the modified Belavkin quantum filter for the risk-sensitive information state $\sigma^\mu_t$.
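A numerical sketch of the modified generator (the qubit operators and the cost observable $C_1$ below are assumed illustrative choices): we check that $\mu = 0$ recovers the ordinary Lindblad generator and that self-adjointness is preserved by the symmetrized cost term.

```python
import numpy as np

def dag(A):
    return A.conj().T

def lindblad(H, L, X):
    """L_u(X) = i[H, X] + L^† X L - (1/2)(L^† L X + X L^† L)."""
    return (1j * (H @ X - X @ H) + dag(L) @ X @ L
            - 0.5 * (dag(L) @ L @ X + X @ dag(L) @ L))

def lindblad_rs(H, L, C1, mu, X):
    """Risk-sensitive modification: L^{mu,u}(X) = L_u(X) + (mu/2)(C1 X + X C1)."""
    return lindblad(H, L, X) + 0.5 * mu * (C1 @ X + X @ C1)

# Illustrative qubit data (assumptions, not from the source):
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)
H, L = 0.5 * sz, sm
C1 = np.eye(2) + 0.3 * sx     # non-negative, self-adjoint cost observable

G0 = lindblad_rs(H, L, C1, 0.0, sx)
print(np.allclose(G0, lindblad(H, L, sx)))       # True: mu = 0 recovers L_u
G = lindblad_rs(H, L, C1, 0.4, sx)
print(np.allclose(G, dag(G)))                    # True: self-adjointness kept
```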

Comments: The risk-sensitive problem is of interest for at least two reasons. Robustness: risk-sensitive controllers have the practical benefit that they can cope with uncertainty better than standard controllers. [Figure omitted: plot comparing risk-sensitive and standard controller performance.] Fundamentals of quantum mechanics: the risk-sensitive information state can be viewed as a subjective state that includes knowledge and purpose, extending the Copenhagen interpretation in the feedback context.

H-Infinity Control for Linear Quantum Systems. H-infinity refers to the Hardy space $H^\infty$, which provides the setting for a frequency domain approach to robust control system design (initiated by Zames, late 1970s). Robustness refers to the ability of a control system to tolerate uncertainty, noise and disturbances, to some extent at least. Feedback is fundamental to this, and in fact robustness is the raison d'être for feedback.

Motivation: Feedback Stability. Even when individual components are stable, feedback interconnections need not be. [Diagram omitted: feedback interconnection of two systems $\Sigma_A$ and $\Sigma_B$.] The small gain theorem asserts stability of the feedback loop if the loop gain is less than one:
$$g_A\, g_B < 1.$$
Classical: Zames, Sandberg, 1960s. Quantum: D'Helon-James, 2006.
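A classical scalar sketch of the small gain condition, using made-up first-order systems $k_i/(s + a_i)$ whose H-infinity gains are $k_i/a_i$: when the product of the gains is below one the loop is stable. The theorem only gives sufficiency, but in this particular loop pushing the gain product above one does destabilize it.

```python
import numpy as np

# Two stable first-order systems (illustrative numbers):
#   Sigma_A: dx1 = -a1 x1 + k1 u1, y1 = x1, H-infinity gain g_A = k1/a1
#   Sigma_B: dx2 = -a2 x2 + k2 u2, y2 = x2, H-infinity gain g_B = k2/a2
a1, k1 = 2.0, 1.0
a2, k2 = 3.0, 2.0
g_A, g_B = k1 / a1, k2 / a2
print(g_A * g_B < 1)                              # True: small gain holds

# Feedback interconnection u1 = y2, u2 = y1:
A_loop = np.array([[-a1, k1],
                   [k2, -a2]])
print(max(np.linalg.eigvals(A_loop).real) < 0)    # True: loop is stable

# Raise Sigma_B's gain so g_A * g_B = 7/6 > 1; here the loop does go unstable.
A_bad = np.array([[-a1, k1],
                  [7.0, -a2]])
print(max(np.linalg.eigvals(A_bad).real) > 0)     # True: loop is unstable
```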

Stability is quantified in a mean-square sense as follows. For a system $\Sigma$ with input $u$ and output $y$,
$$du(t) = \beta_u(t)\,dt + dB_u(t), \qquad dy(t) = \beta_y(t)\,dt + dB_y(t),$$
and $\Sigma$ has gain $g$ if
$$\|\beta_y\|_t^2 \le \mu + \lambda t + g^2 \|\beta_u\|_t^2$$
for some constants $\mu, \lambda \ge 0$.

The H-Infinity Robust Control Problem: Given a system (plant), find another system (controller) so that the gain from $w$ to $z$ is small. This is one way of reducing the effect of uncertainty or environmental influences. [Diagram omitted: plant with uncertainty/environment input $w$, noise $v$, control input $u$, performance output $z$, measured output $y$, and controller $K$ with noise $v_K$.] Classical: Zames, late 1970s. Quantum: D'Helon-James, 2005; James-Nurdin-Petersen, 2006.

Linear/Gaussian Model. Plant:
$$\begin{aligned}
dx(t) &= Ax(t)\,dt + B_0\,dv(t) + B_1\,dw(t) + B_2\,du(t), \qquad x(0) = x, \\
dz(t) &= C_1 x(t)\,dt + D_{12}\,du(t), \\
dy(t) &= C_2 x(t)\,dt + D_{20}\,dv(t) + D_{21}\,dw(t).
\end{aligned}$$
Controller:
$$\begin{aligned}
d\xi(t) &= A_K \xi(t)\,dt + B_{K1}\,dv_K(t) + B_K\,dy(t), \\
du(t) &= C_K \xi(t)\,dt + B_{K0}\,dv_K(t).
\end{aligned}$$
Signals: $w, u, v, z, y, v_K$ are semimartingales, e.g. $dw(t) = \beta_w(t)\,dt + d\tilde w(t)$, where $\tilde w(t)$ is the noise, assumed white Gaussian with Ito table $d\tilde w(t)\, d\tilde w^T(t) = F_w\,dt$, where $F_w$ is non-negative Hermitian. [As in Belavkin's work.]

Idea of Result (James-Nurdin-Petersen, 2006). Under some assumptions, roughly speaking:
(i) If the closed loop system, regarded as an operator $w \mapsto z$, has gain less than $g$, then there exist solutions $X$ and $Y$ of the algebraic Riccati equations
$$\begin{aligned}
&(A - B_2 E_1^{-1} D_{12}^T C_1)^T X + X (A - B_2 E_1^{-1} D_{12}^T C_1) \\
&\qquad + X (B_1 B_1^T - g^2 B_2 E_1^{-1} B_2^T) X + g^{-2} C_1^T (I - D_{12} E_1^{-1} D_{12}^T) C_1 = 0, \\[4pt]
&(A - B_1 D_{21}^T E_2^{-1} C_2) Y + Y (A - B_1 D_{21}^T E_2^{-1} C_2)^T \\
&\qquad + Y (g^{-2} C_1^T C_1 - C_2^T E_2^{-1} C_2) Y + B_1 (I - D_{21}^T E_2^{-1} D_{21}) B_1^T = 0,
\end{aligned}$$
satisfying stabilizability conditions, with $XY$ having spectral radius less than one.

(ii) Conversely, if there exist solutions $X$, $Y$ of these Riccati equations satisfying stabilizability conditions, with $XY$ having spectral radius less than one, then the controller defined by
$$\begin{aligned}
A_K &= A + B_2 C_K - B_K C_2 + (B_1 - B_K D_{21}) B_1^T X, \\
B_K &= (I - YX)^{-1} (Y C_2^T + B_1 D_{21}^T) E_2^{-1}, \\
C_K &= -E_1^{-1} (g^2 B_2^T X + D_{12}^T C_1),
\end{aligned}$$
with an arbitrary choice of $v_K$, $B_{K1}$, $B_{K0}$, achieves a closed loop with gain less than $g$. Note: physical realizability may impose conditions on the controller noise terms $v_K$, $B_{K1}$, $B_{K0}$.
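The controller formulas can be transcribed into code directly. In the sketch below, the plant matrices and the "solutions" $X$, $Y$ are placeholder numbers that do not solve the Riccati equations; the point is only to exercise the assembly of $(A_K, B_K, C_K)$ and the spectral-radius coupling check $\rho(XY) < 1$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2                        # state dimension; single input / single output

# Placeholder plant data (made-up numbers, not a physical quantum plant):
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B1, B2 = rng.standard_normal((n, 1)), rng.standard_normal((n, 1))
C1, C2 = rng.standard_normal((1, n)), rng.standard_normal((1, n))
D12 = np.array([[1.0]])      # E1 = D12^T D12 invertible
D21 = np.array([[1.0]])      # E2 = D21 D21^T invertible
g = 2.0
E1, E2 = D12.T @ D12, D21 @ D21.T

# Placeholder Riccati "solutions" chosen so that rho(XY) < 1:
X, Y = 0.1 * np.eye(n), 0.2 * np.eye(n)
rho_XY = max(abs(np.linalg.eigvals(X @ Y)))
print(rho_XY < 1)                         # True: coupling condition holds

# Controller assembly, following the quoted formulas:
CK = -np.linalg.solve(E1, g**2 * B2.T @ X + D12.T @ C1)
BK = np.linalg.solve(np.eye(n) - Y @ X, Y @ C2.T + B1 @ D21.T) @ np.linalg.inv(E2)
AK = A + B2 @ CK - BK @ C2 + (B1 - BK @ D21) @ B1.T @ X
print(AK.shape, BK.shape, CK.shape)       # (2, 2) (2, 1) (1, 2)
```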

Example with Quantum Controller: [Diagram omitted: plant (mode $a$) with inputs $w$, $v$, $u$ and outputs $z$, $y$, interconnected with a fully quantum controller (mode $a_K$) driven by quantum noises $v_{K1}$, $v_{K2}$.]

Example with Classical Controller: [Diagram omitted: plant (mode $a$) with inputs $w$, $v$ and output $z$; the output field is measured by homodyne detection (HD), processed by a classical controller with noise $v_K$, and fed back as the control $u$ via a modulator (Mod).]

Comments: These results provide the beginning of a robust control theory for quantum systems. Controllers themselves may be quantum or classical. It is important to note that the controller may need quantum noise inputs - this broadens the concept of controller, like randomization in classical optimal control.

Discussion. We have sketched some recent work in quantum control that I have been involved with. Slava Belavkin's work was a crucial foundation. Quantum control has practical and foundational importance. Controllers may themselves be quantum, and may require additional quantum noise. There is an important and exciting future for feedback control of quantum systems.