Robust Quantum Error-Correction via Convex Optimization



Robust Quantum Error-Correction via Convex Optimization
Robert Kosut, SC Solutions, Systems & Control Division, Sunnyvale, CA
Alireza Shabani, EE Dept., USC, Los Angeles, CA
Daniel Lidar, EE, Chem., & Phys. Depts., USC, Los Angeles, CA
Presented at QEC-07, University of Southern California, Los Angeles, Dec. 17-21, 2007
Research supported by the DARPA QuIST Program (Quantum Information Science & Technology)


Richard P. Feynman, "Quantum Mechanical Computers," Optics News, February 1985: "In a machine such as this there are very many other problems due to imperfections. ... At least some of these problems can be remedied in the usual way by techniques such as error correcting codes ... But until we find a specific implementation for this computer, I do not know how to proceed to analyze these effects. However, it appears that they would be very important in practice. This computer seems to be very delicate and these imperfections may produce considerable havoc."


Concerns
- Robustness: Perfect QEC schemes, as well as those produced by optimization tuned to specific errors, are often not robust to even small changes in the noise channel.
- Cost: The encoding and recovery effort in perfect QEC typically grows exponentially with the number of errors in the noise channel.

A robust optimization approach to QEC can address both:
- Robustness: Noise channels which do not satisfy the standard assumptions for perfect correction can be incorporated.
- Cost: If robust fidelity levels are sufficiently high, then no further increases in codespace dimension and/or levels of concatenation are necessary. This assessment, which is critical to any specific implementation, is not knowable without performing the robust optimization.

Outline
- Problem formulation
- Direct fidelity maximization
- Indirect fidelity maximization
- Numerical examples
- Conclusions

Error correction model (OSR model)

ρ_S →[C]→ ρ_C →[E]→ σ_C →[R]→ ρ_S:
ρ_S ↦ Σ_{r,e,c} (R_r E_e C_c) ρ_S (R_r E_e C_c)†

R = { R_r ∈ C^{n_S × n_C}, r = 1,...,m_R : Σ_r R_r† R_r = I_{n_C} }   (recovery)
E = { E_e ∈ C^{n_C × n_C}, e = 1,...,m_E : Σ_e E_e† E_e = I_{n_C} }   (error)
C = { C_c ∈ C^{n_C × n_S}, c = 1,...,m_C : Σ_c C_c† C_c = I_{n_S} }   (encoding)

Design goal: determine the encoding C and recovery R so that the noisy channel REC is as close as possible to a desired unitary U_S.
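The OSR model above can be sketched numerically. In this illustrative toy (not from the talk), hypothetical single-qubit channels stand in for C, E, and R: each map is a list of OSR elements {K_k} with Σ_k K_k† K_k = I, and the composed channel REC has OSR elements R_r E_e C_c.

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

p = 0.1
C_ops = [I2]                                   # trivial encoding (n_S = n_C = 2)
E_ops = [np.sqrt(1 - p) * I2, np.sqrt(p) * X]  # bit-flip error channel, illustrative
R_ops = [I2]                                   # trivial recovery

# OSR of the composition R∘E∘C: all products R_r E_e C_c.
REC = [r @ e @ c for r, e, c in product(R_ops, E_ops, C_ops)]

# Completeness (trace preservation): sum_k K_k† K_k = I.
comp = sum(K.conj().T @ K for K in REC)
assert np.allclose(comp, I2)

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|
sigma = sum(K @ rho @ K.conj().T for K in REC)
assert abs(np.trace(sigma).real - 1.0) < 1e-9             # trace preserved
```

The completeness sums are exactly the constraints listed for R, E, and C above; composing OSRs multiplies element counts, which is why m_R · m_E · m_C terms appear in the fidelity below.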

CONTROL: ubiquitous for QIP/QEC. Control is required everywhere there is a desired unitary. [Image: 1969 Doubleday book cover of UBIK by Philip K. Dick.]

Performance measures [a,b,c]

Fidelity between REC and U_S:
f = (1/n_S²) Σ_{r,e,c} |Tr U_S† R_r E_e C_c|²

Perfect error correction (f = 1) holds if and only if there are constants α_rec such that
R_r E_e C_c = α_rec U_S ∀ r,e,c, with Σ_{r,e,c} |α_rec|² = 1.

This suggests the indirect measure of fidelity, the distance-like error
d = Σ_{r,e,c} ‖R_r E_e C_c − α_rec U_S‖²_fro

[a] B. Schumacher, Phys. Rev. A (1996). [b] E. Knill, R. Laflamme, Phys. Rev. A (1997). [c] M.A. Nielsen, I.L. Chuang, Quantum Computation and Quantum Information, Cambridge (2000).
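The link between f and d can be checked numerically: with the optimal α, d = 2 n_S (1 − √f), i.e. f = (1 − d/(2 n_S))². In this sketch a random CPTP channel is an illustrative stand-in for the composed channel REC, with U_S = I.

```python
import numpy as np

rng = np.random.default_rng(0)
n_S, m = 2, 4

# Random CPTP Kraus set via normalized Ginibre matrices.
G = rng.normal(size=(m, n_S, n_S)) + 1j * rng.normal(size=(m, n_S, n_S))
S = sum(g.conj().T @ g for g in G)
w, V = np.linalg.eigh(S)
S_inv_half = V @ np.diag(w**-0.5) @ V.conj().T
K = [g @ S_inv_half for g in G]                # sum_k K_k† K_k = I

U_S = np.eye(n_S, dtype=complex)
T = np.array([np.trace(U_S.conj().T @ k) for k in K])
f = (np.abs(T)**2).sum() / n_S**2              # fidelity, as defined above

alpha = T / np.linalg.norm(T)                  # optimal alpha: Σ|α|² = 1
d = sum(np.linalg.norm(k - a * U_S, 'fro')**2 for k, a in zip(K, alpha))

# With the optimal alpha, f = (1 - d/(2 n_S))^2.
assert np.isclose(f, (1 - d / (2 * n_S))**2)
```

The identity follows because Σ_k Tr K_k† K_k = n_S and ‖U_S‖²_fro = n_S, so d = 2 n_S − 2 Re Σ_k ᾱ_k Tr(U_S† K_k), minimized by α ∝ Tr(U_S† K_k).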

Direct fidelity maximization:
maximize   f = (1/n_S²) Σ_{r,e,c} |Tr U_S† R_r E_e C_c|²
subject to Σ_r R_r† R_r = I_{n_C}, Σ_c C_c† C_c = I_{n_S}

Indirect fidelity maximization:
minimize   d = Σ_{r,e,c} ‖R_r E_e C_c − α_rec U_S‖²_fro
subject to Σ_r R_r† R_r = I_{n_C}, Σ_c C_c† C_c = I_{n_S}, Σ_{r,e,c} |α_rec|² = 1

Both are non-convex optimizations for which local solutions can be found from a bi-convex iteration between recovery and encoding [a-e].

[a] M. Reimpell & R.F. Werner, Phys. Rev. Lett. (2005). [b] N. Yamamoto, S. Hara & K. Tsumura, Phys. Rev. A (2005). [c] A.S. Fletcher, P.W. Shor & M.Z. Win, arXiv:quant-ph/0606035, Phys. Rev. A (2007). [d] R.L. Kosut & D.A. Lidar, arXiv:quant-ph/0606078. [e] R.L. Kosut, A. Shabani & D.A. Lidar, arXiv:quant-ph/0703274 (to appear, Phys. Rev. Lett.).

Direct fidelity maximization
maximize   f = (1/n_S²) Σ_{r,e,c} |Tr U_S† R_r E_e C_c|²
subject to Σ_c C_c† C_c = I_{n_S}, Σ_r R_r† R_r = I_{n_C}

Expand the OSR elements in a fixed basis:
C_c = Σ_{i=1}^{n_S n_C} x_ci B_Ci, B_Ci ∈ C^{n_C × n_S}
R_r = Σ_{i=1}^{n_S n_C} x_ri B_Ri, B_Ri ∈ C^{n_S × n_C}

Direct fidelity maximization is then equivalent to
maximize   f = Σ_{ijkl} (X_R)_ij (X_C)_kl F_ijkl(E)
subject to Σ_ij (X_R)_ij B_Ri† B_Rj = I_{n_C}, Σ_kl (X_C)_kl B_Ck† B_Cl = I_{n_S}
where
(X_R)_ij = Σ_r x_ri x̄_rj, (X_C)_kl = Σ_c x_ck x̄_cl
F_ijkl(E) = Σ_e (Tr U_S† B_Ri E_e B_Ck)(Tr U_S† B_Rj E_e B_Cl)* / n_S²

Process matrices
X_R, X_C ∈ C^{n_S n_C × n_S n_C}:
(X_R)_ij = Σ_r x_ri x̄_rj, (X_C)_kl = Σ_c x_ck x̄_cl
Relax these quadratic constraints to semidefinite constraints: X_R ⪰ 0, X_C ⪰ 0.

Relaxed direct fidelity maximization:
maximize   f = Σ_{ijkl} (X_R)_ij (X_C)_kl F_ijkl(E)
subject to X_R ⪰ 0, X_C ⪰ 0, Σ_ij (X_R)_ij B_Ri† B_Rj = I_{n_C}, Σ_kl (X_C)_kl B_Ck† B_Cl = I_{n_S}

- not a convex optimization jointly in X_R and X_C
- convex (an SDP) in X_R for a given X_C, and in X_C for a given X_R
- a local solution is obtained from a bi-convex iterative optimization: semidefinite programs (SDPs) in the encoding and recovery process matrices X_C and X_R

Given the encoding C, the optimal recovery process matrix is the solution of the SDP
maximize   f = Tr X_R W_R(E, C)
subject to X_R ⪰ 0, Σ_ij (X_R)_ij B_Ri† B_Rj = I_{n_C}
OSR elements R_r are obtained from X_R via the singular value decomposition X_R = V S V†:
R_r = Σ_{i=1}^{n_S n_C} √s_r V_ir B_Ri, r = 1,...,n_S n_C

Given the recovery R, the optimal encoding process matrix is the solution of the SDP
maximize   f = Tr X_C W_C(E, R)
subject to X_C ⪰ 0, Σ_ij (X_C)_ij B_Ci† B_Cj = I_{n_S}
OSR elements C_c are obtained from X_C via the singular value decomposition X_C = V S V†:
C_c = Σ_{i=1}^{n_S n_C} √s_c V_ic B_Ci, c = 1,...,n_S n_C
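The extraction of OSR elements from a process matrix via its eigendecomposition can be sketched numerically. The operator basis (matrix units) and the channel (a single-qubit bit flip) below are illustrative choices, not the talk's:

```python
import numpy as np
from itertools import product as iproduct

n = 2
# Orthonormal (Hilbert-Schmidt) basis of matrix units for 2x2 operators.
B = []
for a, b in iproduct(range(n), range(n)):
    M = np.zeros((n, n), dtype=complex)
    M[a, b] = 1.0
    B.append(M)

X_op = np.array([[0, 1], [1, 0]], dtype=complex)
p = 0.2
K = [np.sqrt(1 - p) * np.eye(n, dtype=complex), np.sqrt(p) * X_op]

# Expansion coefficients x_ri = Tr(B_i† K_r) and process matrix X_ij = Σ_r x_ri conj(x_rj).
x = np.array([[np.trace(Bi.conj().T @ Kr) for Bi in B] for Kr in K])
Xmat = x.T @ x.conj()          # Hermitian and PSD by construction

# Rebuild OSR elements from the eigendecomposition X = V S V† (the slide's SVD step).
s, V = np.linalg.eigh(Xmat)
K2 = [np.sqrt(sr) * sum(V[i, r] * B[i] for i in range(len(B)))
      for r, sr in enumerate(s) if sr > 1e-12]

# The rebuilt OSR implements the same channel.
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
out1 = sum(k @ rho @ k.conj().T for k in K)
out2 = sum(k @ rho @ k.conj().T for k in K2)
assert np.allclose(out1, out2)
```

This works because the channel depends on the coefficients only through X: ρ ↦ Σ_ij X_ij B_i ρ B_j†, so any factorization of X yields an equivalent OSR.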

Iterative direct fidelity maximization: (X_R-X_C)-SDP
Initialize: OSR C → process matrix X_C.
Repeat:
1. Compute X_R (via SDP): maximize Tr X_R W_R(E, X_C) subject to X_R ⪰ 0, Σ_ij (X_R)_ij B_Ri† B_Rj = I_{n_C}
2. Compute X_C (via SDP): maximize Tr X_C W_C(E, X_R) subject to X_C ⪰ 0, Σ_ij (X_C)_ij B_Ci† B_Cj = I_{n_S}
Until f(X_R, X_C) stops increasing.
Each step can only increase f(X_R, X_C). Compute the OSRs (R, C) via SVDs at termination.

Error system uncertainty
The error system is one of a number of possible error systems:
E_β = { E_βe : e = 1,...,m_E }, β = 1,...,l
Sources of uncertainty:
- different runs of a tomography experiment can yield different error channels
- a physical model generated by a Hamiltonian H(θ) depends on an uncertain parameter set θ; taking a sample {H(θ_β)}_{β=1}^l results in a set of error systems
Such errors can be mitigated by fault-tolerant methods, which rely on several levels of code concatenation, and/or by robust error correction.

Robust error correction
Recovery step:
- Worst-case:   maximize min_β Tr X_R W_R(E_β, C) subject to X_R ⪰ 0, Σ_ij (X_R)_ij B_Ri† B_Rj = I_{n_C}
- Average-case: maximize Σ_β Tr X_R W_R(E_β, C) subject to the same constraints
Encoding step:
- Worst-case:   maximize min_β Tr X_C W_C(E_β, R) subject to X_C ⪰ 0, Σ_ij (X_C)_ij B_Ci† B_Cj = I_{n_S}
- Average-case: maximize Σ_β Tr X_C W_C(E_β, R) subject to the same constraints
Iterating between R and C again results in bi-convex (SDP) iterations.
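The two criteria can be illustrated outside the SDP setting: for a fixed encoding/recovery pair, each candidate error channel E_β yields a fidelity, and the worst-case and average-case scores are the min and the mean over β. The uncertain single-qubit bit-flip channel below (identity encoding/recovery) is a hypothetical example, not from the talk:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def bitflip(p):
    """Bit-flip error OSR with flip probability p."""
    return [np.sqrt(1 - p) * I2, np.sqrt(p) * X]

def fidelity(kraus, U=I2, n_S=2):
    """f = (1/n_S^2) sum_e |Tr(U† E_e)|^2, with trivial encoding/recovery."""
    return sum(abs(np.trace(U.conj().T @ K))**2 for K in kraus) / n_S**2

# Uncertain error set {E_beta}: e.g. tomography runs that disagree about p.
ps = [0.05, 0.10, 0.20]
fs = [fidelity(bitflip(p)) for p in ps]

f_worst = min(fs)               # worst-case criterion
f_avg = sum(fs) / len(fs)       # average-case criterion
assert f_worst <= f_avg
```

Here f = 1 − p for each channel, so the worst case is set by the largest candidate p; a design optimized for the average can trade that worst member against the others.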

Indirect approach to fidelity maximization
System-ancilla model with unitary encoding: ρ_S ⊗ |0_CA⟩⟨0_CA| →[U_C]→ ρ_C →[E]→ σ_C; σ_C ⊗ |0_RA⟩⟨0_RA| →[U_R]→ ρ_S.
U_C = [C ∗], C ∈ C^{n_C × n_S}, C†C = I_{n_S} (n_C = n_S n_CA)
U_R = [R ∗], R = [R_1; ...; R_{m_R}], R_i ∈ C^{n_S × n_C}, m_R = n_CA n_RA, R†R = I_{n_C}

Indirect measure:
d = Σ_{r,e} ‖R_r E_e C − α_re U_S‖²_fro = ‖R E (I_{m_E} ⊗ C) − α ⊗ U_S‖²_fro, α ∈ C^{m_R × m_E}, ‖α‖_fro = 1

Indirect fidelity maximization
minimize   d = ‖R E (I_{m_E} ⊗ C) − α ⊗ U_S‖²_fro
subject to R†R = I_{n_C}, C†C = I_{n_S}, Tr α†α = 1
with R ∈ C^{n_CA n_RA n_S × n_C}, E ∈ C^{n_C × m_E n_C}, C ∈ C^{n_C × n_S}, α ∈ C^{n_CA n_RA × m_E}.

Optimization over R gives the equivalent problem
minimize   n_S + Tr E(I_{m_E} ⊗ CC†)E† − 2 Tr (E(γ ⊗ CC†)E†)^{1/2}
subject to C†C = I_{n_S}, Tr γ = 1, γ ⪰ 0
- the optimization variables are C and γ = α†α ∈ C^{m_E × m_E}
- the optimizing R follows from the SVD E(α† ⊗ C U_S†) = U S V†, R = V U†, or equivalently
  R = (α ⊗ U_S C†) E† (E(γ ⊗ CC†) E†)^{-1/2}

Given C and γ, the following construction for α achieves the optimizing R:
- If n_CA ≥ m_E: set n_RA = 1 and α = [γ^{1/2}; 0_{(n_CA − m_E) × m_E}]; R is unitary (n_C × n_C). No recovery ancillas are needed, i.e., n_RA = 1 and U_R = R. [a]
- If n_CA < m_E: add recovery ancillas so that m_E = n_CA n_RA and set α = γ^{1/2}; R is tall (m_E n_S × n_C). Since n_RA = 2^{q_RA}, this may require padding R with zeros.
[a] Related results in D. Kribs & R. Spekkens, Phys. Rev. A (2006).
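A quick numerical check of the α-from-γ construction, using a random γ and illustrative dimensions (the n_CA ≥ m_E case, so rows of zeros are padded on):

```python
import numpy as np

rng = np.random.default_rng(2)
m_E, n_CA = 3, 5          # n_CA >= m_E: unitary recovery, n_RA = 1

# Random gamma >= 0 with Tr(gamma) = 1.
A = rng.normal(size=(m_E, m_E)) + 1j * rng.normal(size=(m_E, m_E))
gamma = A.conj().T @ A
gamma /= np.trace(gamma).real

# Hermitian square root via eigendecomposition.
w, V = np.linalg.eigh(gamma)
sqrt_gamma = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

# alpha = [gamma^{1/2}; 0] reproduces gamma = alpha† alpha and keeps Tr(alpha† alpha) = 1.
alpha = np.vstack([sqrt_gamma, np.zeros((n_CA - m_E, m_E))])
assert np.allclose(alpha.conj().T @ alpha, gamma)
assert np.isclose(np.trace(alpha.conj().T @ alpha).real, 1.0)
```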

Iterative indirect fidelity maximization
Given the encoding C, solve for γ from the SDP
maximize Tr (E(γ ⊗ CC†)E†)^{1/2} subject to γ ⪰ 0, Tr γ = 1
and obtain the recovery (R, α) from the previous construction: γ → (R, α).

Given the recovery (R, α), solve for C:
minimize d(R, C, α) = ‖R E (I_{m_E} ⊗ C) − α ⊗ U_S‖²_fro subject to C†C = I_{n_S}
This is equivalent to a constrained least-squares problem solved via SVD:
C = C_ls (C_ls† C_ls)^{-1/2}, C_ls = Σ_{r,e} α_re (R_r E_e)† U_S, attaining min_C d(R, C, α);
i.e., from the SVD C_ls = U [S; 0] V†, set C = U [I_{n_S}; 0] V†.
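The constrained least-squares step reduces to a "nearest isometry" (polar factor) computation, which can be done two equivalent ways with NumPy. C_ls here is random data, not from any slide:

```python
import numpy as np

rng = np.random.default_rng(1)
n_C, n_S = 8, 2
C_ls = rng.normal(size=(n_C, n_S)) + 1j * rng.normal(size=(n_C, n_S))

# Route 1: thin SVD C_ls = U S V†, keep C = U V† (i.e. U [I; 0] V†).
U, s, Vh = np.linalg.svd(C_ls, full_matrices=False)
C = U @ Vh
assert np.allclose(C.conj().T @ C, np.eye(n_S))     # C† C = I_{n_S}

# Route 2: C = C_ls (C_ls† C_ls)^{-1/2}, via eigendecomposition.
w, W = np.linalg.eigh(C_ls.conj().T @ C_ls)
C2 = C_ls @ (W @ np.diag(w**-0.5) @ W.conj().T)
assert np.allclose(C, C2)
```

Both routes replace all singular values of C_ls by 1, which is exactly what the constraint C†C = I_{n_S} demands.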

Iterative indirect fidelity maximization: (R-γ-C)-SDP
Initialize R and C.
Repeat:
1. Compute γ → α (via SDP): maximize Tr (E(γ ⊗ CC†)E†)^{1/2} subject to γ ⪰ 0, Tr γ = 1
2. Compute R (via SVD): R = (α ⊗ U_S C†) E† (E(α†α ⊗ CC†) E†)^{-1/2}
3. Compute C (via SVD): C = C_ls (C_ls† C_ls)^{-1/2}, C_ls = Σ_{r,e} α_re (R_r E_e)† U_S
Until d(R, C, α) stops decreasing.
Each step can only decrease d(R, C, α).

Alternate computation of indirect optimal recovery
Relation between the fidelity f(R, C) and the distance d(R, C, α):
f(R, C) ≥ (1 − d(R, C, α)/(2 n_S))²
with equality iff α solves the constrained least-squares problem min_{α, Tr α†α = 1} d(R, C, α):
α_re = Tr(U_S† R_r E_e C) / (n_S √f(R, C)), f(R, C) = (1/n_S²) Σ_{r,e} |Tr U_S† R_r E_e C|²

(R-α)-LS: Initialize R, C. Repeat:
1. Compute α from the formula above.
2. Compute R = V U† from the SVD E(α† ⊗ C U_S†) = U S V†.
Until d(R, C, α) stops decreasing.
No convex optimization is required: least-squares for α and an SVD to get R.
Conjecture: given C, this returns a global d-optimal recovery equivalent to the previous (R-γ)-SDP.
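A minimal sketch of the (R-α)-LS recovery iteration for a fixed encoding, using the 3-qubit repetition code with single bit-flip errors as an illustrative stand-in for the talk's 4- and 5-qubit examples (adjoint placements in the SVD step are as reconstructed from the surrounding formulas):

```python
import numpy as np
from functools import reduce

def kron(*ops):
    return reduce(np.kron, ops)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

n_S, n_C = 2, 8
U_S = np.eye(n_S, dtype=complex)

# Encoding isometry C: |0> -> |000>, |1> -> |111>.
C = np.zeros((n_C, n_S), dtype=complex)
C[0, 0] = 1.0
C[7, 1] = 1.0

# Error OSR: no error, or a bit flip on one of the three qubits.
p = 0.1
E_ops = [np.sqrt(1 - 3 * p) * kron(I2, I2, I2),
         np.sqrt(p) * kron(X, I2, I2),
         np.sqrt(p) * kron(I2, X, I2),
         np.sqrt(p) * kron(I2, I2, X)]
m_E = len(E_ops)
E_blk = np.hstack(E_ops)                   # E = [E_1 ... E_mE], n_C x (m_E n_C)

def fidelity(R_ops):
    """f = (1/n_S^2) sum_{r,e} |Tr U_S† R_r E_e C|^2, plus the trace table T."""
    T = np.array([[np.trace(U_S.conj().T @ Rr @ Ee @ C) for Ee in E_ops]
                  for Rr in R_ops])
    return (np.abs(T)**2).sum() / n_S**2, T

# Initial alpha: uniform weight on matched (r = e) terms, ||alpha||_F = 1.
m_R = m_E
alpha = np.eye(m_R, m_E, dtype=complex) / np.sqrt(m_E)

for _ in range(3):
    # R-step: R = V U† from the SVD of A = E (alpha† kron C U_S†).
    A = E_blk @ np.kron(alpha.conj().T, C @ U_S.conj().T)
    U, s, Vh = np.linalg.svd(A)
    Rstack = (U @ Vh).conj().T             # stacked recovery, (m_R n_S) x n_C
    R_ops = [Rstack[r * n_S:(r + 1) * n_S, :] for r in range(m_R)]
    f, T = fidelity(R_ops)
    # alpha-step: optimal alpha for the current R.
    alpha = T / np.linalg.norm(T)

assert abs(f - 1.0) < 1e-6   # single bit flips are perfectly corrected
```

Because the four error operators map the codespace to mutually orthogonal subspaces, the very first polar step already recovers the syndrome-decoding recovery, and the iteration holds f at 1.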

Iterative indirect fidelity maximization: (R-α-C)-LS
Initialize R and C.
Repeat:
1. Compute α: α_re = Tr(U_S† R_r E_e C) · (Σ_{r,e} |Tr U_S† R_r E_e C|²)^{-1/2}
2. Compute R (via SVD): R = (α ⊗ U_S C†) E† (E(α†α ⊗ CC†) E†)^{-1/2}
3. Compute C (via SVD): C = C_ls (C_ls† C_ls)^{-1/2}, C_ls = Σ_{r,e} α_re (R_r E_e)† U_S
Until d(R, C, α) stops decreasing.
Each step can only decrease d(R, C, α); each step is a form of constrained least-squares.

Structure of recovery
For m_E > n_CA, α = γ^{1/2} ∈ C^{m_E × m_E}, with SVD
α = V S W†, V, W ∈ C^{m_E × m_E} unitary, S = diag(s_1,...,s_k, 0,...,0), s_k > 0.
Then
R = (α ⊗ U_S C†) E† (E(γ ⊗ CC†) E†)^{-1/2} = (V ⊗ I_{n_S})(S ⊗ U_S C†) F† (F(S² ⊗ CC†) F†)^{-1/2}
with F = E(W ⊗ I_{n_C}) = [F_1 ... F_{m_E}].
- W is the unitary freedom in the error OSR; V is the unitary freedom in the recovery OSR.
- The recovery reduces to k OSR elements: R_i = s_i U_S C† F_i† (Σ_{j=1}^k s_j² F_j CC† F_j†)^{-1/2}, i = 1,...,k.
- This is more efficient than reduction via the SVD of the process matrix.

Computational cost

Operation   Method                SDP variables (n_C = n_S n_CA)
Recovery    Direct, primal SDP    (n_S² − 1) n_C²
            Direct, dual SDP      n_C²
            Indirect, SDP         (n_CA n_RA)²
            Indirect, LS          0
Encoding    Direct, primal SDP    (n_C² − 1) n_S²
            Direct, dual SDP      n_S²
            Indirect, LS          0

Iterations to solve the SDPs depend on the number of variables, constraints, etc.
- Dual → primal requires solving linear equations.
- Process matrix via SDP → recovery or encoding OSR requires an SVD.
- Recovery via (R-γ)-SDP → recovery OSR requires iterations + an SVD.
- Recovery via (R-α)-LS → recovery OSR requires an SVD.
- Encoding via LS → encoding OSR requires an SVD.

Exponential scaling with the number of qubits seems unavoidable. The indirect methods also support both the robust worst-case and average-case criteria for the error system set E ∈ { E_β : β = 1,...,l }.

Recovery & Encoding via Iterative Optimization
[Figure: fidelity vs. iterations (0-100) for a 4-qubit code with a weight-2 single unitary error, p = 0.2. Curves: (R-α-C)-LS, (R-γ)-SDP with C-LS, and initial encodings C_DFS4 and C_0 (f_0 ≈ 0.7); final fidelities f = 0.98450 (5 min), f = 0.98177 (5 min), and f = 0.98551 (2.6 sec).]

Iterated recovery: (α, γ) ∈ C^{11×11} for weight-2 unitary errors
[Figure: |α| and the singular values of γ.]
- α is a complex matrix; |α| looks like an 8 × 11 matrix
- the SVD of γ shows 8 significant terms (explaining the reduction from 11)
- 8 significant singular values in the 32 × 32 recovery process matrix
- the iterated encoding is far from the DFS4 encoding: f(C_DFS4, C) = 0.13

10 runs initialized at random encodings (C ∈ C^{16×2})
[Figure: fidelity vs. iterations for a 4-qubit code with a weight-2 unitary error, p = 0.2, and SVD(γ), for 10 runs of 1000 iterations; final fidelities ≈ 0.9846-0.9858. t_CPU ≈ 6 min per 1000 iterations; (R-γ)-SDP with C-LS: ≈ 6 min per 100 iterations.]
- final fidelities appear to be converging
- initial encodings are very different from each other
- final recoveries & encodings are all very different from each other
- global optimum is not unique?

Weight-2 single-random unitary errors for 4-qubit codes
[Figure: average fidelity f_avg(R, E, C) and the ratio (1 − f_DFS4)/(1 − f) vs. p, over 100 runs of random (single) unitary errors; curves: optimal, iterated average-case, iterated average-case recovery, and DFS-4.]
(DFS-4 is perfect against collective errors.)

Optimal Recovery
Preserve a single qubit (n_S = 2, U_S = I_2) in a 5-qubit codespace (n_C = 2^5). The [5,1,3] encoding is perfect against arbitrary weight-1 errors, and against weight-2 bit-flip errors with ideal recovery.

Weight-2 bit-flip errors, p = 0.2 (E_e ∈ C^{32×32}, m_E = 16; R_r ∈ C^{2×32}):

Method                Fidelity   m_E        CPU (sec)
Direct-SDP            1.0        64 → 16    38.8
Indirect (R-γ)-SDP    1.0        16         12.8
Indirect (R-α)-LS     1.0        16         0.063 (5)

Weight-3 bit-flip errors, p = 0.2 (E_e ∈ C^{32×32}, m_E = 26; R_r ∈ C^{2×32}):

Method                Fidelity   m_E        CPU (sec)
Direct-SDP            0.94845    64 → 16    40.6
Indirect (R-γ)-SDP    0.94845    26 → 16    31.6
Indirect (R-α)-LS     0.94845    26 → 16    0.41 (10)

Recovery: (α, γ) ∈ C^{26×26} for weight-3 bit-flip errors
[Figure: α from (R-α)-LS; γ from (R-α)-LS = γ from (R-γ)-SDP.]
- all elements of α and γ are real (in this example)
- γ is diagonal with 16 non-zero elements, which explains the reduction of the recovery to 16 OSR elements

Weight-2 & 3 bit-flip errors, [5,1,3] encoding, optimal & average-case recoveries
[Figure: f_avg(R, E, C) vs. bit-flip error probability p. Legend: optimal & avg-case, weight-2 errors; optimal, weight-3 errors; avg-case, weight-3 errors; [5,1,3], weight-2 errors; [5,1,3], weight-3 errors; optimal weight-2 recovery applied to weight-3 errors. The optimal recovery is computed for each p; the avg-case recovery is a single design over all p.]

Recovery & Encoding via Iterative Optimization
[Figure: fidelity vs. iterations for a 5-qubit code with a weight-3 unitary error, p = 0.2. (R-γ)-SDP with C-LS reaches f = 0.99656 (84.5 min / 40 iterations); (R-α-C)-LS reaches f = 0.99661 (27 sec / 100 iterations, max 6 (R-α) iterations each). Initial [5,1,3]: f = 0.93601; initial C_0: f = 0.62304.]

Iterated recovery: (α, γ) ∈ C^{26×26} for weight-3 unitary errors
[Figure: γ from (R-α-C)-LS and the SVD of γ.]
- γ is a complex matrix (in this example)
- the SVD of γ shows 16 significant terms (explaining the reduction from 26)
- the iterated encoding is similar to the [5,1,3] encoding: f(C_[5,1,3], C) = 0.95
- the iterated recovery is far from the [5,1,3] recovery: f(R_[5,1,3], R) = 0.05

Weight-2 all-random unitary errors for 5-qubit codes
[Figure: f_avg(R, E, C) and the ratio (1 − f_[5,1,3])/(1 − f) vs. p, over 100 runs of random (all) unitary errors; curves: optimal, iterated average-case, iterated average-case recovery, and [5,1,3].]

Conclusions
- Optimal and/or robust codes potentially mitigate the need for higher-dimensional codes or further levels of concatenation.
- Codes can be tuned for a range of error models.
- For optimal and robust designs, the indirect approach is more efficient than the direct approach: least-squares (SVD) vs. convex optimization (SDP).
- The indirect approach reveals some structure of the recovery.

Issues & extensions
- Incorporate structure, e.g., fault-tolerant and concatenation architectures; the non-unique global optimum may provide this design freedom.
- Incorporate specific knowledge, e.g., specific inputs, specific physical models, specific knowledge about the imperfections.
- How to optimize one module at a time, or produce a robust design over a large set of modules, ...
- Implementation & control: comprehensibility often overrides optimality.
- Black-box error correction: use state/process tomography to identify errors for optimal/robust encoding/recovery.
- What is the character of the encoding/recovery landscape?

... and these imperfections may produce considerable havoc.

"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so." — Mark Twain (Oxford, 1907)