Noise and errors in state reconstruction

Noise and errors in state reconstruction Philipp Schindler + Thomas Monz Institute of Experimental Physics University of Innsbruck, Austria Tobias Moroder + Matthias Kleinman University of Siegen, Germany

A few comments about scalable tomographies Only (N² + 3N + 2)/2 settings! Phys. Rev. Lett. 105, 250403 (2010); Nat. Commun. 1, 149 (2010)

Permutationally invariant tomography Global interactions; no single-qubit readout required. Ideal for GHZ, W, and Dicke states. Local errors might kill your PI symmetry. Ideal (resources/effort) for ion-trap experiments.

Matrix product states Need single-qubit addressing and detection, but cover a larger class of interesting states.

State of the art: MPS vs. PI MPS is currently possible with 8 qubits; PI with 20 qubits. Neither has been extended to processes yet.

Overview Tomographic reconstruction Our quantum processor Direct characterization of quantum dynamics State tomography with statistical noise Detection of systematic errors in tomographic data Noise + errors in our setup

Types of noise and errors (1) Statistical uncertainty (2) Unknown but constant errors (3) Fluctuating errors (4) Dephasing and spontaneous decay

How-To State-Tomography 3^N measurement settings for an N-qubit system
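As a small illustration (the function name is ours), the 3^N settings are just all combinations of local Pauli measurement bases:

```python
# Sketch: enumerating the 3^N local Pauli measurement settings needed for
# full state tomography of N qubits.
from itertools import product

def tomography_settings(n_qubits):
    """Return all local Pauli basis choices, e.g. ('X', 'Z', 'Y') for 3 qubits."""
    return list(product("XYZ", repeat=n_qubits))

settings = tomography_settings(3)
print(len(settings))  # 3^3 = 27 settings for three qubits
```

The exponential growth of this list is exactly why the scalable alternatives (PI tomography, MPS, compressed sensing) mentioned above matter.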

Error bars: How do you do it? Why? You throw a coin 100 times and catch heads 50 times. This is an exact statement. Now try to assign a 'fairness' to the coin: you must have an error estimate. The statement of my estimate ought to be 'the real fairness is within my bounds with a probability of X%'.

Coin-toss: A binomial example What if f = 0? Laplace's rule of succession: p̂ = (k + 1)/(n + 2) for k heads in n throws.
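A minimal sketch of Laplace's rule for the coin-toss example; note how it stays away from 0 and 1 even when the observed frequency f is 0:

```python
# Laplace's rule of succession: posterior mean of the binomial parameter
# under a uniform prior, given k successes in n trials.
def laplace_estimate(k, n):
    return (k + 1) / (n + 2)

print(laplace_estimate(50, 100))  # 0.5 for 50 heads in 100 throws
print(laplace_estimate(0, 100))   # ~0.0098, not 0, despite f = 0
```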

State estimators Linear reconstruction: linear reconstruction may return a quantum state with negative eigenvalues. Additional constraint: ρ ≥ 0
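A single-qubit sketch of why the constraint is needed: noisy expectation values whose Bloch vector is longer than 1 make plain linear inversion return a matrix with a negative eigenvalue (the numbers below are illustrative):

```python
# Single-qubit linear inversion: rho = (I + <X> X + <Y> Y + <Z> Z) / 2.
# If the noisy Bloch vector has length > 1, an eigenvalue goes negative.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def linear_inversion(ex, ey, ez):
    return 0.5 * (I + ex * X + ey * Y + ez * Z)

rho = linear_inversion(0.75, 0.0, 0.75)   # Bloch vector length ~1.06 > 1
eigvals = np.linalg.eigvalsh(rho)
print(eigvals.min())  # negative: not a physical state
```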

State estimators Likelihood (1) Take data. (2) Choose a model for the measurement process; this provides the likelihood function. (3) For the obtained data, what is the most likely state? Data: ⟨X⟩, ⟨Y⟩, ⟨Z⟩. arg max_ρ L(Data, ρ) = arg max_ρ Π_i L(Data_i, ρ), equivalently arg min_ρ [−log L(Data, ρ)] = arg min_ρ [−Σ_i log L(Data_i, ρ)]

State estimators Likelihood The actual optimization depends on the noise model:

           Gauss                              Multinomial
L          Π_i exp(−(f(x_i) − y_i)²/σ_i²)     Π_k p_k^{n_k}
−log L     min Σ_i (f(x_i) − y_i)²/σ_i²       min −Σ_k n_k log tr(ρ P_k)

Additional constraints: tr(ρ) = 1, ρ ≥ 0
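A minimal sketch of the multinomial cost function −Σ_k n_k log tr(ρ P_k), with made-up single-qubit counts and Z-basis projectors:

```python
# Multinomial negative log-likelihood for tomography counts n_k and
# projectors P_k (single-qubit Z measurement; the counts are illustrative).
import numpy as np

P_up = np.array([[1, 0], [0, 0]], dtype=complex)
P_dn = np.array([[0, 0], [0, 1]], dtype=complex)

def neg_log_likelihood(rho, counts, projectors):
    probs = [np.trace(rho @ P).real for P in projectors]
    return -sum(n * np.log(p) for n, p in zip(counts, probs))

# 60 'up' and 40 'down' counts: the matching state scores best.
rho = np.array([[0.6, 0], [0, 0.4]], dtype=complex)
nll = neg_log_likelihood(rho, [60, 40], [P_up, P_dn])
```

Minimizing this function over ρ, subject to tr(ρ) = 1 and ρ ≥ 0, is the convex optimization referred to on the next slide.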

State estimators Likelihood In principle a full convex optimisation can be performed, but it is slow. Alternative methods for state and process reconstruction:
Iterative: iterative method for multinomial measurements, Phys. Rev. A 68, 012305 (2003)
Wizard: make a linear reconstruction, then fix the eigenvalues, Phys. Rev. Lett. 108, 070502 (2012)
Compressed sensing: if close to a pure/unitary object, get away with fewer measurements, Phys. Rev. Lett. 105, 150401 (2010)
Max Lik Max Ent: for incomplete data, take the most likely and most mixed state, Phys. Rev. Lett. 107, 020404 (2011)
Hedged tomography: inferred states should never be rank-deficient, Phys. Rev. Lett. 105, 200504 (2010)
All these papers come with good arguments, some with theoretical proofs. So which one should/would you pick?
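A minimal single-qubit sketch of the iterative method for multinomial measurements (the R·ρ·R iteration of Phys. Rev. A 68, 012305); the counts below are made up:

```python
# Iterative maximum-likelihood reconstruction: rho <- R rho R / tr(R rho R),
# with R = sum_k (n_k / tr(rho P_k)) P_k. Counts are illustrative.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

projectors, counts = [], []
for op, (n_plus, n_minus) in [(X, (90, 10)), (Y, (50, 50)), (Z, (55, 45))]:
    projectors += [(I + op) / 2, (I - op) / 2]
    counts += [n_plus, n_minus]

rho = I / 2                     # start from the maximally mixed state
for _ in range(200):
    R = sum(n / np.trace(rho @ P).real * P for n, P in zip(counts, projectors))
    rho = R @ rho @ R
    rho /= np.trace(rho).real

print(np.trace(rho @ X).real)   # converges towards the observed <X> = 0.8
```

By construction every iterate stays positive semi-definite with unit trace, which is the appeal of this method over unconstrained optimization.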

500 Haar-measure distributed pure states: predicted-state comparison (pairwise mean fidelities between the reconstructions):

           ml     w lsq  hed g  L1 g   hed m  wiz    L1 m
ml (m)     1      0.998  0.997  0.998  0.997  0.986  1
w lsq (g)         1      0.998  1      0.995  0.988  0.998
hed g (g)                1      0.998  0.998  0.988  0.997
L1 g (g)                        1      0.995  0.987  0.997
hed m (m)                              1      0.986  0.996
wiz (g)                                       1      0.986
L1 m (m)                                             1

(m): multinomial model; (g): Gaussian error model
ml: iterative method by Hradil
w lsq: weighted least-squares fit
hed g/m: hedged tomography (Robin Blume-Kohout)
L1 g/m: L1-regularized tomography (Steve Flammia, work in progress)
wiz: efficient method by Smolin/Gambetta, only for Gaussian noise

500 Haar-measure distributed pure states: predicted-state comparison, with mean reconstruction fidelities (50 pure states):

Method   ml         w lsq      hed g      L1 g       hed m      wiz        L1 m
mean F   0.980(14)  0.983(13)  0.981(12)  0.983(12)  0.976(13)  0.964(23)  0.980(14)

L1 gives the same result as the standard reconstructions (Gaussian, multinomial) but can also work with incomplete data sets via compressed sensing. Hedged tomography (β = 0.1) yields almost the same results as standard and L1 and makes sure the state is not rank-deficient. Wizard does not do any weighting and returns states significantly deviating from those returned by all other methods.

State estimator comparison Process tomography of the quantum Fourier transform: send in 4³ = 64 input states and investigate the output states. Poor-man's process tomography: look at the fidelity of the output states compared to the expected output states.

Method   ml       w lsq    hed g    L1 g     hed m    wiz      L1 m
mean F   0.79(4)  0.83(4)  0.83(4)  0.84(4)  0.79(4)  0.76(5)  0.79(4)

Same data, different evaluation technique: >1 σ deviation in the final numbers!

Error bars: the q-tomo way Fisher information: the classical, asymptotic approach. Non-parametric bootstrap: multinomial or Dirichlet. Parametric bootstrap: doable, but correct? Work in progress.

Fisher Information At the maximum, the first derivatives are zero; the second derivatives hold all the information. For enough data (the asymptotic regime) everything behaves Gaussian. But what is enough data? And what do you do at an edge of the parameter space? With the Fisher matrix, you can calculate errors for every subsequently derived parameter.
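For the coin-toss example this reduces to a one-line formula, which also shows the edge problem:

```python
# Asymptotic (Fisher-information) error bar for a binomial parameter:
# I(p) = n / (p (1 - p)), so the standard error of p_hat is
# sqrt(p_hat (1 - p_hat) / n).
import math

def fisher_error(p_hat, n):
    return math.sqrt(p_hat * (1 - p_hat) / n)

print(fisher_error(0.5, 100))  # 0.05 for 50 heads in 100 throws

# At the edge p_hat = 0 the formula returns 0 -- exactly the regime where
# the asymptotic approach breaks down and e.g. Laplace's rule is needed.
print(fisher_error(0.0, 100))  # 0.0
```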

Region estimators Calculate a confidence region directly. This should be reliable and seems to be the correct way, but it also seems hard to calculate. Work with M. Christandl, P. Feist and R. Blume-Kohout.

Bootstrapping Rather than calculating error bars from the raw data, simulate new data sets (from your error estimates), get new density matrices, calculate a set of fidelities (or other derived quantities), and provide the mean + standard deviation of the desired parameter. How do you generate those new data sets?

Bootstrapping: non-parametric I Take frequencies f_i. Assume that f_i corresponds to probabilities p_i. Use p_i to simulate new data assuming a multinomial distribution (often referred to as Monte Carlo simulation). A zero frequency will always remain zero in all resamplings: a significant problem when there are more possible results than measurements. (Steffen, Pan, Zeilinger, Blatt, ...)
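A sketch of the multinomial resampling step, using illustrative counts, which makes the zero-frequency problem explicit:

```python
# Non-parametric (multinomial) bootstrap: resample counts from the observed
# frequencies. An outcome with zero observed counts gets probability zero
# and therefore never appears in any resampled data set.
import numpy as np

rng = np.random.default_rng(0)
counts = np.array([35, 65, 0, 0])          # e.g. the ZZ setting above
probs = counts / counts.sum()

resamples = [rng.multinomial(counts.sum(), probs) for _ in range(1000)]
print(max(sample[2] for sample in resamples))  # always 0 for outcome 3
```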

Bootstrapping: non-parametric II Take frequencies f_i. Use the inverse of the multinomial distribution for the observed f_i, i.e. a Dirichlet distribution, and resample from it. The Dirichlet approach fixes the 'zero-frequency' problem, but may introduce too much noise during resampling: it is conservative.
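The Dirichlet variant can be sketched as follows (same illustrative counts as before; the choice of a flat prior, i.e. adding 1 to each count, is one common convention):

```python
# Dirichlet bootstrap: draw probability vectors from Dirichlet(f_i + 1)
# and resample counts from each draw. Zero-count outcomes keep a small but
# nonzero probability, so they can reappear in the resamples.
import numpy as np

rng = np.random.default_rng(0)
counts = np.array([35, 65, 0, 0])

resamples = []
for _ in range(1000):
    p = rng.dirichlet(counts + 1)          # posterior under a flat prior
    resamples.append(rng.multinomial(counts.sum(), p))

print(any(s[2] > 0 for s in resamples))    # zero-count outcomes reappear
```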

Bootstrapping: parametric Take the raw data, obtain a density matrix from MLE, calculate the probabilities for the tomography results from the MLE density matrix, and resample based on those probabilities. Zero elements in the raw data do not affect this approach, but it might generate too little noise.

Problems with MLE If your data is not physical, MLE does not detect it → model testing.

Data validation Maximum likelihood estimation (MLE), or some form of it, will always give you a physical state.

Validity: Prove me wrong (Matthias Kleinmann, Tobias Moroder) We model the measurements by ideal measurements of the Pauli operators and try to prove that this model is not statistically sound: linear witnesses and a likelihood-ratio test. arXiv:1204.3644

Validity: Witness tests a) The experimentalist claims that the measurement routine is correctly implemented and the density matrix is physical. If the model is correct, the data ought to be (statistically) compatible with a positive semi-definite state. b) The experimentalist takes an overcomplete set of data. Overcomplete means that there are linear dependencies; one part of the data should not contradict another part. arXiv:1204.3644


Validity: Witness tests Positivity I. Take the first half of the data. II. Make a linear reconstruction. III. Find the most negative eigenvalue and the corresponding state. IV. Calculate the expectation value corresponding to this most negative state. V. Employ Hoeffding's tail inequality to check the trust. The witness is defined from the first data set.
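Step V can be sketched as follows; the witness range [a, b] and the numbers are illustrative, and the exact bound used in arXiv:1204.3644 may differ in its constants:

```python
# Hoeffding's tail inequality: for N independent samples of a witness with
# outcomes in [a, b] whose true mean is >= 0 for every physical state, the
# probability of observing a sample mean at or below -t is bounded by
# exp(-2 N t^2 / (b - a)^2).
import math

def hoeffding_p_value(t, n_samples, a=-1.0, b=1.0):
    return math.exp(-2 * n_samples * t ** 2 / (b - a) ** 2)

# A witness mean of -0.1 over 1000 samples would be very unlikely
# for any physical state:
print(hoeffding_p_value(0.1, 1000))  # exp(-5), about 6.7e-3
```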

Validity: Witness tests We learn something about negative eigenvalues: the probability to guess a state with a corresponding negative eigenvalue is exponentially suppressed. The witness is defined from the first data set.

Validity: Witness tests Linear dependencies Reduction from 6^N to 4^N parameters. I. Take the first half of the data. II. Make a linear reconstruction. III. Define the witness. IV. Calculate the expectation value. V. Employ Hoeffding's tail inequality to check the trust.

Interpretation The p-value is the probability under which, at most, the data is consistent with the model. What is the threshold to discard data? It's a matter of taste, and we chose 1%.

How is it quantitatively? If the data is generated by the quantum mechanical model, then the probability that the witness, evaluated on the measured frequencies, exceeds its threshold is exponentially suppressed in the number of samples. The bound depends on the witness and the projector, via the optima of the witness over all outcomes of a single setting.

How to interpret? Fix the error probability before announcing an error (e.g. 1%) and calculate the corresponding threshold. If the observed witness value exceeds this threshold, then the probability that the data originates from a physical state is less than the chosen error probability.

How to interpret? Fixing the error probability to discard a model is a matter of taste, but we can state the probability under which the data is consistent with the model.

Validity: Likelihood ratio Compare the likelihood of the most likely state over any state (not necessarily physical) with the likelihood of the most likely physical state. If the ratio λ ≫ 2^N − 1, the data set is not compatible with the set of physical states.

Validity: Likelihood ratio We use Wilks' theorem to test the model, but for this we cannot use a quantum mechanical model directly. Quantum mechanical model: allow for negative eigenvalues, but all measurement outcomes must be positive (we checked the negative eigenvalues already). Most general model: multinomial statistics that also allows negative measurement outcomes.

Validation via Bootstrap (Histograms: Data/MLE vs. Bootstrap/MLE, occurrences in a.u.) Use the bootstrap data to test the validity of your data for free! Work in progress, together with Madalin + Theo.

Validation via Bootstrap (Histograms: Discard? vs. Accept?, occurrences in a.u.) Use the bootstrap data to test the validity of your data for free! Work in progress, together with Madalin Guta.

For good data, everything is fine And you get a validity test for free!!! Work in progress

We are not the first to do this Things can go wrong in the lab, but we need tools to notice that. New J. Phys. 11, 023028 (2009)

Experimental setup - Noise (Image: ion string, ~70 µm scale)

Taking Data Projective measurement: either dark or bright. Repeat N times (usually 100):

Setting  P1  P2  P3  P4
ZZ       35  65   0   0
ZX       14  72   8   6
...

Put the raw data directly into the likelihood function. Assume perfect projectors and the same state each time.

Data and Noise For ions: usually a multinomial process. Additional noise sources: dark counts, 1 kcounts/s; bright ion, 50 kcounts/s; spontaneous decay, 8 ms detection vs. 1 s lifetime. Use a good model in the likelihood function; never modify raw data. Negative examples: data far in the non-physical space, New J. Phys. 11, 023028 (2009); CHSH beyond all bounds, Phys. Rev. A 80, 030101 (2009). How-to for imperfect photon detection: New J. Phys. 11, 113052 (2009)

Noise and Decoherence Dephasing: T2 <= 2 T1 1) Energy fluctuations of the qubits 2) Frequency and/or phase fluctuations of the phase reference

Local operations

Crosstalk

Noise sources in the experiment

Noise source                       Magnitude / timescale
Dephasing                          30-100 ms
Spontaneous decay                  1 s
Crosstalk                          <3%
Miscalibration of rotation angles  1-3%
Fluctuation of rotation angles     1.5%
Qubit initialization               0.3% per qubit
Measurement error                  0.3% per qubit

Decoherence during tomography Example: 6-qubit state tomography of a GHZ state. Worst-case duration of the tomography operations: 200 µs. Single-qubit coherence time: 30 ms. No problem? Correlated noise: T_N = T/N², so T_6 = 800 µs.
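The scaling argument above can be sketched in two lines (the function name is ours):

```python
# For correlated (collective) dephasing, the effective coherence time of an
# N-qubit GHZ state drops quadratically: T_N = T / N^2.
def ghz_coherence_time(t_single, n_qubits):
    return t_single / n_qubits ** 2

t6 = ghz_coherence_time(30e-3, 6)   # 30 ms single-qubit coherence, 6 qubits
print(t6)                           # ~833 us, close to the 800 us quoted above
```

So the 200 µs tomography duration is no longer negligible compared to the effective coherence time of the 6-qubit state.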

Validity test with varying crosstalk The state needs to be generated without the addressed beam. (Plotted: probability under which the data is consistent with the model.)

Validity test on good data

Conclusions Model testing for quantum measurements. The tools allow only to falsify data: we can easily detect if something went horribly wrong, but we cannot detect what the error source was, and one can prevent detection by taking less data. The approach is not restricted to state tomographies. Is there a better way to do this?

The international Team 2012 FWF SFB Industrie Tirol AQUTE $ IQI GmbH