Noise and errors in state reconstruction. Philipp Schindler and Thomas Monz, Institute of Experimental Physics, University of Innsbruck, Austria; Tobias Moroder and Matthias Kleinmann, University of Siegen, Germany
A few comments about scalable tomographies. Only (N² + 3N + 2)/2 settings! Phys. Rev. Lett. 105, 250403 (2010); Nat. Commun. 1, 149 (2010)
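The settings counts quoted on this slide can be checked with a quick sketch (plain Python; the function names are mine, the formulas are the ones from the slides):

```python
# Number of measurement settings: permutationally invariant (PI) tomography
# needs only (N^2 + 3N + 2)/2 settings, versus 3^N for full state tomography.
def pi_settings(n):
    return (n * n + 3 * n + 2) // 2

def full_settings(n):
    return 3 ** n

for n in (2, 6, 14):
    print(n, pi_settings(n), full_settings(n))
```

For 6 qubits this is 28 settings instead of 729; the gap widens rapidly with N.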
Permutationally invariant tomography. Global interactions; no single-qubit readout required. Ideal for GHZ, W, and Dicke states. Local errors might kill your PI symmetry. Ideal (in resources and effort) for ion-trap experiments.
Matrix product states. Needs single-qubit addressing and detection, but covers more interesting states.
State of the art: MPS vs. PI. MPS is currently possible with 8 qubits, PI with 20 qubits. Neither has been extended to processes yet.
Overview Tomographic reconstruction Our quantum processor Direct characterization of quantum dynamics State tomography with statistical noise Detection of systematic errors in tomographic data Noise + errors in our setup
Types of noise and errors (1) Statistical uncertainty (2) Unknown but constant errors (3) Fluctuating errors (4) Dephasing and spontaneous decay
How-to state tomography: 3^N measurement settings for an N-qubit system
Error bars: how do you do it, and why? You toss a coin 100 times and catch heads 50 times. That is an exact statement. Now please try to assign a 'fairness'. You must have an error estimate, and the statement ought to be: 'the real fairness is within my bounds with a probability of X%'.
Coin toss: a binomial example. What if f = 0? Laplace's rule of succession:
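Laplace's rule replaces the raw frequency k/n by (k + 1)/(n + 2), which never returns exactly 0 or 1. A minimal sketch (the function name is mine):

```python
# Laplace's rule of succession: after k heads in n tosses, estimate the
# head probability as (k + 1) / (n + 2) instead of the raw frequency k/n.
# This keeps the estimate away from 0 and 1 even when f = 0 or f = 1.
def rule_of_succession(k, n):
    return (k + 1) / (n + 2)

print(rule_of_succession(0, 100))   # raw frequency is 0, estimate is 1/102
print(rule_of_succession(50, 100))  # exactly 0.5
```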
State estimators: linear reconstruction. Linear reconstruction may return a quantum state with negative eigenvalues. Additional constraint: ρ ≥ 0.
State estimators: likelihood. (1) Take data. (2) Choose a model for the measurement process; this provides the likelihood function. (3) For the obtained data, what is the most likely state? Data: <X>, <Y>, <Z>. arg max_ρ L(Data, ρ) = arg max_ρ Π_i L(Data_i, ρ), or equivalently arg min_ρ [−log L(Data, ρ)] = arg min_ρ [−Σ_i log L(Data_i, ρ)].
State estimators: likelihood. The actual optimization depends on the noise model. Gaussian: L = Π_i exp(−(f_i − y_i)²/σ_i²), i.e. minimize Σ_i (f_i − y_i)²/σ_i². Multinomial: L = Π_k p_k^(n_k) with p_k = tr(ρ P_k), i.e. minimize −Σ_k n_k log tr(ρ P_k). Additional constraints: tr(ρ) = 1, ρ ≥ 0.
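A minimal sketch of the two cost functions (function names and array layout are my own, not from the talk; `probs` stands for the Born-rule probabilities tr(ρ P_k) predicted by a candidate state):

```python
import numpy as np

def neg_log_lik_gauss(freqs, probs, sigmas):
    # Gaussian model: weighted least squares, min sum_i (f_i - p_i)^2 / sigma_i^2
    return np.sum((freqs - probs) ** 2 / sigmas ** 2)

def neg_log_lik_multinomial(counts, probs):
    # Multinomial model: min -sum_k n_k log p_k
    return -np.sum(counts * np.log(probs))
```

Either function is then minimized over ρ subject to tr(ρ) = 1 and ρ ≥ 0; note that the multinomial cost is smallest when the predicted probabilities match the observed frequencies.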
State estimators: likelihood. In principle a full convex optimisation can be performed, but it is slow. Alternative methods for state and process reconstruction:
- Iterative: iterative method for multinomial measurements, Phys. Rev. A 68, 012305 (2003)
- Wizard: make a linear reconstruction, then fix the eigenvalues, Phys. Rev. Lett. 108, 070502 (2012)
- Compressed sensing: if close to a pure/unitary object, get away with fewer measurements, Phys. Rev. Lett. 105, 150401 (2010)
- Max-Lik-Max-Ent: for incomplete data, take the most likely and most mixed state, Phys. Rev. Lett. 107, 020404 (2011)
- Hedged tomography: inferred states should never be rank-deficient, Phys. Rev. Lett. 105, 200504 (2010)
All these papers come with good arguments, some with theoretical proofs. So which one should / would you pick?
500 Haar-measure distributed pure states: predicted-state comparison. [Table: pairwise mean fidelities between the states returned by the different estimators, all between 0.986 and 1.] Legend: (m) multinomial model, (g) Gaussian error model; ml: iterative method by Hradil; w lsq: weighted least-squares fit; hed g/m: hedged tomography (Robin Blume-Kohout); L1 g/m: L1-regularized tomography (Steve Flammia, work in progress); wiz: efficient method by Smolin/Gambetta, only for Gaussian noise.
500 Haar-measure distributed pure states: predicted-state comparison. Mean fidelity over 50 pure states: ml 0.980(14), w lsq 0.983(13), hed g 0.981(12), L1 g 0.983(12), hed m 0.976(13), wiz 0.964(23), L1 m 0.980(14). L1 gives the same result as the standard reconstructions (Gaussian, multinomial), but can also handle incomplete datasets via compressed sensing. Hedged tomography (β = 0.1) yields almost the same results as the standard and L1 methods and ensures the state is not rank-deficient. Wizard does not do any weighting and returns states that deviate significantly from those returned by all other methods.
State estimator comparison. Process tomography of the quantum Fourier transform: send in 4^3 = 64 input states and investigate the output states. Poor-man's process tomography: look at the fidelity of the output states compared to the expected output states. Mean F: ml 0.79(4), w lsq 0.83(4), hed g 0.83(4), L1 g 0.84(4), hed m 0.79(4), wiz 0.76(5), L1 m 0.79(4). Same data, different evaluation technique: more than 1σ deviation in the final numbers!
Error bars: the q-tomo way. Fisher information: the classical, asymptotic approach. Non-parametric bootstrap: multinomial or Dirichlet. Parametric bootstrap: doable, but is it correct? Work in progress.
Fisher information. At the maximum, the first derivatives are zero; the second derivatives carry all the information. For enough data (the asymptotic regime) everything behaves Gaussian. But what is enough data? And what do you do at an edge of the physical state space? With the Fisher matrix, you can calculate errors for every subsequently derived parameter.
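For the coin-toss example from earlier in the talk, this machinery reduces to one line: the Fisher information of a binomial is I(p) = n/(p(1 − p)), and the asymptotic (Cramér-Rao) error bar is I(p)^(−1/2). A sketch (the function name is mine):

```python
import math

# Asymptotic error bar for a binomial estimate: for n tosses with head
# probability p, the Fisher information is I(p) = n / (p (1 - p)),
# and the standard error of the estimate is 1 / sqrt(I(p)).
def binomial_std_error(p, n):
    fisher = n / (p * (1 - p))
    return 1.0 / math.sqrt(fisher)

print(binomial_std_error(0.5, 100))  # 0.05 for 50 heads in 100 tosses
```

Note that the formula breaks down at the edges p = 0 and p = 1, which is exactly the boundary problem mentioned on this slide.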
Region estimators: calculate a confidence region directly. This should be reliable and seems to be the correct way, but it appears hard to calculate. Work with M. Christandl, P. Feist and R. Blume-Kohout.
Bootstrapping. Rather than calculate error bars from the raw data: simulate new data sets (from your error estimates), get new density matrices, calculate a set of derived quantities (fidelities, ...), and provide the mean + standard deviation of the desired parameter. How do you generate those new data sets?
Bootstrapping: non-parametric I. Take the frequencies f_i, assume that f_i corresponds to the probabilities p_i, and use p_i to simulate new data assuming a multinomial distribution (often referred to as a Monte Carlo simulation). A zero frequency will always remain zero in all resamplings: a significant problem when there are more possible results than measurements. Used by Steffen, Pan, Zeilinger, Blatt, ...
Bootstrapping: non-parametric II. Take the frequencies f_i and use the inverse of the multinomial distribution for the observed f_i, i.e. a Dirichlet distribution, and resample from it. The Dirichlet fixes the zero-frequency problem. Too much noise during resampling? Conservative.
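The two non-parametric schemes side by side, as a sketch (NumPy; the counts reuse the example table from this talk, and the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
counts = np.array([35, 65, 0, 0])  # observed counts; two outcomes never seen

# Non-parametric bootstrap I: resample from a multinomial with p_i = f_i.
# Outcomes with zero observed frequency stay at zero in every resample.
p = counts / counts.sum()
multinomial_resample = rng.multinomial(counts.sum(), p)

# Non-parametric bootstrap II: draw the probabilities from a Dirichlet
# distribution (here with parameters counts + 1), which assigns nonzero
# weight even to outcomes that were never observed.
dirichlet_p = rng.dirichlet(counts + 1)
dirichlet_resample = rng.multinomial(counts.sum(), dirichlet_p)

print(multinomial_resample, dirichlet_resample)
```

In the first scheme the last two entries are zero in every resample; in the second they can fluctuate, which is why the Dirichlet variant is the more conservative choice.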
Bootstrapping: parametric. Take the raw data, obtain the density matrix from MLE, calculate the probabilities for the tomography results from the MLE density matrix, and resample based on those probabilities. Zero elements in the raw data do not affect this approach, but it might generate too little noise.
Problems with MLE: if your data is not physical, MLE does not detect it. This calls for model testing.
Data validation. Most reconstructions use maximum likelihood estimation (MLE), or some form of it, and MLE will always give you a physical state.
Validity: prove me wrong (Matthias Kleinmann, Tobias Moroder). We model the measurements as ideal measurements of the Pauli operators and try to prove that this model is not statistically sound, using linear witnesses and a likelihood ratio test. arXiv:1204.3644
Validity: witness tests. a) The experimentalist claims that the measurement routine is correctly implemented and that the density matrix is physical; if the model is correct, the data ought to be statistically compatible with a positive semi-definite state. b) The experimentalist takes an overcomplete set of data; overcomplete means there are linear dependencies, so one part of the data must not contradict another part. arXiv:1204.3644
Validity: witness tests, positivity. I. Take the first half of the data. II. Make a linear reconstruction. III. Find the most negative eigenvalue and the corresponding state. IV. Calculate the expectation value corresponding to the most negative state. V. Employ Hoeffding's tail inequality to check the trust. The witness is defined from the first dataset.
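Step V can be sketched as follows; the function name, the one-sided form of the bound, and the assumed outcome range [−1, 1] are illustrative choices, not taken from the paper:

```python
import math

# Hoeffding tail bound for the witness test (sketch): if the per-sample
# witness outcomes lie in [lo, hi] and every physical state predicts a
# witness expectation >= 0, then observing a sample mean of -t over m
# samples has probability at most exp(-2 m t^2 / (hi - lo)^2) under the
# physical-state hypothesis.
def hoeffding_p_value(mean_observed, m, lo=-1.0, hi=1.0):
    t = max(0.0, -mean_observed)  # size of the violation below zero
    return math.exp(-2.0 * m * t * t / (hi - lo) ** 2)

print(hoeffding_p_value(-0.05, 10000))  # small violation, many samples
```

Even a modest violation becomes overwhelmingly significant once enough samples are taken, which is the sense in which guessing a state with a negative eigenvalue is exponentially suppressed.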
Validity: witness tests. We learn something about negative eigenvalues: the probability to guess a state with a corresponding negative eigenvalue is exponentially suppressed. The witness is defined from the first dataset.
Validity: witness tests, linear dependencies. Reduction from 6^N to 4^N parameters. I. Take the first half of the data. II. Make a linear reconstruction. III. Define the witness. IV. Calculate the expectation value. V. Employ Hoeffding's tail inequality to check the trust.
Interpretation. The p-value is the probability, at most, under which the data is consistent with the model. What is the threshold to discard data? It's a matter of taste, and we chose 1%.
How does it work quantitatively? If the data is generated by the quantum mechanical model, then the tail probability for the witness, evaluated on the measured frequencies, is exponentially suppressed in the number of samples; the constants depend on the witness and the projectors, via the optima of the witness over all outcomes of a single setting.
How to interpret? Fix the error probability before announcing an error (e.g. 1%) and calculate the corresponding threshold: if one observes a witness value beyond that threshold, then the probability that the data originates from a physical state is less than the chosen error probability.
How to interpret? Fixing the error probability to discard a model is a matter of taste, but we can state the probability under which the data is consistent with the model.
Validity: likelihood ratio. Compare the likelihood of the most likely state (over any state, physical or not) with the likelihood of the most likely physical state. If λ ≫ 2^N − 1, the data set is not compatible with physical states.
Validity: likelihood ratio. We use Wilks' theorem to test the model, but for this we cannot use the quantum mechanical model directly. Quantum mechanical model: allow for negative eigenvalues, but all measurement outcomes must be positive (we checked the negative eigenvalues already). Most general model: multinomial statistics that also allows negative measurement outcomes.
Validation via bootstrap. [Histograms: 'Data / MLE' vs. 'Bootstrap / MLE', occurrences in arbitrary units.] Use the bootstrap data to test the validity of your data for free! Work in progress, together with Madalin Guta and Theo.
Validation via bootstrap. [Histograms: 'Discard?' vs. 'Accept?', occurrences in arbitrary units.] Use the bootstrap data to test the validity of your data for free! Work in progress, together with Madalin Guta.
For good data, everything is fine, and you get a validity test for free! Work in progress.
We are not the first to do this. Things can go wrong in the lab, and we need tools to notice that. New J. Phys. 11, 023028 (2009)
Experimental setup and noise. [Image, scale ~70 µm]
Taking data. Projective measurement: each ion is either dark or bright; repeat N times (usually 100). Example count table (setting S, outcomes P1-P4):
S P1 P2 P3 P4
ZZ 35 65 0 0
ZX 14 72 8 6
...
Put the raw data directly into the likelihood function; this assumes perfect projectors and the same state in each repetition.
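Putting the raw count table straight into a multinomial log-likelihood might look like this (a sketch; `probs_per_setting` stands for the Born-rule probabilities tr(ρ P_{s,k}) of a candidate state, a name chosen here for illustration):

```python
import numpy as np

# Raw counts per setting, taken from the example table on this slide.
raw_counts = {"ZZ": [35, 65, 0, 0], "ZX": [14, 72, 8, 6]}

def log_likelihood(probs_per_setting, raw_counts):
    # Total log-likelihood: sum over settings s and outcomes k of
    # n_{s,k} * log tr(rho P_{s,k}); outcomes with zero counts are skipped.
    total = 0.0
    for setting, counts in raw_counts.items():
        p = np.asarray(probs_per_setting[setting], dtype=float)
        n = np.asarray(counts, dtype=float)
        total += np.sum(n[n > 0] * np.log(p[n > 0]))
    return total
```

The maximization of this quantity over physical ρ is exactly the multinomial MLE described earlier; the observed frequencies always yield a higher likelihood than any other probability assignment.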
Data and noise. For ions, the measurement is usually a multinomial process. Additional noise sources: dark counts ~1 kcounts/s; bright ion ~50 kcounts/s; spontaneous decay (8 ms detection vs. 1 s lifetime). Use a good model likelihood function and never modify the raw data. Negative examples: data far in the non-physical space, New J. Phys. 11, 023028 (2009); CHSH beyond all bounds, Phys. Rev. A 80, 030101 (2009). How-to for imperfect photon detection: New J. Phys. 11, 113052 (2009).
Noise and decoherence. Dephasing: T2 ≤ 2 T1. Sources: 1) energy fluctuations of the qubits; 2) frequency and/or phase fluctuations of the phase reference.
Local operations
Crosstalk
Noise sources in the experiment:
Noise source | Magnitude / timescale
Dephasing | 30-100 ms
Spontaneous decay | 1 s
Crosstalk | < 3%
Miscalibration of rotation angles | 1-3%
Fluctuation of rotation angles | 1.5%
Qubit initialization | 0.3% per qubit
Measurement error | 0.3% per qubit
Decoherence during tomography. Example: 6-qubit state tomography of a GHZ state. Worst-case duration of the tomography operations: 200 µs; single-qubit coherence time: 30 ms. No problem? With correlated noise the coherence time scales as T_N = T/N², so T_6 ≈ 800 µs.
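The timescale arithmetic in a one-liner (plain Python; the function name is mine, the numbers are from this slide):

```python
# Correlated dephasing shortens the N-qubit coherence time as T_N = T / N^2
# (here: T = 30 ms single-qubit coherence time, N = 6 qubits).
def correlated_coherence_time(t_single, n):
    return t_single / n ** 2

t6 = correlated_coherence_time(30e-3, 6)
print(t6)  # about 8.3e-4 s, i.e. ~800 us, only ~4x the 200 us tomography time
```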
Validity test with varying crosstalk. The state needs to be generated without the addressing beam. Plotted: the probability under which the data is consistent with the model.
Validity test on good data
Conclusions. Model testing for quantum measurements: the tools can only falsify data. We can easily detect if something went horribly wrong, but we cannot detect what the error source was, and one can prevent detection by taking less data. The approach is not restricted to state tomographies. Is there a better way to do this?
The international team, 2012. Funding: FWF, SFB, Industrie Tirol, AQUTE, IQI GmbH.