Computation in Quantum Space-time Could Lead to a Super-polynomial Speedup

Journal of Uncertain Systems, Vol.6, No.2, pp.146-151, 2012
Online at: www.jus.org.uk

Computation in Quantum Space-time Could Lead to a Super-polynomial Speedup

Michael Zakharevich (1) and Vladik Kreinovich (2)
(1) Department of Mathematics, Stanford University, Stanford, CA 94305, USA
(2) Department of Computer Science, University of Texas at El Paso, El Paso, Texas 79968, USA

Received 19 August 2011; Revised 18 April 2012

Abstract

In theoretical computer science, researchers usually distinguish between feasible problems (those that can be solved in polynomial time) and problems that require more computation time. A natural question is: can we use new physical processes, processes that have not been used in modern computers, to make computations drastically faster, e.g., to make intractable problems feasible? Such a possibility would occur if a physical process provided a super-polynomial (= faster than polynomial) speedup. In this direction, the most active research is undertaken in quantum computing. It is well known that quantum processes can drastically speed up computations; however, there are no proven super-polynomial quantum speedups of the overall computation time. Parallelization is another potential source of speedup. In Euclidean space, parallelization only leads to a polynomial speedup. We show that in quantum space-time, parallelization could potentially lead to a (provably) super-polynomial speedup of computations.

(c) 2012 World Academic Press, UK. All rights reserved.

Keywords: feasible computations, quantum computing, parallelization, quantum space-time, super-polynomial speedup

1 Which Problems are Feasible: Brief Reminder

In theoretical computer science (see, e.g., [8]), researchers usually distinguish between problems that can be solved in polynomial time (i.e., in time bounded by a polynomial $P(n)$ of the length $n$ of the input) and problems that require more computation time. Problems solvable in polynomial time are usually called feasible, while others are called intractable.

Of course, this definition is not a perfect description of the intuitive notion of feasibility. For example, an algorithm that requires $10^{100} \cdot n$ steps is polynomial-time but not practically feasible. However, this is the best available definition of feasibility.

2 Is Speedup Possible?

The big challenge is that some computational problems are intractable, i.e., require algorithms which are too slow. At first glance, if a problem is intractable in this sense, there is not much we can do about it.

However, there is hope. Traditional notions of computational complexity are based on the assumption that we only use the physical processes which are traditionally used in computers. So, a natural question is: what if we use new physical processes, processes that have not been used in modern computers? Will the use of these physical processes enable us to make computations drastically faster? Will these new physical processes make intractable problems feasible? Such a possibility would occur if a physical process provided a super-polynomial (= faster than polynomial) speedup.

Corresponding author. Email: vladik@utep.edu (V. Kreinovich).

3 Quantum Computing: A Possible Path to a Super-polynomial Speedup

In this direction, the most active research is undertaken in quantum computing. It is well known that quantum processes can speed up computations; see, e.g., [7]. For example, in non-quantum computing, a search in an unsorted list of size $N$ requires, in the worst case, at least $N$ computational steps; in contrast, in quantum computing, Grover's algorithm enables us to search an unsorted list of size $N$ in time $\sqrt{N}$.

Grover's algorithm can be applied to problems for which, currently, no algorithm is known that is drastically faster than exhaustive search. A good example of such a problem is the well-known propositional satisfiability problem SAT:

Given: a propositional formula $F(x_1, \ldots, x_n)$, i.e., an expression formed from variables $x_1, \ldots, x_n$ by applying the propositional connectives $\&$ ("and"), $\vee$ ("or"), and $\neg$ ("not").

Find: values $x_1, \ldots, x_n \in \{\text{false}, \text{true}\} = \{0, 1\}$ that make the given formula $F(x_1, \ldots, x_n)$ true or, if no such values exist, return the corresponding message.

In principle, this problem can be solved by exhaustive search, i.e., by trying all $2^n$ possible combinations of truth values $x_i = 1$ ("true") and $x_i = 0$ ("false"). This exhaustive search requires $2^n$ computational steps. By using Grover's algorithm, we can reduce the computation time from $2^n$ to $\sqrt{2^n} = 2^{n/2}$. This is a drastic speedup, but this speedup is still polynomial: it does not allow us to replace the original longer-than-polynomial time with a polynomial one. Specifically, in this case, if we denote by $T_c(n)$ the non-quantum computation time, then the quantum computation time is $T_q(n) = \sqrt{T_c(n)}$. So, if the quantum computation time $T_q(n)$ is polynomial, so is the non-quantum computation time $T_c(n) = T_q^2(n)$.

Some known quantum algorithms are exponentially faster than the best known non-quantum ones. A well-known example of such a situation is Shor's algorithm for factoring large integers. However, it is still not clear whether a similarly fast non-quantum algorithm is possible. In general, there are no proven super-polynomial quantum speedups of the overall computation time.
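To make the exhaustive-search baseline concrete, here is a minimal sketch of the $2^n$-step search described above; representing the formula as an arbitrary Python callable (rather than as a propositional-logic data structure) is an illustrative assumption.

```python
# A minimal sketch of the exhaustive search that Grover's algorithm
# quadratically accelerates: try all 2^n truth assignments of a
# propositional formula F(x_1, ..., x_n).  The formula is given here
# as an arbitrary Python callable -- an assumption for illustration.

from itertools import product

def brute_force_sat(formula, n):
    """Return a satisfying tuple of booleans, or None; worst case 2^n steps."""
    for assignment in product([False, True], repeat=n):
        if formula(*assignment):
            return assignment
    return None

# Example: F(x1, x2, x3) = (x1 or x2) and (not x1 or x3)
print(brute_force_sat(lambda x1, x2, x3: (x1 or x2) and (not x1 or x3), 3))
```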
4 Parallelization: Another Potential Source of Speedup

Parallelization is another potential source of speedup: if we have several computers working in parallel, then we can perform computations faster.

Parallelization in Euclidean space. In the usual (Euclidean) space, parallelization only leads to a polynomial speedup; see, e.g., [3, 4, 6]. The main reason for this limitation is that, according to modern physics, the speed of all physical processes is bounded by the speed of light $c$. Thus, in time $T$, we can only reach computational units at a distance not exceeding $R = c \cdot T$. In other words, within time $T$, we can only use computational devices located inside the sphere of radius $R = c \cdot T$. In Euclidean space, the volume $V(R)$ of this area (the inside of the sphere of radius $R = c \cdot T$) is equal to

$V(R) = \frac{4}{3} \cdot \pi \cdot R^3 = \frac{4}{3} \cdot \pi \cdot c^3 \cdot T^3$,

and is thus proportional to $T^3$. So, if we denote by $\Delta V$ the volume of the smallest possible computational device, we can conclude that for computations that last time $T$, we can use no more than $V(R)/\Delta V \sim T^3$ computational elements. If we denote by $\Delta t$ the smallest time of a single computational operation, we can conclude that during time $T$, each computational element can perform $T/\Delta t \sim T$ computational steps. Thus, all $\sim T^3$ computational elements can together perform $T^3 \cdot T = T^4$ computational steps. So, if we simulate this parallel computer element-by-element on a sequential computer, we will be able to perform the same computations in time $\sim T^4$.

Thus, if a parallel computer can perform computations in polynomial time $T \le P(n)$, the same computations can be performed on a sequential computer in time $C \cdot T^4 \le C \cdot (P(n))^4$, which is still polynomial. In other words, parallelization in Euclidean space can only lead to a polynomial speedup.

Parallelization in non-Euclidean space. It is well known (see, e.g., [5]) that the actual space is not Euclidean: it is curved. Interestingly, in some non-Euclidean spaces, e.g., in Lobachevsky space, historically the first non-Euclidean space model, the volume $V(R)$ grows exponentially with the radius: $V(R) \sim \exp(R) \gg \text{Polynomial}(R)$. Thus, in principle, we can fit exponentially many processors within a sphere of radius $R = c \cdot T$ and hence get a drastic (exponential) speedup [4, 6]. The problem is that while there are physically reasonable space models that allow such an exponential speedup, these models are very hypothetical. The space models corresponding to mainstream cosmological models do not have this exponential growth and thus allow only a polynomial parallelization speedup.
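As a rough numeric illustration of this contrast, the following sketch compares the ball volume, and hence, up to a constant, the number of unit-size processors reachable in time $T$, in Euclidean versus Lobachevsky space. The normalization $c = 1$ and the unit curvature radius are illustrative assumptions.

```python
# A sketch contrasting how many unit-size processors fit within the
# causal radius R = c*T in flat vs. hyperbolic (Lobachevsky) 3-space.
# Units are normalized (c = 1, unit processor volume, unit curvature
# radius); these constants are illustrative assumptions.

import math

def euclidean_volume(R):
    return (4.0 / 3.0) * math.pi * R**3          # grows like T^3

def lobachevsky_volume(R):
    # volume of a ball of radius R in hyperbolic 3-space of curvature -1
    return math.pi * (math.sinh(2 * R) - 2 * R)  # grows like exp(2R)

for T in [1, 5, 10, 20]:
    R = T  # c = 1
    print(f"T={T:2d}: Euclidean ~{euclidean_volume(R):.3g}, "
          f"hyperbolic ~{lobachevsky_volume(R):.3g}")
```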

5 Quantum Space-time Models: A New Possibility of Speedup

Main idea. So far, we have described two separate approaches to computation speedup: the use of quantum effects, and the use of curved space-time. In physics, the corresponding quantum and space-time effects are related. Specifically, non-quantum space-time is only an approximate description. A more accurate description of space-time requires that we take into account quantum effects, i.e., that we consider quantum space-time models. Thus, it is reasonable to combine the two above approaches to computation speedup and consider the use of quantum space-time models for computations. To do that, let us recall how quantum effects affect space-time; for details, see, e.g., [2, 5].

How quantum effects affect space-time: qualitative description. The main reason why quantum effects affect space-time is Heisenberg's Uncertainty Principle. Its most well-known form states that we cannot simultaneously measure the coordinate $x$ and the momentum $p$ with absolute accuracy: the accuracy $\Delta x$ with which we can determine the coordinate $x$ and the accuracy $\Delta p$ with which we can determine the momentum $p$ are related by the inequality $\Delta p \cdot \Delta x \ge \hbar$, where $\hbar$ is Planck's constant, one of the fundamental constants behind quantum physics. A similar (less well-known) inequality holds for energy $E$ and time $t$: the accuracy $\Delta E$ with which we can determine the energy $E$ and the accuracy $\Delta t$ with which we can determine the time $t$ are related by the inequality $\Delta E \cdot \Delta t \ge \hbar$.

As a result, when we restrict ourselves to a region of spatial size $\varepsilon$, i.e., to times $t \sim \varepsilon/c$, we get the energy uncertainty $\Delta E \sim \varepsilon^{-1}$. When $\varepsilon \to 0$, we get $\Delta E \to \infty$. So, when the size $\varepsilon$ of the region is small, a lot of energy enters this region. According to the famous relativity theory formula $E = m \cdot c^2$, this energy has the mass $m = E/c^2$, and this mass causes a gravitational field. According to General Relativity, a gravitational field means curved space-time, so we can say that the energy $E$ curves the space-time. The smaller the scale $\varepsilon$, the more energy we have, so the more the space-time is curved. Hence, on a small scale, space-time is very curved ("foam"-like) [5].
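As a back-of-the-envelope illustration of this qualitative argument, the following sketch evaluates the energy uncertainty $\Delta E \sim \hbar \cdot c / \varepsilon$ and the corresponding mass $m = \Delta E / c^2$ for a few region sizes; the $\sim$ relations ignore factors of order 1, and the chosen sizes are arbitrary illustrative values.

```python
# Confining attention to a region of size eps forces an energy
# uncertainty dE ~ hbar*c/eps, i.e., an effective mass m ~ dE/c^2.
# Physical constants in SI units; O(1) factors are ignored.

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s

for eps in [1e-10, 1e-20, 1e-35]:   # atomic scale ... near Planck scale
    dE = hbar * c / eps              # energy uncertainty, joules
    m = dE / c**2                    # equivalent mass, from E = m c^2
    print(f"eps = {eps:.0e} m: dE ~ {dE:.2e} J, m ~ {m:.2e} kg")
```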

How quantum effects affect space-time: quantitative description. The above qualitative description can be transformed into a quantitative one [2]. Indeed, as we have mentioned, the total energy of all the fluctuations in a region of size $\varepsilon$ is equal to $E \sim \varepsilon^{-1}$. To find out how many fluctuations of smaller size $k \cdot \varepsilon$ ($k < 1$) can fit into this region, let us estimate the energy $\delta E$ of a single fluctuation of this smaller size. For that, we need to take into account that in modern physics, all fundamental physical equations are determined by minimizing action, crudely speaking, a product of energy and time; see, e.g., [1]. In Einstein's General Relativity, the action is proportional to

$L = \int R \, dV \, dt$,

where $R$ is the scalar curvature [1, 5]. Since action is energy times time, we have $\delta E \cdot \delta t \sim \int R \, dV \, dt \sim R \cdot V \cdot \delta t$; hence, by dividing both sides by $\delta t$, we get $\delta E \sim R \cdot V$. For a fluctuation of linear size $k \cdot \varepsilon \sim \varepsilon$, the volume is $V \sim \varepsilon^3$, and the curvature, which is inversely proportional to the square of the linear size, is $R \sim \varepsilon^{-2}$. Thus, we conclude that $\delta E \sim \varepsilon$. So, the total number $n$ of fluctuations of size $k \cdot \varepsilon$ in a region of size $\varepsilon$ can be determined by dividing the total energy $E$ of such fluctuations by the energy $\delta E$ of a single fluctuation of size $k \cdot \varepsilon$:

$n = \dfrac{E}{\delta E} \sim \varepsilon^{-2}$.

Comment. The above physical derivations are described in detail in [2].

Application to speedup: analysis. What we are interested in is: how many processors $N_q(\varepsilon)$ of size $\varepsilon$ can we fit in a given region? Ideally, we may be able to fit one processor (once it is sufficiently small) into each fluctuation. In this case, the desired number of processors is equal to the number $N_q(\varepsilon)$ of regions of size $\varepsilon$ that can fit into a given region. In every region of spatial size $\varepsilon$, we can fit $n \sim \varepsilon^{-2}$ regions of size $k \cdot \varepsilon$. Thus, the total number $N_q(k \cdot \varepsilon)$ of regions of size $k \cdot \varepsilon$ is equal to the total number $N_q(\varepsilon)$ of regions of size $\varepsilon$ times $n \sim \varepsilon^{-2}$:

$N_q(k \cdot \varepsilon) \sim \varepsilon^{-2} \cdot N_q(\varepsilon)$.

Let us use this functional equation to find $N_q(\varepsilon)$. By substituting $\varepsilon = 1$, $\varepsilon = k$, $\varepsilon = k^2$, etc., into the above formula, and taking into account that $N_q(1) \sim 1$, we conclude that

$N_q(k) \sim k^{-2}$;
$N_q(k^2) \sim k^{-4} \cdot N_q(k)$, hence $N_q(k^2) \sim k^{-(2+4)}$;
$N_q(k^3) \sim k^{-6} \cdot N_q(k^2)$, hence $N_q(k^3) \sim k^{-(2+4+6)}$;
...
$N_q(k^m) \sim k^{-(2+4+\cdots+2m)}$.

Here, $2 + 4 + \cdots + 2m = 2 \cdot (1 + 2 + \cdots + m) = 2 \cdot \dfrac{m \cdot (m+1)}{2} = m \cdot (m+1) \approx m^2$, so $N_q(k^m) \sim k^{-m^2}$.

To get the expression for $N_q(\varepsilon)$, we need to find the value $m$ for which $k^m = \varepsilon$. This value is $m \sim \ln(\varepsilon)$. Substituting this value into the above formula, we conclude that, for some real value $\alpha$,

$N_q(\varepsilon) \sim \exp(\alpha \cdot \ln^2(\varepsilon))$.
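The following sketch iterates the above functional equation numerically (in logarithmic form, to avoid overflow) and checks that $\ln N_q(\varepsilon)$ indeed grows proportionally to $\ln^2(\varepsilon)$; the value $k = 0.5$ is an arbitrary illustrative choice, and constant factors are absorbed into the $\sim$ relations.

```python
# Iterate N_q(k*eps) ~ eps^{-2} * N_q(eps) from N_q(1) ~ 1 down to
# eps = k^m, and check that ln N_q(eps) ~ alpha * ln^2(eps).

import math

k = 0.5                 # fixed scale factor of the cascade, k < 1
log_k = math.log(k)

log_N = 0.0             # ln N_q(1) ~ 0
for m in range(1, 41):
    # going from regions of size k^(m-1) to size k^m multiplies the
    # count by roughly (k^m)^(-2), i.e., adds -2*m*ln(k) to ln N_q
    log_N += -2 * m * log_k
    log_eps = m * log_k
    if m % 10 == 0:
        print(f"m={m:2d}: ln N_q / ln^2(eps) = {log_N / log_eps**2:.4f}")

# the printed ratio approaches a constant (roughly -1/ln k),
# confirming the growth law N_q(eps) ~ exp(alpha * ln^2(eps))
```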

How does this formula translate into a speedup? As we have mentioned, in modern physics, the speeds of all processes are limited by the speed of light. Thus, the time of performing a single computational operation on a processing unit of linear size $\varepsilon$ is $\sim \varepsilon/c$. So, to make computers faster, we need to make computational units smaller. The linear size $\varepsilon$ of a computational unit can thus serve as a description of the technological level.

To find out how much speedup we can achieve by using quantum space-time, for the same technological level $\varepsilon$, we need to compare parallel computations in non-quantum space-time with parallel computations in quantum space-time. In non-quantum space-time, we have units of volume $\varepsilon^3$, so within a region of volume $V_0$ we can fit $N_n(\varepsilon) \sim V_0 \cdot \varepsilon^{-3} \sim \varepsilon^{-3}$ such units. In quantum space-time, the number $N_q(\varepsilon)$ of computational units is equal to $N_q(\varepsilon) \sim \exp(\alpha \cdot \ln^2(\varepsilon))$.

In order to describe $N_q$ in terms of $N_n$, we need to express $\varepsilon$ in terms of $N_n$ and then substitute the resulting expression into the formula for $N_q$. From the formula for $N_n$, we conclude that $\varepsilon \sim N_n^{-1/3}$. Therefore, $\ln(\varepsilon) \sim \ln(N_n)$. Substituting these expressions for $\varepsilon$ and $\ln(\varepsilon)$ into the formula for $N_q$, we conclude that

$N_q \sim \exp(\beta \cdot \ln^2(N_n)) = N_n^{\beta \cdot \ln(N_n)}$.

This expression grows faster than any polynomial. Hence, in quantum space-time, parallelization can potentially lead to a super-polynomial speedup of computations.
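To see that $N_q \sim N_n^{\beta \cdot \ln(N_n)}$ indeed outgrows every polynomial, note that the exponent $\beta \cdot \ln(N_n)$ itself grows without bound. The following sketch, with an arbitrary illustrative $\beta$, computes the threshold beyond which $N_q$ exceeds $N_n^d$ for several fixed degrees $d$.

```python
# For any fixed degree d, the exponent beta*ln(N_n) eventually exceeds
# d, so N_q = N_n^(beta*ln N_n) outgrows every polynomial N_n^d.
# beta = 0.1 is an arbitrary illustrative value.

import math

beta = 0.1
for d in [2, 5, 10]:
    # N_q overtakes N_n^d once beta*ln(N_n) > d, i.e., N_n > exp(d/beta)
    threshold = math.exp(d / beta)
    print(f"N_q exceeds N_n^{d} for N_n > e^{d/beta:.0f} ~ {threshold:.2e}")
```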

Comments. This result was first announced in [9]. From the practical viewpoint, parallelization by itself is not sufficient to guarantee a computational speedup: in order to gain speed by parallelization, one must have a strategy for how a particular input is distributed among the multiple processors. Our suggestion is to follow the strategy outlined for a similar situation in [4].

Acknowledgments

This work was supported in part by two grants from the US National Science Foundation (NSF): Cyber-ShARE Center of Excellence (HRD-0734825) and Computing Alliance of Hispanic-Serving Institutions CAHSI (CNS-0540592), and by Grant 1 T36 GM078000-01 from the US National Institutes of Health (NIH). The authors are thankful to all the participants of the 7th Joint UTEP/NMSU Workshop on Mathematics, Computer Science, and Computational Sciences (Las Cruces, New Mexico, April 3, 2010) for valuable discussions.

References

[1] Feynman, R., R. Leighton, and M. Sands, The Feynman Lectures on Physics, Addison-Wesley, Boston, Massachusetts, 2005.
[2] Finkelstein, A.M., and V. Kreinovich, The singularities in quantum cosmology, Astrophysics and Space Science, vol.137, no.1, pp.73-76, 1987.
[3] Kreinovich, V., and A.M. Finkelstein, Towards applying computational complexity to foundations of physics, Notes of Mathematical Seminars of St. Petersburg Department of Steklov Institute of Mathematics, vol.316, pp.63-110, 2004.
[4] Kreinovich, V., and M. Margenstern, In some curved spaces, one can solve NP-hard problems in polynomial time, Notes of Mathematical Seminars of St. Petersburg Department of Steklov Institute of Mathematics, vol.358, pp.224-250, 2008.
[5] Misner, C.W., K.S. Thorne, and J.A. Wheeler, Gravitation, Freeman, San Francisco, 1973.
[6] Morgenstein, D., and V. Kreinovich, Which algorithms are feasible and which are not depends on the geometry of space-time, Geombinatorics, vol.4, no.3, pp.80-97, 1995.
[7] Nielsen, M.A., and I.L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, Cambridge, U.K., 2000.
[8] Papadimitriou, C.H., Computational Complexity, Addison-Wesley, San Diego, California, 1994.
[9] Zakharevich, M., and V. Kreinovich, Computation in quantum space-time can lead to a super-polynomial speedup, Abstracts of the 7th Joint UTEP/NMSU Workshop on Mathematics, Computer Science, and Computational Sciences, Las Cruces, New Mexico, 2010.