Solving Polynomial Equations in Smoothed Polynomial Time


Peter Bürgisser (Technische Universität Berlin), joint work with Felipe Cucker (CityU Hong Kong).
Workshop on Polynomial Equation Solving, Simons Institute for the Theory of Computing, October 16, 2014.

Overview: Context and Motivation

- Solving polynomial equations is a fundamental mathematical problem, studied for several hundred years.
- The problem is NP-complete over the field $\mathbb{F}_2$ (equivalent to SAT).
- Traditionally, the problem is studied over $\mathbb{C}$. There, it is NP-complete in the model of Blum-Shub-Smale.
- Methods of symbolic computation (Gröbner bases etc.) solve polynomial equations, but the running time is exponential, and these algorithms are also slow in practice.
- Numerical methods provide less information on the solutions, but perform much better in practice.
- Is there a theoretical explanation?

Overview: Smale's 17th Problem

- The 17th of Steve Smale's problems for the 21st century asks: "Can a zero of n complex polynomial equations in n unknowns be found approximately, on the average, in polynomial time with a uniform algorithm?"
- The problem has its origins in the series of papers "Complexity of Bézout's Theorem I-V" by Shub and Smale (1993-1996).
- Beltrán and Pardo (2008) answered Smale's 17th problem affirmatively when allowing randomized algorithms.

Overview: Our Contributions

Near solution to Smale's 17th problem. We design a deterministic numerical algorithm for Smale's 17th problem with expected running time $N^{O(\log\log N)}$, where $N$ denotes the input size. For systems of bounded degree the expected running time is polynomial, e.g. $O(N^2)$ for quadratic polynomials.

Smoothed polynomial time. Smoothed analysis is a blend of average-case and worst-case analysis; it was proposed by Spielman and Teng (2001) and successfully applied to the simplex algorithm. We perform a smoothed analysis of the randomized algorithm of Beltrán and Pardo, proving that its smoothed expected running time is polynomial.

Newton iteration, condition, and homotopy continuation: Setting

- For a degree vector $d = (d_1,\dots,d_n)$ define the input space
  $\mathcal{H}_d := \{ f = (f_1,\dots,f_n) \mid f_i \in \mathbb{C}[X_0,\dots,X_n] \text{ homogeneous of degree } d_i \}$.
  The input size is $N := \dim_{\mathbb{C}} \mathcal{H}_d$.
- The output space is complex projective space $\mathbb{P}^n$: we look for a zero $\zeta \in \mathbb{P}^n$ with $f(\zeta) = 0$.
- $\mathbb{P}^n$ carries the angular metric $d$.
- Fix the unitarily invariant Hermitian inner product $\langle\,\cdot\,,\,\cdot\,\rangle$ on $\mathcal{H}_d$ (Weyl). This defines a norm $\|f\| := \langle f, f\rangle^{1/2}$ and an (angular) distance $d$ on the projective space $\mathbb{P}(\mathcal{H}_d)$, respectively on the sphere $S(\mathcal{H}_d)$.
- The solution variety (a smooth manifold) is $V := \{ (f,\zeta) \mid f(\zeta) = 0 \} \subseteq \mathcal{H}_d \times \mathbb{P}^n$.
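As a small illustration of the Weyl inner product above, the following Python sketch computes the squared Weyl norm of a homogeneous polynomial. The dictionary representation (exponent tuples mapping to coefficients) is an assumption made for this example, not notation from the talk.

```python
from math import factorial

# Sketch of the Weyl norm: for a homogeneous polynomial of degree d with
# monomial coefficients c_alpha,
#   ||f||^2 = sum_alpha |c_alpha|^2 / binom(d; alpha),
# where binom(d; alpha) = d! / (alpha_0! ... alpha_n!) is the multinomial
# coefficient.
def weyl_norm_sq(coeffs, d):
    total = 0.0
    for alpha, c in coeffs.items():
        multinom = factorial(d)
        for a in alpha:
            multinom //= factorial(a)
        total += abs(c) ** 2 / multinom
    return total
```

For instance, the monomial $X_0 X_1$ (degree 2) has squared Weyl norm $1/2$, since its multinomial coefficient is 2; this down-weighting of mixed monomials is what makes the inner product unitarily invariant.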

Newton iteration, condition, and homotopy continuation: Condition number

- Let $f(\zeta) = 0$. How much does $\zeta$ change when we perturb $f$ a little?
- This can be quantified by the condition number of $(f,\zeta)$:
  $\mu(f,\zeta) := \|f\|\,\|M^\dagger\|$,
  where, assuming $\|\zeta\| = 1$ and writing $M^\dagger$ for the pseudo-inverse,
  $M := \mathrm{diag}(\sqrt{d_1},\dots,\sqrt{d_n})^{-1}\, Df(\zeta) \in \mathbb{C}^{n \times (n+1)}$.
- $\mu$ is well defined on $\mathbb{P}(\mathcal{H}_d) \times \mathbb{P}^n$: $\mu(tf,\zeta) = \mu(f,\zeta)$ for nonzero $t \in \mathbb{C}$.
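A minimal NumPy sketch of this formula, assuming $\|\zeta\| = 1$ and that the Jacobian $Df(\zeta)$ is supplied as an $n \times (n+1)$ matrix; the concrete Jacobian values below are made up purely for illustration.

```python
import numpy as np

# mu(f, zeta) = ||f|| * ||M^dagger||, where
# M = diag(sqrt(d_1), ..., sqrt(d_n))^{-1} * Df(zeta).
# The spectral norm of the pseudo-inverse equals
# 1 / (smallest singular value of M).
def mu(f_norm, Df_zeta, degrees):
    d = np.asarray(degrees, dtype=float)
    M = Df_zeta / np.sqrt(d)[:, None]
    s = np.linalg.svd(M, compute_uv=False)
    return f_norm / s[-1]

# Made-up example: n = 2, a Jacobian at some zeta with ||zeta|| = 1.
Df = np.array([[-2.0,  2.0, 0.0],
               [-1.0, -1.0, 2.0]]) / np.sqrt(3)
value = mu(1.0, Df, [2, 2])
```

The scale invariance $\mu(tf,\zeta) = \mu(f,\zeta)$ is visible directly in the code: scaling `f_norm` and `Df_zeta` by the same $t$ leaves the quotient unchanged.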

Newton iteration, condition, and homotopy continuation: Newton iteration and approximate zeros

- Projective Newton iteration $x_{k+1} = N_f(x_k)$ with Newton operator $N_f : \mathbb{P}^n \to \mathbb{P}^n$ and starting point $x_0$.
- Gamma Theorem (Smale): Put $D := \max_i d_i$. If
  $d(x_0, \zeta) \le \dfrac{0.3}{D^{3/2}\,\mu(f,\zeta)}$,
  then the iteration $x_{k+1} = N_f(x_k)$ converges immediately with quadratic speed:
  $d(x_k, \zeta) \le \left(\tfrac{1}{2}\right)^{2^k - 1} d(x_0, \zeta)$.
  In this case we call $x_0$ an approximate zero of $f$.
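The quadratic speed can be seen in a toy computation. The sketch below is an affine one-variable illustration (the talk's setting is projective, this toy is not): starting close enough to the zero, the number of correct digits roughly doubles per Newton step, matching the bound $d(x_k,\zeta) \le (1/2)^{2^k - 1}\, d(x_0,\zeta)$.

```python
import numpy as np

# Affine illustration of quadratic convergence for f(x) = x^2 - 2
# with zero zeta = sqrt(2).
f = lambda x: x**2 - 2.0
df = lambda x: 2.0 * x
zeta = np.sqrt(2.0)

x = 1.5  # close enough to zeta to behave like an "approximate zero"
errors = []
for _ in range(4):
    x = x - f(x) / df(x)        # Newton step
    errors.append(abs(x - zeta))
```

Printing `errors` shows the error dropping from about $10^{-3}$ to $10^{-6}$ to $10^{-12}$: each step roughly squares the previous error.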

Newton iteration, condition, and homotopy continuation: From local to global search, homotopy continuation

- Given a start system $(g,\zeta) \in V$ in the solution manifold
  $V = \{ (f,\zeta) \mid f(\zeta) = 0 \} \subseteq \mathcal{H}_d \times \mathbb{P}^n$.
- Connect the input $f \in \mathcal{H}_d$ to $g$ by the line segment $[g,f] = \{ q_t \mid t \in [0,1] \}$.
- If none of the $q_t$ has a multiple zero, there exists a unique lifting of $t \mapsto q_t$ to a solution path
  $\Gamma : [0,1] \to V, \quad t \mapsto (q_t, \zeta_t)$
  in $V$ such that $(q_0, \zeta_0) = (g, \zeta)$.

Newton iteration, condition, and homotopy continuation: Adaptive linear homotopy

- The Adaptive Linear Homotopy ALH follows the solution path numerically. Put $t_0 = 0$, $q_0 := g$, $z_0 := \zeta$. Compute $t_{i+1}$, $q_{i+1}$, $z_{i+1}$ adaptively from $t_i$, $q_i := q_{t_i}$, $z_i$ by Newton's method:
  $d(q_{i+1}, q_i) = \dfrac{7.5 \cdot 10^{-3}}{D^{3/2}\,\mu(q_i, z_i)^2}, \qquad z_{i+1} = N_{q_{i+1}}(z_i)$.
- Let $K(f,g,\zeta)$ denote the number $k$ of Newton continuation steps needed to follow the homotopy.
- Shub-Smale & Shub (2007): $z_i$ is an approximate zero of $q_{t_i}$, and
  $K(f,g,\zeta) \le 217\, D^{3/2} \int_0^1 \mu(\Gamma(t))^2\, \|\dot q_t\|\, dt$.
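A toy one-dimensional analogue of ALH can make the adaptive scheme concrete: follow the zero of $q_t = (1-t)g + tf$ from a known zero of $g$, take smaller steps in $t$ where a crude conditioning estimate is worse, and apply one Newton correction per step. The step constant 0.05 and the stand-in $1/|q_t'(z)|$ for $\mu$ are illustrative choices, not the quantities from the slides.

```python
import numpy as np

g = lambda x: x**2 - 1.0            # start system, known zero z = 1
f = lambda x: x**2 - 3.0            # target system
q = lambda t, x: (1.0 - t) * g(x) + t * f(x)
dq = lambda t, x: 2.0 * x           # derivative of q_t in x

t, z, steps = 0.0, 1.0, 0
while 1.0 - t > 1e-12:
    cond = 1.0 / abs(dq(t, z))      # crude stand-in for mu(q_t, z)
    dt = min(0.05 / cond, 1.0 - t)  # adaptive step: smaller when worse conditioned
    t = t + dt
    z = z - q(t, z) / dq(t, z)      # one Newton correction at the new t
    steps += 1
z = z - q(1.0, z) / dq(1.0, z)      # final polish
```

After the loop, `z` approximates the target zero $\sqrt{3}$ of $f$, and `steps` plays the role of $K(f,g,\zeta)$.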

Newton iteration, condition, and homotopy continuation: Randomization

- How to choose the start system?
- Almost all $(g,\zeta) \in V$ are good: $\mu(g,\zeta) = N^{O(1)}$ (Shub-Smale).
- It is unknown how to efficiently construct such a $(g,\zeta)$: this is the problem of finding hay in a haystack.
- We may choose $g \in S(\mathcal{H}_d)$ uniformly at random.
- Alternatively, we may choose $g$ according to the standard Gaussian distribution on $\mathcal{H}_d$: it has the density
  $\varphi(g) = (2\pi)^{-N} \exp\!\left(-\tfrac{1}{2}\|g\|^2\right)$.

Newton iteration, condition, and homotopy continuation: A Las Vegas algorithm

- Standard distribution on the solution variety $V$:
  - choose $g \in \mathcal{H}_d$ from the standard Gaussian,
  - choose one of the $d_1 \cdots d_n$ many zeros of $g$ uniformly at random.
- Efficient sampling of $(g,\zeta) \in V$ is possible (Beltrán & Pardo 2008).
- Las Vegas algorithm LV: on input $f$, draw $(g,\zeta) \in V$ at random and run ALH on $(f,g,\zeta)$.
- LV has expected running time $K(f) := \mathbb{E}_{g,\zeta}\, K(f,g,\zeta)$.

Average of LV (Beltrán and Pardo): for standard Gaussian $f \in \mathcal{H}_d$,
$\mathbb{E}_f\, K(f) = O\!\left(D^{3/2} N n\right)$.

Results: Smoothed expected polynomial time

Smoothed analysis: Fix $\bar f \in \mathcal{H}_d$ and $\sigma > 0$. The isotropic Gaussian on $\mathcal{H}_d$ with mean $\bar f$ and variance $\sigma^2$ has the density
$\rho(f) = (2\pi\sigma^2)^{-N} \exp\!\left(-\tfrac{1}{2\sigma^2}\|f - \bar f\|^2\right)$.
We write $f \sim N(\bar f, \sigma^2 I)$.

Technical issue: we truncate this Gaussian by requiring $\|f - \bar f\| \le \sigma\sqrt{2N}$, obtaining the distribution $N_T(\bar f, \sigma^2 I)$.

Smoothed analysis of LV:
$\sup_{\|\bar f\| = 1} \mathbb{E}_{f \sim N_T(\bar f, \sigma^2 I)}\, K(f) = O\!\left(\dfrac{D^{3/2} N n}{\sigma}\right)$.
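Sampling from the truncated Gaussian $N_T(\bar f, \sigma^2 I)$ can be done by simple rejection, since the truncation ball captures a constant fraction of the mass. The sketch below treats $\bar f$ as a real vector of length $2N$ (real and imaginary parts of the $N$ complex coordinates of $\mathcal{H}_d$), so the radius $\sigma\sqrt{2N}$ becomes `sigma * sqrt(f_bar.size)`; this flattening is an assumption for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rejection sampler: draw from N(f_bar, sigma^2 I) and reject samples
# falling outside the truncation ball of radius sigma * sqrt(2N).
def sample_truncated(f_bar, sigma, max_tries=10000):
    radius = sigma * np.sqrt(f_bar.size)
    for _ in range(max_tries):
        f = f_bar + sigma * rng.standard_normal(f_bar.size)
        if np.linalg.norm(f - f_bar) <= radius:
            return f
    raise RuntimeError("rejection sampling did not terminate")

f_bar = np.zeros(20)              # toy dimension, 2N = 20
f = sample_truncated(f_bar, sigma=0.5)
```

The acceptance probability is roughly $P(\chi^2_{2N} \le 2N) \approx 1/2$, so the expected number of tries per sample is constant.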

Results: Near solution to Smale's 17th problem

The deterministic algorithm below computes an approximate zero of $f \in \mathcal{H}_d$ with an expected number of arithmetic operations $N^{O(\log\log N)}$, for standard Gaussian input $f \in \mathcal{H}_d$.

- (I) $D \le n$: Run ALH with the start system $(g,\zeta)$, where
  $g_i = X_i^{d_i} - X_0^{d_i}, \quad \zeta = (1,\dots,1), \quad \mu(g,\zeta)^2 \le 2(n+1)^D$.
- (II) $D \ge n$: Use a known method from computer algebra (Renegar), taking roughly $D^n$ steps.

If $D \le n^{1-\varepsilon}$ for fixed $\varepsilon > 0$, then $n^D$, and hence the running time, is polynomially bounded in $N$. Similarly for $D \ge n^{1+\varepsilon}$.
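The start system of case (I) is explicit enough to write down directly. The sketch below evaluates $g_i = X_i^{d_i} - X_0^{d_i}$ and checks that $\zeta = (1,\dots,1)$ is a zero; note that replacing coordinates with $d_i$-th roots of unity also yields zeros, which is why $g$ has the full Bézout number $d_1 \cdots d_n$ of them.

```python
import numpy as np

# Start system from case (I): g_i = X_i^{d_i} - X_0^{d_i},
# with the known zero zeta = (1, ..., 1).
def g_eval(degrees, x):
    # x = (x_0, x_1, ..., x_n); returns (g_1(x), ..., g_n(x))
    return np.array([x[i] ** d - x[0] ** d
                     for i, d in enumerate(degrees, start=1)])

degrees = [2, 3, 5]
zeta = np.ones(len(degrees) + 1)
```

Here `g_eval(degrees, zeta)` returns the zero vector, confirming $(g,\zeta) \in V$.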

Results: On the proof

- Reduce to a smoothed analysis of the mean square condition number, defined for $q \in \mathcal{H}_d$ as
  $\mu_2(q) := \left( \dfrac{1}{d_1 \cdots d_n} \sum_{\zeta : q(\zeta) = 0} \mu(q,\zeta)^2 \right)^{1/2}$.
- Main auxiliary result:
  $\sup_{\|\bar q\| = 1} \mathbb{E}_{q \sim N(\bar q, \sigma^2 I)}\, \dfrac{\mu_2(q)^2}{\|q\|^2} = O\!\left(\dfrac{n}{\sigma^2}\right)$.
- The proof is involved and proceeds by the analysis of certain probability distributions on fiber bundles (coarea formula etc.).
- In this way, the proof essentially reduces to a smoothed analysis of a matrix condition number.
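The averaging in $\mu_2$ can be illustrated in a toy univariate setting, where $1/|q'(\zeta)|$ serves as a crude stand-in for $\mu(q,\zeta)$ (the genuine definition uses the normalized pseudo-inverse from the condition-number slide); this substitution is an assumption made only for the example.

```python
import numpy as np

# Toy univariate analogue of the mean square condition number:
# average mu(q, zeta)^2 over the zeros of q, then take the square root,
# using 1/|q'(zeta)| as a stand-in for mu(q, zeta).
def mu2(coeffs):
    p = np.poly1d(coeffs)
    dp = p.deriv()
    vals = [1.0 / abs(dp(z)) ** 2 for z in p.roots]
    return np.sqrt(np.mean(vals))

value = mu2([1.0, 0.0, -1.0])   # q(x) = x^2 - 1, zeros +1 and -1
```

For $q(x) = x^2 - 1$ both zeros contribute $1/|q'(\pm 1)|^2 = 1/4$, so the toy $\mu_2$ equals $1/2$.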