The Cut-Norm and Combinatorial Optimization. Alan Frieze and Ravi Kannan


Outline of the Talk: The Cut-Norm and a Matrix Decomposition. Max-Cut given the Matrix Decomposition. Quadratic Assignment given the Matrix Decomposition. Constructing the Matrix Decomposition via the Grothendieck Identity (Alon and Naor). Multi-Dimensional Matrices.

The cut-norm of the $R \times C$ matrix $A$ is defined to be
$$\|A\|_\square = \max_{S \subseteq R,\; T \subseteq C} |A(S, T)|, \qquad \text{where } A(S, T) = \sum_{i \in S} \sum_{j \in T} A(i, j).$$
Relation to regularity: let $D$ be the $R \times C$ matrix with $D(i, j) = d$ for all $i, j$, let $W$ be an $R \times C$ matrix with $\|W\|_\square \le \epsilon |R||C|$, and suppose $A = D + W$. Then
$$\big| A(S, T) - d\,|S||T| \big| \le \epsilon |R||C|.$$

Cut Matrices. Given $S \subseteq R$, $T \subseteq C$ and a real value $d$, the $R \times C$ cut matrix $C = \mathrm{CUT}(S, T, d)$ is
$$C(i, j) = \begin{cases} d & \text{if } (i, j) \in S \times T, \\ 0 & \text{otherwise.} \end{cases}$$
For example, with $S$ the first two rows and $T$ the first four columns of a $5 \times 6$ matrix:
$$\begin{pmatrix} d & d & d & d & 0 & 0 \\ d & d & d & d & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$
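A cut matrix is easy to build explicitly. Here is a minimal illustrative sketch in Python/NumPy (our own; the helper name `cut_matrix` and the 0-based index convention are not from the talk):

```python
import numpy as np

def cut_matrix(m, n, S, T, d):
    """Return the m x n matrix CUT(S, T, d): value d on S x T, zero elsewhere.

    S is a set of row indices, T a set of column indices (0-based here).
    """
    C = np.zeros((m, n))
    C[np.ix_(sorted(S), sorted(T))] = d  # block assignment on the S x T submatrix
    return C

# The 5 x 6 example above: S = first two rows, T = first four columns.
C = cut_matrix(5, 6, {0, 1}, {0, 1, 2, 3}, 0.5)
```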

Matrix Decomposition. With $D^{(t)} = \mathrm{CUT}(R_t, C_t, d_t)$,
$$A = D^{(1)} + D^{(2)} + \cdots + D^{(s)} + W.$$
We want $s$, $\max_t |d_t|$ and $\|W\|_\square$ to be small. With $m = |R|$ and $n = |C|$, the following is achievable:
$$\|W\|_\square \le \epsilon m n, \qquad s = O(1/\epsilon^2), \qquad \max_t |d_t| = O(1).$$

Let $\|A\|_F = \big( \sum_{i,j} A(i, j)^2 \big)^{1/2}$ be the Frobenius norm of $A$.

Assume inductively that we have found cut matrices $D^{(j)} = \mathrm{CUT}(R_j, C_j, d_j)$ such that $W^{(t)} = A - (D^{(1)} + D^{(2)} + \cdots + D^{(t)})$ satisfies
$$\|W^{(t)}\|_F^2 \le (1 - \epsilon^2 t)\, \|A\|_F^2.$$
Suppose there exist $S \subseteq R$, $T \subseteq C$ such that $|W^{(t)}(S, T)| \ge \epsilon \sqrt{mn}\, \|A\|_F$. Let
$$R_{t+1} = S, \qquad C_{t+1} = T, \qquad d_{t+1} = \frac{W^{(t)}(S, T)}{|S||T|}.$$

Then
$$\|W^{(t+1)}\|_F^2 - \|W^{(t)}\|_F^2 = \|W^{(t)} - D^{(t+1)}\|_F^2 - \|W^{(t)}\|_F^2 = \sum_{i \in R_{t+1}} \sum_{j \in C_{t+1}} \Big( (W^{(t)}(i, j) - d_{t+1})^2 - W^{(t)}(i, j)^2 \Big) = -|R_{t+1}||C_{t+1}|\, d_{t+1}^2 = -\frac{W^{(t)}(R_{t+1}, C_{t+1})^2}{|R_{t+1}||C_{t+1}|} \le -\epsilon^2 \|A\|_F^2.$$
Conclusion: there exist $D^{(1)}, \ldots, D^{(s)}$ with $s \le \epsilon^{-2}$ such that
$$\|W^{(s)}\|_\square \le \epsilon \sqrt{mn}\, \|A\|_F.$$
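The inductive argument above is constructive, given an oracle that finds a pair $(S, T)$ with large $|W(S, T)|$. A small Python sketch of the greedy peeling (our own illustration, with a brute-force exponential-time oracle standing in for the efficient approximation discussed later in the talk):

```python
import itertools
import numpy as np

def best_cut(W):
    """Brute force: (value, S, T) maximizing |W(S, T)| over nonempty S, T.
    Exponential in the matrix dimensions -- for tiny matrices only."""
    m, n = W.shape
    subsets = lambda k: itertools.chain.from_iterable(
        itertools.combinations(range(k), r) for r in range(1, k + 1))
    best = (0.0, (), ())
    for S in subsets(m):
        row_sums = W[list(S), :].sum(axis=0)
        for T in subsets(n):
            val = row_sums[list(T)].sum()
            if abs(val) > abs(best[0]):
                best = (val, S, T)
    return best

def cut_decomposition(A, eps):
    """Greedily peel off cut matrices until |W(S, T)| < eps*sqrt(mn)*||A||_F.
    The Frobenius-norm argument above gives at most 1/eps^2 cuts."""
    m, n = A.shape
    thresh = eps * np.sqrt(m * n) * np.linalg.norm(A)
    W = A.astype(float).copy()
    cuts = []
    while True:
        val, S, T = best_cut(W)
        if abs(val) < thresh or not S:
            return cuts, W
        d = val / (len(S) * len(T))          # d_{t+1} = W(S,T) / (|S||T|)
        cuts.append((S, T, d))
        W[np.ix_(list(S), list(T))] -= d     # subtract CUT(S, T, d)

# Tiny demo
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
cuts, W = cut_decomposition(A, 0.5)
```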

Refinements. Suppose that we can only compute $R_{t+1}, C_{t+1}$ such that
$$|W^{(t)}(R_{t+1}, C_{t+1})| \ge \rho\, \|W^{(t)}\|_\square, \qquad \rho \le 1.$$
Conclusion: we can compute $D^{(1)}, \ldots, D^{(s)}$ with $s \le \rho^{-2}\epsilon^{-2}$ such that $\|W^{(s)}\|_\square \le \epsilon \sqrt{mn}\, \|A\|_F$.

Suppose that $\|W^{(t)}\|_\square \ge \epsilon \sqrt{mn}\, \|A\|_F$ and we have computed $R_{t+1}, C_{t+1}$ such that
$$|W^{(t)}(R_{t+1}, C_{t+1})| \ge \rho \epsilon \sqrt{mn}\, \|A\|_F.$$
If $|R_{t+1}| < m/2$ then, since $W^{(t)}(R_{t+1}, C_{t+1}) = W^{(t)}(R, C_{t+1}) - W^{(t)}(R \setminus R_{t+1}, C_{t+1})$, either
(i) $|W^{(t)}(R, C_{t+1})| \ge \tfrac12 \rho \epsilon \sqrt{mn}\, \|A\|_F$, or
(ii) $|W^{(t)}(R \setminus R_{t+1}, C_{t+1})| \ge \tfrac12 \rho \epsilon \sqrt{mn}\, \|A\|_F$,
and both $R$ and $R \setminus R_{t+1}$ have at least $m/2$ rows.
Conclusion: we can compute $D^{(1)}, \ldots, D^{(s)}$ with $s \le 4\rho^{-2}\epsilon^{-2}$ such that $\|W^{(s)}\|_\square \le \epsilon \sqrt{mn}\, \|A\|_F$ and such that $|R_i| \ge m/2$ and $|C_i| \ge n/2$ for all $i$. Then
$$\sum_{t=1}^{s} |R_t||C_t|\, d_t^2 \le \|A\|_F^2 \quad \Longrightarrow \quad \sum_{t=1}^{s} d_t^2 \le \frac{4 \|A\|_F^2}{mn}.$$

MAX-CUT

$G = (V, E)$ is a graph with $n \times n$ adjacency matrix $A$ and
$$A = D^{(1)} + D^{(2)} + \cdots + D^{(s)} + W,$$
where $D^{(t)} = \mathrm{CUT}(R_t, C_t, d_t)$, $|d_t| \le 2$, $s = O(1/\epsilon^2)$ and $\|W\|_\square \le \epsilon n^2$.
If $(S, \bar S)$ is a cut in $G$ then the weight of this cut satisfies
$$\Big| A(S, \bar S) - (D^{(1)} + D^{(2)} + \cdots + D^{(s)})(S, \bar S) \Big| \le \epsilon n^2,$$
where
$$\sum_{t=1}^{s} D^{(t)}(S, \bar S) = \sum_{t=1}^{s} d_t f_t g_t, \qquad f_t = |S \cap R_t|, \quad g_t = |\bar S \cap C_t|.$$

So we look for $S$ to (approximately) maximise $\sum_{t=1}^{s} d_t f_t g_t$.
Let $\nu = \epsilon^3 n$ and
$$\bar f_t = \nu \left\lfloor \frac{f_t}{\nu} \right\rfloor, \qquad \bar g_t = \nu \left\lfloor \frac{g_t}{\nu} \right\rfloor.$$
Then
$$\left| \sum_{t=1}^{s} d_t f_t g_t - \sum_{t=1}^{s} d_t \bar f_t \bar g_t \right| \le 6 \nu n s \le 6 \epsilon n^2.$$
So we can look for $S$ to (approximately) maximise $\sum_{t=1}^{s} d_t \bar f_t \bar g_t$.

There are $(2/\epsilon)^{2s}$ choices for the sequence $(\bar f_t, \bar g_t)$, so we enumerate all possibilities and see if there is a cut with (approximately) these parameters.

$\mathcal{P} = V_1, V_2, \ldots, V_k$ is the coarsest partition of $V$ (with at most $2^{2s}$ parts) such that each $R_t$, $C_t$ is a union of sets in $\mathcal{P}$.

We check a candidate $(\bar f_t, \bar g_t)$ by solving the LP relaxation of the integer program
$$0 \le x_P \le |P| \qquad P \in \mathcal{P},$$
$$\bar f_t \le \sum_{P \subseteq R_t} x_P < \bar f_t + \nu \qquad 1 \le t \le s,$$
$$\bar g_t \le \sum_{P \subseteq C_t} (|P| - x_P) < \bar g_t + \nu,$$
and doing some adjusting. (Here $x_P = |S \cap P|$.)

The partition $\mathcal{P}$ has the property that for disjoint $S, T \subseteq V$ we have
$$\Big| e(S, T) - \sum_{i \in [k]} \sum_{j \in [k]} d_{i,j} |S_i| |T_j| \Big| \le 2 \|W\|_\square \le 2 \epsilon n^2,$$
where $d_{i,j} = e(V_i, V_j)/(|V_i||V_j|)$, $S_i = S \cap V_i$ and $T_j = T \cap V_j$.

We could replace $\mathcal{P}$ with an ordinary regular partition; the constants as a function of $\epsilon$ get worse.

Quadratic Assignment. Minimise
$$\sum_{i,j,p,q} A_{i,j,p,q}\, z_{i,p} z_{j,q}$$
subject to
$$\sum_k z_{i,k} = \sum_k z_{k,j} = 1, \qquad z_{i,j} \in \{0, 1\} \quad \forall i, j.$$

A set of $n$ items $V$ has to be assigned to a set of $n$ locations $X$, one item per location. $z_{i,p} = 1$ means: place item $i$ in position $p = \pi(i)$. $T(i, i') \le 1$ is the amount of traffic between items $i$ and $i'$. $D(x, x')$ is the distance between locations $x$ and $x'$. If item $i$ is assigned to location $\pi(i)$ for $i \in [n]$, the total cost $c(\pi)$ is defined by
$$c(\pi) = \sum_{i=1}^{n} \sum_{i'=1}^{n} T(i, i')\, D(\pi(i), \pi(i')).$$
The problem is to minimise $c(\pi)$ over all bijections $\pi : V \to X$.
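To make the objective concrete, here is a minimal brute-force sketch (our own illustration, not an algorithm from the talk; it enumerates all $n!$ bijections and is only feasible for very small $n$):

```python
import itertools
import numpy as np

def qap_cost(T, D, perm):
    """c(pi) = sum over i, i' of T(i, i') * D(pi(i), pi(i'))."""
    n = len(perm)
    return sum(T[i, j] * D[perm[i], perm[j]] for i in range(n) for j in range(n))

def qap_brute_force(T, D):
    """Minimise c(pi) over all bijections pi (represented as index tuples)."""
    n = T.shape[0]
    return min(itertools.permutations(range(n)), key=lambda p: qap_cost(T, D, p))

# Toy instance: heavy traffic between items 0 and 1; locations 0 and 1 are close,
# so the optimum must place those two items at those two locations.
T = np.array([[0, 10, 0], [10, 0, 0], [0, 0, 0]], dtype=float)
D = np.array([[0, 0.1, 1], [0.1, 0, 1], [1, 1, 0]])
best = qap_brute_force(T, D)
```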

Metric QAP. $X$ is a metric space with metric $D$ such that:
1. $\mathrm{diam}(X) = 1$, i.e. $\max_{x,y} D(x, y) = 1$.
2. For all $\epsilon > 0$ there exists a partition $X = X_1 \cup X_2 \cup \cdots \cup X_l$, $l = l(\epsilon)$, such that $\mathrm{diam}(X_j) \le \epsilon$. We call this an $\epsilon$-refinement of $X$. So there is an $l \times l$ matrix $\hat D$ such that if $x \in X_j$ and $x' \in X_{j'}$ then $|D(x, x') - \hat D(j, j')| \le 2\epsilon$. This partition must be computable in time polynomial in $n$ and $1/\epsilon$.

We decompose $T = T_1 + T_2 + \cdots + T_s + W$, where $T_k = \mathrm{CUT}(R_k, C_k, d_k)$ and $\|W\|_\square \le \epsilon n^2$. For a bijection $\pi : V \to X$ we have
$$c(\pi) = \sum_{k=1}^{s} \sum_{i,j=1}^{n} T_k(i, j)\, D(\pi(i), \pi(j)) + E_1.$$
We compute an $O(\epsilon^3)$-refinement of $X$ and let $S_i^{(\pi)} = \pi^{-1}(X_i)$. Then
$$\sum_{k=1}^{s} \sum_{i,j=1}^{n} T_k(i, j)\, D(\pi(i), \pi(j)) = \sum_{k=1}^{s} \sum_{i,j=1}^{l} d_k\, |R_k \cap S_i^{(\pi)}|\, |C_k \cap S_j^{(\pi)}|\, \hat D(i, j) + E_2,$$
where $E_1$ and $E_2$ are small error terms.

Computing a decomposition. Reducible to finding an approximation to the cut-norm. Our paper gave algorithms with a small additive error. Alon and Naor gave an approximation algorithm with multiplicative error!

Let
$$\|A\|_{\infty \to 1} = \max_{x_i, y_j \in \{\pm 1\}} \sum_{i,j} A(i, j)\, x_i y_j.$$
Then
$$\sum_{i,j} A(i, j)\, x_i y_j = \sum_{\substack{x_i = 1 \\ y_j = 1}} A_{i,j} - \sum_{\substack{x_i = 1 \\ y_j = -1}} A_{i,j} - \sum_{\substack{x_i = -1 \\ y_j = 1}} A_{i,j} + \sum_{\substack{x_i = -1 \\ y_j = -1}} A_{i,j}.$$
So $\|A\|_{\infty \to 1} \le 4 \|A\|_\square$. A similar argument gives $\|A\|_\square \le \|A\|_{\infty \to 1}$.
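These inequalities are easy to sanity-check by brute force on a small random matrix (our own illustration; both norms are computed in exponential time, and for the second norm the optimal $y$ is chosen analytically as $y_j = \mathrm{sign}$ of the $j$-th weighted column sum):

```python
import itertools
import numpy as np

def cut_norm(A):
    """max over row subsets S and column subsets T of |A(S, T)| (brute force)."""
    m, n = A.shape
    best = 0.0
    for s in itertools.product([0, 1], repeat=m):     # indicator vector of S
        row = np.array(s) @ A                         # column sums over S
        for t in itertools.product([0, 1], repeat=n): # indicator vector of T
            best = max(best, abs(row @ np.array(t)))
    return best

def infty_to_one_norm(A):
    """max over x, y in {-1,1}^m x {-1,1}^n of sum A(i,j) x_i y_j.
    For fixed x the optimal y_j is the sign of (x @ A)_j."""
    best = 0.0
    for x in itertools.product([-1, 1], repeat=A.shape[0]):
        best = max(best, np.abs(np.array(x) @ A).sum())
    return best

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 6))
```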

Grothendieck's Identity. $u, v$ are unit vectors in a Hilbert space $H$ and $z$ is chosen uniformly from $B = \{x : |x| = 1\}$. Then
$$\frac{\pi}{2}\, \mathbb{E}\big[\operatorname{sign}(u \cdot z)\operatorname{sign}(v \cdot z)\big] = \arcsin(u \cdot v).$$
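The identity is easy to verify numerically (our sketch; note that $\operatorname{sign}(u \cdot z)$ is invariant under scaling of $z$, so a standard Gaussian $z$ gives the same distribution of signs as $z$ uniform on the unit sphere):

```python
import numpy as np

rng = np.random.default_rng(42)
d = 5

# Two fixed unit vectors in R^d.
u = rng.standard_normal(d); u /= np.linalg.norm(u)
v = rng.standard_normal(d); v /= np.linalg.norm(v)

# Monte Carlo estimate of E[sign(u.z) sign(v.z)] over random directions z.
Z = rng.standard_normal((200_000, d))
est = np.mean(np.sign(Z @ u) * np.sign(Z @ v))
exact = (2 / np.pi) * np.arcsin(u @ v)   # the identity, rearranged
```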

Let $(u_i, v_j)$ define
$$L_A = \max_{u_i, v_j} \sum_{i,j} A(i, j)\, u_i \cdot v_j \quad (\ge \|A\|_{\infty \to 1}),$$
where the $u_i, v_j$ lie in $\mathbb{R}^{m+n}$ and $|u_i| = |v_j| = 1$. The $(u_i, v_j)$ are computable via Semi-Definite Programming.

Let $c = \sinh^{-1}(1) = \ln(1 + \sqrt 2)$. Then
$$\sin(c\, u_i \cdot v_j) = \sum_{k=0}^{\infty} \frac{(-1)^k c^{2k+1}}{(2k+1)!} (u_i \cdot v_j)^{2k+1} = \sum_{k=0}^{\infty} \frac{(-1)^k c^{2k+1}}{(2k+1)!}\, (u_i)^{(2k+1)} \cdot (v_j)^{(2k+1)} = S(u_i) \cdot T(v_j).$$

Here
$$S(u_i) = \bigoplus_{k=0}^{\infty} (-1)^k \sqrt{\frac{c^{2k+1}}{(2k+1)!}}\; (u_i)^{(2k+1)}, \qquad T(v_j) = \bigoplus_{k=0}^{\infty} \sqrt{\frac{c^{2k+1}}{(2k+1)!}}\; (v_j)^{(2k+1)},$$
where, for example,
$$(u_1, u_2, u_3, u_4)^{(3)} = (u_1^3,\; u_1^2 u_2,\; u_1^2 u_3,\; u_1^2 u_4,\; u_1 u_2 u_3,\; \ldots).$$
Note that $c$ has been chosen so that $|S(u_i)|^2 = |T(v_j)|^2 = \sum_k c^{2k+1}/(2k+1)! = \sinh(c) = 1$.

$$L_A = \sum_{i,j} A_{i,j}\, u_i \cdot v_j = c^{-1} \sum_{i,j} A_{i,j} \arcsin\!\big(S(u_i) \cdot T(v_j)\big) = c^{-1}\frac{\pi}{2} \sum_{i,j} A_{i,j}\, \mathbb{E}\big[\operatorname{sign}(S(u_i) \cdot z)\operatorname{sign}(T(v_j) \cdot z)\big].$$
We embed the $S(u_i)$, $T(v_j)$ in $\mathbb{R}^{m+n}$, choose $z$ randomly and put $x_i = \operatorname{sign}(S(u_i) \cdot z)$, $y_j = \operatorname{sign}(T(v_j) \cdot z)$. By choosing many $z$ we get a good estimate of $\|A\|_{\infty \to 1}$. One can recover a good solution $(x_i, y_j)$ by first deciding whether $x_1 = 1$ or $x_1 = -1$, etcetera.

One can extend the idea to approximate the cut-norm, with the same guarantee.

In Frieze, Kannan we gave a randomised algorithm for computing a weak partition using only $2^{\tilde O(\epsilon^{-2})}$ time. Subsequently, Alon, Fernandez de la Vega, Kannan, Karpinski, Yuster showed how to compute such a partition using only $\tilde O(\epsilon^{-4})$ probes. (A similar but weaker result was obtained by Anderson, Engebretson.) See also Borgs, Chayes, Lovász, Sós, Vesztergombi.

The above also applies to multi-dimensional arrays.

A multi-dimensional version. Max-r-CSP is the following problem: we are given $m$ Boolean functions $f_i$ defined on $Y_i = (y_1, y_2, \ldots, y_r)$, where $\{y_1, y_2, \ldots, y_r\} \subseteq \{x_1, x_2, \ldots, x_n\}$, and the aim is to choose a setting of the variables $x_1, x_2, \ldots, x_n$ that makes as many of the functions $f_i$ as possible true.

For each $z \in \{0, 1\}^r$ and $(i_1, i_2, \ldots, i_r) \in [n]^r$ we define
$$A^{(z)}(i_1, i_2, \ldots, i_r) = \big| \{ j : Y_j = (x_{i_1}, \ldots, x_{i_r}) \text{ and } f_j(z_1, \ldots, z_r) = T \} \big|.$$
The problem becomes to maximize, over $x_1, x_2, \ldots, x_n$,
$$\sum_{z} \sum_{i_1, i_2, \ldots, i_r} A^{(z)}(i_1, i_2, \ldots, i_r)\, (-1)^{r - |z|} \prod_{t=1}^{r} (x_{i_t} + z_t - 1), \qquad |z| = \sum_t z_t.$$

For this it is useful to have a decomposition for $r$-dimensional matrices. An $r$-dimensional matrix $A$ on $X_1 \times X_2 \times \cdots \times X_r$ is a map $A : X_1 \times X_2 \times \cdots \times X_r \to \mathbb{R}$. If $S_i \subseteq X_i$ for $i = 1, 2, \ldots, r$ and $d$ is a real number, the matrix $M$ satisfying
$$M(e) = \begin{cases} d & \text{for } e \in S_1 \times S_2 \times \cdots \times S_r, \\ 0 & \text{otherwise,} \end{cases}$$
is called a cut matrix and is denoted $M = \mathrm{CUT}(S_1, S_2, \ldots, S_r; d)$.

$B$ is the (2-dimensional) matrix with rows indexed by $Y_1 = X_1 \times \cdots \times X_{\hat r}$, $\hat r = r/2$, and columns indexed by $Y_2 = X_{\hat r + 1} \times \cdots \times X_r$. For $i = (x_1, \ldots, x_{\hat r}) \in Y_1$ and $j = (x_{\hat r + 1}, \ldots, x_r) \in Y_2$, let $B(i, j) = A(x_1, x_2, \ldots, x_r)$.

Applying a decomposition algorithm we obtain
$$B = D^{(1)} + D^{(2)} + \cdots + D^{(s_0)} + W,$$
where for $1 \le t \le s_0$, $D^{(t)} = \mathrm{CUT}(R_t, C_t, d_t)$, and $\|W\|_\square$ is small.

Each $R_t$ defines an $\hat r$-dimensional 0-1 matrix $R^{(t)}$ where $R^{(t)}(x_1, \ldots, x_{\hat r}) = 1$ iff $(x_1, \ldots, x_{\hat r}) \in R_t$; $C^{(t)}$ is defined similarly. Assume inductively that we can further decompose
$$R^{(t)} = D^{(t,1)} + \cdots + D^{(t,s_1)} + W^{(t)}, \qquad C^{(t)} = \hat D^{(t,1)} + \cdots + \hat D^{(t,\hat s_1)} + \hat W^{(t)},$$
where
$$D^{(t,u)} = \mathrm{CUT}(R_{t,u,1}, \ldots, R_{t,u,\hat r};\, d_{t,u}), \qquad \hat D^{(t,\hat u)} = \mathrm{CUT}(\hat R_{t,\hat u,\hat r+1}, \ldots, \hat R_{t,\hat u,r};\, \hat d_{t,\hat u}).$$
It follows that we can write
$$A = \sum_{t,u,\hat u} \mathrm{CUT}(R_{t,u,1}, \ldots, R_{t,u,\hat r}, \hat R_{t,\hat u,\hat r+1}, \ldots, \hat R_{t,\hat u,r};\, d_{t,u,\hat u}) + W_1,$$
where $W_1$ is small.

THANK YOU