Combination Preconditioning of saddle-point systems for positive definiteness


Combination Preconditioning of saddle-point systems for positive definiteness. Andy Wathen, Oxford University, UK. Joint work with Jen Pestana. Eindhoven, 2012.

Krylov subspace methods for $Ax = b$ compute iterates
$$x_k \in x_0 + \mathcal{K}_k(A, r_0)$$
with residuals
$$r_k = b - Ax_k \in r_0 + A\,\mathcal{K}_k(A, r_0),$$
i.e. $r_k = p(A)\,r_0$ with $p \in \Pi_k$, $p(0) = 1$, since
$$\mathcal{K}_k(A, r_0) = \operatorname{span}\{r_0, Ar_0, \ldots, A^{k-1}r_0\},$$
together with some optimality or orthogonality condition.

Given an inner product $\langle\cdot,\cdot\rangle : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$:

Conjugate Gradients (Hestenes & Stiefel (1952)) computes iterates which minimize $\langle A(x - x_k), x - x_k\rangle$ when $A$ is self-adjoint and positive definite in $\langle\cdot,\cdot\rangle$.

MINRES (Paige & Saunders (1975)) computes iterates for which $\langle r_k, r_k\rangle$ is minimal when $A$ is self-adjoint in $\langle\cdot,\cdot\rangle$.

GMRES (Saad & Schultz (1986)) computes iterates for which $\langle r_k, r_k\rangle$ is minimal for general $A$.

Conjugate Gradient Method (CG):

Choose $x_0$, compute $r_0 = b - Ax_0$, set $p_0 = r_0$
for $k = 0$ until convergence do
    $\alpha_k = \langle r_k, r_k\rangle / \langle Ap_k, p_k\rangle$
    $x_{k+1} = x_k + \alpha_k p_k$
    $r_{k+1} = r_k - \alpha_k Ap_k$
    <Test for convergence>
    $\beta_k = \langle r_{k+1}, r_{k+1}\rangle / \langle r_k, r_k\rangle$
    $p_{k+1} = r_{k+1} + \beta_k p_k$
enddo

This computes iterates $\{x_k\}$ such that $\langle A(x - x_k), x - x_k\rangle$ is minimal whenever $\langle Au, v\rangle = \langle u, Av\rangle$ and $\langle Au, u\rangle > 0$.
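
A minimal NumPy sketch of the iteration above with the Euclidean inner product $\langle u, v\rangle = u^T v$; the test matrix, tolerance and iteration limit below are illustrative choices, not part of the slides.

```python
import numpy as np

def cg(A, b, x0=None, tol=1e-10, maxit=500):
    """Plain conjugate gradients for a symmetric positive definite matrix A."""
    x = np.zeros(b.size) if x0 is None else x0.copy()
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rho = r @ r              # <r, r>
    for k in range(maxit):
        Ap = A @ p
        alpha = rho / (p @ Ap)           # <r, r> / <Ap, p>
        x += alpha * p
        r -= alpha * Ap
        rho_new = r @ r
        if np.sqrt(rho_new) <= tol * np.linalg.norm(b):   # convergence test
            break
        p = r + (rho_new / rho) * p      # beta = <r_new, r_new> / <r, r>
        rho = rho_new
    return x, k + 1

# small SPD test problem: 1D Laplacian
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, its = cg(A, b)
print(its, np.linalg.norm(b - A @ x))
```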

Thus, given any symmetric and positive definite matrix $H$, take $\langle u, v\rangle = \langle u, v\rangle_H = u^T H v$:

Choose $x_0$, compute $r_0 = b - Ax_0$, set $p_0 = r_0$
for $k = 0$ until convergence do
    $\alpha_k = \langle r_k, r_k\rangle_H / \langle Ap_k, p_k\rangle_H$
    $x_{k+1} = x_k + \alpha_k p_k$
    $r_{k+1} = r_k - \alpha_k Ap_k$
    <Test for convergence>
    $\beta_k = \langle r_{k+1}, r_{k+1}\rangle_H / \langle r_k, r_k\rangle_H$
    $p_{k+1} = r_{k+1} + \beta_k p_k$
enddo

This computes iterates $\{x_k\}$ such that $\langle A(x - x_k), x - x_k\rangle_H = (x - x_k)^T A^T H (x - x_k)$ is minimal whenever $\langle Au, v\rangle_H = \langle u, Av\rangle_H$ and $\langle Au, u\rangle_H > 0$.
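
The same iteration with every inner product replaced by $\langle u, v\rangle_H = u^T H v$, as a sketch; it assumes $A$ is self-adjoint and positive definite in $\langle\cdot,\cdot\rangle_H$. The construction of the test matrices ($A = H^{-1}S$ with $S$ symmetric positive definite, so that $HA = S$ is symmetric) is an illustrative assumption, not from the slides.

```python
import numpy as np

def cg_H(A, b, H, x0=None, tol=1e-8, maxit=1000):
    """CG in the inner product <u,v>_H = u^T H v.

    Assumes H is SPD and A is H-self-adjoint (A^T H = H A) and
    positive definite in <.,.>_H."""
    x = np.zeros(b.size) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    rho = r @ H @ r                      # <r, r>_H
    for k in range(maxit):
        Ap = A @ p
        alpha = rho / (Ap @ H @ p)       # <Ap, p>_H
        x += alpha * p
        r -= alpha * Ap
        rho_new = r @ H @ r
        if np.sqrt(abs(rho_new)) <= tol:
            break
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x, k + 1

rng = np.random.default_rng(0)
n = 30
G1 = rng.standard_normal((n, n)); H = np.eye(n) + 0.1 * (G1 @ G1.T)   # SPD H
G2 = rng.standard_normal((n, n)); S = np.eye(n) + 0.1 * (G2 @ G2.T)   # SPD S
A = np.linalg.solve(H, S)        # H A = S symmetric  =>  A is H-self-adjoint, H-positive definite
b = rng.standard_normal(n)
x, its = cg_H(A, b, H)
print(its, np.linalg.norm(b - A @ x))
```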

Similarly for the MINRES method:

$v_0 = 0$, $w_0 = 0$, $w_1 = 0$, choose $x_0$
Compute $r_0 = b - Ax_0$, set $v_1 = r_0$, $\gamma_1 = \sqrt{\langle v_1, v_1\rangle_H}$
Set $\eta = \gamma_1$, $s_0 = s_1 = 0$, $c_0 = c_1 = 1$
for $j = 1$ until convergence do
    $v_j = v_j / \gamma_j$
    $\delta_j = \langle v_j, Av_j\rangle_H$
    $v_{j+1} = Av_j - \delta_j v_j - \gamma_j v_{j-1}$
    $\gamma_{j+1} = \sqrt{\langle v_{j+1}, v_{j+1}\rangle_H}$
    $\alpha_0 = c_j\delta_j - c_{j-1}s_j\gamma_j$,  $\alpha_1 = \sqrt{\alpha_0^2 + \gamma_{j+1}^2}$
    $\alpha_2 = s_j\delta_j + c_{j-1}c_j\gamma_j$,  $\alpha_3 = s_{j-1}\gamma_j$
    $c_{j+1} = \alpha_0/\alpha_1$;  $s_{j+1} = \gamma_{j+1}/\alpha_1$
    $w_{j+1} = (v_j - \alpha_3 w_{j-1} - \alpha_2 w_j)/\alpha_1$
    $x_j = x_{j-1} + c_{j+1}\eta w_{j+1}$
    $\eta = -s_{j+1}\eta$, then <Test for convergence>
enddo

This minimises $\langle r_k, r_k\rangle_H$ when $\langle Ax, y\rangle_H = \langle x, Ay\rangle_H$.
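
The recurrence above never forms a factor of $H$. For a small dense sanity check, however, one can use the equivalence: if $H = LL^T$ is a Cholesky factorization and $A$ is $H$-self-adjoint, then $L^T A L^{-T}$ is symmetric, and standard MINRES applied to $(L^T A L^{-T})(L^T x) = L^T b$ produces iterates $x_k = L^{-T}\hat{x}_k$ that minimize $\langle r_k, r_k\rangle_H$ over $x_0 + \mathcal{K}_k(A, r_0)$. A sketch with illustrative random data (forming $L$ explicitly is for the check only, not something one would do in practice):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular
from scipy.sparse.linalg import minres

rng = np.random.default_rng(1)
n = 40
G = rng.standard_normal((n, n))
H = np.eye(n) + 0.1 * (G @ G.T)                  # SPD inner-product matrix
R = rng.standard_normal((n, n)); S = 0.5 * (R + R.T)   # symmetric, generally indefinite
A = np.linalg.solve(H, S)                        # H A = S  =>  A is H-self-adjoint
b = rng.standard_normal(n)

L = cholesky(H, lower=True)                      # H = L L^T
Linv_T = solve_triangular(L.T, np.eye(n), lower=False)   # (L^T)^{-1}
Ahat = L.T @ A @ Linv_T                          # L^T A L^{-T}, symmetric
assert np.allclose(Ahat, Ahat.T)

xhat, info = minres(Ahat, L.T @ b)               # standard MINRES on transformed system
x = solve_triangular(L.T, xhat, lower=False)     # x = L^{-T} xhat
r = b - A @ x
print(info, np.sqrt(r @ H @ r))                  # H-norm of the residual (small)
```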

Self-adjointness: assume $\langle\cdot,\cdot\rangle : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is a symmetric bilinear form or an inner product. $A \in \mathbb{R}^{n\times n}$ is self-adjoint in $\langle\cdot,\cdot\rangle$ iff $\langle Ax, y\rangle = \langle x, Ay\rangle$. Self-adjointness of $A$ in $\langle\cdot,\cdot\rangle_H$ (where $\langle x, y\rangle_H = x^T H y$) thus means
$$x^T A^T H y = \langle Ax, y\rangle_H = \langle x, Ay\rangle_H = x^T H A y \quad\text{for all } x, y,$$
so
$$A^T H = H A$$
is the relation for self-adjointness of $A$ in $\langle\cdot,\cdot\rangle_H$.

Preconditioning. For example, left preconditioning,
$$\hat{A}x = P^{-1}Ax = P^{-1}b = \hat{b},$$
induces a self-adjoint matrix in $\langle\cdot,\cdot\rangle_H$ iff
$$A^T P^{-T} H = H P^{-1} A.$$

An important example: the Bramble-Pasciak CG for saddle point problems,
$$\mathcal{A} = \begin{pmatrix} A & B^T \\ B & -C \end{pmatrix} \qquad\text{with preconditioner}\qquad P = \begin{pmatrix} A_0 & 0 \\ B & -I \end{pmatrix}.$$
The (left) preconditioned matrix
$$\hat{\mathcal{A}} = P^{-1}\mathcal{A} = \begin{pmatrix} A_0^{-1}A & A_0^{-1}B^T \\ BA_0^{-1}A - B & BA_0^{-1}B^T + C \end{pmatrix}$$
is not symmetric, but is self-adjoint and positive definite when
$$H = \begin{pmatrix} A - A_0 & 0 \\ 0 & I \end{pmatrix}$$
defines an inner product $\langle x, y\rangle_H := x^T H y$. CG can then be used in this inner product.
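
A small numerical check of these claims (dense random blocks; the construction of $A$ and $B$ and the choice $A_0 = \tfrac12\lambda_{\min}(A)I$, which guarantees $A_0 < A$, are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 30, 12
G = rng.standard_normal((n, n)); A = np.eye(n) + G @ G.T     # SPD (1,1) block
B = rng.standard_normal((m, n))                              # full row rank
Amat = np.block([[A, B.T], [B, np.zeros((m, m))]])           # saddle-point matrix, C = 0

A0 = 0.5 * np.linalg.eigvalsh(A)[0] * np.eye(n)              # A_0 < A
P = np.block([[A0, np.zeros((n, m))], [B, -np.eye(m)]])
H = np.block([[A - A0, np.zeros((n, m))],
              [np.zeros((m, n)), np.eye(m)]])

Ahat = np.linalg.solve(P, Amat)                              # P^{-1} A, nonsymmetric
M = H @ Ahat                                                 # symmetric iff Ahat is H-self-adjoint
print("nonsymmetric:", not np.allclose(Ahat, Ahat.T))
print("H-self-adjoint:", np.allclose(M, M.T))
print("H-positive definite:", np.linalg.eigvalsh(0.5 * (M + M.T)).min() > 0)
```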

Basic properties:

LEMMA. If $A_1$ and $A_2$ are self-adjoint in $\langle\cdot,\cdot\rangle_H$ then, for any $\alpha, \beta \in \mathbb{R}$, $\alpha A_1 + \beta A_2$ is self-adjoint in $\langle\cdot,\cdot\rangle_H$.

LEMMA. If $A$ is self-adjoint in $\langle\cdot,\cdot\rangle_{H_1}$ and in $\langle\cdot,\cdot\rangle_{H_2}$ then $A$ is self-adjoint in $\langle\cdot,\cdot\rangle_{\alpha H_1 + \beta H_2}$ for every $\alpha, \beta \in \mathbb{R}$.

And of relevance when preconditioning:

LEMMA. For symmetric $A$, $\hat{A} = P^{-1}A$ is self-adjoint in $\langle\cdot,\cdot\rangle_H$ if and only if $P^{-T}H$ is self-adjoint in $\langle\cdot,\cdot\rangle_A$.

PROOF. $(P^{-T}H)^T A = HP^{-1}A = (P^{-1}A)^T H = A(P^{-T}H)$.

Also, combining the above:

LEMMA. If $P_1$ and $P_2$ are left preconditioners for the symmetric matrix $A$ for which symmetric matrices $H_1$ and $H_2$ exist with $P_1^{-1}A$ self-adjoint in $\langle\cdot,\cdot\rangle_{H_1}$ and $P_2^{-1}A$ self-adjoint in $\langle\cdot,\cdot\rangle_{H_2}$, and if for any $\alpha, \beta$
$$\alpha P_1^{-T}H_1 + \beta P_2^{-T}H_2 = P_3^{-T}H_3$$
for some matrix $P_3$ and some symmetric matrix $H_3$, then $P_3^{-1}A$ is self-adjoint in $\langle\cdot,\cdot\rangle_{H_3}$.

This shows: if we have two instances of this structure and can find such a splitting, we have found a new preconditioner and a bilinear form in which the preconditioned matrix is self-adjoint; see the numerical check after this slide.

Combination Preconditioners
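
A quick dense check of the last lemma and of the combination idea. The first pair is Bramble-Pasciak (from the example slide above); the second is the block-diagonal pair $P_2 = H_2 = \mathrm{diag}(A_0, S_0)$ that appears later in the talk, with an illustrative SPD $S_0$. Self-adjointness in $\langle\cdot,\cdot\rangle_{\mathcal{A}}$ is equivalent to symmetry of $\mathcal{A}M$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20, 8
G = rng.standard_normal((n, n)); A = np.eye(n) + G @ G.T      # SPD (1,1) block
B = rng.standard_normal((m, n))
Amat = np.block([[A, B.T], [B, np.zeros((m, m))]])            # saddle-point matrix (C = 0)
A0 = 0.5 * np.linalg.eigvalsh(A)[0] * np.eye(n)               # A_0 < A
S0 = np.eye(m)                                                # illustrative SPD S_0

def a_selfadjoint(M):
    # M self-adjoint in <x,y>_Amat = x^T Amat y  <=>  Amat @ M symmetric
    return np.allclose(Amat @ M, (Amat @ M).T)

# pair 1: Bramble-Pasciak;  pair 2: block diagonal
P1 = np.block([[A0, np.zeros((n, m))], [B, -np.eye(m)]])
H1 = np.block([[A - A0, np.zeros((n, m))], [np.zeros((m, n)), np.eye(m)]])
P2 = H2 = np.block([[A0, np.zeros((n, m))], [np.zeros((m, n)), S0]])

M1 = np.linalg.solve(P1.T, H1)      # P1^{-T} H1
M2 = np.linalg.solve(P2.T, H2)      # P2^{-T} H2 (= identity here)
alpha, beta = 0.7, -1.3
print(a_selfadjoint(M1), a_selfadjoint(M2), a_selfadjoint(alpha * M1 + beta * M2))
```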

Intriguing possibilities:

If $H_1, H_2, H_3$ are all positive definite (so define inner products) but $P_1^{-1}\mathcal{A}$, $P_2^{-1}\mathcal{A}$ are indefinite, can $P_3^{-1}\mathcal{A}$ be positive definite?

Can $H_1$ and $H_2$ be indefinite but $H_3$ be positive definite?

Answers: YES and YES.

Saddle point examples. Bramble-Pasciak CG (Bramble & Pasciak (1988)), a widely used CG technique, with preconditioner
$$P^{-1} = \begin{pmatrix} A_0^{-1} & 0 \\ BA_0^{-1} & -I \end{pmatrix}$$
and inner product matrix
$$H = \begin{pmatrix} A - A_0 & 0 \\ 0 & I \end{pmatrix}.$$
Main drawback: it requires $A_0 < A$; but $P^{-1}\mathcal{A}$ is always positive definite when this is true.

Examples: BP with Schur complement preconditioner (Klawonn (1998), Meyer et al. (2001), Simoncini (2001)):
$$P^{-1} = \begin{pmatrix} A_0^{-1} & 0 \\ S_0^{-1}BA_0^{-1} & -S_0^{-1} \end{pmatrix}, \qquad\text{inner product: } H = \begin{pmatrix} A - A_0 & 0 \\ 0 & S_0 \end{pmatrix},$$
with similar conditions to BP for positive definiteness.

Examples: Zulehner (Zulehner (2001), Schöberl & Zulehner (2007)):
$$P = \begin{pmatrix} A_0 & B^T \\ B & BA_0^{-1}B^T - S_0 \end{pmatrix} = \begin{pmatrix} I & 0 \\ BA_0^{-1} & I \end{pmatrix}\begin{pmatrix} A_0 & B^T \\ 0 & -S_0 \end{pmatrix}$$
gives $P^{-1}\mathcal{A}$ self-adjoint in $\langle\cdot,\cdot\rangle_H$,
$$H = \begin{pmatrix} A_0 - A & 0 \\ 0 & BA_0^{-1}B^T - S_0 \end{pmatrix}.$$
So $H$ defines an inner product if $A_0 > A$ and $S_0 < BA_0^{-1}B^T$. Whenever $H$ is positive definite, $P^{-1}\mathcal{A}$ is positive definite in $\langle\cdot,\cdot\rangle_H$.

Examples: Benzi-Simoncini (Benzi & Simoncini (2006)), an extension of the CG method of Fischer, Ramage, Silvester & W (1998):
$$P^{-1} = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}, \qquad\text{inner product: } H = \begin{pmatrix} A - \gamma I & B^T \\ B & \gamma I \end{pmatrix}.$$
Extension for $C \neq 0$ (Liesen (2006), Liesen & Parlett (2007)):
$$H = \begin{pmatrix} A - \gamma I & B^T \\ B & \gamma I - C \end{pmatrix}.$$

Example: the Bramble-Pasciak+ method (BP+) (Stoll & W (2008)):
$$P^{-1} = \begin{pmatrix} A_0^{-1} & 0 \\ BA_0^{-1} & I \end{pmatrix} \qquad\text{and inner product}\qquad H = \begin{pmatrix} A + A_0 & 0 \\ 0 & I \end{pmatrix}.$$
Note: $H$ defines an inner product for any symmetric and positive definite preconditioner $A_0$, so MINRES can always be applied in this inner product. But the preconditioned matrix is always indefinite in this inner product.
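
The same kind of dense check for BP+ (random illustrative blocks again, and $A_0$ taken simply as $\mathrm{diag}(A)$ with no scaling, since no condition $A_0 < A$ is needed here); the $H$-symmetrized preconditioned matrix has eigenvalues of both signs:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 30, 12
G = rng.standard_normal((n, n)); A = np.eye(n) + G @ G.T
B = rng.standard_normal((m, n))
Amat = np.block([[A, B.T], [B, np.zeros((m, m))]])

A0 = np.diag(np.diag(A))                                  # any SPD A_0, no scaling needed
A0inv = np.linalg.inv(A0)
Pinv = np.block([[A0inv, np.zeros((n, m))],
                 [B @ A0inv, np.eye(m)]])                 # BP+ P^{-1}
H = np.block([[A + A0, np.zeros((n, m))],
              [np.zeros((m, n)), np.eye(m)]])             # always SPD

M = H @ Pinv @ Amat                                       # symmetric iff H-self-adjoint
eigs = np.linalg.eigvalsh(0.5 * (M + M.T))
print("H-self-adjoint:", np.allclose(M, M.T))
print("indefinite in <.,.>_H:", eigs.min() < 0 < eigs.max())
```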

Similarly there exists a Schöberl-Zulehner+ method (SZ+) (Pestana & W (2012)):
$$P = \begin{pmatrix} A_0 & B^T \\ B & BA_0^{-1}B^T + S_0 \end{pmatrix} = \begin{pmatrix} I & 0 \\ BA_0^{-1} & I \end{pmatrix}\begin{pmatrix} A_0 & 0 \\ 0 & S_0 \end{pmatrix}\begin{pmatrix} I & A_0^{-1}B^T \\ 0 & I \end{pmatrix}$$
gives $P^{-1}\mathcal{A}$ self-adjoint in $\langle\cdot,\cdot\rangle_H$,
$$H = \begin{pmatrix} A_0 + A & 0 \\ 0 & BA_0^{-1}B^T + S_0 \end{pmatrix}.$$
So $H$ always defines an inner product, but $P^{-1}\mathcal{A}$ is always indefinite in this inner product.

Final example: the Block Diagonal preconditioner (BD) (Silvester & W (1993), Murphy, Golub & W (2000), Korzak (1999), Kuznetsov (1995)):
$$P = H = \begin{pmatrix} A_0 & 0 \\ 0 & S_0 \end{pmatrix},$$
for which
$$HP^{-1}\mathcal{A} = \mathcal{A} = \begin{pmatrix} A & B^T \\ B & -C \end{pmatrix}$$
is clearly symmetric. But $\mathcal{A}$ is indefinite, so $P^{-1}\mathcal{A}$ is always indefinite in $\langle\cdot,\cdot\rangle_H$.

Many of the above are special cases of the Krzyzanowski preconditioner (Krzyzanowski (2011)):
$$P = \begin{pmatrix} I & 0 \\ cBA_0^{-1} & I \end{pmatrix}\begin{pmatrix} A_0 & 0 \\ 0 & S_0 \end{pmatrix}\begin{pmatrix} I & dA_0^{-1}B^T \\ 0 & I \end{pmatrix},$$
for which $P^{-1}\mathcal{A}$ is self-adjoint in $\langle\cdot,\cdot\rangle_H$ with
$$H = \epsilon\begin{pmatrix} A_0 - cA & 0 \\ 0 & S_0 + cdBA_0^{-1}B^T + dC \end{pmatrix}, \qquad \epsilon = \pm 1.$$
General (but not in general simple) formulae for the eigenvalues of $P^{-1}\mathcal{A}$ are available (Pestana & W (2012)).

Combination preconditioning. The final lemma above shows that if we can find $P_3$ and $H_3$ with
$$\alpha P_1^{-T}H_1 + \beta P_2^{-T}H_2 = P_3^{-T}H_3,$$
this gives a new preconditioner $P_3$ and a symmetric bilinear form $\langle\cdot,\cdot\rangle_{H_3}$ in which $P_3^{-1}\mathcal{A}$ is self-adjoint.

Combine Bramble-Pasciak and Benzi-Simoncini:
$$\alpha P_1^{-T}H_1 + \beta P_2^{-T}H_2 = \begin{pmatrix} (\alpha A_0^{-1} + \beta I)A - (\alpha + \beta\gamma)I & (\alpha A_0^{-1} + \beta I)B^T \\ -\beta B & -(\alpha + \beta\gamma)I \end{pmatrix}.$$
One possibility for $\alpha P_1^{-T}H_1 + \beta P_2^{-T}H_2 = P_3^{-T}H_3$ is
$$P_3^{-T} = \begin{pmatrix} \alpha A_0^{-1} + \beta I & 0 \\ 0 & -\beta I \end{pmatrix} \qquad\text{and}\qquad H_3 = \begin{pmatrix} A - (\alpha + \beta\gamma)(\alpha A_0^{-1} + \beta I)^{-1} & B^T \\ B & \frac{\alpha + \beta\gamma}{\beta}I \end{pmatrix}.$$

Combine BP and BP + αp T 1 H 1 + (1 α)p T 2 H 2 = A 1 0 A + (1 2α)I A 1 0 BT 0 (1 2α)I Eindhoven, 2012 p.23/30

Combine BP and BP + αp T 1 H 1 + (1 α)p T 2 H 2 = A 1 0 A + (1 2α)I A 1 0 BT 0 (1 2α)I can be split as P T A 1 3 = 0 A 1 0 BT 0 (1 2α)I,H 3 = A + (1 2α)A0 0 0 I Eindhoven, 2012 p.23/30

Combine BP and BP + αp T 1 H 1 + (1 α)p T 2 H 2 = A 1 0 A + (1 2α)I A 1 0 BT 0 (1 2α)I can be split as P T A 1 3 = 0 A 1 0 BT 0 (1 2α)I,H 3 = A + (1 2α)A0 0 0 I Recall P 1 3 A is self-adjoint in, H 3 Eindhoven, 2012 p.23/30

Combine BP and BP + αp T 1 H 1 + (1 α)p T 2 H 2 = A 1 0 A + (1 2α)I A 1 0 BT 0 (1 2α)I can be split as P T A 1 3 = 0 A 1 0 BT 0 (1 2α)I,H 3 = A + (1 2α)A0 0 0 I Recall P 1 3 A is self-adjoint in, H 3 (α = 1 BP, α = 0 BP + ) Eindhoven, 2012 p.23/30

Combine BP+ and SZ+:
$$P_3 = \begin{pmatrix} I & 0 \\ BA_0^{-1} & I \end{pmatrix}\begin{pmatrix} \frac{1}{\alpha+\beta}A_0 & \frac{\beta}{\alpha+\beta}B^T \\ 0 & S_0 \end{pmatrix}, \qquad H_3 = \begin{pmatrix} A + A_0 & 0 \\ 0 & (\alpha+\beta)S_0 + \beta BA_0^{-1}B^T \end{pmatrix}.$$
Clearly $H_3$ is positive definite at least for some $\alpha, \beta$, but $P_3^{-1}\mathcal{A}$ is always indefinite when $\langle\cdot,\cdot\rangle_{H_3}$ defines an inner product.

Combine BP+ and BD:
$$P_3 = \begin{pmatrix} A_0 & 0 \\ -\frac{\alpha}{\alpha+\beta}B & \frac{1}{\alpha+\beta}S_0 \end{pmatrix}, \qquad H_3 = \begin{pmatrix} \alpha(A + A_0) + \beta A_0 & 0 \\ 0 & S_0 \end{pmatrix}.$$
Theorem: if $\alpha > 0$ and $\alpha + \beta < 0$ then $H_3$ is positive definite (so $\langle\cdot,\cdot\rangle_{H_3}$ defines an inner product), and $P_3^{-1}\mathcal{A}$ is positive definite with respect to this inner product, if and only if
$$A_0 < -\frac{\alpha}{\alpha+\beta}\,A.$$
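
This is the headline result: two preconditioned matrices that are indefinite in their own inner products combine into one that is positive definite, so CG applies. A dense sketch with illustrative random blocks; the choice $A_0 = 1.1A$ is deliberately made to violate BP's requirement $A_0 < A$ while satisfying the combination condition $A_0 < -\frac{\alpha}{\alpha+\beta}A \approx 1.22A$ for $(\alpha, \beta) = (1.1, -2)$. Positive definiteness of $P_3^{-1}\mathcal{A}$ in $\langle\cdot,\cdot\rangle_{H_3}$ is checked through the symmetric matrix $H_3P_3^{-1}\mathcal{A} = (P_3^{-T}H_3)^T\mathcal{A}$, so $P_3$ itself is not needed.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 30, 12
G = rng.standard_normal((n, n)); A = np.eye(n) + G @ G.T
B = rng.standard_normal((m, n))
Amat = np.block([[A, B.T], [B, np.zeros((m, m))]])

alpha, beta = 1.1, -2.0
A0 = 1.1 * A                       # fails A_0 < A, satisfies A_0 < -alpha/(alpha+beta) * A
S0 = np.eye(m)
Zn, Zm = np.zeros((n, m)), np.zeros((m, n))

# BP+ pair and BD pair, as on the earlier slides
M_bpplus = np.block([[np.linalg.solve(A0, A) + np.eye(n), np.linalg.solve(A0, B.T)],
                     [Zm, np.eye(m)]])                    # P1^{-T} H1
M_bd = np.eye(n + m)                                      # P2^{-T} H2 = I since P2 = H2

M3 = alpha * M_bpplus + beta * M_bd                       # = P3^{-T} H3
H3 = np.block([[alpha * (A + A0) + beta * A0, Zn], [Zm, S0]])

S = M3.T @ Amat                                           # = H3 P3^{-1} Amat
print("H3 SPD:", np.linalg.eigvalsh(H3).min() > 0)
print("self-adjoint:", np.allclose(S, S.T))
print("positive definite in <.,.>_{H3}:", np.linalg.eigvalsh(0.5 * (S + S.T)).min() > 0)
print("BP inner product fails here:", np.linalg.eigvalsh(A - A0).min() < 0)
```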

CG iteration counts for the 4 standard IFISS Stokes test problems: Taylor-Hood elements (C = 0), A_0: no-fill ichol, S_0: mass matrix.

Problem              h      BP+   BD    Comb (alpha, beta)   % reduction
Channel flow         2^-3    41    38    27  (1.3, -2)        29
                     2^-4    59    57    43  (1.7, -2)        25
                     2^-5    95    95    86  (0.7, -0.6)       9
Backward step        2^-3    57    55    41  (1.4, -2)        25
                     2^-4    88    83    69  (1.4, -1.6)      17
                     2^-5   147   148   140  (1.2, -1)         5
Regularized cavity   2^-3    34    32    21  (1.1, -1.8)      34
                     2^-4    52    48    40  (1.2, -1.5)      17
                     2^-5    88    81    73  (1.9, -2)        10
Colliding flow       2^-3    28    28    20  (1.1, -1.8)      29
                     2^-4    46    41    34  (0.8, -1)        17
                     2^-5    72    71    56  (1.4, -1.5)      21

[Figure: relative preconditioned residual (from 10^0 down to 10^-6) versus iteration number (0 to 100) for BDI, BP+, BDW, Comb MR and Comb CG.]

CG iteration counts for the 4 standard IFISS Stokes test problems: Taylor-Hood elements (C = 0), A_0: 1 AMG V-cycle, S_0: mass matrix.

Problem          h      BP+   BD    Comb (alpha, beta)   % reduction
Channel flow     2^-3    31    29    18  (1.1, -2)        38
                 2^-4    36    33    19  (1.1, -2)        42
                 2^-5    39    34    20  (1.1, -2)        41
Backward step    2^-3    47    43    25  (1.1, -2)        42
                 2^-4    52    48    28  (1.1, -2)        42
                 2^-5    53    50    28  (1.1, -2)        44
Cavity flow      2^-3    30    26    15  (1.1, -2)        42
                 2^-4    34    30    18  (1.1, -2)        40
                 2^-5    35    32    18  (1.1, -2)        44
Colliding flow   2^-3    23    24    15  (1.1, -2)        35
                 2^-4    29    26    17  (1.1, -2)        35
                 2^-5    29    28    18  (1.1, -2)        36

Summary

- When the matrix is symmetric or self-adjoint, iterative methods have a more reliable and descriptive convergence theory, and we know what preconditioning is trying to achieve.
- For saddle-point problems, examples of self-adjointness in non-standard inner products exist, and further examples can be obtained by interpolation between such examples.
- Two indefinite examples can be combined to give a positive definite preconditioned matrix (and so allow CG in the associated inner product).
- The application here is to saddle-point matrices, but the theory is more general.

Acknowledgement. This work is partially supported by Award No. KUK-C1-013-04 made by King Abdullah University of Science and Technology (KAUST).