
A Sobolev trust-region method for numerical solution of the Ginzburg-Landau equations
Robert J. Renka and Parimah Kazemi
Department of Computer Science & Engineering, University of North Texas
June 6, 2012

Outline of Talk
- Sobolev Gradient
- Nonlinear Least Squares
- Preconditioned Gradient Descent
- Sobolev Trust Region Method
- Ginzburg-Landau Equations and Test Results

Sobolev Gradient

Suppose $\phi$ is a real-valued $C^1$ function on a Hilbert space $H$, so that for each $u \in H$ the Fréchet derivative of $\phi$ at $u$ is a bounded linear functional on $H$. Then, by the Riesz representation theorem, there is a unique element of $H$, termed the Sobolev gradient $\nabla_S \phi(u)$, such that
\[ \phi'(u)\,\delta u = \langle \delta u, \nabla_S \phi(u) \rangle_H \quad \forall\, \delta u \in H. \]
Note that
\[ \|\nabla_S \phi(u)\|_H = \sup_{\|\delta u\|_H = 1} \phi'(u)\,\delta u. \]

Gradient System

Theorem: Suppose $\phi$ is a nonnegative $C^1$ function with a locally Lipschitz continuous gradient on a Hilbert space $H$. Then for each $x \in H$ there is a unique function $z : [0, \infty) \to H$ such that
\[ z(0) = x, \qquad z'(t) = -\nabla_S \phi(z(t)), \quad t \ge 0. \]
Also, if $u = \lim_{t \to \infty} z(t)$ exists, then $\nabla_S \phi(u) = 0$.

J. W. Neuberger, Sobolev Gradients and Differential Equations, 2nd ed., Springer Lecture Notes in Mathematics #1670, Springer, 2010.

Finite Difference Discretization

Suppose $D : \mathbb{R}^n \to \mathbb{R}^{3m}$ is a finite difference approximation of the differential operator $\binom{I}{\nabla} : H^1(\Omega) \to L^2(\Omega)^3$ for $\Omega \subset \mathbb{R}^2$. Then the discretized Sobolev gradient satisfies
\[ \phi'(u)\,\delta u = \langle \delta u, \nabla_S \phi(u) \rangle_S = \langle D\delta u, D\,\nabla_S \phi(u) \rangle_{\mathbb{R}^{3m}} = \langle \delta u, D^t D\, \nabla_S \phi(u) \rangle_{\mathbb{R}^n} \quad \forall\, \delta u \in \mathbb{R}^n. \]
The Fréchet derivative is also represented by a Euclidean gradient $\nabla \phi(u)$ in the Euclidean inner product:
\[ \phi'(u)\,\delta u = \langle \delta u, \nabla \phi(u) \rangle_{\mathbb{R}^n}. \]
The gradients are thus related by the preconditioner $D^t D \approx I - \Delta$:
\[ \nabla_S \phi(u) = (D^t D)^{-1} \nabla \phi(u). \]
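As a sanity check, the defining identity $\langle \delta u, \nabla_S \phi(u) \rangle_S = \langle \delta u, \nabla \phi(u) \rangle_{\mathbb{R}^n}$ can be verified numerically. The following is a minimal pure-Python sketch (not the authors' code) for a 1-D analogue in which $D$ stacks the identity with a forward-difference operator; the mesh size, dimensions, and dense solver are illustrative assumptions.

```python
# Sketch: verify that the discrete Sobolev gradient g_S = (D^t D)^{-1} g
# satisfies <du, g_S>_S = <du, g> for <v, w>_S = (Dv).(Dw), where D is a
# 1-D analogue of (I, grad)^t: identity stacked on a forward difference.
import random

random.seed(3)

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def solve(A, b):
    # Gaussian elimination with partial pivoting on a small dense system.
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

n, h = 5, 0.2
D = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
D += [[(-1.0 if j == i else 1.0 if j == i + 1 else 0.0) / h for j in range(n)]
      for i in range(n - 1)]
DtD = [[sum(D[k][i] * D[k][j] for k in range(len(D))) for j in range(n)]
       for i in range(n)]

g = [random.uniform(-1, 1) for _ in range(n)]    # Euclidean gradient
g_S = solve(DtD, g)                              # Sobolev gradient

du = [random.uniform(-1, 1) for _ in range(n)]
lhs = sum(a * b for a, b in zip(matvec(D, du), matvec(D, g_S)))  # <du, g_S>_S
rhs = sum(a * b for a, b in zip(du, g))                          # <du, g>
assert abs(lhs - rhs) < 1e-9
```

Since $D^t D g_S = g$ by construction, the two inner products agree to rounding error for any $\delta u$.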

Finite Element Discretization

For $\Omega \subset \mathbb{R}^2$ define $D : H^1(\Omega) \to L^2(\Omega)^3$ by $Du = \binom{u}{\nabla u}$. Let $u_h = \sum u_j \psi_j$, where $\{\psi_j\}_{j=1}^n$ is a basis for an $n$-dimensional subspace $V_h \subset H^1(\Omega)$. The Fréchet derivative $\phi'(u_h)$ is usually not continuous in the $L^2$ norm, and $\phi$ therefore has no $L^2$ gradient. The components of the Euclidean gradient $e \in \mathbb{R}^n$, however, are projections in $H^1(\Omega)$ of the Sobolev gradient $\nabla_S \phi(u_h) = \sum g_j \psi_j$ onto the basis functions: for $i = 1, \ldots, n$,
\[ e_i = \frac{\partial \phi}{\partial u_i} = \phi'(u_h)\psi_i = \langle \psi_i, \nabla_S \phi(u_h) \rangle_{H^1(\Omega)} = \Big\langle D\psi_i, D \sum_j g_j \psi_j \Big\rangle_{L^2(\Omega)^3} = \sum_j \langle D\psi_i, D\psi_j \rangle_{L^2(\Omega)^3}\, g_j. \]
Thus the coefficients of the Sobolev gradient are computed from $Lg = e$ for the stiffness matrix
\[ L_{ij} = \langle D\psi_i, D\psi_j \rangle_{L^2(\Omega)^3} = \int_\Omega (\psi_i \psi_j + \nabla \psi_i \cdot \nabla \psi_j). \]
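A 1-D analogue of this stiffness matrix can be assembled in closed form for piecewise-linear hat functions on a uniform mesh; the sketch below (with an assumed mesh size, not from the talk) uses the standard element integrals for the mass and stiffness contributions.

```python
# Sketch: 1-D analogue of L_ij = int(psi_i psi_j + psi_i' psi_j') for
# piecewise-linear hat functions psi_i on a uniform mesh of spacing h.
# Standard closed-form integrals: mass diag 2h/3, off-diag h/6;
# stiffness diag 2/h, off-diag -1/h.
n, h = 6, 0.2
L = [[0.0] * n for _ in range(n)]
for i in range(n):
    L[i][i] = 2 * h / 3 + 2 / h
    if i + 1 < n:
        L[i][i + 1] = L[i + 1][i] = h / 6 - 1 / h

# L is symmetric and strictly diagonally dominant, hence positive definite,
# so Lg = e can be solved efficiently (e.g., by conjugate gradients).
assert all(L[i][j] == L[j][i] for i in range(n) for j in range(n))
assert all(L[i][i] > sum(abs(L[i][j]) for j in range(n) if j != i)
           for i in range(n))
```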

Nonlinear Least Squares Problem

Suppose $D$ is a discretized differential operator.

Energy functional: $\phi(u) = F(Du) = \frac{1}{2} \langle r(Du), r(Du) \rangle$

Euclidean gradient: $\phi'(u)h = F'(Du)Dh = \langle r(Du), r'(Du)Dh \rangle$, so
\[ \nabla \phi(u) = D^t \nabla F(Du) = D^t r'(Du)^t r(Du) \]

Hessian:
\[ \nabla^2 \phi(u) = D^t [\nabla^2 F(Du)] D = D^t \Big[ r'(Du)^t r'(Du) + \sum_i r_i(Du)\, \nabla^2 r_i(Du) \Big] D \]
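The gradient formula can be checked against finite differences on a toy example (the operator $D$ and residual $r$ below are illustrative assumptions, not from the talk):

```python
# Check that grad(phi) = D^t r'(Du)^t r(Du) matches a central-difference
# approximation of phi(u) = 0.5 ||r(Du)||^2 for a small 2x2 example.
import math

D = [[1.0, 0.5], [0.0, 2.0]]            # assumed discretized operator

def r(v):                                # assumed residual map R^2 -> R^2
    return [v[0] ** 2 - 1.0, math.sin(v[1])]

def rjac(v):                             # Jacobian r'(v)
    return [[2.0 * v[0], 0.0], [0.0, math.cos(v[1])]]

def Du(u):
    return [sum(D[i][j] * u[j] for j in range(2)) for i in range(2)]

def phi(u):
    return 0.5 * sum(ri ** 2 for ri in r(Du(u)))

def grad(u):
    v = Du(u)
    J, rv = rjac(v), r(v)
    w = [sum(J[k][i] * rv[k] for k in range(2)) for i in range(2)]   # r'^t r
    return [sum(D[k][i] * w[k] for k in range(2)) for i in range(2)]  # D^t w

u, h = [0.7, -0.3], 1e-6
fd = [(phi([u[0] + h, u[1]]) - phi([u[0] - h, u[1]])) / (2 * h),
      (phi([u[0], u[1] + h]) - phi([u[0], u[1] - h])) / (2 * h)]
g = grad(u)
assert all(abs(a - b) < 1e-5 for a, b in zip(g, fd))
```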

Linear Least Squares Problem

Energy functional: $\phi(u) = \frac{1}{2} \|Au - b\|^2$
Gradient: $\nabla \phi(u) = A^t (Au - b)$
Normal equations: $\nabla \phi(u) = A^t A u - A^t b = 0$
Hessian: $\nabla^2 \phi(u) = A^t A$

Advantages of Least Squares over Galerkin
- Symmetric positive definite, efficiently solved linear systems
- No restriction on choice of finite elements
- Universal type-independent formulation
- Robustness: no upwinding or artificial viscosity required for stability
- Functional serves as a built-in error indicator, used both to monitor convergence and for adaptive mesh refinement
- Treats multiphysics, and allows essential boundary conditions and more general side conditions to be treated as residuals

Preconditioned Gradient Descent Iteration

Iteration: $u_{n+1} = u_n - \alpha_n C_n^{-1} \nabla \phi(u_n)$, where $C_n^{-1} \nabla \phi$ is the gradient of $\phi$ with respect to the inner product $\langle v, w \rangle_{C_n} = \langle v, C_n w \rangle$.

1. $C_n = I$: standard steepest descent
2. $C_n = \mathrm{diag}(\nabla^2 \phi(u_n))$: Jacobi preconditioning
3. $C_n = D^t D$: standard Sobolev gradient descent
4. $C_n = D^t [r'(Du_n)^t r'(Du_n)] D$: Gauss-Newton
5. $C_n = \nabla^2 \phi(u_n)$: damped Newton iteration
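A toy quadratic illustrates the effect of the preconditioner choice. For $\phi(u) = \frac{1}{2} u^t A u - b^t u$ we have $\nabla^2 \phi = A$, so Jacobi preconditioning is $C = \mathrm{diag}(A)$. The matrix, step sizes, and iteration count below are illustrative assumptions (each method gets a step appropriate to its own metric):

```python
# Compare C = I (steepest descent) with C = diag(A) (Jacobi) on a badly
# scaled quadratic phi(u) = 0.5 u^t A u - b^t u, grad(phi) = A u - b.
A = [[100.0, 1.0], [1.0, 1.0]]
b = [1.0, 1.0]

def grad(u):
    return [sum(A[i][j] * u[j] for j in range(2)) - b[i] for i in range(2)]

def descend(Cdiag, alpha, iters=100):
    # u_{n+1} = u_n - alpha * C^{-1} grad(phi)(u_n), diagonal C
    u = [0.0, 0.0]
    for _ in range(iters):
        g = grad(u)
        u = [u[i] - alpha * g[i] / Cdiag[i] for i in range(2)]
    return u

plain = descend([1.0, 1.0], alpha=0.015)    # C = I; step limited by stiff mode
jacobi = descend([100.0, 1.0], alpha=0.9)   # C = diag(A); rescaled spectrum
res_plain = max(abs(g) for g in grad(plain))
res_jacobi = max(abs(g) for g in grad(jacobi))
assert res_jacobi < 1e-8 < res_plain        # Jacobi converges far faster here
```

Rescaling by $C^{-1}$ clusters the spectrum near 1, which is the same mechanism that makes $D^t D$ effective for differential operators.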

Preconditioner Tradeoffs

$I$ and $D^t D$ are independent of $u_n$; the other three choices correspond to variable metric methods. The last three are associated with Sobolev norms and preserve the regularity of $u_n$; if $r(Du) = Du - b$, they are identical. The Newton and Gauss-Newton iterations can achieve ultimately quadratic convergence rates, and each iterate can be computed by PCG with $D^t D$ as preconditioner. All except Gauss-Newton apply to a more general energy functional.

Sobolev Preconditioning of Newton Iterations

For $D : H \to K$ and $\phi(u) = F(Du)$, consider $D^* D$ as a preconditioning operator for $g_0'(u) = D^* g_F'(Du) D$, where $g_0$ and $g_F$ are $L^2$ gradients of $\phi$ and $F$, respectively. Denote the Sobolev gradient of $\phi$ by $g_1$. Then
\[ \phi'(u)h = F'(Du)Dh = \langle Dh, g_F(Du) \rangle_K = \langle h, g_1(u) \rangle_H. \]
Hence
\[ \phi''(u)kh = \langle Dh, g_F'(Du)Dk \rangle_K = \langle h, g_1'(u)k \rangle_H = \langle Dh, D g_1'(u)k \rangle_K. \]
Suppose $g_F'(Du)$ is coercive and bounded on $R(D)$ with spectral bounds $0 < m \le M < \infty$:
\[ m \le \frac{\langle Dh, g_F'(Du)Dh \rangle_K}{\langle Dh, Dh \rangle_K} = \frac{\langle Dh, D g_1'(u)h \rangle_K}{\langle Dh, Dh \rangle_K} \le M \quad \forall\, h \in H. \]

Sobolev Preconditioning of Newton Iterations (continued)

The above expression is a weak form of
\[ m \le \frac{\langle h, D^* D\, g_1'(u) h \rangle_{L^2}}{\langle h, D^* D h \rangle_{L^2}} \le M, \]
where $(D^* D) g_1'(u) = g_0'(u)$. Hence $g_0'(u)$ and $D^* D$ are spectrally equivalent and define equivalent Sobolev norms. Using a discretization of $D^* D$ as preconditioner for Newton steps in PCG results in convergence rate
\[ r = \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \]
for spectral condition number $\kappa \le M/m$ in the Sobolev norm: $\|e_n\| / \|e_0\| \le 2 r^n$.
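Plugging sample numbers into this bound makes its strength concrete (the condition number and iteration count below are illustrative, not from the talk):

```python
# Evaluate the PCG error bound ||e_n||/||e_0|| <= 2 r^n for a sample
# condition number kappa = 9, where r = (sqrt(kappa)-1)/(sqrt(kappa)+1).
kappa = 9.0
r = (kappa ** 0.5 - 1.0) / (kappa ** 0.5 + 1.0)   # = (3-1)/(3+1) = 0.5
bound = 2.0 * r ** 10                              # bound on ||e_10||/||e_0||
assert abs(r - 0.5) < 1e-12
assert bound < 0.002     # roughly three digits gained in 10 iterations
```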

Standard Method for Least Squares Treatment of PDEs

Rather than treating the nonlinear least squares problem of computing critical points of $\phi$, the usual approach is to first linearize $r(Du)$ and then solve a linear least squares problem for the perturbation $\Delta u = u_{n+1} - u_n$. Newton's method for the linearization leads to the functional
\[ \psi(\Delta u) = \tfrac{1}{2} \| r(Du_n) + r'(Du_n) D \Delta u \|^2. \]
The linear system $\nabla \psi(\Delta u) = 0$ is a Gauss-Newton step for a zero of $\nabla \phi$; i.e., Newton for $r = 0$ is Gauss-Newton for $\nabla \phi = 0$. Our approach is more flexible: in the case of large residuals, a Newton iteration is usually required to achieve a quadratic rate of convergence.

Trust Region Method

Trust region subproblem:
\[ \min_d\; q(d) = \phi(u_n) + d^t \nabla \phi(u_n) + \tfrac{1}{2} d^t H_n d \quad \text{subject to} \quad \|d\|_{C_n} = \sqrt{d^t C_n d} \le \Delta, \]
where $H_n \approx \nabla^2 \phi(u_n)$ and the radius $\Delta$ is adjusted according to the ratio of actual to predicted reduction in $\phi$:
\[ \rho = \frac{\phi(u_n) - \phi(u_n + d)}{q(0) - q(d)}. \]
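The radius update driven by $\rho$ can be sketched as follows (the thresholds 0.25/0.75 and growth/shrink factors are conventional choices, not taken from the talk):

```python
# Minimal sketch of standard trust-region radius bookkeeping: shrink the
# region when the model predicts poorly, grow it after a good step that
# reached the boundary, and otherwise leave it alone.
def update_radius(rho, delta, step_norm, delta_max=10.0):
    if rho < 0.25:                                  # poor agreement: shrink
        return 0.25 * delta
    if rho > 0.75 and abs(step_norm - delta) < 1e-12:
        return min(2.0 * delta, delta_max)          # good boundary step: grow
    return delta                                    # otherwise keep the radius

assert update_radius(0.1, 1.0, 1.0) == 0.25   # model poor -> shrink
assert update_radius(0.9, 1.0, 1.0) == 2.0    # model good at boundary -> grow
assert update_radius(0.5, 1.0, 0.3) == 1.0    # interior step -> unchanged
```

The step $d$ is then accepted only when $\rho$ is positive (often when $\rho$ exceeds a small threshold), so $\phi$ decreases monotonically.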

Levenberg-Marquardt Method

Theorem: The trust region subproblem has global solution $d$ iff $d$ is feasible and there exists $\lambda \ge 0$ such that $H_n + \lambda C_n$ is positive semidefinite,
\[ (H_n + \lambda C_n) d = -\nabla \phi(u_n), \]
and either $\lambda = 0$ or $\|d\|_{C_n} = \Delta$.

The trust region method is thus equivalent to a blend of a Newton-like method and a steepest descent method, with $\lambda$ implicitly defined by $\Delta$.
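The correspondence between $\lambda$ and $\Delta$ rests on the fact that, for positive definite $H_n$ and $C_n$, the norm of the regularized step decreases monotonically in $\lambda$. A toy 2x2 check with $C_n = I$ (the matrix and gradient below are illustrative assumptions):

```python
# For SPD H and C = I, the solution of (H + lambda*I) d = -g has norm
# strictly decreasing in lambda, so each radius Delta picks out a unique
# lambda >= 0.
H = [[3.0, 1.0], [1.0, 2.0]]
g = [1.0, -1.0]

def step_norm(lam):
    # Solve the 2x2 system (H + lam*I) d = -g by the explicit inverse.
    a, b_, c, d = H[0][0] + lam, H[0][1], H[1][0], H[1][1] + lam
    det = a * d - b_ * c
    dx = -(d * g[0] - b_ * g[1]) / det
    dy = -(-c * g[0] + a * g[1]) / det
    return (dx * dx + dy * dy) ** 0.5

norms = [step_norm(lam) for lam in (0.0, 1.0, 10.0, 100.0)]
assert all(n1 > n2 for n1, n2 in zip(norms, norms[1:]))
```

At $\lambda = 0$ the step is the full Newton-like step; as $\lambda \to \infty$ it shrinks toward a short step along the (preconditioned) steepest descent direction.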

Trust Region Norms

The shape of the trust region is defined by the norm $\|d\|_{C_n} = \sqrt{d^t C_n d}$.
- Euclidean norm $C_n = I$: Levenberg method
- Hyperellipsoidal norm $C_n = \mathrm{diag}(H_n)$: Levenberg-Marquardt method
- Sobolev norm $C_n = D^t D$: Levenberg-Marquardt-Neuberger (LMN) method
The other two choices for $C_n$ are not useful. Only $C_n = D^t D$ restricts the trust region to the Sobolev space: a Sobolev trust region.

Ginzburg-Landau Energy Functional in 2D
\[ G(\psi, A) = \frac{1}{2} \int_\Omega \left[ |\nabla \psi - iA\psi|^2 + |\nabla \times A - H_0|^2 + \frac{\kappa^2}{2} (|\psi|^2 - 1)^2 \right] \]
- $G$ = Gibbs free energy
- $\psi$ = complex-valued order parameter; $|\psi|^2$ = density of superconducting electrons
- $A$ = vector potential of induced magnetic field $H = \nabla \times A$
- $H_0$ = external magnetic field (normal to the plane of $\Omega$)
- $\kappa$ = Ginzburg-Landau parameter; $\kappa > 1/\sqrt{2}$: Type II mixed state

Superconductor States

Nonlinear Natural Boundary Conditions

Integration by parts applied to the first variation of $G$ gives the natural boundary conditions
\[ H - H_0 = 0, \qquad (\nabla \psi - iA\psi) \cdot n = 0. \]
It follows from the second condition that $J \cdot n = 0$ and $\nabla(|\psi|) \cdot n = 0$ on the boundary.

Gauge Transformation

The functional $G$, electron density $|\psi|^2$, magnetic field $\nabla \times A$, and current density $J = |\psi|^2 (\nabla \theta - A)$, where $\theta$ is the phase of $\psi$, are invariant under a gauge transformation
\[ \psi \to \psi e^{i\chi}, \qquad A \to A + \nabla \chi \]
for a real-valued function $\chi \in H^2(\Omega)$. The Coulomb gauge constraint is $\nabla \cdot A = 0$ in $\Omega$ and $A \cdot n = 0$ on the boundary.

Least Squares Formulation, Real Functions

Let $H = [H^1(\Omega)]^4$, $L_1 = [L^2(\Omega)]^{12}$, and $L_2 = [L^2(\Omega)]^7$. For $(\psi, A) \in H$, with $\psi = p + iq$ and $A = (a, b)$, define $D : H \to L_1$ by
\[ D(\psi, A) = \left( \binom{p}{\nabla p}, \binom{q}{\nabla q}, \binom{a}{\nabla a}, \binom{b}{\nabla b} \right), \]
and define $r : \mathbb{R}^{12} \to \mathbb{R}^7$ by
\[ r(D(\psi, A)) = \begin{pmatrix} p_1 + aq \\ q_1 - ap \\ p_2 + bq \\ q_2 - bp \\ b_1 - a_2 - H_0 \\ \frac{\kappa}{\sqrt{2}} (p^2 + q^2 - 1) \\ a_1 + b_2 \end{pmatrix}. \]
Then the energy functional is
\[ E(\psi, A) = \frac{1}{2} \| r(D(\psi, A)) \|_{L_2}^2. \]
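A quick pointwise check confirms that the first four residual components reproduce the kinetic term $|\nabla \psi - iA\psi|^2$ of the energy, writing $p_1, p_2, q_1, q_2$ for the partial derivatives of $p$ and $q$ (the sample values below are arbitrary assumptions):

```python
# Verify pointwise that (p1+aq)^2 + (q1-ap)^2 + (p2+bq)^2 + (q2-bp)^2
# equals |grad(psi) - i A psi|^2 for psi = p + iq, A = (a, b).
p, q, a, b = 0.6, -0.2, 0.3, 0.9
p1, p2, q1, q2 = 0.1, -0.4, 0.7, 0.2

psi = complex(p, q)
d1 = complex(p1, q1) - 1j * a * psi   # first component of grad(psi) - iApsi
d2 = complex(p2, q2) - 1j * b * psi   # second component
lhs = abs(d1) ** 2 + abs(d2) ** 2

rhs = ((p1 + a * q) ** 2 + (q1 - a * p) ** 2 +
       (p2 + b * q) ** 2 + (q2 - b * p) ** 2)
assert abs(lhs - rhs) < 1e-12
```

The fifth and sixth components likewise reproduce the field and condensation terms of $G$, while the seventh, $a_1 + b_2 = \nabla \cdot A$, enforces the Coulomb gauge as a residual.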

Numerical Methods for Ginzburg-Landau Equations
- Discretize the Euler-Lagrange equations and boundary conditions, linearize, and solve.
- Discretize the time-dependent equations and boundary conditions, and evolve the system to steady state.
- Discretize the energy functional and compute a zero of the Euclidean gradient.
The Sobolev gradient method is NOT a preconditioned Euler iteration for evolving the time-dependent equations. We use a Sobolev trust-region dogleg method with the Hessian modified by the Coulomb gauge constraint $A \cdot n = 0$ on the boundary.

Numerical Experiments
- $\Omega = [0, 5] \times [0, 5]$
- Coarse grid: 65 by 65; fine grid: 129 by 129
- Initial estimate: constant functions $\psi = 1$, $A = 0$
- Convergence defined by upper bounds of $0.5 \times 10^{-13}$ on the mean absolute Euclidean gradient component and $10^{-15}$ on the squared trust-region radius

Iteration Counts

Grid   | H_0 | Iterations | E      | Error
-------|-----|------------|--------|--------
Coarse | 4   | 23         | 44.677 | 4.3e-14
Coarse | 6   | 22         | 55.946 | 1.0e-13
Coarse | 8   | 63         | 67.255 | 4.2e-16
Fine   | 4   | 87         | 44.046 | 2.0e-16
Fine   | 6   | 28         | 55.845 | 9.2e-11
Fine   | 8   | 88         | 65.858 | 6.4e-15

Table: Iteration counts, energy values, and errors for $\kappa = 4$.

Electron Density on Coarse Grid, κ = 4, H 0 = 4

Electron Density on Fine Grid, κ = 4, H 0 = 4

Electron Density on Fine Grid, κ = 4, H 0 = 6

Electron Density on Fine Grid, κ = 4, H 0 = 8