A new primal-dual path-following method for convex quadratic programming


Comp. Appl. Math., Volume 25, N. 1, pp. 97–110, 2006. Copyright © 2006 SBMAC. ISSN 0101-8205. www.scielo.br/cam

A new primal-dual path-following method for convex quadratic programming

MOHAMED ACHACHE
Département de Mathématiques, Faculté des Sciences, University Ferhat Abbas, Sétif 19000, Algérie
E-mail: achache_m@yahoo.fr

Abstract. In this paper, we describe a new primal-dual path-following method to solve a convex quadratic program (QP). The derived algorithm is based on new techniques for finding a new class of search directions similar to the ones developed in a recent paper by Darvay for linear programs. We prove that the short-update algorithm finds an ε-solution of (QP) in polynomial time.

Mathematical subject classification: 90C20, 90C51, 90C60.

Key words: convex quadratic programming, interior point methods, primal-dual path-following methods, convergence of algorithms.

1 Introduction

In this paper we consider the following convex quadratic program in its classical standard form, called the primal (QP):

    minimize c^T x + (1/2) x^T Qx,  s.t. Ax = b, x ≥ 0,   (1)

and its dual form

    maximize b^T y − (1/2) x^T Qx over (y, x, z),  s.t. A^T y + z − Qx = c, z ≥ 0.   (2)

#636/05. Received: /VI/05. Accepted: 6/X/05.

where Q is a given n × n matrix, c ∈ R^n, b ∈ R^m and A is a given m × n matrix.

Recently, other important formulations have been used to study convex quadratic programs, such as second-order cone optimization (SOCP); these are currently the formulations most often used for convex quadratic programs. See, e.g., [5]. Quadratic programming is a special case of nonlinear programming and has a wide range of applications. A classical method to solve convex (QP) is Wolfe's algorithm, which can be considered a direct extension of the well-known simplex method; in the worst case its complexity is exponential. There is therefore growing interest in developing efficient, robust, polynomial-time algorithms for the problem.

Primal-dual path-following methods are among the most efficient interior point methods (IPM) for solving linear and quadratic programming, complementarity problems (CP) and general conic optimization problems (COP). These algorithms are, on the one hand, of Newton type and thus lead to efficient practical implementations, and on the other hand they have polynomial time complexity. Recently, many interior point methods have been proposed for solving (QP) either in standard format or via (SOCP). These methods are based on new search directions with good theoretical properties, such as polynomial time complexity, together with efficient practical implementations. See, e.g., [1, 2, 5, 6, 7].

In this paper, we extend a recent approach developed by Darvay [6] for linear programs to the (QP) case. The approach is based on the introduction of a monotonic function ϕ of one real variable. The equations x_i z_i = µ in the nonlinear system which defines the central path (see Wright [8]) are replaced by

    ϕ(x_i z_i / µ) = ϕ(1).

Of course, if ϕ is the identity function we recover the classical path-following method. For ϕ(t) = √t we define a new class of search directions and hence a new primal-dual path-following method for approximating the central path. We prove that the related short-update algorithm solves (QP) locally quadratically and in polynomial time.

We also mention some related and interesting IPM approaches studied recently by Bai et al. [4]. They have likewise defined a new class of search directions for linear optimization by using the so-called kernel

function, that is, any univariate strictly convex function k(t) that is minimal at t = 1 and satisfies k(1) = 0. By defining

    k(t) = ∫ from 1 to t of [ϕ(ξ²) − ϕ(1)] / [ξ ϕ'(ξ²)] dξ,

one obtains for ϕ(t) = √t the kernel function k(t) = (t − 1)², which defines exactly the same search direction and the same complexity. For ϕ(t) = t it corresponds to the kernel function k(t) = (t² − 1)/2 − log t of the logarithmic barrier function, which gives the classical search direction. This new search direction has also been analyzed by the authors (Peng, Roos and Terlaky) in the more general context of self-regular proximity measures for linear and second-order cone optimization problems. See, e.g., [7].

The paper is organized as follows. In the next section, the problem is presented. Section 3 deals with the new search directions. The description of the algorithm and its convergence analysis are presented in Section 4. Section 5 ends the paper with a conclusion.

Our notation is classical. In particular, R^n denotes the space of real n-dimensional vectors. Given u, v ∈ R^n, u^T v is their inner product and ‖u‖ is the Euclidean norm, whereas ‖u‖_∞ is the l_∞-norm. Given a vector u in R^n, U = diag(u) is the n × n diagonal matrix with U_ii = u_i for all i.

2 The statement of the problem

Throughout the paper the following assumptions hold.

Positive semidefiniteness (PSD). The matrix Q is symmetric and positive semidefinite.

Interior point condition (IPC). There exists (x^0, y^0, z^0) such that Ax^0 = b, A^T y^0 + z^0 − Qx^0 = c, x^0 > 0, z^0 > 0.

Full rank condition (FR). The matrix A is of rank m (m ≤ n).
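For small instances, the (PSD) and (FR) assumptions can be verified mechanically. The following sketch is our own helper code (not from the paper): it tests positive semidefiniteness of a symmetric matrix by symmetric Gaussian elimination (an LDL^T-style factorization, where all pivots must be nonnegative and a zero pivot forces its remaining row to vanish), and it tests the full rank condition by computing the row rank.

```python
def is_psd(Q, tol=1e-10):
    """Check symmetric positive semidefiniteness by symmetric elimination:
    every pivot must be >= 0, and a (near-)zero pivot must have a zero row."""
    n = len(Q)
    S = [row[:] for row in Q]
    # Symmetry is part of the (PSD) assumption.
    if any(abs(S[i][j] - S[j][i]) > tol for i in range(n) for j in range(n)):
        return False
    for k in range(n):
        if S[k][k] < -tol:
            return False                      # negative pivot: indefinite
        if S[k][k] <= tol:
            # zero pivot: the rest of row k must vanish, else not PSD
            if any(abs(S[k][j]) > tol for j in range(k + 1, n)):
                return False
            continue
        for i in range(k + 1, n):
            f = S[i][k] / S[k][k]
            for j in range(k + 1, n):
                S[i][j] -= f * S[k][j]
    return True

def row_rank(A, tol=1e-10):
    """Row rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in A]
    rows, ncols = len(M), len(M[0])
    rank, col = 0, 0
    while rank < rows and col < ncols:
        piv = max(range(rank, rows), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) <= tol:
            col += 1
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(rank + 1, rows):
            f = M[r][col] / M[rank][col]
            for k in range(col, ncols):
                M[r][k] -= f * M[rank][k]
        rank += 1
        col += 1
    return rank

# Illustrative data (our own, not from the paper):
Q = [[2.0, 0.0], [0.0, 0.0]]     # PSD but singular: assumption (PSD) holds
A = [[1.0, 1.0]]                 # one row, rank 1: assumption (FR) holds
assert is_psd(Q)
assert not is_psd([[0.0, 1.0], [1.0, 0.0]])   # indefinite counterexample
assert row_rank(A) == len(A)
```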

The optimal solutions of the primal and the dual are the solutions of the nonlinear system of (2n + m) equations:

    Ax = b, x ≥ 0,
    A^T y + z − Qx = c, z ≥ 0,
    x_i z_i = 0, for i = 1, …, n.   (3)

The first equation of system (3) represents primal feasibility, the second dual feasibility and the last the complementarity condition. The classical path-following method consists in introducing a positive parameter µ and considering the nonlinear system parameterized by µ:

    Ax = b, A^T y + z − Qx = c, x > 0, z > 0,
    x_i z_i = µ, for i = 1, …, n.   (4)

It can be shown, under our assumptions, that system (4) has a unique solution (x(µ), y(µ), z(µ)). The path µ ↦ (x(µ), y(µ), z(µ)) is called the central path (see for instance Wright [8]). It is known that as µ → 0, (x(µ), y(µ), z(µ)) tends to a solution of (3). Applying Newton's method to system (4), one obtains the classical primal-dual path-following algorithm. In the next section, we shall present a new method for approximating the central path.

3 New search directions

In the proposed method, we replace the n equations x_i z_i = µ in (4) by the n equivalent equations ϕ(x_i z_i / µ) = ϕ(1), where ϕ is a real-valued function on [0, +∞), differentiable on (0, +∞) and such that ϕ'(t) > 0 for all t > 0. Given µ > 0 and (x, y, z) such that A^T y + z − Qx = c, Ax = b, x > 0 and z > 0, but not such that ϕ(x_i z_i / µ) = ϕ(1) for all i, a new triple (x + Δx, y + Δy, z + Δz) is obtained thanks to Newton's method for solving the nonlinear

system. One obtains

    A Δx = 0,
    A^T Δy + Δz − Q Δx = 0,
    ϕ'(x_i z_i / µ) (z_i Δx_i + x_i Δz_i) / µ = ϕ(1) − ϕ(x_i z_i / µ), for i = 1, …, n.   (5)

For convenience let us introduce the vectors v and p of R^n defined by

    v_i = √(x_i z_i / µ)  and  p_i = (ϕ(1) − ϕ(v_i²)) / (v_i ϕ'(v_i²)), for i = 1, …, n.

Next let us introduce the scaled directions

    d_x = V X^(−1) Δx, d_z = V Z^(−1) Δz, d_y = −Δy,   (6)

and the matrices

    Ā = (1/µ) A X V^(−1)  and  Q̄ = (1/µ) V^(−1) X Q X V^(−1).

Then system (5) reduces to the system

    Ā d_x = 0,
    Ā^T d_y − d_z + Q̄ d_x = 0,
    d_x + d_z = p,

which leads to the system

    [ I + Q̄   Ā^T ] [ d_x ]   [ p ]
    [   Ā      0  ] [ d_y ] = [ 0 ],

which is nonsingular since Q̄ is positive semidefinite and Ā has full row rank. Then d_z is computed by the formula d_z = p − d_x.

Remark 3.1. If we adopt ϕ(t) = t, then we recover the classical primal-dual path-following method.

In this paper, we shall consider ϕ(t) = √t. Then ϕ'(t) = 1/(2√t), and it follows that

    p_i = 2(1 − √(x_i z_i / µ)) = 2(1 − v_i), for all i,

or

    p = 2(e − v),   (7)

where e = (1, …, 1)^T, and the system (5) becomes

    A Δx = 0,
    A^T Δy + Δz − Q Δx = 0,
    z_i Δx_i + x_i Δz_i = µ v_i p_i, for i = 1, …, n.   (8)

In addition, with the notation in (6), we have

    x_i Δz_i + z_i Δx_i = µ v_i (d_{x,i} + d_{z,i}), for i = 1, …, n,   (9)

and

    µ d_{x,i} d_{z,i} = Δx_i Δz_i, for i = 1, …, n.   (10)

4 Description of the algorithm and its convergence analysis

Now we define a proximity measure for the central path as

    δ(XZe, µ) = ‖p‖/2 = ‖e − v‖.   (11)

In addition, we define the vector q by

    q_i = d_{x,i} − d_{z,i}.

Then

    4 d_{x,i} d_{z,i} = p_i² − q_i²,  and  ‖q‖ ≤ ‖p‖.   (12)

The last inequality follows from ‖p‖² = ‖q‖² + 4 d_x^T d_z, since

    d_x^T d_z = d_x^T (Q̄ d_x + Ā^T d_y) = d_x^T Q̄ d_x ≥ 0,

because Ā d_x = 0 and Q̄ is a positive semidefinite matrix.
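For ϕ(t) = √t these search directions are easy to compute numerically. The following pure-Python sketch (the data A, Q and the iterate (x, z) are our own illustrative choices, not from the paper) forms v and p = 2(e − v), builds the scaled data Ā and Q̄, solves the reduced block system for (d_x, d_y), recovers d_z = p − d_x, and checks the identities above.

```python
import math

def solve(M, rhs):
    """Solve the square system M u = rhs by Gaussian elimination with partial pivoting."""
    n = len(M)
    a = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(a[r][c]))
        a[c], a[piv] = a[piv], a[c]
        for r in range(c + 1, n):
            f = a[r][c] / a[c][c]
            for k in range(c, n + 1):
                a[r][k] -= f * a[c][k]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        u[r] = (a[r][n] - sum(a[r][k] * u[k] for k in range(r + 1, n))) / a[r][r]
    return u

# Illustrative data (not from the paper): n = 2, m = 1, strictly positive iterate.
A = [[1.0, 1.0]]
Q = [[2.0, 0.0], [0.0, 1.0]]
x, z, mu = [0.6, 0.4], [0.5, 0.9], 0.3
n, m = 2, 1

v = [math.sqrt(x[i] * z[i] / mu) for i in range(n)]
p = [2.0 * (1.0 - vi) for vi in v]                 # p = 2(e - v) for phi(t) = sqrt(t)

# Proximity measure (11): delta = ||p||/2 = ||e - v||.
delta = math.sqrt(sum((1.0 - vi) ** 2 for vi in v))
assert abs(math.sqrt(sum(pi * pi for pi in p)) / 2.0 - delta) < 1e-12

# Scaled data: Abar = (1/mu) A X V^{-1}, Qbar = (1/mu) V^{-1} X Q X V^{-1}.
Abar = [[A[j][i] * x[i] / (mu * v[i]) for i in range(n)] for j in range(m)]
Qbar = [[Q[i][k] * x[i] * x[k] / (mu * v[i] * v[k]) for k in range(n)] for i in range(n)]

# Block system [[I + Qbar, Abar^T], [Abar, 0]] (d_x, d_y) = (p, 0).
M = [[(1.0 if i == k else 0.0) + Qbar[i][k] for k in range(n)]
     + [Abar[j][i] for j in range(m)] for i in range(n)]
M += [[Abar[j][i] for i in range(n)] + [0.0] * m for j in range(m)]
d = solve(M, p + [0.0] * m)
dx, dy = d[:n], d[n:]
dz = [p[i] - dx[i] for i in range(n)]              # d_z = p - d_x

# Checks: Abar d_x = 0, the scaled dual equation, and d_x^T d_z = d_x^T Qbar d_x >= 0.
assert abs(sum(Abar[0][i] * dx[i] for i in range(n))) < 1e-9
assert all(abs(Abar[0][i] * dy[0] - dz[i]
               + sum(Qbar[i][k] * dx[k] for k in range(n))) < 1e-9 for i in range(n))
assert sum(dx[i] * dz[i] for i in range(n)) >= -1e-12
```

The final assertion is exactly the inequality d_x^T d_z ≥ 0 used to justify ‖q‖ ≤ ‖p‖ in (12).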

4.1 The algorithm

Let ε > 0 be the given tolerance, 0 < θ < 1 the update parameter (default θ = 1/(2√n)), and 0 < τ < 1 the proximity parameter (default τ = 1/2). Assume that the IPC holds for the triple (x^0, y^0, z^0), and let µ^0 = (x^0)^T z^0 / n. In addition, we assume that δ(X^0 Z^0 e, µ^0) < τ.

begin
  x := x^0; y := y^0; z := z^0; µ := µ^0;
  while x^T z ≥ ε do
  begin
    µ := (1 − θ)µ;
    compute (Δx, Δy, Δz) from (8);
    x := x + Δx; y := y + Δy; z := z + Δz;
  end
end

In what follows we prove that the algorithm converges locally quadratically and runs in polynomial time.

4.2 Complexity analysis

In the following lemma, we state a condition which ensures the feasibility of the full Newton step. Let x̄ = x + Δx and z̄ = z + Δz be the new iterate after a full Newton step.

Lemma 4.1. Let δ = δ(XZe, µ) < 1. Then the full Newton step is strictly feasible; hence x̄ > 0 and z̄ > 0.

Proof. For each 0 ≤ α ≤ 1, let x(α) = x + α Δx and z(α) = z + α Δz. Hence

    x_i(α) z_i(α) = x_i z_i + α (x_i Δz_i + z_i Δx_i) + α² Δx_i Δz_i, for all i = 1, …, n.

Now, in view of (9) and (10) we have

    x_i(α) z_i(α) = µ (v_i² + α v_i (d_{x,i} + d_{z,i}) + α² d_{x,i} d_{z,i}), for all i = 1, …, n.

In addition, from (7) we have p_i = 2(1 − v_i), and thus

    v_i² + v_i p_i = 1 − p_i²/4, for i = 1, …, n.

Thereby, using d_{x,i} + d_{z,i} = p_i and 4 d_{x,i} d_{z,i} = p_i² − q_i²,

    x_i(α) z_i(α)/µ = (1 − α) v_i² + α (v_i² + v_i p_i) + α² (p_i² − q_i²)/4
                    = (1 − α) v_i² + α (1 − (1 − α) p_i²/4 − α q_i²/4), for i = 1, …, n.   (13)

Thus the inequality x_i(α) z_i(α) > 0 holds if

    max_i ((1 − α) p_i²/4 + α q_i²/4) < 1.

Using (11) and (12) we get

    max_i ((1 − α) p_i²/4 + α q_i²/4) ≤ (1 − α) ‖p‖²/4 + α ‖q‖²/4
                                     ≤ (1 − α) ‖p‖²/4 + α ‖p‖²/4 = (‖p‖/2)² = δ² < 1.

Hence x_i(α) z_i(α) > 0 for each 0 ≤ α ≤ 1. Since x_i(α) and z_i(α) are linear functions of α, they do not change sign on the interval [0, 1]; for α = 0 we have x_i(0) > 0 and z_i(0) > 0, and this leads to x_i(1) > 0 and z_i(1) > 0 for all i. □

In the next lemma we state that under the condition δ(XZe, µ) < 1, the full Newton step is quadratically convergent.
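Stepping back from the analysis for a moment, the short-update algorithm of subsection 4.1 can be sketched end to end. The instance below is a hypothetical toy problem (A, Q, b and the starting triple are our own choices, not from the paper); each iteration shrinks µ by the factor (1 − θ) and takes one full Newton step computed from the unscaled system (8), and the final assertions check strict feasibility of the iterates (Lemma 4.1) and the iteration count against the bound of Theorem 4.6.

```python
import math

def solve(M, rhs):
    """Solve M u = rhs by Gaussian elimination with partial pivoting."""
    n = len(M)
    a = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(a[r][c]))
        a[c], a[piv] = a[piv], a[c]
        for r in range(c + 1, n):
            f = a[r][c] / a[c][c]
            for k in range(c, n + 1):
                a[r][k] -= f * a[c][k]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        u[r] = (a[r][n] - sum(a[r][k] * u[k] for k in range(r + 1, n))) / a[r][r]
    return u

def newton_step(A, Q, x, z, mu):
    """Full Newton step from system (8), feasible iterate assumed:
    A dx = 0, A^T dy + dz - Q dx = 0,
    z_i dx_i + x_i dz_i = mu v_i p_i = 2 (sqrt(mu x_i z_i) - x_i z_i)."""
    n, m = len(x), len(A)
    N = 2 * n + m
    M = [[0.0] * N for _ in range(N)]
    rhs = [0.0] * N
    for j in range(m):                              # A dx = 0
        for i in range(n):
            M[j][i] = A[j][i]
    for i in range(n):                              # -Q dx + A^T dy + dz = 0
        for k in range(n):
            M[m + i][k] = -Q[i][k]
        for j in range(m):
            M[m + i][n + j] = A[j][i]
        M[m + i][n + m + i] = 1.0
    for i in range(n):                              # linearized centering equation
        M[m + n + i][i] = z[i]
        M[m + n + i][n + m + i] = x[i]
        rhs[m + n + i] = 2.0 * (math.sqrt(mu * x[i] * z[i]) - x[i] * z[i])
    d = solve(M, rhs)
    return d[:n], d[n:n + m], d[n + m:]

# Toy instance: min -2.5 x1 + 0.5 x2 + x1^2 + x2^2 s.t. x1 + x2 = 2, x >= 0.
A = [[1.0, 1.0]]
Q = [[2.0, 0.0], [0.0, 2.0]]
x, y, z = [1.5, 0.5], [0.0], [0.5, 1.5]     # strictly feasible, x_i z_i = 0.75
n = 2
mu = sum(a * b for a, b in zip(x, z)) / n   # mu^0 = (x^0)^T z^0 / n, so delta = 0
theta, eps = 1.0 / (2.0 * math.sqrt(n)), 1e-4
gap0 = sum(a * b for a, b in zip(x, z))

iters = 0
while sum(a * b for a, b in zip(x, z)) >= eps:
    mu *= 1.0 - theta
    dx, dy, dz = newton_step(A, Q, x, z, mu)
    x = [a + b for a, b in zip(x, dx)]
    y = [a + b for a, b in zip(y, dy)]
    z = [a + b for a, b in zip(z, dz)]
    iters += 1

gap = sum(a * b for a, b in zip(x, z))
bound = math.ceil(2.0 * math.sqrt(n) * math.log(gap0 / eps))   # Theorem 4.6
assert gap < eps and iters <= bound
assert all(t > 0 for t in x) and all(t > 0 for t in z)         # Lemma 4.1 in action
assert abs(x[0] + x[1] - 2.0) < 1e-8                           # Ax = b is preserved
```

Since the starting triple lies exactly on the central path (δ = 0 < τ), the theory guarantees that every full Newton step stays strictly feasible and the gap reaches ε within the stated bound.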

Lemma 4.2. Let x̄ = x + Δx and z̄ = z + Δz be the iterate obtained after a full Newton step, and suppose δ = δ(XZe, µ) < 1. Then

    δ(X̄Z̄e, µ) ≤ δ² / (1 + √(1 − δ²)).

Thus δ(X̄Z̄e, µ) < δ², which means quadratic convergence of the full Newton step.

Proof. Setting α = 1 in (13) and writing ṽ_i = √(x̄_i z̄_i / µ), we get

    ṽ_i² = 1 − q_i²/4, for i = 1, …, n.   (14)

Hence

    min_i ṽ_i² ≥ 1 − ‖q‖²/4 ≥ 1 − ‖p‖²/4 = 1 − δ²,

and thereby

    min_i ṽ_i ≥ √(1 − δ²).   (15)

On the other hand, we have

    δ²(X̄Z̄e, µ) = ‖e − ṽ‖² = Σ_i (1 − ṽ_i)²
               = Σ_i (1 − ṽ_i)² (1 + ṽ_i)² / (1 + ṽ_i)²
               = Σ_i (1 − ṽ_i²)² / (1 + ṽ_i)²
               ≤ (1 / (1 + min_i ṽ_i)²) Σ_i (1 − ṽ_i²)²,

and using (15) we get

    δ²(X̄Z̄e, µ) ≤ (1 / (1 + √(1 − δ²))²) Σ_i (1 − ṽ_i²)².

Using (11), (12) and (14) we get

    δ²(X̄Z̄e, µ) ≤ (1 / (1 + √(1 − δ²))²) Σ_i (q_i²/4)²
               ≤ (1 / (1 + √(1 − δ²))²) (Σ_i q_i²/4)²
               = (‖q‖²/4)² / (1 + √(1 − δ²))²
               ≤ (‖p‖²/4)² / (1 + √(1 − δ²))²
               = δ⁴ / (1 + √(1 − δ²))².

Hence

    δ(X̄Z̄e, µ) ≤ δ² / (1 + √(1 − δ²)).

This proves the lemma. □

The next lemma gives an upper bound for the duality gap after a full Newton step.

Lemma 4.3. Let x̄ = x + Δx and z̄ = z + Δz. Then the duality gap is

    x̄^T z̄ = µ (n − ‖q‖²/4),

hence x̄^T z̄ ≤ µn.

Proof. We have seen in the previous lemmas that ṽ_i² = 1 − q_i²/4 for i = 1, …, n. Hence

    Σ_i ṽ_i² = n − ‖q‖²/4,

but since

    x̄^T z̄ = µ Σ_i ṽ_i²,   (16)

it follows that

    x̄^T z̄ = µ (n − ‖q‖²/4) ≤ µn.

This completes the proof. □

The next lemma discusses the influence on the proximity measure of the Newton step followed by an update of the barrier parameter.

Lemma 4.4. Let δ = δ(XZe, µ) < 1 and µ⁺ = (1 − θ)µ, where 0 < θ < 1. Then

    δ(X̄Z̄e, µ⁺) ≤ (θ√n + δ²) / ((1 − θ) + √((1 − θ)(1 − δ²))).

Furthermore, if δ < 1/2, θ = 1/(2√n) and n ≥ 4, then we get δ(X̄Z̄e, µ⁺) < 1/2.

Proof. We have

    δ²(X̄Z̄e, µ⁺) = ‖e − ṽ/√(1 − θ)‖²
                = (1/(1 − θ)) Σ_i (√(1 − θ) − ṽ_i)²
                = (1/(1 − θ)) Σ_i ((1 − θ) − ṽ_i²)² / (√(1 − θ) + ṽ_i)²
                ≤ (1/(1 − θ)) (1 / (√(1 − θ) + min_i ṽ_i)²) Σ_i ((1 − θ) − ṽ_i²)².

Hence, by using (14) and (15), and noting that (1 − θ) − ṽ_i² = q_i²/4 − θ, we get

    δ(X̄Z̄e, µ⁺) ≤ (1 / ((1 − θ) + √((1 − θ)(1 − δ²)))) √(Σ_i (q_i²/4 − θ)²)
               ≤ (θ√n + ‖q‖²/4) / ((1 − θ) + √((1 − θ)(1 − δ²))),

by the triangle inequality and ‖q²‖ ≤ ‖q‖². Now, in view of (11) and (12), ‖q‖²/4 ≤ ‖p‖²/4 = δ², and finally we get

    δ(X̄Z̄e, µ⁺) ≤ (θ√n + δ²) / ((1 − θ) + √((1 − θ)(1 − δ²))).

This completes the proof. □

It results from Lemma 4.4 that, for the default θ = 1/(2√n), the conditions (x, z) > 0 and δ(XZe, µ) < 1/2 are maintained during the algorithm. Hence the algorithm is well defined.

In the next lemma we compute an upper bound for the total number of iterations produced by the algorithm.

Lemma 4.5. Assume that x^0 and z^0 are strictly feasible, µ^0 = (x^0)^T z^0 / n and δ(X^0 Z^0 e, µ^0) < 1/2. Moreover, let x^k and z^k be the vectors obtained after k iterations. Then the inequality (x^k)^T z^k ≤ ε is satisfied for

    k ≥ (1/θ) log((x^0)^T z^0 / ε).

Proof. We have after k iterations that µ^k = (1 − θ)^k µ^0. Using Lemma 4.3, it follows that

    (x^k)^T z^k ≤ n µ^k = (1 − θ)^k (x^0)^T z^0.

Hence the inequality (x^k)^T z^k ≤ ε holds if (1 − θ)^k (x^0)^T z^0 ≤ ε. By taking logarithms of both sides, we obtain

    k log(1 − θ) + log((x^0)^T z^0) ≤ log ε.

The estimate log(1 − θ) ≤ −θ, valid for all 0 ≤ θ < 1, then shows that the inequality (x^k)^T z^k ≤ ε is fulfilled if

    k ≥ (1/θ) log((x^0)^T z^0 / ε). □

For the default θ = 1/(2√n), we obtain the following theorem.

Theorem 4.6. Suppose that the pair (x^0, z^0) is strictly feasible and let µ^0 = (x^0)^T z^0 / n. If θ = 1/(2√n), then the algorithm requires at most

    2√n log((x^0)^T z^0 / ε)

iterations. For the resulting vectors we have (x^k)^T z^k ≤ ε.

5 Conclusion

In this paper, we have described a new primal-dual path-following method to solve convex quadratic programs. We have proved that the short-update algorithm can find an ε-solution of (QP) in 2√n log((x^0)^T z^0 / ε) iterations, where x^0 and z^0 are the initial positive starting points. The initialization of the algorithm can be achieved, as in linear optimization, by using self-dual embedding techniques. Hence, if x^0 = z^0 = e, we get the best well-known short-step complexity, 2√n log(n/ε).

REFERENCES

[1] I. Adler and R.D.C. Monteiro, Interior path following primal-dual algorithms. Part II: Convex quadratic programming. Mathematical Programming, 44 (1989), 43–66.

[2] F. Alizadeh and D. Goldfarb, Second-order cone programming. Mathematical Programming, 95 (2003), 3–51.

[3] K.M. Anstreicher, D. den Hertog, C. Roos and T. Terlaky, A long step barrier method for convex quadratic programming. Report 90-53, Delft University of Technology (1990).

[4] Y.Q. Bai, M. El Ghami and C. Roos, A new efficient large-update primal-dual interior-point method based on a finite barrier. SIAM Journal on Optimization, 13(3) (2003), 766–782.

[5] A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS-SIAM Series on Optimization. SIAM, Philadelphia, USA (2001).

[6] Zs. Darvay, A new algorithm for solving self-dual linear programming problems. Studia Universitatis Babes-Bolyai, Series Informatica, 47(1) (2002), 15–26.

[7] J. Peng, C. Roos and T. Terlaky, Primal-dual interior-point methods for second-order conic optimization based on self-regular proximities. SIAM Journal on Optimization, 13(1) (2002), 179–203.

[8] S.J. Wright, Primal-Dual Interior-Point Methods. SIAM, Philadelphia, USA (1997).