LEAST SQUARES PROBLEMS WITH INEQUALITY CONSTRAINTS AS QUADRATIC CONSTRAINTS


JODI MEAD† AND ROSEMARY A. RENAUT‡

1. Introduction. The linear least squares problems discussed here are often used to incorporate observations into mathematical models, for example in inversion, imaging and data assimilation in medical and geophysical applications. In many of these applications the variables in the mathematical models are known to lie within prescribed intervals. This leads to the bound constrained least squares problem:

(1.1)    min ||Ax - b||^2 subject to α ≤ x ≤ β,

where x, α, β ∈ R^n, A ∈ R^{m×n}, and b ∈ R^m. If the matrix A has full column rank, this problem has a unique solution for any vector b [3]. In Section 2 we also consider the case when A does not have full column rank.

Successful approaches to solving bound-constrained optimization problems for general linear or nonlinear objective functions can be found in [6], [13], [8], [14] and the MATLAB function fmincon. Approaches specific to least squares problems are described in [2], [9] and [15] and the MATLAB function lsqlin. In this work we implement a novel approach to solving the bound constrained least squares problem: we write the constraints in quadratic form and solve the corresponding unconstrained least squares problem.

Most methods for the solution of bound-constrained least squares problems of the form (1.1) can be categorized as active-set or interior point methods. In active-set methods, a sequence of equality constrained problems is solved with efficient solution methods. Each equality constrained problem involves those variables x_i which belong to the active set, i.e. those which are known to satisfy the equality constraint [18]. It is difficult to know the active set a priori, but algorithms for identifying it include Bounded Variable Least Squares (BVLS), given in [23]. These methods can be expensive for large-scale problems, and a popular alternative to them is interior point methods.

† Supported by NSF grant EPS. Boise State University, Department of Mathematics, Boise, ID (jmead@boisestate.edu).
‡ Supported by NSF grants DMS. Arizona State University, Department of Mathematics and Statistics, Tempe, AZ (renaut@asu.edu).
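For readers who want a concrete point of comparison before the reformulation developed below, problem (1.1) can be handed to an off-the-shelf bounded least squares solver. A minimal sketch using SciPy; the matrix, data and bounds are illustrative, not from the paper:

import numpy as np
from scipy.optimize import lsq_linear

# Illustrative data only: a small overdetermined system with box constraints.
rng = np.random.default_rng(0)
m, n = 20, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
alpha = -np.ones(n)   # lower bounds
beta = np.ones(n)     # upper bounds

# Trust-region solver for min ||Ax - b||^2 subject to alpha <= x <= beta.
res = lsq_linear(A, b, bounds=(alpha, beta))
print(res.x, res.cost)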

Interior point methods use variants of Newton's method to solve the KKT equality conditions for (1.1). In addition, the search directions are chosen so that the inequalities in the KKT conditions are satisfied at each iteration. These methods can have slow convergence, but if high-accuracy solutions are not necessary they are a good choice for large scale applications [18]. In this work we write the inequality constraints as quadratic constraints and solve the optimization problem with a penalty-type method that is commonly used for equality constrained problems.

When A is not full rank, regularized solutions are necessary for both the constrained and the unconstrained problem. A popular approach is Tikhonov regularization [25]:

(1.2)    min ||Ax - b||^2 + λ^2 ||L(x - x_0)||^2,

where x_0 is an initial parameter estimate and L is typically chosen to yield approximations to the l-th order derivative, l = 0, 1, 2. There are different methods for choosing the regularization parameter λ, the most popular of which include the L-curve, generalized cross-validation (GCV) and the discrepancy principle [5]. In this work we use a χ² method introduced in [11] and further developed in [12]. The efficient implementation of this χ² approach for choosing λ complements the solution of the bound-constrained least squares problem with quadratic constraints.

The rest of the paper is organized as follows. In Section 2 we write the bound-constrained least squares problem with quadratic constraints, and describe how to solve the problem with quadratic inequality constraints using the weighted [7] or penalty [18] method. In Section 3 we apply the chi-squared method to the bound-constrained problem. In Section 4 we give some numerical results on a test problem and on a hydrological application. Section 5 gives some conclusions.

2. Bound-Constrained Least Squares.

2.1. Quadratic Constraints. The unconstrained regularized least squares problem (1.2) can be viewed as a least squares problem with a quadratic constraint [22]:

(2.1)    min ||Ax - b||^2 subject to ||L(x - x_0)||^2 ≤ δ^2.

Equivalence of (1.2) and (2.1) can be seen from the fact that the necessary and sufficient KKT conditions for a feasible point x̂ to be a solution of (2.1) are

(A^T A + λ̂ L^T L) x̂ = A^T b + λ̂ L^T L x_0,
λ̂ ≥ 0,    λ̂ (||L(x̂ - x_0)||^2 - δ^2) = 0,    δ^2 ≥ ||L(x̂ - x_0)||^2.

Here λ̂ is the Lagrange multiplier, and equivalence with problem (1.2) corresponds to λ̂ = λ^2.
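In practice (1.2) is solved as an ordinary least squares problem for a stacked system; a minimal sketch, where lam, L and x0 stand for the λ, L and x_0 above:

import numpy as np

def tikhonov(A, b, lam, L, x0):
    """Solve min ||Ax - b||^2 + lam^2 ||L(x - x0)||^2 as one stacked least squares problem."""
    K = np.vstack([A, lam * L])                  # stacked system matrix
    f = np.concatenate([b, lam * (L @ x0)])      # stacked right hand side
    return np.linalg.lstsq(K, f, rcond=None)[0]  # lstsq tolerates rank deficiency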

The advantage of viewing the problem in a quadratically constrained least squares formulation, rather than as Tikhonov regularization, is that the constrained formulation can give the problem physical meaning. In [22] the authors give an example from image restoration in which δ^2 represents the energy of the target image. For a more general set of problems, the authors in [22] successfully find the regularization parameter λ by solving the quadratically constrained least squares problem.

Here we introduce an approach whereby the bound constrained problem is written with n quadratic inequality constraints, i.e. (1.1) becomes

(2.2)    min ||Ax - b||^2 subject to (x_i - x̄_i)^2 ≤ σ_i^2, i = 1, ..., n,

where x̄ = [x̄_i; i = 1, ..., n]^T is the vector of midpoints of the intervals [α, β], i.e. x̄ = (β + α)/2, and σ = (β - α)/2. The necessary and sufficient KKT conditions for a feasible point x̂ to be a solution of (2.2) are:

(2.3)    (A^T A + Λ) x̂ = Λ x̄ + A^T b,
(2.4)    λ_i ≥ 0, i = 1, ..., n,
(2.5)    λ_i [σ_i^2 - (x̂_i - x̄_i)^2] = 0, i = 1, ..., n,
(2.6)    σ_i^2 ≥ (x̂_i - x̄_i)^2, i = 1, ..., n,

where Λ = diag(λ_i) holds the Lagrange multipliers. Reformulating the box constraint α ≤ x ≤ β as the quadratic constraints (x_i - x̄_i)^2 ≤ σ_i^2, i = 1, ..., n, effectively circumscribes an ellipsoid constraint around the original box constraint. In [19] box constraints were reformulated in exactly the same manner; however, the optimization problem was not solved with the penalty or weighted approach as is done here and described in the next section. Rather, in [19] parameters were found which ensure there is a convex combination of the objective function and the constraints. This ensures the ellipsoid defined by the objective function intersects that defined by the inequality constraints.
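The bookkeeping behind (2.2) is a simple change of variables; a sketch, with function names that are ours rather than the paper's:

import numpy as np

def box_to_quadratic(alpha, beta):
    """Rewrite alpha <= x <= beta as (x_i - xbar_i)^2 <= sigma_i^2 (eq. (2.2))."""
    xbar = (beta + alpha) / 2.0    # interval midpoints
    sigma = (beta - alpha) / 2.0   # interval half-widths
    return xbar, sigma

def violated_set(x, xbar, sigma):
    """Indices j whose quadratic constraint (x_j - xbar_j)^2 <= sigma_j^2 fails."""
    return np.flatnonzero((x - xbar) ** 2 > sigma ** 2)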

2.2. Penalty or Weighted Approach. The penalty [18] or weighted [7] approach is typically used to solve a least squares problem with n equality constraints. A simple application of this approach would be to solve

(2.7)    min ||Ax - b||^2 subject to x = x̄.

The objective function and the constraints are combined in the following manner:

(2.8)    min ε^2 ||Ax - b||^2 + ||x - x̄||^2,

and a sequence of unconstrained problems is solved as ε → 0. By examination, one can see that as ε → 0, infinite relative weight is added to the constraint. More formally, it is proved in [7] that if x(ε) is the solution of the least squares problem (2.8) and x̂ the solution of the equality constrained problem (2.7), then for ε sufficiently small

||x(ε) - x̂|| ≤ η ||x̂||,

with η machine precision. Note that (2.8) is equivalent to Tikhonov regularization (1.2) with λ = 1/ε, L = I and x_0 = x̄.

We apply the penalty or weighted approach to the quadratic inequality constrained problem (2.2). In this case the penalty term ||x - x̄||^2 must be multiplied by a matrix which represents the values of the inequality constraints, rather than a scalar involving λ or ε. This matrix comes from the first KKT condition (2.3), which gives the solution of the least squares problem

(2.9)    min ||Ax - b||^2 + ||Λ^{1/2} (x - x̄)||^2.

We view the inequality constraints as a penalty term by choosing the value of Λ to be C = diag(1/σ_i^2). Since the quadratic constraints circumscribe the box constraints, a sequence of problems with decreasing ε is solved, which effectively decreases the radius of the ellipsoid until the constraints are satisfied, i.e. we solve

(2.10)    min ||Ax - b||^2 + ||C_ε^{1/2} (x - x̄)||^2, where C_ε = ε^{-1} C.

Starting with ε = 1, the penalty parameter ε is decreased until the solution of the inequality constrained problem (2.2) is identified. Since ε → 0 recovers the equality constrained problem, these iterations are guaranteed to converge when A is full rank.
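A minimal NumPy sketch of the ε-continuation over the problems (2.10); Algorithm 1 below states the iteration precisely, and the iteration cap here is an illustrative safeguard, not part of the method:

import numpy as np

def penalty_box_lsq(A, b, alpha, beta, max_iter=50):
    """Solve min ||Ax-b||^2 + ||C_eps^(1/2)(x - xbar)||^2 over decreasing eps (eq. (2.10))."""
    xbar = (beta + alpha) / 2.0
    C_eps = np.diag((2.0 / (beta - alpha)) ** 2)  # C = diag(1/sigma_i^2), eps = 1
    eps = 1.0
    for count in range(1, max_iter + 1):
        # Normal equations in the shifted variable y = x - xbar.
        y = np.linalg.solve(A.T @ A + C_eps, A.T @ (b - A @ xbar))
        x = xbar + y
        Z = np.flatnonzero((x < alpha) | (x > beta))  # still-violated constraints
        if Z.size == 0:
            return x                                  # all box constraints satisfied
        eps /= 1.0 + count / 2.0
        C_eps[Z, Z] /= eps                            # shrink the ellipsoid radius
    return x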

ALGORITHM 1. Solve the least squares problem with box constraints as quadratic constraints.

Initialization: x̄ = (β + α)/2, C_ε = diag((2/(β_i - α_i))^2), Z = {j : j = 1, ..., n}, P = ∅, ε = 1, count = 0
Do (until all constraints are satisfied)
    Solve (A^T A + C_ε) y = A^T (b - A x̄) for y
    x = x̄ + y
    If α_j ≤ x_j ≤ β_j then j ∈ P, else j ∈ Z
    If Z = ∅, end
    ε = ε / (1 + count/2)
    If j ∈ Z, (C_ε)_jj = ε^{-1} (C_ε)_jj
    count = count + 1
End

This algorithm performs poorly because it over-smoothes the solution x. In particular, if an interval is small, then the corresponding standard deviation is small, and the solution is heavily weighted towards the mean x̄. This approach to inequality constraints is therefore not recommended unless prior information about the parameters, or a regularization term, is included in the optimization, as described in the next section.

2.3. Regularization and quadratic constraints. The above algorithm is not useful for well-conditioned or full rank matrices A because it over-smoothes the solution. In addition, for rank deficient or ill-conditioned A we may not be able to calculate the least squares solution of (2.10),

x = x̄ + (A^T A + C_ε)^{-1} A^T (b - A x̄),

because as ε decreases, (A^T A + C_ε) will eventually become ill-conditioned. Thus the approach to inequality constraints in this work should be used with regularized methods, when typical solution techniques do not yield solutions in the desired interval.

The most typical way to regularize a problem is with Tikhonov regularization:

(2.11)    min ||Ax - b||^2 + λ^2 ||L(x - x_0)||^2.

As mentioned in the Introduction, there are many methods for choosing the parameter λ; here we focus on a chi-squared method, similar to the discrepancy principle [16], introduced in [11] and further developed in [12]. This chi-squared method is based on the fact that if C_b and C_x are invertible covariance matrices for the mean zero errors in the data b and in the initial parameter estimate x_0, respectively, then the minimum value of the functional

J(x) = (b - Ax)^T C_b^{-1} (b - Ax) + (x - x_0)^T C_x^{-1} (x - x_0)

is a random variable which follows a χ² distribution with m degrees of freedom [11, 12].

In particular, given values for C_b and C_x, the difference |J(x̃) - m| is an estimate of confidence that C_b and C_x are accurate weighting matrices. Mead [11] noted these observations and suggested that a matrix C_x can be found by requiring that J(x̃), to within a specified (1 - α) confidence interval, is a χ² random variable with m degrees of freedom, namely such that

(2.12)    m - sqrt(2m) z_{α/2} < r^T (A C_x A^T + C_b)^{-1} r < m + sqrt(2m) z_{α/2},

where r = b - A x_0 and z_{α/2} is the relevant z-value for a χ² distribution with m degrees of freedom. For C_x = λ^2 I it was shown in [12] that, for accurate C_b, this χ² approach is more efficient and gives better results than the discrepancy principle, the L-curve and generalized cross validation (GCV) [5].
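The interval test (2.12) is inexpensive to evaluate for a candidate C_x; a sketch using the normal approximation to the χ²_m quantile, with function and argument names that are ours, not from [11] or [12]:

import numpy as np
from scipy.stats import norm

def chi_sq_interval_test(A, b, x0, C_x, C_b, alpha_conf=0.05):
    """Evaluate the chi-squared interval test (2.12) for candidate weightings C_x, C_b."""
    m = b.size
    r = b - A @ x0                                   # residual at the prior estimate
    t = r @ np.linalg.solve(A @ C_x @ A.T + C_b, r)  # r^T (A C_x A^T + C_b)^{-1} r
    z = norm.ppf(1.0 - alpha_conf / 2.0)             # normal approx. to the chi^2_m quantile
    half_width = np.sqrt(2.0 * m) * z
    return (m - half_width < t < m + half_width), t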

2.3.1. χ² regularization with quadratic constraints. Applying quadratic constraints to the chi-squared method amounts to solving the following problem:

(2.13)    min J_ε(x), where J_ε(x) = ||C_b^{-1/2} (Ax - b)||^2 + ||C_x^{-1/2} (x - x_0)||^2 + ||C_ε^{1/2} (x - x̄)||^2.

First it must be the case that the last two terms in the objective function J_ε are consistent. That is, Dx = f must be consistent, where

D = [ C_x^{-1/2} ; C_ε^{1/2} ],    f = [ C_x^{-1/2} x_0 ; C_ε^{1/2} x̄ ],

i.e. rank(D) = rank([D f]). This formulation has three terms in the objective function and seeks a solution which lies in an intersection of three ellipsoids. It is possible, but not necessary, to write the regularization term and the inequality constraint term as one. The minimum value of J_ε occurs at

x̂_ε = x_0 + (A^T C_b^{-1} A + C_x^{-1} + C_ε)^{-1} (A^T C_b^{-1} r + C_ε Δx̄),

where r = b - A x_0 and Δx̄ = x̄ - x_0. The minimum value of J_ε is

J_ε(x̂_ε) = r^T P_ε^{-1} r + Δx̄^T Q^{-1} Δx̄,
P_ε = A (C_x^{-1} + C_ε)^{-1} A^T + C_b,
Q = C_ε^{-1} + (A^T C_b^{-1} A + C_x^{-1})^{-1}.

Now r^T P_ε^{-1} r is a random variable, while Δx̄^T Q^{-1} Δx̄ is not. Define k_ε = P_ε^{-1/2} r; then k_ε is normally distributed if r is, or if there is a large number of data m. It has zero mean, i.e. E(k_ε) = 0, if Ax is the mean of b, while its covariance is

E(k_ε k_ε^T) = P_ε^{-1/2} (A C_x A^T + C_b) P_ε^{-1/2}.

This implies that

k = (A C_x A^T + C_b)^{-1/2} P_ε^{1/2} k_ε

has the identity as its covariance; thus k^T k has the properties of a χ² distribution. Note that k does not depend on the constraint term, and this is the typical χ² property (2.12).

2.3.2. Other regularization methods and generalized implementation of quadratic constraints. Any regularization method can be used to implement box constraints as quadratic constraints. In addition to χ² regularization with quadratic constraints, in Section 3 we will show results where C_x is found by the L-curve and by maximum likelihood estimation.

The L-curve approach finds the parameter λ in (2.11), for specified L, by plotting the weighted parameter misfit ||L(x - x_0)|| against the data misfit ||b - Ax||. The data misfit may also be weighted by the inverse covariance of the data error, i.e. C_b^{-1}. When plotted on a log-log scale the curve is typically in the shape of an L, and the solution is taken at the corner. These parameter values are optimal in the sense that the errors in the weighted parameter misfit and the data misfit are balanced. The L-curve method can assume that the data contain random noise.

Maximum likelihood estimation (MLE) and the χ² regularization method assume not only that the data contain noise, but also the initial parameter estimates. In MLE the data b are random, assumed to be independent and identically distributed, following a normal distribution with probability density function

(2.14)    ρ(b) = const · exp{ -(1/2) (b - Ax)^T C_b^{-1} (b - Ax) },

with Ax the expected value of b and C_b the corresponding covariance matrix. In addition, it is assumed that the parameter values x are also random, following a normal distribution with probability density function

(2.15)    ρ(x) = const · exp{ -(1/2) (x - x_0)^T C_x^{-1} (x - x_0) },

with x_0 the expected value of x and C_x the corresponding covariance matrix.

If the data and parameters are independent then their joint distribution is ρ(b, x) = ρ(b)ρ(x). In order to maximize the probability that the data were in fact observed, we find x where the joint probability density is maximum. The maximum likelihood estimate of the parameters therefore occurs when the joint probability density function is maximum, i.e. optimal parameter values are found by solving

(2.16)    min { (b - Ax)^T C_b^{-1} (b - Ax) + (x - x_0)^T C_x^{-1} (x - x_0) }.

This optimal parameter estimate is found under the assumption that the data and parameters follow normal distributions and are independent and identically distributed. The χ² regularization method is based on this idea, but the assumptions are relaxed; see [11], [12]. In addition, the χ² regularization method is an approach for finding C_x (or C_b), while MLE assumes they are known. Note that the L-curve is equivalent to MLE when C_x = λ^2 I. The advantage of MLE arises when C_x is not a constant diagonal matrix and hence the weights on the parameter misfits vary. Moreover, when C_x has off-diagonal elements, correlation in the errors of the initial parameter estimates can be modeled. The disadvantage of MLE is that this a priori information is needed. On the other hand, the χ² regularization method is an approach for finding the elements of C_x; thus matrices rather than scalar parameters may be used for regularization, but not as much a priori information is needed as with MLE.

ALGORITHM 2. Solve the regularized least squares problem with box constraints as quadratic constraints.

Initialization: x̄ = (β + α)/2, C_ε = diag((2/(β_i - α_i))^2), Z = {j : j = 1, ..., n}, P = ∅, ε = 1, count = 0
Find C_x by any regularization method.
Do (until all constraints are satisfied)
    Solve (A^T C_b^{-1} A + C_ε + C_x^{-1}) y_ε = A^T C_b^{-1} r + C_ε (x̄ - x_0) for y_ε
    x_ε = x_0 + y_ε
    If α_j ≤ (x_ε)_j ≤ β_j then j ∈ P, else j ∈ Z
    If Z = ∅, end
    ε = ε / (1 + count/2)
    If j ∈ Z, (C_ε)_jj = ε^{-1} (C_ε)_jj
    count = count + 1
End
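A sketch of Algorithm 2 in the same style; the right hand side follows the expression for x̂_ε in Section 2.3.1, C_x and C_b are supplied by any of the methods above, and the iteration cap is an illustrative safeguard:

import numpy as np

def regularized_box_lsq(A, b, x0, C_x, C_b, alpha, beta, max_iter=50):
    """Algorithm 2: regularized least squares with box constraints as quadratic constraints."""
    xbar = (beta + alpha) / 2.0
    C_eps = np.diag((2.0 / (beta - alpha)) ** 2)
    Cb_inv = np.linalg.inv(C_b)
    Cx_inv = np.linalg.inv(C_x)
    r = b - A @ x0                       # data residual at the prior estimate
    eps = 1.0
    for count in range(1, max_iter + 1):
        M = A.T @ Cb_inv @ A + Cx_inv + C_eps
        rhs = A.T @ Cb_inv @ r + C_eps @ (xbar - x0)
        x = x0 + np.linalg.solve(M, rhs)
        Z = np.flatnonzero((x < alpha) | (x > beta))
        if Z.size == 0:
            return x
        eps /= 1.0 + count / 2.0
        C_eps[Z, Z] /= eps               # tighten only the violated constraints
    return x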

In the next two sections we give numerical results where the box constrained least squares problem (1.1) is solved via (2.13), i.e. by implementing the box constraints as quadratic constraints using Algorithm 2.

3. Experiments. For validation of the algorithm against other standard approaches we present a series of representative results using the benchmark cases phillips, shaw, and wing from the Regularization Tools package [5]. In addition, we present results for a real model from hydrology. The box constraints for the benchmark cases were set arbitrarily, while for the hydrological model parameter constraints from the literature were used.

3.1. Benchmark Problems: Experimental Design. System matrices A, right hand side data b and solutions x are obtained for a specific benchmark problem from the Regularization Tools package [5]. In all cases we generate a random matrix Θ of size m × 500, with columns Θ_c, c = 1 : 500, using the MATLAB function randn. Then setting

(b_c)_i = b_i + level · b_i (Θ_c)_i / ||Θ_c||, for c = 1 : 500,

generates 500 copies of the right hand vector b with normally distributed noise, dependent on the chosen level. Results are presented for level = 0.1. An example of the error distribution for one case of phillips with n = 80 is illustrated in Figure 3.1. Because the noise depends on the right hand side b, the actual error, as measured by the mean of ||b - b_c|| / ||b|| over all c, is 0.08. To obtain the weighting matrix W_b, the covariance C_b between the measured components is calculated directly for the entire data set B with rows (b_c)^T. Because of the design, C_b is close to diagonal, C_b ≈ diag(σ_{b_i}^2), and the noise is colored. In all experiments, regardless of parameter selection method, the same covariance matrix is used.

The a priori reference solution x_0 is generated using the exact known solution with noise added at level = 0.1 in the same way as for modifying b. The same reference solution x_0 is used for all right hand side vectors b_c; see Figure 3.2.

3.2. Benchmark Problems: Results. Algorithm 2 was implemented with the χ² regularization method, the L-curve and maximum likelihood estimation (MLE). For the benchmark problems we know the true solution; thus we can implement maximum likelihood estimation exactly, which is not possible in real applications. The MLE is thus an exact version of the χ² regularization method, because the covariance matrix C_x is known, while in the χ² regularization method it is solved for using the properties of a χ² distribution. For comparison, the MATLAB function lsqlin was used to implement the box constraints in the linear least squares problem. As evidenced in Figures 3.3(b), 3.5(b) and 3.7(b), the lsqlin solution stays within the correct constraints, but does not retain the shape of the curve.
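The noisy right hand sides of Section 3.1 can be generated in a few lines; a sketch in NumPy rather than MATLAB, with the copy count and level as described above:

import numpy as np

def noisy_copies(b, n_copies=500, level=0.1, seed=0):
    """Generate the noisy copies b_c and the empirical covariance C_b (Section 3.1)."""
    rng = np.random.default_rng(seed)
    Theta = rng.standard_normal((b.size, n_copies))
    B = b[:, None] + level * b[:, None] * Theta / np.linalg.norm(Theta, axis=0)
    C_b = np.cov(B)   # rows of B are components; C_b is close to diagonal by design
    return B, C_b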

FIG. 3.1. Illustration of the noise in the right hand side for problems (a) phillips, (b) shaw, (c) wing.

FIG. 3.2. Illustration of the reference solution x_0 for problems (a) phillips, (b) shaw, (c) wing.

FIG. 3.3. Phillips test problem: (a) unconstrained and (b) constrained solutions.

Figures 3.4, 3.6 and 3.8 show that, for any regularization method, the significant advantage of implementing the box constraints as quadratic constraints and solving (2.13) is that we retain the shape of the curve.

3.3. Estimating data error: Example from Hydrology. The χ²-curve method is used to find the weight σ_b on the field measurements b. The initial parameter estimates x_0 and their respective covariance C_x = diag(σ_{x_i}^2) are based on laboratory measurements and specified a priori. In other words, the regularization parameter, or initial parameter misfit weight, is taken from laboratory measurements, while the data weight is obtained by the χ²-curve method.

In nature, the coupling of terrestrial and atmospheric systems happens through soil moisture. Water must pass through soil on its way to groundwater and streams, and soil moisture feeds back to the atmosphere via evapotranspiration. Consequently, methods to quantify the movement of water through unsaturated soils are essential at all hydrologic scales. Traditionally, soil moisture movement is simulated using the Richards equation [21] for unsaturated flow:

(3.1)    ∂θ/∂t = ∇ · (K ∇h) + ∂K/∂z,

where θ is the volumetric moisture content, h the pressure head, and K(h) the hydraulic conductivity.

FIG. 3.4. Phillips test problem: (a) L-curve, (b) maximum likelihood estimation, (c) regularized χ² method.

FIG. 3.5. Shaw test problem: (a) unconstrained and (b) constrained solutions.

Solution of the Richards equation requires reliable inputs for θ(h) and K(h). These relationships are collectively referred to as soil-moisture characteristic curves, and they are typically highly nonlinear functions. The curves are commonly parameterized using the van Genuchten [26] and Mualem [17] relationships:

(3.2)    θ(h) = θ_r + (θ_s - θ_r)(1 + |αh|^n)^{-m},    h < 0,
         K(h) = K_s θ_e^l (1 - (1 - θ_e^{1/m})^m)^2,   θ_e = (θ(h) - θ_r)/(θ_s - θ_r),    h < 0.

These equations contain seven independent parameters: θ_r and θ_s are the residual and saturated water contents (cm^3 cm^-3), respectively; α (cm^-1), n and m (commonly set to m = 1 - 1/n) are empirical fitting parameters (-); K_s is the saturated hydraulic conductivity (cm/sec); and l is the pore connectivity parameter (-).
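The relationships (3.2) are straightforward to evaluate once the parameters are fixed; a sketch with the convention m = 1 - 1/n, where the default l = 0.5 is a commonly used value and an assumption here, not a value from the paper:

import numpy as np

def vg_theta(h, theta_r, theta_s, alpha_vg, n):
    """Van Genuchten water content theta(h) for h < 0, with m = 1 - 1/n (eq. (3.2))."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) * (1.0 + np.abs(alpha_vg * h) ** n) ** (-m)

def vg_K(h, theta_r, theta_s, alpha_vg, n, K_s, l=0.5):
    """Mualem hydraulic conductivity K(h) for h < 0 (eq. (3.2))."""
    m = 1.0 - 1.0 / n
    te = (vg_theta(h, theta_r, theta_s, alpha_vg, n) - theta_r) / (theta_s - theta_r)
    return K_s * te ** l * (1.0 - (1.0 - te ** (1.0 / m)) ** m) ** 2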

FIG. 3.6. Shaw test problem: (a) L-curve, (b) maximum likelihood estimation, (c) regularized χ² method.

FIG. 3.7. Wing test problem: (a) unconstrained and (b) constrained solutions.

(Note that the variables used in this application, n, m, α, θ_r, θ_s, are common notation for the parameters in the model, and do not represent the size of the problem or any previously referenced variables.)

We focus on parameter estimates for θ(h) to be used in the Richards equation. Field studies and laboratory measurements of soil moisture content and pressure head in the Dry Creek catchment near Boise, Idaho are used to obtain θ(h) estimates. The north facing slope is currently instrumented with a weather station, two soil pits recording temperature and moisture content at four depths, and four additional soil pits recording moisture content and pressure head with TDR/tensiometer pairs. We will show soil water retention curves from two of these pits, NU 5 and SU5 5; they represent pits upstream from a weir and 5 meters, respectively, both 5 meters from the surface.

We rely on the laboratory measurements for good first estimates of the parameters x_0 = [θ_r, θ_s, α, n] and their standard deviations σ_{x_i}. It takes 2 to 3 weeks to obtain one set of laboratory measurements, but the procedure is done multiple times, from which we obtain standard deviation estimates and form C_x = diag(σ_{θ_r}^2, σ_{θ_s}^2, σ_α^2, σ_n^2).

FIG. 3.8. Wing test problem: (a) L-curve, (b) maximum likelihood estimation, (c) regularized χ² method.

These standard deviations account for measurement technique or error; however, measurements on this core may not accurately reflect soils in the entire watershed region.

The quadratically constrained method described here requires a linear model, and we use the following technique to simulate the nonlinear behavior of (3.2). Let

(3.3)    θ(h, x) ≈ θ(h, x_0) + (∂θ/∂x)|_{x = x_0} (x - x_0)
(3.4)             = A(h, x_0) x + q(h, x_0).

Then b_i = θ(h_i, x) - q(h_i, x_0) is of dimension m, while A has dimension m × 4. An optimal x̂ is found by the χ²-curve method, x_0 is updated with it, and (3.4) is iterated.

Table 3.1 gives parameter ranges (or constraints) in the form of soil class averages found in [27]. In addition, the parameter values for both pits are given, both constrained to fit into those class averages or ranges, and unconstrained. For both pits, the only parameter that did not fit into the appropriate range is θ_s. This parameter represents the soil moisture content when the ground is nearly saturated. In the semi-arid environment of the Dry Creek Watershed, the soil does not typically come near saturation. We have concluded from these results that the class averages, with a minimum value of θ_s = 0.3, do not reflect the soils found in this region. A more appropriate minimum would be θ_s = 0.2.

TABLE 3.1
Hydrological parameters.

                          NU 5                         SU5 5
Parameter   Ranges        Unconstrained  Constrained   Unconstrained  Constrained
log α       [.86, .96]
log n       [.4, .68]
θ_s         [.3, .568]
θ_r         [.5, .3]

Figure 3.9 shows the constrained and unconstrained results with θ(ψ) on the horizontal axis. The van Genuchten equation is typically plotted in this manner, and the curve is called the soil moisture retention curve. Near saturation, i.e. for ψ near 0, the soil moisture falls below 0.25, further indicating that the soil class averages found in [27] are not appropriate for this region and should not be used as constraints. In Table 3.1 we see that the unconstrained parameters are appropriate when θ_s is allowed to lie within a range with lower bound 0.2.
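The linearization (3.3)-(3.4) can be formed with a finite-difference Jacobian when analytic derivatives are not at hand; a sketch reusing vg_theta from the previous sketch, with an illustrative step size:

import numpy as np

def linearize_theta(h, x0, step=1e-6):
    """Finite-difference Jacobian A(h, x0) and offset q(h, x0) for eq. (3.4).

    x0 = np.array([theta_r, theta_s, alpha_vg, n]); vg_theta is the sketch above.
    """
    f0 = vg_theta(h, *x0)
    A = np.empty((h.size, x0.size))
    for j in range(x0.size):
        xp = x0.copy()
        xp[j] += step
        A[:, j] = (vg_theta(h, *xp) - f0) / step  # column j: d(theta)/d(x0_j)
    q = f0 - A @ x0                               # theta(h, x) ~ A x + q near x0
    return A, q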

FIG. 3.9. Unconstrained and constrained soil moisture retention curves for (a) SD5 5 and (b) NU 5.

REFERENCES

[1] A. Bennett, Inverse Modeling of the Ocean and Atmosphere, Cambridge University Press, 2002.
[2] M. Bierlaire, Ph. L. Toint and D. Tuyttens, On iterative algorithms for linear least squares problems with bound constraints, Linear Algebra Appl., 143 (1991), pp. 111-143.
[3] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, 1996.
[4] J. Figueras and M. M. Gribb, A new user-friendly automated multi-step outflow test apparatus, Vadose Zone Journal, in preparation.
[5] P. C. Hansen, Regularization Tools: A Matlab package for analysis and solution of discrete ill-posed problems, Numerical Algorithms, 6 (1994), pp. 1-35.
[6] W. Huyer and A. Neumaier, Global optimization by multilevel coordinate search, J. Global Optimization, 14 (1999), pp. 331-355.
[7] C. L. Lawson and R. J. Hanson, Solving Least Squares Problems, Prentice-Hall, 1974.
[8] C.-J. Lin and J. J. Moré, Newton's method for large bound-constrained optimization problems, SIAM J. Optim., 9 (1999), no. 4, pp. 1100-1127.
[9] P. Lötstedt, Solving the minimal least squares problem subject to bounds on the variables, BIT, 24 (1984), pp. 206-224.
[10] J. P. McNamara, D. G. Chandler, M. Seyfried and S. Achet, Soil moisture states, lateral flow, and streamflow generation in a semi-arid, snowmelt-driven catchment, Hydrological Processes, 19 (2005).
[11] J. Mead, A priori weighting for parameter estimation, J. Inv. Ill-Posed Problems, to appear, 2008.
[12] J. L. Mead and R. A. Renaut, A Newton root-finding algorithm for estimating the regularization parameter for solving ill-conditioned least squares problems, submitted to Inverse Problems.
[13] Z. Michalewicz and C. Z. Janikow, GENOCOP: a genetic algorithm for numerical optimization problems with linear constraints, Comm. ACM, 39 (December 1996).
[14] J. Mockus, Bayesian Approach to Global Optimization, Kluwer Academic Publishers, Dordrecht, 1989.
[15] S. Morigi, L. Reichel, F. Sgallari and F. Zama, An iterative method for linear discrete ill-posed problems with box constraints, J. Comput. Appl. Math., 198 (2007), pp. 505-520.

[16] V. A. Morozov, On the solution of functional equations by the method of regularization, Soviet Math. Dokl., 7 (1966), pp. 414-417.
[17] Y. Mualem, A new model for predicting the hydraulic conductivity of unsaturated porous media, Water Resour. Res., 12 (1976), pp. 513-522.
[18] J. Nocedal and S. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.
[19] J. E. Pierce and B. W. Rust, Constrained least squares interval estimation, SIAM J. Sci. Stat. Comput., 6 (1985), no. 3.
[20] C. R. Rao, Linear Statistical Inference and its Applications, Wiley, New York, 1973.
[21] L. A. Richards, Capillary conduction of liquids in porous media, Physics, 1 (1931), pp. 318-333.
[22] M. Rojas and D. C. Sorensen, A trust-region approach to the regularization of large-scale discrete forms of ill-posed problems, SIAM J. Sci. Comput., 23 (2002), no. 6, pp. 1842-1860.
[23] P. B. Stark and R. L. Parker, Bounded-variable least-squares: an algorithm and applications, Computational Statistics, 10 (1995), pp. 129-141.
[24] A. Tarantola, Inverse Problem Theory and Methods for Model Parameter Estimation, SIAM, 2005.
[25] A. N. Tikhonov, Regularization of incorrectly posed problems, Soviet Math. Dokl., 4 (1963), pp. 1624-1627.
[26] M. Th. van Genuchten, A closed-form equation for predicting the hydraulic conductivity of unsaturated soils, Soil Sci. Soc. Am. J., 44 (1980), pp. 892-898.
[27] A. W. Warrick, Soil Water Dynamics, Oxford University Press, 2003.


More information

A Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme

A Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme A Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme M. Paul Laiu 1 and (presenter) André L. Tits 2 1 Oak Ridge National Laboratory laiump@ornl.gov

More information

An interior-point trust-region polynomial algorithm for convex programming

An interior-point trust-region polynomial algorithm for convex programming An interior-point trust-region polynomial algorithm for convex programming Ye LU and Ya-xiang YUAN Abstract. An interior-point trust-region algorithm is proposed for minimization of a convex quadratic

More information

DS-GA 1002 Lecture notes 10 November 23, Linear models

DS-GA 1002 Lecture notes 10 November 23, Linear models DS-GA 2 Lecture notes November 23, 2 Linear functions Linear models A linear model encodes the assumption that two quantities are linearly related. Mathematically, this is characterized using linear functions.

More information

Solving large Semidefinite Programs - Part 1 and 2

Solving large Semidefinite Programs - Part 1 and 2 Solving large Semidefinite Programs - Part 1 and 2 Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Singapore workshop 2006 p.1/34 Overview Limits of Interior

More information

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL)

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL) Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization Nick Gould (RAL) x IR n f(x) subject to c(x) = Part C course on continuoue optimization CONSTRAINED MINIMIZATION x

More information

A NEW L-CURVE FOR ILL-POSED PROBLEMS. Dedicated to Claude Brezinski.

A NEW L-CURVE FOR ILL-POSED PROBLEMS. Dedicated to Claude Brezinski. A NEW L-CURVE FOR ILL-POSED PROBLEMS LOTHAR REICHEL AND HASSANE SADOK Dedicated to Claude Brezinski. Abstract. The truncated singular value decomposition is a popular method for the solution of linear

More information

Solving Regularized Total Least Squares Problems

Solving Regularized Total Least Squares Problems Solving Regularized Total Least Squares Problems Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Numerical Simulation Joint work with Jörg Lampe TUHH Heinrich Voss Total

More information

Lecture 18: Optimization Programming

Lecture 18: Optimization Programming Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming

More information

SMO vs PDCO for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines

SMO vs PDCO for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines vs for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines Ding Ma Michael Saunders Working paper, January 5 Introduction In machine learning,

More information

On the Solution of Constrained and Weighted Linear Least Squares Problems

On the Solution of Constrained and Weighted Linear Least Squares Problems International Mathematical Forum, 1, 2006, no. 22, 1067-1076 On the Solution of Constrained and Weighted Linear Least Squares Problems Mohammedi R. Abdel-Aziz 1 Department of Mathematics and Computer Science

More information

Greedy Tikhonov regularization for large linear ill-posed problems

Greedy Tikhonov regularization for large linear ill-posed problems International Journal of Computer Mathematics Vol. 00, No. 00, Month 200x, 1 20 Greedy Tikhonov regularization for large linear ill-posed problems L. Reichel, H. Sadok, and A. Shyshkov (Received 00 Month

More information

Regularizing inverse problems. Damping and smoothing and choosing...

Regularizing inverse problems. Damping and smoothing and choosing... Regularizing inverse problems Damping and smoothing and choosing... 141 Regularization The idea behind SVD is to limit the degree of freedom in the model and fit the data to an acceptable level. Retain

More information

Suppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf.

Suppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf. Maria Cameron 1. Trust Region Methods At every iteration the trust region methods generate a model m k (p), choose a trust region, and solve the constraint optimization problem of finding the minimum of

More information

Algorithms for nonlinear programming problems II

Algorithms for nonlinear programming problems II Algorithms for nonlinear programming problems II Martin Branda Charles University Faculty of Mathematics and Physics Department of Probability and Mathematical Statistics Computational Aspects of Optimization

More information

Nonlinear Programming Models

Nonlinear Programming Models Nonlinear Programming Models Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Nonlinear Programming Models p. Introduction Nonlinear Programming Models p. NLP problems minf(x) x S R n Standard form:

More information