A Study on Minimal-point Composite Designs


A Study on Minimal-point Composite Designs

by Yin-Jie Huang

Advisor: Ray-Bing Chen

Institute of Statistics, National University of Kaohsiung
Kaohsiung, Taiwan, R.O.C.

July 2007

Contents

1 Introduction
2 Construction for Minimal-point Designs
  2.1 The restriction for first-order designs
  2.2 The numerical method for finding added support points
3 Minimal-point Designs
  3.1 A- and A_s-optimal minimal-point composite designs
    3.1.1 A- and A_s-optimal minimal-point composite designs for k = 2
    3.1.2 A- and A_s-optimal minimal-point composite designs for k = 3
    3.1.3 A- and A_s-optimal minimal-point composite designs for k = 4
    3.1.4 A- and A_s-optimal minimal-point composite designs for k = 5
    3.1.5 A- and A_s-optimal minimal-point composite designs for k = 6
    3.1.6 A- and A_s-optimal minimal-point composite designs for k = 7
  3.2 Slope-rotatable minimal-point composite designs
    3.2.1 Slope-rotatable minimal-point composite designs for k = 2
    3.2.2 Slope-rotatable minimal-point composite designs for k = 3
    3.2.3 Slope-rotatable minimal-point composite designs for k = 4
    3.2.4 Slope-rotatable minimal-point composite designs for k = 5
    3.2.5 Slope-rotatable minimal-point composite designs for k = 6
    3.2.6 Slope-rotatable minimal-point composite designs for k = 7
4 Comparisons
5 Conclusion
A Appendix: The Figures of A- and A_s-optimal and Slope-rotatable Criteria
  A.1 A-optimal minimal-point composite designs
  A.2 A_s-optimal minimal-point composite designs

  A.3 Slope-rotatable minimal-point composite designs

References


A Study on Minimal-point Composite Designs

Advisor: Dr. Ray-Bing Chen
Institute of Statistics
National University of Kaohsiung

Student: Yin-Jie Huang
Institute of Statistics
National University of Kaohsiung

ABSTRACT

In this work, we are interested in constructing minimal-point designs for second-order response surfaces. A two-stage method is used for finding the composite designs. First we choose a proper first-order design with a small number of supports, and then the remaining supports are selected according to a pre-specified criterion. A modified simulated annealing algorithm is applied here to find these added supports numerically under the different criteria. Besides the A-optimal criterion, the A_s-optimal and slope-rotatable criteria are also used. Based on three different types of first-order designs, the corresponding minimal-point designs are found. Finally these designs are compared with central composite designs, other small composite designs and minimal-point designs by relative efficiencies. It is shown that the proposed composite designs perform well in general.

Keywords: Central composite designs, A-optimal criterion, A_s-optimal criterion, slope-rotatable criterion, simulated annealing algorithm

1 Introduction

Response surface methodology (RSM) is concerned with fitting a local response surface to a typically small set of observations, and the purpose of RSM is to determine what levels of the independent variables maximize or minimize the response. In RSM, the challenge is that the functional relationship, f, between the response y and the independent variables (factors), x_i, i = 1, 2, ..., k, is unknown. Under certain smoothness assumptions, this function, f, can be approximated well by lower-order polynomial models over a limited experimental region, X. There are two polynomial models used in RSM. The first approximation model is the first-order polynomial model, i.e.

y = β_0 + β_1 x_1 + ... + β_k x_k + ε,

where ε is a white noise for the response. If surface curvature exists, then the first-order polynomial model should be modified by adding higher-order terms. Therefore, at this time, the second-order polynomial model is employed for the surface approximation, i.e.

y = β_0 + Σ_{i=1}^k β_i x_i + Σ_{i=1}^k β_ii x_i^2 + Σ_{i>j} β_ij x_i x_j + ε,    (1)

and in total there are p = (k + 1)(k + 2)/2 parameters in the second-order polynomial model. The central composite design (CCD), initially proposed by Box and Wilson (1951), is the most popular design used for RSM. Basically a central composite design consists of a 2^k factorial portion (or a 2^{k-q} fractional factorial portion of resolution V or higher), a set of 2k axial points at distance α from the origin, and n_0 center points. The values of α and n_0 need to be decided before the experiments. Usually α can be chosen to be √k. Then, except for the center points, all the other experimental points are on the surface of the k-ball with radius √k, and this kind of CCD is called the spherical CCD. In a CCD, the 2^k factorial (or 2^{k-q} fractional factorial) design is used for fitting the first-order polynomial model, and then, when the model changes to the second-order model, the axial points are added.
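The CCD structure above can be sketched in a few lines of Python; this is our own illustration (function names are ours), with α = √k and a single center point as in the spherical CCD described in the text:

```python
import itertools
import math

def n_params(k):
    # Number of parameters in the second-order model: p = (k+1)(k+2)/2.
    return (k + 1) * (k + 2) // 2

def spherical_ccd(k, n0=1):
    # 2^k factorial (cube) points at levels +/-1.
    cube = [list(pt) for pt in itertools.product([-1.0, 1.0], repeat=k)]
    # 2k axial points at distance alpha = sqrt(k) from the origin.
    alpha = math.sqrt(k)
    axial = []
    for i in range(k):
        for sign in (-1.0, 1.0):
            pt = [0.0] * k
            pt[i] = sign * alpha
            axial.append(pt)
    # n0 center points.
    center = [[0.0] * k for _ in range(n0)]
    return cube + axial + center
```

For k = 3, for instance, the full spherical CCD already uses 8 + 6 + 1 = 15 runs, while the second-order model has only p = 10 parameters; this gap is exactly what the minimal-point designs below close.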

The weakness of CCDs is that the total number of support points of a CCD is extremely large, especially for large k. Hence when experiments are time-consuming and very expensive, a small composite design should be a better choice. In previous works on finding small composite designs, the number of design points for fitting the first-order polynomial model is usually reduced, and then 2k axial points and one center point are added. Hence the 2^k factorial (or 2^{k-q} fractional factorial) designs are replaced by other small designs, for example: 2^{k-q} fractional factorial designs with resolution III (Hartley, 1959), some irregular fractions of 2^k factorial designs (Westlake, 1965), and Plackett and Burman designs (Draper, 1985, and Draper and Lin, 1990). But these small composite designs still have more than p experimental points. Thus in this thesis, we focus on composite designs with the minimal number of design points, and these designs are called minimal-point designs. Notz (1982) defined a minimal-point design as one whose supports contain the minimal number of points such that the information matrix is nonsingular. Hence in this thesis, the minimal-point designs must satisfy the following two conditions:

Condition (1): the number of total design points is equal to p;
Condition (2): the corresponding information matrix is nonsingular.

To find the minimal-point designs, the two-stage procedure proposed by Chen, Lin and Tsai (2006) is used. The first stage is to find a proper first-order design, and the remaining added points are selected in the second stage. Hence, these two stages are

Stage 1: Choose a proper first-order design and add one center point;
Stage 2: Add the remaining support points according to an optimal criterion over a design space.

In this thesis, the design, ξ, has p trials at p distinct points in the design space X, and

can be represented as

ξ = ( x_1  x_2  ...  x_p
      1/p  1/p  ...  1/p ),

where x_i ∈ X, i = 1, ..., p, are the distinct supports of ξ. Let X(ξ) be the p × p model matrix of ξ and define β̂ to be the least-squares estimator of the parameter vector β of the polynomial model. Then the covariance matrix of β̂ is equal to

Cov(β̂) = σ^2 (X'(ξ)X(ξ))^{-1} = (σ^2/p) (p^{-1} X'(ξ)X(ξ))^{-1} = (σ^2/p) M^{-1}(ξ),

where M(ξ) = p^{-1} X'(ξ)X(ξ) is the information matrix of ξ. Generally we want to find a design ξ which minimizes the covariance matrix in some sense, for instance, minimizes the trace of the covariance matrix. Hence, in this thesis, the minimal-point designs are constructed according to different optimal criteria. From the two-stage method, our minimal-point designs are equal-weight designs and can be formulated as

ξ = ((n_1 + 1)/p) ξ_1 + (1 − (n_1 + 1)/p) ξ_2,    (2)

where ξ_1 is the design of the first-order portion and one center point, n_1 is the number of support points of the first-order design, and ξ_2 is the equal-weight design with the (p − n_1 − 1) distinct added support points. Due to Condition (2), n_1 + 1 must be less than p, because there is one center point.

This paper is organized as follows. In Section 2, the construction method for minimal-point designs is given, and a simulated annealing algorithm is introduced to find the remaining added points numerically. In Section 3, minimal-point designs based on different first-order designs are found. Then our minimal-point designs are compared with central composite designs and other small composite designs in Section 4. Finally a conclusion is given in Section 5.

2 Construction for Minimal-point Designs

This section is divided into two subsections according to the two-stage procedure. In Section 2.1, a restriction on the proper first-order designs is given, and then a numerical

method for selecting added support points is introduced in Section 2.2.

2.1 The restriction for first-order designs

In this subsection, we discuss what the first-order designs in the two-stage procedure should be. Since the information matrix of a minimal-point design is nonsingular, we have the following theorem for choosing first-order designs. According to Equation (2), the second-order model matrix, X(ξ), is partitioned into two parts,

X(ξ) = ( X(ξ_1)
         X(ξ_2) ),

where X(ξ_1) is the (n_1 + 1) × p matrix for the first-order design and one center point, and X(ξ_2) is the (p − n_1 − 1) × p matrix for the added support points. Therefore we have the following theorem about the rank of X(ξ_1).

Theorem 1. Since the information matrix of a minimal-point design is nonsingular, ξ_1 must satisfy that the rank of X(ξ_1) is equal to n_1 + 1.

Proof: Due to the nonsingularity of the information matrix, we have

rank X'(ξ)X(ξ) = rank X(ξ) = p.

Since X(ξ_2) has only p − n_1 − 1 rows, if rank X(ξ_1) < n_1 + 1, then rank X(ξ) ≤ rank X(ξ_1) + rank X(ξ_2) < (n_1 + 1) + (p − n_1 − 1) = p, which implies that the information matrix is singular, a contradiction. Hence the rank of X(ξ_1) must be equal to n_1 + 1.

In this thesis, only 2-level first-order designs are considered. Therefore the first-order designs in Chen et al. (2006) are considered, and they are shown in Table 1. There are three types of 2-level designs: 2^k factorial designs (or 2^{k-q} fractional factorial designs with resolution V), 2^{k-q} fractional factorial designs with resolution III, and Plackett and Burman designs. Here, we need to check whether the first-order designs in Table 1 all satisfy the condition in Theorem 1. In fact, except for the resolution V design with k = 3, all the other

Factors, k   Parameters   Cube points: 2^k (V)   Cube points: 2^{k-q} (III)   Cube points: (PBD)

Table 1: The number of supports for the different first-order designs

cases satisfy the condition in Theorem 1. For k = 3, when the first-order design is the 2^3 factorial design, the three columns of X(ξ_1) corresponding to x_1^2, x_2^2 and x_3^2 are identical, so the rank of X(ξ_1) must be less than 9. Hence this contradicts the condition in Theorem 1, and this design cannot be the first-order design under our construction method. Thus in this thesis, we consider the designs in Table 1, except the 2^3 factorial design, as our first-order designs.

2.2 The numerical method for finding added support points

Given a proper first-order design, we need to find the added points to form the minimal-point second-order response surface design. In Chen et al. (2006), these added points are chosen from the design space according to an optimal criterion. The design space for the case of k factors is the ball with radius √k, i.e. X = {(x_1, ..., x_k) : x_1^2 +

+ x_k^2 ≤ k}, because we follow the same structure as the spherical CCDs. In Chen et al. (2006), only the D-optimal criterion is used for selecting points. In this thesis, we consider three criteria: the A- and A_s-optimal criteria and the slope-rotatable criterion. Here, the A- and A_s-optimal criteria are useful for measuring the performance of the parameter estimates, and the slope-rotatable criterion is related to the information of predictions throughout the region of interest. The details of these criteria are described in the next section.

Due to the spherical design space, it is a good idea to represent these added support points in polar or spherical coordinates. For example, when k = 2, our minimal-point design contains 4 factorial points, one center point, and one added point. This added support point is represented by (x_1, x_2) = (r_1 cos θ_1, r_1 sin θ_1), where 0 < r_1 ≤ √2 and 0 ≤ θ_1 < 2π. Hence the row of X(ξ) for the added point is

(1, r_1 cos θ_1, r_1 sin θ_1, r_1^2 cos^2 θ_1, r_1^2 sin^2 θ_1, r_1^2 sin θ_1 cos θ_1).

This representation can easily be extended to the general case. When a first-order design and one center point are given, there are still p − n_1 − 1 remaining support points. Applying spherical coordinates, these remaining points are represented as x_i = (x_i1, x_i2, x_i3, ..., x_ik), i = 1, ..., (p − n_1 − 1), where

x_i1 = r_i sin θ_i(k−1) ⋯ sin θ_i2 cos θ_i1;
x_i2 = r_i sin θ_i(k−1) ⋯ sin θ_i2 sin θ_i1;
x_i3 = r_i sin θ_i(k−1) ⋯ sin θ_i3 cos θ_i2;

...
x_ik = r_i cos θ_i(k−1),

with 0 < r_i ≤ √k and 0 ≤ θ_ij < 2π. Hence the information matrix of ξ is now a function of the r_i and θ_ij. For simplicity, we use θ_1, ..., θ_q to index all the angles. Let r = (r_1, ..., r_{p−n_1−1}) and θ = (θ_1, ..., θ_q). Define d(r, θ) to be the objective function based on X'(ξ)X(ξ). For example, when the A-optimal criterion is used, the objective function is Tr[(X'(ξ)X(ξ))^{-1}]. Hence, to find our minimal-point design is to minimize the objective function directly with respect to r and θ. However, d(r, θ) may be too complicated to write down in closed form, especially when k is large. Thus the Best Angle and Radius (BAR) algorithm, proposed by Chen et al. (2006), is used to find the minimum of d(r, θ) numerically. Basically the BAR algorithm is a simulated annealing (SA) type algorithm (Metropolis et al., 1953, and Kirkpatrick et al., 1983) for optimizing d(r, θ) with respect to r and θ. Since the objective function d(r, θ) is to be minimized, we define a density π_T(t)(r, θ) in our BAR sampler as

π_T(t)(r, θ) ∝ exp(−d(r, θ)/T(t)),

where T(t) is the temperature at time t and is a decreasing function from the initial temperature, T(0) > 0, to 0+. Then the BAR sampler is the following:

Step 1. Select the initial angles, θ_j^(0), j = 1, ..., q, and initial radii, r_i^(0), i = 1, ..., p − n_1 − 1, with 0 < r_i^(0) ≤ √k and 0 ≤ θ_j^(0) < 2π.

Step 2. Run N_t iterations of the Gibbs sampler to sample r and θ from π_T(t)(r, θ); at each iteration of the Gibbs sampler,

1. Sample r_i, i = 1, ..., p − n_1 − 1, from π_T(t)(r_i | r_{−i}, θ);
2. Draw θ_j, j = 1, ..., q, from π_T(t)(θ_j | r, θ_{−j}) by the inversion method.

Step 3. Set t to t + 1 and go to Step 2, until t is large enough.

In the Gibbs sampler, r_{−i} is the set of all radii r_m except the i-th radius, and θ_{−j} is the set of all angles θ_m except the j-th angle, i.e. r_{−i} = (r_1, ..., r_{i−1}, r_{i+1}, ..., r_{p−n_1−1}) and θ_{−j} = (θ_1, ..., θ_{j−1}, θ_{j+1}, ..., θ_q). Since the SA algorithm can only find local extreme values and may be affected by the initial states, we choose the initial angles and radii randomly. We also check the tendency of the objective function d(r, θ) in the BAR sampler to make sure that d(r, θ) is close to an extreme value.

3 Minimal-point Designs

Applying our construction method, the minimal-point designs are found according to different criteria. As mentioned before, two types of criteria are chosen here: one concerns the parameter estimates, and the other concerns prediction. Thus this section is divided into two parts according to these two types of criteria. The A- and A_s-optimal minimal-point designs are shown in Section 3.1, and in Section 3.2 the slope-rotatable criterion is considered.

3.1 A- and A_s-optimal minimal-point composite designs

In A-optimality, Tr{M^{-1}(ξ)}, the average variance of the parameter estimates, is minimized. Hence when the A-optimal criterion is considered, the objective function is defined as

d_{A,k}(r, θ) = Tr[(X'(ξ)X(ξ))^{-1}].

In the A-optimal criterion, the performance of all parameters in the model is considered. However, we may not be interested in all the parameters. Following Fedorov (1972), we consider the A_s-optimal criterion. In A_s-optimality, the parameters of the second-order terms and interaction terms are treated as the parameters of interest, and the parameters of the first-order model are nuisance parameters. Therefore, the

model is divided into two groups:

E(y) = f_1'(x)β_1 + f_2'(x)β_2,

where f_1(x) = (1, x_1, ..., x_k)'; β_1 = (β_0, β_1, ..., β_k)'; f_2(x) contains all second-order terms and interaction terms, x_i^2 and x_i x_j; and β_2 is the vector of the corresponding parameters of interest, β_ij, 1 ≤ i ≤ j ≤ k. According to this partition, we can divide the information matrix into four parts,

M(ξ) = ( M_11(ξ)  M_12(ξ)
         M_21(ξ)  M_22(ξ) ),

where M_11(ξ) is the information matrix for β_1 and M_22(ξ) is the information matrix for β_2. Define

M̃(ξ) = (0 I) M^{-1}(ξ) (0 I)' = (M_22(ξ) − M_21(ξ) M_11^{-1}(ξ) M_12(ξ))^{-1}.

Thus a design ξ* is A_s-optimal if

ξ* = arg min_ξ Tr(M̃(ξ)).

Hence the objective function for A_s-optimality is

d_{A_s,k}(r, θ) = Tr[(0 I)(X'(ξ)X(ξ))^{-1}(0 I)'].

For the cases of k = 2, ..., 7, the minimal-point composite designs for the A- and A_s-optimal criteria are shown below, and we denote by ξ_{A,k} and ξ_{A_s,k} the minimal-point designs for k factors found by the A- and A_s-optimal criteria, respectively.

3.1.1 A- and A_s-optimal minimal-point composite designs for k = 2

For the case of k = 2, the first-order design is the 2^2 factorial design. Since there are 6 parameters in the second-order polynomial model, we need to add one more support point to form a minimal-point design. This added support point is represented in polar coordinates, i.e.

(x_1, x_2) = (r_1 cos θ_1, r_1 sin θ_1),    (3)

where 0 < r_1 ≤ √2 and 0 ≤ θ_1 < 2π.

A-optimal criterion

Since there is only one added point, we have the following lemma for the trace of (X'(ξ)X(ξ))^{-1}.

Lemma 1. When k = 2, based on the 2^2 factorial design, Tr[(X'(ξ)X(ξ))^{-1}] is the sum of a constant and a positive term of the form c(r_1) sec^2[2θ_1], where c(r_1) is decreasing in r_1.

When θ_1 = 0, the minimum value of sec^2[2θ_1] is 1, and then Tr[(X'(ξ)X(ξ))^{-1}] is a decreasing function with respect to r_1. Thus, the minimum value is attained at θ_1 = 0 and r_1 = √2. Hence the added support point is (√2, 0).

Now we apply the BAR sampler to find the result numerically. With a random initial state, we set T(t) = (5t)^{-1/3}, and the iteration number of the Gibbs sampling, N_t, is set to be 10. In total our algorithm is iterated 500 times. The optimal added support point found by the BAR sampler is very close to an axial point of the spherical CCD with two factors. In fact, we can show that the added support point can be any one of the four axial points. Figure 1 displays the tendency of the trace of (X'(ξ)X(ξ))^{-1} in the BAR sampler.

Figure 1: For k = 2, the trace of the A-optimal criterion for 500 × 10 = 5000 steps.

A_s-optimal criterion

Under our design structure, we have the following lemma for Tr(M̃(ξ)).

Lemma 2. When k = 2, based on the 2^2 factorial design, Tr(M̃(ξ)) is the sum of a constant and a positive term of the form c(r_1) sec^2[2θ_1], where c(r_1) is decreasing in r_1.

It is easy to show that the minimum value of Tr(M̃(ξ)) is attained when θ_1 = 0 and r_1 = √2. Hence the corresponding support point is (√2, 0).

We also use the BAR sampler to find this added point. We suppose T(t) = (5t)^{-1/3} and N_t = 10. After 500 iterations of the BAR sampler, the optimal added support point is again close to an axial point. In fact, the added support point for the A_s-optimal minimal-point design can be any one of the four axial points. Figure 2 displays the tendency of the trace of M̃(ξ).
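The k = 2 conclusion above (an optimum at an axial point with r_1 = √2) can be checked with a brute-force grid search over the design space; this is our own rough stand-in for the BAR sampler, not the algorithm itself, and the grid avoids the angles θ_1 ∈ {π/4, 3π/4, ...} at which the design is singular:

```python
import math
import numpy as np

def trace_inv(r, theta):
    # d(r, theta) = Tr[(X'X)^{-1}] for the 2^2 factorial + center point +
    # one added point (r cos theta, r sin theta); model columns are
    # (1, x1, x2, x1^2, x2^2, x1 x2).
    pts = [(1, 1), (1, -1), (-1, 1), (-1, -1), (0, 0),
           (r * math.cos(theta), r * math.sin(theta))]
    X = np.array([[1.0, a, b, a * a, b * b, a * b] for a, b in pts])
    return np.trace(np.linalg.inv(X.T @ X))

# Brute-force search over 0 < r <= sqrt(2), 0 <= theta < 2*pi.
grid = [(trace_inv(r, t), r, t)
        for r in np.linspace(0.3, math.sqrt(2), 60)
        for t in np.linspace(0.0, 2.0 * math.pi, 241, endpoint=False)]
best_val, best_r, best_t = min(grid)
```

The search returns the largest radius r_1 = √2 with θ_1 at a multiple of π/2, in agreement with Lemmas 1 and 2.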

Figure 2: For k = 2, the trace of the A_s-optimal criterion for 500 × 10 = 5000 steps.

From the above results, we get the same added support point for the A- and A_s-optimal criteria. Hence for k = 2, the A- and A_s-optimal minimal-point designs are equal to each other.

3.1.2 A- and A_s-optimal minimal-point composite designs for k = 3

For k = 3, the first-order design is chosen from columns (1, 2, 3) of the 4-run P-B design, and the design space is the 3-ball with radius √3. Since there are 10 parameters in the second-order polynomial model, five added support points are required for a minimal-point second-order design. The coordinates of these added points are written as x_i = (x_i1, x_i2, x_i3), where x_i1 = r_i sin θ_i2 cos θ_i1, x_i2 = r_i sin θ_i2 sin θ_i1 and x_i3 = r_i cos θ_i2, i = 1, ..., 5.

A-optimal criterion

We find the A-optimal added points by the BAR sampler with T(t) = (3t)^{-1/3} and N_t = 10. After 500 iterations, the minimum value of Tr[(X'(ξ_{A,3})X(ξ_{A,3}))^{-1}] is

.8788, and the radii of the five A-optimal added points are all close to √3.

A_s-optimal criterion

The BAR sampler is also used to find the A_s-optimal support points. Here we assume T(t) = (5t)^{-1/3}, and N_t is set to be 10. In total our algorithm is iterated 500 times. The corresponding optimal value is .3545, and the five radii are also close to √3.

Here we find, from the following table, that the A- and A_s-optimal minimal-point designs for k = 3 are similar to each other.

              Tr[(X'(ξ)X(ξ))^{-1}]    Tr(M̃(ξ))
ξ_{A,3}
ξ_{A_s,3}

3.1.3 A- and A_s-optimal minimal-point composite designs for k = 4

For k = 4, the design space is the 4-ball with radius 2, and there are 15 parameters in the second-order polynomial model. The first-order design is the 2^{4-1} fractional factorial

design with resolution III proposed by Hartley (1959). Hence six added support points are required for a minimal-point design, and the coordinates of these added points are x_i = (x_i1, x_i2, x_i3, x_i4), where x_i1 = r_i sin θ_i3 sin θ_i2 cos θ_i1, x_i2 = r_i sin θ_i3 sin θ_i2 sin θ_i1, x_i3 = r_i sin θ_i3 cos θ_i2 and x_i4 = r_i cos θ_i3, i = 1, ..., 6.

A-optimal criterion

Here we use the BAR sampler to find the A-optimal added points, with T(t) = (10t)^{-1/3} and N_t = 10. After 500 iterations, the trace of (X'(ξ_{A,4})X(ξ_{A,4}))^{-1} is .86, and the six radii, r_1, ..., r_6, are close to the boundary.

A_s-optimal criterion

Applying our BAR sampler, we search out the A_s-optimal added supports under the assumptions T(t) = (5t)^{-1/3} and N_t = 10. After 500 iterations, the minimum trace of M̃(ξ_{A_s,4}) is .463.

The numerical result is that the six radii, r_i, i = 1, ..., 6, are close to 2. Following the same process as in the case k = 3, a comparison table is given below.

              Tr[(X'(ξ)X(ξ))^{-1}]    Tr(M̃(ξ))
ξ_{A,4}
ξ_{A_s,4}

Hence we regard the A- and A_s-optimal minimal-point designs as similar to each other for k = 4.

3.1.4 A- and A_s-optimal minimal-point composite designs for k = 5

There are two first-order designs for k = 5: one is the 2^{5-1} fractional factorial design with resolution V, and the other is the Plackett and Burman design with k = 5. There are 21 unknown parameters in the second-order polynomial model, and the design space is the 5-ball with radius √5.

2^{5-1} fractional factorial design with resolution V

Since the first-order design is the 2^{5-1} fractional factorial design with resolution V, four added support points are required for a minimal-point second-order design, and the coordinates of these four added points are x_i = (x_i1, x_i2, x_i3, x_i4, x_i5), where x_i1 = r_i sin θ_i4 sin θ_i3 sin θ_i2 cos θ_i1, x_i2 = r_i sin θ_i4 sin θ_i3 sin θ_i2 sin θ_i1, x_i3 = r_i sin θ_i4 sin θ_i3 cos θ_i2, x_i4 = r_i sin θ_i4 cos θ_i3 and x_i5 = r_i cos θ_i4, i = 1, ..., 4.

A-optimal criterion

We use the BAR sampler to find the A-optimal radii, r_1, ..., r_4, and angles, θ_ij. Here we assume T(t) = (10t)^{-1/3} and N_t = 10. In total our algorithm is iterated 500 times,

and the trace of (X'(ξ_{A,5})X(ξ_{A,5}))^{-1} is .64. The radii of the four added support points are all close to √5. Because these added supports are close to the axial points (0, 0, 0, √5, 0), (0, 0, 0, 0, √5), (0, √5, 0, 0, 0) and (0, 0, √5, 0, 0), we replace the A-optimal added points with these four axial points. However, the trace of this new design is .600, which is larger. Thus, these axial points cannot be our A-optimal added points.

A_s-optimal criterion

Here we still use the BAR sampler to find the A_s-optimal added points, with T(t) = (10t)^{-1/3} and N_t = 10. After 500 iterations, the four radii are also close to √5, and the trace of M̃(ξ_{A_s,5}) is .307. We find that the added support points are also close to the axial points (0, 0, √5, 0, 0), (0, 0, 0, √5, 0), (0, 0, 0, 0, √5) and (√5, 0, 0, 0, 0). Hence we replace the A_s-optimal added points with these four axial points. However, the trace of this new design is again larger, which indicates that these axial points cannot be our A_s-optimal added support points.

The table below compares the A- and A_s-optimal minimal-point designs.

              Tr[(X'(ξ)X(ξ))^{-1}]    Tr(M̃(ξ))
ξ_{A,5}
ξ_{A_s,5}

We find that the A- and A_s-optimal minimal-point designs are equal to each other when the first-order design is the 2^{5-1} fractional factorial design with resolution V.

Plackett and Burman design for k = 5

In this part, the first-order design consists of columns (1, 2, 3, 5, 8) from the 12-run P-B design with one run removed. Since there are 21 parameters in the model, we have 11 support

points in the first-order design, and 9 added support points are required for a minimal-point second-order design. The coordinates of these added points are x_i = (x_i1, x_i2, x_i3, x_i4, x_i5), where x_i1 = r_i sin θ_i4 sin θ_i3 sin θ_i2 cos θ_i1, x_i2 = r_i sin θ_i4 sin θ_i3 sin θ_i2 sin θ_i1, x_i3 = r_i sin θ_i4 sin θ_i3 cos θ_i2, x_i4 = r_i sin θ_i4 cos θ_i3 and x_i5 = r_i cos θ_i4, i = 1, ..., 9.

A-optimal criterion

Our BAR sampler is applied to find the added points of the A-optimal criterion under the assumptions T(t) = (10t)^{-1/3} and N_t = 10. In total our algorithm is iterated 500 times. The trace of (X'(ξ_{A,5})X(ξ_{A,5}))^{-1} is .646, and the radii, r_1, ..., r_9, are all close to the boundary √5.

A_s-optimal criterion

Reapplying our BAR sampler, we can find the A_s-optimal radii, r_1, ..., r_9, and angles, θ_ij. Here we assume T(t) = (10t)^{-1/3} and N_t = 10. After 500 iterations, the minimum value of Tr(M̃(ξ_{A_s,5})) is .797, and the A_s-optimal added support

points are found by the BAR sampler; the radii of those nine added support points are also close to √5. From the following table, when the first-order design is the P-B design for k = 5, it seems that the A- and A_s-optimal minimal-point designs are similar to each other.

              Tr[(X'(ξ)X(ξ))^{-1}]    Tr(M̃(ξ))
ξ_{A,5}
ξ_{A_s,5}

3.1.5 A- and A_s-optimal minimal-point composite designs for k = 6

For k = 6, the design space is the 6-ball with radius √6. The first-order design is the 2^{6-q} fractional factorial design with resolution III, and there are 28 unknown parameters in the second-order polynomial model. Therefore, the remaining support points are added to form our minimal-point design. However, from the above numerical results, the radii of the added support points are all close to the boundary √k. In order to simplify our algorithm, we assume one common radius for all the added points, i.e. all added points have the same radius, r. Hence the coordinates of these points are represented as x_i = (x_i1, x_i2, x_i3, x_i4, x_i5, x_i6),

where x_i1 = r sin θ_i5 sin θ_i4 sin θ_i3 sin θ_i2 cos θ_i1, x_i2 = r sin θ_i5 sin θ_i4 sin θ_i3 sin θ_i2 sin θ_i1, x_i3 = r sin θ_i5 sin θ_i4 sin θ_i3 cos θ_i2, x_i4 = r sin θ_i5 sin θ_i4 cos θ_i3, x_i5 = r sin θ_i5 cos θ_i4 and x_i6 = r cos θ_i5, for each added point i.

A-optimal criterion

Our BAR sampler is applied to find the added support points of the A-optimal criterion. Here we assume T(t) = (5t)^{-1/3}, and N_t is set to be 10. After 500 iterations, the minimum trace of (X'(ξ_{A,6})X(ξ_{A,6}))^{-1} is .57, and the common radius, r, is close to √6.

A_s-optimal criterion

Here we reapply the BAR sampler to find the A_s-optimal added points, with T(t) = (5t)^{-1/3} and N_t = 10. In total our algorithm is iterated 500 times. The minimum value of Tr(M̃(ξ_{A_s,6})) is .3000, and the A_s-optimal added support points are found

by the BAR sampler; the single radius is also close to √6. From the following table, we conclude that the A- and A_s-optimal minimal-point designs are similar to each other for k = 6.

              Tr[(X'(ξ)X(ξ))^{-1}]    Tr(M̃(ξ))
ξ_{A,6}
ξ_{A_s,6}

3.1.6 A- and A_s-optimal minimal-point composite designs for k = 7

For k = 7, there are 36 parameters in the second-order polynomial model. The first-order design is chosen as seven columns (among them columns 5, 6, 7 and 9) of the 24-run P-B design, with run 3 and one further run removed. Hence the first-order design has 22 supports, and we need to add 13 support points to form the minimal-point design. We also follow the same structure as the case k = 6: only one radius is used for these added support points. Hence the 13 added points are represented by the coordinates x_i = (x_i1, x_i2, x_i3, x_i4, x_i5, x_i6, x_i7),

where x_i1 = r sin θ_i6 sin θ_i5 sin θ_i4 sin θ_i3 sin θ_i2 cos θ_i1, x_i2 = r sin θ_i6 sin θ_i5 sin θ_i4 sin θ_i3 sin θ_i2 sin θ_i1, x_i3 = r sin θ_i6 sin θ_i5 sin θ_i4 sin θ_i3 cos θ_i2, x_i4 = r sin θ_i6 sin θ_i5 sin θ_i4 cos θ_i3, x_i5 = r sin θ_i6 sin θ_i5 cos θ_i4, x_i6 = r sin θ_i6 cos θ_i5 and x_i7 = r cos θ_i6, i = 1, ..., 13.

A-optimal criterion

We employ the BAR sampler to find the A-optimal radius and angles. We suppose T(t) = (10t)^{-1/3} and set N_t = 10. After 500 iterations, the minimum trace of (X'(ξ_{A,7})X(ξ_{A,7}))^{-1} is attained, with the radius close to √7.

A_s-optimal criterion

Our BAR sampler is used to find the A_s-optimal added support points, with the suppositions T(t) = (10t)^{-1/3} and N_t = 10. In total our algorithm is iterated 500 times,

and the A_s-optimal added support points are obtained. The value of Tr(M̃(ξ_{A_s,7})) is .347, and the radius is also close to √7. Here we find, from the table given below, that the A- and A_s-optimal minimal-point designs for k = 7 are similar to each other.

              Tr[(X'(ξ)X(ξ))^{-1}]    Tr(M̃(ξ))
ξ_{A,7}
ξ_{A_s,7}

3.2 Slope-rotatable minimal-point composite designs

Slope-rotatability has been studied in Hader and Park (1978), Park (1987), and Park and Kim (1992). These works consider design aspects of response surface experiments in which the emphasis is on estimation of differences in response rather than the absolute value of the response variable. Estimation of differences in response at different points in the factor space will often be of great importance. If differences at points close together in the factor

space are involved, estimation of the local slopes (the rates of change) of the response surface is of interest. In this thesis, a measure of slope-rotatability for response surface designs proposed by Park and Kim (1992) is used. Define

v_i = Var(β̂_i), v_ii = Var(β̂_ii), v_ij = Var(β̂_ij),
c_i,ii = Cov(β̂_i, β̂_ii), c_i,ij = Cov(β̂_i, β̂_ij), c_ii,ij = Cov(β̂_ii, β̂_ij), c_ij,il = Cov(β̂_ij, β̂_il).

The measure of slope-rotatability, Q_k(ξ), is

Q_k(ξ) = (1/((k − 1)σ^4)) [(k + 2)(k + 4) A_k + (k + 4) B_k + 3 C_k],

where A_k, B_k and C_k are sums of squares and cross-products of the variances and covariances defined above; their explicit forms are given in Park and Kim (1992). Park and Kim (1992) showed that Q_k(ξ) is zero if and only if a design ξ is slope-rotatable, and that Q_k(ξ) becomes larger as ξ deviates from a slope-rotatable design. Thus our objective function is set to be

d_{Q_k,k}(r, θ) = (1/(k − 1)) [(k + 2)(k + 4) A_k + (k + 4) B_k + 3 C_k].

In the following, for k = 2, 3, ..., 7, the minimal-point composite designs for the slope-rotatable criterion are shown.

3.2.1 Slope-rotatable minimal-point composite designs for k = 2

For k = 2, the design space is the 2-ball with radius √2, and one more point is added to form a minimal-point design. Then, using the polar coordinate representation, we have the following lemma for the measure of slope-rotatability.

Lemma 3. When k = 2, based on the 2² factorial design, the measure of slope-rotatability reduces to a closed-form function Q_2(ξ) of the polar coordinates (r, θ) of the added point, involving polynomials in r² and sec⁴ terms in θ.

It is easy to see that the minimum value of Q_2(ξ) is obtained when θ = 0. Since Q_2(ξ) is a decreasing function with respect to r, the minimum value of Q_2(ξ) is attained at θ = 0 and r = √2. Hence the added point is (√2, 0).

Now we use our BAR sampler to find this result numerically. With a random initial state, we take T(t) = (5t)^(−1/3) and a fixed inner sample size N_t, and iterate the algorithm 500 times in total. The added point found by the BAR sampler is very close to an axial point of the spherical CCD with two factors; in fact, any one of the four axial points can serve as the added support point under the slope-rotatable criterion. Figure 3 shows the trend of the measure of slope-rotatability over the early iterations, and from this figure our algorithm quickly converges to a local extreme.

Figure 3: For k = 2, the measure of the slope-rotatable criterion over the early steps of the algorithm.
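The BAR sampler itself is the modified simulated annealing algorithm described earlier in the thesis; as a rough stand-in for it, a plain simulated-annealing search over the polar coordinates (r, θ) of the added point, with the cooling schedule T(t) = (5t)^(−1/3), can be sketched as follows. The quadratic toy objective is purely illustrative: it shares the lemma's minimizer (θ, r) = (0, √2) but is not the slope-rotatability measure.

```python
import math
import random

def anneal(objective, steps=1500, seed=1):
    """Plain simulated-annealing minimisation over (r, theta),
    r in [0, sqrt(2)], with cooling schedule T(t) = (5t)^(-1/3)."""
    rng = random.Random(seed)
    r = rng.uniform(0.0, math.sqrt(2))
    theta = rng.uniform(-math.pi, math.pi)
    f_cur = objective(r, theta)
    best, f_best = (r, theta), f_cur
    for t in range(1, steps + 1):
        temp = (5 * t) ** (-1 / 3)
        # propose a local move, clipping r back into the design region
        r_new = min(math.sqrt(2), max(0.0, r + rng.gauss(0.0, 0.1)))
        theta_new = theta + rng.gauss(0.0, 0.1)
        f_new = objective(r_new, theta_new)
        # always accept downhill moves; accept uphill with Metropolis probability
        if f_new <= f_cur or rng.random() < math.exp((f_cur - f_new) / temp):
            r, theta, f_cur = r_new, theta_new, f_new
            if f_cur < f_best:
                best, f_best = (r, theta), f_cur
    return best, f_best

# Toy objective sharing the lemma's minimiser (theta, r) = (0, sqrt(2)).
(r_hat, theta_hat), f_best = anneal(lambda r, th: (math.sqrt(2) - r) ** 2 + th ** 2)
```

Because downhill moves are always accepted and the best visited state is tracked, the search settles near the boundary point (√2, 0), in line with the lemma.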

3.2.2 Slope-rotatable minimal-point composite designs for k = 3

For k = 3, the first-order design is again chosen as columns (1, 2, 3) of the 4-run P-B design. Five added points are required to form a minimal-point design. Since the closed form of the measure of slope-rotatability is too complicated to write down, we use the BAR sampler to find these five added points, taking T(t) = (5t)^(−1/3) with a fixed inner sample size N_t and iterating the algorithm 500 times in total. The radii of the five added points found are very close to √3. Hence, we conclude that these added points should be on the boundary of the design space X.

3.2.3 Slope-rotatable minimal-point composite designs for k = 4

For k = 4, there are 15 parameters in the second-order polynomial model, and the design space is the 4-ball with radius 2. The first-order design is the 2^(4−1) fractional factorial design with resolution III, so six added points are required for a minimal-point design. We apply our BAR sampler to find the minimum value of the measure of slope-rotatability with T(t) ∝ t^(−1/3) and a fixed inner sample size N_t. After 500 iterations, the added points are obtained.

The six radii of the resulting points are very close to 2. However, if the initial angles are instead set to particular multiples of π, then after 500 iterations the added points found are close to axial points, so we choose six axial points to be our added points. With (±2, 0, 0, 0), (0, ±2, 0, 0) and (0, 0, ±2, 0), the measure of slope-rotatability is 0.38. So the slope-rotatable minimal-point designs should be constructed from these axial points, (±2, 0, 0, 0), (0, ±2, 0, 0) and (0, 0, ±2, 0).

3.2.4 Slope-rotatable minimal-point composite designs for k = 5

For k = 5, two kinds of first-order designs are used: one is the 2^(5−1) fractional factorial design with resolution V, and the other is the Plackett–Burman design for k = 5.

2^(5−1) fractional factorial design with resolution V

When the 2^(5−1) fractional factorial design with resolution V is the first-order design, four added points are needed to form a minimal-point design. We use the BAR sampler to find the added points under the slope-rotatable criterion with T(t) ∝ t^(−1/3) and a fixed inner sample size N_t. After 500 iterations, the radii of the four added points found are close to √5, and these added support points are also close to the axial points (0, 0, 0, 0, √5), (0, 0, √5, 0, 0), (√5, 0, 0, 0, 0) and (0, 0, 0, √5, 0), up to sign. However, when the added points are replaced by these four axial points, the measure of slope-rotatability becomes larger. Thus, the axial points cannot serve as our slope-rotatable added points here.

Plackett–Burman design for k = 5

Now we discuss the case of the Plackett–Burman design with k = 5. Here we need nine added points to form a minimal-point design. Applying our BAR sampler with the same settings, the algorithm is again iterated 500 times; the radii of the nine added points found are also close to √5.
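Both first-order designs used for k = 5 have standard explicit constructions: the resolution V half fraction can be generated from the defining relation I = ABCDE, and the 12-run Plackett–Burman design from its usual cyclic generator row. A sketch (these are the textbook constructions, not necessarily the exact matrices used in the thesis):

```python
import itertools
import numpy as np

def fractional_factorial_2_5_1():
    """2^(5-1) resolution V design: full 2^4 in A, B, C, D with E = ABCD
    (defining relation I = ABCDE), 16 runs."""
    base = np.array(list(itertools.product([-1, 1], repeat=4)))
    e = base.prod(axis=1, keepdims=True)
    return np.hstack([base, e])

def plackett_burman_12():
    """12-run Plackett-Burman design: 11 cyclic shifts of the generator row
    plus a final row of -1s."""
    gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
    rows = [np.roll(gen, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.array(rows)

ff = fractional_factorial_2_5_1()   # 16 runs x 5 factors
pb = plackett_burman_12()           # 12 runs x 11 factors; use pb[:, :5] for k = 5
```

Taking any five columns of the Plackett–Burman matrix, e.g. `pb[:, :5]`, gives a k = 5 first-order design with mutually orthogonal columns.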

3.2.5 Slope-rotatable minimal-point composite designs for k = 6

For k = 6, the 2^(6−2) fractional factorial design with resolution III is used as the first-order design, and 11 added points are needed to form our minimal-point design. For k = 2, 3, ..., 5 we observed that the radii of the added points are all close to the boundary √k, so here we constrain all added points to share a common radius r. Setting T(t) = (5t)^(−1/3) with a fixed inner sample size N_t and applying our BAR sampler for 500 iterations, the numerical result is that the common radius at the minimum of the measure of slope-rotatability is close to √6.

3.2.6 Slope-rotatable minimal-point composite designs for k = 7

For k = 7, the first-order design consists of columns (1, 2, 5, 6, 7, 9, 10) of the 24-run P-B design with two runs removed. We need 13 added points to form the minimal-point design, and only one common radius is used for these added points. We employ the BAR sampler with T(t) ∝ t^(−1/3) and a fixed inner sample size N_t, iterating the algorithm 500 times in total. The minimum value of the measure of slope-rotatability

is 0.046, and the single common radius is also close to √7, so the added points again lie near the boundary of the design space.

4 Comparisons

In the last section, the minimal-point designs for the three criteria were found numerically. Here the minimal-point designs for the different criteria are first compared with each other, and then relative efficiencies are used for comparisons among our designs and other minimal-point (and small composite) designs. For convenience, we denote by ξ_Qk,k the minimal-point design for k factors found under the slope-rotatable criterion. To compare designs, we use the optimal relative efficiency: based on a criterion φ, the φ relative efficiency of a design ξ_ι relative to ξ_τ is defined as

φ(ξ_ι, ξ_τ) = φ(M(ξ_ι)) / φ(M(ξ_τ)).

If this value is close to 1, the two designs are about equally informative in terms of the φ-criterion; if φ(M(ξ_ι)) is larger than φ(M(ξ_τ)), then ξ_τ contains more information than ξ_ι in terms of the φ-criterion. From our numerical results in Section 3, the A- and A_s-optimal minimal-point designs

should be similar to each other. Therefore, here we only compare the A-optimal and slope-rotatable minimal-point designs. The values of Tr[(X′(ξ)X(ξ))⁻¹] for the slope-rotatable minimal-point designs and of Q_k(ξ) for the A-optimal minimal-point designs are shown in Table 2, which covers the 2^(k−1) resolution V fractions (k = 2, 5), the resolution III fractions (k = 4, 6), and the P-B designs (k = 3, 5, 7).

Table 2: Comparison of our A-optimal and slope-rotatable minimal-point designs, reporting Tr[(X′(ξ)X(ξ))⁻¹] and Q_k(ξ) for ξ_A,k and ξ_Qk,k under each first-order design.

From this table, we observe that when the first-order design is the 2^(k−1) fractional factorial design with resolution V, the A-optimal and slope-rotatable minimal-point designs are similar to each other, because the corresponding relative efficiencies shown in Table 3 are close to 1 under both the A-optimal and Q_k criteria. In fact, from the two lemmas for k = 2, these two minimal-point designs are the same. From Table 3, the relative efficiencies A(ξ_Qk, ξ_A) are all at least 80%; hence the slope-rotatable minimal-point designs contain at least 80% of the information of the A-optimal minimal-point designs. However, except for k = 3, the Q_k relative efficiencies of ξ_A relative to ξ_Qk are smaller.
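Under the A-criterion, the relative efficiency defined above is just a ratio of traces of (X′X)⁻¹. A self-contained sketch (the k = 2 spherical CCD below is only an illustrative design; comparing a design with itself simply returns 1):

```python
import itertools
import numpy as np

def model_matrix(points):
    """Second-order model matrix: intercept, linear, quadratic, interactions."""
    pts = np.asarray(points, dtype=float)
    n, k = pts.shape
    cols = [np.ones(n)]
    cols += [pts[:, i] for i in range(k)]
    cols += [pts[:, i] ** 2 for i in range(k)]
    cols += [pts[:, i] * pts[:, j] for i, j in itertools.combinations(range(k), 2)]
    return np.column_stack(cols)

def a_value(points):
    """A-criterion value Tr[(X'X)^{-1}] (smaller is better)."""
    X = model_matrix(points)
    return np.trace(np.linalg.inv(X.T @ X))

def relative_efficiency(design_iota, design_tau):
    """phi(xi_iota, xi_tau) = phi(M(xi_iota)) / phi(M(xi_tau)), A-criterion."""
    return a_value(design_iota) / a_value(design_tau)

s2 = np.sqrt(2)
ccd2 = [(-1, -1), (-1, 1), (1, -1), (1, 1),
        (s2, 0), (-s2, 0), (0, s2), (0, -s2), (0, 0)]
eff = relative_efficiency(ccd2, ccd2)   # a design relative to itself gives 1
```

The same ratio with Q_k in place of the A-value gives the Q_k relative efficiencies reported in the tables.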

Table 3: The relative efficiencies A(ξ_Qk, ξ_A) and Q_k(ξ_A, ξ_Qk) between the A-optimal and slope-rotatable minimal-point designs, for each first-order design.

Table 4: The numbers of support points n_CCD, n_small and n_min in CCDs, small composite designs and minimal-point designs.

Thus it seems that the slope-rotatable minimal-point design is a robust design among the A-, A_s-optimal and slope-rotatable criteria. Before we compare our designs with other minimal-point and small composite designs, we need some notation. We define ξ_CCD to be the spherical CCD with n_CCD support points. The small composite designs proposed by Draper and Lin (1990) are denoted by ξ_small, with n_small support points. Since the small composite designs still have more than p support points, two further types of minimal-point designs are constructed here by choosing p − n axial points according to the A-optimal and Q_k criteria, where n is the number of remaining support points; these two types of designs are denoted by ξ_min,A and ξ_min,Qk. Table 4 shows the different numbers of design points for ξ_CCD, ξ_small, and ξ_min.

Next, we compare our designs with the other minimal-point and small composite designs according to two different criteria, the A-optimal and Q_k criteria.

Comparison according to the A-optimal criterion

In this part, we compare our A-optimal minimal-point designs with the spherical CCDs and the other small composite designs. To compare designs fairly, we use the relative A-optimal point efficiencies, defined as

A-Peff(ξ_CCD, ξ_A) = (n_A / n_CCD) A(ξ_CCD, ξ_A);
A-Peff(ξ_small, ξ_A) = (n_A / n_small) A(ξ_small, ξ_A);
A-Peff(ξ_min, ξ_A) = A(ξ_min, ξ_A).

When the relative A-optimal point efficiency is less than 1, our A-optimal minimal-point design is better. The values of the trace of (X′(ξ)X(ξ))⁻¹ and the relative A-optimal point efficiencies are displayed in Tables 5 and 6.

Table 5: The trace of (X′X)⁻¹ and the relative A-optimal point efficiencies of ξ_CCD and ξ_A.

From Table 5, our A-optimal minimal-point designs are better than the CCDs, because all values of A-Peff are less than 1. When we compare with the other small composite designs and minimal-point designs, our A-optimal minimal-point designs are always significantly better than the other designs, except in the cases where the first-order designs are resolution V designs.
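The run-size adjustment in A-Peff only rescales the relative efficiency by the ratio of the numbers of support points, so it reduces to one line once the criterion values are known. A sketch with purely hypothetical numbers (none of these values come from the tables):

```python
def a_point_efficiency(a_candidate, n_candidate, a_minimal, n_minimal):
    """Run-size-adjusted relative A-efficiency:
    A-Peff(xi, xi_A) = (n_A / n_xi) * [phi_A(xi) / phi_A(xi_A)],
    where phi_A is the A-criterion value Tr[(X'X)^{-1}]."""
    return (n_minimal / n_candidate) * (a_candidate / a_minimal)

# Hypothetical illustration: a 25-run candidate design versus a
# 21-point minimal-point design (k = 5 has p = 21 parameters).
peff = a_point_efficiency(a_candidate=0.9, n_candidate=25,
                          a_minimal=1.2, n_minimal=21)
```

Here peff = (21/25)(0.9/1.2) = 0.63 < 1, so under this per-point comparison the minimal-point design would be preferred.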

Table 6: The trace of (X′X)⁻¹ and the relative A-optimal point efficiencies of ξ_small, ξ_min and ξ_A.

Table 7: The measure Q_k(ξ) and the relative efficiencies of ξ_CCD and ξ_Qk.

For k = 2 and 5, with resolution V designs as the first-order designs, these two types of minimal-point designs should be similar to each other because the A-Peff values are close to 1.

Comparison according to the slope-rotatable criterion

Now the slope-rotatable criterion is considered. Park and Kim (1992) showed that Q_k is invariant with respect to the scaling constant. The values of Q_k and the relative Q_k-efficiencies are displayed in Tables 7 and 8.

Table 8: The measure Q_k(ξ) and the relative efficiencies for ξ_small, ξ_min and ξ_Qk.

From these tables, we can observe the following:

Central composite designs vs. slope-rotatable minimal-point designs: From Table 7, the spherical CCDs all provide better information than the slope-rotatable minimal-point designs in terms of the Q_k criterion. But for large k (k = 6 and 7), both the CCDs and our minimal-point designs should be close to slope-rotatable, because their values of Q_k are close to 0.

Small composite designs vs. slope-rotatable minimal-point designs: For k = 3 and 4, ξ_small performs better than our slope-rotatable minimal-point designs, but for k = 5 and 7 our minimal-point designs are better than ξ_small. Finally, for k = 6, the values of Q_k for both designs are close to 0.

Other minimal-point designs vs. slope-rotatable minimal-point designs: When the first-order design is the 2^(k−1) fractional factorial design with resolution V, our slope-rotatable minimal-point designs perform similarly to ξ_min,Qk. This is because the added support points of our slope-rotatable minimal-point designs for the 2^(k−1) resolution V fractions are close to axial points. When P-B designs are the first-order designs, our slope-rotatable minimal-point designs perform better than those minimal-point designs in terms of the Q_k criterion, except in the case of k = 4. This may be because the P-B designs do not have a symmetric structure in the design space.


More information

UNIVERSITY OF DUBLIN

UNIVERSITY OF DUBLIN UNIVERSITY OF DUBLIN TRINITY COLLEGE JS & SS Mathematics SS Theoretical Physics SS TSM Mathematics Faculty of Engineering, Mathematics and Science school of mathematics Trinity Term 2015 Module MA3429

More information

TWO-LEVEL FACTORIAL EXPERIMENTS: IRREGULAR FRACTIONS

TWO-LEVEL FACTORIAL EXPERIMENTS: IRREGULAR FRACTIONS STAT 512 2-Level Factorial Experiments: Irregular Fractions 1 TWO-LEVEL FACTORIAL EXPERIMENTS: IRREGULAR FRACTIONS A major practical weakness of regular fractional factorial designs is that N must be a

More information

Econ 620. Matrix Differentiation. Let a and x are (k 1) vectors and A is an (k k) matrix. ) x. (a x) = a. x = a (x Ax) =(A + A (x Ax) x x =(A + A )

Econ 620. Matrix Differentiation. Let a and x are (k 1) vectors and A is an (k k) matrix. ) x. (a x) = a. x = a (x Ax) =(A + A (x Ax) x x =(A + A ) Econ 60 Matrix Differentiation Let a and x are k vectors and A is an k k matrix. a x a x = a = a x Ax =A + A x Ax x =A + A x Ax = xx A We don t want to prove the claim rigorously. But a x = k a i x i i=

More information

Inverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1

Inverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1 Inverse of a Square Matrix For an N N square matrix A, the inverse of A, 1 A, exists if and only if A is of full rank, i.e., if and only if no column of A is a linear combination 1 of the others. A is

More information

Response Surface Methodology

Response Surface Methodology Response Surface Methodology Bruce A Craig Department of Statistics Purdue University STAT 514 Topic 27 1 Response Surface Methodology Interested in response y in relation to numeric factors x Relationship

More information

Efficient Bayesian Multivariate Surface Regression

Efficient Bayesian Multivariate Surface Regression Efficient Bayesian Multivariate Surface Regression Feng Li (joint with Mattias Villani) Department of Statistics, Stockholm University October, 211 Outline of the talk 1 Flexible regression models 2 The

More information

Computer Vision Group Prof. Daniel Cremers. 10a. Markov Chain Monte Carlo

Computer Vision Group Prof. Daniel Cremers. 10a. Markov Chain Monte Carlo Group Prof. Daniel Cremers 10a. Markov Chain Monte Carlo Markov Chain Monte Carlo In high-dimensional spaces, rejection sampling and importance sampling are very inefficient An alternative is Markov Chain

More information

Optimality of Central Composite Designs Augmented from One-half Fractional Factorial Designs

Optimality of Central Composite Designs Augmented from One-half Fractional Factorial Designs of Central Composite Designs Augmented from One-half Fractional Factorial Designs Anthony F. Capili Department of Mathematics and Statistics, College of Arts and Sciences, University of Southeastern Philippines

More information

Linear Algebra. Workbook

Linear Algebra. Workbook Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx

More information

Problems. Suppose both models are fitted to the same data. Show that SS Res, A SS Res, B

Problems. Suppose both models are fitted to the same data. Show that SS Res, A SS Res, B Simple Linear Regression 35 Problems 1 Consider a set of data (x i, y i ), i =1, 2,,n, and the following two regression models: y i = β 0 + β 1 x i + ε, (i =1, 2,,n), Model A y i = γ 0 + γ 1 x i + γ 2

More information

Gaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008

Gaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008 Gaussian processes Chuong B Do (updated by Honglak Lee) November 22, 2008 Many of the classical machine learning algorithms that we talked about during the first half of this course fit the following pattern:

More information

14.0 RESPONSE SURFACE METHODOLOGY (RSM)

14.0 RESPONSE SURFACE METHODOLOGY (RSM) 4. RESPONSE SURFACE METHODOLOGY (RSM) (Updated Spring ) So far, we ve focused on experiments that: Identify a few important variables from a large set of candidate variables, i.e., a screening experiment.

More information

Machine Learning. Regression Case Part 1. Linear Regression. Machine Learning: Regression Case.

Machine Learning. Regression Case Part 1. Linear Regression. Machine Learning: Regression Case. .. Cal Poly CSC 566 Advanced Data Mining Alexander Dekhtyar.. Machine Learning. Regression Case Part 1. Linear Regression Machine Learning: Regression Case. Dataset. Consider a collection of features X

More information

RECURSIVE ESTIMATION AND KALMAN FILTERING

RECURSIVE ESTIMATION AND KALMAN FILTERING Chapter 3 RECURSIVE ESTIMATION AND KALMAN FILTERING 3. The Discrete Time Kalman Filter Consider the following estimation problem. Given the stochastic system with x k+ = Ax k + Gw k (3.) y k = Cx k + Hv

More information

Notes on Alekhnovich s cryptosystems

Notes on Alekhnovich s cryptosystems Notes on Alekhnovich s cryptosystems Gilles Zémor November 2016 Decisional Decoding Hypothesis with parameter t. Let 0 < R 1 < R 2 < 1. There is no polynomial-time decoding algorithm A such that: Given

More information

BASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C =

BASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C = CHAPTER I BASIC NOTIONS (a) 8666 and 8833 (b) a =6,a =4 will work in the first case, but there are no possible such weightings to produce the second case, since Student and Student 3 have to end up with

More information

Optimal Two-Level Regular Fractional Factorial Block and. Split-Plot Designs

Optimal Two-Level Regular Fractional Factorial Block and. Split-Plot Designs Optimal Two-Level Regular Fractional Factorial Block and Split-Plot Designs BY CHING-SHUI CHENG Department of Statistics, University of California, Berkeley, California 94720, U.S.A. cheng@stat.berkeley.edu

More information

3 Multiple Linear Regression

3 Multiple Linear Regression 3 Multiple Linear Regression 3.1 The Model Essentially, all models are wrong, but some are useful. Quote by George E.P. Box. Models are supposed to be exact descriptions of the population, but that is

More information

Specification Errors, Measurement Errors, Confounding

Specification Errors, Measurement Errors, Confounding Specification Errors, Measurement Errors, Confounding Kerby Shedden Department of Statistics, University of Michigan October 10, 2018 1 / 32 An unobserved covariate Suppose we have a data generating model

More information

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Front. Math. China 2017, 12(6): 1409 1426 https://doi.org/10.1007/s11464-017-0644-1 Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Xinzhen ZHANG 1, Guanglu

More information

P -spline ANOVA-type interaction models for spatio-temporal smoothing

P -spline ANOVA-type interaction models for spatio-temporal smoothing P -spline ANOVA-type interaction models for spatio-temporal smoothing Dae-Jin Lee 1 and María Durbán 1 1 Department of Statistics, Universidad Carlos III de Madrid, SPAIN. e-mail: dae-jin.lee@uc3m.es and

More information

4.7 Confidence and Prediction Intervals

4.7 Confidence and Prediction Intervals 4.7 Confidence and Prediction Intervals Instead of conducting tests we could find confidence intervals for a regression coefficient, or a set of regression coefficient, or for the mean of the response

More information

F denotes cumulative density. denotes probability density function; (.)

F denotes cumulative density. denotes probability density function; (.) BAYESIAN ANALYSIS: FOREWORDS Notation. System means the real thing and a model is an assumed mathematical form for the system.. he probability model class M contains the set of the all admissible models

More information

Chap The McGraw-Hill Companies, Inc. All rights reserved.

Chap The McGraw-Hill Companies, Inc. All rights reserved. 11 pter11 Chap Analysis of Variance Overview of ANOVA Multiple Comparisons Tests for Homogeneity of Variances Two-Factor ANOVA Without Replication General Linear Model Experimental Design: An Overview

More information

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient

More information

Solving Systems of Polynomial Equations

Solving Systems of Polynomial Equations Solving Systems of Polynomial Equations David Eberly, Geometric Tools, Redmond WA 98052 https://www.geometrictools.com/ This work is licensed under the Creative Commons Attribution 4.0 International License.

More information

Simulation of a Gaussian random vector: A propagative approach to the Gibbs sampler

Simulation of a Gaussian random vector: A propagative approach to the Gibbs sampler Simulation of a Gaussian random vector: A propagative approach to the Gibbs sampler C. Lantuéjoul, N. Desassis and F. Fouedjio MinesParisTech Kriging and Gaussian processes for computer experiments CHORUS

More information

Multiple Linear Regression

Multiple Linear Regression Multiple Linear Regression University of California, San Diego Instructor: Ery Arias-Castro http://math.ucsd.edu/~eariasca/teaching.html 1 / 42 Passenger car mileage Consider the carmpg dataset taken from

More information

Analysis of Block LDL T Factorizations for Symmetric Indefinite Matrices

Analysis of Block LDL T Factorizations for Symmetric Indefinite Matrices Analysis of Block LDL T Factorizations for Symmetric Indefinite Matrices Haw-ren Fang August 24, 2007 Abstract We consider the block LDL T factorizations for symmetric indefinite matrices in the form LBL

More information

Design and Analysis of Experiments

Design and Analysis of Experiments Design and Analysis of Experiments Part IX: Response Surface Methodology Prof. Dr. Anselmo E de Oliveira anselmo.quimica.ufg.br anselmo.disciplinas@gmail.com Methods Math Statistics Models/Analyses Response

More information

2. Matrix Algebra and Random Vectors

2. Matrix Algebra and Random Vectors 2. Matrix Algebra and Random Vectors 2.1 Introduction Multivariate data can be conveniently display as array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns

More information

Vector Auto-Regressive Models

Vector Auto-Regressive Models Vector Auto-Regressive Models Laurent Ferrara 1 1 University of Paris Nanterre M2 Oct. 2018 Overview of the presentation 1. Vector Auto-Regressions Definition Estimation Testing 2. Impulse responses functions

More information

Kriging models with Gaussian processes - covariance function estimation and impact of spatial sampling

Kriging models with Gaussian processes - covariance function estimation and impact of spatial sampling Kriging models with Gaussian processes - covariance function estimation and impact of spatial sampling François Bachoc former PhD advisor: Josselin Garnier former CEA advisor: Jean-Marc Martinez Department

More information

STAT5044: Regression and Anova. Inyoung Kim

STAT5044: Regression and Anova. Inyoung Kim STAT5044: Regression and Anova Inyoung Kim 2 / 47 Outline 1 Regression 2 Simple Linear regression 3 Basic concepts in regression 4 How to estimate unknown parameters 5 Properties of Least Squares Estimators:

More information

Dissertation Defense

Dissertation Defense Clustering Algorithms for Random and Pseudo-random Structures Dissertation Defense Pradipta Mitra 1 1 Department of Computer Science Yale University April 23, 2008 Mitra (Yale University) Dissertation

More information

Ma 3/103: Lecture 24 Linear Regression I: Estimation

Ma 3/103: Lecture 24 Linear Regression I: Estimation Ma 3/103: Lecture 24 Linear Regression I: Estimation March 3, 2017 KC Border Linear Regression I March 3, 2017 1 / 32 Regression analysis Regression analysis Estimate and test E(Y X) = f (X). f is the

More information

A Bayesian perspective on GMM and IV

A Bayesian perspective on GMM and IV A Bayesian perspective on GMM and IV Christopher A. Sims Princeton University sims@princeton.edu November 26, 2013 What is a Bayesian perspective? A Bayesian perspective on scientific reporting views all

More information

VAR Models and Applications

VAR Models and Applications VAR Models and Applications Laurent Ferrara 1 1 University of Paris West M2 EIPMC Oct. 2016 Overview of the presentation 1. Vector Auto-Regressions Definition Estimation Testing 2. Impulse responses functions

More information