Optimal measurement combinations as controlled variables


Journal of Process Control 19 (2009)

Optimal measurement combinations as controlled variables

Vidar Alstad, Sigurd Skogestad *, Eduardo S. Hori

Department of Chemical Engineering, Norwegian University of Science and Technology, Trondheim, Norway

Received 5 March 2007; received in revised form 9 November 2007; accepted 3 January 2008

Abstract

This paper deals with the optimal selection of linear measurement combinations as controlled variables, c = Hy. The objective is to achieve self-optimizing control, which is when fixing the controlled variables c indirectly gives near-optimal steady-state operation with a small loss. The nullspace method of Alstad and Skogestad [V. Alstad, S. Skogestad, Null space method for selecting optimal measurement combinations as controlled variables, Ind. Eng. Chem. Res. 46 (3) (2007)] focuses on minimizing the loss caused by disturbances. We here provide an explicit expression for H for the case where the objective is to minimize the combined loss for disturbances and measurement errors. In addition, we extend the nullspace method to cases with extra measurements by using the extra degrees of freedom to minimize the loss caused by measurement errors. Finally, the results are interpreted more generally as deriving linear invariants for quadratic optimization problems.

© 2008 Elsevier Ltd. All rights reserved.

Keywords: Control structure selection; Self-optimizing control; Measurement selection; Process control; Quadratic optimization; Plantwide control

1. Introduction

Optimizing control is an old research topic, and one method for ensuring optimal operation in chemical processes is real-time optimization (RTO) [0]. Using RTO, the optimal values (setpoints) for the controlled variables c are recomputed online based on online measurements and a model of the process, see Fig. 1.
In most RTO applications, a steady-state model is used for the reconciliation (parameter/disturbance estimation) and optimization steps [0,9], although dynamic versions of the RTO framework have also been reported [7]. However, the cost of installing and maintaining RTO systems can be large. In addition, the system can be sensitive to uncertainty. A completely different approach to optimizing control is to focus on selecting the right variables c to control, which is the idea of self-optimizing control [3]. The objective is to search for combinations of measurements (y), for example, linear combinations c = Hy, which, when controlled, will (indirectly) keep the process close to the optimum operating conditions despite disturbances and measurement errors. The need for an RTO layer to compute new optimal setpoints c_s can then be reduced, or in many cases even eliminated. Thus, the implementation is trivial and the maintenance requirements are minimized. The idea in this paper is to extend this approach by providing explicit formulas for the optimal matrix H.

The issue of selecting H can also be viewed as a squaring-down problem, as illustrated in Fig. 2. The number of output variables that can be independently controlled (n_c) is equal to the number of independent inputs (n_u), but in most cases the number of available measurements (n_y) is larger, that is, n_y > n_u. The issue is then to select the non-square matrix H such that the map (transfer function) G = HG^y from u to c is square, see Fig. 2. However, selecting H such that G is square is not the only issue. More importantly, as mentioned above, control of c should (directly or indirectly) result in acceptable operation of the system.

* Corresponding author. E-mail address: skoge@chemeng.ntnu.no (S. Skogestad). Present address: StatoilHydro, Oil and Energy Research Centre, Porsgrunn, Norway.

Fig. 1. Feedback implementation of optimal operation with separate layers for optimization (RTO) and control.

Fig. 2. Combining measurements y to get controlled variables c (linear case).

To quantify acceptable operation we introduce a scalar cost function J, which should be minimized for optimal operation. In this paper, we assume that the (economic) cost mainly depends on the (quasi) steady-state behavior, which is a good assumption for most continuous plants in the process industry. Self-optimizing control [3] can now be defined: it is when a constant setpoint policy (c_s constant) yields an acceptable loss, L = J(u, d) - J_opt(d), in spite of the presence of uncertainty, which is here assumed to be represented by (1) external disturbances d and (2) implementation errors n = c_s - c, see Fig. 1. The implementation error n has two sources: (1) the steady-state control error n_c and (2) the measurement error n^y; for linear measurement combinations, n = n_c + H n^y. In Fig. 1, the control error n_c is shown as an exogenous signal, although in reality it is determined by the controller. In any case, we assume here that all controllers have integral action, so we can neglect the steady-state control error, i.e. n_c = 0. The implementation error n is then given by the measurement error, i.e. n = H n^y.

Ideas related to self-optimizing control have been presented repeatedly in the process control literature, but the first quantitative treatment was that of Morari et al. Skogestad [3] defined the problem more carefully, linked it to previous work, and was the first to include the implementation error. He mainly considered the case where single measurements are used as controlled variables, that is, H is a selection matrix where each row has a single 1 and the rest 0's. Halvorsen et al. [3] considered the approximate maximum gain method and also proposed an exact local method for the optimal measurement combination H. They proposed to obtain H numerically by solving min_H σ̄(M(H)), but did not say anything about the properties of this optimization problem. Kariwala [8] proposed an iterative method involving singular value and eigenvalue decompositions. Hori et al. [5] considered indirect control, which can be formulated as a subproblem of the extended nullspace method presented in this paper. Additional related work is presented in [5-7] on measurement-based optimization to enforce the necessary conditions of optimality under uncertainty, with application to batch processes. Bonvin et al. extend these ideas and focus on steady-state optimal systems, where a clear distinction is made between enforcing active constraints and requiring the sensitivity of the objective to be zero.

This paper is an extension of the nullspace method of Alstad and Skogestad, where it was found that, in the absence of implementation errors (i.e., n = 0), it is possible to have zero loss with respect to disturbances, provided the number of (independent) measurements (n_y) is at least equal to the number of (independent) inputs (n_u) and disturbances (n_d), i.e., n_y ≥ n_u + n_d. It is then optimal to select H such that HF = 0, where F = dy_opt/dd^T is the optimal sensitivity of the measurements with respect to the disturbances d. Note that it is not possible to have zero loss with respect to implementation errors, because each new measurement adds a "disturbance" through its associated measurement error n^y. In this paper, we include the implementation error and provide the following new results: (1) the optimal H for combined disturbances and implementation errors (Section 3); (2) the optimal H for disturbances, using possible extra measurements to minimize the effect of implementation errors (extended nullspace method, Section 4).

Finally, in the discussion section, the results are interpreted in terms of adding linear constraints to quadratic optimization problems.

2. Background

The material in this section is based on [3], unless otherwise stated. The most important notation is given in Table 1. The objective is to achieve optimal steady-state operation, where the degrees of freedom u are selected such that the scalar cost function J(u, d) is minimized for any given disturbance d. Parameter variations may also be included as disturbances. We assume that any optimally active constraints have been implemented, so that u includes only the remaining unconstrained steady-state degrees of freedom. The reduced-space optimization problem then becomes

  min_u J(u, d)    (1)

Table 1
Notation
u: vector of n_u unconstrained inputs (degrees of freedom)
d: vector of n_d disturbances
y: vector of n_y selected measurements used in forming c
c: vector of selected controlled variables (to be identified), with dimension n_c = n_u
n^y: measurement error associated with y
n_c: control error associated with c (this paper: n_c = 0)
n: implementation error associated with c; n = n_c + H n^y
We use Δ to denote deviation variables. Often the Δ is omitted, and we write, for example, c = Hy.

The objective of this work is to find a set of controlled variables c, or more specifically an optimal measurement combination c = Hy, such that a constant setpoint policy (where u is adjusted to keep c constant; see Fig. 1) yields optimal operation (1), at least locally. With a given d, solving Eq. (1) for u gives J_opt(d), u_opt(d) and y_opt(d). In practice, it is not possible to have u = u_opt(d), for example, because of implementation errors and changing disturbances. The resulting loss L is defined as the difference between the cost J, when using a non-optimal input u, and J_opt(d) [4]:

  L = J(u, d) - J_opt(d)    (2)

The local second-order accurate Taylor series expansion of the cost function around the nominal point (u*, d*) can be written as

  J(u, d) ≈ J(u*, d*) + [J_u^T J_d^T] [Δu; Δd] + (1/2) [Δu; Δd]^T [J_uu J_ud; J_ud^T J_dd] [Δu; Δd]    (3)

where Δu = (u - u*) and Δd = (d - d*). For a given disturbance (Δd = 0), the second-order accurate expansion of the loss function around the optimum (J_u = 0) then becomes

  L = (1/2) (u - u_opt)^T J_uu (u - u_opt) = (1/2) z^T z    (4)

where

  z ≜ J_uu^{1/2} (u - u_opt)    (5)

In this paper, we consider a constant setpoint policy where the controlled variables are linear combinations of the measurements,

  Δc = H Δy    (6)

We assume that n_c = n_u, that is, the number of (independent) controlled variables c is equal to the number of (independent) steady-state degrees of freedom ("inputs") u. The constant setpoint policy implies that u is adjusted to give c_s = c + n, where n is the implementation error for c (see Fig. 1). As mentioned in Section 1, we assume that the implementation error is caused by the measurement error, i.e. n = H n^y.

We now want to express the loss variables z in terms of d and n^y when we use a constant setpoint policy, but first some additional notation is needed. The linearized (local) model in terms of deviation variables is written as

  Δy = G^y Δu + G_d^y Δd = G̃^y [Δu; Δd]    (7)
  Δc = G Δu + G_d Δd    (8)

where

  G̃^y = [G^y G_d^y]    (9)

is the augmented plant. From Eqs. (6)-(8) we get

  G = H G^y and G_d = H G_d^y    (10)

The magnitudes of the disturbances d and measurement errors n^y are quantified by the diagonal scaling matrices W_d and W_n, respectively. More precisely, we write

  Δd = W_d d'    (11)
  n^y = W_n n^{y'}    (12)

where we assume that d' and n^{y'} are any vectors satisfying

  ‖ [d'; n^{y'}] ‖_2 ≤ 1    (13)

A justification for using the combined vector 2-norm in Eq. (13) is given in the discussion section of Halvorsen et al. [3].

The nonlinear functions u_opt(d) and y_opt(d) are also linearized, and it can be shown that [3]

  Δu_opt = -J_uu^{-1} J_ud Δd    (14)
  Δy_opt = -(G^y J_uu^{-1} J_ud - G_d^y) Δd ≜ F Δd    (15)

where we have introduced the optimal sensitivity matrix F for the measurements. In terms of the controlled variables c we then have

  (u - u_opt) = G^{-1} (c - c_opt) = G^{-1} (Δc - Δc_opt)    (16)
  Δc_opt = H Δy_opt = H F Δd    (17)
  Δc = Δc_s - n = -n = -H n^y    (18)

where in the last equation we have assumed a constant setpoint policy (Δc_s = 0). Upon introducing the magnitudes of Δd and n^y from Eqs. (11) and (12), we then get for the loss variables z in (5), for the constant setpoint policy:

  z = M_d d' + M_n^y n^{y'}    (19)

where

  M_d = -J_uu^{1/2} (H G^y)^{-1} H F W_d    (20)
  M_n^y = -J_uu^{1/2} (H G^y)^{-1} H W_n    (21)

Introducing

  M ≜ [M_d M_n^y]    (22)

gives z = M [d'; n^{y'}], which is the desired expression for the loss variables. A non-zero value for z gives a loss L = (1/2) ‖z‖_2^2, see (4), and the worst-case loss for the expected disturbances and noise in (13) is then [3]

  L_wc = max_{‖[d'; n^{y'}]‖_2 ≤ 1} L = (1/2) σ̄(M)^2    (23)

where the last equality follows from the definition of the maximum singular value σ̄ and the assumption that the normalized disturbances and measurement errors are 2-norm bounded, see Eq. (13). Thus, to minimize the worst-case loss we need to minimize σ̄(M) with respect to H. This is the exact local method of Halvorsen et al. [3], and note that we have expressed M_d in (20) in terms of the easily available optimal sensitivity matrix F.

3. Explicit formula for the optimal H for combined disturbances and measurement errors

From (23), the optimal measurement combination is obtained by solving the problem ("exact local method")

  H = arg min_H σ̄(M)    (24)

It may seem that this optimization problem is non-trivial, as M depends nonlinearly on H, as shown in (20)-(22). Halvorsen et al. [3] proposed a numerical solution and Kariwala [8] provides an iterative solution for the optimal H involving singular value and eigenvalue decompositions. However, (24) is in fact easy to solve, as shown in the following. We start by introducing

  M_n ≜ -J_uu^{1/2} (H G^y)^{-1} = -J_uu^{1/2} G^{-1}    (25)

which may be viewed as the effect of n on the loss variables z.
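The loss evaluation in (19)-(23) can be sketched in a few lines of NumPy (our illustration, not code from the paper; the matrix names follow the text, and the data used at the end are those of the example in Section 5):

```python
import numpy as np

def msqrt(A):
    """Square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(w)) @ V.T

def worst_case_loss(H, Gy, Gyd, Juu, Jud, Wd, Wn):
    """Local worst-case loss of a constant-setpoint policy c = Hy.

    Builds M = [Md Mn^y] from Eqs. (15) and (19)-(22) and returns
    L_wc = (1/2) * sigma_max(M)^2, Eq. (23)."""
    F = -(Gy @ np.linalg.solve(Juu, Jud) - Gyd)   # optimal sensitivity, Eq. (15)
    X = np.linalg.solve(H @ Gy, H)                # (H G^y)^{-1} H
    Jh = msqrt(Juu)                               # J_uu^{1/2}
    Md = -Jh @ X @ F @ Wd                         # Eq. (20)
    Mn = -Jh @ X @ Wn                             # Eq. (21)
    M = np.hstack([Md, Mn])                       # Eq. (22)
    return 0.5 * np.linalg.norm(M, 2) ** 2        # Eq. (23)

# Data assumed from the example in Section 5 (one input, one disturbance):
Juu = np.array([[2.0]]); Jud = np.array([[-2.0]])
Gy  = np.array([[0.1], [20.0], [10.0], [1.0]])
Gyd = np.array([[-0.1], [0.0], [-5.0], [0.0]])
Wd  = np.array([[1.0]]); Wn = np.eye(4)

# Controlling each single measurement y_i in turn (H is a selection row):
for i in range(4):
    H = np.zeros((1, 4)); H[0, i] = 1.0
    print(round(worst_case_loss(H, Gy, Gyd, Juu, Jud, Wd, Wn), 4))
# -> 100.0, 1.0025, 0.26, 2.0  (cf. Eq. (51) in Section 5)
```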
We then have

  M = [M_d M_n^y] = M_n H [F W_d W_n]    (26)

Next, we use the fact that the solution of Eq. (24) is not unique: if H is an optimal solution, then another optimal solution is H1 = DH, where D is any non-singular matrix of dimension n_u × n_u. For example, this follows because M_d and M_n^y in (20) and (21) are unaffected by the choice of D. One implication is that G = HG^y may be chosen freely (which is also clear from Fig. 2, since we may add an output block after H, which allows G to be selected freely). Alternatively, and this is used here, it follows from (25) that M_n may be selected freely. However, the fact that M_n may be selected freely does not mean that one can, for example, simply set M_n = -I in (26) and then minimize σ̄(M) with M = H[F W_d W_n]. Rather, one needs to minimize σ̄(M) subject to the constraint M_n = -I. Introducing

  F̃ ≜ [F W_d W_n]    (27)

the optimization problem (24) can then be stated as

  H = arg min_H σ̄(H F̃) subject to H G^y = J_uu^{1/2}    (28)

This is fairly easy to solve numerically because of the linearity in H, both in the matrix HF̃ and in the equality constraint. In fact, an explicit solution may be found, as shown below.

Choice of norm. The optimization problems (24) and (28) involve the maximum singular value (induced 2-norm) of M, σ̄(M), which represents the worst-case effect of combined 2-norm-bounded disturbances and measurement errors on the loss. A closely related problem is to minimize the Frobenius norm of M, ‖M‖_F = sqrt(Σ_{i,j} |m_ij|^2), which represents some "average" effect of combined disturbances and measurement errors on the loss. Actually, which norm to use is more a matter of preference or mathematical convenience than of correctness. First, the difference in minimizing the two norms is generally minor; the main difference is that minimizing σ̄(M) usually puts more focus on minimizing the largest elements.
Second, as discussed below, it appears that for this particular problem we have a kind of "super-optimality", where the choice of H that minimizes ‖M‖_F also minimizes σ̄(M) [9].

Scalar case. For the scalar case (c is a scalar and n_u = n_c = 1), M and H^T are (column) vectors, σ̄(M) = ‖M‖_2, and an analytic solution to (28) is easily derived. The optimization problem (28) becomes

  min_{H^T} ‖F̃^T H^T‖_2 subject to G^{yT} H^T = J_uu^{1/2}    (29)

and from standard results for constrained quadratic optimization, the optimal solution is (see proof in the Appendix)

  H^T = (F̃ F̃^T)^{-1} G^y (G^{yT} (F̃ F̃^T)^{-1} G^y)^{-1} J_uu^{1/2}    (30)

where it is assumed that F̃ F̃^T has full rank.

General case. The explicit expression for H in (31) holds also for the general case, that is, for minimizing ‖M‖_F when H is a matrix. This can be proved by rewriting the general optimization problem (28) for the matrix case into a vector problem by stacking the columns of H^T into a long vector (see Appendix A). In addition, Kariwala et al. [9] have shown, as already mentioned, that the matrix H that minimizes the Frobenius norm of M also minimizes the maximum singular value of M. However, the reverse does not necessarily hold: a solution that minimizes σ̄(M) does not necessarily minimize ‖M‖_F [9], because the solution to the problem of minimizing σ̄(M) is not unique. Our findings can be summarized in the following theorem (see Appendix A for the proof).

Theorem 1. For combined disturbances and measurement errors, the optimal measurement combination problem in terms of the Frobenius norm, min_H ‖M‖_F with M given by (20)-(22), can be reformulated as min_H ‖HF̃‖_F subject to HG^y = J_uu^{1/2}, where F̃ = [F W_d W_n]. Here F is the optimal measurement sensitivity with respect to disturbances, and W_d and W_n are diagonal weighting matrices giving the magnitudes of the disturbances and measurement noise, respectively. Assuming F̃F̃^T has full rank, we have the following explicit solution for the combination matrix H:

  H^T = (F̃ F̃^T)^{-1} G^y (G^{yT} (F̃ F̃^T)^{-1} G^y)^{-1} J_uu^{1/2}    (31)

This solution also minimizes the maximum singular value of M, σ̄(M); that is, it provides the solution to the exact local method in (24).

Note that F̃ F̃^T = [F W_d W_n][F W_d W_n]^T in (31) needs to have full rank. This implies that (31) does not generally apply to the case with no measurement error, W_n = 0, but otherwise the expression for H applies for any number n_y of measurements. One special case, where the expression for H in (31) applies also for W_n = 0, is when n_y ≤ n_d, because F̃F̃^T then remains full rank.
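As a numerical illustration of Theorem 1, the following sketch (our code, not from the paper; the data are those assumed in the example of Section 5) evaluates the explicit expression (31) and checks the constraint of (28):

```python
import numpy as np

# Data assumed from the example in Section 5:
Juu = np.array([[2.0]]); Jud = np.array([[-2.0]])
Gy  = np.array([[0.1], [20.0], [10.0], [1.0]])   # G^y
Gyd = np.array([[-0.1], [0.0], [-5.0], [0.0]])   # G^y_d
Wd  = np.array([[1.0]]); Wn = np.eye(4)

F  = -(Gy @ np.linalg.solve(Juu, Jud) - Gyd)     # optimal sensitivity, Eq. (15)
Ft = np.hstack([F @ Wd, Wn])                     # F~ = [F W_d  W_n], Eq. (27)
Jh = np.array([[np.sqrt(2.0)]])                  # J_uu^{1/2}

YG = np.linalg.solve(Ft @ Ft.T, Gy)              # (F~ F~^T)^{-1} G^y
H  = (YG @ np.linalg.inv(Gy.T @ YG) @ Jh).T      # Eq. (31)

assert np.allclose(H @ Gy, Jh)                   # constraint of Eq. (28)

M   = -Jh @ np.linalg.solve(H @ Gy, H @ Ft)      # M = Mn H F~, Eqs. (25)-(26)
Lwc = 0.5 * np.linalg.norm(M, 2) ** 2            # Eq. (23)
print(round(Lwc, 4))                             # -> 0.0405
```

The resulting worst-case loss agrees with the value reported for the four-measurement combination in Section 5.3.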
4. Extended nullspace method

The solution for H in (31) minimizes the loss with respect to combined disturbances and measurement errors. An alternative approach is to first minimize the loss with respect to disturbances, and then, if there are remaining degrees of freedom, minimize the loss with respect to measurement errors. One justification is that disturbances are the reason for introducing optimization and feedback in the first place. Another justification is that it may be easier later to reduce measurement errors than to reduce disturbances.

If we neglect the implementation error (M_n^y = 0), then we see from (20) that M_d = 0 (zero loss) is obtained by selecting H such that

  H F = 0    (32)

This provides an alternative derivation of the nullspace method. It is always possible to find a non-trivial solution (H ≠ 0) satisfying HF = 0 provided the number of independent measurements (n_y) is at least equal to the number of independent inputs (n_u) and disturbances (n_d), i.e. n_y ≥ n_u + n_d. One solution is to select H as the nullspace of F^T:

  H = N(F^T)    (33)

The main disadvantage of the nullspace method is that we have no control of the loss caused by measurement errors, as given by the matrix M_n^y = M_n H W_n. In this section, we study this in more detail by deriving an explicit expression for H, see (37) and (41), that allows us to compute the resulting M_n^y, see (42) and (44). The explicit expression for H allows us to extend the nullspace method to cases with extra or too few measurements, i.e., to cases when n_y ≠ n_u + n_d.

4.1. Explicit expression for H for the original nullspace method

From the expansion of the loss function we have, see Eqs. (5) and (14),

  z = [J_uu^{1/2}  J_uu^{-1/2} J_ud] [Δu; Δd] ≜ J̃ [Δu; Δd]    (34)

We assume that H is selected to give zero disturbance loss, which is possible if n_y ≥ n_u + n_d. Then, from (19) and (26), z = M_n H n^y.
With the controlled variables c = Hy fixed at constant setpoints (Δc = Δc_s = 0), the measurement deviations are given by the measurement errors, and we get

  z = M_n H n^y = M_n H Δy = M_n H G̃^y [Δu; Δd]    (35)

where G̃^y = [G^y G_d^y] is the augmented plant. Comparing Eqs. (34) and (35) yields (with the immaterial sign absorbed into the free parameter M_n)

  M_n H G̃^y = J̃    (36)

where J̃ is defined in (34). We then have the following explicit expression for H for the case where n_y = n_u + n_d, such that G̃^y is invertible:

  H = M_n^{-1} J̃ (G̃^y)^{-1}    (37)

This explicit expression gives H for a case with zero disturbance sensitivity (M_d = 0), and thus gives the same result as (33). Note that M_n can be regarded as a free parameter (e.g., we may set M_n = I; see Remark 2 below).

4.2. Extended nullspace method

The explicit solution for H in (37) forms the basis for extending the nullspace method to cases where we have extra measurements (n_y > n_u + n_d) or too few measurements (n_y < n_u + n_d). Assume that we have n_u independent unconstrained free variables u, n_d disturbances d, n_y measurements y, and we want to obtain n_c = n_u independent controlled variables c that are linear combinations of the measurements, c = Hy. From the results in Section 2, the loss imposed by a constant

setpoint policy is L = (1/2) z^T z, where z = M_d d' + M_n^y n^{y'}. Define E as the error in satisfying Eq. (36):

  E ≜ M_n H G̃^y - J̃    (38)

We want to derive a relationship between E and M_d. From (15) and (9), the optimal sensitivity can be written as

  F = G̃^y [-J_uu^{-1} J_ud; I]    (39)

which combined with (26) gives

  M_d = M_n H G̃^y [-J_uu^{-1} J_ud; I] W_d = (E + J̃) [-J_uu^{-1} J_ud; I] W_d

Here J̃ [-J_uu^{-1} J_ud; I] = 0, which gives

  M_d = E [-J_uu^{-1} J_ud; I] W_d    (40)

Note that the disturbance sensitivity is zero (M_d = 0) if and only if E = 0. Let ‖E‖_F = sqrt(Σ_{i,j} e_ij^2) denote the Frobenius (Euclidean) norm of a matrix, and let (·)^† denote the pseudo-inverse of a matrix. Then we have the following theorem:

Theorem 2 (Explicit expression for H in the extended nullspace method). Selecting

  H = M_n^{-1} J̃ (W_n^{-1} G̃^y)^† W_n^{-1}    (41)

minimizes ‖E‖_F, and in addition minimizes the noise sensitivity ‖M_n^y‖_F among all solutions that minimize ‖E‖_F.

Proof. Rewrite the definition (38) of E as

  E = (M_n H W_n)(W_n^{-1} G̃^y) - J̃    (42)

From the theory of linear algebra [8], the solution for M_n H W_n that minimizes ‖E‖_F, and in addition minimizes ‖M_n H W_n‖_F among all solutions that minimize ‖E‖_F, is given by M_n H W_n = J̃ (W_n^{-1} G̃^y)^†, which gives (41). To see this, note that minimizing ‖E‖_F is equivalent to finding the least-squares solution X = BA^† of XA = B, where X = M_n H W_n, A = W_n^{-1} G̃^y and B = J̃. □

Remark 1. If we have enough measurements (n_y ≥ n_u + n_d), then the choice for H in Eq. (41) gives E = 0 and thus M_d = 0. However, for the case with too few measurements, the above choice for H minimizes ‖E‖_F, whereas we really want to minimize ‖M_d‖_F. Nevertheless, since ‖M_d‖ ≤ ‖E‖ · ‖[-J_uu^{-1} J_ud; I] W_d‖, minimizing ‖E‖ will result in a small value of ‖M_d‖.

Remark 2. The matrix H is non-unique, and the matrix M_n in (41) can be viewed as a parameter that can be selected freely. For example, one may select M_n = I, or one may select M_n to get a decoupled response from u to c, i.e. G = HG^y = I. However, note that M_n H, and hence the measurement noise sensitivity M_n^y = M_n H W_n, are not affected, as M_n H is given by (36) and (41).

Remark 3. It is appropriate at this point to make a comment about the pseudo-inverse A^† of a matrix. In general, we can write the least-squares solution of XA = B as X = BA^†, where the following hold:

(1) A^† = (A^T A)^{-1} A^T is the left inverse for the case when A has full column rank (we have extra measurements). In this case, XA = B has an infinite number of solutions, and we seek the solution that minimizes ‖X‖_F.

(2) A^† = A^T (A A^T)^{-1} is the right inverse for the case when A has full row rank (we have too few measurements). In this case, there is generally no exact solution, and we seek the X that minimizes the Frobenius norm of E = B - XA (regular least squares).

(3) In the general case with extra measurements, but where some are dependent, A has neither full column nor full row rank, and the singular value decomposition may be used to compute the pseudo-inverse A^†.

4.3. Special cases of Theorem 2

We have some important special cases of Theorem 2.

4.3.1. Just-enough measurements (original nullspace method)

When n_y = n_u + n_d and the measurements and disturbances are independent, G̃^y is invertible and (41) becomes

  H = M_n^{-1} J̃ (G̃^y)^{-1}    (43)

as derived earlier in (37). This choice gives M_d = 0 (zero disturbance loss), and from (26) the resulting effect of the measurement noise is

  M_n^y = J̃ (G̃^y)^{-1} W_n    (44)

Note that in this case we have no degrees of freedom left for affecting the matrix M_n^y.

4.3.2. Extra measurements: select a just-enough subset

If we have extra measurements (n_y > n_u + n_d), then one possibility is to select a just-enough subset (such that n_y = n_u + n_d) before forming c, and then obtain H from (43) to achieve zero disturbance loss (M_d = 0). The degrees of freedom in selecting the measurement subset can then be used to minimize the loss with respect to measurement noise, that is, to minimize the norm of M_n^y in (44).
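A short NumPy sketch of Theorem 2 (our code, not from the paper; the data are those assumed in the example of Section 5, where n_y = 4 > n_u + n_d = 2), which also illustrates the left-inverse case of Remark 3:

```python
import numpy as np

# Assumed data (Section 5 example): G~^y = [G^y  G^y_d], W_n = I.
Juu, Jud = 2.0, -2.0
Gt = np.array([[0.1, -0.1],    # y1 = 0.1(u - d)
               [20.0, 0.0],    # y2 = 20u
               [10.0, -5.0],   # y3 = 10u - 5d
               [1.0, 0.0]])    # y4 = u
Wn = np.eye(4)
Jt = np.array([[np.sqrt(Juu), Jud / np.sqrt(Juu)]])  # J~, Eq. (34)

A = np.linalg.inv(Wn) @ Gt     # tall with full column rank here
# Remark 3, case (1): the pseudo-inverse equals the left inverse:
assert np.allclose(np.linalg.pinv(A), np.linalg.inv(A.T @ A) @ A.T)

H = Jt @ np.linalg.pinv(A) @ np.linalg.inv(Wn)       # Eq. (41), with Mn = I

F = np.array([[0.0], [20.0], [5.0], [1.0]])          # optimal sensitivity
assert np.allclose(H @ F, 0)   # E = 0, hence Md = 0 (zero disturbance loss)
print(H / np.linalg.norm(H))   # direction of the combination (sign arbitrary)
```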
The worst-case loss caused by measurement noise is

  L_wc = max_{‖n^{y'}‖_2 ≤ 1} L = (1/2) σ̄(M_n^y)^2 = (1/2) σ̄(J̃ (G̃^y)^{-1} W_n)^2 ≤ (1/2) (σ̄(J̃) σ̄(W_n) / σ̲(G̃^y))^2    (45)

The selection of measurements does not affect the matrix J̃, since from (34) it depends only on the Hessian matrices J_uu and J_ud. However, the selection of measurements does affect the matrix G̃^y. Thus, in order to minimize the effect of the implementation error, we propose the following two rules:

(1) Optimal: In order to minimize the worst-case loss, select measurements such that σ̄(M_n^y) = σ̄(J̃ (G̃^y)^{-1} W_n) is minimized.

(2) Sub-optimal: Assume that the measurements have been scaled with respect to the measurement error such that W_n = I. From the inequality in Eq. (45), it then follows that the effect of the measurement error n^y will be small when σ̲(G̃^y) (the minimum singular value of G̃^y) is large. Thus, it is reasonable to select measurements such that σ̲(G̃^y) is maximized.

The optimal rule requires evaluation of all possible measurement combinations, which may be impractical. On the other hand, for the sub-optimal selection rule of maximizing σ̲(G̃^y), there exist efficient branch-and-bound algorithms [9]. The sub-optimal rule was used successfully to select measurements from 60 candidates in a Petlyuk distillation case study.

4.3.3. Extra measurements: use all

For the case with extra measurements (n_y > n_u + n_d), we may alternatively use all the measurements when forming c and obtain H from (41) in Theorem 2. This gives the solution that minimizes the implementation (measurement error) loss subject to having zero disturbance loss (M_d = 0). More precisely, when n_y > n_u + n_d and the measurements and disturbances are independent, the choice for H in (41), where (·)^† denotes the left inverse, minimizes ‖M_n^y‖_F (Frobenius norm) among all solutions with M_d = 0. Note that we need to include the noise weight before taking the pseudo-inverse in (41).

4.3.4. Too few measurements

If there are many disturbances, then we may have too few measurements to get M_d = 0.
For the case when both the measurements and disturbances are independent, we have too few measurements when n_y < n_u + n_d. In this case, the optimal H given in (41) in Theorem 2 is not affected by the noise weight, and (41) becomes

  H = M_n^{-1} J̃ (G̃^y)^†    (46)

where (·)^† denotes the right inverse and M_n is, as before, free to choose. However, this explicit expression for H minimizes ‖E‖_F, whereas, as noted in Remark 1, we really want to minimize ‖M_d‖_F. Minimizing ‖M_d‖_F is equivalent to solving the following optimization problem:

  H = arg min_H ‖H F W_d‖_F subject to H G^y = J_uu^{1/2}    (47)

For this case, with few measurements and no consideration of measurement error, we have not been able to derive an explicit expression for H similar to (31) in Theorem 1. However, for practical applications, Eq. (46) is most likely acceptable, at least provided we scale the system (i.e. G_d) such that W_d = I.

There may also be cases where we do have enough measurements, but we nevertheless want to use fewer measurements to simplify implementation. In this case, we have M_n^y = J̃ (G̃^y)^{-1} W_n, and to minimize σ̄(M_n^y) (the effect of measurement error), we may first select the set of measurements that minimizes σ̄(M_n^y), and then select H according to (46). Also, note that if we have sufficiently few measurements, i.e. n_y ≤ n_d, then (31) applies with W_n = 0 (see the comment following Theorem 1).

5. Example

As a simple example, consider a scalar problem with n_u = 1 and n_d = 1 [3]. The cost function to be minimized is

  J = (u - d)^2    (48)

where the nominal disturbance is d* = 0. Assume that the following four measurements are available:

  y1 = 0.1(u - d); y2 = 20u; y3 = 10u - 5d; y4 = u

We assume that the system is scaled such that |d| ≤ 1 and |n_i^y| ≤ 1, i.e.,

  W_d = 1; W_n = I    (49)

and we want to find the optimal measurements or combinations to control at constant setpoints.

Solution. From (48), it is clear that J_opt(d) = 0 for all d, and the optimal input is u_opt(d) = d.
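This analytic solution is easily checked by brute force (our sketch, not from the paper):

```python
import numpy as np

# Check that J = (u - d)^2 is minimized by u_opt(d) = d with J_opt(d) = 0.
us = np.linspace(-2.0, 2.0, 40001)           # grid of candidate inputs
for d in (-1.0, -0.5, 0.0, 0.5, 1.0):
    J = (us - d) ** 2
    u_star = us[np.argmin(J)]                # grid minimizer
    assert abs(u_star - d) < 1e-4 and J.min() < 1e-8
```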
We find J_uu = 2 and J_ud = -2, and

  G^{yT} = [0.1 20 10 1] and G_d^{yT} = [-0.1 0 -5 0]    (50)

The optimal sensitivity matrix is obtained from (15) or (39). This gives F = [0 20 5 1]^T.

5.1. Single measurement candidates

Let us first consider the use of individual measurements as controlled variables (c = y_i, i = 1, 2, 3, 4). The losses L_wc = (1/2) σ̄(M)^2 are

  L_wc^1 = 100; L_wc^2 = 1.0025; L_wc^3 = 0.26; L_wc^4 = 2    (51)

Measurement y1 has Δy1,opt = 0, so it happens to have zero disturbance loss (M_d = 0). However, this measurement is sensitive to noise (as can be seen from the small gain in G^y), and we see that this choice actually has the largest loss. Measurement y3, with L_wc^3 = 0.26, is the best single-measurement candidate. This illustrates the importance of taking into account the implementation error (measurement noise).

5.2. Measurement combinations: use two of the four measurements

Consider combining two measurements, c = Hy = h_i y_i + h_j y_j. Let us first consider combinations that give

zero disturbance loss (M_d = 0), which is possible since n_u + n_d = n_y = 2. The nullspace combination H = [h_i h_j] is most easily obtained using (33). For example, for measurements (y2, y3), F = [20 5]^T and

  H = [h2 h3] = N([20 5]) = [-0.2425 0.9701]    (52)

The controlled variable is then c = -0.2425 y2 + 0.9701 y3. The same result is obtained from (43). The results with the nullspace method for all six possible combinations are given in Table 2. The table gives the worst-case loss L_wc caused by the measurement error. We have L_wc = (1/2) σ̄(M)^2, where, since M_d = 0, M = M_n^y = J̃ (G̃^y)^{-1} W_n. For comparison, Table 2 also shows σ̲(G̃^y), which according to the sub-optimal rule for selecting measurements should be maximized in order to minimize the implementation error. We note that for this example, maximizing σ̲(G̃^y) gives the same (correct) ranking as minimizing L_wc.

From Table 2, we see that combinations involving measurement y1 are all sensitive to noise. Combination (i, j) = (2, 3) is the best, followed by (3, 4), while (1, 2), (1, 4) and (1, 3) all have the same noise sensitivity when combined using the nullspace method. The reason is that N(F^T) = [1 0], so that only measurement y1 is used. Combination (2, 4) yields infinite noise sensitivity with the nullspace method, since G̃^y is singular.

Table 2
Combinations of two measurements, c = h_i y_i + h_j y_j, with zero disturbance loss (M_d = 0) and the resulting loss L_wc caused by measurement error (columns: i, j, h_i and h_j from (43), σ̲(G̃^y), L_wc).

Table 3
Combinations of two measurements, c = h_i y_i + h_j y_j, with minimum loss L_wc for combined disturbances and measurement error (columns: i, j, h_i and h_j from (31), L_wc).

Next, consider the optimal combination of two measurements for combined disturbances and measurement errors ("exact local method"). The results are summarized in Table 3. Again, we find that the best combination is (2, 3), with a loss L_wc^{23} = 0.0406. This combination has a non-zero M_d, so, as expected, the disturbance loss is no longer zero.
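The (y2, y3) numbers above can be reproduced with a short script (our code; same assumed data as before):

```python
import numpy as np

Juu, Jud = 2.0, -2.0
Gt = np.array([[20.0, 0.0],    # G~^y for (y2, y3)
               [10.0, -5.0]])
Gy = Gt[:, :1]                 # G^y column
F  = np.array([[20.0], [5.0]]) # optimal sensitivities of y2, y3
Jt = np.array([[np.sqrt(Juu), Jud / np.sqrt(Juu)]])   # J~

# Nullspace method, Eq. (37)/(43): zero disturbance loss.
H = Jt @ np.linalg.inv(Gt)     # with the free parameter Mn = I
Hn = H / np.linalg.norm(H)
print(np.round(Hn, 4))         # -> [[-0.2425  0.9701]], Eq. (52)
assert np.allclose(H @ F, 0)   # Md = 0

# Resulting loss from measurement error, Eq. (45) with Wn = I:
Lnull = 0.5 * np.linalg.norm(Jt @ np.linalg.inv(Gt), 2) ** 2
print(round(Lnull, 4))         # -> 0.0425

# Exact local method, Eq. (31): slightly lower combined loss.
Ft = np.hstack([F, np.eye(2)])                 # F~ = [F W_d  W_n]
Jh = np.array([[np.sqrt(2.0)]])
YG = np.linalg.solve(Ft @ Ft.T, Gy)
He = (YG @ np.linalg.inv(Gy.T @ YG) @ Jh).T    # Eq. (31)
M  = -Jh @ np.linalg.solve(He @ Gy, He @ Ft)
Lexact = 0.5 * np.linalg.norm(M, 2) ** 2
print(round(Lexact, 4))        # -> 0.0406 (cf. Table 3)
```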
For this combination, the result is very similar to that of the extended nullspace method, which gave L_wc^{23} = 0.0425 and M_d = 0. However, for the other five two-measurement combinations, the differences are much larger, as the use of (31) gives a significantly lower sensitivity to measurement error, see Table 3. Note, however, that L_wc for those five cases is only slightly better than using a single measurement, see (51).

5.3. Measurement combinations: use all four measurements

Consider again first the case where we want zero disturbance loss (M_d = 0). Eq. (41) of the extended nullspace method gives (after normalizing (scaling) the elements in H to get ‖H‖_2 = 1)

  H = [0.0206 -0.2419 0.9700 -0.0121]    (53)

which gives G = HG^y = 4.85 and |M_n| = 0.2915. The loss contributions from the disturbance and the noise are M_d = 0 and M_n^y = [0.0060 -0.0705 0.2827 -0.0035], respectively. The corresponding loss is L_wc = (1/2) σ̄([M_d M_n^y])^2 = 0.0425.

For comparison, the optimal combination ("exact local method") with respect to combined disturbances and measurement noise, obtained from (31), is (after normalizing to get ‖H‖_2 = 1) [3]

  H_opt = [0.0206 -0.2317 0.9725 -0.0116]    (54)

which gives G = 5.08 and |M_n| = 0.2783. The loss contributions from the disturbance and the noise are M_d = 0.0606 and M_n^y = [0.0057 -0.0645 0.2706 -0.0032], respectively. The resulting loss is L_wc = 0.0405, which is very similar to that of the extended nullspace method, and only marginally improved compared to using only the two best measurements (L_wc^{23} = 0.0406).

In summary, the simple two-step nullspace procedure, where one first selects a just-enough set of measurements by maximizing σ̲(G̃^y) and then obtains H from the nullspace method, using either Eq. (43) or (33), works well for this example.
6. Example 2: control of a refrigeration cycle

For a more physically motivated example, we consider the optimal operation of a CO₂ refrigeration cycle [6]; for example, it could be the air-conditioning (AC) unit for a house, see Fig. 3. The cycle has one unconstrained degree of freedom (n_u = 1), which may be viewed as the high pressure (p_h) in the cycle. Ideally, p_h should be kept at its optimal value by varying the free input (u), which is the choke valve position. However, simply keeping a constant setpoint p_h,s is far from optimal because of disturbances. Three disturbances (n_d = 3) are considered: the outside temperature T_H, the inside temperature T_C (e.g. because of a setpoint change) and the heat transfer rate UA. As a start, control of single variables is considered, because this is the simplest and is the preferred choice if such a variable

can be found. The best single controlled variable (with a large scaled gain and a small loss) was found to be the mass holdup (c = M_c) in the condenser [6], but it is very difficult to measure in practice. Therefore, a combination of measurements needs to be controlled. Theoretically, we need to combine at least four measurements (n_u + n_d = 4) to get zero loss, independent of the disturbances (i.e., to get M_d = 0). However, to simplify the implementation we prefer to use fewer measurements. Two measurements which are easy to measure and have a reasonably large scaled gain [6] are the high pressure (y_1 = p_h) and the temperature before the choke valve (y_2 = T_h). It is not possible to get M_d = 0 with only two measurements, so instead H = [h_1 h_2] was obtained numerically by minimizing ‖M_d‖; see Eq. (46) (actually, the matrix H can be obtained explicitly from (31) with W_n = 0, since we have n_y = 2 < n_u + n_d = 4; see the comment following Theorem 1). This gives a controlled variable c = h_1 y_1 + h_2 y_2 = h_1 p_h + h_2 T_h. To get a more physical variable, we select h_1 = 1, which gives a controlled variable in units of pressure. We find [6]

c = p_h,combined = p_h + k(T_h − 5.5 °C)

where k = h_2/h_1 = 8.53 bar/°C and the (constant) setpoint for c = p_h,combined is 97.6 bar, which is the nominally optimal value for the high pressure. Controlling c may be viewed as controlling the high pressure, but with a temperature-corrected setpoint. A more detailed analysis using a nonlinear model shows that this combination gives very small losses for all disturbances and measurement errors [6].

Fig. 3. Proposed control structure for the refrigeration cycle.

Other case studies: The results of this paper, and in particular the use of (31) in Theorem 1, have also been applied successfully to a distillation case study where the issue is to select temperature combinations [4].

7. Discussion

7.1. Local method

The above derivations are local, since we assume a linear process and a second-order objective function in the inputs and the disturbances. Thus, the proposed controlled variables are only globally optimal for the case with a linear model and a quadratic objective. In general, we should always, for a final validation, check the losses for the proposed structures using a nonlinear model of the process.

7.2. Quadratic optimization problem

The following reformulations of the results in this paper may be useful for extending them to other applications and for comparing them with other results. First, we give a reformulation of the original nullspace method in [1].

Theorem 3 (Linear invariants for quadratic optimization problems). Consider an unconstrained quadratic optimization problem in the variables u (input vector of length n_u) and d (disturbance vector of length n_d):

min_u J(u, d) = [uᵀ dᵀ] [J_uu J_ud; J_udᵀ J_dd] [u; d]   (55)

In addition, there are measurement variables y = G^y u + G_d^y d. If there exist n_y ≥ n_u + n_d independent measurements (where independent means that the matrix G̃ = [G^y G_d^y] has full rank), then the optimal solution to (55) has the property that there exist n_c = n_u linear variable combinations (constraints) c = Hy that are invariant to the disturbances d. Here, H may be obtained from the nullspace method using (33) (where the optimal sensitivity F may be obtained from (5)) or from the explicit expression (37).

Next, the result on the worst-case loss [3] and the explicit expression for the exact local method in Theorem 1 can be reformulated as follows.

Theorem 4 (Loss by introducing linear constraints for a noisy quadratic optimization problem). Consider the unconstrained quadratic optimization problem in Theorem 3,

min_u J(u, d) = [uᵀ dᵀ] [J_uu J_ud; J_udᵀ J_dd] [u; d]

and a set of noisy measurements y_m = y + n^y, where y = G^y u + G_d^y d.
Assume that n_c = n_u constraints c = H y_m = c_s are added to the problem, which will result in a non-optimal solution with a loss L = J(u, d) − J_opt(d). Consider disturbances d and noise n^y with magnitudes

d = W_d d′,  n^y = W_n n′,  ‖[d′; n′]‖₂ ≤ 1

Then, for a given H, the worst-case loss is L_wc = σ̄(M)²/2, where M is given in (20) and (21), and the optimal H that minimizes σ̄(M) is given by (31) in Theorem 1. This optimal H also minimizes ‖M‖_F.

Note from Theorem 3 that if there is a sufficient number (n_y ≥ n_u + n_d) of noise-free measurements, then we can obtain zero loss in Theorem 4.

7.3. Relationship to indirect control

Indirect control is when we want to find a set of controlled variables c = Hy such that the primary variables y_1 are indirectly kept at constant setpoints. The case of indirect control is discussed in more detail by Hori et al. [5], and their results are a special case of the results for the nullspace method presented in this paper if we select

J = ‖y_1 − y_1^s‖₂² = (y_1 − y_1^s)ᵀ(y_1 − y_1^s)   (56)

To show this, write

Δy_1 = G_1 Δu + G_d1 Δd = G̃_1 [Δu; Δd]   (57)

and assume n_y1 = n_u, so G_1 is a square matrix. We find that

J_uu = G_1ᵀ G_1   (58)
J_ud = G_1ᵀ G_d1   (59)

Consider the case with n_y = n_u + n_d, where we can achieve perfect indirect control with respect to disturbances. Substituting (58) and (59) into the explicit expression (37) for the nullspace method gives

H = P_c,0 G̃_1 (G̃)⁻¹   (60)

where we have introduced the new free parameter P_c,0 = G_1 J_uu^(−1/2) M_n and G̃_1 = [G_1 G_d1]. This is identical to the results of Hori et al. [5].

8. Conclusion

Explicit expressions have been derived for the optimal linear measurement combination c = Hy. The null space method [1] for selecting linear measurement combinations c = Hy has been extended to the general case with extra measurements, n_y > n_u + n_d; see Eq. (42) in Theorem 2. The idea of the extended nullspace method is to first focus on minimizing the steady-state loss caused by disturbances and then, if there are remaining degrees of freedom, minimize the effect of measurement errors. Alternatively, one may minimize the effect of combined disturbances and measurement errors, which is the exact local method of Halvorsen et al. [3].
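The invariance claim of Theorem 3 can be exercised numerically on a toy problem: for a quadratic objective, the combination c = Hy with HF = 0 takes the same value at the optimum for every disturbance. All matrices below are invented for illustration:

```python
import numpy as np

# Toy quadratic problem: J = [u d]^T [Juu Jud; Jud^T Jdd] [u d]  (made-up data).
Juu = np.array([[2.0]])
Jud = np.array([[-1.0]])
G   = np.array([[1.0], [2.0]])   # y = G u + Gd d, with n_y = n_u + n_d = 2
Gd  = np.array([[0.5], [1.0]])

# Optimal sensitivity F = dy_opt/dd, using u_opt(d) = -Juu^-1 Jud d.
F = -(G @ np.linalg.inv(Juu) @ Jud - Gd)

# Nullspace method: rows of H span N(F^T), so H F = 0.
_, _, Vt = np.linalg.svd(F.T)
H = Vt[1:, :]

# The invariant: c = H y_opt(d) is the same (here zero) for every disturbance.
for d in (-1.0, 0.0, 2.0):
    u_opt = -np.linalg.inv(Juu) @ Jud * d
    y_opt = G @ u_opt + Gd * d
    print((H @ y_opt).item())    # ~0.0 each time
```

Holding c at this invariant value by feedback therefore reproduces the optimal input for any d, which is exactly the self-optimizing property.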
In this paper, we have derived an explicit solution for H for this problem; see Eq. (31) in Theorem 1. This expression applies to any number of measurements, including n_y < n_u + n_d. To simplify, one often uses only a subset of the available measurements when obtaining the combination c = Hy. A simple rule, which aims at minimizing the effect of measurement errors, is to select measurements so as to maximize σ̲(G̃) or, even better, to minimize σ̄(J̃(G̃)⁻¹W_n). Here, G̃ is the steady-state gain matrix from the inputs and disturbances to the selected measurements. Finally, the results can be interpreted as adding linear constraints that minimize the effect on the solution to a quadratic optimization problem; see Theorems 3 and 4.

Appendix A. Analytical solution for the exact local method

A.1. Scalar case

The minimization problem for the scalar case in (9) can be rewritten as

min_x ‖F̃ᵀx‖₂² = min_x xᵀ F̃ F̃ᵀ x   subject to   G^yᵀ x = J_uu^(1/2)   (A.1)

where we have introduced x ≜ Hᵀ, which is a column vector in the scalar case. The solution to this problem must satisfy the following KKT conditions (e.g. [12, p. 444]):

[F̃F̃ᵀ G^y; G^yᵀ 0] [x; λ] = [0; J_uu^(1/2)]   (A.2)

To find the optimal x, we must invert the KKT matrix, and from the Schur complement of the inverse of a partitioned matrix (e.g. [14, p. 56]) we obtain that the optimal x is

x = Hᵀ = (F̃F̃ᵀ)⁻¹ G^y (G^yᵀ (F̃F̃ᵀ)⁻¹ G^y)⁻¹ J_uu^(1/2)   (A.3)

Comment: In the scalar case, G^y is a (column) vector and J_uu^(1/2) is a scalar. However, the expression and proof also apply if G^y were a matrix and J_uu^(1/2) were a vector. This fact is important for the extension to the multivariable case.

A.2. Extension to the multivariable case

To show that the solution for the scalar case also applies to the multivariable case, we first transform the multivariable case into a scalar problem. In this proof, we consider a system with two controlled variables (n_u = 2), but it can easily be extended to any dimension. The optimization problem is min_H ‖HF̃‖_F subject to HG^y = J_uu^(1/2), where we introduce X ≜ Hᵀ.
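The scalar-case solution (A.3) can be checked numerically. The data below (G^y, F̃, J_uu^(1/2)) are randomly generated and purely illustrative; the check is that the formula satisfies the constraint and that no feasible perturbation reduces the objective:

```python
import numpy as np

# Numerical check of (A.3) with random, illustrative data:
#   x = (Ft Ft^T)^-1 G (G^T (Ft Ft^T)^-1 G)^-1 Juu^(1/2)
# minimizes ||Ft^T x||_2 subject to G^T x = Juu^(1/2)   (scalar case, n_u = 1).
rng = np.random.default_rng(0)
ny = 4
G  = rng.standard_normal((ny, 1))           # measurement gain G^y
Ft = rng.standard_normal((ny, ny + 2))      # Ft = [F Wd  Wn]
Juu_half = np.array([[1.3]])

FFt_inv = np.linalg.inv(Ft @ Ft.T)
A = FFt_inv @ G
x = A @ np.linalg.inv(G.T @ A) @ Juu_half   # optimal x = H^T from (A.3)

print(G.T @ x)                              # recovers Juu_half: constraint holds

# Any other feasible point x + z (with G^T z = 0) has a larger objective.
Z = np.linalg.svd(G.T)[2][1:, :].T          # basis for N(G^T)
z = Z @ rng.standard_normal((ny - 1, 1))
assert np.linalg.norm(Ft.T @ x) <= np.linalg.norm(Ft.T @ (x + z)) + 1e-12
```

The perturbation test works because at the optimum F̃F̃ᵀx lies in the range of G^y, so feasible directions are orthogonal to the gradient, exactly the KKT condition (A.2).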
The matrices X and J_uu^(1/2) are split into vectors:

X ≜ [x_1 x_2];  J_uu^(1/2) = [J_1 J_2]   (A.4)

We further introduce the long vectors

x_n = [x_1; x_2],  J_n = [J_1; J_2]   (A.5)

and the large matrices

G_nᵀ = [G^yᵀ 0; 0 G^yᵀ],  F̃_n = [F̃ 0; 0 F̃]   (A.6)

Then HF̃ = XᵀF̃ = [x_1ᵀF̃; x_2ᵀF̃], and for the 2-norm the following applies:

‖HF̃‖_F² = ‖[x_1ᵀF̃; x_2ᵀF̃]‖_F² = ‖[x_1ᵀF̃ x_2ᵀF̃]‖₂² = ‖x_nᵀF̃_n‖₂²   (A.7)

where it is noted that ‖·‖_F = ‖·‖₂ for a vector. The constraints G^yᵀX = J_uu^(1/2) become

[G^yᵀx_1 G^yᵀx_2] = [J_1 J_2]   (A.8)

or G^yᵀx_1 = J_1 and G^yᵀx_2 = J_2, which can be rewritten as

[G^yᵀ 0; 0 G^yᵀ] [x_1; x_2] = [J_1; J_2]  or  G_nᵀ x_n = J_n   (A.9)

(A.7) and (A.9) define a vector optimization problem of the form in (A.1), and from (A.3) the solution is

x_n = (F̃_n F̃_nᵀ)⁻¹ G_n (G_nᵀ (F̃_n F̃_nᵀ)⁻¹ G_n)⁻¹ J_n   (A.10)

We now need to unpack this to find the optimal Hᵀ = X. Substituting the block-diagonal matrices from (A.6) into (A.10) and rearranging, we see that

[x_1; x_2] = [(F̃F̃ᵀ)⁻¹G^y(G^yᵀ(F̃F̃ᵀ)⁻¹G^y)⁻¹J_1; (F̃F̃ᵀ)⁻¹G^y(G^yᵀ(F̃F̃ᵀ)⁻¹G^y)⁻¹J_2]   (A.11)

and finally

Hᵀ = X = [x_1 x_2] = (F̃F̃ᵀ)⁻¹G^y(G^yᵀ(F̃F̃ᵀ)⁻¹G^y)⁻¹[J_1 J_2] = (F̃F̃ᵀ)⁻¹G^y(G^yᵀ(F̃F̃ᵀ)⁻¹G^y)⁻¹J_uu^(1/2)   (A.12)

This proves that the solution for the scalar case also applies for the multivariable case.

References

[1] V. Alstad, S. Skogestad, Null space method for selecting optimal measurement combinations as controlled variables, Ind. Eng. Chem. Res. 46 (3) (2007).
[2] D. Bonvin, G. François, B. Srinivasan, Use of measurements for enforcing the necessary conditions of optimality in the presence of constraints and uncertainty, J. Proc. Control 15 (6) (2005).
[3] I.J. Halvorsen, S. Skogestad, J.C. Morud, V. Alstad, Optimal selection of controlled variables, Ind. Eng. Chem. Res. 42 (14) (2003).
[4] E. Hori, S. Skogestad, Selection of controlled variables: maximum gain rule and combination of measurements, Ind. Eng. Chem. Res., submitted for publication.
[5] E. Hori, S. Skogestad, V. Alstad, Perfect steady state indirect control, Ind. Eng. Chem. Res. 44 (4) (2005).
[6] J.B. Jensen, S. Skogestad, Optimal operation of simple refrigeration cycles. Part II: Selection of controlled variables, Comp. Chem. Eng. 31 (2007) 1590–1601.
[7] J.V. Kadam, W. Marquardt, M. Schlegel, T.
Backx, O.H. Bosgra, P.-J. Brouwer, G. Dünnebier, D.V. Hessem, A. Tiagounov and S.D. Wolf, Towards integrated dynamic real-time optimization and control of industrial processes, in: FOCAPO 2003, 4th International Conference on Computer-Aided Process Operations, Proceedings of the Conference held at Coral Springs, Florida, January 2003.
[8] V. Kariwala, Optimal measurement combination for local self-optimizing control, Ind. Eng. Chem. Res. 46 (2007).
[9] V. Kariwala, Y. Cao, S. Janardhanan, Local self-optimizing control with average loss minimization, Ind. Eng. Chem. Res. (2008), in press.
[10] T.E. Marlin, A.N. Hrymak, Real-time optimization of continuous processes, in: American Institute of Chemical Engineering Symposium Series, Fifth International Conference on Chemical Process Control, vol. 93, 1997.
[11] M. Morari, G. Stephanopoulos, Y. Arkun, Studies in the synthesis of control structures for chemical processes. Part I: Formulation of the problem, process decomposition and the classification of the controller task. Analysis of the optimizing control structures, AIChE J. 26 (2) (1980) 220–232.
[12] J. Nocedal, S.J. Wright, Numerical Optimization, Springer, 1999.
[13] S. Skogestad, Plantwide control: the search for the self-optimizing control structure, J. Proc. Control 10 (2000).
[14] S. Skogestad, I. Postlethwaite, Multivariable Feedback Control, second ed., John Wiley & Sons, 2005.
[15] B. Srinivasan, C.J. Primus, D. Bonvin, N.L. Ricker, Run-to-run optimization via control of generalized constraints, Control Eng. Pract. 9 (8) (2001).
[16] B. Srinivasan, D. Bonvin, E. Visser, Dynamic optimization of batch processes I. Characterization of the nominal solution, Comp. Chem. Eng. 27 (1) (2003) 1–26.
[17] B. Srinivasan, D. Bonvin, E. Visser, Dynamic optimization of batch processes III. Role of measurements in handling uncertainty, Comp. Chem. Eng. 27 (1) (2003).
[18] G. Strang, Linear Algebra and its Applications, third ed., Harcourt Brace & Company, 1988.
[19] Y. Zhang, D. Nader, J.F.
orbes, Results analsis for trust constrained real-lime optimization, J. Proc. Control (00) [0] Y. Zhang, D. Monder, J.. orbes, Real-time optimization under parametric uncertaint: a probabilit constrained approach, J. Proc. Control (3) (00)


More information

LESSON #48 - INTEGER EXPONENTS COMMON CORE ALGEBRA II

LESSON #48 - INTEGER EXPONENTS COMMON CORE ALGEBRA II LESSON #8 - INTEGER EXPONENTS COMMON CORE ALGEBRA II We just finished our review of linear functions. Linear functions are those that grow b equal differences for equal intervals. In this unit we will

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

8.7 Systems of Non-Linear Equations and Inequalities

8.7 Systems of Non-Linear Equations and Inequalities 8.7 Sstems of Non-Linear Equations and Inequalities 67 8.7 Sstems of Non-Linear Equations and Inequalities In this section, we stud sstems of non-linear equations and inequalities. Unlike the sstems of

More information

UC San Francisco UC San Francisco Previously Published Works

UC San Francisco UC San Francisco Previously Published Works UC San Francisco UC San Francisco Previousl Published Works Title Radiative Corrections and Quantum Chaos. Permalink https://escholarship.org/uc/item/4jk9mg Journal PHYSICAL REVIEW LETTERS, 77(3) ISSN

More information

SEPARABLE EQUATIONS 2.2

SEPARABLE EQUATIONS 2.2 46 CHAPTER FIRST-ORDER DIFFERENTIAL EQUATIONS 4. Chemical Reactions When certain kinds of chemicals are combined, the rate at which the new compound is formed is modeled b the autonomous differential equation

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

FINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES. Danlei Chu, Tongwen Chen, Horacio J. Marquez

FINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES. Danlei Chu, Tongwen Chen, Horacio J. Marquez FINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES Danlei Chu Tongwen Chen Horacio J Marquez Department of Electrical and Computer Engineering University of Alberta Edmonton

More information

3.0 PROBABILITY, RANDOM VARIABLES AND RANDOM PROCESSES

3.0 PROBABILITY, RANDOM VARIABLES AND RANDOM PROCESSES 3.0 PROBABILITY, RANDOM VARIABLES AND RANDOM PROCESSES 3.1 Introduction In this chapter we will review the concepts of probabilit, rom variables rom processes. We begin b reviewing some of the definitions

More information

LECTURE NOTES - VIII. Prof. Dr. Atıl BULU

LECTURE NOTES - VIII. Prof. Dr. Atıl BULU LECTURE NOTES - VIII «LUID MECHNICS» Istanbul Technical Universit College of Civil Engineering Civil Engineering Department Hdraulics Division CHPTER 8 DIMENSIONL NLYSIS 8. INTRODUCTION Dimensional analsis

More information

14.1 Systems of Linear Equations in Two Variables

14.1 Systems of Linear Equations in Two Variables 86 Chapter 1 Sstems of Equations and Matrices 1.1 Sstems of Linear Equations in Two Variables Use the method of substitution to solve sstems of equations in two variables. Use the method of elimination

More information

4 Inverse function theorem

4 Inverse function theorem Tel Aviv Universit, 2013/14 Analsis-III,IV 53 4 Inverse function theorem 4a What is the problem................ 53 4b Simple observations before the theorem..... 54 4c The theorem.....................

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

A weakly dispersive edge wave model

A weakly dispersive edge wave model Coastal Engineering 38 1999 47 5 wwwelseviercomrlocatercoastaleng Technical Note A weakl dispersive edge wave model A Sheremet ), RT Guza Center for Coastal Studies, Scripps Institution of Oceanograph,

More information

2 Ordinary Differential Equations: Initial Value Problems

2 Ordinary Differential Equations: Initial Value Problems Ordinar Differential Equations: Initial Value Problems Read sections 9., (9. for information), 9.3, 9.3., 9.3. (up to p. 396), 9.3.6. Review questions 9.3, 9.4, 9.8, 9.9, 9.4 9.6.. Two Examples.. Foxes

More information

Cost-detectability and Stability of Adaptive Control Systems

Cost-detectability and Stability of Adaptive Control Systems Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 2005 Seville, Spain, December 12-15, 2005 TuC04.2 Cost-detectabilit and Stabilit of Adaptive Control

More information

THE ADJOINT OF A MATRIX The transpose of this matrix is called the adjoint of A That is, C C n1 C 22.. adj A. C n C nn.

THE ADJOINT OF A MATRIX The transpose of this matrix is called the adjoint of A That is, C C n1 C 22.. adj A. C n C nn. 8 Chapter Determinants.4 Applications of Determinants Find the adjoint of a matrix use it to find the inverse of the matrix. Use Cramer s Rule to solve a sstem of n linear equations in n variables. Use

More information

Math 107: Calculus II, Spring 2015: Midterm Exam II Monday, April Give your name, TA and section number:

Math 107: Calculus II, Spring 2015: Midterm Exam II Monday, April Give your name, TA and section number: Math 7: Calculus II, Spring 25: Midterm Exam II Monda, April 3 25 Give our name, TA and section number: Name: TA: Section number:. There are 5 questions for a total of points. The value of each part of

More information

PICONE S IDENTITY FOR A SYSTEM OF FIRST-ORDER NONLINEAR PARTIAL DIFFERENTIAL EQUATIONS

PICONE S IDENTITY FOR A SYSTEM OF FIRST-ORDER NONLINEAR PARTIAL DIFFERENTIAL EQUATIONS Electronic Journal of Differential Equations, Vol. 2013 (2013), No. 143, pp. 1 7. ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu PICONE S IDENTITY

More information

MATH 2331 Linear Algebra. Section 2.1 Matrix Operations. Definition: A : m n, B : n p. Example: Compute AB, if possible.

MATH 2331 Linear Algebra. Section 2.1 Matrix Operations. Definition: A : m n, B : n p. Example: Compute AB, if possible. MATH 2331 Linear Algebra Section 2.1 Matrix Operations Definition: A : m n, B : n p ( 1 2 p ) ( 1 2 p ) AB = A b b b = Ab Ab Ab Example: Compute AB, if possible. 1 Row-column rule: i-j-th entry of AB:

More information

3 a 21 a a 2N. 3 a 21 a a 2M

3 a 21 a a 2N. 3 a 21 a a 2M APPENDIX: MATHEMATICS REVIEW G 12.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers 2 A = 6 4 a 11 a 12... a 1N a 21 a 22... a 2N. 7..... 5 a M1 a M2...

More information

Journal of Inequalities in Pure and Applied Mathematics

Journal of Inequalities in Pure and Applied Mathematics Journal of Inequalities in Pure and Applied Mathematics http://jipam.vu.edu.au/ Volume 2, Issue 2, Article 15, 21 ON SOME FUNDAMENTAL INTEGRAL INEQUALITIES AND THEIR DISCRETE ANALOGUES B.G. PACHPATTE DEPARTMENT

More information

Mechanical Spring Reliability Assessments Based on FEA Generated Fatigue Stresses and Monte Carlo Simulated Stress/Strength Distributions

Mechanical Spring Reliability Assessments Based on FEA Generated Fatigue Stresses and Monte Carlo Simulated Stress/Strength Distributions Purdue Universit Purdue e-pubs International Compressor Engineering Conference School of Mechanical Engineering 4 Mechanical Spring Reliabilit Assessments Based on FEA Generated Fatigue Stresses and Monte

More information

How Important is the Detection of Changes in Active Constraints in Real-Time Optimization?

How Important is the Detection of Changes in Active Constraints in Real-Time Optimization? How Important is the Detection of Changes in Active Constraints in Real-Time Optimization? Saurabh Deshpande Bala Srinivasan Dominique Bonvin Laboratoire d Automatique, École Polytechnique Fédérale de

More information

Linear Algebra (Review) Volker Tresp 2017

Linear Algebra (Review) Volker Tresp 2017 Linear Algebra (Review) Volker Tresp 2017 1 Vectors k is a scalar (a number) c is a column vector. Thus in two dimensions, c = ( c1 c 2 ) (Advanced: More precisely, a vector is defined in a vector space.

More information

Algorithms for Constrained Optimization

Algorithms for Constrained Optimization 1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic

More information

Blow-up collocation solutions of nonlinear homogeneous Volterra integral equations

Blow-up collocation solutions of nonlinear homogeneous Volterra integral equations Blow-up collocation solutions of nonlinear homogeneous Volterra integral equations R. Benítez 1,, V. J. Bolós 2, 1 Dpto. Matemáticas, Centro Universitario de Plasencia, Universidad de Extremadura. Avda.

More information

* τσ σκ. Supporting Text. A. Stability Analysis of System 2

* τσ σκ. Supporting Text. A. Stability Analysis of System 2 Supporting Tet A. Stabilit Analsis of Sstem In this Appendi, we stud the stabilit of the equilibria of sstem. If we redefine the sstem as, T when -, then there are at most three equilibria: E,, E κ -,,

More information

Least Squares Optimization

Least Squares Optimization Least Squares Optimization The following is a brief review of least squares optimization and constrained optimization techniques. I assume the reader is familiar with basic linear algebra, including the

More information

Simplified marginal effects in discrete choice models

Simplified marginal effects in discrete choice models Economics Letters 81 (2003) 321 326 www.elsevier.com/locate/econbase Simplified marginal effects in discrete choice models Soren Anderson a, Richard G. Newell b, * a University of Michigan, Ann Arbor,

More information

This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore.

This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore. This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore. Title Bidirectional branch and bound for controlled variable selection Part II : exact local method for self-optimizing

More information

Module 6 : Solving Ordinary Differential Equations - Initial Value Problems (ODE-IVPs) Section 3 : Analytical Solutions of Linear ODE-IVPs

Module 6 : Solving Ordinary Differential Equations - Initial Value Problems (ODE-IVPs) Section 3 : Analytical Solutions of Linear ODE-IVPs Module 6 : Solving Ordinary Differential Equations - Initial Value Problems (ODE-IVPs) Section 3 : Analytical Solutions of Linear ODE-IVPs 3 Analytical Solutions of Linear ODE-IVPs Before developing numerical

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

1 + x 1/2. b) For what values of k is g a quasi-concave function? For what values of k is g a concave function? Explain your answers.

1 + x 1/2. b) For what values of k is g a quasi-concave function? For what values of k is g a concave function? Explain your answers. Questions and Answers from Econ 0A Final: Fall 008 I have gone to some trouble to explain the answers to all of these questions, because I think that there is much to be learned b working through them

More information

AN ANALYSIS OF A FRACTAL KINETICS CURVE OF SAVAGEAU

AN ANALYSIS OF A FRACTAL KINETICS CURVE OF SAVAGEAU ANZIAM J. 452003, 26 269 AN ANALYSIS OF A FRACTAL KINETICS CURVE OF SAVAGEAU JOHN MALONEY and JACK HEIDEL Received 26 June, 2000; revised 2 Januar, 200 Abstract The fractal kinetics curve derived b Savageau

More information

MATH 2070 Test 3 (Sections , , & )

MATH 2070 Test 3 (Sections , , & ) Multiple Choice: Use a # pencil and completel fill in each bubble on our scantron to indicate the answer to each question. Each question has one correct answer. If ou indicate more than one answer, or

More information

ELEC4612 Power System Analysis Power Flow Analysis

ELEC4612 Power System Analysis Power Flow Analysis ELEC462 Power Sstem Analsis Power Flow Analsis Dr Jaashri Ravishankar jaashri.ravishankar@unsw.edu.au Busbars The meeting point of various components of a PS is called bus. The bus or busbar is a conductor

More information

Linear Algebra Practice Problems

Linear Algebra Practice Problems Math 7, Professor Ramras Linear Algebra Practice Problems () Consider the following system of linear equations in the variables x, y, and z, in which the constants a and b are real numbers. x y + z = a

More information

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether

More information

Dynamic Real-Time Optimization: Linking Off-line Planning with On-line Optimization

Dynamic Real-Time Optimization: Linking Off-line Planning with On-line Optimization Dynamic Real-Time Optimization: Linking Off-line Planning with On-line Optimization L. T. Biegler and V. Zavala Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA 15213 April 12,

More information