Recurrent Neural Networks for Solving Linear Inequalities and Equations

Youshen Xia, Jun Wang, Senior Member, IEEE, and Donald L. Hung, Member, IEEE

Abstract: This paper presents two types of recurrent neural networks, continuous-time and discrete-time ones, for solving linear inequality and equality systems. In addition to the basic continuous-time and discrete-time neural-network models, two improved discrete-time neural networks with faster convergence rates are proposed by use of scaling techniques. The proposed neural networks can solve a linear inequality and equality system, can solve a linear program and its dual simultaneously, and thus extend and modify existing neural networks for solving linear equations or inequalities. Rigorous proofs of the global convergence of the proposed neural networks are given. Digital realization of the proposed recurrent neural networks is also discussed.

Index Terms: Linear inequalities and equations, recurrent neural networks.

I. INTRODUCTION

THE PROBLEM of solving systems of linear inequalities and equations arises in numerous fields in science, engineering, and business. It is usually an initial part of many solution processes, e.g., a preliminary step for solving optimization problems subject to linear constraints using interior-point methods [1]. Furthermore, numerous applications, such as image restoration, computer tomography, system identification, and control system synthesis, lead to very large systems of linear equations and inequalities which need to be solved within a reasonable time window.

There are two classes of well-developed approaches for solving linear inequalities. The first class transforms the problem into a phase I linear-programming problem, which is then solved by well-established methods such as the simplex method or the penalty method. These methods require many matrix operations or have to deal with the difficulty of setting penalty parameters. The second class of approaches is based on iterative methods. Most of these need no matrix manipulations, and the basic computational step is extremely simple and easy to program. One type of iterative method is derived from the relaxation method for linear inequalities [2]-[6]. These methods are called relaxation methods because they consider one constraint at a time: in each iteration all but one constraint is set aside, and an orthogonal projection is made from the current point onto the hyperplane corresponding to the selected constraint. As a result, they are also called successive orthogonal projection methods. Making an orthogonal projection onto a single linear constraint is computationally inexpensive. However, when solving a huge system which may have thousands of constraints, considering only one constraint at a time leads to slow convergence. Therefore, effective parallel solution methods which can process a group of constraints, or all of them, at a time are desirable.

Manuscript received July 9, 1997; revised May 3. This work was supported in part by the Hong Kong Research Grants Council under Grant CUHK 381/96E. This paper was recommended by Associate Editor J. Zurada. Y. Xia and J. Wang are with the Department of Mechanical and Automation Engineering, Chinese University of Hong Kong, Shatin, NT, Hong Kong. D. L. Hung is with the School of Electrical Engineering and Computer Science, Washington State University, Richland, WA, USA.
With the advances in new technologies [especially very large scale integration (VLSI) technology], the dynamical-systems approach to solving optimization problems with artificial neural networks has been proposed [6]-[16]. The neural-network approach enables us to solve many optimization problems in real time owing to the massively parallel operation of the computing units and fast convergence properties. In particular, neural networks for solving linear equations and inequalities have been presented in the recent literature in separate settings. For solving linear equations, Cichocki and Unbehauen [17], [18] first developed various recurrent neural networks. In parallel, Wang [19] and Wang and Li [20] presented similar continuous-time neural networks for solving linear equations. For solving linear inequalities, Cichocki and Bargiela [21] developed three continuous-time neural networks using the aforementioned first approach. These neural networks have penalty parameters which must decrease to zero as time increases to infinity in order to obtain better solution accuracy. Labonte [22] presented a class of discrete-time neural networks for solving linear inequalities which implement the different versions of the aforementioned relaxation-projection method. He showed that the neural network implementing the simultaneous projection algorithm developed by De Pierro and Iusem [4] had fewer neural processing units and better computational performance. However, as De Pierro and Iusem pointed out, their method is only a special case of Censor and Elfving's method [3]. Moreover, we feel that Censor and Elfving's method is more straightforward to realize in a hardware-implemented neural network than the simultaneous projection method.

In this paper, we generalize Censor and Elfving's method and propose two recurrent neural networks, one continuous-time and one discrete-time, for solving linear inequality and equality systems. Furthermore, two modified discrete-time neural networks with good values for their step-size parameters are given by use of scaling techniques. The proposed neural networks retain the same merit as the simultaneous projection network and are guaranteed to converge globally to a solution of the linear inequalities and equations.

In addition, the proof given in this paper is different from the ones presented before.

This paper is organized as follows. In Section II, the formulations of the linear inequalities and equations are introduced and some related properties are discussed. In Section III, the basic network models and network architectures are proposed and their global convergence is proved. In Section IV, two modified discrete-time neural networks are given and their architectures and global convergence are shown. In Section V, digital realization of the proposed discrete-time recurrent neural networks is discussed. In Section VI, operating characteristics of the proposed neural networks are demonstrated via some illustrative examples. Finally, Section VII concludes this paper.

II. PROBLEM FORMULATION

This section summarizes some fundamental properties of linear inequalities and equations and their basic application. Let two arbitrary real coefficient matrices and two corresponding right-hand-side vectors be given. No relation is assumed among them, and either coefficient matrix can be rank deficient or even a zero matrix. We want to find a vector solving the system of linear inequalities and equations that they define, denoted (1). Problem (1) has a solution satisfying all of the inequalities and equations if and only if the intersection of the solution set of the inequality subsystem with that of the equality subsystem is nonempty. It contains two special and important cases: one is a pure system of inequalities, and another is a system of equations with a nonnegativity constraint; both can be formulated in the form of (1).

As an important application, we consider a linear program (LP) (2) and its dual LP (3). By the Kuhn-Tucker conditions, a pair of vectors is optimal for (2) and (3), respectively, if and only if it satisfies the feasibility conditions of both programs together with equality of the primal and dual objective values (4). These optimality conditions again constitute a linear inequality and equality system of the form (1).

To study problem (1), we first define an energy function that measures the violation of the inequalities and equations, and we fix a solution of (1). From [14], we have the following proposition, which shows the equivalence between solving (1) and driving the energy function to zero.

Proposition 1: The energy function is convex, continuously differentiable (but not necessarily twice differentiable), and piecewise quadratic, and it vanishes at a point if and only if that point solves (1).

Although the Hessian of the energy function fails to exist at points where an inequality constraint is exactly active, its gradient is globally Lipschitz.

Proposition 2: The gradient of the energy function is globally Lipschitz continuous.

Proof: This follows directly from the nonexpansiveness of the operation that sets negative components to zero (Lemma 1 in the Appendix), applied to the difference of the gradients at two points.

Finally, the energy function has an important property.

Proposition 3: For any pair of points, the energy function satisfies a descent-type inequality bounding its value at one point by its value at the other point, the inner product of the gradient there with the displacement, and a quadratic term in the displacement.

Proof: Using the lemmas in the Appendix and the second-order Taylor formula, we can complete the proof of Proposition 3.
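The displayed formulas for system (1) and the energy function are not reproduced in this transcription. The following minimal sketch restates them under an assumed notation, with the inequalities written A x <= b and the equalities C x = d; the symbol names are an assumption of this sketch, not the paper's exact ones.

```python
import numpy as np

def energy(x, A, b, C, d):
    """Hedged sketch of the energy function for the assumed system
    A x <= b, C x = d:
        E(x) = 0.5*||(A x - b)^+||^2 + 0.5*||C x - d||^2,
    where (.)^+ = max(., 0).  E(x) = 0 exactly when x satisfies all
    inequalities and equations (cf. Proposition 1)."""
    ineq_res = np.maximum(A @ x - b, 0.0)   # violated part of the inequalities
    eq_res = C @ x - d                      # residual of the equations
    return 0.5 * ineq_res @ ineq_res + 0.5 * eq_res @ eq_res

def energy_gradient(x, A, b, C, d):
    """Gradient of the sketched energy function; it is globally Lipschitz
    (cf. Proposition 2) because u -> max(u, 0) is nonexpansive
    (cf. Lemma 1 in the Appendix)."""
    return A.T @ np.maximum(A @ x - b, 0.0) + C.T @ (C @ x - d)
```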

Fig. 1. Architecture of the continuous-time recurrent neural networks.

III. BASIC MODELS

In this section, we propose two neural-network models, a continuous-time one and a discrete-time one, for solving the linear inequalities and equations (1), and discuss their network architectures. Then we prove global convergence of the proposed networks.

A. Model Descriptions

Using the standard gradient-descent method for the minimization of the energy function, we can derive the dynamic equation of the proposed continuous-time neural-network model (5) and the corresponding discrete-time neural-network model (6), where one design parameter is a fixed step size and the other is the learning rate. On the basis of the set of differential equations (5) and difference equations (6), the design of the neural networks implementing these equations is straightforward. Figs. 1 and 2 illustrate the architectures of the proposed continuous-time and discrete-time neural networks, respectively. Each network has two layers of processing units and consists of adders, simple limiters, and integrators or time delays only.
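As a concrete reading of the gradient-descent derivation, here is a minimal sketch of the discrete-time model (6) and an Euler simulation of the continuous-time model (5). It reuses the energy_gradient function from the sketch in Section II; the names h (step parameter) and beta (learning rate) are assumed, since the corresponding symbols are not legible in this transcription.

```python
def discrete_time_step(x, A, b, C, d, h):
    """One iteration of the discrete-time network (6), read here as
    x(k+1) = x(k) - h * grad E(x(k))  (assumed form)."""
    return x - h * energy_gradient(x, A, b, C, d)

def continuous_time_trajectory(x0, A, b, C, d, beta=1.0, dt=1e-3, steps=10_000):
    """Euler simulation of the continuous-time network (5), read here as
    dx/dt = -beta * grad E(x); beta plays the role of the learning rate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - dt * beta * energy_gradient(x, A, b, C, d)
    return x
```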

Compared with existing neural networks [20], [21] for solving (1), the proposed neural network in (5) contains no time-varying design parameter. Compared with the existing neural networks [22] for solving (1), the proposed neural network in (6) can solve linear inequality and/or equality systems, and thus can solve a linear program and its dual simultaneously. Moreover, it is straightforward to realize in a hardware implementation.

Fig. 2. Architecture of the discrete-time recurrent neural networks.

B. Global Convergence

We first give the result of global convergence for the continuous-time network in (5).

Theorem 1: The neural network in (5) is asymptotically stable in the large at a solution of (1).

Proof: First, from Proposition 2 we obtain that, for any fixed initial point, there exists a unique solution of the initial-value problem associated with (5). Take as a Lyapunov function the squared distance from the state to a fixed solution of (1). Since the energy function is continuously differentiable and convex, it follows [23] that the inner product of its gradient at the current state with the displacement toward the solution is bounded below by the difference of the energy values, and the energy value at the solution is zero.

Hence the time derivative of the Lyapunov function along any trajectory of (5) is nonpositive, and so the solution of (5) is bounded. Furthermore, the derivative vanishes only where the energy function is zero, so it is strictly negative except at the equilibrium points. Therefore, the neural network in (5) is globally Lyapunov stable. Finally, the proof of the global convergence of (5) is similar to that of Theorem 2 in our paper [13].

It should be pointed out that continuous-time and discrete-time neural networks have different advantages. For example, the convergence properties of the continuous-time systems can be much better, since certain controlling parameters (the learning rate) can be set arbitrarily large without affecting the stability of the system; in contrast, in the discrete-time systems the corresponding controlling parameter (the step parameter) must be bounded in a small range, otherwise the network will diverge. On the other hand, in many operations discrete-time networks are preferable to their continuous-time counterparts because of the availability of design tools and the compatibility with computers and other digital devices. Generally speaking, a discrete-time neural-network model can be obtained from a continuous-time one by converting the differential equations into appropriate difference equations through the Euler method. However, the resulting discrete-time model is not automatically guaranteed to be globally convergent, since the controlling parameters may not be bounded in a small range. Therefore, we need to prove global convergence of the discrete-time neural network.

Theorem 2: Let the step parameter satisfy the bound determined by the maximum eigenvalue of the combined coefficient matrix (cf. Remark 2 and Section IV). Then the sequence generated by (6) is globally convergent to a solution of (1).

Proof: From Proposition 3, each iteration of (6) decreases the energy function by at least a fixed positive multiple of the squared norm of the update step, so the sequence of energy values is monotonically decreasing and bounded, and the iterates are bounded. Hence there exists a convergent subsequence, and by continuity of the gradient its limit is a point where the gradient vanishes, i.e., a solution of (1). Moreover, since the combined coefficient matrix is symmetric positive semidefinite, the sequence has only one accumulation point, and therefore it converges to that solution.

Corollary 1: Assume that the solution set of (1) is bounded. Then, under a step-size condition of the same form, the sequence generated by (6) is globally convergent to a solution of (1).

Proof: Since the solution set is bounded, every level set of the energy function is also bounded [18], and hence the sequence generated by (6) is bounded because the energy values are monotonically decreasing. The rest of the proof is similar to that of Theorem 2.

Remark 1: The above analytical results for the discrete-time neural network in (6) provide only sufficient conditions for global convergence. Because they are not necessary conditions, the network in (6) may still converge when the step parameter exceeds the stated bound. This point will be shown in the illustrative examples.
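The exact step-size condition of Theorem 2 is not legible in this transcription. A common sufficient bound for gradient descent on a function with an L-Lipschitz gradient is 0 < h < 2/L, with L the largest eigenvalue of the combined matrix from Proposition 2; the helper below computes that bound under this assumption, as a plausible reading rather than a quotation of the paper's constant.

```python
def step_size_bound(A, C):
    """Assumed admissible-step bound 2 / lambda_max(A^T A + C^T C) for the
    discrete-time model (6); a reconstruction of Theorem 2's condition,
    not a quotation of it."""
    M = A.T @ A + C.T @ C                 # symmetric positive semidefinite
    lam_max = np.linalg.eigvalsh(M)[-1]   # largest eigenvalue
    if lam_max <= 0.0:                    # both coefficient matrices are zero
        return np.inf
    return 2.0 / lam_max
```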

Remark 2: From the Courant-Fischer minimax theorem [24], an upper bound on the maximum eigenvalue of the combined coefficient matrix follows, but this inequality is not strict: a simple example shows that equality can hold. This shows that the step-size parameter of the network in (6) does not necessarily decrease as the number of constraints in (1) increases.

IV. SCALED MODELS

From the preceding section we see that the convergence rate of the discrete-time neural network in (6) depends on the step-size parameter and thus on the size of the maximum eigenvalue of the combined coefficient matrix. In the present section, by scaling techniques, we give two improved models whose step-size parameters take good values that do not depend on the size of that maximum eigenvalue.

A. Model Descriptions

Using scaling techniques, we introduce two modifications of (6), denoted (8) and (9), in which the added scaling matrices are symmetric positive semidefinite. From the viewpoint of circuit implementation, the modified network models closely resemble the network model (6); the only difference among the three network models lies in the connection weights of the second layer, since the scaling matrices can be prescaled. However, these modified models have better values for their step-size parameters than the basic model (6), and thus increase the convergence rate (a sketch of one such scaled iteration appears after Theorem 4 below).

B. Global Convergence

Theorem 3: Assume that a solution of (1) exists. If one of the following conditions is satisfied, then the sequence generated by (8) is globally convergent to a solution of (1).
1) The scaling matrices are diagonal, with entries determined by the squared norms of the column vectors of the respective coefficient matrices.
2) The coefficient matrices have full rank and the scaling matrices are chosen accordingly.

Proof: The case of Condition 1). Note that the scaling matrices are symmetric positive definite. Arguing as in the proof of Proposition 3, one obtains the analogous descent inequality for the scaled iteration, and the rest of the proof is similar to the proof of Theorem 2. The case of Condition 2). The required bound follows from the rank assumption, so the proof for Condition 1) completes the rest of the proof.

Theorem 4: If one of the following conditions is satisfied, then the sequence generated by (9) is globally convergent to a solution of (1).
1) The scaling matrix is diagonal, with entries determined by the squared norms of the row vectors of the combined coefficient matrix.
2) The combined coefficient matrix has full rank.
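The scaling matrices of models (8) and (9) are not legible above. The sketch below shows one diagonal scaling consistent with the surviving wording of Theorem 3, Condition 1), with entries built from squared column norms in the spirit of Censor and Elfving's method; the exact weights are an assumption of this sketch, and it reuses energy_gradient from Section II.

```python
def scaled_step_columns(x, A, b, C, d, relaxation=1.0):
    """Sketch of a column-norm-scaled iteration in the spirit of model (8):
        x(k+1) = x(k) - relaxation * D^{-1} grad E(x(k)),
    where D is diagonal with entries m*||a_j||^2 + p*||c_j||^2 formed from
    the j-th columns of A and C (assumed form, not the paper's exact D)."""
    m = A.shape[0]
    p = C.shape[0]
    diag = m * np.sum(A * A, axis=0) + p * np.sum(C * C, axis=0)
    diag = np.where(diag > 0.0, diag, 1.0)   # guard against all-zero columns
    return x - relaxation * energy_gradient(x, A, b, C, d) / diag
```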

Proof: The case of Condition 1). Since the scaling matrix is symmetric positive definite, so is its inverse. By Proposition 3 we obtain the analogous descent inequality for the scaled iteration (9); the energy values are therefore monotonically decreasing and bounded, and, similarly to the proof of Theorem 2, we obtain global convergence. The case of Condition 2). It is similar to the proof of Theorem 3 under its Condition 2).

From Theorems 3 and 4 we easily obtain the following two corollaries.

Corollary 2: If one of the following conditions is satisfied, then the sequence generated by (8) is globally convergent to a solution of (2).
1) The scaling matrix is diagonal, with entries determined by the squared norms of the column vectors of the constraint matrix of (2).
2) The constraint matrix of (2) has full rank.
Proof: The constraints and optimality conditions of (2) form a system of the form (1) whose coefficient matrices satisfy the rank conditions of Theorem 3; then by Theorem 3 we have the results of Corollary 2.

Corollary 3: If one of the following conditions is satisfied, then the sequence generated by (9) is globally convergent to a solution of (3).
1) The scaling matrix is diagonal, with entries determined by the squared norms of the row vectors of the constraint matrix of (3).
2) The constraint matrix of (3) has full rank.
Proof: Similar to the proof of Corollary 2.

Remark 3: In order to solve (1), we see from the conditions of Theorems 3 and 4 that if the individual coefficient matrices have full rank then we may use (8), and if the combined coefficient matrix has full rank then we may use (9). On the other hand, for computational simplicity of the scaling matrices, we should select (8) in one case and (9) in the other, depending on the relative dimensions of the constraints and variables.

Remark 4: In general, the two rank conditions need not coincide, but the full-rank case occurs often; a simple example gives coefficient matrices whose ranks satisfy the condition of one theorem but not the other.

Remark 5: The result of Corollary 2 under Condition 1) was given by Censor and Elfving [3], but our proof differs from theirs. Moreover, their method cannot prove the results of Theorems 2 through 4.

Remark 6: Although all the above-mentioned theorems assume that there exists a solution of (1), that is, that the system (1) is consistent, the proposed models can identify the inconsistent case. When the system is not consistent, the proposed models give a solution of (1) in a least-squares sense (a least-squares solution to (1) is any vector that minimizes the energy function [6]). This point will be illustrated via Example 2 in Section VI.

Remark 7: When the coefficient matrix of the system (1) is ill conditioned, the system of equations leads to stiff differential equations and thus affects the convergence rate. On the other hand, if the row-vector norms of the coefficient matrix are very large, the step size in model (9) becomes smaller and thus its convergence rate decreases. To alleviate the stiffness of the differential equations and simultaneously improve the convergence properties, one may use preconditioning techniques for the coefficient matrix [24] or design a linear transformation for the vector of variables.

V. DIGITAL REALIZATION

The proposed discrete-time neural networks are suitable for digital realization. In this section, we discuss the implementation issues. The discrete-time neural networks represented by (6), (8), and (9) can all be put in a generalized form (10); the particular weight matrices and vectors of (10) for (6), (8), and (9) are obtained by identifying the corresponding terms, as sketched below.
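The displayed definitions of the generalized form (10) are missing from this transcription. The sketch below gives one way to read it for the basic model (6): each iteration is an affine update x(k+1) = x(k) + W g(V x(k) + u), where g clips the inequality residuals at zero and passes the equality residuals through, and W, V, u are precomputed from the problem data. The names W, V, u, g and this particular identification are assumptions of the sketch, not the paper's exact (10).

```python
def precompute_generalized_form(A, b, C, d, h):
    """Assumed reading of the generalized iteration (10) for model (6):
        x(k+1) = x(k) + W @ g(V @ x(k) + u),
    with W = -h*[A; C]^T, V = [A; C], u = -[b; d], and g clipping only the
    first m components (the inequality residuals) at zero."""
    V = np.vstack([A, C])
    u = -np.concatenate([b, d])
    W = -h * V.T
    m = A.shape[0]

    def g(r):
        out = r.copy()
        out[:m] = np.maximum(out[:m], 0.0)   # limiter on inequality residuals
        return out

    return W, V, u, g

def generalized_step(x, W, V, u, g):
    """One cycle of the generalized form; on the systolic array of Fig. 3 the
    two matrix-vector products correspond to the two rounds of MAC work."""
    return x + W @ g(V @ x + u)
```

With these definitions, generalized_step reproduces the same update as discrete_time_step above; the scaled models (8) and (9) would differ only in the precomputed weight matrix.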

Fig. 3. Block diagram of the one-dimensional systolic array.

Note that the scaled models (8) and (9) differ from (6) only in their precomputable weights, so they fit the same generalized form. By adding appropriate zeros to the matrices and vectors in (10), the iteration can be augmented and rewritten as (11). In (11), the weight matrix and the constant vector can be precalculated, and the state vector converges as the number of iterations increases; the converged state includes the solution to problem (1).

For digital realization, (11) can be implemented by a one-dimensional systolic array consisting of processing elements (PEs), as shown in Fig. 3, where each PE is responsible for updating one element of the state vector and holds the elements of the corresponding row of the augmented weight matrix, rearranged with a fixed data structure. The functionality of an individual PE is shown in Fig. 4. At the beginning of an iteration, one element of the state vector and one weight element are stored in the registers R1 and R2 of each PE. Each element is then passed into the next PE's R1 and R2 through the multiplexer MUX3 while, at the same time, being processed by the current PE's multiplier-accumulator (MAC) units, MAC1 and MAC2. After one full round of MAC executions, the two partial results for the PE's element of the state update are produced by MAC1 and MAC2, respectively; one is stored in register R1 through multiplexer MUX1 and the other in register R2 through multiplexer MUX2. The systolic array then circulates the intermediate values around the ring. After another round of MAC executions, the remaining contribution to the PE's element is generated by MAC2 and stored in the PE's R2 register. Finally, the updated element is obtained by summing the contents of R1 and R2 and is stored back into R1 and R2 for the next iteration cycle. Since the operations in all PEs are strictly identical and concurrent, the systolic array requires two full rounds of MAC execution time to complete an iteration cycle based on (11). By comparison, a single processor with the same MAC-execution speed, computing directly from (6), (8), or (9) with all constant matrix and vector products precalculated, requires a number of MAC executions per iteration cycle that grows with the problem dimension.

VI. ILLUSTRATIVE EXAMPLES

In this section, we demonstrate the performance of the proposed neural networks for solving linear inequalities and equations using four numerical examples.

Example 1: Consider a small linear equality and inequality system. Using the discrete-time neural network in (6) with an admissible step parameter, the network converges to a solution of the system.

Fig. 4. Data flow diagram of a processing element.

Fig. 5. Transient behavior of the energy function in Example 1.

Fig. 5 depicts the convergence characteristics of the discrete-time recurrent neural network with three different values of the design parameter. It shows that the states of the neural network converge to the solution of the problem within ten iterations.

Example 2: Consider an inconsistent linear inequality system [19]. First, we use the continuous-time neural network in (5) to solve the inequality system; in this case, for any initial point, the neural network always converges globally. Next, we use the discrete-time neural network in (6) to solve the same inequalities; the neural-network solution is then a solution in a least-squares sense, with a nonzero residue vector. Fig. 6 shows the transient behavior of the recurrent neural network in this example.

Fig. 6. Transient behavior of the energy function in Example 2.

Example 3: Consider a system of linear equations with a nonnegativity constraint. Choosing the scaling as in Theorem 3, we use the discrete-time neural network in (8) to solve these inequalities and equations; starting from the given initial point, the neural network converges to a solution.
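Example 4 below formulates a linear program and its dual as one system of the form (1). Since the example's displayed data are not reproduced here, the following sketch shows the generic construction for an assumed standard form (primal min c^T x subject to A x >= b, x >= 0, and its dual); the form and all names are assumptions of this sketch, not the example's actual data.

```python
def lp_pair_as_system(A, b, c):
    """Hypothetical construction (assumed standard form, not Example 4's data):
        primal:  min c^T x  s.t.  A x >= b, x >= 0
        dual:    max b^T y  s.t.  A^T y <= c, y >= 0.
    A pair (x, y) is optimal iff the stacked vector z = (x, y) satisfies
        inequalities: -A x <= -b,  -x <= 0,  A^T y <= c,  -y <= 0
        equality:      c^T x - b^T y = 0   (zero duality gap),
    which is a system of the form (1)."""
    m, n = A.shape
    G = np.block([
        [-A,               np.zeros((m, m))],   # A x >= b   ->  -A x <= -b
        [-np.eye(n),       np.zeros((n, m))],   # x >= 0     ->  -x <= 0
        [np.zeros((n, n)), A.T             ],   # A^T y <= c
        [np.zeros((m, n)), -np.eye(m)      ],   # y >= 0     ->  -y <= 0
    ])
    g = np.concatenate([-b, np.zeros(n), c, np.zeros(m)])
    C_eq = np.concatenate([c, -b]).reshape(1, -1)   # c^T x - b^T y = 0
    d_eq = np.zeros(1)
    return G, g, C_eq, d_eq
```

A vector z solving this system could then be sought with the discrete-time iteration sketched in Section III; its first n entries approximate the primal solution and the remaining m entries the dual solution, and the residual of the gap equation plays the role of the duality gap monitored in Fig. 7.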

Example 4: Consider a linear program [16] and its dual program. The linear program and its dual can be formulated in the form of (1), and we use the discrete-time neural network in (6) to solve the linear program and its dual simultaneously. With an admissible step parameter and a given initial point, the neural network converges to the primal and dual optimal solutions. Since the duality gap is zero only at optimality, its actual value enables us to estimate directly the quality of the solution. Fig. 7 illustrates the values of the energy function and the squared duality gap over the iterations along the trajectory of the recurrent neural network in this example. It shows that the squared duality gap decreases in a zigzag fashion, while the energy function decreases monotonically.

Fig. 7. Transient behavior of the energy function and duality gap in Example 4.

VII. CONCLUDING REMARKS

Systems of linear inequalities and equations are very important in engineering design, planning, and optimization. In this paper we have proposed two types of globally convergent recurrent neural networks, a continuous-time one and a discrete-time one, for solving linear inequality and equality systems in real time. In addition to the basic models, we have discussed two scaled discrete-time neural networks in order to improve the convergence rate and ease the design task of selecting step-size parameters. For the proposed networks (continuous time and discrete time) we have given detailed implementation architectures which are composed of simple elements only, such as adders, limiters, and integrators or time delays. Furthermore, each of the networks has a number of neurons that increases only linearly with the problem size. Compared with the existing neural networks for solving linear inequalities and equations, the proposed ones have no need for setting a time-varying design parameter. The present neural networks can solve linear inequalities and/or equations and a linear program and its dual simultaneously, and thus extend a class of discrete-time simultaneous projection networks described in [22] in computational capability. Moreover, our proof of the global convergence differs from any other published. The proposed neural networks are also more straightforward to realize in hardware than the simultaneous projection networks. Further investigation has been aimed at the digital implementation and verification of the proposed discrete-time neural networks on field-programmable gate arrays (FPGAs).

APPENDIX

Lemma 1: For any pair of real numbers, inequality (12), a nonexpansiveness-type property of the positive-part operation, holds.

Proof: Consider the four cases determined by the signs of the two numbers.

In each of the four cases, (12) holds by direct verification.

From Lemma 1 we easily obtain the following result, similar to one in [25].

Lemma 2: For any pair of vectors, inequality (14) holds.

Proof: Apply Lemma 1 componentwise.

Furthermore, we can generalize to the following result.

Lemma 3: For any pair of vectors and any positive-definite diagonal scaling, inequality (15) holds.

Proof: From Lemmas 1 and 2, the conclusion follows.

REFERENCES

[1] D. Hertog, Interior Point Approach to Linear, Quadratic and Convex Programming: Algorithms and Complexity. Boston, MA: Kluwer.
[2] S. Agmon, "The relaxation method for linear inequalities," Canadian J. Math., vol. 6.
[3] Y. Censor and T. Elfving, "New method for linear inequalities," Linear Alg. Appl., vol. 42.
[4] A. R. De Pierro and A. N. Iusem, "A simultaneous projections method for linear inequalities," Linear Alg. Appl., vol. 64.
[5] Y. Censor, "Row action techniques for huge sparse systems and their applications," SIAM Rev., vol. 23.
[6] R. Bramley and B. Winnicka, "Solving linear inequalities in a least squares sense," SIAM J. Sci. Comput., vol. 17, no. 1.
[7] D. W. Tank and J. J. Hopfield, "Simple neural optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit," IEEE Trans. Circuits Syst., vol. 33.
[8] M. P. Kennedy and L. O. Chua, "Neural networks for nonlinear programming," IEEE Trans. Circuits Syst., vol. 35.
[9] A. Rodríguez-Vázquez, R. Domínguez-Castro, A. Rueda, J. L. Huertas, and E. Sánchez-Sinencio, "Nonlinear switched-capacitor neural networks for optimization problems," IEEE Trans. Circuits Syst., vol. 37.
[10] C. Y. Maa and M. A. Shanblatt, "Linear and quadratic programming neural network analysis," IEEE Trans. Neural Networks, vol. 3.
[11] J. Wang, "Analysis and design of a recurrent neural network for linear programming," IEEE Trans. Circuits Syst. I, vol. 40.
[12] J. Wang, "A deterministic annealing neural network for convex programming," Neural Networks, vol. 7, no. 4.
[13] Y. Xia and J. Wang, "Neural network for solving linear programming problems with bounded variables," IEEE Trans. Neural Networks, vol. 6.
[14] Y. Xia, "A new neural network for solving linear programming problems and its applications," IEEE Trans. Neural Networks, vol. 7.
[15] ——, "A new neural network for solving linear and quadratic programming problems," IEEE Trans. Neural Networks, vol. 7.
[16] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing. London, U.K.: Wiley.
[17] ——, "Neural networks for solving systems of linear equations and related problems," IEEE Trans. Circuits Syst., vol. 39.
[18] ——, "Neural networks for solving systems of linear equations, Part II: Minimax and least absolute value problems," IEEE Trans. Circuits Syst., vol. 39.
[19] J. Wang, "Electronic realization of recurrent neural network for solving simultaneous linear equations," Electron. Lett., vol. 28, no. 5.
[20] J. Wang and H. Li, "Solving simultaneous linear equations using recurrent neural networks," Inform. Sci., vol. 76.
[21] A. Cichocki and A. Bargiela, "Neural networks for solving linear inequalities," Parallel Comput., vol. 22.
[22] G. Labonte, "On solving systems of linear inequalities with artificial neural networks," IEEE Trans. Neural Networks, vol. 8.
[23] J. M. Ortega and W. G. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables. New York: Academic.
[24] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed. Baltimore, MD: The Johns Hopkins Press.
[25] X. Lu, "An approximate Newton method for linear programming," J. Numer. Comput. Appl., vol. 15, no. 1.
[26] T. H. Cormen, C. E. Leiserson, and R. L. Rivest, Introduction to Algorithms. New York: McGraw-Hill, 1990, ch. 2.
Youshen Xia received the B.S. and M.S. degrees in computational mathematics from Nanjing University, China. Since 1995, he has been an Associate Professor with the Department of Mathematics, Nanjing University of Posts and Telecommunications, China. He is now working towards the Ph.D. degree in the Department of Mechanical and Automation Engineering, Chinese University of Hong Kong, Shatin, NT, Hong Kong. His research interests include computational mathematics, neural networks, signal processing, and control theory.

Jun Wang (S'89-M'90-SM'93) received the B.S. degree in electrical engineering and the M.S. degree in systems engineering from Dalian Institute of Technology, Dalian, China, and the Ph.D. degree in systems engineering from Case Western Reserve University, Cleveland, OH. He is now an Associate Professor of Mechanical and Automation Engineering at the Chinese University of Hong Kong, Shatin, NT, Hong Kong. He was an Associate Professor at the University of North Dakota, Grand Forks. He has also held various positions at Dalian University of Technology, Case Western Reserve University, and Zagar, Incorporated. His current research interests include the theory and methodology of neural networks and their applications to decision systems, control systems, and manufacturing systems. He is the author or coauthor of more than 40 journal papers, several book chapters, two edited books, and numerous conference papers. Dr. Wang is an Associate Editor of the IEEE TRANSACTIONS ON NEURAL NETWORKS.

Donald L. Hung (M'90) received the B.S.E.E. degree from Tongji University, Shanghai, China, and the M.S. degree in systems engineering and the Ph.D. degree in electrical engineering from Case Western Reserve University, Cleveland, OH. From August 1990 to July 1995 he was an Assistant Professor and later an Associate Professor in the Department of Electrical Engineering, Gannon University, Erie, PA. Since August 1995 he has been on the faculty of the School of Electrical Engineering and Computer Science, Washington State University, Richland, WA. He is currently visiting the Department of Computer Science and Engineering, the Chinese University of Hong Kong, Shatin, NT, Hong Kong. His primary research interests are in application-driven algorithms and architectures, reconfigurable computing, and the design of high-performance digital/computing systems for applications in areas such as image/signal processing, pattern classification, real-time control, optimization, and computational intelligence. Dr. Hung is a member of Eta Kappa Nu.


More information

Last updated: Oct 22, 2012 LINEAR CLASSIFIERS. J. Elder CSE 4404/5327 Introduction to Machine Learning and Pattern Recognition

Last updated: Oct 22, 2012 LINEAR CLASSIFIERS. J. Elder CSE 4404/5327 Introduction to Machine Learning and Pattern Recognition Last updated: Oct 22, 2012 LINEAR CLASSIFIERS Problems 2 Please do Problem 8.3 in the textbook. We will discuss this in class. Classification: Problem Statement 3 In regression, we are modeling the relationship

More information

Riccati difference equations to non linear extended Kalman filter constraints

Riccati difference equations to non linear extended Kalman filter constraints International Journal of Scientific & Engineering Research Volume 3, Issue 12, December-2012 1 Riccati difference equations to non linear extended Kalman filter constraints Abstract Elizabeth.S 1 & Jothilakshmi.R

More information

Nonlinear Support Vector Machines through Iterative Majorization and I-Splines

Nonlinear Support Vector Machines through Iterative Majorization and I-Splines Nonlinear Support Vector Machines through Iterative Majorization and I-Splines P.J.F. Groenen G. Nalbantov J.C. Bioch July 9, 26 Econometric Institute Report EI 26-25 Abstract To minimize the primal support

More information

Proximal-like contraction methods for monotone variational inequalities in a unified framework

Proximal-like contraction methods for monotone variational inequalities in a unified framework Proximal-like contraction methods for monotone variational inequalities in a unified framework Bingsheng He 1 Li-Zhi Liao 2 Xiang Wang Department of Mathematics, Nanjing University, Nanjing, 210093, China

More information

Optimization methods

Optimization methods Lecture notes 3 February 8, 016 1 Introduction Optimization methods In these notes we provide an overview of a selection of optimization methods. We focus on methods which rely on first-order information,

More information

THE NUMERICAL EVALUATION OF THE MAXIMUM-LIKELIHOOD ESTIMATE OF A SUBSET OF MIXTURE PROPORTIONS*

THE NUMERICAL EVALUATION OF THE MAXIMUM-LIKELIHOOD ESTIMATE OF A SUBSET OF MIXTURE PROPORTIONS* SIAM J APPL MATH Vol 35, No 3, November 1978 1978 Society for Industrial and Applied Mathematics 0036-1399/78/3503-0002 $0100/0 THE NUMERICAL EVALUATION OF THE MAXIMUM-LIKELIHOOD ESTIMATE OF A SUBSET OF

More information

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach

More information

Research Article A Recurrent Neural Network for Nonlinear Fractional Programming

Research Article A Recurrent Neural Network for Nonlinear Fractional Programming Mathematical Problems in Engineering Volume 2012, Article ID 807656, 18 pages doi:101155/2012/807656 Research Article A Recurrent Neural Network for Nonlinear Fractional Programming Quan-Ju Zhang 1 and

More information

CONSTRAINED NONLINEAR PROGRAMMING

CONSTRAINED NONLINEAR PROGRAMMING 149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach

More information

Stable Adaptive Momentum for Rapid Online Learning in Nonlinear Systems

Stable Adaptive Momentum for Rapid Online Learning in Nonlinear Systems Stable Adaptive Momentum for Rapid Online Learning in Nonlinear Systems Thore Graepel and Nicol N. Schraudolph Institute of Computational Science ETH Zürich, Switzerland {graepel,schraudo}@inf.ethz.ch

More information

AN ALTERNATING MINIMIZATION ALGORITHM FOR NON-NEGATIVE MATRIX APPROXIMATION

AN ALTERNATING MINIMIZATION ALGORITHM FOR NON-NEGATIVE MATRIX APPROXIMATION AN ALTERNATING MINIMIZATION ALGORITHM FOR NON-NEGATIVE MATRIX APPROXIMATION JOEL A. TROPP Abstract. Matrix approximation problems with non-negativity constraints arise during the analysis of high-dimensional

More information

Solving TSP Using Lotka-Volterra Neural Networks without Self-Excitatory

Solving TSP Using Lotka-Volterra Neural Networks without Self-Excitatory Solving TSP Using Lotka-Volterra Neural Networks without Self-Excitatory Manli Li, Jiali Yu, Stones Lei Zhang, Hong Qu Computational Intelligence Laboratory, School of Computer Science and Engineering,

More information

Fixed-Order Robust H Filter Design for Markovian Jump Systems With Uncertain Switching Probabilities

Fixed-Order Robust H Filter Design for Markovian Jump Systems With Uncertain Switching Probabilities IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 4, APRIL 2006 1421 Fixed-Order Robust H Filter Design for Markovian Jump Systems With Uncertain Switching Probabilities Junlin Xiong and James Lam,

More information

Approximation Metrics for Discrete and Continuous Systems

Approximation Metrics for Discrete and Continuous Systems University of Pennsylvania ScholarlyCommons Departmental Papers (CIS) Department of Computer & Information Science May 2007 Approximation Metrics for Discrete Continuous Systems Antoine Girard University

More information

MEASUREMENTS that are telemetered to the control

MEASUREMENTS that are telemetered to the control 2006 IEEE TRANSACTIONS ON POWER SYSTEMS, VOL. 19, NO. 4, NOVEMBER 2004 Auto Tuning of Measurement Weights in WLS State Estimation Shan Zhong, Student Member, IEEE, and Ali Abur, Fellow, IEEE Abstract This

More information

Risk-Sensitive Control with HARA Utility

Risk-Sensitive Control with HARA Utility IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 46, NO. 4, APRIL 2001 563 Risk-Sensitive Control with HARA Utility Andrew E. B. Lim Xun Yu Zhou, Senior Member, IEEE Abstract In this paper, a control methodology

More information

Research Article Stabilization Analysis and Synthesis of Discrete-Time Descriptor Markov Jump Systems with Partially Unknown Transition Probabilities

Research Article Stabilization Analysis and Synthesis of Discrete-Time Descriptor Markov Jump Systems with Partially Unknown Transition Probabilities Research Journal of Applied Sciences, Engineering and Technology 7(4): 728-734, 214 DOI:1.1926/rjaset.7.39 ISSN: 24-7459; e-issn: 24-7467 214 Maxwell Scientific Publication Corp. Submitted: February 25,

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

NOTES ON FIRST-ORDER METHODS FOR MINIMIZING SMOOTH FUNCTIONS. 1. Introduction. We consider first-order methods for smooth, unconstrained

NOTES ON FIRST-ORDER METHODS FOR MINIMIZING SMOOTH FUNCTIONS. 1. Introduction. We consider first-order methods for smooth, unconstrained NOTES ON FIRST-ORDER METHODS FOR MINIMIZING SMOOTH FUNCTIONS 1. Introduction. We consider first-order methods for smooth, unconstrained optimization: (1.1) minimize f(x), x R n where f : R n R. We assume

More information

WE consider finite-state Markov decision processes

WE consider finite-state Markov decision processes IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 54, NO. 7, JULY 2009 1515 Convergence Results for Some Temporal Difference Methods Based on Least Squares Huizhen Yu and Dimitri P. Bertsekas Abstract We consider

More information