Mathematical Problems in Engineering
Volume 2012, Article ID 807656, 18 pages
doi:10.1155/2012/807656

Research Article

A Recurrent Neural Network for Nonlinear Fractional Programming

Quan-Ju Zhang (1) and Xiao-Qing Lu (2)

(1) Management Department, City College of Dongguan University of Technology, Dongguan 523106, China
(2) Financial Department, City College of Dongguan University of Technology, Dongguan 523106, China

Correspondence should be addressed to Quan-Ju Zhang, zhangquanju@sina.com

Received 13 April 2012; Accepted 7 July 2012

Academic Editor: Hai L. Liu

Copyright © 2012 Q.-J. Zhang and X. Q. Lu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents a novel recurrent continuous-time neural network model which performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized with interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and will converge to an exact optimal solution for any initial point chosen in the feasible interval region. Simulation results are given to further demonstrate the global convergence and the good performance of the proposed neural network for nonlinear fractional programming problems with interval constraints.

1 Introduction

Compared with the well-known applications of nonlinear programming to various branches of human activity, especially to economics, the applications of fractional programming have so far been less known. Of course, the linearity of a problem makes it easier to tackle and hence contributes to its wide recognition. However, it is certain that not all real-life economic problems can be described by linear models, and such problems are therefore not candidates for linear programming. Fractional programming is a nonlinear programming method that has gained increasing exposure recently, and its importance in solving concrete problems is steadily growing. Moreover, it is known that nonlinear optimization models describe practical problems much better than linear optimization models do. Fractional programming problems are particularly useful in the solution of economic problems in which various activities use certain resources in various proportions, while the objective is to optimize a certain indicator, usually the most favorable return-on-allocation ratio, subject to the constraint imposed on the availability of goods. Detailed descriptions of these models can be found in Charnes et al. [1], Patkar [2], and Mjelde [3]. Besides the economic applications, it was found that fractional programming problems also appear in other domains, such as physics, information theory, and game theory. Nonlinear fractional programming problems are, of course, the dominant ones because of their much wider applications; see Stancu-Minasian [4] for details.

As is known, conventional algorithms are time consuming when solving optimization problems with large-scale variables, so new parallel and distributed algorithms are more competent in such settings. Artificial recurrent neural networks (RNNs), governed by a system of differential equations, can be implemented physically by designated hardware with integrated circuits, and an optimization process with different specific purposes can be conducted in a truly parallel way. An overview and paradigm descriptions of various neural network models for tackling a great variety of optimization problems can be found in the book by Cichocki and Unbehauen [5]. Unlike most numerical algorithms, the neural network approach can handle, as described in Hopfield's seminal work [6, 7], the optimization process in real time, online, and hence is a top choice.

Neural network models for optimization problems have been investigated intensively since the pioneering work of Wang et al.; see [8-14]. Wang et al. proposed several different neural network models for solving convex programming problems [8] and linear programming problems [9, 10], which were proved to be globally convergent to the problems' exact solutions. Kennedy and Chua [11] developed a neural network model for solving nonlinear programming problems in which a penalty parameter needs to be tuned during the optimization process, so only approximate solutions are generated. Xia and Wang [12] gave a general neural network design methodology which brings many gradient-based network models for solving convex programming problems together under one framework with globally convergent stability. Neural networks for quadratic optimization and for nonlinear optimization with interval constraints were developed by Bouzerdoum and Pattison [13] and by Liang and Wang [14], respectively. All these neural networks can be classified into the following three types: (1) the gradient-based models [8-10] and their extension [12]; (2) the penalty-function-based model [11]; (3) the projection-based models [13, 14]. Among them, the first was proved to have global convergence [12] and the third quasi-convergence [8-10], in both cases only when the optimization problems are convex programming problems. The second could only be demonstrated to have local convergence [11] and, more unfortunately, it might fail to find exact solutions; see [15] for a numerical example. Because of this, the penalty-function-based model has little application in practice.

As is known, nonlinear fractional programming does not belong to the class of convex optimization problems [4], and how to construct a neural network model with good performance for this optimization problem has since become a challenge. Motivated by this, a promising recurrent continuous-time neural network model is proposed in the present paper. The proposed RNN model has the following two most important features. (1) The model is complete in the sense that the set of optima of the nonlinear fractional programming problem with interval constraints coincides with the set of equilibria of the proposed RNN model. (2) The RNN model is invariant with respect to the problem's feasible set and has the global convergence property in the sense that all trajectories of the proposed network converge to the exact solution set for any initial point starting in the feasible interval region. These two properties demonstrate that the proposed network model is quite suitable for solving nonlinear fractional programming problems with interval constraints.

The remainder of the paper is organized as follows. Section 2 formulates the optimization problem, and Section 3 describes the construction of the proposed RNN model. The complete property and the global convergence of the proposed model are discussed in Sections 4 and 5, respectively. Section 6 gives some typical application areas of fractional programming. Illustrative examples with computational results are reported in Section 7 to further demonstrate the good performance of the proposed RNN model in solving interval-constrained nonlinear fractional programming problems. Finally, Section 8 is a concluding remark which presents a summary of the main results of the paper.

2 Problem Formulation

The study of nonlinear fractional programming with interval constraints is motivated by the study of the following linear fractional interval programming problem:

\[
\min\left\{ f(x) = \frac{c^{T}x + c_{0}}{d^{T}x + d_{0}} : a \le x \le b \right\},
\tag{2.1}
\]

where:
(i) f(x) = (c^T x + c_0)/(d^T x + d_0);
(ii) c, d are n-dimensional column vectors;
(iii) c_0, d_0 are scalars;
(iv) the superscript T denotes the transpose operator;
(v) x = (x_1, x_2, ..., x_n)^T ∈ R^n is the decision vector;
(vi) a = (a_1, a_2, ..., a_n)^T ∈ R^n and b = (b_1, b_2, ..., b_n)^T ∈ R^n are constant vectors with a_i ≤ b_i, i = 1, 2, ..., n.

It is assumed that the denominator of the objective function f(x) maintains a constant sign on an open set O which contains the interval-constraint set W = {x : a ≤ x ≤ b}, say positive, that is,

\[
d^{T}x + d_{0} > 0, \quad \forall x \in O \supseteq W,
\tag{2.2}
\]

and that the function f(x) does not reduce to a linear function, that is, d^T x + d_0 is not constant on O ⊇ W and c, d are linearly independent.

If x* ∈ W and f(x*) ≤ f(x) for any x ∈ W, then x* is called an optimal solution to problem (2.1). The set of all solutions to problem (2.1) is denoted by Ω*, that is, Ω* = {x* ∈ W : f(x*) ≤ f(x), ∀x ∈ W}.

Studies on linear fractional interval programming, with the constraints in problem (2.1) replaced by a ≤ Ax ≤ b, commenced in a series of papers by Charnes et al. [16-18]. Charnes and Cooper [16] employed the change-of-variable method and developed a solution algorithm for this programming problem via the duality theory of linear programming. A little later, in [17, 18], Charnes et al. gave a different method which transformed the fractional interval problem into an equivalent problem like (2.1) by using the generalized inverse of A, and explicit solutions then followed. Also, Bühler [19] transformed the problem into another equivalent one of the same format, to which he associated a linear parametric program used to obtain a solution of the original interval programming problem. Accordingly, constraints of the form a ≤ Ax ≤ b can always be transformed into a ≤ x ≤ b by the change-of-variable method without changing the programming format; see [13] for quadratic programming and [17] for linear fractional interval programming. So it is necessary to pay attention to problem (2.1) only.

As the existing studies on problem (2.1), see [16-19], focused on classical methods which are time consuming in computational terms, the neural network method is surely the top choice to meet real-time computation requirements. To reach this goal, the present paper constructs an RNN model that is applicable both to nonlinear fractional interval programming and to the linear fractional interval programming problem (2.1) as well.

Consider the following more general nonlinear fractional programming problem:

\[
\min\left\{ F(x) = \frac{g(x)}{h(x)} : a \le x \le b \right\},
\tag{2.3}
\]

where g(x) and h(x) are continuously differentiable functions defined on an open convex set O ⊆ R^n which contains the problem's feasible set W = {x : a ≤ x ≤ b}, and x, a, b are the same as in problem (2.1); see items (v)-(vi) above. Similarly, we suppose that the objective function's denominator h(x) always keeps a constant sign, say h(x) > 0. As most fractional programming problems arising in the real world possess some kind of generalized convexity property, we suppose the objective function F(x) to be pseudoconvex over O. There are several sufficient conditions for the function F(x) = g(x)/h(x) to be pseudoconvex, two of which, see [20], are: (1) g is convex and g ≥ 0, while h is concave and h > 0; (2) g is convex and g ≤ 0, while h is convex and h > 0. It is easy to see that problem (2.1) is a special case of problem (2.3). We are going to state the neural network model which can be employed for solving problem (2.3), and hence for problem (2.1) as well. Details are described in the coming section.

3 The Neural Network Model

Consider the following single-layered recurrent neural network whose state variable x is described by the differential equation

\[
\frac{dx}{dt} = -x + f_{W}\big(x - \nabla F(x)\big),
\tag{3.1}
\]

where ∇ is the gradient operator and f_W : R^n → W is the projection operator defined by

\[
f_{W}(x) = \arg\min_{w \in W} \|x - w\|.
\tag{3.2}
\]
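The projection (3.2) has a simple closed form for interval constraints, as the next section shows. Before that, here is a small numerical sanity check (our own illustration, not part of the original article) that componentwise clipping indeed solves the arg-min in (3.2) for a box-shaped W; the bounds a, b and the test point are arbitrary placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Box feasible set W = {x : a <= x <= b}; placeholder bounds for illustration.
a = np.array([0.0, 0.0])
b = np.array([2.0, 2.0])

def project_numeric(x):
    """Solve f_W(x) = argmin_{w in W} ||x - w|| directly as a
    bound-constrained minimization, cf. (3.2)."""
    res = minimize(lambda w: np.sum((x - w) ** 2), x0=(a + b) / 2,
                   bounds=list(zip(a, b)))
    return res.x

x = np.array([-1.5, 3.0])              # a point outside W
print(project_numeric(x))              # approximately [0. 2.]
print(np.clip(x, a, b))                # clipping gives the same point: [0. 2.]
```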

[Figure 1: The activation function f_i(x_i) of the neural network model (3.1): a piecewise-linear ramp in x_i with saturation levels a_i and b_i.]

For the interval-constrained feasible set W, the operator f_W can be expressed explicitly as f_W(x) = (f_{W_1}(x_1), ..., f_{W_n}(x_n))^T, whose ith component is

\[
f_{W_i}(x_i) =
\begin{cases}
a_i, & x_i < a_i, \\
x_i, & a_i \le x_i \le b_i, \\
b_i, & x_i > b_i.
\end{cases}
\tag{3.3}
\]

The activation function at each node of the neural network model (3.1) is the typical piecewise-linear function f_{W_i}(x_i) illustrated in Figure 1. To give a clear description of the proposed neural network model, we reformulate the compact matrix form (3.1) in the following component form:

\[
\frac{dx_i}{dt} = -x_i + f_{W_i}\Big(x_i - \frac{\partial F(x)}{\partial x_i}\Big), \quad i = 1, 2, \ldots, n.
\tag{3.4}
\]

When the RNN model (3.1) is employed to solve the optimization problem (2.3), the initial state is required to be mapped into the feasible interval region W. That is, for any x^0 = (x_1^0, x_2^0, ..., x_n^0)^T ∈ R^n, the corresponding initial point of the neural trajectory should be chosen as x(0) = f_W(x^0) or, in component form, x_i(0) = f_{W_i}(x_i^0). The functional block diagram of the RNN model (3.4) is depicted in Figure 2. Accordingly, the architecture of the proposed neural network model (3.4) is composed of n integrators, n processors for ∇F(x), 2n piecewise-linear activation functions, and 2n summers.
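Concretely, the right-hand side of (3.4) is only a clipping composed with a gradient step. The following minimal sketch (again our own illustration, not the authors' code) wires it up for an arbitrary differentiable objective; the objective in the usage example is a placeholder, not one of the paper's test problems.

```python
import numpy as np

def make_rnn_rhs(grad_F, a, b):
    """Right-hand side of the projection network (3.4):
    dx/dt = -x + f_W(x - grad F(x)), with f_W the componentwise clipping (3.3)."""
    def rhs(t, x):
        return -x + np.clip(x - grad_F(x), a, b)
    return rhs

# Usage with a placeholder fractional objective F(x) = x1 / x2 on [1,2] x [1,2].
def grad_F(x):
    return np.array([1.0 / x[1], -x[0] / x[1] ** 2])

rhs = make_rnn_rhs(grad_F, a=np.array([1.0, 1.0]), b=np.array([2.0, 2.0]))
print(rhs(0.0, np.array([1.5, 1.5])))   # instantaneous derivative of the state
```

Any standard ODE solver can then integrate rhs, mirroring the ode23-based simulations reported in Section 7.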

[Figure 2: Functional block diagram of the neural network model (3.1).]

Let the equilibrium set of the RNN model (3.1) be Ω_e, defined by the equation

\[
\Omega_e = \big\{ x^e \in R^n : x^e = f_W\big(x^e - \nabla F(x^e)\big) \big\} \subseteq W.
\tag{3.5}
\]

The relationship between the minimizer set Ω* of problem (2.3) and the equilibrium set Ω_e is explored in the following section. It is guaranteed there that the two sets coincide exactly, which is the most desirable situation in neural network model design.

4 Complete Property

As proposed for binary-valued neural network models in [21], a neural network is said to be regular or normal if the set of minimizers of an energy function is, respectively, a subset or a superset of the set of stable states of the neural network. If the two sets are the same, the neural network is said to be complete. The regular and normal properties imply, respectively, the neural network's reliability and effectiveness for the optimization process. The complete property means both reliability and effectiveness, and it is the top choice in neural network design. Here, for the continuous-time RNN model (3.1), we say the model is regular, normal, or complete if, respectively, Ω* ⊆ Ω_e, Ω_e ⊆ Ω*, or Ω* = Ω_e holds. The complete property of the neural network (3.1) is stated in the following theorem.

Theorem 4.1. The RNN model (3.1) is complete, that is, Ω* = Ω_e.

In order to prove Theorem 4.1, one needs the following lemmas.

Lemma 4.2. Suppose that x* is a solution to problem (2.3), that is,

\[
F(x^*) = \min_{y \in W} F(y);
\tag{4.1}
\]

then x* is a solution to the variational inequality

\[
x^* \in W : \quad \big( \nabla F(x^*),\, y - x^* \big) \ge 0 \quad \text{for } y \in W.
\tag{4.2}
\]

Proof. See [22, Proposition 5.1].

Lemma 4.3. The function f(x) = (c^T x + c_0)/(d^T x + d_0) defined in (2.1) is both pseudoconvex and pseudoconcave over W.

Proof. See [20, Lemma 11.4.1].

Lemma 4.4. Let F(x) : R^n → R be a differentiable pseudoconvex function on an open set Y ⊆ R^n, and let W ⊆ Y be any given nonempty convex set. Then x* is an optimal solution to the problem of minimizing F(x) subject to x ∈ W if and only if (x − x*)^T ∇F(x*) ≥ 0 for all x ∈ W.

Proof. See [4, Theorem 2.3.1(b)].

Now we turn to the proof of Theorem 4.1. Let x* = (x_1*, x_2*, ..., x_n*)^T ∈ Ω*; then F(x*) ≤ F(x) for any x ∈ W, and hence Lemma 4.2 implies that x* solves (4.2), that is,

\[
x^* \in W : \quad (y - x^*)^T \nabla F(x^*) \ge 0, \quad \forall y \in W,
\tag{4.3}
\]

which is equivalent, see [23], to

\[
x^* = f_W\big(x^* - \nabla F(x^*)\big),
\tag{4.4}
\]

so x* ∈ Ω_e. Thus Ω* ⊆ Ω_e.

Conversely, let x^e = (x_1^e, x_2^e, ..., x_n^e)^T ∈ Ω_e, that is,

\[
x^e = f_W\big(x^e - \nabla F(x^e)\big),
\tag{4.5}
\]

which, again by [23], means

\[
x^e \in W : \quad (y - x^e)^T \nabla F(x^e) \ge 0, \quad \forall y \in W.
\tag{4.6}
\]

Since the function F(x) is pseudoconvex over W, see Lemma 4.3, it follows from Lemma 4.4 that

\[
F(x) \ge F(x^e), \quad \forall x \in W,
\tag{4.7}
\]

so x^e ∈ Ω*. Thus Ω_e ⊆ Ω*. Therefore, the two inclusions Ω* ⊆ Ω_e and Ω_e ⊆ Ω* together prove the claim of Theorem 4.1, Ω* = Ω_e.
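The fixed-point characterization (4.4)/(3.5) is easy to probe numerically. Below is a small sketch (our own toy problem, not one from the paper) that iterates x ← f_W(x − ∇F(x)) for a pseudoconvex fractional objective satisfying sufficient condition (1) of Section 2, and checks that the limit is an equilibrium in the sense of (3.5).

```python
import numpy as np

# Toy pseudoconvex fractional objective (an assumption for illustration only):
# F(x) = (x1^2 + x2^2 + 1) / (x1 + x2 + 5) on W = [1,2] x [1,2].
# g = x1^2 + x2^2 + 1 is convex and nonnegative; h = x1 + x2 + 5 is affine
# (hence concave) and positive on W, so F is pseudoconvex by condition (1).
a, b = np.array([1.0, 1.0]), np.array([2.0, 2.0])

def grad_F(x):
    g = x[0] ** 2 + x[1] ** 2 + 1.0
    h = x[0] + x[1] + 5.0
    return (2.0 * x * h - g) / h ** 2       # quotient rule, componentwise

x = np.array([2.0, 1.5])                    # arbitrary feasible start
for _ in range(200):                        # fixed-point iteration on (4.4)
    x = np.clip(x - grad_F(x), a, b)

print(x)                                                # -> [1. 1.]
print(np.allclose(x, np.clip(x - grad_F(x), a, b)))     # True: x is in Omega_e
```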

5 Stability Analysis

First, it can be shown that the RNN model (3.1) has a solution trajectory which is global in the sense that its existence interval can be extended to +∞ on the right for any initial point in W. The continuity of the right-hand side of (3.1) implies, by Peano's local existence theorem, see [24], that there exists a solution x(t; x^0) for t ∈ [0, t_max) for any initial point x^0 ∈ W, where t_max is the maximal right endpoint of the existence interval. The following lemma states that this t_max is +∞.

Lemma 5.1. The solution x(t; x^0) of the RNN model (3.1) with any initial point x(0; x^0) = x^0 ∈ W is bounded, and so it can be extended to +∞.

Proof. It is easy to check that the solution x(t) = x(t; x^0) for t ∈ [0, t_max) with initial condition x(0; x^0) = x^0 is given by

\[
x(t) = e^{-t}x^0 + e^{-t}\int_0^t e^{s} f_W\big(x(s) - \nabla F(x(s))\big)\, ds.
\tag{5.1}
\]

Obviously, the mapping f_W is bounded, that is, \|f_W\| ≤ K for some positive number K > 0, where \|·\| is the Euclidean 2-norm. It follows from (5.1) that

\[
\|x(t)\| \le e^{-t}\|x^0\| + e^{-t} K \int_0^t e^{s}\, ds = e^{-t}\|x^0\| + K\big(1 - e^{-t}\big) \le \max\big\{\|x^0\|, K\big\}.
\tag{5.2}
\]

Thus the solution x(t) is bounded, and so, by the extension theorem for ODEs, see [24], it can be concluded that t_max = +∞, which completes the proof of this lemma.

Now we are going to show another vital dynamical property, namely that the set W is positively invariant with respect to the RNN model (3.1). That is, any solution x(t) starting from a point in W, that is, with x^0 ∈ W, will stay in W for all elapsed time t. Additionally, we can also prove that any solution starting from outside of W will either enter the set W in finite time and then stay in it forever, or approach it eventually.

Theorem 5.2. For the neural dynamical system (3.1), the following two dynamical properties hold:

(a) W is a positively invariant set of the RNN model (3.1);

(b) if x^0 ∉ W, then either x(t) enters W in finite time and then stays in it forever, or ρ(t) = dist(x(t), W) → 0 as t → +∞, where dist(x(t), W) = inf{\|x(t) − y\| : y ∈ W}.

Proof. The method used to prove this theorem can be found in [14]; for the purpose of completeness and readability, we give the whole proof here.

Suppose that, for i = 1, ..., n, W_i = {x_i ∈ R : a_i ≤ x_i ≤ b_i} and x_i^0 = x_i(0; x^0) ∈ W_i. We first prove that, for all i = 1, 2, ..., n, the ith component x_i(t) = x_i(t; x^0) belongs to W_i, that is, x_i(t) ∈ W_i for all t ≥ 0. Let

\[
t_i = \sup\big\{\, \bar{t} : x_i(t) \in W_i,\ \forall t \in [0, \bar{t}\,] \,\big\}.
\tag{5.3}
\]

We show by contradiction that t_i = +∞. Suppose t_i < +∞; then x_i(t) ∈ W_i for t ∈ [0, t_i], and x_i(t) ∉ W_i for t ∈ (t_i, t_i + δ), where δ is some positive number. With no loss of generality, we assume that

\[
x_i(t) < a_i, \quad t \in (t_i, t_i + \delta).
\tag{5.4}
\]

The proof for x_i(t) > b_i, t ∈ (t_i, t_i + δ), is similar. By the definition of f_{W_i}, the RNN model (3.4), and the assumption (5.4), it follows that

\[
\frac{dx_i(t)}{dt} \ge -x_i(t) + a_i > 0, \quad t \in (t_i, t_i + \delta).
\tag{5.5}
\]

So x_i(t) is strictly increasing on (t_i, t_i + δ), and hence

\[
x_i(t) > x_i(t_i), \quad t \in (t_i, t_i + \delta).
\tag{5.6}
\]

Noting that x_i(t) ∈ W_i for t ∈ [0, t_i] and that assumption (5.4) implies x_i(t_i) = a_i, we get from (5.6) that

\[
x_i(t) > a_i, \quad t \in (t_i, t_i + \delta).
\tag{5.7}
\]

This contradicts the assumption (5.4). So t_i = +∞, that is, x_i(t) ∈ W_i for all t ≥ 0. This means W is positively invariant, and hence (a) is guaranteed.

Second, for some i, suppose x_i^0 = x_i(0; x^0) ∉ W_i. If there is a t_i > 0 such that x_i(t_i) ∈ W_i, then, according to (a), x_i(t) will stay in W_i for all t ≥ t_i; that is, x_i(t) will have entered W_i. Conversely, suppose x_i(t) ∉ W_i for all t ≥ 0. Without loss of generality, we assume that x_i(t) < a_i. It can be shown by contradiction that sup{x_i(t) : t ≥ 0} = a_i. If it were not so, then, noting that x_i(t) < a_i, we would have sup{x_i(t) : t ≥ 0} = m < a_i. It would follow from (3.4) that

\[
\frac{dx_i(t)}{dt} \ge a_i - m \triangleq \delta_0 > 0.
\tag{5.8}
\]

Integrating (5.8) gives

\[
x_i(t) \ge \delta_0 t + x_i^0, \quad t > 0,
\tag{5.9}
\]

which is a contradiction because x_i(t) < a_i. Thus we obtain sup{x_i(t) : t ≥ 0} = a_i; moreover, since (3.4) gives dx_i(t)/dt ≥ −x_i(t) + a_i > 0 while x_i(t) < a_i, the component x_i(t) increases monotonically to this supremum. This and the previous argument show that, for x^0 ∉ W, either x(t) enters W in finite time and then stays in it forever, or ρ(t) = dist(x(t), W) → 0 as t → +∞.
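Theorem 5.2 is also easy to observe numerically. The sketch below (our illustration; the objective is the same toy fractional function used in the Section 4 snippet, not one of the paper's examples) integrates (3.4) by forward Euler from an initial point outside W and prints the distance to W, which decays to zero as property (b) predicts.

```python
import numpy as np

# Forward-Euler integration of dx/dt = -x + f_W(x - grad F(x)), started
# outside W = [1,2] x [1,2], to illustrate Theorem 5.2(b).
a, b = np.array([1.0, 1.0]), np.array([2.0, 2.0])

def grad_F(x):                       # same toy objective as the Section 4 sketch
    g = x[0] ** 2 + x[1] ** 2 + 1.0
    h = x[0] + x[1] + 5.0
    return (2.0 * x * h - g) / h ** 2

def dist_to_W(x):
    return np.linalg.norm(x - np.clip(x, a, b))

x, dt = np.array([4.0, -2.0]), 0.01  # initial point outside W
for step in range(2001):
    if step % 500 == 0:
        print(f"t = {step * dt:5.2f}   x = {x}   dist(x, W) = {dist_to_W(x):.4f}")
    x = x + dt * (-x + np.clip(x - grad_F(x), a, b))
```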

We can now explore the global convergence of the neural network model (3.1). To proceed, we need an inequality result about the projection operator f_W and a definition of convergence for a neural network.

Definition 5.3. Let x(t) be a solution of the system ẋ = F(x). The system is said to be globally convergent to a set X with respect to a set W if every solution x(t) starting in W satisfies

\[
\rho\big(x(t), X\big) \to 0, \quad \text{as } t \to +\infty,
\tag{5.10}
\]

where ρ(x(t), X) = inf_{y ∈ X} \|x(t) − y\| and x(0) = x^0 ∈ W.

Definition 5.4. The neural network (3.1) is said to be globally convergent to a set X with respect to a set W if the corresponding dynamical system is so.

Lemma 5.5. For all v ∈ R^n and all u ∈ W,

\[
\big(v - f_W(v)\big)^T \big(f_W(v) - u\big) \ge 0.
\tag{5.11}
\]

Proof. See [22, pp. 9-10].

Theorem 5.6. The neural network (3.1) is globally convergent to the solution set Ω* with respect to the set W.

Proof. From Lemma 5.5, we know that

\[
\big(v - f_W(v)\big)^T \big(f_W(v) - u\big) \ge 0, \quad \forall v \in R^n,\ \forall u \in W.
\tag{5.12}
\]

Let v = x − ∇F(x) and u = x; then

\[
\big(x - \nabla F(x) - f_W(x - \nabla F(x))\big)^T \big(f_W(x - \nabla F(x)) - x\big) \ge 0,
\tag{5.13}
\]

that is,

\[
-\nabla F(x)^T \big\{ f_W(x - \nabla F(x)) - x \big\} \ge \big\| f_W(x - \nabla F(x)) - x \big\|^2.
\tag{5.14}
\]

Take F(x) itself as an energy function; then differentiating it along the solution x(t) of (3.1) gives

\[
\frac{dF(x(t))}{dt} = \nabla F(x)^T \frac{dx}{dt} = \nabla F(x)^T \big\{ f_W(x - \nabla F(x)) - x \big\}.
\tag{5.15}
\]

According to (5.14), it follows that

\[
\frac{dF(x(t))}{dt} \le -\big\| f_W(x - \nabla F(x)) - x \big\|^2 \le 0.
\tag{5.16}
\]

This means that the energy F(x) is decreasing along any trajectory of (3.1). By Lemma 5.1, we know the solution x(t) is bounded. So F(x) is a Lyapunov function for system (3.1). Therefore, by LaSalle's invariance principle [25], all trajectories of (3.1) starting in W converge to the largest invariant subset Σ of the set

\[
E = \Big\{ x : \frac{dF}{dt} = 0 \Big\}.
\tag{5.17}
\]

However, it is guaranteed by (5.16) that dF/dt = 0 only if f_W(x − ∇F(x)) − x = 0, which means that x must be an equilibrium of (3.1), that is, x ∈ Ω_e. Thus Ω_e is the convergent set for all trajectories of the neural network (3.1) starting in W. Noting that Theorem 4.1 tells us that Ω_e = Ω*, Theorem 5.6 is proved.

Up to now, we have demonstrated that the proposed neural network (3.1) is a promising model, both in the sense of implementable construction and in the sense of theoretical convergence, for solving nonlinear fractional programming problems and linear fractional programming problems with bound constraints. Certainly, it is also important to test the network's effectiveness by numerical experiment to assess its performance in practice. In Section 7 we will focus our attention on illustrative examples to reach this goal.

6 Typical Application Problems

This section is devoted to some typical problems, from various branches of human activity, especially economics and engineering, that can be formulated as fractional programs. We choose three problems, from information theory, optical processing of information, and macroeconomic planning, to illustrate the various applications of fractional programming.

6.1 Information Theory

For calculating the maximum transmission rate of an information channel, Meister and Oettli [26] and Aggarwal and Sharma [27] employed fractional programming, described briefly as follows. Consider a constant, discrete transmission channel with m input symbols and n output symbols, characterized by a transition matrix P = (p_{ij}), i = 1, ..., m, with p_{ij} ≥ 0 and Σ_i p_{ij} = 1, where p_{ij} represents the probability of getting symbol i at the output subject to the constraint that the input symbol was j. The probability distribution of the inputs is denoted by x = (x_j), where, obviously, x_j ≥ 0 and Σ_j x_j = 1. Define the transmission rate of the channel as

\[
T(x) = \frac{\sum_i \sum_j x_j p_{ij} \log\big( p_{ij} / \sum_k x_k p_{ik} \big)}{\sum_j t_j x_j}.
\tag{6.1}
\]

The relative capacity of the channel is defined as the maximum of T(x), which yields the following fractional programming problem:

\[
G = \max_{x \in X} T(x), \qquad X = \Big\{ x : x_j \ge 0,\ \sum_j x_j = 1 \Big\}.
\tag{6.2}
\]

With the notations

\[
c_j = \sum_i p_{ij}\log p_{ij}, \quad c = (c_j), \quad y_i = \sum_j p_{ij}x_j, \quad y = (y_i), \quad z = (x_j, y_i),
\tag{6.3}
\]

problem (6.2) becomes

\[
\max\ T(z) = \frac{c^T x - y^T \log y}{t^T x}, \qquad x_j \ge 0, \quad \sum_j x_j = 1, \quad y_i = \sum_j p_{ij}x_j.
\tag{6.4}
\]

6.2 Optical Processing of Information

Fractional programming can also be applied in some physics problems. In spectral filters for the quadratic-law detection of infrared radiation, the problem of maximizing the signal-to-noise ratio appears. This means maximizing the filter function

\[
\varphi(x) = \frac{(a^T x)^2}{x^T B x + \beta}
\tag{6.5}
\]

on the domain S = {x ∈ R^n : 0 ≤ x_i ≤ 1, i = 1, ..., n}, in which a is a strictly positive vector, β is a strictly positive constant, and B is a symmetric, positive definite matrix; a^T x represents the input signal, and x^T B x + β represents the variance of the background signal. The domain of feasible solutions S reflects the fact that the filter can transmit neither more than 100% nor less than 0% of the total energy. Optical filtering problems are very important in today's information technology, especially in coherent-light applications, and optically based computers have already been built.

6.3 Macroeconomic Planning

One of the most significant applications of fractional programming is the dynamic modeling of macroeconomic planning using the input-output method. Let Y(t) be the national income created in year t; obviously, Y(t) = Σ_i Y_i(t). If we denote by C_{ik}(t) the consumption, in branch k, of goods of type i that were created in branch i, and by I_{ik} the part of the national income created in branch i and allocated to investment in branch k, then the following repartition equation applies to the national income created in branch i:

\[
Y_i(t) = \sum_k \big( C_{ik}(t) + I_{ik}(t) \big).
\tag{6.6}
\]

The increase of the national income in branch k is a function of the investment made in this branch:

\[
\Delta Y_k(t) = Y_k(t+1) - Y_k(t) = b_k I_k + b_{k0},
\tag{6.7}
\]

where I_k = Σ_i I_{ik}. Under these conditions, macroeconomic planning leads to maximizing the rate of increase of the national income,

\[
\frac{\Delta Y}{Y} = \frac{\sum_k \big( b_k I_k(t) + b_{k0} \big)}{\sum_k \sum_i \big( C_{ik}(t) + I_{ik}(t) \big)},
\tag{6.8}
\]

subject to the constraints C_k(t) ≥ max{C̄_k, C_k(0)}, where C_k(t) = Σ_i C_{ik}(t), and 0 ≤ I_k(t) ≤ I_k^{max}, where C̄_k represents the minimum consumption attributed to branch k and I_k^{max} is the maximum level of investment for branch k.

7 Illustrative Examples

We give some computational examples as simulation experiments to show the proposed network's good performance.

Example 7.1. Consider the following linear fractional programming problem:

\[
\min\ F(x) = \frac{x_1 - x_2 + 1}{2x_1 + x_2 + 3}, \qquad \text{s.t.}\ 0 \le x_1 \le 2, \quad 0 \le x_2 \le 2.
\tag{7.1}
\]

This problem has an exact solution x* = (0, 0)^T with the optimal value F(x*) = 1/3. The gradient of F(x) can be expressed as

\[
\nabla F(x) = \begin{pmatrix} \dfrac{3x_2 + 1}{(2x_1 + x_2 + 3)^2} \\[2ex] -\dfrac{3x_1 + 4}{(2x_1 + x_2 + 3)^2} \end{pmatrix}.
\tag{7.2}
\]

Define

\[
z_i = x_i - \frac{\partial F(x)}{\partial x_i}, \quad i = 1, 2,
\tag{7.3}
\]

and, paying attention to (7.2), we get

\[
z_1 = \frac{4x_1^3 + 4x_1^2 x_2 + x_1 x_2^2 + 6x_1 x_2 + 12x_1^2 + 9x_1 - 3x_2 - 1}{(2x_1 + x_2 + 3)^2}, \qquad
z_2 = \frac{4x_1^2 x_2 + 4x_1 x_2^2 + x_2^3 + 12x_1 x_2 + 6x_2^2 + 9x_2 + 3x_1 + 4}{(2x_1 + x_2 + 3)^2}.
\tag{7.4}
\]

The dynamical system is given by

\[
\frac{dx_1}{dt} = -x_1 +
\begin{cases}
0, & z_1 < 0, \\
z_1, & 0 \le z_1 \le 2, \\
2, & z_1 > 2,
\end{cases}
\qquad
\frac{dx_2}{dt} = -x_2 +
\begin{cases}
0, & z_2 < 0, \\
z_2, & 0 \le z_2 \le 2, \\
2, & z_2 > 2.
\end{cases}
\tag{7.5}
\]

The various combinations of (7.5) constitute the proposed neural network model (3.1) for this problem. Conducted on MATLAB 7.0 with the ODE23 solver, the simulations were carried out, and the transient behaviors of the neural trajectories (x_1, x_2) starting at x^0 = (0.4, 1)^T, which is in the feasible region W, are shown in Figure 3. It can be seen clearly from the figure that the proposed neural network converges to the exact solution very quickly.

[Figure 3: Transient behaviors of the neural trajectories x_1, x_2 from the inside of W, plotted against t over [0, 15].]

Also, according to (b) of Theorem 5.2, the solution may be searched for from outside of the feasible region. Figure 4 shows this by presenting how the solution of this problem is located by the proposed neural trajectories from the initial point x^0 = (0.5, 3)^T, which is not in W.
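For readers without MATLAB, the experiment is easy to approximate. The sketch below (our own reconstruction, not the authors' code) integrates the network for Example 7.1 with SciPy's solve_ivp in place of MATLAB's ode23, writing (7.5) compactly through the clipping form of f_W and the gradient (7.2), and prints the state each trajectory settles at.

```python
import numpy as np
from scipy.integrate import solve_ivp

# RNN dynamics (3.4) for Example 7.1: f_W is componentwise clipping onto
# W = [0,2]^2 and the gradient is taken from (7.2).
a, b = np.array([0.0, 0.0]), np.array([2.0, 2.0])

def grad_F(x):
    den = (2.0 * x[0] + x[1] + 3.0) ** 2
    return np.array([(3.0 * x[1] + 1.0) / den, -(3.0 * x[0] + 4.0) / den])

def rhs(t, x):
    return -x + np.clip(x - grad_F(x), a, b)

# One start inside W (cf. Figure 3) and one outside W (cf. Figure 4).
for x0 in (np.array([0.4, 1.0]), np.array([0.5, 3.0])):
    sol = solve_ivp(rhs, t_span=(0.0, 35.0), y0=x0, rtol=1e-8, atol=1e-10)
    print(f"x(0) = {x0}  ->  x(35) = {np.round(sol.y[:, -1], 4)}")
```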

[Figure 4: Transient behaviors of the neural trajectories x_1, x_2 from the outside of W, plotted against t over [0, 35].]

Example 7.2. Consider the following nonlinear fractional programming problem:

\[
\min\ F(x) = \frac{x_1}{15 - x_1^2 - x_2^2}, \qquad \text{s.t.}\ 1 \le x_1 \le 2, \quad 1 \le x_2 \le 2.
\tag{7.6}
\]

This problem has an exact solution x* = (1, 1)^T with the optimal value F(x*) = 1/13. The gradient of F(x) can be expressed as

\[
\nabla F(x) = \begin{pmatrix} \dfrac{15 + x_1^2 - x_2^2}{\big(15 - x_1^2 - x_2^2\big)^2} \\[2ex] \dfrac{2x_1 x_2}{\big(15 - x_1^2 - x_2^2\big)^2} \end{pmatrix}.
\tag{7.7}
\]

Define

\[
z_i = x_i - \frac{\partial F(x)}{\partial x_i}, \quad i = 1, 2,
\tag{7.8}
\]

and, paying attention to (7.7), we get

\[
z_1 = \frac{x_1^5 + x_1 x_2^4 + 2x_1^3 x_2^2 - 30x_1^3 - 30x_1 x_2^2 - x_1^2 + x_2^2 + 225x_1 - 15}{\big(15 - x_1^2 - x_2^2\big)^2}, \qquad
z_2 = \frac{x_1^4 x_2 + x_2^5 + 2x_1^2 x_2^3 - 30x_1^2 x_2 - 30x_2^3 + 225x_2 - 2x_1 x_2}{\big(15 - x_1^2 - x_2^2\big)^2}.
\tag{7.9}
\]

[Figure 5: Transient behaviors of the neural trajectories x_1, x_2 from the inside of W, plotted against t over [0, 12].]

The dynamical system is given by

\[
\frac{dx_1}{dt} = -x_1 +
\begin{cases}
1, & z_1 < 1, \\
z_1, & 1 \le z_1 \le 2, \\
2, & z_1 > 2,
\end{cases}
\qquad
\frac{dx_2}{dt} = -x_2 +
\begin{cases}
1, & z_2 < 1, \\
z_2, & 1 \le z_2 \le 2, \\
2, & z_2 > 2.
\end{cases}
\tag{7.10}
\]

Similarly, conducted on MATLAB 7.0 with the ODE23 solver, the transient behaviors of the neural trajectories (x_1, x_2) starting at x^0 = (2, 3)^T are depicted in Figure 5, which shows the rapid convergence of the proposed neural network. Figure 6 presents a trajectory from outside of W, here with x^0 = (4, 3)^T; it can be seen clearly that the solution of this problem is soon located by the proposed neural trajectory.

8 Conclusions

In this paper, we have proposed a neural network model for solving nonlinear fractional programming problems with interval constraints. The network is governed by a system of differential equations with a projection method. The stability of the proposed neural network has been demonstrated as global convergence with respect to the problem's feasible set. As is known, the existing neural network models that use the penalty-function method for solving nonlinear programming problems may fail to find the exact solutions of the problems. The new model overcomes this stability defect appearing in all penalty-function-based models.

[Figure 6: Transient behaviors of the neural trajectories x_1, x_2 from the outside of W, plotted against t over [0, 15].]

Certainly, the network presented here performs well in the sense of real-time computation and, in terms of elapsed time, is also superior to the classical algorithms. Finally, the numerical simulation results further demonstrate that the new model acts both effectively and reliably for the purpose of locating the involved problems' solutions.

Acknowledgment

The research is supported by Grant 2011108102056 from the Science and Technology Bureau of Dongguan, Guangdong, China.

References

[1] A. Charnes, W. W. Cooper, and E. Rhodes, "Measuring the efficiency of decision making units," European Journal of Operational Research, vol. 2, no. 6, pp. 429-444, 1978.
[2] V. N. Patkar, "Fractional programming models for sharing of urban development responsibilities," Nagarlok, vol. 22, no. 1, pp. 88-94, 1990.
[3] K. M. Mjelde, "Fractional resource allocation with S-shaped return functions," The Journal of the Operational Research Society, vol. 34, no. 7, pp. 627-632, 1983.
[4] I. M. Stancu-Minasian, Fractional Programming: Theory, Methods and Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1992.
[5] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, John Wiley & Sons, New York, NY, USA, 1993.
[6] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 3088-3092, 1984.
[7] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141-152, 1985.
[8] J. Wang, "A deterministic annealing neural network for convex programming," Neural Networks, vol. 7, no. 4, pp. 629-641, 1994.
[9] J. Wang and V. Chankong, "Recurrent neural networks for linear programming: analysis and design principles," Computers and Operations Research, vol. 19, no. 3-4, pp. 297-311, 1992.

[10] J. Wang, "Analysis and design of a recurrent neural network for linear programming," IEEE Transactions on Circuits and Systems I, vol. 40, no. 9, pp. 613-618, 1993.
[11] M. P. Kennedy and L. O. Chua, "Neural networks for nonlinear programming," IEEE Transactions on Circuits and Systems, vol. 35, no. 5, pp. 554-562, 1988.
[12] Y. S. Xia and J. Wang, "A general methodology for designing globally convergent optimization neural networks," IEEE Transactions on Neural Networks, vol. 9, no. 6, pp. 1331-1343, 1998.
[13] A. Bouzerdoum and T. R. Pattison, "Neural network for quadratic optimization with bound constraints," IEEE Transactions on Neural Networks, vol. 4, no. 2, pp. 293-304, 1993.
[14] X. B. Liang and J. Wang, "A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints," IEEE Transactions on Neural Networks, vol. 11, no. 6, pp. 1251-1262, 2000.
[15] Y. S. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems I, vol. 49, no. 4, pp. 447-458, 2002.
[16] A. Charnes and W. W. Cooper, "An explicit general solution in linear fractional programming," Naval Research Logistics Quarterly, vol. 20, pp. 449-467, 1973.
[17] A. Charnes, D. Granot, and F. Granot, "A note on explicit solution in linear fractional programming," Naval Research Logistics Quarterly, vol. 23, no. 1, pp. 161-167, 1976.
[18] A. Charnes, D. Granot, and F. Granot, "An algorithm for solving general fractional interval programming problems," Naval Research Logistics Quarterly, vol. 23, no. 1, pp. 67-84, 1976.
[19] W. Bühler, "A note on fractional interval programming," Zeitschrift für Operations Research, vol. 19, no. 1, pp. 29-36, 1975.
[20] M. S. Bazaraa and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, John Wiley & Sons, New York, NY, USA, 1979.
[21] Z. B. Xu, G. Q. Hu, and C. P. Kwong, "Asymmetric Hopfield-type networks: theory and applications," Neural Networks, vol. 9, no. 3, pp. 483-501, 1996.
[22] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Academic Press, New York, NY, USA, 1980.
[23] B. C. Eaves, "On the basic theorem of complementarity," Mathematical Programming, vol. 1, no. 1, pp. 68-75, 1971.
[24] J. K. Hale, Ordinary Differential Equations, John Wiley & Sons, New York, NY, USA, 1993.
[25] J. P. LaSalle, "Stability theory for ordinary differential equations," Journal of Differential Equations, vol. 4, pp. 57-65, 1968.
[26] B. Meister and W. Oettli, "On the capacity of a discrete, constant channel," Information and Control, vol. 11, no. 3, pp. 341-351, 1967.
[27] S. P. Aggarwal and I. C. Sharma, "Maximization of the transmission rate of a discrete, constant channel," Mathematical Methods of Operations Research, vol. 14, no. 1, pp. 152-155, 1970.
