Inversion of Complex Valued Neural Networks Using Complex Back-propagation Algorithm


INTERNATIONAL JOURNAL OF MATHEMATICS AND COMPUTERS IN SIMULATION, Issue 1, Volume 3, 2009

Anita S. Gangal, P. K. Kalra, and D. S. Chauhan

Manuscript received November 29, 2008. Anita S. Gangal is with Uttar Pradesh Technical University, Lucknow, India (phone: 9-94504454; email: anita.setha@yahoo.co.in). P. K. Kalra is Professor and Head of the Electrical Engineering Department, Indian Institute of Technology, Kanpur, India (email: kalra@iitk.ac.in). D. S. Chauhan is Vice Chancellor of J P University of Information Technology, Waknaghat, Solan, India (email: pdschauhan@gmail.com).

Abstract: This paper presents the inversion of complex valued neural networks. Inversion means predicting the inputs for a given output. We have tried inversion of a complex valued neural network using the complex back-propagation algorithm. We use a split sigmoid activation function, both for training and for inversion of the neural network, to overcome the problem of singularities. Since inversion is a one-to-many mapping, for a given output there are a number of possible combinations of inputs; so, in order to get the inputs in the desired range, conditional constraints are applied to the inputs. Simulations on benchmark complex valued problems support the investigation.

Keywords: activation function, back propagation, complex valued neural network, inversion

I. INTRODUCTION

Complex valued neural networks are neural networks whose weights, threshold values, and input and output signals are all complex numbers. The complex valued neural network is extending its field both in theory and in applications. Signal processing, image processing, radar imaging, array antennas, and mapping the inverse kinematics of robots are typical areas where such requirements exist.

Neural network inversion seeks to find one or more input values that produce a desired output response. For inversion of real valued neural networks researchers have worked with many approaches. These inversion algorithms can be placed into three broad classes:

1) Exhaustive search
2) Multi-component evolutionary method
3) Single-element search method

In choosing among inversion techniques for real valued neural networks, exhaustive search should be considered when both the dimensionality of the input and the allowable range of each input variable are low. The simplicity of the approach, coupled with the swiftness with which a layered perceptron can be executed in the feedforward mode, makes this approach even more attractive as computational speed increases. The multi-component evolutionary method proposed by Reed and Marks [1], on the other hand, seeks to minimize the objective function using numerous search points, in turn producing numerous solutions. This method maintains a population of initial points in the search space, and new points are generated in the input space to replace existing points so as to explore all the solutions. The single-element search method for inversion of real valued neural networks was first introduced by Williams [2] and then by Kinderman and Linden [3], who used it to extract codebook vectors for digits. This method of inversion involves two main steps: first training the network, and second the inversion itself. During training, the neural network learns a mapping from input to output with the help of training data. The weights are the free parameters, and by finding a proper set that minimizes some error criterion, the neural network learns a functional relationship between the inputs and the outputs. All the weights are fixed after training. The network is then initialized with a random input vector; the output is calculated and compared with the given output, and the error is computed. This error is back propagated to minimize the error function, and the input vector is updated. This iterative process continues till the error is less than the minimum set value. Eberhart and Dobbins [4] applied it to invert a trained real valued neural network for the diagnosis of appendicitis.
Jordan and Rumelhart [5] proposed a method to invert feed forward real valued neural networks, and tried to solve the inverse kinematics problem for redundant manipulators. Their approach is a two-stage procedure: in the first stage a network is trained to approximate the forward mapping; in the second stage a particular inverse solution is obtained by connecting another network in series with the previously trained network and learning an identity mapping across the composite network. Behera, Gopal, and Chaudhary [6] used real valued neural network inversion in the control of multilink robot manipulators; they developed an inversion algorithm for radial basis function (RBF) neural networks based on an extended Kalman filter. Bo-Liang Lu, Kita, and Nishikawa [7] formulated the inversion problem as a nonlinear programming problem, a separable programming problem, or a linear programming problem, according to the architecture of the real valued network to be inverted.

II. INVERSION OF REAL VALUED NEURAL NETWORKS USING THE BACK-PROPAGATION ALGORITHM

Inversion is finding a set of input vectors for a given output which, when applied to the system, will produce that same output. The search is initialized with a random input vector x^0. The iterative inversion algorithm consists of two passes of computation: first the forward pass and second the backward pass. In the forward pass the output is calculated for the randomly initialized inputs using the trained network, and the error signal between the given output and the actual output is computed. In the backward pass, the error signal is back propagated through the network to the input layer, layer by layer, and the input is adjusted to decrease the output error. If $x_i^t$ is the $i$-th component of the input vector after $t$ iterations, then gradient descent suggests the recursion

$$x_i^{t+1} = x_i^t - \eta \frac{\partial E}{\partial x_i^t} \qquad (1)$$

where $\eta$ is the learning rate constant. The iteration for inversion can be solved through

$$\frac{\partial E}{\partial x_i} = \sum_j \delta_j w_{ji} \qquad (2)$$

where, for any neuron $j$,

$$\delta_j = \begin{cases} \varphi_j'(o_j)\,(o_j - d_j), & j \in O \\ \varphi_j'(o_j) \sum_{m \in H,O} \delta_m w_{mj}, & j \in I, H \end{cases} \qquad (3)$$

Here $I$, $H$, $O$ are the sets of input, hidden, and output neurons, $w_{mj}$ is the synaptic weight from neuron $m$ to neuron $j$, $\varphi_j'$ is the derivative of the $j$-th neuron's activation function, $o_j$ is the actual output of the $j$-th neuron, and $d_j$ is its desired output. The neuron derivative $\delta_j$ in (3) is solved in backward order, from output to input, similar to the standard back-propagation algorithm.
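As a concrete illustration of this single-element search, here is a minimal sketch (ours, not the authors' code) of gradient-descent inversion of a trained one-hidden-layer real valued network with sigmoid activations; the weights W1, b1, W2, b2 are assumed to come from an already trained model:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def invert(W1, b1, W2, b2, d, eta=0.1, tol=1e-4, max_iter=10000):
    """Find an input x whose network output approximates the target d.

    W1, b1, W2, b2 : weights/biases of an already-trained 2-layer MLP.
    The input is updated by back-propagating the output error to the
    input layer (Eqs. (1)-(3)) while all weights stay fixed.
    """
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=W1.shape[1])      # random initial input x^0
    for _ in range(max_iter):
        # forward pass through the fixed, trained network
        z = sigmoid(W1 @ x + b1)                  # hidden activations
        o = sigmoid(W2 @ z + b2)                  # network output
        e = o - d
        if 0.5 * np.sum(e**2) < tol:
            break
        # backward pass: delta terms of Eq. (3), then dE/dx of Eq. (2)
        delta_o = e * o * (1 - o)                 # output-layer deltas
        delta_h = (W2.T @ delta_o) * z * (1 - z)  # hidden-layer deltas
        grad_x = W1.T @ delta_h                   # dE/dx
        x -= eta * grad_x                         # Eq. (1): input update
    return x
```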

III. COMPLEX VALUED NEURAL NETWORKS

The complex plane is very different from the real line. The complex plane is two dimensional with respect to the real numbers and one dimensional with respect to the complex numbers. The ordering that exists on the real numbers is absent in the set of complex numbers: no two complex numbers can be compared as being larger or smaller with respect to each other, but their magnitudes, which are real values, can be compared. A complex number has a magnitude associated with it and a phase that locates it uniquely on the plane. Real valued algorithms therefore cannot simply be generalized to complex valued algorithms.

The complex version of the back-propagation (CVBP) algorithm made its first appearance when Widrow, McCool, and Ball [8] announced their complex least mean squares (LMS) algorithm. Kim and Guest [9] published a complex valued learning algorithm for signal processing applications. Georgiou and Koutsougeras [10] published another version of CVBP incorporating a different activation function, and showed that if real valued algorithms are simply carried over to the complex domain, singularities and other such unpleasant phenomena may arise. In the complex back-propagation algorithm suggested by Leung and Haykin [11], the nonlinear function maps the complex value without splitting it into its real and imaginary parts:

$$f(z) = \frac{1}{1 + e^{-z}}, \qquad z = x + iy \qquad (4)$$

The function f(z) is a holomorphic complex function. But according to Liouville's theorem, a bounded holomorphic function on the complex plane C is a constant, so the attempt to extend the sigmoid function to the complex plane is met with the difficulty of singularities in the output. To deal with this difficulty, A. Prashanth [12] suggested that the input data should be scaled to some region in the complex domain. Although the input data can be scaled, there is no limit on the values the complex weights can take, and hence this is difficult to implement. To overcome this problem, a split activation function is used, both for training and for inversion of the complex valued neural network (CVNN). An extensive study of CVBP was reported by Nitta [13]. The decision boundary of a single complex valued neuron consists of two hyper-surfaces which intersect orthogonally and divide the decision region into four equal sections. If the absolute values of both the real and imaginary parts of the net inputs to all hidden neurons are sufficiently large, then the decision boundaries for the real and imaginary parts of an output neuron in a three layered complex valued neural network intersect orthogonally. The average learning speed of the complex BP algorithm is faster than that of the real BP algorithm, and the standard deviation of its learning speed is smaller than that of real BP. Hence the complex valued neural network and the related algorithm are natural for learning complex valued patterns. The complex BP algorithm can be applied to multilayered neural networks whose weights, threshold values, inputs and outputs are all complex numbers.
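The singularities that Liouville's theorem forces on (4) are easy to exhibit numerically: f has poles wherever 1 + e^{-z} = 0, i.e. at z = i(2k+1)π. The snippet below (an illustration of ours, not from the paper) evaluates f near the first pole z = iπ and shows the magnitude blowing up, in contrast with the split function of the next section, which stays bounded:

```python
import numpy as np

def full_sigmoid(z):
    # Fully complex sigmoid of Eq. (4): holomorphic but unbounded.
    return 1.0 / (1.0 + np.exp(-z))

# Approach the pole of f at z = i*pi along the imaginary axis.
for eps in (1e-1, 1e-3, 1e-6):
    z = 1j * (np.pi - eps)
    print(f"|f(i(pi - {eps:g}))| = {abs(full_sigmoid(z)):.3e}")
# The magnitude grows without bound as eps -> 0, so a training or
# inversion trajectory that wanders near i(2k+1)*pi diverges.
```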
In the split activation function, the nonlinearity is applied separately to the real and imaginary parts of the aggregation at the input of the neuron:

$$\varphi_C(z) = \varphi_R(x) + i\,\varphi_R(y) \qquad (5)$$

where

$$\varphi_R(a) = \frac{1}{1 + e^{-a}} \qquad (6)$$

Here the sigmoid activation function is used separately for the real and imaginary parts. This arrangement ensures that the magnitudes of the real and imaginary parts of f(z) are bounded between 0 and 1. But now the function f(z) is no longer holomorphic, because the Cauchy-Riemann equation does not hold, i.e.

$$\frac{\partial f(z)}{\partial x} + i\,\frac{\partial f(z)}{\partial y} = f_R(x)\big(1 - f_R(x)\big) - f_R(y)\big(1 - f_R(y)\big) \neq 0 \qquad (7)$$
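A minimal sketch of the split sigmoid (5)-(6) in NumPy, together with the component-wise derivatives that the update rules below rely on (the function names are ours, not the paper's):

```python
import numpy as np

def split_sigmoid(z):
    """Split activation of Eqs. (5)-(6): the sigmoid is applied
    separately to the real and imaginary parts of the net input."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    return sig(np.real(z)) + 1j * sig(np.imag(z))

def split_sigmoid_parts_derivative(z):
    """Component-wise derivatives used by the update rules:
    d(Re f)/d(Re z) = Re f (1 - Re f), d(Im f)/d(Im z) = Im f (1 - Im f).
    Returned as a pair of real arrays, not a complex derivative:
    by Eq. (7) the split function is not holomorphic."""
    f = split_sigmoid(z)
    return f.real * (1 - f.real), f.imag * (1 - f.imag)
```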

So, effectively, holomorphy is compromised for boundedness of the activation function. We have tried the inversion of the three layered complex valued neural network shown in Fig. 1.

Fig. 1. A three layered complex valued neural network: input layer (x_1, ..., x_L), hidden layer, and output layer (o_1, ..., o_N), with weights w_ml from input to hidden and v_nm from hidden to output.

In this complex valued neural network:

L : number of input layer neurons
M : number of hidden layer neurons
N : number of output layer neurons
x_l : output value of input neuron l (the input)
z_m : output of hidden layer neuron m
o_n : output of the n-th output neuron (written y_n in the derivation below)
w_ml : weight between input layer neuron l and hidden layer neuron m
v_nm : weight between hidden layer neuron m and output layer neuron n
θ_m : threshold / bias of hidden layer neuron m
γ_n : threshold / bias of output layer neuron n

Training is done with a given set of input and output data to learn a functional relationship between input and output. The internal potential of hidden neuron m is

$$u_m = \sum_{l=1}^{L} w_{ml} x_l + \theta_m = \operatorname{Re} u_m + i \operatorname{Im} u_m \qquad (8)$$

The output of hidden neuron m is

$$z_m = \varphi_C(u_m) = \frac{1}{1 + e^{-\operatorname{Re} u_m}} + \frac{i}{1 + e^{-\operatorname{Im} u_m}} \qquad (9)$$

The internal potential of output neuron n is

$$s_n = \sum_{m=1}^{M} v_{nm} z_m + \gamma_n = \operatorname{Re} s_n + i \operatorname{Im} s_n \qquad (10)$$

The output of output neuron n is

$$o_n = \varphi_C(s_n) = \frac{1}{1 + e^{-\operatorname{Re} s_n}} + \frac{i}{1 + e^{-\operatorname{Im} s_n}} \qquad (11)$$

The error is

$$e_n = o_n - d_n \qquad (12)$$

and the sum squared error over the outputs is

$$E = \frac{1}{2} \sum_{n=1}^{N} |o_n - d_n|^2 \qquad (13)$$

For real-time applications the cost function of the network is given by

$$E = \frac{1}{2} \sum_{n=1}^{N} e_n \bar{e}_n = \frac{1}{2} \sum_{n=1}^{N} \left( \operatorname{Re}^2 e_n + \operatorname{Im}^2 e_n \right) \qquad (14)$$

where the bar denotes the complex conjugate. E is a real valued function, and we are required to derive the gradient of E with respect to both the real and imaginary parts of the complex weights:

$$\nabla_w E = \frac{\partial E}{\partial \operatorname{Re} w} + i\,\frac{\partial E}{\partial \operatorname{Im} w} \qquad (15)$$

Fig. 2. Weight update during training.

The training process of the neural network is shown in Fig. 2. During training, the network cost function E is minimized by recursively altering the weight coefficients based on the gradient descent algorithm:

$$w(n+1) = w(n) + \Delta w(n) = w(n) - \eta \nabla_w E \qquad (16)$$

where n is the number of iterations and η is the learning rate constant. Once the network is trained on the given training data, all the weights are fixed.

IV. INVERSION OF COMPLEX VALUED NEURAL NETWORKS

Once the network is trained, the weights are fixed. Inversion is the procedure that seeks to find the inputs which will produce the desired output. We have used the complex back-propagation algorithm for inversion. The input vector x^0 is initialized to some random value. The output of the trained network is calculated with this initialized input vector and is compared with the desired output. The error between the actual output and the desired output is calculated. This error is back propagated to minimize the error function, and the input vector is updated as shown in Fig. 3.
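Under the same conventions, the forward pass (8)-(11) and the cost (14) take only a few lines. This sketch assumes the split_sigmoid helper defined above; the class layout is our own choice rather than anything prescribed by the paper:

```python
import numpy as np

class SplitComplexMLP:
    """Three layered CVNN of Fig. 1: L inputs, M hidden neurons, N outputs,
    with complex weights w (M x L), v (N x M) and biases theta, gamma."""
    def __init__(self, L, M, N, seed=0):
        rng = np.random.default_rng(seed)
        cx = lambda *s: rng.normal(0, 0.5, s) + 1j * rng.normal(0, 0.5, s)
        self.w, self.theta = cx(M, L), cx(M)
        self.v, self.gamma = cx(N, M), cx(N)

    def forward(self, x):
        u = self.w @ x + self.theta   # internal potential, Eq. (8)
        z = split_sigmoid(u)          # hidden output, Eq. (9)
        s = self.v @ z + self.gamma   # internal potential, Eq. (10)
        o = split_sigmoid(s)          # network output, Eq. (11)
        return u, z, s, o

    def cost(self, x, d):
        e = self.forward(x)[3] - d                # error, Eq. (12)
        return 0.5 * np.sum(e * np.conj(e)).real  # cost, Eq. (14)
```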

Fig. 3. Input update for inversion.

This iterative process is continued till the error becomes less than the minimum defined error, according to the following equation:

$$x^{t+1} = x^t - \eta \frac{\partial E}{\partial x^t} \qquad (17)$$

The cost function E is a scalar quantity which is minimized by modifying the input:

$$\Delta x_l = -\eta \frac{\partial E}{\partial x_l} = -\eta \left( \frac{\partial E}{\partial \operatorname{Re} x_l} + i\,\frac{\partial E}{\partial \operatorname{Im} x_l} \right) \qquad (18)$$

By the chain rule through the hidden potentials,

$$\frac{\partial E}{\partial \operatorname{Re} x_l} = \sum_{m=1}^{M} \left( \frac{\partial E}{\partial \operatorname{Re} u_m} \frac{\partial \operatorname{Re} u_m}{\partial \operatorname{Re} x_l} + \frac{\partial E}{\partial \operatorname{Im} u_m} \frac{\partial \operatorname{Im} u_m}{\partial \operatorname{Re} x_l} \right), \qquad \frac{\partial E}{\partial \operatorname{Im} x_l} = \sum_{m=1}^{M} \left( \frac{\partial E}{\partial \operatorname{Re} u_m} \frac{\partial \operatorname{Re} u_m}{\partial \operatorname{Im} x_l} + \frac{\partial E}{\partial \operatorname{Im} u_m} \frac{\partial \operatorname{Im} u_m}{\partial \operatorname{Im} x_l} \right)$$

From (8), the internal potential of hidden neuron m can be expanded as

$$u_m = \sum_{l=1}^{L} \left( \operatorname{Re} w_{ml} \operatorname{Re} x_l - \operatorname{Im} w_{ml} \operatorname{Im} x_l \right) + i \sum_{l=1}^{L} \left( \operatorname{Re} w_{ml} \operatorname{Im} x_l + \operatorname{Im} w_{ml} \operatorname{Re} x_l \right) + \theta_m \qquad (19)$$

so that ∂Re u_m/∂Re x_l = Re w_ml, ∂Im u_m/∂Re x_l = Im w_ml, ∂Re u_m/∂Im x_l = −Im w_ml, and ∂Im u_m/∂Im x_l = Re w_ml. The input update at iteration p is

$$\Delta x_l(p) = -\eta \frac{\partial E(p)}{\partial x_l(p)}, \qquad x_l(p+1) = x_l(p) + \Delta x_l(p)$$

and substituting the derivatives of (19) into (18) gives

$$\Delta x_l = -\eta \sum_{m=1}^{M} \left( \frac{\partial E}{\partial \operatorname{Re} u_m} \operatorname{Re} w_{ml} + \frac{\partial E}{\partial \operatorname{Im} u_m} \operatorname{Im} w_{ml} \right) - i\,\eta \sum_{m=1}^{M} \left( -\frac{\partial E}{\partial \operatorname{Re} u_m} \operatorname{Im} w_{ml} + \frac{\partial E}{\partial \operatorname{Im} u_m} \operatorname{Re} w_{ml} \right) \qquad (20)$$

The partial derivative of the cost function with respect to Re u_m is

$$\frac{\partial E}{\partial \operatorname{Re} u_m} = \frac{\partial E}{\partial \operatorname{Re} z_m} \frac{\partial \operatorname{Re} z_m}{\partial \operatorname{Re} u_m} + \frac{\partial E}{\partial \operatorname{Im} z_m} \frac{\partial \operatorname{Im} z_m}{\partial \operatorname{Re} u_m} \qquad (21)$$

From (9) we get

$$\frac{\partial \operatorname{Re} z_m}{\partial \operatorname{Re} u_m} = \operatorname{Re} z_m \left( 1 - \operatorname{Re} z_m \right), \qquad \frac{\partial \operatorname{Im} z_m}{\partial \operatorname{Re} u_m} = 0$$

and the remaining factor expands over the output layer as

$$\frac{\partial E}{\partial \operatorname{Re} z_m} = \sum_{n=1}^{N} \left( \operatorname{Re} e_n \frac{\partial \operatorname{Re} e_n}{\partial \operatorname{Re} z_m} + \operatorname{Im} e_n \frac{\partial \operatorname{Im} e_n}{\partial \operatorname{Re} z_m} \right) \qquad (22)$$

From (10),

$$\operatorname{Re} s_n = \sum_{m=1}^{M} \left( \operatorname{Re} v_{nm} \operatorname{Re} z_m - \operatorname{Im} v_{nm} \operatorname{Im} z_m \right) + \operatorname{Re} \gamma_n, \qquad \operatorname{Im} s_n = \sum_{m=1}^{M} \left( \operatorname{Re} v_{nm} \operatorname{Im} z_m + \operatorname{Im} v_{nm} \operatorname{Re} z_m \right) + \operatorname{Im} \gamma_n$$

and from (11) and (12) we get

$$\frac{\partial \operatorname{Re} e_n}{\partial \operatorname{Re} z_m} = \operatorname{Re} y_n \left( 1 - \operatorname{Re} y_n \right) \operatorname{Re} v_{nm}, \qquad \frac{\partial \operatorname{Im} e_n}{\partial \operatorname{Re} z_m} = \operatorname{Im} y_n \left( 1 - \operatorname{Im} y_n \right) \operatorname{Im} v_{nm}$$

Substituting these values in (22), we get

$$\frac{\partial E}{\partial \operatorname{Re} z_m} = \sum_{n=1}^{N} \left[ \operatorname{Re} e_n \operatorname{Re} y_n \left( 1 - \operatorname{Re} y_n \right) \operatorname{Re} v_{nm} + \operatorname{Im} e_n \operatorname{Im} y_n \left( 1 - \operatorname{Im} y_n \right) \operatorname{Im} v_{nm} \right]$$

Hence, from (21),

$$\frac{\partial E}{\partial \operatorname{Re} u_m} = \operatorname{Re} z_m \left( 1 - \operatorname{Re} z_m \right) \sum_{n=1}^{N} \left[ \operatorname{Re} e_n \operatorname{Re} y_n \left( 1 - \operatorname{Re} y_n \right) \operatorname{Re} v_{nm} + \operatorname{Im} e_n \operatorname{Im} y_n \left( 1 - \operatorname{Im} y_n \right) \operatorname{Im} v_{nm} \right] \qquad (23)$$

Similarly, the partial derivative of the cost function with respect to Im u_m is

$$\frac{\partial E}{\partial \operatorname{Im} u_m} = \frac{\partial E}{\partial \operatorname{Re} z_m} \frac{\partial \operatorname{Re} z_m}{\partial \operatorname{Im} u_m} + \frac{\partial E}{\partial \operatorname{Im} z_m} \frac{\partial \operatorname{Im} z_m}{\partial \operatorname{Im} u_m} \qquad (24)$$

Once again from (9),

$$\frac{\partial \operatorname{Re} z_m}{\partial \operatorname{Im} u_m} = 0, \qquad \frac{\partial \operatorname{Im} z_m}{\partial \operatorname{Im} u_m} = \operatorname{Im} z_m \left( 1 - \operatorname{Im} z_m \right) \qquad (25)$$

From (11) and (12) we get

$$\frac{\partial \operatorname{Re} e_n}{\partial \operatorname{Im} z_m} = -\operatorname{Re} y_n \left( 1 - \operatorname{Re} y_n \right) \operatorname{Im} v_{nm} \qquad (26)$$

$$\frac{\partial \operatorname{Im} e_n}{\partial \operatorname{Im} z_m} = \operatorname{Im} y_n \left( 1 - \operatorname{Im} y_n \right) \operatorname{Re} v_{nm} \qquad (27)$$

Substituting the values from (26) and (27), we get

$$\frac{\partial E}{\partial \operatorname{Im} z_m} = \sum_{n=1}^{N} \left[ -\operatorname{Re} e_n \operatorname{Re} y_n \left( 1 - \operatorname{Re} y_n \right) \operatorname{Im} v_{nm} + \operatorname{Im} e_n \operatorname{Im} y_n \left( 1 - \operatorname{Im} y_n \right) \operatorname{Re} v_{nm} \right]$$

Therefore, from (24),

$$\frac{\partial E}{\partial \operatorname{Im} u_m} = \operatorname{Im} z_m \left( 1 - \operatorname{Im} z_m \right) \sum_{n=1}^{N} \left[ -\operatorname{Re} e_n \operatorname{Re} y_n \left( 1 - \operatorname{Re} y_n \right) \operatorname{Im} v_{nm} + \operatorname{Im} e_n \operatorname{Im} y_n \left( 1 - \operatorname{Im} y_n \right) \operatorname{Re} v_{nm} \right] \qquad (28)$$

Substituting the values of ∂E/∂Re u_m from (23) and ∂E/∂Im u_m from (28) into (20), we get

$$\Delta x_l = -\eta \sum_{m=1}^{M} \left( \operatorname{Re} w_{ml}\, A_m + \operatorname{Im} w_{ml}\, B_m \right) - i\,\eta \sum_{m=1}^{M} \left( -\operatorname{Im} w_{ml}\, A_m + \operatorname{Re} w_{ml}\, B_m \right) \qquad (29)$$

where $A_m = \partial E / \partial \operatorname{Re} u_m$ and $B_m = \partial E / \partial \operatorname{Im} u_m$ are given explicitly by (23) and (28). Δx_l is the input update; hence new inputs are calculated at each iteration by the relation

$$x_{\text{new}} = x_{\text{old}} + \Delta x \qquad (30)$$

With these new values of the inputs the outputs are calculated. The output is compared with the desired output and the error is calculated. When this error is less than the minimum set error value, the iterative process is stopped and the inversion is complete. The final value of the input vector x is the actual value of the input obtained by inversion of the complex valued neural network.

EXPERIMENT 1

We have taken a three layered neural network with 2 inputs, 5 hidden layer neurons, and one output neuron. First we trained the network on the input and output data of the complex valued XOR gate given in Table I. Once the network is created by training on the given data, the functional relationship between inputs and outputs is set. The complex valued target outputs for which we have done inversion are given in Table II. We predicted the inputs by inversion of the complex valued neural network. For this trained network the inputs are initialized to some random values, and the outputs are obtained for these random input values. These actual outputs are compared with the target outputs and the error is calculated. This error is back-propagated, and the new values of the inputs are calculated by updating the inputs using (29) and (30). With these new input values the outputs are once again calculated, compared with the target outputs, and the error is again calculated and back-propagated to correct the inputs to further new values. This process is repeated till the error is minimized and becomes less than the assumed minimum value of the error. Finally, with these predicted inputs, we found the actual outputs given in Table III. The actual outputs obtained from the predicted inputs are nearly the same as the target outputs.

Table I. Training data for Experiment 1: the complex valued XOR gate, with inputs x_1 = (a_1 + i b_1) and x_2 = (a_2 + i b_2) and the corresponding complex output.

Table II. Target outputs, desired inputs, and the corresponding actual inputs X_1, X_2 obtained by inversion.

Table III. Target outputs and the actual outputs calculated from the inputs obtained by inversion.

The main problem in inversion using the complex back-propagation algorithm is to find the inverse solution lying nearest to a specified point. For this we have used the nearest inversion approach, which is a single-element search method. Given a function f(·), a target output level t, and an initial base point x^0, we try to find the point x* that satisfies f(x*) = t and is closest to x^0 in some sense. Nearest inversion is a constrained optimization problem; this constrained problem is solved by minimizing ‖x − x^0‖ subject to f(x) = t.
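Equations (29) and (30) are the heart of the procedure used in these experiments. The loop below is a sketch of that input update built on the SplitComplexMLP class above (again our construction, not the authors' code), written so that the real and imaginary gradient paths of (23) and (28) stay visible:

```python
import numpy as np

def invert_cvnn(net, d, eta=0.1, tol=1e-5, max_iter=20000, seed=1):
    """Iteratively recover a complex input x that maps to target output d,
    with the trained weights of `net` held fixed (Section IV)."""
    rng = np.random.default_rng(seed)
    L = net.w.shape[1]
    x = rng.normal(0, 0.5, L) + 1j * rng.normal(0, 0.5, L)
    for _ in range(max_iter):
        u, z, s, y = net.forward(x)
        e = y - d                                # Eq. (12)
        if 0.5 * np.sum(np.abs(e)**2) < tol:     # Eq. (14)
            break
        gy_re = e.real * y.real * (1 - y.real)   # Re-path output factor
        gy_im = e.imag * y.imag * (1 - y.imag)   # Im-path output factor
        # Eqs. (23) and (28): dE/dRe(u_m) and dE/dIm(u_m)
        dE_re_u = z.real * (1 - z.real) * (net.v.real.T @ gy_re + net.v.imag.T @ gy_im)
        dE_im_u = z.imag * (1 - z.imag) * (-net.v.imag.T @ gy_re + net.v.real.T @ gy_im)
        # Eq. (29): propagate through the input weights w_ml
        dx = -eta * (net.w.real.T @ dE_re_u + net.w.imag.T @ dE_im_u) \
             - 1j * eta * (-net.w.imag.T @ dE_re_u + net.w.real.T @ dE_im_u)
        x = x + dx                               # Eq. (30)
    return x
```

Any range constraint on the inputs, as mentioned in the abstract, would be applied to x after each update inside this loop.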

EXPERIMENT 2

In this experiment we have tried the inversion of a similarity transformation. We have taken a three layered neural network with architecture (1-5-1). The complex input pattern is scaled down by 0.5; the scaling is in terms of magnitude only, the angle being preserved. The training input pattern consists of a set of complex values represented by star signs, and the corresponding output pattern data points are represented by diamond signs, as shown in Fig. 5. Once the network is created by training on the given data, the functional relationship between inputs and outputs is set. This trained model of the CVNN for the similarity transformation is used for inversion. The network is presented with the target output points, shown by diamond symbols arranged in the shape of a rectangle, as shown in Fig. 6. For this trained network the inputs are initialized to some random values, and the outputs are obtained for these random input values. These actual outputs are compared with the target outputs and the error is calculated. This error is back-propagated, and the new values of the inputs are calculated by updating the inputs using (29) and (30). This iterative process is continued till the error is minimized and becomes less than the assumed minimum value of the error. In Fig. 6 the desired inputs are indicated by stars, and the plus signs denote the actual inputs obtained from the inversion of the network. As seen in the figure, the inputs from inversion are very close to the expected inputs. Thus inversion of the complex valued neural network is done successfully.

Fig. 5. Similarity transformation: training input points (star signs) and training output points (diamond signs), plotted as imaginary part versus real part.

Fig. 6. Inversion results for the similarity transformation: target outputs shown by diamonds, expected inputs by stars, and actual inputs obtained from inversion by plus signs.

EXPERIMENT 3

In this experiment we have taken a (1-7-1) neural network. The network is trained on rotational transformation data in the counter-clockwise direction. The training input data points are represented by stars and the corresponding output data points by diamonds in Fig. 7. After training, the weights of the neural network are fixed. We have tried the inversion on some different values of outputs in the same range.

Fig. 7. Training data for the rotational transform in the complex plane: stars showing inputs and diamond symbols showing corresponding outputs.

For inversion, the target output points are shown in Fig. 8 by plus signs. These target data points are arranged in the shape of the English letter Z. The inputs are initialized with some random values, and then the inversion of this neural network is done using the complex back-propagation algorithm. The inputs obtained by inversion of the trained neural network are represented by diamond signs, and the expected inputs by star signs, as shown in Fig. 8. As is clear from the figure, the inputs obtained from inversion are nearly the same as the expected inputs. Hence inversion is done successfully for the rotational transformation.

Fig. 8. Rotational transform in the complex plane: target outputs shown by plus signs, desired inputs by star symbols, and inputs predicted by inversion by diamond symbols.
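For completeness, here is a hypothetical driver for an Experiment 3 style run, wiring the earlier sketches together. The rotation angle, sample counts, and the crude numerical-gradient trainer standing in for the analytic complex back-propagation of (16) are all our choices, not the paper's:

```python
import numpy as np

# Rotational-transform data: outputs are the inputs rotated 30 degrees
# counter-clockwise; the input range keeps all targets inside the (0,1)
# box that the split sigmoid can actually reach.
rng = np.random.default_rng(7)
inputs = rng.uniform(0.3, 0.8, 20) + 1j * rng.uniform(0.1, 0.5, 20)
targets = inputs * np.exp(1j * np.pi / 6)

net = SplitComplexMLP(L=1, M=7, N=1)   # the (1-7-1) network of Experiment 3

def train(net, xs, ds, eta=0.5, epochs=500, h=1e-6):
    """Crude stand-in trainer: central-difference gradient descent on the
    cost (14) over every weight; slow, and it may need more epochs, but it
    keeps the sketch self-contained. The paper instead uses analytic
    complex back-propagation, Eq. (16)."""
    params = [net.w, net.theta, net.v, net.gamma]
    cost = lambda: sum(net.cost(np.array([x]), np.array([d]))
                       for x, d in zip(xs, ds))
    for _ in range(epochs):
        for P in params:
            for idx in np.ndindex(P.shape):
                for step in (1.0, 1j):        # real part, then imaginary part
                    P[idx] += h * step
                    c_plus = cost()
                    P[idx] -= 2 * h * step
                    c_minus = cost()
                    P[idx] += h * step        # restore the weight
                    P[idx] -= eta * step * (c_plus - c_minus) / (2 * h)

train(net, inputs, targets)
# Invert: recover the input that should map to one chosen target output.
x_star = invert_cvnn(net, np.array([targets[0]]))
print("expected input:", inputs[0], "  input by inversion:", x_star[0])
```

The trained network is then inverted exactly as in the experiments: initialize a random input and iterate (29)-(30) until the cost falls below the tolerance.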

V. CONCLUSIONS

Inversion of complex valued neural networks is still a relatively little explored field, and there are many aspects which can be further studied. Other inversion algorithms from the real domain can be extended to the complex domain. In most research conducted on complex valued neural networks, the learning constant used is real valued; in principle a complex learning constant could be employed. In this approach we have used the complex quadratic error function for optimization; other real domain error functions extended to the complex domain can be applied for optimization during inversion.

REFERENCES

[1] R. D. Reed and R. J. Marks, II, "An evolutionary algorithm for function inversion and boundary marking," in Proc. IEEE Int. Conf. Evolutionary Computation (ICEC '95), Perth, Western Australia, pp. 794-797, 1995.
[2] T. J. Williams, "Inverting a connectionist network mapping by back-propagation of error," in Proc. 8th Annu. Conf. Cognitive Science Society, Hillsdale, NJ: Lawrence Erlbaum, pp. 859-865, 1986.
[3] J. Kinderman and A. Linden, "Inversion of neural networks by gradient descent," Parallel Computing, pp. 277-286, 1990.
[4] R. C. Eberhart and R. W. Dobbins, "Designing neural network explanation facilities using genetic algorithms," in Proc. Int. Joint Conf. Neural Networks, vol. II, Singapore, pp. 1758-1763, 1991.
[5] M. I. Jordan and D. E. Rumelhart, "Forward models: supervised learning with a distal teacher," Cognitive Science, vol. 16, pp. 307-354, 1992.
[6] L. Behera, M. Gopal, and S. Chaudhary, "On adaptive trajectory tracking of a robot manipulator using inversion of its neural emulator," IEEE Trans. Neural Networks, vol. 7, no. 6, Nov. 1996.
[7] Bo-Liang Lu, Hajime Kita, and Y. Nishikawa, "Inversion of feedforward neural networks by separable programming," in Proc. World Congr. Neural Networks (Portland), vol. 4, pp. 415-420, 1993.
[8] B. Widrow, J. McCool, and M. Ball, "The complex LMS algorithm," Proc. of the IEEE, April 1975.
[9] M. S. Kim and C. C. Guest, "Modification of back-propagation for complex-valued signal processing in frequency domain," in IJCNN Int. Joint Conf. Neural Networks, pp. III-27-III-31, June 1990.
[10] G. M. Georgiou and C. Koutsougeras, "Complex domain back-propagation," IEEE Trans. on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 5, May 1992.
[11] H. Leung and S. Haykin, "The complex back-propagation algorithm," IEEE Trans. on Signal Processing, vol. 39, no. 9, September 1991.
[12] A. Prashanth, "Investigation on complex variable based back-propagation algorithm and applications," Ph.D. thesis, IIT Kanpur, India, 2003.
[13] T. Nitta, "An extension of the back-propagation algorithm to complex numbers," Neural Networks, vol. 10, no. 8, 1997.

Anita S. Gangal received her B.Tech degree in Electronics Engineering from HBTI, Kanpur in 1992. She is pursuing her Ph.D. in Electronics Engineering from Uttar Pradesh Technical University, India. She has worked as a lecturer in the Electronics Engineering Department at HBTI, Kanpur, India and at C.S.J.M. University, Kanpur, India. She is a member of IETE, India. Her major fields of interest are neural networks, computational neuroscience, and power electronics.

P. K. Kalra received his B.Sc (Engg) degree from DEI Agra, India in 1978, M.Tech degree from the Indian Institute of Technology, Kanpur, India in 1982, and Ph.D. degree from Manitoba University, Canada in 1987. He worked as assistant professor in the Department of Electrical Engineering, Montana State University, Bozeman, MT, USA from January 1987 to June 1988. In July-August 1988 he was visiting assistant professor in the Department of Electrical Engineering, University of Washington, Seattle, WA, USA. Since September 1988 he has been with the Department of Electrical Engineering, Indian Institute of Technology Kanpur, India, where he is Professor and Head of Department. Dr. Kalra is a member of IEEE, fellow of IETE, and life member of IE(I), India. He has published over 50 papers in reputed national and international journals and conferences. His research interests are expert systems applications, fuzzy logic, neural networks, and power systems.

D. S. Chauhan received his B.Sc (Engg) degree from BHU Varanasi, India in 1972, M.E. degree from Madras University, India in 1978, and Ph.D. degree from the Indian Institute of Technology Delhi, India in 1986. He is former Vice Chancellor of Uttar Pradesh Technical University, India, and is Vice Chancellor of L.P. University, Jalandhar, India. Dr. Chauhan is a fellow of IE(I), member of IEEE, USA, and member of the National Power Working Group, India. He has published over 70 papers in reputed national and international journals and conferences. His research interests are linear controls, power systems analysis, artificial intelligence, fuzzy systems, HVDC transmission, and neural networks.
6 L. Behera,. Gopal, and S. Chaudhary, On adapve raecory racng of a robo manpulaor usng nverson of s neural emulaor, IEEE Tans. eural ewors, vol. 7, no. 6, ov. 996. 7 Bo-Lang Lu, Hame Ka and Y. shawa, Inverson of feedforward neural newors by separable programmng, n Proc. World Congr. eural newors (Porland), vol. 4, 993, pp 45-420. 8 B.Wdrow, J. ccool, and. Ball, The Complex LS algorhm, Proc. of he IEEE, Aprl, 975. 9.S. Km, and C.C. Gues, 990, odfcaon of bac-propagaon for complex- valued sgnal processng n frequency doman, IJC In.Jon Conf. eural ewors, pp. III-27-III-3,June. 0 G.. Georgou and C..Kousougeras, Complex doman bacpropagaon, IEEETrans. On Crcus and Sysems II: Analog and Dgal Sgnal Processng, Vol.39, o. 5., ay992. H. Leung and S. Hayns, The complex bac-propagaon algorhm, IEEE Trans. On sgnal Processng, Vol. 39,o.9, Sepember99. 2 A. Prashanh, Invesgaon on complex varable based bac-propagaon algorhm and applcaons, Ph.D. hess, IIT, Kanpur, Inda, 2003. 3 a, An exenson of he bac-propagaon algorhm o complex numbers, neural newors, Vol. 0, o. 8, 997. Ana S. Gangal She receved B.Tech Degree n Elecroncs Engneerng from HBTI, Kanpur n 992. She s pursung her Ph. D. n Elecroncs Engneerng from Uar Pradesh Techncal Unversy, Inda. She had wored Issue, Volume 3, 2009 8