Long-term Forecasting of Electrical Load using Gustafson-Kessel clustering algorithm on Takagi-Sugeno type MISO Neuro-Fuzzy network


By: Felix Pasila, Electrical Engineering Department, Petra Christian University, Surabaya, felix@petra.ac.id

Keywords: Long-term forecasting, GK clustering, TS-type MISO NF network, accelerated LMA.

1. Introduction

1.1. Problem definition: Neuro-Fuzzy approach in electrical load forecasting

Modeling and identification of electrical load processes are essential for the operation and planning of a utility, whether for a company or for a country. Electrical load forecasting is needed because important decisions have to be made on committing power generators, load switching, purchasing strategy and infrastructure development. Furthermore, load forecasts are extremely important for energy suppliers, transmission, distribution and markets. In other words, load forecasts play a fundamental role in the formulation of economic, reliable and secure operating strategies for the power system.

Like other time-series prediction problems, load forecasting deals with sequential data. In general, load forecasts are divided into two broad categories: short-term forecasting (STF), which can usually be defined as the capability of the network to forecast the next several days up to some weeks, and long-term forecasting (LTF), which deals with forecasting further into the future. For example, if only several weeks of data are available for training, how can a Neuro-Fuzzy network be built to forecast what happens in the next several weeks, the next month or the next year, and for how long can the network still be trusted? Another important issue in LTF is the annual peak demand of distribution substations and feeders. The annual peak load is the most important value for area planning, since peak load most strongly impacts capacity requirements (Feinberg, 2003). In addition, both categories have unique characteristics: STF and LTF are distinguished by the sampling interval and the lead time of the forecast taken from the time-series data, and choosing the sampling interval and lead time will influence the forecasting performance.

In the past, many researchers worked on the STF scheme, and some of them reported very good training and forecasting performance (Palit, Computational Intelligence, 2005, p.57). Motivated by this, a Neuro-Fuzzy network is used here to achieve good training and forecasting performance in LTF electrical load applications. Furthermore, the Gustafson-Kessel (GK) clustering algorithm is used to reduce model complexity and to provide initial parameters for the Takagi-Sugeno-type MISO NF network. Choosing the number of clusters is the key issue of this paper. Based on several experiments, a number of clusters has been chosen to train and test the NF network. By using GK clustering, the error performance of the network can be reduced significantly by choosing the number of clusters c = 5 (membership functions M = 5) instead of a much larger number of membership functions.

1.2. Matrix rearrangement of the electrical load data

For the long-term model, a MISO system is used for training and forecasting. In this case, a model with 7 inputs and 1 output is used for the given time-series modeling and forecasting application, and the MISO Neuro-Fuzzy predictor is arranged in an XIO (input-output) matrix as shown below:

        | Day1    Day2    Day3    Day4    Day5    Day6    Day7    | Day8  |
  XIO = | Day2    Day3    Day4    Day5    Day6    Day7    Day8    | Day9  |        (1.1)
        | ...     ...     ...     ...     ...     ...     ...     | ...   |
        | DayN-7  DayN-6  DayN-5  DayN-4  DayN-3  DayN-2  DayN-1  | DayN  |

As shown in equation (1.1), 7 input days are trained to produce 1 output day in the NF network. Each input and output represents one day's set of data; in the case of the electrical load data from ElectricalLoad.txt, one day has 96 data points. The output of the first training row (Day 8) will replace the 7th input for the first forecasting step, and after 7 loops of forecasting all the inputs are forecast data (meaning every input comes from previous forecast outputs).
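To make the rearrangement in (1.1) concrete, the following is a minimal sketch (not the author's code) of building the XIO matrix. It assumes a sample-wise sliding window: the 7 inputs are the load values at the same time instant on the 7 preceding days (96 samples per day), and the output is the value one day ahead; the function and variable names are illustrative only.

```python
# Minimal sketch of the XIO rearrangement in equation (1.1), under the
# sample-wise sliding-window assumption described in the lead-in above.
import numpy as np

SAMPLES_PER_DAY = 96
N_INPUT_DAYS = 7

def build_xio(load: np.ndarray):
    """Return (X, y) where X[k] = [x(t-7d), ..., x(t-1d)] and y[k] = x(t)."""
    horizon = N_INPUT_DAYS * SAMPLES_PER_DAY
    n_rows = len(load) - horizon
    X = np.stack([load[k:k + horizon:SAMPLES_PER_DAY] for k in range(n_rows)])
    y = load[horizon:horizon + n_rows]
    return X, y

# Example with synthetic data: 30 days of load values.
rng = np.random.default_rng(0)
load = rng.random(30 * SAMPLES_PER_DAY)
X, y = build_xio(load)
print(X.shape, y.shape)   # (2208, 7) (2208,)
```

In the recursive long-term scheme described above, each new prediction would be written back into the series so that, after one week of predicted samples, every input is itself a forecast.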

2. Neuro-Fuzzy system selection for forecasting

A Neuro-Fuzzy network with an improved training algorithm for the MIMO case was developed by Palit and Popovic (1999, 2002) and Palit and Babuška (2001) for electrical load time-series forecasting. Compared with ANFIS, this similar model has achieved better model accuracy and faster training. For that reason, several steps have been taken here to reach the optimum model accuracy based on a Takagi-Sugeno-type Neuro-Fuzzy network with a MIMO model; this model is, in addition, an upgraded version of the Takagi-Sugeno-type multiple-input single-output Neuro-Fuzzy network. As a continuation of the MISO structure, a feedforward multi-input multi-output network was proposed by Palit and Popovic (2002) and Palit and Babuška (2001), as shown in Figure 2.1 (Palit, Springer, 2005, p.3).

[Figure 2.1: Fuzzy system MIMO feedforward Takagi-Sugeno-type Neuro-Fuzzy network (Palit, 2005). The diagram shows the inputs x_1 ... x_n, the Gaussian membership layers G, the degrees of fulfilment z_j, the normalization by b, the rule consequents y_j and the network outputs f_1 ... f_m.]

For the LTF-type MISO network, m is set to one.

2.1. Neural network representation of the Fuzzy Logic System (FLS)

The Neuro-Fuzzy representation of the FLS is based on TS-type inference, which has been explained clearly by Palit (2005, p.53, p.33). There are two important steps in this representation: calculating the degree of fulfilment and the normalized degree of fulfilment. The FLS considered here for constructing the Neuro-Fuzzy structure is based on a TS-type fuzzy model with Gaussian membership functions; it uses product inference rules and a weighted-average defuzzifier, as defined in (2.2a)-(2.2d) below. The corresponding j-th rule of such an FLS can be written as

R_j: If x_1 is G_1j and x_2 is G_2j and ... and x_n is G_nj
     then y_j = W_j0 + W_j1 x_1 + W_j2 x_2 + ... + W_jn x_n                          (2.1)

where x_i, with i = 1, 2, ..., n, are the n system inputs, f_l with l = 1, 2, ..., m are the m outputs, and G_ij, with i = 1, 2, ..., n and j = 1, 2, ..., M, are Gaussian membership functions of the form (2.2d), with corresponding mean and variance parameters c_ij and σ_ij respectively, and with y_j as the output consequent of the j-th rule. It must be remembered that the Gaussian membership functions G_ij actually represent linguistic terms such as low, medium, high, very high, etc. Rules written as in (2.1) are known as Takagi-Sugeno rules. The FLS can be represented as a three-layer MIMO feedforward network, as shown in Figure 2.1. Because of the implementation of the Takagi-Sugeno-type FLS, this figure represents a Takagi-Sugeno-type MIMO Neuro-Fuzzy network, where instead of the connection weights and biases of a network trained with BPA, we have the means c_ij and variances σ_ij of the Gaussian membership functions, along with the parameters W_j0 and W_ji from the rule consequents, as the equivalent adjustable parameters of the network. If all parameters of the NF network are properly selected, the FLS can correctly approximate any nonlinear system based on the given XIO data matrix:

f = ( Σ_{j=1}^{M} z_j y_j ) / b = Σ_{j=1}^{M} h_j y_j                                (2.2a)

y_j = W_j0 + W_j1 x_1 + W_j2 x_2 + ... + W_jn x_n                                    (2.2b)

h_j = z_j / b,  with  b = Σ_{j=1}^{M} z_j                                            (2.2c)

z_j = Π_{i=1}^{n} exp( −(x_i − c_ij)² / (2 σ_ij²) )                                  (2.2d)
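To make the mapping (2.2a)-(2.2d) concrete, the following is a minimal NumPy sketch of the MISO forward pass. It is an illustration, not the author's implementation; the Gaussian form with 2σ² in the denominator and the parameter layout (c, sigma, w0, w) are assumptions of this sketch.

```python
# Minimal sketch of the TS-type MISO forward pass in (2.2a)-(2.2d).
import numpy as np

def nf_forward(x, c, sigma, w0, w):
    """x: (n,) inputs; c, sigma: (n, M) premise parameters;
    w0: (M,) consequent offsets; w: (n, M) consequent weights.
    Returns the crisp output f and the firing strengths z (degrees of fulfilment)."""
    z = np.exp(-((x[:, None] - c) ** 2) / (2.0 * sigma ** 2)).prod(axis=0)  # (M,)  eq (2.2d)
    b = z.sum()                                                             #       eq (2.2c)
    y = w0 + x @ w                                                          # (M,)  eq (2.2b)
    f = (z * y).sum() / b                                                   #       eq (2.2a)
    return f, z

# Example: n = 7 inputs, M = 5 rules with random parameters.
rng = np.random.default_rng(1)
n, M = 7, 5
f, z = nf_forward(rng.random(n), rng.random((n, M)), 0.5 + rng.random((n, M)),
                  rng.random(M), rng.random((n, M)))
print(f, z.shape)
```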

2.2. Accelerated Levenberg-Marquardt algorithm (LMA)

To accelerate the convergence speed of Neuro-Fuzzy network training, which is slow with BPA, the Levenberg-Marquardt algorithm (LMA) was proposed and proved (Palit and Popovic, 1999). If a function V(w) is to be minimized with respect to the parameter vector w using Newton's method, the update of the parameter vector w is defined as

Δw = −[∇²V(w)]⁻¹ ∇V(w)                                                               (2.3a)

w(k+1) = w(k) + Δw                                                                   (2.3b)

In equation (2.3a), ∇²V(w) is the Hessian matrix and ∇V(w) is the gradient of V(w). If the function V(w) is taken to be the sum-squared-error (SSE) function

V(w) = 0.5 Σ_{r=1}^{N} e_r²(w)                                                       (2.4)

then the gradient of V(w) and the Hessian matrix ∇²V(w) are generally defined as

∇V(w) = Jᵀ(w) e(w)                                                                   (2.5a)

∇²V(w) = Jᵀ(w) J(w) + Σ_{r=1}^{N} e_r(w) ∇²e_r(w)                                    (2.5b)

where the Jacobian matrix J(w) is

        | ∂e_1/∂w_1   ∂e_1/∂w_2   ...   ∂e_1/∂w_p |
J(w) =  | ∂e_2/∂w_1   ∂e_2/∂w_2   ...   ∂e_2/∂w_p |                                  (2.5c)
        |    ...          ...     ...      ...    |
        | ∂e_N/∂w_1   ∂e_N/∂w_2   ...   ∂e_N/∂w_p |

From (2.5c) it is seen that the dimension of the Jacobian matrix is (N × p), where N is the number of training samples and p is the number of adjustable parameters in the network. For the Gauss-Newton method, the second term in (2.5b) is assumed to be zero. Therefore, the update equation according to (2.3a) becomes

Δw = −[Jᵀ(w) J(w)]⁻¹ Jᵀ(w) e(w)                                                      (2.6a)

Now consider the LMA modification of the Gauss-Newton method,

Δw = −[Jᵀ(w) J(w) + μI]⁻¹ Jᵀ(w) e(w)                                                 (2.6b)

where I is the (p × p) identity matrix and the parameter μ is multiplied or divided by some factor whenever the iteration steps increase or decrease the value of V(w). The updated equation according to (2.3a) is therefore

w(k+1) = w(k) − [Jᵀ(w) J(w) + μI]⁻¹ Jᵀ(w) e(w)                                       (2.6c)

It is important to note that for large μ the algorithm becomes the steepest-descent algorithm with step size 1/μ, and for small μ it becomes the Gauss-Newton method. For faster convergence, and also to overcome possible traps at local minima and to reduce oscillation during training (Palit, 2005, p.4), a small momentum term mo can be added, as in BPA (in practice, for electrical load forecasting, adding a small mo of around 5% or more gives better results), so that the final update (2.6c) becomes

w(k+1) = w(k) − [Jᵀ(w) J(w) + μI]⁻¹ Jᵀ(w) e(w) + mo (w(k) − w(k−1))                  (2.6d)
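A minimal sketch of one LMA iteration following (2.6b)-(2.6d) is given below. The μ-adaptation factor of 10 and the generic callables residuals(w) and jacobian(w) are assumptions of this sketch, not values or interfaces taken from the report.

```python
# One Levenberg-Marquardt update with adaptive mu and momentum, per (2.6b)-(2.6d).
import numpy as np

def lm_step(w, w_prev, mu, residuals, jacobian, mo=0.05, factor=10.0):
    """Return (w_new, w, mu_new). residuals(w) -> e (N,), jacobian(w) -> J (N, p)."""
    e = residuals(w)
    J = jacobian(w)
    sse_old = 0.5 * e @ e
    A = J.T @ J + mu * np.eye(len(w))                     # damped normal matrix, eq (2.6b)
    delta = -np.linalg.solve(A, J.T @ e)                  # step direction
    w_new = w + delta + mo * (w - w_prev)                 # momentum term, eq (2.6d)
    sse_new = 0.5 * residuals(w_new) @ residuals(w_new)
    # Decrease mu after a successful step, increase it otherwise (keep old w).
    if sse_new < sse_old:
        return w_new, w, mu / factor
    return w, w_prev, mu * factor

# Toy usage: fit y = a*x + b by LM (stand-in for the NF parameter vector).
x = np.linspace(0, 1, 50); y = 2.0 * x + 1.0
res = lambda w: w[0] * x + w[1] - y
jac = lambda w: np.column_stack([x, np.ones_like(x)])
w, w_prev, mu = np.zeros(2), np.zeros(2), 1.0
for _ in range(20):
    w, w_prev, mu = lm_step(w, w_prev, mu, res, jac)
print(np.round(w, 3))    # approx. [2. 1.]
```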

Furthermore, Xiaosong et al. (1995) also proposed adding a modified error index (MEI) term in order to improve the training convergence. The corresponding gradient with MEI can be defined using the Jacobian matrix as

∇V_new(w) = Jᵀ(w) [ e(w) + γ e(w) (SSE / e_avg) ]                                    (2.7)

where e(w) is the column vector of errors, e_avg is the sum of the errors of each column divided by the number of training samples, and γ is a constant factor, γ << 1, which has to be chosen appropriately.

Now we come to the computation of the Jacobian matrices. The gradient ∇V(W_j0) = ∂SSE/∂W_j0 can be written as

∇V(W_j0) = ∂SSE/∂W_j0 = { z_j / b } (f − d)                                          (2.8)

where f and d are the actual output of the Takagi-Sugeno-type MISO network and the corresponding desired output from the input-output training data matrix. Then, by comparing (2.8) with (2.6a), where the gradient ∇V(w) is expressed as the transpose of the Jacobian matrix multiplied by the network's error vector,

∇V(w) = Jᵀ(w) e(w)                                                                   (2.9)

the transposed Jacobian matrix and the Jacobian matrix for the parameter W_j0 of the NF network can be written as

Jᵀ(W_j0) = (z_j / b)                                                                 (2.10a)

J(W_j0) = [Jᵀ(W_j0)]ᵀ = [z_j / b]ᵀ                                                   (2.10b)

with the prediction error of the fuzzy network

e = (f − d)                                                                          (2.11)

However, if the normalized prediction error of the NF network is considered, then instead of (2.10a) and (2.10b) the equations become

Jᵀ(W_j0) = (z_j)                                                                     (2.12a)

J(W_j0) = [Jᵀ(W_j0)]ᵀ = [z_j]ᵀ                                                       (2.12b)

because the normalized prediction error of the MISO NF network is

e_normalized = (f − d) / b                                                           (2.13)

The transposed Jacobian matrix and the Jacobian matrix for the parameters W_ji of the NF network can be written as

Jᵀ(W_ji) = (z_j / b) x_i                                                             (2.14a)

J(W_ji) = [Jᵀ(W_ji)]ᵀ = [(z_j / b) x_i]ᵀ                                             (2.14b)

Also, by considering the normalized prediction error from (2.13), equations (2.14a)-(2.14b) become:

Jᵀ(W_ji) = (z_j x_i)                                                                 (2.15a)

J(W_ji) = [Jᵀ(W_ji)]ᵀ = [z_j x_i]ᵀ                                                   (2.15b)

Now we come to the computation of the remaining parameters c_ij and σ_ij, by defining the terms D_j and e_l such that

A_j = D_j e = (D_j1 e_1 + D_j2 e_2 + ... + D_jm e_m)                                 (2.16)

with e_l as the corresponding sum of squared errors obtained from all the errors e_l^p of the MIMO network,

e_l = (e_l^1 + e_l^2 + ... + e_l^N)                                                  (2.17)

where p = 1, 2, 3, ..., N corresponds to the number of training data. From (2.16) the term D_j can be determined as

D_j = A_j (e)⁻¹                                                                      (2.18a)

which can also be written in matrix form using the pseudo-inverse as

D = A Eᵀ (E Eᵀ)⁻¹                                                                    (2.18b)

The terms E (the equivalent error vector), D and A are matrices of compatible sizes. Matrix A can now be replaced by the scalar product of e and D,

A = D e                                                                              (2.19)

By considering the normalized equivalent error in (2.13) and taking into account equation (2.19), the transposed Jacobian matrices and the Jacobians for the parameters c_ij and σ_ij can be computed as

Jᵀ(c_ij) = D_j { z_j (x_i − c_ij) / σ_ij² }                                          (2.20a)

J(c_ij) = [Jᵀ(c_ij)]ᵀ = [ D_j { z_j (x_i − c_ij) / σ_ij² } ]ᵀ                        (2.20b)

Jᵀ(σ_ij) = D_j { z_j (x_i − c_ij)² / σ_ij³ }                                         (2.20c)

J(σ_ij) = [Jᵀ(σ_ij)]ᵀ = [ D_j { z_j (x_i − c_ij)² / σ_ij³ } ]ᵀ                       (2.20d)

It is to be noted that the normalized prediction error is used for the computation of the Jacobian matrices for the free parameters W_j0 and W_ji, whereas the normalized equivalent error is used for the computation of the transposed Jacobian matrices and their Jacobians for the free parameters mean c_ij and variance σ_ij.
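Since analytic Jacobian entries such as (2.10)-(2.20) are easy to get wrong in an implementation, a numerical cross-check by finite differences can be useful. The sketch below is a generic helper added for illustration here; it is not part of the report's method.

```python
# Generic central-difference Jacobian, useful for cross-checking analytic
# entries such as (2.10)-(2.20). residuals(w) -> e of shape (N,).
import numpy as np

def numerical_jacobian(residuals, w, h=1e-6):
    """Approximate J with J[r, k] = d e_r / d w_k."""
    e0 = residuals(w)
    J = np.zeros((e0.size, w.size))
    for k in range(w.size):
        dw = np.zeros_like(w); dw[k] = h
        J[:, k] = (residuals(w + dw) - residuals(w - dw)) / (2.0 * h)
    return J

# Example: compare with the exact Jacobian of a tiny nonlinear residual.
w = np.array([1.0, 2.0])
res = lambda w: np.array([w[0] ** 2 - w[1], np.sin(w[0]) + w[1]])
J_num = numerical_jacobian(res, w)
J_exact = np.array([[2 * w[0], -1.0], [np.cos(w[0]), 1.0]])
print(np.allclose(J_num, J_exact, atol=1e-5))   # True
```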

3. LMA with fuzzy clustering

The purpose of applying fuzzy clustering before the data enter the network is similar to model reduction. Because of the complexity of the data, usually a moderate number of membership functions (M) should be enough to bring the SSE as low as possible. In Section 4 it can be seen that using fuzzy clustering, such as the GK clustering algorithm, reduces M considerably, down to 3, 5 or 7. For detecting clusters of different geometrical shapes in one data set, such as electrical load data, the Gustafson-Kessel (GK) clustering algorithm is proposed, following Panchariya (Panchariya et al., 2003) and Palit (Springer, 2005, p.77-p.87).

Given the data set

Z = { Z_1, Z_2, Z_3, ..., Z_N }                                                      (3.1)

the GK clustering algorithm needs the following parameters:
- the number of clusters, 1 < c < N;
- the weighting (fuzziness) exponent parameter m > 1;
- the termination tolerance ε > 0;
- the cluster volumes S_g, which must be selected.

The GK clustering algorithm then proceeds with the following steps. With a random initial partition matrix U^(0) = [μ_gs], repeat for iterations l = 1, 2, 3, ...

Step 1: compute the cluster centers (means)

V_g^(l) = Σ_{s=1}^{N} (μ_gs^(l−1))^m Z_s / Σ_{s=1}^{N} (μ_gs^(l−1))^m,   1 ≤ g ≤ c   (3.2a)

Step 2: determine the cluster covariance matrices

P_g = Σ_{s=1}^{N} (μ_gs^(l−1))^m (Z_s − V_g)(Z_s − V_g)ᵀ / Σ_{s=1}^{N} (μ_gs^(l−1))^m,   1 ≤ g ≤ c   (3.2b)

Step 3: calculate the distances

D²_gsAg = (Z_s − V_g)ᵀ [ S_g det(P_g)^(1/n) P_g⁻¹ ] (Z_s − V_g),   1 ≤ g ≤ c,  1 ≤ s ≤ N   (3.2c)

Step 4: update the partition matrix. For 1 ≤ s ≤ N, if D_gsAg > 0 for all g = 1, 2, ..., c, then

μ_gs^(l) = 1 / Σ_{h=1}^{c} ( D_gsAg / D_hsAh )^(2/(m−1)),   1 ≤ g ≤ c,  1 ≤ s ≤ N    (3.2d)

otherwise

μ_gs^(l) ∈ [0, 1]  with  Σ_{g=1}^{c} μ_gs^(l) = 1,   1 ≤ g ≤ c,  1 ≤ s ≤ N           (3.2e)

These steps are repeated until

|| U^(l) − U^(l−1) || < ε                                                            (3.2f)
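A minimal sketch of steps (3.2a)-(3.2f) follows. The unit cluster volume S_g = 1 and the small covariance regularization are practical assumptions of this sketch, not taken from the report.

```python
# Gustafson-Kessel clustering following steps (3.2a)-(3.2f).
import numpy as np

def gk_clustering(Z, c=5, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """Z: (N, n) data. Returns cluster centers V (c, n) and partition matrix U (c, N)."""
    N, n = Z.shape
    rng = np.random.default_rng(seed)
    U = rng.random((c, N)); U /= U.sum(axis=0)                 # random partition, columns sum to 1
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ Z) / Um.sum(axis=1, keepdims=True)           # cluster centers, (3.2a)
        D2 = np.empty((c, N))
        for g in range(c):
            diff = Z - V[g]                                    # (N, n)
            P = (Um[g][:, None] * diff).T @ diff / Um[g].sum() # covariance, (3.2b)
            P += 1e-9 * np.eye(n)                              # regularization (assumption)
            A = np.linalg.det(P) ** (1.0 / n) * np.linalg.inv(P)   # norm-inducing matrix, S_g = 1
            D2[g] = np.einsum('ij,jk,ik->i', diff, A, diff)    # squared distances, (3.2c)
        D2 = np.fmax(D2, 1e-12)
        U_new = 1.0 / (D2 ** (1.0 / (m - 1)) * (1.0 / D2 ** (1.0 / (m - 1))).sum(axis=0))  # (3.2d)
        if np.abs(U_new - U).max() < eps:                      # max-norm termination check, (3.2f)
            U = U_new
            break
        U = U_new
    return V, U

# Example on random 2-D data.
V, U = gk_clustering(np.random.default_rng(1).random((200, 2)), c=3)
print(V.shape, U.shape)     # (3, 2) (3, 200)
```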

How can the GK clustering algorithm be applied to the TS-type MISO NF network for modeling and forecasting electrical load data? The steps are as follows:

1. Create the XIO data matrix from the electrical load series.
2. Generate a random partition matrix U.
3. Calculate the cluster centers, cluster covariances, distances and norm-inducing matrices, and update the partition matrix again.
4. Derive the parameters mean (c) and sigma (σ) from step 3.
5. Calculate the remaining parameters W_j0, W_ji using the least-squares error algorithm (LSE); a sketch of this step is given below.
6. Calculate the SSE from the LSE algorithm and save all parameters.
7. Use the parameters from step 6 as the initial parameters of the TS-type MISO NF network.

Note that the initial SSE performance of the NF network should be the same as the SSE result obtained from GK clustering and LSE.
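For step 5, note that with the premise parameters (c, σ) fixed by clustering, the model (2.2a)-(2.2d) is linear in the consequents W_j0, W_ji, so they can be estimated by ordinary least squares. The sketch below is an illustrative implementation, not the report's code; the regressor layout is an assumption.

```python
# Least-squares estimation of the consequent parameters with fixed premises.
import numpy as np

def consequent_lse(X, d, c, sigma):
    """X: (N, n) inputs, d: (N,) desired outputs, c/sigma: (n, M) premise params.
    Returns W of shape (M, n+1), row j = [W_j0, W_j1, ..., W_jn]."""
    N, n = X.shape
    M = c.shape[1]
    z = np.exp(-((X[:, :, None] - c) ** 2) / (2.0 * sigma ** 2)).prod(axis=1)  # (N, M)
    h = z / z.sum(axis=1, keepdims=True)                                       # normalized fulfilment
    Xe = np.hstack([np.ones((N, 1)), X])                                       # (N, n+1), bias column
    # f = sum_j h_j * (Xe @ W_j)  ->  linear regression on the stacked regressor:
    Phi = (h[:, :, None] * Xe[:, None, :]).reshape(N, M * (n + 1))
    theta, *_ = np.linalg.lstsq(Phi, d, rcond=None)
    return theta.reshape(M, n + 1)
```

The SSE of Phi @ theta − d obtained here is the "initial SSE" handed over to the NF network in step 7.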

4. Results and Discussion

4.1. Performance of NF networks using clustering

For electricalload.txt, the clustering and LSE results can be seen in Figure 4.1a. The parameters determined in this step give SSE = .56, MSE = . and RMSE = .46 as the initial SSE for the NF network. NF training with 5 training data then reduced the initial SSE from clustering-LSE to .343, with MSE = .4 and RMSE = .37; that is, the NF network reduced the training SSE by 3% compared with the initial SSE, using 499 epochs. Figures 4.1b and 4.1d show the transition of the GMFs from clustering-LSE to the GMFs tuned by the NF network. Furthermore, for forecasting purposes, the 8 electrical load data illustrated in Figure 4.1f give a forecasting SSE = 77.858 with forecasting MSE = .4 and forecasting RMSE = .. Fully forecast input starts from data 763 (7 days multiplied by 96 data points, plus the lead), and the full LTF output of the NF network can follow the actual data for roughly the next 3 days of electrical load data (a few hundred samples).

[Figure 4.1a: GK fuzzy clustering plus LSE performance for electricalload.txt (electrical load in black, GKC-LSE output in red, error in blue); n = 7, m = 1, d = 96, 48 training data, c = 5, fuzziness exponent m = 2.]

[Figure 4.1b: 5 GMFs tuned for the inputs, produced by GK clustering + LSE; training data = 5, c = 5, m = 2, inputs = 7, output = 1.]

[Figure 4.1c: Plot of SSE vs. epochs for the TS-type MISO NF network with accelerated LMA on electricalload.txt; n = 7, m = 1, epochs = 499, d = 96, learning rate for LMA = 65, WF = .5, gamma = .5, mo = .5, 5 training data, c = 5, fuzziness exponent m = 2.]

[Figure 4.1d: Training performance of the TS-type MISO NF network with accelerated LMA on electricalload.txt (NF output in red, actual in black, error in blue; SSE = .343, MSE = .4, RMSE = .37); n = 7, m = 1, epochs = 499, d = 96, learning rate for LMA = 65, WF = .5, gamma = .5, mo = .5, 5 training data, c = 5, fuzziness exponent m = 2.]

[Figure 4.1e: 5 GMFs tuned for the inputs, produced by the NF network; training data = 5, c = 5, m = 2, inputs = 7, output = 1, epochs = 499, d = 96, learning rate for LMA = 65, WF = .5, gamma = .5, mo = .5.]

[Figure 4.1f: Forecasting performance of the TS-type MISO NF network with accelerated LMA on electricalload.txt (NF output in red, actual in black, error in blue); n = 7 inputs, m = 1 output, epochs = 499, lead time d = 96, wideness factor WF = .5, learning rate for LMA = 65, gamma = .5, momentum mo = .5, 4 forecasting data, number of clusters c = 5, fuzziness exponent m = 2.]
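The long-term scheme behind Figure 4.1f feeds each one-day-ahead prediction back into the input window until all seven inputs are themselves forecasts. A minimal sketch of that recursion is given below; it is an illustration using the sample-wise window layout assumed earlier, with the one-step predictor passed in as a callable (for example the nf_forward sketch from Section 2.1 with fixed parameters).

```python
# Recursive long-term forecasting loop: each prediction is appended to the series
# and reused as an input, so after 7*96 steps every input is itself a forecast.
import numpy as np

SAMPLES_PER_DAY, N_INPUT_DAYS = 96, 7

def recursive_forecast(history, steps, predict_one):
    """history: 1-D array with at least 7*96 known samples;
    predict_one: callable mapping the 7 day-lagged inputs to the next value."""
    series = list(history)
    horizon = N_INPUT_DAYS * SAMPLES_PER_DAY
    for _ in range(steps):
        window = np.array(series[-horizon:])          # last 7 days (known or predicted)
        x = window[::SAMPLES_PER_DAY]                 # same time instant on the 7 previous days
        series.append(float(predict_one(x)))          # one-day-ahead (96-sample lead) prediction
    return np.array(series[len(history):])
```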

5. Summary and Conclusion

The performance results from Section 4 show that the NF network trained with the LMA is very efficient in modeling and predicting various nonlinear dynamics. An efficient training algorithm, based on the combination of the LMA with an additional modified error index extension (MEI) and an adaptive learning rate with momentum, has been developed to train the Takagi-Sugeno-type multi-input single-output (MISO) Neuro-Fuzzy network, improving the training performance. The main result of this report is the combination of the accelerated LMA with fuzzy clustering (based on the Gustafson-Kessel clustering algorithm). The proposed fuzzy clustering reduces the number of membership functions (M) and also reduces the training sum-squared error compared with the accelerated LMA alone.

What can be inferred from these overall results? First, there is still a big open question concerning optimization: how do we know that a particular combination of input-output determination and structure, parameter choice, and handling of over-/under-fitting in a Takagi-Sugeno-type Neuro-Fuzzy network will give the optimal performance? Answering this question is beyond the scope of this report.

Second, the fuzzy rules generated from these results are occasionally found to be non-transparent or less interpretable. This is due to the fact that some of the membership functions finally tuned through Neuro-Fuzzy network training are highly similar or overlap each other, which makes them difficult to interpret. To improve the transparency of the fuzzy rules, set-theoretic similarity measures [15] should be computed for each pair of fuzzy sets, and fuzzy sets which are highly similar should be merged into a single one. This gain in transparency will, of course, come at the cost of some sacrifice in model accuracy; a sketch of such a pairwise similarity measure is given at the end of this section.

The last conclusion concerns the long-term forecasting result. The proposed rearrangement of the XIO matrix according to an appropriate lead time gives much better results than a small lead time. By rearranging the XIO matrix, the NF network can follow the actual output for some hundreds of data points. The problem in this scenario is to find the optimal lead time and the optimum amount of training that give the minimum global forecasting error, together with the longest possible forecasting horizon before the network can no longer follow the actual output. Many combinations and repeated simulations are needed in order to find a suitable configuration for long-term forecasting. By repeating the simulation, a Monte Carlo procedure can be applied to study the distribution and the statistics of the data, which give information on the long-term distribution of the time series [16]. This method reduces the simulation risk by identifying simulation constraints, such as the training algorithm for the NF networks, and by discarding unwanted simulation results. For future work, a promising way to obtain optimal results is to combine a hybrid Monte Carlo procedure (HMC) with the Takagi-Sugeno-type Neuro-Fuzzy network, i.e. a hybrid HMC-TS-type NF network.
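The sketch below illustrates one common set-theoretic similarity measure between two Gaussian membership functions (a Jaccard-type ratio evaluated on a discretized universe). It is an assumption of this write-up, not the specific measure of [15]; a merging threshold would likewise have to be chosen by the user.

```python
# Jaccard-type similarity between two Gaussian membership functions, computed
# on a uniform grid: S(A, B) = |min(A, B)| / |max(A, B)|.
import numpy as np

def gaussian_mf(x, c, sigma):
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def similarity(c1, s1, c2, s2, n_points=1000):
    x = np.linspace(min(c1, c2) - 4 * max(s1, s2), max(c1, c2) + 4 * max(s1, s2), n_points)
    a, b = gaussian_mf(x, c1, s1), gaussian_mf(x, c2, s2)
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

# Two nearly coincident membership functions would be candidates for merging:
print(round(similarity(0.50, 0.10, 0.52, 0.11), 2))   # high value -> merging candidates
print(round(similarity(0.20, 0.10, 0.80, 0.10), 2))   # low value  -> keep separate
```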

References. Eugene Fenberg, Cosng Sesson of Apped athematcs for Dereguated Eectrc Power Systems: Optmzaton, Contro, and Computatona Integence, SF workshop, ovember 3. Hans-Peter Preuss, Voker resp, euro-fuzzy, Automatserungstechnsche Praxs 5/94, R. Odenbourg Verag, 994, pp.-4 3. Haykn S (994) eura etworks: a comprehensve foundaton. can, USA 4. Iyer S, Rhnehart RR (), A nove method to stop eura network ranng. Amercan Contro Conferences, Paper W7-3 5. ang SR (993), AFIS: Adaptve network Based Fuzzy Inference System, IEEE rans. On SC., 3(3):665-685 6. Kohonen (995), Sef-organzng maps ( nd ed.) Sprnger seres n nformaton scences, Bern 7. ao Y, Pat AK, hee G (6), ransparent Fuzzy mode for eectrca oad forecastng, hess Report, Unversty of Bremen 8. Pat AK, Babuška R (), Effcent tranng agorthm for akag-sugeno type euro-fuzzy network, Proc. of FUZZ-IEEE, ebourne, Austraa, vo. 3: 538-543 9. Pat AK, Doedng, Anheer, Popovc () Backpropagaton based tranng agorthm for akag-sugeno-type IO neuro-fuzzy network to forecast eectrca oad tme seres, Proc. Of Fuzz-IEEE, Honouu, Hawa, vo. :86-9. Pat AK, Popovc D (999), Forecastng chaotc tme seres usng neuro-fuzzy adaptve genetc approach, Proc. of IEEE-IC, Washngton DC, USA, vo.3:538-543. Pat AK, Popovc D (), Integent processng of tme seres usng neuro fuzzy adaptve genetc approach, Proc. Of IEEE-ICI, Goa, Inda, vo.q:86-9. Pat AK, Popovc D (), onnear combnaton of forecasts usng A, FL and F approaches, FUZZ-IEEE, :566-57 3. Pat AK, Popovc D (5), Computatona Integence n me Seres Forecastng, heory and Engneerng Appcatons, Sprnger 4. Prechet L (998), Eary stoppng but when? In: Orr GB and oeer K-R (Eds.), eura networks: rcks of the trade. Sprnger, Bern: 55-69 5. Setnes, Babuška R, Kaymark U (998), Smarty measures n Fuzzy rue base smpfcaton, IEEE ransacton on System, an and Cybernetcs, vo.8:77-775 6. Smon G, Lendasse, Cottre, Fort, Vereysen (4), Doube quantzaton of the regressor space for ong-term tme seres predcton: method and proof of stabty 7. Wang LX (994), Adaptve fuzzy systems and contro: desgn and stabty anayss, Engewood Cffs, ew ersey: Prentce Ha. 8. Wang, Pat AK, hee G, Forecastng of eectrca oad usng akag-sugeno type wth Fuzzy Custerng and smpfcaton of rue base, aster hess, Unversty of Bremen, 5 9. Xaosong D, Popovc D, Schuz-Ekoff (995), Oscaton resstng n the earnng of Backpropagaton neura networks, Proc. of 3 rd IFAC/IFIP, Ostend, Begum