Efficient Weather Forecasting using Artificial Neural Network as Function Approximator


I. El-Feghi, Z. Zubia and S. Abozgaya

I. El-Feghi is a professor at the University of Tripoli, Tripoli, Libya (phone: +218-91-178-76; e-mail: drse@ee.edu.ly). Z. Zubia is with the University of Sirt, Sirt, Libya (e-mail: zszub@su.edu.ly). S. Abozgaya is with the Arabian Cement Company, Al-Khomas, Libya (e-mail: S.Abozgaya@yahoo.com).

Abstract — Forecasting is referred to as the process of estimation in unknown situations. Weather forecasting, and especially air temperature forecasting, is one of the most important factors in many applications. This paper presents an approach to developing Artificial Neural Networks (ANNs) to forecast air temperatures. One important neural network architecture, namely the Radial Basis Function (RBF), is used as a function approximator. The RBF was trained using meteorological data of one year and tested on another year. The data consist of observations of various meteorological variables such as relative humidity, dew point, wind speed, wind direction and air pressure. To come up with appropriate centers for the RBF neurons, the weather data were clustered into several groups using the k-means clustering algorithm. The goal of developing this network is to forecast the air temperature and to obtain a regression model with minimum prediction error. The data of one year were used for supervised training with labelled data, while the other year's data were used for testing the trained ANN. Several tests were run to find the most suitable ANN structure based on the lowest MSE in predicting air temperature. Results have shown that these structures gave very good prediction in terms of accuracy.

Keywords — Artificial neural network, weather forecasting, Radial Basis Function, classification, k-means.

I. INTRODUCTION

Forecasting is the process of estimation in unknown situations. Prediction is a similar, but more general, term. Weather forecasting, especially of air temperature, is one of the most important factors in many fields, since it touches the lives of humans, cattle, and agriculture. Traditionally, the role of providing weather forecasts has been the responsibility of national meteorological centers. While the data required to make temperature predictions have been available for quite some time, the complex relationships between the data and their effect on the approximation of temperature have often proved difficult to capture using conventional computer analysis [1].

An Artificial Neural Network (ANN) is a model that mimics the behavior of neurons in the brain. The basic components of an ANN are its nodes, or neurons, and the connections between the nodes. A node is primarily a computational unit: it receives inputs, calculates a weighted sum and presents the sum to an activation function. The nodes are generally arranged in layers [2]. The use of a neural network, which learns these complex relationships rather than analysing them, has shown a great deal of promise in accomplishing the goal of predicting air temperature with high accuracy [3-5]. ANNs have the advantage of the ability to learn and adapt. They have been used extensively to predict air temperature [3][4], for Forex market prediction [6][7] and for the stock market [8]. There are several types of ANN structure, such as the Multi-Layer Perceptron (MLP), the Radial Basis Function (RBF) network and the Support Vector Machine (SVM) [2]. For an ANN to be useful, it should be able to learn or capture the complex relationships between inputs and outputs. This is done by searching for an optimal set of weights for the connections between the nodes, and it is achieved by first sending one set of inputs through the ANN in feed-forward mode. In this research, the RBF network was selected as the means of forecasting.
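To make the node description above concrete, here is a minimal sketch of such a neuron (our own illustration, not code from the paper; the weights, bias and tanh activation are arbitrary choices):

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """A single artificial node: weighted sum of the inputs,
    passed through an activation function (tanh here)."""
    weighted_sum = float(np.dot(weights, inputs)) + bias
    return float(np.tanh(weighted_sum))

# Example: two inputs with fixed, illustrative weights.
print(neuron(np.array([0.2, 0.7]), np.array([0.5, -0.3]), 0.1))
```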
The selection of a forecasting method depends on several factors, such as the availability of data, the accuracy required and the ease of operation [9]. Forecasting is the process of estimation in unknown situations from historical data; forecasting the weather, for example, uses historical data as the basis for estimating future outcomes [4]. Before an RBF network can be used, the locations of the centers and the widths for the hidden layer, and the weights of the output layer, must be determined [2]. To come up with the best centers, the K-means algorithm is used to cluster the data into several groups [10]. The goal of this research is to highlight how RBF Artificial Neural Networks (RBF-ANNs) can be applied to forecast air temperatures around the clock for a selected region on the Mediterranean Sea for which past weather data are available.

II. DATA PREPARATION

The weather data for this research were collected from the meteorological department of Misurata, Libya, for the years 2006 and 2007. The data represent the weather pattern of Misurata city. The data of January through December 2006 were used for training the network, while the 2007 data were used for testing it; the model development data were thus based on the historical weather data for 2006. We used five input variables and one output, with readings taken at three-hour intervals over the twenty-four hours: 00:00 hrs, 03:00 hrs, 06:00 hrs, 09:00 hrs, 12:00 hrs, 15:00 hrs, 18:00 hrs and 21:00 hrs. The inputs required for air temperature forecasting include the parameters listed in Table 1.

TABLE 1. METEOROLOGICAL VARIABLES

No.  Variable (taken every 3 hours)   Unit
1    Wind speed                       Knots
2    Wind direction                   Degrees
3    Relative humidity (Rh)           %
4    Dew point                        Deg. C
5    Air pressure                     hPa

These weather parameters were selected because of their effect on air temperature. Consider, for example, the relationship between air temperature, dew point temperature and relative humidity: when the air temperature is close to the dew point, the relative humidity is high (often 80% or greater). Relative humidity is dependent upon the temperature, so if the temperature changes, the relative humidity will change [11].

III. FUNCTION APPROXIMATION

The need for function approximation arises in many branches of applied mathematics, and computer science in particular. In general, a function approximation problem asks us to select a function from a well-defined class that closely matches a target function in a specific way. One can distinguish two major classes of function approximation problems. In the first, the target function is known: approximation theory is the branch of numerical analysis that investigates how a known function can be approximated by a specific class of functions or by curve-fitting methods. In the second, the target function is unknown: instead of an explicit formula, only a set of points is provided. Depending on the structure of the domain, several approximation techniques may be applicable; for example, for functions on the real numbers, techniques of interpolation, regression analysis and curve fitting can be used. Problems such as regression and classification have received a unified treatment in statistical learning theory, where they are viewed as supervised learning problems [2][3].

IV. RADIAL BASIS FUNCTION

The Radial Basis Function Network (RBFN) is a non-linear, layered, feed-forward network used as a universal approximator, capable of implementing arbitrary non-linear transformations of the input space. Its learning is equivalent to finding a surface in a multidimensional space that provides the best fit to the training data; this is generally known as the curve-fitting approximation problem. RBFNs are used in a wide range of applications and are most effective in forecasting (such as weather forecasting), modelling, pattern recognition, image compression and more [12][13]. An RBFN consists of three different layers, namely the input layer, the hidden layer and the output layer, where the hidden layer is multidimensional and its units are known as radial centers [2]. The transformation from the input space to the hidden-unit space is nonlinear, whereas the transformation from the hidden-unit space to the output space is linear. Thus an RBFN produces a linear combination of non-linear basis functions, where the dimension of the input matches the dimension of each radial center. Each hidden unit, known as a radial center, is representative of one or some of the input patterns, and the network is also known as a 'localized receptive field network'. The problem is solved by matching the input space with these receptive fields, where the inputs are clustered around the centers and the output is linear:

y = \sum_{i=1}^{k} w_i \, \phi_i(x)    (1)

Thus we obtain a smooth fit to the desired function.
The hidden units in an RBFN have Gaussian activation functions:

\phi_i(x) = \phi(\lVert x - t_i \rVert)    (2)

where \lVert x - t_i \rVert is the Euclidean norm of the distance between the input x and the center t_i.

A) Approximation by RBF

Radial basis function neural networks (RBFNNs) employed for nonlinear function approximation are trained in two stages. The combined supervised and unsupervised training procedure adopted in numerous RBFNN applications usually provides satisfactory network performance; it allows fast training and improves convergence. The initial stage is selecting the network centres: orthogonal least squares and input clustering are two such methods, whose results can provide an amicable solution to the problem of initial estimates of the Gaussian kernels [7].

We are given a set of inputs x_t and a set of outputs y_t. The approximation of y_t by an RBF will be denoted \hat{y}_t. This approximation is the weighted sum of m Gaussian kernels, as shown in Fig. 1:

\hat{y}_t = \sum_{i=1}^{m} \lambda_i \, \phi(x_t, c_i, \sigma_i),  t = 1, ..., N    (3)

where the Gaussian kernel is

\phi(x, c_i, \sigma_i) = \exp\!\left(-\frac{\lVert x - c_i \rVert^2}{2\sigma_i^2}\right)    (4)

Figure 1. Standard structure of an RBF-ANN.

The complexity of an RBFN is determined by the number of Gaussian kernels. The parameters to specify are the positions of the Gaussian kernels (the centers c_i), their standard deviations (or widths) \sigma_i, and finally the multiplicative factors \lambda_i. When all other parameters are defined, the multiplicative factors are determined by the solution of a system of linear equations. If we build a linear model between the inputs and the output, the latter is approximated by a weighted sum of the different inputs; the weight associated with each input determines the importance that input has on the approximation of the output.

B) Hidden Layer (Radbas Layer)

The hidden layer in an RBFNN is of high dimension and serves a different purpose than in a multilayer feed-forward network. Each neuron of the hidden layer represents an RBF whose dimension equals that of the input data, and the number of RBFs depends on the problem to be solved. First, the radial distance d_i between the input vector x and the center of the basis function c_i is computed for each unit in the hidden layer as

d_i = \lVert x - c_i \rVert,  i = 1, ..., k    (5)

The network output is then

y = f(x) = \sum_{i=1}^{k} w_i \, \phi(x, c_i) = \sum_{i=1}^{k} w_i \, \phi(\lVert x - c_i \rVert)    (6)

where \phi is the nonlinear activation function, x is the input vector, the c_i are the RBF centers in the input space (each neuron in the hidden layer has its associated center c_i, i = 1, 2, ..., k), W is the weight vector of the linear output layer, and k is the total number of hidden-layer neurons.

C) K-Means

K-means is the most common algorithm used to determine appropriate centers for the prototype hidden neurons in RBFNs. The basic idea of this iterative clustering algorithm is to start from an initial random partition and then assign patterns to clusters so as to reduce the squared error. The number of clusters K must be specified beforehand [14][15]. The K-means algorithm works in four steps:

Step 1: Randomly set the cluster centers (centroids).
Step 2: Generate a new partition by assigning each data vector (pattern) to its closest cluster center.
Step 3: Compute the new cluster centers as the centroids of the data vectors assigned to each cluster.
Step 4: Repeat steps 2 and 3 until there is no further change in the cluster centers.

D) Learning Strategies

RBF networks are trained by a variety of supervised and unsupervised learning algorithms. In the initial approaches, all data samples were assigned to the hidden layer to act as centroids, and the number of hidden units was then reduced by the use of k-median clustering algorithms. The learning phase of an RBFNN can be divided into three steps across two phases. The three steps are:

- Find the centers (unsupervised).
- Find the widths (unsupervised).
- Adjust the weights of the output layer (supervised).

The performance of an RBFNN depends critically on the chosen centers and widths. The main objective of the first phase is to optimize the locations of the centers and the widths. After the RBF centers have been found, the widths are calculated; the width represents a measure of the spread of the data associated with each node. In regions of the input space where there are few patterns, it is desirable to have hidden units with a wide area of reception, while a narrow area is needed for crowded portions. In the second phase, the output-layer weights are adjusted in a supervised fashion using the Least Mean Square (LMS) algorithm, so that the difference between the desired output and the obtained output is minimized.
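As a concrete illustration of these three steps, the following minimal sketch trains and applies an RBF network. It is our own code, not the authors': fit_rbf and predict_rbf are hypothetical names, scikit-learn's KMeans is assumed for the unsupervised center-finding step, and the output weights are obtained here by a direct linear least-squares solve, a batch alternative to the iterative LMS update described above.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf(X, y, n_hidden=49, seed=0):
    """Step 1: centers by k-means; step 2: widths from the spread of
    each cluster; step 3: output weights by linear least squares."""
    km = KMeans(n_clusters=n_hidden, n_init=10, random_state=seed).fit(X)
    centers = km.cluster_centers_
    widths = np.empty(n_hidden)
    for i in range(n_hidden):
        pts = X[km.labels_ == i]
        # Width = mean distance of the cluster's points to its center,
        # with a small floor so no kernel collapses to zero width.
        spread = np.linalg.norm(pts - centers[i], axis=1).mean() if len(pts) else 0.0
        widths[i] = max(spread, 1e-6)
    Phi = _design_matrix(X, centers, widths)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # supervised phase
    return centers, widths, w

def _design_matrix(X, centers, widths):
    # phi(x, c_i, sigma_i) = exp(-||x - c_i||^2 / (2 sigma_i^2)), eq. (4)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-d**2 / (2 * widths**2))

def predict_rbf(X, centers, widths, w):
    # Output = weighted sum of the Gaussian kernels, eq. (6)
    return _design_matrix(X, centers, widths) @ w
```

With the five meteorological variables of Table 1 as inputs, X would be an (N, 5) array of readings and y the corresponding air temperatures.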
V. FORECASTING METHOD

The selection of a forecasting method depends on several factors, such as the availability of data, the accuracy required and the ease of operation. To forecast the weather, we use historical data as the basis for estimating future outcomes. As shown in Fig. 4, the algorithm can be outlined in the following steps:

- Read and load the weather data into the system.
- Divide the database into two groups, one group being the training set and the other the testing set, and hide the test set.
- Find the cluster centers by K-means clustering.
- Calculate the centre widths of the basis functions.
- Supervised training phase: use the training set as input to the RBFNN.
- Test the RBFNN system using the test set of the database.

Figure 3. General structure of the RBF model. Inputs: pressure, relative humidity, dew point, wind speed and wind direction.

Figure 4. Block diagram of the forecasting algorithm.

This forecasting requires calculating the distance between the current reading and all the data in the database, which demands a considerable amount of time and computation, especially if we use more than one year of data. To reduce the computational burden and time, we introduce an approach based on the K-means clustering algorithm [14], which divides the feature vectors of all the daily weather readings into several homogeneous groups based on a set of features/attributes. After dividing the database into groups, we compute the center of each group. There are many different methods of grouping; in this research we use a non-hierarchical method, which moves objects from one cluster to another according to one of the criteria.

To illustrate this idea: the database contains 8 readings per day, each reading with 5 features, so over one year we have 8 x 365 = 2,920 records (objects) with 5 attributes each. If we assume, for example, that the number of clusters is 3, we divide all the data into 3 groups and find each cluster containing roughly 973 objects. Using this technique, we not only reduce the time requirement but also increase the accuracy. Figures 5(a) and 5(b) show the data before and after clustering.

A) Data Clustering

Data clustering is the process of organizing data into homogeneous aggregations, and it is used as a basis for searching the data. A clustering algorithm divides the set of data into many aggregations, where the similarity between points within the same cluster is greater than the similarity between points in two different clusters.

Figure 5. (a) Distribution of the weather data before applying the clustering algorithm; (b) the data after clustering.

The distances of each vector from the K cluster centers are computed, and the vector is assigned to the cluster whose center is at the minimum distance.
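A minimal sketch of this grouping step (our own illustration, assuming one year of readings stored as a (2920, 5) NumPy array and scikit-learn's KMeans; all variable names are ours):

```python
import numpy as np
from sklearn.cluster import KMeans

# One year of readings: 8 observations/day x 365 days = 2920 rows;
# columns = the 5 variables of Table 1 (random stand-in data here).
readings = np.random.rand(2920, 5)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(readings)
group_centers = km.cluster_centers_  # one center per homogeneous group

# A new reading is compared with the 3 group centers only, instead of
# with all 2920 records, which is what saves time and computation.
new_reading = np.random.rand(5)
nearest = int(np.argmin(np.linalg.norm(group_centers - new_reading, axis=1)))
print(f"new reading assigned to group {nearest}")
```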

VI. EXPERIMENTAL RESULTS

We start with input data representing the variables listed in Table 1 for one year, as shown in Table 2. A small sample of the output values is shown in Figure 6. The RBF consists of three layers: an input layer, a hidden layer and an output layer. The number of neurons in the input layer is decided by the size of the input vector; in our case it is 5 neurons. The output layer is one neuron, representing the air temperature. There is no specific rule to determine the exact number of neurons in the hidden layer for the network architecture. To find out how many hidden neurons provide the best performance in terms of least forecasting error, we start from a network whose single hidden layer contains two neurons and, increasing the number of neurons, we track the performance error of the network. Figures 8, 9 and 10 show the difference between the predicted and true values using different numbers of hidden-layer neurons: Fig. 8 shows the difference using 22 neurons, Fig. 9 using 49 neurons and Fig. 10 using 60 neurons. As shown in Fig. 11, the optimal structure is reached when the MSE is at its minimum, which occurs with 49 hidden neurons. It can also be seen from Fig. 11 that the error does not decrease significantly as the number of neurons increases further; the minimum error was obtained with 49 neurons in the hidden layer.

Figure 7. The optimal structure of the RBF.

The optimal structure of the developed RBF neural network, which yields the minimum prediction error, is shown in Table 3.

TABLE 2. Input/output data for the year 2006, which was used for training.

Figure 6. Sample of the output data (air temperature against the number of readings).

Figure 8. Comparison between observed and predicted values using 22 hidden neurons.

Figure 9. Comparison between observed and predicted values using 49 hidden neurons.

Figure 10. Comparison between observed and predicted values using 60 hidden neurons.

Figure 11. Change of MSE (error against the number of hidden neurons).

TABLE 3. The optimal structure of the RBF with minimum training error

Layer    Type                                      Number of units
Input    1. Pressure, 2. Relative humidity,        5
         3. Dew point, 4. Wind direction,
         5. Wind speed
Hidden   Gaussian function                         49
Output   1. Air temperature                        1

TABLE 4. Values of MSE for different numbers of neurons in the hidden layer

Neurons  MSE        Neurons  MSE
2        7.61829    27       0.368681
3        9.2264     28       0.44982
4        4.39989    29       0.377412
5        0.369212   30       0.376
6        0.363826   31       0.394
7        0.3394     32       0.41761
8        0.4377     33       0.3932
9        0.348618   34       0.396723
10       0.36348    35       0.39612
11       0.3996     36       0.4377
12       0.39186    37       0.361726
13       0.37324    38       0.33974
14       0.3439     39       0.262964
15       0.41814    40       0.46326
16       0.413      41       0.22247
17       0.38349    47       0.2339
18       0.3816     48       0.3846
19       0.412778   49       0.174462
20       0.336892   50       0.27717
21       0.37714    51       0.667
22       0.462961   52       0.26289
23       0.37827    53       0.281783
24       0.391772   56       0.2262
25       0.32866    57       0.27331
26       0.398196   60       0.3429

As can be seen from Table 4, the best results are obtained when the number of hidden neurons is 49.
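The search over hidden-layer sizes summarized in Table 4 and Fig. 11 can be reproduced with a loop of the following shape (our own sketch; it reuses the hypothetical fit_rbf/predict_rbf helpers from the earlier sketch and assumes X_train/y_train hold the training-year data and X_test/y_test the testing-year data):

```python
import numpy as np

def mse(pred: np.ndarray, target: np.ndarray) -> float:
    return float(np.mean((pred - target) ** 2))

# Try a range of hidden-layer sizes and record the test MSE of each.
results = {}
for n_hidden in range(2, 61):
    centers, widths, w = fit_rbf(X_train, y_train, n_hidden=n_hidden)
    results[n_hidden] = mse(predict_rbf(X_test, centers, widths, w), y_test)

best = min(results, key=results.get)  # the paper finds 49 neurons best
print(f"best hidden size: {best}, test MSE = {results[best]:.6f}")
```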

VII. CONCLUSIONS

In this paper we have presented an efficient method for forecasting air temperature based on observations of various meteorological variables. An RBF ANN was used as a function approximator to predict the air temperature. One year of data was used as the training set and another year was used for testing the trained ANN. Experimental results have shown that the RBF ANN can be trained effectively with a small number of neurons without compromising performance. The RBF achieved good learning performance in a very short training time and was also able to learn from the labelled learning patterns and generalize to the test data, even though the test data were hidden during the training stage. Results have also shown a good approximation, with an average MSE of 0.174462 using 49 neurons in the hidden layer.

REFERENCES

[1] I. Maqsood, M. R. Khan and A. Abraham, "An ensemble of neural networks for weather forecasting," Neural Computing & Applications, Vol. 13, No. 4, pp. 113-122.
[2] S. Haykin, Neural Networks: A Comprehensive Foundation, New York: Macmillan Publishing, 1994.
[3] C. Devi, B. Reddy, K. Kumar, B. Reddy and N. Nayak, "ANN Approach for Weather Prediction using Back Propagation," International Journal of Engineering Trends and Technology, Vol. 3, No. 1, 2012, pp. 19-23.
[4] M. Hayati and Z. Mohebi, "Application of Artificial Neural Networks for Forecasting," International Journal of Engineering and Applied Sciences, Vol. 4, No. 3, 2008, pp. 164-168.
[5] Q. I. Moro, L. Alonso and C. E. Vivaracho, "Application of neural networks to weather forecasting with local data," Proceedings of the 12th IASTED International Conference on Applied Informatics, Annecy, France, May 1994, pp. 68-70.
[6] G. Zhang and M. Y. Hu, "Neural Network Forecasting of the British Pound/US Dollar Exchange Rate," OMEGA: Int. Journal of Management Science, Vol. 26, pp. 495-506, 1998.
[7] J. Yao and C. L. Tan, "A case study on using neural networks to perform technical forecasting of forex," Neurocomputing, Vol. 34, pp. 79-98, 2000.
[8] J. H. Wang and J. Y. Leu, "Stock Market Trend Prediction Using ARIMA-based Neural Networks," Proc. of IEEE Int. Conf. on Neural Networks, Vol. 4, pp. 2160-2165, 1996.
[9] Ch. Jyosthna Devi, B. Syam Prasad Reddy, K. Vagdhan Kumar, B. Musala Reddy and N. Raja Nayak, "ANN Approach for Weather Prediction using Back Propagation," International Journal of Engineering Trends and Technology, Vol. 3, No. 1, 2012.
[10] A. K. Jain and R. C. Dubes, Algorithms for Clustering Data, Englewood Cliffs, NJ: Prentice Hall, 1988.
[11] M. H. McCormick-Goodhart, "The Allowable Temperature and Relative Humidity Range for the Safe Use and Storage of Photographic Materials," Journal of the Society of Archivists, Vol. 17, No. 1, pp. 7-21, 1996.
[12] G. Boopathi and S. Arockiasamy, "Image compression: Wavelet transform using radial basis function (RBF) neural network," Annual IEEE India Conference (INDICON), pp. 340-344, Kochi, India, Dec. 7-9, 2012.
[13] E.-D. Virginia, "Biometric identification system using a radial basis network," in Proc. 34th Annu. IEEE Int. Carnahan Conf. Security Technol., pp. 47-51.
[14] I. Almerhag, I. El-Feghi and A. Dulla, "A Modified K-means Clustering Algorithm for Gray Image Segmentation," Proceedings of ACIT'10, Vol. 1, Benghazi, Libya, 2010.
[15] Y. Shimshoni and N. Intrator, "Classification of seismic signals by integrating ensembles of neural networks," IEEE Trans. Signal Processing, Vol. 46, No. 5.

Idris El-Feghi was born in Misurata, Libya. He obtained his BSc from the University of Portland (1983) in Computer Engineering, and his MSc and PhD from the University of Windsor, Canada (1999 and 2003), in Electrical and Computer Engineering. He is currently a professor at the University of Tripoli, Tripoli, Libya. His areas of interest include image processing, artificial intelligence, optimization and data mining.