Neural Network Training By Gradient Descent Algorithms: Application on the Solar Cell

Fayrouz Dhichi*, Benyounes Ouarfi
Department of Electrical Engineering, EEA&TI Laboratory, Faculty of Sciences and Techniques, Hassan II Mohammedia-Casablanca University, Mohammedia, Morocco

ABSTRACT: This paper deals with the determination of the parameters of a solar cell by using an artificial neural network trained, each time separately, by one of the gradient descent optimization algorithms (Levenberg-Marquardt, Gauss-Newton, Quasi-Newton, steepest descent and conjugate gradient). The determination is carried out for different values of temperature and irradiance. The training process is ensured by the minimization of the error generated at the network output. From the outcomes obtained with each gradient descent algorithm, we conducted a comparative study of all the training algorithms in order to know which one has the best performance. As a result, the Levenberg-Marquardt algorithm presents the best potential compared to the other investigated gradient descent optimization algorithms.

KEYWORDS: Artificial neural network, training, gradient descent optimization algorithms, comparison, electrical parameters, solar cell.

I. INTRODUCTION

Exposure to irradiance and temperature leads to the degradation of the internal characteristics of a solar cell and prevents the photovoltaic (PV) panel from generating electrical power at its optimal performance. In order to study the influence of these handicapping factors, we must know the internal behavior of the solar cell by determining its electrical parameters for different values of irradiance and temperature. The PV current (I_PV) produced at the output of the solar cell is in a nonlinear implicit relationship with the internal electrical parameters. The latter can be identified analytically [1] or numerically [2] for a specific temperature and irradiance. On the other hand, the study of the behavior of the solar cell requires the identification of its parameters for various values of irradiance and temperature. The Artificial Neural Network (ANN) therefore seems best adapted to fulfil this role. The choice of the ANN comes from its capacity to predict results from the exploitation of acquired data. The information is carried by weights representing the values of the connections between neurons. The functioning of the ANN requires its training by an algorithm ensuring the minimization of the error generated at the output. With the aim of determining the electrical parameter values, we compare in this study the gradient descent optimization algorithms that allow the training of the ANN. We distinguish three second-order gradient algorithms (Levenberg-Marquardt, Gauss-Newton and Quasi-Newton) and two first-order gradient algorithms (steepest descent and conjugate gradient).

II. SOLAR CELL MODEL

2.1. Single diode solar cell model

In our study the solar cell is modeled by an electrical model [3] with a single diode, shown in Fig. 1.

[Fig. 1 Equivalent circuit of the solar cell: photocurrent source I_ph, diode, shunt resistance R_sh and series resistance R_s, delivering the current I_PV at the terminal voltage V_PV]

R_s: Series resistance representing the losses due to the various contacts and connections.
R_sh: Shunt resistance characterizing the leakage currents of the diode junction.
I_ph: Photocurrent depending on both irradiance and temperature.
I_s: Diode saturation current.
n: Diode ideality factor.
V_th: Thermal voltage (V_th = A·T / q).
T: Temperature of the solar cell in Kelvin.
A: Boltzmann constant (A = 1.3806503 × 10⁻²³ J/K).
q: Electrical charge of the electron (q = 1.60217646 × 10⁻¹⁹ C).

The mathematical equation deduced from the electrical circuit of Fig. 1 is expressed as follows:

I_{PV} = I_{ph} - I_s \left[ \exp\!\left( \frac{V_{PV} + R_s I_{PV}}{n V_{th}} \right) - 1 \right] - \frac{V_{PV} + R_s I_{PV}}{R_{sh}}    (1)

2.2. The operating process of the solar cell under illumination

An illuminated solar cell generates a characteristic I_PV = f(V_PV) for every value of irradiance and temperature. We obtain this characteristic by varying the value of the load R (Fig. 2).

[Fig. 2 Impact of irradiance and temperature on the solar cell characteristic: the cell, exposed to a given irradiance and temperature, feeds a variable load R; the measured characteristics I_PV = f(V_PV) (current [A] versus voltage [V]) are obtained for several temperature/irradiance pairs]

The change of the solar irradiance between 100 W/m² and 1000 W/m² and of the cell temperature between 18 °C and 65 °C affects the values of the five electrical parameters R_s, R_sh, I_ph, I_s and n of the solar cell. Indeed, the current I_ph varies according to irradiance and the current I_s varies according to temperature, while R_s, R_sh and n vary according to both meteorological factors [4].
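Because Eq. (1) is implicit in I_PV, each point of the I_PV = f(V_PV) characteristic has to be computed numerically. The short Python sketch below shows one possible way to do this with a bracketing root finder; the function name `pv_current` and the parameter values in the example call are illustrative assumptions, not values reported in the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Physical constants as given in the text
A = 1.3806503e-23    # Boltzmann constant [J/K]
q = 1.60217646e-19   # elementary charge [C]

def pv_current(V_pv, I_ph, I_s, n, R_s, R_sh, T):
    """Solve Eq. (1) numerically for I_PV at a given terminal voltage V_PV.

    Eq. (1) is implicit in I_PV, so we look for the root of
    f(I) = I_ph - I_s*(exp((V + Rs*I)/(n*Vth)) - 1) - (V + Rs*I)/Rsh - I.
    """
    V_th = A * T / q
    def f(I):
        return (I_ph
                - I_s * (np.exp((V_pv + R_s * I) / (n * V_th)) - 1.0)
                - (V_pv + R_s * I) / R_sh
                - I)
    # Bracket the root between a small negative current and slightly above I_ph
    return brentq(f, -1.0, I_ph + 1.0)

# Illustrative call with assumed (not paper-reported) parameter values
I = pv_current(V_pv=0.4, I_ph=0.6, I_s=1e-7, n=1.4, R_s=0.037, R_sh=48.0, T=298.0)
print(f"I_PV at 0.4 V ~ {I:.4f} A")
```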

III. THE USED ARTIFICIAL NEURAL NETWORK

The identification of the internal electrical parameters for various values of temperature (T) and irradiance (G) is ensured by the ANN [4] shown in Fig. 3. The architecture includes an entrance layer, a hidden layer and an output layer. The entrance layer contains two inputs [T, G], the hidden layer contains twenty hidden neurons and the output layer includes five output neurons corresponding to the five parameters R_s, R_sh, I_ph, I_s and n whose values we want to predict.

[Fig. 3 Structure of the used ANN: inputs T (°C) and G (W/m²), a hidden layer of twenty neurons with hyperbolic tangent activation, and an output layer of five linear neurons delivering y(1), ..., y(5) = R_s, R_sh, I_ph, I_s, n]

i = 1, 2: Index of the inputs.
j = 1, ..., 20: Index of the hidden neurons.
m = 1, ..., 5: Index of the output neurons.
x_i: Input vector [T, G].
w_ji: Weights of the connections between the entrance layer and the hidden layer.
b_j: Biases of the hidden neurons.
z_j: Input of the hidden neurons.
y_j: Output of the hidden neurons.
w_mj: Weights of the connections between the hidden layer and the output layer.
b_m: Biases of the output neurons.
z_m: Input of the output neurons.
y: Matrix of the network output values, y = [R_s, R_sh, I_ph, I_s, n].
f^h: Activation function "hyperbolic tangent" of the hidden neurons.
f^o: Activation function "linear" of the output neurons.

The input of the hidden layer is calculated by the following expression:

z_j = \sum_{i=1}^{2} x_i w_{ji} + b_j    (2)

By the use of the hyperbolic tangent as activation function of the hidden neurons, the neurons calculate the value of their output using the following equation:

y_j = f^h(z_j)    (3)

To compute the inputs of the output neurons, we use the values of y_j calculated previously:

z_m = \sum_{j=1}^{20} y_j w_{mj} + b_m    (4)

The output of the neural network is calculated as follows:

y(m) = f^o(z_m)    (5)
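For clarity, the following Python sketch implements the forward pass of Eqs. (2)-(5) for the 2-20-5 architecture described above. The random weight initialization and the example input are assumptions used only for illustration; in the paper the weights are obtained by training.

```python
import numpy as np

# Dimensions from the paper: 2 inputs [T, G], 20 hidden neurons, 5 outputs
n_in, n_hidden, n_out = 2, 20, 5

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))    # w_ji, entrance -> hidden
b1 = rng.normal(scale=0.1, size=n_hidden)            # b_j, hidden biases
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))   # w_mj, hidden -> output
b2 = rng.normal(scale=0.1, size=n_out)               # b_m, output biases

def forward(x):
    """Forward pass of the 2-20-5 network, Eqs. (2)-(5)."""
    z_hidden = W1 @ x + b1         # Eq. (2)
    y_hidden = np.tanh(z_hidden)   # Eq. (3): hyperbolic tangent activation
    z_out = W2 @ y_hidden + b2     # Eq. (4)
    return z_out                   # Eq. (5): linear output = [Rs, Rsh, Iph, Is, n]

# Example input: 25 °C and 800 W/m² (arbitrary values chosen for illustration)
y = forward(np.array([25.0, 800.0]))
```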

IV. THE TRAINING ALGORITHMS

The mean square error J_mean(learning) generated by the ANN is expressed by the following equation:

J_{mean(learning)} = \frac{1}{p} \sum_{t=1}^{p} \sum_{s=1}^{v} \left( S_{learning}(t,s) - y_{learning}(t,s) \right)^{2}    (6)

p: Number of {input, output} learning examples.
v: Number of network outputs.
t: Index indicating the example number of the learning stage.
s: Index indicating the number of the output.
S: Target values of the network outputs.

The minimization of the error J_mean(learning) is ensured by adjusting the weights (w) of the ANN (Fig. 3). The training of the network is carried out each time by one optimization algorithm from the set of algorithms (Levenberg-Marquardt, Gauss-Newton, Quasi-Newton, steepest descent and conjugate gradient).

4.1. Levenberg-Marquardt (LM) algorithm

This second-order gradient algorithm [5] allows the optimization of J_mean(learning); the adjustment of w is ensured by the expression:

w_{k+1} = w_k - \left( J^{T} J + \mu I \right)^{-1} J^{T} e    (7)

J: Jacobian matrix of the function J_mean(learning).
e: Error between the target and the calculated network outputs.
k: Number of iterations.
μ: Levenberg-Marquardt damping factor.
I: Identity matrix.

The regulation of the damping factor μ is made as follows: if the calculated J_mean(learning) for w_{k+1} decreases, then μ is decreased; otherwise μ is increased and w_{k+1} = w_k.

4.2. Gauss-Newton (GN) algorithm

This algorithm [6] is from the same family as Levenberg-Marquardt. It minimizes the output error by varying w according to:

w_{k+1} = w_k - \left( J^{T} J \right)^{-1} J^{T} e    (8)

4.3. Quasi-Newton (QN) algorithm

Since the second derivative of J_mean(learning) is not obvious to compute, the Quasi-Newton algorithm [7] suggests an alternative way, which is the approximation of the second derivative by the Hessian B built iteratively in Eqs. (9)-(11). The weights are then adjusted using Eq. (12):

u_k = J^{T} e_{k+1} - J^{T} e_k    (9)

\Delta w_k = w_{k+1} - w_k    (10)

B_{k+1} = B_k + \frac{u_k u_k^{T}}{u_k^{T} \Delta w_k} - \frac{B_k \Delta w_k \Delta w_k^{T} B_k}{\Delta w_k^{T} B_k \Delta w_k}    (11)

w_{k+1} = w_k - B_k^{-1} J^{T} e_k    (12)

4.4. Steepest descent (SD) algorithm

The adjustment of the ANN weights by the steepest descent algorithm [8] is ensured by the following equation, where α is the learning step:

w_{k+1} = w_k - \alpha \, J^{T} e    (13)

4.5. Conjugate gradient (CG) algorithm

The conjugate gradient algorithm [9] is from the same family as the steepest descent algorithm; both are first-order gradient algorithms. At the first iteration the search direction is the negative gradient:

d_{old} = - J^{T} e    (14)

From the second iteration, the direction is corrected by the coefficient β:

\beta = \frac{\left( J^{T} e_{new} \right)^{T} \left( J^{T} e_{new} \right)}{\left( J^{T} e_{old} \right)^{T} \left( J^{T} e_{old} \right)}    (15)

d_{new} = - J^{T} e_{new} + \beta \, d_{old}, \qquad w_{k+1} = w_k + \alpha \, d_{new}    (16)
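As an illustration of how the update of Eq. (7) and the damping-factor rule of Section 4.1 fit together, here is a minimal Python sketch of a Levenberg-Marquardt training loop. `residuals(w)` and `jacobian(w)` stand in for the ANN-specific computations of the error vector e and its Jacobian J; the initial value of μ and the multiplicative factor of 10 are assumptions, since the paper does not report the values it used.

```python
import numpy as np

def lm_step(w, residuals, jacobian, mu):
    """One Levenberg-Marquardt update, Eq. (7): w_{k+1} = w_k - (J^T J + mu*I)^{-1} J^T e."""
    e = residuals(w)                       # error between targets and network outputs
    J = jacobian(w)                        # Jacobian of the errors w.r.t. the weights
    H = J.T @ J + mu * np.eye(w.size)      # damped approximation of the Hessian
    return w - np.linalg.solve(H, J.T @ e)

def train_lm(w, residuals, jacobian, mu=1e-3, factor=10.0, n_iter=100):
    """Training loop with the damping-factor rule of Section 4.1:
    decrease mu when J_mean(learning) decreases, otherwise increase mu
    and keep the previous weights (the factor 10 is an assumption)."""
    cost = lambda w_: np.mean(residuals(w_) ** 2)   # Eq. (6)
    for _ in range(n_iter):
        w_trial = lm_step(w, residuals, jacobian, mu)
        if cost(w_trial) < cost(w):
            w, mu = w_trial, mu / factor
        else:
            mu *= factor                            # reject the step: w_{k+1} = w_k
    return w
```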

V. RESULTS AND DISCUSSION

The training of the network is carried out with 130 input-output examples distributed in three sets (learning, validation and test) [10].

[Fig. 4 Evolution of the test mean square errors of the five training algorithms: test mean square error versus iterations, logarithmic scales, one curve per algorithm]

Fig. 4 shows the curves of the test mean square errors obtained by the ANN. Each curve corresponds to one of the five optimization algorithms. We use a logarithmic scale on the iteration axis in order to show clearly the convergence behavior of the algorithms. The LM algorithm allows a good training of the ANN compared to the other algorithms (GN, QN, CG and SD). Both QN and CG have a steep slope compared to SD, which converges slowly (Fig. 4).

Table 1: Comparison between the behaviors of the algorithms

Algorithm | Time of training | Test mean square error | Correct rate (%)
SD        | 04 min 3 s       | 1.9716 × 10⁻³          | 99.50
CG        | –                | 1.891 × 10⁻³           | 99.51
QN        | 15 s             | 1.3 × 10⁻³             | 99.64
GN        | 18 s             | 1.0 × 10⁻³             | 99.81
LM        | 11 s             | 8.76 × 10⁻⁴            | 99.95

Table 1 includes the results obtained after training the ANN, each time by one of the five optimization algorithms. As a result, SD converges slowly and determines the values of the five electrical parameters at the output of the ANN far from their targets, with the lowest correct rate (99.50%). Compared to SD, CG presents a faster convergence but a nearly identical correct rate of 99.51%. The QN and GN algorithms present higher correction rates of 99.64% and 99.81% respectively. Therefore, by comparing the results of LM with those of the SD, CG, QN and GN algorithms, LM presents both a better rate of convergence (training time) and a better correction rate.
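The following Python sketch illustrates how the 130 examples could be distributed into the three sets mentioned above and how the mean square error of Eq. (6) is evaluated on one of them; the 70/15/15 proportions and the synthetic data are assumptions, since the paper does not specify the split.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for the 130 {input, output} examples of the paper:
# inputs [T in °C, G in W/m²], outputs [Rs, Rsh, Iph, Is, n]
X = rng.uniform(low=[18.0, 100.0], high=[65.0, 1000.0], size=(130, 2))
Y = rng.uniform(size=(130, 5))

# Distribute the examples into learning, validation and test sets
idx = rng.permutation(130)
learn, valid, test = np.split(idx, [int(0.70 * 130), int(0.85 * 130)])

def mean_square_error(pred, target):
    """Eq. (6) evaluated on one of the three sets."""
    return np.mean((target - pred) ** 2)

# e.g. the test mean square error of some trained predictor `predict`:
# test_mse = mean_square_error(predict(X[test]), Y[test])
```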

[Fig. 5 Series resistance R_s according to irradiance]
[Fig. 6 Shunt resistance R_sh according to irradiance]
[Fig. 7 Photocurrent I_ph according to irradiance]
[Fig. 8 Saturation current I_s according to irradiance]
[Fig. 10 Series resistance R_s according to temperature]
[Fig. 11 Shunt resistance R_sh according to temperature]
[Fig. 12 Photocurrent I_ph according to temperature]
[Fig. 13 Saturation current I_s according to temperature]

[Fig. 9 Diode ideality factor n according to irradiance]
[Fig. 14 Diode ideality factor n according to temperature]

Figs. 5-9 show the evolution of the parameters R_s, R_sh, I_ph, I_s and n according to irradiance for two fixed values of temperature, and Figs. 10-14 describe the evolution of the five electrical parameters according to temperature for two fixed values of irradiance (200 W/m² and 400 W/m²). We observe that LM gives the curves most compatible with the desired ones. Comparing LM with GN, the latter gives curves more or less close to the desired ones; on the other hand, GN generates an error larger than that observed with LM. The correction rate of SD is low compared to the other algorithms, which is explained by its oscillation around the optimum; this prevents the convergence from reaching the optimal solution (Fig. 4 and Table 1). The use of the coefficient β of Eq. (15) allows the CG algorithm to converge quickly compared to SD (Table 1). The QN and GN algorithms present correction rates more interesting than those of the SD and CG algorithms; this behavior is explained by the fact that QN and GN are better known for their fast convergence near the optimum. The LM algorithm presents the best convergence behavior compared to the other algorithms, due to the combination of the features of SD and GN: LM behaves as SD for large values of the damping factor μ and as GN for small values of μ.

VI. CONCLUSION

The Levenberg-Marquardt algorithm provides interesting performances for the training of the artificial neural network compared with the other gradient descent optimization algorithms. Indeed, it determines the values of the five electrical parameters of the solar cell closest to the desired ones, thanks to its capacity to optimize the mean square error down to the minimal value in a small amount of time.

REFERENCES

[1] A. Jain, A. Kapoor, "Exact analytical solutions of the parameters of real solar cells using Lambert W-function," Solar Energy Materials & Solar Cells, 2004, vol. 81, pp. 269-277.
[2] H. Qin, J. W. Kimball, "Parameter Determination of Photovoltaic Cells from Field Testing Data using Particle Swarm Optimization," IEEE, 2011.
[3] T. Ikegami, T. Maezono, F. Nakanishi, Y. Yamagata, K. Ebihara, "Estimation of equivalent circuit parameters of PV module and its application to optimal operation of PV system," Solar Energy Materials & Solar Cells, 2001, vol. 67, pp. 389-395.
[4] E. Karatepe, M. Boztepe, M. Colak, "Neural network based solar cell model," Energy Conversion and Management, 2006, vol. 47, pp. 1159-1178.
[5] R. Zayani, R. Bouallegue, D. Roviras, "Levenberg-Marquardt learning neural network for adaptive predistortion for time-varying HPA with memory in OFDM systems," 16th European Signal Processing Conference (EUSIPCO 2008), 2008, pp. 5-9.
[6] P. R. Dimmer, O. P. D. Cutteridge, "Second derivative Gauss-Newton-based method for solving nonlinear simultaneous equations," IEE Proc., December 1980, vol. 127, no. 6, pp. 278-283.
[7] R. Setiono, L. C. K. Hui, "Use of a Quasi-Newton Method in a Feedforward Neural Network Construction Algorithm," IEEE Transactions on Neural Networks, January 1995, vol. 6, no. 1, pp. 273-277.
[8] L. Gong, C. Liu, Y. Li, F. Yuan, "Training Feed-forward Neural Networks Using the Gradient Descent Method with the Optimal Stepsize," Journal of Computational Information Systems, 2012, vol. 8, pp. 1359-1371.
[9] X. Gong, W. Xu, "The Conjugate Gradient Method with Neural Network Control," IEEE, 2010, pp. 82-84.
[10] A. J. Adeloye, A. De Munari, "Artificial neural network based generalized storage-yield-reliability models using the Levenberg-Marquardt algorithm," Journal of Hydrology, October 2005, vol. 326, pp. 215-230.