Long-term Forecasting of Electrical Load using Gustafson-Kessel clustering algorithm on Takagi-Sugeno type MISO Neuro-Fuzzy network
By: Felix Pasila, Electrical Engineering Department, Petra Christian University, Surabaya

Keywords: Long-term forecasting, GK clustering, TS-type MISO NF network, LMA accelerated.

1. Introduction

1.1. Problem definition: Neuro-fuzzy approach in electrical load forecasting

Modeling and identification of electrical load processes are essential for the operation and planning of a utility, whether for a company or for a whole country. Electrical load forecasting is needed because important decisions must be made on generator commitment, load switching, purchasing strategy and infrastructure development. Furthermore, load forecasts are extremely important for energy suppliers, transmission, distribution and markets. In other words, load forecasts play a fundamental role in the formulation of economic, reliable and secure operating strategies for the power system. Like other time-series predictions, load forecasts deal with sequential data. In general, load forecasts fall into two broad categories: short-term forecasting (STF), which can usually be defined as the capability of the network to forecast from the next several days up to some weeks ahead, and long-term forecasting (LTF), which deals with forecasts further into the future. For example, if only several weeks of data are available for training, how can a Neuro-fuzzy network be created to forecast what happens in the next several weeks, the next month or the next year, and for how long can the network still be trusted? Another important issue in LTF is the annual peak demand of distribution substations and feeders. Annual peak load is the most important value for area planning, since peak load most strongly impacts capacity requirements (Feinberg, 2003). In addition, each category has its own characteristics: STF and LTF are distinguished by the sampling interval and the lead time of the forecast taken from the time-series data, and the choice of sampling interval and lead time influences the forecasting performance. In the past, many researchers worked on STF schemes.
Some of them reported very good training and forecasting performance (Palit, Computational Intelligence, 2005, p.57). Inspired by those results, a Neuro-fuzzy network is used here to achieve strong training and forecasting performance in LTF electrical load applications. Furthermore, the Gustafson-Kessel (GK) clustering algorithm is used to reduce model complexity and to provide initial parameters for the Takagi-Sugeno-type MISO NF network. Choosing the number of clusters is the key issue of this paper. Through experiments, several numbers of clusters were chosen to train and test the NF network. Using GK clustering, the error performance of the network can be reduced significantly by choosing the number of clusters c = 5 (5 membership functions) instead of a larger number of membership functions.
1.2. Matrix Rearrangement of electrical load data

For the long-term model, a MISO system is used for training and forecasting. In this case a 7-input, 1-output model is used; for the given time-series modeling and forecasting application, the data for the MISO Neuro-fuzzy predictor are arranged in an XIO (input-output) matrix, as shown below:

        | Day1  Day2  Day3  Day4  Day5  Day6  Day7  | Day8  |
  XIO = | Day2  Day3  Day4  Day5  Day6  Day7  Day8  | Day9  |        (1.1)
        | ...                                       | ...   |
        | Day8  Day9  Day10 Day11 Day12 Day13 Day14 | Day15 |

As shown in equation (1.1), 7 input days are trained to produce 1 output day in the NF network. Each input and output represents one day's set of data; in the case of the electrical load data from ElectricalLoad.txt, one day consists of 96 data points. The output of the first training row, Day8, then replaces the 7th input for the first forecasting step. After 7 loops of forecasting, all inputs are forecast data (meaning every input comes from forecast outputs).

2. Neuro-fuzzy system selection for forecasting

A Neuro-fuzzy network with an improved training algorithm for the MIMO case was developed by Palit and Popovic (1999, 2002) and Palit and Babuška (2001) for electrical load time-series forecasting. Compared to ANFIS, this similar model achieved better model accuracy and faster training. Building on those results, several steps are taken here to reach optimum model accuracy with a Takagi-Sugeno-type Neuro-fuzzy network using a MIMO model; this model is an upgraded version of the Takagi-Sugeno-type multiple-input single-output Neuro-fuzzy network. As a continuation of the MISO structure, the feedforward multi-input multi-output network proposed by Palit and Popovic (2002) and Palit and Babuška (2001) is shown in Figure 2.1 (Palit, Springer, 2005, p.3).

Figure 2.1: Fuzzy system MIMO feedforward Takagi-Sugeno-type Neuro-fuzzy network, with inputs x1 ... xn, Gaussian membership units G, rule activations z normalized by b, and outputs y1 ... ym (Palit, 2005).

For the LTF-type MISO network, m is set to one.
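The sliding-window rearrangement of the load series into the XIO matrix described above can be sketched as follows. This is a minimal illustration assuming the series is a 1-D NumPy array of values; the function name `build_xio` is hypothetical, not from the report:

```python
import numpy as np

def build_xio(series, n_inputs=7):
    """Arrange a 1-D time series into an XIO matrix: each row holds
    n_inputs consecutive values as inputs and the next value as output."""
    rows = []
    for t in range(len(series) - n_inputs):
        window = series[t:t + n_inputs]     # inputs, e.g. Day1 ... Day7
        target = series[t + n_inputs]       # desired output, e.g. Day8
        rows.append(np.append(window, target))
    return np.asarray(rows)

xio = build_xio(np.arange(1.0, 16.0))  # dummy series standing in for Day1..Day15
print(xio[0])   # first row: inputs Day1..Day7, output Day8
```

For ElectricalLoad.txt, where one "day" is itself a block of 96 samples, the same windowing idea would slide over day-blocks rather than single values.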
2.1. Neural Network Representation of the Fuzzy Logic System (FLS)

The Neuro-fuzzy representation of the FLS is based on TS-type inference, which is explained in detail by Palit (2005, p.53, p.33). There are two important steps in this representation: calculating the degree of fulfillment and the normalized degree of fulfillment. The FLS considered here for constructing Neuro-fuzzy structures is based on a TS-type fuzzy model with Gaussian membership functions. It uses product inference rules and a weighted-average defuzzifier. The corresponding l-th rule of this FLS can be written as

  R_l: IF x1 is G_l1 AND x2 is G_l2 AND ... AND xn is G_ln
       THEN y_l = W_l0 + W_l1 x1 + W_l2 x2 + ... + W_ln xn              (2.1)

where x_i, i = 1, ..., n, are the n system inputs; f_j, j = 1, ..., m, are the m outputs; and G_li are the Gaussian membership functions with corresponding mean and variance parameters c_li and σ_li, with y_l the output consequent of the l-th rule. It must be remembered that the Gaussian membership functions G_li actually represent linguistic terms such as low, medium, high, very high, etc. Rules of the form (2.1) are known as Takagi-Sugeno rules. The FLS can thus be represented as a three-layer MIMO feedforward network, as shown in Figure 2.1. Because of the implementation of the Takagi-Sugeno-type FLS, this figure represents a Takagi-Sugeno-type MIMO Neuro-fuzzy network where, instead of the connection weights and biases of a training algorithm such as BPA, we have the means c_li and variances σ_li of the Gaussian membership functions, along with the parameters W_l0, W_li from the rule consequents, as the equivalent adjustable parameters of the network. If all the parameters of the NF network are properly selected, the FLS can correctly approximate any nonlinear system based on the given XIO data matrix. The network output is

  f = Σ_{l=1}^{M} y_l h_l                                               (2.2a)
  y_l = W_l0 + W_l1 x1 + W_l2 x2 + ... + W_ln xn                        (2.2b)
  h_l = z_l / b,  with  b = Σ_{l=1}^{M} z_l                             (2.2c)
  z_l = Π_{i=1}^{n} exp( -((x_i - c_li) / σ_li)² )                      (2.2d)
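The TS-type inference above (Gaussian degrees of fulfillment, normalization, and weighted-average defuzzification) can be sketched as a forward pass. This is a minimal NumPy illustration with hypothetical parameter shapes, using one common Gaussian membership form; it is not the report's implementation:

```python
import numpy as np

def ts_forward(x, c, sigma, W):
    """TS-type fuzzy forward pass for one input vector x of shape (n,).
    c, sigma: (M, n) Gaussian means and spreads for M rules.
    W: (M, n+1) consequent parameters [W_l0, W_l1, ..., W_ln].
    Returns the crisp output f."""
    # degree of fulfillment of each rule: product of Gaussian MFs
    z = np.exp(-((x - c) / sigma) ** 2).prod(axis=1)   # (M,)
    b = z.sum()                   # normalizer
    h = z / b                     # normalized degrees of fulfillment
    y = W[:, 0] + W[:, 1:] @ x    # rule consequents y_l
    return float(h @ y)           # weighted-average defuzzification

# toy check: with identical rules the output equals the single consequent
x = np.array([0.5, 1.0])
c = np.zeros((3, 2)); sigma = np.ones((3, 2))
W = np.tile(np.array([1.0, 2.0, 3.0]), (3, 1))
print(ts_forward(x, c, sigma, W))  # 1 + 2*0.5 + 3*1.0 = 5, up to float rounding
```

Since all three rules are identical here, every normalized degree of fulfillment is 1/3 and the weighted average reduces to the single consequent value.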
2.2. Accelerated Levenberg-Marquardt algorithm (LMA)

To accelerate the convergence of Neuro-fuzzy network training compared with BPA, the Levenberg-Marquardt algorithm (LMA) was proposed and validated (Palit and Popovic, 1999). If a function V(w) is to be minimized with respect to the parameter vector w using Newton's method, the update of the parameter vector w is defined as:
  Δw = -[∇²V(w)]⁻¹ ∇V(w)                                                (2.3a)
  w(k+1) = w(k) + Δw                                                    (2.3b)

In equation (2.3a), ∇²V(w) is the Hessian matrix and ∇V(w) is the gradient of V(w). If the function V(w) is taken to be the sum-squared-error (SSE) function

  V(w) = 0.5 Σ_{r=1}^{N} e_r²(w)                                        (2.4)

then the gradient ∇V(w) and the Hessian ∇²V(w) are generally defined as

  ∇V(w) = Jᵀ(w) e(w)                                                    (2.5a)
  ∇²V(w) = Jᵀ(w) J(w) + Σ_{r=1}^{N} e_r(w) ∇²e_r(w)                     (2.5b)

where the Jacobian matrix J(w) is

  J(w) = [ ∂e_1/∂w_1  ...  ∂e_1/∂w_p
           ...
           ∂e_N/∂w_1  ...  ∂e_N/∂w_p ]                                  (2.5c)

From (2.5c) it is seen that the dimension of the Jacobian matrix is (N × p), where N is the number of training samples and p is the number of adjustable parameters in the network. For the Gauss-Newton method, the second term in (2.5b) is assumed to be zero, so the update according to (2.3a) becomes

  Δw = -[Jᵀ(w) J(w)]⁻¹ Jᵀ(w) e(w)                                       (2.6a)

The LMA modification of the Gauss-Newton method is

  Δw = -[Jᵀ(w) J(w) + μI]⁻¹ Jᵀ(w) e(w)                                  (2.6b)

where I is the (p × p) identity matrix and the parameter μ is multiplied or divided by some factor whenever the iteration steps increase or decrease the value of V(w). The update equation according to (2.3a) is then

  w(k+1) = w(k) - [Jᵀ(w) J(w) + μI]⁻¹ Jᵀ(w) e(w)                        (2.6c)

It is important to note that for large μ the algorithm becomes the steepest-descent algorithm with step size 1/μ, while for small μ it becomes the Gauss-Newton method. For faster convergence, and also to overcome possible traps at local minima and to reduce oscillation during training (Palit, 2005, p.4), a small momentum term mo can be added, as in BPA (practically, in electrical load forecasting, adding mo of around 5% to 10% gives better results), so that the final update (2.6c) becomes
  w(k+1) = w(k) - [Jᵀ(w) J(w) + μI]⁻¹ Jᵀ(w) e(w) + mo (w(k) - w(k-1))   (2.6d)

Furthermore, Xiaosong et al. (1995) proposed adding a modified error index (MEI) term in order to improve training convergence. The corresponding gradient with MEI can be defined using the Jacobian matrix as

  ∇V_new(w) = Jᵀ(w) [ e(w) + γ SSE e_avg(w) ]                           (2.7)

where e(w) is the column vector of errors, e_avg is the sum of the errors of each column divided by the number of training samples, and γ is a constant factor, γ << 1, which has to be chosen appropriately.

Now we come to the computation of the Jacobian matrices. The gradient ∇V_SSE(W_l0) can be written as

  ∇V_SSE(W_l0) = { z_l / b } (f - d)                                    (2.8)

where f and d are the actual output of the Takagi-Sugeno-type MIMO network and the corresponding desired output from the input-output training data matrix. Comparing (2.8) with (2.6a), where the gradient

  ∇V(w) = Jᵀ(w) e(w)                                                    (2.9)

is expressed as the transpose of the Jacobian matrix multiplied by the network's error vector, the transposed Jacobian and Jacobian matrices for the parameter W_l0 of the NF network can be written as

  Jᵀ(W_l0) = (z_l / b)                                                  (2.10a)
  J(W_l0) = [Jᵀ(W_l0)]ᵀ = [z_l / b]ᵀ                                    (2.10b)

with the prediction error of the fuzzy network

  e = (f - d)                                                           (2.11)

But if the normalized prediction error of the NF network is considered, then instead of (2.10a) and (2.10b) we have

  Jᵀ(W_l0) = (z_l),   J(W_l0) = [Jᵀ(W_l0)]ᵀ = [z_l]ᵀ                    (2.12a, 2.12b)

This is because the normalized prediction error of the MIMO-NF network is

  e_normalized = (f - d) / b                                            (2.13)

The transposed Jacobian and Jacobian matrices for the parameters W_li of the NF network can be written as

  Jᵀ(W_li) = (z_l / b) x_i                                              (2.14a)
  J(W_li) = [Jᵀ(W_li)]ᵀ = [(z_l / b) x_i]ᵀ                              (2.14b)

Considering the normalized prediction error from (2.13), equations (2.14a)-(2.14b) then become:
  Jᵀ(W_li) = (z_l x_i),   J(W_li) = [Jᵀ(W_li)]ᵀ = [z_l x_i]ᵀ            (2.15a, 2.15b)

Now we come to the computation of the remaining parameters c_li and σ_li, by defining the terms D_M and e_M through

  A = D_M e_M = (D_1 e_1 + D_2 e_2 + ... + D_m e_m)                     (2.16)

with e_M the equivalent sum-squared error that can be formed from all the errors e_j of the MIMO network,

  e_M^p = (e_1^p + e_2^p + ... + e_m^p)                                 (2.17)

where p = 1, 2, 3, ..., N, with N the number of training data. From (2.16) the term D_M can be determined as

  D_M = A (e_M)⁻¹                                                       (2.18a)

which can also be written in matrix form using the pseudo-inverse as

  D_M = A Eᵀ (E Eᵀ)⁻¹                                                   (2.18b)

The terms E (the equivalent error vector), D_M and A are matrices of size (1 × N). The matrix A can then be replaced by the scalar product of e_M and D_M:

  A = D_M e_M                                                           (2.19)

Now, considering the normalized equivalent error in (2.13) and taking into account equation (2.19), the transposed Jacobian and Jacobian matrices for the parameters c_li and σ_li can be computed as:

  Jᵀ(c_li) = D_M { 2 z_l (x_i - c_li) / σ_li² }                         (2.20a)
  J(c_li) = [Jᵀ(c_li)]ᵀ = [ D_M 2 z_l (x_i - c_li) / σ_li² ]ᵀ           (2.20b)
  Jᵀ(σ_li) = D_M { 2 z_l (x_i - c_li)² / σ_li³ }                        (2.20c)
  J(σ_li) = [Jᵀ(σ_li)]ᵀ = [ D_M 2 z_l (x_i - c_li)² / σ_li³ ]ᵀ          (2.20d)

It is to be noted that the normalized prediction error is used in the computation of the Jacobian matrices for the free parameters W_l0 and W_li, whereas the normalized equivalent error is used in the computation of the transposed Jacobian and Jacobian matrices for the free parameters c_li (mean) and σ_li (variance).
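The damped Gauss-Newton update with momentum described in this section can be sketched generically. The toy linear residual below is only a stand-in for the NF network's error vector, and the function name `lm_step` is hypothetical:

```python
import numpy as np

def lm_step(w, w_prev, jac, err, mu=1.0, mo=0.05):
    """One Levenberg-Marquardt update with a momentum term.
    jac: (N, p) Jacobian of the residuals, err: (N,) residual vector."""
    p = w.size
    H = jac.T @ jac + mu * np.eye(p)           # damped Gauss-Newton Hessian
    delta = np.linalg.solve(H, jac.T @ err)    # solve instead of explicit inverse
    return w - delta + mo * (w - w_prev)       # momentum term mo*(w(k) - w(k-1))

# toy linear least-squares problem: residuals e(w) = A w - d
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
d = np.array([1.0, 4.0, 3.0])
w = np.zeros(2); w_prev = w.copy()
for _ in range(50):
    w, w_prev = lm_step(w, w_prev, A, A @ w - d, mu=0.1), w
print(w)  # approaches [1, 2], the least-squares solution
```

For a linear residual the Jacobian is constant; in the NF network it would be assembled per parameter group from equations of the kind derived above.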
3. LMA with Fuzzy Clustering

The purpose of applying fuzzy clustering before the data enter the network is similar to model reduction. Because of the complexity of the data, some number of membership functions (M) should be enough to bring the SSE as low as possible. In Section 4 it can be seen that fuzzy clustering, such as the GK clustering algorithm, reduces M from 15 to 3, 5 or 7. For detecting clusters of different geometrical shapes in one data set, such as electrical load data, the Gustafson-Kessel (GK) clustering algorithm is proposed, following Panchariya (Panchariya et al., 2003) and Palit (Springer, 2005, pp.77-87).

Given the data set

  Z = { z_1, z_2, z_3, ..., z_N },                                      (3.1)

the GK clustering algorithm needs the following parameters:
- the number of clusters, 1 < c < N;
- the weighting or fuzziness exponent, m > 1;
- the termination tolerance, ε > 0;
- the cluster volumes ρ_g, which must be selected (typically ρ_g = 1).

The GK clustering algorithm then proceeds in the following steps. With a random initial partition matrix U(0), repeat for iterations l = 1, 2, 3, ...

Step 1: compute the cluster centers (means)

  v_g = Σ_{s=1}^{N} (μ_gs)^m z_s / Σ_{s=1}^{N} (μ_gs)^m,  1 ≤ g ≤ c     (3.2a)

Step 2: determine the cluster covariance matrices

  P_g = Σ_{s=1}^{N} (μ_gs)^m (z_s - v_g)(z_s - v_g)ᵀ / Σ_{s=1}^{N} (μ_gs)^m,  1 ≤ g ≤ c   (3.2b)

Step 3: calculate the distances

  D²_gs,Ag = (z_s - v_g)ᵀ [ ρ_g det(P_g) ]^{1/n} P_g⁻¹ (z_s - v_g),  1 ≤ g ≤ c,  1 ≤ s ≤ N   (3.2c)

Step 4: update the partition matrix. For 1 ≤ s ≤ N, if D_gs,Ag > 0 for all g = 1, 2, ..., c,
  μ_gs = 1 / Σ_{h=1}^{c} ( D_gs,Ag / D_hs,Ah )^{2/(m-1)},  1 ≤ g ≤ c    (3.2d)

otherwise set μ_gs = 0 for every g with D_gs,Ag > 0 and distribute μ_gs ∈ [0, 1] over the clusters with zero distance so that

  Σ_{g=1}^{c} μ_gs = 1                                                  (3.2e)

Repeat until

  || U(l) - U(l-1) || < ε                                               (3.2f)

How can the GK clustering algorithm be applied to the TS-type MIMO NF network for modeling and forecasting electrical load data? These are the steps:
1. Create the XIO data matrix from the electrical load data.
2. Randomize the partition matrix U.
3. Calculate the cluster centers, cluster covariances, distances and norm-inducing matrices, and update the partition matrix again.
4. Declare the parameters mean (c) and sigma (σ) from step 3.
5. Calculate the remaining parameters W_l0, W_li using the least-squared-error algorithm (LSE).
6. Calculate the SSE from the LSE algorithm and save all parameters.
7. Use the parameters from step 6 as the initial parameters of the TS-type MIMO NF network. Note that the initial SSE performance of the NF network should equal the SSE obtained from GK clustering plus LSE.

4. Results and Discussion

4.1. Performance of NF networks using Clustering

For electricalload.txt, the clustering and LSE results can be seen in Figure 4.a. The parameters determined in this step give SSE = .56 and RMSE = .46 as the initial performance for the NF network. NF training then reduced the initial SSE from clustering-LSE to .343, with MSE = .4 and RMSE = .37; that is, the NF network brought the training SSE about 3% below the initial SSE using 499 epochs. Figures 4.b and 4.d show the transition of the GMFs from clustering-LSE to the trained NF network. Furthermore, for forecasting purposes, the electrical load data illustrated in Figure 4.f give a forecasting MSE = .4. Full forecasting input starts from data point 763 (7 days multiplied by 96 data points per day, plus an offset), and the full LTF output of the NF network can follow the next 300 actual data points (around 3 days of the electrical load data).
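The GK iteration used for the initialization above (means, fuzzy covariances, norm-inducing distances, partition update) can be sketched as follows. This is a compact illustration assuming unit cluster volumes and fuzziness exponent m = 2 by default, not the report's code:

```python
import numpy as np

def gk_cluster(Z, c=2, m=2.0, eps=1e-6, max_iter=200, seed=0):
    """Gustafson-Kessel fuzzy clustering of data Z with shape (N, n).
    Returns cluster means V (c, n) and partition matrix U (c, N)."""
    N, n = Z.shape
    rng = np.random.default_rng(seed)
    U = rng.random((c, N))
    U /= U.sum(axis=0)                                   # random initial partition
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ Z) / Um.sum(axis=1, keepdims=True)     # step 1: cluster means
        D2 = np.empty((c, N))
        for g in range(c):
            diff = Z - V[g]                              # (N, n)
            P = (Um[g, :, None] * diff).T @ diff / Um[g].sum()    # step 2: covariance
            A = np.linalg.det(P) ** (1.0 / n) * np.linalg.inv(P)  # norm matrix, rho_g = 1
            D2[g] = np.einsum('ij,jk,ik->i', diff, A, diff)       # step 3: squared distances
        D2 = np.maximum(D2, 1e-12)
        inv = D2 ** (-1.0 / (m - 1))
        U_new = inv / inv.sum(axis=0)                    # step 4: partition update
        if np.abs(U_new - U).max() < eps:
            return V, U_new
        U = U_new
    return V, U

# toy usage: two well-separated 2-D blobs
rng = np.random.default_rng(1)
Z = np.vstack([rng.normal(0.0, 0.1, (30, 2)), rng.normal(5.0, 0.1, (30, 2))])
V, U = gk_cluster(Z, c=2)
print(np.sort(V[:, 0]))  # one center near 0, one near 5
```

The recovered means and covariances would then supply the initial c and σ parameters of step 4 above, before LSE fits the consequent weights.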
Figure 4.a: GK fuzzy clustering plus LSE performance for electricalload.txt: electrical load (black), GK clustering-LSE output (red), error (blue); n = 7, m = 1, d = 96, 48 data training, c = 5, fuzziness exponent m = 2.

Figure 4.b: 5 GMFs tuned for Input 1, produced by GK clustering + LSE; data training = 5, c = 5, m = 2, 7 inputs, 1 output.
Figure 4.c: Plot of SSE vs. epochs for the TS-type MIMO NF network with accelerated LMA on electricalload.txt; n = 7, m = 1, epochs = 499, d = 96, LMA learning rate = 65, WF = .5, gamma = .5, mo = .5, data training = 5, c = 5, fuzziness exponent m = 2.

Figure 4.d: Training performance (LTF scheme) of the TS-type MIMO NF network with accelerated LMA: NF output (red), actual load (black), error (blue); SSE = .343, MSE = .4, RMSE = .37; same settings as Figure 4.c.
Figure 4.e: 5 GMFs tuned for Input 1, produced by the trained NF network; data training = 5, c = 5, m = 2, 7 inputs, 1 output, epochs = 499, d = 96, LMA learning rate = 65, WF = .5, gamma = .5, mo = .5.

Figure 4.f: Forecasting performance (LTF case) of the TS-type MIMO NF network with accelerated LMA: NF output (red), actual load (black), error (blue); n = 7 inputs, m = 1 output, epochs = 499, lead time d = 96, LMA learning rate = 65, wideness factor WF = .5, gamma = .5, momentum mo = .5, 4 data forecasting, number of clusters c = 5, fuzziness exponent m = 2.
5. Summary and Conclusion

The performance results from Section 4 show that the NF network trained with LMA is very efficient in modeling and predicting various nonlinear dynamics. An efficient training algorithm based on the combination of LMA with an additional modified error index extension (MEI) and an adaptive version of the learning rate (momentum) has been developed to train the Takagi-Sugeno-type multi-input single-output (MISO) Neuro-fuzzy network, improving the training performance. The key result of this report is the combination of accelerated LMA with fuzzy clustering (based on the Gustafson-Kessel clustering algorithm). The proposed fuzzy clustering reduces the number of membership functions (M) and also reduces the training sum-squared error compared with accelerated LMA alone.

What can be inferred from these overall results? First, there is still a big open question in dealing with optimization: how do we know that simultaneously choosing the input-output combination and structure, selecting the parameters, and handling over-/under-fitting in the Takagi-Sugeno-type Neuro-fuzzy network will give the optimum performance? Answering this question is beyond the scope of this report. Second, the fuzzy rules generated from these results are occasionally found to be non-transparent or less interpretable. This is because some of the membership functions finally tuned through Neuro-fuzzy training are highly similar or overlap each other, giving rise to a situation that is difficult to interpret. To improve the transparency of the fuzzy rules, set-theoretic similarity measures [15] should be computed for each pair of fuzzy sets, and fuzzy sets that are highly similar should be merged into a single one. This gain in transparency of course comes at the cost of some model accuracy. The last conclusion concerns the long-term forecasting result. The proposed rearrangement of the XIO matrix according to an appropriate lead time gives much better results than a small lead time. By rearranging the XIO matrix, the NF network can follow the actual output for some hundreds of data points.
The remaining problem in this scenario is to find the optimal lead time and the optimum amount of training data that give the minimum global forecasting error, and also the maximum forecasting horizon (long-term) before the network can no longer follow the actual output. Many combinations and repetitions of simulation are needed in order to find a suitable long-term forecasting result. By repeating the simulations, a Monte Carlo procedure can be applied to study the distribution and the statistics of the data, which give information on the long-term distribution of the time series [16]. This method reduces the risk of simulation by finding simulation constraints, such as the training algorithm for NF networks, and by reducing or discarding unwanted simulation results. For future work, a possible route to optimal results is to combine a hybrid Monte Carlo procedure (HMC) with the Takagi-Sugeno-type Neuro-fuzzy network, i.e., a hybrid HMC-TS-type NF network.
References

1. Eugene Feinberg, Closing Session of Applied Mathematics for Deregulated Electric Power Systems: Optimization, Control, and Computational Intelligence, NSF workshop, November 2003
2. Hans-Peter Preuss, Volker Tresp, Neuro-Fuzzy, Automatisierungstechnische Praxis 5/94, R. Oldenbourg Verlag, 1994
3. Haykin S (1994), Neural Networks: a comprehensive foundation, Macmillan, USA
4. Iyer MS, Rhinehart RR (2000), A novel method to stop neural network training, American Control Conference
5. Jang JSR (1993), ANFIS: Adaptive-Network-Based Fuzzy Inference System, IEEE Trans. on Systems, Man and Cybernetics, 23(3)
6. Kohonen T (1995), Self-organizing maps (2nd ed.), Springer series in information sciences, Berlin
7. ao Y, Palit AK, Thiele G (2006), Transparent Fuzzy model for electrical load forecasting, Thesis Report, University of Bremen
8. Palit AK, Babuška R (2001), Efficient training algorithm for Takagi-Sugeno type Neuro-fuzzy network, Proc. of FUZZ-IEEE, Melbourne, Australia, vol. 3
9. Palit AK, Doeding G, Anheier W, Popovic D (2002), Backpropagation based training algorithm for Takagi-Sugeno type MIMO neuro-fuzzy network to forecast electrical load time series, Proc. of FUZZ-IEEE, Honolulu, Hawaii
10. Palit AK, Popovic D (1999), Forecasting chaotic time series using neuro-fuzzy adaptive genetic approach, Proc. of IEEE-IJCNN, Washington DC, USA, vol. 3
11. Palit AK, Popovic D (2000), Intelligent processing of time series using neuro-fuzzy adaptive genetic approach, Proc. of IEEE-ICIT, Goa, India
12. Palit AK, Popovic D, Nonlinear combination of forecasts using ANN, FL and NF approaches, Proc. of FUZZ-IEEE
13. Palit AK, Popovic D (2005), Computational Intelligence in Time Series Forecasting: Theory and Engineering Applications, Springer
14. Prechelt L (1998), Early stopping - but when? In: Orr GB and Mueller K-R (Eds.), Neural networks: tricks of the trade, Springer, Berlin
15. Setnes M, Babuška R, Kaymak U (1998), Similarity measures in fuzzy rule base simplification, IEEE Transactions on Systems, Man and Cybernetics, vol. 28
16. Simon G, Lendasse A, Cottrell M, Fort J-C, Verleysen M (2004), Double quantization of the regressor space for long-term time series prediction: method and proof of stability
17. Wang LX (1994), Adaptive fuzzy systems and control: design and stability analysis, Englewood Cliffs, New Jersey: Prentice Hall
18. Wang, Palit AK, Thiele G (2005), Forecasting of electrical load using Takagi-Sugeno type NF network with Fuzzy Clustering and simplification of rule base, Master Thesis, University of Bremen
19. Xiaosong D, Popovic D, Schulz-Ekloff G (1995), Oscillation resisting in the learning of backpropagation neural networks, Proc. of 3rd IFAC/IFIP, Ostend, Belgium
More informationModule 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur
Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:
More informationChapter 6 Hidden Markov Models. Chaochun Wei Spring 2018
896 920 987 2006 Chapter 6 Hdden Markov Modes Chaochun We Sprng 208 Contents Readng materas Introducton to Hdden Markov Mode Markov chans Hdden Markov Modes Parameter estmaton for HMMs 2 Readng Rabner,
More informationCHAPTER 7 STOCHASTIC ECONOMIC EMISSION DISPATCH-MODELED USING WEIGHTING METHOD
90 CHAPTER 7 STOCHASTIC ECOOMIC EMISSIO DISPATCH-MODELED USIG WEIGHTIG METHOD 7.1 ITRODUCTIO early 70% of electrc power produced n the world s by means of thermal plants. Thermal power statons are the
More informationEEE 241: Linear Systems
EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they
More informationLecture 12: Classification
Lecture : Classfcaton g Dscrmnant functons g The optmal Bayes classfer g Quadratc classfers g Eucldean and Mahalanobs metrcs g K Nearest Neghbor Classfers Intellgent Sensor Systems Rcardo Guterrez-Osuna
More informationLINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity
LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have
More informationCHAPTER-5 INFORMATION MEASURE OF FUZZY MATRIX AND FUZZY BINARY RELATION
CAPTER- INFORMATION MEASURE OF FUZZY MATRI AN FUZZY BINARY RELATION Introducton The basc concept of the fuzz matr theor s ver smple and can be appled to socal and natural stuatons A branch of fuzz matr
More informationDynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence)
/24/27 Prevew Fbonacc Sequence Longest Common Subsequence Dynamc programmng s a method for solvng complex problems by breakng them down nto smpler sub-problems. It s applcable to problems exhbtng the propertes
More informationCyclic Codes BCH Codes
Cycc Codes BCH Codes Gaos Feds GF m A Gaos fed of m eements can be obtaned usng the symbos 0,, á, and the eements beng 0,, á, á, á 3 m,... so that fed F* s cosed under mutpcaton wth m eements. The operator
More informationLECTURE 9 CANONICAL CORRELATION ANALYSIS
LECURE 9 CANONICAL CORRELAION ANALYSIS Introducton he concept of canoncal correlaton arses when we want to quantfy the assocatons between two sets of varables. For example, suppose that the frst set of
More informationChapter Newton s Method
Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve
More informationCIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M
CIS56: achne Learnng Lecture 3 (Sept 6, 003) Preparaton help: Xaoyng Huang Lnear Regresson Lnear regresson can be represented by a functonal form: f(; θ) = θ 0 0 +θ + + θ = θ = 0 ote: 0 s a dummy attrbute
More information10-701/ Machine Learning, Fall 2005 Homework 3
10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40
More informationDelay tomography for large scale networks
Deay tomography for arge scae networks MENG-FU SHIH ALFRED O. HERO III Communcatons and Sgna Processng Laboratory Eectrca Engneerng and Computer Scence Department Unversty of Mchgan, 30 Bea. Ave., Ann
More informationA Three-Phase State Estimation in Unbalanced Distribution Networks with Switch Modelling
A Three-Phase State Estmaton n Unbaanced Dstrbuton Networks wth Swtch Modeng Ankur Majumdar Student Member, IEEE Dept of Eectrca and Eectronc Engneerng Impera Coege London London, UK ankurmajumdar@mperaacuk
More informationErrors for Linear Systems
Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch
More informationChapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems
Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons
More informationQuantum Runge-Lenz Vector and the Hydrogen Atom, the hidden SO(4) symmetry
Quantum Runge-Lenz ector and the Hydrogen Atom, the hdden SO(4) symmetry Pasca Szrftgser and Edgardo S. Cheb-Terrab () Laboratore PhLAM, UMR CNRS 85, Unversté Le, F-59655, France () Mapesoft Let's consder
More informationNumerical Investigation of Power Tunability in Two-Section QD Superluminescent Diodes
Numerca Investgaton of Power Tunabty n Two-Secton QD Superumnescent Dodes Matta Rossett Paoo Bardea Ivo Montrosset POLITECNICO DI TORINO DELEN Summary 1. A smpfed mode for QD Super Lumnescent Dodes (SLD)
More informationDistributed Moving Horizon State Estimation of Nonlinear Systems. Jing Zhang
Dstrbuted Movng Horzon State Estmaton of Nonnear Systems by Jng Zhang A thess submtted n parta fufment of the requrements for the degree of Master of Scence n Chemca Engneerng Department of Chemca and
More informationNon-linear Canonical Correlation Analysis Using a RBF Network
ESANN' proceedngs - European Smposum on Artfcal Neural Networks Bruges (Belgum), 4-6 Aprl, d-sde publ., ISBN -97--, pp. 57-5 Non-lnear Canoncal Correlaton Analss Usng a RBF Network Sukhbnder Kumar, Elane
More informationCSC 411 / CSC D11 / CSC C11
18 Boostng s a general strategy for learnng classfers by combnng smpler ones. The dea of boostng s to take a weak classfer that s, any classfer that wll do at least slghtly better than chance and use t
More informationMACHINE APPLIED MACHINE LEARNING LEARNING. Gaussian Mixture Regression
11 MACHINE APPLIED MACHINE LEARNING LEARNING MACHINE LEARNING Gaussan Mture Regresson 22 MACHINE APPLIED MACHINE LEARNING LEARNING Bref summary of last week s lecture 33 MACHINE APPLIED MACHINE LEARNING
More informationA Hybrid Learning Algorithm for Locally Recurrent Neural Networks
Contemporary Engneerng Scences, Vo. 11, 2018, no. 1, 1-13 HIKARI Ltd, www.m-hkar.com https://do.org/10.12988/ces.2018.711194 A Hybrd Learnng Agorthm for Locay Recurrent Neura Networks Dmtrs Varsams and
More informationChapter 6. Rotations and Tensors
Vector Spaces n Physcs 8/6/5 Chapter 6. Rotatons and ensors here s a speca knd of near transformaton whch s used to transforms coordnates from one set of axes to another set of axes (wth the same orgn).
More informationLaboratory 3: Method of Least Squares
Laboratory 3: Method of Least Squares Introducton Consder the graph of expermental data n Fgure 1. In ths experment x s the ndependent varable and y the dependent varable. Clearly they are correlated wth
More informationNegative Binomial Regression
STATGRAPHICS Rev. 9/16/2013 Negatve Bnomal Regresson Summary... 1 Data Input... 3 Statstcal Model... 3 Analyss Summary... 4 Analyss Optons... 7 Plot of Ftted Model... 8 Observed Versus Predcted... 10 Predctons...
More informationCOMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS
Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS
More informationLaboratory 1c: Method of Least Squares
Lab 1c, Least Squares Laboratory 1c: Method of Least Squares Introducton Consder the graph of expermental data n Fgure 1. In ths experment x s the ndependent varable and y the dependent varable. Clearly
More informationComparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method
Appled Mathematcal Scences, Vol. 7, 0, no. 47, 07-0 HIARI Ltd, www.m-hkar.com Comparson of the Populaton Varance Estmators of -Parameter Exponental Dstrbuton Based on Multple Crtera Decson Makng Method
More informationRelevance Vector Machines Explained
October 19, 2010 Relevance Vector Machnes Explaned Trstan Fletcher www.cs.ucl.ac.uk/staff/t.fletcher/ Introducton Ths document has been wrtten n an attempt to make Tppng s [1] Relevance Vector Machnes
More informationEcon107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)
I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes
More informationThe internal structure of natural numbers and one method for the definition of large prime numbers
The nternal structure of natural numbers and one method for the defnton of large prme numbers Emmanul Manousos APM Insttute for the Advancement of Physcs and Mathematcs 3 Poulou str. 53 Athens Greece Abstract
More informationLecture 3. Ax x i a i. i i
18.409 The Behavor of Algorthms n Practce 2/14/2 Lecturer: Dan Spelman Lecture 3 Scrbe: Arvnd Sankar 1 Largest sngular value In order to bound the condton number, we need an upper bound on the largest
More informationChapter 8 Indicator Variables
Chapter 8 Indcator Varables In general, e explanatory varables n any regresson analyss are assumed to be quanttatve n nature. For example, e varables lke temperature, dstance, age etc. are quanttatve n
More informationA Rigorous Framework for Robust Data Assimilation
A Rgorous Framework for Robust Data Assmlaton Adran Sandu 1 wth Vshwas Rao 1, Elas D. Nno 1, and Mchael Ng 2 1 Computatonal Scence Laboratory (CSL) Department of Computer Scence Vrgna Tech 2 Hong Kong
More informationApplication of support vector machine in health monitoring of plate structures
Appcaton of support vector machne n heath montorng of pate structures *Satsh Satpa 1), Yogesh Khandare ), Sauvk Banerjee 3) and Anrban Guha 4) 1), ), 4) Department of Mechanca Engneerng, Indan Insttute
More informationCHAPTER III Neural Networks as Associative Memory
CHAPTER III Neural Networs as Assocatve Memory Introducton One of the prmary functons of the bran s assocatve memory. We assocate the faces wth names, letters wth sounds, or we can recognze the people
More informationChapter 11: Simple Linear Regression and Correlation
Chapter 11: Smple Lnear Regresson and Correlaton 11-1 Emprcal Models 11-2 Smple Lnear Regresson 11-3 Propertes of the Least Squares Estmators 11-4 Hypothess Test n Smple Lnear Regresson 11-4.1 Use of t-tests
More informationDiscriminating Fuzzy Preference Relations Based on Heuristic Possibilistic Clustering
Mutcrtera Orderng and ankng: Parta Orders, Ambgutes and Apped Issues Jan W. Owsńsk and aner Brüggemann, Edtors Dscrmnatng Fuzzy Preerence eatons Based on Heurstc Possbstc Custerng Dmtr A. Vattchenn Unted
More information1 The Mistake Bound Model
5-850: Advanced Algorthms CMU, Sprng 07 Lecture #: Onlne Learnng and Multplcatve Weghts February 7, 07 Lecturer: Anupam Gupta Scrbe: Bryan Lee,Albert Gu, Eugene Cho he Mstake Bound Model Suppose there
More informationChapter - 2. Distribution System Power Flow Analysis
Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load
More informationA Novel Feistel Cipher Involving a Bunch of Keys supplemented with Modular Arithmetic Addition
(IJACSA) Internatonal Journal of Advanced Computer Scence Applcatons, A Novel Festel Cpher Involvng a Bunch of Keys supplemented wth Modular Arthmetc Addton Dr. V.U.K Sastry Dean R&D, Department of Computer
More informationEEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming
EEL 6266 Power System Operaton and Control Chapter 3 Economc Dspatch Usng Dynamc Programmng Pecewse Lnear Cost Functons Common practce many utltes prefer to represent ther generator cost functons as sngle-
More informationProblem Set 9 Solutions
Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem
More informationFeature Selection: Part 1
CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?
More informationAn Interactive Optimisation Tool for Allocation Problems
An Interactve Optmsaton ool for Allocaton Problems Fredr Bonäs, Joam Westerlund and apo Westerlund Process Desgn Laboratory, Faculty of echnology, Åbo Aadem Unversty, uru 20500, Fnland hs paper presents
More informationDecentralized Adaptive Control for a Class of Large-Scale Nonlinear Systems with Unknown Interactions
Decentrazed Adaptve Contro for a Cass of Large-Scae onnear Systems wth Unknown Interactons Bahram Karm 1, Fatemeh Jahangr, Mohammad B. Menhaj 3, Iman Saboor 4 1. Center of Advanced Computatona Integence,
More information4DVAR, according to the name, is a four-dimensional variational method.
4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The
More informationBackpropagation Based Training Algorithm for Takagi - Sugeno Type MIMO Neuro-Fuzzy Network to Forecast Electrical Load Time Series
Backpropagato Based Trag Agorthm for Takag - Sugeo Type IO Neuro-Fuzzy Network to Forecast Eectrca Load Tme Seres Aoy Kumar Pat, ember, IEEE, ad Gerhard Doedg deeg GmbH, Kurfuersteaee - 30, D-8 Breme,
More informationPredictive Analytics : QM901.1x Prof U Dinesh Kumar, IIMB. All Rights Reserved, Indian Institute of Management Bangalore
Sesson Outlne Introducton to classfcaton problems and dscrete choce models. Introducton to Logstcs Regresson. Logstc functon and Logt functon. Maxmum Lkelhood Estmator (MLE) for estmaton of LR parameters.
More informationChapter 13: Multiple Regression
Chapter 13: Multple Regresson 13.1 Developng the multple-regresson Model The general model can be descrbed as: It smplfes for two ndependent varables: The sample ft parameter b 0, b 1, and b are used to
More informationFUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM
Internatonal Conference on Ceramcs, Bkaner, Inda Internatonal Journal of Modern Physcs: Conference Seres Vol. 22 (2013) 757 761 World Scentfc Publshng Company DOI: 10.1142/S2010194513010982 FUZZY GOAL
More informationAPPENDIX A Some Linear Algebra
APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,
More informationMLE and Bayesian Estimation. Jie Tang Department of Computer Science & Technology Tsinghua University 2012
MLE and Bayesan Estmaton Je Tang Department of Computer Scence & Technology Tsnghua Unversty 01 1 Lnear Regresson? As the frst step, we need to decde how we re gong to represent the functon f. One example:
More information= z 20 z n. (k 20) + 4 z k = 4
Problem Set #7 solutons 7.2.. (a Fnd the coeffcent of z k n (z + z 5 + z 6 + z 7 + 5, k 20. We use the known seres expanson ( n+l ( z l l z n below: (z + z 5 + z 6 + z 7 + 5 (z 5 ( + z + z 2 + z + 5 5
More information