Fachbereich Informatik
Johann Wolfgang Goethe-Universität Frankfurt am Main

The Principal Independent Components of Images

B. Arlt, R. Brause
Fachbereich Informatik, Robert-Mayer-Straße, Frankfurt am Main

Abstract

Classically, encoding of images by only a few, important components is done by the Principal Component Analysis (PCA). Recently, a data analysis tool called Independent Component Analysis (ICA) for the separation of independent influences in signals has found strong interest in the neural network community. This approach has also been applied to images. Whereas the approach assumes continuous source channels mixed up to the same number of channels by a mixing matrix, we assume that images are composed of only a few image primitives. This means that for images we have fewer sources than pixels. Additionally, in order to reduce unimportant information, we aim only for the most important source patterns with the highest occurrence probabilities or biggest information, called Principal Independent Components (PIC). For the example of a synthetic picture composed of characters this idea gives us the most important ones. Nevertheless, for natural images where no a-priori probabilities can be computed this does not lead to an acceptable reproduction error. Combining the traditional principal component criteria of PCA with the independence property of ICA we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon's rate distortion theory.

Keywords: Principal Component Analysis PCA, Independent Component Analysis ICA, Principal Independent Component Analysis PICA, Rate Distortion Theory

1 Introduction

One of the most interesting and ambitious properties of artificial neural networks is grounded in the active information processing of real world data: the unsupervised analysis of signals.

1.1 Principal components and PCA

An interesting approach has been developed throughout the recent years: the linear transformation of the input space to the base of principal components, which minimizes the mean squared error when dropping some of the transformed channels. This transformation, called Principal Component Analysis (PCA) and obtained by aligning the base vectors to the directions of maximal variance, is identical to a discrete Karhunen-Loève or Hotelling transformation. Here, we decompose the n signals (x_1,…,x_n)^T ≡ x by a linear transform

    y = Wx   with   y = (y_1,…,y_n)^T                                   (1)

such that a subset y' = (y_1,…,y_m)^T of m < n components, used with the matrix W_m^(-1) (consisting of m columns of W^(-1)) to reconstruct the original signals by x' = W_m^(-1) y', obtains the smallest mean squared error ⟨(x − x')²⟩ = min in the reconstruction process. It is well known that this is the case for the projections of the input x on the m eigenvectors with the biggest eigenvalues λ_1,…,λ_m of the covariance matrix C_xx = ⟨(x − ⟨x⟩)(x − ⟨x⟩)^T⟩. Thus, the variance of a component y_i is given by λ_i = ⟨(y_i − ⟨y_i⟩)²⟩ = σ_i², and the rows of W meet the conditions for orthonormality

    w_i^T w_i = 1   and   w_i^T w_j = 0   for i ≠ j                     (2)

We see that the whole signal is decomposed by a non-scaling linear transformation into different directions w_i. To obtain the smallest error of reconstruction, we use the directions with the biggest variances. So, the components (and the corresponding directions or base vectors) are ordered according to a criterion. The selected m ones are called the principal components. Many neural networks have already been proposed which let their associated weight vectors converge to the base of principal components, the eigenvectors of the input covariance matrix, by proper learning rules, see e.g. [OJA92], [BRA93a]. For images, the search for the principal components (called "transform image coding") can be organized as a local process. Thus, a whole picture can be encoded in parallel by many neurons on a sensory plane with local interactions (e.g. lateral inhibition), using only the self-organized principal components [BRA96] obtained by analog circuits [BRA94].
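A minimal sketch (in Python/NumPy; our own illustration, not part of the original paper) of the encoding and reconstruction described by eqs.(1) and (2): the eigenvector basis is computed from sample data, the m eigenvectors with the biggest eigenvalues are kept, and the reconstruction error is measured. All names are chosen for this example only.

```python
import numpy as np

def pca_encode_decode(X, m):
    """Project the rows of X onto the m principal eigenvectors and reconstruct.

    X: (N, n) array of N samples with n channels; m: number of kept components.
    Returns the reconstruction and its mean squared error.
    """
    mean = X.mean(axis=0)
    Xc = X - mean                                # center the data
    C = Xc.T @ Xc / len(X)                       # covariance matrix C_xx
    eigval, eigvec = np.linalg.eigh(C)           # eigenvalues in ascending order
    W = eigvec[:, ::-1].T                        # rows = eigenvectors, biggest eigenvalue first
    Y = Xc @ W[:m].T                             # y = W x, keep only m components
    X_rec = Y @ W[:m] + mean                     # reconstruction (W is orthonormal)
    mse = np.mean((X - X_rec) ** 2)
    return X_rec, mse

# toy usage: 1000 samples of 16 correlated channels, keep 4 components
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4)) @ rng.normal(size=(4, 16))
_, mse = pca_encode_decode(X, m=4)
print(mse)   # close to zero, since this toy data has rank 4
```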

1.2 Independent components and ICA

The approach of PCA is only optimal for the performance measure of the mean squared error and assumes no specific information about the higher order statistics of the observed signals. If we want to maximize other measures of information processing, for instance the information capacity of the encoding coefficients (i.e. the output signals of the transforming system), we have to obtain other properties. Here, the mutual information H(y_1;y_2;…;y_n) between the output channels is a good measure for an efficient output coding. The output information H(y_1,y_2) of two channels y_1 and y_2

    H(y_1,y_2) = H(y_1) + H(y_2) − H(y_1;y_2)

becomes maximal if for constant channel information H(y_i) the mutual information becomes minimal. This is the case if H(y_1,y_2) = H(y_1) + H(y_2), which means for the probability density functions (pdf) p(y_1,y_2) = p(y_1)p(y_2). Thus, the demand for minimal transinformation is identical with the demand for independent channel pdfs ("factorial code"). For n channels this means

    p(x) = p(x_1)p(x_2)…p(x_n)                                          (3)

Let us assume that all observed signals x = (x_1,…,x_n)^T are derived from a linear mixture of n unknown independent source signals s = (s_1,…,s_n)^T with an unknown mixing matrix M with rows m_i

    x = Ms,   x_i = m_i s                                               (4)

How can the original source signals be reconstituted? Another linear transformation with a matrix B

    y = Bx = BMs                                                        (5)

might obtain the sources if

    y = s  ⇔  BM = I                                                    (6)

i.e. the demixing matrix B becomes the inverse of M. The problem of finding the demixing matrix is known as the problem of "blind separation of sources" or "Independent Component Analysis" (ICA) and is a fast growing topic in neural network research, see e.g. [ACY96], [BUR92], [COM94], [DEO96], [HYO96]. The independent signals are obtained by using objective functions (called contrast functions [COM94]). One of them is the demand for minimal transinformation between the signals and can be used to obtain learning rules for the unknown base vectors of the inverse transformation B of ICA, see [ACY96]. There are several conditions involved in the demixing process in order to get the source signals (see [COM94]):

- The mixing matrix M must be regular for the inverse B = M^(-1) to exist with Bx = BMs = M^(-1)Ms = s. This means that we have to have the same number n of sources as of observed mixtures.

- The source is determined regardless of the order (index) of the channels in s. This is due to the fact that the crucial condition for independence, the factorization p(s) = p(s_1)p(s_2)…p(s_n) of the probability distribution function (pdf) by the marginal pdfs, is still valid for p(s) = p(s_1)p(s_n)…p(s_2) or any other permutation of the indices.

- In eq.(4), the same mixture x is produced if we scale a source s_i by a factor c and the corresponding column M_i of M by a factor 1/c. Thus, without further knowledge, we cannot determine the scale of the source signals: the ICA is an "ill-posed problem".

- For two Gaussian sources s_1 and s_2 a simple decorrelation procedure (PCA) gives us independent sources. Nevertheless, it is well known that the PCA decorrelation is done by an orthogonal matrix composed of the eigenvectors of C_xx, see eq.(2). Since we assume M to be generally not orthogonal (i.e. it performs more than a rotation), we cannot demix the signals just by a rotation: the demixing is not correct. The operation of separating the signals into s_1 and s_2 is not unique; without any further information the ambiguity for Gaussian signals cannot be resolved. For additional Gaussian sources, this problem aggravates. This means that for successful demixing at most one source can have a pdf with Gaussian characteristic.

Thus, we cannot expect to recover the exact source signals s but only their scaled and permutated versions y = DPs with a diagonal scaling matrix D and a permutation matrix P. This relaxes the conditions on the demixing matrix B in eq.(5) to

    BM = DP                                                             (7)

Here, B is in general not equal to M^(-1), although in the following we still call B "the inverse matrix of M" and y "the source signals". In order to enable a solution it is convenient to assume that the recovered source signals y_i have unit variance σ_i² = 1, since D is unknown. Furthermore we assume that the y_i are centered, i.e. ⟨y_i⟩ = 0. This requires the demixing process to center the observed signals x as well, for their average ⟨x⟩ might be non-zero. Consequently, we get the relation

    y = B(x − ⟨x⟩) = BM(s − ⟨s⟩) = DP(s − ⟨s⟩)                           (8)

The standard ICA procedure consists mainly of the following stages (shown in Fig.1): the observed signals x are diminished by their first and second moments, i.e. they are centered, decorrelated and whitened to unit variance by a linear transform with a matrix W_PCA, and then separated by their higher moments in the last stage by another linear transform W_ICA. The latter, which uses the preprocessed input, is often referred to as "the ICA matrix".

Fig.1 The processing stages in ICA: the sources s are mixed by M to x, which is centered (x − ⟨x⟩), whitened by W_PCA to v, and made independent by W_ICA to y = B(x − ⟨x⟩).
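To make the stages of Fig.1 concrete, here is a small sketch (our own illustration, not the authors' implementation) of the first two stages, centering and whitening with W_PCA = Λ^(-1/2) E^T; the rotation W_ICA would then be found by one of the cited ICA learning rules.

```python
import numpy as np

def center_and_whiten(X, eps=1e-12):
    """Center the samples and whiten them with W_PCA = diag(lambda)^(-1/2) E^T.

    X: (N, n) array of observed mixtures. Returns the whitened data V, W_PCA and the mean.
    """
    mean = X.mean(axis=0)
    Xc = X - mean                                   # centering stage
    C = Xc.T @ Xc / len(X)                          # covariance C_xx
    lam, E = np.linalg.eigh(C)                      # eigenvalues and eigenvectors
    W_pca = (E / np.sqrt(lam + eps)).T              # rows scaled by lambda^(-1/2)
    V = Xc @ W_pca.T                                # whitened signals v
    return V, W_pca, mean

# toy usage: mix three independent uniform sources and whiten the mixtures
rng = np.random.default_rng(1)
S = rng.uniform(-1, 1, size=(5000, 3))
X = S @ rng.normal(size=(3, 3)).T                   # x = M s
V, W_pca, _ = center_and_whiten(X)
print(np.round(V.T @ V / len(V), 2))                # approximately the identity matrix
```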

So far we have

    y = B(x − ⟨x⟩) = W_ICA W_PCA (x − ⟨x⟩)                              (9)

with B = W_ICA W_PCA. If we use a PCA process for the decorrelation in W_PCA, we can additionally scale the rows w_i of W_PCA, which are the eigenvectors of C_xx, by their eigenvalues, w_i → w_i λ_i^(-1/2), such that |w_i| = λ_i^(-1/2). This normalizes the variance of v because we have

    ⟨v_i²⟩ = ⟨(w_i^T x)²⟩ = w_i^T ⟨x x^T⟩ w_i = w_i^T C_xx w_i = w_i^T w_i λ_i = 1

The whitening process gives us an advantage: for whitened, decorrelated input with ⟨vv^T⟩ = I the ICA matrix W_ICA is orthogonal, i.e. just a rotation of the base of the input space. This can be easily shown: with v ≡ W_PCA x and the assumptions of centered and independent sources having unit variance (i.e. ⟨y⟩ = 0 and ⟨yy^T⟩ = I), we get

    I = ⟨yy^T⟩ = W_ICA ⟨vv^T⟩ W_ICA^T = W_ICA W_ICA^T

Thus, the inverse matrix W_ICA^(-1) is identical to the transposed matrix W_ICA^T, which implies that W_ICA has to be orthogonal.

The classical ICA encoding system above can be trained using separate layers of neural networks. The first stage is obtained by learning the expectation value as an offset in order to center the input:

    ⟨x⟩(t+1) = ⟨x⟩(t) + 1/t (x(t) − ⟨x⟩(t))

For the second stage a standard PCA learning rule can be used, see e.g. [OJA92], coupled with the rescaling described above. Otherwise, special whitening learning rules can be used, see [SIL91], [PLUM93], [BRA98]. For the third stage, the ICA layer, one of the ICA learning rules may be taken, e.g. [HYO96].

Now, for encoding pictures by a decomposition with the most important, independent components we will run into trouble. Let us assume that we have just 4 independent visual objects on a picture of many pixels. Certainly, we want to obtain a significantly smaller number of outputs to describe the picture than the number of pixels. But if we use fewer neurons for data compression, this comes into conflict with the demand for the same number of sources and mixtures, the first condition for ICA cited above. What can we do? One common solution, taken in [BES96] and [OLS96a,b], is to cut the images into smaller patches, say 12×12 = 144 pixels, present many patches of many images (preferably natural scenes) and then make an ICA of the 144 channels. This gives us 144 independent "base pictures". Nevertheless, not all ICA components are equally important. Some of them are just spurious patterns with a low occurrence probability. Since we want to obtain a stable code which covers most of the input data, we aim for the m ICA components with the highest occurrence probability. Here, we encounter a serious problem: how can we order the components, e.g. by an occurrence probability, which the ICA model so far did not provide? In standard ICA applications, all (time series) channels are always present, i.e. equally probable. However, this is not the case for real world objects. In order to cover this aspect also, we have to develop a new image model which is composed of signals and events.

2 An event-oriented image model

Let us model the images as a superposition of many small, independent image patches, just like a single neuron of the retina sees the world through a very restricted focus. Our task consists now of finding the most probable ones.

2.1 Image event primitives, signals, and ICA

As an introductory example, let us consider as input events several pictures composed of four pixels. The four sample pictures are shown in Fig.2. The black pixels are coded as −1, the white ones as +1 and the gray ones as zero.

Fig.2 The four sample pictures: M_1 = (1,−1,0,0)^T, M_2 = (0,1,1,1)^T, M_3 = (−1,1,1,0)^T, M_4 = (0,−1,−1,0)^T

In the following state-time diagram (Fig.3) four events are presented independently. Here, each event is denoted by two states, present (on) or not present (off). The time order of the independent events is assumed to be random.

Fig.3 The state-time diagram of input events: each picture is either "on" or "off" at every time step t.
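The state-time diagram of Fig.3 can be simulated directly. The sketch below is our own illustration: the occurrence probabilities of the four events are made-up values, and the mixing matrix is taken from Fig.2.

```python
import numpy as np

rng = np.random.default_rng(2)

# columns = the four sample pictures of Fig.2 (pixel values -1, 0, +1)
M = np.array([[ 1,  0, -1,  0],
              [-1,  1,  1, -1],
              [ 0,  1,  1, -1],
              [ 0,  1,  0,  0]])

# assumed occurrence probabilities P(omega_i) of the four independent events
p = np.array([0.4, 0.3, 0.2, 0.1])

T = 10000
S = (rng.uniform(size=(T, 4)) < p).astype(float)   # s_i = 1 if event i is "on" at time t
X = S @ M.T                                        # x = M s for every time step (eq. 4)

print(S.mean(axis=0))    # empirical <s_i>, close to the chosen P(omega_i)
print(X[:5])             # the first few observed pixel vectors
```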

Each event ω_i manifests itself on all four pixels or four channels. Assigning a signal vector s_ω to the event ω_i = "picture i appears", we note the events by the unit vectors

    s_ω1 = (1,0,0,0)^T,  s_ω2 = (0,1,0,0)^T,  s_ω3 = (0,0,1,0)^T,  s_ω4 = (0,0,0,1)^T

The picture itself can be described by the influence of the event on the pixels. Formally, we can write this as a linear mixture performed according to eq.(4) by the mixing matrix M = (M_1, M_2, M_3, M_4), i.e. the matrix whose columns are the four sample pictures of Fig.2. The superposition of the influences can be observed at each pixel as the time series of superposed signals. In Fig.4 the intensity of all four pixels is shown for the introductory example.

Fig.4 The time series of the pixel channels

In Fig.5 the corresponding images are shown.

Fig.5 The six sample pictures: x_1 = (1,−1,0,0)^T, x_2 = (0,0,0,1)^T, x_3 = (0,0,1,0)^T, x_4 = (0,−1,−1,0)^T, x_5 = (0,1,1,1)^T, x_6 = (0,−1,0,0)^T

Since we assume the four events to be independent, we can see our task as not only separating the four channels of the source signal s from the linear mixture x without any knowledge about the mixture matrix M, but also deducing the occurrence probabilities P(ω_i) for the independent events ω_i.

2.2 Ordering the Independent Components

To introduce the main idea for computing the probabilities of the principal independent components, we notice that the source signals are defined as s_i = 1 for ω_i and s_i = 0 otherwise. Thus, we have as the average source signal ⟨s_i⟩

    ⟨s_i⟩ = P(s_i=1)·1 + P(s_i=0)·0 = P(ω_i)                            (10)

The variance σ_si² of the source signal s_i is

    σ_si² = ⟨(s_i − ⟨s_i⟩)²⟩ = ⟨s_i²⟩ − 2⟨s_i⟩² + ⟨s_i⟩² = ⟨s_i²⟩ − ⟨s_i⟩²
          = P(s_i=1)·1 + P(s_i=0)·0 − ⟨s_i⟩² = ⟨s_i⟩ − ⟨s_i⟩² = ⟨s_i⟩(1 − ⟨s_i⟩)        (11)

Suppose that we have already computed the demixing matrix B satisfying eq.(8). The recovered source signals y are derived from the centered source signals s by scaling and permutation with a matrix A ≡ BM = DP. As stated in section 1.2 it is impossible to determine the permutation matrix P, so we assume P ≡ I and A ≡ D. For one component y_i we get

    y_i = a_i (s_i − ⟨s_i⟩)                                             (12)

where a_i denotes the corresponding diagonal, non-zero coefficient of A. Since y_i is centered and has unit variance σ_yi², the following relation holds:

    1 = σ_yi² = ⟨y_i²⟩ = ⟨(a_i (s_i − ⟨s_i⟩))²⟩ = a_i² σ_si² = a_i² ⟨s_i⟩(1 − ⟨s_i⟩)    (13)

The average ⟨s⟩ of the source signals is transformed by the mixing matrix to the observed average signal

    ⟨x⟩ = M⟨s⟩                                                          (14)

and by the demixing matrix B to the average transform output

    ⟨y⟩ = B⟨x⟩ = BM⟨s⟩ = A⟨s⟩                                            (15)

Note that here ⟨y⟩ is obviously non-zero since we omitted the centering stage. Therefore we have

    ⟨y_i⟩ = a_i ⟨s_i⟩                                                   (16)

Combining eqs.(13) and (16) gives us the relation between the observed, non-centered output and the needed occurrence probabilities

    1 = (⟨y_i⟩ / ⟨s_i⟩)² ⟨s_i⟩(1 − ⟨s_i⟩)

or

    P(ω_i) = ⟨s_i⟩ = ⟨y_i⟩² / (1 + ⟨y_i⟩²)                              (17)

By this we have a measure to order the obtained ICA components according to their associated occurrence probabilities P(ω_1) > P(ω_2) > … > P(ω_m). Since the most probable events should not be neglected at all, they are the most important ones. There is also a correspondence to the average information of each component. With the definition of the average Shannon information

    H(y) = − Σ_{α∈Ω} P(α) log(P(α))                                     (18)

and setting the state space to Ω ≡ {ω_i, ¬ω_i} we obtain the marginal information for one recovered source y_i

    H(y_i) = − P(ω_i) log(P(ω_i)) − (1 − P(ω_i)) log(1 − P(ω_i))        (19)

By assigning an order to the components according to their information we define with H_1 ≥ H_2 ≥ … ≥ H_m another order. How is this order related to the previous criterion of maximal occurrence probability? In Fig.6 the information of one component is shown as a function of its probability.

Fig.6 The information of one component as a function of its occurrence probability

Since the information is a concave function, probability and information are both monotonically increasing up to the maximum, which is located at

    ∂H(p_i)/∂p_i = − log(p_i) + log(1 − p_i) = 0  ⇒  p_i = 0.5

Thus, if we order the components in this range according to

    P(ω_i) ≤ P(ω_j) ≤ 0.5  ⇒  H_i ≤ H_j                                 (20)

we get the desired decreasing entropy order stated above.
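As a numerical check of eqs.(17) and (19), the following sketch (ours, with made-up event probabilities and an assumed perfectly learned demixing) estimates the occurrence probabilities and the marginal information from the un-centered outputs and orders the components by them.

```python
import numpy as np

def order_by_occurrence(Y_uncentered):
    """Estimate P(omega_i) = <y_i>^2 / (1 + <y_i>^2)   (eq. 17)
    and H(y_i) = -p log p - (1-p) log(1-p)             (eq. 19),
    then return the component indices sorted by decreasing probability.

    Y_uncentered: (T, m) outputs y = B x of the *non-centered* mixtures.
    """
    y_mean = Y_uncentered.mean(axis=0)
    p = y_mean ** 2 / (1.0 + y_mean ** 2)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return p, h, np.argsort(-p)

# toy check with known sources: s_i is 1 with probability p_true[i], else 0
rng = np.random.default_rng(3)
p_true = np.array([0.35, 0.2, 0.1, 0.05])
S = (rng.uniform(size=(100000, 4)) < p_true).astype(float)
# assume a perfectly learned demixing: y_i = a_i * s_i with a_i = 1 / sigma_si,
# i.e. the output of B applied to the mixtures *without* the centering stage
a = 1.0 / np.sqrt(p_true * (1 - p_true))
Y_uncentered = S * a
p_est, h_est, order = order_by_occurrence(Y_uncentered)
print(np.round(p_est, 3), order)   # close to p_true; indices already sorted 0,1,2,3
```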

3 Simulations and results

In this section we want to visualize the theoretical results of the previous section and show the validity of our image model.

3.1 Recovering the occurrence probabilities of events

For a start we want to show that it is possible to obtain the occurrence probabilities of independent events. For this purpose we use very basic image events. We chose 16 letters A-P, represented by a very coarse matrix of 8×8 pixels, see Fig.7.

Fig.7 The image encoding of the events

For each one of 4096 training patterns, a random linear combination of the letters was computed and presented to a network of 16 neurons. In Fig.8 fifteen input sample pictures out of the 4096 are shown.

Fig.8 Sample input pictures of mixed events

The input events are transformed to decorrelated components by a PCA stage. Initially, we used the full alphabet, but after the PCA stage some components with zero eigenvalues were observed. This means that some letters of the alphabet can be decomposed into a linear combination of others. To obtain really independent sources we chose the subset of 16 letters shown in Fig.7. The eigenimages formed in the PCA stage, i.e. the rows of matrix W_PCA, correspond to the decorrelated components found by the PCA stage and are shown in Fig.9.

Fig.9 The eigenimages of the input pictures
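The observation that some letters are linear combinations of others can be checked by a simple rank test of the source images. The sketch below is only an illustration and uses random stand-in bitmaps instead of the actual letter images.

```python
import numpy as np

rng = np.random.default_rng(4)

# stand-in "alphabet": 18 random binary 8x8 bitmaps plus two images that are
# deliberately constructed as linear combinations of the others
independent = rng.integers(0, 2, size=(18, 64)).astype(float)
dependent = np.vstack([independent[0] + independent[1] - independent[2],
                       0.5 * (independent[3] + independent[4])])
alphabet = np.vstack([independent, dependent])       # 20 x 64

print(alphabet.shape[0], np.linalg.matrix_rank(alphabet))   # 20 images, rank 18

# only `rank` of the source images are linearly independent; the PCA stage
# detects this as additional zero eigenvalues, and an independent subset
# (the 16 letters of Fig.7) has to be chosen before the ICA stage
```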

Here, we observed a near-Gaussian probability distribution of the signal values, see Fig.10.

Fig.10 The probability distribution of four image signals (w.pca1 - w.pca4) obtained after the PCA and whitening stages

To obtain the histograms, the 4096 samples were quantized into 256 intervals on the horizontal axis. After the ICA stage, we recovered the source signals. Since we want to concentrate on the topic of principal components we do not describe the algorithms used to obtain the PCA and ICA in detail. Nevertheless, it should be mentioned that the statistical nature of the source signals presented a severe problem for some algorithms. For the concrete events of this section, we have exclusively bimodal source distributions with negative kurtosis, see Fig.13. Our simulations showed that some of the algorithms had problems with bimodal images, i.e. negative kurtosis [BES96], and some with the natural images of positive kurtosis [ACY96]; they did not converge for these mixtures. In order to obtain the desired results, we used versions of the algorithms described in [HYO96]. The inverse of the resulting matrix B is the mixing matrix M, containing the letters. The images corresponding to the B matrix are shown in Fig.11, the inverted B matrix gives us the reconstructed source images in Fig.12.

Fig.11 The inverse source images obtained after the ICA stage

Fig.12 The source images obtained after the ICA stage

We see that neither the initial order nor the sign of the sources were preserved. The occurrence probability distribution of four components is shown in Fig.13.

Fig.13 The probability distribution of four image signals (ICA1 - ICA4) obtained after the ICA stage

Ideally, the peaks seen in Fig.13 are just spikes with zero variance. Thus, the function values in a small interval around each local average can be summed up in the center of the associated interval, and set to zero afterwards. This kind of quantization should give us a better estimate of the original probability distribution.
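The marginal entropies used below are histogram estimates over 256 intervals; a minimal sketch of such an estimator (ours) is:

```python
import numpy as np

def marginal_entropy(samples, bins=256):
    """Estimate the entropy (in bits) of one channel from its samples by
    quantizing the value range into `bins` intervals, cf. eq.(18)."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()   # 0 * log 0 is treated as 0
    return -np.sum(p * np.log2(p))

# usage: a near-Gaussian channel needs more bits than a spiky bimodal one
rng = np.random.default_rng(5)
gaussian_like = rng.normal(size=4096)
bimodal = (rng.uniform(size=4096) < 0.3).astype(float) + 0.01 * rng.normal(size=4096)
print(marginal_entropy(gaussian_like), marginal_entropy(bimodal))
```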

The initial and estimated occurrence probabilities of the source letters are listed in Table 1. The error is due to the imperfectly learned ICA stage.

    source letter   probability used   probability observed   error
    D               ,715               ,73                    -,17
    F               ,696               ,73                    -,36
    I               ,743               ,695                   ,49
    B               ,69                ,673                   ,19
    G               ,577               ,68                    -,51
    M               ,64                ,618                   ,6
    L               ,5                 ,534                   -,14
    O               ,538               ,53                    ,6
    C               ,43                ,484                   -,61
    A               ,49                ,466                   ,7
    J               ,444               ,463                   -,19
    H               ,75                ,396                   -,11
    E               ,454               ,36                    ,9
    N               ,48                ,34                    ,66
    K               ,415               ,3                     ,9
    P               ,341               ,31                    ,31

Table 1 The source letters, their associated and their recovered occurrence probabilities

Now, our initial goal is still the efficient encoding of the image signals. This is obtained by reducing the marginal entropy of the channels. Table 2 shows the approximated average information, the entropy, of the first four channels before and after the ICA stage (calculated from the probability distributions in Fig.10 and Fig.13): the observed entropy of the whitened PCA components w.pca1 - w.pca4, the observed entropy of the ICA components ICA1 ('J'), ICA2 ('K'), ICA3 ('F'), ICA4 ('M'), and the original entropy according to eq.(19).

Table 2 The marginal entropy of four channels (in bits)

Obviously, minimizing the mutual information dramatically reduces the single channel information. Since the probability distributions of the ICA components are slightly blurred, their marginal entropy is still higher than the original entropy according to eq.(19). However, by applying a rigorous quantization strategy we should be able to achieve a further reduction as stated above.

In linear image coding and restoration, we know that by definition the principal decorrelated components obtained after the PCA stage yield the minimal mean squared error (MSE). Thus, we cannot expect that the principal independent components will give us a smaller MSE. Nevertheless, we can expect that they can be encoded with a smaller number of bits. Now, for further considerations, let us change to natural images.

3.2 Reconstructing natural images

Image encoding by a very small number of coefficients is still a demanding task and has a lot of applications. Perhaps, by using the ICA approach, we might obtain an encoding with a smaller number of components. For this purpose, let us regard the independent components of natural images. The method to obtain these components is similar to the one in conventional transform coding: the whole image is split into subimages containing n pixels, and each subimage is used as one training sample. In our simulations the picture called "Cactus" (Fig.14) was divided into 4543 subimages (size: 8×8 = 64 pixels) which were randomly chosen as training samples.

Fig.14 The training picture Cactus

First, we centered and decorrelated the 64 components of the subimage ensemble. The obtained PCA eigenimages are shown in Fig.15. After this, the components are transformed linearly. The transform coefficients are updated by an iterative ICA learning algorithm, giving us the matrix W_ICA used in eq.(9). The columns of matrix B are shown as images in Fig.16. The inverse of B is the mixing matrix M. The columns of this matrix are the source images, shown in Fig.17. The source images obtained are very similar to those already known in the literature, see e.g. [BES96], [OLS96b].
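The training set for a natural image can be built as in conventional transform coding; the following sketch (ours; the image is a synthetic stand-in for Cactus and the number of patches is arbitrary) draws random 8x8 subimages as training samples.

```python
import numpy as np

def random_patches(image, n_patches, size=8, seed=0):
    """Draw n_patches random size x size subimages from a gray-value image
    and return them as flattened row vectors (one training sample each)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    rows = rng.integers(0, h - size, n_patches)
    cols = rng.integers(0, w - size, n_patches)
    return np.stack([image[r:r + size, c:c + size].ravel()
                     for r, c in zip(rows, cols)])

# usage with a smooth synthetic image instead of the Cactus picture
rng = np.random.default_rng(6)
image = rng.normal(size=(256, 256)).cumsum(axis=0).cumsum(axis=1)
X = random_patches(image, n_patches=4000)     # (4000, 64) training matrix
print(X.shape)
```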

Fig.15 The PCA eigenimages of Cactus

Fig.16 The base ICA images of Cactus

Fig.17 The source images of Cactus

Now, what are the most important events? Here, the measured probability distributions of the sources were not bimodal. This excludes our event model of section 3.1 for calculating the occurrence probabilities and therefore prevents an order of importance of the sources for analyzing the observed situations by important events. Nevertheless, we can still use the marginal information to compute the order of the components instead. Interestingly, the initial order given by the ICA algorithm is characterized by increasing entropy. This is due to the goal of our (sequential) ICA algorithm, which tries to minimize the marginal entropy for the first component by choosing the ICA component which differs the most from a Gaussian distribution, i.e. which has the smallest available entropy. To answer the basic question whether there are principal independent components which contain considerably more or less average information than others, we calculated the marginal entropy of all components in the same way as in section 3.1. The cumulated marginal entropy of the first k whitened PCA components (in order of decreasing eigenvalues) and ICA components (in order of increasing entropy) is shown in Fig.18.

Fig.18 The cumulated marginal entropy of the first k whitened PCA components (dotted line) and ICA components

The difference between the two cumulation functions can hardly be seen: the marginal entropy of the ICA components is just slightly smaller than that of the whitened PCA components. Furthermore, the cumulated entropy of both the PCA and the ICA grows approximately proportionally. This means that especially all the ICA components of the image have nearly the same information; there are no components which differ much from the others. If not in occurrence probability or average information, are there ICA components which differ in importance? Are there some which are more important than the others, so we have to concentrate on them?
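The curves of Fig.18 are running sums of per-component histogram entropies; a small sketch of such a comparison (ours, on toy channels instead of the Cactus coefficients) is:

```python
import numpy as np

def cumulated_entropy(Y, bins=256):
    """Cumulated marginal histogram entropy (bits) of the columns of Y, in column order."""
    def h(col):
        counts, _ = np.histogram(col, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log2(p))
    return np.cumsum([h(Y[:, i]) for i in range(Y.shape[1])])

# toy usage: four channels with clearly different distributions
rng = np.random.default_rng(7)
Y = np.column_stack([
    rng.normal(size=4096),                           # close to maximal entropy
    rng.laplace(size=4096),                          # heavier tails
    (rng.uniform(size=4096) < 0.2).astype(float),    # rare binary event channel
    rng.normal(size=4096) ** 3,                      # strongly peaked
])
per_channel = [cumulated_entropy(Y[:, [i]])[0] for i in range(Y.shape[1])]
order = np.argsort(per_channel)                      # increasing marginal entropy
print(np.round(cumulated_entropy(Y[:, order]), 2))   # ordered as for the ICA curve of Fig.18
print(np.round(cumulated_entropy(Y), 2))             # same channels in the original order
```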

3.3 Component ordering by information

One criterion for importance is the quality of the image reconstructed by the remaining components. In Fig.19 a cutout of the original image Cactus is shown.

Fig.19 The cutout of the image Cactus

The cutout, reconstructed by the 16 ICA components with the smallest average information and by the 16 ICA components with the biggest average information, can be seen in Fig.20 and Fig.21.

Fig.20 The reconstruction by the first 16 ICA components

Fig.21 The reconstruction by the last 16 ICA components

In both cases, the reconstruction quality is not acceptable, especially when compared with the reconstruction result of the first 16 PCA components, shown in Fig.22.

Fig.22 The reconstruction by the first 16 PCA components

It seems that the pure information criterion is not appropriate for image reconstruction. In contrast to this, the PCA transform seems to give better results. Reconstructing the image by its first k components and comparing it with the original one gives us the average error for neglecting the n − k components. Certainly, by using the k eigenimages of the PCA stage with the biggest eigenvalues, the mean squared error (MSE) is minimized, because the PCA operation is defined to obtain the smallest possible MSE. Are there principal ICA components which also minimize the error? Let us compare the MSE contribution of the PCA components with that of the ICA components. In Fig.23, this is shown for the image Cactus. Obviously, using the components with the biggest entropy decreases the MSE significantly faster than using the ones with the smallest entropy. Certainly, the smallest MSE is produced using the PCA components (dotted line).

Fig.23 Decreasing the MSE by adding components
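The reconstruction experiments behind Fig.20-23 amount to zeroing the discarded coefficients before inverting the transform. The following generic sketch (ours) does this for an arbitrary invertible analysis matrix B:

```python
import numpy as np

def subset_mse(X, B, keep):
    """MSE of reconstructing the rows of X when only the coefficients listed
    in `keep` are retained; B is the analysis matrix (rows = basis vectors)."""
    mean = X.mean(axis=0)
    Y = (X - mean) @ B.T                   # analysis: y = B (x - <x>)
    Y_reduced = np.zeros_like(Y)
    Y_reduced[:, keep] = Y[:, keep]        # drop all other coefficients
    X_rec = Y_reduced @ np.linalg.inv(B).T + mean    # synthesis with M = B^(-1)
    return np.mean((X - X_rec) ** 2)

# toy usage: random invertible transform on low-rank "patches", keep 16 of 64 channels
rng = np.random.default_rng(8)
X = rng.normal(size=(2000, 8)) @ rng.normal(size=(8, 64))
B = rng.normal(size=(64, 64))
print(subset_mse(X, B, keep=np.arange(16)))
```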

3.4 Component ordering by virtual variance

How can we further improve the performance of the selected ICA components? The PCA sorting criterion is the decreasing value of the eigenvalues. Since we know that the eigenvalue λ_i = σ_i² = var(y_i) is equal to the variance of the component, we might order the ICA components also according to their variance. Here, we encounter a problem: the ICA transform is such that all variances of the components are made equal. How can we select the ones with the biggest variance? Inspecting the transform more closely we notice that the output variances are equal, but not the lengths of the corresponding basis vectors w_i of the ICA transform (rows of matrix W). To compare it to the PCA transform, which has unit length basis vectors, we have to normalize the ICA basis vectors. Thus, we might define a virtual variance of a component by

    var*(y_i) ≡ var(w_i^T x / |w_i|) = var(y_i) / |w_i|² = 1 / |w_i|²               (21)

Ordering the ICA components by this criterion, we obtain a better MSE-adapted reconstruction while preserving the performance of the cumulated entropy. In Fig.24 the best ICA ordering of Fig.23 is compared to the virtual variance ordering.

Fig.24 The MSE of the ICA ordering by virtual variance

To obtain an impression of the reconstruction quality, we present the reconstructed image cutout of Cactus using the 16 ICA components with the biggest virtual variance in Fig.25. Clearly, this ordering performs better than the two previous ones, but it is still inferior to the classical PCA approach.

Fig.25 The reconstructed image cutout

Now, without using any other image reconstruction quality measure (like, for instance, the psychophysiological approach, see e.g. [CHR90]), we ask: what can the ICA approach do for encoding and reconstructing images when the minimal MSE of the reconstruction is given by the number of PCA components?

3.5 Principal independent components and rate distortion theory

When we reduce the number of components in the transform approach for encoding images, we reduce the full space of image components (dimensions) to a subspace. The subspace of the ICA components is characterized by its information content, whereas the subspace of the PCA components is characterized by its low MSE reconstruction error. Now, if we cannot replace the principal components of PCA for obtaining a small MSE, what about reducing their encoding information by ICA? This idea can be pursued in two ways:

- Get the first k PCA components with an acceptable MSE. Then, by an ICA transform, we will get the same number of encoding coefficients but with less information, i.e. fewer encoding bits.

- For the same amount of encoding information as the k PCA components take, we can also get p more ICA-transformed PCA components. Since these p+k base vectors of the ICA transform span the same space as the p+k PCA components, the resulting image quality will be enhanced as if adding p more PCA components.

Thus our approach, starting with the search for independent image primitives, leads us to the error-bounded maximal information for each channel. This is not new: the approach of maximizing the information per time step in a channel when an upper bound for the error (more generally: for a distortion measure) exists or, vice versa, of minimizing the error for a channel with a constant information per time step is classically known as rate distortion theory [SHA49] and has a broad range of applications in the classical telecommunication area.
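A sketch of the first idea (ours; scikit-learn's FastICA is used only as a stand-in for the ICA learning rules cited in the paper): project the patches onto the first k principal components, rotate them by an ICA transform, and order the resulting components by the virtual variance of eq.(21).

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def pca_then_ica(X, k, random_state=0):
    """Project patches X (N, n) onto the first k principal components, find an ICA
    rotation of that k-dimensional subspace and order its basis vectors by the
    virtual variance var*(y_i) = 1 / |w_i|^2 of eq.(21)."""
    pca = PCA(n_components=k)
    Z = pca.fit_transform(X)                 # first k PCA coefficients per patch
    ica = FastICA(n_components=k, random_state=random_state, max_iter=1000)
    ica.fit(Z)
    W = ica.components_                      # rows = ICA basis vectors in the PCA subspace
    virtual_var = 1.0 / np.sum(W ** 2, axis=1)
    return pca, ica, np.argsort(-virtual_var)

# toy usage with random low-rank "patches"; on real image patches the ordered ICA
# coefficients would then be quantized and entropy-coded channel by channel
rng = np.random.default_rng(9)
X = rng.laplace(size=(4000, 10)) @ rng.normal(size=(10, 64))
pca, ica, order = pca_then_ica(X, k=8)
print(order)
```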

The first of the ideas above can be expanded if we order the k ICA components according to their decreasing virtual variance and encode only the first k' < k components with a low additional reconstruction error. This results in a further reduction of the number of encoding bits. To validate the latter idea we computed the ICA components of the first k PCA components (Fig.15) for k = 16,…,21. In Fig.26a,b the ICA base vectors and images can be seen for k = 17. Note that they are different from those obtained in Fig.16 because the data space is also different.

Fig.26 The 17 ICA base vectors (a) and the 17 ICA images (b)

Then the cumulated entropy was calculated and compared to the cumulated entropy of the first k whitened PCA components. We found that for the same information rate at most one additional ICA component can be encoded, with an error reduction of 5%. An example for 17 ICA components is shown in Fig.27: the reconstructed image is slightly better than the one of Fig.22.

Fig.27 The reconstruction by 17 ICA components

Until now we estimated the overall encoding amount by calculating the marginal entropy of the components without considering efficient quantization techniques. In the next section we shall take a closer look at this task.

3.6 Robust encoding of natural images with principal independent components

Suppose we have an image decomposed into subimages which we want to encode as efficiently as possible (see section 1.2). Since we are dealing with digitized images, the n components (pixels) x_i of an arbitrary subimage x = (x_1,…,x_n)^T are discrete, i.e. each x_i stores one of N different values. Thus there is a number N^n of different image patches or "image states" that can be assigned to x. Obviously, a lot of these image patches are unlikely to occur in natural image data (e.g. very noisy structures), while others are quite similar (differing in only a few pixels): we assume that we have to encode only a small number L_ε << N^n of "necessary" states of x which are sufficient to describe natural images at an acceptable error ε. L_ε is called the error-bounded descriptional complexity of the subimages [BRA93b]. The main idea of transform coding is to derive an optimized error-bounded representation y = (y_1,…,y_n)^T of x according to the image statistics, i.e. y has to encode the L_ε necessary states of x as efficiently as possible. Consequently, we demand the relation

    L_ε ≤ Π_i Q_i < N^n                                                 (22)

where Q_i denotes the number of different values that can be assigned to a component y_i (note that the marginal entropy of a component y_i will not increase if Q_i is decreased; furthermore, y_i will be set to a constant value, e.g. zero, if Q_i = 1). The determination of the Q_i at a given error ε is a non-trivial task which will not be addressed in this paper. Instead, from the opposite point of view, we ask for the reconstruction error ε at given numbers Q_i, i.e. at a given quantization of the y_i. In the previous section we used the (virtual) variance of a component y_i to decide whether its quantization number was set to Q_i = 256 or to Q_i = 1. But variance can tell us even more about "importance": in the case of the PCA or the DCT (Discrete Cosine Transform) it is well known that decreasing the quantization number Q_i (i.e. the resolution) of a component y_i with low variance reduces the overall encoding amount without affecting the reconstruction error perceived by the human visual system. This is why PCA or DCT components with lower variance are encoded at a coarser resolution, and the same should hold for ICA.
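For illustration (ours; not the exact procedure of the experiments described next), a uniform quantizer over a fixed interval that assigns each coefficient the arithmetical mean of its subinterval, together with the entropy of the quantized channel:

```python
import numpy as np

def uniform_quantize(coeffs, q_levels, interval):
    """Quantize coefficients into q_levels uniform subintervals of `interval`,
    replacing each value by the arithmetical mean (midpoint) of its subinterval."""
    lo, hi = interval
    edges = np.linspace(lo, hi, q_levels + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(coeffs, edges) - 1, 0, q_levels - 1)
    return centers[idx]

def entropy_bits(values):
    """Entropy (in bits) of the discrete quantized values."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# usage: a low-variance channel needs few bits, and fewer still at coarse resolution
rng = np.random.default_rng(10)
channel = 0.1 * rng.normal(size=4096)
interval = (-3.0, 3.0)                       # one common interval for all channels
for q in (256, 64, 8):
    qc = uniform_quantize(channel, q, interval)
    print(q, round(entropy_bits(qc), 2), round(float(np.mean((channel - qc) ** 2)), 6))
```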

To prove the idea we used the k = 16,…,21 ICA and PCA components of the previous section. The ICA components were scaled with the reciprocal norm of the associated base vectors to set their former unit variance to the virtual variance, in order to be comparable to the PCA components. Since the coefficients of both the PCA and the scaled ICA lie within an interval I = [I_min, I_max] ⊂ R, we uniformly divided I into Q subintervals I_q of the same length; the quantization was done by assigning each (PCA or ICA) coefficient c ∈ I_q the arithmetical mean of I_q. After this procedure we made the following observations:

- The boundaries I_min and I_max of I were given by the smallest and the biggest coefficient of the PCA component with the highest variance.

- The components y_i with low variance were encoded with a lower relative resolution than the components with high variance, because the lengths of the quantization intervals were not adapted to the range of the y_i.

We computed both the MSE and the cumulated entropy for different k and Q. Fig.28 shows the resulting MSE as a function of the entropy.

Fig.28 The MSE as a function of the cumulated entropy at different quantization levels Q, for the PCA and the ICA with different numbers k of components

For both the PCA and the ICA the functional dependency between reconstruction error and cumulated entropy is approximately the same if k is equal. As in section 3.5, for the same amount of cumulated entropy it is possible to encode about one ICA component more than PCA components, since the marginal entropy of the ICA components is lower. Note that for k = 16 components and quantization level Q = 256 the MSE and the entropy are lower if we use more components (k = 20) at a lower resolution (Q = 64). According to this observation we may state that "variety" is more important than "accuracy", i.e. to reduce the reconstruction error we should encode more components instead of increasing the quantization resolution. The systematic investigation of this behavior is subject to future research.

4 Discussion

In this paper we showed that the concept of principal components known from Principal Component Analysis (PCA) can be enlarged to also cover the occurrence probabilities and the information content of an Independent Component Analysis (ICA). Whereas the ICA approach assumes continuous source channels mixed up to the same number of channels by a mixing matrix, we applied the ICA to images assuming that they are composed of only a few image primitives. Certainly, the components with the highest probability are also the ones which should not be neglected. As shown in section 3.2, this corresponds roughly to the mean squared error induced by neglecting the components, but is not identical to it. These components can be termed the Principal Independent Components (PIC). For distinctive images, e.g. characters, this idea gives us the most important ones. Nevertheless, for natural images we have no a-priori probabilities. Using the ICA components with most of the information did not lead to an acceptable reproduction error. The situation changed when we applied the ICA transform to the first principal PCA components, which resulted in a compact and robust encoding. This approach combines the traditional principal component criteria of PCA with the independence property of ICA. It turned out that this definition of PIC implements the classical demand of the rate distortion theory of Shannon.

5 References

[ACY96] S. Amari, A. Cichocki, H. Yang: A New Learning Algorithm for Blind Signal Separation; Advances in Neural Information Processing Systems 8, Touretzky, Mozer, Hasselmo (Eds.), MIT Press (1996)
[BES96] A.J. Bell, T.J. Sejnowski: Edges are the 'independent components' of natural scenes; Int. Conf. Advances in Neural Information Processing Systems NIPS 96, MIT Press (1996)
[BRA93a] R. Brause: A Symmetrical Lateral Inhibited Network for PCA and Feature Decorrelation; Proc. Int. Conf. Art. Neural Networks ICANN-93, Springer Verlag (1993)
[BRA93b] R. Brause: The Error-Bounded Descriptional Complexity of Approximation Networks; Neural Networks, Vol. 6 (1993)

[BRA94] R. Brause: A VLSI-Design of the Minimum Entropy Neuron; in J. Delgado-Frias, W. Moore (Eds.): VLSI for Artificial Intelligence and Neural Networks, Plenum Press (1994)
[BRA96] R. Brause: Sensor Encoding Using Lateral Inhibited, Self-organized Cellular Neural Networks; Neural Networks, Vol. 9, No. 1 (1996)
[BRA98] R. Brause, M. Rippl: Noise Suppressing Sensor Encoding and Neural Signal Orthonormalization; accepted by IEEE Trans. on Neural Networks
[BUR92] G. Burel: Blind Separation of Sources: A Nonlinear Neural Algorithm; Neural Networks, Vol. 5 (1992)
[COM94] P. Comon: Independent Component Analysis - a new concept?; Signal Processing, Vol. 36 (1994)
[CHR90] B. Chitprasert, K. Rao: Human Visual Weighted Progressive Image Transmission; IEEE Trans. Comm., Vol. 38, No. 7 (1990)
[DEO96] G. Deco, D. Obradovic: An Information-Theoretic Approach to Neural Computing; Springer Verlag (1996)
[HYO96] A. Hyvärinen, E. Oja: Independent Component Analysis by General Non-linear Hebbian-like Rules; Helsinki University of Technology, Dep. of Comp. Sc., Report A41 (1996)
[OJA92] E. Oja: Principal components, minor components, and linear neural networks; Neural Networks, Vol. 5 (1992)
[OLS96a] B.A. Olshausen, D.J. Field: Emergence of simple-cell receptive field properties by learning a sparse code for natural images; Nature 381 (1996)
[OLS96b] B.A. Olshausen, D.J. Field: Natural Image Statistics and Efficient Coding; Network: Computation in Neural Systems, No. 7 (1996)
[PLUM93] M. Plumbley: Efficient Information Transfer and Anti-Hebbian Neural Networks; Neural Networks, Vol. 6 (1993)
[SIL91] F. Silva, L. Almeida: A distributed solution for data orthonormalization; in T. Kohonen et al. (Eds.): Artificial Neural Networks, Elsevier Sci. Publ. (1991)
[SHA49] C.E. Shannon, W. Weaver: The Mathematical Theory of Information; University of Illinois Press, Urbana (1949)


More information

Time-Varying Systems and Computations Lecture 6

Time-Varying Systems and Computations Lecture 6 Tme-Varyng Systems and Computatons Lecture 6 Klaus Depold 14. Januar 2014 The Kalman Flter The Kalman estmaton flter attempts to estmate the actual state of an unknown dscrete dynamcal system, gven nosy

More information

Some Comments on Accelerating Convergence of Iterative Sequences Using Direct Inversion of the Iterative Subspace (DIIS)

Some Comments on Accelerating Convergence of Iterative Sequences Using Direct Inversion of the Iterative Subspace (DIIS) Some Comments on Acceleratng Convergence of Iteratve Sequences Usng Drect Inverson of the Iteratve Subspace (DIIS) C. Davd Sherrll School of Chemstry and Bochemstry Georga Insttute of Technology May 1998

More information

Lecture 2: Gram-Schmidt Vectors and the LLL Algorithm

Lecture 2: Gram-Schmidt Vectors and the LLL Algorithm NYU, Fall 2016 Lattces Mn Course Lecture 2: Gram-Schmdt Vectors and the LLL Algorthm Lecturer: Noah Stephens-Davdowtz 2.1 The Shortest Vector Problem In our last lecture, we consdered short solutons to

More information

Module 9. Lecture 6. Duality in Assignment Problems

Module 9. Lecture 6. Duality in Assignment Problems Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept

More information

Singular Value Decomposition: Theory and Applications

Singular Value Decomposition: Theory and Applications Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real

More information

Negative Binomial Regression

Negative Binomial Regression STATGRAPHICS Rev. 9/16/2013 Negatve Bnomal Regresson Summary... 1 Data Input... 3 Statstcal Model... 3 Analyss Summary... 4 Analyss Optons... 7 Plot of Ftted Model... 8 Observed Versus Predcted... 10 Predctons...

More information

On mutual information estimation for mixed-pair random variables

On mutual information estimation for mixed-pair random variables On mutual nformaton estmaton for mxed-par random varables November 3, 218 Aleksandr Beknazaryan, Xn Dang and Haln Sang 1 Department of Mathematcs, The Unversty of Msssspp, Unversty, MS 38677, USA. E-mal:

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA 4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth one-way ANOVA If the populatons ncluded n the study are selected

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

7. Products and matrix elements

7. Products and matrix elements 7. Products and matrx elements 1 7. Products and matrx elements Based on the propertes of group representatons, a number of useful results can be derved. Consder a vector space V wth an nner product ψ

More information

Markov Chain Monte Carlo Lecture 6

Markov Chain Monte Carlo Lecture 6 where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

MAXIMUM A POSTERIORI TRANSDUCTION

MAXIMUM A POSTERIORI TRANSDUCTION MAXIMUM A POSTERIORI TRANSDUCTION LI-WEI WANG, JU-FU FENG School of Mathematcal Scences, Peng Unversty, Bejng, 0087, Chna Center for Informaton Scences, Peng Unversty, Bejng, 0087, Chna E-MIAL: {wanglw,

More information

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars

More information

EPR Paradox and the Physical Meaning of an Experiment in Quantum Mechanics. Vesselin C. Noninski

EPR Paradox and the Physical Meaning of an Experiment in Quantum Mechanics. Vesselin C. Noninski EPR Paradox and the Physcal Meanng of an Experment n Quantum Mechancs Vesseln C Nonnsk vesselnnonnsk@verzonnet Abstract It s shown that there s one purely determnstc outcome when measurement s made on

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

Estimating the Fundamental Matrix by Transforming Image Points in Projective Space 1

Estimating the Fundamental Matrix by Transforming Image Points in Projective Space 1 Estmatng the Fundamental Matrx by Transformng Image Ponts n Projectve Space 1 Zhengyou Zhang and Charles Loop Mcrosoft Research, One Mcrosoft Way, Redmond, WA 98052, USA E-mal: fzhang,cloopg@mcrosoft.com

More information

COMPUTATIONALLY EFFICIENT WAVELET AFFINE INVARIANT FUNCTIONS FOR SHAPE RECOGNITION. Erdem Bala, Dept. of Electrical and Computer Engineering,

COMPUTATIONALLY EFFICIENT WAVELET AFFINE INVARIANT FUNCTIONS FOR SHAPE RECOGNITION. Erdem Bala, Dept. of Electrical and Computer Engineering, COMPUTATIONALLY EFFICIENT WAVELET AFFINE INVARIANT FUNCTIONS FOR SHAPE RECOGNITION Erdem Bala, Dept. of Electrcal and Computer Engneerng, Unversty of Delaware, 40 Evans Hall, Newar, DE, 976 A. Ens Cetn,

More information

Dr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur

Dr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur Analyss of Varance and Desgn of Experment-I MODULE VII LECTURE - 3 ANALYSIS OF COVARIANCE Dr Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur Any scentfc experment s performed

More information

Grover s Algorithm + Quantum Zeno Effect + Vaidman

Grover s Algorithm + Quantum Zeno Effect + Vaidman Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the

More information

Regularized Discriminant Analysis for Face Recognition

Regularized Discriminant Analysis for Face Recognition 1 Regularzed Dscrmnant Analyss for Face Recognton Itz Pma, Mayer Aladem Department of Electrcal and Computer Engneerng, Ben-Guron Unversty of the Negev P.O.Box 653, Beer-Sheva, 845, Israel. Abstract Ths

More information

The Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction

The Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction ECONOMICS 5* -- NOTE (Summary) ECON 5* -- NOTE The Multple Classcal Lnear Regresson Model (CLRM): Specfcaton and Assumptons. Introducton CLRM stands for the Classcal Lnear Regresson Model. The CLRM s also

More information

MULTISPECTRAL IMAGE CLASSIFICATION USING BACK-PROPAGATION NEURAL NETWORK IN PCA DOMAIN

MULTISPECTRAL IMAGE CLASSIFICATION USING BACK-PROPAGATION NEURAL NETWORK IN PCA DOMAIN MULTISPECTRAL IMAGE CLASSIFICATION USING BACK-PROPAGATION NEURAL NETWORK IN PCA DOMAIN S. Chtwong, S. Wtthayapradt, S. Intajag, and F. Cheevasuvt Faculty of Engneerng, Kng Mongkut s Insttute of Technology

More information

A Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach

A Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach A Bayes Algorthm for the Multtask Pattern Recognton Problem Drect Approach Edward Puchala Wroclaw Unversty of Technology, Char of Systems and Computer etworks, Wybrzeze Wyspanskego 7, 50-370 Wroclaw, Poland

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

LECTURE 9 CANONICAL CORRELATION ANALYSIS

LECTURE 9 CANONICAL CORRELATION ANALYSIS LECURE 9 CANONICAL CORRELAION ANALYSIS Introducton he concept of canoncal correlaton arses when we want to quantfy the assocatons between two sets of varables. For example, suppose that the frst set of

More information

Topic 23 - Randomized Complete Block Designs (RCBD)

Topic 23 - Randomized Complete Block Designs (RCBD) Topc 3 ANOVA (III) 3-1 Topc 3 - Randomzed Complete Block Desgns (RCBD) Defn: A Randomzed Complete Block Desgn s a varant of the completely randomzed desgn (CRD) that we recently learned. In ths desgn,

More information

Appendix B: Resampling Algorithms

Appendix B: Resampling Algorithms 407 Appendx B: Resamplng Algorthms A common problem of all partcle flters s the degeneracy of weghts, whch conssts of the unbounded ncrease of the varance of the mportance weghts ω [ ] of the partcles

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence

More information

Basically, if you have a dummy dependent variable you will be estimating a probability.

Basically, if you have a dummy dependent variable you will be estimating a probability. ECON 497: Lecture Notes 13 Page 1 of 1 Metropoltan State Unversty ECON 497: Research and Forecastng Lecture Notes 13 Dummy Dependent Varable Technques Studenmund Chapter 13 Bascally, f you have a dummy

More information

MACHINE APPLIED MACHINE LEARNING LEARNING. Gaussian Mixture Regression

MACHINE APPLIED MACHINE LEARNING LEARNING. Gaussian Mixture Regression 11 MACHINE APPLIED MACHINE LEARNING LEARNING MACHINE LEARNING Gaussan Mture Regresson 22 MACHINE APPLIED MACHINE LEARNING LEARNING Bref summary of last week s lecture 33 MACHINE APPLIED MACHINE LEARNING

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus

More information

Design and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm

Design and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm Desgn and Optmzaton of Fuzzy Controller for Inverse Pendulum System Usng Genetc Algorthm H. Mehraban A. Ashoor Unversty of Tehran Unversty of Tehran h.mehraban@ece.ut.ac.r a.ashoor@ece.ut.ac.r Abstract:

More information

Lecture 10: May 6, 2013

Lecture 10: May 6, 2013 TTIC/CMSC 31150 Mathematcal Toolkt Sprng 013 Madhur Tulsan Lecture 10: May 6, 013 Scrbe: Wenje Luo In today s lecture, we manly talked about random walk on graphs and ntroduce the concept of graph expander,

More information