Decision Boundary Formation of Neural Networks
C. LEE, E. JUNG, O. KWON, M. PARK, AND D. HONG
Department of Electrical and Electronic Engineering, Yonsei University
134 Shinchon-Dong, Seodaemun-Ku, Seoul 120-749, Korea

Abstract: In this paper, we provide a thorough analysis of the decision boundaries of neural networks when they are used as a classifier. First, we divide the classifying mechanism of the neural network into two parts: dimension expansion by the hidden neurons and linear decision boundary formation by the output neurons. In this paradigm, the input data is first warped into a higher dimensional space by the hidden neurons, and the output neurons draw linear decision boundaries in the expanded space (hidden neuron space). We also found that the decision boundaries in the hidden neuron space are not completely independent. This dependency of decision boundaries is extended to multiclass problems, providing a valuable insight into the formation of decision boundaries in the hidden neuron space. This analysis provides a new understanding of how neural networks construct complex decision boundaries and explains how different sets of weights may provide similar results.

Key-Words: neural networks, analysis of decision boundary, dimension expansion, linear boundary, dependent decision boundary.

1 Introduction

Neural networks have been successfully used in various pattern recognition problems including character recognition [1], remote sensing [2], and communication [3]. The increasing popularity of neural networks is partly due to their ability to learn and therefore generalize. Moreover, neural networks make no prior assumptions about the statistics of the input data and can construct complex decision boundaries. Although it is known that neural networks can define arbitrary decision boundaries without assuming any underlying distribution [4], the decision boundaries of neural networks are not well understood. There have been many papers that analyzed how neural networks work [5-7]. Gibson and Cowan investigated the decision regions of multi-layer perceptrons and derived some geometric properties of the decision regions [8]. Makhoul et
al. showed that neural networks with a single hidden layer are capable of forming disconnected decision regions [9]. Blackmore et al. investigated decision region approximation by neural networks and compared neural networks with polynomials [10]. Nitta analyzed the decision boundaries of complex-valued neural networks [11]. On the other hand, some researchers investigated a learning algorithm based on a decision-based formulation for pattern classification problems [12]. And Pal et al. proposed a method to find decision boundaries for pattern classification using genetic algorithms and made extensive performance comparisons with neural networks and other classifiers, providing insights into the decision boundaries of complex pattern classification problems [13].

In this paper, we systematically analyze the decision boundaries of feedforward neural networks and provide a helpful insight and new interpretation into the working mechanism of neural networks. In particular, when neural networks are used as a classifier, we note that the working mechanism of the neural network can be divided into two parts: dimension expansion by the hidden neurons and linear decision boundary formation by the output neurons. In this context, we define the role of the hidden neurons as mapping the original data into a different dimensional space.

Fig. 1. Example of a 3-layer feedforward neural network (2 pattern classes).

The Korea Science and Engineering Foundation partly supported the publication of this paper through BERC at Yonsei University.
2 Feedforward neural networks and terminologies

A typical neural network has an input layer, a number of hidden layers, and an output layer. It may also include bias terms. Fig. 1 shows an example of a 3-layer feedforward neural network (2 pattern classes). The decision rule is to choose the class corresponding to the output neuron with the largest output [14]. First, we assume that the activation function is the sigmoid function:

f(x) = 1 / (1 + e^(-x))    (1)

In the neural network of Fig. 1, we will assume that the input vector X_in is an M x 1 column vector, X an (M+1) x 1 column vector, Z an N x 1 column vector, and Z' an (N+1) x 1 column vector. The vectors in the output layer, Y and Y', are 2 x 1 column vectors. We will call Z the output vector of the hidden neurons. Each component of the output vector of the hidden neurons is computed as follows:

z_i = 1 / (1 + e^(-phi_i . X)),  i = 1, ..., N, and z_{N+1} = 1.

Consequently, points that satisfy phi_i . X = c will end up with the same value z_i. And phi_i . X = c represents a point, a line, a plane or a hyper-plane in the input space depending on the dimension of the input vector. In this paper, we will call phi_i . X = c equivalent-weight lines, equivalent-weight planes, or equivalent-weight hyper-planes depending on the input dimension. Furthermore, phi_i . X = 0, which corresponds to z_i = 0.5, will be called middle-weight lines, middle-weight planes, or middle-weight hyper-planes depending on the dimension.

In Fig. 1, there are two bias terms b_1, b_2. Without loss of generality, we can assume b_1 = b_2 = 1. It can be easily seen that the relationships between these vectors are as follows:

X_in = [x_1, x_2, ..., x_M]
X = [x_1, x_2, ..., x_M, b_1]
Z = Phi^T X = [phi_1, phi_2, ..., phi_N]^T X = [phi_1 . X, phi_2 . X, ..., phi_N . X]    (2)

where phi_i is an (M+1) x 1 column vector. We will call Phi = [phi_1, phi_2, ..., phi_N] the weight matrix between the inputs and the hidden neurons and phi_i the weight vector between hidden neuron i and the inputs. And the remaining vectors are obtained as follows:

Z' = [f(phi_1 . X), f(phi_2 . X), ..., f(phi_N . X), b_2]
Y' = Psi^T Z' = [psi_1, psi_2]^T Z' = [psi_1 . Z', psi_2 . Z']
Y = [y_1, y_2] = [f(psi_1 . Z'), f(psi_2 . Z')]

where psi_i is an (N+1) x 1 column vector.
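As a concrete illustration of these definitions, the forward computation can be sketched as follows. All weight values here are made up for illustration, not taken from the paper:

```python
import numpy as np

def sigmoid(t):
    # f(x) = 1 / (1 + e^(-x)), the activation function of Eq. (1)
    return 1.0 / (1.0 + np.exp(-t))

# Made-up weights: M = 2 inputs, N = 3 hidden neurons, 2 output neurons
Phi = np.array([[ 1.0, -1.0,  0.5],   # phi_1 (last entry multiplies b1)
                [ 0.5,  2.0, -1.0],   # phi_2
                [-1.5,  1.0,  0.0]])  # phi_3
Psi = np.array([[ 0.7, -0.3,  1.2, 0.1],   # psi_1 (last entry: bias b2)
                [-0.4,  0.9, -0.8, 0.2]])  # psi_2

X = np.array([0.3, -0.7, 1.0])        # input augmented with b1 = 1
Z = sigmoid(Phi @ X)                  # hidden outputs z_1 .. z_N
Zp = np.append(Z, 1.0)                # Z': hidden outputs plus b2 = 1
Y = sigmoid(Psi @ Zp)                 # output neuron values y_1, y_2
label = int(np.argmax(Y))             # choose the largest output
```

Note that the 2-D input point is mapped to a 3-D point Z, the hidden neuron space of the text; since N > M, this already previews the dimension expansion discussed in Section 3.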
We will call Psi = [psi_1, psi_2] the weight matrix between the output and hidden neurons and psi_i the weight vector between output neuron i and the hidden neurons. Furthermore, we will call the space defined by the input vector, X_in, the input space, and the space defined by the outputs of the hidden neurons with the last component removed, [z_1, z_2, ..., z_N] = [f(phi_1 . X), f(phi_2 . X), ..., f(phi_N . X)], the hidden neuron space.

Fig. 2. Example of a radial basis function neural network (2 pattern classes).

Fig. 2 shows an example of a radial basis function neural network with the Gaussian function in the hidden layer. Since the Gaussian function is used as the radial basis function in the hidden neurons, the outputs of the hidden neurons are computed as follows:

phi_i(X) = exp(-||X - M_i||^2)

where X = [x_1, x_2, ..., x_m], M_i = [mu_i1, mu_i2, ..., mu_im], i = 1, ..., n. M_i is called the center and is one of the parameters to be updated during the learning process. Similarly, the points that satisfy ||X - M_i|| = c represent points, a circle, a sphere or a hyper-sphere in the input space depending on the dimension of the input space. In this paper, the points that satisfy ||X - M_i|| = 0.833, which corresponds to phi_i(X) = 0.5, will be called a half circle, a half sphere or a half hyper-sphere depending on the input dimension. The i-th output of the output layer is simply computed as follows:

y_i = sum_{j=1}^{n} w_ij phi_j = W_i . Phi

where W_i = [w_i1, w_i2, ..., w_in], Phi = [phi_1, phi_2, ..., phi_n]. Here w_ij is the weight between the j-th hidden neuron and the i-th output neuron. In order to avoid confusion, we will denote the neural network whose activation function is the sigmoid function as SIGNN and the radial basis function neural network as RBFNN.
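The Gaussian hidden unit and the half-circle radius mentioned above can be checked numerically; the centers below are hypothetical:

```python
import numpy as np

def rbf_hidden(X, centers):
    # phi_i(X) = exp(-||X - M_i||^2): Gaussian hidden units of RBFNN
    d2 = ((centers - X) ** 2).sum(axis=1)  # squared distance to each center
    return np.exp(-d2)

centers = np.array([[0.0, 0.0],            # hypothetical center M_1
                    [1.0, 1.0]])           # hypothetical center M_2
r_half = np.sqrt(np.log(2.0))              # ~0.8326, the "0.833" of the text
X = np.array([r_half, 0.0])                # a point on the half circle of M_1
phi = rbf_hidden(X, centers)
```

On the half circle ||X - M_1|| = sqrt(ln 2) ≈ 0.833, the first hidden output is exactly 0.5, which is where the name "half circle" comes from.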
3 Dimension expansion of hidden neurons

First, we view the outputs of the hidden neurons as a non-linear mapping of the inputs, since a typical activation function is a non-linear function such as the sigmoid function or the Gaussian function. And we observe that adding a hidden neuron is equivalent to expanding the dimension of the hidden neuron space. Thus, if the number of hidden neurons is larger than the number of inputs, the input data will be warped into a higher dimensional space. However, the intrinsic dimension of the data distribution in the hidden neuron space can not exceed the dimension of the original input space. For example, if the dimension of the input vector is 2 and the number of hidden neurons is 3, the data will be distributed on a curved plane in the 3-dimensional space as shown in Fig. 3. In other words, if the number of hidden neurons is larger than the dimension of the input space, the input data will be warped into a higher dimensional space while maintaining the same intrinsic dimension. On the other hand, if phi_j . X = c_1 and phi_k . X = c_2 are parallel, then the intrinsic dimension in the hidden neuron space can be smaller than the dimension of the original input space. Fig. 4 illustrates an example of the case where the plane in the input space is mapped onto a curved line in the hidden neuron space. Fig. 5 shows an example of the dimension expansion of RBFNN with 3 hidden neurons. The corresponding three half circles and the decision boundaries are also displayed.

In the next section, we will investigate decision boundaries in the hidden neuron space, which will always be linear boundaries. From this point of view, it can be seen that, when neural networks are used as a classifier, the input data is first mapped non-linearly into a higher dimensional space and then divided by linear decision boundaries. Finally, the linear decision boundaries in the hidden neuron space will be warped into complex decision boundaries in the original input space.

Fig. 3. An example of dimension expansion of SIGNN: (a) three middle-weight lines in the input space; (b) the corresponding distribution in the hidden neuron space.

Fig. 4.
A case where the plane of the input space is mapped onto a curved line in the hidden neuron space (SIGNN).

Fig. 5. An example of dimension expansion of RBFNN: (a) three half circles, which correspond to 3 hidden neurons, in the 2-dimensional input space; (b) the corresponding curved plane in the 3-dimensional hidden neuron space.

4 Decision boundaries in the hidden neuron space

In this section, we analyze the decision boundaries of neural networks whose activation function is the sigmoid function. However, it can be easily seen that the same analysis can be applied to RBFNN.

4.1 Two pattern classes

First, we will consider the decision boundaries of neural networks for a 2-pattern class problem as shown in Fig. 1. Since the decision boundary between two output neurons is a building block for multiclass problems, a thorough analysis of the decision boundaries of a 2-pattern class problem will provide a valuable insight into how the decision boundaries of neural networks are defined. After training is finished, the decision rule is to choose the class corresponding to the output neuron with the largest output. And the decision boundary defined by the neural network in Fig. 1 is given by y_1 = y_2 in the space defined by y_1 and y_2 (the y_1-y_2 plane). And the equivalent decision boundary is given by

f(psi_1 . Z') = f(psi_2 . Z')    (3)

in the hidden neuron space that is defined by z_1, z_2, ..., z_N. Since the sigmoid function of (1) is monotonically increasing, the same decision boundary
given by (3) will be obtained in the hidden neuron space with the following equations:

psi_1 . Z' = psi_2 . Z'
(psi_1 - psi_2) . Z' = C . Z' = 0
c_1 z_1 + c_2 z_2 + ... + c_N z_N + c_{N+1} b_2 = 0
c_1 z_1 + c_2 z_2 + ... + c_N z_N + c_{N+1} = 0

where b_2 = 1 and C = psi_1 - psi_2 is an (N+1) x 1 column vector. In other words, the decision boundary between two classes in the hidden neuron space will always be a linear decision boundary, though the decision boundaries in the input space are non-linear complex decision boundaries. From this analysis, it can be easily seen that neural networks with two output neurons can always be reduced to neural networks with one output neuron where the weight vector between the hidden neurons and the output neuron is given by C = psi_1 - psi_2. In this way, the complexity of neural networks can be reduced substantially. And the decision rule is as follows:

If y = C . Z' = (psi_1 - psi_2) . Z' > 0, decide omega_1. Else, decide omega_2.

4.2 Three pattern classes

If there are 3 pattern classes, typically there will be three output neurons and the decision rule is to choose the class corresponding to the output neuron with the largest output. The output vector of the output neurons, Y, will be given by

Y = [y_1, y_2, y_3] = [f(psi_1 . Z'), f(psi_2 . Z'), f(psi_3 . Z')].

In the hidden neuron space, the three decision boundaries between each pair of classes will be given by

psi_1 . Z' = psi_2 . Z'  <=>  (psi_1 - psi_2) . Z' = 0    (4)
psi_1 . Z' = psi_3 . Z'  <=>  (psi_1 - psi_3) . Z' = 0    (5)
psi_2 . Z' = psi_3 . Z'  <=>  (psi_2 - psi_3) . Z' = 0.   (6)

Each equation represents a linear decision boundary in the hidden neuron space. Thus, any two of the three equations will have an intersection except in the trivial case that they are parallel. Let Z'_0 be a point on the intersection of (4) and (5). In other words,

(psi_1 - psi_2) . Z'_0 = 0 and (psi_1 - psi_3) . Z'_0 = 0.

Then we can easily show that

(psi_2 - psi_3) . Z'_0 = (psi_2 - psi_1) . Z'_0 + (psi_1 - psi_3) . Z'_0 = 0.

In other words, Z'_0 will also satisfy (6). Similarly, we can show that any point that is a solution to any two of the three equations will be a solution of the remaining equation. This indicates that the three linear decision boundaries will always meet at the same intersection. Fig. 6 illustrates how the decision boundaries are formed in the hidden neuron space for a 3-class problem.
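Both claims above, that the two-output network reduces to a single output neuron with C = psi_1 - psi_2, and that the three pairwise boundaries of a 3-class problem meet in a common intersection, can be verified numerically. The weights below are random, with N = 2 hidden neurons:

```python
import numpy as np

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
rng = np.random.default_rng(0)

# --- Two-class reduction: the sign of C.Z' reproduces the two-output rule
psi1, psi2 = rng.normal(size=(2, 3))      # output weights (2 hidden + bias)
C = psi1 - psi2
Zp = np.column_stack([rng.uniform(0, 1, size=(1000, 2)), np.ones(1000)])
two_output = sigmoid(Zp @ psi1) > sigmoid(Zp @ psi2)
one_output = (Zp @ C) > 0
reduction_ok = bool(np.array_equal(two_output, one_output))

# --- Three classes: boundaries (4) and (5) already fix boundary (6)
psi3 = rng.normal(size=3)
# Solve for Z' = [z1, z2, 1] on (psi1-psi2).Z' = 0 and (psi1-psi3).Z' = 0
A = np.array([(psi1 - psi2)[:2], (psi1 - psi3)[:2]])
b = -np.array([(psi1 - psi2)[2], (psi1 - psi3)[2]])
Z0 = np.append(np.linalg.solve(A, b), 1.0)
third_boundary = (psi2 - psi3) @ Z0       # should vanish as well
```

The equivalence in the first part holds because the sigmoid is monotonically increasing, so comparing the two sigmoid outputs is the same as comparing their arguments.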
The first decision boundary divides the hidden neuron space into two regions (Fig. 6a). When the second decision boundary, between omega_2 and omega_3, is added as shown in Fig. 6b, the upper-left region can be classified as omega_3 (y_3 > y_2 > y_1). And the upper-right region is classified as omega_2 and the lower-right as omega_1. The lower-left region is not determined yet. The third decision boundary can not divide the upper-left region, which is the region classified as omega_3 after the second decision boundary was introduced, as shown in Fig. 6c, since it produces a contradiction (y_1 > y_3, y_3 > y_2, y_2 > y_1) in the region denoted as NA (not allowed). Thus, the third decision boundary should divide the undetermined region as shown in Fig. 6d. However, it can not produce the classification result shown in Fig. 6e. Let P be a point on the line y_3 = y_1 + C where C is an arbitrary positive number. As P approaches the decision boundary between omega_1 and omega_2, the difference between y_1 and y_2 diminishes to zero (Fig. 6f). However, since y_3 = y_1 + C where C can be arbitrarily large, P can not be classified as omega_1 but would be classified as omega_3 (y_3 >> y_1 ≈ y_2). It is noted that the neural network for a 3-pattern class problem divides the hidden neuron space into 3 regions, though the 3 decision boundaries divide the hidden neuron space into 6 regions.

As illustrated above, if two decision boundaries are given, the remaining decision boundary will be automatically determined, since the three equations (4-6) are not linearly independent. In other words,

(psi_1 - psi_3) . Z' = (psi_1 - psi_2) . Z' + (psi_2 - psi_3) . Z'.

We showed that neural networks with two output neurons can always be reduced to neural networks with one output neuron. Similarly, neural networks with three output neurons can always be reduced to neural networks with two output neurons where the weight vectors between the hidden neurons and the output neurons are given by C_1 = psi_1 - psi_3 and C_2 = psi_2 - psi_3.

4.3 Angles between linear decision boundaries in the hidden neuron space

In the hidden neuron space, (psi_i - psi_j) . Z' = 0 represents a linear decision boundary.
It can be rewritten as

(psi_i - psi_j) . Z' = C . Z' = 0
c_1 z_1 + c_2 z_2 + ... + c_N z_N + c_{N+1} b_2 = 0
c_1 z_1 + c_2 z_2 + ... + c_N z_N = -c_{N+1}
where b_2 = 1 and C = psi_i - psi_j is an (N+1) x 1 column vector. Let psibar_i be the vector whose components are the same as those of psi_i with the last component of psi_i removed. In other words, psibar_i = [psi_{i,1}, psi_{i,2}, ..., psi_{i,N}], where psi_{i,j} is the j-th component of psi_i. Depending on the dimension, (psi_i - psi_j) . Z' = 0 may represent a line, a plane or a hyper-plane that is perpendicular to the vector

psibar_i - psibar_j = [psi_{i,1} - psi_{j,1}, psi_{i,2} - psi_{j,2}, ..., psi_{i,N} - psi_{j,N}].

On the other hand, it can be easily seen that

(psibar_1 - psibar_3) = (psibar_1 - psibar_2) + (psibar_2 - psibar_3).    (7)

The relationship among the three vectors of (7) is illustrated in Fig. 7. The linear decision boundary between class omega_i and class omega_j, which is perpendicular to psibar_i - psibar_j, is also shown in Fig. 7. Since the angle between two lines (or planes) is the same as that between two vectors that are normal to the two lines (or planes), the angle between two decision boundaries in the hidden neuron space is given by

cos theta = ((psibar_i - psibar_j) . (psibar_j - psibar_k)) / (|psibar_i - psibar_j| |psibar_j - psibar_k|)    (8)

where theta is the angle between the omega_i-omega_j decision boundary and the omega_j-omega_k decision boundary. Since decision boundaries are not vectors, we may have to take the absolute value of (8). Fig. 7 illustrates the relationship of the angles between the decision boundaries in the hidden neuron space for a 3 pattern class problem. Therefore, it can be said that the angle between two decision boundaries in the hidden neuron space is determined by the directions and magnitudes of the vectors (psibar_1 - psibar_2) and (psibar_2 - psibar_3).

Fig. 6. Decision boundaries for 3 pattern class problems in the hidden neuron space.

Fig. 7. Angles between decision boundaries for 3 classes in the hidden neuron space.

4.4 Decision boundaries in multiclass problems

If there are K pattern classes (K output neurons), theoretically there will be K(K-1)/2 decision boundaries in the hidden neuron space.
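Eq. (8) can be sketched as follows; the psi vectors are arbitrary illustrative values, chosen so that the two boundaries are perpendicular:

```python
import numpy as np

def boundary_angle(psi_i, psi_j, psi_k):
    # Angle of Eq. (8) between the omega_i/omega_j and the omega_j/omega_k
    # decision boundaries; the bias (last) component is removed first.
    u = (psi_i - psi_j)[:-1]
    v = (psi_j - psi_k)[:-1]
    c = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # boundaries are lines, not vectors, so take the absolute value
    return np.arccos(abs(c))

# Illustrative weights in a 2-D hidden neuron space (plus bias term)
psi1 = np.array([1.0, 0.0, 0.0])
psi2 = np.array([0.0, 0.0, 0.0])
psi3 = np.array([0.0, 1.0, 0.0])
theta = boundary_angle(psi1, psi2, psi3)   # here u = (1, 0), v = (0, -1)
```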
However, only K-1 decision boundaries are independent and the remaining boundaries will be automatically determined. In other words, there are only K-1 degrees of freedom. For example, for a 4-pattern class problem, there are 6 decision boundaries in the hidden neuron space. However, we have only 3 degrees of freedom. Once 3 decision boundaries are given, the remaining 3 boundaries will be automatically determined as shown in Fig. 8, where the solid lines represent the 3 independent decision boundaries and the dotted lines represent the 3 dependent decision boundaries. In other words, if the omega_i-omega_j decision boundary and the omega_j-omega_k decision boundary are decided, the omega_i-omega_k decision boundary is automatically determined. It should pass through the intersection where the omega_i-omega_j decision boundary and the omega_j-omega_k decision boundary meet and is given by

(psi_i - psi_k) . Z' = (psi_i - psi_j) . Z' + (psi_j - psi_k) . Z' = 0.
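The counting argument and the dependency can be checked directly; the weights below are random values for a hypothetical K = 4 problem:

```python
import numpy as np
from itertools import combinations
from math import comb

K = 4
n_boundaries = comb(K, 2)       # K(K-1)/2 pairwise boundaries
n_independent = K - 1           # degrees of freedom

rng = np.random.default_rng(1)
psi = rng.normal(size=(K, 5))   # output weights, 4 hidden neurons + bias

# (psi_i - psi_k) = (psi_i - psi_j) + (psi_j - psi_k) for every triple,
# so each "extra" boundary is a fixed combination of the others.
dependency_holds = all(
    np.allclose(psi[i] - psi[k], (psi[i] - psi[j]) + (psi[j] - psi[k]))
    for i, j, k in combinations(range(K), 3)
)
```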
As shown previously, the direction of the omega_j-omega_k decision boundary is also determined once the omega_i-omega_j decision boundary and the omega_i-omega_k decision boundary are decided. For a 4-class problem, there will be 4 intersections (points, lines, planes or hyper-planes) where 3 decision boundaries meet, except in the trivial case that some of the decision boundaries are parallel. At those intersections, the 3 decision regions corresponding to 3 classes will be determined. By combining those regions, one can construct the overall decision boundaries, which are displayed with bold lines in Fig. 8. In Fig. 8, the 4 bold dots represent the 4 intersections where 3 decision boundaries meet.

Although there are K(K-1)/2 decision boundaries for a K-class problem and the decision boundaries divide the hidden neuron space into numerous subregions, the final decision boundaries for classification divide the hidden neuron space into only K regions, which are convex. In general, the linear decision boundaries, which divide the hidden neuron space into K regions, will be warped into complex decision boundaries that may divide the original input space nonlinearly into many more than K regions. For example, in the case of two pattern class problems, the decision boundary for SIGNN in the X-space is given by

c_1 z_1 + c_2 z_2 + ... + c_N z_N + c_{N+1} = 0
c_1/(1 + e^(-phi_1 . X)) + c_2/(1 + e^(-phi_2 . X)) + ... + c_N/(1 + e^(-phi_N . X)) + c_{N+1} = 0

where C = [c_1, ..., c_{N+1}] = psi_1 - psi_2. Typically, decision boundaries in the input space can be straight lines, curves, circles, planes, curved surfaces, closed curved surfaces, etc.

Fig. 8. Six decision boundaries of a 4 pattern class problem in the hidden neuron space.

5 Examples of decision boundaries of neural networks

Without loss of generality, we may assume that the input vectors to neural networks are bounded. One can always make the input data bounded by given upper and lower bounds by scaling and translation. And we assume that there are two pattern classes and limit the dimension of the input space to 2 for an easy illustration.

5.1 Linear decision boundaries in the input space

The simplest linear boundary in the input space would be obtained with one hidden neuron. Assuming SIGNN, the boundary satisfies

c_1/(1 + e^(-phi_1 . X)) + c_2 = 0,

which implies phi_1 . X = constant; the decision boundary will be an equivalent-weight line in this case. More interesting linear boundaries can be obtained if we add more hidden neurons. For instance, Fig. 9 illustrates how a linear decision boundary of SIGNN, which solves an XOR problem, can be obtained with two hidden neurons. In Fig. 9a, the middle-weight lines, i.e., phi_i . X = 0, are parallel. Fig. 10 shows another example of linear decision boundaries. In general, if the two middle-weight lines meet obliquely, as in the case of Fig. 10a, the decision boundary in the input space is not linear. However, if the decision boundary in the hidden neuron space is perpendicular to one of the coordinates of the hidden neuron space, we will obtain a linear decision boundary in the input space as shown in Fig. 10. On the other hand, Fig. 11 shows an example of a linear decision boundary when there are 3 hidden neurons (3 middle-weight lines in the input space). Fig. 12 shows an example of linear decision boundaries of RBFNN with two hidden neurons that are closely located. As can be seen from Figs. 9-12, we have more freedom in drawing a more flexible decision boundary with more hidden neurons.

5.2 Convex decision boundaries in the input space

Typically, if the middle-weight lines meet obliquely in SIGNN as shown in Figs. 13-14, the decision boundaries in the input space will be convex except in some special cases, though the decision boundaries in the hidden neuron space are linear. Usually, the convex
decision boundaries in Fig. 13a and Fig. 14a divide the input space into two regions corresponding to the two classes. Due to the nature of the sigmoid function, z_i is bounded by (0, 1) as shown in Figs. 13b and 14b. In Fig. 14b, there are two limit points through which the linear decision boundary passes in the hidden neuron space: (0.5, 1) and (1, 0.5). Thus, it can be seen that the decision boundary in the input space asymptotically converges to two equivalent-weight lines: phi_1 . X = 1.099 and phi_2 . X = 1.099. It is noted that it is impossible to divide the input space into more than 2 regions with two hidden neurons except in the special case of Fig. 9 where the two middle-weight lines are parallel. However, if the two middle-weight lines do not intersect in the given input range, it is still possible to divide the given input space into 3 regions as shown in Fig. 15.

Fig. 9. An example of a linear decision boundary of SIGNN: (a) middle-weight lines and decision boundary (bold line) in the input space; (b) decision boundary in the hidden neuron space.

Fig. 10. Another example of a linear decision boundary of SIGNN: (a) middle-weight lines and decision boundary (bold line) in the input space; (b) decision boundary in the hidden neuron space.

Fig. 11. An example of a linear decision boundary of SIGNN when there are 3 hidden neurons: (a) middle-weight lines and decision boundaries (bold line) in the input space; (b) decision boundary in the hidden neuron space.

Fig. 12. An example of linear decision boundaries of RBFNN with two hidden neurons that are closely located: (a) the input space; (b) the hidden neuron space.

Fig. 13. An example of a convex decision boundary of SIGNN (bold line): (a) the input space; (b) the hidden neuron space.

Fig. 14. Another example of convex decision boundaries of SIGNN (bold line): (a) the input space; (b) the hidden neuron space.

Fig. 15. Dividing the input space into 3 regions with two non-parallel middle-weight lines (SIGNN): (a) the input space; (b) the hidden neuron space.
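The XOR construction of Fig. 9, two parallel middle-weight lines plus a linear boundary in the hidden neuron space, can be sketched with hand-picked weights. These weights are not trained and not from the paper; they are one possible choice that realizes the geometry:

```python
import numpy as np

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Hand-picked weights: the two middle-weight lines x1 + x2 = 0.5 and
# x1 + x2 = 1.5 are parallel, as in the Fig. 9 case.
a = 20.0                                  # steepness of the sigmoids

def xor_net(x1, x2):
    z1 = sigmoid(a * (x1 + x2 - 0.5))     # hidden neuron 1
    z2 = sigmoid(a * (x1 + x2 - 1.5))     # hidden neuron 2
    return (z1 - z2) > 0.5                # linear boundary in the (z1, z2) plane

outputs = {p: xor_net(*p) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]}
```

Only inputs lying between the two parallel lines push z1 toward 1 while keeping z2 near 0, so the linear threshold in the hidden neuron space separates the XOR classes.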
5.3 Disconnected and closed decision boundaries in the input space

As we add more hidden neurons, more complex decision boundaries can be obtained. As explained previously, the decision boundary in the hidden neuron space will always be linear. However, with more hidden neurons, we have more freedom in drawing more complex decision boundaries in the input space with linear decision boundaries in the hidden neuron space, as shown in Figs. 16-18 (SIGNN). In general, when there are K output neurons, the neural network divides the hidden neuron space into K regions. However, the hidden neurons and the sigmoid function, which map the entire input space into a bounded region in the hidden neuron space, make it possible to divide the input space into more than K regions. For instance, a neural network with 3 hidden neurons can divide the input space into 4 regions (Fig. 16), 3 regions (Fig. 17) or 2 regions (Fig. 18). In particular, the decision boundary in Fig. 18 is a closed boundary. Fig. 19 shows an example of circular decision boundaries of RBFNN with two hidden neurons and Fig. 20 shows how two separate decision boundaries can be obtained in the input space. If the half circles in the input space cross, the data distribution in the hidden neuron space is convex (Fig. 19). If the half circles in the input space do not cross, the data distribution in the hidden neuron space becomes concave (Fig. 20). Unlike SIGNN, the typical decision boundary of RBFNN is a closed boundary except in special cases such as Fig. 12.

Fig. 16. Dividing the input space into 4 regions with 3 hidden neurons (SIGNN): (a) the input space; (b) the hidden neuron space.

Fig. 17. More complex decision boundaries in the input space with 3 hidden neurons (SIGNN): (a) the input space; (b) the hidden neuron space.

Fig. 18. Closed decision boundary in the input space with 3 hidden neurons (SIGNN): (a) the input space; (b) the hidden neuron space.

Fig. 19. An example of circular decision boundaries of RBFNN with two hidden neurons: (a) the input space; (b) the hidden neuron space.

Fig. 20. An example of two circular boundaries with two hidden neurons (RBFNN): (a) the input space; (b) the hidden neuron space.
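That RBFNN boundaries are typically closed follows from the Gaussian decaying to zero far from the centers; a quick check with two hypothetical centers and made-up output weights:

```python
import numpy as np

def rbf_output(X, centers, w):
    # y = sum_j w_j * exp(-||X - M_j||^2), to be thresholded at 0.5
    d2 = ((centers - X) ** 2).sum(axis=1)
    return w @ np.exp(-d2)

centers = np.array([[0.0, 0.0], [0.5, 0.0]])  # two nearby centers (made up)
w = np.array([1.0, 1.0])

near = rbf_output(np.array([0.25, 0.0]), centers, w) > 0.5  # between centers
far = rbf_output(np.array([5.0, 5.0]), centers, w) > 0.5    # far away
```

Since the output decays to 0 away from the centers, the region where y > 0.5 is bounded, i.e. its boundary in the input space is closed.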
5.4 Identical decision boundaries with different weights

Fig. 21 illustrates how neural networks with different weights can define the same decision boundary in the input space. It can be seen that the same decision boundary in the input space will be obtained even though we move the decision plane in the hidden neuron space (Fig. 21b). Although the neural network in Fig. 21a is a trivial one, the same phenomenon can occur for general neural networks, as shown in Fig. 22. It has been reported that different sets of weights can provide almost identical performance for a given problem [15]. These characteristics of the decision boundaries in the hidden neuron space may provide some theoretical background for how different sets of weights can provide almost identical performance for a given problem.

Fig. 21. Obtaining the same decision boundaries with different weights (SIGNN): (a) the input space; (b) the hidden neuron space.

Fig. 22. Another example of obtaining the same decision boundaries with different weights (SIGNN): (a) the input space; (b) the hidden neuron space.

6 Conclusions

In this paper, we investigated the decision boundaries of neural networks whose activation functions are the sigmoid function and the Gaussian radial basis function. We divided the classification mechanism of neural networks into two parts: expanding the dimension by the hidden neurons and drawing linear boundaries by the output neurons. In particular, we analyzed the decision boundaries in the hidden neuron space and found some interesting properties. First, the decision boundaries in the hidden neuron space are always linear boundaries, and the decision boundaries are not completely independent. Finally, we showed how the linear boundaries in the hidden neuron space can define complex decision boundaries in the input space with some interesting properties. The analysis of decision boundaries provides a way to reduce the complexity of neural networks and is helpful in weight initialization.

References:
[1] K. Fukushima and N. Wake, "Handwritten Alphanumeric Character Recognition by the Neocognitron," IEEE Trans. on Neural Networks, Vol. 2, No. 3, 1991.
[2] Jon Atli Benediktsson, Johannes R. Sveinsson, Okan K. Ersoy, and Philip H. Swain, "Parallel Consensual Neural Networks," IEEE Trans. Neural Networks, Vol. 8, No. 1, 1997.
[3] K. Lee, S. Cho, S. Ong, C. You, and D. Hong, "Equalization techniques using neural networks for digital versatile disk-read-only memory," Optical Engineering, Vol. 38, No. 2, 1999.
[4] Chulhee Lee and David A. Landgrebe, "Decision boundary feature extraction for neural networks," IEEE Trans. Neural Networks, Vol. 8, No. 1, 1997.
[5] G. J. Gibson, "A combinatorial approach to understanding perceptron capabilities," IEEE Trans. Neural Networks, Vol. 4, No. 6, 1993.
[6] B. Schölkopf, S. Mika, C. J. C. Burges, P. Knirsch, K.-R. Müller, G.
Rätsch, and A. J. Smola, "Input space versus feature space in kernel-based methods," IEEE Trans. Neural Networks, Vol. 10, No. 5, 1999.
[7] I. Sethi, "Entropy nets: from decision trees to neural networks," Proceedings of the IEEE, Vol. 78, No. 10, pp. 1605-1613, 1990.
[8] G. J. Gibson and C. F. N. Cowan, "On the decision regions of multilayer perceptrons," Proceedings of the IEEE, Vol. 78, No. 10, 1990.
[9] J. Makhoul, A. El-Jaroudi, and R. Schwartz, "Partitioning capabilities of two-layer neural networks," IEEE Trans. Signal Processing, Vol. 39, No. 6, 1991.
[10] K. L. Blackmore, R. C. Williamson, and I. Y. Mareels, "Decision region approximation by polynomials or neural networks," IEEE Trans. Information Theory, Vol. 43, No. 3, 1997.
[11] T. Nitta, "An analysis on decision boundaries in the complex back-propagation network," in Proc. IEEE World Congress on Computational Intelligence, 1994.
[12] S. Y. Kung and J. S. Taur, "Decision-based neural networks with signal/image classification applications," IEEE Trans. Neural Networks, Vol. 6, No. 1, pp. 170-181, 1995.
[13] S. K. Pal, S. Bandyopadhyay, and C. A. Murthy, "Genetic algorithms for generation of class boundaries," IEEE Trans. Systems, Man and Cybernetics, Part B, Vol. 28, No. 6, 1998.
[14] R. Lippmann, "An Introduction to Computing with Neural Nets," IEEE ASSP Magazine, Vol. 4, No. 2, pp. 4-22, April 1987.
[15] J. Go and C. Lee, "Analyzing weight distribution of neural networks," in Proc. IEEE IJCNN.
E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton
More informationInner Product. Euclidean Space. Orthonormal Basis. Orthogonal
Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,
More information10-701/ Machine Learning, Fall 2005 Homework 3
10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40
More informationNonlinear Classifiers II
Nonlnear Classfers II Nonlnear Classfers: Introducton Classfers Supervsed Classfers Lnear Classfers Perceptron Least Squares Methods Lnear Support Vector Machne Nonlnear Classfers Part I: Mult Layer Neural
More informationWeek 5: Neural Networks
Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple
More informationPop-Click Noise Detection Using Inter-Frame Correlation for Improved Portable Auditory Sensing
Advanced Scence and Technology Letters, pp.164-168 http://dx.do.org/10.14257/astl.2013 Pop-Clc Nose Detecton Usng Inter-Frame Correlaton for Improved Portable Audtory Sensng Dong Yun Lee, Kwang Myung Jeon,
More informationDifference Equations
Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1
More informationReport on Image warping
Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.
More informationA Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach
A Bayes Algorthm for the Multtask Pattern Recognton Problem Drect Approach Edward Puchala Wroclaw Unversty of Technology, Char of Systems and Computer etworks, Wybrzeze Wyspanskego 7, 50-370 Wroclaw, Poland
More informationGlobal Sensitivity. Tuesday 20 th February, 2018
Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values
More informationLinear Feature Engineering 11
Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19
More informationImage classification. Given the bag-of-features representations of images from different classes, how do we learn a model for distinguishing i them?
Image classfcaton Gven te bag-of-features representatons of mages from dfferent classes ow do we learn a model for dstngusng tem? Classfers Learn a decson rule assgnng bag-offeatures representatons of
More informationStructure and Drive Paul A. Jensen Copyright July 20, 2003
Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.
More informationSection 8.3 Polar Form of Complex Numbers
80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the
More informationHomework Assignment 3 Due in class, Thursday October 15
Homework Assgnment 3 Due n class, Thursday October 15 SDS 383C Statstcal Modelng I 1 Rdge regresson and Lasso 1. Get the Prostrate cancer data from http://statweb.stanford.edu/~tbs/elemstatlearn/ datasets/prostate.data.
More informationWeek3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity
Week3, Chapter 4 Moton n Two Dmensons Lecture Quz A partcle confned to moton along the x axs moves wth constant acceleraton from x =.0 m to x = 8.0 m durng a 1-s tme nterval. The velocty of the partcle
More informationConvexity preserving interpolation by splines of arbitrary degree
Computer Scence Journal of Moldova, vol.18, no.1(52), 2010 Convexty preservng nterpolaton by splnes of arbtrary degree Igor Verlan Abstract In the present paper an algorthm of C 2 nterpolaton of dscrete
More informationNumerical Heat and Mass Transfer
Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and
More informationVQ widely used in coding speech, image, and video
at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng
More information1 Convex Optimization
Convex Optmzaton We wll consder convex optmzaton problems. Namely, mnmzaton problems where the objectve s convex (we assume no constrants for now). Such problems often arse n machne learnng. For example,
More informationMultilayer Perceptrons and Backpropagation. Perceptrons. Recap: Perceptrons. Informatics 1 CG: Lecture 6. Mirella Lapata
Multlayer Perceptrons and Informatcs CG: Lecture 6 Mrella Lapata School of Informatcs Unversty of Ednburgh mlap@nf.ed.ac.uk Readng: Kevn Gurney s Introducton to Neural Networks, Chapters 5 6.5 January,
More informationOn the Repeating Group Finding Problem
The 9th Workshop on Combnatoral Mathematcs and Computaton Theory On the Repeatng Group Fndng Problem Bo-Ren Kung, Wen-Hsen Chen, R.C.T Lee Graduate Insttute of Informaton Technology and Management Takmng
More informationThe Order Relation and Trace Inequalities for. Hermitian Operators
Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence
More informationProblem Set 9 Solutions
Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem
More informationSolving Nonlinear Differential Equations by a Neural Network Method
Solvng Nonlnear Dfferental Equatons by a Neural Network Method Luce P. Aarts and Peter Van der Veer Delft Unversty of Technology, Faculty of Cvlengneerng and Geoscences, Secton of Cvlengneerng Informatcs,
More informationINF 5860 Machine learning for image classification. Lecture 3 : Image classification and regression part II Anne Solberg January 31, 2018
INF 5860 Machne learnng for mage classfcaton Lecture 3 : Image classfcaton and regresson part II Anne Solberg January 3, 08 Today s topcs Multclass logstc regresson and softma Regularzaton Image classfcaton
More informationImprovement of Histogram Equalization for Minimum Mean Brightness Error
Proceedngs of the 7 WSEAS Int. Conference on Crcuts, Systems, Sgnal and elecommuncatons, Gold Coast, Australa, January 7-9, 7 3 Improvement of Hstogram Equalzaton for Mnmum Mean Brghtness Error AAPOG PHAHUA*,
More information3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X
Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number
More informationMULTISPECTRAL IMAGE CLASSIFICATION USING BACK-PROPAGATION NEURAL NETWORK IN PCA DOMAIN
MULTISPECTRAL IMAGE CLASSIFICATION USING BACK-PROPAGATION NEURAL NETWORK IN PCA DOMAIN S. Chtwong, S. Wtthayapradt, S. Intajag, and F. Cheevasuvt Faculty of Engneerng, Kng Mongkut s Insttute of Technology
More informationFormulas for the Determinant
page 224 224 CHAPTER 3 Determnants e t te t e 2t 38 A = e t 2te t e 2t e t te t 2e 2t 39 If 123 A = 345, 456 compute the matrx product A adj(a) What can you conclude about det(a)? For Problems 40 43, use
More informationThe Study of Teaching-learning-based Optimization Algorithm
Advanced Scence and Technology Letters Vol. (AST 06), pp.05- http://dx.do.org/0.57/astl.06. The Study of Teachng-learnng-based Optmzaton Algorthm u Sun, Yan fu, Lele Kong, Haolang Q,, Helongang Insttute
More informationMore metrics on cartesian products
More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of
More informationCONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING INTRODUCTION
CONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING N. Phanthuna 1,2, F. Cheevasuvt 2 and S. Chtwong 2 1 Department of Electrcal Engneerng, Faculty of Engneerng Rajamangala
More informationOn the correction of the h-index for career length
1 On the correcton of the h-ndex for career length by L. Egghe Unverstet Hasselt (UHasselt), Campus Depenbeek, Agoralaan, B-3590 Depenbeek, Belgum 1 and Unverstet Antwerpen (UA), IBW, Stadscampus, Venusstraat
More informationLinear Approximation with Regularization and Moving Least Squares
Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...
More informationSalmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2
Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to
More information/ n ) are compared. The logic is: if the two
STAT C141, Sprng 2005 Lecture 13 Two sample tests One sample tests: examples of goodness of ft tests, where we are testng whether our data supports predctons. Two sample tests: called as tests of ndependence
More informationMIMA Group. Chapter 2 Bayesian Decision Theory. School of Computer Science and Technology, Shandong University. Xin-Shun SDU
Group M D L M Chapter Bayesan Decson heory Xn-Shun Xu @ SDU School of Computer Scence and echnology, Shandong Unversty Bayesan Decson heory Bayesan decson theory s a statstcal approach to data mnng/pattern
More informationSpin-rotation coupling of the angularly accelerated rigid body
Spn-rotaton couplng of the angularly accelerated rgd body Loua Hassan Elzen Basher Khartoum, Sudan. Postal code:11123 E-mal: louaelzen@gmal.com November 1, 2017 All Rghts Reserved. Abstract Ths paper s
More informationCSE 252C: Computer Vision III
CSE 252C: Computer Vson III Lecturer: Serge Belonge Scrbe: Catherne Wah LECTURE 15 Kernel Machnes 15.1. Kernels We wll study two methods based on a specal knd of functon k(x, y) called a kernel: Kernel
More informationModule 3: Element Properties Lecture 1: Natural Coordinates
Module 3: Element Propertes Lecture : Natural Coordnates Natural coordnate system s bascally a local coordnate system whch allows the specfcaton of a pont wthn the element by a set of dmensonless numbers
More informationModeling curves. Graphs: y = ax+b, y = sin(x) Implicit ax + by + c = 0, x 2 +y 2 =r 2 Parametric:
Modelng curves Types of Curves Graphs: y = ax+b, y = sn(x) Implct ax + by + c = 0, x 2 +y 2 =r 2 Parametrc: x = ax + bxt x = cos t y = ay + byt y = snt Parametrc are the most common mplct are also used,
More informationNUMERICAL DIFFERENTIATION
NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the
More informationWeek 9 Chapter 10 Section 1-5
Week 9 Chapter 10 Secton 1-5 Rotaton Rgd Object A rgd object s one that s nondeformable The relatve locatons of all partcles makng up the object reman constant All real objects are deformable to some extent,
More informationComposite Hypotheses testing
Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter
More informationWhy Bayesian? 3. Bayes and Normal Models. State of nature: class. Decision rule. Rev. Thomas Bayes ( ) Bayes Theorem (yes, the famous one)
Why Bayesan? 3. Bayes and Normal Models Alex M. Martnez alex@ece.osu.edu Handouts Handoutsfor forece ECE874 874Sp Sp007 If all our research (n PR was to dsappear and you could only save one theory, whch
More informationChapter 11: Simple Linear Regression and Correlation
Chapter 11: Smple Lnear Regresson and Correlaton 11-1 Emprcal Models 11-2 Smple Lnear Regresson 11-3 Propertes of the Least Squares Estmators 11-4 Hypothess Test n Smple Lnear Regresson 11-4.1 Use of t-tests
More informationDiscriminative classifier: Logistic Regression. CS534-Machine Learning
Dscrmnatve classfer: Logstc Regresson CS534-Machne Learnng 2 Logstc Regresson Gven tranng set D stc regresson learns the condtonal dstrbuton We ll assume onl to classes and a parametrc form for here s
More informationAssortment Optimization under MNL
Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.
More informationComparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method
Appled Mathematcal Scences, Vol. 7, 0, no. 47, 07-0 HIARI Ltd, www.m-hkar.com Comparson of the Populaton Varance Estmators of -Parameter Exponental Dstrbuton Based on Multple Crtera Decson Makng Method
More informationInteractive Bi-Level Multi-Objective Integer. Non-linear Programming Problem
Appled Mathematcal Scences Vol 5 0 no 65 3 33 Interactve B-Level Mult-Objectve Integer Non-lnear Programmng Problem O E Emam Department of Informaton Systems aculty of Computer Scence and nformaton Helwan
More informationCOMPUTATIONALLY EFFICIENT WAVELET AFFINE INVARIANT FUNCTIONS FOR SHAPE RECOGNITION. Erdem Bala, Dept. of Electrical and Computer Engineering,
COMPUTATIONALLY EFFICIENT WAVELET AFFINE INVARIANT FUNCTIONS FOR SHAPE RECOGNITION Erdem Bala, Dept. of Electrcal and Computer Engneerng, Unversty of Delaware, 40 Evans Hall, Newar, DE, 976 A. Ens Cetn,
More informationCOEFFICIENT DIAGRAM: A NOVEL TOOL IN POLYNOMIAL CONTROLLER DESIGN
Int. J. Chem. Sc.: (4), 04, 645654 ISSN 097768X www.sadgurupublcatons.com COEFFICIENT DIAGRAM: A NOVEL TOOL IN POLYNOMIAL CONTROLLER DESIGN R. GOVINDARASU a, R. PARTHIBAN a and P. K. BHABA b* a Department
More informationThe equation of motion of a dynamical system is given by a set of differential equations. That is (1)
Dynamcal Systems Many engneerng and natural systems are dynamcal systems. For example a pendulum s a dynamcal system. State l The state of the dynamcal system specfes t condtons. For a pendulum n the absence
More informationThe Geometry of Logit and Probit
The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.
More informationWhy feed-forward networks are in a bad shape
Why feed-forward networks are n a bad shape Patrck van der Smagt, Gerd Hrznger Insttute of Robotcs and System Dynamcs German Aerospace Center (DLR Oberpfaffenhofen) 82230 Wesslng, GERMANY emal smagt@dlr.de
More informationCS 3710: Visual Recognition Classification and Detection. Adriana Kovashka Department of Computer Science January 13, 2015
CS 3710: Vsual Recognton Classfcaton and Detecton Adrana Kovashka Department of Computer Scence January 13, 2015 Plan for Today Vsual recognton bascs part 2: Classfcaton and detecton Adrana s research
More informationAPPENDIX A Some Linear Algebra
APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,
More informationModel of Neurons. CS 416 Artificial Intelligence. Early History of Neural Nets. Cybernetics. McCulloch-Pitts Neurons. Hebbian Modification.
Page 1 Model of Neurons CS 416 Artfcal Intellgence Lecture 18 Neural Nets Chapter 20 Multple nputs/dendrtes (~10,000!!!) Cell body/soma performs computaton Sngle output/axon Computaton s typcally modeled
More informationCase A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k.
THE CELLULAR METHOD In ths lecture, we ntroduce the cellular method as an approach to ncdence geometry theorems lke the Szemeréd-Trotter theorem. The method was ntroduced n the paper Combnatoral complexty
More informationSupport Vector Machines CS434
Support Vector Machnes CS434 Lnear Separators Many lnear separators exst that perfectly classfy all tranng examples Whch of the lnear separators s the best? + + + + + + + + + Intuton of Margn Consder ponts
More information8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS
SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars
More informationOnline Classification: Perceptron and Winnow
E0 370 Statstcal Learnng Theory Lecture 18 Nov 8, 011 Onlne Classfcaton: Perceptron and Wnnow Lecturer: Shvan Agarwal Scrbe: Shvan Agarwal 1 Introducton In ths lecture we wll start to study the onlne learnng
More informationNatural Language Processing and Information Retrieval
Natural Language Processng and Informaton Retreval Support Vector Machnes Alessandro Moschtt Department of nformaton and communcaton technology Unversty of Trento Emal: moschtt@ds.untn.t Summary Support
More informationThe Quadratic Trigonometric Bézier Curve with Single Shape Parameter
J. Basc. Appl. Sc. Res., (3541-546, 01 01, TextRoad Publcaton ISSN 090-4304 Journal of Basc and Appled Scentfc Research www.textroad.com The Quadratc Trgonometrc Bézer Curve wth Sngle Shape Parameter Uzma
More informationLecture 10 Support Vector Machines. Oct
Lecture 10 Support Vector Machnes Oct - 20-2008 Lnear Separators Whch of the lnear separators s optmal? Concept of Margn Recall that n Perceptron, we learned that the convergence rate of the Perceptron
More informationResource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud
Resource Allocaton wth a Budget Constrant for Computng Independent Tasks n the Cloud Wemng Sh and Bo Hong School of Electrcal and Computer Engneerng Georga Insttute of Technology, USA 2nd IEEE Internatonal
More informationA Hybrid Variational Iteration Method for Blasius Equation
Avalable at http://pvamu.edu/aam Appl. Appl. Math. ISSN: 1932-9466 Vol. 10, Issue 1 (June 2015), pp. 223-229 Applcatons and Appled Mathematcs: An Internatonal Journal (AAM) A Hybrd Varatonal Iteraton Method
More informationNon-linear Canonical Correlation Analysis Using a RBF Network
ESANN' proceedngs - European Smposum on Artfcal Neural Networks Bruges (Belgum), 4-6 Aprl, d-sde publ., ISBN -97--, pp. 57-5 Non-lnear Canoncal Correlaton Analss Usng a RBF Network Sukhbnder Kumar, Elane
More informationAffine transformations and convexity
Affne transformatons and convexty The purpose of ths document s to prove some basc propertes of affne transformatons nvolvng convex sets. Here are a few onlne references for background nformaton: http://math.ucr.edu/
More informationVapnik-Chervonenkis theory
Vapnk-Chervonenks theory Rs Kondor June 13, 2008 For the purposes of ths lecture, we restrct ourselves to the bnary supervsed batch learnng settng. We assume that we have an nput space X, and an unknown
More informationThe Synchronous 8th-Order Differential Attack on 12 Rounds of the Block Cipher HyRAL
The Synchronous 8th-Order Dfferental Attack on 12 Rounds of the Block Cpher HyRAL Yasutaka Igarash, Sej Fukushma, and Tomohro Hachno Kagoshma Unversty, Kagoshma, Japan Emal: {garash, fukushma, hachno}@eee.kagoshma-u.ac.jp
More informationSupport Vector Machines
CS 2750: Machne Learnng Support Vector Machnes Prof. Adrana Kovashka Unversty of Pttsburgh February 17, 2016 Announcement Homework 2 deadlne s now 2/29 We ll have covered everythng you need today or at
More informationU.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017
U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that
More informationImproved delay-dependent stability criteria for discrete-time stochastic neural networks with time-varying delays
Avalable onlne at www.scencedrect.com Proceda Engneerng 5 ( 4456 446 Improved delay-dependent stablty crtera for dscrete-tme stochastc neural networs wth tme-varyng delays Meng-zhuo Luo a Shou-mng Zhong
More informationTHE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens
THE CHINESE REMAINDER THEOREM KEITH CONRAD We should thank the Chnese for ther wonderful remander theorem. Glenn Stevens 1. Introducton The Chnese remander theorem says we can unquely solve any par of
More informationn α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0
MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector
More informationAn Improved multiple fractal algorithm
Advanced Scence and Technology Letters Vol.31 (MulGraB 213), pp.184-188 http://dx.do.org/1.1427/astl.213.31.41 An Improved multple fractal algorthm Yun Ln, Xaochu Xu, Jnfeng Pang College of Informaton
More informationDifferentiating Gaussian Processes
Dfferentatng Gaussan Processes Andrew McHutchon Aprl 17, 013 1 Frst Order Dervatve of the Posteror Mean The posteror mean of a GP s gven by, f = x, X KX, X 1 y x, X α 1 Only the x, X term depends on the
More informationModule 14: THE INTEGRAL Exploring Calculus
Module 14: THE INTEGRAL Explorng Calculus Part I Approxmatons and the Defnte Integral It was known n the 1600s before the calculus was developed that the area of an rregularly shaped regon could be approxmated
More information