Adaptive Regularization for Transductive Support Vector Machine
Zenglin Xu, Cluster MMCI, Saarland Univ. & MPI INF, Saarbrucken, Germany
Rong Jin, Computer Sci. & Eng., Michigan State Univ., East Lansing, MI, U.S.
Jianke Zhu, Computer Vision Lab, ETH Zurich, Zurich, Switzerland
Irwin King and Michael R. Lyu, Computer Science & Engineering, The Chinese Univ. of Hong Kong, Shatin, N.T., Hong Kong
Zhirong Yang, Information & Computer Science, Helsinki Univ. of Technology, Espoo, Finland

Abstract

We discuss the framework of Transductive Support Vector Machine (TSVM) from the perspective of the regularization strength induced by the unlabeled data. In this framework, SVM and TSVM can be regarded as a learning machine without regularization and one with full regularization from the unlabeled data, respectively. Therefore, to supplement this framework of the regularization strength, it is necessary to introduce data-dependent partial regularization. To this end, we reformulate TSVM into a form with controllable regularization strength, which includes SVM and TSVM as special cases. Furthermore, we introduce a method of adaptive regularization that is data dependent and is based on the smoothness assumption. Experiments on a set of benchmark data sets indicate the promising results of the proposed work compared with state-of-the-art TSVM algorithms.

1 Introduction

Semi-supervised learning has attracted a lot of research focus in recent years. Most of the existing approaches can be roughly divided into two categories: (1) the clustering-based methods [12, 4, 8, 17] assume that most of the data, including both the labeled ones and the unlabeled ones, should be far away from the decision boundary of the target classes; (2) the manifold-based methods make the assumption that most of the data lie on a low-dimensional manifold in the input space, and include Label Propagation [21], Graph Cuts [2], Spectral Kernels [9, 22], Spectral Graph Transducer [11], and Manifold Regularization [1].
Comprehensive studies of semi-supervised learning techniques can be found in the recent surveys [23, 3]. Although semi-supervised learning has won success in many real-world applications, two major challenges remain unsolved. One is whether the unlabeled data can help the classification, and the other is what the relation is between the clustering assumption and the manifold assumption. As for the first challenge, Singh et al. [16] provided a finite-sample analysis of the usefulness of unlabeled data based on the cluster assumption. They show that unlabeled data may
be useful for improving the error bounds of supervised learning methods when the margin between different classes satisfies some conditions. However, in real-world problems, it is hard to identify the conditions under which unlabeled data can help. On the other hand, it is interesting to explore the relation between the low density assumption and the manifold assumption. Narayanan et al. [14] implied that the cut-size of the graph partition converges to the weighted volume of the boundary which separates the two regions of the domain for a fixed partition. This makes a step forward in exploring the connection between graph-based partitioning and the idea surrounding the low density assumption. Unfortunately, this approach cannot be generalized uniformly over all partitions. Lafferty and Wasserman [13] revisited the assumptions of semi-supervised learning from the perspective of minimax theory, and suggested that the manifold assumption is stronger than the smoothness assumption for regression. Till now, the underlying relationships between the cluster assumption and the manifold assumption remain undisclosed. Specifically, it is unclear in what kind of situation the clustering assumption or the manifold assumption should be adopted. In this paper, we address these limitations with a unified solution from the perspective of the regularization strength of the unlabeled data. Taking Transductive Support Vector Machine (TSVM) as an example, we suggest a framework that introduces the regularization strength of the unlabeled data when estimating the decision boundary. Therefore, we can obtain a spectrum of models by varying the regularization strength of unlabeled data, which corresponds to changing the models from supervised SVM to Transductive SVM. To select the optimal model under the proposed framework, we employ the manifold regularization assumption, which requires the prediction function to be smooth over the data space.
Further, the optimal function is a linear combination of supervised models, weakly semi-supervised models, and semi-supervised models. Additionally, it provides an effective approach towards combining the cluster assumption and the manifold assumption in semi-supervised learning. The rest of this paper is organized as follows. In Section 2, we review the background of Transductive SVM. In Section 3, we first present a framework of models with different regularization strengths, followed by an integrating approach based on manifold regularization. In Section 4, we report the experimental results on a series of benchmark data sets. Section 5 concludes the paper.

2 Related Work on TSVM

Before presenting the formulation of TSVM, we first describe the notation used in this paper. Let X = (x_1, ..., x_n) denote the entire data set, including both the labeled examples and the unlabeled ones. We assume that the first l examples within X are labeled and the next n - l examples are unlabeled. We denote the unknown labels by y^u = (y^u_{l+1}, ..., y^u_n). TSVM [12] maximizes the margin in the presence of unlabeled data and keeps the boundary traversing through low density regions while respecting labels in the input space. Under the maximum-margin framework, TSVM aims to find the classification model with the maximum classification margin for both labeled and unlabeled examples, which amounts to solving the following optimization problem:

  min_{w, y^u, xi}  (1/2) ||w||_K^2 + C sum_{i=1}^{l} xi_i + C* sum_{i=l+1}^{n} xi_i    (1)
  s.t.  y_i w^T phi(x_i) >= 1 - xi_i,  xi_i >= 0,  1 <= i <= l,
        y^u_i w^T phi(x_i) >= 1 - xi_i,  xi_i >= 0,  l+1 <= i <= n,

where C and C* are the trade-off parameters between the complexity of the function w and the margin errors on the labeled and unlabeled data, respectively. Moreover, the prediction function can be formulated as f(x) = w^T phi(x). Note that we remove the bias term in the above formulation, since it can alternatively be taken into account by introducing a constant element into the input pattern.
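As an illustration of problem (1), the objective can be evaluated directly for a linear model once tentative labels are assigned to the unlabeled points. The sketch below is not the authors' implementation; the names (`tsvm_objective`, `C_star`) are hypothetical, and the slacks are computed in closed form as hinge losses on the margin constraints.

```python
import numpy as np

def tsvm_objective(w, X_l, y_l, X_u, y_u, C=1.0, C_star=1.0):
    """Evaluate the TSVM objective of problem (1) for a linear model f(x) = w . x.

    The slack xi_i = max(0, 1 - y_i * f(x_i)) is charged at C for labeled
    points and at C_star for unlabeled points with tentative labels y_u."""
    f_l = X_l @ w
    f_u = X_u @ w
    slack_l = np.maximum(0.0, 1.0 - y_l * f_l)
    slack_u = np.maximum(0.0, 1.0 - y_u * f_u)
    return 0.5 * w @ w + C * slack_l.sum() + C_star * slack_u.sum()

# toy usage: two labeled points, one unlabeled point with a guessed label
w = np.array([1.0, 0.0])
X_l = np.array([[2.0, 0.0], [-2.0, 1.0]])
y_l = np.array([1.0, -1.0])
X_u = np.array([[0.5, 0.0]])
y_u = np.array([1.0])
print(tsvm_objective(w, X_l, y_l, X_u, y_u))  # prints 1.0
```

In the full TSVM, the tentative labels y^u are themselves optimization variables, which is what makes the problem non-convex.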
As in [19] and [20], we can rewrite (1) into the following optimization problem:

  min_{f, xi}  (1/2) f^T K^{-1} f + C sum_{i=1}^{l} xi_i + C* sum_{i=l+1}^{n} xi_i    (2)
  s.t.  y_i f_i >= 1 - xi_i,  xi_i >= 0,  1 <= i <= l,
        |f_i| >= 1 - xi_i,  xi_i >= 0,  l+1 <= i <= n.

The optimization problem held in TSVM is a non-linear non-convex optimization [6]. Over the past several years, researchers have devoted a significant amount of effort to solving this critical problem. A branch-and-bound method [5] was developed to search for the optimal solution; due to its heavy computational cost, it is limited to problems with a small number of examples. To apply TSVM to large-scale problems, Joachims [12] proposed a label-switching-retraining procedure to speed up the optimization. Later, the hinge loss in TSVM was replaced by a smooth loss function, and a gradient descent method was used to find the decision boundary in a region of low density [4]. In addition, there are iterative methods, such as deterministic annealing [15], the concave-convex procedure (CCCP) [8], and convex relaxation methods [19, 18]. Despite the success of TSVM, the unlabeled data do not necessarily improve classification accuracy. To better utilize the unlabeled data, unlike existing TSVM approaches, we propose a framework that controls the regularization strength of the unlabeled data. To do this, we learn the optimal regularization strength configuration from the combination of a spectrum of models: supervised, weakly semi-supervised, and semi-supervised.

3 TSVM: A Regularization View

For the sake of illustration, we first study a model that does not penalize the classification errors of unlabeled data. Note that the penalization on the margin errors of unlabeled data can be included if needed. Therefore, we have the following form of TSVM that can be derived through duality:

  min_{f, xi}  (1/2) f^T K^{-1} f + C sum_{i=1}^{l} xi_i    (3)
  s.t.  y_i f_i >= 1 - xi_i,  xi_i >= 0,  1 <= i <= l,
        f_i^2 >= 1,  l+1 <= i <= n.
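The function-space form (3) can be evaluated numerically on a toy positive definite kernel matrix. This is a minimal sketch under assumed inputs; `objective3` and `feasible` are hypothetical names, and the smoothness term f^T K^{-1} f is computed with a linear solve rather than an explicit inverse.

```python
import numpy as np

def objective3(f, K, y_l, C=1.0):
    """Objective of problem (3): smoothness term plus hinge slacks on labeled points."""
    l = len(y_l)
    slack = np.maximum(0.0, 1.0 - y_l * f[:l])
    return 0.5 * f @ np.linalg.solve(K, f) + C * slack.sum()

def feasible(f, y_l, rho=1.0):
    """Check the unlabeled constraint f_i^2 >= rho (rho = 1 recovers (3))."""
    return bool(np.all(f[len(y_l):] ** 2 >= rho))

K = np.eye(3) * 2.0              # toy kernel: one labeled, two unlabeled points
y_l = np.array([1.0])
f = np.array([1.0, 1.0, -1.0])   # all points clear the margin band
print(objective3(f, K, y_l), feasible(f, y_l))  # prints 0.75 True
```

The constraint f_i^2 >= 1 forces each unlabeled prediction outside the margin band, which is the low-density-separation effect of TSVM.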
3.1 Full Regularization of Unlabeled Data

In order to adjust the strength of the regularization raised by the unlabeled examples, we introduce a coefficient rho >= 0, and modify the above problem (3) as below:

  min_{f, xi}  (1/2) f^T K^{-1} f + C sum_{i=1}^{l} xi_i    (4)
  s.t.  y_i f_i >= 1 - xi_i,  xi_i >= 0,  1 <= i <= l,
        f_i^2 >= rho,  l+1 <= i <= n.

Obviously, it is the standard TSVM for rho = 1. In particular, the larger rho is, the stronger the regularization of unlabeled data is. It is also important to note that we only take into account the classification errors on the labeled examples in the above equation; namely, we only introduce xi_i for each labeled example. Further, we write f = (f_l; f_u), where f_l = (f_1, ..., f_l) and f_u = (f_{l+1}, ..., f_n) represent the predictions for the labeled and the unlabeled examples, respectively. According to the inverse lemma of the block matrix, we can write K^{-1} as follows:

  K^{-1} = (  M_l^{-1}                          -K_{l,l}^{-1} K_{l,u} M_u^{-1}  )
           ( -M_u^{-1} K_{u,l} K_{l,l}^{-1}      M_u^{-1}                       ),
where

  M_l = K_{l,l} - K_{l,u} K_{u,u}^{-1} K_{u,l},    M_u = K_{u,u} - K_{u,l} K_{l,l}^{-1} K_{l,u}.

Thus, the term f^T K^{-1} f is computed as

  f^T K^{-1} f = f_l^T M_l^{-1} f_l + f_u^T M_u^{-1} f_u - 2 f_l^T K_{l,l}^{-1} K_{l,u} M_u^{-1} f_u.

When the unlabeled data are loosely correlated with the labeled data, namely when most of the elements within K_{u,l} are small, this leads to M_u ≈ K_{u,u}. We refer to this case as weakly semi-supervised learning. Using the above equations, we rewrite TSVM as follows:

  min_{f_l, f_u, xi}  (1/2) f_l^T M_l^{-1} f_l + C sum_{i=1}^{l} xi_i + omega(f_l, rho)    (5)
  s.t.  y_i f_i >= 1 - xi_i,  xi_i >= 0,  1 <= i <= l,

where omega(f_l, rho) is a regularization function for f_l and is the result of the following optimization problem:

  min_{f_u}  (1/2) f_u^T M_u^{-1} f_u - f_l^T K_{l,l}^{-1} K_{l,u} M_u^{-1} f_u    (6)
  s.t.  [f_i^u]^2 >= rho,  l+1 <= i <= n.

To understand the regularization function omega(f_l, rho), we first compute the dual of problem (6) through the Lagrangian function:

  L = (1/2) f_u^T M_u^{-1} f_u - f_l^T K_{l,l}^{-1} K_{l,u} M_u^{-1} f_u - (1/2) sum_{i=1}^{n-l} lambda_i ([f_i^u]^2 - rho)
    = (1/2) f_u^T (M_u^{-1} - D(lambda)) f_u - f_l^T K_{l,l}^{-1} K_{l,u} M_u^{-1} f_u + (rho/2) lambda^T e,

where D(lambda) = diag(lambda_1, ..., lambda_{n-l}) and e denotes a vector with all elements being one. As the derivatives vanish at optimality, we have

  f_u = (M_u^{-1} - D(lambda))^{-1} M_u^{-1} K_{u,l} K_{l,l}^{-1} f_l = (I - M_u D(lambda))^{-1} K_{u,l} K_{l,l}^{-1} f_l,

where I is an identity matrix. Replacing f_u in (6) with the above equation, we have the following dual problem:

  max_lambda  -(1/2) f_l^T K_{l,l}^{-1} K_{l,u} (M_u - M_u D(lambda) M_u)^{-1} K_{u,l} K_{l,l}^{-1} f_l + (rho/2) lambda^T e    (7)
  s.t.  M_u^{-1} ⪰ D(lambda),  lambda_i >= 0,  i = 1, ..., n - l.

The above formulation allows us to understand how the parameter rho controls the strength of regularization from the unlabeled data. In the following, we will show that a series of learning models can be derived by assigning various values to the coefficient rho.

3.2 No Regularization from Unlabeled Data

First, we study the case of rho = 0. We have the following theorem to illustrate the relationship between the dual problem (7) and the supervised SVM.

Theorem 1. When rho = 0, the optimization problem is reduced to the standard supervised SVM.
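The block-inverse decomposition of f^T K^{-1} f used above can be checked numerically. This is only a sanity-check sketch on random data, not part of the derivation; the Schur complements M_l and M_u follow the definitions above.

```python
import numpy as np

# Verify f' K^{-1} f = f_l' M_l^{-1} f_l + f_u' M_u^{-1} f_u
#                      - 2 f_l' K_ll^{-1} K_lu M_u^{-1} f_u
rng = np.random.default_rng(0)
n, l = 8, 3
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)          # random positive definite "kernel" matrix
K_ll, K_lu = K[:l, :l], K[:l, l:]
K_ul, K_uu = K[l:, :l], K[l:, l:]

inv = np.linalg.inv
M_l = K_ll - K_lu @ inv(K_uu) @ K_ul   # Schur complement of K_uu
M_u = K_uu - K_ul @ inv(K_ll) @ K_lu   # Schur complement of K_ll

f = rng.standard_normal(n)
f_l, f_u = f[:l], f[l:]
lhs = f @ inv(K) @ f
rhs = (f_l @ inv(M_l) @ f_l + f_u @ inv(M_u) @ f_u
       - 2 * f_l @ inv(K_ll) @ K_lu @ inv(M_u) @ f_u)
print(np.isclose(lhs, rhs))  # prints True
```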
Proof. It is not difficult to see that the optimal solution to (7) is lambda = 0. As a result, omega(f_l, rho) becomes

  omega(f_l, rho = 0) = -(1/2) f_l^T K_{l,l}^{-1} K_{l,u} M_u^{-1} K_{u,l} K_{l,l}^{-1} f_l.

Substituting omega(f_l, rho) in (5) with the formulation above, the overall optimization problem becomes

  min_{f_l, xi}  (1/2) f_l^T (M_l^{-1} - K_{l,l}^{-1} K_{l,u} M_u^{-1} K_{u,l} K_{l,l}^{-1}) f_l + C sum_{i=1}^{l} xi_i
  s.t.  y_i f_i >= 1 - xi_i,  xi_i >= 0,  1 <= i <= l.

According to the matrix inverse lemma, we calculate M_l^{-1} = (K_{l,l} - K_{l,u} K_{u,u}^{-1} K_{u,l})^{-1} as below:

  M_l^{-1} = K_{l,l}^{-1} + K_{l,l}^{-1} K_{l,u} (K_{u,u} - K_{u,l} K_{l,l}^{-1} K_{l,u})^{-1} K_{u,l} K_{l,l}^{-1}
           = K_{l,l}^{-1} + K_{l,l}^{-1} K_{l,u} M_u^{-1} K_{u,l} K_{l,l}^{-1}.

Finally, the optimization problem is simplified as

  min_{f_l, xi}  (1/2) f_l^T K_{l,l}^{-1} f_l + C sum_{i=1}^{l} xi_i    (8)
  s.t.  y_i f_i >= 1 - xi_i,  xi_i >= 0,  1 <= i <= l.

Clearly, the above optimization is identical to the standard supervised SVM. Hence, the unlabeled data are not employed to regularize the decision boundary when rho = 0.

3.3 Partial Regularization of Unlabeled Data

Second, we consider the case when rho is small. According to (7), we expect lambda to be small when rho is small. As a result, we can approximate (M_u - M_u D(lambda) M_u)^{-1} as follows:

  (M_u - M_u D(lambda) M_u)^{-1} ≈ M_u^{-1} + D(lambda).

Consequently, we can write omega(f_l, rho) as follows:

  omega(f_l, rho) = -(1/2) f_l^T K_{l,l}^{-1} K_{l,u} M_u^{-1} K_{u,l} K_{l,l}^{-1} f_l + phi(f_l, rho),    (9)

where phi(f_l, rho) is the output of the following optimization problem:

  max_lambda  (rho/2) lambda^T e - (1/2) f_l^T K_{l,l}^{-1} K_{l,u} D(lambda) K_{u,l} K_{l,l}^{-1} f_l
  s.t.  M_u^{-1} ⪰ D(lambda),  lambda_i >= 0,  i = 1, ..., n - l.

We can simplify the above problem by approximating the constraint M_u^{-1} ⪰ D(lambda) as lambda_i <= [sigma_1(M_u)]^{-1}, i = 1, ..., n - l, where sigma_1(M_u) represents the maximum eigenvalue of the matrix M_u. The resulting simplified problem becomes

  max_lambda  (rho/2) lambda^T e - (1/2) f_l^T K_{l,l}^{-1} K_{l,u} D(lambda) K_{u,l} K_{l,l}^{-1} f_l
  s.t.  0 <= lambda_i <= [sigma_1(M_u)]^{-1},  i = 1, ..., n - l.

As the above problem is a linear programming problem, the solution for lambda can be computed as:

  lambda_i = 0                      if [K_{u,l} K_{l,l}^{-1} f_l]_i^2 > rho,
  lambda_i = [sigma_1(M_u)]^{-1}    if [K_{u,l} K_{l,l}^{-1} f_l]_i^2 <= rho.

From the above formulation, we find that rho plays the role of a threshold for selecting the unlabeled examples. Since [K_{u,l} K_{l,l}^{-1} f_l]_i can be regarded as the approximate prediction for the i-th
unlabeled example, the above formulation can be interpreted as saying that only the unlabeled examples with low prediction confidence will be selected for regularizing the decision boundary, while all the unlabeled examples with high prediction confidence will be ignored. From the above discussion, we can conclude that rho determines the regularization strength of the unlabeled examples. Then, we rewrite the overall optimization problem as below:

  min_{f_l, xi} max_lambda  (1/2) f_l^T K_{l,l}^{-1} f_l + C sum_{i=1}^{l} xi_i - (1/2) f_l^T K_{l,l}^{-1} K_{l,u} D(lambda) K_{u,l} K_{l,l}^{-1} f_l + (rho/2) lambda^T e    (10)
  s.t.  y_i f_i >= 1 - xi_i,  xi_i >= 0,  1 <= i <= l,
        0 <= lambda_i <= [sigma_1(M_u)]^{-1},  i = 1, ..., n - l.

This is a min-max optimization problem, and thus the global optimal solution can be guaranteed. To obtain the optimal solution, we employ an alternating optimization procedure, which iteratively computes the values of f_l and lambda. To account for the penalty on the margin errors from the unlabeled data, we just need to add an extra constraint of lambda_i <= 2C* for i = 1, ..., n - l. By varying the parameter rho from 0 to 1, we can indeed obtain a series of transductive models for SVM. When rho is small, we call the corresponding optimization problem weakly semi-supervised learning. Therefore, it is important to find an appropriate rho that adapts to the input data. However, as the data distribution is usually unknown, it is very challenging to directly estimate an optimal regularization strength parameter rho. Instead, we explore an alternative approach that selects an appropriate rho by combining the prediction functions. Due to the large cost of calculating the inverse of kernel matrices, one can solve the dual problems according to the Representer theorem.

3.4 Adaptive Regularization

As stated in previous sections, rho determines the regularization strength of the unlabeled data. We now try to adapt the parameter rho to the unlabeled data. Specifically, we intend to implicitly select the best rho from a given list Υ = {rho_1, ..., rho_m}, where rho_1 = 0 and rho_m = 1. This is equivalent to selecting the optimal f from a list of prediction functions F = {f_1, ..., f_m}.
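The closed-form thresholding rule derived in Section 3.3 can be sketched directly; `threshold_lambda` is a hypothetical helper, and the toy matrices below are illustrative only.

```python
import numpy as np

def threshold_lambda(K_ul, K_ll, f_l, M_u, rho):
    """Closed-form LP solution: lambda_i = 0 for a confidently predicted
    unlabeled point ([K_ul K_ll^{-1} f_l]_i^2 > rho, ignored), and the upper
    bound 1/sigma_1(M_u) for an uncertain one (selected for regularization)."""
    pred = K_ul @ np.linalg.solve(K_ll, f_l)      # approximate unlabeled predictions
    bound = 1.0 / np.linalg.eigvalsh(M_u).max()   # 1 / largest eigenvalue of M_u
    return np.where(pred ** 2 > rho, 0.0, bound)

# toy usage: the first unlabeled point is confident, the second is not
K_ll = np.eye(2)
K_ul = np.array([[1.0, 0.0], [0.1, 0.1]])
M_u = np.eye(2) * 2.0
f_l = np.array([1.0, -1.0])
print(threshold_lambda(K_ul, K_ll, f_l, M_u, rho=0.25))
```

With these inputs the approximate predictions are 1.0 and 0.0, so the first point receives lambda = 0 and the second the bound 0.5.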
Motivated by the ensemble technique for semi-supervised learning [7], we assume that the optimal f comes from a linear combination of the base functions {f_i}. We then have:

  f = sum_{i=1}^{m} theta_i f_i,    sum_{i=1}^{m} theta_i = 1,  theta_i >= 0,  i = 1, ..., m,

where theta_i is the weight of the prediction function f_i and theta ∈ R^m. One can also impose a prior on theta_i. For example, if we have more confidence in the semi-supervised classifier, we can introduce a constraint like theta_m >= 0.5. It is important to note that the learning functions in ensemble methods [7] are usually weak learners, while in our approach, the learning functions are strong learners with different degrees of regularization. In the following, we study how to set the regularization strength adaptively to the data. Since TSVM naturally follows the cluster assumption of semi-supervised learning, in order to complement the cluster assumption, we adopt another principle of semi-supervised learning, i.e., the manifold assumption. From the point of view of the manifold assumption in semi-supervised learning, the prediction function f should be smooth on the unlabeled data. To this end, manifold regularization is widely adopted as a smoothing term in the semi-supervised learning literature, e.g., [1, 10]. In the following, we will employ the manifold regularization principle for selecting the regularization strength. Manifold regularization is mainly based on a graph G = <V, E> derived from the whole data space X, where V = {x_i}_{i=1}^{n} is the vertex set, and E denotes the edges linking pairs of nodes. In general, a graph is built in the following four steps: (1) constructing the adjacency graph; (2) calculating the weights on edges; (3) computing the adjacency matrix W; (4)
obtaining the graph Laplacian by L = diag(sum_{j=1}^{n} W_ij) - W. Then, we denote the manifold regularization term as f^T L f. For simplicity, we denote the predicted values of function f_i on the data X as f_i, such that f_i = ([f_i]_1, ..., [f_i]_n). F = (f_1, ..., f_m) is used to represent the set of prediction values of all prediction functions. Finally, we have the following minimization problem:

  min_theta  (eta/2) (F theta)^T L (F theta) - y^T (F theta)    (11)
  s.t.  theta^T e = 1,  theta_i >= 0,  i = 1, ..., m,

where the second term, y^T (F theta), is used to strengthen the confidence in the prediction over the labeled data, and eta is a trade-off parameter. The above optimization problem is a simple quadratic programming problem, which can be solved very efficiently. It is important to note that the above optimization problem is less sensitive to the graph structure than the Laplacian SVM used in [1], since the basic learning functions are all strong learners. It also saves a huge amount of effort in estimating the parameters compared with Laplacian SVM. The above approach indeed provides a practical path towards a combination of both the cluster assumption and the manifold assumption. It is empirically suggested that combining these two assumptions helps to improve the prediction accuracy of semi-supervised learning, according to the survey paper on semi-supervised SVM [6]. Moreover, when rho = 0, supervised models are incorporated in the framework. Thus the usefulness of unlabeled data is naturally considered by the regularization. This therefore provides a practical solution to the problems described in Section 1.

4 Experiment

In this section, we give details of our implementation and discuss the results of our proposed approach on several benchmark data sets. To conduct a comprehensive evaluation, we employ several well-known datasets as the testbed. As summarized in Table 1, three image data sets and five text data sets are selected from the recent book ( mpg.de/ss-book/) and the literature.

Table 1: Datasets used in our experiments.
d represents the data dimensionality, and n denotes the total number of examples. The data sets are usps, digit, coil, ibm vs rest, pcmac, page, link, and pagelink.

For simplicity, our proposed adaptive regularization approach is denoted ARTSVM. To evaluate it, we conduct an extensive comparison with several state-of-the-art approaches, including the label-switching-retraining algorithm in SVM-Light [12], CCCP [8], and TSVM [4]. We employ SVM as the baseline method. In our experiments, we repeat all the algorithms 20 times for each dataset. In each run, 10% of the data are randomly selected as the training data and the remaining data are used as the unlabeled data. The value of C in all algorithms is selected from [1, 10, 100, 1000] using cross-validation. The set of rho is set to [0, 0.01, 0.05, 0.1, 1] and eta is fixed to 1. As stated in Section 3.4, ARTSVM is less sensitive to the graph structure. Thus, we adopt a simple way to construct the graph: for each data point, the number of neighbors is set to 20 and binary weighting is employed. In ARTSVM, the supervised, weakly semi-supervised, and semi-supervised algorithms are based on the implementations in LibSVM ( tw/~cjlin/libsvm/), MOSEK, and TSVM ( de/bs/people/chapelle/lds/), respectively. For the comparison algorithms, we adopt the original authors' own implementations. Table 2 summarizes the classification accuracy and the standard deviations of the proposed ARTSVM method and the other competing methods. We can draw several observations from
the results. First of all, we can clearly see that our proposed algorithm performs significantly better than the baseline SVM method across all the data sets. Note that some very large deviations in SVM arise mainly because the labeled data and the unlabeled data may have quite different distributions after the random sampling; the unlabeled data, on the other hand, capture the underlying distribution and help to correct such random error. Comparing ARTSVM with the other TSVM algorithms, we observe that ARTSVM achieves the best performance in most cases. For example, on the digital image data sets, especially digit, supervised learning usually works well and the advantages of TSVM are very limited. However, the proposed ARTSVM outperforms both the supervised and the other semi-supervised algorithms. This indicates that an appropriate regularization from the unlabeled data improves the classification performance.

Table 2: The classification performance of Transductive SVMs on benchmark data sets (mean accuracy ± standard deviation for ARTSVM, TSVM, SVM, CCCP, and SVM-Light on usps, digit, coil, ibm vs rest, pcmac, page, link, and pagelink).

5 Conclusion

This paper presents a novel framework for semi-supervised learning from the perspective of the regularization strength of the unlabeled data. In particular, for Transductive SVM, we show that SVM and TSVM can be incorporated as special cases within this framework. In more detail, the loss on the unlabeled data can essentially be regarded as an additional regularizer for the decision boundary in TSVM. To control the regularization strength, we introduce an alternative method of data-dependent regularization based on the principle of manifold regularization. Empirical studies on benchmark data sets demonstrate that the proposed framework is more effective than previous transductive algorithms and purely supervised methods.
For future work, we plan to design a controlling strategy that is adaptive to the data from the perspectives of the low density assumption and manifold regularization in semi-supervised learning. Finally, it is desirable to integrate the low density assumption and manifold regularization into a unified framework.

Acknowledgement

The work was supported by the National Science Foundation (IIS ), the National Institute of Health (R01GM ), the Research Grants Council of Hong Kong (CUHK458/08E and CUHK428/08E), and MSRA (FY09-RES-OPP-03). It is also affiliated with the MS-CUHK Joint Lab for Human-centric Computing & Interface Technologies.

References

[1] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399-2434, 2006.

[2] Avrim Blum and Shuchi Chawla. Learning from labeled and unlabeled data using graph cuts. In ICML '01: Proceedings of the 18th International Conference on Machine Learning. Morgan Kaufmann, San Francisco, CA, 2001.

[3] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
[4] O. Chapelle and A. Zien. Semi-supervised classification by low density separation. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, pages 57-64, 2005.

[5] Olivier Chapelle, Vikas Sindhwani, and Sathiya Keerthi. Branch and bound for semi-supervised support vector machines. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, Cambridge, MA, 2007.

[6] Olivier Chapelle, Vikas Sindhwani, and Sathiya S. Keerthi. Optimization techniques for semi-supervised support vector machines. Journal of Machine Learning Research, 9:203-233, 2008.

[7] Ke Chen and Shihai Wang. Regularized boost for semi-supervised learning. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA, 2008.

[8] Ronan Collobert, Fabian Sinz, Jason Weston, and Léon Bottou. Large scale transductive SVMs. Journal of Machine Learning Research, 7:1687-1712, 2006.

[9] S. C. H. Hoi, M. R. Lyu, and E. Y. Chang. Learning the unified kernel machines for classification. In Proceedings of the Twelfth International Conference on Knowledge Discovery and Data Mining (KDD-2006), pages 187-196, New York, NY, USA, 2006. ACM Press.

[10] Steven C. H. Hoi, Rong Jin, and Michael R. Lyu. Learning nonparametric kernel matrices from pairwise constraints. In ICML '07: Proceedings of the 24th International Conference on Machine Learning, New York, NY, USA, 2007. ACM.

[11] T. Joachims. Transductive learning via spectral graph partitioning. In ICML '03: Proceedings of the 20th International Conference on Machine Learning, 2003.

[12] Thorsten Joachims. Transductive inference for text classification using support vector machines. In ICML '99: Proceedings of the 16th International Conference on Machine Learning, San Francisco, CA, USA, 1999. Morgan Kaufmann Publishers Inc.

[13] John Lafferty and Larry Wasserman. Statistical analysis of semi-supervised regression. In J.C. Platt, D. Koller, Y. Singer, and S.
Roweis, editors, Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA, 2008.

[14] Hariharan Narayanan, Mikhail Belkin, and Partha Niyogi. On the relation between low density separation, spectral clustering and graph cuts. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, Cambridge, MA, 2007.

[15] Vikas Sindhwani, S. Sathiya Keerthi, and Olivier Chapelle. Deterministic annealing for semi-supervised kernel machines. In ICML '06: Proceedings of the 23rd International Conference on Machine Learning, New York, NY, USA, 2006. ACM Press.

[16] Aarti Singh, Robert Nowak, and Xiaojin Zhu. Unlabeled data: Now it helps, now it doesn't. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, 2009.

[17] Junhui Wang, Xiaotong Shen, and Wei Pan. On efficient large margin semisupervised learning: Method and theory. Journal of Machine Learning Research, 10:719-742, 2009.

[18] Linli Xu and Dale Schuurmans. Unsupervised and semi-supervised multi-class support vector machines. In AAAI, 2005.

[19] Zenglin Xu, Rong Jin, Jianke Zhu, Irwin King, and Michael R. Lyu. Efficient convex relaxation for transductive support vector machine. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA, 2008.

[20] T. Zhang and R. Ando. Analysis of spectral kernel design based semi-supervised learning. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18. MIT Press, Cambridge, MA, 2006.

[21] Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, and Bernhard Schölkopf. Learning with local and global consistency. In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.

[22] X. Zhu, J. Kandola, Z. Ghahramani, and J. Lafferty.
Nonparametric transforms of graph kernels for semi-supervised learning. In Advances in Neural Information Processing Systems 17 (NIPS 17), Cambridge, MA, 2005. MIT Press.

[23] Xiaojin Zhu. Semi-supervised learning literature survey. Technical report, Computer Sciences, University of Wisconsin-Madison, 2005.
More informationCombining reaction kinetics to the multi-phase Gibbs energy calculation
7 th European Symposium on Computer Aided Process Engineering ESCAPE7 V. Pesu and P.S. Agachi (Editors) 2007 Esevier B.V. A rights reserved. Combining reaction inetics to the muti-phase Gibbs energy cacuation
More informationA Novel Learning Method for Elman Neural Network Using Local Search
Neura Information Processing Letters and Reviews Vo. 11, No. 8, August 2007 LETTER A Nove Learning Method for Eman Neura Networ Using Loca Search Facuty of Engineering, Toyama University, Gofuu 3190 Toyama
More informationEfficiently Generating Random Bits from Finite State Markov Chains
1 Efficienty Generating Random Bits from Finite State Markov Chains Hongchao Zhou and Jehoshua Bruck, Feow, IEEE Abstract The probem of random number generation from an uncorreated random source (of unknown
More informationParagraph Topic Classification
Paragraph Topic Cassification Eugene Nho Graduate Schoo of Business Stanford University Stanford, CA 94305 enho@stanford.edu Edward Ng Department of Eectrica Engineering Stanford University Stanford, CA
More informationXSAT of linear CNF formulas
XSAT of inear CN formuas Bernd R. Schuh Dr. Bernd Schuh, D-50968 Kön, Germany; bernd.schuh@netcoogne.de eywords: compexity, XSAT, exact inear formua, -reguarity, -uniformity, NPcompeteness Abstract. Open
More informationCan Active Learning Experience Be Transferred?
Can Active Learning Experience Be Transferred? Hong-Min Chu Department of Computer Science and Information Engineering, Nationa Taiwan University E-mai: r04922031@csie.ntu.edu.tw Hsuan-Tien Lin Department
More informationSparse Semi-supervised Learning Using Conjugate Functions
Journa of Machine Learning Research (200) 2423-2455 Submitted 2/09; Pubished 9/0 Sparse Semi-supervised Learning Using Conjugate Functions Shiiang Sun Department of Computer Science and Technoogy East
More informationBayesian Unscented Kalman Filter for State Estimation of Nonlinear and Non-Gaussian Systems
Bayesian Unscented Kaman Fiter for State Estimation of Noninear and Non-aussian Systems Zhong Liu, Shing-Chow Chan, Ho-Chun Wu and iafei Wu Department of Eectrica and Eectronic Engineering, he University
More informationAppendix A: MATLAB commands for neural networks
Appendix A: MATLAB commands for neura networks 132 Appendix A: MATLAB commands for neura networks p=importdata('pn.xs'); t=importdata('tn.xs'); [pn,meanp,stdp,tn,meant,stdt]=prestd(p,t); for m=1:10 net=newff(minmax(pn),[m,1],{'tansig','purein'},'trainm');
More informationDistributed average consensus: Beyond the realm of linearity
Distributed average consensus: Beyond the ream of inearity Usman A. Khan, Soummya Kar, and José M. F. Moura Department of Eectrica and Computer Engineering Carnegie Meon University 5 Forbes Ave, Pittsburgh,
More informationPARSIMONIOUS VARIATIONAL-BAYES MIXTURE AGGREGATION WITH A POISSON PRIOR. Pierrick Bruneau, Marc Gelgon and Fabien Picarougne
17th European Signa Processing Conference (EUSIPCO 2009) Gasgow, Scotand, August 24-28, 2009 PARSIMONIOUS VARIATIONAL-BAYES MIXTURE AGGREGATION WITH A POISSON PRIOR Pierric Bruneau, Marc Gegon and Fabien
More informationSource and Relay Matrices Optimization for Multiuser Multi-Hop MIMO Relay Systems
Source and Reay Matrices Optimization for Mutiuser Muti-Hop MIMO Reay Systems Yue Rong Department of Eectrica and Computer Engineering, Curtin University, Bentey, WA 6102, Austraia Abstract In this paper,
More informationEfficient Generation of Random Bits from Finite State Markov Chains
Efficient Generation of Random Bits from Finite State Markov Chains Hongchao Zhou and Jehoshua Bruck, Feow, IEEE Abstract The probem of random number generation from an uncorreated random source (of unknown
More informationFUSED MULTIPLE GRAPHICAL LASSO
FUSED MULTIPLE GRAPHICAL LASSO SEN YANG, ZHAOSONG LU, XIAOTONG SHEN, PETER WONKA, JIEPING YE Abstract. In this paper, we consider the probem of estimating mutipe graphica modes simutaneousy using the fused
More informationPartial permutation decoding for MacDonald codes
Partia permutation decoding for MacDonad codes J.D. Key Department of Mathematics and Appied Mathematics University of the Western Cape 7535 Bevie, South Africa P. Seneviratne Department of Mathematics
More informationAn approximate method for solving the inverse scattering problem with fixed-energy data
J. Inv. I-Posed Probems, Vo. 7, No. 6, pp. 561 571 (1999) c VSP 1999 An approximate method for soving the inverse scattering probem with fixed-energy data A. G. Ramm and W. Scheid Received May 12, 1999
More informationMaintenance activities planning and grouping for complex structure systems
Maintenance activities panning and grouping for compex structure systems Hai Canh u, Phuc Do an, Anne Barros, Christophe Bérenguer To cite this version: Hai Canh u, Phuc Do an, Anne Barros, Christophe
More informationLearning Gaussian Processes from Multiple Tasks
Kai Yu kai.yu@siemens.com Information and Communication, Corporate Technoogy, Siemens AG, Munich, Germany Voker Tresp voker.tresp@siemens.com Information and Communication, Corporate Technoogy, Siemens
More informationCryptanalysis of PKP: A New Approach
Cryptanaysis of PKP: A New Approach Éiane Jaumes and Antoine Joux DCSSI 18, rue du Dr. Zamenhoff F-92131 Issy-es-Mx Cedex France eiane.jaumes@wanadoo.fr Antoine.Joux@ens.fr Abstract. Quite recenty, in
More informationSupplement of Limited-memory Common-directions Method for Distributed Optimization and its Application on Empirical Risk Minimization
Suppement of Limited-memory Common-directions Method for Distributed Optimization and its Appication on Empirica Risk Minimization Ching-pei Lee Po-Wei Wang Weizhu Chen Chih-Jen Lin I Introduction In this
More informationBDD-Based Analysis of Gapped q-gram Filters
BDD-Based Anaysis of Gapped q-gram Fiters Marc Fontaine, Stefan Burkhardt 2 and Juha Kärkkäinen 2 Max-Panck-Institut für Informatik Stuhsatzenhausweg 85, 6623 Saarbrücken, Germany e-mai: stburk@mpi-sb.mpg.de
More informationThe EM Algorithm applied to determining new limit points of Mahler measures
Contro and Cybernetics vo. 39 (2010) No. 4 The EM Agorithm appied to determining new imit points of Maher measures by Souad E Otmani, Georges Rhin and Jean-Marc Sac-Épée Université Pau Veraine-Metz, LMAM,
More informationIdentification of macro and micro parameters in solidification model
BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES Vo. 55, No. 1, 27 Identification of macro and micro parameters in soidification mode B. MOCHNACKI 1 and E. MAJCHRZAK 2,1 1 Czestochowa University
More informationTwo view learning: SVM-2K, Theory and Practice
Two view earning: SVM-2K, Theory and Practice Jason D.R. Farquhar jdrf99r@ecs.soton.ac.uk Hongying Meng hongying@cs.york.ac.uk David R. Hardoon drh@ecs.soton.ac.uk John Shawe-Tayor jst@ecs.soton.ac.uk
More informationNew Efficiency Results for Makespan Cost Sharing
New Efficiency Resuts for Makespan Cost Sharing Yvonne Beischwitz a, Forian Schoppmann a, a University of Paderborn, Department of Computer Science Fürstenaee, 3302 Paderborn, Germany Abstract In the context
More informationarxiv: v1 [cs.lg] 23 Aug 2018
Muticass Universum SVM Sauptik Dhar 1 Vadimir Cherkassky 2 Mohak Shah 1 3 arxiv:1808.08111v1 [cs.lg] 23 Aug 2018 Abstract We introduce Universum earning for muticass probems and propose a nove formuation
More informationConvergence Property of the Iri-Imai Algorithm for Some Smooth Convex Programming Problems
Convergence Property of the Iri-Imai Agorithm for Some Smooth Convex Programming Probems S. Zhang Communicated by Z.Q. Luo Assistant Professor, Department of Econometrics, University of Groningen, Groningen,
More informationUniprocessor Feasibility of Sporadic Tasks with Constrained Deadlines is Strongly conp-complete
Uniprocessor Feasibiity of Sporadic Tasks with Constrained Deadines is Strongy conp-compete Pontus Ekberg and Wang Yi Uppsaa University, Sweden Emai: {pontus.ekberg yi}@it.uu.se Abstract Deciding the feasibiity
More informationarxiv: v1 [cs.db] 1 Aug 2012
Functiona Mechanism: Regression Anaysis under Differentia Privacy arxiv:208.029v [cs.db] Aug 202 Jun Zhang Zhenjie Zhang 2 Xiaokui Xiao Yin Yang 2 Marianne Winsett 2,3 ABSTRACT Schoo of Computer Engineering
More informationarxiv: v2 [stat.ml] 17 Mar 2015
Journa of Machine Learning Research 1x (201x) x-xx Submitted x/0x; Pubished x/0x A Unifying Framework in Vector-vaued Reproducing Kerne Hibert Spaces for Manifod Reguarization and Co-Reguarized Muti-view
More informationKernel pea and De-Noising in Feature Spaces
Kerne pea and De-Noising in Feature Spaces Sebastian Mika, Bernhard Schokopf, Aex Smoa Kaus-Robert Muer, Matthias Schoz, Gunnar Riitsch GMD FIRST, Rudower Chaussee 5, 12489 Berin, Germany {mika, bs, smoa,
More informationLecture Note 3: Stationary Iterative Methods
MATH 5330: Computationa Methods of Linear Agebra Lecture Note 3: Stationary Iterative Methods Xianyi Zeng Department of Mathematica Sciences, UTEP Stationary Iterative Methods The Gaussian eimination (or
More informationIn-plane shear stiffness of bare steel deck through shell finite element models. G. Bian, B.W. Schafer. June 2017
In-pane shear stiffness of bare stee deck through she finite eement modes G. Bian, B.W. Schafer June 7 COLD-FORMED STEEL RESEARCH CONSORTIUM REPORT SERIES CFSRC R-7- SDII Stee Diaphragm Innovation Initiative
More informationAsynchronous Control for Coupled Markov Decision Systems
INFORMATION THEORY WORKSHOP (ITW) 22 Asynchronous Contro for Couped Marov Decision Systems Michae J. Neey University of Southern Caifornia Abstract This paper considers optima contro for a coection of
More informationKernel Matching Pursuit
Kerne Matching Pursuit Pasca Vincent and Yoshua Bengio Dept. IRO, Université demontréa C.P. 6128, Montrea, Qc, H3C 3J7, Canada {vincentp,bengioy}@iro.umontrea.ca Technica Report #1179 Département d Informatique
More informationActive Learning & Experimental Design
Active Learning & Experimenta Design Danie Ting Heaviy modified, of course, by Lye Ungar Origina Sides by Barbara Engehardt and Aex Shyr Lye Ungar, University of Pennsyvania Motivation u Data coection
More informationA SIMPLIFIED DESIGN OF MULTIDIMENSIONAL TRANSFER FUNCTION MODELS
A SIPLIFIED DESIGN OF ULTIDIENSIONAL TRANSFER FUNCTION ODELS Stefan Petrausch, Rudof Rabenstein utimedia Communications and Signa Procesg, University of Erangen-Nuremberg, Cauerstr. 7, 958 Erangen, GERANY
More informationMelodic contour estimation with B-spline models using a MDL criterion
Meodic contour estimation with B-spine modes using a MDL criterion Damien Loive, Ney Barbot, Oivier Boeffard IRISA / University of Rennes 1 - ENSSAT 6 rue de Kerampont, B.P. 80518, F-305 Lannion Cedex
More informationPrimal and dual active-set methods for convex quadratic programming
Math. Program., Ser. A 216) 159:469 58 DOI 1.17/s117-15-966-2 FULL LENGTH PAPER Prima and dua active-set methods for convex quadratic programming Anders Forsgren 1 Phiip E. Gi 2 Eizabeth Wong 2 Received:
More information2-loop additive mass renormalization with clover fermions and Symanzik improved gluons
2-oop additive mass renormaization with cover fermions and Symanzik improved guons Apostoos Skouroupathis Department of Physics, University of Cyprus, Nicosia, CY-1678, Cyprus E-mai: php4as01@ucy.ac.cy
More informationMARKOV CHAINS AND MARKOV DECISION THEORY. Contents
MARKOV CHAINS AND MARKOV DECISION THEORY ARINDRIMA DATTA Abstract. In this paper, we begin with a forma introduction to probabiity and expain the concept of random variabes and stochastic processes. After
More informationV.B The Cluster Expansion
V.B The Custer Expansion For short range interactions, speciay with a hard core, it is much better to repace the expansion parameter V( q ) by f(q ) = exp ( βv( q )) 1, which is obtained by summing over
More informationTarget Location Estimation in Wireless Sensor Networks Using Binary Data
Target Location stimation in Wireess Sensor Networks Using Binary Data Ruixin Niu and Pramod K. Varshney Department of ectrica ngineering and Computer Science Link Ha Syracuse University Syracuse, NY 344
More informationConvolutional Networks 2: Training, deep convolutional networks
Convoutiona Networks 2: Training, deep convoutiona networks Hakan Bien Machine Learning Practica MLP Lecture 8 30 October / 6 November 2018 MLP Lecture 8 / 30 October / 6 November 2018 Convoutiona Networks
More informationT.C. Banwell, S. Galli. {bct, Telcordia Technologies, Inc., 445 South Street, Morristown, NJ 07960, USA
ON THE SYMMETRY OF THE POWER INE CHANNE T.C. Banwe, S. Gai {bct, sgai}@research.tecordia.com Tecordia Technoogies, Inc., 445 South Street, Morristown, NJ 07960, USA Abstract The indoor power ine network
More informationDiscriminant Analysis: A Unified Approach
Discriminant Anaysis: A Unified Approach Peng Zhang & Jing Peng Tuane University Eectrica Engineering & Computer Science Department New Oreans, LA 708 {zhangp,jp}@eecs.tuane.edu Norbert Riede Tuane University
More informationSupport Vector Machine and Its Application to Regression and Classification
BearWorks Institutiona Repository MSU Graduate Theses Spring 2017 Support Vector Machine and Its Appication to Regression and Cassification Xiaotong Hu As with any inteectua project, the content and views
More informationDistributed Optimization With Local Domains: Applications in MPC and Network Flows
Distributed Optimization With Loca Domains: Appications in MPC and Network Fows João F. C. Mota, João M. F. Xavier, Pedro M. Q. Aguiar, and Markus Püsche Abstract In this paper we consider a network with
More informationAnt Colony Algorithms for Constructing Bayesian Multi-net Classifiers
Ant Coony Agorithms for Constructing Bayesian Muti-net Cassifiers Khaid M. Saama and Aex A. Freitas Schoo of Computing, University of Kent, Canterbury, UK. {kms39,a.a.freitas}@kent.ac.uk December 5, 2013
More informationFORECASTING TELECOMMUNICATIONS DATA WITH AUTOREGRESSIVE INTEGRATED MOVING AVERAGE MODELS
FORECASTING TEECOMMUNICATIONS DATA WITH AUTOREGRESSIVE INTEGRATED MOVING AVERAGE MODES Niesh Subhash naawade a, Mrs. Meenakshi Pawar b a SVERI's Coege of Engineering, Pandharpur. nieshsubhash15@gmai.com
More informationReliability Improvement with Optimal Placement of Distributed Generation in Distribution System
Reiabiity Improvement with Optima Pacement of Distributed Generation in Distribution System N. Rugthaicharoencheep, T. Langtharthong Abstract This paper presents the optima pacement and sizing of distributed
More informationV.B The Cluster Expansion
V.B The Custer Expansion For short range interactions, speciay with a hard core, it is much better to repace the expansion parameter V( q ) by f( q ) = exp ( βv( q )), which is obtained by summing over
More informationCS229 Lecture notes. Andrew Ng
CS229 Lecture notes Andrew Ng Part IX The EM agorithm In the previous set of notes, we taked about the EM agorithm as appied to fitting a mixture of Gaussians. In this set of notes, we give a broader view
More informationRobust Sensitivity Analysis for Linear Programming with Ellipsoidal Perturbation
Robust Sensitivity Anaysis for Linear Programming with Eipsoida Perturbation Ruotian Gao and Wenxun Xing Department of Mathematica Sciences Tsinghua University, Beijing, China, 100084 September 27, 2017
More informationRecursive Constructions of Parallel FIFO and LIFO Queues with Switched Delay Lines
Recursive Constructions of Parae FIFO and LIFO Queues with Switched Deay Lines Po-Kai Huang, Cheng-Shang Chang, Feow, IEEE, Jay Cheng, Member, IEEE, and Duan-Shin Lee, Senior Member, IEEE Abstract One
More informationProblem set 6 The Perron Frobenius theorem.
Probem set 6 The Perron Frobenius theorem. Math 22a4 Oct 2 204, Due Oct.28 In a future probem set I want to discuss some criteria which aow us to concude that that the ground state of a sef-adjoint operator
More informationExpectation-Maximization for Estimating Parameters for a Mixture of Poissons
Expectation-Maximization for Estimating Parameters for a Mixture of Poissons Brandon Maone Department of Computer Science University of Hesini February 18, 2014 Abstract This document derives, in excrutiating
More information6 Wave Equation on an Interval: Separation of Variables
6 Wave Equation on an Interva: Separation of Variabes 6.1 Dirichet Boundary Conditions Ref: Strauss, Chapter 4 We now use the separation of variabes technique to study the wave equation on a finite interva.
More informationDetermining The Degree of Generalization Using An Incremental Learning Algorithm
Determining The Degree of Generaization Using An Incrementa Learning Agorithm Pabo Zegers Facutad de Ingeniería, Universidad de os Andes San Caros de Apoquindo 22, Las Condes, Santiago, Chie pzegers@uandes.c
More informationOptimality of Inference in Hierarchical Coding for Distributed Object-Based Representations
Optimaity of Inference in Hierarchica Coding for Distributed Object-Based Representations Simon Brodeur, Jean Rouat NECOTIS, Département génie éectrique et génie informatique, Université de Sherbrooke,
More informationPower Control and Transmission Scheduling for Network Utility Maximization in Wireless Networks
ower Contro and Transmission Scheduing for Network Utiity Maximization in Wireess Networks Min Cao, Vivek Raghunathan, Stephen Hany, Vinod Sharma and. R. Kumar Abstract We consider a joint power contro
More informationRelated Topics Maxwell s equations, electrical eddy field, magnetic field of coils, coil, magnetic flux, induced voltage
Magnetic induction TEP Reated Topics Maxwe s equations, eectrica eddy fied, magnetic fied of cois, coi, magnetic fux, induced votage Principe A magnetic fied of variabe frequency and varying strength is
More informationScalable Spectrum Allocation for Large Networks Based on Sparse Optimization
Scaabe Spectrum ocation for Large Networks ased on Sparse Optimization innan Zhuang Modem R&D Lab Samsung Semiconductor, Inc. San Diego, C Dongning Guo, Ermin Wei, and Michae L. Honig Department of Eectrica
More informationSequential Decoding of Polar Codes with Arbitrary Binary Kernel
Sequentia Decoding of Poar Codes with Arbitrary Binary Kerne Vera Miosavskaya, Peter Trifonov Saint-Petersburg State Poytechnic University Emai: veram,petert}@dcn.icc.spbstu.ru Abstract The probem of efficient
More informationAlgorithms to solve massively under-defined systems of multivariate quadratic equations
Agorithms to sove massivey under-defined systems of mutivariate quadratic equations Yasufumi Hashimoto Abstract It is we known that the probem to sove a set of randomy chosen mutivariate quadratic equations
More informationGeneral Tensor Discriminant Analysis and Gabor Features for Gait Recognition
> PAI-057-005< Genera ensor Discriminant Anaysis and Gabor Features for Gait Recognition Dacheng ao, Xueong Li, Xindong Wu 2, and Stephen J. aybank. Schoo of Computer Science and Information Systems, Birkbeck
More informationResearch on liquid sloshing performance in vane type tank under microgravity
IOP Conference Series: Materias Science and Engineering PAPER OPEN ACCESS Research on iquid soshing performance in vane type tan under microgravity Reated content - Numerica simuation of fuid fow in the
More informationResearch of Data Fusion Method of Multi-Sensor Based on Correlation Coefficient of Confidence Distance
Send Orders for Reprints to reprints@benthamscience.ae 340 The Open Cybernetics & Systemics Journa, 015, 9, 340-344 Open Access Research of Data Fusion Method of Muti-Sensor Based on Correation Coefficient
More informationII. PROBLEM. A. Description. For the space of audio signals
CS229 - Fina Report Speech Recording based Language Recognition (Natura Language) Leopod Cambier - cambier; Matan Leibovich - matane; Cindy Orozco Bohorquez - orozcocc ABSTRACT We construct a rea time
More informationData Mining Technology for Failure Prognostic of Avionics
IEEE Transactions on Aerospace and Eectronic Systems. Voume 38, #, pp.388-403, 00. Data Mining Technoogy for Faiure Prognostic of Avionics V.A. Skormin, Binghamton University, Binghamton, NY, 1390, USA
More informationNonlinear Gaussian Filtering via Radial Basis Function Approximation
51st IEEE Conference on Decision and Contro December 10-13 01 Maui Hawaii USA Noninear Gaussian Fitering via Radia Basis Function Approximation Huazhen Fang Jia Wang and Raymond A de Caafon Abstract This
More informationTheory of Generalized k-difference Operator and Its Application in Number Theory
Internationa Journa of Mathematica Anaysis Vo. 9, 2015, no. 19, 955-964 HIKARI Ltd, www.m-hiari.com http://dx.doi.org/10.12988/ijma.2015.5389 Theory of Generaized -Difference Operator and Its Appication
More informationSupervised i-vector Modeling - Theory and Applications
Supervised i-vector Modeing - Theory and Appications Shreyas Ramoji, Sriram Ganapathy Learning and Extraction of Acoustic Patterns LEAP) Lab, Eectrica Engineering, Indian Institute of Science, Bengauru,
More informationA Simple and Efficient Algorithm of 3-D Single-Source Localization with Uniform Cross Array Bing Xue 1 2 a) * Guangyou Fang 1 2 b and Yicai Ji 1 2 c)
A Simpe Efficient Agorithm of 3-D Singe-Source Locaization with Uniform Cross Array Bing Xue a * Guangyou Fang b Yicai Ji c Key Laboratory of Eectromagnetic Radiation Sensing Technoogy, Institute of Eectronics,
More information